Researchers from MIT and other institutions have published new findings on how large language models (LLMs) such as ChatGPT retrieve stored knowledge, revealing that the mechanism is surprisingly simple. Although LLMs are complex and widely deployed, much of their internal functioning remains poorly understood. The team found that LLMs use simple linear functions to retrieve and decode stored facts, and that this holds consistently across different types of information. The result could help improve AI models by making it possible to locate and correct inaccuracies in their stored knowledge, reducing errors in practice. The research, to be presented at the upcoming International Conference on Learning Representations (ICLR 2024), marks a significant step toward demystifying how these models process information and toward more reliable real-world applications.
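To make the core claim concrete, here is a minimal sketch of what "a simple linear function decodes a stored fact" can mean: for a fixed relation (say, "capital of"), the map from a subject's hidden-state vector to the object's representation is approximately affine, o ≈ Ws + b, and can be recovered by least squares. Everything below is synthetic and illustrative; the dimensions, variable names, and data are assumptions, not the authors' actual setup.

```python
import numpy as np

# Toy illustration: suppose that for one relation, the mapping from a
# subject's hidden state s to the object's representation o is affine,
# o = W s + b. We generate synthetic (s, o) pairs under that assumption
# and recover the linear function with ordinary least squares.

rng = np.random.default_rng(0)
d = 16   # hidden-state dimensionality (toy value)
n = 50   # number of (subject, object) pairs for this relation

# Hypothetical ground-truth affine relation the model has "stored".
W_true = rng.normal(size=(d, d))
b_true = rng.normal(size=d)

S = rng.normal(size=(n, d))        # synthetic subject hidden states
O = S @ W_true.T + b_true          # corresponding object representations

# Fit o ≈ W s + b jointly by augmenting S with a column of ones,
# so the bias term is estimated together with the weights.
S_aug = np.hstack([S, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(S_aug, O, rcond=None)
W_hat, b_hat = coef[:d].T, coef[d]

# The fitted linear function predicts the object representation
# for a subject it has never seen.
s_new = rng.normal(size=d)
o_pred = W_hat @ s_new + b_hat
print("max prediction error:", np.abs(o_pred - (W_true @ s_new + b_true)).max())
```

With noiseless synthetic data and more pairs than dimensions, least squares recovers the map essentially exactly; the interesting empirical finding in the research is that real LLM representations behave this linearly at all.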