A recent study addresses the question of whether AI superintelligence could emerge suddenly and without warning. The research, which examined how intelligence appears to arise in AI models, found that “emergence” — the phenomenon of models gaining capabilities sharply and unpredictably — may be overestimated. The study tested OpenAI's GPT-3 and Google's LaMDA and found that when different metrics and larger test sets were used, the apparent sudden spikes in capability disappeared, pointing instead to gradual improvement. The results suggest that most aspects of language-model performance are predictable, challenging the fear of an unexpected leap to AI superintelligence. The findings also underscore the importance of benchmarking and real-world relevance in AI development over mere architectural advances.
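The role of metric choice can be illustrated with a toy sketch (the logistic curve and sequence length below are hypothetical, not taken from the study): if a model's per-token accuracy improves smoothly with scale, a discontinuous metric such as exact match over a whole sequence will still look like a sudden jump, because all tokens must be correct at once.

```python
import math

# Hypothetical smooth curve: per-token accuracy rises gradually
# (logistic in log-scale). Illustrative only, not from the study.
def per_token_accuracy(log_scale: float) -> float:
    return 1 / (1 + math.exp(-(log_scale - 5)))

SEQ_LEN = 20  # exact match requires all 20 tokens to be correct

for log_scale in range(1, 11):
    p = per_token_accuracy(log_scale)
    exact_match = p ** SEQ_LEN  # harsh all-or-nothing metric
    print(f"scale=10^{log_scale}: per-token={p:.3f}  exact-match={exact_match:.3f}")
```

Under the smooth per-token metric the curve improves steadily, while the exact-match score stays near zero until per-token accuracy approaches 1 and then shoots up — an apparent "emergent" jump produced entirely by the metric, which matches the study's argument that gentler metrics and larger test sets make the spikes disappear.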