Dimitris Tsementzis leads the Applied Artificial Intelligence team in Goldman Sachs’ Engineering Division, driving development and adoption of commercial applications of AI across the firm. He’s a member of the Firmwide Model Risk Control Committee and a fellow of the Goldman Sachs Global Institute. GSGI Fellows partner with the institute to provide insights on topics across emerging technology and geopolitics.
Executive Summary
Engineers from Palo Alto to Beijing are racing to create superintelligence — an artificial intelligence (AI) that can think and reason, with an intellect far superior to our own. But existing state-of-the-art models may be on a path to deliver “super-automation” rather than superintelligence.1 These systems are typically built on transformer-based large language models (LLMs) that can process and understand natural language, generate content, and carry out tasks. As companies pour billions of dollars into semiconductors to unlock ever-greater AI capabilities, it’s important to understand what superintelligence is, what might be needed to achieve it, and whether the latest generation of AIs will be an adequate foundation for these ambitions.
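To make the scaling-law question in footnote 1 concrete, the sketch below uses the parametric loss form popularized by Hoffmann et al. (2022), a reference not cited in this article; the default constants are that paper’s published fits, and every number here is illustrative rather than a claim about any particular model.

```python
# A minimal sketch of a parametric scaling law, assuming the functional
# form from Hoffmann et al. (2022): predicted loss falls as a power law
# in parameter count N and training tokens D, toward an irreducible
# floor E. Constants are that paper's published fits; all illustrative.

def predicted_loss(n_params: float, n_tokens: float,
                   e: float = 1.69, a: float = 406.4, b: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Loss = E + A / N^alpha + B / D^beta."""
    return e + a / n_params**alpha + b / n_tokens**beta

# Doubling parameters and data keeps lowering predicted loss, but with
# diminishing returns; the debate is whether such curves ever deliver
# more than increasingly capable automation.
for scale in (1, 2, 4, 8):
    print(scale, round(predicted_loss(scale * 70e9, scale * 1.4e12), 3))
```

Under these assumptions, each doubling of model size and data buys a smaller reduction in loss, which is why extrapolating from such curves to superintelligence remains contested.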
AI to AGI to (safe) SI
What could be missing to get to (safe) superintelligence?
In my view, at least three fundamental research challenges must be solved to build a safe superintelligence:
Will a superintelligence ever actually be useful or commercial?
Even if a safe superintelligence is created, it’s not clear it would prove useful, intelligible, or commercially viable.
Conclusion
It’s not clear whether transformer-based LLMs can be the foundational architecture that realizes a superintelligent system. More realistically, with current architectures, the technology sector is on the path to super-automation, and new insights will be needed to put us on the path to superintelligence. Nevertheless, achieving superintelligence safely is an important technical and research problem likely to be pursued by leading scientists and entrepreneurs, even if its commercial or real-world impacts are likely very far off.
1Whether these ambitions can be realized is a debate that typically centers on the scaling laws of LLMs (and other models): to what extent can their performance be expected to improve as parameter count, training data, and other factors grow? A useful technical reference for these investigations is Maor Ivgi, Yair Carmon, and Jonathan Berant, Scaling Laws Under the Microscope: Predicting Transformer Performance from Small Scale Experiments (2022).
2A detailed classification of AGI and a definition of superintelligence are given in Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg, Position: Levels of AGI for Operationalizing Progress on the Path to AGI (2024).
3Nick Bostrom, How Long Before Superintelligence? (International Journal of Future Studies, 1998).
4Melanie Mitchell’s research, for example, has grappled with this problem: John Pavlus, The Computer Scientist Training AI to Think with Analogies (Quanta, 2021).
5The link between ants and superintelligence has been explored in various places. For instance, extensive studies of leafcutter ant colonies have examined how they collectively form an intellect seemingly superior to the individuals comprising it. A classic reference is Bert Hölldobler and Edward O. Wilson, The Leafcutter Ants: Civilization by Instinct (W. W. Norton & Co. Ltd., 2010).
This article is being provided for educational purposes only. The information contained in this article does not constitute a recommendation from any Goldman Sachs entity to the recipient, and Goldman Sachs is not providing any financial, economic, legal, investment, accounting, or tax advice through this article or to its recipient. Neither Goldman Sachs nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this article and any liability therefor (including in respect of direct, indirect, or consequential loss or damage) is expressly disclaimed.