A Comprehensive Introduction to Artificial General Intelligence (AGI)
The Quest for Human-Level AI: An Overview of Artificial General Intelligence
Artificial General Intelligence (AGI) is one of the hottest topics in the AI community today, especially after OpenAI's recent leadership shakeup, which may have stemmed from disagreements over safety concerns about an AGI breakthrough.
This article provides an overview of artificial general intelligence (AGI). We'll define AGI, explain how it differs from narrow AI, and examine its goal of achieving human-level intelligence across multiple domains. Next, we'll assess the current state of AGI research and the remaining technical hurdles.
Finally, we’ll consider some of the debates around potential implications as work progresses on developing highly intelligent systems. The aim is to give readers a concise introduction to the field of artificial general intelligence.
Table of Contents:
What is Artificial General Intelligence (AGI)?
What is the difference between AI & AGI?
How Close Are We to AGI?
Should We Fear AGI?
1. What is Artificial General Intelligence (AGI)?
Artificial general intelligence (AGI) is a branch of theoretical artificial intelligence (AI) research working to develop AI with a human level of cognitive function, including the ability to self-teach. However, not all AI researchers believe it is possible to develop an AGI system, and the field is divided on which factors constitute intelligence and how to measure it accurately.
AGI spans many dimensions. The ability to perform generalized tasks implies that AGI will affect the job market far more than the AIs that preceded it. For example, an AI that can read an X-ray and detect disease can assist doctors. However, an AGI that can read the X-ray, understand the patient’s personal history, make a recommendation, and explain that recommendation to the patient in a kind bedside manner could conceivably replace the doctor entirely.
Computer scientists and AI researchers continue to develop theoretical frameworks for the unsolved problem of AGI. AI researcher Ben Goertzel has identified several high-level approaches that have emerged in the field of AGI research and categorizes them as follows (a short illustrative code sketch follows the list):
Symbolic: A symbolic approach to AGI holds the belief that symbolic thought is “the crux of human general intelligence” and “precisely what lets us generalize most broadly.”
Emergentist: An emergentist approach to AGI focuses on the idea that the human brain is essentially a set of simple elements (neurons) that self-organize in complex ways in response to the body's experience. It follows that a similar kind of intelligence might emerge from re-creating a similar structure.
Hybrid: As the name suggests, a hybrid approach to AGI sees the brain as a hybrid system in which many different parts and principles work together to create something in which the whole is greater than the sum of its parts. By nature, hybrid AGI research varies widely in its approaches.
Universalist: A universalist approach to AGI centers on “the mathematical essence of general intelligence” and the idea that once AGI is solved in the theoretical realm, the principles used to solve it can be scaled down and used to create it in reality.
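To make the first two categories more concrete, here is a toy Python sketch. This is my own illustration, not code from Goertzel's work, and the symptom names, rules, and training data are all hypothetical: a symbolic system encodes knowledge as explicit hand-written rules, while an emergentist system learns similar behavior from experience.

```python
# Toy contrast between two of Goertzel's categories. All names, rules,
# and data below are hypothetical examples, not drawn from his work.

# Symbolic approach: intelligence as explicit rules over symbols.
def symbolic_diagnose(symptoms: set) -> str:
    """Match a set of symptoms against hand-written rules."""
    rules = {
        frozenset({"fever", "cough"}): "flu",
        frozenset({"sneezing", "itchy eyes"}): "allergy",
    }
    for condition, diagnosis in rules.items():
        if condition <= symptoms:  # rule fires if all its symptoms are present
            return diagnosis
    return "unknown"

# Emergentist approach: behavior emerges from simple units adapting to
# experience. A single perceptron learns a mapping from data instead of
# being told the rules.
def train_perceptron(examples, epochs=100, lr=0.1):
    """examples: list of (feature_vector, label) pairs, labels in {0, 1}."""
    n_features = len(examples[0][0])
    weights, bias = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in examples:
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            error = y - (1 if activation > 0 else 0)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

if __name__ == "__main__":
    print(symbolic_diagnose({"fever", "cough", "headache"}))  # -> "flu"
    # Features: [fever, cough]; label 1 means "flu".
    data = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]
    print(train_perceptron(data))  # learned weights approximate the rule
```

The contrast is the point: the symbolic version generalizes only as far as its written rules, while the perceptron discovers the same decision boundary from examples, which is the intuition behind the emergentist claim that intelligence can arise from many simple adaptive elements.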
2. What is the difference between AI & AGI?
While artificial intelligence (AI) currently encompasses a vast range of technologies and research avenues that deal with machine and computer cognition, artificial general intelligence (AGI), or AI with a level of intelligence equal to that of a human, remains a theoretical concept and research goal.
AI researcher Peter Voss defines general intelligence as having “the ability to learn anything (in principle).” Under his criteria, AGI’s learning ability would need to be “autonomous, goal-directed, and highly adaptive.” AGI is generally conceptualized as being AI that has the ability to match the cognitive capacity of humans and is categorized under the label of strong AI. (Artificial super intelligence [ASI] also sits under the strong AI category; however, it refers to the concept of AI that surpasses the function of the human brain.)
In comparison, most of the AI available at this point would be categorized as weak AI, or narrow AI, as it has been developed to focus on specific tasks and applications. It’s worth noting, however, that these AI systems can still be incredibly powerful and complex, with applications ranging from autonomous vehicle systems to voice-activated virtual assistants; they merely rely on some level of human programming for training and accuracy.
3. How Close Are We to AGI?
It is hard to predict how far we are from AGI. Experts disagree on whether AGI will ever happen or whether it is merely a few years away. For example, Geoffrey Hinton, winner of the Turing Award (the highest prize in computer science), believes AGI is less than 20 years away but that it will not present an existential threat. Meanwhile, his fellow Turing Award winner Yoshua Bengio states that we do not know how many decades it will take to reach AGI.
Microsoft researchers say they’ve already seen sparks of AGI in GPT-4. Anthropic CEO Dario Amodei says AGI will arrive in just two to three years. DeepMind co-founder Shane Legg predicts that there is a 50% chance AGI will arrive by 2028.
Google Brain co-founder and current Landing AI CEO Andrew Ng says the tech industry is still very far from achieving systems with human-level general intelligence. And he's concerned about the misuse of the term itself.
I think that it’s very muddy definitions of AGI that make people jump on the ‘are we getting close to AGI?’ The answer is no, unless you change the definition of AGI, in which case you could totally be there in three years or maybe even 30 years ago.
— Andrew NG —
4. Should We Fear AGI?
This is a very important question, and the answer is: not yet. Current AI systems have not risen to the level of AGI. However, many people inside and outside the AI industry believe that the advent of large language models like GPT-4 has shortened the timeline for reaching that goal.
There’s currently much debate within AI circles about whether AGI systems are inherently dangerous. Some researchers believe that AGI systems are inherently dangerous because their generalized knowledge and cognitive skills will permit them to invent their own plans and objectives.
Other researchers believe that getting to AGI will be a gradual, iterative process in which there will be time to build thoughtful safety guardrails at every step.
Are you looking to start a career in data science and AI but don't know how? I offer data science mentoring sessions and long-term career mentoring:
Mentoring sessions: https://lnkd.in/dXeg3KPW
Long-term mentoring: https://lnkd.in/dtdUYBrM