Opinions of Sunday, 8 March 2026
Columnist: Kwami Ahiabenu
Can artificial intelligence surpass human intelligence? This is a million-dollar question. Though quite futuristic, an answer can be found in the concept of Artificial General Intelligence (AGI).
It is imperative to emphasise that AGI is not available today, nor is there evidence of its existence, though some organizations, such as OpenAI, DeepMind, and Anthropic, are researching advanced AI systems integrated with agentic capabilities. We have a long way to go, with current attempts exhibiting only fragmented capabilities: that is, systems that perform complex tasks in a localized and limited manner.
Defining “intelligence” is complex, and testing it is even tougher. Even so, AGI can be described as a type of AI with the capacity to understand, learn, and apply knowledge across a wide range of tasks at a human level, and perhaps beyond.
AGI can also be understood through distinct lenses: functional AGI, which refers to its ability to perform diverse intellectual tasks; cognitive AGI, characterized by its capacity to replicate human-like reasoning, including common sense, abstract thinking, and contextual understanding; self-learning AGI, which implies the system's inherent ability to continually improve itself without human intervention; and finally, philosophical AGI, which explores debates around self-awareness, emotions, and potentially even consciousness.
Today, most AI systems, known as Artificial Narrow Intelligence (ANI), are designed to perform specific tasks: for example, text generation, image recognition, and game playing, among others. ANI is characterized by its inability to truly understand the world. It also cannot independently transfer its learning in the way humans can, since it requires human-designed training and objectives in order to perform a given task.
An example can illustrate this notion: a medical AI can analyze X-rays, but it cannot independently diagnose conditions and recommend treatment unless it is separately trained and supervised by humans.
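To make this concrete, the following minimal Python sketch (a hypothetical illustration using stock scikit-learn datasets and models, not any real medical system) shows the narrow-AI pattern in practice: each model performs only the single task a human designed and trained it for, and a second task requires a separate, human-designed training run.

```python
# A minimal illustration of Artificial Narrow Intelligence (ANI):
# each model is competent only at the one task a human designed
# and trained it for; neither can transfer its learning on its own.
from sklearn.datasets import load_breast_cancer, load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Task 1: digit recognition. Humans chose the data, labels, and objective.
X1, y1 = load_digits(return_X_y=True)
X1_tr, X1_te, y1_tr, y1_te = train_test_split(X1, y1, random_state=0)
digit_model = LogisticRegression(max_iter=5000).fit(X1_tr, y1_tr)
print("digit recognition accuracy:", digit_model.score(X1_te, y1_te))

# Task 2: a diagnosis-style classification task. The digit model is of
# no use here; a completely separate model must be trained from scratch.
X2, y2 = load_breast_cancer(return_X_y=True)
X2_tr, X2_te, y2_tr, y2_te = train_test_split(X2, y2, random_state=0)
diagnosis_model = LogisticRegression(max_iter=5000).fit(X2_tr, y2_tr)
print("diagnosis accuracy:", diagnosis_model.score(X2_te, y2_te))
```

Neither model "understands" its domain; each simply optimizes the objective a human specified for it, and that gap is precisely what AGI research aims to close.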
The futuristic AGI goal is to deploy a system that independently learns new tasks without being retrained from scratch, reasons abstractly, plans with a long-term goal in mind, adapts naturally to new environments, and is equipped to transfer knowledge between domains in the same way humans do.
AGI is not going to happen tomorrow, but it matters because, if it becomes possible, it could provide invaluable tools for addressing important global challenges: accelerating scientific discovery, revolutionizing economies, lowering barriers to innovation and creativity, and delivering timely, responsible solutions to complex global problems for billions of people.
That said, AGI will open a can of worms, since it will raise major ethical and safety challenges with no easy answers. Given this background, AGI can be considered one of the most important technological questions of the 21st century.
Although AGI aims to match or exceed human-level general intelligence, it differs from it on several levels. It would not necessarily think or experience consciousness as humans do, nor would it possess an innate, human-like way of processing information.
Also, the philosophical debate over whether AGI would be conscious is not going to be settled any time soon. In one way or another, this debate invokes philosophical traditions, old and new, that have attempted to answer deeper questions: What is a human? What is the good life? What is knowledge? And what is our relation to the cosmos and to each other?
Thus, whether machines can truly have minds and consciousness like humans ("strong AI") or can at best only imitate human behavior ("weak AI") remains a key question on the table. More importantly, the ethical and practical threats posed by AGI are central to this debate.
AGI remains a work in progress, and its research and development is a fast-emerging area in which companies are racing to invest. As to when AGI will become a reality, nobody knows.
This space is filled with speculation, with multiple shifting timelines and a number of websites claiming to track progress, such as Life Architect AGI, AI-2027, AGI Index, The AGI Clock, Takeoff Tracker, and Skynet Countdown.
That said, the pursuit of AGI has reached a significant milestone: it has shifted from theoretical speculation to a more plausible prospect in the medium to long term.
The long road toward achieving AGI is complex and fraught with significant challenges. It demands massive investments, creating substantial economic obstacles.
Key issues include equipping machines with implicit knowledge, intuition, and long-term knowledge retention. Further challenges encompass ethical concerns, ownership frameworks, explainability, and improving transparency, trust, and interpretability in AGI decision-making.
The evolution of AGI also grapples with questions of governance, human-AGI interaction, and inherent risks. Critical AGI safety risks include misuse (a user instructing the system to cause harm), misalignment (the system pursuing unintended goals), structural risks (harms arising from multiple agents with no clear accountability), and mistakes (unintentional harm caused by the system).
In conclusion, the transition from AI to AGI is underway, with no definite date for when it will be fully realized. The goal is to enable the development of responsible, socially attuned AGI that aligns closely with ethical principles, human values, and societal norms.
Today is a good time to collectively decide on the shape and direction of AGI. Although there are no easy answers, keeping humanity at the centre of whatever shape AGI takes will help ensure its utility for our world.