It’s clear that we are in the initial stages of Artificial Intelligence (AI), using chatbots like ChatGPT, which are powered by Large Language Models (LLMs). However, AI is not limited to chatbots. AI agents, AGI, and Superintelligence are the next paradigms of the AI era we are about to witness. So in this article, I explain what Superintelligence is and how Safe Superintelligence can protect mankind from powerful AI systems.
What is Superintelligence?
As the name suggests, Superintelligence is a form of intelligence that far surpasses the brightest and most ingenious human minds in every domain. It possesses knowledge, skills, and creativity an order of magnitude beyond those of biological humans.
Keep in mind that Superintelligence is a hypothetical concept in which AI systems gain cognitive abilities far beyond human capabilities. It could unlock new paradigms in scientific discovery, solve problems that have challenged human minds for centuries, think and reason much faster than humans, and perform actions in parallel.
It’s often said that Superintelligence will be even more capable than AGI — Artificial General Intelligence. David Chalmers, a cognitive scientist, says that AGI will gradually lead to superintelligence. An AGI system can match the abilities of humans in reasoning, learning, and understanding. Superintelligence, however, goes beyond that and exceeds human intelligence in all aspects.
Since Superintelligence can surpass human capabilities, there are many risks associated with this technology. Nick Bostrom, a prominent thinker, argues that there is an existential risk to humanity if Superintelligence is not aligned with human values and interests. It could lead to catastrophic outcomes for human society, possibly even human extinction.
Apart from that, Bostrom also raises questions about ethical issues regarding the creation and use of superintelligent systems. What will happen to the rights of the individual, who is going to control it, and what will be the impact on society and welfare? Once such a system is developed, there is a high chance that it can evade human attempts to control or limit its actions.
Not just that, Superintelligence can lead to an “Intelligence Explosion”, a term coined by the British mathematician I.J. Good in 1965. He theorized that a self-improving intelligent system can design and create even more powerful intelligent systems, leading to an intelligence explosion. In such a scenario, unintended consequences may follow which can be harmful to mankind.
Many AI theorists have argued that taming and controlling a superintelligent system will require rigorous alignment with human values. Such a system must be aligned in a way that it interprets instructions and performs actions correctly and responsibly.
Jan Leike, who co-led OpenAI’s Superalignment team, alleged that “safety culture and processes have taken a backseat to shiny products.” He has since joined Anthropic, a rival AI lab. Ilya Sutskever, on the other hand, has announced a new company called Safe Superintelligence Inc. (SSI) that aims to create a safe superintelligent system. SSI calls this “the most important technical problem of our time.”
Led by Sutskever, the company wants to work solely on achieving safe superintelligence, without being distracted by management overhead or product cycles. While working at OpenAI, Sutskever gave an interview to The Guardian in which he emphasized the potential risks and benefits of powerful AI systems.
Sutskever says, “AI is a double-edged sword: it has the potential to solve many of our problems, but it also creates new ones.” He contends that “the future is going to be good for AI regardless, but it would be nice if it were good for humans as well.”