Researchers Say It'll Be Impossible to Control a Super-Intelligent AI


For decades, people have debated whether artificial intelligence could one day destroy humanity. In 2021, researchers weighed in on whether we will ever be able to control a highly advanced computer super-intelligence. Their answer? Almost certainly not.

The catch is that controlling a super-intelligence far beyond human comprehension would require simulating and analyzing that super-intelligence, which we could then control. But if we are unable to comprehend it, it is impossible to create such a simulation in the first place.

The authors of the paper argue that rules such as "cause no harm to humans" cannot be set unless we can anticipate the kinds of scenarios an AI is likely to encounter. And once a computer system works at a level beyond the scope of our programmers, we can no longer impose limits.

The researchers concluded that a super-intelligence poses a fundamentally different problem from those typically studied under the banner of "robot ethics."

This is because a super-intelligence is multifaceted, and therefore potentially capable of mobilizing a diversity of resources to achieve objectives that are incomprehensible to humans, let alone controllable by them.

Part of the team's reasoning came from the halting problem, put forward by Alan Turing in 1936. The problem is to determine whether a computer program will reach a conclusion and stop, or simply loop forever trying to find one.
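To make the halting problem concrete, here is a minimal Python sketch. This is our own illustration, not from the paper (Turing's proof works abstractly with Turing machines), and the function names are ours:

```python
def counts_down(n):
    # Clearly halts for any non-negative integer n.
    while n > 0:
        n -= 1
    return "halted"

def collatz(n):
    # Whether this halts for every positive integer n is a famous
    # open question (the Collatz conjecture): even tiny programs
    # can make "does it stop?" genuinely hard to answer.
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return "halted"

print(counts_down(10))  # halts
print(collatz(27))      # halts after 111 steps, but no general proof exists
```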

As Turing proved through some clever math, while we can know the answer for certain specific programs, it is logically impossible to find a method that will tell us for every hypothetical program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.
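Turing's impossibility argument can be compressed into a few deliberately self-defeating lines. A sketch, assuming for contradiction that a perfect halts() oracle existed (all names here are ours):

```python
def halts(program, data):
    # Hypothetical perfect oracle: True iff program(data) eventually
    # halts. Turing proved no such function can exist in general;
    # this stub only marks where it would sit.
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # Diagonal construction: do the opposite of whatever the oracle
    # predicts about a program run on its own source.
    if halts(program, program):
        while True:      # oracle says "halts" -> loop forever
            pass
    return "halted"      # oracle says "loops" -> halt immediately

# If halts() existed, halts(paradox, paradox) could return neither
# True nor False without contradicting paradox's actual behavior,
# so no universal halts() can be written.
```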

Any program written to stop the AI from harming humans and destroying the world, for example, may reach a conclusion (and halt), or it may not. Either way, it is mathematically impossible for us to be absolutely sure, which means the AI is not containable.
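The containment argument works by a reduction in this spirit: a checker that could decide "will this program ever cause harm?" could be repurposed to decide halting. A loose Python paraphrase, simplified from the paper's formal construction and with all names ours:

```python
def is_safe(program, data):
    # Hypothetical containment check: True iff program(data) never
    # triggers harmful behavior.
    raise NotImplementedError

def harm():
    # Stand-in for whatever behavior the containment policy forbids.
    pass

def halts_via_containment(program, data):
    def wrapped(_):
        program(data)  # run the program under scrutiny...
        harm()         # ...and misbehave only once it finishes
    # wrapped is unsafe exactly when program(data) halts, so a
    # perfect is_safe() would decide the halting problem, which
    # Turing showed is impossible.
    return not is_safe(wrapped, None)
```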

"In effect, this makes the containment algorithm unusable," said computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany in 2021.

The alternative to teaching the AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely certain of doing, the researchers say) is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for instance.
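As a toy illustration of what "cutting off the network" might look like in software (our example, not the paper's; real isolation is an operating-system and hardware measure), one can monkey-patch Python's socket layer:

```python
import socket

def cut_off_network():
    # Toy illustration only: replace socket creation so any attempt
    # to open a connection raises. Real isolation happens at the OS
    # or hardware level; a patch this weak could be undone by the
    # very program it is meant to contain.
    def blocked(*args, **kwargs):
        raise PermissionError("network access disabled")
    socket.socket = blocked

cut_off_network()
# import urllib.request
# urllib.request.urlopen("https://example.com")  # would now raise
```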

The study rejects this idea too, arguing that it would limit the reach of the artificial intelligence; the argument goes that if we aren't going to use it to solve problems beyond the scope of humans, then why create it at all?

If we push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about the directions we're taking.

In 2021, computer scientist Manuel Cebrian of the Max Planck Institute for Human Development remarked, "A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it."

The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.


The research was published in the Journal of Artificial Intelligence Research.