Is it possible to control high-level artificial intelligence?

According to a study carried out by researchers at the Max Planck Institute for Human Development and published in the Journal of Artificial Intelligence Research, humanity would be unable to control a high-level artificial intelligence. The problem is that controlling such a hypothetical intelligence would require simulating it; but if we cannot understand it, we cannot build that simulation.

For the authors, rules of the type ‘do no harm to humans’ would be impossible to establish if we do not understand the kinds of scenarios an artificial intelligence (AI) might produce. So once a computer system starts working at a higher level, beyond the reach of our programmers, we will no longer be able to set limits on it.

«A superintelligence poses a problem fundamentally different from those typically studied under the banner of ‘robot ethics’, the researchers write. This is because a superintelligence is multifaceted and therefore potentially capable of mobilizing a diversity of resources to achieve objectives that are potentially incomprehensible to humans, let alone controllable».

The halting problem

Part of the team's reasoning rests on the ‘halting problem’ presented by Alan Turing in 1936. The problem asks whether a computer program will reach a conclusion and an answer (and therefore halt), or will simply run forever trying to find one.


As Turing demonstrated mathematically, although we can answer this for some programs, it is impossible to find a method that decides it for every potential program that could ever be written. Which brings us back to a superintelligent artificial intelligence (AI), which could hold every possible computer program in its memory simultaneously.
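Turing's proof can be sketched in a few lines of code. Suppose, for the sake of contradiction, that a universal checker `halts(program, data)` existed (the function names here are illustrative, not from the study). One could then write a program that defeats it:

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) eventually stops.
    Turing proved that no such total function can exist."""
    raise NotImplementedError("provably impossible to implement for all programs")

def paradox(program):
    """Does the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    else:
        return           # oracle said "loops forever" -> halt immediately

# Feeding `paradox` its own source yields a contradiction either way:
# if halts(paradox, paradox) is True, paradox loops forever; if False,
# it halts. So no universal `halts` checker can exist.
```

The same diagonalization is what blocks any program that must predict, for every possible program, what that program will eventually do.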

Therefore, any program written, for example, to prevent an artificial intelligence (AI) from harming humans and destroying the world may reach a conclusion (and halt) or not: it is mathematically impossible for us to be absolutely sure, which means that we cannot control it. "In effect," says Iyad Rahwan, co-author of the study, "this renders the containment algorithm unusable".
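To make the dilemma concrete, here is a minimal, illustrative sketch (not the study's actual algorithm; all names are assumptions) of a containment check that simulates an AI program step by step. Any real checker must either risk running forever or give up after a step budget, leaving some programs undecided:

```python
def is_safe(ai_program, world, max_steps=10**6):
    """Approximate containment check with a step budget.
    Returns True (no harm observed, simulation halted), False (harm
    observed), or None (undecided). The None case can never be
    eliminated for all programs: removing the budget would amount to
    solving the halting problem."""
    steps = 0
    for action in ai_program(world):
        steps += 1
        if action == "harm":
            return False      # observed harm: definitely unsafe
        if steps >= max_steps:
            return None       # budget exhausted: undecidable in general
    return True               # simulation halted without harm

# Two toy "AI programs" as generators of actions:
def benign(world):
    yield "compute"           # does one harmless step, then halts

def looper(world):
    while True:               # never halts, never harms
        yield "compute"
```

Here `is_safe(benign, {})` returns `True`, while `is_safe(looper, {}, max_steps=10)` returns `None`: the checker cannot distinguish a program that is still computing from one that will run forever.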

Artificial intelligence

The only alternative to instructing the artificial intelligence (AI) not to destroy the world, something no algorithm can guarantee, would be to limit the superintelligence's capabilities: for example, by disconnecting it from parts of the Internet or from certain critical networks.

But the study rejects this idea too, since it would limit the reach of the artificial intelligence. If we are not going to use it to solve problems beyond human reach, why create it at all?

On the other hand, if we press ahead with artificial intelligence, we may not even notice the moment it escapes our control, such is its incomprehensibility. This means we must start thinking seriously about the direction in which we are advancing.

"A superintelligent machine that controls the world sounds like science fiction," says Manuel Cebrian, another of the study's authors. "But there are already machines that perform certain important tasks independently, without their programmers fully understanding how they learned them." The question, therefore, is this: could this become uncontrollable or even dangerous for humanity? The authors seem convinced that it could.

Source: José Manuel Nieves / ABC

Reference article: https://www.abc.es/ciencia/abci-para-humanidad-sera-imposible-controlar-superinteligencia-artificial-202111160224_noticia.html