His concern has a lot to do with money. The heavyweights of the technology sector are betting heavily on artificial intelligence. Google, for example, acquired DeepMind, a company specializing in neural networks in which Musk himself had already invested. The search giant is working on a computer system capable of picking out, in a video, a human face as opposed to a dog's, people skating or sleeping, a cat, and all of it on its own, without anyone labeling the footage beforehand. Investment in artificial intelligence has reached the point where we already enjoy digital assistants that make many day-to-day tasks easier, such as Amazon's Echo.
The idea is that the system learns, so to speak, by feeding on millions of recordings. IBM, for its part, is fine-tuning its Watson supercomputer, which in 2011 defeated the human champions of the US quiz show 'Jeopardy!'. The company intends to improve the machine's cognitive functions and test its ability to perform medical diagnoses, personality analyses and real-time translations. Facebook's engineers are not far behind: they have devised an algorithm that successfully recognizes a face 97% of the time, even in poorly captured images.
Musk insists that things are moving too fast, and that is why AI is a technology that can be as dangerous as nuclear weapons. In the choir of doomsayers of the artificial apocalypse, one voice that stands out is that of the Swedish philosopher Nick Bostrom, of the University of Oxford, who compares our destiny to that of horses when they were replaced by cars and tractors. In 1915, there were about twenty-six million of these equines in the US; by the 1950s, only two million remained. The horses were slaughtered and sold as dog food. For Bostrom, AI poses an existential risk to humanity comparable to the impact of a large asteroid or a nuclear holocaust. All this, of course, provided that we manage to build thinking computers. But what exactly does that mean?
Actually, the concept of artificial intelligence is not as recent as it seems. More than seventy years have passed since the days of Alan Turing – considered its father – and the construction of his Bombe, the device that made it possible to decipher the codes of the German Enigma machine. At one point in the film 'The Imitation Game' (Morten Tyldum, 2014), in which Benedict Cumberbatch plays the famous mathematician, a detective asks him: 'Will machines one day be able to think like humans?' To which he replies: 'Most people think not.'
The problem, he continues, is that the detective is asking a stupid question. Of course machines cannot think like people: they are different, and they think differently. The real question is whether, just because something is different, that means it cannot think. The detective then asks him about the title of his article, 'The Imitation Game'. 'It's a game, a test to determine whether someone is a human or a machine,' says Turing. 'There is a general theme. A judge asks questions and, from the answers, decides whether he is talking to a person or a machine.' The scene may be invented, but its content is real. The test exists.
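To make the rules of the game concrete, here is a minimal sketch in Python. It is my own illustration, not anything from the film or from Turing's paper, and the canned machine replies are placeholders standing in for a real conversational program: a judge asks questions of a hidden interlocutor and, from the answers alone, must decide whether it is a person or a machine.

```python
import random

def machine_reply(question: str) -> str:
    """Placeholder 'machine': canned answers stand in for a real chatbot."""
    canned = [
        "That is an interesting question.",
        "I would rather not say.",
        "Could you rephrase that?",
    ]
    return random.choice(canned)

def human_reply(question: str) -> str:
    """A second person at the keyboard plays the hidden human."""
    return input("  (hidden human, type an answer) > ")

def imitation_game(rounds: int = 3) -> None:
    # Randomly hide who is answering; the judge must not know.
    is_machine = random.random() < 0.5
    respond = machine_reply if is_machine else human_reply
    for _ in range(rounds):
        question = input("Judge, ask a question > ")
        print("Answer:", respond(question))
    # After the questions, the judge delivers a verdict.
    guess = input("Judge, human or machine? > ").strip().lower()
    actual = "machine" if is_machine else "human"
    print("It was a", actual, "- you were", "right." if guess == actual else "wrong.")

if __name__ == "__main__":
    imitation_game()
```

The point of the setup is that the judge sees only text: if the machine's answers are indistinguishable from a person's, it passes the test, whatever is going on inside it.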
AI is all the rage thanks to literature and cinema. But how far along are we really? Years ago, I visited the Robotics Institute in Pittsburgh, in the US, one of the temples of the discipline. At the time, I was part of a TVE team gathering the latest techno-scientific advances for a popular series called 2.Mil. I have to admit it: I arrived there with the image of robotics that science fiction has instilled in us, and I was in for a shock.
The gadgets they had there were little more than clunkers in the hands of engineers in jeans, and they looked as if they had come out of a geek's garage. They broke down at the slightest opportunity. They told me about Florence, a robot nurse that was going to revolutionize geriatrics. In reality, she was a kind of barrel with a head, onto which they had glued silicone eyes and lips so she could flash a smile.
Florence had a built-in television camera and monitor. Her batteries drained quickly. And, of course, she did not understand a word we said. Everything she said had to be programmed in advance, so an engineer worked overtime to get her out into the hall to deliver a welcome message.
I had read many things about what they were doing in Pittsburgh, especially about Xavier, a robot that knew where it was going, a revolution. But it was nothing more than another barrel on wheels that moved through the corridors of the institute thanks to a map stored in its memory. When it reached a staircase, it stopped so as not to destroy itself. Apparently it would barge into offices to tell dirty jokes. That morning I saw Xavier being dragged away, an image I will never forget: off to the catacombs of robotics! I went to the office of Hans Moravec, one of the field's most famous visionaries, but everything he said was hard to believe.
Moravec was convinced that within fifty years androids would displace humans. For more than an hour he talked nonstop about the evolution of these devices and their growing intelligence, driven by the advance of microprocessors and their capacity to handle ever more information. It was a captivating talk: the evolution of the machines was going to be unstoppable. 'The time has come for us to leave,' concluded the Austrian-born scientist.
Moravec would later leave academia to found a company making industrial robots with 3D vision. Before that, he had shown me on his computer an image as perceived by one of these machines, in which chairs and tables appeared pixelated. How could the machine know what was what? In that summer of 1999, Moravec told me he was fascinated by a new internet search engine, the most intelligent and best designed he had seen. It was the first time I heard of Google.
In 2014, Google bought DeepMind, the AI company in which Musk had invested, and it has developed the first autonomous car, which has already traveled a couple of million kilometers without a driver, as well as the system that tells cats from people on YouTube. The world is awash in inconceivable amounts of information flowing through the network, and computing power keeps growing. But do we really have reason to fear that a machine will one day think like us?