CAMBRIDGE – An elderly statesman, a former big-tech executive, and a computer scientist walk into a bar. What are they talking about? Artificial intelligence, of course, because everyone is talking about it, or to it, whether they call it Alexa, Siri, or something else. Don't expect a science-fiction future; the age of AI has already arrived. Machine learning in particular is having a huge impact on our lives and will greatly shape our future as well.
That is the message of this fascinating new book by former US Secretary of State Henry A. Kissinger, former Google CEO Eric Schmidt, and MIT dean Daniel Huttenlocher. And it comes with a warning: AI will challenge the primacy of human reason that has existed since the dawn of the Enlightenment.
Can machines think? Are they intelligent? And what do those terms even mean? In 1950, the famous British mathematician Alan Turing proposed sidestepping these thorny philosophical questions and judging by performance instead: if a machine's performance is indistinguishable from that of a human being, we should call it "intelligent." Early computer programs generally produced rigid, static solutions that failed this "Turing test," and the field of AI languished through the 1980s.
But a revolutionary advance came in the 1990s with a new approach: letting machines learn on their own rather than following programs that tried to codify human understanding. Unlike classical algorithms, which consist of instructions for producing precise results, machine-learning algorithms consist of instructions for improving imprecise results. Thus was born the modern field of machine learning: programs that learn through experience.
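To make the distinction concrete, here is a minimal sketch of my own (not from the book, and with an invented temperature-conversion task): the classical algorithm computes its result exactly from fixed instructions, while the machine-learning loop starts from an imprecise guess and improves it from examples.

```python
# Classical algorithm: fixed instructions that produce a precise result.
def fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32

# Machine-learning "algorithm": instructions for improving an imprecise
# result. Gradient descent gradually learns the same conversion from examples.
examples = [(c, fahrenheit(c)) for c in range(-40, 101, 10)]

w, b = 0.0, 0.0              # initial, imprecise guess for f = w*c + b
lr = 1e-4                    # learning rate
for _ in range(100_000):
    for c, f in examples:
        err = (w * c + b) - f    # how wrong the current guess is
        w -= lr * err * c        # nudge each parameter to shrink the error
        b -= lr * err

print(f"learned: f = {w:.3f}*c + {b:.3f}")   # approaches f = 1.800*c + 32.000
```

After enough passes, the learned parameters approach the exact rule that the classical version was given for free.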
At first, the technique of running machine-learning algorithms on layered neural networks (inspired by the structure of the human brain) was limited by a lack of computing power. That changed in recent years. In 2017, AlphaZero, an AI program developed by Google's DeepMind subsidiary, defeated Stockfish, the world's best chess program. The remarkable fact is not that one program beat another, but that it taught itself how to do so. Its creators gave it the rules of chess and instructed it to find a winning strategy. After just four hours of learning by playing against itself, AlphaZero became the world's best chess player, defeating Stockfish twenty-eight times without losing a single game (there were 72 draws).
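As a loose, toy analogy, and emphatically not DeepMind's actual method (AlphaZero pairs deep neural networks with Monte Carlo tree search at vastly greater scale), the sketch below hands a program only the rules of a trivial stone-taking game and lets it get stronger purely by playing against itself.

```python
import random
from collections import defaultdict

# Give the program only the rules of a trivial game (Nim: take 1 or 2
# stones, whoever takes the last stone wins) and let it improve purely by
# self-play, keeping a learned value Q for each (stones, move) pair.
Q = defaultdict(float)
EPS, ALPHA = 0.2, 0.5        # exploration rate, learning rate

def legal(stones):
    return [a for a in (1, 2) if a <= stones]

def best(stones):
    return max(legal(stones), key=lambda a: Q[(stones, a)])

random.seed(1)
for _ in range(20_000):                      # self-play games
    stones, history = random.randint(2, 12), []
    while stones > 0:
        a = random.choice(legal(stones)) if random.random() < EPS else best(stones)
        history.append((stones, a))
        stones -= a
    # the player who moved last won; credit moves with alternating rewards
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

print({s: best(s) for s in range(2, 10)})    # learned move for each position
# optimal play takes (stones mod 3) when possible, avoiding multiples of 3
```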
When it plays, AlphaZero exploits its ability to recognize patterns across vast sets of possibilities that the human mind cannot perceive, process, or employ. Since then, similar machine-learning methods have enabled AI not only to defeat human chess experts but to discover entirely new chess strategies. As the authors point out, AI thereby transcends the Turing test: the issue is no longer performance indistinguishable from human intelligence, but performance that surpasses it.
There are also generative neural networks, which can create new images or text. The authors cite OpenAI's GPT-3 as one of today's most notable generative AIs. In 2020, the company developed a language model that trains itself on text available on the Internet and, given a few words, can extrapolate sentences and paragraphs by detecting patterns in sequential elements. The system can compose new, original texts that pass the Turing test: displaying intelligent behavior indistinguishable from that of a human being.
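The underlying idea of extrapolating text by detecting patterns in sequences can be illustrated with a toy of my own, incomparably simpler than GPT-3: it records which word follows which in a tiny corpus, then samples continuations from a prompt.

```python
import random
from collections import defaultdict

# A toy generative language model: learn adjacent-word patterns from a
# corpus, then extrapolate text from a prompt by sampling likely next words.
corpus = (
    "the age of ai has arrived . machine learning is having a huge impact "
    "on our lives . the age of machine learning will affect our future ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):   # detect patterns in sequences
    follows[prev].append(nxt)

def generate(prompt: str, length: int = 12) -> str:
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # sample a plausible next word
    return " ".join(words)

print(generate("the age"))
```

GPT-3 does the analogous thing with billions of learned parameters over a huge swath of the public Internet, which is why its output can pass for human prose.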
I had firsthand experience of its capabilities: I gave the program a few words, and after an Internet search it produced a credible fake news story about me in less than a minute. It was a fabrication, of course, but my case is beside the point. What if it were a story about a political leader in the middle of a crucial election? What future awaits democracy if, a few days before a vote, any Internet user can unleash generative bots to flood political discourse?
Democracy already suffers from the problem of political polarization, which is aggravated by social media algorithms that chase clicks (and ad revenue) by presenting users with ever more extreme ideas to keep them “hooked.” The problem of fake news is not new, but its rapid, cheap, large-scale amplification by AI algorithms certainly is. The existence of a right to free expression does not imply a right to free amplification.
The authors argue that these fundamental issues are becoming ever more relevant as global network platforms such as Google, Twitter, and Facebook use AI to gather and filter more information than their users could ever process. But these filtering methods segregate users, creating social echo chambers that foment discord between groups. What one person takes for an accurate picture of reality diverges sharply from the reality seen by other people or groups, reinforcing and deepening polarization. Increasingly, AI decides what is important and what is true, and the results do not bode well for the health of democracy.
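A stylized sketch of my own (not the authors' model, and with a deliberately crude engagement score) shows how ranking by predicted engagement can produce the echo-chamber drift described above: items that agree with the user and are more extreme rank higher, and the user's view follows the feed.

```python
import random

# Toy feedback loop: a feed ranks items by predicted engagement, modeled
# crudely as "agrees with the user, with a bonus for extremity". A nearly
# neutral user is gradually pulled toward one end of the opinion axis.
random.seed(0)
items = [random.uniform(-1, 1) for _ in range(500)]  # items on an opinion axis
user = 0.05                                          # user starts nearly neutral

def engagement(item: float, user: float) -> float:
    return -abs(item - user) + 1.5 * abs(item)       # similarity + extremity bonus

for _ in range(100):
    feed = sorted(items, key=lambda x: engagement(x, user), reverse=True)[:5]
    user = 0.9 * user + 0.1 * random.choice(feed)    # views drift toward the feed

print(f"user's position after 100 rounds: {user:+.2f}")  # far from the center
```

Run repeatedly, the nearly neutral user reliably ends up near the extreme on the side of their slight initial lean.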
Of course, AI also has many potential benefits for humanity. An AI algorithm can give a more reliable interpretation of mammogram results than a human technician. (This raises an interesting problem for physicians who choose not to follow the machine's recommendation: do they expose themselves to malpractice lawsuits?)
The authors cite the case of halicin, a new antibiotic discovered in 2020 thanks to MIT research in which AI made it possible to model millions of compounds in a matter of days, a task beyond human reach, and to explore previously undiscovered or unexplained ways of killing bacteria. The researchers noted that discovering halicin by traditional experimental methods, without AI, would have been prohibitively expensive or impossible. As the authors observe, AI's potential is enormous: translating languages, detecting diseases, and modeling climate change are just a few examples of its possible applications.
The book says little about the specter of AGI (artificial general intelligence), that is, software capable of performing any intellectual task, including relating tasks and concepts across disciplines. Whatever the long-term future of AGI, we already have our hands full with the machine-learning-based generative AI that exists today. It can draw conclusions, make forecasts, and make decisions, but it is not self-aware and cannot reflect on its place in the world. It has no intentionality, motivation, morality, or emotions. In other words, it is not the equivalent of a human being.
But despite the limits of today's AI, we shouldn't underestimate the profound effects it's already having on the world. In the words of the authors:
“Unaware of the many modern conveniences that AI already provides us, we have slowly and almost passively come to depend on the technology without registering the fact of our dependence or its consequences. In daily life, AI accompanies us, helping us decide what to eat, what to wear, what to believe, where to go, and how to get there (...) But the price (largely unnoticed) of these and other possibilities is that people's relationship with reason and reality is altered.”
AI is already influencing world politics. Because it is a general-purpose enabling technology, unevenness in its distribution is bound to affect the global balance of power. At this stage, although machine learning is within everyone's reach, the leading AI powers are the United States and China. Of the seven largest global companies in the field, three are American and four are Chinese.
Chinese President Xi Jinping has set a goal that by 2030 China will be the leading country in AI. Kai-Fu Lee (of Sinovation Ventures in Beijing) notes that with its huge population, the world's largest Internet, abundance of data, and little concern for privacy, China is well positioned for AI development. Furthermore, Lee argues that having a huge market and numerous engineers may be more important than having world-class universities and scientists.
But the quality of data matters as much as the quantity, and the same goes for the quality of chips and algorithms. Here the United States may hold the lead. Kissinger, Schmidt, and Huttenlocher argue that because the development of more advanced AI depends on the availability of data and computing power, designing training methods that require less of both is a critical frontier.
Beyond economic competition, AI will have a major effect on military competition and warfare. In the authors' words, "the introduction of non-human logic into military systems will transform strategy." When generative machine-learning AI systems fight each other, humans may find it difficult to anticipate the results of their interaction. Speed, breadth of effect, and endurance will all be at a premium.
AI will make conflicts more intense and unpredictable. The attack surface of societies connected through digital networks will be too vast for human operators to defend manually. The existence of lethal weapons capable of choosing targets and starting combat autonomously will reduce the ability of human beings to intervene in time. No matter how hard we try to ensure that the system operates under human control or supervision, there will be strong incentives for preemptive strike and premature escalation. Crisis management will become more difficult.
These risks should prompt governments to establish consultation mechanisms and arms-control treaties, but it is not yet clear what arms control would look like for AI. Unlike nuclear and conventional weapons, which are bulky, visible, heavy, and countable, swarms of AI-powered drones or torpedoes are harder to verify, and the algorithms that guide them are more elusive still.
Given the importance and pervasiveness of AI in civilian applications, limiting its development altogether will not be easy. But there is still time to do something about military targeting capabilities. The United States already distinguishes between AI-enabled weapons and autonomous AI weapons: the former are weapons with greater precision and lethality that remain under human control; the latter can make lethal decisions without human intervention. The United States says it will not possess weapons of the second type.
The United Nations, moreover, has been studying the possibility of a new international treaty banning weapons of this kind. But will all countries sign it? How would compliance be verified? And given generative AI's capacity to learn, might weapons evolve in ways that let them evade restrictions? In any case, measures are needed to curb the drive toward automaticity. And, of course, no automaticity should be tolerated anywhere near nuclear weapons systems.
Without denying the lucidity and wisdom of this very well-written book, I wish the authors had said more about possible solutions to the problems of human control of AI at the national and international levels. They point out AI's weaknesses: it is not self-aware, and it does not feel or know what it does not know. Brilliant as it is at outperforming humans in some activities, it is incapable of identifying and avoiding mistakes that would be obvious to any child. Nobel literature laureate Kazuo Ishiguro renders this brilliantly in his novel Klara and the Sun.
Kissinger, Schmidt, and Huttenlocher note that AI's inability to check its own glaring errors underscores the importance of developing testing processes that allow humans to identify limits, review proposed courses of action, and build resilience into systems in case the AI fails. Society should not allow AI to be used in any system until its creators have demonstrated through such testing that it can be trusted. As the authors put it: “Creating professional certifications, regulatory oversight mechanisms, and AI oversight programs (in addition to the auditing expertise their implementation will require) will be a crucial project for society.”
In this sense, the rigor of the regulatory regime should depend on how dangerous the activity is: AI that drives a car must be subject to stricter oversight than AI on an entertainment platform like TikTok.
Finally, the authors propose creating a national commission, composed of distinguished figures from the highest levels of government, business, and academia, with two functions: ensuring that the country remains intellectually and strategically competitive in AI, and raising global awareness of the technology's cultural implications. These are wise words, but I wish they had said more about how to achieve those important goals. In the meantime, they have produced an engaging introduction to issues that will be crucial to our future and will force us to rethink the very nature of humanity.
Joseph S. Nye, former US Assistant Secretary of Defense for International Security, former chairman of the US National Intelligence Council, and former Assistant Secretary of State for Security Assistance, Science and Technology, is a professor at Harvard.
Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, The Age of AI: And Our Human Future
This article was originally published on Project Syndicate.