A few weeks ago, Cristiano Ronaldo claimed that his competition with Messi had made him a better player, and that he in turn had made Messi a better player. Their rivalry helped both of them mark an era in football history. The same dynamic has driven Nadal, Federer and Djokovic to break the records of every tennis player who came before them. What does the rivalry between Cristiano and Messi, or between Nadal and Federer, have to do with algorithms?
Competition, well understood, can bring out the best in everyone. A young researcher at the University of Montreal named Ian Goodfellow understood this five years ago, when he proposed a machine learning method based on competition between two algorithms in a zero-sum game: whatever one algorithm gains, the other loses.
An algorithm is a set of instructions developed to execute a certain task. Algorithms know nothing of tiredness or excuses: they do not give up, and they can run through millions of repetitions until they find a solution. Building on these premises, Goodfellow developed a novel learning methodology that has revolutionized artificial intelligence over the last five years. If we set two algorithms to compete with each other, and design that competition in a "healthy" way, the possibilities are endless.
Known as Generative Adversarial Networks (GANs), these systems have been a disruptive advance in fields such as content generation, image and audio editing, security, and ethics in artificial intelligence.
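To make the two-player game concrete, here is a minimal sketch of the adversarial training loop in Python (PyTorch). Everything in it is illustrative: the toy 1-D data, the tiny networks and the hyperparameters are assumptions for demonstration, not Goodfellow's original setup.

```python
# A minimal GAN sketch (PyTorch): a generator learns to imitate samples
# from a 1-D Gaussian while a discriminator learns to tell real from fake.
# Architecture sizes, data and hyperparameters are illustrative only.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # "real" data: N(4, 1.5)
    fake = generator(torch.randn(64, 8))   # generated data

    # Discriminator turn: score real samples as 1, generated ones as 0.
    d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator turn: fool the discriminator into scoring fakes as 1.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The zero-sum structure is visible in the two losses: the discriminator is rewarded for telling real from fake, while the generator is rewarded for exactly the opposite.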
To get an idea of the impact of Goodfellow and his methodology, look no further than how often his work has been cited by other researchers: more than 60,000 citations in just four years, a figure that takes highly reputable researchers a lifetime to reach. Adversarial networks have many applications, including the creation of digital content, the detection of fake content such as DeepFakes, and the development of a more ethical artificial intelligence, among many others.
Who watches the watchers? Using Adversarial Networks to Introduce Ethics in Artificial Intelligence
More than three decades ago, the writer Alan Moore posed this question in his comic Watchmen. Building on this idea and on the possibilities offered by Goodfellow's adversarial networks, researchers at the Autonomous University of Madrid (UAM) have developed a method that removes certain sensitive factors (e.g. gender, ethnicity, age) from an algorithm's decision-making.
The method, called SensitiveNets or Sensitive Networks, has been tested on facial recognition technologies. These technologies have been shown to absorb human biases, and their performance is affected by demographic factors such as gender, age or ethnicity.
This means that our skin color, age or gender can determine the outcome of a search based on these technologies. Researchers from several renowned institutions (MIT, the University of Maryland, the University of Notre Dame) have shown that certain demographic groups receive discriminatory treatment from these artificial intelligences.
In other words, the systems make more errors on people from disadvantaged groups, a treatment that only widens the gap in an already deeply unequal society. The technology developed at the UAM is based on pitting two algorithms against each other, as described above.
The first is a traditional facial recognition algorithm (the Watchman) based on popular deep learning techniques. The second is a detector of sensitive information (the one who watches the Watchman). Training leads the first to recognize faces while, at the same time, denying the second any chance of detecting sensitive information (e.g. gender, ethnicity, age) in the process.
The second algorithm searches for any trace of this sensitive information in the decisions of the first, in a kind of cat-and-mouse game. Both algorithms are trained on a new database of 24,000 identities that allows for more than 100 billion possible iterations.
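The cat-and-mouse training can be sketched in code. The snippet below is a generic adversarial de-biasing loop in PyTorch, not the published SensitiveNets implementation: the network sizes, the synthetic data and the confusion loss are all assumptions made for illustration.

```python
# A hedged sketch of adversarial removal of sensitive information (PyTorch).
# NOT the actual SensitiveNets code: dimensions, data and losses are
# placeholders that illustrate the Watchman / watcher-of-the-Watchman game.
import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))
identity_head = nn.Linear(64, 1000)  # recognizes 1,000 hypothetical identities
adversary = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

opt_main = torch.optim.Adam(
    list(embedder.parameters()) + list(identity_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(1000):
    faces = torch.randn(32, 512)                # placeholder face features
    identities = torch.randint(0, 1000, (32,))  # placeholder identity labels
    gender = torch.randint(0, 2, (32,))         # placeholder sensitive labels

    # The one who watches the Watchman: learn to detect the sensitive
    # attribute in the face embedding.
    adv_loss = ce(adversary(embedder(faces).detach()), gender)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # The Watchman: learn to recognize identities while pushing the
    # adversary's predictions toward chance level (no sensitive signal).
    emb = embedder(faces)
    confusion = -adversary(emb).log_softmax(dim=1).mean()
    main_loss = ce(identity_head(emb), identities) + confusion
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
```

When training converges, the adversary can do no better than chance, which is precisely the sign that the embedding no longer carries the sensitive information.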
After a learning process that takes no more than two hours, the result is an artificial intelligence whose decision-making no longer relies on the sensitive information. The technology can be applied in many fields: the aforementioned facial recognition, but also automatic résumé screening algorithms, credit scoring algorithms, or any other machine learning system exposed to human bias.
To appreciate the difficulty of the task, bear in mind that sensitive information can be extracted from many sources. It was recently reported that Apple's credit algorithm grants more credit to men than to women, even when both have similar incomes. Apple defends itself by arguing that it does not use gender as an input to its system.
But that does not mean the algorithm cannot infer it from information as simple as where a customer shops and what they buy. It is therefore necessary to monitor these algorithms to ensure that they are not detecting sensitive information (such as gender), wherever it comes from, and factoring it into their decision-making.
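This kind of leakage can be checked with a simple audit, sketched below in Python with scikit-learn on synthetic data. The variable names and correlations are invented for illustration; the point is the technique: if a probe can predict the sensitive attribute from the features the model actually uses, that attribute is available to the model even though it was never an explicit input.

```python
# A hedged auditing sketch (scikit-learn, synthetic data): check whether a
# "gender-blind" feature set still encodes gender through proxy variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)            # 0/1, never given to the model
income = rng.normal(40_000, 8_000, n)
# Hypothetical proxy: a spending profile correlated with gender.
spend_profile = gender * 1.2 + rng.normal(0, 1.0, n)
X = np.column_stack([income, spend_profile])  # the model's actual inputs

X_tr, X_te, g_tr, g_te = train_test_split(X, gender, random_state=0)
probe = LogisticRegression().fit(X_tr, g_tr)
print(f"probe accuracy: {probe.score(X_te, g_te):.2f}")
```

A probe accuracy well above 50% on held-out data is the warning sign: gender is recoverable from the supposedly gender-blind inputs, and the model may be using it.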
Aythami Morales Moreno is a member of the BiDA-Lab research group and a contracted professor in the Department of Electronics and Communications at the Autonomous University of Madrid