PARIS SCIENCES & LETTRES (PSL)

AI Regulation: understanding the real challenges

2016 saw a striking technological breakthrough: the AI program AlphaGo beat world champion Lee Sedol by a score of 4:1. This remarkable victory was hailed as another milestone in the 60-year history of the artificial intelligence industry, but the euphoria was accompanied by doubts and concerns: will robots replace humans? Will artificial intelligence ultimately endanger the human species?

Such worries are not entirely unfounded, hence the frequent proposals calling for government regulation of AI development, reminiscent of the tightening of regulations on gene technology research a decade ago. The essence of the problem, however, is this: how do we impose effective, but not overly aggressive, regulations on a threat that, for now, is imagined, one that has not yet become a reality?

In fact, the dilemma of regulation is not how to balance the advantages and disadvantages of the technology, but how to thoroughly understand and contain the potential threats generated by artificial intelligence. In other words, is the hazard of artificial intelligence to be defined as “replacing humans”? If so, the only rational regulation would be to ban further R&D on this technology entirely. But is it fair to sentence this emerging technology to death while fields as ethically challenging as gene engineering are still up and running? Artificial intelligence is already pervasive in our lives. It is widely used in search engines, social networks and news reporting. Its popularity compels us to reevaluate the concerns we hold against it. If we put those concerns aside, what would be the real dangers of artificial intelligence? The only way to devise appropriate, effective regulation is to find the right answer to this question. That is what this article seeks to do.

25 May 2017
Executive summary

Stephen Hawking claimed that artificial intelligence has the potential to be the Terminator of human culture. It could be “either the best, or the worst thing, ever to happen to humanity,” he stated in the opening address at the inauguration of Cambridge University’s Leverhulme Centre for the Future of Intelligence, on October 19, 2016. This was not his first such warning: he expressed similar views in an interview with the BBC in 2014. Since then, Hawking has actively advocated for appropriate regulation of AI research. In fact, the Leverhulme Centre was founded largely to tackle the risks posed by AI.

What’s so scary about AI?

Hawking is not the only one to have voiced such concerns. Elon Musk, the founder of Tesla and SpaceX, has repeatedly raised red flags about AI. During the MIT AeroAstro Centennial Symposium, he claimed that AI could be the greatest threat to human existence. Two years later, at the Global Code Conference, he again warned that humanity could be reduced to AI’s pet. Following such calls for attention, 892 AI researchers and 1,445 experts co-signed and published the 23 Asilomar AI Principles, intended to keep AI research and its applications from going astray.

The concerns are not limited to the possibility of AI replacing or even ruling mankind. Critics also believe it will damage employment and widen income gaps. Yuval Noah Harari, the author of Homo Deus: A Brief History of Tomorrow, once pointed out that AI will give rise to a string of social problems, including mass unemployment, pushing us into an era of extreme inequality in which a small elite possesses “superpowers” while the majority is stripped of its economic and political standing.

On the other hand, some believe these concerns are excessive. Mark Zuckerberg once compared AI to the invention of the aircraft: if our ancestors 200 years ago had been afraid of crashes and failure, we would not be flying airplanes today. Looking at history, every piece of revolutionary technology, be it atomic energy or gene engineering, has risen to its position amid doubts and worries, yet none has plunged human society into chaos, and the human race has survived. This is a fair argument for treating the concerns expressed about AI as nothing but noise.

Admittedly, the development of AI entails major risks, so regulators should not let its growth go entirely unchecked. As stated above, the early regulations adopted unanimously by the international community a decade ago helped rein in the risks arising from gene engineering. At the same time, ever since the OECD coined the concept of a “knowledge society” in the 1960s, technology has been considered one of the most important indicators of national competitiveness, along with land and population, so policy makers around the globe must also take care to provide enough space for AI to develop. The essence of the current discussion is therefore no longer whether there should be regulation, but what should be regulated and how.

It is unwise to base our decisions on the words of people like Hawking, Musk or Harari, who have only “envisioned” the threats from AI without providing scientific explanations. That is not enough to answer the questions of what to regulate and how. To give policy makers helpful suggestions, we must understand the principles, capabilities, potential value and risks of AI. This article hopes to provide such an understanding.

The cornerstones of algorithms: data and rules

Since the AI boom of 2016, AI and its algorithms have become familiar to many people: the movies recommended to you on a video website automatically match your tastes; facial recognition systems at train stations automatically check whether you have a ticket for that day; you can book a medical appointment simply by talking to your phone; your DNA information can be transmitted to the healthcare system so that drugs can be tailored to your disease. The search engines, social networks and chat software we use daily are all manifestations of how AI is creeping into our lives. Computer devices “consume” massive amounts of data, but they also provide all sorts of information, products and services that are relevant to you.

And how is it happening? Will algorithm-based AI technology continue to grow and ultimately break free from human control? To answer these questions, let’s take a look at the algorithms behind AI.

The ultimate goal of AI technology is to give machines human-like powers of thought. To achieve this, a machine must acquire the ability to learn, which explains why we so often equate the concept of “machine learning” with AI. Machine learning is a process in which a machine is given a set algorithm and known data, from which it builds a model it can later use to make judgments and analyses. We should remember, however, that machine-learning algorithms differ from traditional algorithms. An algorithm, by nature, is a series of instructions executed by a computer. Traditional algorithms prescribe precise actions under set conditions, in the utmost detail, whereas machine-learning algorithms allow the machine to revise those rules according to its data history. Take the movement of walking as an example. With a traditional algorithm, programmers must specify every single step of the process; if instead we give a machine the ability to analyze and learn how humans walk, it can tackle unprecedented scenarios on its own.
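To make the contrast concrete, here is a minimal sketch in Python (our own illustration, not drawn from any system mentioned in this article) of a hand-written filtering rule versus a rule derived from a handful of labelled examples; every name and message in it is invented.

# Traditional algorithm: the programmer hard-codes the decision rule.
def is_spam_by_rule(message):
    return "free money" in message.lower()  # fixed, human-written condition

# Machine learning (toy version): the rule is derived from labelled examples
# instead of being written by hand. Words seen mostly in spam become the rule.
def learn_spam_words(examples):
    spam_counts, ham_counts = {}, {}
    for text, is_spam in examples:
        bucket = spam_counts if is_spam else ham_counts
        for word in text.lower().split():
            bucket[word] = bucket.get(word, 0) + 1
    # Keep words that appear in spam far more often than in legitimate mail.
    return {w for w, c in spam_counts.items() if c > 2 * ham_counts.get(w, 0)}

def is_spam_learned(message, spam_words):
    return any(word in spam_words for word in message.lower().split())

training_data = [
    ("claim your free money now", True),
    ("free money waiting inside", True),
    ("lunch meeting moved to noon", False),
    ("monthly project report attached", False),
]
spam_words = learn_spam_words(training_data)
print(is_spam_learned("free money for winners", spam_words))       # True
print(is_spam_learned("status report attached for monday", spam_words))  # False

The hand-written rule never changes; the learned rule changes whenever the training examples change, which is exactly why the data the machine is fed matters so much.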

Of course, the above example describes machine learning only at its most basic level. Only by digging deeper into machine-learning processes can we truly understand their potential concrete effects on society. The many types of machine-learning algorithm available today can be classified into five schools: symbolism, connectionism, evolutionism, analogism and Bayesianism. Each school follows a different learning logic and philosophy.

The symbolists believe that all information can be reduced to symbols, making learning a matter of induction from data and assumptions. Based on data (facts) and knowledge (assumptions), the symbolists train the machine to raise assumptions, verify them against data, raise new assumptions and produce new rules, enabling it to make judgments in new environments. The symbolist school operates strictly within its philosophical premises: its key to success lies in the comprehensiveness of the data and the reliability of the preset conditions. In other words, a lack of data or irrational preset conditions will directly distort the results of machine learning. A typical example is “Russell’s turkey”. After 10 consecutive days of being fed at 9am, the turkey concludes that it will be fed daily at 9am. However, 10 days is a short period (incomplete data) and is insufficient to draw such a conclusion (accepting a rule after 10 days of observations = an unreasonable preset condition). So, tragically, on the morning of Thanksgiving the turkey is slaughtered instead of being fed.
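As a rough illustration (ours, not the article’s), the turkey’s reasoning fits in a few lines of Python in which both the data and the acceptance threshold are supplied up front; the fatal error comes entirely from those two inputs, not from the induction step itself.

# Incomplete data: ten identical observations.
observations = [("day %d" % d, "fed at 9am") for d in range(1, 11)]

ACCEPT_AFTER = 10  # preset condition: accept a rule once 10 observations agree

def induce_rule(observations, accept_after=ACCEPT_AFTER):
    outcomes = {outcome for _, outcome in observations}
    if len(observations) >= accept_after and len(outcomes) == 1:
        return outcomes.pop()   # generalize: "this is what always happens"
    return None                 # otherwise, refuse to commit to a rule

rule = induce_rule(observations)
print("Induced rule:", rule)                  # fed at 9am
print("Prediction for Thanksgiving:", rule)   # still "fed at 9am", fatally wrong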

The problem of data and preset conditions is not exclusive to the symbolist school; it is common to the others as well. The connectionist school simulates how the human brain acquires knowledge: through a network of simulated neurons and the backpropagation algorithm, it automatically adjusts the weight of each node to ultimately achieve learning capability. Again, the keys are the completeness of the data and the reliability of the preset conditions. The evolutionist school believes machine learning can be achieved by experimenting with and recombining different rule sets, guided by preset targets, in order to find the rule set that best fits the test data, another manifestation of the importance of data and preset conditions. The analogist school runs in the same vein: it holds that machines can make reasonable decisions by measuring new cases against similar existing data, so the completeness of the dataset and the preset measure of similarity between scenarios play a critical role. Compared with the four schools above, Bayesianism has fewer requirements in terms of data coverage, as its strength lies in learning about and exploring future unknowns. Machines following the Bayesian paradigm re-examine their prior assumptions and test their credibility against each new piece of input data. Even so, the result is still subject to the data and the rules given to the machine beforehand. In other words, data completeness and preset conditions remain dominant factors, even for machines that learn in the Bayesian way.
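A small numerical sketch (our own, with assumed probabilities rather than figures from the article) shows what this means in practice: in Bayesian updating, the posterior belief is shaped jointly by the observed data and by the prior that humans chose before any learning began.

def update(prior, p_obs_if_true, p_obs_if_false):
    """One step of Bayes' rule: P(hypothesis | observation)."""
    evidence = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    return prior * p_obs_if_true / evidence

# Hypothesis: "the new treatment works". A trial succeeds with probability
# 0.8 if it works and 0.4 if it does not (assumed numbers for illustration).
belief = 0.5                                     # preset condition: a 50/50 prior
for success in [True, True, False, True, True]:  # observed trial results (data)
    if success:
        belief = update(belief, 0.8, 0.4)
    else:
        belief = update(belief, 0.2, 0.6)        # likelihoods of a failure
print("Posterior belief that the treatment works: %.2f" % belief)
# Starting from a skeptical prior of 0.05 instead, the same data yield a much
# lower posterior: the preset condition still shapes the outcome.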

In fact, whatever the school, every machine-learning algorithm can be seen as comprising three parts: expression (how the model is represented), assessment (how candidate models are scored) and optimization (how the machine improves the model). In theory, machines can self-optimize indefinitely to improve their learning capability, and therefore learn anything and everything. However, all the data, assessment methods and principles are fed in and determined by human beings. It is therefore impossible for machines to replace mankind, even though they may evolve to become too complex for humans to understand easily.
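To make the three parts tangible, here is a minimal sketch (our own illustration, with invented data points) in which the expression is a straight line y = w*x + b, the assessment is the squared error on human-supplied examples, and the optimization is a plain gradient-descent loop.

# Human-provided examples (roughly y = 2x).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

w, b = 0.0, 0.0        # expression: parameters of the model y = w*x + b
learning_rate = 0.01   # preset condition chosen by humans

for _ in range(5000):  # optimization: repeatedly nudge w and b downhill
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

loss = sum((w * x + b - y) ** 2 for x, y in data) / len(data)  # assessment
print("learned w=%.2f, b=%.2f, loss=%.4f" % (w, b, loss))

However long the loop runs, the machine only ever improves against the data and the assessment criterion that people supplied, which is precisely the point made above.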

AI regulation: what are the real challenges?

In an interview with Wired magazine, Barack Obama stated that AI is still at a preliminary stage and that excessively strict regulation is neither necessary nor desirable; what is needed, he said, is more investment in R&D to help basic research translate into applications. His remarks are in line with mainstream views, which routinely criticize the inefficiency of regulation and rent-seeking by regulators. Yet even setting aside this bias against regulation, we would agree that the development of AI must be subject to it. The 23 Asilomar AI Principles, endorsed by figures including Hawking and Musk, likewise call for AI development to be kept on the right track. In this paper we share that view, but not simply because we believe mankind will be replaced by machines.

To answer the question of what we should regulate and how in the domain of AI, perhaps the second part of this essay, explaining concepts of machine learning, can offer some clues. Data and preset rules are of fundamental importance to algorithms. Thus, governance of data and rules lies at the heart of AI regulation.

The data we provide to machines determine the learning results they can generate. Where do these data come from, and how do machines use them? Feeding a machine incomplete data leads to learning failures (as in the example of Russell’s turkey), whereas indiscriminate data collection raises concerns about privacy and conflicts of interest. Data regulation is therefore the precondition for any AI regulation: on the basis of protecting individual data rights, we ought to regulate and encourage data sharing and use, the better to guide AI development.

We also need to consider who makes the rules for machine optimization, and through which procedures. Even if our concerns are excessive given AI’s current level of development, AI is transforming our lives gradually and subtly, and that transformation carries real risks. The rules by which machines optimize must therefore be adequately regulated and overseen to ensure they are not abused. The problem resembles the one often raised about Facebook: how do we guarantee that the news feed is unbiased and favors no special interest group? With more and more people subscribing to tailor-made news, AI may acquire the power to sway a presidential election. This is why we argue that concepts such as transparency and open source should be built into AI regulation.

It took AI more than 60 years, and the maturing of the internet, big data and machine learning, to make the leap into widespread application, and it will foreseeably play an even more important role in the future. There is no need to panic over this scenario, but we do have to exercise caution: regulating AI, and choosing the appropriate measures to do so, should become a focal point of the policy agenda. With this article we have sought to address the “general” concern over AI through a targeted analysis of regulatory policy. More detailed policy suggestions are beyond the scope of this essay, but we hope they will be taken up in future writing, drawing on greater attention and insight from scholars and practitioners.

Jia Kai
Visiting Scholar at the University of California, Davis
TAO Tong
Cofounder of Komolstar