
A recent survey by IE University in Madrid reveals that one in four Europeans would be ready to put an artificial intelligence in power. Should we be concerned for democracy or, on the contrary, welcome Europeans’ confidence in technology?
Europeans ready to elect an AI?
According to the study in question, about one in four of the 25,000 Europeans surveyed would be prepared to be governed by an AI. It is worth noting that there are significant variations between countries: where the European average is around 30%, respondents in the Netherlands are much more open to having a government run by a supercomputer (43%) than those in France (25%). “The idea of a pragmatic machine, impervious to fraud and corruption” is one of the reasons the interviewees found most compelling. Added to this are the possibilities that machine learning would open up: the AI described would be able to improve itself by studying and selecting the best political decisions in the world… It would then be able to make better decisions than existing politicians.
Will we see an AI in power?
If we had had to answer this fascinating survey question with a simple yes or no, we would have found it difficult. A book would be needed to deal in depth with all the issues it raises, starting with the definition of AI.
It is well known that discussions on the subject began in the 1950s with Alan Turing’s article, which asked whether “machines can think” and proposed the test that was later named after him.[1] According to Turing, machines will one day think for themselves and, rather than being limited to what they are taught, as has sometimes been claimed, they are likely to go beyond this stage of learning. In the 1960s Herbert Simon said: “Machines will be able, in twenty years, to achieve everything a man can do.”[2]
It is a tautology to say that for an AI to be capable of assuming power, the prophecies about its potential must be fulfilled. As Gérard Berry points out: “The problem is that in most discussions we take great care not to define the word ‘intelligence’, characterising it in the form best suited to the current discussion. But this word is far too rich to be defined simply (…) when I hear phrases like ‘in thirty years, machines will be smarter than man, that will be the Singularity’, I am more than sceptical.”[3] This observation brings us back to an essential point: imagining that we could one day be governed by machines assumes that the Singularity can be achieved.
The concept of the Singularity posits that the advent of truly autonomous artificial intelligence would trigger a spurt of technological growth inducing unpredictable changes in human society. However, while the number of transhumanists continues to grow, there are also many sceptics. In this vein, as we have already mentioned[4], Professor Jean-Gabriel Ganascia, another expert on the subject, is very critical of the concept of the Singularity and has written a book in which he even claims that it is a myth[5]. It seems, however, that this historic moment is the prerequisite for the key scenario posed in our survey to become reality…
Would “The Robot” make a good president?
But let us suppose that one day the Singularity does come to pass. Would it then be desirable for us to be governed by this super-intelligent being, the fruit of the finest human minds? Of course, science fiction offers us some terrifying scenarios: HAL 9000, the AI that takes control of the Jupiter mission, or Skynet, the AI in the film Terminator that oppresses humans to the point of wanting to eradicate them… In Hollywood, AI is of necessity tyrannical. But we suspect that the story of a nice robot king adored by its “subjects” would not sell tickets. That, however, is precisely the question we are asking.
We have to acknowledge that part of the fascination with AI is related to the loss of control. We are afraid of unleashing a power that will escape us:[6] the robot-soldier who stops obeying orders; the robot-philosopher who starts thinking for itself; and so we imagine a robot president capable of making its own decisions, decisions we could not grasp. That would mean it could act as it pleases in our interests and, potentially, since it would be autonomous, against them. Admittedly, in doing so it would contravene Asimov’s three laws of robotics.[7] In the light of these considerations, we believe that before fantasising about the potential good governance of an AI, there is a prior question to settle: would an AI even be capable of accomplishing a political act? Starting with voting, for example? These problems reach far beyond our own competences, but being aware of them helps us to understand the issue better without succumbing to fantasy or spreading urban myths.
Uberising politics before “robotising” it?
Given that not a day passes without our learning that a robot is applying for new jobs and claiming to replace humans, there is no reason why elected officials should escape scot-free. But before you start having nightmares about the merciless reign of “President AI”, there are a good many steps still to be taken.
The “Great Debate” that has just taken place in France provides a good example. The managers in charge of the exercise announced from the outset that they would use AI solutions to process the contributions submitted, as sketched below. That announcement seems to be a move in the right direction, because it is hard to see how a human brain could summarise a data source of more than a million items without the assistance of a machine[8]…
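To make the idea concrete, here is a minimal sketch of how a large set of free-text contributions might be grouped into themes with off-the-shelf tools. This is purely illustrative and an assumption on our part: the actual pipeline used for the Great Debate is not described here, and the sample contributions, library choices and cluster count below are our own.

```python
# Illustrative sketch only: cluster free-text contributions into themes.
# The real Great Debate tooling is not documented in this article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented sample contributions, standing in for millions of real ones.
contributions = [
    "Lower fuel taxes for people who must drive to work",
    "Invest in regional rail so rural areas are less car-dependent",
    "Make the income tax scale more progressive",
    "More public transport and cycle lanes in small towns",
    "Simplify the tax system and close loopholes",
    "Keep local hospitals and post offices open in villages",
]

# Turn each contribution into a TF-IDF vector, then group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(contributions)
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

# Print each contribution with the theme it was assigned to.
for label, text in zip(model.labels_, contributions):
    print(label, text)
```

A human analyst would still have to read and name the resulting themes; the machine only does the grouping at scale.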
But this is far from the only possible application. As a Deloitte study entitled “How Artificial Intelligence Could Transform Government” suggests, AI could help US government officials save a phenomenal number of hours: “Simply automating tasks that computers already routinely do could free up 96.7 million federal government working hours annually, potentially saving $3.3 billion. At the high end, we estimate that AI technology could free up as many as 1.2 billion working hours every year, saving $41.1 billion.” These figures really do give pause for thought. We can imagine the potential productivity gains, and in this case we can see very concretely the value of putting AI at the service of politics.
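As a quick back-of-the-envelope check on the figures quoted above (the derived hourly rate is our own calculation, not a number given by Deloitte), both the low and high estimates imply roughly the same cost per federal working hour:

```python
# Figures taken from the Deloitte quote above; the hourly rate is derived here.
low_hours, low_savings = 96.7e6, 3.3e9      # 96.7 million hours, $3.3 billion
high_hours, high_savings = 1.2e9, 41.1e9    # 1.2 billion hours, $41.1 billion

print(f"Implied cost per hour (low estimate):  ${low_savings / low_hours:.2f}")
print(f"Implied cost per hour (high estimate): ${high_savings / high_hours:.2f}")
# Both work out at roughly $34 per federal working hour, so the two estimates
# are internally consistent: they differ in how many hours AI could free up,
# not in what an hour is assumed to cost.
```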
To conclude, we would like to quote the French AI specialist Laurent Alexandre, who has just published “Will AI kill democracy too?”. In a recent interview he declared: “We are experiencing a digital coup.” He insists that European politicians are not up to the standard of the giants confronting this subject in Asia or the USA: “It is urgent that our elites concerned for the future of democracy wake up.” It is clear that if they do not address it, they may soon find themselves having to campaign against AIs.
[1] The Turing test raises the question of whether you can build a machine that cannot be distinguished from a human during a blind written conversation. Turing did not use the term AI.
[2] We highly recommend that our readers catch up with the articles on AI by our expert Marc Rameaux: https://www.europeanscientist.com/fr/author/marc-rameaux/
[3] Gérard Berry, L’Hyper-puissance de l’information, algorithmes, données, machines, réseaux, Odile Jacob, p. 421.
[4] See our editorial: Singularity and teleportation: Hollywood science fiction vs European scepticism? https://www.europeanscientist.com/fr/editors-corner-fr/singularite-et-teleportation-science-fiction-hollywoodienne-vs-scepticisme-europeen/
[5] “In 1993, the year 2023 was in thirty years, which left some time; in 2010, with this term approaching, Ray Kurzweil offered himself an additional respite, again a little over thirty years, which saves him from having to give empirical pledges of his claims. It’s just like in the Middle Ages, with the anticipation of the date of the apocalypse.”, in Jean-Gabriel Ganascia, Le Mythe de la singularité [The myth of the singularity], coll. “Open Science”, Editions Le Seuil.
[6] As we have already mentioned on several occasions, this fear is similar to that of biotechnology. See also Marc Rameaux’s article on this subject: https://www.europeanscientist.com/fr/opinion/ia-et-ogm-les-deux-revelateurs-de-notre-rapport-a-la-nature-premiere-partie/
[7] It is no coincidence that one of Asimov’s laws was created after the writing of a story about a robot nanny designed not to hurt the child it had to care for.
[8] Some observers noted that an AI that dispatches contributions to the relevant decision-makers would be far more useful than an AI dedicated to producing a synthesis at all costs, regardless of efficiency.