Computer scientist worries: Will robots take over control?


Christian Life


A Tesla robot is seen on display during the World Artificial Intelligence Conference (WAIC) in Shanghai. Photo AFP, Wang Zhao

Artificial intelligence can do a lot. It helps humanity detect cancer earlier and prevents satellites from colliding. And it is getting smarter at a rapid pace. Dutch computer scientist Frits Vaandrager worries about that fast development.

Vaandrager teaches computer science at Radboud University in the Netherlands. His research concerns the reliability of artificial intelligence for robots in warehouses and in satellites, he tells the Dutch daily Nederlands Dagblad. His knowledge of the development of artificial intelligence is therefore very much up to date.

Prof. Frits Vaandrager. Photo Radboud University

Of course, he sees the benefits of smart computers, he says. However, the pace of AI development also brings risks, he points out. "It gives criminals endless possibilities. At a hearing of the American Senate, a mother told how she was called by her daughter, who cried and said she was being held hostage. Then she heard an angry male voice threatening to harm her daughter. In the end, it was all fake: artificial intelligence had imitated her daughter's voice based on a few sentences."

Another danger is that artificial intelligence can influence the political world, Vaandrager says to the Nederlands Dagblad. "That is a great danger to democracy."


Vaandrager finds it fascinating how quickly ChatGPT is developing. He refers to a new puzzle researchers confronted the chatbot with. "They asked it: Imagine that you have pitched a tent somewhere in the world. You fly 40,000 kilometres to the west and arrive back at your tent. There you see that it has been destroyed by a tiger. What kind of tiger is it? The program was able to solve the riddle. It reasoned that the only place where you can travel 40,000 kilometres westwards and arrive back at your starting point is the equator, and there are two sorts of tigers there."

Artificial intelligence can also interpret human behaviour, Vaandrager says. "It has sucked up the whole internet to feed the model. Based on a few characteristics of a fight, the computer can analyse what is happening."


Another thing that concerns Vaandrager is the possibility of a super-intelligent robot. "With super intelligence, all skills are linked together. Then computers and robots can start reasoning, just like humans."

This can be dangerous, as people can abuse it for their own benefit, the computer scientist says. He refers to the author Nick Bostrom, who wrote a book on the topic. "He outlines various negative scenarios. In one of them, a robot knows how to manipulate people into doing things for it. With the money they earn, it creates new robots. Another example is a super-intelligent robot that is given the task of creating as many paperclips as possible. After a while, it reasons that humans are in its way and wants to eliminate them so it can make more paperclips. In that case, you reach the level of robots taking over very quickly."


At the end of March, Vaandrager signed an open letter calling for stricter regulation of the development of artificial intelligence. In addition to scientists, tech CEOs such as Elon Musk signed it as well. Musk owns Tesla and was one of the founders of OpenAI, the company behind ChatGPT. The letter called on AI companies to pause development for six months. However, Vaandrager does not know whether the companies complied. "Musk's signature could also be a strategy to pause his competitors and continue quickly himself."

Vaandrager also advocates an international agency focused on artificial intelligence. "There, you can gather a large amount of expertise. The agency can think about possible negative consequences and about regulation."

