Artificial Moral Agents?
- Date:
- 01 February
- Location:
- Corona, Luna building, TU/e campus
‘In the past, computer scientists did not focus on ethics and fairness’
How ethical are self-learning algorithms, when do we say AI is ‘fair’, and should we be concerned about robot rights? These and other questions were discussed during Artificial Moral Agents, a well-attended event that took place on 5 February.
“These are exciting times,” Mykola Pechenizkiy tells a packed audience. The TU/e Full Professor of Data Mining is one of four speakers at the symposium Artificial Moral Agents, a meeting on the ethics of artificial intelligence. The event was organized by the Center for Humans and Technology in cooperation with the Data Science Center Eindhoven and the High Tech Systems Center.
It’s not only the ever-increasing range of applications for artificial intelligence that excites Pechenizkiy, but also the fact that AI developers increasingly consult philosophers, sociologists and other experts from outside their own field. “In the past, computer scientists did not focus on the problems of ethics and fairness in machine learning and AI. Now they do.”
Not a luxury
As each of the four lectures makes apparent, this is no luxury. The audience is presented with several examples of self-learning algorithms that were supposed to make life easier but turned out to have unwanted side effects: software that contributes to discrimination on the basis of gender, color or income; programs that manipulate voting and search behavior; surveillance systems that threaten privacy; and so on.
“Algorithms are prejudiced in many different ways,” Pechenizkiy sums up. Fortunately, thanks to interdisciplinary cooperation, the problems are not insurmountable. By thinking through how an ethically responsible system should operate, researchers are trying to prevent such unwanted side effects.
CeesJan Mol, digital transition advisor at Simpaticon B.V., decided to attend the symposium after seeing an announcement on LinkedIn. He shares Pechenizkiy’s enthusiasm for interdisciplinary cooperation. As a student, he wrote his graduation thesis on prejudice against people of color in the business sector. “Now we can link such ideas to hard data,” he says enthusiastically. “You can prove that this is a real problem.”
Nightmare
Vincent Müller, ethics researcher at the University of Leeds, is one of the four speakers at the symposium. He discusses one of modern man’s oldest nightmares: the fully autonomous robot. Whether it ruthlessly subjects mankind to its will or instead conforms to man’s every whim like a docile slave, shouldn’t a robot have rights? With a few stimulating arguments, Müller makes short work of the ethicists who treat this as a pressing issue. As long as robots are not yet autonomously reasoning social agents, we had better worry about building ethically responsible robots instead.
The word ‘autonomous’ is somewhat misleading anyway. Speaker Jurriaan van Diggelen, researcher in Perceptual and Cognitive Systems at TNO, notes that even the InSight Mars lander is told what to do by the engineers who created it. He uses examples from the defense sector to illustrate how man and machine often work together as a team.
Robot as advisor
Cheryl de Meza, a company lawyer from Utrecht, came to Eindhoven to learn about the latest developments in AI and robotics. She is glad she came. Speaker Elizabeth O’Neill, ethics researcher at TU/e, was one of those who gave her food for thought.
O’Neill talked about artificial intelligence as a ‘moral agent’: algorithms that help people make ethical choices. “There are risks attached to that, and that’s important in my line of work as well,” says De Meza. “I have a background in ICT and privacy. There have been experiments with robolawyers in other countries. Using a robot as an advisor is not without risks.”
According to De Meza, these kinds of legal issues are ‘uncharted territory’: no legislation exists yet. “When it comes to responsibility and liability, you can only turn to existing regulations. In most cases that will probably suffice, but it is important for a lawyer to gain more in-depth knowledge. We have to get started with this.”
From theory to practice
As far as CeesJan Mol is concerned, a bit more attention could have been paid to that ‘getting started’ part. “Much of the discussion focused on theoretical notions,” he says. “Nothing was said about what the first step toward making things operational would be. And where was industry? Where were the representatives of small businesses and enterprises?”
They were right there in the auditorium, says Wijnand IJsselsteijn, scientific director of the Center for Humans and Technology. “Of all the people present, between twenty and thirty percent came from industry.”
IJsselsteijn is very content with the symposium. “You can see the worlds of artificial intelligence and the social sciences growing closer. There was a willingness to discuss on both sides, as well as respect and an appreciation of the relevance of each other’s fields of expertise. This is progress. Our center was founded for this very purpose: to advance that kind of cross-fertilization.”
Source: TU/e Cursor
Photo: Bart van Overbeeke