Tech philosopher Buijsman: ‘Far too few regulators for the AI Act’

The European Parliament is working on the world’s first artificial intelligence (AI) legislation. What does technology philosopher Stefan Buijsman expect from this?

Stefan Buijsman: "Tech companies are already preparing for European AI legislation." (Photo: Thijs van Reeuwijk)

The European Union is currently finalising a law on artificial intelligence. The so-called AI Act is the first of its kind worldwide and distinguishes three risk categories. Surveillance systems, for example, constitute an ‘unacceptable risk’ and their use will be banned. Assessment of CVs and other personal data is considered ‘high risk’ and will be regulated. Other AI applications may be developed and used freely.

Dr Stefan Buijsman (1995) sees it as his mission to contribute to the effective and responsible use of artificial intelligence and to make it widely understandable. For those not yet familiar with him: Buijsman graduated as a philosopher in Leiden at eighteen and obtained his PhD from Stockholm University two years later, as its youngest graduate ever. In 2020, he published his book Alsmaar Intelligenter (Ever More Intelligent), about artificial intelligence. For over two years, Buijsman has been working as an AI researcher at TU Delft’s Faculty of Technology, Policy & Management. Delta spoke to him: how does he feel about the AI Act?

Buijsman: “The European Parliament has been working on it for some time. In the past two years, they mainly looked at applications. Now they have also included more general models like ChatGPT in the legislation. You can use ChatGPT for all sorts of things. You can code with it or have it write exams. This was initially not seen as high-risk technology. Now the European Parliament has included some requirements after all.”

Is that a good development?
“Yes, especially now that ChatGPT is so user-friendly and Google and Microsoft’s models will soon be too. They make it very simple to produce fake news that looks or sounds very convincing. AI makes it very easy to fill the internet with such messages. As these then feed back into the algorithms that scan the internet for shared messages, a dangerous and self-reinforcing feedback loop is created.”

The European Parliament is now demanding that AI systems be human-supervised, secure, transparent, traceable, non-discriminatory and environmentally friendly. Is that achievable?
“In principle, this is the list of desired characteristics. Whether it is achievable also depends on what exactly the MEPs mean. They say they demand fairness and no discrimination. But we know that an algorithm becomes less accurate if you make it discriminate less, because it then no longer fits an unfair reality as well.
For instance, there are simulation studies of a bank handing out loans completely fairly. This turns out to cost the bank 30-40% of its profits, because it grants more loans to people who ultimately cannot repay them and who end up in debt as a result.
So choices have to be made about the accuracy of AI versus the amount of inequality we accept as a society. The European Parliament’s wish list wants everything perfect at the same time, which is simply not possible. But it is good to realise that as a society we have to make conscious trade-offs about which considerations by companies we find acceptable.”
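The trade-off Buijsman describes can be sketched in a toy simulation. All numbers, group distributions, and the profit model below are illustrative assumptions, not figures from the studies he cites: an accuracy-first bank lends only where expected profit is positive, while a demographic-parity policy approves the same fraction of each group, which drags in more loans that are unlikely to be repaid.

```python
import random

random.seed(0)

# Toy population of loan applicants in two groups. All numbers here are
# illustrative assumptions, not data from the studies mentioned above.
def make_population(n=10_000):
    people = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        # In this toy world, group A repays more often on average.
        p_repay = random.betavariate(8, 2) if group == "A" else random.betavariate(5, 5)
        people.append((group, p_repay))
    return people

def expected_profit(people, approve):
    """Bank earns +1 unit of interest on a repaid loan, loses 5 on a default."""
    return sum(p * 1.0 - (1 - p) * 5.0 for g, p in people if approve(g, p))

people = make_population()

# Accuracy-first policy: lend exactly when the expected profit is positive.
accurate = lambda g, p: p * 1.0 - (1 - p) * 5.0 > 0

# Demographic-parity policy: approve the same fraction of each group
# (each group's most creditworthy applicants first), matched to the
# overall approval rate of the accuracy-first policy.
overall_rate = sum(accurate(g, p) for g, p in people) / len(people)
cutoffs = {}
for grp in ("A", "B"):
    scores = sorted((p for g, p in people if g == grp), reverse=True)
    k = max(1, int(overall_rate * len(scores)))
    cutoffs[grp] = scores[k - 1]
parity = lambda g, p: p >= cutoffs[g]

profit_accurate = expected_profit(people, accurate)
profit_parity = expected_profit(people, parity)
print(f"accuracy-first profit: {profit_accurate:.0f}")
print(f"equal-approval-rate profit: {profit_parity:.0f}")
```

Because the accuracy-first policy is, by construction, the profit-maximising one, the parity constraint necessarily costs money here; how much it costs in reality depends entirely on the population and the fairness criterion chosen, which is exactly the societal trade-off Buijsman points at.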

The examples in the bill are very much about identification technology. For example, biometric identification and emotion recognition are banned. Can European regulations stop that technological development?
“Part of the reason they specifically name these applications is that China seems to be moving in that direction with mass surveillance. The European Parliament is now saying: we don’t want that. Europe stands for democratic values and rejects a state-controlled society. We are developing the underlying technology of recognising faces and emotions just as much, but for other purposes. For instance, it is allowed to use artificial intelligence to assess candidates’ emotions in job interviews, although you have to meet many requirements. It is questionable to what extent that works, but it is already happening in the US.”

According to the press release, the European Parliament will vote on the AI law in June. What recommendations do you have?
“The legislation is nice and ambitious, but don’t forget the enforcement. With privacy legislation, for example, supervision never got off the ground properly. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) employs too few people to effectively supervise the handling of privacy-sensitive data. And now the government seems to want to put the same understaffed agency in charge of monitoring all algorithms in the Netherlands.”

Or is this legislation meant to fall back on when you really see something going wrong without the illusion that you can prevent all unwanted developments?
“The EU is obviously not going to be able to tackle everything anyway, but the idea is that regulators will be able to intervene effectively when something clearly goes wrong. Even that is difficult now because there are far too few staff. Conflicts risk ending up in endless lawsuits, which the big tech companies in particular can throw a lot of money at to keep litigating. Companies are already complaining about how much they are struggling, but in practice they are already preparing for AI regulation.”

  • Want to know more? Consult the European website on the AI Act.
Science editor Jos Wassink
