Campus

Humans of TU Delft: Stefan Buijsman on responsible AI

Who are the people who study or work at TU Delft? We meet them in this series. This time: Stefan Buijsman researches how to develop and use AI in a fair and responsible way.

Stefan Buijsman: “What I like about TU Delft is that we're trying to think through concretely what we need now and how we can make things better.” (Photo: Heather Montague)



“I’ve been at TU Delft for a bit more than two years, working as a researcher in the Philosophy of AI, mostly at the intersection of philosophy and computer science. I do a lot of work together with the people building AI systems, mainly focused on how to use them responsibly, and we’re jointly developing new tools to make this possible. I do the theoretical work of looking at what we want from AI systems. 


At the Faculty of Technology, Policy & Management we have a digital ethics centre focused on these kinds of issues. We work with the government and large organisations on some of the big challenges that they face. For example, we have a project with the Erasmus Medical Center on AI systems in intensive care units because they have a massive personnel shortage. They have lots of rooms in intensive care, but they don’t have the people to supervise the patients, so half of the rooms are empty. We’re looking at whether there is a good way of adding AI systems to make this more manageable, to have more patients there with the same number of personnel. It’s going to be a tough thing to do responsibly, what with the stakes being so high. 


Before I came to Delft, I was in Stockholm working in philosophy. At the time I tried to do interdisciplinary work with psychologists, but there was still a lot of sitting in a room somewhere trying to think of a good idea or theory. What I really like about TU Delft is that we’re in there, in practice, trying to think through concretely what we need now and how we can make things better. 


New Scientist is a science magazine, and every year they give an award to a promising young scientist who they think is both engaged in public outreach and doing good and important research for society. All the universities in the Netherlands and Belgium can nominate one of their scientists, and TU Delft was kind enough to nominate me this year. There are 15 candidates that people can vote for, and then the top three go into the final round, where they give a short pitch before a jury. If I’m selected, my pitch will be about the responsible AI work that we’re doing here at TU Delft. 


‘Many practical applications have a very direct impact on people’


I’m happy to participate to get some exposure for this work. It’s crucial now that these AI systems are being developed and implemented all over the place. Looking at how the public debate is shaping up, a lot of it is focused on how AI is going to take over the world and not at all on the practical things. But the government is putting AI into practice right now to determine whether you get benefits, and companies are using it in recruitment and in setting prices, so there are many practical applications that have a very direct impact on people. It’s crucial that we don’t just let them do their thing, but really steer this and say it must happen responsibly and must conform to the values that we have for these systems. 


We have very lofty goals: we want these systems to be fair and we want them to respect people’s privacy, but those things often come into conflict. Sometimes we have to make a choice about which we find more important, and then ask why we find it more important. Thinking about why we choose a certain trade-off between the things we value is one of the important roles for philosophy. For example, we can’t steer or measure whether a system is treating people with a second nationality differently if we have no information on nationality. Often, the government will explicitly not keep track of this kind of sensitive data in order to protect people’s privacy. However, the result is that we can’t determine whether there is disparate treatment of these different groups. 


We want our systems to be fair, but what do we really mean by fairness? That’s again where philosophy comes in: to look at what it means, in a certain context, to say that something is fair. This translation from the really abstract down to the computer science is something you must do in an interdisciplinary team. But it starts with asking what fairness actually means. These are very difficult questions.” 



Want to be featured in Humans of TU Delft? Or do you know someone with a good story to tell? Send us an e-mail at humansoftudelft@gmail.com 


Heather Montague / Freelance writer

Editor: Editorial team

Do you have a question or comment about this article?

delta@tudelft.nl
