

‘AI system should know its own limits’

Every artificially intelligent system makes mistakes, said Professor Inald Lagendijk during his lecture on Engineering Meaningful Human Control for the Bataafsch Society last Monday. For an AI system that has to distinguish apples from pears, mistakes might be amusing, but for an HR system that selects candidates for a job, they are far less so.


It is therefore important that AI systems are designed to be aware of their own limitations, and that they transfer decision-making to a responsible person as soon as a task falls outside their domain. Does this ability already exist? No, says Lagendijk, but that is where we need to go. Delft AiTech and the NL AI coalition are working on integrating moral values such as privacy, honesty, safety and transparency into AI applications.
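One common way to give a system a basic sense of its own limits is a reject option: if the model's confidence falls below a threshold, it abstains and hands the case to a human. The sketch below is purely illustrative and is not from the lecture; the function names and the threshold value are assumptions.

```python
import numpy as np

# Illustrative sketch of a "know your limits" reject option.
# CONFIDENCE_THRESHOLD and decide() are hypothetical names, not from the lecture.

CONFIDENCE_THRESHOLD = 0.9  # below this, the system abstains and defers

def decide(probabilities: np.ndarray, labels: list) -> str:
    """Return a label if the model is confident enough, otherwise defer to a human."""
    best = int(np.argmax(probabilities))
    if probabilities[best] < CONFIDENCE_THRESHOLD:
        return "DEFER_TO_HUMAN"  # task treated as outside the system's domain
    return labels[best]

# Example with made-up softmax outputs for the apples-versus-pears classifier:
print(decide(np.array([0.55, 0.45]), ["apple", "pear"]))  # -> DEFER_TO_HUMAN
print(decide(np.array([0.97, 0.03]), ["apple", "pear"]))  # -> apple
```

In practice the threshold is a design choice: setting it higher defers more cases to people, which trades automation for fewer unnoticed mistakes.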

Science editor Jos Wassink

Do you have a question or comment about this article?

j.w.wassink@tudelft.nl
