The Netherlands Organisation for Scientific Research (NWO) has tightened its rules on AI. Notably, scientists applying for research funding are not allowed to ‘arm themselves’ against covert assessments by chatbots.
Last year, NWO already stipulated that assessors may not use generative AI when evaluating grant applications. Uploading applications to an AI programme would breach their duty of confidentiality, and the reliability of an AI assessment is also open to question.
Hidden prompts
In an updated guideline, NWO has further tightened the rules. It now states that assessors must formally confirm within the system that they have not used AI.
The new guidelines also state that applicants must not include ‘hidden prompts’ in their text. Last year, it emerged that scientists from countries including Japan and South Korea had deliberately included hidden instructions in their papers specifically for AI bots, so that reviewers using AI would give them a positive assessment. Tests showed that various chatbots were indeed more positive about papers containing such hidden prompts.
At first glance, NWO’s tightening of the rules seems superfluous, because if assessors do not use AI, the hidden prompts have no effect whatsoever. However, a spokesperson for NWO has stated that it cannot be ruled out that an external assessor might still use an AI tool.
And although AI is currently not permitted in the assessment process, “this could well be a development that NWO will explore in the future”.
Approved tools
Outside the assessment process, NWO staff are permitted to use AI, but only if the tool has been approved. Last year, only the translation application DeepL was permitted. Microsoft Copilot Chat has since also been declared safe. Tools such as ChatGPT and Perplexity remain excluded.
According to NWO, detection software that could reveal whether AI has been used anywhere in the process is not reliable and is therefore not used.
HOP, Naomi Bergshoeff