AI #5: A psychiatrist in your pocket: is that reassuring or worrying?

Artificial intelligence is this year’s theme at TU Delft. Delta highlights six AI studies. Part 5: Karin Bogdanova is critical of AI systems that monitor your mental health.

Illustration generated by AI. (Image: New Media Center with Adobe Firefly)

She hates autumn. It is increasingly difficult for her to get up and she hardly leaves her flat. She mostly hangs out on the sofa and streams videos. She knows she should call and chat more, but what if someone asks how she’s doing? Then just nothing for a while. Sensors in her phone indicate that she has barely been active for days and that she spends a lot of time in the dark. GPS shows she hasn’t been outside. Through the microphone her voice sounds dull, and according to the call log she barely makes contact. From all this data, a trained AI network draws the conclusion that she may be slipping into depression. 

Does that sound like science fiction? Not to the developers of ‘digital phenotyping’ or ‘personal sensing’: the moment-to-moment remote measurement of individual behaviour using data from personal gadgets such as smartphones and smartwatches. ‘With more than three billion people in the world having internet access (…) this technology has enough reach to study normal human behaviour and deviations from it,’ write three Indian psychiatrists in an article touting digital surveillance as the path to personal digital psychiatry. 
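To make the opening vignette concrete, here is a minimal, purely illustrative sketch of how such passive signals might be fused into a risk flag. Every feature name, threshold and cut-off below is hypothetical and invented for illustration; real digital-phenotyping systems train statistical models on clinical ground truth rather than relying on hand-picked rules like these.

    # A toy sketch of 'personal sensing' signal fusion (all values hypothetical).
    from dataclasses import dataclass

    @dataclass
    class DailySignals:
        step_count: int              # accelerometer / pedometer
        hours_in_dark: float         # ambient light sensor
        metres_from_home: float      # GPS
        voice_pitch_variance: float  # microphone-derived feature
        calls_made: int              # communication log

    def risk_score(days: list[DailySignals]) -> float:
        """Average a handful of hand-picked indicators into a 0..1 score.
        A deployed system would use a trained classifier instead."""
        flags = 0
        for d in days:
            flags += d.step_count < 1000           # barely active
            flags += d.hours_in_dark > 14          # in the dark a lot
            flags += d.metres_from_home < 50       # hasn't been outside
            flags += d.voice_pitch_variance < 0.1  # voice sounds 'dull'
            flags += d.calls_made == 0             # barely makes contact
        return flags / (5 * len(days))

    # A week of data matching the vignette triggers the flag.
    week = [DailySignals(400, 16.0, 10.0, 0.05, 0) for _ in range(7)]
    if risk_score(week) > 0.8:
        print("Elevated depression-risk pattern; suggest follow-up.")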

Promising
‘Given the advances in machine learning, the data will get better and better,’ the authors argue, ‘and the possibilities for evaluation (of the data, ed.) and control (of the patient, ed.) will also become more precise.’ The fact is that surveillance via smartphones is already being piloted for various behavioural problems. The authors mention addiction, autism, post-traumatic stress, schizophrenia, mood disorders, sleep problems and suicide prevention. And the list goes on. 

Other authors see ‘personal sensing’ as a promising option for testing psychopharmaceuticals: it offers the possibility of comparing an active substance and a placebo outside the laboratory on an unprecedented scale. ‘Digital phenotyping can provide an unparalleled detailed picture of individuals’ lives with which we can broaden our understanding of psychiatric diseases, and evaluate drug efficacy,’ argue two psychiatrists in Nature. 

Shortcomings
“Many claims are made about the possibilities of remote psychiatric care,” says PhD student Karin Bogdanova, “but the applications all fall short in terms of responsiveness and competence. Apps do not respond adequately to the needs of people in mental distress.” Bogdanova draws on the care ethics of Joan Tronto, who defined four pillars of care: responsiveness, responsibility, attentiveness and competence.

An anthropologist in the field of science and technology studies, Bogdanova does not develop AI applications herself. Instead, she is working on a framework for the application of AI in psychiatry. She is a PhD student at the Faculty of Industrial Design Engineering (IDE). “I am not against it,” she explains about AI applications in psychiatry. “I know there is a lot of debate about privacy, patient autonomy and consent to the use of data. But what I do resist is the idea that interpreting phone data is an objective measure of behaviour, or that AI could solve the mental health crisis.” 

‘Apps do not respond adequately to the needs of people in mental distress’

In the studies on digital mental healthcare that she has read, she finds the cultural context missing. First and foremost, there is the culture of the patients themselves: whether and how they express mental distress depends on their environment. Then there is the culture of psychiatry, with its doctor-patient relationship, its notions of what constitutes a symptom, its roles and technologies, the availability of other mental health support, etcetera. A third factor is the technological culture, for instance around consent to the use of phone data: in China, people think very differently about this than in Europe with its privacy laws. “Different people experience things differently, and psychiatrists deal differently with patients and with their data. So it is not a good idea to use AI to develop some kind of standard approach to mental health that ignores all cultural differences,” says Bogdanova. 

How then?
Bogdanova also sees initiatives that could bring about positive social change. One example is improved access to psychiatric help for patients in need. “Instead of a person with suicidal thoughts having to wait a year to see a doctor, digital phenotyping can help identify the urgency and quickly provide appropriate support. You can imagine the patient opening an app that, based on interpreted behaviour, gives direct access to a healthcare provider and bypasses the rigid healthcare infrastructure.” 

With information technology rapidly changing healthcare, Bogdanova says it is high time to think about the direction of change. “Will patients have control over their own data, what will be the role of nurses, and how will diagnoses come about? We need to think about those questions carefully before all kinds of personal data are incorporated into the healthcare system. Because I do think that will happen.” 

  • Anthropologist Karin Bogdanova is one of five PhD students within the AI DeMoS lab, which investigates how AI can help bring people with different perspectives together. Research directors are ethicist Olya Kudina (Faculty of Technology, Policy and Management, TPM) and design researcher Nazli Cila (IDE). The PhD students are supported by TPM faculty members including Ibo van de Poel and Sabine Roeser, and by Elisa Giaccardi and Pieter Desmet from the IDE faculty.
Science editor Jos Wassink

Do you have a question or comment about this article?

j.w.wassink@tudelft.nl
