The Ethics of Affective Computing in Behavioral Health
In the United States today, over half of adults with mental illness receive no treatment, and millions live in areas with insufficient behavioral health services. Artificial intelligence promises to be a game changer in this field. While AI’s potential to expand access is undeniable, its rapid development alongside current deregulation efforts creates a risk of unforeseen consequences.
Empathetic chatbots, wearables that detect and predict mood changes, and sentiment analysis of facial expressions are existing applications of this technology that pose particular risks.
/The History of Affective Computing & Current Applications
Professor and computer scientist Joseph Weizenbaum spearheaded one of the earliest attempts at creating artificial emotional intelligence in the mid-1960s. He created ELIZA, a natural language processing program that included a script, DOCTOR, that emulated a psychotherapist. The program was built to paraphrase what a user had said in an effort to foster a sense of understanding and empathy.
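To show how modest the underlying mechanism was, the sketch below imitates ELIZA-style reflection in Python. It is a toy illustration, not Weizenbaum’s original DOCTOR script; the pattern, reflection table, and canned responses here are hypothetical.

```python
# A minimal, illustrative sketch of ELIZA-style reflection (not Weizenbaum's
# original DOCTOR script): match a simple pattern, swap pronouns, and mirror
# the user's words back as a question.
import re

# Hypothetical reflection table: swap first- and second-person words.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    """Rewrite a phrase from the user's point of view to the program's."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(user_input: str) -> str:
    """Paraphrase statements like 'I feel X' back to the user as a question."""
    match = re.match(r"i feel (.*)", user_input.strip(), re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

Even this toy version, by mirroring a user’s words back as a question, hints at why people so readily read understanding into the program.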
Weizenbaum eventually became concerned about his creation. In Computer Power and Human Reason (1976), he took issue with psychiatrists who believed the DOCTOR program could grow into an automatic form of psychotherapy, and he was troubled by how quickly people anthropomorphized it.
In 1997, the movement toward artificial emotional intelligence gained steam when Rosalind Picard published Affective Computing. Picard argued that AI should be trained to be emotionally intelligent, as good decision-making is inseparable from emotion. For Picard, incorporating emotional guideposts into AI becomes increasingly necessary as we rely on computers for split-second decisions.
Affective computing has seen tremendous growth in recent years. Affectiva, for instance, is a company that uses AI to infer emotional states from changes in facial muscles and vocal expression as well as from measures of physiological arousal such as brain activity and heart rate. Other examples are Feel Monitor, a wearable that offers “24/7/365” monitoring for behavioral health, and Mindstrong Health, which analyzes smartphone activity to predict mental health symptoms. There is also Abby, a chatbot that promises “round-the-clock support and guidance” and to be “always at your fingertips to help you navigate life’s challenges.”
An increasingly crowded market is emerging for affective computing technologies, and it is poised to continue growing. In 2021 alone, 22% of US adults used a mental health chatbot, and 47% said they would be interested in using one if needed.
/Ethical Dilemmas: Freedom and Identity
The 24/7 availability of a chatbot can be marketed as a groundbreaking solution to access issues, but it also carries the risk of fostering dependence and stunting emotional growth. Take Replika, for example. It’s marketed as “the AI for anyone who wants a friend with no judgment, drama, or social anxiety.” While this offers a safe space, it circumvents the emotional stakes that characterize human relationships. In turn, it discourages mutual emotional respect, as there are no consequences for hurting an AI’s feelings. Human relationships, by contrast, depend on mindful consideration of how one’s actions affect others and broader society.
Similar concerns surround mental health wearables that use objective data to understand mental health conditions. Take, for example, the marketing of Feel Therapeutics: “Objective measurement tools have been utilized in almost every medical field for centuries–time to bring objective measurement into the field of mental health TODAY.” Digital biomarkers, which rely on objective, quantifiable data collected through devices like smartphones, serve as an example of such objective measurement. These technologies promote a decontextualized view of the self as a collection of data points and overlook our relationship to our broader social environment.
This decontextualization mirrors the chemical imbalance hypothesis in psychiatry, which suggested that mental illnesses could be biologically located and diagnosed in an objective way. Yet no viable biomarkers have been found for mental illnesses, and medications often prove no more effective than placebos.
We see a similar message in Affectiva’s promise to reach “a complete and nuanced understanding of human behavior” and in Paul Dagum’s belief that Mindstrong’s technology had the potential to detect “all” mental disorders. These remarks suggest that technology can neatly compartmentalize human emotions, yet they disregard our present inability to understand emotional etiology. That inability makes the ethics of wearables that predict emotions, such as Mindstrong’s technology that “can tell you’re depressed before you know it,” even more pressing.
/A Window Into the Future
Michel Foucault is well known for his concept of power, which he viewed as the subtle control over the body and the shaping of identity through the surveillance of a population’s behavior. The rise of mental health wearables would undoubtedly have interested Foucault. He might argue that AI technologies, while claiming to objectively measure mental states, may be constructing those states rather than measuring them.
Gladiš makes a similar point with the concept of “automation bias,” in which people favor the suggestions of automated systems over their own firsthand information, even when the automated suggestion contradicts it. Foucault would also raise questions about what companies producing wearables do with their data. Studies offer a partial answer: 81% of the top mental health apps sent data to Facebook or Google, and 92% sent data to third parties.
As we enter an age of AI deregulation, the successful implementation of these technologies hinges on an acute awareness of these ethical dilemmas.