Robot therapists, Technical University of Munich. © Nicolas Maderna

The Technical University of Munich (TUM) has published an initial study into how AI robot therapists could be used in the future to treat mental illness. Here we examine the ethical implications the researchers found.

AI has become an ordinary presence in most of our lives, as have ethical concerns about the extent of its involvement with our private conversations, our thoughts and our personal data. Whilst voice assistants such as Siri and Alexa are now familiar in the home, the introduction of robotic therapists would take technology into the most private sphere – the mind.

The researchers addressed these concerns as they manifest in popular culture:

“Conjuring images of the Terminator or other depictions of the nonhuman in science fiction or cinema, such tools can carry with them negative or scary associations that bring the issue of trust in medical practice into new light.”

TUM researchers used established principles of medical ethics to analyse multiple aspects of ‘embodied AI’ across psychiatry, psychology and psychotherapy, looking at both the potential benefits and the overarching ethical issues. They emphasised that medical ethics in AI “is still an emerging field”, so any ethical recommendations must be treated as provisional.

So what are the ethical risks of robot therapists?

1. Risk of transferring emotions

Transference is a major issue in therapy: intense feelings originally directed at other people in a patient’s life can become redirected towards the therapist, who becomes the focal point of those emotions.

Alena Buyx, Professor of Ethics in Medicine and Health Technologies at TUM, said:

“We have very little information on how we as human beings are affected by contact with therapeutic AI.

“For example, through contact with a robot, a child with a disorder on the autism spectrum might only learn how to interact better with robots – but not with people.”

Individuals in therapy are already vulnerable, and there is an additional concern that their desire for care could make them more vulnerable still when interacting with robots. “Emotions, thoughts, and feelings” may be transferred to the robot, a process that a human therapist is trained to recognise and work through, given the nuances involved.

Whilst any therapeutic AI would need to be sophisticated enough to engage adequately in therapy, it is too early to understand how such emotional input would affect it.

The researchers say it “remains to be seen” how a robot would address transference in particular.

2. People agree with robots without thinking

TUM researchers commented:

“People have been shown to be more compliant when a robot asks them to do something as compared with a person.”

Whilst patients who need to make difficult behavioural changes may benefit from increased suggestibility, the researchers are concerned that people may be manipulated into doing things without applying the same level of autonomous thought that they would apply to a course of action suggested by a human.

This could be due to the novelty of the robot, or because patients lack a human support system of friends and family with whom to discuss alternatives.

This leads to another ethical dilemma concerning patient autonomy: it may not be made sufficiently clear to the patient that a robot is treating them.

The patient could be someone with an intellectual disability, an elderly individual, or someone who believes they are speaking to a doctor through an intermediary: the decisions the robot makes on their behalf would then rest on misinformed consent. This risks a loss of autonomy over their own treatment, as they are not aware of who they are really talking to or how much control they truly have.

3. AI can reflect human biases

The TUM researchers are highly aware of the possibility of human bias being coded into an AI as complex and nuanced as one capable of treating mental illness. Several research groups have shown that human biases can be built into algorithms, “reinforcing existing forms of social inequality” through “data-driven sexist or racist bias”, among other harms.

Mental health devices built on such algorithms could then cause inadvertent harm, unnecessarily complicating the prospect of successful treatment.

Buyx added:

“Therapeutic AI applications are medical products for which we need appropriate approval processes and ethical guidelines.

“For example, if the programs can recognize whether patients are having suicidal thoughts, then they must follow clear warning protocols, just like therapists do, in case of serious concerns.”

To guard against this, it may be necessary for patients’ families and friends to understand how the algorithm works in relation to the treatment. Explaining this properly may add time before treatment can begin.

4. Risk of over-dependence

Cresswell et al. noted that robots which “aim to alleviate loneliness or provide emotional comfort” risk making the patients they support heavily dependent on them. This is a serious concern for long-term use of AI interventions, as the AI is not meant to be a permanent presence in someone’s life but to help them change their behaviours.

Peter Henningsen, Dean of the TUM School of Medicine, said:

“Although embodied AI has arrived in the clinical world, there are still very few recommendations from medical associations on how to deal with this issue.

“Urgent action is needed, however, if the benefits of these technologies are to be exploited while avoiding disadvantages and ensuring that reasonable checks are in place. Young doctors should also be exposed to this topic while still at medical school.”

5. Risk of enabling crimes e.g. sex crimes

TUM researchers, discussing the therapeutic use of sex robots, commented:

“The impact of intelligent robots on relationships, both human-robot and human-human relationships, is an area that requires further probing, as do potential effects on identity, agency, and self-consciousness in individual patients.”

Similarly, if a sex robot is provided therapeutically to an individual with a paraphilia, such as pedophilia, the effect of this on the targeted behaviours towards other humans also needs to be closely monitored. The risk is that if robotic interventions do not translate into improved human interaction, they merely remain a way of improving a patient’s relations with machines, or worse, become an outlet that further limits human-to-human relationships.

And ultimately, if human relationships suffered as a result of this innovation in therapy, where would we go from there?

To read the full results of this research, click here


Call 116 123 to speak to a Samaritan
