Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Artificial Compassion—From An AI Scholar

Version 1 : Received: 29 April 2021 / Approved: 30 April 2021 / Online: 30 April 2021 (10:36:45 CEST)

How to cite: Mason, C. Artificial Compassion—From An AI Scholar. Preprints 2021, 2021040784. https://doi.org/10.20944/preprints202104.0784.v1

Abstract

This paper describes a new generation of computational intelligence, called Artificial Compassion, founded on the ancient idea of compassion. Artificial Compassion is the result of two coinciding historical developments. The first is the growing body of discoveries in the human sciences, in new fields such as neuroendocrinology and psychoneuroimmunology; these provide the spark for Artificial Compassion. For example, we once believed with certainty that the brain is fixed for life, but neuropsychology and functional MRI (fMRI) have shown it is “plastic”: it changes constantly throughout our lives in response to our experiences. Remarkably, we now know it is changed for the better by positive emotions such as compassion, kindness and happiness. The immune, endocrine, genetic, cardiovascular and neural systems are likewise influenced and changed by our emotional experiences. This new perspective on emotion and plasticity validates much of the ancient wisdom found in medical systems outside the West, and it shows that long-held Western assumptions about emotion no longer serve humanity well. The second development is ‘machine rub off’. We live today in symbiotic relation with our devices, and because we are plastic we are changed by our interactions with them; yet many people experience computer rage. We need Artificial Compassion to replace computer rage with positive plasticity.

Keywords

Artificial Intelligence, Robots, Compassion, Human Sciences, Positive Plasticity

Subject

Social Sciences, Anthropology

Comments (1)

Comment 1
Received: 27 May 2021
Commenter: Dr Elizabeth M Morrow
The commenter has declared there is no conflict of interests.
Comment:

Artificial Compassion must be the foundation of our digital future

Nothing in our daily lives brings us more frustration and upset than digital technologies that do not work for us. We have all felt the rage when a computer decides to reboot at the wrong time, the smartphone runs out of battery, the cash machine is out of use, or the car needs a diagnostic check-up. It makes me cross just thinking about it.

Humans need and love technology that works for them.

This article by the renowned technologist and NASA scientist, Dr Cindy Mason, is an important contribution to the field of digital technology for two main reasons:

1) It explains how humans are in a deeply connected emotional relationship with the digital technologies that influence every aspect of our lives and the ‘rub off’ we feel.
2) It suggests that we can all be happier - and more in control - if future technologies are designed with inbuilt compassion for humans.

What Mason calls the growth of the “digitizing society” needs to reflect that adults and children experience the world through relational spaces: our relations to ourselves, to each other, to our environment, and to our world. These can be felt as positive and uplifting or as negative and harmful to our wellbeing.
Most of the technology we have now has been designed by educated white men to perform tasks that educated white men enjoy – like playing games and driving cars. Mason argues that, for the good of all of us, future technologies, and especially those using artificial intelligence such as machine learning, need to:

- be more humane, e.g., more inquisitive, creative, expressive, and nurturing, rather than narrowly task-orientated
- be far more inclusive and representative of the diversity of humans, treating that diversity as a strength of humanity
- consider the impact on different types of people in society, especially minority groups across the digital divide.

This article presents a vision for “Artificial Compassion” as a new foundation for a more humane digital future. What is impressive is that it also provides the technical expertise to realise it, taking the reader step by step through the learning captured over many years of building robots.

One of the most fascinating points of this paper, and of Mason’s work, is that robots can be made to be compassionate if they are given the right software design, known as a ‘cognitive architecture’. This means deciding how you want a robot to think and creating that in code, sensors, and the ways the robot communicates or behaves. Technologists can programme the internal ‘mind’ of the robot to reflect what humans understand as compassion.

In simple terms, Artificial Compassion means a digital agent has the software to reflect on what it is thinking or doing. Such agents are a far more advanced version of most of our present technology, which is based on a sense-think-do model. The robot can pick up on the impact it is having on the human and, even more importantly, it can learn to pre-empt the impact its behaviour might have and judge its own behaviour.
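To make the idea concrete, here is a minimal sketch of such a loop in Python. It is purely illustrative: the class, the method names, and the "frustrated" signal are assumptions invented for this example, not Mason's actual architecture. The point it shows is the extra reflect step layered on top of the basic sense-think-do cycle.

```python
# Illustrative sketch only: a sense-think-do agent loop extended with a
# reflective ("meta") step that judges the plan's impact on the human
# before acting. All names here are hypothetical, not Mason's code.

class CompassionateAgent:
    def __init__(self):
        self.history = []  # record of plans the agent has acted on

    def sense(self, environment):
        """Read the world, including signals about the human's state."""
        return {"human_state": environment.get("human_state", "neutral"),
                "task": environment.get("task")}

    def think(self, percept):
        """Pick a candidate action for the current task."""
        return {"action": f"do:{percept['task']}", "tone": "neutral"}

    def reflect(self, percept, plan):
        """Meta-level step: pre-empt the plan's likely impact on the
        human and soften the behaviour if it could make things worse."""
        if percept["human_state"] == "frustrated":
            plan["tone"] = "gentle"
            plan["action"] += " (offer help, explain, slow down)"
        return plan

    def act(self, plan):
        print(f"[{plan['tone']}] {plan['action']}")
        self.history.append(plan)

    def step(self, environment):
        percept = self.sense(environment)
        plan = self.think(percept)
        plan = self.reflect(percept, plan)  # the layer beyond sense-think-do
        self.act(plan)


agent = CompassionateAgent()
agent.step({"task": "reboot", "human_state": "frustrated"})
# -> [gentle] do:reboot (offer help, explain, slow down)
```

A plain sense-think-do agent would stop after think and act; the reflect step is where the agent turns its reasoning on itself, which is the core of the reflective design described above.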

Our future digital society will give great power and control to some people and not others, which is potentially dangerous. Advances are being made internationally in digital ethics to build safety nets, accountability structures, transparency, and clarity about the dangers and advantages of the technology. However, there is a pressing need for funders, regulators, technologists, scholars and educators, digital designers, students learning to code, and the public to be educated about Artificial Compassion.

It is up to us all to demand a compassionate digital future.


Dr Elizabeth Morrow
Research Analyst and Inclusion Specialist
Research Support Northern Ireland
Belfast, United Kingdom
elizabethmmorrow@hotmail.co.uk