Motivation and emotion/Book/2023/Artificial emotion

Overview


Emotion artificial intelligence (AI) does not involve teary-eyed robots who can't seem to move on from their ex, but rather refers to the field of affective computing, in which various forms of AI attempt to accurately recognise, interpret, and mimic human emotions (Zhao et al., 2022). This capability is achieved by using AI technology to analyse the facial expressions (e.g., see Figure 1), body language, verbal cues, and behavioural responses of other intelligent entities, gathering data to infer the most likely emotional state (Assuncao et al., 2022). Current forms of AI are not able to 'feel' emotions, that is, the technology does not possess any innate emotions, nor is it likely to reach that level of complexity without a physical living body. Nonetheless, there are numerous beneficial applications of AI's 'experience' of emotion across the globe (Cao et al., 2022). For example, emotion AI is increasingly valuable in fields such as healthcare, business, marketing, employment, and early intervention, all of which are discussed in this chapter alongside the potential limitations and ethical considerations of implementing emotion AI in today's society.

Focus questions:


 * What are the ethical considerations of emotion in current and future AI systems?
 * What is the relationship between Artificial Intelligence and models of emotion?
 * How can psychological disorders be detected using emotion recognition AI?

Defining Artificial Intelligence
First established in the 1950s, AI is a branch of computer science in which researchers attempt to create technology and machines whose embedded software can 'think' and make comprehensive decisions as if mimicking the human brain (Haenlein & Kaplan, 2019). AI may also refer to these machines or robots themselves, which can compute large quantities of data and accurately decipher that information in a fraction of the time taken by human neurological processes (Kharb, 2018). There are various forms of AI technology available and emerging in today's society, the most common being 'reactive machines'. This type of AI holds no capacity for memory, but is able to respond to its environment and surroundings. Other common AI sub-types include limited memory AI (harnesses memory functions for learning and response improvement), theory of mind AI (able to comprehend others' needs), and self-aware AI (closely replicates human intelligence) (Islam et al., 2022).

Ethical considerations of emotion in artificial intelligence systems
Ethics can be described as the innate and moral set of principles, or 'rules', that dictate one's actions and behaviours in society (Ghotbi, 2022). But how is this relevant to the field of artificial intelligence? Considering most AI attempts to directly mimic human emotion and behaviour, there are various ethical considerations involved, namely the possibility of unreliability and errors of judgement. While modern AI technology is generally deemed highly accurate and efficient in its decision-making, it is not foolproof and errors can occur (LaGrandeur, 2015). This is particularly true for instances of more complex comprehension and ethical reasoning, where AI cannot quite match the level of human intellect required. Due to the ever-increasing presence of AI software in areas such as healthcare, business, and marketing, an AI error of judgement can have a multitude of negative consequences for real people. For example, many AI systems are being introduced to assist with precision surgery (see Figure 2). If AI were to make an error in the placement of a bodily incision, or determine an incorrect level or type of anaesthetic, the patient would be placed at increased risk of harm and potential loss of life.

Potential AI bias and corruption
Similarly, there is growing suspicion surrounding the potential for AI to become biased or corrupt, engaging in unethical decision-making. For example, Monteith et al. (2022) provide evidence of AI being used to determine the most suitable applicants for job positions in highly elite organisations. While this may seem an efficient and unbiased method of sifting through applicants for the best possible employee, recent studies show that some AI can in fact hold predetermined biases toward particular racial facial features, depending on the coding and training data used to develop the recognition software in the first place (LaGrandeur, 2015). Furthermore, the initial data used to program and train AI systems may be biased to associate certain genders with specific occupations, increasing the inaccuracy of final decisions.

There is also speculation surrounding the idea of 'emotional AI' being predominantly a marketing stunt to make such technologies more attractive to the average human consumer (Kharb, 2018). It is no surprise that people would be more willing to buy into products, companies, and schemes if they feel the technology used is somewhat capable of understanding and resonating with their own emotions. This raises a further conflict of morals and ethics over whether it is fair to compare, and reduce, human emotion and sensitivity to a piece of artificial intelligence at all (Stark & Hoey, 2021).

So what do you think? Should consumers take this seeming perk at face value, or is this upgrade in AI capability purely for profit and financial gain?

CASE STUDY

An artificial intelligence system referred to as 'Deep Patient' was used to assess the diagnosis and intervention of real-life patients at Mount Sinai Hospital. Unfortunately, it was later found that up to 28% of Deep Patient's assessments had been incorrect, providing patients with a diagnosis or intervention that did not fit the reality of their symptoms.


 * Imagine you are a patient being treated at Mount Sinai Hospital and Deep Patient is going to assess your mental health.
 * What might be some of the consequences for your health and wellbeing should Deep Patient make an error?

Relationship between artificial intelligence and models of emotion


Emotion is a highly complex and multifaceted concept; however, we can attempt to understand emotions as varying psychological states which occur as a result of intertwining neurophysiological changes (Borod, 2000). This encompasses our thoughts, feelings, mood, relationships, mental and physical behavioural responses, and experience of pleasure. Depending on the current emotional state of the body and mind, one can use this information to develop 'feelings'; in other words, feelings are the interpretation of emotional states based on initial thoughts and responses to that emotion (Heimerl et al., 2020).

Paul Ekman first posed the idea of six basic emotions and accompanying expressions, a concept which remains widely accepted in the field of emotion research to this day (Bartneck et al., 2017). Ekman's model of core emotions consists of happiness, sadness, anger, fear, surprise, and disgust, each of which most individuals are able to readily display and also recognise in others. These emotions can be recognised by evaluating the combination of physiological state (body language, facial expressions), verbal depictions (tone of voice, language, volume), and resulting behaviours (punching, hugging, crying, etc.) (Kagan, 2008).

Facial expression detection
Although current forms of AI cannot truly feel emotions for themselves, there is significant evidence to suggest AI can be used to accurately recognise, interpret, mimic, and display the emotional expressions of other entities (Assuncao et al., 2022). Using the information provided by emotional models, advanced forms of artificial intelligence can detect and analyse behavioural indications that humans may be experiencing specific emotions (Heimerl et al., 2020). Through the integration of highly developed motion detection software, emotion AI is able to analyse and extract key information about the human face, including shape, size, fine lines and indentations, and motion, and reference this data against pre-existing knowledge of human emotional expression (Cowen et al., 2020). Such detection capabilities hold immense benefits, from increasing the quality and relatability of media content, to enabling police officials to immediately identify a threat or suspicious activity and potentially prevent major crime or identify perpetrators. Despite these advantageous applications, emotion AI is only deemed accurate in identifying basic core emotions (see Figure 3) and remains limited in more complex emotional recognition (Pfeifer, 1988). There is also rising scepticism concerning the logic behind facial expression detection AI, with claims that not all facial expressions convey exactly the emotions someone may be feeling; for example, you can still produce a physical smile while experiencing inner frustration, sadness, or anger (Cowen et al., 2020).
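The general logic described above, mapping measured facial features against known expression patterns, can be sketched in code. The feature names and thresholds below are entirely hypothetical, invented only to illustrate the shape of such a rule-based pipeline; real systems learn these mappings from large labelled datasets rather than hand-written rules.

```python
# Illustrative sketch: mapping coarse facial measurements to one of
# Ekman's basic emotions. Feature names and thresholds are hypothetical.

def classify_expression(features: dict) -> str:
    """Return a likely emotion label from normalised (0.0-1.0) face features."""
    mouth_curve = features.get("mouth_curve", 0.5)   # > 0.5 = corners turned up
    brow_raise = features.get("brow_raise", 0.5)     # > 0.5 = brows raised
    eye_open = features.get("eye_openness", 0.5)     # > 0.5 = eyes widened

    if mouth_curve > 0.7:
        return "happiness"
    if brow_raise > 0.7 and eye_open > 0.7:
        return "surprise"
    if mouth_curve < 0.3 and brow_raise < 0.3:
        return "anger"
    if mouth_curve < 0.3:
        return "sadness"
    return "neutral"

print(classify_expression({"mouth_curve": 0.9}))  # a broad smile -> happiness
```

Note how the sketch can only ever output a handful of basic labels, which mirrors the limitation discussed above: a smiling mouth is simply read as 'happiness', regardless of what the person is actually feeling.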

Table 1: AI emotion analysis:

Natural Language Processing
Natural language processing (NLP) comprises certain AI abilities to hear, understand, and process spoken words as if the technology itself were human (Mathews, 2019). Unsurprisingly, this form of AI can also be used to detect emotion, as so much of our conveyance of emotion and feeling is communicated to others through tone, volume, and language. Artificial intelligence uses NLP to identify slight changes in an individual's mood via interpretation of vocal patterns and speech indications (Mathews, 2019). In fact, there are specific combinations of tone and language which can indicate which emotion is most likely being experienced at any given time. For example, common 'aggressive' phrases such as "Whatever" and "You're not listening!", when paired with a higher-pitched voice and increased volume of speech, allow AI to detect the likely emotion of anger (Basu et al., 2017). This information can then be used to form an appropriate response to that emotion, such as responding in a calm and even manner, or presenting media likely to facilitate or reduce present feelings.
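The cue-combination idea in the paragraph above, pairing aggressive wording with raised pitch and volume, can be sketched as follows. The phrase list, thresholds, and labels are hypothetical examples for illustration only; real NLP systems use statistical models trained on speech corpora rather than fixed word lists.

```python
# Illustrative sketch: combining text cues with vocal features (pitch,
# volume) to infer a likely emotion. All values and phrases are hypothetical.

AGGRESSIVE_PHRASES = {"whatever", "you're not listening!"}

def infer_emotion(utterance: str, pitch: float, volume: float) -> str:
    """Return a likely emotion from an utterance plus normalised vocal features."""
    aggressive_text = utterance.strip().lower() in AGGRESSIVE_PHRASES
    raised_voice = pitch > 0.7 and volume > 0.7  # 0.0-1.0 scales
    if aggressive_text and raised_voice:
        return "anger"
    if raised_voice:
        return "excitement"  # loud, but the wording itself is not hostile
    return "neutral"

print(infer_emotion("You're not listening!", pitch=0.9, volume=0.8))  # anger
```

The key point the sketch captures is that neither cue alone is decisive: the same phrase spoken quietly, or a raised voice with neutral wording, yields a different label.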

NLP artificial intelligence has a multitude of growing applications, including language translation, data analysis, computer-generated calls, and digital assistants, and as such is becoming an increasingly prominent part of using technology of any kind. To give a more specific example, one study found NLP software combined with AI was able to accurately match gaming opponents' emotional states and demonstrate 'empathy' for their losses (Marchi et al., 2019). Emerging capabilities of AI are also working toward creating and matching a voice to the real human beings with whom they are conversing, presenting the same tone and demeanour. Although impressive, this technology does pose some security risks in terms of voice-activated locks and protection, as voice recognition AI could be used to illegally infiltrate the personal data of others.

Flash quiz!

{Facial expressions are always an accurate representation of our current emotions:
|type="()"}
- True
+ False

Detection of psychological disorders using emotion recognition AI


Mental health is currently one of the biggest topics of conversation around the globe, with an estimated 280 million people suffering from depression and anxiety worldwide (Carvajal-Velez et al., 2021). But how does this relate to artificial intelligence? Present and emerging forms of emotion recognition AI have been suggested to accurately detect the presence of psychological illness in human beings through verbal interactions and facial/behavioural analysis software (Lee et al., 2021). Specifically, innovative AI systems have demonstrated the capacity to break down emotional cues displayed by human patients (see Figure 4) in order to detect potential signs of depression and cause for diagnosis (Nemesure et al., 2021).

Some forms of AI can even detect symptoms of depression by analysing an individual's interactions with social media content, from the images posted to the type of language used to comment on others' content (Ahmed et al., 2022). In one such study, this data was cross-referenced with assessments from qualified medical professionals, finding emotion recognition AI up to 70% accurate in detecting depressive symptoms in real individuals. Although these figures seem highly advantageous for AI in mental illness detection, Monteith et al. (2022) reinforce the notion that any AI diagnosis should be regarded as an interpretation of likely symptoms rather than conclusive medical fact.
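The cross-referencing step described above amounts to a simple agreement calculation: comparing which individuals the AI flagged against which individuals clinicians assessed as symptomatic. The data below is invented purely to illustrate how such an accuracy figure is computed; it is not drawn from the cited study.

```python
# Sketch of estimating emotion-AI accuracy by cross-referencing its flags
# against clinician assessments. The boolean lists are invented example data.

ai_flags = [True, True, False, True, False, True, False, True, True, False]
clinician = [True, False, False, True, True, True, False, True, False, False]

# Proportion of cases where the AI's flag matched the clinician's assessment
agreement = sum(a == c for a, c in zip(ai_flags, clinician)) / len(ai_flags)
print(f"AI agreed with clinicians on {agreement:.0%} of cases")  # 70% of cases
```

A figure like this only measures agreement with clinicians, not ground truth, which is one reason such detections are treated as an interpretation rather than a diagnosis.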

Treatment outcomes and early intervention
So we've established the role of AI in the detection and diagnosis of depression, but what next? Various studies show significant statistical evidence for implementing AI technology in the planning phase of treatment for those currently suffering from psychological illness, as well as potential benefits for early intervention (Balcombe & De Leo, 2022). Emotion AI has the capability to accurately recognise indicative symptoms across a wide range of data collection systems, allowing a collective consensus to be drawn on which demographics are currently most at risk of developing depression (Joshi & Kanoongo, 2022). This creates new potential for early intervention in such populations, such as providing further education about mental illness and its consequences, as well as offering preventative therapy options.

Artificial intelligence can also improve the symptoms of depression on a more individual level, with the increased ability to create personalised and unique treatment plans based on specific data analysis of the patient at hand (Lee & Park, 2022). Moreover, the concept of 'wearable' AI is on the rise, with new devices being developed which individuals can wear on their person at all times (Abd-alrazaq et al., 2023). This technology acts to keep 24/7 insight into any developments in the patient's health condition, introducing improved fundamental diagnosis and monitoring of depression and anxiety symptoms.

No matter the method, implementing AI for early intervention and treatment of mental disorders expands the horizons of providing assistance before more serious symptoms develop. This is beneficial not only to those individuals at risk, but also to greater society, as reducing the prevalence of psychological disorders and the need for treatment would decrease a massive economic and emotional burden (Lee et al., 2021).

Consequences of false assumption
Emotion AI is not always 100% accurate in its judgements and reasoning, leaving unavoidable room for error and the potential for false detection of depressive symptoms. All forms of emotion recognition artificial intelligence are initially presented with 'training', or predetermined programming, for how to detect emotion, whether through facial recognition, language, or other data. Not all systems are perfect on the first try, and so, unfortunately, problematic programming persists in some AI. This can lead to a biased reading of other entities' emotional states and therefore an incorrect interpretation of psychological symptoms (Nemesure et al., 2021). The consequences of false emotion detection can be extremely detrimental for the patient in question. For example, if a patient were diagnosed with anxiety rather than depression (the pair have many overlapping symptoms), they may be provided with a treatment plan or medication which does not accurately address their underlying condition (Joshi & Kanoongo, 2022). This can result in unnecessary prolonged suffering for the patient, financial loss on incorrect therapy, and a damaged reputation for the organisation which implemented the AI in the first place.

Food for thought


 * Would you trust emotional AI to accurately interpret your emotions?
 * Do you believe technology holds a deserving place in the treatment of psychological illness and wellbeing?

Conclusion
So it all comes down to the million-dollar question: can artificial intelligence experience emotion? Professionals agree that while current AI systems are not able to truly feel human emotions for themselves, there is sound evidence for the abilities of AI to accurately recognise, interpret, and even mimic the emotions experienced by humans in their vicinity (Assuncao et al., 2022). Emotion recognition AI thrives on concepts of emotion represented through facial expressions and variances in language and tone, with current applications of AI being harnessed for the successful early detection of, and intervention in, certain psychological disorders, particularly depression. As with anything in life, there are a number of ethical considerations to be pondered, including potential bias, errors of judgement, and whether it is morally acceptable to consider technological systems on the same level of sensitivity and emotion as real human beings (Stark & Hoey, 2021). Having analysed the vast range of research concerning emotional AI, there appears to be an overwhelmingly positive consensus on the potential of AI in generations to come, with hopeful advancements in technological abilities on the near horizon. Although AI can sometimes predict mental illness and assist in the preparation of treatments or interventions, these detections are not yet considered sufficient for a full diagnosis without the secondary opinion of a human medical professional (Lee & Park, 2022). But who knows? With all the advancements AI has already accomplished, and its continuously growing technological ability, a solid diagnosis from a machine alone may be a viable concept in the near future.