The first high-level goal of AI is intelligent software that is able to generalize from the database available to it from the real world, i.e. to draw independent conclusions whose cognitive frame lies outside the knowledge domain underlying that database.
The second high-level goal is to communicate and interact with people on this basis, i.e. to realize meaningful (semantic) communication that neither repeats already known knowledge nor merely recombines it in a (however surprisingly) clever way.
The third high-level goal is to make communication with people empathetic, i.e. to perceive the emotional state of the interlocutor, to interpret this state, and to adapt the AI's own way of conducting the conversation to this interpretation, so that the human interlocutor feels accepted.
Research is still very far from all three goals. However, even just approaching them is a prerequisite for assuming responsibility in the development of AI projects, in the sense of an ethics such as that formulated by I. Kant.
Einstein defined empathy as follows:
“Empathy is the attempt to see the world patiently and seriously through other people's eyes. It cannot be learned at school. It is cultivated throughout life.”
We are well aware of this. In our project IASON we want to realize an approach to the development of "Artificial Empathy"; see also https://en.wikipedia.org/wiki/Artificial_empathy. The aim is not to achieve these high goals themselves, but to support people, namely patients with Alzheimer's disease as well as their relatives and caregivers, in coping with everyday life, above all with communication.
Alzheimer's patients are considerably restricted in their verbal expression, owing to increasingly impaired short-term memory. Depending on the progress of the disease, however, long-term memory often remains readily accessible for a long time. Many memories are also stored in the body; see Straub (2020) under the menu item “Further information”. There is, moreover, a “sensory memory”, as Prof. Birbaumer describes it in his WS 2001 lecture “Medical Psychology and Medical Sociology I, 3rd hour” (https://timms.uni-tuebingen.de/tp/UT_20011112_001_medpsych_0001), which is in principle inaccessible to consciousness, but whose information is, or can be, used in conscious or preconscious actions.
The lecture deals with the behaviour of split-brain patients, in whom the “connecting bridge” between the left and right hemispheres of the brain (the corpus callosum) has been severed, usually to prevent epileptic seizures from spreading from one hemisphere to the other. If a text is presented to such patients separately in the right or left visual field, the language area (usually localised in the left hemisphere) produces a more or less coherent text that also contains terms originating from the hemisphere it cannot access, in this case the right one. The conclusion is that some information must nevertheless have “penetrated” from the right hemisphere to the left, even though the patient cannot become aware of it in any way.
This gives reason to link the topic of "microexpressions" with the development of artificial empathy in this context; see https://en.wikipedia.org/wiki/Microexpression.
This should not only help us understand the patient better, but also help the patient find their way in the world and remember things long considered forgotten, at least in verbal form.
Important scientific and practical questions arise, such as:
- Which microexpressions occur in AD patients, in which conversational situations, with what frequency and intensity?
- Which parameters change as the disease progresses?
- Which occurrences or changes of microexpressions would be recognizable in a very early phase of the disease?
- Can these findings be used for early diagnosis or prediction?
- What correlations with the EEG exist?
- Are verbal statements of AD patients about recognizing or not recognizing someone accompanied by microexpressions, and are the two consistent or contradictory?
The last question in particular is also very important for empathic contact with patients in practice. The verbal statements of patients with limited consciousness or memory are, in principle, at least partly contradictory and not very reliable. At the same time, as Prof. Birbaumer's lecture shows, deeper unconscious sensory processing does indeed take place: in the split-brain experiments mentioned above, the “speech centre” produced a melange of the two sentences read separately on the left and right, which was wrong but plausible.
An innovative application of microexpressions in the light of this finding would be to answer the very practical (and emotional) question: “Does the AD patient recognize their relatives and friends (as indicated by microexpressions), even if they verbally deny it?” A positive answer would be very important for relatives, caregivers, doctors, and nurses in their interaction with the patient, and especially for their emotionality and empathy.
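This discrepancy question can be made concrete as a simple decision rule. The sketch below is purely illustrative and not part of any actual system design: the `Observation` fields, the detector score, and the threshold are hypothetical placeholders standing in for whatever a real microexpression analysis would provide.

```python
# Illustrative sketch only: all names, scores, and thresholds are
# hypothetical assumptions, not the IASON/ALOIS project's actual design.
from dataclasses import dataclass


@dataclass
class Observation:
    """One conversational moment: what the patient said, and what the face showed."""
    verbal_recognition: bool      # did the patient verbally claim to recognize the person?
    microexpression_score: float  # hypothetical detector output in [0, 1]; high = signs of recognition


def assess_recognition(obs: Observation, threshold: float = 0.7) -> str:
    """Compare the verbal statement with the microexpression signal.

    Returns one of:
      'consistent'         - verbal statement and microexpressions agree
      'denied_but_signals' - patient denies recognition, but microexpressions suggest it
      'claimed_no_signals' - patient claims recognition without supporting microexpressions
    """
    nonverbal_recognition = obs.microexpression_score >= threshold
    if obs.verbal_recognition == nonverbal_recognition:
        return "consistent"
    if nonverbal_recognition and not obs.verbal_recognition:
        return "denied_but_signals"
    return "claimed_no_signals"
```

For example, a patient who verbally denies recognizing a relative while showing strong recognition-related microexpressions (`Observation(False, 0.85)`) would be flagged as `denied_but_signals`, which is exactly the case that matters most to relatives and caregivers.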
For this purpose, an artificial empathy is to be developed for the IEEDA ALOIS, which moderates the dialogue between the persons involved and which can inform the relatives and caregivers about the patient's state on the basis of the analysed microexpressions as well as the verbal expressions of the Alzheimer's patient. In times of the COVID-19 crisis, in which many contacts between patients and relatives have to take place via video communication, this may be a chance to show all involved a “silver lining on the horizon” in this critical situation.
Dr. Thomas Fritsch and Angelika Relin have written an essay (PDF, 3.6 MB, German only) on “Empathy and artificial digital assistants”.