Social Robots in Elderly Care
by Hille Haker
Header image created on DreamStudio by Joseph Vukov
Insofar as AI systems are being developed for the medical care of people with disabilities, chronic illnesses, or in old age, three fields of application are frequently mentioned: care assistance, monitoring or surveillance, and social or therapeutic support. There are multiple social robots, also called care robots, currently on the market. But none is as well documented as “Paro,” a care robot resembling a toy seal. For example, 70% of Danish nursing homes use Paro. It was also introduced in Fukushima after the reactor accident in 2011 and in the training of future nursing staff. Here, I am only interested in care robots as they are used in healthcare. I will narrow my view even further and examine the use of social-therapeutic robots in the care for the elderly.
Why Introduce Care Robots?
The reasons for introducing robots into elderly care are diverse; among them are these:
Proponents point to demographic change, which is leading to an increased need for nursing care and caregivers in all industrialized countries. The care provided in these contexts is already accompanied to a significant extent by technological equipment. And the distinction between these technological care aids, which are already regarded as "traditional" (stair lifts, hair washing machines, etc.), and care robots is not clear-cut.
The shortage of caregivers is becoming dramatic: fewer and fewer people can care for their aging relatives or are willing to work professionally in care work. Robots may not only take over this physical work, but also provide a monitoring system that reduces the need for the personal presence of relatives and/or caregivers. According to the frequently expressed argument, the shortage of caregivers means that robots are needed to ensure high-quality and sufficient care in the future.
The in-home use of care robots can foster the independence sought by many elderly people. The interplay of care assistance, monitoring, and social interaction increases the likelihood that people will be able to live longer in their familiar surroundings.
It is expected that the use of care robots will help to reduce costs by enabling caregivers to concentrate on "core tasks," i.e., the interaction with and support of elderly or sick people.
Apart from accompaniment robots, social-therapeutic robots are used in elderly care mainly because they are predictable, because they have an evident ability to stimulate people emotionally, and because they can help regulate feelings such as sadness, anger, or loneliness. That they can do this is now well established in empirical studies.
Social Contexts of Care Work
With respect to care for elderly people, the social and professional context is relevant: traditionally, care for the elderly has been provided by families—and by women in particular—and has been termed “love’s labor” by feminist scholars. The shift to institutionalized care in industrialized countries underwent professionalization in the 20th century, but at the same time, the professional roles often remained connected to models of femininity. This goes hand in hand with the assumption of female social commitment and, more generally, of caring for others.
Due to the physical demands of elderly care in particular, nurses who remain in the profession long-term risk not only a lower quality of life than other health professionals but also earlier health problems of their own. Nobody can deny the demographic factors and the cultural value of independence. Yet the prognoses of a future nursing shortage must be seen against the background of increasingly economically oriented institutions. These institutions define efficiency requirements in terms of profitability and not necessarily in terms of justice done to all elderly people and to the nursing profession in general.
Elderly people have the right to be cared for. Societies owe them their solidarity, based upon the generational contract in line with social values that shift over the course of a lifetime. Many people wish for themselves—and therefore welcome for others, who may be just a few years ahead of their own need for care—the expansion of outpatient services with great flexibility. Furthermore, no social network can exist without a continuum of voluntary work, freelance work, informal employment, and professional services.
The use of robots can therefore be one element in a complex network of services, and their use can be manifold: elderly people, their partners, and/or family members can, for example, independently acquire social robots and "keep" them much like pets; caregivers and nurses can use them either in outpatient settings or in nursing homes; and care institutions can use them in individual or group care. Each context of application is subject to different forms of social cooperation and interaction; as I will show in a moment, the constellation is crucial for the ethical evaluation of care robots and entails that a generalized assessment of them should be avoided.
The Debate on Social Robots for Elderly Care
In the following, I will distinguish the lines of argumentation according to three models and then end with some ethical considerations.
1. REPLACEMENT OF HUMAN CARE
This argument – though often framed not as replacement but as compensation for the nursing shortage – is often found in the rhetoric of the developers of care robots and the companies advertising them, who argue that robots will save costs in staff and/or medication. Social robots, companies promise, can – and should – fill the gap when other forms of care are not possible. For example, they may step in when the presence of relatives or caregivers is not possible, or when support animals are ruled out by concerns for their safety or that of the elderly. The basic emotional relationship that humans need at all stages of life is often lacking in the care for the elderly. Robots therefore fill a gap rather than taking the place of humans: they fill a role that was not otherwise being filled by anyone. Finally, studies suggest that for elderly people with dementia, robotic care is easier and less stressful than verbal communication, and that social robots are well suited to the emotional communication these patients need.
Critics warn that the application of robots like Paro infantilizes the elderly. They hold that it exploits their dependence and vulnerability, simulating social relationships rather than authentically augmenting them, and that this deception is problematic. While some elderly people may well understand explanations of what a “robot” is, this is by no means the case when robots are advertised for individuals suffering from dementia and/or Alzheimer’s disease.
When social robots are connected to monitoring or surveillance systems, other concerns come up. For example, critics fear that data are being acquired, stored, and used without the person’s consent or regard for their best interest. Privacy and exploitation for extrinsic purposes become a bigger problem than the cute seal makes us think.
By now, industrialized, capitalist societies have created working, living, and housing conditions that make it almost impossible to care for the elderly in surroundings they know. The use of robots is the ultimate consequence of an already distorted model of social relationships. Ultimately, critics warn, robots will change human relationships, potentially compromising the truthfulness and trust between caregiver and care receiver.
2. SUPPLEMENT TO HUMAN CARE
While the previous argument welcomed or warned of a “replacement” of humans, it is perhaps more accurate to conceive of social robots as a supplement, a complementary tool in the care for the elderly. That’s the perspective emphasized by the complementary model. Technical devices of nursing assistance – a robotic hairdresser, for example – can provide relaxation by stimulating the scalp, without being defined as a "social robot." But it is not clear why this touch should not include a social component. The focus of the complementary model is thus on use by third parties, be they relatives, caregivers, or nursing staff. Unlike a view that describes social robots as replacing humans, the complementary model emphasizes the ways in which third parties retain control over the robot.
Critics ask, however, whether the complementary model is rather naïve, given recent developments in robotics. For instance, the increasing independence of robots may create incentives to shift towards replacement. If robots remain subject to the control of caregivers, this presupposes the presence of caregivers; but then the question of cost-effectiveness that motivated their introduction in the first place returns. Thus, on the one hand, if social robots are used in a merely complementary way, playing with dolls and/or stuffed animals may be as effective as using their technical counterparts. On the other hand, with the increasing “autonomization” of robots, caregivers may have less control over their effects than assumed.
3. MUTUAL TRANSFORMATION OF HUMANS AND ROBOTS
As robots become more complex in the coming years and decades, the range of our interactions with them will increase, and social robots will become increasingly commercialized. At the same time, it is unlikely that their use can be controlled, in the sense of the complementary model, in such a way that robots are integrated exclusively into therapeutic measures: in fact, to proceed on such an assumption overlooks not only the fact that robots are evolving, but also that our interaction with them is changing us as well.
If, for example, elderly people in nursing homes become calmer with the help of robots, then staff shortages in nursing homes may eventually make the use of robots more and more attractive. In contrast to staff, robots incur costs only in their acquisition, not in their use – the reality of everyday working life may therefore suggest an inevitable slide from the complementary model to the replacement model.
In this model, more than in the complementary model, the blurring of boundaries between a) the reciprocal, symmetrical interactions between humans, b) the reciprocal but asymmetrical interactions of humans with animals, and c) the only partially reciprocal, asymmetrical interactions with robots is indeed an ethical problem. That means that even if we can achieve therapeutic successes with a stuffed animal seal, an ethically sound implementation requires the moral imagination and creativity of caregivers.
At some point, it may become more natural for caregivers to turn to robots because they are more efficient and, perhaps surprisingly, more familiar. After all, it is not just the robots that are changing. In the future, nursing staff will also have grown up in the era of the “internet of things,” and the use of robots in everyday life, while unfamiliar to today’s adult generation, may indeed increase independently of the healthcare sector. In other words, the use of robots in care for the elderly must be seen against the general background of the automation and digitalization of everyday life. It is possible that the use of various technical devices, including social robots, will be seen as much less problematic in the future than it is today.
The Contribution of Christian Ethics
Interdisciplinary approaches to bioethics and new technologies have been developed together with the sciences. These approaches aim to protect the dignity and rights of individuals and groups, enable institutionalized responsibility, and ensure that agents can be held accountable for their actions. Often, the so-called four principles of autonomy, non-maleficence, beneficence, and justice are evoked. These are indeed helpful for mapping the questions that may arise in a given technology application.
Christian ethics approaches are as diverse as their philosophical counterparts; they may follow or incorporate philosophical-ethical approaches – for example, a human rights approach or utilitarian ethics – or they may incorporate other, varied perspectives. In conversation with these perspectives, Christian and/or Catholic ethics insists on two premises of any ethical theory (their justification is an issue, of course, but I’ll leave that open here). The first concerns the dignity of the human person, which points to both their vulnerability and agency. This premise is often spelled out in the right not to be harmed, the right to exercise one’s freedom, and the right to be assisted when necessary. The second premise is the principle of responsibility entailed in the concept of agency, pointing to the moral capability to respond to others, to account for one’s actions, and to prospective social cooperation based upon mutual responsibility in and for one’s society. I would like to formulate these premises in a thesis:
Dignity points to the precariousness of human freedom because of one’s susceptibility to suffering and being harmed by others, one’s social dependency, and the structural vulnerabilities that increase the possibility of being deprived of one’s rights to be recognized, protected, and provided with the necessary means to live a decent life. Responsibility points to persons’ agency as freedom to act, respond to others, account for one’s actions, and transform unjust conditions into more just ones in social and political actions.
Building upon this concept of vulnerable agency—one that entails both dignity and responsibility—Catholic Social ethics uses normative reference points that orient moral judgments. These are gathered in the so-called principles of Catholic Social Teaching: in addition to human dignity, these are justice, subsidiarity, solidarity, the care for the common good, and the priority of the poor (“option of and for the poor”), understood as priority of all those who are deprived of the recognition, protection, and provision of their rights. This means: from a social ethics perspective, the starting point is not the robot as commercial product but the robot as envisioned response to a social ill—in this case, the sufficient and dignified care for patients in different circumstances.
The above-mentioned principles are too abstract and too remote from any context to offer direct guidance. We can, however, pair these principles with the tradition of moral theology, which has developed criteria for prudential, contextual moral judgments. These criteria will bring us closer to an ethical evaluation of social robots: among them are the attention to circumstances of an action or practice, learning from experiences, consideration of the anticipated consequences in the future (precaution and imagination), or openness to practical, if not pragmatic solutions.
As a first step toward a conversation with AI scientists, IT companies, healthcare providers, and patients, I would therefore like to use my reflections above to formulate some considerations for the use of social robots. Together with the argumentation models unfolded above, they could serve as a starting point for more in-depth reflection on the ethical issues.
The playful, imaginative interaction of humans and (social) robots may foster emotional balance and thereby maintain or increase the well-being of persons dependent on social and/or emotional support. There is no reason to reject their use in the care for the elderly.
Where time and space in care constellations allow for interactions between clients/patients and caregivers, or for human-animal interactions, preference should be given to these interactions. If used, social robots must be integrated into the general (care-giving) interactions, not replace them.
If robots perform actions that trigger certain emotional reactions, the intimacy and integrity of the persons involved must be secured. Deliberate deception that exploits the emotional vulnerability of an elderly person must be prevented by all means. This must be ensured through information materials and communication guidelines for relatives. Caregivers must also be continuously trained, with the aim of familiarizing them with technological developments in robotics and care, and ensuring an ethically responsible integration of technologies.
If elderly people are capable, they must be informed about how robots work. If they are not capable of consenting to the use of robots, proxy consent must be sought, and nurses must be trained to use robots in a manner that secures the recognition, protection, and provision of patients’ rights. This requires situation- and context-dependent training. Ethicists should work together with nurses and AI specialists – or, vice versa, these specialists must work together with ethicists – at every step of robotics projects. Universities should develop interdisciplinary projects to secure such training.
Robots can pose a safety risk under certain circumstances. The accountability for their use must be clarified, especially if they are used in the absence of caregivers.
Social robots, moreover, can and will be developed alongside monitoring robots. While the two are terminologically distinct, their functions are often not clearly separated—monitoring robots can take over their task behind the façade of a robot toy. It is therefore important that the development of social robots not violate patients’ right to privacy:
When robots are simultaneously used for monitoring and/or surveillance purposes, this must be further scrutinized and be in line with the right to privacy and integrity. We have already developed multiple guidelines, and they must be implemented rather than reinvented with every new application.
The technical "roadmaps" of robotics must be accompanied by ethical roadmaps. As in other areas, it is important to prevent social practices from being overrun by technological developments without discourse in civil society. The current increase in the development of social robots within and beyond healthcare practices is an indication that there is an urgent need for debate and reflections. Universities should offer the opportunity for interdisciplinary projects between the sciences and humanities to support these efforts.
1 A short video on “Paro” can be found here: https://www.youtube.com/watch?v=PAJ2GXzaJtQ
2 Maurits Butter, et al., Robotics for Healthcare: Final Report (European Commission EC, 2008).
3 Lillian Hung et al., "The Benefits of and Barriers to Using a Social Robot Paro in Care Settings: A Scoping Review," BMC Geriatrics 19, 1 (2019), 1244-6.
4 Eva Feder Kittay, Love's Labor: Essays on Women, Equality, and Dependency (New York: Routledge, 1999).
5 See my colleague’s book on this topic: Sandra Sullivan-Dunbar, Human Dependency and Christian Ethics (Cambridge: Cambridge University Press, 2017).
6 Hung et al. include literature on this topic.
7 Barbara Klein et al., "Social and Emotional Robots for Aging Well?” GeroPsych: The Journal of Gerontopsychology and Geriatric Psychiatry 26, 2 (2013): 81-2.
8 I have developed the concept of “vulnerable agency” as an interpretation of human dignity in Hille Haker, “Vulnerable Agency – Human Dignity and Gendered Violence,” in Hille Haker, Towards a Critical Political Ethics: Catholic Ethics and Social Challenges (Basel: Schwabe Verlag, 2020), 135-67.
9 For reflection on the virtue of prudence, cf. Thomas Aquinas, Summa Theologiae, II-II, Quaestiones 47-56.
Hille Haker holds the Richard McCormick S.J. Endowed Chair in Catholic Ethics at Loyola University Chicago, after teaching at Goethe University Frankfurt and Harvard Divinity School. She holds a Ph.D. and Habilitation in Catholic Ethics, an MA in German Literature, and a BA in Philosophy from the University of Tübingen, Germany. She served on several bioethics committees, including the European Commission’s high-level expert group “European Group on Ethics in Science and New Technologies” (2005-2015). Her most recent book is Towards a Critical Political Ethics: The Renewal of Catholic Social Ethics (2020). On AI, she recently published “Experience, Identity and Moral Agency in the Age of Artificial Intelligence,” in Artificial Intelligence and Human Enhancement: Affirmative and Critical Approaches in the Humanities, edited by Herta Nagl-Docekal and Waldemar Zacharasiewicz (Berlin: De Gruyter), 51-77. Beyond bioethics, her interests are in moral identity and questions of recognition and responsibility, critical theory and social ethics, feminist ethics, and literature and ethics.