Robots in healthcare:
a solution or a problem?
Policy Department for Economic, Scientific and Quality of Life Policies
Directorate-General for Internal Policies
Authors: Zrinjka DOLIC, Rosa CASTRO, Andrei MOARCAS
PE 638.391 - April 2019
EN
IN-DEPTH ANALYSIS
Requested by the ENVI committee
Robots in healthcare:
a solution or a problem?
Workshop proceedings
Abstract
This report summarises the presentations and discussions of a
workshop on the use of robots and AI in healthcare, held at the
European Parliament in Brussels on Tuesday 19 February 2019.
The aim of the workshop was to provide background information
and advice for Members of the ENVI Committee on the status and
prospects of applying robotic and artificial intelligence (AI) based
technologies in healthcare.
The first part of the workshop focused on the practical
application of AI and robots in healthcare, while the second part
examined the ethical implications and responsibilities of AI and
robotic based technologies in healthcare.
This document was requested by the European Parliament's Committee on the Environment, Public Health
and Food Safety.
AUTHORS
Zrinjka DOLIC, Milieu Consulting
Rosa CASTRO, Milieu Consulting
Andrei MOARCAS, Milieu Consulting
ADMINISTRATOR RESPONSIBLE
Miks GYÖRFFI
EDITORIAL ASSISTANT
Roberto BIANCHINI
LINGUISTIC VERSIONS
Original: EN
ABOUT THE EDITOR
Policy departments provide in-house and external expertise to support EP committees and other
parliamentary bodies in shaping legislation and exercising democratic scrutiny over EU internal
policies.
To contact the Policy Department or to subscribe for updates, please write to:
Policy Department for Economic, Scientific and Quality of Life Policies
European Parliament
L-2929 - Luxembourg
Email: Poldep-Economy-[email protected]
Manuscript completed: March 2019
Date of publication: April 2019
© European Union, 2019
This document is available on the internet at:
http://www.europarl.europa.eu/supporting-analyses
DISCLAIMER AND COPYRIGHT
The opinions expressed in this document are the sole responsibility of the authors and do not
necessarily represent the official position of the European Parliament.
Reproduction and translation for non-commercial purposes are authorised, provided the source is
acknowledged and the European Parliament is given prior notice and sent a copy.
For citation purposes, the study should be referenced as: Dolic, Z., Castro, R., Moarcas, A., Robots in
healthcare: a solution or a problem?, Study for the Committee on the Environment, Public Health and Food
Safety, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament,
Luxembourg, 2019.
© Cover image used under licence from Shutterstock.com
CONTENTS
LIST OF ABBREVIATIONS 4
EXECUTIVE SUMMARY 5
1. EU POLICY CONTEXT 7
2. PROCEEDINGS OF THE WORKSHOP 10
2.1. Introduction 10
2.1.1. Welcome and opening 10
2.2. Panel 1: Practical applications of artificial intelligence and robots in healthcare 11
2.2.1. Current use of robots in clinical practice and its perspectives 11
2.2.2. Robots in general service delivery of healthcare establishments 12
2.2.3. Other healthcare related areas of implementation of robot technologies 13
2.2.4. First round of Questions and Answers 14
2.3. Panel 2: Ethical evaluation and responsibilities of AI and robots in healthcare 15
2.3.1. Ethical aspects of using robots in healthcare 15
2.3.2. Main challenges and opportunities of using robots in healthcare 16
2.3.3. Socio-economic rationale of implementing robot technologies in healthcare 17
2.3.4. Questions and Answers 18
2.3.5. Closing remarks by the Chair 20
ANNEX 1: PROGRAMME 21
ANNEX 2: SHORT BIOGRAPHIES OF EXPERTS 23
LIST OF ABBREVIATIONS
AI Artificial Intelligence
EC European Commission
EP European Parliament
EU European Union
GP General Practitioner
GDPR General Data Protection Regulation
MEP Member of the European Parliament
MS Member State(s)
CERNA Commission de réflexion sur l'éthique de la recherche en sciences et technologies du numérique (French ethics commission for research in digital sciences and technologies)
IoT Internet of Things
EXECUTIVE SUMMARY
This report summarises the presentations and discussions at the “Robots in Healthcare: a solution or
a problem?” workshop held on 19 February 2019 and hosted by Mr Alojz PETERLE (MEP), co-chair of the
Health Working Group within the ENVI Committee. The aim of the workshop was to provide
background information and advice for Members of the ENVI Committee on the status and prospects
of applying robotic and artificial intelligence (AI) based technologies in healthcare.
The workshop began with an intervention by the co-chair, Mr Alojz PETERLE (MEP), who welcomed the
speakers and participants and opened the discussion by highlighting the importance of EU health in
the face of changing demographics, which pose ever increasing challenges for providing healthcare and
support to the elderly. Mr Peterle emphasised that many ethical, social and legal questions arise
from the use of AI and robots in healthcare. Next, Ms Mady DELVAUX-STEHRES (MEP), JURI Vice Chair
and Rapporteur of the Report with recommendations to the Commission on Civil Law Rules on
Robotics, emphasized the importance of placing the human at the centre of AI and robotics
applications and ensuring that guiding principles can effectively be implemented in all the different
areas where AI and robotics are used.
The first part of the workshop focused on the use of robots in clinical practice. Professor Alexandre
MOTTRIE highlighted that innovations in the field of robotics are driving developments leading
to more precise surgical procedures. He argued that such developments have huge potential for
making surgery safer and more cost effective by reducing the amount of time needed to perform
surgery and the likelihood of complications associated with readmission of patients to the hospital.
Professor Mottrie concluded his presentation by emphasising the need to define and standardise
EU training pathways for improving education in this area.
Dr Kathrin CRESSWELL provided an overview of current robotics and AI applications in healthcare,
ranging from back office (e.g., pharmacy stock control) to semi-autonomous and autonomous robotic
applications. Dr Creswell explained how each type of application poses a different set of questions and
challenges, which should be anticipated and addressed by either modifying the technological design
or the social environments where such applications would be used.
The first panel finished with a presentation on the applications of robotics to different psychological
interventions by Professor Daniel DAVID. He provided an overview of the ethical and social
acceptability of robots, and more generally of human-robot technologies as an effective tool for
treating clinical psychological disorders. Professor David concluded that science has an important role
to play in shaping new values of human-robot interaction, by changing negative stereotypes of robots
and by promoting values of human safety, efficacy and cost-effectiveness.
The second panel discussed ethical, legal and socio-economic aspects of using robots in healthcare.
Dr Raja CHATILA outlined the many ways that robots and AI can be used in healthcare, from
applications to process and analyse medical data for diagnosis to those enhancing motor-sensory
functions in active prostheses. He emphasised that robots and AI present many risks and tensions
that will require validation and certification requirements as well as respect for a set of ethical
principles, including specific principles developed for AI and robotics as well as classical ethical
principles developed within medical practice.
In her presentation, Dr Robin PIERCE focused on the main policy challenges associated with the use of
robotics in healthcare. She started by highlighting the wide range of capacities for robotics applications
in healthcare. Next, she focused on issues related to the protection of data and privacy in the care and
clinical contexts. Dr Pierce finished her presentation by highlighting the regulatory complexity posed
by the use of robotics and AI in healthcare.
The final speaker of the workshop, Dr Andrea RENDA, explained that current socio-economic
challenges affecting healthcare in Europe may justify the use of AI and robotics in healthcare. However,
unleashing the potential of AI and robotics would require good integration with the existing and
future 'technological stack', from high performance computing to 5G connectivity, nanotech and the IoT.
Among the potential risks associated with the use of AI and robotics in healthcare, Dr Renda mentioned
the possibility that these technologies could either help to address inequalities or exacerbate them. For
instance, while the use of 'junk' AI (e.g., a cheap alternative to standard healthcare) could help to
reduce healthcare costs, it could also run out of control and be detrimental to good quality healthcare.
Mr Alojz PETERLE wrapped up the workshop by thanking the speakers and audience for sharing their
knowledge and views. He reiterated that AI, robots and digitalisation will mark the EU healthcare system
in the years to come, and closed by remarking that while it is unclear whether AI will serve to humanise
societies, it is hoped these applications will remain instruments used for a more personalised
approach to health.
1. EU POLICY CONTEXT
The EU health sector is facing increasing demands on services brought on by an ageing population,
growth in chronic diseases, budgetary constraints, and a shortage of qualified workers. Technological
developments in the fields of robotics and AI can provide countless opportunities for addressing these
challenges, resulting in significant cost savings. Along with the integration of digital technologies, the
application of robotics and AI could lead to improvements in medical diagnosis, surgical interventions,
prevention and treatment of diseases, and support for rehabilitation and long-term care. AI and digital
solutions could also contribute to more effective and automated work management processes, while
offering continuous training for health and care workers. It is estimated that the market for AI in
healthcare will reach around $6.6 billion by 2021 (about EUR 5.8 billion), with significant cost savings
for healthcare systems.¹
Among some of the most interesting applications for the health and care sectors are the following:
• Robotic surgery, allowing more accurate, less invasive and remote interventions, relying on the availability and assessment of vast amounts of data;
• Care and socially assistive robots, helping to meet the expanding demand for long-term care from an ageing population affected by multi-morbidities;
• Rehabilitation systems, supporting the recovery of patients as well as their long-term treatment at home rather than at a healthcare facility;
• Training for health and care workers, offering support for continuous training and life-long learning initiatives.
While the integration of digital technologies, robotics and AI promises revolutionary changes in the
EU health sector, there are important ethical, legal, socio-economic and technological challenges that
need to be addressed in order to unleash the potential of these technologies. Significant investments
are required for the development of healthcare solutions, possibly through partnerships between the
public and private sectors. Devices used for medical purposes need to navigate existing regulatory
processes, respect ethical standards and gain the trust and acceptance of patients and healthcare
providers. The socio-economic impacts of robotics and AI must also be taken into consideration,
especially with regard to the effects that the adoption of automated solutions will have on health
workers and patients.²
Apart from the technological viability of potential solutions, one of the most pressing questions to be
addressed is how health and care would be transformed, and whether these technologies could lead
to possible repercussions for human dignity. A first potential risk in the use of care and nursing robots
to look after elderly and dependent people is that they may lead to worse outcomes because the human
element of care is left out. The automated elements of these types of robots will need to be balanced
with systems that ensure a human presence in health and care activities. A second purported risk
relates to the autonomy and moral agency of robots and AI.³ Should robots become
1. See https://www.accenture.com/us-en/insight-artificial-intelligence-healthcare, which estimates USD 150 billion in annual savings for the US economy. See also https://www.sitra.fi/en/news/artificial-intelligence-based-systems-help-achieve-better-services-cost-savings-social-health-sector/, describing the findings of a 2017 study that estimated that the only way for the Finnish social and health service reform to achieve its targets was by adopting digitalisation and artificial intelligence on a broad scale.
2. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Coordinated Plan on artificial intelligence (COM(2018) 795 final).
3. Stahl, B.C. and Coeckelbergh, M., 2016. Ethics of healthcare robotics: Towards responsible research and innovation. Robotics and Autonomous Systems, 86, pp. 152-161.
capable of making autonomous decisions, the need arises for a system that deals with responsibility
for potential harm caused by robots and automated systems.⁴
Regulatory and legal challenges should also be navigated. For instance, regulatory approval for medical
devices is needed to ensure the safety and efficacy of interventions.⁵ Following discussions on the need
to adapt to new technological changes, the new EU Regulation on medical devices, which replaced the
previous Directives, includes specific provisions addressing software medical devices.⁶ Whether the
existing legal framework is fit to address current or future challenges, such as regulating the liability of
all players involved in the design and deployment of AI and robotics applications (e.g., doctors,
producers and healthcare centres), remains an open question.⁷ With AI, robotics and the digitalisation
of healthcare building upon the collection, aggregation and analysis of vast amounts of sensitive data,
many questions also arise about privacy, data protection, data security and data sharing.⁸ AI research,
and machine learning in particular, involves access to large quantities of data on patients and
healthy citizens, for instance to collect data about genomics, environmental factors and lifestyles.
This raises questions regarding the ownership of data, informed consent and good data sharing
practices, particularly in light of the new GDPR.⁹
Healthcare has been identified as one of the key areas for robotics, AI and digitalisation developments
within several strategic EU documents. Among other initiatives, the European Parliament has been
leading a worldwide debate about the need to establish civil law rules applicable to robotics,
calling for further action by the European Commission to address the challenges in this area.¹⁰ Some of
the responses to this request will be given through guidance from the European Commission,
expected by mid-2019, relating to the Product Liability Directive and to the liability and safety rules
applicable to AI and robotics.
The EU Commission's European strategy on AI, published in April 2018, emphasised the need to encourage
the development of AI applications centred on people's needs in terms of health-related services and
long-term care.¹¹ The strategy builds upon Europe's advantages in terms of scientific and industrial
development, while it seeks to increase investments in AI (both public and private), prepare for
disruptive socio-economic changes and support an adequate ethical and legal framework.
The strategy also set out a proposal for working with Member States to develop a coordinated plan on
AI in order to maximise the impact of investments both at EU and national levels and foster synergies
and cooperation across the EU. The proposal for a coordinated plan was signed by Member States and
4. European Parliament, Committee on Legal Affairs, Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), Rapporteur: Delvaux, Mady.
5. Healthcare Robotics: Current Market Trends and Future Opportunities: https://www.roboticsbusinessreview.com/healthmedical/healthcare-robotics-current-market-trends-and-future-opportunities/.
6. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC.
7. See European Parliament resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088(INI)), stating that "the existing system for the approval of medical devices may not be adequate for AI technologies" and calling on the Commission to oversee the evolution of these technologies to assess whether other changes will be needed in the future.
8. Pesapane, F., Volonté, C., Codari, M. and Sardanelli, F., 2018. Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insights into Imaging, 9(5), p. 745.
9. Stahl, B.C. and Coeckelbergh, M., op. cit.
10. See European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
11. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Artificial intelligence for Europe (COM(2018) 237).
the European Council in June 2018.¹² The coordinated plan on AI was published on 7 December 2018.¹³
In terms of investments, the plan proposes to boost the EU's potential in AI, including by increasing
investments under the Horizon 2020 research programme in the period 2018-2020. The Commission
has proposed an investment of at least EUR 1 billion per year coming from the Horizon Europe and
Digital Europe programmes under the next programming period 2021-2027. The expectation is that
Member States and the private sector will also contribute, and that total investments coming from
the public and private sectors will gradually reach around EUR 20 billion per year, which would be
equivalent to investments made by other continents (e.g., North America or Asia).¹⁴
The coordinated plan on AI also acknowledges that bringing Europe to the forefront of AI development
requires a combination of factors:
• Sufficient public investment in AI, especially in those areas or applications where the public sector is needed. For instance, some applications of AI and robotics in the health area hold vast potential benefits but either have a limited market or are subject to significant spillovers; these applications might have to be incentivised via public funding;
• Good collaboration with the private sector, academia and SMEs, which is essential to achieve the required investments and outcomes;
• A regulatory and ethical framework that incentivises innovation while addressing potential risks and uncertainties arising from the uses of AI, which is essential to build the trust and acceptance of users (patients and healthy citizens alike);
• In addition, for the healthcare sector, preconditions such as the necessary e-health infrastructure (e.g., electronic health records) and digital health services and products.
The challenges and opportunities stemming from AI and robotics applications in all areas, including
healthcare, are currently being debated by many experts and stakeholders. For instance, a high-level
expert group on artificial intelligence is currently supporting the implementation of the European
strategy on artificial intelligence through recommendations related to policy developments and
ethical, legal and societal challenges related to AI.¹⁵
12. https://ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence.
13. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Coordinated Plan on artificial intelligence (COM(2018) 795 final).
14. European Commission, USA-China-EU plans for AI: where do we stand? Digital Transformation Monitor, January 2018.
15. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence.
2. PROCEEDINGS OF THE WORKSHOP
2.1. Introduction
2.1.1. Welcome and opening
MEP Mr Alojz PETERLE, Co-Chair of the ENVI Health Working Group
Mr Alojz PETERLE, MEP, opened the event by thanking the audience and all the speakers. He introduced
the topic of the workshop and noted the importance of EU health in the face of changing
demographics, which pose ever increasing challenges for the care and support of the elderly. He shared
his personal experience of visiting Japan, where the shortage of nurses is driving demand for the use of
robots in aged care homes. He mentioned that in Japan approximately 5000 aged care homes
are testing robots to care for the elderly, who represent over a quarter of the population.
He then posed the question of whether robots can play a key role in filling workforce gaps in
care services across Europe, highlighting the example of Germany, where the share of elderly people is
growing faster than that of the younger generations able to care for them.
Mr Peterle commented that there are many other opportunities for robotic and artificial intelligence
(AI) applications in healthcare, which attract the attention of regulators due to the challenges they
present to existing legal frameworks and the new legal, social and ethical questions they raise. He
stated that robots and AI present many new and complex challenges related to human dignity, security,
privacy, safety, employment and liability, which justify the need for developing new laws and principles.
Mr Peterle mentioned that the report on a comprehensive European industrial policy on AI and
robotics, produced by the Committee on Industry, Research and Energy and released earlier this year,
addresses many of these issues.¹⁶ The report cites a Eurobarometer survey which
suggests that Europeans are uncomfortable with the use of robots in everyday healthcare.
Mr Peterle introduced Ms Mady DELVAUX-STEHRES (MEP), JURI Vice-Chair and Rapporteur of the Report
with recommendations to the Commission on Civil Law Rules on Robotics. Ms Delvaux-Stehres
welcomed the occasion to emphasise that applications of AI and robotics in healthcare deserve special
attention, since they address important social needs and involve the inclusion of vulnerable persons.
She informed the panel that a high-level expert group recently established by the
European Commission is tasked with defining guiding principles for the application of robotics and AI.
She hopes this new development will provide an opportunity to define principles that, beyond
covering all the different areas where AI and robotics are used, ensure that these technologies
are deployed in ways that serve humans and uphold European fundamental values. A key challenge is
to ensure these objectives are realised in practical ways; in this context, it is important that experts
in the fields of science, engineering and law are consulted so that the right questions are asked.
16. For more information, see: http://www.europarl.europa.eu/doceo/document/A-8-2019-0019_EN.html.
2.2. Panel 1: Practical applications of artificial intelligence (AI) and robots
in healthcare
2.2.1. Current use of robots in clinical practice (inside the body, on the body and outside
the body) and its perspectives
Professor Alexandre MOTTRIE, Head of the Urology department, Onze-Lieve-Vrouw Hospital,
Aalst, Belgium
Professor Alexandre MOTTRIE began his presentation by emphasising the benefits of robotic tools,
which have improved the quality and precision of surgical procedures. He argued
that robot-assisted technologies can give rise to high-volume centres, where operations can be
shown to achieve a lower rate of post-operative complications, resulting in fewer
readmissions and reduced costs. He pointed to the high rates of complications that result from
classical surgery, with one out of three patients undergoing bowel surgery at risk of minor or major
complications. For classical hernia surgery, the complication rate is one out of six. Data from
the US show that the annual cost of surgery is estimated at USD 170 billion, with an estimated
USD 41 billion spent on readmissions due to complications. According to national population studies
in Europe, just over half (52%) of all surgical complications were unexpected. In view
of this, Prof. Mottrie stressed that the use of robotic tools in surgery should focus on lowering
readmission rates by up to 50%, which could contribute approximately USD 10 billion in savings
annually.
Prof. Mottrie continued his presentation by showing examples where applications of robot-assisted
technologies in surgery and other health settings are growing, making surgery safer and more
accurate. The first example concerned the Stryker Mako robot which, compared to the most
experienced surgeons, can more accurately restore a patient’s hip anatomy, ensuring optimal leg
length. In another example, innovations in fluorescence imaging technology have enabled surgeons
to reconstruct bowels. This innovation has virtually eliminated the complications associated with this
type of surgery, with not a single bowel leakage reported. In the final example, Prof. Mottrie
demonstrated how robotic bronchoscopy can achieve a 95% success rate in detecting cancer lesions
for lung biopsies. Currently, only two out of three bronchoscopists are able to perform a successful lung
biopsy. He argued that in the future this type of treatment is likely to lead to focalised treatment, where
the cancer is not only detected but removed shortly thereafter, leaving the patient cancer-free within
24 hours.
Professor Mottrie concluded his presentation by stressing the importance of updating training.
Traditional methods of training surgeons in the operating theatre will eventually become redundant
with the introduction of simulation. This evolution will call for defining and standardising new EU
training pathways for improving the skills of surgeons and for creating a new system of certification
and re-certification. He emphasised that training should move away from the 'see one, do one'
approach towards quality assurance, with the aim of reducing the cost burden on the health system by
lowering complication rates by over 50%. Currently, this type of training is being piloted through
'CC-ERUS', developed by ORSI Academy, a think tank of partners comprising industry, insurance,
academia and scientific experts. In 2013, the pilot resulted in the first validated robotic curriculum being
rolled out in over 39 different robotic centres in Europe. However, in order to improve standardisation,
assistance from the EU is required to support further studies.
2.2.2. Robots in general service delivery of healthcare establishments
Dr Kathrin CRESSWELL, Chief Scientist Office Chancellor's Fellow, Director of Innovation, Usher
Institute of Population Health Sciences and Informatics at the University of Edinburgh, UK
Dr Kathrin CRESSWELL focused her presentation on discussing findings from research she published
in the Journal of Medical Internet Research in 2018 on existing social and likely future challenges
of robotics applications in healthcare. Dr Cresswell began by providing an overview of current robotic
applications in healthcare, ranging from back-office applications (e.g., pharmacy stock control) and
human tools (e.g., robotic surgery) to semi-autonomous (e.g., service robots) and autonomous robotic
applications (e.g., humanoids). Her research suggests that robotic applications offer significant
opportunities for improving safety, quality and efficiency. However, there are four key barriers that
need to be addressed in order to maximise these benefits.
One barrier concerns the absence of a clear pull from professionals and patients. Dr Cresswell stated
that negative attitudes and concerns from the public, patients and healthcare staff appear to be
contributing to a lack of demand and acceptance for some robotic applications in healthcare settings.
These attitudes appear to be strongly influenced by perceived threats to professional roles (e.g., job
losses) and by portrayals in popular media (e.g., Terminator). Overall, a trusting relationship between healthcare staff
and patients was perceived to require human input. She pointed to the importance of recognising
cultural differences in the acceptance of robots and gave the example of Japan where robots are
culturally more embedded.
Another barrier concerns the at times contested nature of robotic appearances. Dr Cresswell referred
to the ‘uncanny valley’ phenomenon to illustrate the point that humans tend to be suspicious of robots
that resemble humans as they may be seen as a threatening “ghostly human counterpart”. In some
instances, robots also fall short of human expectations which may result in them not being used
effectively. One approach to addressing this problem involves developers designing zoomorphic robots
(e.g., a seal), on the basis that people have no prior experience of interacting with a seal and any
potential feelings of aversion are therefore avoided.
A further barrier relates to the integration of robotic applications into existing healthcare staff work
practices. She emphasised that robotic applications may have difficulty in reconciling the tensions that
exist in the healthcare industry between standardisation through automation versus the unpredictable
nature of healthcare work. Robotic applications designed to be used as tools in specific settings with a
limited number of humans around them (e.g., surgical robots) were perceived as being less
difficult to implement than those designed to operate in human-dense surroundings.
The fourth and final barrier relates to the emergence of new ethical and legal challenges. Dr Cresswell
pointed to the absence of any existing liability or ethical framework and the difficulty of law keeping
up with the rapid pace of technological developments in this area. She reiterated that while regulation
is important, it should be designed in a way that promotes routine use without stifling innovation.
Dr Cresswell concluded her presentation by emphasising that each type of application poses a different
set of questions and challenges, which can be anticipated through systematic formative evaluation.
They can then be addressed either by modifying the technological design or the social environments
where such applications are used.
2.2.3. Other healthcare related areas of implementation of robot technologies
Professor Daniel DAVID, Professor of Clinical Psychology and Psychotherapy at the University of
Cluj-Napoca, Romania
Professor Daniel DAVID began his presentation by outlining a framework for robotic applications in
mental health treatment. He defined the scope of this framework, which focuses on three key areas:
mental disorders, the prevention of mental disorders, and optimization (e.g., self-regulation of
emotions). These three areas are supplemented by four key components of treatment: assessment,
conceptualisation of the problem, psychological intervention, and the counselling or therapeutic
relationship. Underlying this framework is the provision of a personalised, evidence-based approach.
He noted that many studies by professionals from fields ranging from engineering to computer science
have addressed different aspects of this framework to test the effects of robot-enhanced
psychotherapy. He sought to capture the results of this research in his own 2014 meta-analysis.
His findings showed that only 12 of 861 studies were sufficiently rigorous to demonstrate the efficacy
of robot-enhanced therapy. Most studies were excluded because they focused on describing the
process of robotic development rather than measuring psychological outcomes. This was largely due
to a lack of expertise in conducting clinical and psychological research, since rigorous methods require
attention to ethical issues and conditions such as control groups and quantitative data. He argued that
this finding justified the need for a robot-based psychotherapy framework that would enable the field
to develop using rigorous scientific methods.
Based on the framework described earlier, and to support studies testing the efficacy of robots, he
identified three key roles for robots in mental health. The first is the ‘Robot-therapist’, where the robot
completely replaces the therapist for certain specific psychological interventions; in this case, the
therapist focuses on elaborating the protocol and supervising the robot. The second is the
‘Robo-mediator’, which makes the therapist’s work more efficient by having the robot take on a specific
mediating role. The third is the ‘Robo-assistant’, where the robot is used as a tool to optimize the
therapist’s activities. For each role, three kinds of study are needed to test the efficacy and
effectiveness of robot-enhanced psychotherapy: outcome studies, studies of the mechanism of change
(to understand why the therapy works), and cost-effectiveness studies.
Prof. David continued his presentation by describing a number of studies and projects where these
conditions were fulfilled. The first example he provided was the ‘Dream Project’, which investigated the
use of robots in the treatment of children with Autism Spectrum Disorder. The project ran the first
large-scale clinical trial testing the effectiveness of robots in modelling human behaviour (e.g., verbal
and non-verbal communication techniques) and showed that children with autism engaged strongly
with robots used in this way. This, he argued, strongly suggests that robots could be used as mediators
between the therapist and the child.
Another example related to the use of artificial intelligence in the treatment of major depression. This
study tested whether an ‘Avatar’ could administer therapy to clients at home between sessions. The
avatar was able to administer various psychological tests and, through several sensors set up in the
patient’s home, to collect and integrate data on the patient’s emotions and behaviour. If the client
began experiencing problems, the avatar responded by establishing a cut-off point and providing
basic interventions; if the behaviour persisted, the avatar would signal these concerns to the therapist.
In another, similar example, Prof. David described the use
of a ‘roboRetman’ for helping children to regulate their emotions. He noted that an unexpected
development from these studies was the application of this type of therapy to the treatment of
auditory verbal hallucinations in individuals with psychosis.
Prof. David closed his presentation by emphasising that robot-assisted therapy should be viewed as a
technological development that enhances the effectiveness and efficiency of existing treatments. He
argued that robots and AI are best used in supervised applications, which is likely to create more trust
among stakeholders and improve the quality of therapy. However, more consideration should be given
to the ethical and safety issues associated with these technologies, such as the potential for robots to
be used as human replacements and issues of cybersecurity. Prof. David finished by stating that study
results demonstrating improvements in treatment can play an important role in shaping new values of
human-robot interaction related to human safety, efficacy and cost-effectiveness, for example by
changing the negative stereotypes that many people hold about the use of robots and AI in treating
and curing humans.
2.2.4. First round of Questions and Answers
Mr Peterle asked the panellists whether there were any points they would like to raise. Prof. Mottrie
was the first to respond, reiterating the importance of working within a framework where humans
decide how robots will be used. He agreed with the speakers from the first panel that robots are best
used in supervised applications.
Ms Delvaux-Stehres asked Prof. Mottrie if surgical robots should be compulsory for every surgeon and
who should be responsible for deciding which patients are operated on by this technology.
Ms Delvaux-Stehres also asked Dr Cresswell to clarify why she thought robotic applications are better
suited to back-office functions, when the presentation by Prof. David had highlighted the benefits of
these technologies in more direct interactions with humans.
Dr Cresswell replied that while there are examples of robots being used successfully in such settings,
in most cases they have limited reach and functionality. She added that the use of robots in back-office
functions can be characterised as ‘low-hanging fruit’, meaning they can be implemented more easily
than autonomous or semi-autonomous tasks. In part, the extent to which different applications can be
implemented at different times needs to be clarified through a better understanding of the difference
between robots and AI applications.
Prof. David added that education can increase the rate of acceptance of these technologies, including
by managing expectations. For this reason, it is important for interventions to be designed by
well-trained professionals, who are best placed to manage expectations. Ensuring that policy is
evidence-based is also important for these purposes, but the science is fundamental at the beginning
of and throughout the process. Dr Cresswell agreed, stating that she is a firm believer in conducting
formative evaluations, as policy-making is often slow to respond to emerging findings.
Prof. Mottrie replied to Ms Delvaux-Stehres’ questions by stating that training in robotic surgery should
be held to higher standards than all other types of surgical training. He stressed that regulation has a
key role to play in standardising training and that quality-assured training is the way forward in the use
of surgical robots. He emphasised that he would like to see the European Commission lead the
standardisation of this type of training and ensure it is applied throughout the healthcare system.
Mr Peterle explained the challenges of shaping the regulation of robotics and AI applications, which
stem from a limited understanding of needs and from the difficulty regulation has in keeping up with
the rapid pace of technological development. He explained that in other contexts, such as Japan, the use of robotics
is not human-focused, and that while he would prefer a more human-focused approach in Europe,
needs such as nursing workforce shortages may drive a different outcome. He also emphasised that
five years ago not enough was known about cybersecurity and the use of artificial accounts, which we
now know have been used to influence American elections and are likely to influence European
elections this year. He posed the question of who will be responsible if a robot makes a mistake in
surgery or an autonomous car crashes. These developments pose new questions and present new
challenges for regulators.
Prof. David added that another challenge for regulators is to ensure that new lines of research are not
blocked. As developments happen quickly in this field, restrictive action by regulators could force
research underground, where there would be no control over how these applications are used.
Mr Peterle acknowledged that this comment relates to ethical and legal dimensions that would be
addressed in the next panel.
2.3. Panel 2: Ethical evaluation and responsibilities of AI and robots in
healthcare
2.3.1. Ethical aspects of using robots in healthcare
Professor Raja CHATILA, Professor of Artificial Intelligence, Robotics and Ethics at Pierre and
Marie Curie University
Professor Chatila began by outlining the many domains in which robots and AI are used in healthcare,
from processing and analysing medical data for diagnosis to enhancing sensorimotor functions
through active prostheses. He described the benefits of these applications and how they can enhance
quality of life, for example by assisting humans through emotional support (e.g., robot companions),
helping people with impairments or disabilities perform or enhance motor functions (e.g., active
prostheses), or supporting surgeons in performing various surgical functions (e.g., interactive
instruments and augmented reality). He also described how robot and AI applications are being used
in predictive medicine to give people more control over their health, highlighting the example of IBM’s
Watson system, which uses statistics, a patient’s profile and other similar profiles to estimate the
probability of a person developing particular diseases. He noted that the most common applications
of robotics and AI fall in the category of processing and analysing medical data for diagnosis, within
the scope of the four ‘Ps’ of medicine (predictive, preventive, personalized, and participative).
Prof. Chatila continued by explaining that despite the many benefits of these applications, such
systems and devices, including the results they produce, have created new ethical and social risks and
tensions in the legal system. He outlined these risks, highlighting in particular the impacts on privacy,
human dignity and autonomy (e.g., isolation), the possibilities of human augmentation, and technical
dependencies that can hinder rather than foster learning (e.g., medicine without doctors). To show
how the use of data raises privacy issues and what can be done to minimize this risk, he gave the
example of the Health Datahub project in France, which provides a platform for the exchange of health
data between public and private institutions.
Prof. Chatila then emphasised the importance of ethics and values in guiding the application of these
technologies. He noted that the current ethical framework in medical practice, based on the principles
of beneficence (do good), non-maleficence (do no harm), autonomy (preserve human agency), and
justice (be fair), is a good starting point but insufficient for addressing all the ethical issues that arise
in AI-based systems. While the Commission for the Ethics of Research in
Information Sciences and Technologies (CERNA) has sought to address some of these issues by
developing recommendations concerning robots, Prof. Chatila argued that these are incomplete and
that specific principles are needed to guide the use of AI and robotised systems.
He then presented a set of additional ethical issues to consider for AI-based systems, noting that AI
and robotised systems, which deal with data, should be aligned with a particular set of values and
principles in order to achieve a degree of “technical dependability”. These values include transparency,
accountability, explicability, auditability and traceability, and neutrality or fairness. He explained that
AI-based systems should be transparent and foster trustworthiness by being explicit and open about
decisions informing the design of these systems. Accountability is also important: he defined it in
terms of liability and responsibility, entailing that humans are always involved in the chain of
command for any output produced by an AI-based system. Here, he stressed that it should be
humans who are ultimately responsible for AI-based decisions. Regarding explicability, auditability and
traceability, Prof. Chatila noted producers and developers of these systems should keep track of the
decisions they make and ensure these decisions are communicated transparently to users, so users
have an understanding of how decisions affecting them are taken. This is particularly relevant for
systems that operate with some level of autonomy. Lastly, he stressed that AI-based systems must be
based on neutrality or fairness in order to ensure that factors influencing outcomes are not biased.
Bringing his presentation to a close, Prof. Chatila emphasised the need for incorporating technical
approaches in the design of AI-based systems to ensure these systems are dependable and resilient.
He emphasised the importance of creating enforcement possibilities, such as validation and
certification requirements, which are needed to foster user trust in these systems and which regulation
can provide.
2.3.2. Main challenges (including legal ones) and opportunities of using robots in
healthcare
Associate Professor Robin L. PIERCE, Associate Professor at the Tilburg Institute for Law,
Technology, and Society of the University of Tilburg
Dr Pierce focused her presentation on three key policy challenges associated with the use of robotics
in healthcare: (1) the expansion in the roles and capacities of robots; (2) privacy and data protection in
care and clinical settings; and (3) robots as convergent technologies. According to Dr Pierce, these
challenges need to be considered during the robot design and implementation stages to ensure
outcomes are sustainable and desirable.
In discussing the first key policy challenge, Dr Pierce described the many ways in which the integration
of robots into healthcare settings is accelerating, with robots ranging from diagnostic aids in the
treatment of diabetes to social and cognitive coaches, therapists and companions. She highlighted the
growing potential of robots to provide specific therapeutic interventions thanks to built-in
semi-autonomous and autonomous features. While these can provide ‘technically limitless’
opportunities, she argued that less is known about the short- and long-term consequences of building
automated interventions into these capacities. She stressed that robots will play a substantial role in
the collection and processing of data used for monitoring health, which raises a number of privacy
and data protection issues.
On this theme, Dr Pierce noted that privacy and data protection issues fall into two dimensions:
regulatory and principled. The regulatory dimension relates to the General Data Protection Regulation
(GDPR), which she argued is limited in addressing the use of robots in healthcare and clinical settings.
For example, data minimization (Article 5) requires data processing to be relevant and limited to the
purposes for which the data is collected. This, she argued, is problematic when data minimization is
applied to health-monitoring data, as all types of information collected for this purpose
(e.g., behaviour changes, speech patterns, geospatial monitoring) can be interpreted as relevant to
health status. Another important aspect she raised is the complexity of consent requirements, which
need to be defined in relation to data processing, care and intervention.
In discussing the principled dimension, Dr Pierce stressed that the growing use of robotic interactions
in the home raises wider considerations of privacy values. She argued that the right to
self-development, self-actualisation and the right to family and private life require heightened data
and privacy protections, and raise new issues concerning protections for third parties present in the
home.
Dr Pierce finished her presentation by highlighting the regulatory complexity posed by the use of
robotics and AI in healthcare. In particular, she noted that the healthcare domain comes with its own
set of deeply entrenched and heavily regulated ethical principles (e.g., confidentiality, the
doctor-patient relationship, shared decision-making, and resource allocation). Regulation will
therefore need to extend beyond technical requirements to address the intersecting normative
frameworks created by the convergence of AI and robotics in the healthcare domain.
Mr Peterle noted that, given the many emerging privacy challenges facing Europe (e.g., the
manipulation of elections), this topic will be addressed in the next mandate.
2.3.3. Socio-economic rationale of implementing robot technologies in healthcare
Dr Andrea RENDA, Senior Research Fellow, Centre for European Policy Studies and Digital
Innovation Chair at the College of Europe
The final speaker of the workshop, Dr Renda, began his presentation by discussing the socio-economic
challenges currently affecting healthcare in Europe. He outlined many issues that may justify the use
of AI and robotics in healthcare, including the ageing population, shortages of healthcare workers,
differentiated points of care, waste and the overuse of prescription medicine, a growing rise in lifestyle
diseases, long diagnosis timelines, and the disproportionate rise of chronic and non-communicable
diseases in poorer populations. He argued that AI and robot technologies are unlikely to solve all of
these challenges on their own, with most of the opportunities arising from high-performance
computing, big data (e.g., 5G/6G connectivity), nanotechnology and the Internet of Things (IoT).
Thanks to this emerging ‘technological stack’, machines are now able to perform functions more
accurately and efficiently. Unleashing the potential of AI and robotics will therefore require good
integration with the existing and future ‘technological stack’. To illustrate this, he described how the
race for supercomputers could eventually lead to ‘cyborgization’, where humans gain enhanced
functions and information reporting simply by wearing implanted or wearable devices that can be
connected and used to produce and communicate data. Such developments are very likely to extend
to the healthcare industry with the emergence of ‘mass customisation’, where, for example, a patient
can be administered a personalised pill based on their specific profile.
He continued by stating that a first look at these developments suggests enormous promise for
improving the effectiveness and efficiency of healthcare, mainly in the areas of optimization and
prediction. These include real-time prevention and more accurate monitoring over time, reduced
waste and greater efficiency in healthcare delivery, mass customisation, and monitoring of the
cost-effectiveness and overall performance of health therapies throughout the life cycle. He then
added that AI and robotics can also raise potential risks, some of which had already been addressed by
previous speakers on the panel. Among the potential risks not yet addressed was the possibility that
such technologies could either reduce inequalities or exacerbate them. For instance, Dr Renda
described how ‘junk’ AI (e.g., a cheap alternative to standard healthcare) could be used for certain
portions of the population and in certain countries to save healthcare costs. If not kept under control,
he argued, this could be detrimental to good-quality healthcare by worsening biases that already exist
in society.
Dr Renda also briefly addressed the risks to privacy and of a loss of agency and self-determination. He
highlighted that when a device communicates specific data, it will be very important for the patient
to know which type of data is being communicated, how it is being translated into a decision, and
whether the decision is based on cost-effectiveness or on the person’s wellbeing. He added that ethical
considerations are not always easy to define, particularly when robots make decisions autonomously.
For example, there might be circumstances where the most accurate and effective decision-making
techniques are the least explainable to the patient. These and other considerations are being
developed further in the ethical guidelines being drafted by the European Commission’s High-Level
Expert Group on AI. He noted that the real challenge is not only capturing all the ethical considerations
but knowing how to implement them.
Dr Renda concluded his presentation by emphasising that AI and robotics in healthcare should be held
to higher standards than other, more general applications, particularly as this is an area where the
principles of design-for-all and beneficence are required to ensure equitable access and active
participation by all. He also underlined the importance of dedicated public funding to ensure that
specific targets for societal wellbeing are achieved, suggesting Horizon Europe, which he hopes will
include specific targets for healthcare by 2021, as the source of this funding. Finally, he stated that he
hopes the issues raised by all speakers during the workshop will be considered in setting concrete
targets, so that the complementary development of humans and machines is ensured and healthcare
developments maintain the goal of improving all citizens’ health and wellbeing in the medium and
long term.
2.3.4. Questions and Answers
Mr Alojz PETERLE opened the floor to discussion.
After thanking the speakers for contributing to the workshop, Professor Mottrie took the opportunity
to emphasize that the aim of using robotic tools in surgery is to minimize the need for invasive
procedures, which is likely to lead to better results for patients. To illustrate his point, he used the
example of kidney stone treatment: today lasers are used to remove kidney stones, whereas 50 years
ago larger incisions were required. He argued that rather than completely eradicating the need for
surgery, such developments have enabled surgeons to perform surgery better and with less damage.
Next, Mr Miklós GYÖRFFI asked the panel to address the issues of cybersecurity and of the impact of
rising costs on the healthcare system. He remarked that EU policy on cybersecurity tends to take a
macro approach by focusing on big systems, while the use of AI and robotics in healthcare requires a more
personalised approach to cybersecurity. On the second issue, he noted his concern about skyrocketing
healthcare costs in the developed world and asked the panel to clarify how new technologies are
contributing to reducing these costs and to maintaining balanced spending on healthcare.
Professor Chatila replied by agreeing on the need for specific approaches to cybersecurity in
healthcare. He argued that privacy in the healthcare domain is paramount and that special measures
will be needed to protect patient data; one approach could be the use of datahubs. He emphasised
that, given the pervasiveness of cybersecurity threats, we should not expect such measures to be
completely bulletproof. However, the threat can be minimised by treating patient data as a special
kind of data. He added that measures to protect patient data will need to focus on the collection and
control of data.
On the second question, Prof. Chatila highlighted that predictive medicine was developed in response
to the rising costs of healthcare: it reduces the need for treatment, which makes up a significant
portion of costs in the health system. He also stressed that he hopes the healthcare industry will avoid
following the same path as the manufacturing industry, which prioritised automation over humans as
a way of reducing costs. He argued that if machines replace humans, this would change the way we
provide services to patients; this can be avoided by creating a system based on robots as helpers or
aids.
On the issue of privacy and cybersecurity, Dr Renda remarked that two issues can arise in the collection
and use of data which, if not addressed, can undermine the overall effectiveness of the system. The
first concerns personally identifiable information, which falls under privacy laws, and the
consequences that arise when information is shared with third parties, which can result in profiling or
even more intrusive or discriminatory decision-making. The second is the potential for a patient’s
therapeutic care and overall care path to be corrupted. Dr Renda argued that there are no obvious
solutions available today for these issues. One possibility is to design systems that support humans in
identifying potential threats from the outside, such as an AI personal assistant that helps navigate
threats and provides protection from hacking. However, he argued, this requires separating the AI
system that becomes personal from the AI system used elsewhere, which is challenging to implement,
as AI is likely to become so pervasive that it develops both as a form of support to the individual and
as a means of meeting needs on the supply side.
Finally, Dr Renda reiterated his earlier comment on the potential for ‘junk’ AI technology in healthcare.
He emphasized that situations could arise where having a human-in-the-loop becomes a luxury,
because it will be much cheaper to have the machine operating on its own. He argued this would be
disastrous, as evidence shows that machines do make mistakes.
A speaker from the audience asked the panel to comment on the extent to which the
human-in-the-loop principle, whereby the final decision rests with a human, can be applied in the
context of surgery and in healthcare more broadly.
Dr Pierce replied by referring to Article 22 of the GDPR on automated decision-making, under which a
person is entitled to an explanation of how decisions affecting him or her were made. She argued that
it is highly unlikely that a robot can provide this information, and that this would present problems for
the doctor-patient relationship, which is legally required. She added that it also raises questions about
the right to challenge an automated decision, which would be difficult to circumvent for both legal
and ethical reasons.
Adding to Dr Pierce’s response, Dr Renda pointed to the inevitability of AI-based systems making
decisions with moral dimensions, using the example of the trolley problem in self-driving cars to show
that programming will require machines to make trade-offs between life and death. For this reason, it
is not enough simply to state that decisions must be taken by humans. He also stated that in the US,
general practitioners are becoming increasingly reluctant to override decisions made by algorithms,
as doing so risks exposing them to liability. This, he argued, requires designing liability rules in a way
that gives doctors a duty to override the system when they see that it is retrieving information or
formulating decisions that are inaccurate or dangerous. Without such considerations, he argued, we
risk creating a system in which doctors are slaves to the machines.
Dr Renda concluded by briefly addressing the issue of skills. He argued that in addition to providing
skills complementary to those of machines, curricula will need to prioritise training future doctors and
nurses to deal with situations in which the machine makes a mistake. He used the example of the Air
France plane crash that occurred 10 years ago, in which pilots were unable to take over from the
automated systems in extreme conditions, and highlighted that, in imagining the skills of the future,
we should ensure that doctors remain doctors in situations when machines get it wrong.
2.3.5. Closing remarks by the Chair
Mr Alojz PETERLE wrapped up the session by thanking the speakers and the audience for their
contributions. He reiterated that AI, robots and digitization will mark the EU healthcare system in the
years to come, and closed by remarking that while it is unclear whether AI will serve to humanize
societies or contribute to more social differentiation, it is hoped that these applications will remain
instruments used for a more personalised approach to health.
ANNEX 1: PROGRAMME
Organised by the Policy Department A
Economic, Scientific & Quality of Life Policies for the Health Working Group
of the Committee on the Environment, Public Health and Food Safety (ENVI)
The purpose of this workshop is to inform participants and Members of the ENVI Committee about the
current status and potential applications of robotic and artificial intelligence (AI) in healthcare. The first
part of the workshop will provide an overview of current uses and potential applications of robotics
and AI in healthcare including clinical practice, non-clinical activities in healthcare establishments
and other healthcare activities such as rehabilitation or long-term care. The second part of the
workshop will discuss the ethical and legal challenges as well as socio-economic implications of using
robots and AI in healthcare.
Workshop
“Robots in Healthcare: a solution or a problem?”
Tuesday 19 February 2019 from 15:00 to 17:00
Altiero Spinelli Building, ASP A1 G-2
European Parliament, Brussels
Chair: Mr Alojz PETERLE (MEP)
AGENDA
15:00 - 15:05 Opening and welcome by the Chair Mr Alojz PETERLE (MEP)
Panel 1 - Practical applications of artificial intelligence (AI) and robots in healthcare
15:05 - 15:15 Current use of robots in clinical practice (inside the body, on the body and
outside the body) and its perspectives
Prof. Alexandre MOTTRIE, Head of the Urology department, Onze-Lieve-Vrouw
Hospital, Aalst, Belgium
15:15 - 15:25 Robots in general service delivery of healthcare establishments
Dr Kathrin CRESSWELL, Chief Scientist Office Chancellor's Fellow, Director of
Innovation, Usher Institute of Population Health Sciences and Informatics at the
University of Edinburgh
15:25 - 15:35 Other healthcare related areas of implementation of robot technologies
Prof. Daniel DAVID, Professor of Clinical Psychology and Psychotherapy at the
University of Cluj-Napoca
15:35 15:50 Questions & Answers
Panel 2 Ethical evaluation and responsibilities of AI and robots in healthcare
15:50 16:00 Introduction to Panel 2 by the Chair Mr Alojz PETERLE (MEP)
16:00 16:10 Ethical aspects of using robots in healthcare
Prof. Raja CHATILA, Professor of artificial intelligence, Robotics and Ethics at Pierre
and Marie Curie University
16:10 - 16:20 Main challenges (including legal ones) and opportunities of using robots in
healthcare
Associate Prof. Robin L. PIERCE, Associate Professor at the Tilburg Institute for Law,
Technology, and Society at the University of Tilburg
16:20 - 16:30 Socio-economic rationale of implementing robot technologies in healthcare
Dr Andrea RENDA, Senior Research Fellow and Head of Global Governance,
Regulation, Innovation and the Digital Economy at the Centre for European Policy
Studies and Digital Innovation Chair in the European Economic Studies Department at
the College of Europe
16:30 - 16:40 Questions & Answers
Closing Session
16:40 - 17:00 Conclusions and closing by the Chair Mr Alojz PETERLE (MEP)
ANNEX 2: SHORT BIOGRAPHIES OF EXPERTS
Professor Alexandre MOTTRIE
Professor Alexandre MOTTRIE, MD, PhD, graduated in 1988 from the School of Medicine at the
Catholic University of Leuven, Belgium. He completed his residency in Urology in 1994 and successfully
defended his Ph.D. at the University of Saarland, Homburg-Saar, Germany. Professor Mottrie is a
pioneer in robotic surgery, which he began practising in 2001, and he has developed several robotic
surgical procedures. He introduced laparoscopic and robotic surgery at his department, which became
a training center in this field, and he has trained numerous colleagues from all over Europe and beyond.
Having performed over 4,000 robotic procedures, he has one of the most extensive track records in that field.
In 2010, Prof. Mottrie founded the ORSI-Academy, an innovation center in robotic and minimal-invasive
surgery. As its CEO, he conducts basic research on improving training and education in surgery. He has
authored multiple scientific papers and organised several international Congresses and Masterclasses
in these fields. He has been actively involved in multiple congresses by performing live-surgery, giving
courses and/or presenting state-of-the-art lectures. He is the Scientific Director of the ERUS-congresses.
Currently, he is the president of the EAU Robotic Urology Section (ERUS), past-president of the Society
of Robotic Surgeons (SRS) and the past-president of the Belgian Laparoscopic Urology Group (BLUG).
He is the current Editor of the Surgery-in-Motion Section of European Urology. He is Associate Professor
at the Universität des Saarlandes, Homburg-Saar (Germany) and the University of Ghent (Belgium). He
received the “Golden Telescope Award” at the Hamlyn Symposium of the Imperial College in London
(20/06/2015) for lifetime achievements in the robotic field.
Dr Kathrin CRESSWELL
Dr Kathrin CRESSWELL is an experienced social scientist who has worked in the field of medical
informatics for over a decade, evaluating large-scale digitally enabled change projects in healthcare
contexts. During her career, she has collaborated with international leaders, published over 80
peer-reviewed papers, and consulted for the World Health Organization and Harvard Medical School.
She currently holds a prestigious Chief Scientist Office Chancellor’s Fellowship and is also Director
of Innovation for the Usher Institute of Population Health Sciences and Informatics at the University
of Edinburgh, UK.
Dr Daniel DAVID
Dr Daniel DAVID is the “Aaron T. Beck” Professor of clinical cognitive neurogenetic sciences at
Babes-Bolyai University, Cluj-Napoca, Romania, and an adjunct professor at the Icahn School of Medicine
at Mount Sinai, New York, USA. Dr Daniel DAVID is the director of the International
Institute for the Advanced Studies of Psychotherapy and Applied Mental Health. He is also the Director
of Research at the Albert Ellis Institute, New York, USA. Dr Daniel DAVID is a top international expert in
evidence-based assessment and psychological interventions combined with technology, including
virtual/augmented reality assessment/therapy and robotherapy. He is certified as a trainer/supervisor in
cognitive-behavioral interventions by: (1) the Academy of Cognitive Therapy, USA and (2) the Albert
Ellis Institute, USA. He is currently the most cited psychologist from Romania in the international
literature for his scientific contributions, according to Thomson Reuters Web of Science/Google Scholar.
IPOL | Policy Department for Economic, Scientific and Quality of Life Policies
24 PE 638.391
Professor Raja CHATILA
Raja CHATILA, IEEE Fellow, is Professor of artificial intelligence, Robotics and Ethics at Pierre and
Marie Curie University in Paris, France. He is director of the SMART Laboratory of Excellence on
Human-Machine Interactions and former director of the Institute of Intelligent Systems and Robotics.
Throughout his career, he has contributed to several areas of artificial intelligence and autonomous and
interactive robotics. His research interests currently focus on human-robot interaction, machine learning
and ethics. He was President of the IEEE Robotics and Automation Society for the 2014-2015 term. He is
chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, member of the
High-Level Expert Group on AI with the European Commission, and member of the Commission on the
Ethics of Research on Digital Science and Technology (CERNA) in France.
Associate Professor Robin L. PIERCE (JD, PHD)
Robin PIERCE, JD, PhD, is associate professor at the Tilburg Institute for Law, Technology, and Society
(TILT) in the Netherlands. She obtained a law degree (Juris Doctor) from the University of California,
Berkeley and a PhD from Harvard University where her work focused on genetic privacy. Currently, her
work focuses on AI in medicine, addressing translational challenges for the development and
integration of emerging technology for clinical and health applications, applying legal, ethical, and
policy analysis to complex questions of research, translation, and uptake. She has taught courses in
Data Protection and Privacy, Regulation (Law and Technology Masters Program), as well as courses in
The Ethical Basis of Public Health and Healthcare Delivery (HSPH), Legal and Ethical Issues in
Biotechnology (TU Delft), and Social Issues in Biology (Harvard Medical School). She has served on
numerous research ethics committees, including Harvard School of Public Health, Harvard University,
and Harvard Medical School hospitals. She has published across disciplines in such journals as
European Data Protection and Law Review, Social Science and Medicine, and The Lancet. She serves on
the editorial board of the Journal of Bioethical Inquiry. She leads the Health Law, Ethics, and Technology
initiative at TILT.
Dr Andrea RENDA
Dr Andrea RENDA is a Senior Research Fellow at CEPS, where he directs a research group on Global
Governance, Regulation, Innovation and the Digital Economy (GRID). He is a non-resident Senior Fellow
at Duke University’s Kenan Institute for Ethics. Since September 2017, he has held the “Google Chair” for
Digital Innovation at the College of Europe in Bruges (Belgium). For this academic year (2018/2019), he
is also a Fellow of the Columbia Institute of Tele-information (CITI) at Columbia University, New York.
His current research interests fall at the intersection of technology and policymaking and include
regulation and policy evaluation, regulatory governance, innovation and competition policies, and the
ethical and policy challenges of emerging digital technologies. He is a Member of the High-Level Group
on Economic and Social Impacts of Research of the European Commission, DG RTD; a member of the
European Commission High Level Expert Group on artificial intelligence; a member of the European
Commission's Blockchain Observatory and Forum; and a member of the Italian Expert Group on AI set
up by the Italian Ministry of Economic Development. He leads the CEPS Task Forces on artificial
intelligence and blockchain.
PE 638.391
IP/A/ENVI/2018-16
Print ISBN 978-92-846-4771-2 | doi: 10.2861/641603 | QA-01-19-370-EN-C
PDF ISBN 978-92-846-4770-5 | doi: 10.2861/146274 | QA-01-19-370-EN-N