How the introduction of innovative information technologies & big data affects bioethical theory & practice
Chrysanthi Sardeli, MD, PhD
Obstetrician-Gynecologist, Assoc. Professor of Pharmacology-Clinical Pharmacology,
Ethics Expert
Department of Clinical Pharmacology, School of Medicine, Faculty of Health Sciences, AUTh
Laboratory for the Study of Medical Law and Bioethics, Faculty of Law, AUTh
Directorate-General for Research and Innovation, European Commission
Conflict of Interest
• None relevant to this talk
Healthcare & the 4th Industrial Revolution
Consider these questions
• If forced to choose, should an AI-driven system save a pregnant woman or her fetus?
• Should algorithms draw on people’s race, gender or religion if it makes them more efficient?
• When a colleague has automated their work routine to cut down on time and costs, should we do the same to keep up?
Healthcare & the 4th Industrial Revolution
What is changing?
Transformation/improvement (?) of:
• Health services (electronic health records, prescribing, diagnostic image interpretation, insurance data, treatment delivery, etc.)
• Biomedical research activities (clinical trial records, biobanks, registries, genomic databases, etc.)
• Public health policies (immunization record-keeping, disease surveillance, vital statistics, etc.)
The issues arising
• Epistemological: the value of evidence generated through analysis of disparate data, quality of data analyzed, data usage & re-usage (formatting/metadata), concerns regarding systems used & their capabilities, …
• Technical: the development of robust data security systems, liability, autonomy or even emancipation of AI systems, …
• Ethical: values, fundamental human rights & liberties are endangered; accountability, responsibility, integrity and trust become uncertain
Big data analytics
• Novel analytics techniques (incl. machine learning & neural networks) disclose novel patterns & inferences about health
• The means to achieve these commendable results do not always conform with universally accepted transparency standards of evidence-based medicine, and by displacing the role of human agents they make issues of accountability and liability more puzzling than ever
"Black-box" medicine
• The use of opaque computational models to make decisions related to health care
• Combination of large-scale, high-quality datasets with sophisticated predictive algorithms to identify and use implicit, complex connections between multiple patient characteristics
• A defining feature of "black-box" medicine is that these algorithms are non-transparent: the relationships they capture cannot be explicitly understood, and sometimes cannot even be explicitly stated
• The lack of transparency arises from the nature of the models' development; it is not deliberately engineered
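That non-transparency can be made concrete with a minimal, hypothetical sketch: a toy "risk model" built as a tiny multilayer perceptron with fixed weights. Every parameter is fully inspectable, yet the relationship between inputs and output has no compact human-readable statement. All names, features and numbers below are invented for illustration, not taken from any real system.

```python
import math
import random

random.seed(1)

# Toy "opaque" risk model: 4 hypothetical normalized patient features feed
# 8 hidden units, which feed a single risk output. The weights are random
# stand-ins for weights a real model would learn from data.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
W2 = [random.uniform(-1, 1) for _ in range(8)]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def risk_score(features):
    """Every multiplication here is visible, yet the overall input-output
    mapping cannot be restated as a short clinical rule."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

patient = [0.7, 0.2, 0.9, 0.4]  # hypothetical normalized measurements
score = risk_score(patient)     # a number in (0, 1), but "why?" has no short answer
```

The point of the sketch: full access to the model (all weights printed above) still does not yield an explicit statement of the relationships it encodes, which is exactly the defining feature described in the bullet.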
Algorithms & Ethical Challenges
AI vs. Machine Learning vs. Deep Learning
• AI: programs that can sense, reason, act and adapt (mimic human behavior)
• Machine learning: algorithms whose capabilities improve as they are exposed to more data over time, without explicit programming of new rules
• Deep learning: multilayered neural networks that learn from vast amounts of data
• Emancipation of machines
• Limited human control
• Liability issues
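The machine-learning bullet above ("capabilities improve with exposure to data, no explicit programming of rules") can be sketched with a minimal, stdlib-only example. The classifier below is a plain 1-nearest-neighbour rule; the synthetic "patients" and their labelling rule are hypothetical and exist only to show the learning-from-data property.

```python
import random

def nn_predict(train, point):
    """1-nearest-neighbour: copy the label of the closest training example.
    No diagnostic rule is programmed; behaviour comes entirely from data."""
    closest = min(train,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], point)))
    return closest[1]

def accuracy(train, test):
    return sum(nn_predict(train, x) == y for x, y in test) / len(test)

def sample(n, rng):
    """Synthetic 'patients': two normalized measurements; the hidden ground
    truth labels a patient high-risk when the measurements sum to more than 1."""
    data = []
    for _ in range(n):
        x = (rng.random(), rng.random())
        data.append((x, int(x[0] + x[1] > 1)))
    return data

rng = random.Random(0)
test = sample(500, rng)
acc_small = accuracy(sample(5, rng), test)     # trained on 5 examples
acc_large = accuracy(sample(1000, rng), test)  # trained on 1000 examples
```

With more training examples the same unchanged program tends to classify markedly better: the "improvement" lives in the data, not in any rule a programmer wrote, which is also why accountability for its mistakes is harder to locate.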
Impact of AI technologies
• Reduction of individuals’ control over their data
• Adverse effects on privacy
• Increase in discrimination
• Concentration of wealth and power in a ruling class
• Harm
• Threats of harm from autonomous systems (and weapons)
• Rise in surveillance
Personalized & precision (or stratified) medicine
• The use of increasing amounts of personal data, especially diagnostic genetic tests, to tailor treatments to an individual patient
• Personalized medicine has the potential to save & extend lives, to avoid unnecessary treatment, & to hasten & streamline the process of drug discovery, but can only use a limited set of relationships
• Adverse effects on privacy and informed consent, limited availability, high costs, discrimination, concentration of wealth and power
Human enhancement
• Physical enhancements (e.g. advanced prosthetics)
• Cosmetic enhancements (e.g. plastic surgery)
• Longevity enhancements (any intervention that extends the lifespan of an otherwise healthy individual; such interventions are under development, e.g. anticipated age-slowing therapies)
• Affective and emotional enhancements (e.g. potentially some forms of neurostimulation)
• Cognitive enhancements (e.g. drugs that increase our ability to focus)
• Moral enhancements (e.g. anaphrodisiacs for correctional purposes)
Impacts of human enhancement technologies (positive or ambiguous)
• Improve human capability
• Advance human health and well-being: longer lifespans and life expectancy
• Enhance bodily integrity (stop/reduce effects of disease/disability)
• Improve individual abilities and inventiveness
• Mitigate social inequalities
• Reduce unfairness and injustice
• Alter human capability
• Affect human psychology
• Increase technological reliance
• Change perception of 'human'/way of being
• Change personal status and interpersonal relations
• Change freedom of choice and cultures of disabled people
• Transform workplaces
• Shift towards an 'enhanced' society
Impacts of human enhancement technologies (negative)
• Harm human health (health risks, addiction to HETs, increase in hacking threats to the body)
• Compel individuals to remove or change 'abnormal' or 'unenhanced' traits
• Cause resentment between enhanced and unenhanced people
• Create social inequalities (elites of enhanced, perpetuate social divisions)
• Diminish/damage human freedom, autonomy and bodily integrity
• Increase competition between humans
• Disrupt society (particularly vulnerable populations)
• Homogenize society leading to loss in diversity
• Promote unfairness and injustice
• Increase surveillance (monitoring of individuals)
• Facilitate ostracization of unenhanced or sceptics
• Give the pharmaceutical industry unprecedented power over human bodies
• Increase risks from greater reliance on technology, creating greater vulnerability in case of technological failure
Mobile phones/wearables’ data
• Data usage can help map the movements of infected individuals, providing information regarding high risk contacts & behaviors
• Sensors & apps help doctors detect disease & monitor treatment
• Invasion of privacy, "slut-shaming" & issues with informed consent & security; ownership/profit vs. public health interest & protection
Are patients safe or (at least) safer?
• Proponents argue machines don’t get tired, don’t allow emotion to influence their judgement, make decisions faster and can be programmed to learn more readily than humans
• Opponents say human judgement is a fundamental component of clinical activity and the ability to take a holistic approach to patient care is the essence of what it means to be a doctor (or generally speaking, a healthcare provider)
Ethical issues
• The widespread introduction of new healthcare technologies will help some patients but expose others to unforeseen risks (of physical harm & to privacy). What is the threshold for safety on this scale – how many people must be helped for one that might be harmed? How does this compare to the standards to which a human clinician is held?
• Who is responsible for harm caused by AI mistakes – the computer programmer, the tech company, the regulator or the clinician?
Ethical issues
• Should a doctor have an automatic right to overrule a machine’s diagnosis or decision? Should the reverse apply equally?
• Autonomy of the patient/participant should always be respected –what about physician/researcher autonomy?
• Intellectual property rights – who owns what when a new technology creates a new product/technology/treatment/procedure?
• Privacy & confidentiality
Ethical issues
• Validation, replication, trade secrecy – how can one test and validate something not understood or hidden & how can innovation be fostered if raw data are not readily available & accessible to all?
• Inclusion of new stakeholders in legal/ethical decision making – from states, pharmaceutical industry & health care providers to patients, data analytics industry and social media developers
• Individuals make their genetic and genomic data available for a fee, patients organize their own clinical trials or monetize their data, and corporate stakeholders take on leading roles in the health data ecosystem thanks to their advanced (and often proprietary) technical know-how and their role in setting technical standards
The Facebook research scandal
• (ok one of them!)
• Facebook conducted an experiment that involved selectively manipulating the contents of some users' news feeds to measure the effect of "emotional contagion", that is, the extent to which our moods are affected by those of others in our virtual proximity
• When this experimentation was revealed, public outcry swiftly followed: users were incensed at having been made the subject of research without their knowledge
• The research provoked an "editorial expression of concern" from PNAS, which noted that the study was "not fully consistent with the principles of informed consent and allowing participants to opt out"
• What about deprivation of agency for a third party's gain? However, check the Terms of Service users had accepted
Informed consent – or not-so-informed after all?
• What is a person receiving information about & giving informed consent to?
• Informed consent is virtually impossible to obtain online
• Not-so-informed consent is often given on rather unspecified terms regarding future use of biological samples (US & EU laws actually permit it, undermining individual control)
• No consent obtained/no opting out choice/no information in the case of citizen science projects or self-tracking activities or organizations
• New strategies develop: broad, dynamic, tiered, blanket, implied, open, portable legal consent
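One of those strategies, tiered consent, can be sketched as a data shape: the participant grants or declines specific categories of future use rather than giving one blanket "yes". The record structure, category names and helper below are hypothetical, shown only to make the contrast with blanket consent concrete.

```python
from datetime import date

# Hypothetical tiered-consent record: each category of future use is an
# explicit opt-in or opt-out, recorded with a date so it can be revisited
# (supporting the "dynamic" consent strategy as well).
consent = {
    "participant_id": "P-001",
    "granted": {"primary_care", "academic_research"},
    "declined": {"commercial_research", "data_sharing_abroad"},
    "recorded_on": date(2023, 5, 1),
}

def use_permitted(record, purpose):
    """A proposed use is allowed only if it was explicitly granted;
    anything not covered by the tiers defaults to 'no'."""
    if purpose in record["declined"]:
        return False
    return purpose in record["granted"]

ok = use_permitted(consent, "academic_research")       # explicitly granted
blocked = use_permitted(consent, "commercial_research")  # explicitly declined
unknown = use_permitted(consent, "genome_wide_study")    # never asked, so denied
```

The design choice worth noting is the default: an unanticipated purpose is denied rather than silently permitted, which is precisely what blanket or implied consent fails to guarantee.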
‘What should one do?’
• When humans make decisions, they instinctively locate right and wrong in several places at once: in the motives behind the choice, in the type of action we take, and in the consequences we bring about. If you focus on just a single place in the decision-making process – just on consequences, say – you can rank and compare every option, but you will inevitably miss something
• So, if an AI system were certain what to do when good deeds lead to a bad outcome, or when bad motives help people out, we should be very wary: it would be offering moral clarity where really there was none
‘What should one do?’
• It is just conceivable that AI, rather than being a cause of moral problems, could help solve them. By using big data to anticipate the future and by helping us work out what would happen if everybody followed certain rules, artificial intelligence makes rule- and consequence-based ethics much easier. Applied thoughtfully, AI could help answer some tricky moral quandaries. In a few years, the best ethical advice may even come from an app on our phones
Ethics Guidelines for Trustworthy Artificial Intelligence
• 7 key requirements that AI systems should meet in order to be deemed trustworthy:
• Human agency and oversight
• Technical Robustness and safety
• Privacy and data governance
• Transparency
• Diversity, non-discrimination and fairness
• Societal and environmental well-being
• Accountability
"Governance" & oversight activities
• Self-regulation incl. Ethics by Design
• Soft law & codes of conduct
• Review bodies
• Auditing mechanisms
• Expert advice
• Coordination initiatives among public authorities, researchers, and private actors
• Deliberation
• Citizens’ forums
• Public engagement initiatives
Key substantive features of systemic oversight
• The capacity to cope with the uncertainty that surrounds data collection and data uses through adaptive & flexible mechanisms (response to change)
• The capacity to address the expanded temporality of data-related activities (from storing to re-analysis) through dynamic monitoring & responsiveness (according to use rather than source)
• The ability to cope with the relational nature of big biomedical data by means of reflexivity & inclusiveness
Thank you! Are there any questions?