Artificial Intelligence: Ethical and Privacy Considerations

Practice Accelerator
August 28, 2023

Introduction

Artificial intelligence (AI) is defined as the “capability of a computer to mimic human cognitive functions such as learning and problem-solving.”1 There are many examples of the potential uses of AI technology in medicine, both “behind the scenes” and at the bedside2,3:

  • Operations: ensuring adequate staffing levels, allocating patient beds, triaging patient messages in clinician inboxes  
  • Medical research: reproducing researchers’ findings, identifying novel drug candidates  
  • Augmented patient care: computer-aided diagnostics, medication management, digital consultations, risk stratification, remote health monitoring

While it is unlikely that AI will replace clinicians completely, it will become increasingly important for clinicians to keep up with standards of care that emerge as a result of this new technology. It is simply impractical for a clinician with a busy practice to stay on top of the explosive quantity of new data and research that is constantly being published.4 AI systems can be designed precisely to perform this function. Moreover, there is a seemingly endless (and growing) list of administrative tasks for which a clinician is responsible. AI technologies can potentially alleviate this burden and empower the clinician to spend more time where it matters: with their patients.2

Although the potential benefits of AI in health care have been widely theorized, the practical and ethical concerns have been less well-characterized. Discussed below are important considerations involving patient privacy (ie, HIPAA concerns) as well as the ethical use of AI in daily clinical practice.

Patient Privacy and Health Data in Modern Medicine


There is a fundamental tension inherent in the use of AI: machine learning (ML) algorithms are only as robust as the datasets that power them. AI systems can learn from5:

  • Electronic health record (EHR) data
  • Genomic databases
  • Google search inquiries for specific symptoms
  • Digitized pharmaceutical records
  • Smartphone applications such as menstrual cycle trackers
  • Real-time health data available from the internet of things (IoT), such as wearable activity, step, or health trackers

But who owns this data? How does one balance the potential benefits of innovation with the human right to privacy? How does one know when a privacy violation has occurred?

The major US federal law that has governed the protection of health data since 1996 is the Health Insurance Portability and Accountability Act (HIPAA). The law prohibits “covered entities” (namely, health care providers and insurance companies) from engaging in unauthorized use or disclosure of protected health information (PHI). While PHI may be used for purposes such as direct patient care, quality improvement, and billing, its use for AI research and development is not authorized under HIPAA without institutional review board (IRB) approval/waiver or explicit patient authorization. However, there are many instances where patient datasets collected by a health system have been used for AI development after undergoing a “deidentification” process, in which each patient record is stripped of 18 patient identifiers specified by HIPAA (names, birthdates, email addresses, etc).6,7

What lawmakers in 1996 did not anticipate, however, was that health data would eventually be derived from many sources outside of health care systems themselves. Unfortunately, in the modern era it is possible to triangulate deidentified data with outside third-party databases, effectively “reidentifying” the dataset by linking it back to a unique individual’s identity.7 Given this reality, updated legislation and policies are urgently needed.7

Toward this end, in January 2021 the US Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.8 Incorporating feedback from community workshops, peer-reviewed publications, and marketing submissions, the FDA identified the following near-term goals for regulating the development of AI in health care8:

  • Issuing a framework for a predetermined change control plan: the manufacturer must explain what will change through the use of AI, and how the algorithm will use ML to achieve that change
  • Requiring device labeling that focuses on transparency, to enhance public trust in AI/ML
  • Developing methodology for the identification and elimination of algorithmic biases

Ethical Use of Artificial Intelligence in Clinical Practice


Inevitably, there will be ethical concerns that clinicians must be mindful of while utilizing AI technology in their practice. Here are 3 considerations that will be nearly universal2:

The “black box” dilemma

While current AI (such as deep neural networks) is capable of recognizing and teaching itself patterns, at times it can be difficult for health care providers to discern why a particular recommendation is made. As clinicians are ultimately responsible for patient care decisions, clinicians must demand transparent, step-wise illustrations of the clinical reasoning process of various AI applications. 

An example involves UK researchers who investigated the use of an AI algorithm to predict which patients with pneumonia were less likely to die and thus could safely be treated in an outpatient setting. The algorithm learned that a history of asthma was associated with lower mortality, and the AI system therefore (incorrectly) recommended outpatient treatment for these patients. In reality, mortality was lower because patients with asthma tend to be treated more aggressively for pneumonia and are often admitted to the ICU, where they receive a higher level of care. This essential context is why the reasoning process of any AI system must be completely transparent to clinicians.2,9

Algorithmic bias

As mentioned previously, AI algorithms are only as powerful as the datasets used to train them. As such, datasets must be representative of the human populations they are intended to serve. For example, many dermatology datasets contain images of skin lesions drawn predominantly from Asian or Caucasian patients. Such a dataset can introduce bias and inaccuracies into an algorithm that is then applied to diagnose patients of other ethnicities.2,3,7,10
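The kind of imbalance described above is easy to audit before training ever begins. A short, entirely hypothetical sketch (the Fitzpatrick skin-type labels and image counts are invented for illustration):

```python
# Hypothetical sketch: auditing the demographic makeup of a training set
# before building a diagnostic model. Labels and counts are invented.
from collections import Counter

training_images = (
    ["type I-II"] * 800 +    # lighter skin tones dominate the dataset
    ["type III-IV"] * 150 +
    ["type V-VI"] * 50       # darker skin tones: only 5% of images
)

counts = Counter(training_images)
total = len(training_images)
for group, n in counts.items():
    print(f"{group}: {n} images ({n / total:.0%})")
# A model trained on this set has seen few examples of lesions on darker
# skin, so its error rate for those patients is likely to be higher.
```

A simple audit like this will not remove bias on its own, but it flags underrepresented groups so that more data can be collected before the algorithm reaches patients.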

Automation bias

Automation bias occurs when clinicians place more trust in the diagnostic capacity of technology than in their own clinical judgment. Clinicians must remain wary of "rubber stamping" a recommendation made by an algorithm, as the clinician is ultimately responsible for the individual patient under their care. While AI can be leveraged to reduce medical error and maximize treatment effectiveness, clinicians must safeguard against cognitive dependency and atrophy of their own clinical skills.2

Conclusion


With the rise of AI technology in medicine, clinicians and patients alike should be informed of modern-day privacy concerns and demand updated policy and legislation, particularly involving data ownership and access. AI will largely augment clinicians’ abilities to provide quality patient care. However, there are several ethical concerns that every clinician should carefully consider before implementing AI technology in their practice.


References 

  1. Microsoft Cloud Computing Dictionary. Artificial intelligence (AI) vs. machine learning (ML): Understand the difference between AI and machine learning with this overview. https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/…. Accessed 2023.  
  2. Arora A. Conceptualising Artificial Intelligence as a Digital Healthcare Innovation: An Introductory Review. Med Devices (Auckl). 2020;13:223-230. doi:10.2147/MDER.S262590 
  3. Byrne MF, Parsa N, Greenhill AT, Chahal D, Ahmad O, Bagci U, et al. AI in Clinical Medicine: A Practical Guide for Healthcare Professionals. Wiley-Blackwell; 2023. https://onlinelibrary.wiley.com/doi/book/10.1002/9781119790686  
  4. Densen P. Challenges and opportunities facing medical education. Trans Am Clin Climatol Assoc. 2011;122:48-58. 
  5. Kish LJ, Topol EJ. Unpatients-why patients should own their medical data. Nat Biotechnol. 2015;33(9):921-924. doi:10.1038/nbt.3340 
  6. McGraw D, Mandl KD. Privacy protections to encourage use of health-relevant digital data in a learning health system. npj Digit Med. 2021; https://www.nature.com/articles/s41746-020-00362-8 
  7. Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med. 2019;25(1):37-43. doi:10.1038/s41591-018-0272-7 
  8. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. US Food & Drug Administration; 2021. https://www.fda.gov/media/145022/download
  9. Academy of Medical Royal Colleges. Artificial Intelligence in healthcare. http://www.aomrc.org.uk/reports-guidance/artificial-intelligence-in-hea…. Published January 28, 2019. 
  10. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. https://www.nature.com/articles/s41591-018-0300-7

The views and opinions expressed in this blog are solely those of the author, and do not represent the views of WoundSource, HMP Global, its affiliates, or subsidiary companies.