Jennifer Huddleston, Jeffrey A. Singer, and Christopher Gardner
AI and Healthcare: A Policy Framework for Innovation, Liability, and Patient Autonomy—Part 3
Two laws, the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (HITECH), pigeonhole Americans into a one-size-fits-all medical data privacy regime. Perhaps adequate when they were enacted, these decades-old regulations now fail to protect our privacy and actively stymie lifesaving AI research. The existing laws around health data and data privacy illustrate what happens when static regulation meets dynamic technologies.
Understanding Existing Health Data Laws
HIPAA and HITECH are frequently misunderstood as all-encompassing protections for our sensitive medical data. One reason for this confusion is that these laws were never primarily intended as privacy protections. HIPAA was mainly intended to improve the portability of health insurance for the individual, and HITECH was later passed to subsidize the adoption of electronic health records. The result is a narrow scope of privacy protections that imposes an onerous paperwork regime on medical professionals while failing to protect our medical information. That failure stems primarily from carveouts in the HIPAA privacy rule for de-identified data and for personal medical information transferred for treatment, payment, or health care operations.
Beyond failing to protect American health data, HIPAA has effectively walled off the healthcare system from many of the advancements seen in the rest of the American economy. HIPAA's requirements generate enormous compliance costs, meaning that most startups simply cannot afford to comply without deep-pocketed backers. Furthermore, these compliance costs limit the sources of large, high-quality clinical datasets to the handful of healthcare intermediaries surveilling the vast quantities of medical information stored in their systems by American doctors and hospitals. By heavily restricting who can collect and sell medical information, HIPAA drives up the cost of lifesaving research and AI development.
What Current Policy Might Mean for AI in Health Care
It’s outrageous that outdated laws are blocking access to the data AI needs to do what it does best—save lives.
The opportunities for AI-driven advancements in healthcare seem nearly limitless. AI can be used for everything from streamlining drug approvals at the Food and Drug Administration to assisting or even automating surgeries. The question therefore remains: how do we enable the benefits of AI health tech while balancing concerns about the privacy of an individual’s health data?
The reality is that this new technology might require a complete overhaul of the HIPAA privacy rule. HIPAA currently takes a one-size-fits-all approach to American data privacy, data aggregation, and data de-identification. By mandating a single approach to these issues in the 1990s, legislators effectively barred the healthcare industry from using the data security and efficiency advantages that have developed over the past two decades. Nowhere is this clearer than in HIPAA's standards for de-identification. Under the current standards, access to basic demographic information could link "de-identified" medical information like an HIV diagnosis, psychiatric records, or prescription information back to an individual. This puts patients at serious risk of discrimination and social stigma if a potential employer, journalist, or neighbor can access such personal information.
However, this same information, when properly de-identified, can be incredibly beneficial for health AI, helping medicine identify trends and treatments that better fit the individual. In some cases, having more data may even strengthen anonymization and help address concerns about issues like bias. In short, our current paradigm may create a Catch-22 that both stifles innovation and fails to actually protect privacy.
The one-size-fits-all approach is pervasive in medical regulation when it comes to new technologies. While this friction is currently most visible with AI, other recent innovations have met similar resistance despite their potential benefits to both practitioners and patients.
A prime example is the potential for 3‑D printing to manufacture patient-specific devices at the point of care. Current FDA regulations are designed to regulate traditional medical device manufacturers. However, with 3‑D printing now allowing healthcare facilities to produce certain devices as needed, there is significant legal ambiguity. Just a few of the questions raised are who counts as the manufacturer of a device, how a device should be tested, and what level of patient customization would require a new submission for FDA approval.
Similar issues have emerged surrounding the regulation of at-home medical devices and testing. Such tools allow individuals to access the medical data stored within their own bodies. Government restrictions, however, can prevent individuals from accessing this information even when they want it only for personal use and the decision-making that follows. The FDA's one-size-fits-all approach has historically been used to restrict access to everything from pregnancy tests to HIV tests to 23andMe's genetic testing. Even at the height of the COVID pandemic, the first at-home COVID nasal swab test kits required a prescription. The justification? The FDA said it wanted to ensure it could "efficiently track and monitor results."
One answer to these regulatory delays has been the "Right to Try" movement. With FDA approval for a drug taking 8.5 years on average, the time a drug spends awaiting a bureaucrat's stamp is time that many people simply don't have. (AI-related advances may eventually shorten this process, but that topic merits its own discussion.) The Right to Try movement pushed to allow terminally ill patients access to drugs before full FDA approval. By expanding patient autonomy, this movement reduces bureaucratic delay, encourages greater responsiveness and innovation in drug development, and offers a potential path to survival for many. Regulatory decisions that standardize a single approach to each issue have produced a healthcare system that struggles to innovate and implement new technologies, particularly AI.
Conclusion
Clarifying and updating the rights around health data privacy could allow patients to feel more in control of their own data and more comfortable with the use of AI in healthcare. Our existing health data laws show how static regulation can lock in existing technologies in ways that both fail consumer data expectations and limit more innovative solutions. Individuals' health data may be particularly sensitive or vulnerable, but appropriate uses of data can also be empowering, providing lifesaving and life-improving information both for a particular individual and more generally.
Click to read Part One and Part Two of this series.
