Show me the Liability – Corporations and AI in Medicine

AI in medicine, much like AI in any field, is ripe for development. One such application is the AI scribe: a tool that generates a medical transcription from a live encounter between a physician and a patient. As Kristy pointed out in her Abridge article, AI scribes in medicine will undoubtedly facilitate improved clinician reports and, ideally, lead to better patient care and optimized outcomes. For instance, OntarioMD (a division of the Ontario Medical Association) recently ran a study of 150 primary care providers using AI scribe technology, in which family physicians reported spending 70-90% less time on paperwork.[1] Clinicians in Canada have already integrated AI scribes, including but not limited to Scribeberry, AutoScribe, Empathia and Heidi (an international AI scribe company), into their clinical practices.
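To make the workflow concrete, here is a minimal sketch of what an AI scribe pipeline might look like under the hood. It assumes the open-source openai-whisper package for speech-to-text, and the note-drafting step is a hypothetical placeholder; this is an illustration only, not how Scribeberry, AutoScribe, Empathia or Heidi are actually implemented.

```python
# Minimal sketch of an AI scribe pipeline (illustrative only).
# Assumes the open-source "openai-whisper" package (pip install openai-whisper);
# the commercial scribes named in this post may work quite differently.
import whisper


def draft_soap_note(transcript: str) -> str:
    """Hypothetical placeholder: a real scribe would use a clinical
    language model here to summarize the transcript into a SOAP note."""
    return f"Subjective/Objective/Assessment/Plan draft based on:\n{transcript}"


def scribe_encounter(audio_path: str) -> str:
    # 1. Speech-to-text: transcribe the recorded encounter.
    model = whisper.load_model("base")
    transcript = model.transcribe(audio_path)["text"]

    # 2. Draft a clinical note from the transcript.
    note = draft_soap_note(transcript)

    # 3. The clinician remains responsible for reviewing the draft before it
    #    enters the chart -- the liability question this post raises.
    return note


if __name__ == "__main__":
    print(scribe_encounter("encounter.wav"))
```

The point of the sketch is simply that the system's output is a draft: every downstream decision still depends on a human reviewing text that the model may have gotten wrong.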

While the benefits of AI scribes are easy to conceptualize, the technology remains unregulated despite the influx of clinicians adopting it. Both the College of Physicians and Surgeons of British Columbia[2] and the Canadian Medical Protective Association[3] have put out thoughtful guidelines that delineate the clinician’s role and responsibility in using AI scribe technology. They also detail the physician’s obligations around privacy, data protection, intellectual property (are there copyright limitations depending on the data sets the tools were trained on?) and liability risks. These liability risks include the incorporation of bias, with its potential for a human rights cause of action, and the potential for civil liability for medical negligence.

Currently, there is no requirement that AI systems address systemic risks during their design and development.[4] The proposed Artificial Intelligence and Data Act (“AIDA”) contemplates how to mitigate the risks associated with high-impact AI systems. Under the proposal, the onus would be on the corporation to assess safety and bias and to incorporate risk mitigation strategies.[5]

My question is simple and intersects with business organizations. Who bears liability when AI technology causes harm? The physician who implemented the technology has a responsibility to evaluate its output before acting on it, and would undoubtedly shoulder responsibility for a negligent act; however, is the corporation developing the AI technology responsible for the information it provides? Does corporate liability extend to the corporation that created an AI system which misidentified a patient or misinterpreted a CT scan, leading to a delay in treatment or patient death? Will AIDA ensure that corporations are not exempt from liability?

The rapid acceleration of AI technology in medicine has profound legal implications that are not yet addressed by ethical, regulatory and legal frameworks. In BC, lest you think we lead the way in artificial intelligence in medicine, on-call physicians can either phone the pharmacist or fax a prescription. Let’s not discuss the security and privacy implications of a fax transmission.

[1] https://www.ontariomd.ca/pages/ai-scribe-overview.aspx

[2] https://www.cpsbc.ca/files/pdf/IG-Artificial-Intelligence-in-Medicine.pdf

[3] https://www.cmpa-acpm.ca/en/research-policy/public-policy/the-medico-legal-lens-on-ai-use-by-canadian-physicians#a-framework-for-stakeholder-responsibilities

[4] https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act

[5] Ibid.

One response to “Show me the Liability – Corporations and AI in Medicine”

  1. jadeyliu

    Hi Doris,

    Thank you for this great post and thoughtful questions about the ethical implications of the technology. As I read your post, I was reminded of this Global News article I read last week about the use of OpenAI’s Whisper transcription tool in medical settings and the serious problems with the technology (https://globalnews.ca/news/10832303/ai-transcription-medical-errors/). Basically, Whisper has been prone to “hallucinations”: made-up blocks of text, sometimes as long as paragraphs. Those hallucinations sometimes include descriptions of violence and completely incorrect medical treatments. The article points out that Whisper is used to create closed captioning for Deaf and hard of hearing people. In such situations, Deaf and HOH people relying on Whisper’s CC have no way to verify the accuracy of the transcription. The flaws in AI technology thus exacerbate the marginalization of a group that has already historically faced prejudice within society. Yet, considering how the corporate world currently treats people with disabilities (especially in the face of developing schemes to generate more profit), I don’t believe fixing Whisper’s CC for Deaf and HOH people would be a priority for developers. So while AI has a lot of potential to help in medical situations, it also has a very high potential to reinforce inequality, especially without a clearly defined entity liable for ethical breaches.
