The rapid rise of generative AI (Gen AI) has led to intense speculation about which jobs AI will supplement and which it will replace1. Gen AI holds significant potential for the medical industry, offering healthcare professionals tools to deliver care more effectively. But what are the limits of these technologies for medical purposes, and what are the legal and ethical implications of incorporating these tools into medical practice?
As Gen AI continues to evolve, the question becomes more complex still. Given the unique combination of expert problem solving and human empathy required by medical professionals, is AI capable of replacing our physicians?
While AI holds the potential to drive innovative medical advancements, its integration into the delivery of healthcare services raises significant legal and regulatory considerations. Stakeholders are often surprised to learn that software itself can be considered a medical device if its intended use is for a “medical purpose”2. This designation is relevant because medical devices can require pre-market approval from Health Canada and the collection of safety and efficacy data, which can be a significant commercialization hurdle for product manufacturers to overcome.
Software that is purely administrative (such as software intended for clinical communication, electronic records, or general wellness) is typically not considered to have a medical purpose and is not regulated as a medical device in Canada3. Further, most software that is only intended to support a healthcare professional or patient/caregiver in making decisions about the prevention, diagnosis, or treatment of a disease or condition is typically not considered a medical device—as long as it is not intended to replace the clinical judgment of a healthcare professional4.
In practice, however, it can be difficult to neatly distinguish between software that “supports” clinical judgment and software that “replaces” it. For example, AI-based tools such as chatbots are becoming more common in virtual care settings as triaging tools. If a chatbot replaces part of the patient’s interaction with a healthcare professional, the line between supporting and replacing a clinical decision becomes blurred.
While Health Canada has provided some guidance on the matter, its approach to regulation continues to evolve5. We expect more guidance on software as a medical device as AI continues to develop.
As AI-based software continues to revolutionize healthcare delivery, we can expect to see changes to Canada’s current healthcare reimbursement models. In fact, a significant obstacle to integrating AI-based software into Canada’s public healthcare system is the lack of a clear reimbursement model associated with the use of AI tools. While private insurers may cover such technologies, there is no overarching reimbursement process for such devices and technology in the public model.
Physicians in Canada generally work on a billing-by-task model, and AI may be incompatible in some respects with the current public billing system. For example, an AI tool capable of answering patients’ questions could reduce the amount physicians can bill under the current system, and some critics suggest that this could create a disincentive for the adoption of AI in healthcare settings.
In determining whether to integrate AI into patient care, institutions will likely weigh the efficiency benefits against any impacts on reimbursement. Manufacturers of AI-enabled healthcare products will also need to take such impacts into account when commercializing their technology and selecting key customer bases. To address this misalignment, provincial governments will need to consider updating reimbursement regulations and billing policies (for example, by creating new billing codes) so that Canada does not become an outlier compared to other jurisdictions (for more on regulatory considerations for artificial intelligence, read “What’s new with artificial intelligence regulation in Canada and abroad?”).
Like any other tool in healthcare, AI-based software carries risks and can expose healthcare professionals to liability. Depending on a medical practice’s risk tolerance, this could be another roadblock to the integration of new AI technologies into Canada’s healthcare system.
Current legal principles place the responsibility for meeting a sufficient standard of care on human operators; however, as AI becomes more autonomous, new legal principles, which take into account the role of AI, may arise. One possible theory of liability suggests that physicians who use AI tools in the healthcare setting should treat the technology similarly to a medical student, meaning that such tools would require oversight by the licensed physician so that the patient receives a sufficient standard of care6 (for more on AI liability, read “Who is responsible when AI causes harm? AI and product liability”).
If an AI tool operates independently, such as an imaging analysis program that does not require the independent judgment of a healthcare professional to review results, the question of liability becomes more complex. Who assumes responsibility if the AI makes an error? This issue is compounded by the fact that AI tools that make autonomous decisions could appear to practice medicine without a medical license from the appropriate regulatory body. We anticipate clearer guidance on this issue as more AI-based healthcare technologies enter the market. As the law in this area continues to develop, AI product manufacturers should consider including clear limitations in their tools’ terms of use, in order to help mitigate potential liabilities.
Privacy impacts are also key when considering potential liability in the adoption of AI for medical purposes. Developing datasets, training AI models, and generating outputs typically requires the use of patients’ personal health information (for more on personal data considerations, read our related article, “What should be included in my organization’s AI policy?: A data governance checklist”).
Developers who build AI-driven healthcare software must stay informed about relevant laws and regulatory guidance. Developers should also consider seeking clarifications or requesting pre-submission meetings with appropriate regulatory bodies when using emerging AI technologies.
This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.