The growing use of generative AI applications in recent years has made it easier than ever to create harmful content, particularly AI revenge porn.
In the article “Tackling the problem of AI revenge porn in Canada” in the CBA’s National Magazine, senior associate Shalom Cumbo-Steinmetz and associate Mavra Choudhry discuss how to regulate the creation and distribution of images and videos that depict a real, identifiable person in an intimate, explicit or pornographic manner.
An excerpt from the article is below.
[AI revenge porn] is not just a theoretical legal puzzle—it has already become all too common a practice with a disproportionately harmful impact on women. While it is difficult to quantify the exact scope and extent of this activity, a 2019 study estimated that approximately 96% of all deepfake content on the internet was pornographic in nature and virtually all of it depicted women.
So far, the law has been slow to respond, though federal AI legislation on the horizon aims to regulate organizations that develop AI systems and make them available for use. Canadian privacy regulators have also indicated that existing privacy laws governing private and public sector organizations can and should be applied to address privacy issues that arise from generative AI, including the non-consensual distribution of intimate images (NCDII).
You can read more about our Privacy work on our practice page.
Press Contact
Richard Coombs | Senior Manager, Marketing
416.865.3815