In a bid to combat the proliferation of deepfake pornography, Google has announced a significant policy change affecting advertisers that promote websites and apps associated with this harmful content.
According to a report from Engadget, Google’s Inappropriate Content Policy will now explicitly prohibit advertisers from promoting platforms that create or distribute deepfake pornographic content. This covers websites, services, and apps that facilitate the creation of deepfake porn, provide instructions on how to make it, or endorse deepfake porn creators. The updated policy takes effect on May 30, giving current advertisers time to review and adjust their ad campaigns accordingly.
This move builds on Google’s existing measures against synthetic sexually explicit content, particularly within its Shopping ads platform. There, Google already prohibits the promotion of services that generate, distribute, or store synthetic sexually explicit content or content containing nudity, a ban that covers tutorials on creating deepfake pornographic material as well as the services themselves.
By tightening its ad policies, Google aims to curb the spread of deepfake pornography and mitigate the harm caused by manipulated, non-consensual imagery. The decision reflects the tech giant’s ongoing commitment to combating online abuse and protecting user safety and privacy across its platforms.