Taylor Swift is no stranger to controversy, but this week she faced a new kind of scandal: AI-generated fake images that sexualized her likeness and circulated without her consent.
The images depicted Swift in sexually explicit and suggestive poses and were widely shared on X, the social media platform formerly known as Twitter.
The images were created using AI-powered tools that can generate realistic but fabricated pictures from text prompts. These text-to-image tools, such as OpenAI's DALL-E and Stability AI's Stable Diffusion, are becoming more accessible and popular, especially among artists and hobbyists who use them to create novel and creative content.
However, they can also be abused to create harmful and malicious content, such as deepfakes, voice clones, and fake endorsements.
The incident involving Taylor Swift is not the first time AI-generated images have been used to exploit women in this way. In 2019, a website called DeepNude was shut down after it was exposed for using AI to create nude images of women from their clothed photos. The website's creator framed it as entertainment, but it was widely criticized for violating women's privacy and dignity.
Similarly, that same year, the Chinese app ZAO drew intense regulatory scrutiny after it let users swap their faces with celebrities' in videos, raising concerns about identity theft and fraud.
The X account that shared the fake images of Taylor Swift has since been suspended, but not before the images amassed over 27 million views during the roughly 19 hours they were up.