In recent days, a new controversy has erupted on social media following the spread of a viral post accusing ChatGPT of applying an ideological double standard by allowing offensive or provocative graphic representations of Christian figures, while refusing to generate similar images of the Prophet Muhammad.
The debate is not only about the creative freedom of artificial intelligence, but about who decides which religions may be mocked and which must remain untouchable, and by what criteria.
The controversy, amplified by influential accounts such as Libs of TikTok, has reopened an uncomfortable but necessary discussion about censorship, corporate self-regulation, and fear of violent retaliation as a determining factor in content moderation policies.
According to the viral post, ChatGPT allegedly acknowledged that its refusal to generate images of Muhammad—even in artistic or symbolic contexts—is not based on a principle of religious equality, but on a strategy to “minimize harm” and avoid content deemed “high-risk.”
In other words, the platform would effectively be admitting that not all religions are treated equally, and that the decisive criterion is not the offense itself but the likelihood of violent consequences in the real world.
This argument is not new. For years, Western media outlets, cartoonists, and publishers have avoided depicting the Prophet of Islam following violent episodes such as the 2015 Charlie Hebdo attack. Meanwhile, Christianity—the majority religion in the West—has been a constant target of satire, blasphemy, and cultural provocation without comparable consequences.
The underlying question is troubling:
Are threats being rewarded while tolerance is being punished?
Various analysts have warned that this type of technological self-censorship may set a dangerous precedent, one in which fear replaces principles and large technology corporations act as global moral arbiters without democratic accountability.
Even from a secular perspective, unequal treatment among religions contradicts the values of neutrality and pluralism that these platforms claim to uphold. This is not about promoting gratuitous offense, but about demonstrating that the standard being applied is not the same for everyone.
This case exposes an uncomfortable truth about the modern digital world: selective censorship does not protect social harmony—it erodes freedom of expression and reinforces power narratives rooted in fear.
If artificial intelligence aspires to be an impartial tool in service of humanity, it cannot operate under criteria that legitimize intimidation as a means to obtain cultural or religious privileges.
Defending freedom does not mean promoting hatred, but surrendering fundamental principles out of fear of violence is a dangerous path.
