Grok is an artificial intelligence tool developed by xAI, the company founded by Elon Musk, and integrated into the social media platform X (formerly Twitter) — and it is now sparking controversy. Unlike other widely used AI systems such as ChatGPT or Gemini, which rely on layered content moderation, risk classification, and safety guardrails, Grok was designed with a markedly different philosophy.
Musk has repeatedly described Grok as an AI that is less constrained by traditional content restrictions and more resistant to censorship, positioning it as a system that “seeks truth” and challenges mainstream narratives even when those narratives involve sensitive or controversial topics.
This approach has made Grok appealing to users who criticize what they see as overregulation in artificial intelligence. However, it has also placed the platform at the center of a growing global debate.
Critics argue that Grok’s looser moderation framework may inadvertently enable the circulation of harmful content, particularly sexually explicit material and manipulated images. The concern becomes significantly more serious when such content is accessible to children and adolescents, a population already deeply immersed in digital ecosystems.
The controversy: explicit content, deepfakes, and the vulnerability of minors
Grok has sparked international controversy after allegedly generating sexually explicit deepfakes involving manipulated images of women and children. Deepfakes — synthetic media created using artificial intelligence to alter or fabricate images and videos — have become increasingly realistic and difficult to detect. When combined with weak oversight mechanisms, these technologies pose serious risks to privacy, dignity, and child protection.
Users have reportedly employed Grok to digitally manipulate images by removing or altering clothing, a trend that has amplified fears around nonconsensual sexualization and online exploitation. Child protection advocates warn that such practices normalize harmful behavior and create new forms of digital abuse that are difficult to trace or regulate.
A recent CNN report highlighted the scale of the issue. According to the investigation, “Researchers from AI Forensics, a European nonprofit organization that investigates algorithms, analyzed more than 20,000 randomly generated images created by Grok and 50,000 user prompts between Dec. 25 and Jan. 1. The researchers found a high prevalence of terms such as ‘her,’ ‘put on,’ ‘take off,’ ‘bikini,’ and ‘clothes’ — all related to sexual content.”
The findings suggest not isolated misuse, but structural gaps in how the platform filters prompts and outputs. When minors are exposed to such systems, the consequences can be severe, ranging from psychological harm to increased vulnerability to grooming and exploitation.
Global reactions: Why Malaysia and Indonesia blocked Grok
In response to these concerns, countries such as Malaysia and Indonesia have moved to block Grok entirely. Authorities in both nations argue that the platform presents unacceptable risks to children by allowing access to sexually explicit content without sufficient restriction or censorship.
These decisions must be understood within a broader cultural and legal framework. Malaysia and Indonesia have large Muslim populations and enforce strict anti-pornography laws. In these societies, regulating sexual content is not only a policy issue but also a cultural and moral imperative. Blocking Grok aligns with existing legal norms designed to protect minors and uphold community standards.
Yet the broader question remains: If governments in Asia are willing to take decisive action to protect children from AI-generated harm, why has Latin America been comparatively silent?
Colombia and AI adoption: innovation without clear safeguards
Colombia has actively embraced artificial intelligence as part of its national digital transformation agenda. AI tools are increasingly used in education, healthcare, public administration, fintech, and public services. In 2025, the country adopted its National Artificial Intelligence Policy (CONPES 4144), which promotes ethical, inclusive, and sustainable AI development.
While this policy represents an important step forward, it remains largely aspirational. Critics point out that it does not establish binding obligations for AI platforms, nor does it include specific mechanisms to protect children from harmful AI-generated content. As AI becomes more embedded in daily life, these regulatory gaps grow more visible and more dangerous.
Political debate in Colombia: Proposals and warnings
The controversy surrounding Grok has begun to resonate within Colombia’s political sphere. Several lawmakers have acknowledged that existing laws are insufficient to address the risks posed by generative AI, particularly for children and adolescents.
One of the most prominent voices in this debate is Senator Sonia Bernal, who has promoted the creation of a congressional commission focused on artificial intelligence. Bernal has publicly stated that the commission’s role is to “study, analyze, formulate concepts, and monitor public policies, government actions, and technological developments related to artificial intelligence in Colombia.”
In the Chamber of Representatives, Bill No. 384 of 2025 seeks to regulate AI platforms with a specific focus on protecting children and adolescents. The proposal aims to impose obligations on AI providers to ensure ethical use, transparency, and accountability, particularly in cases involving image manipulation, sexual exploitation, and digital abuse.
Member of Congress Alirio Uribe has framed the issue as a matter of human rights, noting that AI regulation is no longer optional. He has warned that Colombia risks falling behind global standards if it fails to address how emerging technologies affect vulnerable populations.
The Ministry of Information and Communications Technologies has also pointed to initiatives in Colombia that have been approved or are currently under debate, such as proposals to regulate social media use by children under 14. Although these measures are not exclusively tied to artificial intelligence, they help limit children's exposure to digital risk. Most of the proposals, however, are still moving through the legislative process and have not been enacted as binding law.
The Ministry has emphasized the importance of aligning Colombia with international AI governance frameworks, while acknowledging the challenge of balancing innovation with protection.
Experts in digital rights sound the alarm
Beyond political institutions, digital rights experts and civil society organizations in Colombia have expressed concern about the rapid adoption of AI without adequate safeguards. Advocacy groups argue that voluntary corporate policies are insufficient when it comes to protecting children from explicit or harmful content.
Experts have called for mandatory age-verification systems, algorithmic transparency, and independent audits of AI platforms. Colombian digital rights scholar Catalina Botero Marino has warned that algorithmic systems operating at scale require strong institutional oversight to prevent human rights violations, particularly in contexts involving freedom of expression, privacy, and child protection.
The Communications Regulation Commission has also promoted initiatives such as the National Consensus on Digital Care, which seeks to coordinate government, families, schools, and technology companies around shared responsibilities for protecting minors online. However, participation remains voluntary, and enforcement mechanisms are limited.
Screen time, AI, and the cost of inaction
The urgency of this debate is underscored by screen-time data. According to figures from Colombia’s Communications Regulation Commission, children spend an average of eight hours or more per day in front of screens. This prolonged exposure dramatically increases the likelihood that minors will encounter harmful content — especially when AI systems generate material without effective moderation.
In the absence of clear public policy, responsibility is often shifted to families and schools, many of which lack the tools or knowledge to supervise AI-driven platforms. Experts warn that this regulatory vacuum leaves children exposed at a scale that traditional child-protection frameworks were never designed to handle.
As the debate in Colombia, Latin America, and around the world continues over how to make the use of AI systems safer — especially for minors — Elon Musk appears largely unsympathetic to the idea of imposing restrictions.
Musk has rejected calls to censor Grok, framing the debate as a threat to free speech. He has suggested that charging users for access could limit misuse. However, critics argue that paywalls do not solve the underlying problem.
Subscription models do not guarantee age verification, nor do they prevent minors from accessing content through shared accounts or indirect exposure. Monetization, experts say, cannot replace ethical AI design, strong moderation, and enforceable child-protection standards.
Why Latin America cannot afford to look away
Across Latin America, the risks associated with AI-generated sexual content remain under-regulated and under-discussed. Governments face pressing challenges — from inequality to public security — that often overshadow digital child protection.
Yet failing to act now could have long-term consequences. Without clear rules, AI platforms risk shaping childhood, sexuality, and social norms without accountability. For Colombia and the region, the Grok controversy should serve as a warning: Innovation without safeguards leaves the most vulnerable unprotected.
Protecting children in the age of artificial intelligence requires proactive regulation, cross-sector cooperation, and a firm commitment to placing children’s rights at the center of technological progress.