
As generative AI tools become tightly embedded within social media platforms, concerns are growing about how easily images can be altered and circulated, often with little friction and limited oversight.
Associate Professor Abhinav Dhall, from the Department of Data Science and AI in Monash University’s Faculty of Information Technology, has been examining the risks posed by Grok, the AI image and text tool integrated into X. His work comes as governments debate whether tighter restrictions are needed, and as individuals look for ways to better protect their images from misuse by AI systems.
“Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits,” Dhall said. “As it is so well integrated into the platform, the edited outputs also appear directly within the same public thread, which increases the visibility and reach of manipulated images.”
That visibility can accelerate harm. Altered images can circulate alongside original posts, gaining traction through replies and reposts before context or verification has a chance to intervene. According to Dhall, the problem is compounded by questions of consent and ownership. “In many cases, the original poster may not even have the rights to the image they are uploading on the platform, which can make it easier for the edits to become potentially defamatory or unsafe.”

The technical systems behind these edits may struggle to recognise the broader implications. “Grok may perceive the post’s edit request to be benign; however, in some cases the system may not fully grasp the ethical context or the emotional, reputational and privacy impact the manipulated image can have on the person involved,” Dhall said. His research highlights how perception can shift with carefully framed prompts. “In our research, we have found that outputs generated using vision-language models can lead to negative changes in perception about the subjects in an image, especially if the prompt is crafted carefully.”
These concerns have sharpened in Australia amid reports of AI-generated deepfakes circulating online. The eSafety Commissioner has been monitoring complaints involving Grok, including the creation of sexualised or violent images generated from user-uploaded photos. Because Grok’s edits appear directly within public threads on X, such content can spread rapidly before it is removed, drawing scrutiny from regulators and political leaders.
Dhall said prevention remains critical. “To reduce the risk of personal images being used to generate malicious content, users should be careful about posting clear, front-facing photos of their face, and should check and tighten privacy settings on their social media platforms.” He stressed particular caution when it comes to minors. “It is also important to avoid posting children’s photos publicly.”
When misuse is suspected, early action can limit damage. “If you suspect your images have been misused, reverse image search can be applied to detect AI-generated content, and fake or harmful content should be reported to the relevant platforms as quickly as possible,” he said.
Australia already has laws covering image-based abuse, including non-consensual sharing of intimate material, and recent reforms have expanded offences linked to AI-generated sexual content. Dhall argues that the debate should focus on enforcement rather than outright bans. “Rather than restricting Grok completely, a more balanced approach is to strengthen and enforce rules against misuse of images and misrepresentation,” he said. “Australia already has laws covering image-based abuse, so the focus should be on making the penalties clear and ensuring it is easy for victims to report abuse and have content removed quickly.”
He added that platforms must shoulder greater responsibility. “At the same time, social media platforms should be required to implement stronger guardrails to stop harmful edits before they spread.”
Support independent community journalism. Support The Indian Sun.










