Clearer penalties and faster takedowns amid AI image abuse: Prof Dhall

By Our Reporter
Associate Professor Abhinav Dhall (second from left), from the Department of Data Science and AI at Monash University, during a meeting at Jio Institute focused on academic collaboration and research exchange, alongside colleagues from Monash University and Jio Institute. Photo via LinkedIn

As generative AI tools become tightly embedded within social media platforms, concerns are growing about how easily images can be altered and circulated, often with little friction and limited oversight.

Associate Professor Abhinav Dhall, from the Department of Data Science and AI in Monash University’s Faculty of Information Technology, has been examining the risks posed by Grok, the AI image and text tool integrated into X, particularly as governments debate whether tighter restrictions are needed and how individuals can better protect their images from misuse by AI systems.

“Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits,” Dhall said. “As it is so well integrated into the platform, the edited outputs also appear directly within the same public thread, which increases the visibility and reach of manipulated images.”

That visibility can accelerate harm. Altered images can circulate alongside original posts, gaining traction through replies and reposts before context or verification has a chance to intervene. According to Dhall, the problem is compounded by questions of consent and ownership. “In many cases, the original poster may not even have the rights to the image they are uploading on the platform, which can make it easier for the edits to become potentially defamatory or unsafe.”


The technical systems behind these edits may struggle to recognise the broader implications. “Grok may perceive the post’s edit request to be benign; however, in some cases the system may not fully grasp the ethical context or the emotional, reputational and privacy impact the manipulated image can have on the person involved,” Dhall said. His research highlights how perception can shift with carefully framed prompts. “In our research, we have found that outputs generated using vision-language models can lead to negative changes in perception about the subjects in an image, especially if the prompt is crafted carefully.”

These concerns have sharpened in Australia amid reports of AI-generated deepfakes circulating online. The eSafety Commissioner has been monitoring complaints involving Grok, including the creation of sexualised or violent images generated from user-uploaded photos. Because Grok’s edits appear directly within public threads on X, such content can spread rapidly before it is removed, drawing scrutiny from regulators and political leaders.

Dhall said prevention remains critical. “To reduce the risk of personal images being used to generate malicious content, users should be careful about posting clear, front-facing photos of their face, and should check and tighten privacy settings on their social media platforms.” He stressed particular caution when it comes to minors. “It is also important to avoid posting children’s photos publicly.”


When misuse is suspected, early action can limit damage. “If you suspect your images have been misused, reverse image search can be applied to detect AI-generated content, and fake or harmful content should be reported to the relevant platforms as quickly as possible,” he said.

Australia already has laws covering image-based abuse, including non-consensual sharing of intimate material, and recent reforms have expanded offences linked to AI-generated sexual content. Dhall argues that the debate should focus on enforcement rather than outright bans. “Rather than restricting Grok completely, a more balanced approach is to strengthen and enforce rules against misuse of images and misrepresentation,” he said. “Australia already has laws covering image-based abuse, so the focus should be on making the penalties clear and ensuring it is easy for victims to report abuse and have content removed quickly.”

He added that platforms must shoulder greater responsibility. “At the same time, social media platforms should be required to implement stronger guardrails to stop harmful edits before they spread.”

