Fake AI accounts on the rise, experts call for stronger regulations

Sinar Daily
April 14, 2025

SHAH ALAM – Artificial Intelligence (AI) has surged in popularity, providing convenience and efficiency.

However, its darker side—spreading misinformation—has raised significant concerns, particularly with the rise of fake AI-generated accounts on social media platforms that deceive users and spread false information, sowing confusion and mistrust.

Experts stressed that a more structured and strategic communication plan, incorporating public service announcements (PSAs) across multiple media platforms, was necessary to educate users and counter misinformation effectively.

Associate Professor Dr Massila Hamzah, a senior research fellow at the School of Media and Communication (SOMAC), Faculty of Social Sciences and Leisure Management, Taylor's University, highlighted the importance of ongoing detection and mitigation efforts among policymakers, social media companies, content creators and users.

"The goal is to implement effective precautions, verify content for manipulation and misinformation and detect deepfake algorithms or fake accounts, ensuring they are promptly flagged for further action.

"For example, policymakers must have a good system for monitoring both new and existing social media platforms, as well as a systematic reporting mechanism for users to report fake accounts," she said when contacted.

Massila urged users to remain vigilant and actively report suspicious content, as early detection could prevent the rapid spread of misinformation.

"This may assist in early detection, whereby profiles, accounts, or information can be taken down in a timely manner," she added.

She acknowledged that while the Communications Ministry's public awareness campaigns have encouraged vigilance against AI-generated scams, a more structured strategic communication approach through PSAs was crucial to reaching all users.

She emphasised that well-executed media exposure could cultivate "intelligent AI users" who critically assess AI-generated content.

In Malaysia, she said discussions on AI-generated misinformation mirror global concerns about balancing technological advancement with ethical considerations.

"There is a dire need for robust regulatory frameworks to be reviewed. More importantly, public awareness must be strengthened to enable users to act smart in mitigating the risks associated with AI-generated fake accounts.

"Creating awareness will empower users to be more vigilant in critically assessing the purpose behind AI content creators, who are, ultimately, humans," she said.


Regulatory measures and industry guidelines

Meanwhile, Malaysian Research Accelerator for Technology and Innovation (MRANTI) Innovation Commercialisation head and AI expert Dr Afnizanfaizal Abdullah said implementing strong regulatory frameworks was crucial in addressing AI-generated misinformation.

He stressed the need for governments to establish AI development guidelines to ensure transparency and accountability.

"For example, the European Union’s Digital Services Act (DSA) sets a precedent by holding platforms accountable for the spread of misinformation and harmful AI-generated content.

"In Malaysia, the National Artificial Intelligence Roadmap 2021–2025 (AI-Rmap) outlines a comprehensive plan for developing and implementing AI technologies," he told Sinar Daily.

He added that Malaysia’s National Artificial Intelligence Office led the country’s AI agenda. Meanwhile, the National Guidelines on AI Governance and Ethics, introduced by the Science, Technology and Innovation Ministry in September 2024, provided a framework for responsible AI use, focusing on fairness, transparency, accountability and privacy.

Afnizanfaizal also highlighted Malaysia’s recent regulatory framework for internet messaging and social media service providers, introduced in August 2024.

Effective from Jan 1, 2025, platforms with over eight million users will be required to obtain an Applications Service Provider Class Licence under the Communications and Multimedia Act 1998.

"The framework aims to enhance user data protection, implement age restrictions, address online harms and manage harmful content generated by AI, including deepfakes," he added.

Beyond national policies, he emphasised the importance of global industry standards in promoting responsible AI development.

He cited the ISO/IEC 42001 AI Management System Standard, which provided guidelines for ethical AI practices, and noted how companies like IBM have established AI ethics boards and adopted voluntary ethical standards.

Afnizanfaizal urged industry players to integrate ethics into AI development by investing in detection tools that utilise machine learning to identify AI-generated content.

He suggested that a combination of AI monitoring and human moderation would lead to more effective content regulation.

"Transparency is key—platforms should release regular reports on their efforts to combat misinformation," he added, citing X's (formerly Twitter) transparency reports as a benchmark for accountability.

He also called for stronger collaboration in addressing AI-related threats.

"Initiatives like the Partnership on AI foster cooperation between academia, technology companies and civil society to establish ethical best practices for AI development.

"Public-private partnerships can facilitate knowledge-sharing, enabling more effective tools and strategies to mitigate AI-driven misinformation," he said.


AI-generated scams and digital deception

Public education is a crucial component in tackling AI-generated misinformation. Awareness campaigns can help users identify AI-generated content and verify sources, ensuring a more informed digital community.

This is essential as AI-driven deception reaches new levels, posing a significant threat.

Recently, women’s health and weight management coach Jaymie Moran exposed an AI-generated Instagram account with over 800,000 followers. The account impersonated a medical professional, offering advice and selling supplements through an Amazon storefront.

"She makes it seem like she’s an insider in the industry and then uses it to sell supplements through an Amazon storefront.

"This account is using AI to impersonate a doctor and spread misinformation. AI-generated content is getting better every day and it’s getting harder to spot what’s actually real," Moran revealed in a video post.

In today’s digital age, unchecked misinformation can have serious consequences.

AI-generated voices and images are also being used to impersonate celebrities, including influencer Khairul Aming, singer Datuk Siti Nurhaliza and actor Datuk Aaron Aziz, to fraudulently endorse products on social media platforms such as TikTok and Instagram.
