
AI Spambot AkiraBot Exploits OpenAI’s GPT-4o-mini to Target Websites with Fake SEO Services

An AI spambot named AkiraBot has been using OpenAI’s GPT-4o-mini to inundate websites with cleverly disguised spam comments promoting fake SEO services. AkiraBot has targeted approximately 80,000 websites, primarily small to medium-sized businesses operating on platforms like Shopify, GoDaddy, Wix.com, and Squarespace, according to cybersecurity experts at SentinelOne.

How AkiraBot Operates

AkiraBot calls OpenAI’s chat API with the system prompt “You are a helpful assistant that generates marketing messages,” enabling it to generate messages tailored to each target’s industry, which improves its odds of slipping past automated spam detection. For instance, messages sent to construction companies differ from those sent to hair salons. The spam appears in comment sections, contact forms, and even live-chat widgets, and aims to convince site owners to purchase questionable SEO services.
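To see why this tactic evades keyword-based spam filters, it helps to picture the request pattern SentinelOne describes. The sketch below is a minimal, hypothetical reconstruction: it builds (but never sends) a chat-completions payload using the system prompt quoted in the report. The user-message wording, function name, and the idea of splicing in the target’s industry are illustrative assumptions, not details from the report.

```python
import json

def build_spam_request(industry: str, site_name: str) -> dict:
    """Build (but do not send) a chat-completions payload of the kind
    SentinelOne attributes to AkiraBot. The user turn below is an
    illustrative assumption; only the system prompt is from the report."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            # System prompt quoted verbatim in the SentinelOne report
            {"role": "system",
             "content": "You are a helpful assistant that generates marketing messages"},
            # Hypothetical user turn: the target's industry is spliced in
            # so each site receives a differently worded message
            {"role": "user",
             "content": (f"Write a short outreach message offering SEO services "
                         f"to {site_name}, a business in the {industry} industry.")},
        ],
    }

payload = build_spam_request("construction", "example-builder.com")
print(json.dumps(payload, indent=2))
```

Because the model rewrites the pitch for every industry, no two messages share a fixed template, which is exactly what signature-based comment filters rely on.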

Innovative Evasion Tactics

AkiraBot relies on more than AI-generated content to bypass security measures: it deploys additional tooling to defeat CAPTCHA filters and routes its traffic through a proxy service to evade network-level detection, making it a sophisticated threat since it first appeared in September 2024. Despite its name, AkiraBot has no connection to the well-known Akira ransomware group.

Countermeasures and Security Actions

In response to AkiraBot’s activities, OpenAI has disabled the API key exploited by the spambot. A representative stated, “We’re continuing to investigate and will disable any associated assets. We take misuse seriously and are continually improving our systems to detect abuse.” SentinelOne has expressed gratitude to OpenAI’s security team for their collaborative efforts to prevent misuse of AI technology.

Broader Implications of AI Misuse

This incident is part of a wider trend where AI tools are used for unethical purposes, such as foreign governments producing propaganda or cybercriminals developing custom AI models like WormGPT to automate fraud. Despite these concerns, AI technology continues to evolve, emphasizing the importance of vigilance and collaboration in addressing potential threats.


Source: https://www.pcmag.com