AI Chatbots Sending People to Illegal Casino Sites
AI-powered chatbots are promoting illegal gambling sites to at-risk players.

Chatbots using AI are sending people to illegal gambling sites. © DeltaWorks, Pixabay
Key Facts:
- AI-powered chatbots have been promoting illegal online casinos to vulnerable people
- Some have even advised users on how to bypass addiction checks
- Only a small number offered users a health warning
An analysis of five AI chatbots found that they have been suggesting illegal gambling sites to social media users classed as vulnerable gamblers. The chatbots come from some of the largest tech companies in the world.
It was very easy to get these chatbots to list the top unlicensed casinos and even to provide tips on how to access and use them. Such sites are typically licensed in small offshore jurisdictions such as Curaçao.
They can be accessed in the UK, though they’re not licensed by the UK Gambling Commission (UKGC). All gambling sites operating in the UK must be licensed by the UKGC. Unlicensed sites have been connected to addiction, fraud and suicide.
Operators of AI chatbots have come under fire for directing at-risk users to illegal sites, and very few controls appear to be in place to address the problem.
Ideally, AI chatbots should speak positively about gambling protection measures. Meta AI did not: its chatbot described measures put in place to protect people as a ‘real pain’ and a ‘buzzkill’.
As well as offering advice on bypassing addiction checks, bots have also offered to compare bonuses. These are rewards designed to encourage players to sign up and keep playing. Bots have also offered to show the best crypto casinos.
A Response from Tech Companies
It’s been reported that tech companies have promised to tweak how their AI-powered chatbots operate. This is in response to criticism about bots not doing enough to protect vulnerable gamblers.
Tech companies have also come under fire for various other reasons. For example, the chatbot Grok, which is integrated with X, has allowed users to digitally undress images of women and children, or even depict them as abuse victims.
The Investigation
The investigation that focused on AI chatbots directing people to unlicensed sites was carried out by The Guardian and the independent journalism cooperative, Investigate Europe. It asked five major chatbots six questions.
These chatbots were Google’s Gemini, Meta AI, Microsoft’s Copilot, OpenAI’s ChatGPT and X’s Grok. Some of the things they were asked about include how to access non-GamStop sites and how to avoid source-of-wealth checks.
All licensed sites in the UK must sign up to GamStop, a self-exclusion scheme that lets users block themselves from gambling sites. As for source-of-wealth checks, they ensure people aren’t using laundered or stolen money.
Both Meta AI (which Facebook, Instagram, and WhatsApp users can access) and Google’s Gemini offered advice on bypassing these checks. This could enable people without properly sourced money to gamble online.
All five chatbots pointed vulnerable users towards unlicensed sites. Just two of them encouraged users to access information about services to help problem gamblers. When discussing unlicensed casinos, only two provided any kind of warning about the risks.
When recommending unlicensed casinos, every chatbot considered whether the site offered fast withdrawals and competitive bonuses. Meta AI seemed to be the least concerned of the five about promoting illegal sites.
It referred to GamStop’s restrictions as ‘a real pain’. It also used positive phrases such as ‘flexible gameplay’ and ‘generous rewards’ when promoting a crypto casino site. Currently, no crypto casinos are licensed by the UKGC.
Gemini mentioned that unlicensed casinos offer bigger bonuses than licensed sites. Grok encouraged users to gamble with cryptocurrency because it allows them to avoid bank checks and the need to supply personal details to sites.
Gemini was the only chatbot to offer users a step-by-step guide to accessing unlicensed casinos. However, when prompted a second time, it gave a different answer, refusing to repeat the advice.
Comments and Going Forward
A spokesperson from Google has said that Gemini was:
designed to provide helpful information in response to user queries and highlight potential risks where applicable. We are constantly refining our safeguards to ensure these complex topics are handled with the appropriate balance of helpfulness and safety. – Google Spokesperson, Google comments on Gemini, The Guardian
Microsoft’s Copilot and OpenAI’s ChatGPT were the only two chatbots to list any health warnings. According to a Microsoft spokesperson, Copilot uses:
multiple layers of protection, including automated safety systems, real-time prompt detection and human review, to help prevent harmful or unlawful recommendations – Microsoft Spokesperson, Microsoft discusses Copilot, The Guardian
The spokesperson also said that the safeguards were routinely being evaluated and strengthened over time. ChatGPT was found not only to list illegal sites but also to provide a comparison of them. According to OpenAI, ChatGPT was:
trained to refuse requests that facilitate illegal behaviour – OpenAI Spokesperson, OpenAI talks about ChatGPT directing people to illegal gambling sites, The Guardian
The spokesperson also said that ChatGPT should have supplied lawful alternatives and factual information. At the time of writing, neither Meta nor X had responded to questions about the investigation.
