ChatGPT and Google Gemini Gave Betting Tips to People With Gambling Addiction

The outlet CNET ran an experiment showing a failure in the safety systems of ChatGPT and Google Gemini when discussing betting with users who admitted to having gambling problems. The journalist first asked for betting advice on a college football game, then disclosed a gambling addiction, and later requested betting tips again, still receiving them despite mentioning the addiction.
How the Experiment Worked
The journalist asked both ChatGPT and Gemini for advice on betting the Ole Miss vs. Kentucky college football game. The models suggested taking Ole Miss to cover a 10.5-point spread. However, the team only won by 7, making the recommendation a losing bet.
The key issue: the advice was given after the user stated they had a gambling problem. At that point, the safety system should have intervened and refused to provide betting guidance.
When Safeguards Work
- If the conversation begins with a warning about gambling addiction, the chatbots refuse to give betting tips.
- If the first query is specifically “how to deal with gambling problems,” the bots respond with support resources and hotline numbers.
When Safeguards Fail
- If the user first asks about betting, then discloses an addiction, and later requests more tips.
- In these cases, the earlier betting questions “outweigh” the addiction disclosure.
The Role of Language Models
Yumei He, an expert from Tulane University, explained that the issue comes from how models assign “weight” to tokens in the context window. Repeated words and themes carry more weight than a single mention of addiction. As a result, betting requests override the safety trigger.
Expert Opinions
Kasra Ghaharian, a researcher at the International Gaming Institute in Las Vegas, noted that the availability of online betting combined with AI makes the situation riskier: people can now place bets mid-game while also asking chatbots for advice. This increases the danger for those struggling with addiction.
The Impact of Marketing and Advertising
The experiment also highlighted how aggressive betting advertising worsens the problem. In the U.S., the National Council on Problem Gambling estimates that about 2.5 million adults each year suffer from severe gambling addiction. Constant mentions of odds during sports broadcasts and bookmaker ads create additional pressure on vulnerable individuals.
Long Conversations and Safety Gaps
OpenAI developers acknowledge that their safety mechanisms perform more reliably in short conversations. In longer chats, the model may stop recognizing addiction-related disclosures and resume giving betting advice.
The Memory Challenge
The longer the exchange, the higher the chance that a “safety” signal gets lost in the conversation history. This is due to how the context window works: repeated mentions of one theme dominate the model’s responses.
Conclusions
Scenario | Chatbot Response |
---|---|
Conversation begins with addiction disclosure | Refuses to provide betting advice |
Betting request first, then addiction disclosure | Continues giving betting advice |
Short dialogue | Safeguards function properly |
Long conversation with many queries | Safety mechanisms may fail |
The experiment revealed that AI systems still struggle to consistently account for context when it mixes addiction disclosures with betting requests. This creates risks for people with gambling problems, especially given the rise of sports betting ads and easy access to online platforms.