Responsible Gaming
RG detection is always on; every player message is analyzed in real time for signs of gambling-related distress. There is no toggle to disable it.

What Triggers Detection
The AI flags messages that indicate:
- Signs of gambling addiction or inability to control gambling
- Financial hardship due to gambling (can’t afford essentials)
- Self-exclusion requests citing gambling concerns
- Emotional distress related to gambling losses
- Mentions of self-harm or suicidal thoughts
- Legal action threats
What Doesn’t Trigger Detection
These are handled through normal conversation flow:
- General account closure requests (without a gambling reason)
- Frustration or complaints about service
- Technical issues or general support inquiries
- Casual profanity not directed at anyone
Choosing an Action Mode
When an RG concern is detected, the AI responds based on the action mode you select. There are three options:

Escalate
Immediately hand off to a human agent while providing a supportive response to the player.
Empathic Response
Respond with empathy and support without escalating to a human agent. Best for straightforward RG situations.
Follow Instructions
Define detailed step-by-step guidance with the ability to route to specialized AI Procedures. Best for complex scenarios.
Escalate
The default mode. When RG is detected, the AI sends a supportive message and then transfers the conversation to a human agent. You can customize the AI’s response by providing an Example Response — a sample message that the AI uses as a reference for tone and messaging. The AI will adapt based on your example rather than copying it verbatim.

Empathic Response
Similar to Escalate, but the conversation stays with the AI instead of being handed off. Use this when your team prefers the AI to handle RG situations directly. You also provide an Example Response as a tone guide. Since there’s no handoff, the example should include any resources or next steps you want the AI to share with the player.

Follow Instructions
For operators who need fine-grained control over RG handling. Instead of a simple example response, you get a rich text editor where you can write detailed instructions for the AI. The editor supports @ actions — type @ to insert one of two actions inline with your instructions:
| Action | What It Does |
|---|---|
| @Route to AI Procedure | Routes the conversation to a specific AIP. A search dropdown lets you pick which one. |
| @Transfer to Live Agent | Escalates to a human agent, same as the Escalate mode. |
For example:

“If the player mentions self-exclusion, @Route to Self-Exclusion Procedure. If they express financial distress but don’t request self-exclusion, provide our responsible gaming resources and @Transfer to Live Agent for follow-up.”
Follow Instructions mode may not be visible in your workspace. Contact your CSM to enable it.
Toxicity Detection
Toxicity detection is a separate system from RG: it handles abusive language directed at the AI agent, not gambling-related distress. It can be toggled on or off independently. When enabled, the AI categorizes problematic language into four severity levels, each with a different automatic response:

| Severity | Description | What Happens |
|---|---|---|
| Threats | Violence or harm directed at agent | Immediately escalated to a live agent |
| Severe Profanity | Directed at the agent personally | Chat closes immediately — no warnings |
| Moderate Profanity | About the situation, not the agent | One warning issued, chat closes if repeated |
| Light Profanity | Casual expressions, not targeted | Ignored — conversation continues normally |
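The table above amounts to a severity-to-action mapping with one piece of state: whether the player has already been warned. A minimal sketch, assuming hypothetical severity labels and action names:

```python
# Sketch of the four toxicity severity levels and their automatic
# responses, including the one-warning behavior for moderate profanity.
# Label and action strings are assumptions for illustration.

def toxicity_action(severity: str, prior_warnings: int) -> str:
    """Map a detected severity level to the documented automatic response."""
    if severity == "threats":
        return "escalate_to_live_agent"
    if severity == "severe_profanity":
        return "close_chat"                    # immediate, no warnings
    if severity == "moderate_profanity":
        # One warning is issued; a repeat offense closes the chat.
        return "close_chat" if prior_warnings >= 1 else "issue_warning"
    return "continue"                          # light profanity is ignored
```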
Custom Warning Instructions
When a player triggers a moderate profanity warning (the one-warning scenario), the AI delivers a warning message. You can customize the tone of this warning by providing Custom Warning Instructions. This text guides how the AI phrases the warning, letting you match your brand voice. For example:

“Our platform values respectful communication. Please keep our conversation professional so I can best assist you. Continued profanity will result in chat closure.”
Notification Channels
When a conversation is escalated — whether from RG detection, toxicity, or any other reason — your team is notified through your existing escalation setup:
- Slack — If HITL is configured, escalated conversations appear in your designated Slack channel with context and a direct link
- Helpdesk — The conversation is transferred to your human agent queue in Zendesk, LiveChat, Zoho, or Intercom
- Email — For email-based conversations, escalation notifications follow your desk escalation settings
RG and toxicity escalations use the same notification infrastructure as all other escalations. Configure your channels in HITL settings.
Content Shield
For content detection beyond the built-in RG and toxicity models, use Content Shield. Content Shield uses Automation Rules to detect specific phrases or patterns and take automatic action — including silent escalation, tagging, or routing to specific groups. Use Content Shield for:
- Detecting self-harm language that goes beyond standard RG triggers
- Flagging regulatory keywords specific to your jurisdiction
- Creating custom escalation paths for different types of sensitive content
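Conceptually, a Content Shield rule pairs a phrase or pattern with an action. The sketch below shows that idea as pattern matching; the rule shape, patterns, and action names are assumptions for illustration — real rules are configured through Automation Rules, not written as code.

```python
# Hypothetical sketch of Content Shield-style rules: each rule pairs a
# pattern with the action taken when it matches. All patterns and action
# names here are illustrative examples, not shipped defaults.
import re

RULES = [
    (re.compile(r"\b(hurt myself|end it all)\b", re.I), "silent_escalation"),
    (re.compile(r"\bgamstop\b", re.I), "tag:regulatory"),
    (re.compile(r"\bchargeback\b", re.I), "route:payments_group"),
]

def evaluate(message: str) -> list[str]:
    """Return every rule action triggered by a player message."""
    return [action for pattern, action in RULES if pattern.search(message)]
```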
Audit Trail
Every RG-flagged and toxicity-flagged conversation is logged in your conversation history. The AI’s detection, response, and any escalation are recorded for compliance auditing.

Related
- RG Rules & Safeguards — Overview of RG compliance capabilities
- Content Shield — Custom content detection and automated actions
- HITL Settings — Configure escalation notification channels
- Automation Rules — Build custom content detection rules