
©General Magic Technologies Inc. 2026

What Licensed Brokers Need to Know Before Using AI

RIBO published AI guidance for Ontario brokers in May 2025. Here's what it means for your brokerage


AI is showing up in insurance brokerages fast. Automated quoting tools, chatbots on agency websites, AI-generated emails, text message agents that follow up with prospects. The technology is moving quicker than most brokers expected.

RIBO noticed. In May 2025, they published "Responsible AI Use Among RIBO Licensees," the first formal guidance on how Ontario brokers should handle AI in their day-to-day operations. It's not a regulation with penalties attached yet, but it signals exactly where the rules are headed. And it sets clear expectations that RIBO can enforce through existing Code of Conduct obligations.

If you're a RIBO-licensed broker using AI in any capacity, or thinking about it, here's what you actually need to know.

RIBO Isn't Banning AI. They're Setting Guardrails.

The guidance isn't anti-technology. RIBO's own research, conducted with the Behavioural Insights Team in 2024, found that AI can improve efficiency, service quality, and the ability to deliver personalized advice. They're not telling brokers to avoid it.

What they are saying is that your existing professional obligations don't disappear just because a machine is doing some of the work. The Code of Conduct still applies. Fair Treatment of Customers still applies. The five foundational duties still apply: competency, integrity, conflict disclosure, data protection, and confidentiality.

AI is a tool. The licensed broker is still responsible for everything that tool produces.

The Four Things RIBO Expects

The guidance organizes around four areas. Here's what each one means in practice.

1. Competency and Accountability

What RIBO says: Staff need training to understand when AI is being used in their workflows and the risks that come with it. Brokers don't need to become AI engineers, but they need to know enough to identify where automation is running and what could go wrong.

What this means for you: If your brokerage adopts an AI tool, you can't just plug it in and walk away. Someone at the firm, ideally the principal broker, needs to understand what the tool does, where it connects, and what decisions it's making on behalf of the brokerage. RIBO also expects firms to set up governance structures for ongoing monitoring. That doesn't have to be complicated. It could be as simple as a monthly review of AI-generated outputs and a documented process for flagging issues.

The key point: even when you're using a third-party vendor's AI, your brokerage is still accountable for the results.

2. Suitability and Pre-Client Review

What RIBO says: Anything generated or altered by an AI tool must be reviewed by a licensed broker before it reaches the client. Firms should also audit AI-generated outputs over time to check for accuracy and systematic bias.

What this means for you: This is the "human in the loop" requirement and it's the one most brokerages will need to think carefully about. If you're using a chatbot on your website that answers coverage questions, a licensed broker needs to be reviewing what that chatbot says. If you're using AI to draft client emails, someone licensed needs to read them before they send.

This doesn't mean a broker has to approve every single text message one by one. It means the brokerage needs a system in place to ensure quality control. That could be spot-checking a sample of conversations weekly. It could be setting up alerts for certain types of questions (coverage advice, claims guidance) that get routed to a human. The goal is oversight, not bottlenecking.
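Keyword-triggered routing can be sketched in a few lines. This is an illustrative example, not a vendor API: the keyword list and queue names are assumptions, and a real deployment would tune them to your book of business.

```python
# Minimal sketch of keyword-triggered routing for AI-drafted client
# messages: anything touching coverage advice is held for a licensed
# broker; routine traffic goes out automatically. Keywords are
# illustrative only.
ADVICE_KEYWORDS = {
    "coverage", "deductible", "exclusion", "claim", "liability",
    "am i covered", "should i", "recommend",
}

def needs_licensed_review(message: str) -> bool:
    """Flag messages that touch coverage or claims advice."""
    text = message.lower()
    return any(keyword in text for keyword in ADVICE_KEYWORDS)

def route(message: str) -> str:
    if needs_licensed_review(message):
        return "broker_queue"   # hold for a licensed broker before sending
    return "auto_send"          # routine: scheduling, reminders, status

# route("Can you confirm my appointment for Tuesday?") -> "auto_send"
# route("Am I covered for water damage?")              -> "broker_queue"
```

A simple rule set like this errs on the side of routing too much to a human, which is the right failure mode under the guidance.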

For automated texting tools specifically, the practical question is: does your vendor give you visibility into what the AI is saying to your clients? If you can't see it, you can't review it, and you're out of compliance.

3. Transparency and Human Oversight

What RIBO says: Customers must know when they're engaging with AI rather than a human. Brokerages should be transparent about their use of customer-facing AI tools, including chatbots, automated quoters, and messaging agents. A human must always be available to step in for coverage questions and to correct any inaccurate or misleading output.

What this means for you: If a client is texting with your brokerage and an AI is responding, the client needs to know that. This doesn't require a disclaimer on every message, but the interaction should make it clear they're not chatting with their broker personally.

The human availability piece matters too. RIBO isn't saying AI can't handle the first touch or routine questions. They're saying a licensed broker needs to be reachable when the conversation gets into coverage specifics, suitability questions, or anything where professional judgment is required.

The brokerages that handle this well will use AI for the 80% of interactions that are routine (appointment scheduling, document collection, status updates, follow-up reminders) and route the 20% that require professional advice to a licensed human.

4. Privacy and Data Protection

What RIBO says: Brokerages should not process client information through open AI systems. Vendors need to be vetted to confirm that customer data stays under the firm's control, isn't stored by the vendor beyond what's needed, and isn't used for training purposes. Firms should have clear internal policies distinguishing authorized from unauthorized AI tool use.

What this means for you: This is where vendor selection becomes a compliance decision, not just a feature comparison. When you're evaluating AI tools, you need to ask direct questions. Where is client data stored? Is it encrypted? Does the vendor use client data to train their models? Can data be deleted on request? Is the vendor SOC 2 certified? Are they PIPEDA compliant?

If a producer at your brokerage is copying client details into ChatGPT to draft an email, that's a problem under this guidance. RIBO is drawing a clear line between purpose-built tools that keep data contained and general-purpose AI tools where data could end up anywhere.

Your internal policy should spell out which AI tools are approved and which aren't. Put it in writing. Make sure every producer and CSR knows it.

What RIBO Hasn't Said Yet (But Probably Will)

The May 2025 guidance is a starting point, not the final word. There are a few areas where you should expect more detail in the coming months.

Formal enforcement mechanisms. The current guidance reads as recommendations, but RIBO can enforce these expectations through the existing Code of Conduct. If a client complaint lands on RIBO's desk because an AI chatbot gave bad coverage advice and no broker reviewed it, the brokerage is exposed. Don't wait for formal penalties to take this seriously.

Record-keeping requirements for AI interactions. RIBO already requires brokers to maintain written records of client communications, including verbal and email exchanges. It's reasonable to expect that AI-generated conversations will fall under the same obligation. If your AI tool doesn't produce exportable conversation logs, that's a gap you'll need to address.

Cross-border considerations. If your brokerage serves clients in multiple provinces, RIBO's guidance applies to your Ontario operations, but other provincial regulators may issue their own rules. Harmonization is an open question.

A Practical Readiness Checklist

If you want to get ahead of this, here's what to work through now.

Vendor audit. Review every AI tool your brokerage currently uses. For each one, document where client data goes, whether the vendor is SOC 2 certified, and whether client data is used for model training.

Internal policy. Write a clear policy on which AI tools are approved for client-facing use and which are not. Distribute it to all staff.

Human-in-the-loop process. Define how AI-generated client communications get reviewed. This could be spot-checking, keyword-triggered routing, or full review for certain conversation types. Document the process.

Transparency protocol. Decide how your brokerage will disclose AI use to clients. Update your website, onboarding materials, and messaging workflows accordingly.

Training. Brief your team on RIBO's expectations. They don't need to understand how large language models work. They need to know which tools use AI, what the risks are, and what their responsibilities are.

Conversation logging. Confirm that your AI tools produce exportable logs of every client interaction. Store them the same way you store email and call records.
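If your vendor exposes conversation exports, archiving them can be straightforward. A minimal sketch, assuming a simple JSON export (the message format and directory layout here are hypothetical; adapt to whatever your vendor actually provides):

```python
# Minimal sketch of archiving one AI conversation as a timestamped JSON
# file, stored alongside other client records. Field names are
# illustrative assumptions, not a real vendor schema.
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_conversation(client_id: str, messages: list[dict],
                         archive_dir: str = "ai_logs") -> Path:
    """Write one conversation to a timestamped JSON file for record-keeping."""
    Path(archive_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = Path(archive_dir) / f"{client_id}_{stamp}.json"
    path.write_text(json.dumps({
        "client_id": client_id,
        "archived_at_utc": stamp,
        "messages": messages,  # e.g. [{"sender": "ai", "text": "..."}]
    }, indent=2))
    return path
```

Whatever the format, the point is the same: AI conversations should land in the same retention system as your email and call records, with the same retrieval guarantees.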

The Bigger Picture

RIBO's guidance is a signal that the regulator takes AI seriously and expects brokerages to do the same. The brokerages that work through it deliberately will be fine. The ones that ignore it are taking a real risk as enforcement catches up.

But there's an opportunity here too. The Ontario brokerages that adopt AI responsibly, with proper transparency, oversight, and data protection, are going to be able to serve more clients, respond faster, and compete with direct-to-consumer insurers who have had technology advantages for years. RIBO isn't trying to hold you back. They're trying to make sure you do it right.

General Magic's Cell agent is built for RIBO-compliant brokerages. AI transparency, human-in-the-loop routing, SOC 2 certification, and full conversation logging come standard. See how it works
