Designing Sam

The content decisions behind an AI assistant that real estate agents could trust

Assurant was building an AI assistant for their Home Warranty Agent Portal — a platform where real estate agents create quotes, manage orders, and get support. The assistant needed to help agents move fast on simple tasks without ever giving wrong information on high-stakes questions.

In insurance, that line between "helpful" and "dangerous" isn't abstract. Wrong coverage information is a regulatory risk. An overly casual tone undermines credibility. A chatbot that can't gracefully hand off to a human makes agents trust the whole platform less.

I led the content strategy for this assistant — from personality and voice to escalation logic and guardrails. This case study isn't about what I delivered. It's about the thinking behind the decisions.

The core design problem

Most AI assistant projects start with a question about capability: what can it do? I started with a different question: what should it refuse to do?

Real estate agents are working in a compliance-heavy space. They need accurate, trustworthy answers — and they need to know when they're not getting one. An assistant that confidently answers everything is more dangerous than one that knows its limits. So the first design decision was: Sam's intelligence is defined by what it escalates, not just what it answers.

That principle shaped every content decision that followed.

Designing the voice

I'd already built a content style guide for the Home Warranty product — Assurant's first-ever content standards for that vertical. I adapted Assurant's brand voice to the specific audience: real estate agents navigating transactional workflows where clarity and warmth both matter.

That guide became Sam's foundation. The assistant would inherit the same voice principles, terminology, and tone as the rest of the portal, because inconsistency kills trust, and trust is the whole point.

But an AI assistant has voice challenges that static UI doesn't. Specifically:

  • Tone has to flex by scenario without breaking character.

Sam answering "Where's my dashboard?" needs to sound different from Sam saying "I can't help with that — let me connect you to someone who can." Both need to feel like the same personality. I created tone examples across four scenario types: simple navigation (direct, concise), workflow guidance (step-by-step, supportive), error handling (empathetic, solution-oriented), and escalation (graceful, transparent). The personality stays constant. The register shifts.

  • Warmth can't come at the cost of precision.

In consumer apps, you can be playful. In insurance, playfulness reads as carelessness. I defined Sam's personality as helpful, patient, and trustworthy — never pushy, never overly casual. The line I kept coming back to: Sam should feel like a knowledgeable colleague, not a customer service script and not a friend.

  • The name matters more than you think.

I used Copilot to generate options, feeding it the PRD, the voice guidelines, and audience context. We needed something approachable without being casual, trustworthy without being robotic, universal enough for a diverse agent base. Sam stood out — and when I presented it to the Head of Marketing, it landed immediately.

I wanted the way the assistant sounded, responded, and escalated to mirror the same clarity, empathy, and structure we were building into the rest of the portal.

Designing the guardrails

This was the hardest part of the project, and the part I'm proudest of.

Working with the Head of Marketing, a Senior Strategist, the Head of Engineering, and two PMs, I built a decision framework for every type of query Sam might receive. The question was always: does Sam answer, or does Sam escalate?

We mapped it in Excel by query type, risk level, and action:

  • Basic navigation ("Where are my quotes?"): Sam answers directly. Low risk, simple retrieval.

  • Documented product information: Sam answers from the knowledge base. Medium risk, requires accuracy.

  • Coverage specifics ("Is water damage covered under this plan?"): Sam escalates to a human. High risk. Even if the answer exists in documentation, the nuance of coverage terms means a confident wrong answer could create regulatory exposure.

  • Sensitive data ("I need to update my credit card"): Sam escalates immediately. Non-negotiable.

  • Anything uncertain: Sam escalates. The default is humility, not confidence.
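The decision framework above is, at its core, a routing table with escalation as the default. A minimal sketch in Python — category names and the `route` helper are hypothetical stand-ins for what actually lived in the Excel matrix:

```python
from enum import Enum

class Action(Enum):
    ANSWER = "answer directly"
    ANSWER_FROM_KB = "answer from the knowledge base"
    ESCALATE = "hand off to a human"

# Hypothetical mapping of query categories to actions; the real framework
# was maintained as an Excel matrix with product, marketing, and engineering.
ROUTING = {
    "navigation": Action.ANSWER,            # low risk: simple retrieval
    "product_info": Action.ANSWER_FROM_KB,  # medium risk: must match documentation
    "coverage": Action.ESCALATE,            # high risk: regulatory exposure
    "sensitive_data": Action.ESCALATE,      # non-negotiable
}

def route(category: str) -> Action:
    # Anything uncertain or unrecognized escalates: the default is humility.
    return ROUTING.get(category, Action.ESCALATE)
```

The design choice worth noticing is the fallback: an unrecognized query type never falls through to an answer. Confidence has to be earned by an explicit entry in the table.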

The engineering team initially wanted Sam to handle more. Their instinct was capability: if Sam can answer it, Sam should answer it. I pushed back — not because the technology couldn't do it, but because the trust model couldn't afford it. In insurance, one wrong answer doesn't just frustrate a user. It exposes the business. I made the case that Sam's value wasn't in answering everything — it was in being reliably right about what it did answer, and transparently honest about what it didn't.

That was probably the most important content decision on the project.

Designing for coherence across channels

One thing I noticed early: the marketing emails agents received, the portal UI, and the planned chatbot were being built by different teams with different assumptions about tone. An agent could get a warm, encouraging email, then land in a portal that sounded clinical, then talk to a chatbot that sounded like neither.

I audited the existing marketing emails and rewrote them, working closely with the Head of Marketing and Senior Strategist. This wasn't about "fixing copy" — it was about making sure the agent's experience from first email to portal to chatbot felt like one voice. If we were asking agents to trust Sam, it couldn't sound like a stranger in a product they already knew.

Knowledge base recommendations

The AI team hadn't started building Sam's knowledge base yet. I saw the opportunity to set content standards before the content existed — which is always easier than retrofitting quality after the fact.

I recommended a structure organized by query type (product info, workflow help, troubleshooting) and tagged by risk level (low-risk vs. compliance-sensitive). I defined quality standards: verified and compliant, written in plain language and active voice, reviewed by subject matter experts for accuracy, legal for compliance, marketing for brand alignment, and content for clarity. And I recommended version control so updates could be tracked as policies changed.
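To illustrate, a single knowledge base entry under that recommended structure might carry fields like these. This is a hypothetical schema sketch — the field names, values, and the `KBEntry` class are mine, not Assurant's:

```python
from dataclasses import dataclass, field

@dataclass
class KBEntry:
    # Hypothetical schema: organized by query type, tagged by risk level,
    # with review sign-offs and a version number tracked per entry.
    question: str
    answer: str
    query_type: str   # "product_info" | "workflow_help" | "troubleshooting"
    risk_level: str   # "low" | "compliance_sensitive"
    reviewed_by: list = field(default_factory=list)  # e.g. ["SME", "legal", "marketing", "content"]
    version: int = 1  # incremented as policies change

entry = KBEntry(
    question="How do I create a quote?",
    answer="From your dashboard, select New Quote and follow the steps.",
    query_type="workflow_help",
    risk_level="low",
    reviewed_by=["SME", "content"],
)
```

Tagging risk at the entry level is what lets the escalation logic and the knowledge base stay in sync: a compliance-sensitive entry can require the full review chain before Sam is ever allowed to surface it.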

I didn't curate the knowledge base content myself — that came later in the project timeline. But I built the framework so the team filling it had clear standards to work from.

What I left behind

I left Assurant before Sam launched. But I delivered a complete content foundation: personality guidelines, voice and tone standards with scenario examples, content guardrails, an escalation logic framework, knowledge base recommendations, and rewritten marketing emails aligned with the portal voice.

These weren't just documents. They were decisions — each one grounded in specific reasoning about what agents need, what the business can't afford to get wrong, and where the line sits between AI capability and AI trustworthiness.

What I'd do differently

1 - Test with agents.

The guidelines were validated by stakeholders across product, marketing, engineering, and design. But real agents interacting with Sam's responses would have surfaced things internal review can't — especially around tone expectations and trust signals in high-stress moments.

2 - Define measurement upfront.

I'd build the success metrics (adoption rate, response accuracy, escalation quality, agent satisfaction) into the strategy document from day one, so the team launching Sam would have a clear framework for evaluating whether the content decisions were right.

Thanks for reading!