Designing an AI assistant that real estate agents could trust

How strategic content design turned an AI feature into a unified, human-centered experience.

TLDR

Role: AI Content Strategist (self-initiated)

Project: Enterprise B2C Agent Portal (home warranty sales platform)

Challenge: Assurant was building an AI assistant for their Agent Portal, but no one was leading the content strategy. Without it, the assistant's voice would fragment the portal experience and undermine agent trust.

My contribution: I proposed and led the AI content strategy — defining Sam's personality, voice & tone guidelines, content guardrails, and escalation logic. I collaborated with AI engineering, product, and marketing teams to establish the content foundation for Sam's development.

Deliverables:

  • Personality guidelines defining Sam's character and communication principles

  • Voice & tone guidelines with examples across scenarios

  • Content guardrails (with Product & Marketing)

  • Escalation logic framework (with AI engineering, dev, and Marketing)

  • Knowledge base recommendations (structure, quality standards, review processes)

Tools: Figma, Azure ADO, Excel (for escalation logic), Copilot

0.1 The opportunity

Assurant was building a Home Warranty Agent Portal to help real estate agents sell home warranties more effectively. The portal would let agents create quotes, manage orders, access resources, and get support — all in one place.

The more seamless the experience, the more home warranties agents would sell. And the more they sold, the better for Assurant's business.

As the project evolved, the team decided to add an AI assistant to help agents navigate the portal, answer questions, and escalate complex issues to human support.

But there was a gap.

No one was leading the assistant's content strategy. Without it, the AI's voice would be disconnected from the rest of the portal — fragmenting the experience and undermining the trust agents needed to rely on it.

I saw the opportunity and volunteered to lead.

0.2 Why I stepped in

At that point, I had already been working on content governance for the Agent Portal — building systems to keep tone, structure, and terminology consistent across the entire experience.

When I learned the portal would include an AI assistant, it became clear: if the assistant didn't speak the same language as the product, the experience would break.

Agents wouldn't trust it. They'd avoid it. The investment would fall flat.

So I asked to join every AI and product meeting.

This wasn't just curiosity. It was a strategic necessity.

I wanted to make sure the way the assistant sounded, responded, and escalated mirrored the same clarity, empathy, and structure we were building into the rest of the portal.

0.3 Building the foundation

I had already created a mini content style guide for the Agent Portal, grounding the UX writing in Assurant's brand voice while adapting it for the precision and warmth transactional UX demands.

Now, I used that foundation to shape the AI assistant.

I worked with the AI team to understand what was technically feasible, then partnered closely with Marketing to keep brand language current and accurate, aligning all consumer-facing terminology across the portal and the assistant.

Once the first version was ready, I met with stakeholders across product, marketing, and design to review and validate it.

When I presented it to the Product Owner, they shared it with the entire team. That moment confirmed what I already believed: clarity creates alignment.

0.4 Meet Sam

With the voice foundation in place, it was time to bring the assistant to life.

I used Copilot to generate a list of name options based on the content style guide, product requirements, and insights from marketing. The goal was to find a name that felt:

  • Friendly without being casual

  • Neutral without feeling robotic

  • Universal enough to resonate with a diverse agent audience

From that exercise, one name stood out: Sam.

It felt approachable and trustworthy, and the Marketing team loved it. It was exactly what agents needed from a tool designed to support them in high-stakes moments.

With a name and personality in place, we began building Sam's "mind."

0.5 Setting knowledge base standards

The AI team hadn't yet started building Sam's knowledge base — they were still in planning. I saw an opportunity to set the foundation early. I provided recommendations for how the knowledge base should be structured and maintained:

Content standards:

  • Verified and compliant: accurate, legally sound, and aligned with Assurant's policies

  • Up-to-date: reflecting the latest product features and business rules

  • Easy to understand: plain language, active voice, no ambiguity

Review process:

  • Subject matter experts validate technical accuracy

  • Legal reviews insurance/compliance-sensitive content

  • Marketing ensures brand alignment

  • Content team audits for clarity and grammar

Organization structure:

  • Categorize by query type (product info, workflow help, troubleshooting)

  • Tag by risk level (low-risk vs. compliance-sensitive)

  • Version control for updates

While I didn't curate the knowledge base content myself (that came later in the project timeline), I established the framework the team would use to build it — ensuring Sam's responses would be accurate, compliant, and trustworthy from day one.
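To make the framework concrete, here is a minimal sketch of how a knowledge base entry could be modeled so that categorization, risk tagging, versioning, and reviews live in data rather than convention. The schema, field names, and review roles are my own illustration, not Assurant's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class QueryCategory(Enum):
    PRODUCT_INFO = "product_info"
    WORKFLOW_HELP = "workflow_help"
    TROUBLESHOOTING = "troubleshooting"


class RiskLevel(Enum):
    LOW = "low"                          # navigation, general guidance
    COMPLIANCE_SENSITIVE = "compliance"  # insurance terms, legal language


@dataclass
class KnowledgeBaseEntry:
    """One reviewed, versioned answer Sam can draw from (hypothetical schema)."""
    title: str
    body: str                            # plain language, active voice
    category: QueryCategory              # categorize by query type
    risk_level: RiskLevel                # tag by risk level
    version: int = 1                     # version control for updates
    last_reviewed: date | None = None
    reviewers: list[str] = field(default_factory=list)  # e.g. "sme", "legal", "marketing", "content"

    def is_publishable(self) -> bool:
        """An entry ships only after the required reviews are recorded."""
        required = {"sme", "content"}
        if self.risk_level is RiskLevel.COMPLIANCE_SENSITIVE:
            required |= {"legal", "marketing"}
        return required.issubset({r.lower() for r in self.reviewers})
```

Encoding the review requirements this way mirrors the process above: compliance-sensitive entries can't go live without Legal and Marketing sign-off.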

0.6 Building voice, tone, and escalation logic

I created the voice & tone guidelines that would govern how Sam communicated.

Voice & tone guidelines included:

  • Personality principles: Sam should be helpful, patient, and trustworthy (never pushy or overly casual)

  • Tone examples showing how Sam should respond in different scenarios:

    • Answering simple questions (direct, concise)

    • Guiding through workflows (step-by-step, supportive)

    • Handling errors or confusion (empathetic, solution-oriented)

    • Escalating to human support (graceful handoff)

Content guardrails (developed with Product & Marketing):

  • Never provide coverage specifics or pricing exceptions (compliance risk)

  • Never handle sensitive data (credit cards, SSNs)

  • Never make promises outside Assurant's policies

  • Always defer to human support when uncertain

Escalation logic (refined with AI engineering, dev, and Marketing):

I worked with the team to define clear criteria for when Sam should answer vs. escalate. We used an Excel framework to map scenarios:

| Query type                                          | Sam's action                | Rationale                                |
|------------------------------------------------------|-----------------------------|------------------------------------------|
| Basic navigation ("Where's my dashboard?")           | Answer directly             | Low-risk, simple                         |
| Coverage specifics ("Is water damage covered?")      | Answer using knowledge base | Factual, documented info                 |
| Payment changes ("I need to update my credit card")  | Escalate to human           | Sensitive data, requires secure handling |

The goal: Sam should help agents move fast on simple tasks, but never risk giving inaccurate information on high-stakes questions. In insurance, wrong information isn't just bad UX — it's regulatory risk.
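To show how the guardrails and the escalation table could come together, here is a minimal routing sketch. The category labels, topic names, confidence threshold, and function are illustrative assumptions, not the logic the AI engineering team shipped.

```python
from enum import Enum


class Action(Enum):
    ANSWER_DIRECTLY = "answer_directly"            # low-risk, simple
    ANSWER_FROM_KNOWLEDGE_BASE = "answer_from_kb"  # factual, documented info
    ESCALATE_TO_HUMAN = "escalate_to_human"        # sensitive or uncertain


# Guardrails: topics Sam never handles on its own (hypothetical labels).
SENSITIVE_TOPICS = {"credit_card", "ssn", "pricing_exception"}

# Mirrors the escalation table: query category -> Sam's action.
ROUTING = {
    "navigation": Action.ANSWER_DIRECTLY,           # "Where's my dashboard?"
    "coverage": Action.ANSWER_FROM_KNOWLEDGE_BASE,  # "Is water damage covered?"
    "payment": Action.ESCALATE_TO_HUMAN,            # "I need to update my credit card"
}


def route(category: str, detected_topics: set[str], kb_confidence: float) -> Action:
    """Decide whether Sam answers or hands off, applying guardrails first."""
    if detected_topics & SENSITIVE_TOPICS:          # never handle sensitive data
        return Action.ESCALATE_TO_HUMAN
    action = ROUTING.get(category, Action.ESCALATE_TO_HUMAN)  # unknown query types defer to humans
    if action is Action.ANSWER_FROM_KNOWLEDGE_BASE and kb_confidence < 0.8:
        return Action.ESCALATE_TO_HUMAN             # don't guess on high-stakes questions
    return action
```

Putting the guardrail check before the routing table is the point: "always defer to human support when uncertain" wins over any other rule.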

0.7 What this project taught me

Designing an AI assistant from scratch goes beyond the technology behind it.

You have to design for people — how they think, how they communicate, and how they trust information.

This project reinforced why I love what I do: I help shape systems that sound human, act with clarity, and connect teams around a shared voice.

I learned that:

  • Ownership creates opportunity. Asking to join those meetings changed the scope of my role and the quality of the final product.

  • Voice is infrastructure. A clear tone and escalation strategy build trust, both for users and for teams.

  • AI reflects its inputs. The quality of the system mirrors the clarity, empathy, and consistency of the content behind it.

0.8 Reflection

I left Assurant before Sam launched, but I documented everything so the team could continue building from the foundation I'd established.

What I delivered:

  • Personality guidelines: Defining Sam's character, tone, and communication principles

  • Voice & tone guidelines: Examples showing how Sam should respond across scenarios

  • Content guardrails: Preventing compliance and legal risks (developed with Product & Marketing)

  • Escalation logic framework: Criteria for when Sam should answer vs. escalate (refined with AI engineering, dev, and Marketing)

  • Knowledge base recommendations: Structure, quality standards, and review processes

    These deliverables gave the AI and product teams a strategic content foundation to build from — ensuring Sam would communicate clearly, compliantly, and consistently.

What I would measure if Sam had launched:

If I had stayed to see Sam in production, I would track:

  • Adoption rate: % of agents who used Sam vs. contacted support

  • Response accuracy: % of queries Sam answered correctly

  • Escalation quality: % of escalations that were appropriate (not premature, not delayed)

  • Agent satisfaction: post-interaction NPS or satisfaction score

    These metrics would validate whether Sam achieved its goal: helping agents work efficiently while maintaining trust.
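As an illustration only, the sketch below shows how those four metrics could be computed from interaction logs; the record fields and the 1-5 satisfaction scale are assumptions, since no analytics schema existed when I left.

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    """One logged Sam conversation (hypothetical fields)."""
    used_sam: bool                       # agent tried Sam instead of contacting support first
    answered_correctly: bool | None      # None when the query was escalated
    escalated: bool
    escalation_appropriate: bool | None  # judged in review; None when not escalated
    satisfaction: int | None             # post-interaction score, 1-5


def summarize(logs: list[Interaction]) -> dict[str, float]:
    """Compute adoption, accuracy, escalation quality, and satisfaction."""
    answered = [i for i in logs if i.answered_correctly is not None]
    escalated = [i for i in logs if i.escalated]
    rated = [i for i in logs if i.satisfaction is not None]
    return {
        "adoption_rate": sum(i.used_sam for i in logs) / len(logs),
        "response_accuracy": sum(i.answered_correctly for i in answered) / len(answered),
        "escalation_quality": sum(bool(i.escalation_appropriate) for i in escalated) / len(escalated),
        "avg_satisfaction": sum(i.satisfaction for i in rated) / len(rated),
    }
```

Escalation quality uses only escalated interactions as its denominator, which is what separates "not premature, not delayed" from simple escalation volume.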

The lesson:

Sometimes all it takes to lead is seeing a gap and proposing to fill it. I didn't wait for permission. I saw the content risk, proposed a solution, and built the foundation for Sam's content strategy.

That's what strategic content design looks like.

What’s next?

View Money Canvas project
View Plata Inversiones project