From the desk of Dr. James L. Norrie — September 17, 2025

Most chatbots are accurate enough — but not persuasive. Real impact happens when advice feels personal, arrives on time, and is grounded in facts you can trust.

That’s why we fuse behavioural science with AI. When tone and structure match how each person makes decisions, guidance turns into action.

The human problem with machines that talk

Information lands when it feels like it understands us. Even good guidance gets ignored if the tone feels generic. People need to feel heard before they will listen — especially in high-stakes online moments where risk hides in plain sight.

Quick take: Trust is a pre-condition for action. Timing matters. Language and structure should match the user’s disposition.

Adding genuine personality to AI

Our research shows that personality — risk appetite, rule-following tendencies, and authority response — shapes how people take advice. Using the patented myQ™ instrument, we style-align responses so tone, structure, and phrasing match each user’s disposition. We call this fusion of psychology and AI: AIQ™.

Same intent, different styles:

Rule-focused: “Company policy says don’t approve unknown payments. Open a verified ticket so Finance can confirm.”

Risk-tolerant: “High-risk request. Two taps: forward to Finance, then archive. Save time and avoid exposure.”
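
To make style-alignment concrete, here is a minimal sketch in Python. The profile field, templates, and function names are illustrative assumptions, not the actual myQ™ or AIQ™ implementation.

```python
# Hypothetical sketch: pick a reply template that matches a user's myQ-style
# risk disposition. Field names and templates are illustrative only.
from dataclasses import dataclass

@dataclass
class UserProfile:
    risk_style: str  # e.g. "rule_focused" or "risk_tolerant" (assumed labels)

# Same intent (block an unknown payment request), styled per disposition.
TEMPLATES = {
    "rule_focused": (
        "Company policy says don't approve unknown payments. "
        "Open a verified ticket so Finance can confirm."
    ),
    "risk_tolerant": (
        "High-risk request. Two taps: forward to Finance, then archive. "
        "Save time and avoid exposure."
    ),
}

def style_aligned_reply(profile: UserProfile) -> str:
    """Return guidance phrased for the user's risk style, with a neutral fallback."""
    return TEMPLATES.get(
        profile.risk_style,
        "This payment request looks unverified. Please route it to Finance before acting.",
    )

if __name__ == "__main__":
    print(style_aligned_reply(UserProfile(risk_style="risk_tolerant")))
```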

Human factor

People act when they feel understood. One-size-fits-all tones erode trust.

Style-alignment

myQ™ maps risk style; AIQ™ adapts tone and structure so guidance lands.

RAG accuracy

Curated Retrieval-Augmented Generation reduces hallucinations and drift.

Getting the facts right, too

Warmth without accuracy fails. That’s why we pair hyper-personalised AI chatbots with Retrieval-Augmented Generation (RAG) from curated sources to reduce hallucinations.
The result is guidance that is both engaging and correct — a foundation for digital trust.

  • Curated sources: policy, knowledge base, approved vendors
  • On-policy answers: filtered for tone, scope, and brand
  • Traceable: citations and logs for audit and improvement
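
As an illustration of the retrieval step, the sketch below answers only from a small curated corpus and returns the citations that back each reply. The corpus entries, the naive overlap scoring, and the placeholder where a language model would normally be called are assumptions for illustration, not our production pipeline.

```python
# Illustrative RAG sketch: answer only from curated sources and keep a
# source trail for audit. Scoring and generation are simplified placeholders.
from dataclasses import dataclass

@dataclass
class Source:
    doc_id: str  # e.g. "policy/payments-v3" (hypothetical identifier)
    text: str

CURATED = [
    Source("policy/payments-v3", "Unknown payment requests must be verified by Finance."),
    Source("kb/phishing-101", "Unexpected links or urgent payment demands are common phishing signs."),
]

def retrieve(question: str, corpus: list[Source], k: int = 2) -> list[Source]:
    """Rank curated sources by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda s: len(q_words & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question: str) -> dict:
    """Return an on-policy answer plus the citations that back it."""
    hits = retrieve(question, CURATED)
    # A real system would pass the retrieved context to a language model here;
    # this sketch simply echoes the top source so the trace stays visible.
    return {"answer": f"Based on policy: {hits[0].text}", "citations": [h.doc_id for h in hits]}

if __name__ == "__main__":
    print(answer("Should I approve this unknown payment request?"))
```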

Why this matters in real life

Employees often hesitate to ask humans for “basic” help. In pilots, a company chatbot became the first line of defence for cybersecurity decisions, and failure rates dropped by more than half as people consulted — and acted on — advice in the moment.

50%+ drop in failure rates when guidance is trusted and timely

Beyond cybersecurity

The same approach can nudge medication adherence, support safer financial decisions, and improve everyday choices wherever trust and timing matter. Personalised AI chatbots turn information into action.

Ethical guardrails required

Any system this persuasive must be responsible. Our SAFER approach stresses consent, transparency, minimising undue influence (especially for vulnerable users), and auditability end-to-end.

  • Consent & clarity: users know when AI is advising.
  • Source trails: claims map to policy or vetted sources.
  • Controls: configurable influence limits for sensitive contexts.
  • Audit: logs for compliance and improvement.
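
One way to picture these guardrails is as a small configuration object checked before any persuasive nudge is sent. The keys, limits, and helper below are assumptions for illustration, not the SAFER specification.

```python
# Hypothetical guardrail configuration mirroring the SAFER points above.
# Keys and values are assumptions, not an actual SAFER schema.
GUARDRAILS = {
    "disclosure": {"label_ai_advice": True},      # users know when AI is advising
    "sources": {"require_citation": True},        # claims map to policy or vetted sources
    "influence": {                                # configurable limits for sensitive contexts
        "max_nudges_per_session": 2,
        "restricted_topics": ["health", "personal finance"],
    },
    "audit": {"log_retention_days": 365},         # logs kept for compliance and improvement
}

def nudge_allowed(topic: str, nudges_so_far: int) -> bool:
    """Check a proposed persuasive nudge against the configured influence limits."""
    if topic in GUARDRAILS["influence"]["restricted_topics"]:
        return False
    return nudges_so_far < GUARDRAILS["influence"]["max_nudges_per_session"]
```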

What good looks like

  • Hyper-personalised replies aligned to myQ™ risk styles
  • Behavioural prompts that earn trust and change behaviour
  • RAG-backed answers grounded in policy and curated sources
  • SAFER guardrails with measurable outcomes

Machines that talk like us only matter when they also understand us.

Start a micro-pilot
Prefer the designed version? Download the printable PDF with visuals and references.
