Version 1.0 | Last Updated: September 2025
Introduction: What the Smart Agent Service Manual is
The Smart Agent Service Manual (SASM) is Clarity’s framework for running safe and reliable AI-driven customer service. Think of it as the rulebook that tells our AI agents exactly what they are allowed to do, what they must avoid, and when they need to hand over to a human.
Clarity has three main products:
Voice of Customer: A feedback-analysis platform that takes raw customer input, organizes it into clear themes, and filters out noise such as irrelevant details. It uses AI to surface key issues, spot patterns, and gauge sentiment, helping teams see what matters most. It also includes alerts, reporting, and search to make feedback easier to track and act on.
Customer Service: A central system that brings all customer inquiries into one place, whether they come from live chat, mobile, WhatsApp, or web forms. It manages team inboxes, schedules, and routing so issues go to the right people. It also includes a knowledge base with company policies and product details, which AI agents use to draft replies or even resolve simple cases on their own.
AI Agents: Intelligent assistants that help teams respond faster and more accurately. They pull answers directly from your knowledge base, suggest replies you can approve or edit, and learn from feedback over time. With built-in guardrails, AI agents can safely handle common requests on their own or escalate complex issues to the right person.
The SASM governs AI Agents and Customer Service, known together internally as the AI Hub. It ensures that answers are based only on approved company information, that financial decisions such as refunds follow strict policies, and that every action is recorded for audit purposes.
System overview for AI Hub
Knowledge Base generation from historical tickets and uploaded docs (with article criticality, confidence, and frequency).
Agent Assist in‑ticket (reasoned, personalized drafts; human‑in‑the‑loop with thumbs up/down and issue reporting).
Deflection Agent that auto‑responds when confidence ≥ policy threshold and hands off when low.
Feedback to training queue with AI summarization, recommended fixes, related articles, and a reviewer workflow (Pending → In‑Progress → Resolved/Dismissed).
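The deflection threshold and reviewer workflow above can be sketched as follows. This is a minimal illustration: the threshold value and all names are assumptions for this sketch, not the AI Hub's actual API.

```python
from enum import Enum

# Assumed policy threshold for illustration; in the product this is set per policy.
CONFIDENCE_THRESHOLD = 0.85

class FeedbackStatus(Enum):
    """Reviewer workflow states for reported AI issues."""
    PENDING = "Pending"
    IN_PROGRESS = "In-Progress"
    RESOLVED = "Resolved"
    DISMISSED = "Dismissed"

def route_ticket(confidence: float, draft: str) -> dict:
    """Deflection gate: auto-respond only at or above the policy threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "auto_respond", "reply": draft}
    # Low confidence: hand off to a human agent instead of guessing.
    return {"action": "handoff", "reason": f"confidence {confidence:.2f} below threshold"}
```

The key design point is that the threshold lives in policy, not in code: changing risk tolerance never requires redeploying the agent.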
How Clarity prevents AI mistakes
One of the biggest concerns with AI is that it can sometimes make up information, a problem known as “hallucination.” Our system is built to stop this.
Use of approved sources only: The AI can only respond using your company’s approved knowledge base. It cannot create answers from scratch or pull in outside information. Every reply is linked to the exact source that supports it.
Verification checks: Before an answer is shown to a customer, it goes through automated checks to confirm that it is fully supported by the source material. If the system is not confident, it will not answer.
Escalation when uncertain: If the AI cannot find enough evidence, it automatically passes the case to a human support agent. It also provides a short summary of what it knows and what is missing, so the human can respond quickly.
Human feedback loop: Support agents can give feedback on AI suggestions, marking them as helpful or unhelpful. This feedback is collected, reviewed by managers, and used to improve the knowledge base and the AI’s behavior over time.
The result is a system where customers either receive correct and verified information or are transferred smoothly to a human agent. Crucially, they never receive a guess.
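The "verified answer or smooth handoff" flow can be sketched as below. All function and field names are illustrative assumptions, not Clarity's actual API; the support score stands in for whatever automated verification check the deployment uses.

```python
def verify_and_respond(draft: str, sources: list, support_score: float,
                       min_support: float = 0.95) -> dict:
    """Return a verified answer with citations, or an escalation summary."""
    if sources and support_score >= min_support:
        # Every reply is linked to the exact sources that support it.
        return {"status": "answered", "reply": draft, "citations": sources}
    # Not confident enough: escalate with a summary of what is known and missing,
    # so the human agent can respond quickly.
    return {
        "status": "escalated",
        "summary": {
            "known": sources,
            "missing": "insufficient support for a verified answer",
        },
    }
```

Note that an answer with no supporting sources is escalated even at high confidence: grounding in approved material is a hard requirement, not a score to trade off.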
Financial actions and refunds
Handling money requires special care. The SASM treats actions like refunds, fee adjustments, or account changes as tightly controlled steps.
Policies define actions: Each financial action has written rules that describe the conditions required, the evidence needed, and who must approve it. This is set by your company.
Safe automation: Small, low-risk refunds can be handled automatically if all the rules are met.
Dual approval for larger cases: For medium transactions, a supervisor needs to approve. For larger or higher-risk ones, both a supervisor and a finance officer must approve. These rules can be adjusted to fit your own setup.
Full audit trail: Every decision records what rules were checked, which evidence was used, and who approved it. This log can be reviewed at any time by auditors or regulators.
This approach ensures financial operations are both efficient and safe.
Objectives for financial actions and refunds (safe automation by policy):
Accuracy and fairness: the right outcome, reached through documented reasoning.
Safety: no unauthorized money movement.
Auditability: a clear record of who decided what, when, and why.
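The tiered approval rules and audit trail described above can be sketched as follows. The dollar thresholds here are illustrative assumptions; in practice each company defines them in its own policy.

```python
from datetime import datetime, timezone

# Illustrative thresholds only; each company sets these in policy.
AUTO_LIMIT = 50.0         # small, low-risk refunds: fully automated
SUPERVISOR_LIMIT = 500.0  # medium refunds: supervisor approval

def required_approvals(amount: float) -> list:
    """Map a refund amount to the approvers the policy requires."""
    if amount <= AUTO_LIMIT:
        return []                             # safe automation
    if amount <= SUPERVISOR_LIMIT:
        return ["supervisor"]                 # single approval
    return ["supervisor", "finance_officer"]  # dual approval

def audit_record(amount: float, evidence: list, approvers: list) -> dict:
    """Log which rules were checked, which evidence was used, and who approved."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "amount": amount,
        "rules_checked": ["auto_limit", "supervisor_limit"],
        "evidence": evidence,
        "approved_by": approvers if approvers else ["system:auto"],
    }
```

Because the audit record is produced for every decision, including automated ones, auditors can replay any refund from the same rules and evidence the system used.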
Governance and accountability
Executives and regulators expect clear accountability. SASM provides it.
Clear ownership: A Chief Model Risk Officer (CMRO) is responsible for overseeing AI operations and has the authority to stop deployments if needed.
Documentation: Every model, policy, and change is recorded with a standardized template. Version history and change logs are always available.
Incident protocols: If something goes wrong, issues are classified by severity with clear response times. A no-blame culture encourages employees to report problems quickly, so they can be fixed before they escalate.
This gives leadership confidence that the AI system is managed with the same seriousness as other critical business processes.
Testing and validation
Before the AI is trusted with real customer interactions, it must pass through careful testing:
Internal testing: Our own teams use the system to handle their tickets first. (This is often called “dogfooding,” meaning using our own product internally before releasing it.)
Shadow mode: The AI suggests answers, but only human agents send them to customers. This lets us measure accuracy safely.
Pilot rollout: The AI handles a small portion of cases under close monitoring. If any issues appear, changes can be rolled back immediately.
We measure:
How often answers are fully supported by company information.
How reliably the AI refuses to answer when it is unsure.
Whether automated resolutions actually solve customer issues without the customer needing to come back.
How accurate refund decisions are compared to human judgment.
How well the system withstands security tests, such as attempts to trick it into giving unauthorized answers.
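Two of these measures can be computed from labeled evaluation cases roughly as follows. The field names ("action", "supported", "has_evidence") are assumptions for this sketch, not Clarity's actual evaluation schema.

```python
def evaluate(cases: list) -> dict:
    """Compute grounding and refusal metrics over labeled evaluation cases."""
    answered = [c for c in cases if c["action"] == "answer"]
    no_evidence = [c for c in cases if not c["has_evidence"]]
    return {
        # Share of answers fully supported by approved company information.
        "grounding_rate": sum(c["supported"] for c in answered)
                          / max(len(answered), 1),
        # Share of no-evidence cases the AI correctly refused to answer.
        "refusal_discipline": sum(c["action"] == "escalate" for c in no_evidence)
                              / max(len(no_evidence), 1),
    }
```

Measuring refusal discipline separately matters: an AI that answers everything can score well on volume metrics while failing exactly the cases where it should have escalated.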
Even after launch, we continue monitoring in real time, with alerts for unusual behavior and scheduled audits for fairness and content quality.
Security and privacy
SASM assumes strong protections from day one:
Data protection: All information is encrypted during transmission and while stored. Keys are rotated regularly, and customers may manage their own encryption keys if they choose.
Personal data handling: Sensitive personal information can be automatically hidden or removed. Retention periods can be customized to your compliance needs.
Identity and access: Single sign-on (SSO), multi-factor authentication, and role-based access control ensure that only the right people can access the system.
Data usage: Foundation AI models are not trained on your customer data. They only use your approved knowledge base to generate responses.
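The automatic hiding of sensitive personal data mentioned above could, at its simplest, work like this sketch. Real PII detection needs far more robust methods (named-entity recognition, checksum validation, locale handling); the two regex patterns here are illustrative assumptions only.

```python
import re

# Illustrative patterns only; production redaction uses much stronger detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matched personal data with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redacting before storage, rather than at display time, keeps sensitive values out of logs, search indexes, and model context alike.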
Service levels and key metrics
We track and report on clear metrics that matter to executives, customers, and regulators:
Truthfulness: At least 99.5 percent of answers must be supported by approved company information.
Escalation discipline: The AI must escalate 100 percent of questions it cannot confidently answer.
Automated resolution: Routine questions are resolved by the AI without the customer needing to reopen the case.
Refund accuracy: Automated refund decisions match human gold standards within defined tolerances.
Response times: Performance is measured against contracted service levels for time to first response and time to full resolution.
Implementation playbook
Rolling out SASM follows a clear, step-by-step plan:
Discovery: We work with your leaders to identify use cases, refund logic, and risk tolerance. For example, a bank will require heightened security controls.
Knowledge base setup: We import your historical tickets and documents, then tag articles by importance and reliability.
Policy writing: Together, we create the rules for confidence thresholds, approvals, and escalation.
Systems integration: We connect Clarity to your ticketing and financial systems.
Controls and tests: We establish baseline metrics and run safety and stress tests.
Shadow and pilot rollout: The AI is introduced gradually, first in shadow mode, then in a limited pilot.
Full launch with oversight: The CMRO oversees the launch with daily reviews in the first weeks.
Continuous improvement: Feedback and monitoring drive regular updates to the knowledge base and policies.
Alignment with global standards
SASM aligns with leading international frameworks so that enterprises in regulated industries can adopt it confidently:
NIST AI Risk Management Framework: Governance, risk identification, measurement, and mitigation.
Federal Reserve SR 11-7: Standards for model risk management, validation, and documentation.
ISO/IEC 42001: International management system standard for AI, including continuous improvement and corrective actions.
Closing note
The Smart Agent Service Manual is more than a technical guide. It is the framework that ensures Clarity’s AI agents are safe, trustworthy, and fully auditable. For executives, it provides peace of mind that customer support can be automated without increasing compliance risk. For regulators, it provides transparency and accountability. And for customers, it ensures fast, accurate, and fair support.
References
National Institute of Standards and Technology (NIST). AI Risk Management Framework 1.0 (AI RMF 1.0). January 2023. https://www.nist.gov/itl/ai-risk-management-framework
NIST. Generative AI Profile (Draft). June 2024. https://www.nist.gov/itl/ai-risk-management/generative-ai-profile
Board of Governors of the Federal Reserve System & Office of the Comptroller of the Currency. Supervisory Guidance on Model Risk Management (SR 11-7). April 2011. https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm
International Organization for Standardization (ISO) & International Electrotechnical Commission (IEC). ISO/IEC 42001:2023 Artificial Intelligence Management System (AIMS) — Requirements. December 2023. https://www.iso.org/standard/81230.html
Dotan, Ravit. Responsible AI Frameworks and Evaluation Practices. Responsible AI Research, 2022–2024. https://www.ravitdotan.com
Clarity Internal Documentation. Smart Agent Service Manual v1.0 Development Transcript and Technical Notes. September 2025.