AI Agent Playbook
A practical guide to designing, building, and operating AI-powered agents with Serval at Alto — for any team.
Philosophy & Approach
The mindset behind effective AI agent design — start with the human, automate with intention.
The Human-Centered Agent
The goal is not to replace your team with automation. The goal is to free your team from repetitive, predictable work so they can focus on strategic initiatives, process improvements, and the complex problems that humans are inherently good at solving.
A well-designed AI agent should:
- Think like your team — Follow the same workflows, priorities, and safety checks already in place. The agent should make decisions the way a well-trained team member would.
- Talk like your team — Use a tone that matches how your team currently supports employees. If your team is warm and conversational, the agent should be too.
- Act like your team — Show good judgment, escalate when something is unclear, and always log work appropriately. The agent should never guess when it should ask.
- Know its limits — Always give employees the option to talk to a human. Clearly explain when and why a request is being handed off.
When to Automate
Automation works well for requests that are:
- High-volume and repetitive — the same request comes in multiple times per day or week
- Clearly decidable — eligibility or next steps can be determined from available data
- Predictable — the resolution follows a consistent pattern
- Low-judgment — doesn't require weighing nuanced tradeoffs
HR: PTO balance inquiries, onboarding checklists, benefits enrollment questions, policy lookups
Finance: Expense report status, invoice routing, budget approval workflows, vendor setup requests
Security: Phishing report triage, access review reminders, compliance questionnaire routing
When to Keep Humans in the Loop
Some requests should always involve human review, even if parts can be automated:
- Sensitive access — production systems, financial data, PII
- Judgment calls — context matters, and the "right" answer depends on circumstances
- High-risk actions — mistakes would cause significant damage or compliance issues
- Novel situations — requests the agent hasn't seen before
The Automation Spectrum
The "New Hire" Mental Model
Imagine headcount was approved for your team, but the role was filled by someone capable who has no context about your company, your tools, or your processes. This new hire doesn't yet know what your team does or how you do it — but they learn very quickly. How would you approach training them?
This is essentially what onboarding an AI agent is like.
You wouldn't hand them a 200-page manual and walk away. You'd start with the basics, give clear instructions, observe how they handle requests, correct mistakes early, and gradually expand their responsibilities. You'd explain not just what to do but when to ask for help.
Serval Building Blocks
Understanding when to use Guidance, Workflows, Knowledge Base, and Access Management.
Quick Reference
| Tool | Purpose | Use For |
|---|---|---|
| Guidance | Teaches the agent how to behave | Decision logic, tone, escalation rules, multi-step procedures, linking workflows |
| Workflows | Executes deterministic actions | API calls, provisioning, data lookups — runs the same way every time |
| Knowledge Base | Provides reference information | FAQs, policies, how-to articles, documentation |
| Access Mgmt | Handles SCIM & JIT provisioning | Application access, role assignments, time-bound permissions with approvals |
Guidance in Depth
Guidance documents are instructions you write for Serval's help desk agent. When a user submits a request that matches your guidance, the agent follows your instructions to respond.
When to use Guidance:
- Agent needs to follow a consistent tone or communication style
- Agent must make decisions in a multi-step troubleshooting flow
- Agent should follow an internal playbook or SOP
- Agent needs to handle edge cases or sensitive situations
- Agent must determine whether and when to run a workflow
Guidance Structure
| Field | Purpose | Example |
|---|---|---|
| Name | Short title the agent uses to identify this guidance | "Figma Access Request" |
| When should Serval use this guidance? | When this guidance applies — write from the user's perspective | "User is requesting access to Figma for design work" |
| How should Serval handle this? | The actual instructions the agent follows | Step-by-step instructions, conditions, workflow references |
Guidance + Workflows
Guidance and workflows work together but don't always need to be paired. For straightforward actions with a single input and clear outcome, a workflow can run without guidance. Add guidance when:
- The agent needs significant context before running a workflow
- The outcome depends on confirming details with the user
- There are edge cases, policy constraints, or expectations to explain
- The agent needs help choosing whether or not to run the workflow
Workflows in Depth
Workflows are automations that execute deterministic actions: you describe what you want in plain language, Serval generates the code, and that code then runs exactly as written every time.
Workflow Types
| Type | Trigger | Use Case |
|---|---|---|
| Help Desk | AI agent during conversations | User-facing automation |
| Team-Only | Team members manually | Internal operations |
| Scheduled | Defined schedule | Daily reports, recurring tasks |
| Webhook | External systems via API | Cross-system integration |
| Event-Triggered | System events | Reactive automation |
Create a workflow when:
- You need to call an external API or use a native Serval connection
- The action should be repeatable and consistent
- The action might need approval before execution
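To make the "deterministic action" idea concrete, here is a minimal sketch of what a generated help-desk workflow might look like: one input in, the same output out every time. The HRIS endpoint, field names, and functions are invented for illustration; this is not a real Serval integration.

```python
import json
import urllib.request

HRIS_BASE_URL = "https://hris.example.com/api/v1"  # hypothetical HRIS endpoint


def fetch_pto(employee_email: str, api_token: str) -> dict:
    """Call the (hypothetical) HRIS API for an employee's PTO record."""
    req = urllib.request.Request(
        f"{HRIS_BASE_URL}/employees/{employee_email}/pto",
        headers={"Authorization": f"Bearer {api_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def summarize_pto(record: dict) -> str:
    """Format the lookup result for the requester. Deterministic:
    the same record always produces the same reply."""
    return (
        f"You have {record['available_hours']} hours of PTO available, "
        f"with {record['pending_hours']} hours pending approval."
    )
```

Keeping the API call separate from the response formatting makes the deterministic part trivial to test on its own, which is useful when a workflow needs review before publishing.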
Knowledge Base
The Knowledge Base stores reference content the agent retrieves when answering questions. Unlike Guidance (which tells the agent how to act), Knowledge Base tells the agent what to say.
Best practice: Link to authoritative external documentation rather than duplicating content. If Apple's support site has the definitive troubleshooting guide, link to it. This keeps information current and reduces maintenance.
Access Management
Access Management automates just-in-time role-based access provisioning directly from help desk requests. It manages the complete lifecycle: request → approval → provisioning → audit → revocation.
| Concept | What It Does |
|---|---|
| Access Profiles | Control who can request access to specific applications or roles |
| Access Policies | Define the rules: time limits, approval requirements, justification |
| Provisioning Methods | Determine how access is granted: IdP groups, direct API, workflows, or manual |
Always-Used Guidance
Some guidance should apply to every conversation. Mark guidance as "Always use" for tone of voice, universal ticket routing rules, compliance requirements, and security protocols.
How Guidance Gets Applied to Tickets
When a user submits a request, Serval matches it against your guidance library. Multiple pieces of guidance can be tagged to a single ticket:
- Always-Used guidance is included in every conversation automatically
- Scenario-specific guidance is matched based on the description
- The agent considers all tagged guidance when formulating its response
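Serval's matching happens inside the product, but as a rough mental model the tagging step can be sketched like this. The `keyword_match` stand-in is deliberately naive and purely illustrative; real matching is semantic.

```python
def select_guidance(ticket_text, guidance_library, matches):
    """Return every guidance doc tagged to a ticket: always-used docs
    unconditionally, scenario docs only when the description matches."""
    always = [g for g in guidance_library if g["always_used"]]
    scenario = [
        g for g in guidance_library
        if not g["always_used"] and matches(ticket_text, g["description"])
    ]
    return always + scenario


def keyword_match(ticket_text, description):
    # Naive stand-in for semantic matching: any shared word over 4 chars.
    ticket_words = {w.lower().strip(".,") for w in ticket_text.split()}
    return any(
        w.lower() in ticket_words and len(w) > 4
        for w in description.split()
    )
```

The shape of the model explains the consolidation advice below: every always-used doc is in every conversation, so each one you add costs context in all tickets, not just the relevant ones.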
| ✓ Do This | ✗ Avoid This |
|---|---|
| One "Response Formatting & Tone" guidance with all formatting rules | Separate Always-Used guidance for "Use first names," "Be concise," "Don't use em dashes" |
| One "Knowledge Handling" guidance covering sourcing and citations | Separate guidance for "Check knowledge base first" and "When to cite articles" |
| Scenario-specific guidance for things that don't apply universally | Marking guidance as Always-Used "just to be safe" |
When to Create New Always-Used vs. Update Existing
| Situation | Action |
|---|---|
| Adding a new formatting rule | Update existing Response Formatting guidance |
| Adding a new compliance requirement | Update existing Compliance guidance, or create new if distinct category |
| Creating guidance for a specific app or scenario | Don't mark as Always-Used — use scenario-specific guidance |
| Unsure if it applies universally | Start as scenario-specific, promote to Always-Used only if needed everywhere |
Agent Tone & Personality
Your agent's personality should reflect your team's values and communication style. This is typically configured through Always-Used Guidance.
| Element | What to Consider |
|---|---|
| Tone | Warm and conversational? Professional and efficient? Match how your team actually talks to employees. |
| Formatting | Bullet points or prose? How much detail by default? When to offer more vs. keep it brief? |
| Escalation language | "Let me connect you with the team" feels better than "Escalating to human agent." |
| Boundaries | Never guess at sensitive info, never promise timelines it can't guarantee, never make policy decisions. |
Decision Tree: Which Tool Should I Use?
Key Takeaways
| Tool | Remember |
|---|---|
| Guidance | How to behave — train the agent like a new hire |
| Workflows | What to do — deterministic, repeatable actions |
| Knowledge Base | What to say — reference content for answers |
| Access Management | Who gets what — JIT provisioning with approvals |
| Always-Used | Universal rules — use sparingly |
Guidance Design Patterns
How to write effective guidance that's brief but airtight.
Core Principles
- Be brief but airtight. Guidance should be as short as possible while being comprehensive enough that an AI agent considering every possibility would still do the right thing.
- Don't retread steps. If the user has already tried something, don't make them repeat it. Ask what they've attempted and skip those steps.
- Use direct, personal language. Write "Here's what I'd recommend" rather than "The recommended approach is..." The agent should sound like a helpful colleague, not a knowledge base article.
- Focus on actionable guidance. Tell the agent what to do, not why the underlying system works. Save explanations for Knowledge Base articles.
- Link to authoritative sources. Use hyperlinks to external documentation rather than duplicating troubleshooting content.
Guidance Structure Template
Use this structure for consistent, effective guidance:
| Section | Purpose |
|---|---|
| When should Serval use this? | Describe trigger conditions from the user's perspective |
| How should Serval handle this? | Direct instructions as imperative statements (numbered steps) |
| Important context | Policies, constraints, or edge cases the agent needs to know |
| Related resources | Links to documentation, troubleshooting guides, or support articles |
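Filled in for the Figma example from the Guidance Structure section, the template might look like this, shown here as plain data. The handling steps, eligibility rule, and workflow names are illustrative placeholders, not a real configuration.

```python
# Hypothetical guidance doc following the four-section template above.
figma_access_guidance = {
    "name": "Figma Access Request",
    "when_to_use": "User is requesting access to Figma for design work",
    "how_to_handle": [
        "Confirm which team the user is on.",
        "If the user is in Design or Product, run @[Figma Provisioning Workflow].",
        "Otherwise, explain that manager approval is required and route to @[Approval Workflow].",
    ],
    "important_context": "Figma seats are licensed; requests from outside design-adjacent teams need manager approval.",
    "related_resources": ["Link to the Figma admin help article on seat management"],
}
```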
Tag Taxonomy
Use consistent tagging to organize guidance and make it discoverable. Here's an example color-coding convention (adapted from IT — adjust for your team):
| Color | Purpose | Examples |
|---|---|---|
| Dark blue | System/application name | Okta, 1Password, AWS, Jamf, Slack |
| Pink | Request type | access-request, troubleshooting, how-to |
| Green | Automation level | auto-resolve, approval-required, manual-only |
| Brown | Device type (when relevant) | Mac, Windows, iOS, Android |
Adapt this taxonomy to your team's needs, but keep it consistent across all guidance.
Writing Effective Descriptions
The description field ("When should Serval use this guidance?") determines when the agent matches a request to your guidance. Write descriptions from the user's perspective.
| ✓ Good Descriptions | ✗ Weak Descriptions | Why It's Weak |
|---|---|---|
| "User is requesting access to Amplitude for analytics and reporting" | "Amplitude access" | Too vague, doesn't capture intent |
| "User reports their Mac is running slowly or freezing" | "Mac troubleshooting" | Too broad, will match irrelevant requests |
| "User asks how to add someone to a 1Password vault" | "1Password" | Doesn't indicate request type |
Description Writing Tips
- Start with "User is..." or "User asks..." or "User reports..."
- Include the specific action or problem
- Add context about the goal when it helps differentiate
Handling Edge Cases
Good guidance anticipates edge cases and tells the agent how to handle them.
Eligibility checks:
"Before running the workflow, verify the user is in [department/group]. If not, explain they'll need manager approval and route to @[Approval Workflow]."
Missing information:
"If the user doesn't specify which [resource], ask them to clarify before proceeding."
Error handling:
"If the workflow fails, apologize for the inconvenience and create a ticket for the [team] with the error details."
Already attempted steps:
"Ask what troubleshooting the user has already tried. Skip any steps they've completed and pick up from where they left off."
Example: Access Request Guidance
Example: Troubleshooting Guidance
Real-World Examples
These are production guidance documents from the IT team that demonstrate the patterns in action. The same patterns apply to any team — swap the tools and scenarios for your domain.
Example from IT: Always-On Guidance — Knowledge Handling & Response Formatting
| Element | What It Demonstrates |
|---|---|
| Clear decision path | Numbered hierarchy tells agent exactly where to look first |
| Explicit "do not" rules | Prevents common mistakes (guessing, citing archived articles) |
| When TO / When NOT TO | Removes ambiguity about citation behavior |
| Concrete formatting rules | "Do not use em dashes" — specific enough to follow |
| Stated goals | Agent understands the purpose, not just the rules |
Example from IT: Troubleshooting Guidance — Okta FastPass Fingerprint Prompts
| Element | What It Demonstrates |
|---|---|
| Specific trigger | "FastPass asking for fingerprint on every login" — not just "Okta issues" |
| Root cause first | Leads with most common cause (WARP) before deeper troubleshooting |
| Policy context | Explains why this happens so agent can explain to user |
| Numbered decision tree | Clear path: check WARP → fix WARP → if still broken, check FastPass |
| Links to authoritative docs | Points to setup guides rather than duplicating instructions |
| Expected behavior | Defines what "working" looks like so agent knows when issue is resolved |
Key Patterns Across These Examples
| Pattern | How It's Applied |
|---|---|
| Brief but airtight | Both examples are concise but cover edge cases |
| Decision paths | Numbered steps with clear branching ("If X, then Y") |
| Link don't duplicate | Setup guides are linked, not copied |
| Explain the "why" | Policy context helps agent explain to users |
| Explicit negatives | "Do not guess," "Do not cite if not relevant" |
Official Serval Documentation
Before building your first guidance, review these resources:
| Resource | What You'll Learn |
|---|---|
| Guidance Overview | Core concepts, when to use guidance vs. knowledge base, combining guidance with workflows |
| Guidance Use Cases | Example guidance for IT, Security, HR, and Finance scenarios |
| Always-Used Guidance | When and how to use guidance that applies to every conversation |
Rollout Strategy
A phased approach to launching your AI agent — from build to production.
Phased Approach
| Phase | Duration | Focus |
|---|---|---|
| Build | 2–3 weeks | Build core automation scope. Focus on highest-volume, lowest-risk requests first. Test in private Slack channels. |
| Pilot | 2 weeks | Recruit volunteers to test. Create dedicated pilot channel. Collect feedback. Iterate daily. |
| Feedback | 1 week | Synthesize feedback. Address critical gaps. Document edge cases. |
| Production | Ongoing | Announce to org. Monitor closely. Handle escalations. Continue iterating. |
Pilot Program Design
Volunteer selection: Choose volunteers who represent different roles, technical comfort levels, and use cases. Include both power users and occasional users.
"You're helping us test our new AI assistant. During the pilot, please use #[pilot-channel] for your requests. The AI will try to help, but you can always ask to speak with a human. Your feedback helps us improve before we roll this out to everyone."
Feedback Dimensions
| Dimension | Question |
|---|---|
| Accuracy | Did the agent understand the request? |
| Completeness | Did it solve the problem? |
| Tone | Did the interaction feel helpful and in line with team style? |
| Gaps | What couldn't the agent handle? |
Communication Templates
Pilot Announcement
We're testing an AI assistant to help with [type of requests]. For the next two weeks, a small group will try it out and share feedback. If you're interested in being a pilot tester, reach out to us or react to this post.
Production Launch
Introducing [Agent Name] — your new AI assistant for [type of requests]. Get help anytime by messaging in #[channel] or DMing @[agent]. [Agent Name] can help with [top 3–5 use cases]. For anything it can't handle, it'll connect you with the team directly.
Success Metrics
| Category | Metrics |
|---|---|
| Volume | Tickets handled by agent, % auto-resolved, escalation rate |
| Quality | Time to resolution, user satisfaction, accuracy rate |
| Operational | Guidance gap rate, workflow failure rate, approval turnaround time |
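The volume metrics lend themselves to a simple calculation over a ticket export. A minimal sketch, assuming each ticket records who resolved it and whether it escalated (the field names are hypothetical, not a Serval schema):

```python
def success_metrics(tickets):
    """Compute the volume metrics from the table. Each ticket is a dict
    with 'resolved_by' ('agent' or 'human') and 'escalated' (bool)."""
    total = len(tickets)
    auto = sum(1 for t in tickets if t["resolved_by"] == "agent")
    escalated = sum(1 for t in tickets if t["escalated"])
    return {
        "tickets_handled": total,
        "auto_resolve_rate": auto / total if total else 0.0,
        "escalation_rate": escalated / total if total else 0.0,
    }
```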
Governance & Maintenance
Keeping your AI agent accurate, current, and continuously improving.
Ongoing Audit Cadence
| Frequency | Activities |
|---|---|
| Weekly | Review agent responses for accuracy. Check for guidance gaps. Monitor escalation patterns. |
| Monthly | Update guidance based on feedback & Serval Suggestions. Check workflow success rates. |
| Quarterly | Comprehensive review of all guidance. Update policies and documentation. Review agent tone with stakeholders. |
Handling Guidance Gaps
When the agent encounters a request it can't handle, it should escalate. Track these escalations to identify gaps.
Process
- Review escalated tickets weekly
- Identify patterns — same request type coming up multiple times
- Determine if automation is appropriate
- If yes → create guidance and/or workflow
- Test before publishing
- Monitor for correct handling
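The pattern-identification step can be as simple as counting escalations per request category over the review window. A sketch assuming each escalated ticket carries a `category` label (an assumed field, not a Serval one):

```python
from collections import Counter

def weekly_guidance_gaps(escalated_tickets, min_occurrences=3):
    """Surface request categories that escalated repeatedly this week:
    the candidates for new guidance and/or a workflow."""
    counts = Counter(t["category"] for t in escalated_tickets)
    return sorted(cat for cat, n in counts.items() if n >= min_occurrences)
```

Anything the function surfaces is a candidate, not a decision; whether automation is appropriate still follows the "When to Automate" criteria earlier in this playbook.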
Serval Suggestions
Serval analyzes ticket patterns and can suggest new guidance automatically. Review suggestions regularly:
| Action | When to Use |
|---|---|
| Accept | Matches your approach, sufficient pattern evidence |
| Configure | Directionally correct but needs adjustment |
| Deny | Doesn't fit your scope or approach — write your own from scratch |
Version Control
Treat guidance and workflow configurations like code:
- Serval maintains a changelog of published Workflows & Guidance
- Have a rollback plan if changes cause issues — this can take just a few clicks to revert
Knowledge Maintenance
Keep your Knowledge Base and linked documentation current:
- Set review dates for time-sensitive content (this can be automated with Serval)
- Update links when external documentation changes
- Hide articles that aren't meant for end users so the agent doesn't surface or cite them
- Document when guidance was last verified as accurate