Chapter 01

Philosophy & Approach

The mindset behind effective AI agent design — start with the human, automate with intention.

The Human-Centered Agent

The goal is not to replace your team with automation. The goal is to free your team from repetitive, predictable work so they can focus on strategic initiatives, process improvements, and the complex problems that humans are inherently good at solving.

A well-designed AI agent should:

  • Think like your team — Follow the same workflows, priorities, and safety checks already in place. The agent should make decisions the way a well-trained team member would.
  • Talk like your team — Use a tone that matches how your team currently supports employees. If your team is warm and conversational, the agent should be too.
  • Act like your team — Show good judgment, escalate when something is unclear, and always log work appropriately. The agent should never guess when it should ask.
  • Know its limits — Always give employees the option to talk to a human. Clearly explain when and why a request is being handed off.

When to Automate

Automation works well for requests that are:

  • High-volume and repetitive — the same request comes in multiple times per day or week
  • Clearly decidable — eligibility or next steps can be determined from available data
  • Predictable — the resolution follows a consistent pattern
  • Low-judgment — they don't require weighing nuanced tradeoffs
Good automation candidates
Access requests with clear eligibility rules, common macOS issues (tier 0–1), status lookups and information retrieval, standard troubleshooting with known solutions, and routing requests to the right team.

When to Keep Humans in the Loop

Some requests should always involve human review, even if parts can be automated:

  • Sensitive access — production systems, financial data, PII
  • Judgment calls — context matters, and the "right" answer depends on circumstances
  • High-risk actions — mistakes would cause significant damage or compliance issues
  • Novel situations — requests the agent hasn't seen before

The Automation Spectrum

| Stage | Who Does What |
| --- | --- |
| Fully Manual | Humans do everything |
| AI-Assisted | Agent gathers, humans decide |
| Approval Required | Agent acts, humans approve |
| Fully Automated | Agent handles end-to-end |
Important
Start conservative. Move work toward full automation only after you've validated the agent handles it correctly.
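
The spectrum can be sketched as a gate the agent checks before acting. This is a minimal illustration, not Serval's actual routing logic; the `handle_request` function and its return labels are invented for the example:

```python
from enum import Enum

class AutomationLevel(Enum):
    FULLY_MANUAL = 1       # humans do everything; agent stays out
    AI_ASSISTED = 2        # agent gathers context, humans decide
    APPROVAL_REQUIRED = 3  # agent acts only after a human approves
    FULLY_AUTOMATED = 4    # agent handles the request end-to-end

def handle_request(request: str, level: AutomationLevel) -> str:
    """Route a request according to its automation level.
    Returns a label describing who acts next; a real system
    would call ticketing or provisioning APIs instead."""
    if level is AutomationLevel.FULLY_MANUAL:
        return "routed to human queue"
    if level is AutomationLevel.AI_ASSISTED:
        return "context gathered; awaiting human decision"
    if level is AutomationLevel.APPROVAL_REQUIRED:
        return "action prepared; awaiting approval"
    return "executed by agent"
```

Starting conservative means new request types default to the manual or assisted levels, and only graduate rightward once the agent has proven itself on them.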

The "New Hire" Mental Model

Imagine headcount was approved for your team, but the role was filled by someone with no relevant experience or context. This new hire doesn't know what your team does or how you do it — but they learn very quickly. How would you approach training them?

This is essentially what onboarding an AI agent is like.

You wouldn't hand them a 200-page manual and walk away. You'd start with the basics, give clear instructions, observe how they handle requests, correct mistakes early, and gradually expand their responsibilities. You'd explain not just what to do but when to ask for help.

Key insight
The agent is extremely capable but starts with zero context. It will be literal and thorough, consider every possible interpretation, never assume information you haven't provided, and follow instructions exactly as written. Your job is to transfer your team's knowledge into guidance the agent can follow.
Chapter 02

Serval Building Blocks

Understanding the core tools — Guidance, Workflows, Knowledge Base, and Access Management.

Quick Reference

| Tool | Purpose | Use For |
| --- | --- | --- |
| Guidance | Teaches the agent how to behave | Decision logic, tone, escalation rules, multi-step procedures |
| Workflows | Executes deterministic actions | API calls, provisioning, data lookups — runs the same way every time |
| Knowledge Base | Provides reference information | FAQs, policies, how-to articles, documentation |
| Access Mgmt | Handles provisioning | Application access, role assignments, time-bound permissions |
Think of it this way
Guidance is what you'd train a new agent on. Knowledge Base is the documentation you'd point them to. Workflows are the buttons they'd click.

Guidance in Depth

Guidance documents are instructions you write for Serval. When a user submits a request that matches your guidance, the agent follows your instructions to respond.

When to use Guidance:

  • Agent needs to follow a consistent tone or communication style
  • Agent must make decisions in a multi-step troubleshooting flow
  • Agent should follow an internal playbook or SOP
  • Agent needs to handle edge cases or sensitive situations
  • Agent must determine whether and when to run a workflow

Guidance Structure

| Field | Purpose | Example |
| --- | --- | --- |
| Title | Short title the agent uses to identify this guidance | "Figma Access Requests" |
| Description | When this guidance applies — write from the user's perspective | "User is requesting access to Figma for design work" |
| Content | The actual instructions the agent follows | Step-by-step instructions, conditions, workflow references |
Critical
The description is critical for matching. A vague description means the agent won't find your guidance when it should.

Guidance + Workflows

Guidance and workflows can work together but don't always need to be paired. For straightforward actions with a single input and clear outcome, a workflow can run without guidance. Add guidance when:

  • The agent needs significant context before running a workflow
  • The outcome depends on confirming details with the user
  • There are edge cases, policy constraints, or expectations to explain
  • The agent needs help choosing whether or not to run the workflow

Workflows in Depth

Workflows are AI-powered automation tools that execute deterministic actions. You describe what you want in plain language, Serval generates the code, and it runs exactly as written every time.

Workflow Types

| Type | Trigger | Use Case |
| --- | --- | --- |
| Help Desk | AI agent during conversations | User-facing automation |
| Team-Only | Team members manually | Internal operations |
| Scheduled | Defined schedule | Daily reports, recurring tasks |
| Webhook | External systems via API | Cross-system integration |
| Event-Triggered | System events | Reactive automation |

Create a workflow when you need to call an external API, the action should be repeatable and consistent, you want audit logging, or the action might need approval before execution.
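
Conceptually, the code Serval generates from your plain-language description behaves like an ordinary deterministic function: validate inputs, apply fixed rules, produce the same output for the same input every time. A hypothetical sketch — the `grant_figma_access` name, field names, and approval rule are invented for illustration:

```python
def grant_figma_access(email: str, role: str = "viewer") -> dict:
    """Hypothetical provisioning workflow: validates inputs, then
    builds the payload a real workflow would send to an API.
    Deterministic: the same inputs always produce the same result."""
    allowed_roles = {"editor", "viewer"}
    if role not in allowed_roles:
        raise ValueError(f"role must be one of {sorted(allowed_roles)}")
    if "@" not in email:
        raise ValueError("email looks invalid")
    return {
        "action": "grant_access",
        "app": "figma",
        "user": email,
        "role": role,
        # riskier role is gated behind human approval
        "requires_approval": role == "editor",
    }
```

The point of the sketch is the shape, not the specifics: inputs are validated up front, risky paths are flagged for approval, and nothing depends on conversation context once the workflow starts.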

Knowledge Base

The Knowledge Base stores reference content the agent retrieves when answering questions. Unlike Guidance (which tells the agent how to act), Knowledge Base tells the agent what to say.

The decision rule
If it tells the agent how to act → Guidance. If it tells the agent what to say → Knowledge Base.

Best practice: Link to authoritative external documentation rather than duplicating content. If Apple's support site has the definitive troubleshooting guide, link to it. This keeps information current and reduces maintenance.

Access Management

Access Management automates just-in-time role-based access provisioning directly from help desk requests. It manages the complete lifecycle: request → approval → provisioning → revocation.

| Concept | What It Does |
| --- | --- |
| Access Profiles | Control who can request access to specific applications or roles |
| Access Policies | Define the rules: time limits, approval requirements, justification |
| Provisioning Methods | Determine how access is granted: IdP groups, direct API, workflows, or manual |
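
The lifecycle (request → approval → provisioning → revocation) can be sketched as a small state machine. This is an illustration of the concept, not Serval's data model; the class, field names, and 8-hour default are assumptions:

```python
from datetime import datetime, timedelta

class AccessGrant:
    """Minimal sketch of the just-in-time access lifecycle:
    requested -> approved -> provisioned -> revoked."""

    def __init__(self, user: str, app: str, duration: timedelta = timedelta(hours=8)):
        self.user, self.app = user, app
        self.duration = duration          # time-bound by default
        self.state = "requested"
        self.expires_at = None

    def approve(self):
        assert self.state == "requested"
        self.state = "approved"

    def provision(self, now: datetime = None):
        assert self.state == "approved"
        now = now or datetime.now()
        self.expires_at = now + self.duration
        self.state = "provisioned"

    def revoke_if_expired(self, now: datetime) -> str:
        # Revocation is automatic once the window closes.
        if self.state == "provisioned" and now >= self.expires_at:
            self.state = "revoked"
        return self.state
```

Modeling expiry as part of the grant, rather than as a separate cleanup task, is what makes the access time-bound by construction.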

Always-Used Guidance & Agent Tone

Some guidance should apply to every conversation. Mark guidance as "Always use" for tone of voice, universal ticket routing rules, compliance requirements, and security protocols.

Use sparingly
Too many always-used items can slow response quality and create conflicting instructions.

Defining Agent Personality

  • Tone — Is the agent warm and conversational? Professional and efficient? Match how your team actually talks to employees.
  • Formatting — Bullet points or prose? How much detail by default?
  • Escalation language — "Let me connect you with the team" feels better than "Escalating to human agent."
  • Boundaries — Never guess at sensitive information, never promise timelines it can't guarantee, never make policy decisions.
Chapter 03

Guidance Design Patterns

Practical patterns for writing clear, effective guidance that your AI agent can follow reliably.

Core Principles

Be brief but bulletproof. Guidance should be as short as possible while being comprehensive enough that an AI agent considering every possibility would still do the right thing.

  • Don't retread steps. If the user has already tried something, don't make them repeat it. Ask what they've attempted and skip those steps.
  • Use direct, personal language. Write "Here's what I'd recommend" rather than "The recommended approach is..." The agent should sound like a helpful colleague, not a knowledge base article.
  • Focus on actionable guidance. Tell the agent what to do, not why the underlying system works. Save explanations for Knowledge Base articles.
  • Link to authoritative sources. Use hyperlinks to external documentation rather than duplicating troubleshooting content. This keeps guidance current and leverages Serval's ability to surface linked resources.
Example
In IT, we reference support.apple.com as an external source Serval is allowed to use when troubleshooting Apple device-related issues, rather than duplicating Apple's troubleshooting steps in our guidance.

Guidance Structure Template

Every guidance document should follow a consistent structure to help the agent parse instructions reliably:

| Section | Purpose |
| --- | --- |
| When should Serval use this? | Describe trigger conditions from the user's perspective |
| How should Serval handle this? | Direct instructions as imperative statements (numbered steps) |
| Important context | Policies, constraints, or edge cases the agent needs to know |
| Related resources | Links to documentation, troubleshooting guides, or support articles |

Example structure

When should Serval use this guidance?
User is requesting access to [application] · User reports [specific problem] · User asks how to [specific task]

How should Serval handle this?
1. First, verify [initial check]
2. If [condition], then [action]
3. Run the @[Workflow Name] workflow
4. If the issue persists, [escalation path]

Tag Taxonomy

Use consistent tags to organize guidance and make it discoverable. Adapt the taxonomy to your team's needs, but keep it consistent across all guidance.

Writing Effective Descriptions

The Description field determines when the agent matches a request to your guidance. Write descriptions from the user's perspective.

| ✓ Good Descriptions | ✗ Weak Descriptions |
| --- | --- |
| "User is requesting access to Amplitude for analytics and reporting" | "Amplitude access" (too vague) |
| "User reports their Mac is running slowly or freezing" | "Mac troubleshooting" (too broad) |
| "User asks how to add someone to a 1Password vault" | "1Password" (doesn't indicate request type) |
Critical
A vague description means the agent won't find your guidance when it should. Always describe the situation from the employee's perspective, including what they're trying to accomplish.

Handling Edge Cases

Good guidance anticipates edge cases and tells the agent how to handle them. Common patterns:

Missing information

"If the user doesn't specify what's needed to complete the request, ask them to clarify before proceeding."

Error handling

"If the workflow fails, apologize for the inconvenience and create a ticket for the [team] with the error details."
Best practice
Think through the most common ways a request can go sideways, and give the agent a clear path for each. The goal is zero ambiguity — if the agent has to guess, you need more guidance.
Chapter 04

Rollout Strategy

A phased approach to launching your AI agent — from build to production.

Phased Approach

| Phase | Duration | Focus |
| --- | --- | --- |
| Build | 2–3 weeks | Build core automation scope. Focus on highest-volume, lowest-risk requests first. Test in private Slack channels. |
| Pilot | 2 weeks | Recruit volunteers to test. Create a dedicated pilot channel. Collect feedback. Iterate daily. |
| Feedback | 1 week | Synthesize feedback. Address critical gaps. Document edge cases. |
| Production | Ongoing | Announce to the org. Monitor closely. Handle escalations. Continue iterating. |

Pilot Program Design

Volunteer selection: Choose volunteers who represent different roles, technical comfort levels, and use cases. Include both power users and occasional users.

"You're helping us test our new AI assistant. During the pilot, please use #[pilot-channel] for your requests. The AI will try to help, but you can always ask to speak with a human. Your feedback helps us improve before we roll this out to everyone."

Feedback Dimensions

| Dimension | Question |
| --- | --- |
| Accuracy | Did the agent understand the request? |
| Completeness | Did it solve the problem? |
| Tone | Did the interaction feel helpful and in line with team style? |
| Gaps | What couldn't the agent handle? |

Communication Templates

Pilot Announcement

We're testing an AI assistant to help with [type of requests]. For the next two weeks, a small group will try it out and share feedback. If you're interested in being a pilot tester, reach out to us or react to this post.

Production Launch

Introducing [Agent Name] — your new AI assistant for [type of requests]. Get help anytime by messaging in #[channel] or DMing @[agent]. [Agent Name] can help with [top 3–5 use cases]. For anything it can't handle, it'll connect you with the team directly.
Key messages to always include
AI handles routine tasks to free up the team for complex support. Humans are always available. Feedback helps improve the experience. This is about augmenting the team, not replacing personal support.

Success Metrics

| Category | Metrics |
| --- | --- |
| Volume | Tickets handled by agent, % auto-resolved, escalation rate |
| Quality | Time to resolution, user satisfaction, accuracy rate |
| Operational | Guidance gap rate, workflow failure rate, approval turnaround time |
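
The volume metrics are straightforward to compute from ticket data. A hedged sketch — the `handled_by_agent`, `auto_resolved`, and `escalated` field names are assumptions, not Serval's export schema:

```python
def volume_metrics(tickets: list[dict]) -> dict:
    """Compute volume metrics from a list of ticket records.
    auto_resolved_pct is measured against all tickets;
    escalation_rate_pct against agent-handled tickets only."""
    total = len(tickets)
    handled = sum(t["handled_by_agent"] for t in tickets)
    auto = sum(t["auto_resolved"] for t in tickets)
    escalated = sum(t["escalated"] for t in tickets)
    return {
        "handled_by_agent": handled,
        "auto_resolved_pct": round(100 * auto / total, 1) if total else 0.0,
        "escalation_rate_pct": round(100 * escalated / handled, 1) if handled else 0.0,
    }
```

Whatever definitions you pick, pin them down before the pilot so the numbers are comparable week over week.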
Chapter 05

Governance & Maintenance

Keeping your AI agent accurate, current, and continuously improving.

Ongoing Audit Cadence

| Frequency | Activities |
| --- | --- |
| Weekly | Review agent responses for accuracy. Check for guidance gaps. Monitor escalation patterns. |
| Monthly | Update guidance based on feedback & Serval Suggestions. Check workflow success rates. |
| Quarterly | Comprehensive review of all guidance. Update policies and documentation. Review agent tone with stakeholders. |

Handling Guidance Gaps

When the agent encounters a request it can't handle, it should escalate. Track these escalations to identify gaps.

Process

  1. Review escalated tickets weekly
  2. Identify patterns — same request type coming up multiple times
  3. Determine if automation is appropriate
  4. If yes → create guidance and/or workflow
  5. Test before publishing
  6. Monitor for correct handling
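
Step 2 — spotting the same request type coming up repeatedly — is a simple frequency count over escalated tickets. A sketch, assuming you can export escalations as (ticket_id, request_type) pairs; the threshold of 3 is an illustrative default, not a Serval setting:

```python
from collections import Counter

def find_guidance_gaps(escalated_tickets: list[tuple], min_count: int = 3) -> list[str]:
    """Return request types that escalated at least min_count times,
    most frequent first — candidates for new guidance or workflows."""
    counts = Counter(request_type for _, request_type in escalated_tickets)
    return [rtype for rtype, n in counts.most_common() if n >= min_count]
```

Anything this surfaces still goes through steps 3–6: decide whether automation is appropriate, build, test, and monitor.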

Serval Suggestions

Serval analyzes ticket patterns and can suggest new guidance automatically. Review suggestions regularly:

| Action | When to Use |
| --- | --- |
| Accept | Matches your approach, sufficient pattern evidence |
| Configure | Directionally correct but needs adjustment |
| Deny | Doesn't fit your scope or approach — write your own from scratch |
Important
Use suggestions as a starting point, not a final product. Always review and refine before publishing.

Version Control

Treat guidance and workflow configurations like code:

  • Serval maintains a changelog of published Workflows & Guidance
  • Have a rollback plan in case changes cause issues — reverting typically takes just a few clicks

Knowledge Maintenance

Keep your Knowledge Base and linked documentation current:

  • Set review dates for time-sensitive content (this can be automated with Serval)
  • Update links when external documentation changes
  • Hide non-customer-facing articles so the agent doesn't surface them to end users
  • Document when guidance was last verified as accurate
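
A review-date sweep is easy to script while you set up Serval's automated reminders. A sketch — the `title`/`last_verified` fields and the 90-day window are assumptions for illustration:

```python
from datetime import date, timedelta

def stale_articles(articles: list[dict], today: date,
                   max_age: timedelta = timedelta(days=90)) -> list[str]:
    """Return titles of Knowledge Base articles whose last
    verification is older than the review window."""
    return [a["title"] for a in articles
            if today - a["last_verified"] > max_age]
```

Run it against your article inventory on the monthly audit cadence and fold the flagged items into that month's review.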