IT Agent Playbook
A practical guide to designing, building, and operating AI-powered IT support with Serval at Altoira.
Philosophy & Approach
The mindset behind effective AI agent design — start with the human, automate with intention.
The Human-Centered Agent
The goal is not to replace your team with automation. The goal is to free your team from repetitive, predictable work so they can focus on strategic initiatives, process improvements, and the complex problems that humans are inherently good at solving.
A well-designed AI agent should:
- Think like your team — Follow the same workflows, priorities, and safety checks already in place. The agent should make decisions the way a well-trained team member would.
- Talk like your team — Use a tone that matches how your team currently supports employees. If your team is warm and conversational, the agent should be too.
- Act like your team — Show good judgment, escalate when something is unclear, and always log work appropriately. The agent should never guess when it should ask.
- Know its limits — Always give employees the option to talk to a human. Clearly explain when and why a request is being handed off.
When to Automate
Automation works well for requests that are:
- High-volume and repetitive — the same request comes in multiple times per day or week
- Governed by clear criteria — eligibility or next steps can be determined from available data
- Predictable — the resolution follows a consistent pattern every time
- Low-judgment — they don't require weighing nuanced tradeoffs
When to Keep Humans in the Loop
Some requests should always involve human review, even if parts can be automated:
- Sensitive access — production systems, financial data, PII
- Judgment calls — context matters, and the "right" answer depends on circumstances
- High-risk actions — mistakes would cause significant damage or compliance issues
- Novel situations — requests the agent hasn't seen before
The Automation Spectrum
Most requests fall somewhere between fully automated and fully human-handled: the agent can gather context, run the safe steps, and then hand off. Treat automation as a spectrum, not a binary choice.
The "New Hire" Mental Model
Imagine headcount was approved for your team, but the role was filled by someone with no relevant skills or context. This new hire doesn't know what your team does best or how you do it — but they learn very quickly. How would you approach training them?
This is essentially what onboarding an AI agent is like.
You wouldn't hand them a 200-page manual and walk away. You'd start with the basics, give clear instructions, observe how they handle requests, correct mistakes early, and gradually expand their responsibilities. You'd explain not just what to do but when to ask for help.
Serval Building Blocks
Understanding the core tools — Guidance, Workflows, Knowledge Base, and Access Management.
Quick Reference
| Tool | Purpose | Use For |
|---|---|---|
| Guidance | Teaches the agent how to behave | Decision logic, tone, escalation rules, multi-step procedures |
| Workflows | Executes deterministic actions | API calls, provisioning, data lookups — runs the same way every time |
| Knowledge Base | Provides reference information | FAQs, policies, how-to articles, documentation |
| Access Mgmt | Handles provisioning | Application access, role assignments, time-bound permissions |
Guidance in Depth
Guidance documents are instructions you write for Serval. When a user submits a request that matches your guidance, the agent follows your instructions to respond.
When to use Guidance:
- Agent needs to follow a consistent tone or communication style
- Agent must make decisions in a multi-step troubleshooting flow
- Agent should follow an internal playbook or SOP
- Agent needs to handle edge cases or sensitive situations
- Agent must determine whether and when to run a workflow
Guidance Structure
| Field | Purpose | Example |
|---|---|---|
| Title | Short title the agent uses to identify this guidance | "Figma Access Requests" |
| Description | When this guidance applies — write from the user's perspective | "User is requesting access to Figma for design work" |
| Content | The actual instructions the agent follows | Step-by-step instructions, conditions, workflow references |
Guidance + Workflows
Guidance and workflows can work together but don't always need to be paired. For straightforward actions with a single input and clear outcome, a workflow can run without guidance. Add guidance when:
- The agent needs significant context before running a workflow
- The outcome depends on confirming details with the user
- There are edge cases, policy constraints, or expectations to explain
- The agent needs help choosing whether or not to run the workflow
Workflows in Depth
Workflows are AI-powered automation tools that execute deterministic actions. You describe what you want in plain language, Serval generates the code, and it runs exactly as written every time.
Workflow Types
| Type | Trigger | Use Case |
|---|---|---|
| Help Desk | AI agent during conversations | User-facing automation |
| Team-Only | Team members manually | Internal operations |
| Scheduled | Defined schedule | Daily reports, recurring tasks |
| Webhook | External systems via API | Cross-system integration |
| Event-Triggered | System events | Reactive automation |
Create a workflow when you need to call an external API, the action should be repeatable and consistent, you want audit logging, or the action might need approval before execution.
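To make these properties concrete, here is a minimal sketch in Python of what "deterministic, audited, approval-gated execution" means. This is not Serval's API — `WorkflowRun`, `requires_approval`, and the step functions are all hypothetical names — it only illustrates the behavior the text describes: the same steps run in the same order every time, every step is logged, and execution blocks until approved.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class WorkflowRun:
    """Hypothetical sketch -- illustrates audit logging and approval gating."""
    name: str
    requires_approval: bool = False
    approved: bool = False
    audit_log: list = field(default_factory=list)

    def log(self, message: str) -> None:
        # Every step is recorded with a timestamp for later audit.
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), message))

    def execute(self, steps) -> str:
        # Approval gate: nothing runs until a human has signed off.
        if self.requires_approval and not self.approved:
            self.log("blocked: awaiting approval")
            return "pending_approval"
        # Deterministic: the same steps run in the same order every time.
        for step in steps:
            self.log(f"ran step: {step.__name__}")
            step()
        self.log("completed")
        return "completed"

def reset_password(): pass   # placeholder actions for illustration
def notify_user(): pass

run = WorkflowRun(name="Password reset", requires_approval=True)
status = run.execute([reset_password, notify_user])  # blocked until approved
run.approved = True
status = run.execute([reset_password, notify_user])
```

The point of the sketch is the shape, not the code: if an action needs this combination of repeatability, logging, and gating, it belongs in a workflow rather than free-form guidance.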
Knowledge Base
The Knowledge Base stores reference content the agent retrieves when answering questions. Unlike Guidance (which tells the agent how to act), Knowledge Base tells the agent what to say.
Best practice: Link to authoritative external documentation rather than duplicating content. If Apple's support site has the definitive troubleshooting guide, link to it. This keeps information current and reduces maintenance.
Access Management
Access Management automates just-in-time role-based access provisioning directly from help desk requests. It manages the complete lifecycle: request → approval → provisioning → revocation.
| Concept | What It Does |
|---|---|
| Access Profiles | Control who can request access to specific applications or roles |
| Access Policies | Define the rules: time limits, approval requirements, justification |
| Provisioning Methods | Determine how access is granted: IdP groups, direct API, workflows, or manual |
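The lifecycle in the table can be sketched as a simple state machine. This is an illustrative model only — `AccessRequest`, `TIME_LIMIT`, and the method names are assumptions, not Serval's schema — showing how a time-bound grant moves from request through revocation.

```python
from datetime import datetime, timedelta, timezone

TIME_LIMIT = timedelta(days=30)  # hypothetical limit from an access policy

class AccessRequest:
    """Sketch of: request -> approval -> provisioning -> revocation."""

    def __init__(self, user: str, app: str, justification: str):
        self.user, self.app = user, app
        self.justification = justification
        self.state = "requested"
        self.expires_at = None

    def approve(self) -> None:
        assert self.state == "requested"
        self.state = "approved"

    def provision(self) -> None:
        # In practice this step would use an IdP group, direct API,
        # workflow, or manual provisioning (the methods in the table).
        assert self.state == "approved"
        self.state = "provisioned"
        self.expires_at = datetime.now(timezone.utc) + TIME_LIMIT

    def revoke_if_expired(self, now=None) -> bool:
        # Time-bound access: revocation is automatic once the grant lapses.
        now = now or datetime.now(timezone.utc)
        if self.state == "provisioned" and now >= self.expires_at:
            self.state = "revoked"
            return True
        return False
```

The key design point is that revocation is part of the same lifecycle as provisioning, so access can't silently outlive its justification.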
Always-Used Guidance & Agent Tone
Some guidance should apply to every conversation. Mark guidance as "Always use" for tone of voice, universal ticket routing rules, compliance requirements, and security protocols.
Defining Agent Personality
- Tone — Is the agent warm and conversational? Professional and efficient? Match how your team actually talks to employees.
- Formatting — Bullet points or prose? How much detail by default?
- Escalation language — "Let me connect you with the team" feels better than "Escalating to human agent."
- Boundaries — Never guess at sensitive information, never promise timelines it can't guarantee, never make policy decisions.
Guidance Design Patterns
Practical patterns for writing clear, effective guidance that your AI agent can follow reliably.
Core Principles
- Be brief but bulletproof. Guidance should be as short as possible while being comprehensive enough that an AI agent considering every possibility would still do the right thing.
- Don't retread steps. If the user has already tried something, don't make them repeat it. Ask what they've attempted and skip those steps.
- Use direct, personal language. Write "Here's what I'd recommend" rather than "The recommended approach is..." The agent should sound like a helpful colleague, not a knowledge base article.
- Focus on actionable guidance. Tell the agent what to do, not why the underlying system works. Save explanations for Knowledge Base articles.
- Link to authoritative sources. Use hyperlinks to external documentation rather than duplicating troubleshooting content. This keeps guidance current and leverages Serval's ability to surface linked resources.
Guidance Structure Template
Every guidance document should follow a consistent structure to help the agent parse instructions reliably:
| Section | Purpose |
|---|---|
| When should Serval use this? | Describe trigger conditions from the user's perspective |
| How should Serval handle this? | Direct instructions as imperative statements (numbered steps) |
| Important context | Policies, constraints, or edge cases the agent needs to know |
| Related resources | Links to documentation, troubleshooting guides, or support articles |
Example structure
When should Serval use this guidance?
User is requesting access to [application] · User reports [specific problem] · User asks how to [specific task]
How should Serval handle this?
1. First, verify [initial check]
2. If [condition], then [action]
3. Run the @[Workflow Name] workflow
4. If the issue persists, [escalation path]
Tag Taxonomy
Apply consistent tags to your guidance documents to keep them organized and discoverable. Adapt your taxonomy to your team's needs, but keep it consistent across all guidance.
Writing Effective Descriptions
The Description field determines when the agent matches a request to your guidance. Write descriptions from the user's perspective.
| ✓ Good Descriptions | ✗ Weak Descriptions |
|---|---|
| "User is requesting access to Amplitude for analytics and reporting" | "Amplitude access" (too vague) |
| "User reports their Mac is running slowly or freezing" | "Mac troubleshooting" (too broad) |
| "User asks how to add someone to a 1Password vault" | "1Password" (doesn't indicate request type) |
Handling Edge Cases
Good guidance anticipates edge cases and tells the agent how to handle them. Common patterns:
Missing information
"If the user doesn't specify what's needed to complete the request, ask them to clarify before proceeding."
Error handling
"If the workflow fails, apologize for the inconvenience and create a ticket for the [team] with the error details."
Rollout Strategy
A phased approach to launching your AI agent — from build to production.
Phased Approach
| Phase | Duration | Focus |
|---|---|---|
| Build | 2–3 weeks | Build core automation scope. Focus on highest-volume, lowest-risk requests first. Test in private Slack channels. |
| Pilot | 2 weeks | Recruit volunteers to test. Create dedicated pilot channel. Collect feedback. Iterate daily. |
| Feedback | 1 week | Synthesize feedback. Address critical gaps. Document edge cases. |
| Production | Ongoing | Announce to org. Monitor closely. Handle escalations. Continue iterating. |
Pilot Program Design
Volunteer selection: Choose volunteers who represent different roles, technical comfort levels, and use cases. Include both power users and occasional users.
"You're helping us test our new AI assistant. During the pilot, please use #[pilot-channel] for your requests. The AI will try to help, but you can always ask to speak with a human. Your feedback helps us improve before we roll this out to everyone."
Feedback Dimensions
| Dimension | Question |
|---|---|
| Accuracy | Did the agent understand the request? |
| Completeness | Did it solve the problem? |
| Tone | Did the interaction feel helpful and in line with team style? |
| Gaps | What couldn't the agent handle? |
Communication Templates
Pilot Announcement
We're testing an AI assistant to help with [type of requests]. For the next two weeks, a small group will try it out and share feedback. If you're interested in being a pilot tester, reach out to us or react to this post.
Production Launch
Introducing [Agent Name] — your new AI assistant for [type of requests]. Get help anytime by messaging in #[channel] or DMing @[agent]. [Agent Name] can help with [top 3–5 use cases]. For anything it can't handle, it'll connect you with the team directly.
Success Metrics
| Category | Metrics |
|---|---|
| Volume | Tickets handled by agent, % auto-resolved, escalation rate |
| Quality | Time to resolution, user satisfaction, accuracy rate |
| Operational | Guidance gap rate, workflow failure rate, approval turnaround time |
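The volume metrics above reduce to simple ratios over exported ticket data. A minimal sketch, assuming a ticket export with `resolved_by` and `escalated` fields (illustrative names, not a Serval schema):

```python
# Hypothetical exported ticket records.
tickets = [
    {"resolved_by": "agent", "escalated": False},
    {"resolved_by": "agent", "escalated": False},
    {"resolved_by": "human", "escalated": True},
    {"resolved_by": "agent", "escalated": True},
]

# Tickets the agent touched at all.
handled_by_agent = sum(t["resolved_by"] == "agent" for t in tickets)

# Fully auto-resolved: the agent closed it without any handoff.
auto_resolved = sum(
    t["resolved_by"] == "agent" and not t["escalated"] for t in tickets
)

escalations = sum(t["escalated"] for t in tickets)

auto_resolution_rate = auto_resolved / len(tickets)  # 2/4 = 0.5
escalation_rate = escalations / len(tickets)         # 2/4 = 0.5
```

Whatever your export looks like, define each metric once in code like this so weekly and monthly reviews compare like with like.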
Governance & Maintenance
Keeping your AI agent accurate, current, and continuously improving.
Ongoing Audit Cadence
| Frequency | Activities |
|---|---|
| Weekly | Review agent responses for accuracy. Check for guidance gaps. Monitor escalation patterns. |
| Monthly | Update guidance based on feedback & Serval Suggestions. Check workflow success rates. |
| Quarterly | Comprehensive review of all guidance. Update policies and documentation. Review agent tone with stakeholders. |
Handling Guidance Gaps
When the agent encounters a request it can't handle, it should escalate. Track these escalations to identify gaps.
Process
- Review escalated tickets weekly
- Identify patterns — same request type coming up multiple times
- Determine if automation is appropriate
- If yes → create guidance and/or workflow
- Test before publishing
- Monitor for correct handling
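The "identify patterns" step above is mechanical enough to script. A minimal sketch using only the standard library, assuming escalated tickets are exported with a `type` field (an illustrative format, not Serval's):

```python
from collections import Counter

# Hypothetical export of this week's escalated tickets.
escalated = [
    {"type": "vpn access"},
    {"type": "figma access"},
    {"type": "vpn access"},
    {"type": "vpn access"},
    {"type": "printer jam"},
]

# Count how often each request type was escalated.
counts = Counter(t["type"] for t in escalated)

RECURRENCE_THRESHOLD = 3  # tune to your ticket volume

# Anything recurring at or above the threshold is a candidate
# for new guidance and/or a workflow.
automation_candidates = [
    req_type for req_type, n in counts.items() if n >= RECURRENCE_THRESHOLD
]
```

Here `"vpn access"` crosses the threshold, so it would surface as the week's automation candidate; one-off escalations stay with the team.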
Serval Suggestions
Serval analyzes ticket patterns and can suggest new guidance automatically. Review suggestions regularly:
| Action | When to Use |
|---|---|
| Accept | Matches your approach, sufficient pattern evidence |
| Configure | Directionally correct but needs adjustment |
| Deny | Doesn't fit your scope or approach — write your own from scratch |
Version Control
Treat guidance and workflow configurations like code:
- Serval maintains a changelog of published Workflows & Guidance
- Have a rollback plan if changes cause issues — reverting to a previous version takes just a few clicks
Knowledge Maintenance
Keep your Knowledge Base and linked documentation current:
- Set review dates for time-sensitive content (this can be automated with Serval)
- Update links when external documentation changes
- Hide internal, non-customer-facing articles so the agent doesn't surface them to employees
- Document when guidance was last verified as accurate