See how a service company with 40+ field technicians turned messy ticket data into an automated routing system that assigns the right person to every job, faster than any dispatcher could.
Feb 1 – Mar 31, 2026 · 60 days
| Metric | Value | Note |
|---|---|---|
| Accuracy Rate | 97.9% | +2.1% from month 1 |
| Tickets Routed | 2,347 | ~1,200 / month |
| Auto-Assigned | 2,298 | 98% of total volume |
| Dispatcher Overrides | 49 | Edge cases & new clients |
| Ticket | Priority | Routed To | Status |
|---|---|---|---|
| WiFi outage - Bldg 4 | P1 | Field Tech A | Confirmed |
| Password reset | P3 | Remote Support | Confirmed |
| Gate access - Unit 12 | P2 | Field Tech B | Confirmed |
| Audio bleed - Lobby | P3 | AV Specialist | Confirmed |
| New client onboarding | P3 | Remote Support | Override |
| Fiber splice - Switch 4 | P1 | Infra Specialist | Confirmed |
A service company with 40+ technicians was processing about 1,200 tickets a month. Every ticket had to be triaged for urgency, matched to the right technician based on skill set, geography, and client history, then assigned. Two dispatchers handled this manually, all day, every day.
They were good at it. But it was entirely in their heads. When one was out sick or on vacation, assignments slowed down and mistakes crept in. Knowledge about which technician knew which client, who had the right skills for which type of work, and who was actually available, lived nowhere except in the dispatchers' memory.
The company had tried rules-based auto-assignment before: "WiFi ticket in Zone 3 goes to the remote team." It was too rigid. It couldn't account for the fact that one technician had been to a specific property eight times and knew the infrastructure, while another had never been there.
We started by looking at what the company actually had: years of historical ticket data sitting in their service management platform. The question was whether it contained enough signal to route intelligently.
Surface what the data already knows
Every ticket had a customer, an assignment, and a resolution. We analyzed thousands of closed tickets to find patterns: which technicians consistently handled which clients, who kept getting pulled into specific types of work, who had quietly become the go-to person for issues nobody had ever formally documented.
Build urgency scoring from the customer's own words
The cleanest data in the system turned out to be the incoming request itself. Subject lines, descriptions, and scope language from customers were consistent on every ticket because the customer wrote them. Words like "down," "locked out," and "all units" correlate strongly with actual urgency. We used this to replace the manual priority system that marked 90% of tickets as "Medium."
Combine history with real-time availability
Historical patterns tell you who should handle a ticket in theory. Real-time data tells you who can handle it right now. The system checks who's checked in, who's already at a nearby property, and who has capacity before making a recommendation.
Measurable Impact
| Metric | Result |
|---|---|
| Routing accuracy rate | 97.9% |
| Tickets auto-assigned | 2,298 (98% of total volume) |
| Dispatcher overrides needed | 49 over 60 days |
In most service businesses, dispatch is one of the highest-leverage roles. The right assignment means a first-time fix. The wrong one means a wasted trip, a frustrated client, and a second visit.
This company's dispatchers were experienced. They knew which technicians had which skills, which clients preferred which techs, and which properties had tricky infrastructure that required someone who'd been there before. But all of that knowledge was informal. It lived in conversations, in memory, in gut feel.
When a dispatcher was out, the backup would assign based on availability alone. The result: more reassignments, more escalations, more return visits.
The company had already tried the obvious solution: build routing rules. "Category X goes to Team Y. Round-robin within the team."
The problem is that categories are too broad. "Network" covers everything from restarting an access point (a five-minute remote task) to splicing fiber underground (a four-hour on-site job requiring specialized equipment). A static rule can't tell the difference.
Worse, static rules can't capture institutional knowledge. They can't know that one technician has handled fiber work at a specific property eight times and knows the switch layout. They can't know that another technician is the informal specialist for audio and AV systems, even though that's not in his job title. Rules describe how routing should work in theory. Patterns describe how it actually works.
When we first looked at the ticket data, it seemed like a dead end for building anything intelligent: notes were inconsistent, time tracking was mostly empty, and the manual priority field was meaningless, with 90% of tickets marked "Medium."
If we'd tried to build a routing model on note quality or time tracking, we would have failed. The data simply wasn't there.
But the structural data, the record of what actually happened, was solid on every single ticket. Who was assigned. Which customer. What category. How the ticket flowed between people. That was our training set.
The first problem to solve was triage. When everything is "Medium," nothing is prioritized.
We built urgency scoring on the cleanest data in the system: the incoming request from the customer. Subject lines and descriptions are consistent on every ticket because the customer writes them.
The scoring model looks at the customer's own words: urgency language in the subject and description ("down," "locked out") and scope language ("all units" versus a single unit).
This replaced the manual priority system immediately. The inputs are clean because they come from the customer, and the scoring replicates the judgment calls dispatchers were already making, just faster and more consistently.
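As a rough sketch of this kind of keyword-based urgency scoring: the phrase lists, weights, and thresholds below are illustrative assumptions, not the production model.

```python
# Illustrative keyword weights; the real model's vocabulary and weights
# were calibrated against historical tickets.
URGENCY_TERMS = {
    "down": 3, "outage": 3, "locked out": 3,
    "intermittent": 2, "not working": 2,
    "request": 1, "schedule": 1,
}
SCOPE_TERMS = {
    "all units": 3, "entire building": 3,
    "multiple": 2,
    "single": 1, "one unit": 1,
}

def score_ticket(subject: str, description: str) -> str:
    """Map customer-written text to a P1/P2/P3 priority."""
    text = f"{subject} {description}".lower()
    urgency = max((w for t, w in URGENCY_TERMS.items() if t in text), default=1)
    scope = max((w for t, w in SCOPE_TERMS.items() if t in text), default=1)
    total = urgency + scope
    if total >= 5:
        return "P1"
    if total >= 3:
        return "P2"
    return "P3"
```

An outage affecting a whole building scores high on both urgency and scope; a routine password reset scores low on both and lands at P3.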
The second layer is where the historical data becomes powerful. For every ticket, the system asks four questions in order: Who has handled this customer before? Who handles this category of work? Who knows this property? Who covers this geography?
Finally, the system checks real-time availability: who's checked in today, who's already at a nearby property, and who has the lowest open ticket count.
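The final availability pass can be sketched as a simple filter-and-rank step. The `Tech` fields and ordering here are illustrative assumptions, not the production schema:

```python
from dataclasses import dataclass

@dataclass
class Tech:
    name: str
    history_score: float  # precomputed from historical pattern matching
    checked_in: bool      # checked in today?
    near_site: bool       # already at a nearby property?
    open_tickets: int     # current workload

def rank_candidates(techs: list[Tech]) -> list[Tech]:
    """Drop anyone not checked in, then rank by history, proximity, and load."""
    available = [t for t in techs if t.checked_in]
    return sorted(
        available,
        key=lambda t: (-t.history_score, not t.near_site, t.open_tickets),
    )
```

The sort key encodes the priority order described above: historical fit first, then proximity, then lowest open ticket count as the tiebreaker.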
The most valuable part of the system is the technician capability profiles. These weren't built from a skills spreadsheet or self-assessment. They were built from what the data showed each person actually does.
We analyzed thousands of closed tickets per technician: which customers they served, which categories of work they handled, what their private notes revealed about the actual work performed, and how tickets flowed to and from them.
What emerged were clear profiles: the technician who had handled fiber work at one property eight times and knew the switch layout, the informal AV specialist whose expertise appeared nowhere in his job title, the remote-support techs who had quietly become the go-to people for password and access issues.
None of this was formally documented anywhere. It was all in the ticket history.
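Building those profiles from closed tickets is, at its core, an aggregation over the structural fields. A minimal sketch, where the ticket dict shape is a simplified stand-in for the real service-platform schema:

```python
from collections import Counter, defaultdict

def build_profiles(closed_tickets: list[dict]) -> dict:
    """Aggregate closed tickets into per-technician capability profiles.

    Each ticket dict has 'tech', 'customer', and 'category' keys; the
    resulting profile counts how often each tech served each customer
    and handled each category of work.
    """
    profiles = defaultdict(lambda: {"customers": Counter(), "categories": Counter()})
    for t in closed_tickets:
        p = profiles[t["tech"]]
        p["customers"][t["customer"]] += 1
        p["categories"][t["category"]] += 1
    return dict(profiles)
```

A technician with eight closed fiber tickets at one property shows up in the counts whether or not anyone ever wrote that expertise down.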
The accuracy rate measures how often the system's assignment matches what the dispatcher would have chosen, or is accepted without override.
In practice, the dispatcher confirms or doesn't touch the assignment on about 98 out of every 100 tickets. The remaining 2% are genuine edge cases: new clients with no assignment history, and complex situations or escalations that need a human judgment call.
The system improved over the 60-day period. Week 1 accuracy was around 96%. By week 8 it was over 98%. Every confirmed assignment becomes another data point that makes the next decision better.
This system gets smarter over time, and that's the part most people miss when they think about this kind of work.
Every ticket that gets routed and confirmed adds to the historical record. New patterns emerge. A technician who starts handling a new type of work builds that into their profile automatically. A new customer who gets assigned to the same technician three times establishes a preference the system will respect going forward.
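The "three confirmed assignments establishes a preference" behavior can be sketched as a small tracker. The class and the threshold constant are illustrative; the production system learns these patterns from the full ticket record.

```python
from collections import Counter

PREFERENCE_THRESHOLD = 3  # the case study's cutoff: three confirmed assignments

class PreferenceTracker:
    """Record confirmed assignments and surface customer -> tech preferences."""

    def __init__(self) -> None:
        self.confirmed: Counter = Counter()  # (customer, tech) -> count
        self.preferences: dict[str, str] = {}

    def record_confirmation(self, customer: str, tech: str) -> None:
        self.confirmed[(customer, tech)] += 1
        if self.confirmed[(customer, tech)] >= PREFERENCE_THRESHOLD:
            self.preferences[customer] = tech
```

Once a preference is established, the router can respect it on the next ticket from that customer; one or two assignments alone don't lock anything in.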
The dispatchers' role shifts from making every assignment decision to handling exceptions and validating the system's judgment on edge cases. Instead of spending their day triaging and assigning, they spend it on the 2% that actually needs human judgment: complex situations, escalations, and strategic decisions about workload distribution.
The real value compounds. Month one saves dispatcher time. Month six starts producing insights about team capacity, skill gaps, and client coverage patterns that inform hiring and training decisions.
A rules-based system says: "WiFi tickets go to the network team." That's correct in theory but useless in practice when you have eight people on the network team with different skill levels, different geographic coverage, and different client relationships.
A pattern-based system says: "WiFi ticket at this property, this technician has been there four times in the last month, knows the switch layout, and resolved the last three issues. He's checked in today and has two open tickets."
That's the difference between routing and intelligent routing.
Most service businesses are sitting on years of routing intelligence and don't know it. Every ticket that's ever been assigned, worked, escalated, or reassigned is a decision record. Every technician note (even the incomplete ones) contains signal about what kind of work that person does.
The system doesn't require perfect data. It requires enough structural data to find patterns, and most service management platforms have that by default: who was assigned, which customer, what category, what happened next.
Your ticket history is a training set. Most companies just don't treat it like one.
While this case study focused on a technology services company, the same approach applies to any business that routes work to field or remote teams.
If your business assigns work to people and the quality of that assignment affects outcomes, the same pattern-mining approach applies.
- Your ticket history contains routing intelligence that no rules-based system can replicate.
- Start with urgency scoring (clean customer data) before tackling assignment routing (messier internal data).
- Technician capability profiles built from actual work history are more accurate than self-reported skill matrices.
- The system compounds: every confirmed assignment makes the next one smarter.
- You don't need perfect data. You need enough structural data to find patterns.
- The dispatcher's role shifts from making every decision to handling the 2% that actually needs judgment.
**How long does implementation take?**
The historical data analysis takes 2-3 weeks. Building and calibrating the routing model takes another 2-3 weeks. Most implementations are producing accurate suggestions within 30-45 days of kickoff.

**What if our ticket data is messy?**
Ours was too. Notes were inconsistent, time tracking was empty, and priorities were meaningless. The system doesn't rely on those fields. It works from structural data that exists on every ticket: who was assigned, which customer, what category, what happened next. If your service platform has been recording assignments, you have enough.

**Does this replace dispatchers?**
It changes their role, not replaces it. Dispatchers go from making 1,200 assignment decisions a month to handling 25-30 edge cases and validating the system's judgment. The strategic parts of dispatch (managing escalations, balancing workload across the team, handling VIP clients) still need a human.

**Which service management platforms does this work with?**
Any platform that tracks ticket assignments, customer information, and resolution history. We've worked with platforms across the industry. If your system has an API or data export capability, we can pull the historical data we need.

**How does the system handle new technicians?**
New technicians start with a baseline profile based on their role and geography. As they handle tickets and their assignments are confirmed, their profile builds automatically. Within 2-3 weeks of active work, the system has enough data to route to them intelligently.

**How is accuracy measured?**
Accuracy is measured by dispatcher confirmation rate: how often the system's suggested assignment is accepted without override. We also track reassignment rate (how often a ticket gets moved after initial assignment) and first-time resolution rate as secondary indicators.
Ready to solve your problem?
Let's talk through it. We'll help you identify the root cause and map out a solution—no pressure, no pitch.