RhinoSights

Your Outage Response Isn’t the Problem. Your Baseline Is.

Storm events, heat waves, and billing surges always have a cost. But a significant portion of your overtime was there before the first call spiked. Here’s how to tell the difference, and what it’s worth to close the gap.

Every utilities contact center has a story about last August. Or the ice storm in February. Or the billing migration that nobody warned operations about until the phones started lighting up. These events are real. The overtime they generate is real. But here’s the question nobody asks after the event is over: how much of that overtime would have existed on a quiet Tuesday?

That’s not a rhetorical question. It has a number. And for most utilities operations, it’s larger than leadership wants to believe.

Events are excellent at providing cover. When volume spikes, overtime is expected. When the spike passes, the overtime stops being examined. The model goes back to looking fine. And that’s the problem.

Workforce models in utilities operations carry a structural vulnerability that most other industries don’t face to the same degree: the volume environment is predictable enough to feel manageable, but volatile enough to always have a ready explanation for cost overruns. There’s always a season. There’s always an event. There’s always a reason.

That’s a dangerous combination. Because when a model drifts, shrinkage assumptions quietly become stale, headcount planning falls a hiring cycle behind, scheduling templates built for last year’s call mix get applied to this year’s. The resulting cost doesn’t announce itself. It just gets absorbed. Into overtime. Into the next event. Into a line on the P&L that finance has learned to expect.

The Drift Nobody Sees Coming

Workforce model drift is not a catastrophic failure. It’s a slow process. A shrinkage assumption that was accurate eighteen months ago drifts by two or three percentage points: not enough to trigger an alarm, but more than enough to change your staffing math. A call mix that’s shifted since the last major billing system update gets handled by the same interval templates. A new hire cohort with different absenteeism patterns gets folded into an attrition model that wasn’t built for them.

Each of these individually is manageable. Together, compounding over quarters, they create a gap between the workforce model you think you’re running and the one that’s actually operating on the floor.

In utilities, that gap has a specific financial signature: structural overtime. Not the overtime that came from the derecho. Not the overtime your supervisors called in when the new rate structure went live. The overtime that would have existed on a calm Wednesday in April, because the model was already running short.

For a 200-seat utilities contact center, the difference between event-driven and structural overtime is often $200,000 or more per year. Most operations have never separated the two.

Why Events Get the Blame

The mechanics of how this happens are worth understanding, because they’re not a failure of attention. They’re a failure of measurement.

When a storm hits, your operation goes into response mode. Supervisors pull people in. Overtime spikes. Everyone works hard. The event passes, volumes normalize, and the overtime line on the weekly report comes back down. What the report doesn’t show is the baseline it came back down to.

Suppose that baseline is 8% higher than it was two years ago: not because of the storm, but because of the three shrinkage drift events and the two headcount miscalculations that happened in between. That increase never appears as a separate line. It’s just part of what it costs to run the operation now. Leadership has adapted to it. Finance has modeled around it. It’s invisible.

Until someone actually looks for it.

The test is straightforward: take your total overtime hours from the last twelve months. Make your best estimate of the portion that was genuinely event-driven, the hours that were specifically called in response to an identifiable volume spike. Subtract that. What’s left is structural. Now multiply it by your fully-loaded hourly overtime rate.
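
The test above can be sketched in a few lines of Python. Every figure below is a hypothetical placeholder, not a benchmark; substitute your own hours and rates.

```python
# Hypothetical figures for a mid-sized utilities contact center.
total_ot_hours = 14_000   # total overtime hours, trailing 12 months
event_ot_hours = 5_500    # best estimate of genuinely event-driven hours
loaded_ot_rate = 42.00    # fully-loaded hourly overtime rate, in dollars

# Whatever event-driven overtime doesn't explain is structural.
structural_hours = total_ot_hours - event_ot_hours
structural_cost = structural_hours * loaded_ot_rate

print(f"Structural overtime: {structural_hours:,} hours")
print(f"Annual structural cost: ${structural_cost:,.2f}")
```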

That’s the number. For most utilities operations that have never done this calculation, it’s uncomfortable. Not because the operation is being run badly. Because the model has been running fine. And running fine is exactly the environment where drift goes undetected the longest.

What Correcting the Baseline Actually Involves

A baseline correction isn’t a headcount reduction. It isn’t a performance management exercise. It’s a modeling exercise: rebuilding the assumptions the workforce model runs on so they reflect current reality instead of historical conditions.

In practice, this typically involves three areas:

Shrinkage recalibration. Most utilities operations calculate shrinkage as a single blended figure applied uniformly across shifts. Actual shrinkage, particularly in 24/7 operations, varies significantly by shift, day of week, and season. A blended figure that looks reasonable at the aggregate level is usually overstating capacity in some intervals and understating it in others. Recalibrating to interval-level actuals typically recovers between 3% and 6% of scheduled capacity without adding a single headcount.
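
A minimal sketch of the blended-versus-interval comparison, with invented shrinkage figures and scheduled hours:

```python
# Hypothetical shrinkage figures; the blended number looks fine in aggregate.
blended_shrinkage = 0.30          # single figure applied to every shift

actual_shrinkage = {              # interval-level actuals (illustrative)
    "day":     0.26,
    "evening": 0.31,
    "night":   0.38,
}
scheduled_hours = {"day": 800, "evening": 600, "night": 400}

for shift, hours in scheduled_hours.items():
    assumed = hours * (1 - blended_shrinkage)        # capacity the model believes
    actual = hours * (1 - actual_shrinkage[shift])   # capacity actually delivered
    print(f"{shift:>7}: model {assumed:6.1f} h, actual {actual:6.1f} h, "
          f"gap {actual - assumed:+7.1f} h")
```

In this sketch the blended figure understates day-shift capacity and overstates night capacity; the totals nearly cancel, which is exactly why the blended number survives review while nights quietly run short and generate overtime.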

Call mix realignment. Utilities call mix shifts constantly: outage reports, billing inquiries, payment arrangements, new service setups. Each has a different handle time. When the mix changes but the AHT assumption doesn’t, staffing math goes quietly wrong. The fix isn’t to coach handle time down. It’s to model actual call mix into the forecast.
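
To see how a mix shift moves staffing math without anyone handling calls more slowly, here is a sketch with invented handle times and mix percentages:

```python
# Hypothetical handle times in seconds, by call type.
aht = {"outage": 210, "billing": 420, "payment": 300, "new_service": 540}

# Invented mix before and after, e.g., a billing system change.
old_mix = {"outage": 0.20, "billing": 0.45, "payment": 0.25, "new_service": 0.10}
new_mix = {"outage": 0.10, "billing": 0.50, "payment": 0.28, "new_service": 0.12}

def weighted_aht(mix):
    """Mix-weighted average handle time in seconds."""
    return sum(share * aht[call_type] for call_type, share in mix.items())

print(f"Old mix AHT: {weighted_aht(old_mix):.1f} s")
print(f"New mix AHT: {weighted_aht(new_mix):.1f} s")
```

In this invented example the effective handle time rises roughly 5 percent, so a forecast still carrying the old AHT understaffs every interval by about that margin even though no individual agent slowed down.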

Headcount model reset. Attrition patterns in utilities contact centers are predictable: seasonal, often tied to weather-related burnout or post-peak exhaustion. But most headcount models are built on annual averages rather than modeled attrition curves. The result is a staffing plan that’s slightly behind at the beginning of the year and chronically behind by Q3. A model that accounts for when departures actually happen, and plans hiring cycles accordingly, removes the structural gap before it becomes structural overtime.
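
The gap between an annual-average plan and a modeled attrition curve can be sketched as follows; the monthly departure counts are invented for illustration:

```python
# Invented monthly departures for a ~200-seat center; peaks after peak season.
monthly_departures = [3, 2, 2, 3, 4, 6, 7, 8, 6, 4, 3, 2]

annual_total = sum(monthly_departures)
flat_plan = annual_total / 12   # what an annual-average model hires against

for month, actual in enumerate(monthly_departures, start=1):
    gap = actual - flat_plan    # positive = plan falls behind this month
    print(f"month {month:2d}: departures {actual}, "
          f"flat plan {flat_plan:.1f}, gap {gap:+.1f}")
```

In this sketch, months six through nine all run well above the flat plan: the annual average is right, the timing is wrong, and the shortfall compounds into exactly the "chronically behind by Q3" pattern described above.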

The Honest Version of This Conversation

If you’ve read this far, you probably have a sense of whether your operation has a structural component to its overtime or not. Most do. The honest version of the conversation isn’t “your people are doing something wrong.” It’s “the model your people are executing against hasn’t been updated to reflect how your operation actually runs today.”

That’s fixable. It doesn’t require a platform overhaul or a reorganization. It requires someone to look at the actual numbers, interval-level rather than averaged, and rebuild the assumptions from what’s true now.

The event didn’t create your overtime problem. It gave it somewhere to hide. The question is what’s underneath.

Frequently Asked Questions: WFM Platform Performance in Utilities

How do you know if a WFM platform is underperforming?

A WFM platform is underperforming when forecast accuracy stalls, variance increases unexpectedly, staffing relies on buffers, and assumptions are not regularly revalidated.

Is underperformance usually a matter of assumptions rather than the software itself?

Yes. Most underperformance comes from outdated assumptions rather than software defects or system limitations.

What should forecast accuracy look like over time?

In stable operations, forecast accuracy should show incremental improvement year over year. Long plateaus are a warning sign.

Why do forecasts lose accuracy?

Forecasts lose accuracy when volume patterns, customer behavior, shrinkage, or business priorities change without corresponding model recalibration.

Does fixing an underperforming WFM platform require replacing it?

No. Many performance issues can be addressed by reassessing assumptions and recalibrating the existing model.

Where should an operation start?

The first step is a structured self-assessment that compares current reality to the assumptions driving the workforce model.

Want to know your structural number?

Use the Structural Overtime Estimator on this page to separate event-driven overtime from structural overtime in under two minutes. Or, if you’d rather talk through what the numbers are telling you:

Shane can walk through it with you in 20 minutes. No deck. No pitch. → 
