The pattern is consistent: a program delivers early results, the consultant leaves, and within six to eighteen months performance has drifted back. The root cause is almost always the same: the operating system was never installed.
There is a pattern that repeats itself across laboratory improvement programs with remarkable consistency. A consultant is engaged. A constraint is identified — usually in throughput, turnaround time, or quality yield. A solution is designed and implemented. Results improve, sometimes dramatically. The consultant departs. And then, over the following six to eighteen months, performance drifts back toward its pre-engagement baseline.
This is not a failure of intent. The laboratory teams involved are not indifferent to performance. The solutions implemented are not technically flawed. The improvement is real — while it lasts. The failure is structural, and it is almost always the same: the operating system was never installed.
A solution addresses a specific problem. A system governs how work is planned, executed, measured, and improved on an ongoing basis. The distinction matters because most laboratory problems are not caused by the absence of a specific solution — they are caused by the absence of a system that would have prevented the problem from developing in the first place, and that would detect and correct it quickly when it recurs.
Consider a laboratory with a turnaround time problem. The constraint is identified as a bottleneck in the analytical phase — a specific instrument running at capacity during peak hours. A solution is designed: a revised scheduling protocol that distributes workload more evenly across the day. Turnaround time improves. The consultant leaves. Six months later, the scheduling protocol has been modified informally to accommodate a new analyst's preferences, a second instrument has been added without updating the scheduling logic, and turnaround time has drifted back.
The solution was correct. But there was no system to govern its maintenance. No management cadence that reviewed scheduling performance weekly. No standard work that defined how the protocol should be modified when conditions changed. No escalation path for when turnaround time began to drift. The solution existed in isolation, and isolation is not a stable state.
The first and most common failure mode is the absence of a structured management cadence: a defined set of meetings, reviews, and escalation protocols that create ongoing accountability for performance. Without a cadence, performance data is reviewed infrequently and reactively. Problems are identified late, when they have already compounded. Decisions are delayed because the forum for making them does not exist.
A management cadence does not need to be elaborate. A fifteen-minute daily huddle at the bench level, a thirty-minute weekly performance review at the supervisor level, and a monthly operational review at the management level will, if well-designed and consistently executed, catch the vast majority of performance problems before they become crises. The discipline is in the consistency, not the complexity.
The second failure mode is standard work that exists as documentation rather than as execution control. In most laboratories, SOPs are written to satisfy an audit requirement. They describe what should happen in sufficient detail to demonstrate compliance. They are not designed to govern what actually happens at the bench, in real time, under production pressure.
Execution-framed standard work is different in three ways. It is written for the person doing the task, not the auditor reviewing it — which means it is concise, visual, and located at the point of use. It defines not just the correct method but the control points where deviation is most likely and most consequential. And it is supported by a monitoring system that makes deviation visible quickly enough to allow correction before the problem compounds.
The third failure mode is the most fundamental: the capability to maintain and improve the operating system was never transferred to the internal team. The consultant designed the system, installed it, and managed it during the engagement. But the internal team was never trained to own it. When the consultant left, the system was an orphan.
"The failure is structural, and it is almost always the same: the operating system was never installed."
This failure mode is partly a function of engagement design and partly a function of consulting incentives. An engagement designed around deliverables — a new scheduling system, a revised set of SOPs, a performance dashboard — will naturally focus on producing those deliverables. An engagement designed around capability transfer will look different: more time spent coaching, more structured handover milestones, more explicit measurement of whether the internal team can operate the system independently.
A sustainment-oriented engagement treats capability transfer as a primary deliverable, not an afterthought. This means several things in practice. It means defining, at the outset, what the internal team needs to be able to do independently by the end of the engagement — and measuring progress against that definition throughout. It means designing the operating system with the internal team, not for them, so that they understand not just what the system does but why it is designed the way it is. And it means structuring the engagement so that the consultant's involvement decreases over time, with explicit milestones at which responsibility transfers.
It also means being honest about the conditions under which sustainment is possible. Some organizations do not have the internal capacity to own a complex operating system without ongoing external support. In those cases, the right answer is not to design a system that the organization cannot maintain — it is to design a system that matches the organization's actual capacity, and to be explicit about what that means for long-term performance.
If you are evaluating a past improvement program that has not sustained, the diagnostic question is not what went wrong with the solution. It is what was missing from the system. Was there a management cadence to govern the solution's maintenance? Was there standard work that defined how it should be operated? Was the internal team trained to own it? If the answer to any of these questions is no, the failure mode is structural — and the fix is not to reinstall the solution, but to install the system.
Next step
Start with a diagnostic conversation. No pre-packaged proposals. No junior teams.