Most analytical laboratories have no formal model of their own capacity. This guide describes a practical approach to building a capacity model that can be used for day-to-day scheduling, medium-term planning, and investment decisions.
Ask the manager of most analytical laboratories how much capacity their laboratory has, and they will give you an answer based on headcount and instrument count. Ask them how much of that capacity is actually available on a given day, accounting for planned maintenance, training, quality system activities, and the inevitable unplanned events that consume analyst time, and the answer becomes less certain. Ask them how that available capacity compares to the demand they are managing, and in most cases the honest answer is that they do not know — not precisely, and not in a form that can be used to make decisions.
This is not a failure of management. It is a consequence of the fact that most analytical laboratories were not designed with capacity modeling in mind. They grew organically, adding instruments and analysts as demand increased, without ever building the formal model that would allow them to understand the relationship between demand, capacity, and throughput. The result is a laboratory that is managed reactively — responding to constraints as they emerge rather than anticipating and managing them.
A capacity model is not merely a planning exercise. It is, first, an operational tool. Its primary purpose is to make the relationship between demand and capacity visible, in real time, so that the people managing the laboratory can make informed decisions about scheduling, resourcing, and escalation. A secondary purpose is to support medium-term planning: understanding how capacity needs to evolve as demand grows, and what investments are required to maintain performance.
Without a capacity model, these decisions are made on the basis of experience and intuition. Experience and intuition are valuable — they should inform the model. But they are not a substitute for it. A laboratory manager who has been in post for ten years has a sophisticated intuitive model of their laboratory's capacity. But that model is not transferable, not auditable, and not useful for planning conversations with finance or operations leadership.
The first step in building a capacity model is to characterize demand — not just the volume of samples or tests, but their distribution across time, their complexity, and their priority. A laboratory that processes 500 samples per day with a flat distribution across the week has a very different capacity challenge from one that processes 500 samples per day with 70% arriving on Monday and Tuesday. The model must capture this distribution, not just the average.
Demand characterization should also capture the analytical complexity of the workload. Not all tests consume the same amount of analyst time or instrument time. A model that treats all tests as equivalent will systematically misrepresent capacity requirements. The right approach is to define a small number of workload categories — typically three to five — that group tests by their resource consumption profile, and to model demand in terms of those categories.
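As an illustration, a three-category workload model can be sketched as follows. The category names and per-test hour figures here are assumptions for illustration, not values from any real laboratory:

```python
# Hypothetical three-category workload model; all hour figures are assumptions.
WORKLOAD_CATEGORIES = {
    "routine":  {"analyst_h": 0.25, "instrument_h": 0.5},
    "standard": {"analyst_h": 0.75, "instrument_h": 1.5},
    "complex":  {"analyst_h": 2.0,  "instrument_h": 4.0},
}

def demand_hours(daily_volumes):
    """Convert per-category test counts into analyst and instrument hours."""
    totals = {"analyst_h": 0.0, "instrument_h": 0.0}
    for category, count in daily_volumes.items():
        profile = WORKLOAD_CATEGORIES[category]
        totals["analyst_h"] += count * profile["analyst_h"]
        totals["instrument_h"] += count * profile["instrument_h"]
    return totals

# 500 samples/day, expressed as resource hours rather than test counts
print(demand_hours({"routine": 300, "standard": 150, "complex": 50}))
# → {'analyst_h': 287.5, 'instrument_h': 575.0}
```

The point of the categories is visible in the output: the 50 complex tests consume more analyst time than the 300 routine ones, which a flat per-test count would hide.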
Available capacity is not the same as theoretical capacity. Theoretical capacity is the maximum output of the laboratory if every analyst worked at full productivity for every available hour and every instrument ran continuously. Available capacity is what remains after accounting for all the activities that consume time without producing analytical output: planned maintenance, training, quality system activities, administrative tasks, and the unplanned events — instrument failures, reagent issues, sample problems — that are a normal part of laboratory operations.
A practical approach is to measure actual productive time — the time analysts spend on direct analytical work — as a proportion of total available time. In most analytical laboratories, this proportion is between 55% and 70%. The gap between theoretical and available capacity is the overhead: the time consumed by necessary but non-analytical activities. The model should use available capacity, not theoretical capacity, as its baseline.
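The conversion from theoretical to available capacity is a single multiplication. A minimal sketch, assuming a hypothetical eight-analyst team, a 7.5-hour working day, and a 60% productive fraction (within the 55-70% range noted above):

```python
def available_capacity_hours(headcount, hours_per_day, productive_fraction):
    """Available capacity = theoretical hours x measured productive fraction."""
    return headcount * hours_per_day * productive_fraction

# 8 analysts x 7.5 h/day = 60 theoretical hours; at 60% productive,
# only 36 hours of analytical output are actually available each day.
print(available_capacity_hours(8, 7.5, 0.60))
```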
Every laboratory has a primary constraint — the resource that limits throughput when demand exceeds capacity. In some laboratories, the constraint is analyst time. In others, it is a specific instrument or instrument type. In others, it is a process step — sample preparation, for example, or data review — that creates a bottleneck regardless of analytical capacity.
Identifying the primary constraint is the most important output of the capacity model. It tells you where to focus improvement effort, where to direct additional resource when capacity is tight, and where additional investment will have the most impact. It also tells you where additional investment will have no impact — because adding capacity to a non-constraint does not increase throughput.
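The constraint-finding logic itself is small. A sketch, with invented resource names and hour figures standing in for a real laboratory's data:

```python
# resource: (available hours/day, demanded hours/day) — illustrative figures
resources = {
    "analysts":    (36.0, 30.0),
    "hplc_fleet":  (40.0, 38.0),
    "sample_prep": (20.0, 19.5),
}

def primary_constraint(resources):
    """Return the resource with the smallest margin of capacity over demand."""
    return min(resources, key=lambda r: resources[r][0] - resources[r][1])

print(primary_constraint(resources))  # → "sample_prep" (margin of 0.5 h/day)
```

Note that in this example sample prep, not the instrument fleet, is the constraint, even though its absolute capacity is smallest: what matters is the margin, not the size of the resource.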
"Identifying the primary constraint is the most important output of the capacity model."
A laboratory that is running at 100% of available capacity has no buffer for variability. Any unexpected increase in demand, any unplanned instrument downtime, any analyst absence will immediately cause a backlog. The capacity model should define a target utilization level — typically 75% to 85% of available capacity — that provides a buffer for normal variability while maintaining high throughput.
The size of the appropriate buffer depends on the variability of demand and the consequences of a backlog. A laboratory with highly variable demand and high-consequence turnaround time commitments needs a larger buffer than one with stable demand and flexible turnaround time requirements. The model should make this trade-off explicit.
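One way to make the trade-off operational is a simple classification of the constraint's current utilization against the target band. A sketch, assuming the 75-85% band discussed above (the band itself should be set per laboratory):

```python
def capacity_status(demand_h, available_h, target_low=0.75, target_high=0.85):
    """Classify constraint utilization against the target band."""
    utilization = demand_h / available_h
    if utilization > target_high:
        return "escalate"        # buffer consumed: backlog risk
    if utilization < target_low:
        return "spare capacity"  # room to absorb extra demand
    return "on target"

print(capacity_status(31.0, 36.0))  # 31/36 ≈ 0.86 → "escalate"
print(capacity_status(28.0, 36.0))  # 28/36 ≈ 0.78 → "on target"
```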
A laboratory capacity model does not need to be complex. The most useful models are simple enough to be maintained by the laboratory team without specialist support, and transparent enough to be understood by the people who use them. A well-designed model can be built in a spreadsheet in a day, using data that is already available in most laboratory information management systems.
1. Start with demand data. Extract three to six months of historical test volume data, broken down by test type and day of week. Calculate the average daily demand and the peak-to-average ratio. This establishes the baseline demand profile.
2. Measure available capacity. For each analyst role and instrument type, calculate the proportion of time spent on direct analytical work versus overhead activities. Use this to convert theoretical capacity into available capacity.
3. Identify the primary constraint. Map the demand profile against the available capacity of each resource. The resource with the smallest margin between demand and available capacity is the primary constraint.
4. Define the target utilization range. Based on the variability of demand and the consequences of a backlog, set a target utilization range for the primary constraint. This becomes the operational trigger for capacity management decisions.
5. Build in a review cadence. The model is only useful if it is kept current. Assign ownership of the model to a specific role, define a weekly or monthly review cadence, and establish the conditions under which the model should be updated.
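The demand-profile step above can be sketched against a flattened LIMS export. The row layout, dates, test names, and counts here are invented for illustration; a real extract would cover three to six months, not three days:

```python
from collections import defaultdict
from statistics import mean

# Flattened LIMS export: (date, weekday, test_type, count). Invented rows.
rows = [
    ("2024-03-04", "Mon", "assay", 410), ("2024-03-04", "Mon", "purity", 210),
    ("2024-03-05", "Tue", "assay", 380), ("2024-03-05", "Tue", "purity", 200),
    ("2024-03-06", "Wed", "assay", 260), ("2024-03-06", "Wed", "purity", 190),
]

# Aggregate to daily totals, then derive the baseline demand profile.
daily_totals = defaultdict(int)
for _date, weekday, _test_type, count in rows:
    daily_totals[weekday] += count

avg_daily = mean(daily_totals.values())               # 550.0
peak_to_avg = max(daily_totals.values()) / avg_daily  # 620 / 550 ≈ 1.13
print(f"average {avg_daily:.0f}/day, peak-to-average {peak_to_avg:.2f}")
```

This is the whole point of the spreadsheet-in-a-day claim: the model's core calculations are aggregations and ratios over data the LIMS already holds.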
A capacity model that is used only for day-to-day scheduling is underutilized. Its second major application is in planning conversations with finance and operations leadership — conversations about headcount, instrument investment, and service level commitments that currently happen without a shared factual basis.
With a capacity model in place, these conversations change character. Instead of a laboratory manager arguing for additional headcount on the basis of felt pressure and anecdotal evidence, the conversation is grounded in a model that shows current utilization, projected demand growth, and the point at which additional capacity will be required to maintain service levels. The model does not make the decision — it makes the decision-making process more rational and more productive.
This is, ultimately, what a capacity model is for: not to produce a number, but to create a shared understanding of the laboratory's operating reality that can be used to make better decisions. Building that understanding is one of the highest-leverage investments a laboratory leadership team can make.
Next step
Start with a diagnostic conversation. No pre-packaged proposals. No junior teams.