The TOC Buffers
11 November 2017, Oded Cohen
Last Thursday (9 November 2017), I gave a presentation at the 35th TOCPA Conference in Vilnius, Lithuania, on TOC Buffers.
Discussions that followed the presentation made me realize that there is some misunderstanding of the TOC Buffers.
The word/term ‘buffer’ is commonly used today to mean a reserve, something added on top, etc. – generally to indicate a mechanism of protection against the unknown, mishaps and the unexpected.
Some TOC practitioners have the perception that Buffers in TOC are there just to absorb variability. This is partially true, but it is not the full picture, and it does not explain the difference between the meaning of ‘TOC buffers’ and conventional ‘buffers’.
The TOCICO dictionary states that: “buffer – Protection against uncertainty. The protection is aggregated and may take the form of time, stock (inventory), capacity, space or money. Buffers are strategically located to protect the system from disruption.”
Actually, the above definition holds for any protection mechanism that has been built into mainstream managerial systems from the 1950s onward.
As the manager of the computer department of a large Israeli company in 1977-1980, I can state that:
- Safety stocks were added to inventory management systems to protect against fluctuations in consumption and variability of supply (falls under the definition of the ‘buffer as protection against uncertainty’).
- MRP – Material Requirements Planning – used the lead time as the mechanism to protect production from fluctuations and unknown disruptions. One formula for calculating the lead time was the sum of: process time + setup time + queue time + wait time (a minimal sketch of this calculation follows the list). That meant adding extra time to what was actually needed for producing the product (falls under the definition of the ‘buffer as protection against uncertainty’).
- Extra time was added when determining standard production times for processes. Industrial Engineering methods – such as MTM – instructed adding allowances (falls under the definition of the ‘buffer as protection against uncertainty’) when a time and motion study was performed.
- Additional capacity (falls under the definition of the ‘buffer as protection against uncertainty’) was incorporated into scheduling systems by manipulating the calendars of certain machines – a well-known mechanism used by production planners.
- Space – debugging of new flow lines also included adding space for accumulating WIP in front of machines that suffered from too high a level of breakdowns (falls under the definition of the ‘buffer as protection against uncertainty’).
- Money – the general practice when budgeting new projects was to add time and money contingencies to cover the unexpected (falls under the definition of the ‘buffer as protection against uncertainty’).
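A minimal sketch of the MRP lead-time calculation referred to in the list (all the numbers are hypothetical; they only illustrate how much protection is built into the planned lead time on top of the actual processing time):

```python
# Minimal sketch of the conventional MRP lead-time calculation mentioned above.
# All numbers are hypothetical; they only illustrate that the planned lead time
# contains much more than the time actually needed to produce the product.

def planned_lead_time(process, setup, queue, wait):
    """Lead time = process time + setup time + queue time + wait time (hours)."""
    return process + setup + queue + wait

process, setup, queue, wait = 4.0, 1.0, 30.0, 15.0   # hypothetical hours
total = planned_lead_time(process, setup, queue, wait)
print(f"Planned lead time: {total} h; the queue and wait components add {queue + wait} h of protection")
```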
All of the above was used by my company and by other companies I knew before I met Eli Goldratt and started using OPT.
I think we can agree that using buffers as protection against uncertainty is not a TOC invention.
The TOC Buffers are MORE than just protection against uncertainty. TOC Buffers play a significant role in managing systems and flows, by:
- Providing signals during the execution phase. These signals indicate to management the status of the flow and detect deviations from the planned flow – especially in cases of slowdown (a minimal sketch of such a signal follows this list).
- Through the buffer status, management is prompted to intervene and take recovery actions that restore the flow and protect the commitments to its outcomes.
- The buffer status prompts the recording of the causes of delays and stoppages in the flow. These causes are analyzed for systemic failures and provide input for improvement initiatives.
- Buffers are continuously checked against their purpose – to provide the expected level of protection without exaggeration, i.e. without overprotecting.
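A minimal sketch of how the buffer status can produce such signals (the division of the buffer into three equal zones – green, yellow, red – is a common TOC convention assumed here, not something prescribed above; the numbers are invented):

```python
# Minimal sketch of buffer status reporting. The three equal zones and their
# names are a common TOC convention and an assumption here, not taken from
# the article itself.

def buffer_penetration(buffer_consumed, buffer_size):
    """Fraction of the buffer already consumed (0.0 = untouched, 1.0 = fully consumed)."""
    return buffer_consumed / buffer_size

def buffer_signal(penetration):
    """Translate buffer penetration into a management signal."""
    if penetration < 1 / 3:
        return "GREEN - flow on track, no action needed"
    if penetration < 2 / 3:
        return "YELLOW - prepare a recovery action, investigate the cause"
    if penetration <= 1.0:
        return "RED - act now to recover, record the cause of the delay"
    return "BLACK - buffer fully consumed, the commitment is at risk"

# Hypothetical order: 12-day time buffer, 9 days of it already consumed.
print(buffer_signal(buffer_penetration(9, 12)))   # -> RED - act now ...
```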
Therefore, at least for TOC practitioners, it should be very clear that the TOC Buffers are a unique feature of TOC. Given that the buffers were established in the early days of TOC, they should be considered part of the TOC Pivot in the U-shape.
The TOC Pivot is the core of TOC: it includes the Constraint, the 5 Focusing Steps, the 3 basic assumptions of TOC, T-I-OE, and the 6 questions of technology (the minimum set of entities that defines TOC).
A question for further thought:
If the TOC buffer is the solution (a new concept) – then we have to ask ourselves: “what is the problem that led to the development of this special entity of TOC?”
Feel free to write to me at: oded.cohen.gs@gmail.com
Published by Oded Cohen, 11 November 2017
Thanks, Oded, for a timely, elaborate and excellent reminder and articulation of the term – Buffer – and its uniqueness in the context of TOC.
Thanks Mr. Cohen, great post and a good thinking trigger!
I would say that the problem the TOC Buffer solves comes from a paradox created by the traditional buffer.
The more time you give to complete an order, the earlier the order is released to production. The earlier the order is released to production, the more orders are in the hands of production. The more orders are in the hands of production, the less control there is over which orders are being prioritized. The less control there is over the prioritization of orders, the lower the chance of delivering the orders on time. So, the more time you give to production, the lower the chance of producing on time. Great challenge!
Hi Oded.
We have never met.
I did meet Eli Goldratt in Sydney in about 2006, I think it was.
For me, TOC buffers provide focus.
That focus tells us what job we should be working on.
The effect is we are spending more and more time working on the right jobs.
The problem resolved is that, given variation, in the moment people do not know which job to work on.
As of this morning, we have just installed buffer management into an environment.
And the people could identify the job they were doing now, and what was next.
And these jobs were in Zone 1.
And the effect that losses accumulate and gains get lost can be visibly seen.
And that leads to conversations about dependency and variation.
And why we do stuff the way we do.
And the effect of looking at a local initiative vs a global initiative.
We expose the process time vs all the other time, and realise that it is not decreasing the process time that will make the difference.
It is decreasing all the sitting time, by half, and then by half again.
Then look at process steps.
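A hypothetical illustration of that point, with invented numbers:

```python
# Hypothetical numbers: the job is worked on for 4 hours but sits (queues, waits)
# for 76 hours. Halving the sitting time, and then halving it again, shrinks the
# lead time far more than any improvement to the 4 hours of process time could.
process_time = 4.0
sitting_time = 76.0

for _ in range(3):
    print(f"lead time = {process_time + sitting_time:.0f} h "
          f"(process {process_time:.0f} h, sitting {sitting_time:.0f} h)")
    sitting_time /= 2   # halve the sitting time; the process time is untouched
```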
This is my simplistic view.
Thank you for commenting. The above comments highlight the problem with the traditional protection mechanisms that have been in use. Management and its systems incorporate safety into the planning, but there is no recognition of the fact that a plan is just a plan and that management has to take a very active role during execution. Hence, without managing the buffers, they are bound to be wasted! Not only do they fail to produce the expected improvement in performance; sometimes the buffers even create a mess in the flow. Hence – TOC Buffers are always used together with Buffer Management.
Dear Oded, Thanks for your article and your insights into buffers.
I came across a dilemma that can only be resolved by the use of TOC buffers as you describe them, and not just through static reserves.
The situation/exercise is about how to fill the pipeline of a project organization.
This is what I’ve implemented with a client.
To fill the pipeline to the brim, the average workload for the constraint must be 100%.
There must be no gaps in the pipeline.
New projects will only be started when capacity becomes available.
So the constraint always has time to get the work done.
Because all projects are different, the workload can’t be kept under 100%. It fluctuates around the 100% line.
(The “frequency” of the fluctuations seems to be about 50% of the average throughput time of the projects, which seems logical.)
When the workload is higher than 100%, the project schedule of the affected projects can’t be met.
Although there is time to get all the work done (let’s say in a year), it can’t all be done on time.
The only way to reduce the workload so that it stays below 100% again is to delay projects.
But this will reduce the average workload and create overcapacity. Delaying projects will create gaps in the pipeline.
To guarantee maximum usage of the constraint, there should not be gaps in the pipeline.
To guarantee the project schedule, the workload must always be lower than 100%. The pipeline must contain gaps.
There is no way to fix this with a static plan made upfront.
This dilemma can be solved by adding a project buffer to each project
(the size should correspond to 50% of the “frequency” of the fluctuations, so 33% should be enough).
This buffer is not just a reserve.
It is just a time buffer: there is enough time to get everything done, but it can’t all be done on time.
And we don’t know upfront which projects will be affected.
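A minimal sketch of how this could look (the 33% buffer ratio comes from the reasoning above; the single drum resource, the durations and the release rule are hypothetical assumptions, not part of the comment):

```python
# Minimal sketch: each project carries a project buffer of 33% of its duration,
# and a new project is released only when the constraint (drum) resource has
# finished its work on the previous one. The single drum, the durations and the
# release rule are hypothetical assumptions.

def schedule_pipeline(projects, buffer_ratio=0.33):
    """projects: list of (name, drum_work_days, project_duration_days)."""
    schedule = []
    drum_free = 0.0                            # day on which the drum becomes available
    for name, drum_work, duration in projects:
        release = drum_free                    # stagger: release only when the drum is free
        drum_free = release + drum_work        # the drum is booked for this project's work
        buffered_due = release + duration * (1 + buffer_ratio)
        schedule.append((name, release, buffered_due))
    return schedule

for name, release, due in schedule_pipeline([("A", 20, 60), ("B", 15, 50), ("C", 25, 70)]):
    print(f"Project {name}: release on day {release:.0f}, buffered due date day {due:.0f}")
```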
Does this make sense?
I understand that for some reason DBR is not used for projects. Do you have information on why that is the case?
What is the current challenge with staggering projects and resource leveling? I can’t find information about this. Do you know about it?
Jan Van Egmond,
Happy New Year!
We are sorry for the delay in posting your comment. We missed it.
There are standard mechanics for handling multi-project environments.
When a new solution is suggested, it means that:
1. The standard solutions are not known, or
2. The standard solutions are known and have been implemented correctly several times, but have not produced the expected results due to specific circumstances.
We invite you to come to our ThinkCamp in Vergi, Estonia (on the Baltic coast, 80 km east of Tallinn).
This is where we do developments.
Please contact me directly to my email address.
My email address:
oded.cohen.gs@gmail.com