‘Working Paper’ Drum-Buffer-Rope: Resolving the Capacity Utilization vs Reliability Conflict in Manufacturing Schedules
– Dr. Shelja Jose, co-authored by Eli Schragenheim and Satyashri Mohanty
Conventional Operational Methodologies
The concept of ‘division of labor’ was the single idea that revolutionized industrial manufacturing. This idea of dedicating each step of the entire production process to one type of operation is still how manufacturing plants are organized (Heizer, 1998). With the evolution of new manufacturing techniques, plants became increasingly complex in terms of the number of process steps, BOM (bill of materials) structures, and routings for different types of products. This complexity brought with it scheduling challenges: the conflict between the necessity to ship orders on time and the need to get maximum output. Variability and uncertainty added to the complexity of this decision (Hopp & Spearman, 2008). On the one hand, if the plant was loaded to its full capacity, any variability created a cascading effect of queuing delays. On the other hand, if some capacity was spared, the plant could make up for variability, ensuring good due date performance; but the fear was that if no such aberration occurred, the spared capacity would go to waste (Hopp & Spearman, 2008). High output was therefore considered synonymous with poor reliability, and vice versa. Manufacturing units struggled to walk the tightrope of loading the factory as much as possible without completely upsetting reliability.
The solution was thought to lie in implementing MRP II, which, inspired by MRP, involved planning by extrapolating the various lead times of the required raw materials (RM). But the underlying assumption of infinite capacity in MRP played foul with the application of this concept in reality. This experience reinforced what was already intuitively believed: that if capacity could be determined precisely, a formula could be devised to effectively address this output vs reliability conflict. Therefore, MRP II evolved to include rough capacities of the various workstations before planning production (Ptak & Smith, 2011). But in spite of this ‘rough-cut’ levelling of capacity, manufacturing plants that had implemented MRP II still experienced large patches of overload and unreliability (Ptak & Smith, 2011). According to Srikanth (2010), the nervousness in manufacturing supply chains is higher when the supply chain is managed by a sophisticated MRP system. Without precision in capacity, the planned load was often too high, and the resultant delay cascaded into the lead times of future orders, inevitably delaying them. The same effect was experienced whenever “Murphy” (the symbol of everything that may go wrong) hit (Hopp & Spearman, 2008). This meant that the plant was no longer able to predict output and ensure reliability. Moreover, while ‘making to stock’, MRP uses demand forecasts as ‘orders’ for planning. These forecasts are inevitably inaccurate, and MRP has to be re-run for the same period to accommodate stock corrections. This generated schedule changes which would be chaotic if followed. To address these problems, manufacturing units started practicing “time buckets” (Toomey, 1999): planning for a period (using MRP II) and then re-planning for the next period to accommodate spill-overs or changes in demand. It was used as a means to simplify scheduling by ignoring the exact due date within the time bucket.
This way, the overload in a period (usually a month) did not perpetually delay all subsequent orders. Residual orders and stock requirement changes due to forecasting errors could also be accommodated. Without this correction, this method of planning would have let unreliability grow without bound as the time horizon of production moved farther away from the day the plans were initially drawn up. Since “bucketing” allowed the plant to re-plan at the end of the assigned period, some level of reliability was achieved for the ‘bucket’ period, though even within this period there was no guarantee of what would come out and when.
Many manufacturing units follow this method to date. But this method too runs into problems (Kumar, Maheshwari, & Kumar, 2003). Demand changes between two consecutive planning sessions. Thus, if parts were produced for a certain end-item and the demand for it did not pan out, piles of WIP were created which had to be adjusted for in the next planning cycle. By the time the plans are reset to accommodate this and the raw material is “netted off”, the first week of the month is long over, during which period the plant is literally blind to the manufacturing plan. Once the vendors have also done their respective “netting off” and completed sourcing, manufacturing and delivery, the company is already behind schedule. Therefore, the last week of the month sees a frantic spike in production to catch up with the planned output. This skewed production invariably leads to multiple bottlenecks emerging in the plant during the course of the month. Many plant managers operating under the ‘bucket’ system tend to strongly believe that their plants have multiple constraints and strive to find scheduling techniques to address this adequately.
Technology-led Solutions
Over the years, the manufacturing world sought to further fine-tune capacity definition in an effort to reduce variance and improve predictability of output in this complex manufacturing environment. Towards this end, capacity at workstation level was analyzed (Toomey, 1999) and planned using deterministic simulation of the flow of jobs. But the application of the closed-loop Capacity Requirement Planning (CRP) developed as a result was rather short-lived (Hopp & Spearman, 2008). When planning was done using CRP and one workstation was identified as overloaded, the plan would be staggered in time (Toomey, 1999). But this often shifted the overload to another workstation. Without the computing power available at that time to handle all these permutations and combinations, CRP stayed as theory.
The Advanced Planning and Optimization (APO) and Advanced Planning and Scheduling (APS) systems and the finite capacity scheduling models which evolved in due course used advanced computing and processing power to set manufacturing schedules dynamically. In addition to handling the complex scheduling problems required to manage the multiple constraints that managers working in the ‘bucket’ system believed they had, these applications also allowed for almost instantaneous re-planning when ‘Murphy’ hit. But the dynamic scheduling and large processing power also meant that the noise of the system (data inaccuracies and irrelevant data) got amplified, and consequently customer promises had to be changed frequently. Since APO and APS were unable to produce optimal schedules, most users were unhappy (Hopp & Spearman, 2008).
This effort to find the ideal technology solution continued with increasingly complex applications. But many felt that the approach of these solutions was to chase symptoms and propose incomplete answers (Ptak & Smith, 2011), and that the underlying model of these software solutions was flawed (Hopp & Spearman, 2008). Some practitioners and experts therefore came to believe that the very direction in which the solution to the capacity utilization vs reliability dilemma was being pursued had to be rethought.
Changing the direction of the solution
The thought evolved that it may be an impossible quest to try to define capacity precisely. The variables involved were just too many! Uncertainty and product mix determine capacity, and while product mix can be managed to an extent, uncertainty by definition cannot be predicted. In a production system, unreliable equipment, unpredictable yields, glitches in human performance and fluctuations in order sizes can all create variability in capacity. For example, if a worker comes late or the weather turns humid, output could be affected in certain circumstances; or if the grade of wool in a weaving machine changes, the output can change.
Capacity utilization is also guided by many rules set by managers (Ptak & Smith, 2011). Most of these rules are flexible and are applied based on the situation of the day. For example, to ensure quality of colors, a paint shop in a furniture factory needs to maintain color sequences across orders. Ideally (to conserve maximum capacity), they should do all the whites first, then go on to increasingly darker shades, and then work in reverse, since this method involves the least cleaning during color changes. But often, in order to meet emergency requirements or because of the order mix in hand, the paint shop has to disrupt this sequencing. Not following the rule may undermine the capacity of the plant, but the decision is made in order to ensure on-time delivery of orders.
At the same time, the rules cannot be abandoned while modelling the schedules either, since that would lead to incorrect estimation of capacity. Such situations are difficult to predict, and all the exceptions to these rules are difficult to model into scheduling algorithms (Mohanty, 2013).
How, then, can manufacturing units get the best possible output from limited capacity while ensuring reliability?
Separation of Planning from Execution
To solve this dilemma, the Theory of Constraints (TOC) proposed by Dr. Eliyahu Goldratt advocated the separation of planning from execution. Traditionally, good planning/scheduling is viewed as detailed decisions made in advance which should be followed blindly in the execution phase. But Goldratt realized that this inflexibility is counterproductive, since whenever Murphy hits, the plan has to be abandoned. Therefore he suggested a minimal planning phase, which takes care of flow and sets priorities, and a robust execution phase, which can ensure on-time delivery even under uncertainty.
Using this concept, Dr. Goldratt developed a generalized method of planning for manufacturing operations with the primary objective of ensuring smooth flow and prioritization in plants. Instead of trying to solve a complicated net of links between processing steps and resources, several of which might have limited capacity, a vastly simplified method was evolved with focus on the system constraint.
While the skewed output in the bucket period may give managers the wrong intuition that there are multiple constraints in the plant, in any chain or process there is only one link that is the weakest (Goldratt & Cox, 1993). Goldratt realized that this link determines the strength and output of the whole system. Therefore, if this link or resource (called the bottleneck or constraint) is put at the heart of the overall production plan and the flow is regulated based on this constraint’s output, the entire system can get the best possible output with the existing capacity (Ronen & Spector, 1992; Jackson & Low, 1993).
Dr. Goldratt suggested that conservative production plans, with a priority system that ensures the right items are manufactured, can allow for sufficient protective capacity to accommodate uncertainties and still deliver on time (Gupta, 2003). This new method of planning and organizing production came to be known as DBR or Drum-Buffer-Rope, and the execution control part of DBR was called Buffer Management (BM). Researchers and practitioners added to and enhanced this methodology over time.
Wave 1: The Traditional Drum-Buffer-Rope
Dr. Eliyahu Goldratt introduced this concept to the world through his groundbreaking book The Goal (Goldratt & Cox, 1993). He further described the method and its associated terminology in The Race (Goldratt & Fox, 1986) and The Haystack Syndrome (Goldratt, 1990).
DBR represented a rather radical way of running manufacturing (Schragenheim, Dettmer, & Patterson, 2009). The DBR model was designed to protect production schedules from variation that cannot (or need not) be completely eliminated. The objective of the DBR system is to meet Throughput expectations while managing inventory and operating expense. It helps regulate the flow of work-in-process (WIP) through a production line at or near the full capacity of the most restricted resource (the Constraint) in the manufacturing chain (Schragenheim & Ronen, 1991). To do so, a schedule or plan, called the Drum, is developed for manufacturing based on this Constraint.
The plan for manufacturing and the pace of the entire system are based on the constraint’s capacity, since the output at the constraint is the same as the output of the whole system. Any attempt to produce more than what the constraint can process just leads to excess inventory piling up. Therefore, the drum is a schedule which considers the work that needs to be done by the constraint; this schedule decides what to produce, in what sequence and how much, establishing the rhythm of work for the whole system (Goldratt, 1990).
Since any disruption of production at the bottleneck leads by definition to lost throughput, and since disruptions and variability are intrinsic to a system and not completely avoidable, the actual lead time will inevitably be larger than planned if the plan does not allow for some degree of padding in terms of safety time (Stein, 1996). This is done with the help of time buffers: additional lead time, beyond the required set-up and process times, for materials in the product flow.
According to Srikanth (2010), what makes this concept of “time buffers” unique and powerful is that, unlike earlier methodologies that tried to protect reliability at each step of the process, DBR is not designed to make each task finish on time as per the planned schedule but to ensure that work flows through the system with sufficient reliability to meet due dates. Moreover, this higher degree of reliability is possible with significantly lower lead time (Simatupang, 2000).
When a suitable time buffer is used, WIP will naturally accumulate before the constraint workstation (the constraint buffer), and when there is a breakdown in any of the upstream workstations, the constrained resource can maintain flow using this buffer. Since all other resources have protective capacity, the buffer gets rebuilt once they are operational again. The buffers used are the constraint buffer (to ensure that the bottleneck is fully utilized) and the assembly and shipping buffers (to protect due dates) (Sullivan et al., 2007; Louw & Page, 2004).
But the amount of time buffer or stock buffer used is important: too little, and disruptions will retard flow; too much, and there would be excessive inventory pile-up and a lack of coordination leading to chaos. Between these two is the zone where the buffer is sufficient to protect flow (Gardiner et al., 1992). (Figure 2 illustrates the impact of the time buffer on the effort required to maintain flow.)
The last element of DBR is the rope: the mechanism used to control the flow through the entire system by controlling a small number of points. While the drum sets the master schedule and the buffer provides the necessary protection of reliability, the rope communicates and controls the actions necessary to support these in the system. It is essentially the action-trigger device. It ensures that all work centers perform the right tasks in the right sequence and at the right time, by limiting the WIP available at a work center to what is immediately needed. To implement the rope, the material release points are provided with detailed lists of what material is to be released and when, based on the master schedule (Russell & Fry, 1997).
Planning using DBR does not guarantee automatic execution on the shop floor. While a DBR implementation significantly reduces the impact of variability on reliability, disruptions can occasionally occur that go beyond the scope of the buffer(s). Buffer management provides the method to systematically and proactively manage these and provides signals to enable corrective actions in these circumstances. It identifies cases in which such actions are needed and helps monitor the effectiveness of the corrective actions (Goldratt, 1990).
Each work order has a remaining buffer status that can be calculated. Based on this buffer status, work orders can be color-coded Red, Yellow and Green. The red orders have the highest priority and have to be worked on first, since they have penetrated deepest into their buffers, followed by yellow and green. As time passes, this buffer status might change, and the color assigned to a particular work order changes with it (Srikanth, 2010). This mechanism also signals the adequacy of the buffers that have been established. If the number of red orders is too high, it is an indication that a significant number of orders are experiencing disruptions during processing and that the aggressive buffer times should be relaxed. And if there are too many green orders, the planned buffer time will have to be tightened (Schragenheim, 2010).
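The colour-coding logic described above can be sketched in a few lines. This is an illustrative sketch, not a reference implementation: the equal one-third zone boundaries and the elapsed-time measure of buffer penetration are common conventions assumed here, and the order data is made up.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkOrder:
    name: str
    release_date: date   # when material was released to the floor
    due_date: date       # promised delivery date

def buffer_penetration(order: WorkOrder, today: date) -> float:
    """Fraction of the order's time buffer already consumed (0.0 to 1.0+)."""
    total = (order.due_date - order.release_date).days
    elapsed = (today - order.release_date).days
    return elapsed / total if total > 0 else 1.0

def buffer_colour(penetration: float) -> str:
    """Three-zone priority (assumed equal zones): green, then yellow, then red."""
    if penetration < 1/3:
        return "green"
    if penetration < 2/3:
        return "yellow"
    return "red"

orders = [
    WorkOrder("A", date(2015, 7, 1), date(2015, 7, 31)),
    WorkOrder("B", date(2015, 7, 5), date(2015, 7, 25)),
]
today = date(2015, 7, 20)
# Work the most penetrated (reddest) orders first.
for o in sorted(orders, key=lambda o: -buffer_penetration(o, today)):
    print(o.name, buffer_colour(buffer_penetration(o, today)))
```

Re-running this each day naturally re-prioritizes the floor as buffer status evolves, which is the behaviour the paragraph above describes.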
Limitations of DBR
While traditional DBR offered a vast improvement in the effectiveness of manufacturing operations, implementations in certain environments revealed that it had room for improvement. The major reason was that when it was designed, it was assumed that the constraint would always be in the factory. This was not necessarily true.
Bottleneck not always in the plant
Traditional DBR is designed for situations in which demand is assured and always exceeds the company’s capacity to produce. But for most organizations, demand fluctuates over time: in some phases demand exceeds the plant capacity, while in others capacity goes underutilized. This led to the conclusion that the market is the real constraint, a fact often masked by large fluctuations in demand (Schragenheim, Dettmer, & Patterson, 2009).
Internal schedule vs delivery dates
Recognition that the market is the real constraint naturally led another concept to be seriously reconsidered: the prudence of following the schedule based on the internal constraint. Since it was acknowledged that delivering with very high reliability on due dates was more important than maintaining the internal schedule at all costs, it was really difficult to follow the strict schedule set at the constraint. Depending on due dates, some orders were pulled ahead and others delayed, violating the schedule in order to meet promised delivery dates. Therefore, managing based on a master production schedule built on customer demand (the real constraint), rather than the schedule at the internal constraint, was deemed more appropriate (Schragenheim, 2010).
Constraint buffer vs Shipping Buffer
Since due dates are sacrosanct, there was no question that a shipping buffer was needed to protect them. But the need for the constraint buffer was questioned. Constraint buffers were originally designed to ensure that the constraint does not starve and that parts of a product do not get delayed before reaching shipping. But with an adequate shipping buffer protecting the whole process, a constraint buffer becomes superfluous, since a buffer would automatically build before the constraint, the slowest workstation. Moreover, expediting at the constraint (based on constraint buffer penetration) even when there is ample time until delivery could actually result in confused priorities (Schragenheim, Dettmer, & Patterson, 2009).
“Exploiting the constraint”
Due dates were more often missed not because of inadequate buffers but because of a propensity to milk as much as possible from the constraint. Though Goldratt had warned, “Don’t be too greedy in exploiting the constraint; leave something on the table” (Schragenheim, Dettmer, & Patterson, 2009), practitioners have historically tried to squeeze every bit of capacity out of the constraint. This inevitably led to longer lead times and missed due dates, since the constraint buffer does not protect against Murphy hitting the constraint workstation itself, and if the loading on the constraint machine extends over long horizons, the shipping buffer becomes inadequate to absorb the effect of this Murphy.
Managing due dates
To add to the confusion, the due dates themselves were often a cause of contention. In traditional DBR, it is often necessary to re-compute the schedule at the constraint just to know whether newly received orders have a good chance of on-time delivery or some risk of being late. Often this information comes too late, since the warning would appear only when the specific order enters the next schedule run (Schragenheim, Dettmer, & Patterson, 2009).
The schedule at the constraint not only went through frequent re-planning due to new orders and disruptions, but also had to incorporate changes in order to manage multiple buffers whenever an order had to be delayed or expedited. The system did not allow the flexibility to deal with all such orders, because the impact of the constraint buffer turning red on the possibility of other buffers becoming red too is not straightforward. Managing using multiple buffers, while a considerable improvement over expediting randomly, was complicated, and there was scope for simplifying the process so that management attention could be more focused (Schragenheim, 2010).
Wave 2: Simplified-DBR
Simplified DBR (s-DBR) is a variation of the original DBR methodology, suggested by Schragenheim and Dettmer (2000) in Manufacturing at Warp Speed as a valid, simplified replacement for DBR that addresses the issues traditional DBR threw up. s-DBR, which assumes that the constraint is in the market (the drum to which the rope is tied), has since replaced DBR as the preferred planning method. Not only did it simplify production planning, it also eliminated the need for all buffers except the shipping buffer and gave better guidelines for decision making.
D-B-R in s-DBR
With the understanding that the market is always the real constraint (though often not recognized as such due to large fluctuations in demand), s-DBR eliminated the need to identify the constraint in the plant precisely. The bottleneck now identified in the plant is called a capacity constrained resource (CCR), i.e. the resource that can and will become a bottleneck if demand surges. It also made sense to tie the rope to actual market demand rather than to utilization considerations. Therefore, s-DBR does not schedule the CCR in detail or try to maintain that schedule at all costs. Instead, s-DBR relies on a master production schedule based on market demand, to which even the CCR subordinates.
s-DBR also enforced the concept of “planned load”, i.e. ensuring that work is released onto the floor only up to near-capacity (say 90%) of the constraint, not its full capacity. This makes sure that disruptions in upstream activities do not create an unrecoverable backlog for the CCR.
The only buffer used in s-DBR is the shipping buffer: a liberal estimate of the time from release of raw material to the arrival of the finished order for shipping. Material release in the s-DBR system follows the “do-not-release-before-schedule-date” rule, i.e. if the delivery is required later than the time frame of the buffer, the material should not be on the shop floor; this ensures that order is maintained on the shop floor. Penetration into the shipping buffer therefore suffices to track orders and set priorities on the shop floor.
In s-DBR, new production orders received are assigned due dates based on the existing shipping buffer length. A feasibility check is carried out using the planned load. The new orders are added to the planned load, while orders completed that day by the CCR are deleted from it. When there is an acceptable gap between the value of the planned load and the standard quoted lead time, the new orders become customer orders. The material release date is then determined based on the due date. If the comparison shows a lack of capacity to meet the order, then additional capacity can be acquired or the customer is given an extended date.
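This feasibility check can be sketched as follows. All numbers are illustrative assumptions (CCR capacity, buffer length, quoted lead time), as is the half-buffer safety margin used to decide whether the standard date is safe; none of these are prescriptions from the source.

```python
# Assumed, illustrative parameters for a single CCR.
CCR_HOURS_PER_DAY = 8.0     # daily capacity of the capacity constrained resource
SHIPPING_BUFFER_DAYS = 10   # shipping buffer length
STANDARD_LEAD_TIME = 12     # lead time quoted to the market, in days

def safe_due_date(planned_load_hours: float, order_hours: float) -> float:
    """Days from today by which the new order can reliably ship: time for
    the CCR to clear its committed load plus this order, plus half the
    shipping buffer as protection (an assumed rule of thumb)."""
    load_days = (planned_load_hours + order_hours) / CCR_HOURS_PER_DAY
    return load_days + SHIPPING_BUFFER_DAYS / 2

def quote(planned_load_hours: float, order_hours: float) -> str:
    safe = safe_due_date(planned_load_hours, order_hours)
    if safe <= STANDARD_LEAD_TIME:
        return f"accept at standard lead time ({safe:.1f} days is safe)"
    # Otherwise: acquire extra capacity or give the customer an extended date.
    return f"extend delivery to about {safe:.1f} days (or add CCR capacity)"

print(quote(24.0, 8.0))   # lightly loaded CCR: standard date is safe
print(quote(80.0, 8.0))   # heavily loaded CCR: extended date needed
```

As completed work is removed from the planned load each day, the same check automatically tightens or loosens the dates being quoted.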
Wave 3: s-DBR for MTA
Traditional DBR does not differentiate between make-to-order (MTO) and make-to-stock. It treated make-to-stock as MTO by creating dummy orders with a quantity and an artificial due date. But unlike firm orders, stock made for inventory is not aimed at delivering a particular quantity on a particular date (as in MTO); the objective is to ensure that the stocks of SKUs do not deplete below the levels needed to protect sales. Since buffer management was also based on an artificial due date, the colors of buffer management had no reliable meaning. Lack of visibility of the real priority also led to capacity and raw material stealing in a mixed environment. Therefore it was recognized that, since the intention is to protect stock, the urgency of work on the shop floor should be led by the existing availability in stock (not proximity to a due date). Hence the concept of stock buffers was introduced to manage MTA environments. Daily replenishment orders are released based on the previous day’s consumption, without any due date. Priority is indicated by the levels of finished goods already in stock (buffer penetration). Now, not only were companies able to replenish their inventory, they could also ensure that all items were reliably available. To verbalize this promise-to-market of making specific products perfectly available at a specific warehouse, Goldratt coined the concept of Make to Availability (MTA) as distinct from the existing practice of make-to-stock (in which availability cannot be assured) (Schragenheim, Dettmer, & Patterson, 2009).
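The stock-buffer priority logic can be sketched as below. The SKU names, buffer sizes and the three equal colour zones are illustrative assumptions; the key point from the text is that priority comes from stock depletion, not from any due date.

```python
def stock_buffer_colour(on_hand: int, buffer_level: int) -> str:
    """Priority from buffer penetration: the share of the stock buffer
    already consumed. Lower on-hand means deeper penetration and a
    hotter colour (assumed equal one-third zones)."""
    penetration = 1 - on_hand / buffer_level
    if penetration >= 2/3:
        return "red"
    if penetration >= 1/3:
        return "yellow"
    return "green"

def replenishment_order(yesterday_consumption: int) -> int:
    """MTA releases a daily replenishment equal to what was consumed,
    with no due date attached."""
    return yesterday_consumption

# Hypothetical SKUs: (on-hand units, stock buffer level).
skus = {"SKU-1": (90, 100), "SKU-2": (40, 100), "SKU-3": (10, 100)}
for sku, (on_hand, buffer_level) in skus.items():
    print(sku, stock_buffer_colour(on_hand, buffer_level))
```

In a mixed MTO/MTA plant, these colours and the shipping-buffer colours of MTO orders can sit in one priority list, which is what restores a reliable meaning to the buffer-management signals.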
By differentiating MTA operations from MTO, s-DBR made its most significant contribution to the evolution of DBR.
Wave 4: Enhanced s-DBR
When DBR and s-DBR were designed, implementation of the Theory of Constraints (TOC) was restricted to manufacturing plants, with the assumption that one has no control over the way due dates are quoted. But later, with TOC applied as a company-wide philosophy to create ever-flourishing companies, it became possible for operations to influence sales and vice versa. A further level of simplification also became possible, since due date commitments were no longer made in isolation.
Using Safe dates
s-DBR and DBR both assume that the sales department generates new orders and quotes delivery dates without consulting operations. But if the production team can provide sales with highly reliable “safe dates” for the orders the sales team might close next, then not only will lead times be low, the reliability would also be almost 100%, giving the company an edge in the market. Moreover, the load on the shop floor can be smoothened by encouraging sales during lean periods. This collaboration of operations and sales meant that the quotation of dates in the market was based on the constraint, while execution was based on the buffer management principles of s-DBR. This improvement in s-DBR ensured that the CCR is as busy as possible without being overloaded, while wait time is kept to a minimum (Schragenheim, Dettmer, & Patterson, 2009).
Wave 5: pull-DBR
All the above techniques helped improve flow. But in many environments, especially complex ones with varying product mix and shifting constraints, the problem of flow was not addressed completely.
Some opponents also criticized DBR as a ‘push system’ (one in which work is released without a feedback loop from the WIP in the system), with the disadvantages of the ‘push world’, since DBR depended on a fixed material release schedule (Hopp & Spearman, 2008). Even in the case of s-DBR, the release of work did not consider the WIP already in the system. Without WIP uniformity, increasing process variability at any workstation tends to increase cycle time at that station and propagate more variability to downstream stations (Hopp & Spearman, 2008). With high WIP, multiple orders very often went into ‘red’, and these could not be effectively expedited. Thus, while using this method, variability created scenarios where committed dates could not be met, especially when these dates came close to release horizons (Vector Consulting Group, 2015).
At the same time, the original pull method, the Toyota Production System (TPS), another manufacturing method built on the understanding that capacity cannot be finitely defined, was also running into trouble as the demands on manufacturing changed with changing consumer demand. With the success of Toyota as a company, TPS, also called Lean, enjoyed immense popularity (Womack, Jones, & Roos, 2007). TPS suggested that flow can be improved and inventory in the system reduced considerably with the help of Kanban cards, which set a limit on WIP at the workstation level. This system worked very well when products were made to stock (no due date performance required) and minimal variation in product mix and reasonable standardization could be ensured. With the help of Heijunka, which prescribes that every product be produced in every cycle, component demand can be smoothened across time periods, removing spikes and allowing the Kanban to operate smoothly (Ohno, 1988). But in MTO environments, or environments with large product variety and fluctuating loads, TPS cannot be implemented as effectively, since it is very difficult to smoothen demand.
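The Kanban WIP-limiting mechanism mentioned above can be illustrated with a toy sketch; the card count and job names are arbitrary assumptions, and a real kanban loop would of course involve physical cards or containers between two stations.

```python
from collections import deque

class KanbanStation:
    """Toy model of one workstation whose WIP is capped by kanban cards."""

    def __init__(self, cards: int):
        self.free_cards = cards      # each card authorizes one unit of WIP
        self.queue = deque()

    def release(self, job: str) -> bool:
        """Upstream may release a job only if a card is free."""
        if self.free_cards == 0:
            return False             # WIP cap reached: release is blocked
        self.free_cards -= 1
        self.queue.append(job)
        return True

    def complete(self) -> str:
        """Finishing a job returns its card, authorizing the next release."""
        job = self.queue.popleft()
        self.free_cards += 1
        return job

st = KanbanStation(cards=2)
print(st.release("J1"), st.release("J2"), st.release("J3"))  # third is blocked
st.complete()                # J1 finishes, freeing a card
print(st.release("J3"))      # now J3 can enter
```

The blocking behaviour is exactly what makes kanban a pull system: release is driven by completions downstream, not by a schedule drawn up in advance.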
By incorporating concepts from both worlds, i.e. push and pull systems, some practitioners created the pull version of DBR (p-DBR) to further enhance s-DBR. In this method, due dates and buffer management continue to be managed by “push” principles, while execution on the shop floor is based on “pull” concepts (Vector Consulting Group, 2015).
Dynamic release by Integrating “pull”
In p-DBR, the rope is tied to the planned load in such a way as to keep the WIP on the shop floor constant. For this, the planned release dates are preponed or postponed based on the variability affecting the process between the day of planning and the start of execution. When orders are processed sooner, new orders are released; when there are disruptions, release slows down so as not to flood the shop floor with WIP. This WIP control enables plants to exploit the constraint fully without the risk of overloading it.
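The dynamic-release idea can be sketched as follows. This is a deliberately simplified illustration: the rope here is tied to a raw job count, whereas the text above ties it to the planned load on the CCR; the target WIP and order names are assumptions.

```python
TARGET_WIP = 4  # assumed cap on jobs allowed on the floor at once

def release_orders(floor_wip: list[str], backlog: list[str]) -> list[str]:
    """Pull the next due orders from the backlog until the floor is back
    at its target WIP. Release thus speeds up when jobs finish early and
    slows down when disruptions hold WIP on the floor."""
    released = []
    while len(floor_wip) < TARGET_WIP and backlog:
        job = backlog.pop(0)     # backlog is kept in due-date priority
        floor_wip.append(job)
        released.append(job)
    return released

floor = ["O1", "O2"]             # two jobs already in process
backlog = ["O3", "O4", "O5"]
print(release_orders(floor, backlog))  # releases only up to the WIP cap
```

Running this check whenever a job completes (or a disruption is detected) effectively prepones or postpones release dates relative to the original plan, which is the behaviour described above.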
Scheduling orders using “push”
s-DBR had tried to resolve the disorder created by the urge to squeeze every bit of capacity out of the CCR by suggesting that some capacity be spared during planning as protection against uncertainty. But there is often a fear that this spared capacity might be wasted, and therefore in practice the actual protective capacity planned fluctuates, leading to the possibility of CCR overload. p-DBR allows protective capacity and due dates to be planned without this fear. With constant WIP, using dynamic release, the p-DBR model ensures that capacity is never idle in execution.
In addition to improved flow, the advantage of this method is that it allows orders due later to be pulled ahead and processed during seamless production cycles, as long as priority is not breached. This also creates slack that can be utilized for a future order that may run into a ‘Murphy’, supporting shorter lead times to market and simpler management of the process.
DBR represented a quantum improvement over MRP II, the most popular production planning and scheduling method of its time. Since its inception in the mid-1980s, DBR has given the organizations that adopted it a robust plan for the production floor that clearly defines the critical points in planning to ensure sustainability. Combined with effective buffer management to manage execution, DBR can increase production flow, decrease WIP and ensure shorter manufacturing cycles.
Gardiner, S. C., Blackstone, J., & Gardiner, L. (1994). The evolution of the Theory of Constraints. Industrial Management, May/June, 13-16.
Goldratt, E. M., & Fox, R. E. (1986). The Race. New York: North River Press.
Goldratt, E. (1988). Computerized Shop Floor Scheduling. International Journal of Production Research, 443-455.
Goldratt, E. (1990). The Haystack Syndrome: Sifting Information out of the Data Ocean. New York: North River Press.
Goldratt, E. (2009). Standing on the Shoulders of Giants. Retrieved June 25, 2015, from The Manufacturer: http://www.themanufacturer.com/uk/content/9280/Standing_on_the_shoulders_of_giants.
Goldratt, E., & Cox, J. (1993). The Goal. New York: North River Press.
Gupta, M. (2003). Constraints management: Recent advances and practices. International Journal of Production Research, 647-659.
Heizer, J. (1998). Determining responsibility for development of the moving assembly line. Journal of Management History, 94-103.
Hopp, W. J., & Spearman, M. L. (2008). Factory Physics. Singapore: McGraw Hill.
Jackson, G., & Low, J. (1993). The International Journal of Logistics Management, 41-48.
Kumar, V., Maheshwari, B., & Kumar, U. (2003). An investigation of critical management issues in ERP implementation: Empirical evidence from Canadian organizations. Technovation, 793-807.
Louw, L., & Page, D. C. (2004). Queuing network analysis approach for estimating the sizes of the time buffers in Theory of Constraints-controlled production systems. International Journal of Production Research, 42, 1207-1226.
Mohanty, S. (2013). Vector Consulting Group. Retrieved July 21, 2015, from Vector Consulting Group: https://www.vectorconsulting.in/insights/equipment-manufacturing/dealing_with_emperors_new_clothes.html
Ohno, T. (1988). Toyota Production System. Boca Raton, FL: CRC Press.
Ptak, C., & Smith, C. (2011). Orlicky's Material Requirements Planning. New York: McGraw Hill.
Ronen, B., & Spector, Y. (1992). Managing system constraints: A cost/utilization approach. International Journal of Production Research, 2045-2061.
Russell, G. R., & Fry, T. D. (1997). Order review/release and lot splitting in drum-buffer-rope. International Journal of Production Research, 35, 827-845.
Schragenheim, E. (2010). From DBR to Simplified DBR for Make to Order. In J. F. Cox III, & J. G. Schleier, Theory of Constraints Handbook (pp. 211-238). New York: McGraw Hill.
Schragenheim, E., Dettmer, H. W., & Patterson, J. W. (2009). Supply Chain Fulfillment at Warp Speed. Boca Raton: Auerbach Publications.
Schragenheim, E., & Ronen, B. (1991). Drum-Buffer-Rope shop floor control. Production and Inventory Management Journal, 74-79.
Simatupang, T. M., Wright, A. C., & Sridharan, R. (2004). Applying the theory of constraints to supply chain collaboration. Supply Chain Management: An International Journal, 9, 57-70.
Srikanth, M. L. (2010). DBR, Buffer Management, and VATI flow classification. In J. F. Cox III, & J. G. Schleier, Theory of Constraints Handbook (pp. 175-210). New York: McGraw Hill.
Stein, R. E. (1996). Reengineering the Manufacturing System: Applying the Theory of Constraints (TOC). New York: Marcel Dekker.
Sullivan, T. T., Reid, R. A., & Cartier, B. (Eds.). (2007). The TOCICO Dictionary. Theory of Constraints International Certification Organization. Available online at http://www.tocico.org/?page=dictionary.
Toomey, J. (1999). MRP II: Planning for Manufacturing Excellence. Boston: Kluwer Academic Publishers.
Vector Consulting Group. (2015). Apparent in Hindsight: From Chaos to Harmony in the Auto Industry. India: TV18 Broadcast Ltd.
Womack, J., Jones, D., & Roos, D. (2007). The Machine That Changed the World. New York: Simon & Schuster.