Business Improvement is a collection of disciplined change methodologies that help build a sufficient understanding of a situation to assure, with an adequate degree of confidence, that a change will deliver the expected level of benefits. This page aims to demystify improvement and examines the distinction between what is and is not “business improvement”.

A generalised method of improvement is represented by the sequence Define→Understand→Improve. First, the desired outcome (e.g. developing an opportunity, addressing a problem or doing something better) needs to be clearly defined. Then the organisational context surrounding the desired outcome is studied until an adequate level of understanding is reached. Finally, solutions are identified, evaluated, trialled and implemented.
Learning at any phase may prompt a return to a prior phase. For example, understanding the root cause of a problem might result in a revised, more specific definition of the desired outcome; similarly, implementing a change can create new insights and lead to the identification of further improvement opportunities relating to the desired outcome. A key difference from other approaches to change is that the ambiguity of a situation is accepted and examined before an intervention is chosen. This is only a surface-level definition of improvement, however. In addition to defining improvement further, this page addresses the question: “If I already know what the solution to my problem is, what value does all this ‘improvement’ stuff add?”
By What Name?
“Business Improvement”, “Quality Improvement”, “Process Improvement”, “Kaizen” and many more names have been used to describe similar activities, despite obvious semantic differences. Conversely, any one of these terms may be used to label activities that are not really “improvement” but undisciplined change. While it is relatively easy to agree on what “car painting” is, unfortunately the same cannot be said for organisational “improvement”; the labels and titles used are in themselves insufficient.
As Part of Business Excellence?
Where business excellence concepts provide general guidance, “improvement” is concerned with applying suitable intervention methods in specific situations. While closely related, the two should be considered separate bodies of knowledge; it is quite common to see improvement methodologies used to implement business change in environments that retain old modes of business thinking. This can still be of value, but with caveats: regardless of how well a methodology is executed, changes that are antisystemic or otherwise based on flawed thinking will cause disappointment, giving rise to many of the “70% of methodology-X initiatives fail!” stories out there. The problem is not that the methods are flawed but that, alone, they are insufficient; blind faith in any improvement methodology, without the willingness to challenge basic assumptions, will lead to more disappointment.
If the mere application of a methodology is insufficient then what else is needed? A change can be considered an “improvement” if it has these characteristics:
- It is proactive: Restoring something that ‘broke’ is not an improvement, it just returns things to where they were before. Improvement is a departure from the status quo.
- It applies a disciplined mindset: Whatever the business opportunity, potential solutions come to mind quickly; unfortunately, trapped by various cognitive biases, one can pursue these solutions to the bitter end without ever really questioning them — quite the contrary, one can become attached to and justify some really dubious solutions. The discipline to take a step back and examine the situation and its context can help to take the “gambling” out of change. An improvement is developed by taking the time to understand things and thereby having a rational basis for choosing a solution when the time comes.
- It achieves the right outcomes: Related to the disciplined mindset, improvement needs to be assessed against “what matters” to the business and its customers; this means avoiding convenient yet irrelevant proxy metrics or those that look better when a situation is actually getting worse. Assessment of effectiveness should be prioritised over that of efficiency, as it is usually quite futile to do the wrong thing more efficiently.
- It benefits the whole system: An improvement needs to be better for the whole organisation, not any single part. At a minimum, the potential consequences in other parts of the organisation need to be identified and addressed. With sufficient consideration of the system, an improvement has a higher likelihood of being sustainable.
Many aspects of improvement are dependent on the disciplined mindset. This is especially important as formal methodologies cannot enforce any mode of thinking. In the absence of this mindset, biases can lead to the belief that the approach to change being used is just fine, when it is actually causing significant harm. Thus, the practice of disciplined change is not just a matter of aesthetics.
A Disciplined Approach to Business Change: Science or Art?

Many would agree that there is an art to improvement; however, calling it an art would incorrectly imply that it is predominantly an intuitive craft. Insofar as we are interested in a disciplined approach to change, improvement might best be considered a science. While intuition is an important ingredient, many improvement practices add the rigor required to avoid common intuitive mistakes.
Just as academic science emphasises the importance of correctly linking cause with effect, so does business improvement. Central to this is the mental discipline of focusing on the goal or problem being addressed and not prematurely looking at potential solutions. This is essential, as improvement is a process of building understanding (i.e. of the business situation), not one of justifying a solution (e.g. analysis to support a foregone conclusion). The former builds confidence (with the desired level of rigor) that the eventual solution will have the desired impact; the latter is pure quackery. Analysis for justification can be rather insidious, as Taiichi Ohno observed: “When people want a certain machine badly, they will tend to do the calculations that suit them” [2013, p.129]. While this bias is most obvious when someone is dazzled by a solution, merely framing the analysis around a solution will introduce some bias. Staying focused on the goal or problem is therefore not a whimsical concern in improvement but a central and practical one. Notably, the discipline of defining a problem before solving it exists in many arenas of problem solving beyond what we would think of as business improvement methodologies.
The detailed process of improvement varies between methodologies and situations; however, it can be summarised with the three questions of API’s Model for Improvement:

- What are we trying to accomplish?
- How will we know that a change is an improvement?
- What changes can we make that will result in improvement?

Answering them in that sequence (with the possibility of backtracking when learning occurs) is, in essence, the discipline of improvement. First, the problem or desired outcome needs to be defined; this means focusing on understanding the situation before looking for solutions. Second, the method of evaluating a change, in terms of what is important to the business, needs to be decided up-front, as it is all too easy to show that a change is successful when it is evaluated only on its strengths. Finally, a solution that reflects what was learnt by answering the first two questions can be designed, tested and implemented. The level of rigor appropriate for the up-front investigative work, prior to considering solutions, is proportional to scale and risk. For relatively minor issues, all three questions may be answered immediately, followed by a rapid trial of an idea. For large-scale change, much more rigor is needed at each step.
The PDSA (Plan-Do-Study-Act) learning cycle illustrates how a series of planned experiments is often used to provide and revise answers to these questions. Each iteration begins with a PLAN to test existing understanding, which could be through observation, experimentation or analysis; it is important to consider not only what is to be done but also what one expects to learn by doing it. When one DOes what was planned, special attention needs to be paid to anything unexpected, as this is another source of learning. Once the plan is executed and initial analyses are complete, one needs to STUDY the findings and consider what was learnt: what questions were answered, which remain and what new ones have been raised. Finally, it is time to ACT on what was learnt, by either carrying out further investigative activities, moving towards testing and rolling out a change, or winding down the activity if no further improvements are worth pursuing at the time.
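As a rough illustration only, the iterative, question-driven nature of PDSA can be sketched as a loop. The function names, the stub "experiment" and the stopping condition below are hypothetical placeholders, not part of any formal methodology:

```python
def pdsa_cycle(open_questions, run_experiment, max_iterations=10):
    """Toy sketch of iterating PDSA until the open questions are answered.

    run_experiment(question) -> (answered: bool, new_questions: list)
    """
    learnings = []
    for _ in range(max_iterations):
        if not open_questions:
            break  # ACT: wind down; nothing further worth pursuing now
        # PLAN: pick a question and decide what we expect to learn
        question = open_questions.pop(0)
        # DO: carry out the planned observation, experiment or analysis
        answered, new_questions = run_experiment(question)
        # STUDY: record what was learnt and note newly raised questions
        learnings.append((question, answered))
        open_questions.extend(new_questions)
        # ACT: loop back and plan the next investigative activity
    return learnings

# A stub "experiment": answering the first question raises a follow-up,
# mirroring how one PDSA iteration often generates new questions.
def stub_experiment(question):
    if question == "Why are defects rising?":
        return True, ["Is the effect seasonal?"]
    return True, []

print(pdsa_cycle(["Why are defects rising?"], stub_experiment))
# [('Why are defects rising?', True), ('Is the effect seasonal?', True)]
```

The point of the sketch is the feedback path: studying one iteration's findings feeds new questions back into the plan for the next.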
How does one recognise solution-oriented thinking? If you can reasonably ask “Can we implement it?” then “it” is probably a solution; the answer to “What is ‘it’ supposed to address?” may then be the underlying aim that improvement should focus on. However, this simplistic rule of thumb conjures up a false dichotomy; solution-oriented thinking is not always so obvious and it takes some practice to catch the subtler aspects. For example, pre-emptive resource allocation can limit the range of solutions that have a chance of being considered, thus adversely influencing the choice of solution while not actually pre-selecting one.
Undisciplined change can have various consequences, many stemming from confirmation bias and inertia. As biases result in a warped view of reality, the true outcomes can go unseen:
| Visible outcome | Possible reality | Why this can go unnoticed | How to avoid this |
| --- | --- | --- | --- |
| Successful Project! ✓ | Solved a non-problem. ✘ | The solution is appealing and thus its value is not questioned. The project is evaluated with generic criteria such as on-time and on-budget. | Define the problem to be solved before thinking about the solution. Confirm the existence and impact of the alleged problem. |
| Solved a problem! ✓ | Solved the wrong problem. ✘ | A change is usually “successful” if evaluated based on criteria specific to the change itself. | Define the evaluation criteria before thinking about the solution. Question the relative importance of the problem. |
| Solved the problem! ✓ | Only addressed a symptom of the problem. ✘ | As above. | Allocate sufficient time to investigate the nature of a problem and its causes before thinking about the solution. |
| The project is profitable! ✓ | The total business impact is negative. ✘ | The direct impact of a change is more salient than any indirect effects, such as those on other parts of the business. | Be sure to consider the connections and interactions between various elements of the business when investigating a situation. |
| The project is profitable! ✓ | Money has been wasted. ✘ | A solution with a positive NPV is unlikely to be heavily scrutinised, yet simultaneously its incremental NPV (relative to alternative solutions, in this case those neglected from the analysis) can be negative. In other words, the extra “bells and whistles” of a more complex solution may not be worth the extra cost, disruption, lock-in, etc. | Actively look for and compare against simpler solutions before committing to one. |
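The incremental-NPV point can be made concrete with a small worked example. The discount rate and cash-flow figures below are hypothetical, chosen only to show how a solution can have a positive NPV in isolation yet a negative incremental NPV relative to a simpler, neglected alternative:

```python
def npv(rate, cashflows):
    """Net present value of a series of cash flows.
    cashflows[0] occurs now; cashflows[t] occurs after t periods."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

RATE = 0.10  # assumed discount rate per period

# Hypothetical "bells and whistles" solution: large outlay, large returns.
npv_complex = npv(RATE, [-100_000, 45_000, 45_000, 45_000])

# Hypothetical simpler solution: small outlay, slightly smaller returns.
npv_simple = npv(RATE, [-20_000, 38_000, 38_000, 38_000])

# The complex solution looks fine in isolation (positive NPV)...
print(f"Complex solution NPV: {npv_complex:,.0f}")
# ...but relative to the simpler alternative, the extra spend destroys
# value: the incremental NPV is negative.
print(f"Incremental NPV (complex - simple): {npv_complex - npv_simple:,.0f}")
```

A business case that never compares against the simpler option will report only the first number, which is why the waste goes unnoticed.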
There are a few issues here, mostly related to biases; the value of discipline is in preempting those biases and focusing attention on the business, not on the project. While popular improvement methodologies embody the discipline of defining a problem first (after all, that is what the D in Six Sigma’s DMAIC stands for), they cannot guarantee it; all the tools can be applied without discipline in the pursuit of a predetermined solution. In some cases, it is a matter of educating those who lead improvement projects, in other cases there are deeper organisational factors to consider. Either way, the need for the discipline to postpone thinking about solutions is relative to the scale of change; the longer a project takes, the more susceptible it is to groupthink and confirmation bias, so large-scale changes are in particular need of this discipline.
Discipline or Formalisation?
The “discipline” being discussed here is essentially self-discipline. Formalisation of change (e.g. controls, documentation, etc.) adds little or no discipline to change; in some cases it might even amplify confirmation bias. On the other hand, formalisation does help with the traceability of work done. The relationship between the two can be expressed in a simple matrix:
| | Low Discipline | High Discipline |
| --- | --- | --- |
| Low Formalisation | Pro: Fastest approach.<br>Con: May address a symptom instead of the underlying problem.<br>Appropriate for: Quick “just do it” opportunities with low risk. | Pro: Avoids bias and its consequences.<br>Con: Potential uncertainty about the exact impact of a change (i.e. if baseline findings are not recorded).<br>Appropriate for: Starting improvement projects, with a view to increasing formalisation if and when required. |
| High Formalisation | Pro: Detailed record of what was done and how resources were used.<br>Con: Risks inherent in a lack of discipline, e.g. the first plausible solution is justified (with copious analyses) and any others are likely to be ignored.<br>Appropriate for: Complex requirement-driven projects with no ambiguity about what needs to be implemented (e.g. construction). | Pro: Avoids bias and its consequences. Detailed analysis of the situation is recorded, which can be turned into a traditional business case if it turns out that significant investment is required.<br>Con: Time-consuming and thus somewhat wasteful when the identified solutions are relatively simple.<br>Appropriate for: Complex improvement projects (e.g. addressing cross-functional issues). |
An obvious question is whether “just do it” changes are improvements or not. This is a bit more nuanced than the 2×2 matrix shows and depends on whether those changes have “low” or “no” discipline. Look back at the three questions in API’s model for improvement; for a small change, just answering those questions can be enough discipline to ensure that it is an improvement.
Clearly there is a time and place for both formalisation and discipline. Insisting on either or both in all situations can be an excessive barrier to change, little different in effect from a fear of improvement.
Just Solving Problems? What About Innovation?
Although the word improvement generally implies changing that which already exists, the methodologies are applicable to the pursuit of a broader range of desired outcomes. The benefits of a disciplined approach extend to the development of new activities (e.g. new product offerings), helping to avoid the potential disappointment of quasi-random new ventures. By deferring thinking about solutions, possibilities are not prematurely eliminated; by considering the question “What are we trying to accomplish?”, one is reminded to focus on goals that fit in with the existing capabilities and mission of their business. Many of the same improvement tools are used for designing new activities, with some specific additions such as QFD (quality function deployment).
Innovation plugs right into a disciplined change approach, as a means of developing solutions. For an existing offering, the gap between understanding the problem and choosing a solution is an arena for innovation, that is, innovation targeted at specific issues the business is experiencing. On the other hand, if the desired outcome is a new product or service, the learning in the understand phase should include market research to identify key design criteria for the new offering; again, these criteria are inputs for targeted innovation to develop solutions. In both cases, innovation is clearly linked to business outcomes. This contrasts with serendipitous innovation, which is also important but harder to plan for.
Observe how you think about change in your business and see if you can catch yourself in the act of jumping to a solution.
Also, take a moment to think about how a “Don’t bring me problems, bring me solutions” mentality hinders improvement. How open are you to the discussion of problems?
(If you need some help with disciplined change and improvement in your business then let us know)
1. ^ “Business Improvement” is usually my preferred term, simply because it encompasses all the activities within a business. Where a change needs to be made is unknown at the start of an improvement project, so starting with an unrestricted scope is most appropriate.
“Kaizen” (改善) is simply the Japanese word for “Improvement” but is often used to mean “Business Improvement” as discussed here. Fair enough, but do we really need to use foreign words in an appeal to authority?
“Process Improvement” has an obvious presumption built in. While process analysis is a very common activity in improvement, I cannot see the rationale of limiting improvement to only processes; what about all those problems that have other causes (e.g. the glue doesn’t stick because the factory is too hot)? This narrow focus is one of several ways that people attempt business improvement with only a subset of the required knowledge. One needs to remember that process analysis is mostly reductionist in nature (taking a process and looking at the steps which comprise it) and thus often insufficient without some systems analysis to examine the broader context.
A second, less obvious issue with “Process Improvement” is that the label is often used for activities drastically different from the “Improvement” discussed on this page. These differences seem to stem from history. While most flavours of “Improvement” have clear roots in business excellence and related concepts, some varieties of “Process Improvement” have none, instead being descendants of Business Process Re-engineering — the idea that organisations should throw away what they do and redesign activities from scratch. To be fair, there is some logic behind this approach in the original context; introducing large scale IT changes often does mean that significant parts of old processes become inappropriate or irrelevant. An example of the silliness Hammer was trying to address would be: printing out a form, signing it and scanning it back into a computer. On the other hand, the application of BPR would typically violate the discipline of improvement by pre-selecting a problem space and therefore limiting solution options. It was a disastrous change approach in practice for other reasons (none of which are surprising if we evaluate it with Deming’s SoPK), as discussed in this article from 1995:
“Quality Improvement” is another term that presumes a problem space. While that might be reasonable, as work on customer oriented (i.e. quality) issues should make up the majority of improvement efforts, surely it is not 100% of what needs to be improved.
2. ^ There is an abundance of articles about various improvement methodologies failing. A few examples:
Of the academic writings mentioning business improvement that I have read, many would best be described as “drive-by research”: a shallow, descriptive account of observed methodology, minimal consideration of the origins of the ideas underpinning the methods, absurd conflations, and treatment of any labelled methodology as a globally uniform practice. Academic researchers should heed the advice of psychologist Bob Grice to “always handle your own rat”, that is, get close to the phenomena being studied. Simply surveying CEOs about whether they ‘have a continuous improvement strategy?’ (yes or no), followed by copious data mangling to “prove” some point, is not anywhere near close enough. Granted, I’m recalling some worst-case examples, and anyone taking a cursory look at the business improvement field is likely to fall into the same traps. For those who stick around long enough, it becomes evident that there is actually very little uniformity of methods and terms, and vast differences between well and poorly executed attempts at improvement, despite very similar tools being used. It might look like a duck and quack like a duck, but actually be a rabbit.
Of practitioner articles, some make excuses for methodologies, others enumerate various explanations for why the initiatives fail. Leadership is a commonly identified prerequisite for successful improvement; a good article by Davis Balestracci on the topic can be found here:
The paradigm of thinking within organisations is, I believe, a key factor in whether the benefits of improvement, or most other “strategic” initiatives for that matter, are sustained or eroded back to mediocrity.
3. ^ API’s model for improvement is perhaps the most concise definition of “improvement”. Whether it is Define→Measure→Analyse→Improve→Control or simply Define→Understand→Improve, most improvement methodologies are in essence answering the three questions in sequence.
4. ^ Some decisions that are frequently, and incorrectly, made up-front in improvement initiatives, such as budgets and timeframes, come from traditional implementation project management. In the context of, say, building a bridge, they are quite valid constraints, however, improvement is not analogous to implementation project management. If we think of improvement in terms of the Define→Understand→Improve model, then implementation project management would be a part of the “Improve” phase, from the point at which a solution is chosen. If budgets or timeframes are specified at an earlier point, fewer possibilities will be seen due to inattentional blindness. We are already subject to many cognitive biases, so why voluntarily introduce more?
To illustrate, let’s say that, based on the visible symptoms of the problem, a budget of $10,000 and a timeframe of 2 months are set. First, note that these are quite arbitrary before the “Understand” phase, and a sensible person would leave them open to revision. Even so, they will have a detrimental impact on the initiative. What about the $500 solution that addresses 40% of the problem in 2 days? Unlikely to be noticed. What about addressing several related problems across the business? Sure, but not if that requires $150,000 and 12 months. In short, setting a budget and timeframe sounds like good practice but is best deferred until a sufficient understanding of the situation has been gained. If the first solution ideas that then emerge are too expensive, then and only then should leaders prompt those involved to look for more frugal alternatives.
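The arithmetic behind the illustration is worth making explicit. Using the hypothetical figures above (and assuming, for the sake of the comparison, that the budgeted solution fixes the whole problem), the overlooked $500 option is several times more cost-effective per percentage point of the problem addressed:

```python
# Hypothetical options from the illustration: a $10,000 budget is set up-front,
# while a $500 option addressing 40% of the problem goes unnoticed.
budgeted = {"cost": 10_000, "share_addressed": 1.00}  # assumed full coverage
quick    = {"cost": 500,    "share_addressed": 0.40}

cost_per_point_budgeted = budgeted["cost"] / (budgeted["share_addressed"] * 100)
cost_per_point_quick = quick["cost"] / (quick["share_addressed"] * 100)

print(f"Budgeted solution: ${cost_per_point_budgeted:.2f} per percentage point")
print(f"Quick $500 fix:    ${cost_per_point_quick:.2f} per percentage point")
```

The quick fix works out at $12.50 per percentage point against $100 for the budgeted solution; a pre-set $10,000 frame makes the cheaper option effectively invisible, which is exactly the inattentional-blindness effect described above.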
As a side-note, the above could make for an interesting clinical psychology experiment; what are the thresholds below or above a project budget where viable solutions become “invisible”? I’m not aware of prior studies like this, as most inattentional blindness research to-date has dealt with visual inattention.
5. ^ Of the many organisational factors that can inhibit disciplined change, three come to mind. The first is cultural norms, or more specifically, managerial desire to avoid the ambiguity of problems, as reflected by the phrase “Don’t bring me problems, bring me solutions”. This norm essentially implies that if employees see a problem but are not looking for the solution yet, they are not doing their job… Thus, rather than an understanding of the problem, employees in organisations with such norms will fixate on the first plausible solutions that come to mind. These solutions are likely to address non-problems or symptoms (as discussed on this page), which is further compounded because such concerns are not salient when the emphasis is on finding solutions. This is harmful for many other reasons, as essentially anything that doesn’t have an obvious solution is not talked about; a recipe for disaster. Read more about it here:
The second factor also relates to cultural norms and power. For practical reasons management has the last word in any non-trivial change, which is unavoidable (at least until someone solves the problem of making adhocracy scalable). However, if their first word is also the de facto last word, it becomes a barrier to disciplined change and improvement. This occurs when a manager, with all the best intentions, states something like: “Ok team, we need to SOLUTION”. When this happens, what can the team do but focus on the solution? Perhaps all is not lost if the team has the skills to manage upwards but they certainly aren’t off to a good start. Realistically, disciplined change in an organisation depends significantly on good leadership.
Third is the quantification of individual performance with relation to projects, which can have various implications:
- Emphasis on delivering more projects: less time spent understanding the situation (conversely, the reduced “quality” of projects is harder to quantify).
- Emphasis on managing large budgets: avoidance of simpler solutions (related to footnote 4).
- Emphasis on ROI: distorted success criteria, with limited consideration of negative impacts.
As with anything, if the focus is on making the numbers look good, then that is what people will work towards; any undesirable side effects should not be surprising.
6. ^ Anecdotally, large-scale change programmes often seem to put pre-solution investigative rigor in the “not needed” bucket, based on the belief that all they need to do is deliver on some kind of “vision” statement. Having a vision for the future is all well and good, but when it is used as an excuse for plunging straight into making changes without examining the underlying assumptions, we should not be surprised to hear of another billion-dollar failure.
It is not clear why this happens, but I believe that a key factor is the delegation of change-related work in organisations; the initial “fragments” of a change decision received by a project team are not questioned, and the source is too detached from the project to revise them. This is the basis of one of the hypotheses in my current research: that delegated decisions are biased by a socially constructed “Assumption of Rightness”, much like Valerie Thompson et al.’s findings that a meta-cognitive “Feeling of Rightness” can bias the decisions of individuals:
Thompson, V.A., Turner, J.A., Pennycook, G., Ball, L.J., Brack, H., Ophir, Y. & Ackerman, R. 2013, ‘The role of answer fluency and perceptual fluency as metacognitive cues for initiating analytic thinking’, Cognition, vol. 128, no. 2, pp. 237-51.
7. ^ Creating new offerings for the sake of having new offerings can be a recipe for trouble. It should be obvious that expanding a business into new activities needs a stronger rationale than “bigger is better” but sometimes that seems to be the only reason. I’m reminded of the case of Southcorp, an Australian alcoholic beverages company that somehow ended up owning an American water heater business!
Zalan, T. 2007, ‘Southcorp Limited: winemaker’s winding road’, in C.W.L. Hill, G.R. Jones, P. Galvin & A. Haidar (eds), Strategic Management, An Integrated Approach, 2nd Australasian Edition edn, Wiley, Milton QLD, pp. C80-91.
Had the question “What are we trying to accomplish?” been asked, the oddity of that acquisition would have been obvious, and the fortunes of the company might have been different.
8. ^ While predicting serendipitous innovation might be impossible, establishing an environment that makes it more likely is not. One of the less-tangible benefits of pursuing business excellence is in the new opportunities that employees identify; conversely a command, control and fear approach, or a focus on maximising the “utilisation” of employees like machines, all but eliminates the possibility of innovation.
Innovation within discrete improvement projects, as discussed on this page, might be a little less sensitive to organisational factors. People generally enjoy being creative, so unless they are in a seriously toxic environment, they will offer ideas if given the opportunity to do so. Which type of innovation is “better”, I will leave for others to debate; if we can get both that’s probably a good thing.