Process Owners

Process owners are a fundamental and visible part of building a process-oriented organization and are crucial to making that organization effective. As the champion of a process, they take overall responsibility for process performance and coordinate all the interfaces in cross-functional processes.

Being a process owner should be a critical part of a person’s job, so they can shepherd the evolution of processes, keep the organization moving forward, and prevent reversion to less effective ways of working.

The Process Owner’s Role

The process owner plays a fundamental role in managing the interfaces between key processes, with the objective of preventing silos from forming along the horizontal, cross-functional flow. They have overall responsibility for the performance of the end-to-end process, using metrics to track, measure, and monitor its status and to drive continuous improvement initiatives. Process owners also ensure that staff are adequately trained and allocated to processes. Because this can create conflicts between process owners, teams, and functional management, it is critical that process owners operate within a wider community of practice with appropriate governance and senior leadership support.

Process owners are accountable for designing processes, managing them day to day, and fostering process-related learning.

Process owners must ensure that process staff are trained in both organizational knowledge and process knowledge. To support that training, processes, standards, and procedures should be documented, maintained, and reviewed regularly.

Process owners should be supported by the right infrastructure. You cannot be the subject matter expert on an end-to-end process, provide governance, and drive improvement while also being expected to be a world-class technical writer, training developer, and technology implementer. The process owner leads and sets the direction for those activities.

The process owner sits in a central role as we build culture and drive for maturity.

Sensemaking, Foresight and Risk Management

I love the power of Karl Weick’s future-oriented sensemaking – thinking in the future perfect tense – for giving us a framework to imagine the future as if it has already occurred. We do not spend enough time being forward-looking and shaping the interpretation of future events. Yet when you think about it, quality is essentially about using existing knowledge of the past to project a desired future.

This making sense of uncertainty – which should be part of every manager’s daily routine – is another name for foresight. Foresight can be used as a discipline to help our organizations look into the future, with the aim of understanding and analyzing possible future developments and challenges and of supporting actors in actively shaping the future.

Sensemaking is mostly used as a retrospective process: we look back at action that has already taken place. While Weick acknowledged that people’s actions may be guided by future-oriented thoughts, he nevertheless asserted that the understanding derived from sensemaking occurs only after the fact, foregrounding the retrospective quality of sensemaking even when imagining the future.

“When one imagines the steps in a history that will realize an outcome, then there is more likelihood that one or more of these steps will have been performed before and will evoke past experiences that are similar to the experience that is imagined in the future perfect tense.”

R.B. MacKay went further in a fascinating way by considering the role that counterfactual and prefactual processes play in future-oriented sensemaking. He finds that sensemaking processes can be prospective when they include prefactual “what ifs” about the past and the future. A whole line of thought stems from this, treating the meaning of the past as never static but always in a state of change.

Foresight concerns interpretation and understanding while simultaneously being a process of thinking about the future in order to improve preparedness. In seeking to understand uncertainty, reduce unknown unknowns, and drive toward a desired future state, it is at heart knowledge management fueling risk management.

Do Not Ignore Metaphor

A powerful tool in this reasoning about, imagining, and planning of the future is metaphor. Now I’m a huge fan of metaphor, though some may argue I make up horrible ones – I think my entire team is sick of the milk truck metaphor by now – but this underutilized tool can be incredibly powerful as we build stories of how things will be.

Think about phrases such as “had gone through”, “had been through” and “up to that point”: commonly used metaphors that frame emotional experience as physical movement, a journey from one point to another. Then consider how much that set of journey metaphors shapes our thinking about process improvement.

Entire careers have been built on questioning the heavy use of sport and war metaphors in business thought and how they shape us. I don’t even watch sports, and I still find myself constantly using them as shorthand.

To make sense of the future, find a plausible answer to the question ‘what is the story?’ This brings a balance between thinking and acting and allows us to see the future more clearly.

Bibliography

  • Cornelissen, J.P. (2012), “Sensemaking under pressure: the influence of professional roles and social accountability on the creation of sense”, Organization Science, Vol. 23 No. 1, pp. 118-137, doi: 10.1287/orsc.1100.0640.
  • Greenberg, D. (1995), “Blue versus gray: a metaphor constraining sensemaking around a restructuring”, Group and Organization Management, Vol. 20 No. 2, pp. 183-209, doi: 10.1177/1059601195202007.
  • Luscher, L.S. and Lewis, M.W. (2008), “Organizational change and managerial sensemaking: working through paradox”, Academy of Management Journal, Vol. 51 No. 2, pp. 221-240, doi: 10.2307/20159506.
  • MacKay, R.B. (2009), “Strategic foresight: counterfactual and prospective sensemaking in enacted environments”, in Costanzo, L.A. and MacKay, R.B. (Eds), Handbook of Research on Strategy and Foresight, Edward Elgar, Cheltenham, pp. 90-112, doi: 10.4337/9781848447271.00011
  • Tapinos, E. and Pyper, N. (2018), “Forward looking analysis: investigating how individuals “do” foresight and make sense of the future”, Technological Forecasting and Social Change, Vol. 126 No. 1, pp. 292-302, doi: 10.1016/j.techfore.2017.04.025.
  • Weick, K.E. (1979), The Social Psychology of Organizing, McGraw-Hill, New York, NY.
  • Weick, K.E. (1995), Sensemaking in Organizations, Sage, Thousand Oaks, CA.

What prevents us from improving systems?

Improvement is a process, and sometimes it can feel like a one-step-forward-two-steps-back sort of shuffle. And just like any dance, knowing the steps to avoid can be critical. Here are some important ones to consider. In many ways they can be thought of as an onion: we systematically address one layer of the problem and then work our way to the next.

Human-error-as-cause

The vague, ambiguous, and poorly defined bucket concept called human error is just a mess. Human error is never the root cause; it is a category, an output that needs to be understood. Why did the human error occur? Was it because the technology was difficult to use, or because the procedure was confusing? Those answers are actionable—you can address them with a corrective action.

The only action you can take when you say “human error” is to get rid of the people. As an explanation, the concept is widely misused and abused.

Human performance instead of human error

Options for moving beyond “human error,” contrasting the person approach with the system approach:

  • Focus: the person approach focuses on errors and violations; the system approach starts from the premise that humans are fallible and errors are to be expected.
  • Presumed cause: the person approach blames forgetfulness, inattention, carelessness, and negligence; the system approach looks to “upstream” failures, error traps, and the organizational failures that contribute to them.
  • Countermeasure to apply: the person approach relies on fear, more and longer procedures, retraining, disciplinary measures, and shaming; the system approach establishes system defenses and barriers.

Human error has been a focus for a long time, and many companies have been building programmatic approaches to avoiding this pitfall. But we still have others to grapple with.

Causal Chains

We like to build domino cascades that imply a linear ordering of cause and effect – look no further than the ubiquitous 5 Whys. Causal chains force people to think about complex systems by reducing them, when what we often need is to grapple with systems’ tendencies toward non-linearity, transience of influence, and emergence.

This is where taking risk into consideration and having robust, adaptive problem-solving techniques is critical. Approach everything like a simple problem and nothing will ever get fixed. Similarly, if every problem is treated as needing a full-scale approach, you are paralyzed. As we mature we need a mindset that recognizes different types of problems and the ability to differentiate and move easily between them.

Root cause(s)

We remove human error and stop over-relying on causal chains – the next layer of the onion is to take a hard look at the concept of a root cause. The idea of a root cause “that, if removed, prevents recurrence” is pretty nonsensical. Novice practitioners of root cause analysis usually run right into the problem when they ask, “How do I know I reached the root cause?” The oft-used stopping point of “a cause that management can control” is, quite frankly, fairly absurd. The concept encourages the idea of a single root cause, ignoring multiple, jointly necessary, contributory causes, let alone causal loops and emergent, synergistic, or holistic effects. The idea of a root cause is just an efficiency-thoroughness trade-off, and we are better off understanding that and applying risk thinking when balancing thoroughness against efficiency and resource constraints.

In conclusion

Our problem solving needs to drive out monolithic explanations, which act as proxies for real understanding: big ideas wrapped in simple labels. These labels are ill-defined and come in and out of fashion – poor or lacking quality culture, lack of process, human error – and they tend to offer some reassurance while allowing the problem to be passed on and ‘managed’, for instance via training or “transformations”. And yes, maybe there is some irony in that I tend to think of the problems of problem solving in light of these ways of problem solving.

Pandemics and the failure to think systematically

As it turns out, the reality-based, science-friendly communities and information sources many of us depend on also largely failed. We had time to prepare for this pandemic at the state, local, and household level, even if the government was terribly lagging, but we squandered it because of widespread asystemic thinking: the inability to think about complex systems and their dynamics. We faltered because of our failure to consider risk in its full context, especially when dealing with coupled risk—when multiple things can go wrong together. We were hampered by our inability to think about second- and third-order effects and by our susceptibility to scientism—the false comfort of assuming that numbers and percentages give us a solid empirical basis. We failed to understand that complex systems defy simplistic reductionism.

Zeynep Tufekci, “What Really Doomed America’s Coronavirus Response”, published 24-Mar-2020 in The Atlantic

An on-point analysis. It hits many of the themes of this blog, including systems thinking, complexity, and risk, and it makes some excellent points that all of us in quality should be thinking deeply about.

COVID-19 is not a black swan. Pandemics like this have been well predicted. This event reflects a different set of failures, ones that, on a hopefully smaller scale, most of us are unfortunately familiar with in our organizations.

I certainly didn’t break out of the mainstream narrative. I traveled in February, went to a conference and then held a small event on the 29th.

The article stresses the importance of considering the trade-offs between resilience, efficiency, and redundancy within the system, and how the second- and third-order impacts can reverberate. It’s well worth reading for the analysis of the growth of COVID-19, and more importantly our reaction to it, from a systems perspective.

Probing Unknown Unknowns

In the post “Risk Management is about reducing uncertainty,” I discussed ignorance and surprise, covering the idea of “unknown unknowns”, those things that we don’t even know that we don’t know.

Our goal should always be to reduce ignorance. Many unknown unknowns are just things no one has bothered to find out. What we need to do is ensure our processes and systems are constructed so that they recognize unknowns.

There are six factors that need to be explored to find the unknown unknowns.

  1. Complexity: A complex process/system/project contains many interacting elements that increase the variety of its possible behaviors and results. Complexity increases with the number, variety, and lack of robustness of the elements of the process, system or project.
  2. Complicatedness: A complicated process/system/project involves many potential points of failure. How complicated it is also depends on how easily the necessary elements and cause-and-effect relationships can be identified, and on the aptitudes and experience of the experts and participants.
  3. Dynamism: The volatility or the propensity of elements and relationships to change.
  4. Equivocality: Knowledge management is a critical enabler of product and project life cycle management. If the information is not crisp and specific, then the people who receive it will be equivocal and won’t be able to make firm decisions. Although imprecise information itself can be a known unknown, equivocality increases both complexity and complicatedness. 
  5. Perceptive barriers: Mindlessness. This factor includes a lot of our biases, including an over-reliance on past experiences and traditions, the inability to detect weak signals and ignoring input that is inconvenient or unappealing.
  6. Organizational pathologies: Organizations have problems, and cultures can have weaknesses. These structural weaknesses allow unknown unknowns to remain hidden.

Interrogating Knowable Unknown Unknowns

The way to address these six factors is to evaluate and challenge by using the following approaches:

Interviewing

Interviews with stakeholders, subject matter experts, and other participants can be effective tools for uncovering lurking problems and issues. Interviewers need to be careful not to be too enthusiastic about the projects they’re examining and to avoid asking yes-or-no questions. The best interviews probe deep and wide.

Build Knowledge by Decomposing the System/Process/Project

Standard root cause analysis tools apply here: break the system down and interrogate all of its sub-elements (a rough illustrative sketch follows the list below).

  1. Identifying the goals, context, activities and cause-effect relationships
  2. Breaking the domains into smaller elements — such as processes, tasks and stakeholders
  3. Examining the complexity and uncertainty of each element to identify the major risks (known unknowns) that need managing and the knowledge gaps that point to areas of potential unknown unknowns.
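
As a purely illustrative sketch – the element names, scoring scale, and “exposure” heuristic below are my own assumptions, not a standard method – this kind of decomposition can be made explicit in a few lines of Python so that knowledge gaps are visible rather than implicit:

```python
# Hypothetical sketch: decompose a process into elements, score each one for
# complexity and for our confidence in our knowledge of it, and flag the
# weakest spots as areas to probe for unknown unknowns.
# Element names and the 1-5 scales are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Element:
    name: str
    complexity: int   # 1 (simple) to 5 (highly complex)
    knowledge: int    # 1 (poorly understood) to 5 (well understood)

    @property
    def exposure(self) -> int:
        # Crude heuristic: complex elements we understand poorly deserve probing.
        return self.complexity * (6 - self.knowledge)


process = [
    Element("raw material receipt", complexity=2, knowledge=5),
    Element("aseptic filling", complexity=5, knowledge=3),
    Element("contract lab data transfer", complexity=3, knowledge=2),
]

# Rank elements by exposure; the top of the list marks the knowledge gaps.
for e in sorted(process, key=lambda e: e.exposure, reverse=True):
    print(f"{e.name}: exposure={e.exposure}")
```

The arithmetic is not the point; the value is in forcing each element’s complexity, and our confidence in our knowledge of it, to be stated explicitly and reviewed.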

Analyze Scenarios

Construct several different future outlooks and test them out (mock exercises are great). This approach accepts uncertainty, tries to understand it, and builds it into your knowledge base and reasoning. Rather than being predictions, scenarios are coherent and credible alternative futures built on dynamic events and conditions that are subject to change.

Communicate Frequently and Effectively

Regularly and systematically reviewing decision-making and communication processes, including the assumptions that feed into them, and seeking to remove information asymmetries can help to anticipate and uncover unknown unknowns. Management review is part of this, but not the only component. Effective and frequent communication is essential for adaptability and agility. However, this doesn’t necessarily mean communicating large volumes of information, which can cause information overload. Rather, the key is knowing how to reach the right people at the right times. Some important aspects include:

  • Candor: Timely and honest communication of missteps, anomalies and missing competencies. Offer incentives for candor to show people that there are advantages to owning up to errors or mistakes in time for management to take action. It is imperative to eliminate any perverse incentives that induce people to ignore emerging risks.
  • Cultivate an Alert Culture: A core part of a quality culture should be an alert culture made up of people who strive to illuminate rather than hide potential problems. Alertness is built by: 1) emphasizing systems thinking; 2) seeking to include and build a wide range of experiential expertise — intuitions, subtle understandings, and finely honed reflexes gained through years of intimate interaction with a particular natural, social or technological system; and 3) learning from surprising outcomes.

By working to evaluate and challenge – to truly understand our systems and processes – we make our risk management activities more effective and our systems genuinely more resilient.

Recommended Reading