Structured What-If Technique as a Risk Assessment Tool

The structured what-if technique, SWIFT, is a high-level and less formal risk identification technique that can be used independently, or as part of a staged approach to make bottom-up methods such as FMEA more efficient. SWIFT uses structured brainstorming in a facilitated workshop where a predetermined set of guidewords (timing, amount, etc.) are combined with prompts elicited from participants that often begin with phrases such as “what if?” or “how could?”.

At the heart of a SWIFT is a list of guidewords to enable a comprehensive review of risks or sources of risk. At the start of the workshop the context, scope and purpose of the SWIFT is discussed and criteria for success articulated. Using the guidewords and “what if?” prompts, the facilitator asks the participants to raise and discuss issues such as:

  • known risks
  • risk sources and drivers
  • previous experience, successes and incidents
  • known and existing controls
  • regulatory requirements and constraints

The list of guidewords is utilized by the facilitator to monitor the discussion and to suggest additional issues and scenarios for the team to discuss. The team considers whether controls are adequate and if not considers potential treatments. During this discussion, further “what if?” questions are posed.

Often the list of risks generated can be used to fuel a qualitative or semi-quantitative risk assessment method, such as FMEA.

A SWIFT analysis allows participants to look at the system's response to problems rather than just examining the consequences of component failure. As such, it can be used to identify opportunities to improve processes and systems, and to identify actions that increase their probability of success.

What-If Analysis

What-If Analysis is a structured brainstorming method of determining what can go wrong and judging the likelihood and consequences of those situations occurring. The answers to these questions form the basis for making judgments about the acceptability of those risks and determining a recommended course of action for risks judged unacceptable. An experienced review team can effectively and productively discern major issues concerning a process or system. Led by an energetic and focused facilitator, each member of the review team participates in assessing what can go wrong based on their past experiences and knowledge of similar situations.

| What If? | Answer | Likelihood | Severity | Recommendations |
| --- | --- | --- | --- | --- |
| What could go wrong? | What would happen if it did? | How likely is it? | What are the consequences? | What will we do about them? (prevent and monitor) |
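As a minimal sketch, a row of the worksheet above can be captured as a data structure with a simple likelihood-times-severity score. The field names and the 1-to-5 scales are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

# One What-If worksheet row; the 1-to-5 scales are illustrative assumptions.
@dataclass
class WhatIfRow:
    question: str        # What could go wrong?
    answer: str          # What would happen if it did?
    likelihood: int      # 1 (rare) to 5 (almost certain)
    severity: int        # 1 (negligible) to 5 (catastrophic)
    recommendation: str  # What will we do about it? (prevent and monitor)

    def risk_score(self) -> int:
        # Simple likelihood x severity product, as in many
        # semi-quantitative scoring schemes
        return self.likelihood * self.severity

row = WhatIfRow(
    question="What if the sample is delivered at a shift change?",
    answer="The sample may sit unlogged in ambient conditions.",
    likelihood=3,
    severity=4,
    recommendation="Add a hand-off checklist covering in-transit samples.",
)
print(row.risk_score())  # 12
```

The product score is one common convention; organizations that use a risk matrix instead can swap the scoring method without changing the worksheet structure.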

Steps in a SWIFT Analysis

  1. Prepare the guide words: The facilitator should select a set of guide words to be used in the SWIFT.
  2. Assemble the team: Select participants for the SWIFT workshop based on their knowledge of the system/process being assessed and the degree to which they represent the full range of stakeholder groups.
  3. Background: Describe the trigger for the SWIFT (e.g., a regulatory change, an adverse event, etc.).
  4. Articulate the purpose: Clearly explain the purpose to be served by the SWIFT (e.g., to improve effectiveness of the process).
  5. Define the requirements: Articulate the criteria for success.
  6. Describe the system: Provide appropriate-level textual and graphical descriptions of the system or process to be risk assessed. A clear understanding is necessary and can be established through interviews, assembling a multifunctional team, and studying documents, plans, and other records.
  7. Identify the risks/hazards: This is where the structured what-if technique is applied. Use the guide words/headings with each system, high-level subsystem, or process step in turn. Participants should use prompts starting with phrases like “What if…” or “How could…” to elicit potential risks/hazards associated with the guide word. For instance, if the process is “Receipt of samples,” and the guide word is “time, timing or speed,” prompts might include: “What if the sample is delivered at a shift change?” (wrong time) or “How could the sample be left waiting too long in ambient conditions?” (wrong timing).
  8. Assess the risks: With the use of either a generic approach or a supporting risk analysis technique, estimate the risk associated with the identified hazards. In light of existing controls, assess the likelihood that they could lead to harm and the severity of harm they might cause. Evaluate the acceptability of these risk levels, and identify any aspects of the system that may require more detailed risk identification and analysis.
  9. Propose actions: Propose risk control action plans to reduce the identified risks to an acceptable level.
  10. Review the process: Determine whether the SWIFT met its objectives, or whether a more detailed risk assessment is required for some parts of the system.
  11. Document: Produce an overview document to communicate the results of the SWIFT.
  12. Additional risk assessment: Conduct additional risk assessments using more detailed or quantitative techniques, if required. SWIFT is particularly effective as a filtering mechanism that focuses effort on the areas of greatest value.
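The identification and filtering steps above (steps 7 and 12) can be sketched in a few lines: guidewords crossed with process steps seed the “What if…?” prompts, and a score threshold filters which risks go on to more detailed assessment. The guidewords, process steps, and threshold here are illustrative assumptions.

```python
from itertools import product

# Illustrative guidewords and process steps, chosen to mirror the
# sample-receipt example; both lists are assumptions.
guidewords = ["wrong time, timing or speed", "wrong amount", "failure: equipment"]
process_steps = ["Receipt of samples", "Sample log-in"]

def swift_prompts(steps, guidewords):
    """Step 7: cross each process step with each guideword to seed the
    facilitator's 'What if...?' prompts."""
    return [f'What if "{step}" goes wrong with respect to "{gw}"?'
            for step, gw in product(steps, guidewords)]

def needs_detailed_assessment(scored_risks, threshold=12):
    """Step 12: use SWIFT as a filter, passing only higher-scored risks on
    to a more detailed technique such as FMEA. Threshold is an assumption."""
    return [risk for risk, score in scored_risks if score >= threshold]

prompts = swift_prompts(process_steps, guidewords)
print(len(prompts))  # 2 steps x 3 guidewords = 6 prompts
```

In a real workshop the prompts are a starting point for discussion, not an exhaustive enumeration; the cross-product simply guarantees every step is examined against every guideword.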

Guideword Examples

The facilitator and process owner can choose any guide words that seem appropriate. Guidewords usually center on:

  • Wrong: Person or people
  • Wrong: Place, location, site, or environment
  • Wrong: Thing or things
  • Wrong: Idea, information, or understanding
  • Wrong: Time, timing, or speed
  • Wrong: Process
  • Wrong: Amount
  • Failure: Control or Detection
  • Failure: Equipment

If your organization has invested time to create root cause categories and sub-categories, the guidewords can easily start there.
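As a sketch of that starting point, an existing root cause taxonomy can be flattened into a workshop guideword list. The category names and the mapping here are illustrative assumptions, not a standard taxonomy.

```python
# Hypothetical root cause categories mapped to SWIFT guidewords;
# both the categories and the mapping are illustrative assumptions.
root_cause_categories = {
    "Human": ["wrong person or people"],
    "Method": ["wrong process", "wrong time, timing or speed"],
    "Machine": ["failure: equipment", "failure: control or detection"],
    "Material": ["wrong thing or things", "wrong amount"],
}

def guideword_list(categories):
    """Flatten the category map into a single guideword list for the workshop."""
    return [gw for gws in categories.values() for gw in gws]

print(len(guideword_list(root_cause_categories)))  # 7
```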

Information Gaps

An information gap is a known unknown: a question that one is aware of but for which one is uncertain of the answer. It is a disparity between what the decision maker knows and what could be known. The attention paid to such an information gap depends on two key factors: salience and importance.

  • The salience of a question indicates the degree to which contextual factors in a situation highlight it. Salience might depend, for example, on whether there is an obvious counterfactual in which the question can be definitively answered.
  • The importance of a question is a measure of how much one’s utility would depend on the actual answer. It is this factor—importance—which is influenced by actions like gambling on the answer or taking on risk that the information gap would be relevant for assessing.

Information gaps often dwell in the land of Knightian uncertainty.

Communicating these Known Unknowns


A wide range of reasons for information gaps exist:

  • variability within a sampled population or repeated measures leading to, for example, statistical margins-of-error
  • computational or systematic inadequacies of measurement
  • limited knowledge and ignorance about underlying processes
  • expert disagreement.

Ambiguity

Ambiguity is present in virtually all real-life situations. It describes situations in which we do not have sufficient information to quantify the stochastic nature of the problem: a lack of knowledge as to the ‘basic rules of the game,’ where cause and effect are not understood and there is no precedent for making predictions as to what to expect.

Ambiguity is often used, especially in the context of VUCA, to cover situations that have:

  • Doubt about the nature of cause and effect
  • Little to no historical information with which to predict the outcome
  • Difficulty in forecasting or planning

It is important to ask whether a lack of experience and predictability might affect the situation, and to interrogate our unknown unknowns.

People are ambiguity averse in that they prefer situations in which probabilities are perfectly known to situations in which they are unknown.

Ambiguity is best resolved by experimentation.

Review of Process/Procedure

Reviews of documents are a critical part of the document management lifecycle.

Document Lifecycle

In the Process/Procedure Lifecycle there are some fundamental stakeholders:

  • The Process Owner defines the process, including people, process steps, and technology, as well as the connections to other processes. They are accountable for change management, training, monitoring and control of the process and supporting procedure. The Process Owner owns the continuous improvement of the overall process.
  • Quality is ultimately responsible for the decisions made and for ensuring that they align, at a minimum, with all regulatory requirements and internal standards.
  • Functional Area Management represents the areas that have responsibilities in the process and has a vested interest or concern in the ongoing performance of a process. This can include stakeholders who are process owners in upstream or downstream processes.
  • A Subject Matter Expert (SME) is typically an expert on a narrow division of a process, such as a specific tool, system, or set of process steps. A process may have multiple subject matter experts associated with it, each with varying degrees of understanding of the over-arching process.

A Risk Based Approach

The level of review of a new or revised process/procedure is guided by three fundamental risk questions:

  • What might go wrong with the associated process? (risk identification)
  • What is the likelihood that this will go wrong? (risk analysis)
  • What are the consequences? How severe are they if this goes wrong? (risk analysis)

Conducting risk identification is really about understanding how complicated and complex the associated process is. This looks at the following criteria:

  • Interconnectedness: the organization and interaction of system components and other processes
  • Repeatability: the amount of variance in the process
  • Information content: the amount of information needed to interact with the process

What Happens During a Review of Process and Procedure

The review of a process/procedure ensures that the proposed changes add value to the process and attain the outcome the organization wants. There are three levels of review (which can and often do happen simultaneously):

  • Functional review
  • Expert review by subject matter experts
  • Step-by-step real-world challenge
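One way to sketch how the three complexity criteria might select among these review levels: score each criterion and let the total decide how far the review goes. The 1-to-3 scoring and the thresholds are illustrative assumptions, not a prescribed rule.

```python
def review_levels(interconnectedness, repeatability, information_content):
    """Score each complexity criterion from 1 (low) to 3 (high) and use the
    total to decide which review levels to apply. Thresholds are assumptions."""
    total = interconnectedness + repeatability + information_content
    levels = ["functional review"]  # always performed
    if total >= 5:
        levels.append("expert review")
    if total >= 8:
        levels.append("step-by-step real-world challenge")
    return levels

# A highly interconnected, variable, information-rich process gets all three
print(review_levels(3, 3, 2))
```

A risk-matrix lookup would work equally well here; the point is that the depth of review is a deliberate, criteria-driven decision rather than a default.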

Functional review is the vetting of the process/procedure. Process stakeholders, including functional area management affected by the change, have the opportunity to review the draft, suggest changes, and agree to move forward.

Functional review supplies the lowest degree of assurance. This review looks for the potential impact of the change on the function – usually focused on responsibilities – but does not necessarily assure a critical review.

In the case of expert review, the SMEs will review the draft for both positive and negative elements. On the positive side, they will look for the best practices, value-adding steps, flexibility in light of changing demands, scalability in light of changing output targets, etc. On the negative side, they will look for bottlenecks in the process, duplication of effort, unnecessary tasks, non-value-adding steps, role ambiguities (i.e. several positions responsible for a task, or no one responsible for a task), etc.

Expert review provides a higher degree of assurance because it is a compilation of expert opinion and it is focused on the technical content of the procedure.

The real-world challenge tests the process/procedure’s applicability by challenging it step-by-step under conditions as close as possible to actual use. This involves selecting seasoned employee(s) within the scope of the draft procedure – not necessarily an SME – and comparing the steps as drafted with the actual activities. It is important to ascertain whether they align. It is equally important to consider evidence of resistance, repetition and human factor problems.

Sometimes it can be more appropriate to do the real-world test as a tabletop or simulation exercise.

As sufficient reviews are obtained, the comments received are incorporated as appropriate. Significant changes incorporated during the review process may require that the procedure be re-routed for review, and may require additional reviewers.

Repeat this iterative process as necessary.

Design lifecycle

The process/procedure lifecycle can be seen as an iterative design lifecycle.

Design Thinking: Determine process needs.

  • Collect and document business requirements
  • Map current-state processes.
  • Observe and interview process workers.
  • Design process to-be.

Startup: Create process documentation, workflows, and support materials. Review as described above.

Continuous Improvement: Use the process; Collect, analyze, and report; Improve

Sensemaking, Foresight and Risk Management

I love the power of Karl Weick’s future-oriented sensemaking – thinking in the future perfect tense – for supplying us a framework to imagine the future as if it has already occurred. We do not spend enough time being forward-looking and shaping the interpretation of future events. But when you think about it, quality is essentially all about using existing knowledge of the past to project a desired future.

This making sense of uncertainty – which should be a part of every manager’s daily routine – is another name for foresight. Foresight can be used as a discipline to help our organizations look into the future with the aim of understanding and analyzing possible future developments and challenges and supporting actors to actively shape the future.

Sensemaking is mostly used as a retrospective process: we look back at action that has already taken place. While Weick acknowledged that people’s actions may be guided by future-oriented thoughts, he nevertheless asserted that the understanding that derives from sensemaking occurs only after the fact, foregrounding the retrospective quality of sensemaking even when imagining the future.

“When one imagines the steps in a history that will realize an outcome, then there is more likelihood that one or more of these steps will have been performed before and will evoke past experiences that are similar to the experience that is imagined in the future perfect tense.”

R.B. MacKay went further in a fascinating way by considering the role that counterfactual and prefactual processes play in future-oriented sensemaking. He finds that sensemaking processes can be prospective when they include prefactual “what-ifs” about the past and the future. There is a whole line of thought stemming from this that looks at the meaning of the past as never static but always in a state of change.

Foresight concerns interpretation and understanding, while simultaneously being a process of thinking about the future in order to improve preparedness. Through seeking to understand uncertainty, reduce unknown unknowns, and drive toward a desired future state, it is knowledge management fueling risk management.

Do Not Ignore Metaphor

A powerful tool in this reasoning, imagining and planning the future, is metaphor. Now I’m a huge fan of metaphor, though some may argue I make up horrible ones – I think my entire team is sick of the milk truck metaphor by now – but this underutilized tool can be incredibly powerful as we build stories of how it will be.

Think about phrases such as “had gone through”, “had been through” and “up to that point” as commonly used metaphors of emotional experiences as a physical movement or a journey from one point to another. And how much that set of journey metaphors shape much of our thinking about process improvement.

Entire careers have been built on questioning the heavy use of sport or war metaphors in business thought and how they shape us. I don’t even watch sports and I find myself constantly using them as shorthand.

To make sense of the future, find a plausible answer to the question “What is the story?” This brings a balance between thinking and acting, and allows us to see the future more clearly.

Bibliography

  • Cornelissen, J.P. (2012), “Sensemaking under pressure: the influence of professional roles and social accountability on the creation of sense”, Organization Science, Vol. 23 No. 1, pp. 118-137, doi: 10.1287/orsc.1100.0640.
  • Greenberg, D. (1995), “Blue versus gray: a metaphor constraining sensemaking around a restructuring”, Group and Organization Management, Vol. 20 No. 2, pp. 183-209, doi: 10.1177/1059601195202007.
  • Luscher, L.S. and Lewis, M.W. (2008), “Organizational change and managerial sensemaking: working through paradox”, Academy of Management Journal, Vol. 51 No. 2, pp. 221-240, doi: 10.2307/20159506.
  • MacKay, R.B. (2009), “Strategic foresight: counterfactual and prospective sensemaking in enacted environments”, in Costanzo, L.A. and MacKay, R.B. (Eds), Handbook of Research on Strategy and Foresight, Edward Elgar, Cheltenham, pp. 90-112, doi: 10.4337/9781848447271.00011
  • Tapinos, E. and Pyper, N. (2018), “Forward looking analysis: investigating how individuals “do” foresight and make sense of the future”, Technological Forecasting and Social Change, Vol. 126 No. 1, pp. 292-302, doi: 10.1016/j.techfore.2017.04.025.
  • Weick, K.E. (1979), The Social Psychology of Organizing, McGraw-Hill, New York, NY.
  • Weick, K.E. (1995), Sensemaking in Organizations, Sage, Thousand Oaks, CA.