Risk can be associated with a number of different types of consequences, impacting different objectives. The types of consequences to be analyzed are decided when planning the assessment. The context statement is checked to ensure that the consequences to be analyzed align with the purpose of the assessment and the decisions to be made. This can be revisited during the assessment as more is learned.
Methods used in analyzing risks can be qualitative, semiquantitative, or quantitative. The choice depends on the intended use, the availability of reliable data, and the decision-making needs of the organization. In ICH Q9 terms, this is the level of formality.
The combination of the probability of occurrence of harm and the severity of that harm.
The effect of uncertainty on objectives. Often characterized by reference to the potential event and consequences, or a combination of these. Often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated likelihood of the occurrence.
Qualitative assessments define consequence (or severity), likelihood, and level of risk by significance levels, such as “high,” “medium,” or “low.” They work best when supporting analyses that have a narrow application or that sit within another quality system, such as change control.
Below is a good way to break down consequences and likelihood for a less formal assessment.
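One way to sketch such a breakdown is as a simple lookup matrix. The labels and the severity/likelihood pairings below are illustrative assumptions for a less formal assessment, not taken from any standard:

```python
# A minimal qualitative risk matrix: severity x likelihood -> risk level.
# The labels and pairings are illustrative assumptions, not a standard scale.
RISK_MATRIX = {
    ("low", "unlikely"): "low",
    ("low", "possible"): "low",
    ("low", "likely"): "medium",
    ("medium", "unlikely"): "low",
    ("medium", "possible"): "medium",
    ("medium", "likely"): "high",
    ("high", "unlikely"): "medium",
    ("high", "possible"): "high",
    ("high", "likely"): "high",
}

def risk_level(severity: str, likelihood: str) -> str:
    """Look up the qualitative risk level for a severity/likelihood pair."""
    return RISK_MATRIX[(severity, likelihood)]
```

A team can defend a table like this far more easily than a precise number, which is the point of a qualitative scale.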
One of the core jobs of a process owner in risk assessment is assembling this team and ensuring they have the space to do their job. The process owner is often called the champion or sponsor for good reason.
It is important to keep in mind that the membership of this team will change depending on the scale and scope of the risk assessment, gaining and losing members and bringing people on for specific subsections.
The more complex the scope and the more involved the assessment tool, the more important it is to have a facilitator to drive the process. This lets one person focus on the process of the risk assessment and on reducing subjectivity.
When evaluating a system we can look at it in two ways: we can identify the ways a thing can fail, or the various ways it can succeed.
These are really two sides of the same coin, with identifiable points in success space coinciding with analogous points in failure space. “Maximum anticipated success” in success space coincides with “minimum anticipated failure” in failure space.
Like everything, how we frame the question helps us find answers. Certain questions require us to think in terms of failure space, others in success. There are advantages in both, but in risk management, the failure space is incredibly valuable.
It is generally easier to attain concurrence on what constitutes failure than it is to agree on what constitutes success. We may desire a house that has great windows, high ceilings, and a nice yard. However, the one we buy can have a termite-infested foundation, bad electrical work, and a roof full of leaks. Whether the house is great is a matter of opinion, but we certainly know it is a failure based on the high repair bills we are going to accrue.
Success tends to be associated with the efficiency of a system, the amount of output, the degree of usefulness. These characteristics are describable by continuous variables, which are not easily modeled in terms of simple discrete events such as “water is not hot,” which characterize the failure space. Failure, in particular complete failure, is generally easy to define, whereas the event of success may be more difficult to tie down.
Theoretically, the number of ways in which a system can fail and the number of ways in which it can succeed are both infinite; from a practical standpoint, however, there are generally more ways to succeed than to fail. The population of the failure space is smaller than the population of the success space, and this is why risk management focuses on the failure space.
The failure space maps really well to nominal scales for severity, which can be helpful as you build your own scales for risk assessments.
For example, consider a morning commute.
When I teach an introductory risk management class, I usually use an icebreaker: “What is the riskiest activity you can think of doing?” Inevitably you will get some version of skydiving, swimming with sharks, or jumping off bridges. This activity is great because it starts the conversation around likelihood and severity. At heart, the question brings out the concept of risk important activities and the nature of controls.
The things people think of, such as skydiving, are great examples of activities that are surrounded by activities that control risk. The activity itself is premised on reducing risk as low as possible and then proceeding along the safest possible pathway. These risk important activities are the mechanisms just before a critical step that:
Ensure the appropriate transfer of information and skill
Ensure the appropriate number of actions to reduce risk
Influence the presence or effectiveness of barriers
Influence the ability to maintain positive control of the moderation of hazards
Risk important activities are a concept central to safety thinking and sit at the heart of many human error reduction tools and practices. They are all about thinking through the right set of controls, building them into the procedure, and successfully executing them before reaching the critical step of no return. Checklists are a great example of this mindset at work, but there are many ways of implementing it.
Hospitals use a great thought process, the “Five Rights of Safe Medication Practices”: 1) right patient, 2) right drug, 3) right dose, 4) right route, and 5) right time. Next time you are given medication in the doctor’s office or hospital, evaluate what your caregiver is doing and how it fits into that process. Those are examples of risk important activities.
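As a sketch, the five rights can be treated as a checklist that gates the critical step of administering the medication. The function and item names are illustrative, not a clinical protocol:

```python
# Sketch: risk important activities as a checklist gating a critical step.
# The items mirror the "five rights" example; the gate logic is illustrative.
FIVE_RIGHTS = [
    "right patient",
    "right drug",
    "right dose",
    "right route",
    "right time",
]

def ready_for_critical_step(confirmed: set) -> bool:
    """Proceed only when every risk important activity has been confirmed."""
    return all(item in confirmed for item in FIVE_RIGHTS)
```

The value is in the gate: the critical step does not happen until every control activity before it has been executed.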
Assessing controls during risk assessment
Risk is affected by the overall effectiveness of any controls that are in place.
The key aspects of controls are:
the mechanism by which the controls are intended to modify risk
whether the controls are in place, are capable of operating as intended, and are achieving the expected results
whether there are shortcomings in the design of controls or the way they are applied
whether there are gaps in controls
whether controls function independently, or if they need to function collectively to be effective
whether there are factors, conditions, vulnerabilities, or circumstances that can reduce or eliminate control effectiveness, including common cause failures
A risk can have more than one control and controls can affect more than one risk.
We always want to distinguish between controls that change likelihood, consequences, or both, and controls that change how the burden of risk is shared between stakeholders.
Any assumptions made during risk analysis about the actual effect and reliability of controls should be validated where possible, with a particular emphasis on individual or combinations of controls that are assumed to have a substantial modifying effect. This should take into account information gained through routine monitoring and review of controls.
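A minimal sketch of how these aspects might be recorded for each control during an assessment. The field names and the crediting rule are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field

# Sketch of a control record capturing the key aspects listed above.
# Field names and the effectiveness rule are illustrative assumptions.
@dataclass
class Control:
    name: str
    mechanism: str                  # how the control is intended to modify risk
    in_place: bool                  # implemented and capable of operating as intended
    achieving_results: bool         # achieving the expected results
    design_shortcomings: list = field(default_factory=list)
    depends_on: list = field(default_factory=list)  # controls it must act with

    def effective(self) -> bool:
        """Only credit a control that is implemented, working, and well designed."""
        return self.in_place and self.achieving_results and not self.design_shortcomings
```

Recording controls this way makes the assumptions about their effect explicit, so they can be validated against monitoring data later.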
Risk Important Activities, Critical Steps and Process
Critical steps are the way we meet our critical-to-quality requirements: the activities that ensure our product or service meets the needs of the organization.
These critical steps are points of no return, where the work-product is transformed into something else. Risk important activities are what we do to remove the danger of executing that critical step.
Beyond that critical step, you have rejection or rework. When I am cooking, there is a lot of prep work, which can be a mixture of critical steps from which there is no return. If I break the egg wrong and get eggshells in my batter, a degree of rework is necessary. This is true for all our processes.
The risk-based approach to the process is to understand the critical steps and put mitigating controls in place.
We are thinking through the following:
Critical Step: The action that triggers irreversibility. Think in terms of critical-to-quality attributes.
Output: The desired result (positive) or the possible difficulty (negative)
Preconditions: Technical conditions that must exist before the critical step
Resources: What is needed for the critical step to be completed
Local factors: Things that could influence the critical step. When human beings are involved, this is usually what can influence the performer’s thinking and actions before and during the critical step
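The five elements above can be sketched as a simple record per critical step. The cooking example and field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Sketch: the five elements of a critical step captured as one record.
# Field names mirror the list above; the structure itself is illustrative.
@dataclass
class CriticalStepAnalysis:
    critical_step: str              # the action that triggers irreversibility
    output: str                     # desired result or possible difficulty
    preconditions: list = field(default_factory=list)
    resources: list = field(default_factory=list)
    local_factors: list = field(default_factory=list)

def ready(analysis: CriticalStepAnalysis, satisfied: set) -> bool:
    """Proceed only when every precondition and resource is satisfied."""
    return all(p in satisfied for p in analysis.preconditions + analysis.resources)
```

Walking a process and filling one of these in per critical step is a quick way to surface missing preconditions before the point of no return.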
One cannot control risk, or even successfully identify it, unless a system can flexibly monitor both its own performance (what happens inside the system’s boundary) and its environment (what happens outside the system’s boundary). Monitoring improves the ability to cope with possible risks.
When performing the risk assessment, challenge existing monitoring and ensure that the right indicators are in place. But remember, monitoring itself is a low-effectiveness control.
Ensure that there are leading indicators, which can be used as valid precursors for changes and events that are about to happen.
For each monitoring control, ask yourself the following:
How have the indicators been defined? (By analysis, by tradition, by industry consensus, by the regulator, by international standards, etc.)
When was the list created? How often is it revised? On which basis is it revised? Who is responsible for maintaining the list?
How many of the indicators are of the ‘leading’ type, and how many of the ‘lagging’? Do indicators refer to single or aggregated measurements?
How is the validity of an indicator established (regardless of whether it is leading or lagging)? Do indicators refer to an articulated process model, or just to ‘common sense’?
For lagging indicators, how long is the typical lag? Is it acceptable?
What is the nature of the measurements? Qualitative or quantitative? (If quantitative, what kind of scaling is used?)
How often are the measurements made? (Continuously, regularly, every now and then?)
What is the delay between measurement and analysis/interpretation? How many of the measurements are directly meaningful and how many require analysis of some kind? How are the results communicated and used?
Are the measured effects transient or permanent?
Is there a regular inspection scheme or schedule? Is it properly resourced? Where does this measurement fit into the management review?
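A sketch of an indicator inventory entry answering a few of these questions. The field names and the 30-day lag threshold are assumptions for illustration:

```python
from dataclasses import dataclass

# Sketch of one entry in an indicator inventory.
# Field names and the default lag threshold are illustrative assumptions.
@dataclass
class Indicator:
    name: str
    kind: str                     # "leading" or "lagging"
    quantitative: bool            # quantitative vs qualitative measurement
    measurement_frequency: str    # e.g. "continuous", "weekly", "monthly"
    lag_days: int = 0             # typical delay between event and signal

def acceptable_lag(ind: Indicator, max_lag_days: int = 30) -> bool:
    """For lagging indicators, flag whether the typical lag is acceptable."""
    return ind.kind != "lagging" or ind.lag_days <= max_lag_days
```

Building such an inventory forces the questions above to be answered once per indicator, rather than rediscovered at every assessment.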
The structured what-if technique, SWIFT, is a high-level and less formal risk identification technique that can be used independently, or as part of a staged approach to make bottom-up methods such as FMEA more efficient. SWIFT uses structured brainstorming in a facilitated workshop where a predetermined set of guidewords (timing, amount, etc.) are combined with prompts elicited from participants that often begin with phrases such as “what if?” or “how could?”.
At the heart of a SWIFT is a list of guidewords to enable a comprehensive review of risks or sources of risk. At the start of the workshop the context, scope and purpose of the SWIFT is discussed and criteria for success articulated. Using the guidewords and “what if?” prompts, the facilitator asks the participants to raise and discuss issues such as:
risk sources and drivers
previous experience, successes and incidents
known and existing controls
regulatory requirements and constraints
The list of guidewords is utilized by the facilitator to monitor the discussion and to suggest additional issues and scenarios for the team to discuss. The team considers whether controls are adequate and if not considers potential treatments. During this discussion, further “what if?” questions are posed.
Often the list of risks generated can be used to fuel a qualitative or semiquantitative risk assessment method, such as an FMEA.
A SWIFT analysis allows participants to look at the system’s response to problems rather than just examining the consequences of component failure. As such, it can be used to identify opportunities for improving processes and systems and, more generally, to identify actions that lead to and enhance the probability of success.
What-if analysis is a structured brainstorming method of determining what things can go wrong and judging the likelihood and consequences of those situations occurring. The answers to these questions form the basis for making judgments regarding the acceptability of those risks and determining a recommended course of action for those risks judged to be unacceptable. An experienced review team can effectively and productively discern major issues concerning a process or system. Led by an energetic and focused facilitator, each member of the review team participates in assessing what can go wrong based on their past experiences and knowledge of similar situations.
What could go wrong?
What would happen if it did?
What will we do about them? Again: prevent and monitor.
Steps in a SWIFT Analysis
Prepare the guide words: The facilitator should select a set of guide words to be used in the SWIFT.
Assemble the team: Select participants for the SWIFT workshop based on their knowledge of the system/process being assessed and the degree to which they represent the full range of stakeholder groups.
Background: Describe the trigger for the SWIFT (e.g., a regulatory change, an adverse event, etc.).
Articulate the purpose: Clearly explain the purpose to be served by the SWIFT (e.g., to improve effectiveness of the process).
Define the requirements: Articulate the criteria for success
Describe the system: Provide appropriate-level textual and graphical descriptions of the system or process to be risk assessed. A clear understanding is necessary and can be established through interviews, by gathering a multifunctional team, and through the study of documents, plans, and other records.
Identify the risks/hazards: This is where the structured what-if technique is applied. Use the guide words/headings with each system, high-level subsystem, or process step in turn. Participants should use prompts starting with phrases like “What if…” or “How could…” to elicit potential risks/hazards associated with the guide word. For instance, if the process is “Receipt of samples,” and the guide word is “time, timing or speed,” prompts might include: “What if the sample is delivered at a shift change?” (wrong time) or “How could the sample be left waiting too long in ambient conditions?” (wrong timing).
Assess the risks: With the use of either a generic approach or a supporting risk analysis technique, estimate the risk associated with the identified hazards. In light of existing controls, assess the likelihood that they could lead to harm and the severity of harm they might cause. Evaluate the acceptability of these risk levels, and identify any aspects of the system that may require more detailed risk identification and analysis.
Propose actions: Propose risk control action plans to reduce the identified risks to an acceptable level.
Review the process: Determine whether the SWIFT met its objectives, or whether a more detailed risk assessment is required for some parts of the system.
Document: Produce an overview document to communicate the results of the SWIFT.
Additional risk assessment: Conduct additional risk assessments using more detailed or quantitative techniques, if required. The SWIFT Analysis is really effective as a filtering mechanism to focus effort on the most valuable areas.
The facilitator and process owner can choose any guide words that seem appropriate. Guidewords usually center around:
Wrong: Person or people
Wrong: Place, location, site, or environment
Wrong: Thing or things
Wrong: Idea, information, or understanding
Wrong: Time, timing, or speed
Failure: Control or Detection
If your organization has invested time to create root cause categories and sub-categories, the guidewords can easily start there.
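A minimal sketch of turning these guidewords into starter prompts for the workshop. The phrasing and the example process step are illustrative; real prompts come from the participants:

```python
# Sketch: pairing guidewords with a process step to seed "What if...?" prompts.
# The guideword list mirrors the one above; the prompt phrasing is illustrative.
GUIDE_WORDS = [
    "the wrong person or people",
    "the wrong place, location, site, or environment",
    "the wrong thing or things",
    "the wrong idea, information, or understanding",
    "the wrong time, timing, or speed",
    "a failure of control or detection",
]

def swift_prompts(process_step: str) -> list:
    """Generate a starter 'what if' prompt for each guideword."""
    return [f"What if {gw} affects '{process_step}'?" for gw in GUIDE_WORDS]
```

A facilitator can walk such a generated list step by step, using each prompt only as a seed for the team’s own “what if?” and “how could?” questions.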