Multi-Criteria Decision-Making to Drive Risk Control

To be honest, too often we perform a risk assessment not to make decisions but to justify a decision that has already been made. The assessment may help define a few additional action items and determine how rigorous to be about a few things, but it makes little impact on the already-decided path forward. This is poor risk management and poor decision-making.

For highly important decisions with high uncertainty or complexity, it is useful to identify the options/alternatives that exist and assess the benefits and risks of each before deciding on a path forward. Doing this thoroughly strengthens the decision-making process and ultimately reduces risk.

An effective, highly structured decision-making process can help answer the question, ‘How can we compare the consequences of the various options before deciding?’

The most challenging risk decisions are characterized by several different, important considerations in an environment with multiple stakeholders and, often, multiple decision-makers.

In Multi-Criteria Decision-Making (MCDM), the primary objective is the structured consideration of the available alternatives (options) for achieving the objectives in order to make the most informed decision, leading to the best outcome.

In a Quality Risk Management context, the decision-making concerns making informed decisions in the face of uncertainty about risks related to the quality (and/or availability) of medicines.

Key Concepts of MCDM

  1. Conflicting Criteria: MCDM deals with situations where criteria conflict. For example, when purchasing a car, one might need to balance cost, comfort, safety, and fuel economy, which often do not align perfectly.
  2. Explicit Evaluation: Unlike intuitive decision-making, MCDM involves a structured approach to explicitly evaluate multiple criteria, which is crucial when the stakes are high, such as deciding whether to build additional manufacturing capacity for a product under development.
  3. Types of Problems:
  • Multiple-Criteria Evaluation Problems: These involve a finite number of alternatives known at the beginning. The goal is to find the best alternative or a set of good alternatives based on their performance across multiple criteria.
  • Multiple-Criteria Design Problems: In these problems, alternatives are not explicitly known and must be found by solving a mathematical model. The number of alternatives can be very large, often growing exponentially with the size of the problem.

Preference Information: The methods used in MCDM often require preference information from decision-makers (DMs) to differentiate between solutions. This can be done at various stages of the decision-making process, such as prior articulation of preferences, which transforms the problem into a single-criterion problem.

MCDM addresses risk and uncertainty by explicitly weighing criteria and the trade-offs between them. Multi-criteria decision-making (MCDM) differs from traditional decision-making methods in several key ways:

  1. Explicit Consideration of Multiple Criteria: Traditional decision-making often focuses on a single criterion like cost or profit. MCDM explicitly considers multiple criteria simultaneously, which may be conflicting, such as cost, quality, safety, and environmental impact[1]. This allows for a more comprehensive evaluation of alternatives.
  2. Structured Approach: MCDM provides a structured framework for evaluating alternatives against multiple criteria rather than relying solely on intuition or experience. It involves techniques like weighting criteria, scoring alternatives, and aggregating scores to rank or choose the best option.
  3. Transparency and Consistency: MCDM methods aim to make decision-making more transparent, consistent, and less susceptible to individual biases. The criteria, weights, and evaluation process are explicitly defined, allowing for better justification and reproducibility of decisions.
  4. Quantitative Analysis: Many MCDM methods employ quantitative techniques, such as mathematical models, optimization algorithms, and decision support systems. This enables a more rigorous and analytical approach compared to traditional qualitative methods.
  5. Handling Complexity: MCDM is particularly useful for complex decision problems involving many alternatives, conflicting objectives, and multiple stakeholders. Traditional methods may struggle to handle such complexity effectively.
  6. Stakeholder Involvement: Some MCDM methods, like the Analytic Hierarchy Process (AHP), facilitate the involvement of multiple stakeholders and the incorporation of their preferences and judgments. This can lead to more inclusive and accepted decisions.
  7. Trade-off Analysis: MCDM techniques often involve analyzing trade-offs between criteria, helping decision-makers understand the implications of prioritizing certain criteria over others. This can lead to more informed and balanced decisions.

While traditional decision-making methods rely heavily on experience, intuition, and qualitative assessments, MCDM provides a more structured, analytical, and comprehensive approach, particularly in complex situations with conflicting criteria.
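To make the contrast with intuition-based approaches concrete, here is a minimal, hypothetical sketch of how a structured method like AHP derives criterion weights from pairwise judgments. The criteria, comparison values, and the geometric-mean approximation shown are illustrative assumptions, not a full AHP implementation:

```python
# Hypothetical AHP-style criterion weighting: pairwise comparisons on
# Saaty's 1-9 scale, weights via the geometric-mean (row) approximation,
# plus a simple consistency check.
import math

criteria = ["cost", "quality", "safety"]
# pairwise[i][j] = how much more important criterion i is than criterion j.
pairwise = [
    [1.0, 1/3, 1/5],   # cost
    [3.0, 1.0, 1/2],   # quality
    [5.0, 2.0, 1.0],   # safety
]

n = len(criteria)

# The normalized geometric mean of each row approximates the weight vector.
geo = [math.prod(row) ** (1 / n) for row in pairwise]
total = sum(geo)
weights = [g / total for g in geo]

# Consistency check: estimate lambda_max from A*w, then
# CI = (lambda_max - n) / (n - 1) and CR = CI / RI,
# where RI is Saaty's random index (0.58 for n = 3).
aw = [sum(pairwise[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(aw[i] / weights[i] for i in range(n)) / n
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}  (rule of thumb: acceptable if < 0.10)")
```

The consistency ratio is what makes the judgments auditable: if the pairwise comparisons contradict each other (A beats B, B beats C, C beats A), the ratio exceeds the usual 0.10 threshold and the team revisits its judgments.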

Multi-Criteria Decision-Making (MCDM) is typically performed following these steps:

  1. Define the Decision Problem: Clearly state the problem or decision to be made, identify the stakeholders involved, and determine the desired outcome or objective.
  2. Establish Criteria: Identify the relevant criteria that will be used to evaluate the alternatives. These criteria should be measurable, independent, and aligned with the objectives. Involve stakeholders in selecting and validating the criteria.
  3. Generate Alternatives: Develop a comprehensive list of potential alternatives or options that could solve the problem. Use techniques like brainstorming, benchmarking, or scenario analysis to generate diverse alternatives.
  4. Gather Performance Data: Assess how each alternative performs against each criterion. This may involve quantitative data, expert judgments, or qualitative assessments.
  5. Assign Criteria Weights: Determine each criterion’s relative importance or priority by assigning weights. This can be done through methods like pairwise comparisons, swing weighting, or direct rating. Stakeholder input is crucial here.
  6. Apply MCDM Method: Choose an appropriate MCDM technique based on the problem’s nature and the available data. Popular methods include the Analytic Hierarchy Process (AHP); the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS); ELimination and Choice Expressing REality (ELECTRE); the Preference Ranking Organization METHod for Enrichment of Evaluations (PROMETHEE); and Multi-Attribute Utility Theory (MAUT).
  7. Evaluate and Rank Alternatives: Apply the chosen MCDM method to evaluate and rank the alternatives based on their performance against the weighted criteria. This may involve mathematical models, software tools, or decision support systems.
  8. Sensitivity Analysis: Perform sensitivity analysis to assess the robustness of the results and understand how changes in criteria weights or performance scores might affect the ranking or choice of alternatives.
  9. Make the Decision: Based on the MCDM analysis, select the most preferred alternative or develop an action plan based on the ranking of alternatives. Involve stakeholders in the final decision-making process.
  10. Monitor and Review: Implement the chosen alternative and monitor its performance. Review the decision periodically, and if necessary, repeat the MCDM process to adapt to changing circumstances or new information.

MCDM is an iterative process; stakeholder involvement, transparency, and clear communication are crucial. Additionally, the specific steps and techniques may vary depending on the problem’s complexity, the data’s availability, and the decision-maker’s preferences.
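The scoring, weighting, aggregation, and sensitivity steps above can be sketched with the simplest possible model, a weighted sum. Everything below (the criteria, weights, alternatives, and scores) is hypothetical and exists only to show the mechanics:

```python
# A minimal weighted-sum (simple additive weighting) sketch of steps 4-8.

criteria = ["cost", "quality_risk", "supply_reliability"]
weights = {"cost": 0.3, "quality_risk": 0.5, "supply_reliability": 0.2}

# Performance scores per alternative, normalized to 0-10 (10 = best).
# A "do nothing" option is deliberately included.
alternatives = {
    "do_nothing":         {"cost": 9, "quality_risk": 3, "supply_reliability": 4},
    "second_supplier":    {"cost": 5, "quality_risk": 8, "supply_reliability": 9},
    "increase_inventory": {"cost": 6, "quality_risk": 6, "supply_reliability": 7},
}

def weighted_score(scores, w):
    """Aggregate criterion scores into a single weighted sum."""
    return sum(w[c] * scores[c] for c in w)

ranking = sorted(alternatives,
                 key=lambda a: weighted_score(alternatives[a], weights),
                 reverse=True)
for alt in ranking:
    print(alt, round(weighted_score(alternatives[alt], weights), 2))

# Crude sensitivity analysis (step 8): nudge each weight, re-normalize,
# and check whether the top-ranked alternative changes.
for c in criteria:
    for delta in (-0.1, 0.1):
        w = dict(weights)
        w[c] = max(0.0, w[c] + delta)
        total = sum(w.values())
        w = {k: v / total for k, v in w.items()}  # weights sum to 1 again
        top = max(alternatives, key=lambda a: weighted_score(alternatives[a], w))
        if top != ranking[0]:
            print(f"Ranking is sensitive to {c} ({delta:+.1f}): top becomes {top}")
```

If small weight perturbations flip the top alternative, that is a signal the decision hinges on contested preference information and deserves more stakeholder discussion before committing.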

Analytic Hierarchy Process (AHP)
  • Description: A structured technique for organizing and analyzing complex decisions, using mathematics and psychology.
  • Application: Widely used in business, government, and healthcare for prioritizing and decision-making.
  • Key Features: Pairwise comparisons, consistency checks, and hierarchical structuring of criteria and alternatives.

Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)
  • Description: Based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution and the longest geometric distance from the negative ideal solution.
  • Application: Frequently used in engineering, management, and human resource management for ranking and selection problems.
  • Key Features: Compensatory aggregation, normalization of criteria, and calculation of geometric distances.

Elimination and Choice Expressing Reality (ELECTRE)
  • Description: An outranking method that compares alternatives by considering both qualitative and quantitative criteria, using pairwise comparison to eliminate less favorable alternatives.
  • Application: Commonly used in project selection, resource allocation, and environmental management.
  • Key Features: Concordance and discordance indices, handling of both qualitative and quantitative data, and the ability to deal with incomplete rankings.

Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE)
  • Description: An outranking method that uses preference functions to compare alternatives based on multiple criteria and provides a complete ranking of alternatives.
  • Application: Applied in fields such as logistics, finance, and environmental management.
  • Key Features: Preference functions, visual interactive modules (GAIA), and sensitivity analysis.

Multi-Attribute Utility Theory (MAUT)
  • Description: Converts multiple criteria into a single utility function, which is then used to evaluate and rank alternatives, taking into account the decision-maker’s risk preferences and uncertainties.
  • Application: Used in complex decision-making scenarios involving risk and uncertainty, such as policy analysis and strategic planning.
  • Key Features: Utility functions, probabilistic weights, and handling of uncertainty.

Popular MCDM Techniques
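As a sketch of the geometric-distance idea behind TOPSIS, the following walks a small hypothetical decision matrix through normalization, ideal solutions, and closeness coefficients. The alternatives, criteria, weights, and values are all illustrative assumptions:

```python
# A TOPSIS sketch on an invented 3-alternative, 3-criterion problem.
import math

alternatives = ["A", "B", "C"]
# (name, weight, direction): +1 = benefit (more is better), -1 = cost.
criteria = [("quality", 0.5, +1), ("cost", 0.3, -1), ("lead_time", 0.2, -1)]
matrix = [
    [8.0, 120.0, 6.0],  # A
    [6.0,  80.0, 4.0],  # B
    [9.0, 150.0, 9.0],  # C
]

n_alt, n_crit = len(matrix), len(criteria)

# 1. Vector-normalize each column, then apply the criterion weights.
norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
weighted = [[criteria[j][1] * matrix[i][j] / norms[j] for j in range(n_crit)]
            for i in range(n_alt)]

# 2. Positive and negative ideal solutions, respecting each criterion's direction.
ideal_pos = [max(w[j] for w in weighted) if criteria[j][2] > 0
             else min(w[j] for w in weighted) for j in range(n_crit)]
ideal_neg = [min(w[j] for w in weighted) if criteria[j][2] > 0
             else max(w[j] for w in weighted) for j in range(n_crit)]

# 3. Geometric (Euclidean) distance of each alternative to both ideals.
def dist(row, ref):
    return math.sqrt(sum((row[j] - ref[j]) ** 2 for j in range(n_crit)))

# 4. Closeness coefficient: 1.0 = at the positive ideal, 0.0 = at the negative.
closeness = {}
for name, row in zip(alternatives, weighted):
    d_pos, d_neg = dist(row, ideal_pos), dist(row, ideal_neg)
    closeness[name] = d_neg / (d_pos + d_neg)

for name in sorted(closeness, key=closeness.get, reverse=True):
    print(name, round(closeness[name], 3))
```

Note the compensatory behavior the table describes: alternative C scores best on quality, but its poor cost and lead time pull it away from the positive ideal, so a balanced alternative can still rank first.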

Building the Risk Team

Good risk assessments are a team effort. Done right, this is a key way to reduce subjectivity, and it recognizes that none of us knows everything.

An effective risk team does not assemble itself.

One of the core jobs of a process owner in risk assessment is assembling this team and ensuring it has the space to do its job. The process owner is often called the champion or sponsor for good reason.

Keep in mind that the membership of this team will change, gaining and losing members and bringing people on for specific subsections, depending on the scale and scope of the risk assessment.

The more complex the scope and the more involved the assessment tool, the more important it is to have a facilitator to drive the process. This frees someone to focus on the process of the risk assessment itself, and on reducing subjectivity.

Quality, Decision Making and Putting the Human First

Quality stands in a position, sometimes unique within an organization, of engaging with stakeholders to understand the objectives and positions the organization needs to assume, and the choices it is making to achieve them.

The team’s effectiveness in making good decisions depends on its ability to analyze a problem and generate alternatives. As I discussed in my post “Design Lifecycle within PDCA – Planning”, experimentation plays a critical part in the decision-making process. When designing the solution, we always consider:

  • Always include a “do nothing” option: Not every decision or problem demands an action. Sometimes, the best way is to do nothing.
  • How do you know what you think you know? This should be a question everyone is comfortable asking. It allows people to check assumptions and to question claims that, while convenient, are not based on any kind of data, firsthand knowledge, or research.
  • Ask tough questions. Be direct and honest. Push hard to get to the core of what the options look like.
  • Have a dissenting option. It is critical to include unpopular but reasonable options. Make sure to include opinions or choices you personally don’t like, but for which good arguments can be made. This keeps you honest and gives anyone who sees the pros/cons list a chance to convince you to make a better decision than the one you might have arrived at on your own.
  • Consider hybrid choices. Sometimes it’s possible to take an attribute of one choice and add it to another. Like exploratory design, there are always interesting combinations in decision making. This can explode the number of choices, which can slow things down and create more complexity than you need. Watch for the zone of indifference (options that are not perceived as making any difference or adding any value) and don’t waste time in it.
  • Include all relevant perspectives. Consider if this decision impacts more than just the area the problem is identified in. How does it impact other processes? Systems?

A struggle every organization has is how to think through problems in a truly innovative way. Installing new processes into an old bureaucracy will only replace one form of control with another. We need to rethink the very matter of control and what it looks like within an organization. It is not about change management; on its own, change management will just shift the patterns of the past. To truly transform, we need a new way of thinking.

One of my favorite books on just how to do this is Humanocracy: Creating Organizations as Amazing as the People Inside Them by Gary Hamel and Michele Zanini. In it, the authors argue that business must become fundamentally more human. The idea of human ability, and how to cultivate and unleash it, is the book’s underlying premise.

Visualized by Rose Fastus

it’s possible to capture the benefits of bureaucracy—control, consistency, and coordination—while avoiding the penalties—inflexibility, mediocrity, and apathy.

Gary Hamel and Michele Zanini, Humanocracy, p. 15

The above quote really encapsulates the heart of this book, and why I think it is such a pivotal read for my peers. The book takes the core question of bureaucracy, “How do we get human beings to better serve the organization?”, and flips it. The issue at the heart of humanocracy becomes: “What sort of organization elicits and merits the best that human beings can give?” It seems a simple swap, but the implications are profound.

Bureaucracy versus Humanocracy. Source: Gary Hamel and Michele Zanini, Humanocracy, p. 48

I would hope you, like me, see the promise of many of the central tenets of Quality Management, not least Deming’s 8th point. The very real tendency of quality to devolve to pointless bureaucracy is something we should always be looking to combat.

Humanocracy’s central point is that by truly putting the employee first in our organizations, we build a human-centered organization that powers and thrives on innovation. This is particularly relevant as organizations seek to be more resilient, agile, adaptive, innovative, and customer-centric. Leaders pursuing such goals seek to install systems like agile, DevOps, and flexible teams. They will fail, because people are not processes. Resilience, agility, and efficiency are not new programming codes for people. These goals require more than new rules or a corporate initiative. They are behaviors, attitudes, and ways of thinking that can only work when you change the deep ‘systems and assumptions’ within an organization. This book discusses those deeper changes.

Humanocracy lays out seven tips for success in experimentation. I find they align nicely with Kotter’s 8 change accelerators.

Humanocracy’s Tip → Kotter’s Accelerator

  • Keep it Simple → Generate (and celebrate) short-term wins
  • Use Volunteers → Enlist a volunteer army
  • Make it Fun → Sustain acceleration
  • Start in your own backyard → Form a change vision and strategic initiatives
  • Run the new parallel with the old → Enable action by removing barriers
  • Refine and Retest → Sustain acceleration
  • Stay loyal to the problem → Create a sense of urgency around a big opportunity

Comparison to Kotter’s Eight Accelerators for Change

Teams reason better

Teams collaborate better than individuals on a wide range of problem-solving tasks for two reasons:

  • People are exposed to points of view different from their own. If the arguments are good enough, people can change their minds and adopt better beliefs. This requires structure, such as “Yes… but… and…”.
  • The back-and-forth of a conversation allows people to address counterarguments, and thus to refine their arguments, making it more likely that the best argument carries the day.

Both of these work to reduce bias and subjectivity.

Principles of Team Collaboration

There are a few principles to make this team collaboration work.

  • Clear purpose: What is the reason for the collaboration? What’s the business case or business need? Without alignment on the purpose and its underlying importance to the organization, the collaboration will fail. The scope will start to change, or other priorities will take precedence. 
  • Clear process: How will the collaboration take place? What are the steps? What is the timing? Who is responsible for what?
  • Clear expectations: What is the specific goal or outcome we are striving for through this collaboration? 
  • Clear support: Problems will arise that the team cannot handle on its own. In those cases, what is the escalation process, including who and when? 

Ensure these are in your team’s ground rules, measure success against them, and continuously improve.

Information Gaps

An information gap is a known unknown: a question that one is aware of but whose answer one is uncertain of. It is a disparity between what the decision-maker knows and what could be known. The attention paid to such an information gap depends on two key factors: salience and importance.

  • The salience of a question indicates the degree to which contextual factors in a situation highlight it. Salience might depend, for example, on whether there is an obvious counterfactual in which the question can be definitively answered.
  • The importance of a question is a measure of how much one’s utility would depend on the actual answer. It is this factor—importance—which is influenced by actions like gambling on the answer or taking on risk that the information gap would be relevant for assessing.

Information gaps often dwell in the land of Knightian uncertainty.

Communicating these Known Unknowns

Communicating around Known Unknowns and other forms of uncertainty

Information gaps arise for a wide range of reasons:

  • variability within a sampled population or repeated measures leading to, for example, statistical margins-of-error
  • computational or systematic inadequacies of measurement
  • limited knowledge and ignorance about underlying processes
  • expert disagreement.