We gather for a meeting, usually around a table, place our collective attention on the problem, and, most likely, let our automatic processes take over. But all too often this turns out to be a mistake. From this stem poor meetings, bad decisions, and a general feeling that we are wasting time.
Problem-solving is a process with stages, and for groups to collaborate effectively and avoid talking past one another, members must occupy the same problem-solving stage at the same time. Clear communication is critical here, and it is important for the team to understand which stage it is in. Our meetings need to be methodical.
In a methodical meeting, for each issue that needs to be discussed, members deliberately and explicitly choose just one problem-solving stage to complete.
To convert an intuitive meeting into a methodical one, take your meeting agenda and, to the right of each agenda item, write down the problem-solving stage that will move you closer to a solution, along with the corresponding measurable outcome for that stage. Then, during that part of the meeting, focus only on achieving that outcome. Once you do, move on.
A Template for Conducting a Methodical Meeting
Pair each agenda item with a problem-solving stage and a measurable outcome.

Agenda Item                         | Problem-Solving Stage   | Measurable Outcome
Select a venue for the offsite      | Generate Alternatives   | List of potential venues
Discuss ERP usage problems          | Frame the Problem       | Written problem statement
Implement new batch record strategy | Plan for Implementation | List of actions / owners / due dates
Review proposed projects            | Evaluate Alternatives   | List of strengths and weaknesses
Choose a vendor                     | Make the Choice         | Written decision
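If it helps to keep such an agenda honest, the pairing can be captured as structured data so every item must name exactly one stage and one outcome. A minimal sketch in Python; the class name and field names are my own, and the items are taken from the template above:

```python
# Sketch of a methodical-meeting agenda: each item is paired with exactly
# one problem-solving stage and one measurable outcome.
from dataclasses import dataclass

@dataclass
class AgendaItem:
    topic: str     # what is being discussed
    stage: str     # the single problem-solving stage chosen for this item
    outcome: str   # the measurable artifact the discussion must produce

agenda = [
    AgendaItem("Select a venue for the offsite",
               "Generate Alternatives", "List of potential venues"),
    AgendaItem("Review proposed projects",
               "Evaluate Alternatives", "List of strengths and weaknesses"),
    AgendaItem("Implement new batch record strategy",
               "Plan for Implementation", "List of actions / owners / due dates"),
]

for item in agenda:
    print(f"{item.topic} -> {item.stage}: {item.outcome}")
```

Writing the agenda down this way makes it obvious when an item is missing its stage or outcome before the meeting starts.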
If you don’t know which problem-solving stage to choose, consider the following:
Do you genuinely understand the problem you’re trying to solve? If you can’t clearly articulate the problem to someone else, chances are you don’t understand it as well as you might think. If that’s the case, before you start generating solutions, consider dedicating this part of the meeting to framing and ending it with a succinctly written problem statement.
Do you have an ample list of potential solutions? If the group understands the problem but hasn’t yet produced a set of potential solutions, that’s the next order of business. Concentrate on generating as many quality options as possible to build the alternative set.
Do you know the strengths and weaknesses of the various alternatives? Suppose you have already generated potential solutions. If so, this time will be best spent letting the group evaluate them. Free attendees from the obligation of reaching a final decision—for which they may not yet be ready—and let them focus exclusively on developing a list of pros and cons for the various alternatives.
Has the group already spent time debating various alternatives? If the answer is yes, use this part of the meeting to do the often difficult work of choosing. Make sure, of course, that the final choice is in writing.
Has a decision been made? Then focus on developing an implementation plan. If you’re able to leave the conversation with a comprehensive list of actions, assigned owners, and due dates, you can celebrate a remarkably profitable outcome.
The decisions we make are often complex and uncertain. Improving the decision-making process is critical to success, and yet too often we do not think about how we make decisions or how to confirm we are making good ones. To bring quality to our decisions, we need to understand what quality looks like and how to obtain it.
There is no universal best process or set of steps to follow
in making good decisions. However, any good decision process needs to embed the
idea of decision-quality as the measurable destination.
Decisions do not come ready to be made; you must shape them and declare the decision that must be made. All decisions have one thing in common: the best choice creates the best possibility of getting what you truly want. To find that best choice, you need decision-quality, and you must recognize it as the destination. You cannot achieve decision-quality if you are unable to visualize or describe it, nor can you say you have accomplished it if you cannot recognize it when it is achieved.
What makes a Good Decision?
The six requirements for a good decision are: (1) an
appropriate frame, (2) creative alternatives, (3) relevant and reliable
information, (4) clear values and trade-offs, (5) sound reasoning, and (6)
commitment to action. To judge the quality of any decision before you act, each
requirement must be met and addressed with quality. I like representing it as a
chain, because a decision is no better than the weakest link.
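The chain metaphor has a concrete consequence: overall decision quality is bounded by the lowest-scoring requirement, not the average. A minimal sketch of that idea; the scores are hypothetical:

```python
# Decision quality as a chain: the decision is no better than its weakest link.
# Scores (0-100) for each of the six requirements are hypothetical examples.
requirement_scores = {
    "appropriate frame": 90,
    "creative alternatives": 55,   # the weak link in this example
    "relevant, reliable information": 80,
    "clear values and trade-offs": 75,
    "sound reasoning": 85,
    "commitment to action": 70,
}

# Overall quality is the minimum across links, not the mean.
decision_quality = min(requirement_scores.values())
weakest_link = min(requirement_scores, key=requirement_scores.get)
print(f"Decision quality: {decision_quality} (weakest link: {weakest_link})")
```

Scoring this way also points directly at where to spend effort: improving any link other than the weakest one does not raise the overall quality.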
The frame specifies the problem or opportunity you are
tackling, asking what is to be decided. It has three parts: purpose in making the decision; scope of what
will be included and left out; and your perspective including your point of
view, how you want to approach the decision, what conversations will be needed,
and with whom. Agreement on framing is essential, especially when more than one
party is involved in decision making. What is important is to find the frame
that is most appropriate for the situation. If you get the frame wrong, you will
be solving the wrong problem or not dealing with the opportunity in the correct way.
The next three links are: alternatives – defining what you
can do; information – capturing what you know and believe (but cannot control),
and values – representing what you want and hope to achieve. These are the
basis of the decision and are combined using sound reasoning, which guides you
to the best choice (the alternative that gets you the most of what you want in light of what you know). With sound reasoning, you reach clarity of intention and are ready for the final element: commitment to action.
Asking “What is the decision I should be making?” is not a simple question, and deciding on which decision to focus is particularly challenging. It must be asked, however, because you have to know what decision you are making. The answer defines the range within which you have creative and compelling alternatives; it defines the constraints; it defines what is possible. Many organizations fail to create a rich set of alternatives and simply debate whether to accept or reject a single proposal. The problem with this approach is that people frequently latch on to ideas that are easily accessible and familiar.
Exploring alternatives is a combination of analysis, rigor, technology and judgment. Analysis deals with the past and present; additional judgment is required to anticipate future consequences. What we know about the future is uncertain and therefore needs to be described with possibilities and probabilities. Questions like “What might happen?” and “How likely is it to happen?” are difficult and often compound. To produce reliable judgments about future outcomes and probabilities, you must gather facts, study trends and interview experts while avoiding distortions from biases and decision traps. When one alternative provides everything desired, the choice among alternatives is not difficult; trade-offs must be made when alternatives do not provide everything desired. You must then decide how much of one value you are willing to give up to receive more of another.
Commitment to action is reached by involving the right
people in the decision efforts. The right people must include individuals who
have the authority and resources to commit to the decision and to make it stick
(the decision makers) and those who will be asked to execute the decided-upon
actions (the implementers). Decision makers are frequently not the implementers, and much of a decision’s value can be lost in the handoff. It is important to always consider the resource requirements and the challenges facing those who must implement the decision.
These six requirements of decision-quality can be used to
judge the quality of the decision at the time it is made. There is no need to
wait six months or six years to assess its outcome before declaring the
decision’s quality. By meeting the six requirements you know at the time of the
decision you made a high-quality choice. You cannot simply say: “I did all the
right steps.” You have got to be able to judge the decision itself, not just
how you got to that decision. When you ask, “How good is this decision if we
make it now?” the answer must be a very big part of your process. The piece
missing in the process just may be in the material and the research and that is
a piece that must go right.
Decision-quality is all about reducing comfort zone bias – when people do what they know how to do, rather than what is needed to make a strong, high-quality decision. You overcome the comfort zone bias by figuring out where there are gaps. Let us say the gap is with alternatives. Your process then becomes primarily a creative process to generate alternatives instead of gathering a great deal more data. Maybe we are awash in a sea of information, but we just have not done the reasoning and modelling and understanding of the consequences. This becomes more of an analytical effort. The specific gaps define where you should put your attention to improve the quality of the decision.
Leadership needs to have clearly defined decision rights and understand that its role is to assemble the right people to make quality decisions. Once you know how to recognize decision quality, you need an effective and efficient process to get there, and that process involves many things, including structured interactions between the decision maker and decision staff, remembering that productive discussions result when multiple parties are involved in the decision process and differences in judgment are present.
The most common decision process tends to be an advocacy
decision process – you are asking somebody to sell you an answer. Once you are
in advocacy mode, you are no longer in a decision-quality mode and you cannot
get the best choice out of an advocacy decision process. Advocacy suppresses
alternatives. Advocacy invites confirmation bias: selective attention to what supports your position. Once in advocacy mode, you are really in sales mode, and the decision becomes a competition between people.
When you want quality in a decision, you want the alternatives to compete, not the people. From the decision board’s perspective, you want multiple alternatives in front of you so you can figure out which one beats the others in terms of the full consequences in risk, uncertainty and return. Among the alternatives, one will show up better. If you can make this happen, then it is not the advocate selling a proposal; it is the board examining which alternative delivers the most value for the investment.
The role outcomes play in the measuring of decision quality
Always think of decisions and outcomes as separate because
when you make decisions in an uncertain world, you cannot fully control the
outcomes. When looking back from an outcome to a decision, the only thing you
can really tell is if you had a good outcome or a bad outcome. Hindsight bias
is strong, and once triggered, it is hard to put yourself back into
understanding what decisions should have been made with what you knew, or could
have known, at the time.
In understanding how we use outcomes in terms of evaluating
decisions, you need to understand the importance of documenting the decision
and the decision quality at the time of the decision. Ask yourself, if you were
going to look back two years from now, what about this decision file answers
the questions: “Did we make a decision that was good?” and “What can we learn
about the things about which we had some questions?” This kind of documentation
is different from what people usually do. What is usually documented is the
approval and the working process. There is usually no documentation answering
the question: “If we are going to look back in the future, what would we need
to know to be able to learn about making better decisions?”
The reason you want to look back is because that is the way
you learn and improve the whole decision process. It is not for blaming; in the
end, what you are trying to show in documentation is: “We made the best
decision we could then. Here is what we thought about the uncertainties. Here
is what we thought were the driving factors.” It’s about having a learning culture.
When decision makers and individuals understand the
importance of reaching quality in each of the six requirements, they feel
meeting those requirements is a decision-making right and should be demanded as
part of the decision process. To be in a position to make a good decision, they know they deserve a good frame and significantly different alternatives; without these, they cannot reach a powerful, correct conclusion. From a decision-maker’s perspective, these are indeed needs and rights to be thought about. From a decision-support perspective, these needs and rights are required to position the decision maker to make a good decision.
Building decision-quality enables measurable value creation, and its framework can be learned, implemented and measured. Decision-quality helps you navigate the complexity and uncertainty of significant, strategic choices and avoid major biases and decision traps.
A Goal is generally described as an effort directed towards an end. In project management, for example, the term goal refers to three different target values: performance, time and resources. To be more specific, the project goal specifies the desired outcome (performance), the specific end date (time) and the assigned amount of resources (resources). A goal answers “what” the main aim of the project is.
An Objective defines the tangible and measurable results the team must deliver to support the agreed goal within the planned end time and other resource restrictions. It answers “how” something is to be done.
I think many of us are familiar with the concept of SMART (Specific, Measurable, Achievable, Relevant, Time-bound) goals. Lately I’ve been using FAST (Frequently discussed, Ambitious, Specific, Transparent) objectives.
Transparency provides the connective tissue and must be a primary aspect of any quality culture. Transparency creates a free flow of information within an organization and between the organization and its many stakeholders. This flow of information is the central nervous system of an organization, and the organization’s effectiveness depends on it. Transparency influences the capacity to solve problems, innovate, meet challenges and, as shown above, meet goals.
Information flow simply means that critical information gets to the right person at the right time and for the right reason. By making our goals transparent, we can start that process and make a difference in our organizations.
Risk assessments have a subjective core, frequently including assumptions about the nature of the hazard, possible exposure pathways, and judgments about the likelihood that alternative risk scenarios might occur. Gaps in the data and information about hazards, uncertainty about the most likely projection of risk, and incomplete understanding of possible scenarios all contribute to uncertainty in risk assessment and risk management. You can go even further and say that risk is socially constructed: risk is at once both objectively verifiable and what we perceive or feel it to be. Then again, the same can be said of most of science.
Risk is a future chance of loss given exposure to a hazard. Risk estimates, or qualitative ratings of risk, are necessarily projections of future consequences. Thus, the true probability of the risk event and its consequences cannot be known in advance, which creates a need for subjective judgments to fill in information about an uncertain future. In this way, risk management is rightly seen as a form of decision analysis: a way of making decisions under uncertainty.
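In this decision-analysis framing, a risk estimate can be sketched as an expected future loss over scenarios, with subjective probabilities standing in for the unknowns. A minimal sketch; the scenario names, probabilities and loss figures are all hypothetical:

```python
# Sketch: risk as expected future loss, combining subjective scenario
# probabilities with estimated consequences (all numbers hypothetical).
scenarios = {
    # scenario: (subjective probability, loss if it occurs)
    "minor deviation": (0.10, 5_000),
    "batch rejection": (0.02, 250_000),
    "recall":          (0.001, 2_000_000),
}

# Expected loss = sum of probability x consequence over the scenarios.
expected_loss = sum(p * loss for p, loss in scenarios.values())
print(f"Expected loss: ${expected_loss:,.0f}")
```

The point of the sketch is that the arithmetic is trivial; the quality of the result rests entirely on the subjective probability and consequence judgments that feed it.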
Everyone has a mental picture of risk, but the formal mathematics of risk analysis are inaccessible to most, relying on probability theory with its two major schools of thought: the frequency school and the subjective probability school. The frequency school says probability is based on a count of the number of successes divided by the total number of trials. Uncertainty that is readily characterized using frequentist probability methods is “aleatory” – due to randomness (or random sampling in practice). Frequentist methods give an estimate of “measured” uncertainty; however, they are arguably trapped in the past because they do not lend themselves easily to predicting future successes.
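The frequency-school definition can be sketched in a few lines of Python. The failure counts here are hypothetical, and the confidence interval uses a simple normal approximation:

```python
import math

# Frequentist probability: observed successes (here, batch failures)
# divided by the total number of trials. The counts are hypothetical.
failures = 3
batches = 200

p_hat = failures / batches  # point estimate: 0.015

# Normal-approximation 95% confidence interval, floored at zero.
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / batches)
ci = (max(0.0, p_hat - half_width), p_hat + half_width)

# The estimate summarizes the observed past; by itself it says nothing
# about the next batch if the process changes.
print(f"p = {p_hat:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```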
In risk management we tend to measure uncertainty with a combination of frequentist and subjectivist probability distributions. For example, a manufacturing process risk assessment might begin with classical statistical control data and analyses, but projecting the risks from a process change might call for expert judgments of, say, possible failure modes and the probability that failures might occur during a defined period. The risk assessors bring prior expert knowledge and, if we are lucky, some prior data, and start to focus the target of the risk decision using subjective judgments of probabilities.
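One common way to formalize this blend of prior expert knowledge and classical data is a conjugate Beta-Binomial update; this is one illustrative technique, not the only way to combine the two schools, and every number below is hypothetical:

```python
# Sketch: combining prior expert judgment with observed data via a
# Beta-Binomial model (all numbers hypothetical).
# Expert belief: failure rate around 2%, expressed as a Beta(2, 98) prior.
alpha_prior, beta_prior = 2.0, 98.0

# New process data: 3 failures observed in 200 batches.
failures, batches = 3, 200

# Conjugate update: posterior is Beta(alpha + failures, beta + non-failures).
alpha_post = alpha_prior + failures
beta_post = beta_prior + (batches - failures)

# The posterior mean blends the expert judgment with the frequency evidence.
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior mean failure rate: {posterior_mean:.4f}")
```

With little data the prior dominates; as batches accumulate, the estimate converges toward the frequentist count, which is exactly the subjective-to-measured transition described above.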
Some have argued that a failure to formally control subjectivity in probability judgments is the failure of risk management; this argument was made during WCQI, for example. But subjectivity cannot be eliminated, nor is it an inherent limitation. Rather, the “problem with subjectivity” more precisely concerns two elements:
1. A failure to recognize where and when subjectivity enters and might create problems in risk assessment and risk-based decision making; and
2. A failure to implement controls on subjectivity where it is known to occur.
Because risk is about the chance of adverse outcomes of events that are yet to occur,
subjective judgments of one form or another will always be required in both
risk assessment and risk management decision-making.
We can control subjectivity in risk management by:
Raising awareness of where and when subjective judgments of probability occur in risk assessment and risk management
Identifying heuristics and biases where they occur
Improving the understanding of probability among the team and individual experts
This is more a collection of topics I am currently exploring. Please add additional ones and/or resources in the comments.
Increasing collaborative modes of working, specifically more:
- Matrix structures (Cross et al. 2013, 2016; Cross and Gray 2013)
- (Distributed) teamwork (Cross et al. 2015)
- (Multi-)project work (Zika-Viktorsson et al. 2006) and multiple team membership (O’Leary et al. 2011)
- Interruptions, which are ‘normal’ or even a necessary part of knowledge workers’ workdays (Wajcman and Rose 2011)
- Collaboration, which is seen as an end (Breu et al. 2005; Dewar et al. 2009; Gardner 2017; Randle 2017)
Collaborative work is highly demanding (Barley et al. 2011; Dewar et al. 2009; Eppler and Mengis 2004):
- Perils of multitasking (Atchley 2010; Ophir et al. 2009; Turkle 2015)
- Too many structurally unproductive and inefficient teams (Duhigg 2016)
- Lack of accountability for meeting and conference call time (Fried 2016)
- Overall, a lack of structural protection of employees’ productive time (Fried 2016)
Impacts of collaborative technology:
- Growing share of social technologies in the workplace (Bughin et al. 2017)
- ‘Always on’ mentality, cycle of responsiveness (Perlow 2012)
- Platforms are designed to prime and nudge users to spend more time using them (Stewart 2017)
- Unclear organizational expectations for how to use collaborative technology, and limited individual knowledge (Griffith 2014; Maruping and Magni 2015)
- Technology exacerbates organizational issues (Mankins 2017)
- Inability to ‘turn off’ (Perlow 2012)
- Technology creates more complexity than productivity gains (Stephens et al. 2017)
- Increasingly complex media repertoires: highly differentiated, vanishing common denominator (Greene 2017; Mankins 2017)
Social technology specific:
- Increased visibility (Treem and Leonardi 2013) and thus the ability to monitor behaviour
- Impression management and frustration (Farzan et al. 2008)
- Overall, overload scenarios and fragmentation of work (Cross et al. 2015; Wajcman and Rose 2011)
Increasing ratio of collaborative activities for managers (Mankins and Garton 2017; Mintzberg 1990) and employees (CEB 2013; Cross and Gray 2013)
Workdays are primarily characterized by communication and collaboration.
- Managers at the intersections of matrix structures get overloaded (Feintzeig 2016; Mankins and Garton 2017)
- Limited knowledge of how to shape collaboration at the managerial level (Cross and Gray 2013; Maruping and Magni 2015)
- Experts and structurally exposed individuals (e.g. boundary spanners) easily get overburdened with requests (Cross et al. 2016; Cross and Gray 2013)
- Behavioral traits (‘givers’) may push employees close to burn-out (Grant 2013; Grant and Rebele 2017)
- Diminishing ‘perceived control’ over one’s own schedule (Cross and Gray 2013)
Overall, managers and employees do not have enough uninterrupted time (Cross et al. 2016; Mankins and Garton 2017)
Cross, R., Ernst, C., Assimakopoulos, D., and Ranta, D. 2015. “Investing in boundary-spanning collaboration to drive efficiency and innovation,” Organizational Dynamics (44:3), pp. 204–216.
Cross, R., and Gray, P. 2013. “Where Has the Time Gone? Addressing Collaboration Overload in a Networked Economy,” California Management Review (56:1), pp. 1–17.
Cross, R., Kase, R., Kilduff, M., and King, Z. 2013. “Bridging the gap between research and practice in organizational network analysis: A conversation between Rob Cross and Martin Kilduff,” Human Resource Management (52:4), pp. 627–644.
Cross, R., Rebele, R., and Grant, A. 2016. “Collaborative Overload,” Harvard Business Review (94:1), pp. 74–79.
Dewar, C., Keller, S., Lavoie, J., and Weiss, L. M. 2009. “How do I drive effective collaboration to deliver real business impact?,” McKinsey & Company.
Duhigg, C. 2016. Smarter, Faster, Better – The Secrets of Being Productive in Life and Business, New York, USA: Penguin Random House.
Eppler, M. J., and Mengis, J. 2004. “The Concept of Information Overload: A Review of Literature from Organization Science, Accounting, Marketing, MIS, and Related Disciplines,” The Information Society (20:5), pp. 325–344.
Farzan, R., DiMicco, J. M., Millen, D. R., Brownholtz, B., Geyer, W., and Dugan, C. 2008. “Results from Deploying a Participation Incentive Mechanism within the Enterprise,” in Proceedings of the 26th SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy.
Treem, J. W., and Leonardi, P. M. 2013. “Social Media Use in Organizations: Exploring the Affordances of Visibility, Editability, Persistence, and Association,” Annals of the International Communication Association (36:1), pp. 143–189.
Turkle, S. 2015. Reclaiming Conversation: The Power of Talk in a Digital Age, New York, USA: Penguin Press.