
In this second part of our discussion, we will talk about how applying a structured decision process can help to temper, if not overcome, some of the biases we pointed out in the previous post.

Let’s start with a brief description of a structured approach to decision making. For this example we will consider the Analytic Hierarchy Process (AHP) developed by Dr. Thomas Saaty. The approach has been made accessible by Decision Lens, Inc., which has developed a software solution that enables organizations to apply the process collaboratively to complex multi-criteria, multi-stakeholder decisions in which many possible courses of action could be taken.

The Analytic Hierarchy Process (AHP) is a theory of measurement that uses the principles of comparative judgment: first, decision criteria are compared with respect to some property, and then scales of priority measurement are derived from those judgments. Because the meaning of the resulting measurements depends on the judgments themselves, the AHP can be used to derive measurements for nearly anything. This allows the creation of a measurement system in which the criteria drive the derived scales and establish a prioritization framework.
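To make this concrete, here is a minimal sketch in Python of AHP’s core mechanic: deriving a priority scale from a matrix of pairwise judgments on Saaty’s 1–9 scale, along with Saaty’s consistency ratio as a sanity check on the judgments. The matrix values are invented for illustration, and this is in no way the Decision Lens implementation.

    import numpy as np

    def ahp_priorities(A):
        """Return (priority vector, consistency ratio) for a reciprocal
        pairwise-comparison matrix A, where A[i][j] says how much more
        important criterion i is than criterion j on Saaty's 1-9 scale."""
        n = A.shape[0]
        # Geometric-mean-of-rows approximation of the principal eigenvector.
        w = np.prod(A, axis=1) ** (1.0 / n)
        w /= w.sum()
        # Saaty's consistency ratio: how far the judgments stray from
        # perfect transitivity (CR < 0.10 is the usual threshold).
        lam_max = float(np.mean((A @ w) / w))
        ci = (lam_max - n) / (n - 1)
        random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}  # small-n values
        cr = ci / random_index[n]
        return w, cr

    # Hypothetical judgments: Financial Impact vs. Customer Preference
    # vs. Technical Feasibility.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 3.0],
                  [1/5, 1/3, 1.0]])
    weights, cr = ahp_priorities(A)
    print(weights.round(3), round(cr, 3))  # ~[0.637 0.258 0.105], CR ~0.03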

The application of AHP follows a structured, though sometimes iterative, format. Before we look at our example, let’s briefly review the essential aspects of the AHP process; a short sketch of how the steps fit together follows the list.

  1. Develop a hierarchy, or tree, of criteria in clusters: from high-level categories at the top, through more specific sub-criteria, down to the ratings and measures that will be used to differentiate the strategic value of the options.
  2. Compare the criteria to each other using a pairwise approach to establish their relative priorities for use in assessing the options.
  3. Rate the options against quantitative or qualitative scales derived to describe each criterion, measuring how well the options reflect the priorities expressed within them.
  4. Optimize the allocation of resources among the options by way of cost/benefit analysis, and perform sensitivity analysis to determine the robustness and drivers of the decision outcomes.
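Here is a small hypothetical sketch of how steps 2 through 4 fit together once priorities exist: the criteria weights combine with option ratings to score each option, and a simple sensitivity pass shows how much a single weight would have to shift before the ranking flips. All weights and ratings are invented for illustration.

    # Step 2 output: criteria weights (invented for illustration).
    criteria_weights = {"financial": 0.40, "customer": 0.35, "feasibility": 0.25}

    # Step 3: how well each option satisfies each criterion (0-1 scales).
    ratings = {
        "WonderWidget": {"financial": 0.55, "customer": 0.85, "feasibility": 0.60},
        "GreatGadget":  {"financial": 0.70, "customer": 0.45, "feasibility": 0.85},
    }

    def score(option, weights):
        return sum(weights[c] * ratings[option][c] for c in weights)

    for option in ratings:
        print(option, round(score(option, criteria_weights), 3))

    # Step 4 (sensitivity): how much more emphasis would "financial" need
    # before GreatGadget overtakes WonderWidget?
    for bump in (0.0, 0.1, 0.2):
        w = dict(criteria_weights)
        w["financial"] += bump
        total = sum(w.values())
        w = {c: v / total for c, v in w.items()}  # renormalize to sum to 1
        winner = max(ratings, key=lambda o: score(o, w))
        print(f"financial weight +{bump:.1f}: winner = {winner}")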

So how can this impact many of the biases laid out in our previous discussion?  Let’s see.

We will look at an example of how some of the anchoring introduced by our fictional Committee Chair is countered by items 1 and 2 above. Similarly, these techniques can help with many of the other issues highlighted in the way the group interacts in our example.

Committee Chair:  So we have a decision to make.  Which of these two product options are we going to pursue?  This is a critical strategic decision for us, and key to our ability to hit next year’s numbers.

Ideally, the group of stakeholders in the decision would take a collaborative approach to defining a decision goal and establishing the criteria that best enable achieving it. The weights of these criteria can then be derived before anyone heads down the narrow path presented by the Committee Chair. In this case, the group would be asked what key factors shape their decision. They may ultimately conclude that “next year’s numbers” is the primary consideration, but without first defining the goal of the decision and articulating the issues that should frame it, they can fall subject to this anchoring. Among the important criteria may be the short-term financial impact, but perhaps also the long-term impact. The question is which is more important, and by how much? By surfacing these counterbalancing criteria, and having the group weigh in on which should carry more of the decision, some of the anchoring from a single influential member can be buffered.

The group in our conversation goes on to talk about a variety of topics in a somewhat random, circular, and unstructured way: what the customers prefer, how easy the product is to manufacture, its overall technical feasibility, and so on. Considering these other criteria can further balance the thinking of the group and create a more holistic view of the decision. Once the stakeholders determine which of these criteria is most important, the options can be put to the test for how well they meet those criteria and the objectives embedded within them.

If we look at the nature of the conversation, the key stakeholders are speaking of a decision structure that might look something like the following:

This all becomes very interesting in the process of making the comparative judgments. It becomes virtually impossible for the group to get into single-criterion arguments when posed with a decision structure like this. By virtue of the comparison exercise, you can’t say something doesn’t matter; you can only say how much more or less it matters than something else. There are often cases like the following, where an outlier may need to defend their position in an attempt to influence the group, or may have to concede that the rest of the group holds a different point of view. The question is: is this outlier Henry Fonda in “12 Angry Men” or Jim Carrey in “Yes Man”? In some cases, information comes to light in these discussions that either substantiates or negates the position of the person opposing the group; but one way or another, this kind of conversation moves the group toward greater understanding and alignment by making all of these preferences and biases more explicit and transparent. Making this quantifiable using AHP may look something like this when engaging on the question, “With regard to this decision, which of each pair is more important, and by how much?”

Provided the decision makers have the ability to stay independent in their assessments and not play “follow the leader” (which can be accomplished by sourcing these judgments anonymously), we can see that despite the Committee Chair’s very strong preference to have the financially focused considerations trump all other factors, the opinions and preferences of the entire group give those considerations only a slightly larger weight, on average, than Customer Preference, for instance. The arguments of others may run along the lines of “if we give the customer a preferred product, they’ll buy it, and we’ll meet our financial objectives. Anyway, our financial forecasts tend to be less reliable than our customer research.” Often the person with the extreme judgment will consider the positions of their colleagues and scale back the strength of their preference. In other cases, they may hold their ground but have to accept that theirs is not the popular opinion. In any case, voices are heard and fair process is at work. At the very least, this approach assures that participants have to hear perspectives that may interrupt their biases.
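A common way to combine independent judgments in group AHP is a geometric mean of the individual pairwise judgments. The toy illustration below (values invented) shows how a single extreme judgment is dampened rather than allowed to dominate:

    from math import prod

    # "Financial Impact vs. Customer Preference" on Saaty's 1-9 scale.
    # The Chair judges 9 (extremely more important); the other four
    # stakeholders judge it roughly equal, or mildly one way or the other.
    judgments = [9, 1, 1, 2, 1/2]

    group = prod(judgments) ** (1 / len(judgments))  # geometric mean
    print(round(group, 2))  # ~1.55: only a slight lean toward Financial

Had these judgments been averaged arithmetically instead, the Chair’s 9 would have pulled the group value to 2.7; the geometric mean is what keeps one loud voice from dominating.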

Let’s look at the next level down in the decision. When talking about financial impact, which is more important: the short term or the long term?

Here again, we can see that the diversity of opinion in the group allows for somewhat greater consideration of the short term, but the more extreme preference of a single stakeholder (which may be completely justified given their incentives or political pressures) is again dampened by those who feel the long term should be given equal, or even preferential, consideration. Collectively the group leans short term, but it creates a framework that brings a considerably more balanced perspective to the decision.

Using a similar approach the decision makers, or other subject matter experts that they appoint, can be tasked with evaluating how well each option meets each of these criteria for selection.  The same principles of bringing multiple perspectives together and having a structured conversation about outliers can be used to temper the effect of biases and have a counterbalancing effect on more extreme positions while allowing for all of these perspectives to be considered.

This ultimately has the effect of creating a tangible view of the mental model that a group may be bringing to a decision. These differences in perspective and strengths of preference can otherwise remain invisible in discussion, surfacing only in disagreement, power-driven arguments, and advocacy-based debate.

The basic principles outlined by James Surowiecki in “The Wisdom of Crowds” can be made real using such an approach. The use of AHP as a decision framework allows for:

  • Independence: Each member can provide their priorities independently.
  • Diversity of Opinion: The AHP actively promotes the expression of diverse opinions.
  • Local Knowledge: Stakeholders with different subject matter expertise are able to evaluate the information through their own lens of local knowledge.
  • Aggregation: The mathematical underpinnings of AHP create a unique aggregating mechanism in which the decision makers derive the measurement system and establish priorities, transforming implicit considerations into an explicit value model for their decision.

One possible approach to limiting the influence of biases, then, is to tackle them head on by bringing them to light with a degree of structure and process, rather than allowing them to run wild in an unstructured, circular, power-driven, advocacy-based debate.


I hope you like the title of this post. It tees things up pretty well for what follows in this first installment of a two-part series on cognitive biases. The title demonstrates the Bias Blind Spot (“I’m not biased, but THEY are”) and the famous “I knew it all along” thinking that accompanies the Hindsight Bias. I had an opportunity to speak on behalf of Decision Lens at the Cambridge Health Institute conference on Portfolio Management last month, and the portion of my talk that stimulated the most interest and conversation was the one on cognitive biases and how structured decision making may help overcome them. So, having had these conversations during and after the conference, and having given them some thought in the context of portfolio decisions and project or product choices, I thought I would share a typical project-selection discussion of the sort I have found myself in over the years, and then break it down to look at what might be going on, to help illustrate a few points. In the next post we’ll talk about the possible remedies of structured and collaborative decision making and their potential to positively influence the process. Enjoy.

Setting: A product or project Steering Committee meeting at a large company that, like many companies, derives about 60% of its strategic planning goals from new product development.

Committee Chair:  So we have a decision to make.  Which of these two product options are we going to pursue?  This is a critical strategic decision for us, and key to our ability to hit next year’s numbers.

VP R&D (Joe): I think there’s not much to decide, really; clearly WonderWidget is the superior option. Acme Consulting’s report said as much, and Maria, didn’t your team’s study find it to be preferred?

Director Market Research (Maria): Well, I’m not sure how much value I put in Acme’s assessment, but yes, on the whole we did find WonderWidget the better choice. However, some groups preferred GreatGadget. In fact, look at these numbers. When I broke them out by age group against our target, an ironclad case can be made for GreatGadget.

VP R&D (Joe): OK, but c’mon. We’ve tried the GreatGadget approach before, and it failed outright. As I recall, the WonderWidget prototype shown to those groups didn’t even include our latest and greatest improvements. Isn’t that true, Maria?

Director Market Research (Maria): Well, um…, I think…

VP R&D (Joe): I’m also not sure how we recruited the participants in that study, but they certainly didn’t seem representative of the typical savvy of most of our users. Wouldn’t you agree, Maria?

Director Market Research (Maria): I have some concerns about a couple of aspects of the study design that may have contributed to what you observed.

VP Operations (Andre): Joe, there has to be considerable value in GreatGadget, it is right in our wheelhouse, it’s basically repurposing known technology!

VP R&D (Joe): I wouldn’t say that… Are you saying that because of the common interface?

VP Operations (Andre): We would be able to have GreatGadget commercially ready in a fraction of the time and cost! Six months max, and very little incremental investment, given our existing capabilities.

Director Market Research (Maria): We’ve had some quality complaints about product produced on that platform, though I do think it has some merit…

VP R&D (Joe): It may be faster and cheaper, Andre, but Maria’s right: no one wants it. We knew when we launched it that it might take a long time to work out the bugs.

VP Operations (Andre): Maria, those are minor problems; we can overcome those from my shop. I’m 99% sure of it.

Director Market Research (Maria):  I didn’t say…

Committee Chair: OK, I’ve been listening very objectively. We have a track record of not always being very good at these decisions. While we’ve been in a bit of a drought, we’re due for a win. It sounds to me like we’re reaching a general consensus that we should pursue GreatGadget. So, how do we move forward?

==========

Now, let’s replay this conversation and take a little deeper look into what’s going on…

If you need to reference the biases discussed below, you can follow this link or those embedded in the discussion.

==========

Committee Chair:  So we have a decision to make.  Which of these two product options are we going to pursue?  This is a critical strategic decision for us, and key to our ability to hit next year’s numbers.

Comment >> Our Committee Chair leads out of the gate by triggering the Framing Effect, immediately limiting the options to two, then throws in a dash of the Focusing Effect and Hyperbolic Discounting to drive the group to view the decision through the filter of next year’s numbers.

VP R&D (Joe): I think there’s not much to decide, really; clearly WonderWidget is the superior option. Acme Consulting’s report said as much, and Maria, didn’t your team’s study find it to be preferred?

Comment >> Joe responds with what could be Positive Outcome, Wishful Thinking, and Optimism biases, implying a foregone conclusion about the route to a successful decision. He then evokes the Interloper Effect regarding the objectivity of the consultants, with no substantiation, and pursues the Confirmation Bias as he seeks corroboration for his position from Maria.

Director Market Research (Maria): Well, I’m not sure how much value I put in Acme’s assessment, but yes, on the whole we did find WonderWidget the better choice. However, some groups preferred GreatGadget. In fact, look at these numbers. When I broke them out by age group against our target, an ironclad case can be made for GreatGadget.

Comment >> Maria counters Joe’s Interloper Effect with a dose of Ingroup Bias, crediting her group for its superior research efforts. She then seems to have an episode of the Framing Effect herself as she begins to parse the data in ways that support an argument contrary to Joe’s position.

VP R&D (Joe): OK, but c’mon. We’ve tried the GreatGadget approach before, and it failed outright. As I recall, the WonderWidget prototype shown to those groups didn’t even include our latest and greatest improvements. Isn’t that true, Maria?

Comment >> Whoa! Maria hits Joe right in his Semmelweis Reflex as he rejects the new evidence that contradicts his position. He reels and strikes back with a combination of Subjective Validation, the Primacy Effect, and Negativity Bias: he doesn’t substantiate the outright failure, gives the initial failure more emphasis than the current research, and gives more weight to the negative aspects of the previous effort than to any positives. He then slips in an uppercut that tags Maria right in the Suggestibility Bias.

Director Market Research (Maria): Well, um…, I think…

Comment >> Maria is now suffering from some combination of False Memory and Cryptomnesia as she fights her confusion to sort facts from suggestions, and she is likely moving down the path to some form of Information Bias, trying to shore up data to make the case even when that data is unavailable or irrelevant to this influence-driven argument.

VP R&D (Joe): I’m also not sure how we recruited the participants in that study, but they certainly didn’t seem representative of the typical savvy of most of our users. Wouldn’t you agree, Maria?

Comment >> Joe doubles down on triggering Maria’s Suggestibility Bias with his Fundamental Attribution Error about the participants in the study.

Director Market Research (Maria): I have some concerns about a couple of aspects of the study design that may have contributed to what you observed.

Comment >> Maria hints that she may be concerned about a variety of biases to which studies can be prone, like the Hawthorne Effect, Herd Instinct, Expectation Bias, or Selection Biases.

VP Operations (Andre): Joe, there has to be considerable value in GreatGadget, it is right in our wheelhouse, it’s basically repurposing known technology!

Comment >> Andre is new to the party, and comes with a BYOB (Bring Your Own Bias) of the Status Quo Bias and the Mere Exposure Effect.

VP R&D (Joe): I wouldn’t say that… Are you saying that because of the common interface?

VP Operations (Andre): We would be able to have GreatGadget commercially ready in a fraction of the time and cost! Six months max, and very little incremental investment, given our existing capabilities.

Comment >> Andre goes on to fall victim to the Planning Fallacy, likely underestimating the time and cost required to undertake this similar but entirely new effort. He is likely in the throes of the Overconfidence Bias.

Director Market Research (Maria): We’ve had some quality complaints about product produced on that platform, though I do think it has some merit…

VP R&D (Joe): It may be faster and cheaper, Andre, but Maria’s right: no one wants it. We knew when we launched it that it might take a long time to work out the bugs.

Comment >> Joe makes a huge leap: by way of the Authority Bias he attributes expertise to Maria, and he exaggerates her position through a bit of Egocentric Bias and the Availability Cascade (a.k.a. “if you say it enough, it becomes true”). Then he pulls out the Hindsight Bias!

VP Operations (Andre): Maria, those are minor problems; we can overcome those from my shop. I’m 99% sure of it.

Comment >> ??? … the Overconfidence Bias run amok.

Director Market Research (Maria):  I didn’t say…

Comment >> Sorry, Maria; it looks like this train is leaving the station…

Committee Chair: OK, I’ve been listening very objectively. We have a track record of not always being very good at these decisions. While we’ve been in a bit of a drought, we’re due for a win. It sounds to me like we’re reaching a general consensus that we should pursue GreatGadget. So, how do we move forward?

Comment >> Lastly, this is a mix of the Bias Blind Spot (believing you are not biased), Outcome Bias (judging the result rather than the quality of the decision at the time it was made, i.e., given what was known then), and the Gambler’s Fallacy, which biases one to think that a series of losses must be leading to a win while the odds in the meantime remain exactly the same… all with a False Consensus Effect cherry on top.

====================

Sound familiar?  In my next post I’ll talk about how to use a structured approach to decision making to help neutralize some of these effects and increase the chances that the group makes the best decision possible with the information available.
