Connecting the Dots with Analyses of Competing Hypotheses

By IDI Staff

As practitioners of an analytic discipline, be it operations research, statistics, or systems engineering, many of us really relish the math part of the work.  After all, it's “hard”; it's not easy for the layperson to do or understand; it's (maybe) what makes us special.  Short shrift is too often given to the “soft” side of the profession:  communicating, persuading, eliciting, defining problems, and generating hypotheses, to name a few.  These activities are messy and ill-defined, not math-like!  Unfortunately for us math lovers, these squishy activities are central to actual problem solving.  Once we get to the “math” portion, the problem becomes tractable and “easy.”  Some folks call the techniques for handling these messy problems “Soft Operations Research” (Soft OR).

In this blog entry, we will discuss a technique for handling squishy problems called Analysis of Competing Hypotheses (ACH).

When attempting to identify the cause of an event, or to predict its outcome, most of us focus our analysis on a preconceived hypothesis.  We then defend that hypothesis by trying to refute, or simply ignoring, evidence to the contrary.  ACH is a technique for combating this tendency to favor one explanation from the start; it directs attention to evidence that is inconsistent, rather than consistent, with the various hypotheses.  The focus on "inconsistent" evidence matters because consistent evidence often applies to multiple hypotheses and therefore tends to reinforce the bias of trying to prove a preconceived idea.  Developed in the 1970s by Richards J. Heuer, Jr., the former head of the analytic methods unit in the CIA's Office of Political Analysis, ACH is easy to do and can be applied to many problems.

Morgan D. Jones, a former CIA analyst and author of The Thinker's Toolkit, suggests using the following steps:

1. Generate hypotheses.

2. Build a matrix (with rows for evidence and columns for hypotheses).

3. List "significant" evidence.

4. Test the evidence for "consistency" or "inconsistency."

5. Refine the matrix (redefine the hypotheses, add new "significant" evidence, and delete evidence that is consistent with all hypotheses).

6. Evaluate each hypothesis (check underlying assumptions, and delete hypotheses with significant amounts of inconsistent evidence).

7. Rank the remaining hypotheses (by the weakness of their inconsistent evidence, considering each one relative to the others).

8. Perform a sanity check.

The following example, drawn from popular discourse, illustrates how to use the ACH technique to explore the causes of climate change.  Please note:  we are not advocating a position here, just providing a provocative example for which we make no particular claim of legitimacy.


LEGEND:  C = Consistent, I = Inconsistent

[Image: ACH matrix for the climate-change example, with evidence rows, hypothesis columns, and column totals]

 *Column totals assign (-1) for each piece of inconsistent evidence and (0) for each piece of consistent evidence. The hypothesis with the highest total has the fewest inconsistencies.

**Heuer suggests applying "sensitivity analysis" to your conclusions to determine the degree to which those conclusions depend on specific pieces of evidence being valid/true.
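Heuer's sensitivity check can be mechanized: re-score the matrix with each piece of evidence withheld in turn, and flag any item whose removal changes which hypothesis leads. The sketch below is one possible way to do this, with a purely hypothetical matrix (E1 through E4 are placeholder labels, not real evidence).

```python
# Sensitivity analysis sketch for an ACH matrix: drop each evidence item
# in turn and check whether the leading hypothesis changes.
# The matrix below is hypothetical, for illustration only.
hypotheses = ["Human activity", "Natural cycle", "Solar variation"]

matrix = [
    ("E1", ["I", "C", "I"]),
    ("E2", ["C", "I", "I"]),
    ("E3", ["C", "C", "I"]),
    ("E4", ["C", "I", "C"]),
]

def totals(rows):
    """Column totals: -1 per inconsistency, 0 per consistency."""
    out = [0] * len(hypotheses)
    for _, ratings in rows:
        for j, rating in enumerate(ratings):
            if rating == "I":
                out[j] -= 1
    return out

def leaders(rows):
    """The hypothesis (or tied hypotheses) with the fewest inconsistencies."""
    scores = totals(rows)
    best = max(scores)
    return {h for h, s in zip(hypotheses, scores) if s == best}

baseline = leaders(matrix)

# Withhold each evidence item; the conclusion is sensitive to any item
# whose removal alters the set of leading hypotheses.
sensitive = [desc for i, (desc, _) in enumerate(matrix)
             if leaders(matrix[:i] + matrix[i + 1:]) != baseline]

print("Baseline leader(s):", baseline)
print("Conclusion hinges on:", sensitive)
```

If the list of pivotal items is long, the conclusion rests on a few fragile pieces of evidence and deserves extra scrutiny before the sanity check in step 8.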

One takeaway from this exercise is how some "conclusive" evidence may not be considered diagnostic in an ACH.  For example, consider the common claim that CO2 levels are currently higher than at any point in the past 650,000 years.  This piece of evidence is consistent with both the human-cause hypothesis and the natural-cycle hypothesis, because it says nothing about the cause of those increases.
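This non-diagnostic CO2 claim maps directly onto step 5 of the process: evidence rated consistent with every hypothesis cannot discriminate between them and is deleted from the refined matrix. A minimal sketch, with a hypothetical second evidence item and hypothetical ratings:

```python
# Step 5 ("refine the matrix") applied mechanically: drop any evidence
# row rated consistent with every hypothesis, since it cannot
# discriminate between them. Ratings are hypothetical placeholders.
matrix = [
    ("CO2 higher than at any point in 650,000 years", ["C", "C"]),
    ("Hypothetical discriminating evidence item",     ["C", "I"]),
]

# Keep only rows containing at least one inconsistency.
diagnostic = [(desc, ratings) for desc, ratings in matrix if "I" in ratings]

print([desc for desc, _ in diagnostic])
```

Only the second row survives the refinement; the CO2 claim, however striking, does not move the analysis.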


The Thinker's Toolkit by Morgan D. Jones

Psychology of Intelligence Analysis by Richards J. Heuer, Jr.

Some links:

Wikipedia’s article:

An article discussing Soft OR, which is a different toolkit than ACH but provides a rich trove of methods for dealing with messy problems: