Sunday 22 Jul 2018 | 11:20 | SYDNEY

How not to free your mind

6 August 2010 15:23

Tim van Gelder is a philosopher at the University of Melbourne and Principal Consultant at Austhink Consulting, which specialises in critical thinking methods and software.

Supposedly, some years back, an Indian government program to control birth rates by distributing free condoms failed because the intended users didn't trust the condoms. Anything given out free, they reasoned, had to be worthless. 

Well, that may be apocryphal, but there's a valid point: be wary of anything given out for free. There may be a trap in it somewhere. Bear this in mind when hearing the apparently good news of the imminent availability, in open-source form, of a software product designed to help intelligence analysts use the Analysis of Competing Hypotheses method. In this case, even if you pay nothing, you'd be buying a lemon.

The problem isn't with the product itself, which may be Very Good Software (I don't know; I haven't tried it). The problem is with the method it is trying to support, the Analysis of Competing Hypotheses (ACH). ACH was developed some decades ago by Richard Heuer, the doyen of US analytical tradecraft theoreticians. He came up with it in an attempt to provide a more rigorous method for evaluating hypotheses. Certainly a worthy ambition.

ACH is now a standard part of training for analysts in the US intelligence community. However, anecdotal reports suggest that, overwhelmingly, analysts do not go on to actually use the method. (The evidence is only anecdotal because intelligence agencies don't release data on such matters. In fact, they probably don't even gather it.) Analysts don't use the method because they don't like using it, and in practice they don't find it helpful.

The underlying reason for this neglect is that ACH is fundamentally flawed. It has a superficial appeal, but in fact it deeply misconceives the nature of the relationship between evidence and hypotheses. Analysts find the method to be tedious at best, and thoroughly discombobulating at worst.  (For more, see this.)

Further, there is no evidence that ACH actually improves analytical judgements in practice. University of Melbourne philosopher Neil Thomason, who has a deep concern for the quality of intelligence analysis, investigated this and found a shocking lack of any decent evidence supporting such a widely advocated method.

Ironically, the widespread adoption of ACH as the official method for hypothesis evaluation is itself the result of a failure to consider alternative hypotheses (i.e., alternative possible answers to the question, 'What would be the best way to make hypothesis evaluation more rigorous and reliable?'). ACH has been falsely assumed to be (a) valid and (b) the only game in town. That is just the kind of 'jumping to conclusions' that ACH is supposed to help us avoid.

So when we hear about software for ACH failing to be adopted by the US intelligence community, we shouldn't assume that it is another case of tragic bungling by a massive bureaucracy. In this case, it might in fact be a lucky escape.

Photo by Flickr user alexsnaps, used under a Creative Commons license.