John Lathrop, Executive Principal Analyst
Framing This Blog:
I’d like to open a blog-based dialog on some of the key challenges of terrorism risk management. These challenges are found in other areas of risk assessment and risk management, but the context of terrorism risk management adds drama and relevance. My hope is that comments coming back to us from this article can build a forum where we can all help each other think through how best to cope with these challenges. I’ve developed positions on all of these challenges, but I don’t want to contaminate your thinking by presenting them here. Space constraints limit me to just two challenges in this edition, but I’ll return in future months to discuss the responses to this month’s post and then present two further challenges. The challenges presented here all have to do with terrorism risk assessment (TRA) based on probabilistic risk assessment, and how TRAs relate to terrorism risk management; other challenges we’ll consider later will range more broadly over other ways to assess and manage terrorism risk.
I have experience with six DHS and DOE WMD terrorism risk assessments (TRAs): I’ve managed or been part of three third-party reviews of TRAs, built one, and advised on two others. I’m currently co-writing (with a member of the Intelligence Community) a comparison of several nuclear TRAs for DOE. All of the challenges I present here arise out of a “composite character” of those six TRAs and the other TRAs we’re comparing – none is tied to any particular TRA. I’ve sanitized and drastically simplified the challenges. “WMD type” here includes biological, chemical, nuclear, and radiological.
Challenge 1: What is the “risk” we are assessing?
Example: One WMD TRA, let’s call it TRA “A,” asked intelligence community subject matter experts (IC SMEs) which terrorist (“Red”) groups could contribute to the risk of that WMD type over the time horizon of the study, offering them broad categories of groups to choose from. Those SMEs named a known set of Red groups, determining that any other groups would not contribute significantly to the risk. Then TRA “A” proceeded to assess the risk arising from those known groups.
Another TRA on the same WMD type, TRA “B,” asked another set of SMEs to characterize a spectrum of Red groups that could contribute to the risk, including ones for which they have no intelligence. Those SMEs characterized a broad spectrum of Red groups, not tied to currently known groups. As a result, TRA “B” based its assessments upon a more broadly based threat, which could result in a higher assessment of overall risk, and broader probability distributions over targets and versions of that WMD.
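To make the contrast concrete, here is a minimal, purely illustrative Python sketch. All group names, probabilities, and the consequence figure are hypothetical, invented for this example – none come from any actual TRA. It shows only the mechanical effect described above: broadening the assumed spectrum of Red groups, TRA “B” style, raises the assessed expected consequence.

```python
# Hypothetical, illustrative numbers only -- not drawn from any real TRA.

# TRA "A" style: risk arises only from a small set of known Red groups.
# Values are notional probabilities of an attempted attack per decade.
known_groups = {"known_group_1": 0.02, "known_group_2": 0.01}

CONSEQUENCE = 1e4  # notional expected fatalities given a successful attempt

def expected_fatalities(groups):
    """Expected fatalities per decade, assuming independent group attempts."""
    p_no_attack = 1.0
    for p_attempt in groups.values():
        p_no_attack *= (1.0 - p_attempt)
    return (1.0 - p_no_attack) * CONSEQUENCE

# TRA "B" style: add a spectrum of hypothesized, currently unknown groups,
# each with a small per-group probability.
broad_groups = dict(known_groups)
for i in range(5):
    broad_groups[f"hypothesized_group_{i}"] = 0.005

risk_a = expected_fatalities(known_groups)   # narrow threat definition
risk_b = expected_fatalities(broad_groups)   # broad threat definition
assert risk_b > risk_a  # broader spectrum -> higher assessed risk
```

The point is not that TRA “B” is right; it is that the two definitions of “risk” mechanically produce different numbers even when the SMEs agree on everything they both assessed.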
This presents us with a long list of questions and concepts:
- Did the two sets of SMEs in fact have two different opinions on the sources of risk, or did the two TRAs define “risk” differently?
- Should Blue (we are Blue) risk management be based on the opinions of IC SMEs based on current intelligence (as with TRA A)?
- Or should it be based on what Red groups “could be out there” (as with TRA B)?
- Does the TRA A approach define the risk too narrowly, or does it make the best use of Blue intelligence?
- What is “Best Use of Available Data” (“BUAD”)? Is it “sticking with” IC SME TRA A judgments, or is part of that “Available Data” the fact that the two sets of SMEs disagreed, and so BUAD is to account for the broader-scope risk elicited from the SMEs of TRA B?
- Is it OK for Blue to “trust” the TRA A SMEs to “know how much they do not know”? You could say that the TRA A SMEs know the information and capabilities “out there,” and so can indeed know that unknown groups can’t contribute significantly to the risk.
- On the other hand, you could ask “But … what about Black Swans?”
- Or you could assert that, regardless of our high or low confidence that the TRA A SMEs know how much they do not know, on a first-principles basis Blue should account for unknown groups.
- Is that a misallocation of Blue defensive resources (toward possibly imaginary Red groups) and a failure to make the best use of available information, or is it sound risk management in an uncertain world?
- Given all that, what’s the best basis for advising Blue?
Challenge 2: What do we do when TRA “A” and TRA “B” differ in their assessments of the same risk by N orders of magnitude?
Example: I can name two different pairs of TRAs, one pair for each of two different WMD types, where the TRAs in each pair did in fact differ in their results (e.g., expected fatalities per decade) by N orders of magnitude. (Only one of these four TRAs is among the six I listed earlier.) In fact, for one of the WMD types the TRAs differed by twice as many orders of magnitude, 2*N, as the pair of TRAs for the other WMD type.
The conundrum: The decision makers were provided descriptions of the analyses, assumptions, etc., for each TRA. No typical error-bar considerations or differences in analysis scope could explain the discrepancy. We have no basis for determining that one TRA is “wrong” and the other “right.” The differences could be attributed to differing assumptions and/or SME judgments; in fact, since assumptions are typically checked with SMEs, we can regard assumptions and SME judgments as a single source of the differences. So …:
- What does this tell us about the ability of TRAs to advise Blue risk management?
- How should Blue manage risk given this level of advice? We’re looking for better answers than simply “use more SMEs.” We’re also looking for better answers than “combine the models,” since that idea seems suspect when the models differ by N orders of magnitude.
- Is TRA as discussed here, i.e., based on probabilistic risk assessment, a fundamentally flawed approach when applied in this context, or is it fundamentally sound but prone to variation?
- If the former, what should be done to replace it?
- If the latter, how can we shore it up?
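As a small illustration of why naive model combination looks suspect at this scale, consider the two standard pooling rules applied to a pair of point estimates. The numbers below are hypothetical, chosen only to show the mechanics of a four-order-of-magnitude disagreement:

```python
import math

# Two hypothetical TRA point estimates of the same risk metric
# (expected fatalities per decade), differing by 4 orders of magnitude.
tra_a = 1e-1
tra_b = 1e3

# Linear (arithmetic) pooling: the result is dominated entirely by
# the larger estimate, effectively discarding TRA A.
linear_pool = 0.5 * (tra_a + tra_b)   # ~5.0e2

# Logarithmic (geometric) pooling: splits the difference in log space,
# landing two orders of magnitude below TRA B.
log_pool = math.sqrt(tra_a * tra_b)   # ~1.0e1
```

Neither pooled number has any clear claim to being the risk: each is an artifact of the pooling rule, not of the underlying evidence. That is one way of sharpening the question of what “combine the models” could even mean here.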