Decision Support Systems for Just War Deliberations
Unless one is a pure pacifist, the general assumption is that some wars are justified. For centuries a body of literature called just war theory has developed concerning what distinguishes a just from an unjust war. The criteria fall under several headings, such as (1) just cause, (2) right intention, (3) last resort, (4) legal authority, (5) probability of success, and (6) proportionality — that the war not produce greater harms than those it seeks to prevent.
If these criteria, which conform to common sense and moral philosophy alike, were applied scrupulously, most wars would be avoided. The problem comes in practice: governments, if they consider these criteria at all, typically pay mere lip service to them. For example, to satisfy the just cause criterion, threats posed by foreign powers are greatly exaggerated; and the predicted costs of a war, both economic and in human life and suffering, are greatly minimized. Further, as happened in the case of the 2001 Afghan War and the 2003 Iraq War, intellectuals spend more time arguing tedious fine points about the precise technical meanings of just war criteria than applying them in a practical and sensible way.
Considering this, it struck me that there is a close similarity between the decision to make war and a medical decision to perform some drastic and risky procedure — say, a dangerous operation. In the latter case, because of the complexity of the choices involved and the fallibility of human decision-makers, expert systems and artificial intelligence have been used as decision support tools. In fact, I’ve developed one or two such systems myself.
Computerized medical decision-support systems offer several benefits. First, they can help a physician decide how to treat a particular patient. For example, based on such variables as the patient’s age, health, genes, and physiology, the system might supply the physician with the estimated probabilities of success for several treatment options (e.g., surgery, medication, naturalistic treatment, or perhaps no treatment at all). The physician isn’t required to follow the recommendation — but he or she can take it into account. Usually it is found that, in the long run, incorporating such a system into medical practice reduces the number of unnecessary procedures and improves practice overall.
Second, and perhaps more importantly, the process of developing a medical decision support system is itself very valuable. It requires physicians and medical scientists to focus attention on how actual treatment decisions are made. Ordinarily, diagnosis and treatment selection can be a very subjective and ad hoc thing — something physicians do based on habit, flawed practices, or anecdotal evidence. Developing an expert system forces physicians to explicitly state how and why they make various decisions — and this process not infrequently reveals procedural errors and forces people to re-think and improve their practices.
Both of these advantages might accrue were we to similarly develop a computerized support system to decide whether a war is just. From the technical standpoint, it would not be difficult to do this; a functional prototype could easily be developed in, say, 6 weeks or less. Off-the-shelf software packages enable the rapid development of such a system.
Another advantage of such systems is that they do not produce yes/no results, but rather a probability of success. That is, they are inherently probabilistic in nature. All inputs — for example, whether a foreign power has weapons of mass destruction — would be supplied as probabilities, not definite facts. Probabilities can be estimated based on mathematical models, or expert consensus (e.g., the Delphi method).
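To make the idea concrete, here is a minimal sketch of how Delphi-style expert estimates might be aggregated into a single input probability. The numbers are purely illustrative, not estimates about any real foreign power, and the median is just one common robust choice of aggregate.

```python
from statistics import median

# Hypothetical Delphi-style elicitation: several experts independently
# estimate the probability that a foreign power has weapons of mass
# destruction. These numbers are purely illustrative.
estimates = [0.15, 0.30, 0.20, 0.60, 0.25]

# The median is a common robust aggregate: it damps outlier opinions,
# so one alarmist (or dismissive) expert cannot dominate the input.
p_wmd = median(estimates)
print(p_wmd)  # 0.25
```

In a full Delphi process the experts would see the group's estimates and revise their own over several rounds; the aggregation step at the end could still look like this.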
A decision support system helps one see how uncertainties accumulate in a complex chain of inferences. For example, if the success of choice C depends on facts A and B both being true, and if A and B are only known as probabilities, then a system accordingly takes uncertainty concerning A and B into account in estimating the probability of C’s success. In a medical decision based on a dozen or more variables, none known with complete certainty, the net uncertainty concerning success or failure of a particular treatment option can be considerable. In that case, a physician may elect not to perform a risky procedure for a particular patient. The same principle would apply for a just war decision support system.
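The arithmetic of that compounding is simple but easy to underestimate. A minimal sketch, with hypothetical probabilities and assuming A and B are independent:

```python
# How uncertainty compounds along a chain of inferences.
# All probabilities are hypothetical; A and B are assumed independent.
p_a = 0.8                  # probability that fact A is true
p_b = 0.7                  # probability that fact B is true
p_success_given_ab = 0.9   # probability choice C succeeds if A and B both hold

# If C's success requires both A and B, the net probability is the product,
# so even individually favorable odds can yield an unfavorable total.
p_success = p_a * p_b * p_success_given_ab
print(round(p_success, 3))  # 0.504
```

Three inputs, each fairly favorable on its own, leave the overall chance of success at roughly a coin flip — and with a dozen uncertain variables the erosion is far worse.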
Such, then, is my proposal. From experience, I’ve learned that it is better to start with a simpler decision support system, and then to gradually increase its complexity. Accordingly, I suggest that we could begin with a system to model only one part of just war theory — say, just cause, or ‘no greater harms produced.’ I further propose that we could take the decision to invade Iraq in 2003 as a guiding example. My guess is that were such a model produced, it would show that the likelihood of success, the immediate necessity, and the range of possible harms were all so uncertain in 2003 that we should not have intervened as we did.
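A first prototype of the ‘no greater harms produced’ criterion might be nothing more than a Monte Carlo simulation over wide input intervals. The sketch below is purely illustrative: the harm ranges are invented to show the method, and are not historical claims about Iraq or any other conflict.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_once():
    """One hypothetical scenario: does the war prevent more harm than it causes?

    Both inputs are drawn from deliberately wide uniform intervals,
    reflecting the deep uncertainty of pre-war estimates. The units and
    ranges are invented for illustration only.
    """
    harm_prevented = random.uniform(0.0, 100.0)   # harm averted by intervening
    harm_caused = random.uniform(10.0, 200.0)     # harm produced by the war itself
    return harm_prevented > harm_caused

trials = 100_000
passes = sum(simulate_once() for _ in range(trials))
print(passes / trials)  # fraction of scenarios in which the criterion is satisfied
```

With these (invented) ranges, only around a fifth of scenarios satisfy the criterion — exactly the kind of output a deliberating body could inspect, argue about, and re-run with its own input ranges.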
A final advantage of such a system is that it would connect moral philosophy with science. Science is cumulative: one scientific or mathematical advance builds on another. The same is not true of moral philosophy. Philosophers can go back and forth for centuries, even millennia, rehashing the same issues over and over, and never making progress.
Perhaps this is a project I should pursue myself. Or it might be an excellent opportunity for a young researcher. In any case, I’m throwing it out into cyberspace for general consideration. If anyone reads this and finds it interesting, please let me know.
Incidentally, military analysts have developed many such computerized systems to aid combat decisions. (When working at the RAND Corporation, I worked on a system to help US forces avoid accidentally shooting at their own aircraft — something called fratricide.) Since it is clearly in the interests of the military to avoid pursuing unwinnable wars, possibly it is they who could take a lead in developing the line of research proposed here. US Naval War College and West Point — are you listening?