Systems demand analysis as systems

Saturday, November 4th, 2017

Techniques of Systems Analysis closes with an appendix on The Essence of Systems Analysis:

An item of equipment cannot be fully analyzed in isolation; frequently, its interaction with the entire environment, including other equipment, has to be considered. The art of systems analysis is born of this fact; systems demand analysis as systems.

Systems are analyzed with the intention of evaluating, improving, and comparing them with other systems. In the early days many people naively thought this meant picking a single definitive quantitative measure of effectiveness, finding a best set of assumptions and then using modern mathematics and high speed computers to carry out the computations. Often, their professional bias led them to believe that the central issues revolved around what kind of mathematics to use and how to use the computer.

With some exceptions, the early picture was illusory. First, there is the trivial point that even modern techniques are not usually powerful enough to treat even simple practical problems without great simplification and idealization. The ability and knowledge necessary to do this simplification and idealization is not always standard equipment of scientists and mathematicians or even of their practical military collaborators.

Much more important, the concept of a simple optimizing calculation ignores the central role of uncertainty. The uncertainty arises not only because we do not actually know what we have (much less what the enemy has) or what is going to happen, but also because we cannot agree on what we are trying to do.

In practice, three kinds of uncertainty can be distinguished.

  1. Statistical Uncertainty
  2. Real Uncertainty
  3. Uncertainty about the Enemy’s Actions

We will mention each of these uncertainties in turn.

Statistical Uncertainty. This is the kind of uncertainty that pertains to fluctuation phenomena and random variables. It is the uncertainty associated with “honest” gambling devices. There are almost no conceptual difficulties in treating it — it merely makes the problem computationally more complicated.
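
As an aside not found in the appendix, the point that statistical uncertainty is conceptually easy but computationally heavier can be made concrete with a small Monte Carlo sketch. The scenario, the salvo size, and the reliability and kill-probability figures below are all invented for illustration.

```python
# Minimal Monte Carlo sketch of statistical uncertainty: estimate the chance
# that a salvo of independent, imperfectly reliable interceptors scores at
# least one kill. All numbers are illustrative, not from the appendix.
import random

def salvo_succeeds(n_shots: int, reliability: float, kill_prob: float) -> bool:
    """True if at least one shot both functions and kills."""
    return any(
        random.random() < reliability and random.random() < kill_prob
        for _ in range(n_shots)
    )

def estimate(n_trials: int = 100_000) -> float:
    hits = sum(salvo_succeeds(4, reliability=0.8, kill_prob=0.5) for _ in range(n_trials))
    return hits / n_trials

if __name__ == "__main__":
    # Exact answer for these numbers is 1 - (1 - 0.8 * 0.5) ** 4 = 0.8704;
    # the simulation should land close to it.
    print(f"estimated salvo success probability: {estimate():.4f}")
```

Nothing here is conceptually hard; the randomness just turns a one-line product into a simulation or a longer calculation, which is exactly the appendix's point.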

Real Uncertainty. This is the uncertainty that arises from the fact that people believe different assumptions, have different tastes (and therefore objectives), and are (more often than not) ignorant. It has been argued by scholars that any single individual can, perhaps, treat this uncertainty as being identical to the statistical uncertainty mentioned above, but it is in general impossible for a group to do this in any satisfactory way. For example, it is possible for individuals to assign subjectively evaluated numbers to such things as the probability of war or the probability of success of a research program, but there is typically no way of getting a useful consensus on these numbers. Usually, the best that can be done is to set limits between which most reasonable people agree the probabilities lie.
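
One way to picture "setting limits" is a simple sensitivity sweep: hold everything else fixed, move the disputed probability across the range most reasonable people accept, and see whether the preferred system changes. The two hypothetical systems, their payoffs, and the 0.2–0.6 range below are invented for illustration, not taken from the appendix.

```python
# Sketch of using agreed limits on a disputed probability instead of a single
# consensus number. System A pays off if the scenario occurs, System B if it
# does not; all payoffs and the 0.2-0.6 range are invented.

def expected_value(p: float, if_scenario: float, otherwise: float) -> float:
    return p * if_scenario + (1 - p) * otherwise

SYSTEM_A = dict(if_scenario=100.0, otherwise=20.0)
SYSTEM_B = dict(if_scenario=40.0, otherwise=60.0)

for p in (0.2, 0.3, 0.4, 0.5, 0.6):  # agreed lower and upper limits, with points between
    a = expected_value(p, **SYSTEM_A)
    b = expected_value(p, **SYSTEM_B)
    print(f"p = {p:.1f}: A = {a:5.1f}, B = {b:5.1f} -> prefer {'A' if a > b else 'B'}")

# If one system wins across the whole range, the disagreement about p does not
# matter for this choice; here the winner flips near p = 0.4, so the analysis
# can only report the crossover and leave the judgment to the decision-maker.
```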

The fact that people have different objectives has almost the same conceptual effect on the design of a socially satisfactory system as the disagreement about empirical assumptions. People value differently, for example, deterring a war as opposed to winning it or alleviating its consequences if deterrence fails; they ascribe different values to human lives (some even differentiate between different categories of human lives, such as civilian and military, or friendly, neutral, and enemy), future preparedness vs. present preparedness, preparedness vs. current standard of living, aggressive vs. defensive policies, etc. Our category, “Real Uncertainty,” covers differences in objectives as well as differences in assumptions.

The treatment of real uncertainty is somewhat controversial, but we believe actually fairly well understood practically. It is handled mainly by what we call Contingency Design.
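
The appendix does not spell out Contingency Design at this point. One plausible reading, and it is only an assumption of the sketch below rather than the authors' definition, is that each candidate design is checked against a list of discrete contingencies and preferred when it stays acceptable in all of them rather than optimal in one. Designs, contingencies, payoffs, and the acceptability floor are all invented.

```python
# Hedged sketch of one reading of "Contingency Design": keep the designs whose
# payoff stays above a minimum acceptable level in every contingency.
# Everything in this table is invented for illustration.

PAYOFF = {
    # design: payoff under each contingency
    "specialized": {"surprise_attack": 90, "long_war": 30, "no_war": 40},
    "balanced":    {"surprise_attack": 70, "long_war": 60, "no_war": 55},
    "cheap":       {"surprise_attack": 40, "long_war": 45, "no_war": 80},
}
FLOOR = 50  # minimum acceptable payoff in any contingency

acceptable = [d for d, outcomes in PAYOFF.items() if min(outcomes.values()) >= FLOOR]
print("acceptable in every contingency:", acceptable)  # -> ['balanced']
```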

Uncertainty Due to Enemy Reaction. This uncertainty is a curious and baffling mixture of statistical and real uncertainty, complicated by the fact that we are playing a non-zero-sum game. It is often very difficult to treat satisfactorily. A reasonable guiding principle seems to be (at least for a rich country) to compromise designs so as to be prepared for the possibility that the enemy is bright, knowledgeable, and malevolent, and yet be able to exploit the situation if the enemy fails in any of these qualities.

To be specific:

Assuming that the enemy is bright means giving him the freedom (for the purpose of analysis) to use the resources he has in the way that is best for him, even if you don’t think he is smart enough to do so.

Assuming that he is knowledgeable means giving the enemy credit for knowing your weaknesses if he could have found out about them by using reasonable effort. You should be willing to do this even though you yourself have just learned about these weaknesses.

Assuming that the enemy is malevolent means that you will at least look at the case where the enemy does what is worst for you — even though it may not be rational for him to do this. This is sometimes an awful prospect and, in addition, plainly pessimistic, as one may wish to design against a “rational” rather than a malevolent enemy; but as much as possible, one should carry some insurance against the latter possibility.
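
Read together, the three assumptions amount to evaluating each of our designs against the enemy move that is worst for us, while also checking how much can be gained if the enemy errs. The payoff table below is invented to illustrate that reading; it is not from the appendix.

```python
# Sketch of the guiding principle above: for each of our designs, record the
# worst case over all enemy moves (the bright, knowledgeable, malevolent enemy)
# and the payoff against a fixed move we expect from a fallible enemy.
# The 3x3 payoff table (our payoff, higher is better) is invented.

PAYOFF = {
    "hardened":     {"attack_bases": 60, "attack_cities": 55, "wait": 50},
    "dispersed":    {"attack_bases": 50, "attack_cities": 70, "wait": 65},
    "concentrated": {"attack_bases": 20, "attack_cities": 80, "wait": 90},
}
EXPECTED_ENEMY_MOVE = "wait"  # what a less-than-malevolent enemy might do

for design, row in PAYOFF.items():
    worst = min(row.values())            # malevolent, well-informed enemy
    expected = row[EXPECTED_ENEMY_MOVE]  # enemy who fails to press his advantage
    print(f"{design:12s} worst case = {worst:2d}, vs. expected enemy = {expected:2d}")

# "concentrated" exploits a passive enemy best but is disastrous in the worst
# case; "dispersed" gives up some of that upside to buy insurance against the
# malevolent enemy -- a compromise in the spirit of the guiding principle.
```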

Comments

  1. Bill says:

    I apologize if someone else has pointed this out, but if you haven’t read Systemantics: How Systems Work and Especially How They Fail by John Gall, you don’t know anything about systems!

  2. Adar says:

    The Los Alamos labs museum is very well done, and one exhibit shows how to test a nuclear weapon for reliability without detonating the weapon. That methodology [as demonstrated with an ordinary land-line telephone] is to disassemble the device into all basic sub-assemblies and test each sub-assembly for reliability. This gives you an idea, to some extent, of how reliable the final assembly will be.
