The founding idea of those studies (like ORO-T-160, but also SALVO I & II, SAWS and several others) was that “only bullets that hit count”, and that the only military effect of small-arms fire (and Individual Weapon fire in particular) was hitting and disabling a target. Suppression effects, which are known to greatly reduce enemy fire effectiveness and enemy movement (two militarily significant effects), were simply not taken into account: no effort was made to evaluate them or to incorporate the results of other studies devoted to such topics.
This idea of limiting the effectiveness of small-arms fire to hitting and disabling enemy soldiers immediately calls to mind past visions of glorious battlefields where dense masses of soldiers shot at each other, or where a handful of brave souls stood against a “human wave” assault of mechanized infantry (“high density” battlefields), but it seems totally remote from the “low density” battlefields so frequently encountered during “decolonization” wars or peacekeeping / stabilization engagements.
From a methodological point of view, this choice (deliberate or not) to reduce the military effectiveness of Individual Weapon fire to “bullets that hit” had major implications. First, the complexity of evaluating small-arms effectiveness was greatly reduced: “scientific” evaluations could be performed, focused on hit probability (pH) and terminal effectiveness (pI/H) against unprotected targets, or after defeating personal protection (but not intermediate barriers).
Second, since the maximum effective range considered (300 m) was relatively short, almost any bullet pushed fast enough could do the assigned job (hitting and delivering “sufficient” terminal effectiveness).
Of course, the capability to hit something is very valuable, but what is the hit probability of a soldier in real combat, as opposed to simulated combat?
The “shots-to-casualty” ratio of small-arms fire is a highly debatable issue, and numbers as high as 100,000 rounds per casualty have been quoted, but without a strong database to sustain that claim.
More reliable values can be found in the experience of the First Australian Task Force (1ATF) during the Vietnam War, with mean values of 187 shots per casualty for the 7.62 mm SLR and 232 shots per casualty for the M16 in the context of day patrols.
Nearly 80% of those engagements took place at ranges shorter than 30 m, hardly long range, and yet the average hit probability was around 0.5%, compared with the ~100% hit probability found in ORO-T-160.
Of course, “mean” values are only averages, and in particular engagements close to an “ideal” shooting scenario, shots-to-casualty ratios of around 30 to 1 were achieved. While this figure (pH ~3%) is definitely higher than 0.4% or 0.5% (by nearly one order of magnitude), it still falls far short of the results commonly found during simulated combat.
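As a quick sanity check on those figures, a shots-to-casualty ratio converts directly into an average per-round hit probability (pH ≈ 1 / ratio). The short Python sketch below simply applies that conversion to the ratios quoted above; the helper function and its name are ours, the ratios are from the text.

```python
# Convert a shots-to-casualty ratio into an average per-round hit probability.
# pH is approximated as 1 / ratio (one casualty per "ratio" rounds fired).

def hit_probability(shots_per_casualty: float) -> float:
    """Average probability that a single round produces a casualty."""
    return 1.0 / shots_per_casualty

ratios = {
    "1ATF 7.62 mm SLR (day patrol)": 187,
    "1ATF 5.56 mm M16 (day patrol)": 232,
    "'ideal' engagements quoted above": 30,
    "often-quoted 100,000 rounds figure": 100_000,
}

for label, ratio in ratios.items():
    print(f"{label}: pH ~ {hit_probability(ratio):.3%}")
# 187 -> ~0.53 %, 232 -> ~0.43 %, 30 -> ~3.3 %, 100,000 -> ~0.001 %
```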
The French operation in Mogadiscio in June 1992 could be seen as a very good example of effective firing, but even in this scenario ~3,500 small-arms (5.56 mm and 7.62 mm) rounds and ~500 12.7 mm rounds were expended to produce at most 50 casualties (pH of 1.25%).
Police shootings that take place at very short range (generally less than 7 feet) exhibit the same symptom of very low hit probability, one or two orders of magnitude lower than expected. For example, during the famous 1997 North Hollywood shootout, the two heavily armed bank robbers fired approximately 1,100 rounds during a 44-minute battle and wounded 11 police officers (pH ~1%) and 7 (probably untargeted) civilians.
In return, police officers fired an estimated 650 rounds and killed both perpetrators (it is possible that one committed suicide after being wounded). Both bank robbers wore homemade bulletproof garments, and one was hit several times in rapid succession in the legs until he surrendered (he later died from blood loss), so it is difficult to evaluate the hit probability of the law officers. But even at a few feet, with good visibility and superior training (the final part of the shootout was conducted by SWAT members at a distance of around 3–4 meters), one should expect results probably not much higher than 5% to 10%: again, a substantial difference between “real-life” results and results recorded during simulated combat.
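The same arithmetic, read from the other direction (casualties divided by rounds fired), reproduces the percentages given above for the Mogadiscio and North Hollywood engagements; a minimal sketch, using only the round and casualty counts already quoted in the text:

```python
# pH estimated directly as casualties produced per round fired,
# using the engagement figures quoted above.

engagements = [
    # (label, rounds fired, casualties attributed to that fire)
    ("Mogadiscio, June 1992 (small arms + 12.7 mm)", 3500 + 500, 50),
    ("North Hollywood robbers vs. police officers", 1100, 11),
]

for label, rounds, casualties in engagements:
    print(f"{label}: pH ~ {casualties / rounds:.2%}")
# 50 / 4000 -> 1.25 %, 11 / 1100 -> 1.00 %
```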
This difference is easily explained: during simulated combat, no matter how much “realism” is built into the shooting scenario (sounds, smoke, explosions, fatigue or even electric shocks applied to the shooters), the targets do not return fire, so soldiers can focus on “clearing the range” (and freely expose themselves in the process), whereas during real combat minimizing exposure time to avoid being hit is mandatory.
So, if we look back at Figure 30, we have an idea of the hit probability of a soldier firing his M1 rifle at a human-size target with an exposure time of 3 seconds.
In order to hit this target, the soldier also needs to expose himself to incoming fire (from his target, or from other people waiting for a shot of opportunity; the battlefield is not a place for a duel) for roughly the same amount of time.
[...]
Evaluating the dispersion of hand-held weapons and trying to improve the hit probability were at the heart of both ORO-T-160 and ORO-T-397 (the SALVO II study).
Most results found in ORO-T-160 used a target exposure time of 3 seconds, and shooters were divided into “experts” (highest skill) and “marksmen” (lowest skill).
During those tests, “experts” scored significantly higher than “marksmen” (another “argument” against long-range firing in the hands of the masses). For example, during the second test, “experts” scored 8 hits (25% hit probability) on a man-size target at 310 yards (Figure 33), while “marksmen” scored only 2 hits (6% hit probability, Figure 34).
That last point makes it clear that we weren’t training riflemen to shoot quickly.
The 7.5 mils and 7.25 mils obtained for “marksmen” and “experts” respectively, for a target exposure time of 3 seconds, found in ORO-T-160 (published in 1952), are close to the “upper bound” found in ORO-T-397 (published in 1961), and probably reflect the change in marksmanship training (TRAINFIRE I was introduced in 1954, and TRAINFIRE II in 1957) from “bull’s-eye” targets to “pop-up” targets.
The mean dispersion found in ORO-T-397 was around 3 mils for target exposure times longer than 5 seconds, and this dispersion increased as the exposure time decreased, up to 7–7.5 mils (24 MoA to 26 MoA).
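For reference, the mil figures above convert to MoA and to a linear size at 300 m as sketched below. The sketch assumes the “true” milliradian (1/1000 radian); if the reports used the NATO 1/6400-circle mil, the numbers shift by about 2%.

```python
import math

# Conversions assume the "true" milliradian (1 mil = 1/1000 radian);
# the NATO 1/6400-circle mil differs by about 2 %.
MOA_PER_MIL = math.degrees(1e-3) * 60      # ~3.438 MoA per mil

def mil_to_moa(mils: float) -> float:
    return mils * MOA_PER_MIL

def subtended_width_m(mils: float, range_m: float) -> float:
    """Linear size subtended by an angle of 'mils' at 'range_m' meters."""
    return mils * 1e-3 * range_m

for mils in (3.0, 7.25, 7.5):
    print(f"{mils} mils = {mil_to_moa(mils):.1f} MoA "
          f"= {subtended_width_m(mils, 300):.2f} m at 300 m")
# 3 mils   -> ~10.3 MoA, ~0.90 m at 300 m
# 7.5 mils -> ~25.8 MoA, ~2.25 m at 300 m
```

Whatever precise measure of dispersion the reports used, an angle of 7–7.5 mils subtends well over 2 m at 300 m, several times the width of a man-size target, which is consistent with the low hit probabilities discussed above.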
[...]
So, there is a wide difference between the way we evaluate small-arms hit probability and “real-life” results, which seem to range from 0.3% to 3%, even at fairly close distances. From an individual fire perspective (one soldier, one target and one bullet), this value could be considered low, but from a tactical point of view, with tens to hundreds of soldiers, each carrying more than a hundred rounds, it is high enough to produce decisive military results (a rough order-of-magnitude sketch follows the examples below).
For example:
- during the battle of Magersfontein in December 1899, fire from the 8,500 Boers’ individual weapons (Mauser bolt-action rifles) at a range of around 400 yards (366 m) was sufficient to kill and wound 665 British soldiers (24.5% of the total) in the first 10 minutes of the battle,
- a few days later, during the battle of Colenso, 2 British batteries (12 guns) of field artillery were engaged by rifle fire at a distance of 700 m. Suffering heavy casualties, the British were forced to fall back to their camp, losing 10 guns in the process.
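Returning to the tactical arithmetic mentioned before these examples, even a pH in the 0.3% to 3% band scales into significant numbers of hits once unit size and ammunition load are taken into account. The back-of-the-envelope sketch below uses illustrative values (a 100-soldier element with 150 rounds each) chosen from within the “tens to hundreds of soldiers, over a hundred rounds each” ranges mentioned above; they are assumptions, not figures from any of the cited studies.

```python
# Back-of-the-envelope expected hits at unit level:
#   expected_hits = soldiers * rounds_per_soldier * pH
# Unit size and ammunition load are illustrative assumptions;
# pH spans the 0.3 %-3 % "real-life" band discussed in the text.

soldiers = 100            # assumed company-sized element
rounds_per_soldier = 150  # assumed individual ammunition load

for p_hit in (0.003, 0.01, 0.03):
    expected_hits = soldiers * rounds_per_soldier * p_hit
    print(f"pH = {p_hit:.1%}: ~{expected_hits:.0f} expected hits "
          f"if every round is fired")
# 0.3 % -> ~45 hits, 1 % -> ~150 hits, 3 % -> ~450 hits
```

The point is simply that, at unit level, even “low” individual hit probabilities add up to militarily meaningful numbers.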
[...]
The adoption by the US, followed by NATO, of the .223 Remington cartridge as the 5.56 x 45 mm was not the result of concluding that the battlefield depth (measured in kilometres before WWI) was now reduced to 300 m, but an acknowledgement that effective HE support could now be provided at very short range in most conditions, and that the fire delivered by the infantry individual weapon should be used only for defeating adversaries in the 0 to 300 m bracket, with longer ranges devoted to collective weapons firing heavier ammunition.