Top Shot season 3 competitor Cliff Walsh — the revolver champion — explains to the other serious shooters on Brian Enos’s forums that there simply isn’t time to show everything that happened:
Each show is filmed over 3 days. Day 1: team practice. One team goes in the morning and the other in the afternoon. Day 2: Team challenge in the morning then nomination range in the afternoon. Day 3: elimination practice and then the challenge in the afternoon. A show is about 45 minutes long so all that had to be cut down to fit. Taran was making his usual jokes but they didn’t show much of practice.
I am surprised at some things they did not show. During the cannonball run, I fumbled a reload and launched a mag into the air right about the middle of the shoot. It landed on the ground in front of the platform. I grabbed another mag, got the gun running, and went back to work, but I lost 3 or 4 balls before I was back on track. I would think it would have been more dramatic to see me fumble and then try to catch back up, but there was no sign of it.

When I fall off the log in the 1st show, Gary is standing behind the log. He tries to help me and gives me a push, but nobody is on the other side; he throws me off balance and I roll off the log. In our meeting, the first thing we discuss is me falling off the log. Gary says it was not my fault, he pushed me, this is not a lumberjack competition, and we move on. It would have been nice to include that 30 seconds in the show so I don’t have to take so much crap about it. The first time, I did fall all by myself though.

In the second show, there were some misfires with the AK-47 that were cut out. I would really like to see uncut footage of all the challenges from start to finish. Maybe they will put it on the DVDs.
The rioters in the news last week had a thwarted sense of entitlement that has been assiduously cultivated by an alliance of intellectuals, governments and bureaucrats. “We’re fed up with being broke,” one rioter was reported as having said, as if having enough money to satisfy one’s desires were a human right rather than something to be earned.
“There are people here with nothing,” this rioter continued: nothing, that is, except an education that has cost $80,000, a roof over their head, clothes on their back and shoes on their feet, food in their stomachs, a cellphone, a flat-screen TV, a refrigerator, an electric stove, heating and lighting, hot and cold running water, a guaranteed income, free medical care, and all of the same for any of the children that they might care to propagate.
But while the rioters have been maintained in a condition of near-permanent unemployment by government subvention augmented by criminal activity, Britain was importing labor to man its service industries. You can travel up and down the country and you can be sure that all the decent hotels and restaurants will be manned overwhelmingly by young foreigners; not a young Briton in sight (thank God).
The reason for this is clear: The young unemployed Britons not only have the wrong attitude to work, for example regarding fixed hours as a form of oppression, but they are also dramatically badly educated. Within six months of arrival in the country, the average young Pole speaks better, more cultivated English than they do.
When Aretae mentioned the recent refutation of the 10,000-hour rule, Dr. Pat chimed in with a story about a chat he had with a graduate of the Chinese Olympic program:
She’d been selected through a nationwide search at the age of 7 and spent the next 13 years living and training in specialized facilities.
There was an initial selection: for swimming, all the children were lined up on the edge of a pool, some objects were thrown in, and the kids were told to retrieve them. Talent spotters grabbed the children who “showed promise.”
This particular woman got into both the swimming and ballet programs. And stayed in both until at about 15 she had to choose, because nobody could specialize and keep up the training for both.
About the training, she just kept talking about pain. Lots and lots of pain. Hours of pain every day.
We were talking about the movie Black Swan, and she said that in real life it is much more brutal and painful than shown in the movie.
There was also a weird psychological thing about how a child who didn’t come from a horrible, poor background could never be a good dancer, because you needed pain to be able to put it into the dance. I’ve heard the same argument about music, and I didn’t understand it then either. I’ve classified this as “Stuff I’ll remember the words to, as it may well be true, but that’s all I can do.”
To get back to the point: The Chinese certainly think it is a combination of innate talent and years of practice.
Ericsson’s expert performance framework, which says that you need 10,000 hours of deliberate practice to become an expert, is an already simple framework that often gets oversimplified — as in this video by table-tennis champion Matthew Syed, author of Bounce:
Tyler Cowen (and then Aretae) recently linked to a refutation of the expert performance framework — and especially of the oversimplified versions of it — by two exercise physiologists, Ross Tucker and Jonathan Dugas:
I have that study, and what is remarkable about it is that Ericsson presents no indication of variance — there are no standard deviations, no maximums, minimums, or ranges. And so all we really know is that average practice time influences performance, not whether the individual differences present might undermine that argument. Statistically, this is a crucial omission and it may undermine the 10,000 hour conclusion entirely.
While I strongly agree that we need distributions, not single average values, to characterize such things, Tucker and Dugas attack something of a straw man here:
If the theory is that 10,000 hours of practice are needed, and there is no innate ability, then you should not find a single person who has succeeded with fewer than 10,000 hours, and nor should anyone fail having done their 10,000 hours.
I have no trouble accepting the 10,000-hour rule as merely a rule of thumb that suggests the right order of magnitude.
Here’s where things get much more interesting — and data-driven:
Gobet and Campitelli studied 104 chess players, measuring practice time and performance level and looking at the time taken to reach Master level. This is their finding:
So, the average time taken is 11,053 hours. That’s pretty much in agreement with Ericsson’s violin players. So far so good. But look at that Standard Deviation — 5,538 hours, and it gives a coefficient of variation of 50%. [...] One player reaches master level on 3,000 hours, another takes almost 24,000 hours, and some are still practicing but not succeeding. That’s a 21,000 hour difference, which is two entire practice lifetimes according to the model of practice.
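A quick sanity check on the quoted figures: the coefficient of variation is simply the standard deviation divided by the mean, so a tiny sketch (in Python, using the numbers quoted above) confirms the roughly 50% spread:

```python
# Coefficient of variation for the Gobet & Campitelli chess data,
# using the mean and standard deviation quoted above (in hours).
mean_hours = 11_053
sd_hours = 5_538

cv = sd_hours / mean_hours
print(f"coefficient of variation: {cv:.0%}")  # prints "coefficient of variation: 50%"
```

A coefficient of variation of 50% means the typical deviation from the average is half the average itself, which is enormous for something the simplest version of the rule treats as a fixed threshold.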
Darts, which has been studied by Duffy and Ericsson, offers more data:
They find the following when looking at darts scores and accumulated practice time:
The figure above shows how much of performance can be explained by deliberate practice. In chess, which I showed above, it’s 34%. In darts, 15 years of practice explains only 28% of the variation in performance between individuals! An extraordinary finding because, with all due respect, that’s darts. What else is there that influences performance, when practice time accounts for only about a quarter of the performance differences?
What else is there to influence dart performance? Plenty of random noise, I suspect, because of the peculiar scoring system. There’s clearly a skill to darts, as there is to poker, but in both cases that skill explains only a tiny percentage of performance compared to chess.
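One way to make the “variance explained” figures concrete: they are R² values, so the underlying correlation between accumulated practice and performance is the square root. A small sketch using the percentages quoted above (my own illustration, not a computation from either paper):

```python
import math

# "Variance explained" is R-squared; the correlation coefficient r is
# its square root. The 34% (chess) and 28% (darts) figures are the
# ones quoted above.
for domain, r_squared in [("chess", 0.34), ("darts", 0.28)]:
    r = math.sqrt(r_squared)
    print(f"{domain}: R^2 = {r_squared:.0%}, so r is about {r:.2f}")
```

So even in darts, practice correlates with performance at roughly r = 0.53, a substantial relationship, just nowhere near a deterministic one.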
This also fails to disprove the importance of deliberate training, if we accept that there are degrees of deliberateness that are hard to measure. The original finding, after all, was not that top-tier musicians had spent more total time on music than third-tier musicians, but that they had spent more of that time in deliberate practice:
All expert musicians were found to spend about the same amount of time on all types of music related activities during the diary week — about 50–60 hours. The most striking difference was that the two most accomplished groups of expert musicians were found to spend more time (25 hours) in solitary practice than the least accomplished group, who only spent around 10 hours per week.
During solitary practice the experts reported working with full concentration on improving specific aspects of their music performance — often identified by their master teacher at their weekly lessons — thus meeting the criteria for deliberate practice. The best groups of expert musicians spent around four hours every day, including weekends, in this type of solitary practice.
From retrospective estimates of practice, Ericsson et al. (1993) calculated the number of hours of deliberate practice that five groups of musicians at different performance levels had accumulated by a given age, as is illustrated in Figure 3. By the age of 20, the most accomplished musicians had spent over 10,000 hours of practice, which is 2500 and 5000 hours more than two less accomplished groups of expert musicians or 8000 hours more than amateur pianists of the same age (Krampe & Ericsson, 1996).
As the contest moves away from pure skill to something more physical, the primacy of skill naturally drops:
Start with Olympic wrestling, football and field hockey. Below are the findings from research on US Olympic athletes.
Clearly, 10,000 hours are rarely required. A subsequent study on Australian athletes found that 28% had participated for fewer than four years in their sport — that’s probably 3,000 to 4,000 hours, at most. One netball player from Australia had made the international stage on 600 hours of play.
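The “3,000 to 4,000 hours” figure is a back-of-envelope estimate; here is the arithmetic, with the weekly training loads being my own assumed inputs rather than data from the study:

```python
# Hours accumulated in 4 years of participation at a few assumed
# weekly training loads (the hours-per-week values are assumptions,
# not figures from the Australian study).
years = 4
weeks_per_year = 50  # allowing a couple of weeks off

for hours_per_week in (15, 20):
    total = years * weeks_per_year * hours_per_week
    print(f"{hours_per_week} h/week for {years} years: {total:,} hours")
```

Even at a fairly punishing 20 hours a week, four years only gets an athlete to about 4,000 hours, less than half of the canonical 10,000.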
Clearly there is some overlap between the skills and attributes needed for success in various sports, and some sports — coughnetballcough — are nowhere near as competitive as others.
Their last point is one that immediately jumped out at me when I read about the original research: which way does the causality run?
Ericsson concludes that these children just accumulate more training time and that this explains performance. The difference between the “best experts” and the “least accomplished players” is the training time.
But what if it is exactly the other way around? Let’s take two children at nine years old. Do they have the same ability to play on first exposure? Ericsson’s model says yes, and that the difference comes later, when one child practices more and gets better teaching. But what if the difference is present from the very first note, the first exposure to the activity? The parents of a child who shows some ability encourage further practice, they invest in teaching and training, and this child, by virtue of the fact that he/she has more ability to begin with, accumulates more practice.
But the child who has little innate ability makes the violin sound like the death march of stray cats, and their parents do not encourage more play. In fact, they discourage it — the “go play outside” syndrome takes over, and the child is never exposed to teaching or practice. His trajectory is set precisely because he has less innate ability.
This Matthew effect was also popularized by the same Gladwell book that made the 10,000-hour rule so fashionable — but Outliers neglects to mention that this effect disappears past the junior level.
Tucker and Dugas tend to focus on sports with a strong metabolic component, like running, cycling, and swimming, where skill plays less of a role than endurance, which is highly trainable but has a strong genetic component nonetheless:
The study that is needed to answer this question is to take a large, random group of people and expose them to training, and then to measure how much they improve. And this has been done. There are four studies, summarized in the figure below, where big groups have been put through a supervised training programme, and their VO2max measured as an index of fitness.
So, on average, VO2max will improve by 15% as a result of training. In some studies, it’s been as high as 19%, in others, 9%. This may be due to differences in the training programme, or the people involved. However, what you should be asking, especially given our look at Ericsson’s violin study and the chess paper, is “What are the individual differences that make up that 15%, and what is the genetic impact in these studies?”
And for this, there is a paper by Claude Bouchard from earlier this year. In this study, 470 untrained volunteers were put through five months of training, and their fitness levels were measured before and after. The figure below shows the result:
As you might expect, most people improve by average amounts — 38% of the volunteers improved by between 300 and 500 ml/min (shown by the yellow and green bars in the breakdown-of-responders section). But on either side of these “typical responses”, you see the extremes — the “low responders” shown in reds and oranges, and the “high responders” shown in blues and purples. 4% of the volunteers improved by 800 ml/min or more, whereas 7% improved by less than 100 ml/min.
Overall, there was a range of changes in VO2max all the way from 100ml/min (basically no improvement) to over 1000ml/min. That’s a 10-fold difference. You may recall that yesterday, we saw how chess expertise showed an 8-fold difference between the fastest and slowest to succeed at reaching Master level. It seems that a similar range of responses occurs for physiology.
The end result is that the bottom 5% of the sample, those who responded the least, improved their VO2max by less than 4%. On the other end, the high responders, the top 5%, improved by 40%. That is an astonishing difference, and the simple and obvious question is: where are you most likely to find an endurance athlete in this sample? The answer is on the far right — the individual who shows large adaptations to training, improves quickly, and then reaches a higher ceiling. I am sure that every one of you reading this knows one of each of these people; perhaps you are one of them!
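Purely as an illustration (not an analysis from the Bouchard paper), the quoted tail fractions are roughly what a bell-shaped response distribution would predict. If training responses were normally distributed with a mean around 400 ml/min and a standard deviation around 200 ml/min (both parameters are my guesses, chosen to fit the quoted figures), the tails come out close to the quoted 7% and 4%:

```python
import math

def normal_cdf(x: float, mean: float, sd: float) -> float:
    """Probability that a normal variable falls below x."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Assumed (illustrative) response distribution for VO2max improvement.
mean_ml, sd_ml = 400, 200

low_responders = normal_cdf(100, mean_ml, sd_ml)       # improve < 100 ml/min
high_responders = 1 - normal_cdf(800, mean_ml, sd_ml)  # improve > 800 ml/min
print(f"below 100 ml/min: {low_responders:.0%}")   # ~7%, close to the quoted figure
print(f"above 800 ml/min: {high_responders:.0%}")  # ~2%, vs. the quoted 4%
```

The mismatch on the high end hints that the real distribution has a fatter right tail than a normal curve, which is exactly where you would expect to find future elite endurance athletes.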
We should expect to see similar patterns in strength, power, flexibility, etc. — different people start at different levels and then respond to training and conditioning differently.
Great Britain’s leniency began in the 1950s, with a policy that only under extraordinary circumstances would anyone under 17 be sent to prison. This was meant to rehabilitate young offenders. But the alternative to incarceration has been simply to warn them to behave, maybe require community service, and return them to the streets. There has been justifiable concern about causes of crime such as poverty and unemployment, but little admission that some individuals prefer theft to work and that deterrence must be taken seriously.
Victims of aggression who defend themselves or attempt to protect their property have been shown no such leniency. Burglars who injured themselves breaking into houses have successfully sued homeowners for damages. In February, police in Surrey told gardeners not to put wire mesh on the windows of their garden sheds as burglars might hurt themselves when they break in.
If a homeowner protecting himself and his family injures an intruder beyond what the law considers “reasonable,” he will be prosecuted for assault. Tony Martin, an English farmer, was sentenced to life in prison for killing one burglar and wounding another with a shotgun during the seventh break-in at his rural home in 1999. While his sentence was later reduced to five years, he was refused parole in 2003 because he was judged a danger to burglars.
In 2008, a robber armed with a knife attacked shopkeeper Tony Singh in West Lancashire. During the struggle the intruder was fatally stabbed with his own knife. Although the robber had a long record of violent assault, prosecutors were preparing to charge Mr. Singh with murder until public outrage stopped them.
Meanwhile, the cost of criminal justice has convinced British governments to shorten the sentences of adult criminals, even those guilty of violent crimes, and to release them when they have served half of their sentence. Police have been instructed by the British Home Office to let burglars and first-time offenders who confess to any of some 60 crimes — ranging from assault and arson to sex with an underage girl — off with a caution. That means no jail time, no fine, no community service, no court appearance.
In 2009, 70% of apprehended burglars avoided prison, according to British Ministry of Justice figures. The same year, 20,000 young offenders were electronically tagged and sent home, a 40% increase in the number of people tagged over three years.
All sorts of weapons useful for self-defense have been severely restricted or banned. A 1953 law, the “Prevention of Crime Act,” made any item someone carried for possible protection an “offensive weapon” and therefore illegal. Today there is also a list of devices the mere possession of which carries a 10-year sentence. Along with rocket launchers and machine guns, the list includes chemical sprays and any knife with a blade more than three inches long.
Handguns? Parliament banned their possession in 1997. As an example of the preposterous lengths to which zealous British authorities would enforce this law, consider the fate of Paul Clark, a former soldier. He was arrested in 2009 by Surrey police when he brought them a shotgun he found in his garden. For doing this personally — instead of asking the police to retrieve it — he received a five-year prison sentence. It took a public outcry to reduce the normal five-year sentence to 12 months, and then suspend it.
The ban on handguns did not stop actual crimes committed with handguns. Those crimes rose nearly 40%, according to a 2001 study by King’s College London’s Centre for Defence Studies, and had doubled a decade later, according to government statistics reported in the London Telegraph in October 2009.
Knives? It’s illegal for anyone under age 18 to buy one, and using a knife for self-defense is unlawful. In 1991, American tourist Dina Letarte of Tempe, Ariz., used a penknife to protect herself from a violent attack by three men in a London subway. She was convicted of carrying an offensive weapon, fined, and given a two-year suspended sentence.
The result of policies that punish the innocent but fail to deter crime has been stark, even before the latest urban violence. The last decade has seen a doubling of gun crime. According to the latest annual report of the Home Office (2009), there was a 25% increase in crimes involving contact, such as assault and battery, over the previous year.
The Conservative government came to power pledging to end the police “caution culture” and permit more scope for self-defense. But old habits die hard. The Conservative recommendation in December 2009 to permit householders to use any force “not grossly disproportionate” against an intruder was described in the Guardian newspaper as “backward and barbaric.”
Successful businesspeople are often credited with somewhat mystical talents, such as the ability to mesmerize an audience or envision the future. We suggest that this mystique — the way some managers are perceived by observers — arises from the intuitive logic that psychologists and anthropologists call magical thinking.
Consistent with this account, Study 1 found that perceptions of a manager’s mystique are associated with judgments of his or her charismatic vision and ability to forecast future business trends. The authors hypothesized that mystique arises especially when success is observed in the absence of mechanical causes, such as long hours or hard-won skills.
In Study 2, managers who succeeded mysteriously rather than mechanically evoked participants’ attributions of foresight and their expectations of success at visionary tasks yet not at administrative tasks. The authors further hypothesized that as mystique is assumed to spread through contagion, observers desire physical contact with managers who are attributed mystique and with these managers’ possessions.
Study 3 found that managers described as visionary as opposed to diligent are judged to be charismatic and ultimately magnetic. The authors discuss the implications of these judgment patterns for the literatures on perception biases and impression management in organizations.
While Tory leaders have often preached 19th-century self-improvement, the Cameron government broke with that tradition by cutting back funding of the clubs and libraries that were supposed to guide the poor to middle-class values. Television-presented bling plus persistent unemployment were the fuel, and Mr. Cameron’s policies were the spark. Does he propose to return to the days of radio news readers in dinner jackets and black tie, a Reith policy to put them in the proper mood?
I had such an experience during the opening weekend of Conan the Barbarian 3D.
It’s hard to feel bad for someone who co-wrote the new Conan flick:
You make light of it, of course. You joke and shrug. But the blow to your ego and reputation can’t be brushed off. Reviewers, even when they were positive, mocked Conan The Barbarian for its lack of story, lack of characterization, and lack of wit. This doesn’t speak well of the screenwriting — and any filmmaker who tells you s/he “doesn’t read reviews” just doesn’t want to admit how much they sting.
Unfortunately, the work I do as a script doctor is hard to defend if the movie flops. I know that those who have read my Conan shooting script agree that much of the work I did on story and character never made it to screen. I myself know that given the difficulties of rewriting a script in the middle of production, I did work that I can be proud of. But it’s still much like doing great work on a losing campaign. All anyone in the general public knows, all anyone in the industry remembers, is the flop. A loss is a loss.
He says that a movie’s opening day is analogous to a political election night, and naturally another screenwriter knows exactly what that’s like:
Sean compared this to being a part of a losing Presidential campaign, and as someone who has done both, I can say that that is exactly what it is like. I had the wonderful opportunity to work for John Kerry in 2004 and experience the horrific feeling that comes with losing to George W. Bush while actually believing in my candidate. Watching the election returns was like a never-ending math test that just kept going, seemingly for the sole purpose of dragging out my misery. I had the best possible outcome in 2008, but I don’t think I’ll ever forget the feeling that came with the 2004 election results.
I remember being surprised to learn that Frank Darabont (The Shawshank Redemption) would be directing The Walking Dead. I was even more surprised to learn that he wouldn’t be directing its second season — AMC fired him:
Within a space of months, AMC has become embroiled in messy public fights with the creators of its top three shows — Mad Men, Breaking Bad and now Walking Dead. The battles have been about money, but in this case, at least, it was more of a slow burn than a sudden flare-up. Sources say last fall, even before the first episode of the show had aired, AMC let it be known that it would effectively slash the show’s second-season budget per episode by about $650,000, from $3.4 million to $2.75 million. AMC cut the budget and pocketed a tax credit previously applied to the show. An AMC source says the size of the cut cited by sources is “grossly inflated” and that the second-season budget represents a more typical and sustainable number for a basic cable show.
At a glance, it would appear AMC is taking a big risk with its only huge commercial success. Mad Men and Breaking Bad are Emmy magnets that average 2.3 million and 4.3 million viewers, respectively. But Walking Dead, based on a series of graphic novels, attracted an astonishing 5.3 million viewers when it premiered on Halloween. The season finale in December drew more than 6 million viewers. In the 18-to-49 demo, it chalked up the biggest number ever for any drama on basic cable.
Dan O’Bannon made his name writing the screenplay for Alien, but before that he did some technical work on the computer animation for a little science-fiction film called Star Wars:
George Lucas had hired 24-year-old computer scientist Larry Cuba to create the (at the time) challenging wireframe and vector-based CGI work for the tactical briefing before the attack on the Death Star in Star Wars. Cuba worked out of the Electronic Visualization Lab (EVL) at the University of Illinois, and created the blueprints and graphics using the vector graphics scripting language GRASS (GRAphics Symbiosis System, created by Ohio State’s Tom DeFanti in 1974). EVL themselves take no particular credit for the sequence, but say “[Larry Cuba] stayed at our facility and used our equipment for many months in order to create the sequence.” Cuba created an instructional video about the sequence at the time of Star Wars, and EVL released it in 2008 as a well-viewed 10-minute video on their YouTube channel.
Cuba used a Vector General display driven by a DEC PDP-11 minicomputer to generate the images, and recorded the frames by pointing a film camera at the monitor in an automated process that waited for each successive image to be rendered before triggering a frame exposure.
O’Bannon’s first task on Star Wars was to create the final section of the Death Star tactical simulation, wherein torpedoes are seen entering the shaft and descending to the core to cause a reactor explosion. For this O’Bannon made an effort to simulate Cuba’s style, with white lines on black, but added his signature ‘strobing’ at certain points. This end section of Star Wars’ one and only CGI sequence would have been an ambitious addition to the schedule, and Lucas decided that concluding it with animation was the quickest route to completing the scene.
Later, Lucas returned to chat with O’Bannon about creating the remaining tactical and computer-display animations for Star Wars. Lucas was shuttling between San Francisco and ILM’s facility in Van Nuys at the time, and would sit with O’Bannon sketching out rough diagrams for the tacticals on scrap paper.
Feedback from Lucas was minimal throughout the three-month period in which O’Bannon supervised the shots, though he notes that the director was concerned at one stage that some of the visuals were coming out too ‘colourful’. This is something O’Bannon says he could easily have remedied in advance if there had been more detailed discussion, but in the end the colour in some of the tactical shots was toned down for release.
The one shot where O’Bannon’s team employed computer technology was on the compositing work for the Death Star’s aspect for clearance to destroy the rebel moon Yavin IV. Here O’Bannon praised the great speed at which the Image West facility was able to take the elements that he brought and composite them with motion on an analogue computer. The system was known as Scanimate, and was created by Lee Harrison III, the founder of Chicago’s Computer Image. Scanimate would scan core imagery at twice the horizontal rate of NTSC or PAL and output the various elements composited onto a five-inch CRT screen, which was filmed in real-time with a conventional movie camera. If you’re interested in more detail on how Scanimate worked, check out this post at Siggraph.
Of all the visual effects produced for the original Star Wars, the contribution of O’Bannon’s team has been the least affected by the two ‘enhanced’ re-releases in 1997 and 2005, though we must note that Lucas did decide to change the written language on the Death Star’s tractor-beam generator (above) from English to…well, something else. O’Bannon joked that he was disappointed George Lucas had not taken the opportunity to revamp the screens for the special editions, and that something more interesting could have been done with newer technologies. On this, of course, we can’t fault Lucas; it would not only have removed O’Bannon’s work from the film but substituted a great deal of the original feeling and iconography of Star Wars. Good call!
Watching that how-to video, it seems like they would have been better off filming literal wire-frame models, which is what they more-or-less did for Escape from New York’s computer displays. The (very different) scene from Heavy Metal, where Taarna rides her pteranodon over the desert landscape, was actually animated using a similar technique, with a physical model of the landscape painted with lines along its edges, so they could fly the movie camera over the terrain and then produce high-contrast photocopies of the film, which could then be painted for the final animation.
For instance, the first episode’s what-if is the classic What if Hitler had won the war?, and it doesn’t even mention Soviet Russia — or any other countries besides the US and Germany. So, how does Hitler win the war? By repulsing the D-Day invasion with his jet fighters, of course. That was easy.
So, he then consolidates his holdings in Europe, Asia, and Africa, right? Not sure. But we do know that he develops submarine-launched missiles with atomic warheads, destroys a couple American cities, and then takes over. It’s the obvious next step.
And that’s the real point of the show, to depict America under the heel of evil white right-wingers who use smart-phones and tablets to track down Jews, Blacks, and “undesirables” for extermination.
Ah, but The People rise up and use social networks to Revolt and take back Power! I’m not sure who their NATO is though, providing air cover and covert operatives on the ground.
Cult classic The Dark Crystal serves as a wonderful example of how to use pre-CGI puppetry in a film — and how not to, in the case of the two protagonists, the Gelflings:
I didn’t realize that the disturbingly zombie-like Gelflings were designed not by Brian Froud but by Wendy Froud, his wife. I also didn’t realize that the fantastic landscapes in much of Mr. Froud’s work look like the countryside of Dartmoor, where he lives.
It’s always funny to see Jim Henson or Frank Oz interviewed, because it’s the wrong face to go with the very familiar voice (of Kermit or Fozzie).
After years of grueling battle, fighting island to island across the Pacific, Japan’s Navy and Air Force were all but destroyed. The production of materiel was faltering, completely overmatched by American industry, and the Japanese people were starving. A full-scale invasion of Japan itself would mean hundreds of thousands of dead GIs, and, still, the Japanese leadership refused to surrender.
But in early August 66 years ago, America unveiled a terrifying new weapon, dropping atomic bombs on Hiroshima and Nagasaki. In a matter of days, the Japanese submitted, bringing the fighting, finally, to a close.
On Aug. 6, the United States marks the anniversary of the Hiroshima bombing and its mixed legacy. The leader of our democracy purposefully executed civilians on a mass scale. Yet the bombing also ended the deadliest conflict in human history.
UC Santa Barbara’s Tsuyoshi Hasegawa argues that it was the Soviet invasion of Manchuria that forced Japan’s surrender. Like Steve Sailer, I assumed it was a combination of atomic bombings, fire bombings, Soviet invasion, naval blockade, etc. They had every reason to surrender; what their leadership needed was a face-saving way to surrender:
The Japanese were nuts in WWII. The rulers had largely risen up through a system in which the non-nuts were assassinated, so their grip on reality was shaky. Their strategic planning boiled down to asserting that the bravery of Japanese soldiers would make Japan win in the end.
Imperial Japan was truly, truly foreign. Here’s their end-game:
The Japanese could still inflict heavy casualties on any invader, and they hoped to convince the Soviet Union, still neutral in the Asian theater, to mediate a settlement with the Americans. Stalin, they calculated, might negotiate more favorable terms in exchange for territory in Asia. It was a long shot, but it made strategic sense.
Sailer disagrees that it made strategic sense:
As opposed to Stalin just taking Japanese-held territory in northeast Asia with the world’s strongest army? The Japanese had been beaten bad up in the Manchuria-Mongolia-Russia border region by Gen. Zhukov way back in August 1939, and six years later, there was no evidence that a second Soviet-Japanese war would be less of a drubbing. So, what was in it for Stalin to step in on the side of Japan?
The Japanese high command was living in cloud-cuckoo land. And why, exactly, would you want to get Stalin involved in a war you are losing? In contrast, during the last weeks of the war in Europe, everybody in Germany with half-a-brain (e.g., Wernher von Braun) had been climbing in their Mercedes and driving west as fast as they could to surrender to Americans or Brits rather than to the Soviets.
How is it possible that the Japanese leadership did not react more strongly to many tens of thousands of its citizens being obliterated? Gareth Cook summarizes Hasegawa’s point of view:
One answer is that the Japanese leaders were not greatly troubled by civilian casualties. As the Allies loomed, the Japanese people were instructed to sharpen bamboo sticks and prepare to meet the Marines at the beach.
Yet it was more than callousness. The bomb — horrific as it was — was not as special as Americans have always imagined. In early March 1945, several hundred B-29 Superfortress bombers dropped incendiary bombs on downtown Tokyo. Some argue that more died in the resulting firestorm than at Hiroshima. People were boiled in the canals. The photos of charred Tokyo and charred Hiroshima are indistinguishable.
In fact, more than 60 of Japan’s cities had been substantially destroyed by the time of the Hiroshima attack, according to a 2007 International Security article by Ward Wilson, who is a senior fellow at the Center for Nonproliferation Studies at the Monterey Institute of International Studies. In the three weeks before Hiroshima, Wilson writes, 25 cities were heavily bombed.
To us, then, Hiroshima was unique, and the move to atomic weaponry was a great leap, military and moral. But Hasegawa argues the change was incremental. “Once we had accepted strategic bombing as an acceptable weapon of war, the atomic bomb was a very small step,” he says. To Japan’s leaders, Hiroshima was yet another population center leveled, albeit in a novel way. If they didn’t surrender after Tokyo, they weren’t going to after Hiroshima.
That really misses the point. The point is not that an atomic bomb demolishes a city more thoroughly than thousands of conventional bombs; it’s that one bomb carried by one bomber can do the work of thousands of bombs carried by hundreds of bombers.
As Sailer puts it, it was not the Hiroshima bomb but the Nagasaki bomb that demonstrated that the U.S. could now vaporize cities at will, because Nagasaki convinced them that we didn’t have just one atomic bomb.
As a patient, logistics-oriented type, I would’ve let the blockade do its job quite a while longer.
Rollory: It’s not that simple. Wander around the downtowns of certain former Soviet cities, and you will see a lot more small-scale entrepreneurialism than in an American city (with the possible exception of Hispanic-run food trucks). It’s entirely possible this is because they must do so or starve, but the phenomenon is definitely noticeable. One of Moldbug’s claims is that communism is basically an anglo phenomenon that was exported abroad; he uses the analogy of a disease that is more...
Bill: You’ve been quoting John Glubb’s Fate of Empires lately: The first half of the Age of Commerce appears to be peculiarly splendid. The ancient virtues of courage, patriotism and devotion to duty are still in evidence. The nation is proud, united and full of self-confidence. Boys are still required, first of all, to be manly—to ride, to shoot straight and to tell the truth. (It is remarkable what emphasis is placed, at this stage, on the manly virtue of truthfulness, for lying is...
Toddy Cat: So, Communism destroys your moral fiber. Who woulda thunk it? Seriously, this also explains a lot about the plight of the former USSR — there is no one left who didn’t grow up under Communism.
Rollory: Chechar has written quite a lot about New Spain at his blog (chechar.wordpress.com). He has repeatedly made the point that New Spain, as a political entity, endured for nearly 300 years — comparable to the existence of the United States (and also very comparable to Glubb’s imperial period) yet anglocentric Americans have completely glossed over the lessons that might be learned from it.
Borepatch: I am entirely skeptical of any study on GM food that comes out of Europe. Almost certainly government funded, and therefore looking for the outcome that the funders want. Sort of like Global Warming …
Electric Angel: I recall Spain holding on to most of Latin America until 1820 or so. In 1798, a man could travel from Tierra del Fuego up to Port Angeles, Washington, go east to the Mississippi River drainage, down to New Orleans, and over to Miami, and never set foot on territory not at least nominally under the control of the King of Spain. I wonder if the Tierra del Fuego to Port Angeles distance is more than Belarus to Vladivostok?
Rollory: DC expects everybody on the planet to do exactly as they say. But DC is sure that they’ll do so because everyone is basically good and wants what’s best (except of course for Bad People who need to have bombs dropped on their heads), which is, of course, what DC sees is good and best, because DC is obviously correct in all things, most particularly about the need for everybody to do exactly as DC says. Whether DC formally calls itself an empire or not doesn’t seem to really...
Rollory: France losing the imperial world contest with England might have been the best thing that ever happened to it. As it stands it’ll still be a close thing — all those nonwhites in Paris are there because they’re coming from former French colonies.
James James: “The present infatuation for independence for ever smaller and smaller units will eventually doubtless be succeeded by new international empires.” Interesting prediction! Glubb hasn’t mentioned empires that pretend not to be empires. Is this a new phenomenon? There’s now a DOCX version available on libgen.
Bob Sykes: As Ames pointed out many years ago, plants themselves are a significant source of carcinogens and toxicants, which evolved to offset grazing and browsing.