Like any editor, Stalin could be ambivalent

Sunday, July 14th, 2019

The Soviet Union, Aaron Lake Smith reminds us, was a regime founded by freelance writers and editors:

In other words, a nightmare. Pamphleteers, autodidactic theoreticians, critics, publishers of small journals, hot-take artists, takedown artists, and failed poets who’d reinvented themselves as labor organizers — fractious and at constant war with one another, literary people through and through.

If we imagine the early Soviet Union as a hierarchical publishing company, a magazine or new media outfit like The New Republic or BuzzFeed, Lenin was the founder and publisher, Trotsky was the deputy editor, and Stalin was the seemingly humble managing editor. As anyone who has worked in publishing knows, the managing editor is the hardest worker. They make sure the deadlines are met and the trains run on time. They are, above all, reliable. This particular managing editor takes no vacations, never leaves town. He lives for the work, strives to appear to be the mere executor of the will of the publisher and the company.

When the publisher becomes very sick, it is the managing editor who visits him at home to cheer him up with jokes and receive his instructions. By bringing the boss’s instructions back to the office from on high, he leverages this personal relationship and increases his authority within the organization. It’s not hard to see how Stalin’s ascent within the Bolshevik hierarchy happened. We’ve all seen this person before. When the publisher dies, no one suspects the managing editor of harboring ambitions to take over. But really, who better understands the day-to-day functioning of the organization, who better to be in charge?

Stalin was a consummate editor. He seemed to understand that the role was to sublimate ego in order to shape the world quietly in the background. Good editors know how to render themselves invisible. Stalin’s blue pencil, unlike that of other editors, glided across not just poetry chapbooks and literary journals but life itself. “Fool,” “bastard,” “scoundrel,” he wrote in the margins of Andrei Platonov’s 1931 novella, Profit, destroying Platonov’s career. “Radek, you ginger bastard, if you hadn’t pissed into the wind, if you hadn’t been so bad, you’d still be alive,” he scrawled on a male nude drawing that reminded him of Karl Radek, an editor and strategist of the October Revolution whose death he had ordered years earlier. “You need to work, not masturbate,” he wrote on another. The combination of editorial influence with the power of life and death itself resulted in absurd, nearly unbelievable situations — such as when Stalin’s old friend and comrade Nikolai Bukharin wrote him from the prison cell Stalin had put him in, begging his inquisitor for a preface to what would be his last book. “I fervently beg you not to let this work disappear… this is completely apart from my personal fate. Have pity! Not on me, on the work!”

Like any editor, Stalin could be ambivalent. “Stalin has a very particular attitude toward me,” the great Soviet writer Vasily Grossman told his daughter. “He does not send me to the camps, but he never awards me prizes.” Several times Grossman was expected to win the prestigious Stalin Prize for his celebrated novels — in one instance he had even planned the victory party, à la Hillary at the Javits Center — only to find his name mysteriously removed from the list at the last minute.

Today Grossman is best known as the author of Life and Fate, a novel often called the War and Peace of the twentieth century. The kaleidoscopic thousand-page book, which follows the middle-class Shaposhnikov family through the Second World War, is an indictment of ideological zealotry and a stark account of the horrors of Stalinism. The narrative ranges from the Great Terror to the gulag, the German camps, and Stalin’s late anti-Semitic campaigns of the 1950s, slowly building the sense that, in their lack of humanity, the Soviet and Nazi regimes became mirror images of each other. “Does human nature undergo a true change in the cauldron of totalitarian violence? Does man lose his innate yearning for freedom?” Grossman asks at a pivotal moment. “The fate of both man and the totalitarian State depend on the answer to this question.” The book was considered so dangerous that all known copies of the text were “arrested” and suppressed by the KGB in 1961, an experience that broke Grossman physically and spiritually. “They strangled me in a dark corner,” he said. After his death, a copy he had hidden with an old friend was smuggled out of Russia on microfilm and published in the West in 1980, only appearing in Russia during glasnost.

Front-seat belts have pretensioners and load limiters

Saturday, July 13th, 2019

Seat belt technology has improved over the years — but only in the front seats, which are now safer than the back seats:

For the sake of comfort, modern seat belts give a little as occupants move. But if a passenger suddenly pitches forward, a mechanism called an inertia-lock retractor will prevent the belt from completely unspooling. This device is used in both front- and rear-seat belts.

“It doesn’t get any tighter,” Mr. Zuby said. “It just stops where it is.”

Front seat belts, though, have two safety features that typically aren’t found in back: a pretensioner and a load limiter.

The pretensioner reels in a seat belt when a vehicle rapidly decelerates, pulling occupants firmly against the seat to prevent them from smashing into the steering wheel or glove compartment. The load limiter causes the belt to loosen slightly if the tension of a passenger launching forward against an unyielding belt reaches a dangerous threshold.

“The idea of a seat belt is twofold,” Mr. Zuby said. “Pretensioners take out slack before the occupant pushes into the belt. Load limiters allow the belt to pay out to make sure the forces that keep you with the car don’t get high enough to injure you, in particular your chest.”

To see what happens in frontal crashes—when seat belts offer the most protection—the Insurance Institute examined injuries to 117 belted rear-seat passengers in collisions that occurred from 2004 through 2015. The occupants in the study, which was published in April, were age 6 or older. The vehicles were model year 2000 or later and were no more than 10 years old at the time of the crash.

Thirty-six of the rear-seat passengers were seriously injured and 81 were killed. More than half were more seriously injured than front-seat passengers in the same crash.

[...]

Since then, a 2013 study by NHTSA has found that front-seat occupants of passenger vehicles wearing seat belts with pretensioners and load limiters had a 12.8% lower fatality risk than occupants restrained by front-seat belts without these technologies.

The numbers used to assess health are not helpful

Friday, July 12th, 2019

The numbers used to assess health are, for the most part, not helpful, but other, simpler metrics are:

The speed at which you walk, for example, can be eerily predictive of health status. In a study of nearly 35,000 people aged 65 years or older in the Journal of the American Medical Association, those who walked at about 2.6 feet per second over a short distance — which would amount to a mile in about 33 minutes — were likely to hit their average life expectancy. With every speed increase of around 4 inches per second, the chance of dying in the next decade fell by about 12 percent. (Whenever I think about this study, I start walking faster.)

Walking speed isn’t unique. Studies of simple predictors of longevity like these come out every couple of years, building up a cadre of what could be called alternative vital signs. In 2018, a study of half a million middle-aged people found that lung cancer, heart disease, and all-cause mortality were well predicted by the strength of a person’s grip.

Yes, how hard you can squeeze a grip meter. This was a better predictor of mortality than blood pressure or overall physical activity. A prior study found that grip strength among people in their 80s predicted the likelihood of making it past 100. Even more impressive, grip strength had good predictive ability in a study among 18-year-olds in the Swedish military on cardiovascular death 25 years later.

Another study made headlines earlier this year for declaring that push-up abilities could predict heart disease. Stefanos Kales, a professor at Harvard Medical School, noticed that the leading cause of death of firefighters on duty was not smoke inhalation, burns, or trauma, but sudden cardiac death. This is usually caused by coronary-artery disease. Even in this high-risk profession, people are most likely to die of the same thing as everyone else.

Still, the profession needed effective screening tests to define fitness for duty. Since firefighters are generally physically fit people, Kales’s lab looked at push-ups. He found that they were an even better predictor of cardiovascular disease than a submaximal treadmill test. “The results show a strong association between push-up capacity and decreased risk of subsequent cardiovascular disease,” Kales says.

You would think the drive to move to these new metrics would come from their effectiveness and efficiency:

This is driven in part by the Americans With Disabilities Act, which mandates that people not be discriminated against in occupational settings based on BMI or age.

This estimate caught my eye:

Granted, Joyner and other experts I heard from estimated that the number of Americans who can do a single push-up is likely only about 20 or 30 percent.

How to fight a war in space (and get away with it)

Thursday, July 11th, 2019

We depend on satellites, so knocking them out is becoming a military priority:

Today, much more civilian infrastructure relies on GPS and satellite communications, so attacks on them could lead to chaos. The military leans more heavily on satellites too: data and video feeds for armed UAVs, such as the Reaper drones that the US military has flying over Afghanistan and Iraq, are sent via satellite to their human operators. Intelligence and images are also collected by satellites and beamed to operations centers around the world. In the assessment of Chinese analysts, space is used for up to 90% of the US military’s intelligence.

[...]

Non-state actors, as well as more minor powers like North Korea and Iran, are also gaining access to weapons that can bloody the noses of much larger nations in space.

That doesn’t necessarily mean blowing up satellites. Less aggressive methods typically involve cyberattacks to interfere with the data flows between satellites and the ground stations. Some hackers are thought to have done this already.

For example, in 2008, a cyberattack on a ground station in Norway let someone cause 12 minutes of interference with NASA’s Landsat satellites. Later that year, hackers gained access to NASA’s Terra Earth observation satellite and did everything but issue commands.

[...]

There are strong suspicions that Russia has been jamming GPS signals during NATO exercises in Norway and Finland, and using similar tactics in other conflicts. “Russia is absolutely attacking space systems using jammers throughout the Ukraine,” says Weeden. Jamming is hard to distinguish from unintentional interference, making attribution difficult (the US military regularly jams its own communications satellites by accident). A recent report from the US Defense Intelligence Agency (DIA) claims that China is now developing jammers that can target a wide range of frequencies, including military communication bands. North Korea is believed to have bought jammers from Russia, and insurgent groups in Iraq and Afghanistan have been known to use them too.

Spoofing, meanwhile, puts out a fake signal that tricks GPS or other satellite receivers on the ground. Again, it’s surprisingly easy. In the summer of 2013, some students at the University of Texas used a briefcase-sized device to spoof a GPS signal and cause an $80 million private yacht to veer hundreds of meters off course in the Mediterranean. Their exploit wasn’t detected (they later announced it themselves).

[...]

There’s no evidence that anyone has yet used lasers to destroy targets in space, though aircraft-borne lasers have been tested against missiles within the atmosphere. The DIA report suggests that China will have a ground-based laser that can destroy a satellite’s optical sensors in low Earth orbit as early as next year (and that will, by the mid-2020s, be capable of damaging the structure of the satellite). Generally, the intention with lasers is not to blast a satellite out of the sky but to overwhelm its image sensor so it can’t photograph sensitive locations. The damage can be temporary, unless the laser is powerful enough to make it permanent.

Lasers need to be aimed very precisely, and to work well they require complex adaptive optics to make up for atmospheric disturbances, much as some large ground-based telescopes do. Yet there is some evidence, all unconfirmed and eminently deniable, that they are already being used. In 2006, US officials claimed that China was aiming lasers at US imaging satellites passing over Chinese territory.

[...]

In November 2016, the Commercial Spaceflight Center at AGI, an aerospace firm, noticed something strange. Shortly after it was launched, a Chinese satellite, supposedly designed to test high-performance solar cells and new propellants, began approaching a number of other Chinese communications satellites, staying in orbit near them before moving on. It got within a few miles of one—dangerously close in space terms. It paid visits to others in 2017 and 2018. Another Chinese satellite, launched last December, released a second object once it reached geostationary orbit that seemed to be under independent control.

The suspicion is that China is practicing for something known as a co-orbital attack, in which an object is sent into orbit near a target satellite, maneuvers itself into position, and then waits for an order. Such exercises could have less aggressive purposes—inspecting other satellites or repairing or disposing of them, perhaps. But co-orbiting might also be used to jam or snoop on enemy satellites’ data, or even to attack them physically.

Russia, too, has been playing about in geostationary orbit. One of its satellites, Olymp-K, began moving about regularly, at one point getting in between two Intelsat commercial satellites. Another time, it got so close to a French-Italian military satellite that the French government called it an act of “espionage.” The US, similarly, has tested a number of small satellites that can maneuver around in space.

Fortnite’s dominance is ebbing

Wednesday, July 10th, 2019

The Wall Street Journal takes a look at the man behind Fortnite:

By age 30, Epic Games Inc. founder and CEO Tim Sweeney had a couple of successful videogames under his belt and was starting to make real money.

“I had a Ferrari and a Lamborghini in the parking lot of my apartment,” he recalled. “People who hadn’t met me thought I must be a drug dealer.”

Today, Mr. Sweeney, at 48, is worth more than $7 billion, according to Bloomberg’s Billionaires Index. Epic was last valued at $15 billion, counting Walt Disney Co. and China’s Tencent Holdings PLC among its investors. And “Fortnite,” its blockbuster game, has racked up 250 million players and $3.9 billion in estimated revenue.

[...]

While the biggest U.S. videogame companies are clustered in Los Angeles, New York and the Bay Area, Epic is based in Cary, N.C., down the road from Raleigh. Mr. Sweeney said the location prevents Epic from being swayed by Silicon Valley groupthink.

[...]

Epic tried something different. It made “Fortnite” free and put it on every major device people use to play games — consoles, computers, smartphones and tablets. It put its own spin on a trendy new genre called Battle Royale, where a large group of players fight until only one person or squad is left standing. It constantly tweaked the game’s virtual world to give players something new to discover. And it took the popular shooter format and made it less violent and more playful, with colorful characters who compete with dance moves as well as firearms.

[...]

By erasing the barriers between players with different devices, Epic effectively turned “Fortnite” into a massive social network. Wearing headsets to talk to one another, groups of friends trade jokes and gossip while battling to survive.

[...]

Mr. Sweeney founded Epic in 1991 from his parents’ basement, at age 20, funding it with $4,000 in personal savings. He later dropped out of the University of Maryland a few credits shy of a mechanical-engineering degree. “I went from mowing lawns to being CEO of Epic,” said Mr. Sweeney, who got his diploma in 2018.

In its early years, the company had some success with a handful of games, including “Unreal Tournament” and “Gears of War,” that followed more traditional shoot-’em-up formats.

[...]

Today, “Fortnite’s” dominance is ebbing. Monthly revenue from sales of virtual perks such as costumes and dance moves for players’ avatars has fallen 56% since peaking at a record $372.2 million in December, according to Nielsen’s SuperData.

All the hand-wringing about getting into good colleges is probably a waste of time

Wednesday, July 10th, 2019

Scott Alexander looks at increasingly competitive college admissions and ends with this summary:

  1. There is strong evidence for more competition for places at top colleges now than 10, 50, or 100 years ago. There is medium evidence that this is also true for upper-to-medium-tier colleges. It is still easy to get into medium-to-lower-tier colleges.
  2. Until 1900, there was no competition for top colleges, medical schools, or law schools. A secular trend towards increasing admissions (increasing wealth + demand for skills?) plus two shocks from the GI Bill and the Vietnam draft led to a glut of applicants that overwhelmed schools and forced them to begin selecting applicants.
  3. Changes up until ten years ago were because of a growing applicant pool, after which the applicant pool (both domestic and international) stopped growing and started shrinking. Increased competition since ten years ago does not involve applicant pool size.
  4. Changes after ten years ago are less clear, but the most important factor is probably the ease of applying to more colleges. This causes an increase in applications-per-admission which is mostly illusory. However, part of it may be real if it means students are stratifying themselves by ability more effectively. There might also be increased competition just because students got themselves stuck in a high-competition equilibrium (ie an arms race), but in the absence of data this is just speculation.
  5. Medical schools are getting harder to get into, but law schools are getting easier to get into. There is no good data for graduate schools.
  6. All the hand-wringing about getting into good colleges is probably a waste of time, unless you are from a disadvantaged background. For most people, admission to a more selective college does not translate into a more lucrative career or a higher chance of admission to postgraduate education. There may be isolated exceptions at the very top, like for Supreme Court justices.

The trees are ready to cut

Tuesday, July 9th, 2019

A new federal program in the 1980s offered farmers money to reforest depleted land:

Pine trees appealed to Mr. George. He bought loblolly seedlings and pulled his pickup into a parking lot where hands-for-hire congregated.

“We figured we’d plant trees and come back and harvest it in 30 years and in the meantime go into town to make a living doing something else,” he said.

Three decades later the trees are ready to cut, and Mr. George is learning how many other Southerners had the same idea.

A glut of timber has piled up in the Southeast. There are far more ready-to-cut trees than the region’s mills can saw or pulp. The surfeit has crushed timber prices in Mississippi, Alabama and several other states.

The volume of Southern yellow pine, used in housing and to make paper, has surged in recent decades as farmers replaced cropland with trees and as clear-cut forests were replanted. By 2020, the amount of wood growing per acre of timberland in many counties will have more than quadrupled since 1980, U.S. forestry officials estimate.

It has been a big loser for some financial investors, among them the country’s largest pension fund. The California Public Employees’ Retirement System spent more than $2 billion on Southern timberland, and harvested trees at depressed prices to pay interest on money borrowed to buy it. Calpers sold much of its land this summer at a loss. A spokeswoman for the pension fund declined to comment.

It has also been tough for the individuals and families who own much of the South’s forestland, and who had banked on its operating as a college fund or retirement account. The region has more than six million owners of at least 10 wooded acres, say academics and forestry consultants. Many of the owners were counting on forests as a long-term investment that could be replenished and passed on to heirs.

The marvel of advancing through life’s stations

Tuesday, July 9th, 2019

Much of our pop culture is made by and for folks who rate high on openness, the sort attracted to novelty — world travels, new drugs, and so forth — but not country music:

Emotional highlights of the low-openness life are going to be the type celebrated in “One Boy, One Girl”: the moment of falling in love with “the one,” the wedding day, the birth of one’s children (though I guess the song is about a surprising ultrasound). More generally, country music comes again and again to the marvel of advancing through life’s stations, and finds delight in experiencing traditional familial and social relationships from both sides. Once I was a girl with a mother, now I’m a mother with a girl. My parents took care of me, and now I take care of them. I was once a teenage boy threatened by a girl’s gun-loving father, now I’m a gun-loving father threatening my girl’s teenage boy. Etc. And country is full of assurances that the pleasures of simple, rooted, small-town lives of faith are deeper and more abiding than the alternatives.

(Hat tip to T. Greer.)

The top 20 most watched shows on Netflix include only a few “originals”

Monday, July 8th, 2019

I’m not sure I’d say that ‘Stranger Things’ helps illustrate the flaws in Netflix’s strategy:

Last year, Netflix shelled out more than $12 billion to purchase, license and produce content. This year, that figure will rise to $15 billion. It will spend $2.9 billion more on marketing. These costs come as Netflix is expected to report $20.2 billion in revenue in 2019, according to analysts surveyed by Refinitiv.

[...]

From 2012 to 2016, Netflix subscriptions in the U.S. grew about 5% each year and spiked by 10% in 2017. However, in 2018, domestic memberships only grew about 3.6%.

Internationally, Netflix has grown its subscriptions to nearly 81 million, up from just 1.86 million in 2011. Since 2015, the company has seen double digit growth in this area. Altogether, the company has just under 150 million subscribers.

Also, of the top 20 most watched shows on Netflix, six are “originals,” but only one of those is actually owned by the company, according to data from Nielsen and Pachter.

Top 20 Shows on Netflix in 2018 by Minutes

I knew I was odd, but I guess I don’t watch any of Netflix’s top shows.

They suddenly find themselves in a society that is disgustingly self-centered

Monday, July 8th, 2019

T. Greer’s life’s short course has brought him to many places, bound him to sundry peoples, and urged him to varied trades:

Yet out of the lands I’ve lived in and roles I’ve donned, none blaze in my memory like the two years I spent as a missionary for the Church of Jesus Christ. It is a shame that few who review my resume ask about that time; more interesting experiences were packed into those few mission years than in the rest of the lot combined.

To be a missionary is to confront the uncanny. You cannot serve without sounding out the weird bottoms of the human heart. But if missionary life forces you to come full contact with mankind at its most desperate and unsettled, so too it asks you to witness mankind at its most awesome and ethereal. Guilt’s blackest pit, fear’s sharpest grip, rage at its bluntest, hope at its highest, love at its longest and fullest — to serve as a missionary is to be thrust in the midst of the full human panorama, with all of its foulness and all of its glory. I doubt I shall ever experience anything like it again. I cannot value its worth. I learned more of humanity’s crooked timbers in the two years I lived as missionary than in all the years before and all the years since.

Attempting to communicate what missionary life is like to those who have not experienced it themselves is difficult. You’ll notice my opening paragraph restricted itself to broad generalities; it is hard to move past that without cheapening or trivializing the experience.

Yet there is one segment of society that seems to get it. In the years since my service, I have been surprised to find that the one group of people who consistently understands my experience are soldiers. In many ways a Mormon missionary is asked to live something like a soldier: like a soldier, missionaries go through an intense ‘boot camp’ experience meant to reshape their sense of self and duty; are asked to dress and act in a manner that erodes individuality; are ‘deployed’ in far-flung places that leave them isolated from their old friends, family members, and community; are pushed into contact with the full gamut of human personality in their new locales; live within a rigid hierarchy, follow an amazing number of arcane rules and regulations, and hold themselves to insane standards of diligence, discipline, and obedience; and spend years doing a job which is not so much a job as it is an all-encompassing way of life.

The last point is the one most salient to this essay. It is part of the reason both many ex-missionaries (known as “RMs” or “Return Missionaries” in Mormon lingo) and many veterans have such trouble adapting to life when they return to their homes. This comparison first occurred to me several years ago, when I read a Facebook comment left by a man who had served as a Marine mechanic in Afghanistan. He was commenting on an interview Sebastian Junger had done to promote his book, Tribe: On Homecoming and Belonging.

I really enjoyed the audiobook of Tribe, by the way, but audiobooks don’t lend themselves to excerpts.

Many RMs report a sense of loss and aimlessness upon returning to “the real world.” They suddenly find themselves in a society that is disgustingly self-centered, a world where there is nothing to sacrifice or plan for except one’s own advancement. For the past two years there was a purpose behind everything they did, a purpose whose scope far transcended their individual concerns. They had given everything — “heart, might, mind and strength” — to this work, and now they are expected to go back to racking up rewards points on their credit card? How could they?

The soldier understands this question. He understands how strange and wonderful life can be when every decision is imbued with terrible meaning. Things which have no particular valence in the civilian sphere are a matter of life or death for the soldier. Mundane aspects of mundane jobs (say, those of the former vehicle mechanic) take on special meaning. A direct line can be drawn between everything he does — laying out a sandbag, turning off a light, operating a radio — and the ability of his team to accomplish their mission. Choice of food, training, and exercise before combat can make the difference between the life and death of a soldier’s comrades in combat. For good or for ill, it is through small decisions like these that great things come to pass.

In this sense the life of the soldier is not really his own. His decisions ripple. His mistakes multiply. The mission demands strict attention to things that are of no consequence in normal life. So much depends on him, yet so little is for him.

This sounds like a burden. In some ways it is. But in other ways it is a gift. Now, and for as long as he is part of the force, even his smallest actions have a significance he could never otherwise hope for. He does not live a normal life. He lives with power and purpose — that rare power and purpose given only to those whose lives are not their own.

[...]

This sort of life is not restricted to soldiers and missionaries. Terrorists obviously experience a similar sort of commitment. So do dissidents, revolutionaries, reformers, abolitionists, and so forth. What matters here is conviction and cause. If the cause is great enough, and the need for service so pressing, then many of the other things — obedience, discipline, exhaustion, consecration, hierarchy, and separation from ordinary life — soon follow. It is no accident that great transformations in history are sprung from groups of people living in just this way. Humanity is both at its most heroic and its most horrifying when questing for transcendence.

What do you call a female defender?

Sunday, July 7th, 2019

The French language has masculine and feminine genders. Somehow this has become confusing when referring to female soccer players and managers. What do you call a female defender?

The language offers at least three options: the masculine form défenseur, the feminine form défenseuse, or another feminine form défenseure, which is pronounced exactly the same as the masculine. And if you follow French coverage of the tournament, you might see all three.

In Le Monde, you would read about a défenseuse or sélectionneuse (the word used for national team managers). A dispatch from Agence France-Presse, meanwhile, will say défenseure and sélectionneure. Television networks TF1 and Canal+, which are broadcasting the tournament here, often use one form in graphics on screen, but let commentators like Mr. Lizarazu employ another during live broadcasts.

Traditionally, you use the masculine form unless you want to refer explicitly to a woman. I have no idea where this third, quasi-feminine form came from.

When it comes to questions of proper usage, the country has its own ancient authority, the 384-year-old Académie Française. Its 35 members are known as the Immortals. They are charged with sporadically producing the definitive dictionary on usage and cutting through the babble of a constantly evolving tongue. They are even issued swords.

But time moves slowly at the Académie. In 1984, as more French speakers adapted their speech to reflect a growing number of women in the workplace, the Académie felt compelled to weigh in on the topic: It ruled out any changes, preferring to stick to the masculine form, except in cases where usage had already taken root. It was important to remember, the Académie argued at the time, that there was no connection between what it called “natural gender” and “grammatical gender.”

Have you confidence in me to trust me with your watch until tomorrow?

Sunday, July 7th, 2019

The term “confidence man” appears to have been coined in 1849 during the trial of one William Thompson in New York:

A debonair thief, Thompson had a knack for ingratiating himself with complete strangers on the street and then asking, “Have you confidence in me to trust me with your watch until tomorrow?” Many did, which cost them their expensive timepieces. The much-publicized trial and the odd crime at its heart piqued the interest of Herman Melville, who reworked it eight years later for his under-appreciated high-concept final novel, The Confidence-Man. After boarding a Mississippi steamboat on April Fool’s Day, its Mephistophelean titular character adopts a succession of guises with evocative backstories and surnames (Goodman, Truman, Noble) with the aim of getting one over on fellow passengers. Spurred by self-interest and reflective of society at large, the dupes place unquestioning trust in tokens such as attire and profession, making them as complicit in the con as the perpetrator. In The Adman’s Dilemma, which used literary and cultural waypoints to chart the evolution of the common snake-oil salesman into the modern man of advertising, Paul Rutherford bleakly described Melville’s novel as “a study in deception and even a self-deception so complete that there was no possibility of redemption”.

These contests will be byzantine

Saturday, July 6th, 2019

Suez Deconstructed aims to be a historically rooted how-to manual for statecraft:

The book seeks to convey the experience of “masterminding solutions to giant international crises,” Zelikow writes, by providing “a sort of simulator that can help condition readers just a little more” before confronting their own crises. It sets up that simulation by scrambling the storytelling. First, Suez Deconstructed divides the crisis into three phases: September 1955 through July 1956, July 1956 through October 1956, and October through November of that year. In doing so, the authors hope to show that “most large problems of statecraft are not one-act plays” but instead begin as one problem and then mutate into new ones. This was the case with Suez, which began with Egypt purchasing Soviet arms and which became a multipronged battle over an international waterway. Second, the book proceeds through these phases not chronologically but by recounting the perspectives of each of the six participants: the United States, the Soviet Union, the United Kingdom, France, Israel, and Egypt. The goal — and the effect — is to deprive the reader of omniscience, creating a “lifelike” compartmentalization of knowledge and perspective.

Zelikow encourages readers to assess Suez by examining three kinds of judgments made by the statesmen during the crisis: value judgments (“What do we care about?”), reality judgments (“What is really going on?”), and action judgments (“What can we do about it?”). Asking these questions, Zelikow argues, is the best means of evaluating the protagonists. Through this structure, Suez Deconstructed hopes to provide “a personal sense, even a checklist, of matters to consider” when confronting questions of statecraft.

The book begins this task by describing the world of 1956. The Cold War’s impermeable borders had not yet solidified, and the superpowers sought the favor of the so-called Third World. Among non-aligned nations, Cold War ideology mattered less than anti-colonialism. In the Middle East, its champion was Egyptian President Gamal Abdel Nasser, who wielded influence by exploiting several festering regional disputes. He rhetorically — and, the French suspected, materially — supported the Algerian revolt against French rule. He competed with Iraq, Egypt’s pro-British and anti-communist rival. He threatened to destroy the State of Israel. And through Egypt ran the Suez Canal, which Europe depended on for oil.

Egypt’s conflict with Israel precipitated the Suez crisis. In September 1955, Nasser struck a stunning and mammoth arms deal with the Soviet Union. The infusion of weaponry threatened Israel’s strategic superiority, undermined Iraq, and vaulted the Soviet Union into the Middle East. From that point forward, Zelikow argues, the question for all the countries in the crisis (aside from Egypt, of course) became “What to do next about Nasser?”

Israel responded with dread, while Britain, France, and the United States alternated between confrontation and conciliation. Eventually, the United States abandoned Nasser, but he doubled down by nationalizing the Suez Canal. This was too much for France. Hoping to unseat Nasser to halt Egyptian aid to Algeria, it concocted a plan with Israel and, eventually, Britain for Israel to invade Egypt and for British and French troops to seize the Canal Zone on the pretense of separating Israeli and Egyptian forces. The attack began just before the U.S. presidential election and alongside a revolution in Hungary that triggered a Soviet invasion. The book highlights the Eisenhower administration’s anger at the tripartite plot. Despite having turned on Nasser, Eisenhower seethed at not having been told about the assault, bitterly opposed it, and threatened to ruin the British and French economies by withholding oil shipments.

[...]

Even so, it is possible to extract several key lessons about statecraft. Chief among them is the extent to which policymakers are informed as much by honor and will as by interest. Britain and France, for example, ultimately joined forces to invade Egypt, but they did so for different reasons and with different degrees of resolve. As Zelikow notes, in the mid-1950s, France, recently beaten in Indochina, seemed beleaguered, while Britain “still seemed big,” boasting a “far-flung network of bases and influence.” But appearances could deceive. France was led by men who “had been heroes of the resistance” during World War II and were determined to restore their country’s honor. Outwardly strong, meanwhile, Britain suffered from a gnawing sense of exhaustion.

This imbalance of morale would shape each nation’s actions during the crisis and contribute to Suez’s strange outcome. France’s Socialist-led coalition, Zelikow writes, was “driven by ideas and historical experience.” It possessed a vision of restoring French pride and a dedication to defeating what it saw as “antimodern throwbacks” in Algeria backed by a Mussolini-like figure in Cairo. It was thus undeterred when complications arose and “more creative in [its] policy designs.” But because Washington, Moscow, and Cairo all judged France by its seeming lack of material power and its recent defeats alone, they underestimated its will.

British leaders, equally eager to topple Nasser and more capable of acting independently than the French, nevertheless struggled to overcome their nation’s fatigue. Initially behind the government’s desire to punish Nasser, the British public, as the book details, “[lost] its appetite for military adventure” as diplomacy commenced. British Prime Minister Anthony Eden had long argued for the need to reconcile with anti-colonialism and with Nasser, its chief Middle Eastern apostle. The British public, tired of war, could not long support Eden’s reversal. London ultimately joined the French-Israeli strikes not so much out of conviction as to save face — avoiding the embarrassment of abandoning the demands it had made of Nasser.

The second lesson that emerges is the centrality of relationships between statesmen, which drove events just as much as, if not more than, money, power, and ideas. One of the central drivers of the war, in fact, was the bond between French and Israeli statesmen. France’s Socialist leaders had all fought in the French Resistance during World War II. They sympathized with Israel, feeling morally obligated to prevent another massacre of the Jewish people and, as one author in the book describes, viewing Israel’s struggle “as a sort of sequel” to the fight against fascism. The Israelis, many of whom were former guerilla fighters themselves, easily related to the French and appreciated their support. Paris and Jerusalem grew closer for practical reasons as well: France sought Israel’s aid in addressing the Algerian revolt. But the relationship extended beyond material interest. As one chapter relates, during French-Israeli negotiations regarding the attack on Egypt, “there was an emotional connection between [the French and Israeli leaders] that documents do not easily capture.” The affection between French and Israeli officials repeatedly propelled the war planning forward.

If intimate ties catalyzed the invasion of Egypt, so, too, did combustible ones — none more so than the rancor between Eden and Dulles. Eden detested Dulles as moralistic, legalistic, and tedious (as related in Suez Deconstructed, he once described Dulles with the quip, “Dull, Duller, Dulles”). Their mutual disregard plagued U.S.-British cooperation. At key moments, Eden believed, Dulles would intervene with a maladroit statement that would harm planning or undermine British leverage. In early October 1956, for example, Dulles stated that there were “no teeth” to the diplomatic plan that the powers had been devising and that when it came to issues of “so-called colonialism,” the United States would “play a somewhat independent role.” For Eden, feeling isolated, this statement “was in some ways the final blow,” spurring him to join the French-Israeli initiative.

The statesmen of the Suez Crisis were haunted by history as much as they were guided by pride and personality — another striking theme that surfaces in Suez Deconstructed. Zelikow begins his overview of the world in 1956 by stating that “[t]hey were a wartime generation,” nations that had “lived through conclusive, cataclysmic wars, some more than one.” Those experiences permeated their approaches to the crisis. French and British leaders could not help but see Nasser as a 1930s potentate.

[...]

It is a rare quality in world leaders to be able to draw historical analogies without fully embracing them and thereby becoming trapped by them.

[...]

The wars of the coming decades, however, are likely to look more like Suez than Berlin or Iraq. They will likely be multi-state conflicts, in which states of every size and strength play major roles. These contests will be byzantine. Like Suez, they will be local skirmishes and global crises simultaneously. They will feature webs of overlapping rivalries and alliances (and rivalries within alliances), strategic and ideological considerations at multiple levels, and high-stakes signaling amid confusion and disinformation.

How would fifty guineas for a night’s work suit you?

Friday, July 5th, 2019

I was listening to Stephen Fry’s narration of “The Adventure of the Engineer’s Thumb,” when the young (unemployed) engineer at the center of the story was offered 50 guineas for a night’s work:

The guinea was a coin of approximately one quarter ounce of gold that was minted in Great Britain between 1663 and 1814. The name came from the Guinea region in West Africa, where much of the gold used to make the coins originated. It was the first English machine-struck gold coin, originally worth one pound sterling, equal to twenty shillings, but rises in the price of gold relative to silver caused the value of the guinea to increase, at times to as high as thirty shillings. From 1717 to 1816, its value was officially fixed at twenty-one shillings.

When Britain adopted the gold standard the guinea became a colloquial or specialised term. Although the coin itself no longer circulated, the term guinea survived as a unit of account in some fields. Notable usages included professional fees (medical, legal etc), which were often invoiced in guineas, and horse racing and greyhound racing, and the sale of rams. In each case a guinea meant an amount of one pound and one shilling (21 shillings), or one pound and five pence (£1.05) in decimalised currency.

One pound in 1892 has inflated to well over 100 pounds today, so 50 guineas would be worth over 6,000 pounds in 2019.
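For what it’s worth, here is a quick sanity check of that conversion as a Python sketch; the 120× multiplier for 1892 to 2019 is my own rough assumption (consistent with “well over 100 pounds”), not a figure from the story:

```python
# Back-of-the-envelope check of the 50-guinea conversion.
# The 120x inflation multiplier for 1892 -> 2019 is an assumption,
# not a figure from the post.
guineas = 50
shillings_per_guinea = 21   # a guinea was fixed at 21 shillings
shillings_per_pound = 20

pounds_1892 = guineas * shillings_per_guinea / shillings_per_pound  # 52.5
inflation_multiplier = 120  # assumed 1892 -> 2019 factor
pounds_2019 = pounds_1892 * inflation_multiplier

print(f"{guineas} guineas = £{pounds_1892:.2f} in 1892 ≈ £{pounds_2019:,.0f} in 2019")
# -> 50 guineas = £52.50 in 1892 ≈ £6,300 in 2019
```

So the 50 guineas offered to the young engineer come out to roughly £6,000–£6,500 today, in line with the figure above.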

Happy Secession Day!

Thursday, July 4th, 2019

I almost forgot to wish everyone a happy Secession Day:

Brexit 1776