That’s when self-hatred starts

Tuesday, August 20th, 2019

The black critique of Brown v. Board of Education starts with some of the psychological research, Malcolm Gladwell explains:

Well, the great book on this is Daryl Scott’s Contempt and Pity. He’s a very good black historian at Howard [University], I believe. Yes, he’s the chair of history at Howard. And he has much to say. When I was doing this season of my podcast, I got quite taken with the black critique of Brown [v. Board of Education]. And the black critique of Brown starts with some of that psychological research because the psychological research is profoundly problematic on many levels.

So what Clark was showing, and what so moved the court in the Warren decision, was this research where you would take the black doll and the white doll, and you would show them to the black kid. And you would say, “Which is the good doll?” And the black kid points to the white doll. “And which doll do you associate with yourself?” And they don’t want to answer the question. And the court said, “This is the damage done by segregation.”

Scott points out that if you actually look at the research that Clark did, the black children who were most likely to have these deeply problematic responses in the doll test were those from the North, who were in integrated schools. The southern kids in segregated schools did not regard the black doll as problematic. They were like, “That’s me. Fine.”

That result, that it was black kids, minority kids from integrated schools, who had the most adverse reactions to their own representation in a doll, is consistent with all of the previous literature on self-hatred, which starts with Jews. That literature begins with the question: where does Jewish self-hatred come from? Jewish self-hatred does not come from Eastern Europe and the ghettos. It comes from when Jewish immigrants confront and come into close conflict and contact with majority white culture. That’s when self-hatred starts, when you start measuring yourself at close quarters against the other, and the other seems so much more free and glamorous and what have you.

So, in other words, the Warren Court picks the wrong research. There are all kinds of problems caused by segregation. This happens to be not one of them. So why does the Warren Court do that? Because they are trafficking — this is Scott’s argument — they are trafficking in an uncomfortable and unfortunate trope about black Americans, which is that black American culture is psychologically damaged. That the problem with black people is not that they’re denied power, or that doors are closed to them, or that . . . no, it’s that something at their core, their family life and their psyches, has, in some way, been crushed or distorted or harmed by their history.

It personalizes the struggle. By personalizing the struggle, what the Warren Court is trying to do is to manufacture an argument against segregation that will be acceptable to white people, particularly Southern white people. And so, what they’re saying is, “Look, it’s not you that’s the problem. It’s black people. They’re harmed in their hearts, and we have to usher them into the mainstream.”

They’re not making the correct argument, which was, “You guys have been messing with these people for 200 years! Stop!” They can’t make that argument because Warren desperately wants a majority. He wants a nine-nothing majority on the court. So, instead, they construct this, in retrospect, deeply offensive argument, about how it’s all about black people carrying this . . . and using social science in a way that’s actually quite deeply problematic. It’s not what the social science said.

He favors a motorcentric view of the brain

Wednesday, August 14th, 2019

Neuroscientist Shane O’Mara has written an entire book, In Praise of Walking:

He favours what he calls a “motor-centric” view of the brain — that it evolved to support movement and, therefore, if we stop moving about, it won’t work as well.

This is neatly illustrated by the life cycle of the humble sea squirt which, in its adult form, is a marine invertebrate found clinging to rocks or boat hulls. It has no brain because it has eaten it. During its larval stage, it had a backbone, a single eye and a basic brain to enable it to swim about hunting like “a small, water-dwelling, vertebrate cyclops”, as O’Mara puts it. The larval sea squirt knew when it was hungry and how to move about, and it could tell up from down. But, when it fused on to a rock to start its new vegetative existence, it consumed its redundant eye, brain and spinal cord. Certain species of jellyfish, conversely, start out as brainless polyps on rocks, only developing complicated nerves that might be considered semi-brains as they become swimmers.

[...]

“Our sensory systems work at their best when they’re moving about the world,” says O’Mara. He cites a 2018 study that tracked participants’ activity levels and personality traits over 20 years, and found that those who moved the least showed malign personality changes, scoring lower in the positive traits: openness, extraversion and agreeableness. There is substantial data showing that walkers have lower rates of depression, too. And we know, says O’Mara, “from the scientific literature, that getting people to engage in physical activity before they engage in a creative act is very powerful. My notion — and we need to test this — is that the activation that occurs across the whole of the brain during problem-solving becomes much greater almost as an accident of walking demanding lots of neural resources.”

O’Mara’s enthusiasm for walking ties in with both of his main interests as a professor of experimental brain research: stress, depression and anxiety; and learning, memory and cognition. “It turns out that the brain systems that support learning, memory and cognition are the same ones that are very badly affected by stress and depression,” he says. “And by a quirk of evolution, these brain systems also support functions such as cognitive mapping,” by which he means our internal GPS system. But these aren’t the only overlaps between movement and mental and cognitive health that neuroscience has identified.

I witnessed the brain-healing effects of walking when my partner was recovering from an acute brain injury. His mind was often unsettled, but during our evening strolls through east London, things started to make more sense and conversation flowed easily. O’Mara nods knowingly. “You’re walking rhythmically together,” he says, “and there are all sorts of rhythms happening in the brain as a result of engaging in that kind of activity, and they’re absent when you’re sitting. One of the great overlooked superpowers we have is that, when we get up and walk, our senses are sharpened. Rhythms that would previously be quiet suddenly come to life, and the way our brain interacts with our body changes.”

From the scant data available on walking and brain injury, says O’Mara, “it is reasonable to surmise that supervised walking may help with acquired brain injury, depending on the nature, type and extent of injury — perhaps by promoting blood flow, and perhaps also through the effect of entraining various electrical rhythms in the brain. And perhaps by engaging in systematic dual tasking, such as talking and walking.”

One such rhythm, he says, is that of theta brainwaves. Theta is a pulse or frequency (seven to eight hertz, to be precise) which, says O’Mara, “you can detect all over the brain during the course of movement, and it has all sorts of wonderful effects in terms of assisting learning and memory, and those kinds of things”. Theta cranks up when we move around because it is needed for spatial learning, and O’Mara suspects that walking is the best movement for such learning. “The timescales that walking affords us are the ones we evolved with,” he writes, “and in which information pickup from the environment most easily occurs.”
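
For the curious, here is what “detecting” a 7–8 Hz band amounts to in the simplest possible terms: a minimal numpy sketch, with a synthetic signal standing in for a real recording. The sampling rate and noise level are invented for illustration; real EEG analysis involves far more than a bare FFT.

```python
import numpy as np

fs = 250                      # sampling rate in Hz, typical for EEG
t = np.arange(0, 10, 1 / fs)  # ten seconds of samples
rng = np.random.default_rng(0)

# Synthetic "recording": a 7.5 Hz theta oscillation buried in noise
signal = np.sin(2 * np.pi * 7.5 * t) + rng.normal(0, 1.0, t.size)

# Power spectrum via FFT
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Fraction of 0-40 Hz power that falls in the 7-8 Hz theta band
theta_band = (freqs >= 7) & (freqs <= 8)
broadband = (freqs > 0) & (freqs <= 40)
print(f"theta share of broadband power: "
      f"{power[theta_band].sum() / power[broadband].sum():.0%}")
```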

Essential brain-nourishing molecules are produced by aerobically demanding activity, too. You’ll get raised levels of brain-derived neurotrophic factor (BDNF) which, writes O’Mara, “could be thought of as a kind of a molecular fertiliser produced within the brain because it supports structural remodelling and growth of synapses after learning … BDNF increases resilience to ageing, and damage caused by trauma or infection.” Then there’s vascular endothelial growth factor (VEGF), which helps to grow the network of blood vessels carrying oxygen and nutrients to brain cells.

All because they thought they were on ’roids

Saturday, July 20th, 2019

Steroids work — in part because lifters expect them to work:

When someone goes “on,” they have been fully convinced that the drugs are going to make a huge difference in their training and their results. Those expectations are the critical issue though — those expectations are doing just as much work as the steroids themselves.

I’ll reference and expand briefly on two landmark studies regarding the placebo effect and steroids. If you’d like to look them up, here are the citations:

Ariel et al. (1972), “Anabolic Steroids: The Physiological Effects of Placebos,” Medicine and Science in Sports, vol. 4, 124–26.

Maganaris et al. (2000), “Expectancy effects and strength training: Do steroids make a difference?” The Sport Psychologist, vol. 14, no. 3, 272–278.

In the first study, fifteen male lifters were put on a strength training plan, and were told that the ones who made the best progress during the first phase of training on seated shoulder press, military press, and bench press (researchers confirmed for being gym-bros in lab coats. Just saying…) would be chosen to use steroids for four weeks to evaluate their effects.

So, these guys trained as hard as they could for 4 weeks to get free, legal roids. The 6 guys who made the best progress gained an average of 11kg between the three lifts, and were selected for the “steroid” trial.

They were told they were being given 10 mg/day of Dianabol, but, in fact, they were given a placebo pill.

So, they made similar gains to the first phase, right? Maybe a little extra because of the placebo effect?

Nope.

They gained an average of 45 kg (about 100 pounds) between their three lifts. They didn’t report the breakdown per lift, but that’s probably somewhere in the neighborhood of 40 pounds on the bench, and 30 apiece on seated and military press. That’s in contrast to 24 pounds TOTAL in the first four weeks between all three lifts.

All because they thought they were on ’roids.
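
The arithmetic is easy to sanity-check with a few lines of Python, using only the figures quoted above:

```python
KG_TO_LB = 2.20462

qualifying_gain_kg = 11  # average gain across the three lifts, first four weeks
placebo_gain_kg = 45     # average gain during the four fake-Dianabol weeks

print(f"qualifying phase: {qualifying_gain_kg * KG_TO_LB:.0f} lb total")  # ~24 lb
print(f"'steroid' phase:  {placebo_gain_kg * KG_TO_LB:.0f} lb total")     # ~99 lb
print(f"ratio: {placebo_gain_kg / qualifying_gain_kg:.1f}x the gains")    # ~4.1x
```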

Second example:

Eleven national level powerlifters were given a saccharin pill before they maxed on squat, bench, and deadlift. They were told that it was a fast-acting steroid.

They immediately beat their old PRs by an average of about 4–5% (and since we’re talking about national level lifters, that means we’re probably talking about at least 50–100 pounds on their total).

They were given more sham “steroids” for the next two weeks of training, after which they maxed again. Except…

Five were informed that they’d been taking a placebo the whole time, while six still believed they were taking legit steroids.

The five who knew the truth regressed back to their old “pre-steroid” maxes. They couldn’t even hit the PRs they’d set two weeks before, even though they knew that they were drug-free for those maxes too! They didn’t just fail to make more placebo gains — they lost their initial gains as well.

This was in spite of the fact that they’d reported lifting heavier weights in the gym or doing more reps with certain weights during the two intervening weeks. They knew their training was going better, they knew they’d hit bigger lifts drug-free before, but they just couldn’t put up as heavy of weights knowing that they didn’t have drugs in their systems.

The six who still thought they were juicing managed to hit new PRs again!

So, from these studies, we see people who got “steroid-like gains” in spite of the fact that they never took steroids. They merely thought they did.

Now, obviously steroids do play a role. They do, absolutely, “work.” However, we have to keep in mind that they don’t just “work” via physiological mechanisms — they also “work” by altering people’s expectations.

America is losing its grip

Thursday, July 18th, 2019

America is losing its grip — literally:

When she was a practicing occupational therapist, Elizabeth Fain started noticing something odd in her clinic: Her patients were weak. More specifically, their grip strengths, recorded via a hand-held dynamometer, were “not anywhere close to the norms” that had been established back in the 1980s.

[...]

In a study published in 2015 in The Lancet, the health outcomes of nearly 140,000 people across 17 countries were tracked over four years, via a variety of measures—including grip strength. Grip strength was not only “inversely associated with all-cause mortality”—every 5 kilogram (kg) decrement in grip strength was associated with a 17 percent risk increase—but as the team, led by McMaster University professor of medicine Darryl Leong, noted: “Grip strength was a stronger predictor of all-cause and cardiovascular mortality than systolic blood pressure.”
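
To get a feel for how that per-5-kg figure scales, here is a quick sketch. It assumes the hazard compounds multiplicatively across 5 kg steps, which is the usual way such ratios are modeled but isn’t stated in the excerpt:

```python
def hazard_multiplier(decrement_kg, hr_per_5kg=1.17):
    """All-cause mortality hazard multiplier for a given grip-strength
    decrement, assuming the quoted 17%-per-5kg figure compounds."""
    return hr_per_5kg ** (decrement_kg / 5)

for drop_kg in (5, 10, 15):
    print(f"{drop_kg} kg weaker grip -> {hazard_multiplier(drop_kg):.2f}x hazard")
# 5 kg -> 1.17x, 10 kg -> 1.37x, 15 kg -> 1.60x
```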

Grip strength has even been found to be correlated more robustly with “ageing markers” than chronological aging itself. It has become a key method of diagnosing sarcopenia, the loss of muscle mass associated with aging. Low grip strength has been linked to longer hospital stays, and in a study of hospitalized cancer patients, it was linked to “an approximate 3-fold decrease in probability of discharge alive.” In older subjects, lower grip strength has even been linked with declines in cognitive performance.

“I’ve seen people refer to it as a ‘will-to-live’ meter,” says Richard Bohannon, a professor of health studies at North Carolina’s Campbell University. Grip strength, he suggests, is not necessarily an overall indicator of health, nor is it causative—if you start building your grip strength now it does not ensure you will live longer—“but it is related to important things.” What’s more, it’s non-invasive, and inexpensive to measure. Bohannon notes that in his home-care practice, a grip strength test is now de rigueur. “I use it in basically all of my patients,” he says. “It gives you an overall sense of their status, and high grip strength is better than low grip strength.”

[Chart: Grip Strength vs. Age]

Curious about what all of that means for my own grip strength, I went out and bought a Jamar Hydraulic Hand Dynamometer, which is favored by clinicians. My strength rang in at nearly 62 kgs which, according to a chart of normative grip strengths in the Jamar’s manual, was above the mean for males 45-49, but not hugely outside the standard deviation. In that data, my age group did worse than the 20-24 age group, as you’d expect.

What was surprising was that my grip strength came in at 40 percent above that of a group of contemporary male college students that Fain measured last year. She found that a group of males aged 20-24—ages that had produced some of the peak mean grip strength scores in the 1980s tests—had a mean grip strength of just 44.7 kgs, well below my own and far below the same cohort in the 1980s, whose mean was in the low 50s. There were also significant declines in female grip strength.

I just dug out my dynamometer, and I may need to dig out my Captains of Crush grip trainers, too.

Implicit learning ability is distinct from IQ or working memory

Tuesday, July 16th, 2019

Our implicit learning ability seems to be distinct from IQ or working memory:

Priya Kalra at the University of Wisconsin-Madison and her colleagues gave 64 healthy young adults four types of tasks that required implicit learning. One involved detecting an artificial grammar (after studying a series of letter strings that all adhered to undisclosed grammatical rules, the participants had to judge which strings among a new set were “grammatical” and which were not). The second task required them to learn whether a particular group of images was going to trigger one outcome or another (and they were given feedback to help them learn). For the third task, they had to predict where a circle was going to appear on a screen, based on prior experience, during which the circle sometimes appeared in a predictable sequence of positions and sometimes did not. Finally, they had to learn visual categories implicitly: with the help of feedback, they had to classify abstract visual stimuli into one of two categories. (Explicit learning could have fed into some of these tasks, but the researchers made efforts to investigate, and take into account, its contribution for each individual.)

One week later, the participants returned to complete different versions of all these tasks, as well as tests of working memory, explicit learning (they had to deliberately learn a list of words) and IQ.

For three of the four implicit learning tasks, the researchers found a “medium” level relationship between a participant’s initial performance and how well they did a week later. This suggests stability in implicit learning ability. The exception was the artificial grammar task; the researchers think it’s possible that explicit learning “contaminated” implicit learning in this task at the second time-point.

The team also found that how good a participant was at implicit learning bore no relation to their IQ or working memory results. It seems, then, to be driven by neural processes independent of those that underpin explicit learning, which is linked to IQ.
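
The two statistics being described, stable test-retest performance within implicit learning and a near-zero correlation with IQ, are easy to picture with made-up data. The numbers below are invented for illustration, not the study’s:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 64  # the study's sample size

# Invented data: both sessions share a latent implicit-learning ability;
# IQ is generated independently of that ability.
ability = rng.normal(0, 1, n)
session1 = ability + rng.normal(0, 1, n)
session2 = ability + rng.normal(0, 1, n)
iq = rng.normal(100, 15, n)

r_retest, _ = pearsonr(session1, session2)  # lands near 0.5: "medium" stability
r_iq, _ = pearsonr(session1, iq)            # lands near zero: no relation
print(f"test-retest r = {r_retest:.2f}; implicit-vs-IQ r = {r_iq:.2f}")
```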

This finding fits with earlier work that has tied explicit and implicit learning to different brain regions and networks. (The hippocampus is important for explicit but not implicit learning, for example, whereas damage to the basal ganglia and cerebellum impairs implicit, but not explicit, learning.)

The new results have implications for theories that intelligence depends upon a single fundamental factor, such as processing speed, the researchers write. “These data … provide evidence for the existence of a completely uncorrelated cognitive ability,” they added.

The findings also imply that someone might feasibly be smart, as measured by an IQ test, but poorer at implicit learning than someone else with a significantly lower IQ score.

This might explain all kinds of things, from book-smart nerds with no common sense, to Jared Diamond’s acquaintances in Papua New Guinea.

The numbers used to assess health are not helpful

Friday, July 12th, 2019

The numbers used to assess health are, for the most part, not helpful, but other, simpler metrics are:

The speed at which you walk, for example, can be eerily predictive of health status. In a study of nearly 35,000 people aged 65 years or older in the Journal of the American Medical Association, those who walked at about 2.6 feet per second over a short distance — which would amount to a mile in about 33 minutes — were likely to hit their average life expectancy. With every speed increase of around 4 inches per second, the chance of dying in the next decade fell by about 12 percent. (Whenever I think about this study, I start walking faster.)
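
The unit conversions are easy to verify, and the increments are worth seeing stacked up. A minimal sketch, assuming the 12 percent reductions compound multiplicatively across increments (my assumption, not the study’s stated model):

```python
FEET_PER_MILE = 5280

base_fps = 2.6  # feet per second
print(f"{base_fps} ft/s is a {FEET_PER_MILE / base_fps / 60:.1f}-minute mile")

# Each extra 4 in/s (one third of a ft/s) cuts ten-year mortality ~12%,
# assumed here to compound across increments:
for steps in (1, 2, 3):
    speed = base_fps + steps / 3
    print(f"{speed:.2f} ft/s -> {0.88 ** steps:.2f}x baseline ten-year risk")
```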

Walking speed isn’t unique. Studies of simple predictors of longevity like these come out every couple of years, building up a cadre of what could be called alternative vital signs. In 2018, a study of half a million middle-aged people found that lung cancer, heart disease, and all-cause mortality were well predicted by the strength of a person’s grip.

Yes, how hard you can squeeze a grip meter. This was a better predictor of mortality than blood pressure or overall physical activity. A prior study found that grip strength among people in their 80s predicted the likelihood of making it past 100. Even more impressive, in a study of 18-year-olds in the Swedish military, grip strength had good predictive ability for cardiovascular death 25 years later.

Another study made headlines earlier this year for declaring that push-up abilities could predict heart disease. Stefanos Kales, a professor at Harvard Medical School, noticed that the leading cause of death of firefighters on duty was not smoke inhalation, burns, or trauma, but sudden cardiac death. This is usually caused by coronary-artery disease. Even in this high-risk profession, people are most likely to die of the same thing as everyone else.

Still, the profession needed effective screening tests to define fitness for duty. Since firefighters are generally physically fit people, Kales’s lab looked at push-ups. He found that they were an even better predictor of cardiovascular disease than a submaximal treadmill test. “The results show a strong association between push-up capacity and decreased risk of subsequent cardiovascular disease,” Kales says.

You would think the drive to move to these new metrics would come from their effectiveness and efficiency:

This is driven in part by the Americans With Disabilities Act, which mandates that people not be discriminated against in occupational settings based on BMI or age.

This estimate caught my eye:

Granted, Joyner and other experts I heard from estimated that the share of Americans who can do a single push-up is likely only about 20 or 30 percent.

This broken gene may explain humans’ endurance

Tuesday, July 2nd, 2019

A “broken” gene may explain humans’ endurance:

Some clues came 20 years ago, when Ajit Varki, a physician-scientist at the University of California, San Diego (UCSD), and colleagues unearthed one of the first genetic differences between humans and chimps: a gene called CMP-Neu5Ac Hydroxylase (CMAH). Other primates have this gene, which helps build a sugar molecule called sialic acid that sits on cell surfaces. But humans have a broken version of CMAH, so they don’t make this sugar, the team reported. Since then, Varki has implicated sialic acid in inflammation and resistance to malaria.

In the new study, Varki’s team explored whether CMAH has any impact on muscles and running ability, in part because mice bred with a muscular dystrophy–like syndrome get worse when they don’t have this gene. UCSD graduate student Jonathan Okerblom put mice with either a normal or a broken version of CMAH (the broken one akin to the human version) on small treadmills. UCSD physiologist Ellen Breen closely examined their leg muscles before and after running different distances, some after 2 weeks and some after 1 month.

After training, the mice with the human version of the CMAH gene ran 12% faster and 20% longer than the other mice, the team reports today in the Proceedings of the Royal Society B. “Nike would pay a lot of money” for that kind of increase in performance in their sponsored athletes, Lieberman says.

The team discovered that the “humanized” mice had more tiny blood vessels branching into their leg muscles, and — even when isolated in a dish — the muscles kept contracting much longer than those from the other mice. The humanlike mouse muscles used oxygen more efficiently as well. But the researchers still have no idea how the sugar molecule affects endurance, as it serves many functions in a cell.

The true identity of this snake has been a puzzle

Monday, July 1st, 2019

I’ve been listening to Stephen Fry’s narrations of the Sherlock Holmes stories, and I came to “The Adventure of the Speckled Band,” where the murder weapon is — spoiler alert! — a swamp adder:

The name swamp adder is an invented one, and the scientific treatises of Doyle’s time do not mention any kind of adder of India. To fans of Sherlock Holmes who enjoy treating the stories as altered accounts of real events, the true identity of this snake has been a puzzle since the publication of the story, even to professional herpetologists. Many species of snakes have been proposed for it, and Richard Lancelyn Green concludes that the Indian cobra (Naja naja) is the snake it most closely resembles, rather than Boa constrictor, which is not venomous. The Indian cobra has black and white speckled marks, and is one of the most lethal of the Indian venomous snakes, with a neurotoxin that will often kill in a few minutes. It is also a good climber and is used by snake charmers in India. Snakes are deaf in the conventional sense but can sense vibrations and low-frequency airborne sounds, making it remotely plausible to signal a snake by whistling.

In The Adventures of Sherlock Holmes and Dr. Watson, the deafness inconsistency (though not the others) was resolved by having Dr. Roylott (suspecting the deafness of snakes) softly knock on the wall in addition to whistling. While snakes are deaf, they are sensitive to vibration.

Bitis arietans from Africa, Russell’s viper, and the saw-scaled viper also bear resemblance to the swamp adder of the story, but their venoms are hemotoxic and slow-acting.

The herpetologist Laurence Monroe Klauber proposed, in a tongue-in-cheek article which blames Dr. Watson for getting the name of the snake wrong, a theory that the swamp adder was an artificial hybrid between the Mexican Gila monster (Heloderma suspectum) and Naja naja. His speculation suggests that Doyle might have hidden a double-meaning in Holmes’ words. What Holmes said, reported by Watson, was “It is a swamp adder, the deadliest snake in India”; but Klauber suggested what Holmes really said was “It is a samp-aderm, the deadliest skink in India.” Samp-aderm can be translated “snake-Gila-monster”: Samp is Hindi for snake, and the suffix aderm is derived from heloderm, the common or vernacular name of the Gila monster generally used by European naturalists. Skinks are lizards of the family Scincidae, many of which are snake-like in form. Such a hybrid reptile would have a venom incomparably strengthened by hybridization, assuring the almost instant demise of the victim. And it would also have ears like any lizard, so it could hear the whistle, and legs and claws allowing it to run up and down the bell cord with swift ease.

Type A blood converted to universal-donor blood

Saturday, June 29th, 2019

To up the supply of universal-donor blood, scientists have tried transforming the second most common blood, type A, by removing its “A-defining” antigens:

But they’ve met with limited success, as the known enzymes that can strip the red blood cell of the offending sugars aren’t efficient enough to do the job economically.

After 4 years of trying to improve on those enzymes, a team led by Stephen Withers, a chemical biologist at the University of British Columbia (UBC) in Vancouver, Canada, decided to look for a better one among human gut bacteria. Some of these microbes latch onto the gut wall, where they “eat” the sugar-protein combos called mucins that line it. Mucins’ sugars are similar to the type-defining ones on red blood cells.

So UBC postdoc Peter Rahfeld collected a human stool sample and isolated its DNA, which in theory would include genes that encode the bacterial enzymes that digest mucins. Chopping this DNA up and loading different pieces into copies of the commonly used lab bacterium Escherichia coli, the researchers monitored whether any of the microbes subsequently produced proteins with the ability to remove A-defining sugars.

At first, they didn’t see anything promising. But when they tested two of the resulting enzymes at once — adding them to substances that would glow if the sugars were removed — the sugars came right off. The enzymes also worked their magic in human blood. The enzymes originally come from a gut bacterium called Flavonifractor plautii, Rahfeld, Withers, and their colleagues report today in Nature Microbiology. Tiny amounts added to a unit of type A blood could get rid of the offending sugars, they found. “The findings are very promising in terms of their practical utility,” Narla says. In the United States, type A blood makes up just under one-third of the supply, meaning the availability of “universal” donor blood could almost double.

Velocity is strangling baseball

Thursday, June 27th, 2019

Velocity is strangling baseball:

Baseball’s timeless appeal is predicated upon an equilibrium between pitching and hitting, and in the past, when that equilibrium has been thrown off, the game has always managed, either organically or through small tweaks, to return to an acceptable balance.

But there is growing evidence that essential equilibrium has been distorted by the increasing number of pitchers able to throw the ball harder and faster.

[...]

The 2018 season was the first in history in which strikeouts outpaced hits, a trend that has accelerated so far in 2019. The ball is in play less than ever, with a record 35.4 percent of plate appearances in 2019 resulting in a strikeout, walk or home run. Teams are using an average of 3.3 relievers per game in 2019, just below last year’s all-time record of 3.4. The leaguewide batting average of .245 in 2019 is the lowest since 1972 and a drop of 26 points from 1999, at the height of the steroids era. The leaguewide strikeout rate of 8.78 per nine innings, also a record, is higher than the career rate of Roger Clemens.

[...]

Most, if not all, of this change can be traced back to the rising velocity of the fastball — the fundamental unit of pitching — from a leaguewide average of 89 mph in 2002, when FanGraphs first recorded data, to 92.9 mph so far this season. At the upper end of the spectrum, the shift is even more striking: In 2008, there were 196 pitches thrown at 100 mph or higher, according to Statcast data. In 2018, there were 1,320, a nearly sevenfold increase. In 2008, only 11 pitchers averaged 95 mph or higher; in 2018, 74 did. Aroldis Chapman of the New York Yankees and Jordan Hicks of the St. Louis Cardinals have both been clocked at 105 mph.

[...]

Here, via Statcast, are the slash-lines (batting average/on-base percentage/slugging percentage) of MLB hitters in 2018 against four different pitch-speeds:

• Vs. 92 mph: .283/.364/.475
• Vs. 95 mph: .259/.342/.421
• Vs. 98 mph: .223/.310/.329
• Vs. 101 mph: .198/.257/.214
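
One handy way to collapse those slash lines into a single number is OPS (on-base percentage plus slugging percentage); a couple of lines over the figures above show how steep the drop-off is:

```python
# Slash lines quoted above: pitch speed -> (AVG, OBP, SLG)
slash_lines = {
    92: (.283, .364, .475),
    95: (.259, .342, .421),
    98: (.223, .310, .329),
    101: (.198, .257, .214),
}

for mph, (avg, obp, slg) in slash_lines.items():
    print(f"vs {mph} mph: AVG = {avg:.3f}, OPS = {obp + slg:.3f}")
# OPS falls .839 -> .763 -> .639 -> .471 as velocity climbs
```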

[...]

One seeming contradiction is that fastball usage, as a percentage of overall pitches, has been steadily decreasing, from 64.4 percent of all pitches in 2002 to just 52.8 percent so far this year. But that doesn’t mean pure velocity is any less effective — it merely indicates teams have learned to dole out fastballs in more effective patterns. The simple threat of a 99-mph fastball makes the 92-mph slider or the 90-mph change-up that much more effective.

[...]

In a 2018 study headed by former Red Sox trainer Mike Reinold, pitchers who went through a six-week velocity training program featuring weighted balls increased their velocity by an average of more than two mph but were “substantially” more likely to suffer arm injuries than those in the control group.

[...]

In 1893, when the mound was moved back 10 feet to its current distance, the change resulted in a 35-point jump in batting average and a 34 percent drop in strikeouts. By comparison, lowering the mound from 15 inches to 10 inches in 1969 resulted in more modest changes: an 11-point rise in batting average and a 2 percent drop in strikeouts.

Free throws should be easy

Wednesday, June 26th, 2019

Free throws should be easy, right?

For decades, elite players in the NBA, WNBA, and NCAA have averaged between 70 and 75 percent from the foul line. Most of basketball’s sharpest shooters top out in the high eighties, with [Steve] Nash being one of only two NBA players to retire with a career average above 90 percent. His consistency at the line raises some questions: For starters, why isn’t everyone else better? But also: If Nash can show up unpracticed, four years after retirement, and drain 98 percent of his free throws in an impromptu shootout against a ham-handed journalist, what kept him from shooting that reliably during his career?

On paper, the free throw could not be more straightforward. It’s a direct, unguarded shot at a hoop 18 inches across, 10 feet off the ground, and 15 feet away. Like a carefully controlled experiment, the conditions are exactly the same every single time. Larry Silverberg, a dynamicist at North Carolina State University, has used this fact to study the free throw in remarkable detail. “It’s the same for every single player, so you can actually look at the shot very scientifically,” he says.

An expert in the modeling of physical phenomena, Silverberg has examined the physics of the free throw for 20 years, using computers to simulate the trajectories of millions of shots. His findings show that a successful free throw boils down to four parameters: the speed at which you release the ball, how straight you shoot it, the angle at which it leaves your hand, and the amount of backspin that you place on it.

[...]

The ideal rate of spin is three backward rotations per second; since the ball’s trip from a player’s hand to the hoop takes about a second, that works out to roughly three rotations in flight. (That spin buys you some wiggle room, in the event you over- or under-shoot.) The best angle of trajectory is between 46 and 54 degrees from the horizon, depending on your height. The most advantageous release angle for a given shooter also corresponds to their lowest launch speed—a relationship that helps explain why shots that go in often feel like they require less effort than shots that don’t. As Nash describes it: “There’s no strain, there’s no forcing, there’s no flicking at the rim, there’s just a really smooth stroke.”
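
Those parameters invite a back-of-the-envelope check. Here is a minimal sketch that ignores spin and drag (which Silverberg’s simulations do not): basic projectile kinematics gives the release speed a given launch angle requires. The distance to the rim’s center and the release height below are my assumptions, not figures from the article.

```python
import math

G = 32.174  # gravitational acceleration, ft/s^2

def release_speed(angle_deg, dist_ft=13.75, rise_ft=3.0):
    """Release speed (ft/s) for a no-spin, no-drag shot through the hoop
    center. dist_ft is the horizontal distance to the rim center and
    rise_ft the rim height minus release height; both are assumptions
    (15 ft line, 10 ft rim, ~7 ft release for an average shooter)."""
    theta = math.radians(angle_deg)
    denom = 2 * math.cos(theta) ** 2 * (dist_ft * math.tan(theta) - rise_ft)
    return math.sqrt(G * dist_ft ** 2 / denom)

# The required speed bottoms out near the middle of the 46-54 degree
# window, echoing the point that the best angle needs the least effort.
for angle in (46, 50, 54):
    print(f"{angle} deg -> {release_speed(angle):.1f} ft/s")
```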

The best free throw shooter on earth isn’t a pro basketball player, but Bob Fisher, a 62-year-old soil-conservation technician from Centralia, Kansas:

“I played high school basketball, and I played recreationally till I was 44.” A few years later, in his early 50s, he started practicing free throws every day at his local gym. That was September 2009. Within a couple of months he was consistently sinking more than 100 shots in a row. In January 2010 he set his first world record. Since then, his speed and accuracy from the foul line have garnered him an additional 24 Guinness titles.

Fisher happily shares the secrets to his success. He attributes his accuracy and precision to something he calls the centerline technique (it involves aligning the lower palm and middle finger with the rim of the basket), the details of which he has recounted in a book and instructional video. His consistency he attributes to preparation. For years, Fisher has spent hours a day refining his shot. “All it takes to become good is three things: knowledge, practice, and time,” he says.

There’s a reason we shoot better in practice than in a game:

“I think we’ve all had the experience where we can hit that shot when no one’s watching, but when all eyes are on us we fumble,” says cognitive scientist Sian Beilock, president of Barnard College and author of Choke: What the Secrets of the Brain Reveal About Getting It Right When You Have To. Beilock attributes those mistakes to something she calls paralysis by analysis: When a player overthinks a task, it interrupts the working memory they’ve established through hours of practice. Remember the hyper-coordinated movements required for releasing a free throw shot at a precise speed? They’re exactly the kind of thing that overanalysis tends to screw up. Closing the gap between training and competition, Beilock says, is a matter of practicing under conditions that simulate high-pressure scenarios: Training under a watchful eye, or competing against the clock.

They would be able to salvage the reputation of their physics community

Thursday, June 20th, 2019

In Captain America: The First Avenger, the quasi-Nazi villain Red Skull wields a cosmic cube, and I must admit that’s what came to mind when I read about the two-inch uranium cubes at the center of Nazi Germany’s nuclear program:

Several German physicists were involved in that research program; perhaps the most widely recognized was Werner Heisenberg.

Rather than working together under central leadership the way the Manhattan Project scientists eventually would, the German nuclear researchers were divided into three groups that each ran a separate series of experiments. Each was code-named after the city in which the experiments took place: Berlin (B), Gottow (G), and Leipzig (L). Although the Germans began their work nearly two years before serious US efforts began, their progress toward creating a sustained nuclear reactor was extremely slow. The reasons for the delay were varied and complex and included fierce competition over finite resources, bitter interpersonal rivalries, and ineffectual scientific management.

In the winter of 1944, as the Allies began their invasion of Germany, the German nuclear researchers were trying desperately to build a reactor that could achieve criticality. Unaware of the immense progress the Manhattan Project had made, the Germans hoped that though they were almost certainly going to lose the war, they would be able to salvage the reputation of their physics community by being the first to achieve a self-sustaining nuclear reactor.

In holding out that hope, officials moved the Berlin reactor experiments headed by Heisenberg south ahead of the Allied invasion. They eventually landed in a cave underneath a castle, shown in figure 1, in the small town of Haigerloch in southwest Germany.

[Figure 1: The B-VIII reactor entrance at the castle in Haigerloch, Germany]

In that cave laboratory Heisenberg’s team built their last experiment: B-VIII, the eighth experiment of the Berlin-based group. Heisenberg described the setup of the reactor in his 1953 book Nuclear Physics. The experimental nuclear reactor comprised 664 uranium cubes, each weighing about five pounds. Aircraft cable was used to string the cubes together in long chains hanging from a lid, as shown in figure 2. The ominous uranium chandelier was submerged in a tank of heavy water surrounded by an annular wall of graphite. That configuration was the best design the German program had achieved thus far, but it was not sufficient to achieve a self-sustaining, critical reactor.

[Figure 2: The B-VIII reactor’s 664 uranium cubes]

In 1944, as Allied forces began moving into German-occupied territory, Leslie Groves, commander of the Manhattan Project, ordered a covert mission code-named Alsos (Greek word for “groves”) to take a small number of military personnel and scientists to the front lines in Europe to gather information on the state of the German scientific program. The mission broadly aimed to gather information and potentially capture data and instrumentation from all scientific disciplines from microscopy to aeronautics. The most pressing task was to learn how far German physicists had gotten in their study of nuclear reactions. The initial leg of the Alsos mission began in Italy and moved to Germany as the Allied military forces swept south. Among the men involved in the mission was Samuel Goudsmit. After the war, he went on to be the American Physical Society’s first editor-in-chief and the founder of Physical Review Letters.

As the Allies closed in on southern Germany, Heisenberg’s scientists quickly disassembled B-VIII. The uranium cubes were buried in a nearby field, the heavy water was hidden in barrels, and some of the more significant documentation was hidden in a latrine. (Goudsmit had the dubious honor of retrieving those documents.) When the Alsos team arrived in Haigerloch in late April 1945, the scientists working on the experiment were arrested and interrogated to reveal the location of the reactor materials. Heisenberg had escaped earlier by absconding east on a bicycle under cover of night with uranium cubes in his backpack.

[...]

Many scholars have long thought that the German scientists could not have possibly created a working nuclear reactor because they did not have enough uranium to make the B-VIII reactor work. In Heisenberg’s own words, “The apparatus was still a little too small to sustain a fission reaction independently, but a slight increase in its size would have been sufficient to start off the process of energy production.” That statement was recently confirmed using Monte Carlo N-particle modeling of the B-VIII reactor core. The model showed that the rough analyses completed by the Germans in 1945 were correct: The reactor core as designed would not have been able to achieve a self-sustaining nuclear chain reaction given the amount of uranium and its configuration. But the design might have worked if the Germans had put 50% more uranium cubes in the core.
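
“Monte Carlo N-particle” refers to the MCNP code, which tracks individual neutron histories through a detailed model of the core’s geometry and materials. The gist can be caricatured in a few lines of Python: follow a neutron population generation by generation, and the reaction is self-sustaining only when the effective multiplication factor k reaches 1. This is a cartoon with made-up numbers, not reactor physics:

```python
import numpy as np

def population_after(k, start=1000, generations=20, seed=1):
    """Toy Monte Carlo criticality: every neutron in one generation
    spawns a Poisson(k) number of neutrons in the next. The real MCNP
    derives k from geometry and cross sections; here it's just an input."""
    rng = np.random.default_rng(seed)
    n = start
    for _ in range(generations):
        if n == 0:
            break
        n = int(rng.poisson(k, n).sum())
    return n

# A subcritical core (k < 1, roughly B-VIII's situation) fizzles out;
# add enough fuel to push k past 1 and the population sustains or grows.
for k in (0.85, 1.00, 1.05):
    print(f"k = {k:.2f}: {population_after(k):5d} neutrons after 20 generations")
```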

Former NFL players live longer than the general population

Tuesday, June 18th, 2019

Former NFL players live longer than the general population:

One study from 2012 found that NFL players had overall decreased mortality as well as lower cardiovascular mortality than the general population. Another paper that year also found that overall mortality in NFL players was reduced, but that they had rates of neurodegenerative mortality three times higher than the general population.

They don’t live longer than other athletes, though:

Researchers looked at data from the NFL cohort, which was a database constructed by the National Institute for Occupational Safety and Health in the ’90s and contains information on former players who participated in at least five seasons between 1959 and 1988. Weisskopf and colleagues then generated a comparable dataset for former MLB players. By then matching the 3,419 NFL players and the 2,708 MLB players to the National Death Index — which contains records and causes of deaths of U.S. citizens — the researchers compared mortality rates between the two groups.

The new work found that NFL players were about 2.5 times more likely to die from cardiovascular disease and almost three times more likely than MLB players to die from neurodegenerative disease.

[...]

Among the NFL players in the study, far more died of cardiovascular disease than neurodegenerative disease: nearly 500 versus 39, respectively.

Hermann Oberth had originally intended to build a working rocket for use in the film

Saturday, June 15th, 2019

One of the first serious science fiction movies was Fritz Lang’s Frau im Mond, or Woman in the Moon, which was released in the US as By Rocket to the Moon:

Lang, who also made Metropolis, had a personal interest in science fiction. When he returned to Germany in the late 1950s, he sold his extensive collection of Astounding Science Fiction, Weird Tales, and Galaxy magazines. Several prescient technical and operational features presented during the film’s launch sequence subsequently came into common operational use during America’s postwar space race:

  • The rocket ship Friede is fully built in a tall building and moved to the launch pad
  • As launch approaches, the launch team counts down the seconds from ten to zero (“now” was used for zero), and Woman in the Moon is often cited as the first occurrence of the “countdown to zero” before a rocket launch
  • The rocket ship blasts off from a pool of water; water is commonly used today on launch pads to absorb and dissipate the extreme heat and to damp the noise generated by the rocket exhaust
  • In space, the rocket ejects its first stage and fires its second stage rocket, predicting the development of modern multistage orbital rockets
  • The crew recline on horizontal beds to cope with the G-forces experienced during lift-off and pre-orbital acceleration
  • Floor foot straps are used to restrain the crew during zero gravity (Velcro is used today).
  • These items and the overall design of the rocket led to the film being banned in Germany by the Nazis from 1933 to 1945, due to similarities to their secret V-2 project.

Rocket scientist Hermann Oberth worked as an advisor on this movie. He had originally intended to build a working rocket for use in the film, but time and technology prevented this from happening. The film was popular among the rocket scientists in Wernher von Braun’s circle at the Verein für Raumschiffahrt (VfR). The first successfully launched V-2 rocket at the rocket-development facility in Peenemünde had the Frau im Mond logo painted on its base. Noted post-war science writer Willy Ley also served as a consultant on the film. Thomas Pynchon’s Gravity’s Rainbow, which deals with the V-2 rockets, refers to the movie, along with several other classic German silent films.

Ambiguous, longed for and desolate

Friday, June 14th, 2019

Science fiction illuminates the dreams of the new moon-rushers:

Take the origins of Pence’s reference to the “lunar strategic high ground”. In one of the first moon novels written after the second world war, Robert Heinlein’s Rocket Ship Galileo (1947), an atomic scientist and his teenage crew discover, on what they believe to be the first mission to the moon, a base from which the Third Reich’s rump intends to rain nuclear vengeance on to Earth. Heinlein, an aeronautical engineer who was one of the first American science fiction writers to gain a mainstream audience, had seen the V-2 and the Manhattan Project make real the rocket ships and superweaponry that had been his prewar stock in trade. Such authors were highly exercised by the strategic implications. In the same month that Heinlein’s book was published, John W Campbell, the preeminent American science fiction editor of the age, published an essay by his and Heinlein’s friend L Ron Hubbard on the strategic necessity of America being the first nation to build such a moonbase for its missiles. A year later Collier’s, a mass market magazine, was warning of a “Rocket Blitz from the Moon”.

The idea rode high for a decade. “He who controls the moon, controls the Earth,” General Homer A Boushey told the American press in 1958. The US air force investigated the possibility of demonstrating that control, and adding to the moon’s craters, by conducting a nuclear test on its surface, one that would be ominously and spectacularly visible to most of the world below (Carl Sagan, later to be prominent in the fight for nuclear disarmament, was one of those who worked on the project).

It did not happen. Though the Apollo programme was a crucial piece of cold war strategy, its goal was not to occupy the moon or use it as a missile base. Rather, it was to show the world the remarkable resources the US was willing to invest in advancing its technological power; the means, not the end, were the message. But Hubbard’s megalomaniacal dreams of an Earth controlled from the moon still lurk in that idea of the “strategic high ground”.

Rocket Ship Galileo used the moon not only as a way of thinking about the prospect of nuclear war but also as a way of understanding its aftermath. (“The moon people … ruined themselves. They had one atomic war too many.”)

These visions of existential dread led Arthur C Clarke to argue in Prelude to Space (1947), a novel about the preparations for a moon mission, that “atomic power makes interplanetary travel not just possible but imperative. As long as it was confined to Earth, humanity had too many eggs in one rather fragile basket.” That feeling informs dreams of space travel today. Musk, in particular, talks of war, pandemics, rebel AIs and asteroid Armageddons all making it vital for humans to become a multiplanetary species. A more junior Silicon Valley space mogul told me he wants to help build a moonbase for the same reason that, before cloud computing, he would back up his files to a second hard disk: something might happen. (Of course, such plutocratic panic feels dangerously close to the idea of a bolthole for the select.)

As active proponents of the new space age, Clarke and Heinlein realised that linking the moon only with nuclear catastrophe would be a poor sales pitch. To get the public on board, a more fertile idea was the dream of building human settlements on the moon, which could somehow be portrayed as both wonderful and mundane. In Heinlein’s short story “Space Jockey”, the problem facing the astronaut protagonist is not Ming the Merciless or a swarm of comets but the amount of time he has to spend away from home; the resolution is his decision to take a desk job in comfortably domestic Luna City, built under the surface of the moon. A teenager whines that “nothing ever happens on the moon”. This dualism of the familiar and the fantastic is epitomised in the motif of Earth playing the same role in the moon’s sky as the moon does in Earth’s, lighting the landscape’s darkness.

It is not a new insight; Galileo realised that nights on the nearside of the moon would be earthlit, just as earthly nights are moonlit. All early lunar fiction draws the reader’s attention to Earth waxing and waning in the alien sky as the clearest possible indication of the revolutionary Copernican insight. Twentieth-century heirs made a similar use of the image of worlds reversed. Earthlight (1955), Clarke’s first moon-set novel, opens with the accountant Bertram Sadler, new to the moon, looking out of his train window at the “cold glory of this ancient, empty land” illuminated by “a light tinged with blues and greens; an arctic radiance that gave no atom of heat. And that, thought Sadler, was surely a paradox, for it came from a world of light and warmth.”

Clarke’s paradox was made plain to see in the famous image Earthrise captured by Apollo 8: a world of warmth and light rising above the cold glory of ancient emptiness. The contrast was strong enough – the blasted basalts below unworldly and unappealing enough – that the colonised, normalised moon which Clarke and Heinlein had imagined fell back into the realm of fancy, if not that of the absurd.

So why does returning to the moon now seem plausible again? For one thing, China, or any other country, can put a man or woman on the moon with far less effort than it took the US in the 1960s: as a way to claim parity with a fading superpower, that relatively modest effort has obvious attractions. And as the effort involved has been reduced, the resources in the hands of private individuals have increased: Bezos may choose, in the near term, to yoke his dreams of expansion into space – unlocking untold wealth – to the more parochial ambitions of the US government. But that is convenience, not necessity. Being the richest person on the planet brings with it its own superempowerment.

Science fiction, too, has cast space travel in economic, rather than political, terms. Once again it is hard to avoid Heinlein, this time his novella The Man Who Sold the Moon (1950). Its main character is DD Harriman, a tycoon who, having made his fortune from other technologies, persuades and cons investors of all sorts to provide the further resources he needs to realise his true dream, the founding of a moon colony. After the sheer Soviet Union-surpassing, 2.5%-of-GDP scale of the Apollo effort became manifest in the 1960s, the story seemed quaint. Moon missions were the work of nations, not cigar-puffing wheeler dealers. Now it seems oddly prescient.

If strategic rivalry, existential fear and plutocratic caprice were the only narratives science fiction had lent the moon, one might feel justified in taking a dim view of the whole affair. But there is more. A lifeless world may again provide new insights into a living one, as it did with Earthrise. It is in such changed perspectives on worlds and their peoples that the true promise of science fiction surely lives. Heinlein’s most successful lunar novel, The Moon Is a Harsh Mistress (1966), is driven by a thrilling plot. But the reason it continues to be loved by many, especially in Silicon Valley, is the strange, contradictory, savage but cosy, polyamorous, Malthusian, libertarian, utopian and carceral society it conjures as its cyborg setting. Similarly, the most striking recent novel about the moon, John Kessel’s The Moon and the Other (2017), sets itself in the “Society of Cousins”, a matriarchy inspiring and troubling, idealistic, indulgent and somewhat stifling. It is, to borrow the subtitle of Ursula K Le Guin’s The Dispossessed (1974), an ambiguous utopia.

Which is as much as you can hope for. The moon, as it becomes a target for politicians, billionaires and enthusiasts inspired by science fictions past, should remain ambiguous, longed for and desolate, always the same and yet shockingly new, a strangeness sitting in the sky for all to see.