Why the Father of Modern Statistics Didn’t Believe Smoking Caused Cancer

Tuesday, September 27th, 2016

Ronald Fisher, the notoriously cantankerous father of modern statistics, was appalled when the British Medical Journal’s editorial board announced, in 1957, that the time for amassing evidence and analyzing data was over:

Now, they wrote, “all the modern devices of publicity” should be used to inform the public about the perils of tobacco.

According to Fisher, this was nothing short of statistically illiterate fear mongering.

He was right, in the narrow sense, that no one had yet proven a causal link between smoking and cancer:

Fisher never objected to the possibility that smoking caused cancer, only the certainty with which public health advocates asserted this conclusion.

“None think that the matter is already settled,” he insisted in his letter to the British Medical Journal. “Is not the matter serious enough to require more serious treatment?”

[Photo: R.A. Fisher smoking a pipe]

While most of the afflictions that had been killing British citizens for centuries were trending downward, the result of advances in medicine and sanitation, one disease was killing more and more people each year: carcinoma of the lung.

The figures were staggering. Between 1922 and 1947, the number of deaths attributed to lung cancer increased 15-fold across England and Wales. Similar trends were documented around the world. Everywhere, the primary target of the disease seemed to be men.

What was the cause? Theories abounded. More people than ever were living in large, polluted cities. Cars filled the nation’s causeways, belching noxious fumes. Those causeways were increasingly being covered in tar. Advances in X-ray technology allowed for more accurate diagnoses. And, of course, more and more people were smoking cigarettes.

Which of these factors was to blame? All of them? None of them? British society had changed so dramatically and in so many ways since the First World War, it was impossible to identify a single cause. As Fisher would say, there were just too many confounding variables.

In 1947, the British Medical Research Council hired Austin Bradford Hill and Richard Doll to look into the question.

Though Doll was not well known at the time, Hill was an obvious choice. A few years earlier, he had made a name for himself with a pioneering study on the use of antibiotics to treat tuberculosis. Just as Fisher had randomly distributed fertilizer across the fields at Rothamsted, Hill had given out streptomycin to tubercular patients at random while prescribing bed rest to others. Once again, the goal was to make sure that the patients who received one treatment were, on average, identical to those who received the other. Any large difference in outcomes between the two groups had to be the result of the drug. It was medicine’s first published randomized controlled trial.

Despite Hill’s groundbreaking work with randomization, the question of whether smoking (or anything else) causes cancer was not one you could answer with a randomized controlled trial. Not ethically, anyway.

“That would involve taking a group of say 6,000 people, selecting 3,000 at random and forcing them to smoke for 5 years, while forcing the other 3,000 not to smoke for 5 years, and then comparing the incidence of lung cancer in the two groups,” says Donald Gillies, an emeritus professor of philosophy of science and mathematics at University College London. “Clearly this could not be done, so, in this example, one has to rely on other types of evidence.”

Hill and Doll tried to find that evidence in the hospitals of London. They tracked down over 1,400 patients, half of whom were suffering from lung cancer, the other half of whom had been hospitalized for other reasons. Then, as Doll later told the BBC, “we asked them every question we could think of.”

These questions covered their medical and family histories, their jobs, their hobbies, where they lived, what they ate, and any other factor that might possibly be related to lung cancer. The two epidemiologists were shooting in the dark. The hope was that one of the many questions would touch on a trait or behavior that was common among the lung cancer patients and rare among those in the control group.
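
Comparisons of this kind are usually summarized as an odds ratio: how much more common the suspect exposure is among the cases than among the controls. Here is a minimal sketch of that calculation, with made-up counts (illustrative only, not Hill and Doll's actual figures):

```python
# Hypothetical case-control counts (illustrative, not Hill and Doll's data).
cases_smokers, cases_nonsmokers = 680, 20        # lung-cancer patients
controls_smokers, controls_nonsmokers = 620, 80  # patients hospitalized for other reasons

# Odds of being a smoker among cases, and among controls.
odds_cases = cases_smokers / cases_nonsmokers            # 34.0
odds_controls = controls_smokers / controls_nonsmokers   # 7.75

odds_ratio = odds_cases / odds_controls
print(f"odds ratio ~ {odds_ratio:.1f}")  # ~4.4: smoking far more common among the cases
```

An odds ratio well above 1 signals an association, but, as the next paragraphs show, an association alone was exactly what Fisher considered insufficient.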

At the beginning of the study, Doll had his own theory.

“I personally thought it was tarring of the roads,” Doll said. But as the results began to come in, a different pattern emerged. “I gave up smoking two-thirds of the way through the study.”

Hill and Doll published their results in the British Medical Journal in September of 1950. The findings were alarming, but not conclusive. Though the study found that smokers were more likely than non-smokers to have lung cancer, and that the prevalence of the disease rose with the quantity smoked, the design of the study still left room for Fisher’s dreaded “confounding” problem.

The problem was in the selection of the control. Hill and Doll had picked a comparison group that resembled the lung cancer patients in age, sex, approximate residence, and social class. But did this cover the entire list of possible confounders? Was there some other trait, forgotten or invisible, that the two researchers had failed to ask about?

To get around this problem, Hill and Doll designed a study where they wouldn’t have to choose a control group at all. Instead, the two researchers surveyed over 30,000 doctors across England. These doctors were asked about their smoking habits and medical histories. And then Hill and Doll waited to see which doctors would die first.

By 1954, a familiar pattern began to emerge. Among the British doctors, 36 had died of lung cancer. All of them had been smokers. Once again, the death rate increased with the rate of smoking.

The “British Doctor Study” had a distinct advantage over the earlier survey of patients. Here, the researchers could show a clear “this then that” relationship (what medical researchers call a “dose-response”). Some doctors smoked more than others in 1951. By 1954, more of those doctors were dead.

The back-to-back Doll and Hill studies were notable for their scope, but they were not the only ones to find a consistent connection between smoking and lung cancer. Around the same time, the American epidemiologists E. C. Hammond and Daniel Horn conducted a study very similar to the Hill and Doll survey of British doctors.

Their results were remarkably consistent. In 1957, the Medical Research Council and the British Medical Journal decided that enough evidence had been gathered. Citing Doll and Hill, the journal declared that “the most reasonable interpretation of this evidence is that the relationship is one of direct cause and effect.”

Ronald Fisher begged to differ.

In some ways, the timing was perfect. In 1957, Fisher had just retired and was looking for a place to direct his considerable intellect and condescension.

Neither the first nor the last retiree to start a flame war, Fisher launched his opening salvo by questioning the certainty with which the British Medical Journal had declared the argument over.

“A good prima facie case had been made for further investigation,” he wrote. “The further investigation seems, however, to have degenerated into the making of more confident exclamations.”

The first letter was followed by a second and then a third. In 1959, Fisher amassed these missives into a book. He denounced his colleagues for manufacturing anti-smoking “propaganda.” He accused Hill and Doll of suppressing contrary evidence. He hit the lecture circuit, relishing the opportunity to once again hold forth before the statistical establishment and to be, in the words of his daughter, “deliberately provocative.”

Provocation aside, Fisher’s critique came down to the same statistical problem that he had been tackling since his days at Rothamsted: confounding variables. He did not dispute that smoking and lung cancer tended to rise and fall together—that is, that they were correlated. But Hill and Doll and the entire British medical establishment had committed “an error…of an old kind, in arguing from correlation to causation,” he wrote in a letter to Nature.
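
Fisher’s objection is easy to make concrete. In the toy simulation below, a hidden factor independently raises both the chance of smoking and the chance of cancer; smoking has no effect on cancer at all, yet smokers still show an elevated cancer rate. (All probabilities are invented for illustration.)

```python
import random

random.seed(0)
n = 100_000
smokers = cancer_cases = smokers_with_cancer = 0

for _ in range(n):
    hidden = random.random() < 0.3                          # hypothetical common cause
    smokes = random.random() < (0.6 if hidden else 0.2)     # hidden factor makes smoking likelier
    cancer = random.random() < (0.10 if hidden else 0.02)   # cancer risk ignores smoking entirely
    smokers += smokes
    cancer_cases += cancer
    smokers_with_cancer += smokes and cancer

print("cancer rate among smokers:", round(smokers_with_cancer / smokers, 3))  # ~0.065
print("cancer rate overall:      ", round(cancer_cases / n, 3))               # ~0.044
```

The correlation is real; the causation is not. Distinguishing the two is exactly what the case-control and cohort designs described above were straining to do.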

Most researchers had evaluated the association between smoking and cancer and concluded that the former caused the latter. But what if the opposite were true?

What if, Fisher asked, the development of acute lung cancer was preceded by an undiagnosed “chronic inflammation”? And what if this inflammation led to a mild discomfort, but no conscious pain? If that were the case, he wrote, then one would expect those suffering from not-yet-diagnosed lung cancer to turn to cigarettes for relief. And here was the British Medical Journal suggesting that smoking be banned in movie theaters!

“To take the poor chap’s cigarettes away from him,” he wrote, “would be rather like taking away [the] white stick from a blind man.”

If that particular explanation seems like a stretch, Fisher offered another. If smoking doesn’t cause cancer and cancer doesn’t cause smoking, then perhaps a third factor causes both. Genetics struck him as a possibility.

To make this case, Fisher gathered data on identical twins in Germany and showed that twin siblings were more likely to mimic one another’s smoking habits. Perhaps, Fisher speculated, certain people were genetically predisposed to crave cigarettes.

Was there a similar familial pattern for lung cancer? Did these two predispositions come from the same hereditary trait? At the very least, researchers ought to look into this possibility before advising people to toss out their cigarettes.

And yet nobody was.

“Unfortunately, considerable propaganda is now being developed to convince the public that cigarette smoking is dangerous,” he wrote. “It is perhaps natural that efforts should be made to discredit evidence which suggests a different view.”

Though Fisher was in the minority, he was not alone in taking this “different view.” Joseph Berkson, the chief statistician at the Mayo Clinic throughout the 1940s and 50s, was also a prominent skeptic on the smoking-cancer question, as was Charles Cameron, president of the American Cancer Society. For a time, many of Fisher’s peers in academic statistics, including Jerzy Neyman, questioned the validity of a causal claim. But before long, the majority buckled under the weight of mounting evidence and overwhelming consensus.

But not Fisher. He died in 1962 (of cancer, though not of the lung). He never conceded the point.

Feed a virus, starve a bacterial infection?

Saturday, September 24th, 2016

A new study supports the folk wisdom to “feed a cold and starve a fever” — if you assume a fever is bacterial:

In the first series of experiments, the investigators infected mice with the bacterium Listeria monocytogenes, which commonly causes food poisoning. The mice stopped eating, and they eventually recovered. But when the mice were force fed, they died. The researchers then broke the food down by component and found fatal reactions when the mice were given glucose, but not when they were fed proteins or fats. Giving mice the chemical 2-DG, which prevents glucose metabolism, was enough to rescue even mice who were fed glucose and allowed them to survive the infection.

When the researchers did similar studies in mice with viral infections, they found the opposite effect. Mice infected with the flu virus A/WSN/33 survived when they were force fed glucose, but died when they were denied food or given 2-DG.

Migrant Competence

Sunday, September 18th, 2016

James Thompson explores migrant competence:

Europe is experiencing enormous inflows of people from Africa and the Middle East, and in the midst of conflicting rhetoric, of strong emotions and of a European leadership broadly in favour of taking more migrants (and sometimes competing to do so) one meme keeps surfacing: that European Jews are the appropriate exemplars of migrant competence and achievements.

European history in the 20th Century shows why present-day governments feel profound shame at their predecessors having spurned European Jews fleeing Nazi Germany. However, there are strong reasons for believing that European Jews are brighter than Europeans, and have greater intellectual and professional achievements. There may be cognitive elites elsewhere, but they have yet to reveal themselves. Expectations based on Jewish successes are unlikely to be repeated.

I am old enough to know that political decisions are not based on facts, but on presumed political advantages. The calculation of those leaders who favour immigration seems to be that the newcomers will bring net benefits, plus the gratitude and votes of those migrants, plus the admiration of some of the locals for policies which are presented as being acts of generosity, thus making some locals feel good about themselves for their altruism. One major ingredient of the leadership’s welcome to migrants is the belief that they will quickly adapt to the host country, and become long term net contributors to society. Is this true?

With Heiner Rindermann he analyzed the gaps, possible causes, and impact of The Cognitive Competences of Immigrant and Native Students across the World:

In Finland the natives had reading scores of 538, first-generation immigrants only 449, second-generation 493. The original first-generation difference of 89 points was equivalent to around 2–3 school years of progress, the second-generation difference of 45 points (1-2 school years) is still of great practical significance in occupational terms.

In contrast, in Dubai natives had reading scores of 395; first-generation immigrants 467; second-generation 503. This 105 point difference is equivalent to 16 IQ points or 3–5 years of schooling.

Rather than look at the scales separately, Rindermann created a composite score based on PISA, TIMSS and PIRLS data so as to provide one overall competence score for both the native born population and the immigrants who had settled in each particular country. For each country you can see the native versus immigrant gap. By working out what proportion of the national population are immigrants you can recalculate the national competence (IQ) for that country. Rindermann proposes that native born competences need to be distinguished from immigrant competences in national level data.
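
A rough sketch of that recalculation, assuming a simple population-weighted average and the usual PISA scale (mean 500, SD 100) set against an IQ SD of 15; the Finland reading scores are the ones quoted above, and the 5% immigrant share is purely illustrative:

```python
def national_competence(native: float, immigrant: float, immigrant_share: float) -> float:
    """Population-weighted mean of native and immigrant competence scores."""
    return (1 - immigrant_share) * native + immigrant_share * immigrant

def score_gap_to_iq_points(gap: float, score_sd: float = 100, iq_sd: float = 15) -> float:
    """Convert a PISA-style score gap into IQ points via standard deviations."""
    return gap / score_sd * iq_sd

print(national_competence(native=538, immigrant=449, immigrant_share=0.05))  # ~533.5
print(score_gap_to_iq_points(538 - 449))  # ~13: the 89-point first-generation gap in rough IQ terms
```

The larger the immigrant share and the larger the gap, the more the recalculated national figure drifts away from the native-born one, which is the point of keeping the two series separate.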

The analysis of scholastic attainments in first and second generation immigrants shows that the Gulf has gained from immigrants and Europe has lost. This is because those emigrating to the Gulf have higher abilities than the locals, those emigrating to Europe have lower ability than the locals.

The economic consequences can be calculated by looking at the overall correlations between country competence and country GDP.

[...]

The natives of the United Kingdom have a competence score of 519 (migrants to UK 499), Germany 516 (migrants to Germany 471), the United States 517 (migrants to US 489). There, in a nutshell, is the problem: those three countries have not selected their migrants for intellectual quality. The difference sounds in damages: lower ability leads to lower status, lower wages and higher resentment at perceived differences. On the latter point, if the West cannot bear to mention competence differences, then differences in outcome are seen as being due solely to prejudice.

Just bite in

Wednesday, September 14th, 2016

Group socialisation theory was Judith Rich Harris’s attempt to solve a puzzle she had encountered while writing child development textbooks for college students:

My textbooks endorsed the conventional view of child development — that what makes children turn out the way they do is ‘nature’ (their genes) and ‘nurture’ (the way their parents bring them up). But after a while it dawned on me that there just wasn’t enough solid evidence to support that view, and there was a growing pile of evidence against it. The problem was not with the ‘nature’ part — genes were having their expected effect. But ‘nurture’ wasn’t working the way it was supposed to. In studies that provided some way of controlling for or eliminating the effects of heredity, the environment provided by parents had little or no effect on how the children turned out.

And yet, genes accounted for only about 50 per cent of the variation in personality and social behaviour. The environment must be playing some role. But it wasn’t the home environment. So I proposed that the environment that has lasting effects on personality and social behaviour is the one the child encounters outside the home. This makes sense if you think about the purpose of childhood. What do children have to accomplish while they’re growing up? They have to learn how to behave in a way that is acceptable to the other members of their society. How do they do this? Not by imitating their parents! Parents are adults, and every society prescribes different behaviours for children and adults. A child who behaved like his or her parents (in any context other than a game) would be seen as impertinent, unruly or weird.

Before going on to become The Nurture Assumption, her work started out as a 1995 Psychological Review piece, which won the George A. Miller award for an outstanding article in general psychology — and there was a certain irony to that:

In 1960 I was a graduate student in the Department of Psychology at Harvard. One day I got a letter saying that the Department had decided to kick me out of their PhD programme. They doubted I would ever make a worthwhile contribution to psychology, the letter said, due to my lack of ‘originality and independence’. The letter was signed by the acting chairman of the Department, George A. Miller!

Sometimes, when life hands you a lemon, you should just bite in. Getting kicked out of Harvard was a devastating blow at the time, but in retrospect, it was the best thing that Harvard ever did for me. It freed me from the influence of ‘experts’. It kept me from being indoctrinated. Many years later, it enabled me to write The Nurture Assumption.

The Superhero Genes

Sunday, September 4th, 2016

Stanford University scientist Euan Ashley and his team are looking for the superhero genes that give elite athletes their superhuman abilities — and which may yield medical insights, too:

The data analysis will take many years—there are too many possibilities to sift through them all—but the ELITE team has already isolated some 9,200 genetic variants that may explain preternatural athletic ability. “Our first focus is on the heart,” Ashley said, “but then we’re searching for variants across the whole genome.” One early contender, flagged just before my visit, is a gene known as DUOX. A mutation in the gene essentially confers what many nutrition gurus tout as the health benefits of antioxidants, mitigating the damaging effects of our usual cellular metabolism. In the past, DUOX mutations have been identified in a very specific population: People who’ve managed to adapt to living at extremely high altitudes—in the Andes, in particular—show the mutation, suggesting a possible link to increased pulmonary function. Could DUOX-targeting therapies help in hypoxia? Could they help with tissue repair, since the amount of oxygen in wounds is a crucial factor for speed of recovery?

Then there’s NADK, a gene involved in fatty acid synthesis. If you have lowered NADK, your body could be better at using fat as fuel, making you more powerful over time. So far, two athletes in the sample have the mutation, a high hit rate given its rarity. Could this be a weight-regulating therapy in the making?

Another intriguing variant found in several athletes is RUNX3—though, as with all of these mutations, the data are quite preliminary and any conclusions likewise so. Originally, the gene came to light in cancer research. Normally, it suppresses tumors, but in mutated form the suppression function is lost and increased cellular growth ensues. If you’re an athlete, cellular growth can be good: The better your muscles and heart grow, the more quickly you respond to training. The mutation, however, can also lead to tumors. There’s a finely calibrated and fungible line between overperforming and underperforming, between what makes us healthier and what puts us at risk.

The Idea of Improvement

Sunday, August 28th, 2016

What caused innovation to accelerate in so many different industries during the British Industrial Revolution? Anton Howes suggests the emergence of an idea that was even simpler and more fundamental than systematic experimentation or Newtonian mechanics, though it was implicit to each of them — the idea of improvement itself:

I present new evidence on the sources of inspiration and innovation-sharing habits of 677 people who innovated in Britain between 1651 and 1851. The vast majority of these people — at least 80% — had some kind of contact with innovators before they themselves started to innovate. These connections were not always between members of the same industry, and innovators could improve areas in which they lacked expertise. This suggests the spread, not of particular skills or knowledge, but of an improving mentality. The persistent failure to implement some innovations for centuries before the Industrial Revolution, despite the availability of sufficient materials, knowledge, and demand, further suggests that prior societies may have failed to innovate quite simply because the improving mentality was absent. As to what made Britain special, we cannot know for sure without constructing similarly exhaustive lists of innovators for other societies. But a likely candidate is that the vast majority of innovators — at least 83% — shared innovation in some way, while only 12% tried to stifle it. Just like a religion or a political ideology, the improving mentality spread from person to person, and to be successful required effective preachers and proselytisers too.

What Reality are Trump People Living In?

Saturday, August 27th, 2016

What reality are Trump people living in?, Jer Clifton wonders aloud:

As luck would have it, I happen to be a researcher at Penn who studies the impact of primal world beliefs, which are beliefs about the nature of reality writ large such as “the world is fascinating.”

[...]

So I had this fantastic theory that Republicans would see the world as way more dangerous than Democrats. I thought that might explain Republicans’ “irrational” a) fear of criminals which manifests as interest in law and order and support for mandatory minimums, c) fear of ISIS, d) fear of Mexicans, e) fear of people coming to take their guns, f) fear of government, and g) fear of out-group members generally. At their last convention, and indeed for every single Republican debate, it seemed like candidates were always trying to out-terrorize each other (“No, I understand the great peril we are in!”…“No, no. I understand it better.”)

However, this theory was wrong. Republicans see the world as slightly more dangerous, but it’s very slight.

[...]

Let’s talk about the biggest differences, because they both make sense and don’t make sense: first hierarchical and second just.

The “hierarchical” primal concerns the nature of differences. Namely, does difference imply that something is better or worse? For those who believe that reality is hierarchical, if two things are different that tends to (not always) imply that one is better than the other. Likewise, for those who see reality as nonhierarchical, differences are likely surface and meaningless distinctions and probably distractions. Under the latter view, any attempt to organize the world into “better” or “worse” things will either fail or be inaccurate and superficial. However, for folks who see the world as hierarchical, most things can be fairly usefully ranked and ordered from better to worse. This includes objects, from knives to countries, and people, from individuals to ethnic groups. The biggest difference between Republicans and Democrats is that Republicans on average see the world as more hierarchical, or, to put it a different way, Democrats gloss over differences.

It makes sense, therefore, that the second biggest distinction between Republicans and Democrats concerns whether or not the arc of life trends towards justice. Does life find a way to reward those who do good and punish those who do bad? Is the world a place where working hard and being nice pays off? With plenty of exceptions, Republicans tend to say ‘Yes’ and Democrats say ‘No.’

[...]

Trump supporters out-Republican their Republican peers by seeing the world as even more hierarchical and just.

What does this all mean?

Those who see the world as hierarchical and just will tend to assume in small ways that successful people are better people. This might help explain infatuations with billionaires generally.

If we assume that the world is hierarchical and just, then political correctness appears foolish. PC culture is a real problem because it glosses over differences that really matter. This might explain a deep frustration on the Right about political correctness that the Left just doesn’t get.

I’ve often been confused by why Americans need to talk about their country like it’s the best country in the history of the world. But, if we assume that the world is hierarchical and just, and America is the most powerful country in the world, then it stands to reason that America is also the best. It would feel false to say, “America is unique” without also saying, “America is the best.”

Finally, if we assume that the world is hierarchical and just, then we will have more difficulty mixing with and including out-groups. Obviously, hispanic or African American culture is different than the culture of small-town white America where, according to Haidt, sanctity concerns matter more.

Gnon rewards those who follow His laws.

Primordial Pressure Cooker

Thursday, August 25th, 2016

For nearly a century, the origin of life has been traced back to a “primordial soup”:

Under the conventional theory, life supposedly began when lightning or UV rays caused simple molecules to join together into more complex compounds. This culminated in the creation of information-storing molecules similar to our own DNA, housed within the protective bubbles of primitive cells. Laboratory experiments confirm that trace amounts of molecular building blocks that make up proteins and information-storing molecules can indeed be created under these conditions. For many, the primordial soup has become the most plausible environment for the origin of the first living cells.

But life isn’t just about replicating information stored within DNA. All living things have to reproduce in order to survive, but replicating the DNA, assembling new proteins and building cells from scratch require tremendous amounts of energy.

[...]

This process works a bit like a hydroelectric dam. Instead of directly powering their core metabolic reactions, cells use energy from food to pump protons (positively charged hydrogen atoms) into a reservoir behind a biological membrane. This creates what is known as a “concentration gradient” with a higher concentration of protons on one side of the membrane than the other. The protons then flow back through molecular turbines embedded within the membrane, like water flowing through a dam. This generates high-energy compounds that are then used to power the rest of the cell’s activities.
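
For scale (textbook figures, not from the article): a typical proton-motive force across such a membrane is on the order of 150–200 mV, so each mole of protons flowing back through the membrane’s “turbines” releases roughly

$$\Delta G \approx F \cdot \Delta p \approx 96{,}485\ \mathrm{C\,mol^{-1}} \times (0.15\text{–}0.20\ \mathrm{V}) \approx 14\text{–}19\ \mathrm{kJ\,mol^{-1}},$$

which is the energy the cell then banks in high-energy compounds such as ATP.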

Life could have evolved to exploit any of the countless energy sources available on Earth, from heat or electrical discharges to naturally radioactive ores. Instead, all life forms are driven by proton concentration differences across cells’ membranes. This suggests that the earliest living cells harvested energy in a similar way and that life itself arose in an environment in which proton gradients were the most accessible power source.

Recent studies based on sets of genes that were likely to have been present within the first living cells trace the origin of life back to deep-sea hydrothermal vents. These are porous geological structures produced by chemical reactions between solid rock and water. Alkaline fluids from the Earth’s crust flow up the vent towards the more acidic ocean water, creating natural proton concentration differences remarkably similar to those powering all living cells.

The studies suggest that in the earliest stages of life’s evolution, chemical reactions in primitive cells were likely driven by these non-biological proton gradients. Cells then later learned how to produce their own gradients and escaped the vents to colonise the rest of the ocean and eventually the planet.

What are the evolutionary roots of West African sprinting and East African distance running dominance?

Thursday, August 18th, 2016

Jon Entine argues that Usain Bolt’s Olympic gold shows again why no Asian, white, or East African will ever be crowned world’s fastest human, but Razib Khan argues that Entine’s wrong — because better drugs and biological engineering mean that the fastest human alive is soon going to be non-African, probably Chinese.

Khan sees running a few seconds faster in the 100 meter dash as a non-adaptively beneficial trait, but Steve Sailer wouldn’t be surprised if the ability to outrun those who are after you and mean to do you harm were an important life skill that is highly adaptive in Darwinian terms:

For example, in 1982, when I had just moved to Chicago, I was headed into the Century Mall on N. Clark St., when a black teen rushed out, followed by two twenty-something Hispanic security guards in close pursuit. I watched them head up Clark Street with the teen in sneakers pulling away from the guards in shiny black leather shoes.

But whether sprinting ability or distance running ability is best for survival depends upon how long pursuers’ sightlines extend in your home terrain.

The shoplifter then turned left at the first corner. It occurred to me that was an important life decision he had just made: if it was a dead end he was in big trouble. But if it were a thru street then he just needed to make a series of seemingly random turns until he had lost his pursuers.

In contrast, if the pursued had headed into open grassland, his pursuers could keep him in sight for a long time, so his better sprinting ability might prove nugatory if they had more endurance.

Perhaps in forested or brush covered terrain, as in West Africa, sprinting is selected for because the pursued individual can get lost faster, while in open grassland, as in East Africa, endurance running is the surest way to get away.

Geomythology

Tuesday, August 16th, 2016

The field of geomythology relates ancient stories of great floods to real events:

Around the tsunami-prone Pacific, flood stories tell of disastrous waves that rose from the sea. Early Christian missionaries were perplexed as to why flood traditions from South Pacific islands didn’t mention the Bible’s 40 days and nights of rain, but instead told of great waves that struck without warning. A traditional story from the coast of Chile described how two great snakes competed to see which could make the sea rise more, triggering an earthquake and sending a great wave ashore. Native American stories from coastal communities in the Pacific Northwest tell of great battles between Thunderbird and Whale that shook the ground and sent great waves crashing ashore. These stories sound like prescientific descriptions of a tsunami: an earthquake-triggered wave that can catastrophically inundate shorelines without warning.

Other flood stories evoke the failure of ice and debris dams on the margins of glaciers that suddenly release the lakes they held back. A Scandinavian flood story, for example, tells of how Odin and his brothers killed the ice giant Ymir, causing a great flood to burst forth and drown people and animals. It doesn’t take a lot of imagination to see how this might describe the failure of a glacial dam.

While doing fieldwork in Tibet, I learned of a local story about a great guru draining a lake in the valley of the Tsangpo River on the edge of the Tibetan Plateau – after our team had discovered terraces made of lake sediments perched high above the valley floor. The 1,200-year-old carbon dates from wood fragments we collected from the lake sediments correspond to the time when the guru arrived in the valley and converted the local populace to Buddhism by defeating, so the story goes, the demon of the lake to reveal the fertile lake bottom that the villagers still farm.

Surprises of the Faraday Cage

Friday, August 5th, 2016

We thought we understood the Faraday Cage:

The Faraday cage effect involves shielding of electrostatic and electromagnetic fields. A closed metal cavity makes a perfect shield, with zero fields inside, and that is in the textbooks. Faraday’s discovery of 1836 was that fields are nearly zero inside a wire mesh, too. You see this principle applied in your microwave oven, whose front door contains a metal screen with small holes. The screen keeps the microwaves in, while allowing light, with its much smaller wavelength, to pass through.
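
The numbers behind that last sentence (a back-of-the-envelope check, assuming the standard 2.45 GHz oven frequency, which the article does not state):

$$\lambda = \frac{c}{f} \approx \frac{3\times 10^{8}\ \mathrm{m/s}}{2.45\times 10^{9}\ \mathrm{Hz}} \approx 12\ \mathrm{cm},$$

enormous next to the millimetre-scale holes in the door screen, while visible light, at roughly 0.4–0.7 µm, is far smaller than the holes and slips through easily.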

[...]

So I started looking in books and talking to people and sending emails. In the books, nothing! Well, a few of them mention the Faraday cage, but rarely with equations. And from experts in mathematics, physics, and electrical engineering, I got oddly assorted explanations. They said the skin depth effect was crucial, or this was an application of the theory of waveguides, or the key point was Babinet’s principle, or it was Floquet theory, or “the losses in the wires will drive everything…”

And then at lunch one day, colleague n+1 told me, it’s in the Feynman Lectures [2]! And sure enough, Feynman gives an argument that appears to confirm the exponential intuition exactly.

[...]

Now Feynman is a god, the ultimate cool genius. It took me months, a year really, to be confident that the great man’s analysis of the Faraday cage, and his conclusion of exponential shielding, are completely wrong.

[...]

In closing, I want to reflect on some of the curious twists of this story, first, by mentioning three lessons:

L1. There are gaps out there. If you find something fundamental that nobody seems to have figured out, there’s a chance that, in fact, nobody has.

L2. Analogies are powerful. I would never have pursued this problem had I not been determined to understand the mathematical relationship between the Faraday cage and the trapezoidal rule.

L3. Referees can be useful. Thank you, anonymous man or woman who told us the Faraday cage section in our trapezoidal rule manuscript wasn’t convincing! We removed those embarrassing pages, and proper understanding came months later.

And then three questions:

Q1. How can arguably the most famous effect in electrical engineering have remained unanalyzed for 180 years?

Q2. How can a big error in the most famous physics textbook ever published have gone unreported since 1964?

Q3. Somebody must design microwave oven doors based on laboratory measurements. Where are these people?

(Hat tips to our Slovenian guest and Ross.)

Puzzling Statistics

Sunday, July 17th, 2016

Why do the human sciences record pervasive behavioral differences among racial groups, such as in violent-crime rates?

One explanation is that these disparities originate in complex interactions between nature and nurture.

But, of course, only dangerous extremists hold that theory.

The much more respectable sentiment is that statistical differences among the races are the fault of bad white people, such as George Zimmerman and Minnesota policeman Jeronimo Yanez.

Last week, on his way to Warsaw on Air Force One, President Barack Obama was looking at social media. According to The New York Times, he alerted his press secretary that:

He had decided to make a statement himself as soon as they landed, and had told his aides to collect statistics demonstrating racial bias in the criminal justice system.

Now, you might think that’s putting the cart before the horse. Perhaps the administration should objectively evaluate the evidence first, rather than order its media flacks to dredge up some data justifying the president’s prejudices?

But that would be wrong. Everybody knows that culture or evolution can’t have anything to do with hereditary racial differences in performance. If you even consider those possibilities, you must be one of the bad white people you’ve been warned about.

Instead, we know that science has proved that statistical differences among the races are all due to a vast conspiracy to plunder blacks. Nothing makes 21st-century people who think they are white richer than having a lot of black bodies around. Just ask MacArthur genius Ta-Nehisi Coates. He’ll tell you.

“Why are there all these puzzling statistics that don’t agree with the stereotypes promoted by our national leaders?”

And yet, here’s a statistic published in 2011 that doesn’t support the Coates-Obama orthodoxy:

While young black males have accounted for about 1% of the population from 1980 to 2008…(b)y 2008, young black males made up about a quarter of all homicide offenders (27%)…

In other words, young black males are about 27 times more likely to kill somebody than the average American.
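
That multiple is simply the ratio of the two shares quoted above:

$$\frac{\text{share of homicide offenders}}{\text{share of population}} \approx \frac{27\%}{1\%} = 27.$$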

Interestingly, that datum comes from the Obama administration’s Bureau of Justice Statistics, which published a report entitled Homicide Trends in the United States, 1980–2008.

One reason young black males are disproportionately homicidal is that they are young (homicide rates are highest among 18- to 24-year-olds). Another factor is that they are male (according to the BJS, “Males were 7 times more likely than females to commit murder in 2008”).

That the police keep a warier eye on men than women and the young than the old is never seen as offensive. It’s just common sense.

Yet profiling blacks as tending to be more threatening than whites (not to mention Hispanics or Asians) is the worst offense imaginable under today’s ruling ideology. For instance, the day after the Dallas antiwhite atrocity, the first two policy responses that Hillary Clinton recommended in an interview with Wolf Blitzer were: “National guidelines for police about the use of force” and “We need to look more into implicit bias.”

Transplanting Mitochondria

Friday, July 15th, 2016

Transplanting mitochondria extends life — in mice:

Dr Enríquez and his colleagues worked on that scientific stalwart, the mouse. Many genetic strains of lab mice are available, and the team started with two whose mitochondria had been shown by DNA analysis to have small but significant differences—about the same, Dr Enríquez reckons, as the ones between the mitochondria of modern Africans and those of Asians and Europeans, people whose ancestors left Africa about 60,000 years ago. They then copied the procedure for human mitochondrial transplants by removing fertilised nuclei from eggs of one strain, leaving behind that strain’s mitochondria, and transplanting them into enucleated eggs of the second strain, whose mitochondria remained in situ. A group of the first strain, left unmodified, was employed as a control. The researchers raised the mice and kept an eye on how they developed.

While the animals were young, few differences were apparent between modified and unmodified individuals. But as murine middle age approached, at around the animals’ first birthdays, differences began to manifest themselves. Modified mice gained less weight than controls, despite having the same diet. Their blood-insulin levels fluctuated less after fasting, suggesting they were more resistant to diabetes. Their muscles deteriorated less rapidly with age. And their telomeres—protective caps on the ends of their chromosomes whose shortening is implicated in ageing—stayed lengthier for longer.

Not all of the changes were beneficial. Young, unmodified mice had lower levels of free radicals—highly reactive (and therefore damaging) chemicals produced by mitochondria—than did their modified brethren, though even that difference reversed itself after the animals were 30 weeks old. But the combined result of the various changes was that the modified mice lived longer. Their median age at death was about a fifth higher than that of their unmodified cousins.

Given the fundamental metabolic role played by mitochondria, it makes sense that replacing one set with another, more distantly related set causes profound changes. The surprise is that those changes seem largely positive. Most biologists would have predicted the opposite, assuming that nuclear and mitochondrial DNA would co-evolve to interact optimally, so that mixing versions which have not co-evolved would be harmful.

Though unsure what to make of his discovery, Dr Enríquez suggests that a concept called hormesis might offer an explanation. This is the observation that a small amount of adversity can sometimes do an animal good, by activating cellular repair mechanisms that go on to clear up other damage which would otherwise have gone untreated. The biochemical cost of coping with mismatched mitochondria might, therefore, be tempering the animals’ metabolisms in ways that improve their overall health.

Is transgenderism an autism spectrum disorder?

Thursday, July 14th, 2016

Steve Sailer has a vague hunch that the transgender movement is somehow related to what he calls the Nerd Liberation movement, the most unexpectedly successful identity movement of his lifetime:

It’s not clear if autism, Asperger’s, and/or nerdism is becoming more common, but it’s definitely more of an identity than it once was.

There has been a little research into this subject, breaking trans people up into three main categories:

  1. Effeminate early transitioning male to female trans individuals (ladyboys) are of course not very nerdy at all. They tend to be people persons (e.g., prostitutes) and not big on logic.
  2. Female to male trans are very nerdy.
  3. Late transitioning masculine male to female trans people (the Wachowskis, the baseball stats person, my MBA school teammate, the economist, etc.) tend to be at least as nerdy as the average man and much more nerdy than the average woman.

I’ve found that the third category, which includes most of the celebrities and high achievers, tends to have a science fiction aspect to their interests. They often seem like characters from old Heinlein sci-fi stories.

Heinlein, a dedicated professional writer, believed in fan service and studied the wants of his various kinds of fans. In 1941 he was both guest of honor and de facto host of a convention for sci-fi fans at which he emphasized to the attendees that, sure, they might be social outcasts today, but they would be a world-changing elite tomorrow!

It doesn’t strike me as absurd that Heinlein would have sensed a market for these kinds of fantasies among some sci-fi fans as early as 1958, the year of his solipsistic transsexual time travel short story “All You Zombies.”

In general, much of transgenderism seems like a weird flavor of a sci-fi fan’s traditional interest in Subduing Nature through New Technology.

Our Dumb World

Tuesday, July 12th, 2016

As far as average IQ scores go, Gregory Cochran notes, this is what the world looks like:

[Map: national IQ per country, estimates by Lynn and Vanhanen (2006)]

But there are two relevant tests: the Stanford-Binet, and life itself. If a country scored low on IQ but at the same time led the world in Cavorite production, or cured cancer, or built spindizzies, we would say “screw Stanford-Binet”, and we would be right to do so.

Does that happen? Are there countries with low average scores that tear up the technological track? Mostly not – generally, fairly high average IQ seems to be a prerequisite for creativity in science and mathematics. Necessary, although not sufficient: bad choices (Communism), having the world kick you in the crotch (Mongols), or toxic intellectual fads can all make smart peoples unproductive.

[...]

You could improve the situation, raise the average, by selection for IQ. But that takes a long time, and I know of no case where it was done on purpose. You could decrease inbreeding, for example by banning cousin marriage. That only takes one generation. You could make environmental improvements, iodine supplementation being the best understood. People assume that there are a lot of other important environmental variables, but I sure don’t know what they are. In practice the rank ordering of populations seems to be the same everywhere, which is not what you would expect if there were strong, malleable environmental influences.
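
On the “takes a long time” point, the standard breeder’s equation gives a sense of the pace (a hedged illustration with assumed values, not figures from the post): with a narrow-sense heritability around 0.5 and parents selected to average 5 IQ points above their population’s mean, the expected shift per generation is

$$R = h^{2} S \approx 0.5 \times 5 \approx 2.5\ \text{points},$$

and a human generation is 25–30 years, so even deliberate selection would move the average slowly.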

Is it easy to notice such differences? Well, for ordinary people, it’s real easy. Herero would ask Henry why Europeans were so smart – he said he didn’t know. But with the right education, it apparently becomes impossible to see. Few anthropologists know that such differences exist and even fewer admit it. I’m sure that most have never even read any psychometrics – more importantly, they ignore their lying eyes. Economists generally reject such explanations, which is one reason that they find most of the Third World impossible to understand. I must give credit to Garett Jones, who is actually aware of this general pattern. Sure, he stepped on the dick of his own argument there at the end of his book, but he was probably lying, because he had to. Sociologists? It is to laugh.

Generally, you could say that the major job of social science is making sure that people do not know this map. Not knowing has its attractions: practically every headline is a surprise. The world must seem ever fresh and new to the dis-illuminati – something like being Henry Molaison, who had his hippocampus removed by a playful neurosurgeon and afterwards could not create new explicit memories.

So when we tried a new intervention aimed at eliminating the GAP, and it failed, Molaison was surprised, even if 47 similar programs had already failed. Neurologically, he was much like a professor of education.