None of the experts are experts

Thursday, October 23rd, 2014

There are whole fields in which none of the experts are experts, Gregory Cochran notes:

At the high point of Freudian psychoanalysis in the US,  I figure that a puppy had a significantly positive effect on your mental health, while the typical psychiatrist of the time did not.  We (the US) listened to psychologists telling us how to deal with combat fatigue: the Nazis and Soviets didn’t, and had far less trouble with it than we did.

Fidel Castro, a jerk,  was better at preventive epidemiology (with AIDS) than the people running the CDC.

In the 1840s, highly educated doctors knew that diseases were not spread by contagion, but old ladies in the Faeroe Islands (along with many other people) knew that some were.

In 2003, the ‘experts’ (politicians, journalists, pundits, spies) knew that Saddam had a nuclear program, but the small number of people that actually knew anything about nuclear weapons development and something about Iraq (at the World Almanac level, say) knew that wasn’t so.

The educationists know that heredity isn’t a factor in student achievement, and they dominate policy — but they’re wrong.  Some behavioral geneticists and psychometricians know better.

In many universities, people were and are taught that there really are no cognitive or behavioral differences between the sexes — in part because of ‘experts’ like John Money. Anyone with children tends to learn better.

Infected by Politics

Thursday, October 23rd, 2014

The public-health establishment has been infected by politics, Heather Mac Donald explains:

The public-health establishment has unanimously opposed a travel and visa moratorium from Ebola-plagued West African countries to protect the U.S. population. To evaluate whether this opposition rests on purely scientific grounds, it helps to understand the political character of the public-health field. For the last several decades, the profession has been awash in social-justice ideology. Many of its members view racism, sexism, and economic inequality, rather than individual behavior, as the primary drivers of differential health outcomes in the U.S. According to mainstream public-health thinking, publicizing the behavioral choices behind bad health—promiscuous sex, drug use, overeating, or lack of exercise—blames the victim.

The Centers for Disease Control and Prevention’s Healthy Communities Program, for example, focuses on “unfair health differences closely linked with social, economic or environmental disadvantages that adversely affect groups of people.” CDC’s Healthy People 2020 project recognizes that “health inequities are tied to economics, exclusion, and discrimination that prevent groups from accessing resources to live healthy lives,” according to Harvard public-health professor Nancy Krieger. Krieger is herself a magnet for federal funding, which she uses to spread the message about America’s unjust treatment of women, minorities, and the poor. To study the genetic components of health is tantamount to “scientific racism,” in Krieger’s view, since doing so overlooks the “impact of discrimination” on health. And of course the idea of any genetic racial differences is anathema to Krieger and her left-wing colleagues.

Super-Intelligent Humans Are Coming

Thursday, October 23rd, 2014

Super-intelligent humans are coming, Stephen Hsu argues:

The Social Science Genetic Association Consortium, an international collaboration involving dozens of university labs, has identified a handful of regions of human DNA that affect cognitive ability. They have shown that a handful of single-nucleotide polymorphisms in human DNA are statistically correlated with intelligence, even after correction for multiple testing of 1 million independent DNA regions, in a sample of over 100,000 individuals.

If only a small number of genes controlled cognition, then each of the gene variants should have altered IQ by a large chunk—about 15 points of variation between two individuals. But the largest effect size researchers have been able to detect thus far is less than a single point of IQ. Larger effect sizes would have been much easier to detect, but have not been seen.

This means that there must be at least thousands of IQ alleles to account for the actual variation seen in the general population. A more sophisticated analysis (with large error bars) yields an estimate of perhaps 10,000 in total.

Each genetic variant slightly increases or decreases cognitive ability. Because it is determined by many small additive effects, cognitive ability is normally distributed, following the familiar bell-shaped curve, with more people in the middle than in the tails. A person with more than the average number of positive (IQ-increasing) variants will be above average in ability. The number of positive alleles above the population average required to raise the trait value by a standard deviation—that is, 15 points—is proportional to the square root of the number of variants, or about 100. In a nutshell, 100 or so additional positive variants could raise IQ by 15 points.

Given that there are many thousands of potential positive variants, the implication is clear: If a human being could be engineered to have the positive version of each causal variant, they might exhibit cognitive ability which is roughly 100 standard deviations above average. This corresponds to more than 1,000 IQ points.
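
Hsu's back-of-the-envelope numbers are easy to sanity-check with a toy simulation. The sketch below is my own illustration, not Hsu's or the consortium's: it assumes roughly 10,000 additive variants, each carried with probability 0.5 and each contributing an equal, tiny effect, and it shows both that the resulting trait comes out approximately normal and that an extra allele count on the order of the square root of the number of variants moves an individual by one standard deviation (15 points).

```python
import numpy as np

# Toy additive model with assumed parameters (not real genetic data):
# N variants, each positive allele carried with probability 0.5,
# each adding the same tiny amount to the trait.
N = 10_000
people = 100_000
rng = np.random.default_rng(0)

# Count of positive variants per person: Binomial(N, 0.5), which is
# approximately normal for large N; hence the bell curve.
counts = rng.binomial(N, 0.5, size=people)

sd_alleles = counts.std()  # roughly sqrt(N)/2 = 50 under these assumptions
print(f"SD of positive-allele count: {sd_alleles:.1f}")

# Scale the per-allele effect so one allele-count SD equals 15 IQ points.
points_per_allele = 15 / sd_alleles
print(f"IQ points per positive allele: {points_per_allele:.2f}")
print(f"Extra positive alleles needed for +15 IQ: {sd_alleles:.0f}")
```

Under these equal-frequency assumptions the constant comes out nearer 50 extra alleles per standard deviation than 100; the exact figure depends on allele frequencies and effect sizes, but either way each individual variant is worth only a fraction of an IQ point, consistent with the sub-one-point effect sizes reported above.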

Gian-Carlo Rota’s Ten Lessons

Tuesday, October 21st, 2014

Gian-Carlo Rota of MIT shares ten lessons he wishes he had been taught:

  1. Lecturing
  2. Blackboard Technique
  3. Publish the same results several times.
  4. You are more likely to be remembered by your expository work.
  5. Every mathematician has only a few tricks.
  6. Do not worry about your mistakes.
  7. Use the Feynman method.
  8. Give lavish acknowledgments.
  9. Write informative introductions.
  10. Be prepared for old age.

His lesson on lecturing:

The following four requirements of a good lecture do not seem to be altogether obvious, judging from the mathematics lectures I have been listening to for the past forty-six years.

Every lecture should make only one main point
The German philosopher G. W. F. Hegel wrote that any philosopher who uses the word “and” too often cannot be a good philosopher. I think he was right, at least insofar as lecturing goes. Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.

Never run overtime
Running overtime is the one unforgivable error a lecturer can make. After fifty minutes (one microcentury as von Neumann used to say) everybody’s attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute overtime can destroy the best of lectures.

Relate to your audience
As you enter the lecture hall, try to spot someone in the audience with whose work you have some familiarity. Quickly rearrange your presentation so as to manage to mention some of that person’s work. In this way, you will guarantee that at least one person will follow with rapt attention, and you will make a friend to boot.

Everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.

Give them something to take home
It is not easy to follow Professor Struik’s advice. It is easier to state what features of a lecture the audience will always remember, and the answer is not pretty. I often meet, in airports, in the street and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course, and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.

How To Think Real Good

Monday, October 20th, 2014

After compiling How to do research at the MIT AI Lab, David Chapman went on to write How To Think Real Good, a rather meandering piece that culminates in this list:

  • Figuring stuff out is way hard.
  • There is no general method.
  • Selecting and formulating problems is as important as solving them; these each require different cognitive skills.
  • Problem formulation (vocabulary selection) requires careful, non-formal observation of the real world.
  • A good problem formulation includes the relevant distinctions, and abstracts away irrelevant ones. This makes problem solution easy.
  • Little formal tricks (like Bayesian statistics) may be useful, but any one of them is only a tiny part of what you need.
  • Progress usually requires applying several methods. Learn as many different ones as possible.
  • Meta-level knowledge of how a field works — which methods to apply to which sorts of problems, and how and why — is critical (and harder to get).

I didn’t find that list as interesting as his pull-out points along the way:

  • Understanding informal reasoning is probably more important than understanding technical methods.
  • Finding a good formulation for a problem is often most of the work of solving it.
  • Before applying any technical method, you have to already have a pretty good idea of what the form of the answer will be.
  • Choosing a good vocabulary, at the right level of description, is usually key to understanding.
  • Truth does not apply to problem formulations; what matters is usefulness.
  • All problem formulations are “false,” because they abstract away details of reality.
  • Work through several specific examples before trying to solve the general case. Looking at specific real-world details often gives an intuitive sense for what the relevant distinctions are.
  • Problem formulation and problem solution are mutually-recursive processes.
  • Heuristics for evaluating progress are critical not only during problem solving, but also during problem formulation.
  • Solve a simplified version of the problem first. If you can’t do even that, you’re in trouble.
  • If you are having a hard time, make sure you aren’t trying to solve an NP-complete problem. If you are, go back and look for additional sources of constraint in the real-world domain.
  • You can never know enough mathematics.
  • An education in math is a better preparation for a career in intellectual field X than an education in X.
  • You should learn as many different kinds of math as possible. It’s difficult to predict what sort will be relevant to a problem.
  • If a problem seems too hard, the formulation is probably wrong. Drop your formal problem statement, go back to reality, and observe what is going on.
  • Learn from fields very different from your own. They each have ways of thinking that can be useful at surprising times. Just learning to think like an anthropologist, a psychologist, and a philosopher will beneficially stretch your mind.
  • If all you have is a hammer, everything looks like an anvil. If you only know one formal method of reasoning, you’ll try to apply it in places it doesn’t work.
  • Evaluate the prospects for your field frequently. Be prepared to switch if it looks like it is approaching its inherent end-point.
  • It’s more important to know what a branch of math is about than to know the details. You can look those up, if you realize that you need them.
  • Get a superficial understanding of as many kinds of math as possible. That can be enough that you will recognize when one applies, even if you don’t know how to use it.
  • Math only has to be “correct” enough to get the job done.
  • You should be able to prove theorems and you should harbor doubts about whether theorems prove anything.
  • Try to figure out how people smarter than you think.
  • Figure out what your own cognitive style is. Embrace and develop it as your secret weapon; but try to learn and appreciate other styles as well.
  • Collect your bag of tricks.
  • Find a teacher who is willing to go meta and explain how a field works, instead of lecturing you on its subject matter.

How to see into the future

Saturday, October 18th, 2014

So, what is the secret of looking into the future?

Initial results from the Good Judgment Project suggest the following approaches. First, some basic training in probabilistic reasoning helps to produce better forecasts. Second, teams of good forecasters produce better results than good forecasters working alone. Third, actively open-minded people prosper as forecasters.

But the Good Judgment Project also hints at why so many experts are such terrible forecasters. It’s not so much that they lack training, teamwork and open-mindedness — although some of these qualities are in shorter supply than others. It’s that most forecasters aren’t actually seriously and single-mindedly trying to see into the future. If they were, they’d keep score and try to improve their predictions based on past errors. They don’t.

This is because our predictions are about the future only in the most superficial way. They are really advertisements, conversation pieces, declarations of tribal loyalty — or, as with Irving Fisher, statements of profound conviction about the logical structure of the world.

Some participants in the Good Judgment Project were given advice, a few pages in total, which was summarised with the acronym CHAMP:

  • Comparisons are important: use relevant comparisons as a starting point;
  • Historical trends can help: look at history unless you have a strong reason to expect change;
  • Average opinions: experts disagree, so find out what they think and pick a midpoint;
  • Mathematical models: when model-based predictions are available, you should take them into account;
  • Predictable biases exist and can be allowed for. Don’t let your hopes influence your forecasts, for example; don’t stubbornly cling to old forecasts in the face of news.
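
The "keep score" point and the "average opinions" heuristic above are easy to make concrete. Here is a minimal sketch, using made-up forecasters and outcomes rather than Good Judgment Project data, of tracking calibration with a Brier score and comparing individuals against the group midpoint:

```python
# Illustrative only: invented probabilities and outcomes, not GJP data.
def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always guessing 0.5 scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Three hypothetical forecasters' probabilities for five yes/no questions.
alice = [0.9, 0.7, 0.2, 0.6, 0.8]
bob   = [0.6, 0.9, 0.4, 0.3, 0.7]
carol = [0.8, 0.5, 0.1, 0.7, 0.9]
outcomes = [1, 1, 0, 0, 1]  # what actually happened

for name, f in [("alice", alice), ("bob", bob), ("carol", carol)]:
    print(name, round(brier(f, outcomes), 3))

# "Average opinions": score the midpoint of the group's forecasts.
team = [sum(fs) / 3 for fs in zip(alice, bob, carol)]
print("team", round(brier(team, outcomes), 3))
```

In this toy example the averaged forecast beats two of the three individuals, which is the usual pattern: the average is rarely the single best forecaster, but it is reliably hard to beat. And keeping a running score like this is exactly what makes it possible to improve on past errors at all.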

The Advent of Cholera

Friday, October 17th, 2014

Cholera seems to have existed in the Ganges delta for a long time, but it only spread to the rest of the world fairly recently, Gregory Cochran notes, and two factors interfered with an effective policy response:

[Scientists] concluded that contagion was never the answer, and accepted miasmas as the cause, a theory which is too stupid to be interesting. Sheesh, they taught the kids in medical school that measles wasn’t catching — while ordinary people knew perfectly well that it was. You know, esoteric, non-intuitive truths have a certain appeal — once initiated, you’re no longer one of the rubes. Of course, the simplest and most common way of producing an esoteric truth is to just make it up.

On the other hand, 19th century liberals (somewhat like modern libertarians, but way less crazy) knew that trade and individual freedom were always good things, by definition, so they also opposed quarantines — worse than wrong, old-fashioned! And more common in southern, Catholic, Europe: enough said! So, between wrong science and classical liberalism, medical reformers spent many years trying to eliminate the reactionary quarantine rules that still existed in Mediterranean ports.

The intellectual tide turned: first heroes like John Snow and Peter Panum, later titans like Pasteur and Koch. Contagionism made a comeback.

Mesopredator Release

Monday, October 13th, 2014

Over the past 100 years, coyotes have taken over America:

They are native to the continent, and for most of their existence these rangy, yellow-eyed canids were largely restricted to the Great Plains and western deserts where they evolved. But after wolves and cougars were exterminated from most of the United States by the 1800s, coyotes took their place. Colonizing some areas at a rate of 720 square miles per year, coyotes now occupy — or “saturate,” as one scientist I spoke with described it — nearly the entire continent. (Long Island is a notable exception.) The animals are now the apex predators of the east. And they’re proving so resourceful that even the last stronghold — the urban core — represents an opportunity to flourish.

Coyotes may be the most driven carnivores to penetrate modern cities in recent years, but they’re hardly the only ones. Raccoons, foxes, and skunks have long been prolific urban residents. And now bobcats, cougars, even grizzly bears — predators that symbolize wilderness, who typically require a lot of space and a stable prey base, and defend their territories — are not just visiting but occupying areas that scientists used to consider impossible for their survival. Dozens of grizzlies now summer within the city limits of Anchorage. The most urban cougar ever, a male named P22, has been canvassing Los Angeles’ Griffith Park for more than two and a half years. Bobcats prowl the Hollywood Hills and saunter near skyscrapers in Dallas. And in New York City, a predator is returning that hasn’t been seen since Henry Hudson’s day — the fisher, a dachshund-sized member of the weasel family with a long, thick tail. This spring, a police officer named Lenart snapped the first NYC photo of one, skulking on a Bronx sidewalk at dawn.

Why are these large weasels flourishing in the east?

“We found support that eastern fishers are experiencing what’s known as mesopredator release,” says Kays. “That means they overlap with fewer predatory species than they used to. There are no cougars; there are no wolves.” Without many big competitors to fear, middle-sized predators, or mesopredators, are free to change their habits: they can hunt in a wider range of places or times. They can also pursue larger prey (for fishers, that means hefty snowshoe hares, porcupines, or deer roadkill) without getting beaten to it or bullied. Scientists suspect that mesopredator release is fueling coyotes’ incredible expansion as well.

Most intriguingly, LaPoint and Kays discovered that the bodies of eastern fishers are actually getting bigger over time. These carnivores seem to be evolving to better catch larger-bodied prey by becoming larger themselves. A big, well nourished fisher is more likely to survive in new, challenging environments. “They’re getting bigger where their populations are expanding,” says LaPoint, which the team documented by comparing hundreds of museum specimens collected from the 19th century to the present. That’s brisk, evolutionarily speaking. “Within a century, more or less,” he says. “It’s pretty crazy.”

The rub is that this “wilderness” species seems to be quickly adapting to our presence. In persecuting North America’s biggest carnivores, we may be encouraging medium-sized ones to spread directly into the areas where we now live and, in some cases, to evolve into bigger, more resourceful predators.

(Hat tip to T. Greer.)

Cats and Dogs

Monday, October 13th, 2014

Leopards in India have an interesting diet:

The researchers found that domestic dogs were by far the most common prey, making up 39 percent of the leopards’ diet (in terms of biomass). The remains of domestic cats were found in 15 percent of poop samples and accounted for 12 percent of the mass of leopards’ meals.

By comparison, livestock were a relatively small portion of the leopard diet. Domestic goats, for example, accounted for just 11 percent of the mass of the big cats’ meals, even though they were seven times more abundant than dogs in the study area.

All told, 87 percent of the leopards’ diet was made up of domestic animals, including both livestock and pets; this suggests the leopards, though considered wild, are completely dependent on human-related sources of food. The small portion of wild animals in the leopards’ diet consisted mostly of rodents, as well as civets, monkeys, mongooses and birds.

The Philosophy of the Science of Poker

Friday, October 10th, 2014

Existential Comics illustrates the philosophy of the science of poker:

[Existential Comics: “The Philosophy of the Science of Poker”]

(Read the whole thing.)

The Eccentric Polish Count Who Influenced Classic SF’s Greatest Writers

Thursday, October 9th, 2014

Alfred Korzybski deserves a more prominent place in our histories of science fiction, Lee Konstantinou argues:

Korzybski inspired a legion of students, and the meta-science of “General Semantics” that he created affected disciplines as diverse as literary criticism, philosophy, linguistics, psychology, and cybernetics.

But his most powerful effect might have been on John W. Campbell’s Golden Age. Indeed, Korzybski is probably the most important influence on science fiction you’ve never heard of.

Alfred Korzybski was a Polish aristocrat who came to North America near the end of World War I after being injured in the war. Trained as an engineer, he created a philosophy he called General Semantics (not to be confused with semantics as a linguistic discipline). General Semantics was part of a much larger philosophical effort, early in the twentieth century, to create a logically ideal language and a contribution to intellectual debates about the so-called “meaning of meaning.”

Attempting to build on the work of Bertrand Russell and Alfred North Whitehead, Korzybski tried to explain, among other things, why humans were uniquely prone to self-slaughter. He hoped, quixotically, that his meta-linguistic system might save us from our own worst tendencies.

Korzybski coined the well-known slogan, “The map is not the territory,” to sum up his ideas:

To defeat our Aristotelian habits of mind, to help humankind achieve what he called “sanity,” Korzybski created a mental and spiritual training regime. He recommended that we achieve a “consciousness of abstracting,” an awareness of our own process of abstracting the world, in order to gain a better understanding of what he called “silence on the objective level,” the fundamentally non-linguistic nature of reality. Korzybski advised that we engage in a “semantic pause” when confronted with a novel stimulus, a sort of neurocognitive Time Out.

He profoundly influenced Robert A. Heinlein, L. Ron Hubbard, A.E. Van Vogt — and many others:

Many other Golden Age writers, such as H. Beam Piper and Reginald Bretnor, incorporated Korzybski into their fiction. And his influence stretches well beyond the conventional boundaries of the Golden Age.

Frank Herbert, for instance, ghostwrote a nationally syndicated column on General Semantics, under S.I. Hayakawa’s byline, while writing Dune (1965). Korzybski’s ideas are visible in Herbert’s depiction of the Bene Gesserit’s mental and physical training regime.

Stop the flights now

Wednesday, October 8th, 2014

Epidemiologist David Dausey says, stop the flights now!

Individuals who suspect they have been exposed to Ebola and have the means to travel to the United States have every reason to get on a plane to the United States as soon as possible. There are no direct flights from the three most-affected nations, but passengers can transfer elsewhere, as Duncan did. If they stay in Africa, the probability that they will survive the illness if they have it is quite low. If they make it to the United States, they can expect to receive the best medical care the world can provide, and they will have a much higher probability of survival. So they are motivated to lie about their exposure status (wouldn’t you, in their shoes?) to airlines and public health officials and travel to the United States.

You can be fooled in a thousand subtle ways

Wednesday, October 8th, 2014

I didn’t realize how Gary Taubes came to write Good Calories, Bad Calories:

He majored in applied physics at Harvard, where he also played on the football team’s defensive line. (John Tuke, one of his teammates, recalls that Taubes stood out for his intensity.) After Harvard, Taubes headed to Stanford for a master’s in engineering with visions of becoming an astronaut. It was only after realizing that NASA wasn’t likely to send a man of his size to space—Taubes is 6′2″ and 220 pounds—that he decided to pursue an interest in investigative reporting that had been sparked by reading All the President’s Men.

He attended Columbia University’s Graduate School of Journalism and soon landed a job at Discover magazine. He caught a break in 1984, when a profile of particle physicist Carlo Rubbia led to a deal for his first book, Nobel Dreams. Taubes thought he would be documenting a breakthrough in physics. Instead, the book chronicled Rubbia’s errors and the machinations he used to outmaneuver his fellow physicists. Taubes was struck that science could be so subjective at the highest levels—that it’s not just the big mistakes that scientists have to worry about but the numerous small ones that accumulate to support their misconceptions. “You can be fooled in a thousand subtle ways,” he says.

That lesson stuck with him when, almost by accident, he turned his attention to nutrition science in 1997. By then a freelancer and running low on rent money, he called his editor at Science and asked if there were any assignments he could turn around quickly. The editor mentioned a paper in The New England Journal of Medicine that detailed a dietary approach to reducing blood pressure without restricting salt. Maybe he could write about that?

Taubes knew almost nothing about the topic. He would end up spending the next nine months interviewing 80 researchers, clinicians, and administrators. That research resulted in an August 1998 article headlined “The (Political) Science of Salt.” It was a sweeping takedown of everything scientists thought they had established about the link between salt consumption and blood pressure. The belief that too much salt was the cause of hypertension wasn’t based on careful experiments, Taubes wrote, but primarily on observations of the diets of populations with less hypertension. The scientists and health professionals railing against salt didn’t seem to notice or care that the diets of those populations might differ in a dozen ways from the diets of populations with more hypertension.

Taubes began to wonder if his critique applied beyond salt, to the rest of nutrition science. After all, one of the researchers Taubes interviewed had taken credit not only for getting Americans to eat less salt but also for getting them to eat less fat and eggs. He kicked off a multiyear research project that culminated in 2002, when he published a New York Times Magazine cover story on fat that would vault him into prominence and onto the path to NuSI.

Under the cover line “What if Fat Doesn’t Make You Fat?” Taubes made the case that we get fat not because we ignore the advice of the medical establishment but because we follow it. He argued that carbohydrates, not fat, were more likely to be the cause of the obesity epidemic. The piece was a sensation.

The Testing Effect

Monday, October 6th, 2014

Sometimes, when we open a test, we see familiar questions on material we’ve studied — and yet we still do badly. Why does this happen?

Psychologists have studied learning long enough to have an answer, and typically it’s not a lack of effort (or of some elusive test-taking gene). The problem is that we have misjudged the depth of what we know. We are duped by a misperception of “fluency,” believing that because facts or formulas or arguments are easy to remember right now, they will remain that way tomorrow or the next day. This fluency illusion is so strong that, once we feel we have some topic or assignment down, we assume that further study won’t strengthen our memory of the material. We move on, forgetting that we forget.

Often our study “aids” simply create fluency illusions — including, yes, highlighting — as do chapter outlines provided by a teacher or a textbook. Such fluency misperceptions are automatic; they form subconsciously and render us extremely poor judges of what we need to restudy or practice again. “We know that if you study something twice, in spaced sessions, it’s harder to process the material the second time, and so people think it’s counterproductive,” Nate Kornell, a psychologist at Williams College, said. “But the opposite is true: You learn more, even though it feels harder. Fluency is playing a trick on judgment.”

The best way to overcome this illusion is testing, which also happens to be an effective study technique in its own right. This is not exactly a recent discovery; people have understood it since the dawn of formal education, probably longer. In 1620, the philosopher Francis Bacon wrote, “If you read a piece of text through twenty times, you will not learn it by heart so easily as if you read it ten times while attempting to recite it from time to time and consulting the text when your memory fails.”

Scientific confirmation of this principle began in 1916, when Arthur Gates, a psychologist at Columbia University, created an ingenious study to further Bacon’s insight. If someone is trying to learn a piece of text from memory, Gates wondered, what would be the ideal ratio of study to recitation (without looking)? To interrogate this question, he had more than 100 schoolchildren try to memorize text from Who’s Who entries. He broke them into groups and gave each child nine minutes to prepare, along with specific instructions on how to use that time. One group spent 1 minute 48 seconds memorizing and the remaining time rehearsing (reciting); another split its time roughly in half, equal parts memorizing and rehearsing; a third studied for a third and recited for two-thirds; and so on.

After a sufficient break, Gates sat through sputtered details of the lives of great Americans and found his ratio. “In general,” he concluded, “best results are obtained by introducing recitation after devoting about 40 percent of the time to reading. Introducing recitation too early or too late leads to poorer results.” The quickest way to master that Shakespearean sonnet, in other words, is to spend the first third of your time memorizing it and the remaining two-thirds of the time trying to recite it from memory.

In the 1930s, a doctoral student at the State University of Iowa, Herman F. Spitzer, recognized the broader implications of this insight. Gates’s emphasis on recitation was, Spitzer realized, not merely a study tip for memorization; it was nothing less than a form of self-examination. It was testing as study, and Spitzer wanted to extend the finding, asking a question that would apply more broadly in education: If testing is so helpful, when is the best time to do it?

He mounted an enormous experiment, enlisting more than 3,500 sixth graders at 91 elementary schools in nine Iowa cities. He had them study an age-appropriate article of roughly 600 words in length, similar to what they might analyze for homework. Spitzer divided the students into groups and had each take tests on the passages over the next two months, according to different schedules. For instance, Group 1 received one quiz immediately after studying, then another a day later and a third three weeks later. Group 6, by contrast, didn’t take one until three weeks after reading the passage. Again, the time the students had to study was identical. So were the quizzes. Yet the groups’ scores varied widely, and a clear pattern emerged.

The groups that took pop quizzes soon after reading the passage — once or twice within the first week — did the best on a final exam given at the end of two months, marking about 50 percent of the questions correct. (Remember, they had studied their peanut or bamboo article only once.) By contrast, the groups who took their first pop quiz two weeks or more after studying scored much lower, below 30 percent on the final. Spitzer’s study showed that not only is testing a powerful study technique, but it’s also one that should be deployed sooner rather than later. “Achievement tests or examinations are learning devices and should not be considered only as tools for measuring achievement of pupils,” he concluded.

The testing effect, as it’s known, is now well established, and it opens a window on the alchemy of memory itself. “Retrieving a fact is not like opening a computer file,” says Henry Roediger III, a psychologist at Washington University in St. Louis, who, with Jeffrey Karpicke, now at Purdue University, has established the effect’s lasting power. “It alters what we remember and changes how we subsequently organize that knowledge in our brain.”

Disaster in the South Pacific

Friday, October 3rd, 2014

The 1918 influenza pandemic hit almost every country on Earth, Gregory Cochran explains:

It missed American Samoa entirely, which is interesting. It’s even more interesting when you notice that it hit the neighboring islands of Western Samoa harder than anywhere else.

[...]

American Samoa was physically quite close to Western Samoa, less than 100 km. There were close cultural ties: people intermarried and often sailed back and forth. But the governmental structure was different. There were no copra plantations in American Samoa, so you didn’t have any powerful business interests lobbying for suicide. The US Navy ran the colony. John Martin Poyer, an officer who had retired from active duty due to illness, was brought back to active duty in 1915 to serve as Governor of American Samoa.

Both American Samoa and Western Samoa had advance warning of the flu’s danger: they both had wireless sets and occasional mail.

Washington didn’t micro-manage American Samoa, not being all that interested. A policy of benign neglect was interpreted by Poyer as an opportunity to act on his best judgment, in the finest traditions of the US Navy. He imposed quarantine. That was harder than it sounds, because of the frequent family visits between Western Samoa and American Samoa — but Poyer also had the support of the local chiefs, who understood how serious imported epidemics could be. The people of American Samoa self-blockaded, on top of official quarantine: they sent out canoes to stop any and all visitors. They never had a single case.