Who is supporting you? Big Kale?

Friday, January 11th, 2019

Siddhartha Mukherjee, author of The Emperor of All Maladies: A Biography of Cancer, says it’s time we studied diet as seriously as we study drugs:

Several months before my surgical procedure, a cancer patient asked me whether she should change her diet. She had lost her appetite. One nutritionist had advised her to start consuming highly caloric, sugar-loaded drinks to maintain her body weight. But, she worried, what if the sugar ended up “feeding” her cancer? Her anxiety was built on nearly eight decades of science: In the 1920s, Otto Warburg, a German physiologist, demonstrated that tumor cells, unlike most normal cells, metabolize glucose using alternative pathways to sustain their rapid growth, provoking the idea that sugar might promote tumor growth.

You might therefore expect the medical literature on “sugar feeding cancer” to be rich with deep randomized or prospective studies. Instead, when I searched, I could find only a handful of such trials. In 2012, a team at the Dana-Farber Cancer Institute in Boston divided patients with Stage 3 colon cancer into different groups based on their dietary consumption, and determined their survival and rate of relapse. The study generated provocative data — but far from an open-and-shut case. Patients whose diets consisted of foods with a high glycemic load (a measure of how much blood glucose rises after eating a typical portion of a food) generally had shorter survival than patients with lower glycemic load. But a higher glycemic index (a measure of how much 50 grams of carbohydrate from a food, which may require eating a huge portion, raises blood glucose) or total fructose intake had no significant association with overall survival or relapse.
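
A quick arithmetic sketch helps keep the two measures straight. The formula below is the standard textbook relationship between glycemic index and glycemic load, not something taken from the Dana-Farber study, and the food values are rough illustrative approximations:

```python
# Glycemic load (GL) ties the glycemic index (GI) of a food to the amount of
# carbohydrate actually eaten in a typical portion:
#
#     GL = GI * grams_of_carbohydrate_per_serving / 100
#
# The numbers below are rough, commonly quoted approximations, used only to
# show why a food can score high on one measure and low on the other.

def glycemic_load(glycemic_index: float, carbs_per_serving_g: float) -> float:
    """Glycemic load of one serving, given the food's GI and carb content."""
    return glycemic_index * carbs_per_serving_g / 100

foods = {
    # name: (approximate GI, approximate grams of carbohydrate per serving)
    "watermelon (1 cup)": (72, 11),  # high GI, little carbohydrate per portion
    "spaghetti (1 cup)":  (49, 43),  # lower GI, carbohydrate-dense portion
}

for name, (gi, carbs) in foods.items():
    print(f"{name}: GI ~{gi}, GL ~{glycemic_load(gi, carbs):.0f}")

# Watermelon comes out with a high glycemic index but a low glycemic load,
# spaghetti the reverse -- which is why the two measures can point in
# different directions in a study like the one above.
```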

While the effect of sugar on cancer was being explored in scattered studies, the so-called ketogenic diet, which consists of high fat, moderate protein and low carbohydrate, was also being promoted. It isn’t sugars that are feeding the tumor, the logic runs. It’s insulin — the hormone that is released when glucose enters the blood. By reducing carbohydrates and thus keeping a strong curb on insulin, the keto diet would decrease the insulin exposure of tumor cells, and so restrict tumor growth. Yet the search for “ketogenic diet, randomized study and cancer” in the National Library of Medicine database returned a mere 11 articles. Not one of them reported an effect on a patient’s survival, or relapse.

But what if diet, rather than acting alone, collaborates with a drug to produce an effect on a tumor? In the winter of 2016, I had dinner with Lewis Cantley, director of the Meyer Cancer Center at Weill Cornell Medicine. Decades ago, Cantley discovered an enzyme named PI3 kinase, which regulates the growth and survival of cells in the presence of nutrients. By inhibiting this enzyme using novel drugs, researchers had hoped to target the signals used by tumor cells to grow, thus “starving” the cancer. But the drugs designed thus far were only marginally effective. Why, we wondered over salmon teriyaki in a nondescript Upper East Side joint, might blocking such a central hub of growth activity have had only a modest effect on tumor growth?

The trials gave us a crucial, obvious clue that we had missed: Many patients had become diabetic, a phenomenon that had been ignored as a side effect of the drug. Perhaps the drug wasn’t providing a “starvationlike” signal only to the tumor cells, we speculated. As most drugs do, the molecule circulated through the entire body of the patient and also acted on the liver, which sensed the same starvationlike signal and, as a reflexive response, sent glucose soaring into the blood. The glucose, in turn, most likely incited insulin release in the pancreas. And some patients treated with the medicine returned to the clinic with sky-high levels of glucose and insulin — in essence, in the throes of drug-induced diabetes.

Cantley wondered whether the additional insulin was reactivating the signals within the tumor cells that had been shut off by the PI3 kinase inhibitor, and so allowing the cells to survive — in effect, undoing all the good being done by the drug. On a paper napkin borrowed from the waiter, he drew out a scheme to outwit this vicious cycle. What if we cut off all extra insulin released, by putting patients on a low-carb, ketogenic diet while on the drug? It would be a novel kind of trial — one in which diet itself would become a drug, or a co-drug, with the PI3 kinase inhibitors.

Between 2016 and 2018, postdoctoral researchers in Cantley’s laboratory and mine established that this strategy worked on several mouse cancers, and on human cancers implanted into mice. By 2019, working with clinicians at Columbia, Cornell and Memorial Sloan Kettering, we hope to begin a study in humans with lymphomas, endometrial cancer and breast cancer, to use ketogenic diets in concert with the PI3 kinase inhibitors. (In the meantime, a host of other studies have also demonstrated that other diets could potently modulate the effects of targeted therapies on cancers in mouse models.)

But the experiments on mice also warned us of an important pitfall of such an approach. While the “drug plus diet” model worked on experimental mouse and human cancers, the ketogenic diet had a limited effect by itself. For some cancers in the mouse models, the keto diet alone kept the tumor growth at bay. But for others, like some leukemias implanted into mice, the diet alone accelerated the cancer, while the drug-plus-diet approach slowed it down.

We published this data in the scientific journal Nature early this year. I sent out a tweet with the results, emphasizing that the human trial was about to be started, and that the keto diet alone might have a negative effect on some tumors — in essence, a “folks, don’t try this at home” message. The response over social media was unexpected — brisk, vicious, angry, suspicious and, at times, funny. “Keto is pure hype,” one responder wrote. Another countered: “Who is supporting you? Big Kale?”

Humor at its best is a kind of heightened truth

Friday, January 4th, 2019

Supernormal stimuli are key to certain kinds of wit, skewing or exaggerating our usual patterns of perception:

The great silent comic Buster Keaton is a case in point.

In The High Sign (1921), as Keaton settles down on a bench to read his local daily, he unfolds the paper to standard broadsheet format. He soon notices, though, that the newspaper is bigger than he expected, so he continues unfolding it — first to roughly the surface area of an ample picnic blanket, then easily to the proportions of a king-size bedsheet, until he’s finally engulfed by a single gigantic swath of newsprint.

In Seven Chances (1925), Keaton, a stockbroker on the verge of financial ruin, learns that he will inherit handsomely from his grandfather — if he weds by 7 p.m. When his sweetheart rebuffs him (she will marry for love, not for money), he places an open offer of marriage, with details of the pecuniary benefits, in the newspaper. Hundreds of women turn up at the church for the ceremony, only to become enraged at Keaton’s tactics. The bevy of would-be brides chases him out of town and onto a nearby hill, where he dislodges a single rock, which sets in motion an avalanche of boulders, which rain down on our hapless groom’s head.

Keaton’s gags start innocuously enough, with some ordinary object, then snowball into supernormal stimuli. But stimuli can also be made supernormal by visual or verbal tricks that disrupt the ordinary ways we see and understand the world.

Marcel Mariën’s work is rife with such tricks. Mariën started out as a photographer’s apprentice while still in his teens. But in 1935, after seeing the work of René Magritte for the first time, he decided on a career as an artist, soon becoming a close friend of Magritte and one of the most prominent of the Belgian surrealists. He worked in a variety of media — photography; film; collage; and “ready-mades,” works of art assembled from discarded materials, common household items, or unused parts of other objects.

In Star Dancer (1991), Mariën attached a doll’s high-heel shoe to one of the arms of a dead starfish, transforming it into a wispy, Matisse-esque ballerina. The strange juxtaposition makes the viewer do a double take. How can such a clearly alien creature have such distinctly human expressiveness? Like the volleyball/egg that birds try to incubate, the cobbled-together starfish/doll becomes a supernormal stimulus that alters viewers’ perceptions.

The same principle is at work in verbal wit. The English film director Anthony Asquith, for example, once introduced Jean Harlow, the platinum-blond 1930s Hollywood star, to his mother, Lady Margot Asquith, the author and wife of the longtime British prime minister Herbert Henry Asquith. Harlow mispronounced Lady Margot’s first name, sounding the final t, as in forgot. “The t is silent, my dear,” Lady Margot snipped, “as in Harlow.” Lady Margot isolated and exaggerated the significance of the simple t, just as Tinbergen isolated and exaggerated the herring gull’s orange spot, thereby dramatically enhancing its impact.

What is a punch line but a supernormal stimulus?

We respond to witty words and images more intensely than to “normal” objects, just as Tinbergen’s theory of supernormal stimuli suggests. “Humor at its best is a kind of heightened truth — a super-truth,” E. B. White wrote. This is also true of wit, which takes routine seeing and heightens it by shearing ordinary things and meanings of their habitual context, revealing them as suddenly strange and unfamiliar.

Popular Posts of 2018

Tuesday, January 1st, 2019

I just took a look back at my numbers for 2018. Here are the most popular posts during that calendar year, four of which are new, six of which are older:

  1. Robert Conquest’s Three Laws of Politics
  2. The Bob Rubin Trade (new)
  3. Polar Bear Turns Purple After Medication
  4. The Father of Social-Science
  5. Fast Friends Protocol
  6. Observations from Actual Shootings
  7. He-Man Opening Monologue
  8. I’ve been blogging for 15 years (new)
  9. Maximum effective range of buckshot (new)
  10. The most expensive new public school in San Francisco history failed (new)

Here are the most popular posts actually from 2018 and not from an earlier year:

  1. The Bob Rubin Trade
  2. I’ve been blogging for 15 years
  3. Maximum effective range of buckshot
  4. The most expensive new public school in San Francisco history failed
  5. The physical strength of nations varies considerably
  6. Would you pay $70,000 for a lunar vacation?
  7. Why some people become sudden geniuses
  8. Why is English so weirdly different from other languages?
  9. Where education was tried it turned out to be futile
  10. Marine experiment finds women get injured more frequently, shoot less accurately than men

Again, I’m not sure what to conclude.

Also, I should thank some of my top referrers: Social Matter, Z Man, Mapping The Dark Enlightenment, and The Scholar’s Stage.

Merry Christmas!

Tuesday, December 25th, 2018

Please enjoy these yuletide posts of Christmas Past:

All Hallows’ Eve

Wednesday, October 31st, 2018

I’ve written a surprising amount about Halloween and horror over the years:

We’ve all been planet chauvinists

Wednesday, October 17th, 2018

Gerard K. O’Neill (The High Frontier) and Isaac Asimov appeared in a 1975 Roundtable TV interview, where Asimov noted that he and his fellow science-fiction writers failed to imagine free-floating space colonies:

Nobody did, really, because we’ve all been planet chauvinists. We’ve all believed people should live on the surface of a planet, of a world. I’ve had colonies on the moon. So have a hundred other science fiction writers. The closest I came to a manufactured world in free space was to suggest that we go out to the Asteroid Belt and hollow out the asteroids, and make ships out of them. It never occurred to me to bring the material from the asteroids in towards the Earth, where conditions are pleasanter, and build the worlds there.

Steven Levy was able to interview Jeff Bezos — head of Amazon, of course, but also Blue Origin — only after watching that O’Neill-Asimov interview.

That vision captivated a generation of space nerds, including Bezos, who believed it back then, as a brainy schoolkid. And he believes it now, with “increasing conviction” every passing year. Earth is destined to run out of resources, he explains patiently to anyone questioning his priorities. Humans need a plan B. While he readily concedes that building a space company qualifies as a cool adventure, the ultimate point, he always insists, is getting people to live in space. He often remarks with astonishment and disgust that there have never been more than 13 humans in space at one time. He’s out to change that, by creating the backbone needed for O’Neill’s millions, billions, maybe even a trillion people to reside off-planet.

Why is English so weirdly different from other languages?

Wednesday, October 10th, 2018

John McWhorter, professor of linguistics and American studies at Columbia University, explains why English is so weirdly different from other languages:

The oddity that we all perceive most readily is its spelling, which is indeed a nightmare. In countries where English isn’t spoken, there is no such thing as a “spelling bee” competition. For a normal language, spelling at least pretends a basic correspondence to the way people pronounce the words. But English is not normal.

[...]

There is no other language, for example, that is close enough to English that we can get about half of what people are saying without training and the rest with only modest effort. German and Dutch are like that, as are Spanish and Portuguese, or Thai and Lao. The closest an Anglophone can get is with the obscure Northern European language called Frisian.

[...]

We think it’s a nuisance that so many European languages assign gender to nouns for no reason, with French having female moons and male boats and such. But actually, it’s us who are odd: almost all European languages belong to one family — Indo-European — and of all of them, English is the only one that doesn’t assign genders that way.

[...]

There is exactly one language on Earth whose present tense requires a special ending only in the third-person singular. I’m writing in it. I talk, you talk, he/she talk-s — why just that? The present-tense verbs of a normal language have either no endings or a bunch of different ones (Spanish: hablo, hablas, habla). And try naming another language where you have to slip do into sentences to negate or question something. Do you find that difficult?

Read the whole thing.

McWhorter’s book, The Language Hoax, refutes the Sapir-Whorf Hypothesis, which suggests that language influences thought, and that some languages might lead to clearer thinking.

The Russians are considering a semi-catamaran aircraft carrier

Wednesday, October 3rd, 2018

I don’t know much about naval architecture, but I remember idly musing — while reading about World War II, of course — that a catamaran design would allow an almost arbitrarily large flight deck on an aircraft carrier — and I wondered what I was missing.

Now it looks like the Russians are considering a semi-catamaran design for their new Krylov light aircraft carrier:

The underwater part of the light aircraft carrier features a semi-catamaran hull which is the major distinction of the project designed by the Krylov Scientific Center, a representative of the organization told TASS.

Krylov Light Aircraft Carrier Stern

“The project is distinguished by the underwater part of a semi-catamaran form. Catamaran actually means two hulls united by a platform. It has a wide deck which is important for an aircraft carrier. The design adds flight deck space on which the number of aircraft depends. As a result, a medium-displacement ship can carry a full-fledged air wing,” he said.

Abixia is a paracosm

Saturday, September 29th, 2018

Alison Gopnik explores the imaginary worlds of childhood:

In 19th-century England, the Brontë children created Gondal, an imaginary kingdom full of melodrama and intrigue. Emily and Charlotte Brontë grew up to write the great novels “Wuthering Heights” and “Jane Eyre.” The fictional land of Narnia, chronicled by C.S. Lewis in a series of classic 20th-century novels, grew out of Boxen, an imaginary kingdom that Lewis shared with his brother when they were children. And when the novelist Anne Perry was growing up in New Zealand in the 1950s, she and another girl created an imaginary kingdom called Borovnia as part of an obsessive friendship that ended in murder — the film “Heavenly Creatures” tells the story.

But what about Abixia? Abixia is an island nation on the planet Rooark, with its own currency (the iinter, divided into 12 skilches), flag and national anthem. It’s inhabited by cat-humans who wear flannel shirts and revere Swiss army knives — the detailed description could go on for pages. And it was created by a pair of perfectly ordinary Oregon 10-year-olds.

Abixia is a “paracosm,” an extremely detailed and extensive imaginary world with its own geography and history. The psychologist Marjorie Taylor at the University of Oregon and her colleagues discovered Abixia, and many other worlds like it, by talking to children. Most of what we know about paracosms comes from writers who described the worlds they created when they were children. But in a paper forthcoming in the journal Child Development, Prof. Taylor shows that paracosms aren’t just the province of budding novelists. Instead, they are a surprisingly common part of childhood.

Prof. Taylor and her colleagues asked 169 children, ages eight to 12, whether they had an imaginary world and what it was like. They found that about 17 percent of the children had created their own complicated universe. Often a group of children would jointly create a world and maintain it, sometimes for years, like the Brontë sisters or the Lewis brothers. And grown-ups were not invited in.

Prof. Taylor also tried to find out what made the paracosm creators special. They didn’t score any higher than other children in terms of IQ, vocabulary, creativity or memory. Interestingly, they scored worse on a test that measured their ability to inhibit irrelevant thoughts. Focusing on the stern and earnest real world may keep us from wandering off into possible ones.

But the paracosm creators were better at telling stories, and they were more likely to report that they also had an imaginary companion. In earlier research, Prof. Taylor found that around 66% of preschoolers have imaginary companions; many paracosms began with older children finding a home for their preschool imaginary friends.

Children with paracosms, like children with imaginary companions, weren’t neurotic loners either, as popular stereotypes might suggest. In fact, if anything, they were more socially skillful than other children.

Why do imaginary worlds start to show up when children are eight to 12 years old? Even when 10-year-olds don’t create paracosms, they seem to have a special affinity for them — think of all the young “Harry Potter” fanatics. And as Prof. Taylor points out, paracosms seem to be linked to all the private clubhouses, hidden rituals and secret societies of middle childhood.

Prof. Taylor showed that preschoolers who create imaginary friends are particularly good at understanding other people’s minds — they are expert at everyday psychology. For older children, the agenda seems to shift to what we might call everyday sociology or geography. Children may create alternative societies and countries in their play as a way of learning how to navigate real ones in adult life.

You’re almost always better off turning it into a “which one” question

Saturday, September 22nd, 2018

Steven Johnson (@stevenbjohnson), author of Farsighted: How We Make the Decisions That Matter the Most, explains the importance of generating alternatives to any course of action you are considering:

In the early 1980s, a business school professor named Paul Nutt set out to catalog real-world decisions the way a botanist might catalog the various types of vegetation growing in a rain forest. In his initial study, published in 1984, he analyzed 78 decisions made by senior managers at a range of public and private organizations in the United States and Canada: insurance companies, government agencies, hospitals, consulting firms.

The most striking finding in Professor Nutt’s research was this: Only 15 percent of the decisions he studied involved a stage where the decision makers actively sought out a new option beyond the initial choices on the table. In a later study, he found that only 29 percent of organizational decision makers contemplated more than one alternative.

This turns out to be a bad strategy. Over the years, Professor Nutt and other researchers have demonstrated a strong correlation between the number of alternatives deliberated and the ultimate success of the decision itself. In one of his studies, Professor Nutt found that participants who considered only one alternative ultimately judged their decision a failure more than 50 percent of the time, while decisions that involved contemplating at least two alternatives were felt to be successes two-thirds of the time.

The upshot is clear: If you find yourself mapping a “whether or not” question, looking at a simple fork in the road, you’re almost always better off turning it into a “which one” question that gives you more available paths.

He continues with a rather fashionable follow-on notion:

What’s the best way to expand your pool of options? Researchers suggest that if possible, you diversify the group of people who are helping make the decision. About a decade ago, the social psychologist Samuel Sommers conducted a series of mock trials in which a jury debated and evaluated evidence from a sexual assault case. Some of the juries were entirely white, while other juries were more diverse in their racial makeup. By almost every important metric, the racially mixed juries performed better at their task. They considered more potential interpretations of the evidence, remembered information about the case more accurately and engaged in the deliberation process with more rigor and persistence.

Homogeneous groups — whether they are united by ethnic background, gender or some other commonality like politics — tend to come to decisions too quickly. They settle early on a most-likely scenario and don’t question their assumptions, since everyone at the table seems to agree with the broad outline of the interpretation.

A 2008 study led by the management professor Katherine Phillips using a similar investigative structure revealed an additional, seemingly counterintuitive finding: While the more diverse groups were better at reaching the truth, they were also far less confident in the decisions they made. They were both more likely to be right and, at the same time, more open to the idea that they might be wrong.

No more than a few dozen excellent examples were ever published

Tuesday, August 7th, 2018

Ira Levin’s Rosemary’s Baby kicked off a wave of horror novels that flourished throughout the 1970s and 80s. James Clavell’s Shogun kicked off a similar, smaller wave of historical adventure novels set in Asia:

I’ve long been a big fan of these books which, for lack of a better term, I refer to collectively as ‘The Children of Shogun.’

Alas, Shogun didn’t produce nearly as many bastard offspring as Rosemary’s Baby did. It was fairly easy for any professional writer with imagination and a passion for horror stories to turn out a handful of 300-page supernatural thrillers over the course of a couple of decades. Producing a 900-page Shogun-like epic is another matter entirely. The Children of Shogun were written mostly by men and women with years of personal experience in Asia. They tended to be journalists or academics with a profound interest in the history and culture of the East. If you were a horror fan in the 1970s and ’80s (and I was), it was easy to find titles to feed your hunger for demonic children, seductive witches, and haunted houses. If you craved massive historical epics featuring singsong girls, opium pipes, rickshaws, treaty ports, forbidden cities, warlords, seppuku, pillow dictionaries, foot binding, and godowns filled with tea or silk or jade, feeding your hunger took a bit more initiative.

Nevertheless, quite a few such books got published between the mid-1970s and mid-1990s, and I’ve read dozens of them. The phenomenon seems to have faded over the past 20 years, giving way to other literary booms: vampire novels, fantasy epics, young-adult dystopian series. It is unlikely the boom will ever be revived. In a critical review of Pearl S. Buck’s The Good Earth posted on Goodreads, author Celeste Ng probably spoke for many of today’s progressive readers when she complained about “the weirdness that arises from a Westerner writing about a colonized country.” Apparently it’s all right when an Asian author like Haruki Murakami (whose work I love) writes novels inspired by the likes of Raymond Carver, Raymond Chandler, and Franz Kafka. But Westerners who write about the adventures of English-speaking protagonists in Asia are likely to be shouted down with accusations of cultural appropriation.

[...]

One thing that stands out about these authors is that many of them led lives nearly as adventurous as their protagonists. Anthony Grey spent 27 months in a Chinese prison. During his long career in journalism, Noel Barber was stabbed five times and shot in the head once. James Clavell was a prisoner of war during WWII. Robert Elegant covered both the Korean and Vietnam wars as a journalist and Richard Nixon once called him “my favorite China expert.” If books in this genre seem somewhat more convincing than horror novels of the same era, perhaps it’s because no horror novelists of the era were ever actually possessed by Satan, bitten by vampires, or capable of starting fires with their minds. But, while horror novels are still being churned out in large numbers, almost no one is writing Shogun-like sagas any longer. Soon the genre may cease to exist entirely. If you don’t believe me, consider the decline of the American Indian novel written by white authors.

During much of the twentieth century, white American authors produced some excellent novels featuring Native American characters. The list includes masterpieces such as Oliver La Farge’s Pulitzer Prize-winning Laughing Boy (1929) and Scott O’Dell’s Newbery Medal-winning Island of the Blue Dolphins (1960). Other prominent titles in the genre include Thomas Berger’s 1964 novel Little Big Man (subsequently adapted into a film starring Dustin Hoffman and Faye Dunaway, directed by Arthur Penn), Margaret Craven’s I Heard the Owl Call My Name (1967), and Douglas C. Jones’s A Creek Called Wounded Knee (1978).

But the production of such novels has dwindled markedly over the last 40 years or so. This probably has something to do with what happened to Ruth Beebe Hill after the publication of her 1978 novel Hanta Yo. The early reviews of the book were positive. A reviewer for the Harvard Crimson called Hanta Yo “the best researched novel yet written about an American Indian tribe.” The Native American writer N. Scott Momaday, author of House Made of Dawn, admired the book. David Wolper, the producer of the landmark TV miniseries Roots, purchased the film rights to Hanta Yo and planned to give it the same treatment as Roots. Alas, before Wolper could put his plan into action, the book began drawing criticism from Native American groups contending that it was an inaccurate portrayal of the Sioux.

[...]

If you haven’t yet experienced the joys of exploring ‘The Children of Shogun,’ a great literary pleasure still awaits you. But read slowly and linger over each book. No more than a few dozen excellent examples were ever published. And no new titles are likely to appear in the foreseeable future, if Celeste Ng and her ilk have their way.

Gastronomy is the science of pain

Sunday, June 10th, 2018

Anthony Bourdain got his start in writing with this piece for The New Yorker — which, from the get-go, demonstrates his dark streak:

Good food, good eating, is all about blood and organs, cruelty and decay. It’s about sodium-loaded pork fat, stinky triple-cream cheeses, the tender thymus glands and distended livers of young animals. It’s about danger — risking the dark, bacterial forces of beef, chicken, cheese, and shellfish. Your first two hundred and seven Wellfleet oysters may transport you to a state of rapture, but your two hundred and eighth may send you to bed with the sweats, chills, and vomits.

Gastronomy is the science of pain. Professional cooks belong to a secret society whose ancient rituals derive from the principles of stoicism in the face of humiliation, injury, fatigue, and the threat of illness. The members of a tight, well-greased kitchen staff are a lot like a submarine crew. Confined for most of their waking hours in hot, airless spaces, and ruled by despotic leaders, they often acquire the characteristics of the poor saps who were press-ganged into the royal navies of Napoleonic times — superstition, a contempt for outsiders, and a loyalty to no flag but their own.

Ape-men and dactyloscopy

Friday, April 13th, 2018

When I first read Tarzan of the Apes years ago, I was surprised by a number of things, including how fingerprints were still seen as cutting-edge science in a novel from 1912:

The ape-man was anxious to proceed to America, but D’Arnot insisted that he must accompany him to Paris first, nor would he divulge the nature of the urgent necessity upon which he based his demand.

One of the first things which D’Arnot accomplished after their arrival was to arrange to visit a high official of the police department, an old friend; and to take Tarzan with him.

Adroitly D’Arnot led the conversation from point to point until the policeman had explained to the interested Tarzan many of the methods in vogue for apprehending and identifying criminals.

Not the least interesting to Tarzan was the part played by finger prints in this fascinating science.

“But of what value are these imprints,” asked Tarzan, “when, after a few years the lines upon the fingers are entirely changed by the wearing out of the old tissue and the growth of new?”

“The lines never change,” replied the official. “From infancy to senility the fingerprints of an individual change only in size, except as injuries alter the loops and whorls. But if imprints have been taken of the thumb and four fingers of both hands one must needs lose all entirely to escape identification.”

“It is marvelous,” exclaimed D’Arnot. “I wonder what the lines upon my own fingers may resemble.”

“We can soon see,” replied the police officer, and ringing a bell he summoned an assistant to whom he issued a few directions.

The man left the room, but presently returned with a little hardwood box which he placed on his superior’s desk.

“Now,” said the officer, “you shall have your fingerprints in a second.”

He drew from the little case a square of plate glass, a little tube of thick ink, a rubber roller, and a few snowy white cards.

Squeezing a drop of ink onto the glass, he spread it back and forth with the rubber roller until the entire surface of the glass was covered to his satisfaction with a very thin and uniform layer of ink.

“Place the four fingers of your right hand upon the glass, thus,” he said to D’Arnot. “Now the thumb. That is right. Now place them in just the same position upon this card, here, no–a little to the right. We must leave room for the thumb and the fingers of the left hand. There, that’s it. Now the same with the left.”

“Come, Tarzan,” cried D’Arnot, “let’s see what your whorls look like.”

Tarzan complied readily, asking many questions of the officer during the operation.

“Do fingerprints show racial characteristics?” he asked. “Could you determine, for example, solely from fingerprints whether the subject was Negro or Caucasian?”

“I think not,” replied the officer.

“Could the finger prints of an ape be detected from those of a man?”

“Probably, because the ape’s would be far simpler than those of the higher organism.”

“But a cross between an ape and a man might show the characteristics of either progenitor?” continued Tarzan.

“Yes, I should think likely,” responded the official; “but the science has not progressed sufficiently to render it exact enough in such matters. I should hate to trust its findings further than to differentiate between individuals. There it is absolute. No two people born into the world probably have ever had identical lines upon all their digits. It is very doubtful if any single fingerprint will ever be exactly duplicated by any finger other than the one which originally made it.”

“Does the comparison require much time or labor?” asked D’Arnot.

“Ordinarily but a few moments, if the impressions are distinct.”

One reason this all surprised me was that I was certain I’d read about Sherlock Holmes using fingerprints in much older stories — but Holmes was ahead of his time, and the stories weren’t quite as old as I’d assumed:

Conan Doyle made Holmes a man of science and an innovator of forensic methods. Holmes is so much at the forefront of detection that he has authored several monographs on crime-solving techniques. In several instances the extremely well-read Conan Doyle depicted Holmes using methods years before they were adopted by official police forces in both Britain and America.

Holmes was quick to realize the value of fingerprint evidence. The first case in which fingerprints are mentioned is The Sign of the Four (1890); Scotland Yard did not begin to use fingerprints until 1901. Thirty-six years later in the 55th story, “The Adventure of the Three Gables” (1926), fingerprints still figure in detection. In “The Adventure of the Norwood Builder” (1903), the appearance of a fingerprint is the key piece of evidence in the solution of the crime. It is interesting to note that Conan Doyle chose to have Holmes use fingerprints but not Bertillonage (also called anthropometry), the system of identification invented by Alphonse Bertillon in Paris that pivoted on measuring 12 characteristics of the body. The two methods competed for forensic ascendancy for many years. By having Holmes use fingerprints rather than Bertillonage, the astute Conan Doyle picked the method with the soundest scientific future.

Fingerprints have a long history:

Jan Evangelista Purkinje (1787–1869), a Czech physiologist and professor of anatomy at the University of Breslau, published a thesis in 1823 discussing 9 fingerprint patterns, but he did not mention any possibility of using fingerprints to identify people.

In 1840, following the murder of Lord William Russell, a provincial doctor, Robert Blake Overton, wrote to Scotland Yard suggesting checking for fingerprints but the suggestion, though followed up, did not lead to their routine use by the police for another 50 years.

Some years later, the German anatomist Georg von Meissner (1829–1905) studied friction ridges, and five years after this, in 1858, Sir William James Herschel initiated fingerprinting in India. In 1877 at Hooghly (near Calcutta) he instituted the use of fingerprints on contracts and deeds to prevent the then-rampant repudiation of signatures and he registered government pensioners’ fingerprints to prevent the collection of money by relatives after a pensioner’s death. Herschel also fingerprinted prisoners upon sentencing to prevent various frauds that were attempted in order to avoid serving a prison sentence.

In 1863, Paul-Jean Coulier (1824–1890), professor for chemistry and hygiene at the medical and pharmaceutical school of the Val-de-Grâce military hospital in Paris, discovered that iodine fumes can reveal fingerprints on paper.

In 1880, Dr. Henry Faulds, a Scottish surgeon in a Tokyo hospital, published his first paper on the subject in the scientific journal Nature, discussing the usefulness of fingerprints for identification and proposing a method to record them with printing ink. He also established their first classification and was also the first to identify fingerprints left on a vial. Returning to the UK in 1886, he offered the concept to the Metropolitan Police in London but it was dismissed at that time.

Faulds wrote to Charles Darwin with a description of his method but, too old and ill to work on it, Darwin gave the information to his cousin, Francis Galton, who was interested in anthropology. Thus inspired, Galton studied fingerprints for ten years and published a detailed statistical model of fingerprint analysis and identification, encouraging its use in forensic science in his book Finger Prints. He had calculated that the chance of a “false positive” (two different individuals having the same fingerprints) was about 1 in 64 billion.
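
Galton’s “1 in 64 billion” was a back-of-the-envelope combinatorial estimate rather than a measurement. The sketch below follows the decomposition usually attributed to Finger Prints: roughly 24 independent regions of a print, each matched by chance with probability 1/2, plus factors of 1/16 for agreement in overall pattern type and 1/256 for ridge counts. Treat the exact factors as a common reconstruction, not a quotation from Galton:

```python
from fractions import Fraction

# A commonly cited reconstruction of Galton's estimate: treat a print as about
# 24 small regions, each of which matches a second print by chance with
# probability 1/2, then fold in two further factors for agreement in overall
# pattern type (~1/16) and in ridge counts (~1/256).

p_regions      = Fraction(1, 2) ** 24   # 24 independent regions, 1/2 each
p_pattern_type = Fraction(1, 16)        # matching the general pattern type
p_ridge_count  = Fraction(1, 256)       # matching the ridge counts

p_false_positive = p_regions * p_pattern_type * p_ridge_count
print(p_false_positive)   # 1/68719476736, i.e. 1 in 2**36

# 2**36 is about 69 billion; since 2**36 = 64 * 2**30 and 2**30 is roughly a
# billion, the result is usually quoted as "about 1 in 64 billion."
```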

Juan Vucetich, an Argentine chief police officer, created the first method of recording the fingerprints of individuals on file, associating these fingerprints to the anthropometric system of Alphonse Bertillon, who had created, in 1879, a system to identify individuals by anthropometric photographs and associated quantitative descriptions. In 1892, after studying Galton’s pattern types, Vucetich set up the world’s first fingerprint bureau. In that same year, Francisca Rojas of Necochea was found in a house with neck injuries, whilst her two sons were found dead with their throats cut. Rojas accused a neighbour, but despite brutal interrogation, this neighbour would not confess to the crimes. Inspector Alvarez, a colleague of Vucetich, went to the scene and found a bloody thumb mark on a door. When it was compared with Rojas’ prints, it was found to be identical with her right thumb. She then confessed to the murder of her sons.

A Fingerprint Bureau was established in Calcutta (Kolkata), India, in 1897, after the Council of the Governor General approved a committee report that fingerprints should be used for the classification of criminal records. Working in the Calcutta Anthropometric Bureau, before it became the first Fingerprint Bureau in the world, were Azizul Haque and Hem Chandra Bose. Haque and Bose were Indian fingerprint experts who have been credited with the primary development of a fingerprint classification system eventually named after their supervisor, Sir Edward Richard Henry.

The Henry Classification System, co-devised by Haque and Bose, was accepted in England and Wales when the first United Kingdom Fingerprint Bureau was founded in Scotland Yard, the Metropolitan Police headquarters, London, in 1901. Sir Edward Richard Henry subsequently achieved improvements in dactyloscopy.
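
The passage doesn’t say what the Henry system actually computes, but its best-known component, the “primary classification,” is simple enough to sketch. The version below follows the standard textbook description (whorls on particular fingers contribute weights of 16, 8, 4, 2 and 1, yielding a fraction with 1,024 possible values); it’s an illustration of the idea, not a claim about Haque and Bose’s full scheme:

```python
# A sketch of the Henry primary classification as it is usually described:
# number the fingers 1-10 starting from the right thumb; only fingers bearing
# a whorl pattern contribute a weight, and the weight depends on position.
# The primary classification is the fraction
#     (sum of weights of whorled even-numbered fingers + 1)
#     ------------------------------------------------------
#     (sum of weights of whorled odd-numbered fingers  + 1)
# which takes one of 32 x 32 = 1,024 values -- a coarse pigeonhole for filing
# ten-print cards, not an identification in itself.

WHORL_WEIGHTS = {1: 16, 2: 16, 3: 8, 4: 8, 5: 4, 6: 4, 7: 2, 8: 2, 9: 1, 10: 1}

def henry_primary(whorled_fingers):
    """Primary classification from the set of finger positions (1-10) bearing whorls."""
    even = sum(WHORL_WEIGHTS[f] for f in whorled_fingers if f % 2 == 0)
    odd = sum(WHORL_WEIGHTS[f] for f in whorled_fingers if f % 2 == 1)
    return f"{even + 1}/{odd + 1}"

print(henry_primary(set()))          # no whorls at all              -> 1/1
print(henry_primary({1, 2, 5, 10}))  # whorls on fingers 1, 2, 5, 10 -> 18/21
```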

In the United States, Dr. Henry P. DeForrest used fingerprinting in the New York Civil Service in 1902, and by 1906, New York City Police Department Deputy Commissioner Joseph A. Faurot, an expert in the Bertillon system and a finger print advocate at Police Headquarters, introduced the fingerprinting of criminals to the United States.

The Scheffer case of 1902 is the first case of the identification, arrest and conviction of a murderer based upon fingerprint evidence. Alphonse Bertillon identified the thief and murderer Scheffer, who had previously been arrested and his fingerprints filed some months before, from the fingerprints found on a fractured glass showcase, after a theft in a dentist’s apartment where the dentist’s employee was found dead. It was proved in court that the fingerprints had been made after the showcase was broken. A year later, Alphonse Bertillon created a method of getting fingerprints off smooth surfaces and took a further step in the advance of dactyloscopy.

There are other reasons why stories are remembered

Friday, February 23rd, 2018

I recently shared an interview with Frank Herbert, where he and Professor McNelly discuss, among other things, why stories are remembered:

Willis McNelly: I have said this to my classes that, in many ways as satisfying as “Dune” is, I find it unsatisfying because there are so many unanswered questions; you don’t tie up the loose ends of, say, Paul’s sister, unless you read…what is it?.. a “Huntress of a Thousand Worlds” (Laughter)…that marvellous little…little footnote of Princess Alia. But… or several other things. The whole question of the Spacing Guild itself and how it got to be the way it was is handled very…you know…

Frank Herbert: Well, let’s…let’s examine something, as far as fiction in general is concerned…

WM: All right.

FH: Now there are other reasons why stories are remembered, and I’m talking about story in the classic sense of the knight who goes from castle to castle to earn his meal.

WM: All right.

FH: Entertainment…

WM: Sure.

FH: The stories that are remembered are the ones that strike sparks from your mind, one way or another. It’s like a grinding wheel. They touch you and sparks fly.

WM: Would this be something like the Miller’s tale of Chaucer or Sir Gawain and the Green Knight, if you please?

FH: Yes, indeed.

WM: Or, well, we could adduce thousands of other examples up to, say, Treasure Island or what you will. There’s sparks there.

FH: OK.

WM: I understand your term.

FH: Now we all have stories that we go on with after we finish reading them. As children, we can remember playing Treasure Island…

WM: Or playing Tom Sawyer…

FH: Or Tom Sawyer…any of these. We remember playing these. The story stayed with us…the characters and their conflicts, their joys, their play all stayed with us.

WM: And it enkindled sparks in our own imagination, so that we were then active in creative play.

FH: That’s exactly right! We went on and told the story ourself…

WM: Yes.

FH: Now, I deliberately did this in “Dune” for that purpose. I want the person to go on and construct for himself all of these marvellous flights of fantasy and imagination. I want him to…you see, you haven’t had the Spacing Guild explained completely…just enough so that you know its existence. Now with lots of people, they’ve got to complete this.

WM: Yes.

FH: So they build it up in their own minds. Now this is right out of the story, though, you see…

WM: Yes. Or the whole…

FH: The sparks have flown.

I found this almost ironic, since I had just watched the first episode of Netflix’s The Toys that Made Us, about the original Star Wars toys, which no major toy manufacturer was willing to produce. Only Kenner was willing to take on the project, because Bernard Loomis recognized how toyetic the new film would be. Millions of kids would go on to play out their own versions of Star Wars, never knowing how heavily it borrowed from Dune.

Give them living eulogies

Sunday, February 4th, 2018

Megan McArdle offers her own 12 rules for life:

  1. Be kind. Mean is easy; kind is hard.
  2. Politics is not the most important thing in the world. It’s just the one people talk about the most.
  3. Always order one extra dish at a restaurant, an unfamiliar one.
  4. Give yourself permission to be bad.
  5. Go to the party even when you don’t want to.
  6. Save 25 percent of your income.
  7. Don’t just pay people compliments; give them living eulogies.
  8. That thing you kinda want to do someday? Do it now.
  9. Human beings are often splendid, the world is often glorious, and nature, red in tooth and claw, also invented kindness, charity and love. Believe in that.
  10. Don’t try to resolve fundamental conflicts with your spouse or roommates.
  11. Be grateful. No matter how awful your life seems at the moment, you have something to be grateful for.
  12. Always make more dinner rolls than you think you can eat.