Experts’ brains transform data into action

Friday, July 21st, 2017

Neuroscientists Jason Sherwin and Jordan Muraskin are studying what happens inside the brain of a baseball player trying to hit a pitch:

Sherwin and Muraskin think they’ve identified a pattern of brain activation in professional hitters. One key area is the fusiform gyrus, a small spot at the bottom of the brain that is crucial for object recognition. For baseball players, this region is much more active during hitting. Recent data also suggests that in experts the fusiform gyrus may be more connected to the motor cortex, which controls movement. Sajda says this has important implications because the increased connection could indicate that experts’ brains are more efficient at transforming data about the pitch into movement.

The expert hitters also tend to use their frontal cortex — a part of the brain that is generally in charge of deliberate decision-making — less than nonexperts do when hitting. (When we decide to order a baked potato rather than french fries, it’s a good bet that our frontal cortex is deeply involved. However, this part of the brain tends to make decisions more slowly and meticulously; it is not adept at split-second choices.)

This diminished frontal participation is crucial, they say. “Players seem to make the decision in their motor cortex rather than their frontal cortex,” Sajda says. “Their brains recognize and act on pitches more efficiently.”

Another key area that appears to be more energized among expert hitters is the supplementary motor area (SMA), a small region at the top of the brain. It is involved in the coordination of sequences of preplanned movements such as hitting. In expert hitters, this area is especially active as the pitcher winds up and as the pitch approaches the plate. In essence, the researchers say, experts are better at preparing to swing.

Muraskin thinks that the SMA plays a key role in helping hitters choose when not to swing. Many good hitters — the Nationals’ Daniel Murphy is known for this — have a preternatural ability to wait for the “right” pitch, the pitch they can hit. In other words, they excel at inhibiting their swing. “When you choose not to swing, that’s a choice,” Muraskin says. “It is a learned expertise.”

One in five Americans are prescribed opioids

Thursday, July 20th, 2017

More than one in five people were prescribed an opioid painkiller at least once in 2015 — at least among those insured by Blue Cross and Blue Shield:

The report, which covers 30 million people with Blue Cross and Blue Shield insurance in 2015, supports what experts have been saying: much, if not most, of the opioid overdose epidemic is being driven by medical professionals who are prescribing the drugs too freely.

“Twenty-one percent of Blue Cross and Blue Shield (BCBS) commercially insured members filled at least one opioid prescription in 2015,” the report says. “Data also show BCBS members with an opioid use disorder diagnosis spiked 493 percent over a seven year period.”

The report excludes people with cancer or terminal illnesses. What it found fits in with similar surveys of people with Medicare, Medicaid or other government health insurance, said Dr. Trent Haywood, chief medical officer for the Blue Cross and Blue Shield Association (BCBSA).

Grass pyramids cut noise pollution

Thursday, July 20th, 2017

Airport noise travels far in a flat country like the Netherlands:

The tricky thing about dampening airport noise is that the noise has a very low frequency and a very long wavelength, around 36 feet, so a simple barricade will do little to stop the drone. But in 2008, airport staff noticed that noise levels were reduced every fall by an unexpected phenomenon: plowed fields. After examining the scene, they discovered that the ridges and furrows of the field were spaced in a way that partially silenced the hum.

So, the firm H+N+S Landscape Architects teamed up with artist Paul De Kort to produce a series of 150 artificial pyramids of grass, each 6 feet tall and 36 feet apart (the approximate wavelength of airport hubbub). This ingenious method, based on the groundbreaking work of acoustician Ernst Chladni, has effectively reduced noise pollution in the region by half.
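A back-of-envelope check of the article's numbers, assuming the standard speed of sound in air at room temperature (roughly 1,125 ft/s, which is not stated in the article):

```python
# Speed of sound in air at ~20 °C (assumed, not from the article), in ft/s
SPEED_OF_SOUND_FT_S = 1125.0
WAVELENGTH_FT = 36.0  # wavelength of the dominant drone, per the article

frequency_hz = SPEED_OF_SOUND_FT_S / WAVELENGTH_FT
print(f"{frequency_hz:.0f} Hz")  # ~31 Hz, far below what a simple barricade can block
```

That puts the drone near the bottom of the audible range, which is why ridges spaced at the wavelength scale work where a fence would not.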

Buitenschot Land Art Park

To the amusement of the people in the area, the 80-acre swath of ridges adds entertainment to utility. Paths for pedestrians and bicycles slice between the grass ridges, and De Kort has even incorporated works of art into the park, including “Listening Ear,” a dish with a gap in the middle that amplifies sound, and “Chladni-Pond,” a diamond-shaped pond where park guests can power a wave mechanism with their feet.

I may be screwing this person over

Wednesday, July 19th, 2017

A recent Freakonomics podcast looks at civic-minded Harvard physician Richard Clarke Cabot’s long-running Cambridge-Somerville Youth Study, which matched troubled boys with mentors — versus a matched control group who received no mentoring:

They found a null effect. They found there were no differences between the treatment and control boys on offending.

When computers came on the scene and they could analyze the data in finer detail, they made an interesting discovery:

On all seven measures — we’re talking, how long did you live? Were you a criminal? Were you mentally healthy, physically healthy, alcoholic, satisfied with your job; satisfied with your marriage? On all seven measures, the treatment group did statistically significantly worse than the control group.

The lesson:

And that’s one of the important things people who are engaged in social interventions really don’t spend much time thinking, “I may be screwing this person over.” They are self-conscious about, “Maybe this won’t work, but I’ve got to try!”

You can get away with as little as one minute of effort

Wednesday, July 19th, 2017

Scientists out of McMaster University recently conducted research on the shortest interval training ever:

To see just how little you can get away with when it comes to interval training for health purposes, the researchers brought in 25 less-than-in-shape young men (future studies will focus on women). They tested their levels of aerobic fitness and their ability to use insulin in the right way to control blood sugar, and biopsied their muscles to see how well they functioned on a cellular level.

Then they split them into a control group, a moderate-intensity-exercise group, and a sprint interval training (SIT) group.

The control group did nothing differently at all.

The moderate-intensity group did a typical I’m-at-the-gym routine of a two-minute warm-up, 45 minutes on the stationary bike, and a three-minute cool down, three times a week.

The SIT group did the shortest interval training ever recorded thus far by science. Participants warmed up for two minutes on a stationary bike, then sprinted full-out for 20 seconds, then rode for two minutes very slowly. They repeated this twice (for a total of three sets). The whole workout took 10 minutes, with only one minute being high-intensity.

All of the groups kept at it for 12 weeks, or about twice as long as most previous studies.

The results?

The control group, as expected, had no change in results.

The two other groups enjoyed results that were basically identical to each other’s. In both, scientists found a 20 percent increase in cardiovascular endurance, good improvements in insulin resistance, and significant increases in the cells responsible for energy production and oxygen in the muscles (thanks, biopsies).

That is remarkable. By the end, the moderate-intensity group had ridden for 27 hours, while the SIT group had ridden for 6 total hours, just 36 minutes of which was arduous.

This means one group spent about 10 total minutes on each workout, while the other spent 50 minutes. The SIT group got the same benefits in a fifth of the time.
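The article's totals can be checked directly from the study design it describes (12 weeks, 3 sessions per week):

```python
sessions = 12 * 3  # 36 sessions per group over the study

moderate_riding_hours = sessions * 45 / 60  # 45 min of riding per session
sit_total_hours = sessions * 10 / 60        # 10 min per SIT workout
sit_hard_minutes = sessions * 3 * 20 / 60   # 3 sprints x 20 s per workout

print(moderate_riding_hours)  # 27.0
print(sit_total_hours)        # 6.0
print(sit_hard_minutes)       # 36.0
```

The figures match the article: 27 hours of moderate riding versus 6 total hours for the SIT group, only 36 minutes of which were all-out sprints.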

The games get increasingly difficult as the player’s heart rate increases

Tuesday, July 18th, 2017

Boston Children’s Hospital researchers have developed videogames for children who need to learn how to control their emotions better:

The videogames track a child’s heart rate, displayed on the screen. The games get increasingly difficult as the player’s heart rate increases. To resume playing without extra obstacles, the child has to calm down and reduce their heart rate.
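The feedback rule described above can be sketched in a few lines. This is only an illustration of the idea — the function name, the calm threshold, and the 10-bpm step are hypothetical, not Mighteor's actual design:

```python
def difficulty_for(heart_rate_bpm: float, calm_threshold_bpm: float = 100.0) -> int:
    """Map the player's heart rate to a difficulty level.

    At or below the calm threshold the game plays normally (level 1);
    each 10 bpm above it adds an obstacle level, so the child must
    lower their heart rate to resume playing without extra obstacles.
    (Threshold and step size are illustrative assumptions.)
    """
    excess = max(0.0, heart_rate_bpm - calm_threshold_bpm)
    return 1 + int(excess // 10)
```

For example, `difficulty_for(85)` stays at level 1, while `difficulty_for(125)` ramps up to level 3 until the player calms down.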


The impact of the games was tested in two studies.

In a pilot study, they first tested the game in a psychiatric inpatient unit with children with anger management issues, said Joseph Gonzalez-Heydrich, director of the developmental neuropsychiatry clinic at Boston Children’s. They found improvements in just five days and published the results in 2012 in a study in the journal Adolescent Psychiatry.

“A lot of these kids we are seeing are not interested in psychotherapy and talking,” said Dr. Gonzalez-Heydrich, who is head of the scientific advisory board of Mighteor, and said he has a small amount of equity in the company. “But they will work really hard to get good at a videogame.”

In a subsequent outpatient study the researchers randomized 20 youth to 10 cognitive behavior therapy sessions and videogame therapy that required them to control their heart rate, and 20 youth to CBT with the same videogame but not linked to heart rates. All the adolescents had anger or aggression problems, said Dr. Gonzalez-Heydrich, who was senior author of the study.

Therapists interviewed the children’s primary caregiver before and two weeks after their last therapy session. They found the children’s ratings on aggression and opposition were reduced much more in the group that played the game with the built-in biofeedback. The ratings for anger went down about the same in both groups. The findings were presented at the American Academy of Child and Adolescent Psychiatry conference in 2015. The study is currently under review for publication.

Think you drink a lot?

Tuesday, July 18th, 2017

Think you drink a lot? This chart will tell you:

These figures come from Philip J. Cook’s Paying the Tab, an economically-minded examination of the costs and benefits of alcohol control in the U.S. Specifically, they’re calculations made using the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) data.

Drinks per Capita by Decile

“One consequence is that the heaviest drinkers are of greatly disproportionate importance to the sales and profitability of the alcoholic-beverage industry,” he writes. “If the top decile somehow could be induced to curb their consumption level to that of the next lower group (the ninth decile), then total ethanol sales would fall by 60 percent.”

(Hat tip to P.D. Mangan.)

Trevor Butterworth considers this data journalism gone wrong:

If we look at the section where he arrives at this calculation, and go to the footnote, we find that he used data from 2001-2002 from NESARC, the National Institute on Alcohol Abuse and Alcoholism, which had a representative sample of 43,093 adults over the age of 18. But following this footnote, we find that Cook corrected these data for under-reporting by multiplying the number of drinks each respondent claimed they had drunk by 1.97 in order to comport with the previous year’s sales data for alcohol in the US. Why? It turns out that alcohol sales in the US in 2000 were double what NESARC’s respondents — a nationally representative sample, remember — claimed to have drunk.

While the mills of US dietary research rely on the great National Health and Nutrition Examination Survey to digest our diets and come up with numbers, we know, thanks to the recent work of Edward Archer, that recall-based survey data are highly unreliable: we misremember what we ate, we misjudge by how much; we lie. Were we to live on what we tell academics we eat, life for almost two thirds of Americans would be biologically implausible.

But Cook, who is trying to show that the distribution is uneven, ends up trying to solve an apparent recall problem by creating an aggregate multiplier to plug the sales data gap. And the problem is that this requires us to believe that every drinker misremembered by a factor of almost two. This might not be much of a stretch for moderate drinkers; but did everyone who drank, say, four or eight drinks per week systematically forget that they actually had eight or sixteen? That seems like a stretch.
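Butterworth's objection is easy to see in miniature. Applying a single aggregate multiplier scales every respondent's self-report by the same factor (the drink counts below are made up for illustration):

```python
UNDERREPORT_MULTIPLIER = 1.97  # Cook's uniform correction factor

# Hypothetical self-reported weekly drink counts for four respondents
reported = [2, 4, 8, 16]
corrected = [drinks * UNDERREPORT_MULTIPLIER for drinks in reported]

# The uniform multiplier treats everyone alike: the respondent who
# reported 8 drinks is assumed to have actually had nearly 16.
print(corrected)  # [3.94, 7.88, 15.76, 31.52]
```

A closer look would ask whether under-reporting is really uniform, or whether heavy drinkers under-report by more (or less) than moderate ones — the multiplier assumes it away.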

In the early stages of Alzheimer’s glycation damages an enzyme called MIF

Monday, July 17th, 2017

Abnormally high blood sugar levels are linked to Alzheimer’s, and now the mechanism has become clearer:

Diabetes patients have an increased risk of developing Alzheimer’s disease compared to healthy individuals. In Alzheimer’s disease abnormal proteins aggregate to form plaques and tangles in the brain which progressively damage the brain and lead to severe cognitive decline.

Scientists already knew that glucose and its break-down products can damage proteins in cells via a reaction called glycation but the specific molecular link between glucose and Alzheimer’s was not understood.

But now scientists from the University of Bath Departments of Biology and Biochemistry, Chemistry and Pharmacy and Pharmacology, working with colleagues at the Wolfson Centre for Age Related Diseases, King’s College London, have unraveled that link.

By studying brain samples from people with and without Alzheimer’s using a sensitive technique to detect glycation, the team discovered that in the early stages of Alzheimer’s glycation damages an enzyme called MIF (macrophage migration inhibitory factor) which plays a role in immune response and insulin regulation.

MIF is involved in the response of brain cells called glia to the build-up of abnormal proteins in the brain during Alzheimer’s disease, and the researchers believe that inhibition and reduction of MIF activity caused by glycation could be the ‘tipping point’ in disease progression. It appears that as Alzheimer’s progresses, glycation of these enzymes increases.

Impregnable to the waves and every day stronger

Sunday, July 16th, 2017

Pliny the Elder, in his Natural History, described Rome’s concrete as “impregnable to the waves and every day stronger” — which, it turns out, was literally true:

Writing in the journal American Mineralogist, Jackson and colleagues describe how they analysed concrete cores from Roman piers, breakwaters and harbours.

Previous work had revealed lime particles within the cores that surprisingly contained the mineral aluminous tobermorite — a rare substance that is hard to make.

The mineral, said Jackson, formed early in the history of the concrete, as the lime, seawater and volcanic ash of the mortar reacted together in a way that generated heat.

But now Jackson and the team have made another discovery. “I went back to the concrete and found abundant tobermorite growing through the fabric of the concrete, often in association with phillipsite [another mineral],” she said.

She said this revealed another process that was also at play. Over time, seawater that seeped through the concrete dissolved the volcanic crystals and glasses, with aluminous tobermorite and phillipsite crystallising in their place.

These minerals, say the authors, helped to reinforce the concrete, preventing cracks from growing, with structures becoming stronger over time as the minerals grew.

By contrast, modern concrete, based on Portland cement, is not supposed to change after it hardens — meaning any reactions with the material cause damage.

Deep down, they really want a king or queen

Saturday, July 15th, 2017

Ross Douthat recently teased liberals that they really like Game of Thrones because, deep down, they really want a king or queen. One respondent, Phillips, considers this a strong misreading of what Martin’s story and the show are offering:

To say that Game of Thrones is attractive to liberals because of secret monarchical longings, you have to ignore…everything GoT is doing. GoT does not make being a Stark bannerman or a Daenerys retainer look fun! Those people get flayed and beheaded! GoT presents a vision of monarchy that is exaggeratedly dystopian even compared to most of the historical reality of monarchy. I think that dystopian exaggeration is in fact key to the show’s appeal to liberals in many ways. It lets you fantasize about the negation of your principles while simultaneously confirming their rightness. GoT presents a vision of a world in which illiberal instincts can be freely indulged, in which the id is constrained only by physical power. All the violent, nasty stuff liberal society (thankfully) won’t let us do, but that’s still seething in our lizard brains, gets acted out. And not just acted out — violence and brutality are the organizing principles on which the world is based.

But this is where the dystopianism comes in, because the show chides you for harboring the very fantasies it helps you gratify. It wallows in their destructive consequences — makes that wallowing, in fact, simultaneous with the fulfillment of the fantasies. Will to power leads to suffering and chaos, which lead to more opportunities for the will to power to be acted upon, etc. This is a vastly more complex and interesting emotional appeal than “people secretly want kings.” The liberal order is always being implicitly upheld by the accommodation of our base desire for its opposite. To me, this is the most interesting ongoing thing about GoT, a franchise I’m otherwise completely tired of. Everyone wants to move to Hogwarts; only a lunatic would actually want to LIVE in Westeros. In an escapist genre, that’s interesting. It’s not subliminal royalism; it’s dark escapism, an escape that ultimately tends toward reconciliation with the existing order.

And what do liberals secretly love more than an excuse to reconcile with the existing order? Westeros makes Prime Day look utopian!

It is “a very good description of what a lot of prestige television has done,” Douthat agrees, but Game of Thrones is different:

These shows [The Sopranos, Mad Men, and Breaking Bad] invite liberal viewers into various illiberal or pre-liberal or just, I suppose, red-state worlds, which are more violent and sexist and id-driven than polite prestige-TV-viewing liberal society, and which offer viewers the kind of escapism that Phillips describes … in which there is a temporary attraction to being a mobster or hanging out with glamorous chain-smoking ’50s admen or leaving your put-upon suburban life behind and becoming Heisenberg the drug lord. But then ultimately because these worlds are clearly wicked, dystopic or just reactionary white-male-bastions you can return in relief to the end of history, making Phillips’ “reconciliation with the existing order” after sojourning for a while in a more inegalitarian or will-to-power world.


“Game of Thrones,” however, is somewhat different. Yes, it makes the current situation in Westeros look hellish, by effectively condensing all of the horrors of a century of medieval history into a few short years of civil war. And yes, it’s much darker and bloodier and has a much higher, “wait, I thought he was a hero” body count than a lot of fantasy fiction, which lets people describe it as somehow Sopranos-esque.

But fundamentally “The Sopranos” was a story without any heroes, a tragedy in which the only moral compass (uncertain as Dr. Melfi’s arrow sometimes was) was supplied by an outsider to its main characters’ world. Whereas “Game of Thrones” is still working within the framework of its essentially romantic genre — critiquing it and complicating it, yes, but also giving us a set of heroes and heroines to root for whose destinies are set by bloodlines and prophecies, and who are likely in the end to save their world from darkness and chaos no less than Aragorn or Shea Ohmsford or Rand al’Thor.

Put another way: On “The Sopranos,” there is no right way to be a mafioso. But on “Game of Thrones” there is a right way to be a lord or king and knight, and there are characters who model the virtues of each office, who prove that chivalry and wise lordship need not be a myth. Sometimes they do so in unexpected ways — the lady knight who has more chivalry than the men who jeer at her, the dwarf who rules more justly than the family members who look down on him. But this sort of reversal is typical of the genre, which always has its hobbits and stable boys and shieldmaidens ready to surprise the proud and prejudiced. And it coexists throughout the story with an emphasis on the importance of legitimacy and noblesse oblige and dynastic continuity, which is often strikingly uncynical given the dark-and-gritty atmosphere.

Consider that the central family, the Starks, are wise rulers whose sway over the North has endured for an implausible number of generations — “there has always been a Stark in Winterfell,” etc. — and whose people seem to genuinely love them. Their patriarch is too noble for his own good, but only because he leaves his native fiefdom for the corruption of the southern court, and his naivete is still presented as preferable to the cynicism of his Lannister antagonists, who win temporary victories but are on their way to destroying their dynasty through their amorality and single-minded self-interest.

The problem is managing that blend while maintaining stability

Saturday, July 15th, 2017

Kaja Perina explores the mad genius mystery:

Grothendieck’s mindset embodied what polymath Alan Turing described as mathematical reasoning itself, “the combination of two faculties, which we may call ‘intuition and ingenuity.’” Grothendieck’s approach was to “dissolve” problems by finding the right level of generality at which to frame them. Mathematician Barry Mazur, now at Harvard, recalls conversations with Grothendieck as having been “largely, perhaps only, about viewpoint, never about specifics. It was principally ‘the right vantage,’ a way of seeing mathematics, that he sought, and perhaps only on a lesser level its byproducts.”

Grothendieck’s unique vantage point and thought style contributed to his genius. But they were also his undoing. The prospect of mathematical madness has been debated ever since Pythagoras, often described as the first pure mathematician, went on to lead a strange cult. Isaac Newton, Kurt Goedel, Ludwig Boltzmann, Florence Nightingale, and John Nash all attained mathematical prominence before succumbing to some type of psychopathology, including depression, delusions, and religious mysticism of the sort engendered by psychosis.


Thinking styles lie on a continuum. On one end is mechanistic, rule-based thinking, which is epitomized in minds that gravitate to math, science, engineering, and tech-heavy skill-sets. Mechanistic cognition is bottom-up, concerned with the laws of nature and with objects as they exist in the world, and stands in contrast to mentalistic thinking. Mentalistic cognition exists to decode and engage with the minds of others, both interpersonally and in terms of larger social forces. It is more holistic (top-down) and humanistic, concerned, broadly speaking, with people, not with things. This mindset makes loose, sometimes self-referential inferences about reality. If “hypermentalistic,” too much meaning will be attributed to events: All coincidences are meaningful and all events are interconnected.

Every mind lies somewhere on this diametric cognitive spectrum. And as with many spectra, at each extreme the signature thought style is dialed up too high to be fully functional. Autism, in this conceptualization, is an extreme form of mechanistic thinking. It stands in contrast to psychotic disorders, characterized by false beliefs in the sentience of inanimate objects and delusions about the self and others. Reading minds is the lingua franca of mentalistic cognition, and symptoms of psychosis are essentially mind-reading on steroids.

Extreme cognitive styles map onto genius in that autism is in some cases associated with high intelligence. General intelligence, after all, includes the ability to quickly master rule-based, highly abstract thinking. And psychotic spectrum disorders, including bipolar disorder, schizotypy, and schizophrenia, are disproportionately diagnosed in highly creative individuals (they’ve been most often measured in artists, musicians, and writers) or in their first-degree relatives. Grothendieck’s broad-spectrum thought style represents both off-the-charts intelligence and unparalleled creativity. It is within this rarefied space that genius may reside. And the overshoot toward either pole — or to both — may, by the same token, engender mental illness.


The genius-madness debate has gone off course in asking whether creative individuals are at greater risk for developing mental illness than are their noncreative peers. Some are, some are not. The matter is confounded by the degree of giftedness in play. While creative types are more mentally stable than are noncreatives, the correlation reverses in the presence of exceptional creativity. Dean Keith Simonton, a professor of psychology at the University of California at Davis, finds that extraordinarily creative individuals are more likely to exhibit psychopathology than are noncreative people. He dubs this the “Mad Genius Paradox.”

An inability to filter out seemingly irrelevant information is a hallmark of both creative ideation and disordered thought. The state, known as reduced latent inhibition, allows more information to reach awareness, which can in turn foster associations between unrelated concepts. The barrage accounts for both the nonsensical ideas seen in psychosis and for novel thinking.

Over the centuries mathematical and artistic minds (and those with both gifts, such as the writer David Foster Wallace) have opined that their accomplishments flowed from the same liminal zone that harbored their greatest challenges. “The ideas I had about supernatural beings came to me the same way that my mathematical ideas did,” John Nash stated when asked why he’d once believed in space aliens.

And yet divergent thinking, while necessary for creative leaps, is hardly sufficient. In that direction lies a mind’s unravelling. Cognitive control and high intelligence must also be present, both to manage the informational cascade and to make novel use of it. “There are abnormalities of the brain that, when they co-exist with certain cognitive strengths, allow visionary thought to occur,” says psychologist Shelley Carson, who lectures on creativity at Harvard and is the author of Your Creative Brain.

“High productivity is associated with both intelligence and with high creativity, whether of a schizotypal or an autistic nature,” states Rex Jung, a neuropsychologist who studies creativity and intelligence at the University of New Mexico in Albuquerque. “These unusual characteristics are all distributed around the edges of a normal bell curve, making the possibility of [these minds] producing something new much more likely.”

The exceptional intelligence required for genius-level contributions to mathematics may not just optimize divergent thinking. It may also delay or prevent mental illness in those who are susceptible, at least for a significant period of time. Among men, the typical age of onset of schizophrenia or other psychotic spectrum disorders is in the late teens or early twenties. Yet Grothendieck, Newton, and Nash did not demonstrate thinking that could be characterized as delusional or psychotic until later in life: Nash at 30, Newton and Grothendieck well into midlife. From a neuronal perspective, the normal process of demyelination that begins in the mid-forties leads to a weakening of executive networks that are neuroprotective, explains Jung. Because myelin function impacts processing speed, a key individual difference in intelligence, “it makes sense that someone who is highly intelligent and has a propensity to mental illness might begin to experience symptoms in this age range.”

Carson believes that these men were initially protected from illness not only by their brilliance but also by their drive to create.

This idea is echoed in the words of Pierre Cartier, among Grothendieck’s most accomplished contemporaries, who wrote that while he wished to avoid diagnosing his peer, he nonetheless considered Grothendieck’s output a buffer in his precarious mental state. “His capacity for scientific creation was the best antidote to depression, and the immersion in a living scientific milieu (Bourbaki and IHES) helped this to take place by giving it a collective dimension.”

It has been said that the ultimate mathematician is one who can see analogies between analogies. Such was the case for Grothendieck. He used metaphors, often of buildings, such as la belle demeure parfaite (the perfect mansion), to describe the solutions he sought. This is captured even in the language used by others to describe Grothendieck’s work: “Mathematicians like to walk along narrow little paths in unknown landscapes, looking for beautiful scenery or just for precious stones, but [Grothendieck] started by building a highway,” wrote Valentin Poenaru, a mathematician who knew him well. “Where some might build an acrobatic bridge between two distant mountain tops, he would just fill up the space between.”

Might the twin poles of ultramechanistic and ultracreative thinking be the variables needed for mathematical genius in particular? As Carson sees it, “People who have cognitive disinhibition added to high systematizing ability ‘see’ systems that others cannot see — visions interlocking and put together.”


Cognitive disinhibition is integral to the creative process: It underlies loose, associative thinking as well as the inability to filter out seemingly extraneous information. This type of thinking occurs when the executive control network, responsible for higher-order cognitive tasks, ramps down in favor of the default mode network. This brain network is active in the absence of explicit attentional goals; it comes online when daydreaming or parsing the minds of others—which accounts for its dominance in mentalistic thinking. Imaging studies confirm that both highly creative individuals and those at risk for psychotic spectrum disorders exhibit unusual patterns of connectivity between the two networks.

The default network and the central executive network normally operate in dynamic tension, a hallmark of mental health as well as of high IQ. Anthony Jack, a neuroscientist who studies these regions in his Brain, Mind and Consciousness Lab at Case Western Reserve University, explains that “creativity is the only desirable area in which the seesaw is disrupted. During ‘aha’ moments you have engagement of both. Genius comes from blending the two sides. The problem is managing that blend while maintaining stability.”

Slipper bread goes way, way back

Friday, July 14th, 2017

After reading about David Brooks’ beleaguered friend “with only a high school degree” who had to confront “ingredients like soppressata, capicollo and a striata baguette” at the gourmet sandwich shop, I thought of my own privileged progeny growing up with ciabatta, which I did not remember from my own childhood — for a very good reason:

Ciabatta (literally slipper bread) is an Italian white bread made from wheat flour, water, salt, and yeast, created in 1982 by a baker in Verona, Veneto, Italy, in response to the popularity of French baguettes.

It all comes down to eye glances

Friday, July 14th, 2017

MIT researchers are trying to figure out how people really drive:

In 2012, government-sponsored researchers rigged up 2,600 regular drivers’ vehicles with cameras and sensors in six states, then left them alone for more than a year. The result is a large, objective, and detailed database of actual driving behavior, the kind of info that’s very useful if you want to figure out exactly what causes crashes.

The MIT researchers and their colleagues took that database and added another twist. While many scientists looking to crack why a crash happened might look at the five or six seconds before the event, these researchers backed it all the way up, to around 20 seconds beforehand.

“Upstream, further prior to an event, we begin to see failures in attention allocation that are indicative of less awareness in the operating environment in the crash events,” says Bryan Reimer, an engineer who studies driver behavior at MIT. In other words: The problems that cause crashes start well before the crunch.

It all comes down to eye glances. Sure, the more time you spend looking off the road, the likelier you are to crash. But the time you spend looking on the road matters, too. If your glances at, say, the texts in your lap are longer than the darting ones you make back to the highway in front of you, you gradually lose awareness of where you are in space.

Usually, drivers are pretty good at managing that attentional and situational awareness, judging when it’s appropriate to look down at the radio, for example. But smartphones and in-car infotainment systems present a new issue: The driver isn’t really deciding when to engage with the product. “If the phone goes brrrrring, you feel socially or emotionally compelled to respond to it,” says Reimer. The problem is that the cueing arrives with no regard for whether it’s a good time.

Confronted with sandwiches named Padrino and Pomodoro

Thursday, July 13th, 2017

David Brooks has come to think that the structural barriers between the classes are less important than the informal social barriers that segregate the lower 80 percent:

Recently I took a friend with only a high school degree to lunch. Insensitively, I led her into a gourmet sandwich shop. Suddenly I saw her face freeze up as she was confronted with sandwiches named “Padrino” and “Pomodoro” and ingredients like soppressata, capicollo and a striata baguette. I quickly asked her if she wanted to go somewhere else and she anxiously nodded yes and we ate Mexican.

American upper-middle-class culture (where the opportunities are) is now laced with cultural signifiers that are completely illegible unless you happen to have grown up in this class. They play on the normal human fear of humiliation and exclusion. Their chief message is, “You are not welcome here.”

In her thorough book “The Sum of Small Things,” Elizabeth Currid-Halkett argues that the educated class establishes class barriers not through material consumption and wealth display but by establishing practices that can be accessed only by those who possess rarefied information.

To feel at home in opportunity-rich areas, you’ve got to understand the right barre techniques, sport the right baby carrier, have the right podcast, food truck, tea, wine and Pilates tastes, not to mention possess the right attitudes about David Foster Wallace, child-rearing, gender norms and intersectionality.

The educated class has built an ever more intricate net to cradle us in and ease everyone else out. It’s not really the prices that ensure 80 percent of your co-shoppers at Whole Foods are, comfortingly, also college grads; it’s the cultural codes.

Status rules are partly about collusion, about attracting educated people to your circle, tightening the bonds between you and erecting shields against everybody else. We in the educated class have created barriers to mobility that are more devastating for being invisible. The rest of America can’t name them, can’t understand them. They just know they’re there.

Unemployment is the greater evil

Thursday, July 13th, 2017

Policymakers seem intent on making the joblessness crisis worse, Ed Glaeser laments:

The past decade or so has seen a resurgent progressive focus on inequality — and little concern among progressives about the downsides of discouraging work. Advocates of a $15 minimum hourly wage, for example, don’t seem to mind, or believe, that such policies deter firms from hiring less skilled workers. The University of California–San Diego’s Jeffrey Clemens examined states where higher federal minimum wages raised the effective state-level minimum wage during the last decade. He found that the higher minimum “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points,” which accounted for “43 percent of the sustained, 13 percentage point decline in this skill group’s employment rate.”

The decision to prioritize equality over employment is particularly puzzling, given that social scientists have repeatedly found that unemployment is the greater evil. Economists Andrew Clark and Andrew Oswald have documented the huge drop in happiness associated with unemployment — about ten times larger than that associated with a reduction in earnings from the $50,000–$75,000 range to the $35,000–$50,000 bracket. One recent study estimated that unemployment leads to 45,000 suicides worldwide annually. Jobless husbands have a 50 percent higher divorce rate than employed husbands. The impact of lower income on suicide and divorce is much smaller. The negative effects of unemployment are magnified because it so often becomes a semipermanent state.

Time-use studies help us understand why the unemployed are so miserable. Jobless men don’t do a lot more socializing; they don’t spend much more time with their kids. They do spend an extra 100 minutes daily watching television, and they sleep more. The jobless also are more likely to use illegal drugs. While fewer than 10 percent of full-time workers have used an illegal substance in any given week, 18 percent of the unemployed have done drugs in the last seven days, according to a 2013 study by Alejandro Badel and Brian Greaney.

Joblessness and disability are also particularly associated with America’s deadly opioid epidemic. David Cutler and I examined the rise in opioid deaths between 1992 and 2012. The strongest correlate of those deaths is the share of the population on disability. That connection suggests a combination of the direct influence of being disabled, which generates a demand for painkillers; the availability of the drugs through the health-care system; and the psychological misery of having no economic future.

Increasing the benefits received by nonemployed persons may make their lives easier in a material sense but won’t help reattach them to the labor force. It won’t give them the sense of pride that comes from economic independence. It won’t give them the reassuring social interactions that come from workplace relationships. When societies sacrifice employment for a notion of income equality, they make the wrong choice.

Politicians, when they do focus on long-term unemployment, too often advance poorly targeted solutions, such as faster growth, more infrastructure investment, and less trade. More robust GDP growth is always a worthy aim, but it seems unlikely to get the chronically jobless back to work. The booms of the 1990s and early 2000s never came close to restoring the high employment rates last seen in the 1970s. Between 1976 and 2015, Nevada’s GDP grew the most and Michigan’s GDP grew the least among American states. Yet the two states had almost identical rises in the share of jobless prime-age men.

Infrastructure spending similarly seems poorly targeted to ease the problem. Contemporary infrastructure projects rely on skilled workers, typically with wages exceeding $25 per hour; most of today’s jobless lack such skills. Further, the current employment in highway, street, and bridge construction in the U.S. is only 316,000. Even if this number rose by 50 percent, it would still mean only a small reduction in the millions of jobless Americans. And the nation needs infrastructure most in areas with the highest population density; joblessness is most common outside metropolitan America. (See “If You Build It…,” Summer 2016.)

Finally, while it’s possible that the rise of American joblessness would have been slower if the U.S. had weaker trade ties to lower-wage countries like Mexico and China, American manufacturers have already adapted to a globalized world by mechanizing and outsourcing. We have little reason to be confident that restrictions on trade would bring the old jobs back. Trade wars would have an economic price, too. American exporters would cut back hiring. The cost of imported manufactured goods would rise, and U.S. consumers would pay more, in exchange for — at best — uncertain employment gains.

The techno-futurist narrative holds that machines will displace most workers, eventually. Social peace will be maintained only if the armies of the jobless are kept quiet with generous universal-income payments. This vision recalls John Maynard Keynes’s 1930 essay “Economic Possibilities for Our Grandchildren,” which predicted a future world of leisure, in which his grandchildren would be able to satisfy their basic needs with a few hours of labor and then spend the rest of their waking hours edifying themselves with culture and fun.

But for many of us, technological progress has led to longer work hours, not playtime. Entrepreneurs conjured more products that generated more earnings. Almost no Americans today would be happy with the lifestyle of their ancestors in 1930. For many, work also became not only more remunerative but more interesting. No Pennsylvania miner was likely to show up for extra hours (without extra pay) voluntarily. Google employees do it all the time.

Joblessness is not foreordained, because entrepreneurs can always dream up new ways of making labor productive. Ten years ago, millions of Americans wanted inexpensive car service. Uber showed how underemployed workers could earn something providing that service. Prosperous, time-short Americans are desperate for a host of other services — they want not only drivers but also cooks for their dinners and nurses for their elderly parents and much more. There is no shortage of demand for the right kinds of labor, and entrepreneurial insight could multiply the number of new tasks that could be performed by the currently out-of-work. Yet over the last 30 years, entrepreneurial talent has focused far more on delivering new tools for the skilled than on employment for the unlucky. Whereas Henry Ford employed hundreds of thousands of Americans without college degrees, Mark Zuckerberg primarily hires highly educated programmers.