Do the rich capture all the gains from economic growth?

November 13th, 2018

Do the rich capture all the gains from economic growth? Russ Roberts explains why it matters how you measure these things:

But the biggest problem with the pessimistic studies is that they rarely follow the same people to see how they do over time. Instead, they rely on a snapshot at two points in time. So for example, researchers look at the median income of the middle quintile in 1975 and compare that to the median income of the middle quintile in 2014, say. When they find little or no change, they conclude that the average American is making no progress.

But the people in the snapshots are not the same people. These snapshots fail to correct for changes in the composition of workers and changes in household structure that distort the measurement of economic progress. There is immigration. There are large changes in the marriage rate over the period being examined. And there is economic mobility as people move up and down the economic ladder as their luck and opportunities fluctuate.

How important are these effects? One way to find out is to follow the same people over time. When you follow the same people over time, you get very different results about the impact of the economy on the poor, the middle, and the rich.

Studies that use panel data — data that is generated from following the same people over time — consistently find that the largest gains over time accrue to the poorest workers and that the richest workers get very little of the gains. This is true in survey data. It is true in data gathered from tax returns.
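Roberts' point can be made concrete with a toy simulation. The numbers below are invented purely for illustration (they come from no study he cites); they show how composition changes can hold a snapshot statistic flat even when every original worker gains:

```python
# Toy illustration: why snapshot and panel views of income can disagree.
# Period 1: five workers, one per quintile.
cohort = {"A": 10, "B": 20, "C": 30, "D": 40, "E": 50}

# Period 2: every original worker's income doubles...
later = {name: income * 2 for name, income in cohort.items()}
# ...but new low-income entrants (immigrants, new graduates) join the workforce.
later.update({"F": 10, "G": 12, "H": 14, "I": 16, "J": 18})

# Snapshot view: compare the poorest worker then vs the poorest worker now.
poorest_then = min(cohort.values())       # 10
poorest_now = sorted(later.values())[0]   # 10 -- but it's a new entrant
print(f"Snapshot: poorest income went from {poorest_then} to {poorest_now}")

# Panel view: follow the same person who was poorest in period 1.
print(f"Panel: worker A went from {cohort['A']} to {later['A']}")
```

The snapshot shows the bottom of the distribution stuck at 10, while the panel shows that the person who actually started there doubled her income; the "stagnation" is entirely an artifact of who is being counted.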

The Class of 1914 died for France

November 12th, 2018

Barbara Tuchman’s The Guns of August includes an apocryphal footnote about “the terrible drain of French manhood” from the Great War:

In the chapel of St. Cyr (before it was destroyed during World War II) the memorial tablet to the dead of the Great War bore only a single entry for “the Class of 1914.”

I cited this passage, and Philippe Lemoine dug up St. Cyr’s own numbers, suggesting that “just” 51 percent of the military academy’s Class of 1914 died for France.

A fuel cell that runs on methane at practical temperatures

November 12th, 2018

Methane fuel cells usually require temperatures of 750 to 1,000 degrees Celsius to run, but a newly developed fuel cell with a novel catalyst can run at 500 degrees, cooler than an automobile engine:

That lower temperature could trigger cascading cost savings in the ancillary technology needed to operate a fuel cell, potentially pushing the new cell to commercial viability. The researchers feel confident that engineers can design electric power units around this fuel cell with reasonable effort, something that has eluded previous methane fuel cells.

“Our cell could make for a straightforward, robust overall system that uses cheap stainless steel to make interconnectors,” said Meilin Liu, who led the study and is a Regents’ Professor in Georgia Tech’s School of Materials Science and Engineering. Interconnectors are parts that help bring together many fuel cells into a stack, or functional unit.

“Above 750 degrees Celsius, no metal would withstand the temperature without oxidation, so you’d have a lot of trouble getting materials, and they would be extremely expensive and fragile, and contaminate the cell,” Liu said.

“Lowering the temperature to 500 degrees Celsius is a sensation in our world. Very few people have even tried it,” said Ben deGlee, a graduate research assistant in Liu’s lab and one of the first authors of the study. “When you get that low, it makes the job of the engineer designing the stack and connected technologies much easier.”

The new cell also eliminates the need for a major ancillary device called a steam reformer, which is normally needed to convert methane and water into hydrogen fuel.

[...]

Hydrogen is the best fuel for powering fuel cells, but its cost is exorbitant. The researchers figured out how to convert methane to hydrogen in the fuel cell itself via the new catalyst, which is made with cerium, nickel and ruthenium and has the chemical formula Ce0.9Ni0.05Ru0.05O2, abbreviated CNR.

When methane and water molecules come into contact with the catalyst and heat, nickel chemically cleaves the methane molecule. Ruthenium does the same with water. The resulting parts come back together as that very desirable hydrogen (H2) and carbon monoxide (CO), which the researchers surprisingly put to good use.
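The conversion described here is conventional steam reforming, which the CNR catalyst performs inside the cell itself. The balanced net reaction is:

```latex
\mathrm{CH_4} + \mathrm{H_2O} \;\rightarrow\; \mathrm{CO} + 3\,\mathrm{H_2}
```

Both products are then oxidized as fuel, which is why this cell can treat the carbon monoxide as an asset rather than a contaminant.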

“CO causes performance problems in most fuel cells, but here, we’re using it as a fuel,” Chen said.

Peace on Earth

November 11th, 2018

The 100th anniversary of the armistice that ended the Great War seems like a good time to revisit 1939’s animated short Peace on Earth:

The Great War ended 100 years ago

November 11th, 2018

The Great War ended 100 years ago, on the “eleventh hour of the eleventh day of the eleventh month” of 1918. The Great War comes up here from time to time:

Bruce Sterling on architecture, design, science fiction, futurism and involuntary parks

November 10th, 2018

Benjamin Bratton interviews Bruce Sterling on architecture, design, science fiction, futurism and involuntary parks:

Some of the most important books Nick Szabo has read

November 9th, 2018

Nick Szabo shared a list of some of the most important books he’s read on Twitter:

  1. The Selfish Gene, by Richard Dawkins
  2. Metaphors We Live By, by George Lakoff and Mark Johnson
  3. The Wealth of Nations, by Adam Smith
  4. The Fatal Conceit, by F. A. Hayek

A proposal for an archive revisiter

November 8th, 2018

In his long list of statistical notes, Gwern includes a proposal for an archive revisiter:

One reason to take notes/clippings and leave comments in stimulating discussions is to later benefit by having references & citations at hand, and gradually build up an idea from disparate threads and make new connections between them. For this purpose, I make extensive excerpts from web pages & documents I read into my Evernote clippings (functioning as a commonplace book), and I comment constantly on Reddit, LessWrong, HN, etc. While expensive in time & effort, I often go back, months or years later, and search for a particular thing and expand & integrate it into another writing or expand it out to an entire essay of its own. (I also value highly not being in the situation where I believe something but I do not know why I believe it other than the conviction I read it somewhere, once.)

This sort of personal information management using simple personal information managers like Evernote works well enough when I have a clear memory of what the citation/factoid was, perhaps because it was so memorable, or when the citations or comments are in a nice cluster (perhaps because there was a key phrase in them or I kept going back & expanding a comment), but it loses out on key benefits to this procedure: serendipity and perspective.

As time passes, one may realize the importance of an odd tidbit, or have utterly forgotten something, or events may have considerably changed its meaning; in this case, you would benefit from revisiting & rereading that old bit & experiencing an aha! moment, but you don’t realize it. So one thing you could do is reread all your old clippings & comments, appraising them for reuse.

But how often? And it’s a pain to do so. And how do you keep track of which you’ve already read? One thing I do for my emails is semi-annually I (try to) read through my previous 6 months of email to see what might need to be followed up on or mined for inclusion in an article. (For example, an ignored request for data, or a discussion of darknet markets with a journalist I could excerpt into one of my DNM articles so I can point future journalists at that instead.) This is already difficult, and it would be even harder to expand. I have read through my LessWrong comment history… once. Years ago. It would be more difficult now. (And it would be impossible to read through my Reddit comments as the interface only goes back ~1000 comments.)

Simply re-reading periodically in big blocks may work but is suboptimal: there is no interface easily set up to reread them in small chunks over time, no constraints which avoid far too many reads, nor is there any way to remove individual items which you are certain need never be reviewed again. Reviewing is useful but can be an indefinite timesink. (My sent emails are not too hard to review in 6-month chunks, but my IRC logs are bad – 7,182,361 words in one channel alone – and my >38k Evernote clippings are worse; any lifestreaming will exacerbate the problem by orders of magnitude.) This is probably one reason that people who keep journals or diaries don’t reread them. Nor can it be crowdsourced or done by simply ranking comments by public upvotes (in the case of Reddit/LW/HN comments), because the most popular comments are ones you likely remember well & have already used up, and the oddities & serendipities you are hoping for are likely unrecognizable to outsiders.

This suggests some sort of reviewing framework where one systematically reviews old items (sent emails, comments, IRC logs by oneself), putting in a constant amount of time regularly and using some sort of ever expanding interval between re-reads as an item becomes exhausted & ever more likely to not be helpful. Similar to the logarithmically-bounded number of backups required for indefinite survival of data (Sandberg & Armstrong 2012), Deconstructing Deathism – Answering Objections to Immortality, Mike Perry 2013 (note: this is an entirely different kind of problem than those considered in Freeman Dyson’s immortal intelligences in Infinite in All Directions, which are more fundamental), discusses something like what I have in mind in terms of an immortal agent trying to review its memories & maintain a sense of continuity, pointing out that if time is allocated correctly, it will not consume 100% of the agent’s time but can be set to consume some bounded fraction.

[...]

So you could imagine some sort of software along the lines of spaced repetition systems like Anki, Mnemosyne, or Supermemo which you spend, say, 10 minutes a day at, simply rereading a selection of old emails you sent, lines from IRC with n lines of surrounding context, Reddit & LW comments etc; with an appropriate backoff & time-curve, you would reread each item maybe 3 times in your lifetime (eg first after a delay of a month, then a year or two, then decades). Each item could come with a rating function where the user rates it as an important or odd-seeming or incomplete item to be exposed again in a few years, or as totally irrelevant and never to be shown again – which, for many bits of idle chit-chat, mundane emails, or intemperate comments, is not an instant too soon! (More positively, anything already incorporated into an essay or otherwise reused likely doesn’t need to be resurfaced.)

This wouldn’t be the same as a spaced repetition system which is designed to recall an item as many times as necessary, at the brink of forgetting, to ensure you memorize it; in this case, the forgetting curve & memorization are irrelevant and indeed, the priority here is to try to eliminate as many irrelevant or useless items as possible from showing up again so that the review doesn’t waste time.

More specifically, you could imagine an interface somewhat like Mutt which reads in a list of email files (my local POP email archives downloaded from Gmail with getmail4, filename IDs), chunks of IRC dialogue (a grep of my IRC logs producing lines written by me ±10 lines for context, hashes for ID), LW/Reddit comments downloaded by either scraping or API via the BigQuery copy up to 2015, and stores IDs, review dates, and scores in a database. One would use it much like an SRS system, reading individual items for 10 or 20 minutes, and rating them, say, upvote (this could be useful someday, show me this ahead of schedule in the future) / downvote (push this far off into the future) / delete (never show again). Items would appear on an expanding schedule.
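The scheme described maps naturally onto a small scheduler. The sketch below is hypothetical, not an existing tool: the class names, rating multipliers, and starting interval are all assumptions, chosen so that a kept item resurfaces roughly three times in a lifetime (after about a month, then a year or two, then decades):

```python
import datetime

class Item:
    """One archived artifact: an email, an IRC line with context, a comment."""

    def __init__(self, item_id, text, today=None):
        self.item_id = item_id      # e.g. an email filename or an IRC-line hash
        self.text = text
        self.interval_days = 30     # first re-read after roughly a month
        today = today or datetime.date.today()
        self.next_review = today + datetime.timedelta(days=self.interval_days)
        self.deleted = False

    def rate(self, rating, today=None):
        """Apply a review rating and push the item out on an expanding schedule."""
        today = today or datetime.date.today()
        if rating == "delete":      # never show again
            self.deleted = True
            return
        if rating == "upvote":      # still interesting: resurface sooner
            self.interval_days *= 4
        else:                       # "downvote": push far off into the future
            self.interval_days *= 20
        self.next_review = today + datetime.timedelta(days=self.interval_days)

def due(items, today=None):
    """The items scheduled for today's 10-minute review session."""
    today = today or datetime.date.today()
    return [i for i in items if not i.deleted and i.next_review <= today]
```

With these (arbitrary) constants, a downvoted item next appears after about 600 days, and the review after that about 33 years later, which tracks the month / year-or-two / decades curve of the proposal; unlike a true SRS, the goal is not memorization at the brink of forgetting but pruning the stream down to the few items worth a serendipitous re-encounter.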

[...]

As far as I know, some to-do/self-help systems have something like a periodic review of past stuff, and as I mentioned, spaced repetition systems do something somewhat similar to this idea of exponential revisits, but there’s nothing like this at the moment.

Students don’t know how they study and learn best

November 7th, 2018

Some progressive teachers take pride in allowing students to choose how they study and learn best, but there’s a serious flaw they overlook: students don’t know how they study and learn best:

Karpicke, Butler, and Roediger III (2009) (1) explored study habits used by college students. They surveyed 177 students and asked them two questions. For the sake of this post, I will only focus on question one:

What kind of strategies do you use when you are studying? List as many strategies as you use and rank-order them from strategies you use most often to strategies you use least often.

The results? Repeated rereading was by far the most frequently listed strategy (84% reported using it) and 55% reported that it was their number one strategy. Only 11% reported practicing recall (self-testing) of information, and 1% identified practicing recall as their number one strategy. This is not good for student choice of study. 55% of those surveyed intuitively believed that rereading their notes best utilized their study time – assuming students intended to use their time most effectively. This is just not so.

A phenomenon known as the testing effect indicates that retrieving information from memory has a great effect on learning and strengthens long-term retention of information (2). The testing effect can take many forms, with the most important aspect being students retrieve information. A common saying in my room is to make sure my students are only using their brain…if you’re using notes, the textbook, or someone else’s brain, you’re not doing it right. While many correctly see this attempt as a great way to regulate and assess one’s knowledge, the act of recalling and retrieving strengthens long-term retention of information.

This is not so with repetitive rereading. Memory research has shown rereading by itself is not an effective or efficient strategy for promoting learning and long-term retention (3). Perhaps students believe “the more time I spend studying, the more effective the learning.” Is it correct to believe that the longer I study something and keep it in my working memory, the better I will remember it? No.

A new molecular CT scan could dramatically speed drug discovery

November 6th, 2018

Researchers have adapted a third technique, commonly used to chart much larger proteins, to determine the precise shape of small organic molecules:

The gold standard for determining chemical structures has long been x-ray crystallography. A beam of x-rays is fired at a pure crystal containing millions of copies of a molecule lined up in a single orientation. By tracking how the x-rays bounce off atoms in the crystal, researchers can work out the position of every atom in the molecule. Crystallography can pinpoint atomic positions down to less than 0.1 nanometers, about the size of a sulfur atom. But the technique works best with fairly large crystals, which can be hard to make. “The real lag time is just getting a crystal,” says Brian Stoltz, an organic chemist at the California Institute of Technology (Caltech) in Pasadena. “That can take weeks to months to years.”

The second approach, known as nuclear magnetic resonance (NMR) spectroscopy, doesn’t require crystals. It infers structures by perturbing the magnetic behavior of atoms in molecules and then tracking their behavior, which changes depending on their atomic neighbors. But NMR also requires a fair amount of starting material. And it’s indirect, which can lead to mapping mistakes with larger druglike molecules.

The new approach builds on a technique called electron diffraction, which sends an electron beam through a crystal and, as in x-ray crystallography, determines structure from diffraction patterns. It has been particularly useful in solving the structure of a class of proteins lodged in cell membranes. In this case, researchers first form tiny 2D sheetlike crystals of multiple copies of a protein wedged in a membrane.

But in many cases, efforts to grow the protein crystals go awry. Instead of getting single-membrane sheets, researchers end up with numerous sheets stacked atop one another, which can’t be analyzed by conventional electron diffraction. And the crystals can be too small for x-ray diffraction. “We didn’t know what to do with all these crystals,” says Tamir Gonen, an electron crystallography expert at the University of California, Los Angeles (UCLA). So, his team varied the technique: Instead of firing their electron beam from one direction at a static crystal, they rotated the crystal and tracked how the diffraction pattern changed. Instead of a single image, they got what was more like a molecular computerized tomography scan. That enabled them to get structures from crystals one-billionth the size of those needed for x-ray crystallography.

Gonen says because his interest was in proteins, he never thought much about trying his technique on anything else. But earlier this year, Gonen moved from the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, to UCLA. There, he teamed up with colleagues, along with Stoltz at Caltech, who wanted to see whether the same approach would work not just with proteins, but with smaller organic molecules. The short answer is it did. On the chemistry preprint server ChemRxiv, the California team reported on Wednesday that when they tried the approach with numerous samples, it worked nearly every time, delivering a resolution on par with x-ray crystallography. The team could even get structures from mixtures of compounds and from materials that had never formally been crystallized and were just scraped off a chemistry purification column. These results all came after just a few minutes of sample preparation and data collection. What’s more, a collaboration of German and Swiss groups independently published similar results using essentially the same technique this week.

Energy drinks are associated with mental health problems, anger-related behaviors, and fatigue

November 5th, 2018

Energy drinks are popular with young men, especially young men in the military, and they may be contributing to mental health problems:

What the authors found was that over the course of the month leading up to the survey, more than 75 percent of soldiers consumed energy drinks. More surprising, however, was that 16 percent “of soldiers in this study reported continuing to consume two or more energy drinks per day in the post-deployment period,” the authors wrote.

High energy drink use, which was classified as consuming two or more drinks per day, was significantly associated with those survey respondents who reported mental health problems, anger-related behaviors and fatigue, the authors found.

Those consuming less than one energy drink per week reported these symptoms at a significantly lower rate.

Also of note is that energy drink use in this Army infantry sample was five times higher than in previous studies that analyzed the consumption patterns of airmen and of the general population’s youth.

The original study is available online.

Your family pet is a secret badass

November 4th, 2018

When screenwriter Zack Stentz was a little kid, he was obsessed by the Chuck Jones adaptation of Kipling’s “Rikki-Tikki-Tavi”:

I think the idea that your family pet is a secret badass who will fight cobras to protect you at night spoke to me on a deep level.

I remember loving it too, so I was surprised when someone mentioned another Chuck Jones-animated adaptation of a Kipling story, “The White Seal.”

Chuck Jones is a fascinating character — as you might expect of the guy who created the Road Runner, Wile E. Coyote, Pepé Le Pew, and Marvin the Martian — and I remember enjoying his memoir, Chuck Amuck. I distinctly remember one anecdote.

Chuck’s father kept starting businesses, and each time he started a new business, he bought lots of letterhead. When the business soon failed, his kids were encouraged to use up the paper as fast as possible — so young Chuck got lots and lots of practice drawing.

Chuck’s grandson seems to have inherited a bit of the animator’s spirit, judging from this look at how Chuck studied seals for “The White Seal”:

There’s something different about being blown up

November 3rd, 2018

The “routine” treatment for a head injury — whether in Iraq, Afghanistan, or an American emergency room — works, but not for all traumas:

As soon as you enter the emergency room (ER) as a “Head Injury,” your blood pressure and breathing will be stabilized. ER doctors know the procedures and will, if there are signs of increased intracranial pressures, put you into a drug-induced coma to slow any ongoing damage to injured brain cells and protect any of the remaining healthy tissues from undergoing any secondary damage.

A calcium channel blocker will be administered to help stabilize the outer membranes of the injured nerve cells to maintain normal intracellular metabolism. If your blood pressure becomes too high, ER personnel will lower the pressures to protect against any re-bleeding or the expansion of any blood clots that have already formed within the brain following the initial injury. If the pressures are too low – which can further decrease the blood flow to what remains of the undamaged brain tissues, itself leading to more neurological damage – medications will be given to raise the pressures to maintain adequate blood flow to the brain and central nervous system despite the injury.

Those parts of the brain not damaged still have to receive their usual amounts of oxygen and nutrients. But even with all this care after a traumatic brain injury, recovery is always one of those medically “iffy” things.

If the brain continues to swell, damaging as yet undamaged parts of the brain, the neurosurgeons will begin IV fluids of 8 to 12% saline to control swelling. If that doesn’t work, they will add an IV diuretic to drain the body of fluids in an effort to keep down the increasing intracranial pressures that may continue to compress arteries, cutting off oxygen to the still healthy brain cells. If the hypertonic fluids and diuretics fail to work, they will take you to the operating room and neurosurgeons will remove the top of the skull to allow the brain to swell without compressing and damaging any of the still undamaged underlying tissues.

Since the brain is in a closed space, the overriding idea behind removing the top of the skull is to relieve any increasing intra-cranial pressures that would surely further damage the tissues of the physically compressed brain. Such a development would be even more damaging to tissues as the decreased delivery of oxygen would impair the still undamaged brain tissues. When the swelling has finally decreased and the brain is back to normal size, the neurosurgeons will simply put back that part of the skull removed and wait for the patient to recover.

All of this works and has worked hundreds of times in military surgical hospitals and in emergency rooms and major trauma centers around the country. It certainly works if the patient has been shot in the head.

[...]

But what we have learned from the battlefields of our newest wars is that the brain damage from an IED appears to be a different kind of traumatic brain injury.

Treatments at an earlier time regarded as usual for head injuries do not work. There is clearly something different and so unexpected going on down at the cellular or sub-cellular level of the brain following exposure to a pressure wave that is not the same as hitting your head on the pavement, falling in a bathroom, or being shot in the head. There is simply something fundamentally different about being blown up.

Stop when you’re almost finished

November 2nd, 2018

Performance psychologist Noa Kageyama recommends harnessing resumptive drive, or the Zeigarnik effect, to get yourself to practice when you don’t feel like it:

Back in 1927, Bluma Zeigarnik described a phenomenon she had observed while sitting in a restaurant: waiters seemed to have a selective memory. As in, they could remember complicated customers’ orders that hadn’t yet been filled, but once all the food had been served (or maybe when the bill was paid?), it’s as if the order was wiped from their memory.

Back in her lab, she found that indeed, participants were much more likely to remember tasks they started but didn’t finish, than tasks that were completed (hence, the Zeigarnik effect).

Another form of the Zeigarnik effect — and the one more relevant to what we’re talking about here — is the observation that people tend to be driven to resume tasks in which they were interrupted and unable to finish.

Researchers at Texas Christian University & University of Rochester ran a study on this form of the Zeigarnik effect.

Subjects were given eight minutes to shape an eight-cube, three-dimensional puzzle into five different forms. They were told to work as quickly as possible, and given three minutes to complete the first two puzzles as practice.

Then they were given five minutes to solve the last three puzzles.

The researchers deliberately made the second practice puzzle difficult — one that was unlikely to be solved within the time available. And just as they had hoped, only 6 of the 39 participants solved the difficult puzzle.

After their time was up, the participants had eight minutes of free time to do as they wished while the researcher running the experiment left the room to retrieve some questionnaires they had “forgotten” to bring, saying they would be back in “5 or 10 minutes.” This was all a ruse, of course, to see what the participants would do when left alone.

Despite there being other things in the room to do (e.g. a TV, magazines, newspaper, etc.), 28 of the 39 participants (72%) resumed working on the puzzles.

[...]

Of the six who completed the difficult puzzle, only one (17%) resumed working on the puzzles (and did so for one minute and 18 seconds).

Of the 33 who did not complete the challenging puzzle, 27 (82%) resumed working on the puzzle, and on average, spent more than two and a half times as long (3:20) working on the puzzles.

So, when interrupted in the middle of a task, not only were participants more motivated to resume working on that task, but they also continued working on it for much longer.

[...]

So instead of thinking about practicing for an hour, or having to work on 10 excerpts, or memorizing a concerto, just tune your instrument. Or play a scale really slowly. Or set the timer for five minutes and pick one little thing to fix. And if at the end of five minutes you don’t feel like continuing, put your instrument away and try again later.

Don’t feel like studying? Just crack open the book. Work on one math problem. Write three sentences of your essay. Create two flash cards.

Second, once you’ve finally gotten yourself into the mood to practice or study, try stopping in the middle of a task. Meaning, if you’re working on a tricky passage that has you stumped, test out a few solutions, but leave yourself a few possible solutions remaining before taking a practice break. Stop when you’re almost finished solving the math problem. Or in the middle of a sentence.

It’s not what you know, but whether you use it

November 1st, 2018

Two researchers from the City University of New York did a study of basketball players to discern a difference between the practice habits of the best free throw shooters (70% or higher) and the worst free throw shooters (55% or lower):

Difference #1: Goals were specific

The best free throw shooters had specific goals about what they wanted to accomplish or focus on before they made a practice free throw attempt. As in, “I’m going to make 10 out of 10 shots” or “I’m going to keep my elbows in.”

The worst free throw shooters had more general goals — like “Make the shot” or “Use good form.”

Difference #2: Attributions of failure were specific

Invariably, the players would miss shots now and again, but when the best free throw shooters missed, they tended to attribute their miss to specific technical problems — like “I didn’t bend my knees.” This lends itself to a more specific goal for the next practice attempt, and a more thoughtful reflection process upon the hit or miss of the subsequent free throw. Far better than saying “I suck” or “What’s wrong with me?” or “Crap, I’m never going to get this.”

In contrast, the worst performers were more likely to attribute failure to non-specific factors, like “My rhythm was off” or “I wasn’t focused” which doesn’t do much to inform the next practice attempt.

It’s not what you know, but whether you use it

You might be thinking that perhaps the worst performers didn’t focus on specific technical strategies because they simply didn’t know as much. That perhaps the best performers were able to focus on technique and strategy because they knew more about how to shoot a free throw with proper form.

The researchers thought of this as well, and specifically controlled for this possibility by testing for the players’ knowledge of basketball free throw shooting technique. As it turns out, there were no significant differences in knowledge between experts and non-experts.

So while both the top performers and the worst performers had the same level of knowledge to draw from, very few of the worst performers actually utilized this knowledge base. Meanwhile, the best performers were much more likely to utilize their knowledge to think, plan, and direct their practice time more productively.