A proposal for an archive revisiter

November 8th, 2018

In his long list of statistical notes, Gwern includes a proposal for an archive revisiter:

One reason to take notes/clippings and leave comments in stimulating discussions is to later benefit by having references & citations at hand, and gradually build up an idea from disparate threads and make new connections between them. For this purpose, I make extensive excerpts from web pages & documents I read into my Evernote clippings (functioning as a commonplace book), and I comment constantly on Reddit, LessWrong, HN, etc. While expensive in time & effort, I often go back, months or years later, and search for a particular thing and expand & integrate it into another writing or expand it out to an entire essay of its own. (I also value highly not being in the situation where I believe something but I do not know why I believe it other than the conviction I read it somewhere, once.)

This sort of personal information management using simple personal information managers like Evernote works well enough when I have a clear memory of what the citation/factoid was, perhaps because it was so memorable, or when the citations or comments are in a nice cluster (perhaps because there was a key phrase in them or I kept going back & expanding a comment), but it loses out on key benefits to this procedure: serendipity and perspective.

As time passes, one may realize the importance of an odd tidbit, may have utterly forgotten something, or may find that events have considerably changed its meaning; in any of these cases, you would benefit from revisiting & rereading that old bit & experiencing an aha! moment, but you don’t realize it. So one thing you could do is reread all your old clippings & comments, appraising them for reuse.

But how often? And it’s a pain to do so. And how do you keep track of which you’ve already read? One thing I do for my emails is semi-annually I (try to) read through my previous 6 months of email to see what might need to be followed up on or mined for inclusion in an article. (For example, an ignored request for data, or a discussion of darknet markets with a journalist I could excerpt into one of my DNM articles so I can point future journalists at that instead.) This is already difficult, and it would be even harder to expand. I have read through my LessWrong comment history… once. Years ago. It would be more difficult now. (And it would be impossible to read through my Reddit comments as the interface only goes back ~1000 comments.)

Simply re-reading periodically in big blocks may work but is suboptimal: there is no interface easily set up to reread them in small chunks over time, no constraints which avoid far too many reads, nor is there any way to remove individual items which you are certain need never be reviewed again. Reviewing is useful but can be an indefinite timesink. (My sent emails are not too hard to review in 6-month chunks, but my IRC logs are bad – 7,182,361 words in one channel alone – and my >38k Evernote clippings are worse; any lifestreaming will exacerbate the problem by orders of magnitude.) This is probably one reason that people who keep journals or diaries don’t reread them. Nor can it be crowdsourced or done by simply ranking comments by public upvotes (in the case of Reddit/LW/HN comments), because the most popular comments are ones you likely remember well & have already used up, and the oddities & serendipities you are hoping for are likely unrecognizable to outsiders.

This suggests some sort of reviewing framework where one systematically reviews old items (sent emails, comments, IRC logs by oneself), putting in a constant amount of time regularly and using some sort of ever-expanding interval between re-reads as an item becomes exhausted & ever more likely to not be helpful. Similar to the logarithmically-bounded number of backups required for indefinite survival of data (Sandberg & Armstrong 2012), Mike Perry’s Deconstructing Deathism – Answering Objections to Immortality (2013) discusses something like what I have in mind in terms of an immortal agent trying to review its memories & maintain a sense of continuity, pointing out that if time is allocated correctly, reviewing will not consume 100% of the agent’s time but can be set to consume some bounded fraction. (Note: this is an entirely different kind of problem from those considered in Freeman Dyson’s discussion of immortal intelligences in Infinite in All Directions, which are more fundamental.)
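To make the intuition behind that bound concrete, here is a toy version of the calculation (my numbers, not Perry’s): if an item is first resurfaced after a delay $d$ and each subsequent interval is $b$ times the previous one, it comes up for review at ages $d, bd, b^2 d, \ldots$, so by age $T$ it has been reviewed only

```latex
n(T) = 1 + \left\lfloor \log_b \frac{T}{d} \right\rfloor
```

times. With $d = 1$ month and $b = 12$, a fifty-year-old item ($T = 600$ months) has surfaced just $1 + \lfloor \log_{12} 600 \rfloor = 3$ times: after a month, after a year, and after a decade or so. Per-item review effort grows only logarithmically with the age of the archive, which is why the total can be held to a small, nearly constant daily budget.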

[...]

So you could imagine some sort of software along the lines of spaced repetition systems like Anki, Mnemosyne, or Supermemo which you spend, say, 10 minutes a day at, simply rereading a selection of old emails you sent, lines from IRC with n lines of surrounding context, Reddit & LW comments etc; with an appropriate backoff & time-curve, you would reread each item maybe 3 times in your lifetime (eg first after a delay of a month, then a year or two, then decades). Each item could come with a rating function where the user rates it as an important or odd-seeming or incomplete item to be exposed again in a few years, or as totally irrelevant and not to be shown again – which for many bits of idle chit-chat, mundane emails, or intemperate comments is not an instant too soon! (More positively, anything already incorporated into an essay or otherwise reused likely doesn’t need to be resurfaced.)
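The backoff-and-rating scheme described above fits in one function. This is only a toy illustration: the rating names and the multipliers are my own assumptions, not anything specified in the proposal.

```python
def next_interval(rating, interval_days):
    """Next resurfacing delay (in days) for an archive item.

    'delete' means never show again (returns None); 'up' surfaces the item
    ahead of schedule; 'down' pushes it far off into the future; None is the
    default expanding backoff. The 12x default turns a 1-month first delay
    into roughly a year, then roughly a decade: about 3 reads per lifetime.
    """
    if rating == "delete":
        return None
    multiplier = {"up": 3, "down": 48, None: 12}[rating]
    return interval_days * multiplier
```

Unlike a true SRS scheduler, nothing here models a forgetting curve; the multipliers exist only to thin the stream of resurfaced items over time.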

This wouldn’t be the same as a spaced repetition system which is designed to recall an item as many times as necessary, at the brink of forgetting, to ensure you memorize it; in this case, the forgetting curve & memorization are irrelevant and indeed, the priority here is to try to eliminate as many irrelevant or useless items as possible from showing up again so that the review doesn’t waste time.

More specifically, you could imagine an interface somewhat like Mutt which reads in a list of email files (my local POP email archives downloaded from Gmail with getmail4, filename IDs), chunks of IRC dialogue (a grep of my IRC logs producing lines written by me ±10 lines for context, hashes for ID), LW/Reddit comments downloaded by either scraping or API via the BigQuery copy up to 2015, and stores IDs, review dates, and scores in a database. One would use it much like an SRS system, reading individual items for 10 or 20 minutes, and rating them, say, upvote (this could be useful someday, show me this ahead of schedule in the future) / downvote (push this far off into the future) / delete (never show again). Items would appear on an expanding schedule.

[...]

As far as I know, some to-do/self-help systems have something like a periodic review of past stuff, and as I mentioned, spaced repetition systems do something somewhat similar to this idea of exponential revisits, but there’s nothing like this at the moment.

Students don’t know how they study and learn best

November 7th, 2018

Some progressive teachers take pride in allowing students to choose how they study and learn best, but there’s a serious flaw they overlook: students don’t know how they study and learn best:

Karpicke, Butler, and Roediger III (2009) (1) explored study habits used by college students. They surveyed 177 students and asked them two questions. For the sake of this post, I will only focus on question one:

What kind of strategies do you use when you are studying? List as many strategies as you use and rank-order them from strategies you use most often to strategies you use least often.

The results? Repeated rereading was by far the most frequently listed strategy (84% reported using it), and 55% reported that it was their number one strategy. Only 11% reported practicing recall (self-testing) of information, and just 1% identified practicing recall as their number one strategy. This does not bode well for student choice of study methods: 55% of those surveyed intuitively believed that rereading their notes was the best use of their study time…assuming students intended to use their time most effectively. This is just not so.

A phenomenon known as the testing effect indicates that retrieving information from memory has a great effect on learning and strengthens long-term retention of information (2). The testing effect can take many forms, with the most important aspect being students retrieve information. A common saying in my room is to make sure my students are only using their brain…if you’re using notes, the textbook, or someone else’s brain, you’re not doing it right. While many correctly see this attempt as a great way to regulate and assess one’s knowledge, the act of recalling and retrieving strengthens long-term retention of information.

This is not so with repetitive rereading. Memory research has shown rereading by itself is not an effective or efficient strategy for promoting learning and long-term retention (3). Perhaps students believe the more time I spend studying, the more effective the learning. Is it correct to believe that the longer I study something and keep it in my working memory, the better I will remember it? No.

A new molecular CT scan could dramatically speed drug discovery

November 6th, 2018

Researchers have adapted a third technique, commonly used to chart much larger proteins, to determine the precise shape of small organic molecules:

The gold standard for determining chemical structures has long been x-ray crystallography. A beam of x-rays is fired at a pure crystal containing millions of copies of a molecule lined up in a single orientation. By tracking how the x-rays bounce off atoms in the crystal, researchers can work out the position of every atom in the molecule. Crystallography can pinpoint atomic positions down to less than 0.1 nanometers, about the size of a sulfur atom. But the technique works best with fairly large crystals, which can be hard to make. “The real lag time is just getting a crystal,” says Brian Stoltz, an organic chemist at the California Institute of Technology (Caltech) in Pasadena. “That can take weeks to months to years.”

The second approach, known as nuclear magnetic resonance (NMR) spectroscopy, doesn’t require crystals. It infers structures by perturbing the magnetic behavior of atoms in molecules and then tracking their behavior, which changes depending on their atomic neighbors. But NMR also requires a fair amount of starting material. And it’s indirect, which can lead to mapping mistakes with larger druglike molecules.

The new approach builds on a technique called electron diffraction, which sends an electron beam through a crystal and, as in x-ray crystallography, determines structure from diffraction patterns. It has been particularly useful in solving the structure of a class of proteins lodged in cell membranes. In this case, researchers first form tiny 2D sheetlike crystals of multiple copies of a protein wedged in a membrane.

But in many cases, efforts to grow the protein crystals go awry. Instead of getting single-membrane sheets, researchers end up with numerous sheets stacked atop one another, which can’t be analyzed by conventional electron diffraction. And the crystals can be too small for x-ray diffraction. “We didn’t know what to do with all these crystals,” says Tamir Gonen, an electron crystallography expert at the University of California, Los Angeles (UCLA). So, his team varied the technique: Instead of firing their electron beam from one direction at a static crystal, they rotated the crystal and tracked how the diffraction pattern changed. Instead of a single image, they got what was more like a molecular computerized tomography scan. That enabled them to get structures from crystals one-billionth the size of those needed for x-ray crystallography.

Gonen says because his interest was in proteins, he never thought much about trying his technique on anything else. But earlier this year, Gonen moved from the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, to UCLA. There, he teamed up with colleagues, along with Stoltz at Caltech, who wanted to see whether the same approach would work not just with proteins, but with smaller organic molecules. The short answer is it did. On the chemistry preprint server ChemRxiv, the California team reported on Wednesday that when they tried the approach with numerous samples, it worked nearly every time, delivering a resolution on par with x-ray crystallography. The team could even get structures from mixtures of compounds and from materials that had never formally been crystallized and were just scraped off a chemistry purification column. These results all came after just a few minutes of sample preparation and data collection. What’s more, a collaboration of German and Swiss groups independently published similar results using essentially the same technique this week.

Energy drinks are associated with mental health problems, anger-related behaviors, and fatigue

November 5th, 2018

Energy drinks are popular with young men, especially young men in the military, and they may be contributing to mental health problems:

What the authors found was that over the course of the month leading up to the survey, more than 75 percent of soldiers consumed energy drinks. More surprising, however, was that 16 percent “of soldiers in this study reported continuing to consume two or more energy drinks per day in the post-deployment period,” the authors wrote.

High energy drink use, which was classified as consuming two or more drinks per day, was significantly associated with those survey respondents who reported mental health problems, anger-related behaviors and fatigue, the authors found.

Those consuming less than one energy drink per week reported these symptoms at a significantly lower rate.

Also of note is that energy drink use in this Army infantry sample was five times higher than in previous studies that analyzed consumption patterns among airmen and among the general population’s youth.

The original study is available online.

Your family pet is a secret badass

November 4th, 2018

When screenwriter Zack Stentz was a little kid, he was obsessed by the Chuck Jones adaptation of Kipling’s “Rikki-Tikki-Tavi“:

I think the idea that your family pet is a secret badass who will fight cobras to protect you at night spoke to me on a deep level.

I remember loving it too, so I was surprised when someone mentioned another Chuck Jones-animated adaptation of a Kipling story, “The White Seal.”

Chuck Jones is a fascinating character — as you might expect of the guy who created the Road Runner, Wile E. Coyote, Pepé Le Pew, and Marvin the Martian — and I remember enjoying his memoir, Chuck Amuck. I distinctly remember one anecdote.

Chuck’s father kept starting businesses, and each time he started a new business, he bought lots of letterhead. When the business soon failed, his kids were encouraged to use up the paper as fast as possible — so young Chuck got lots and lots of practice drawing.

Chuck’s grandson seems to have inherited a bit of the animator’s spirit, judging from this look at how Chuck studied seals for “The White Seal.”

There’s something different about being blown up

November 3rd, 2018

The “routine” treatment for a head injury — whether in Iraq, Afghanistan, or an American emergency room — works, but not for all traumas:

As soon as you enter the emergency room (ER) as a “Head Injury,” your blood pressure and breathing will be stabilized. ER doctors know the procedures and will, if there are signs of increased intracranial pressures, put you into a drug-induced coma to slow any ongoing damage to injured brain cells and protect any of the remaining healthy tissues from undergoing any secondary damage.

A calcium channel blocker will be administered to help stabilize the outer membranes of the injured nerve cells to maintain normal intracellular metabolism. If your blood pressure becomes too high, ER personnel will lower the pressures to protect against any re-bleeding or the expansion of any blood clots that have already formed within the brain following the initial injury. If the pressures are too low – which can further decrease the blood flow to what remains of the undamaged brain tissues, itself leading to more neurological damage – medications will be given to raise the pressures to maintain adequate blood flow to the brain and central nervous system despite the injury.

Those parts of the brain not damaged still have to receive their usual amounts of oxygen and nutrients. But even with all this care after a traumatic brain injury, recovery is always one of those medically “iffy” things.

If the brain continues to swell, damaging as yet undamaged parts of the brain, the neurosurgeons will begin IV fluids of 8 to 12% saline to control swelling. If that doesn’t work, they will add an IV diuretic to drain the body of fluids in an effort to keep down the increasing intracranial pressures that may continue to compress arteries, cutting off oxygen to the still healthy brain cells. If the hypertonic fluids and diuretics fail to work, they will take you to the operating room and neurosurgeons will remove the top of the skull to allow the brain to swell without compressing and damaging any of the still undamaged underlying tissues.

Since the brain is in a closed space, the overriding idea behind removing the top of the skull is to relieve any increasing intra-cranial pressures that would surely further damage the tissues of the physically compressed brain. Such a development would be even more damaging to tissues as the decreased delivery of oxygen would impair the still undamaged brain tissues. When the swelling has finally decreased and the brain is back to normal size, the neurosurgeons will simply put back that part of the skull removed and wait for the patient to recover.

All of this works and has worked hundreds of times in military surgical hospitals and in emergency rooms and major trauma centers around the country. It certainly works if the patient has been shot in the head.

[...]

But what we have learned from the battlefields of our newest wars is that the brain damage from an IED appears to be a different kind of traumatic brain injury.

Treatments at an earlier time regarded as usual for head injuries do not work. There is clearly something different and so unexpected going on down at the cellular or sub-cellular level of the brain following exposure to a pressure wave that is not the same as hitting your head on the pavement, falling in a bathroom, or being shot in the head. There is simply something fundamentally different about being blown up.

Stop when you’re almost finished

November 2nd, 2018

Performance psychologist Noa Kageyama recommends harnessing resumptive drive, or the Zeigarnik effect, to get yourself to practice when you don’t feel like it:

Bluma Zeigarnik described a phenomenon way back in 1927, in which she observed while sitting in a restaurant that waiters seemed to have a selective memory. As in, they could remember complicated customers’ orders that hadn’t yet been filled, but once all the food had been served (or maybe when the bill was paid?), it’s as if the order was wiped from their memory.

Back in her lab, she found that indeed, participants were much more likely to remember tasks they started but didn’t finish, than tasks that were completed (hence, the Zeigarnik effect).

Another form of the Zeigarnik effect — and the one more relevant to what we’re talking about here — is the observation that people tend to be driven to resume tasks in which they were interrupted and unable to finish.

Researchers at Texas Christian University & University of Rochester ran a study on this form of the Zeigarnik effect.

Subjects were given eight minutes to shape an eight-cube, three-dimensional puzzle into five different forms. They were told to work as quickly as possible, and given three minutes to complete the first two puzzles as practice.

Then they were given five minutes to solve the last three puzzles.

The researchers deliberately made the second practice puzzle difficult — one that was unlikely to be solved within the time available. And just as they had hoped, only 6 of the 39 participants solved the difficult puzzle.

After their time was up, the participants had eight minutes of free time to do as they wished while the researcher running the experiment left the room to retrieve some questionnaires they accidentally forgot to bring, saying they would be back in “5 or 10 minutes.” This was all a ruse, of course, to see what the participants would do when left alone.

Despite there being other things in the room to do (e.g. a TV, magazines, newspaper, etc.), 28 of the 39 participants (72%) resumed working on the puzzles.

[...]

Of the six who completed the difficult puzzle, only one (17%) resumed working on the puzzles (and did so for one minute and 18 seconds).

Of the 33 who did not complete the challenging puzzle, 27 (82%) resumed working on the puzzle, and on average, spent more than two and a half times as long (3:20) working on the puzzles.

So, when interrupted in the middle of a task, not only were participants more motivated to resume working on that task, but they also continued working on it for much longer.

[...]

So instead of thinking about practicing for an hour, or having to work on 10 excerpts, or memorize a concerto, just tune your instrument. Or play a scale really slowly. Or set the timer for five minutes and pick one little thing to fix. And if at the end of five, you don’t feel like continuing, put your instrument away and try again later.

Don’t feel like studying? Just crack open the book. Work on one math problem. Write three sentences of your essay. Create two flash cards.

Second, once you’ve finally gotten yourself into the mood to practice or study, try stopping in the middle of a task. Meaning, if you’re working on a tricky passage that has you stumped, test out a few solutions, but leave yourself a few possible solutions remaining before taking a practice break. Stop when you’re almost finished solving the math problem. Or in the middle of a sentence.

It’s not what you know, but whether you use it

November 1st, 2018

Two researchers from the City University of New York did a study of basketball players to discern a difference between the practice habits of the best free throw shooters (70% or higher) and the worst free throw shooters (55% or lower):

Difference #1: Goals were specific

The best free throw shooters had specific goals about what they wanted to accomplish or focus on before they made a practice free throw attempt. As in, “I’m going to make 10 out of 10 shots” or “I’m going to keep my elbows in.”

The worst free throw shooters had more general goals — like “Make the shot” or “Use good form.”

Difference #2: Attributions of failure were specific

Invariably, the players would miss shots now and again, but when the best free throw shooters missed, they tended to attribute their miss to specific technical problems — like “I didn’t bend my knees.” This lends itself to a more specific goal for the next practice attempt, and a more thoughtful reflection process upon the hit or miss of the subsequent free throw. Far better than saying “I suck” or “What’s wrong with me?” or “Crap, I’m never going to get this.”

In contrast, the worst performers were more likely to attribute failure to non-specific factors, like “My rhythm was off” or “I wasn’t focused” which doesn’t do much to inform the next practice attempt.

It’s not what you know, but whether you use it

You might be thinking that perhaps the worst performers didn’t focus on specific technical strategies because they simply didn’t know as much. That perhaps the best performers were able to focus on technique and strategy because they knew more about how to shoot a free throw with proper form.

The researchers thought of this as well, and specifically controlled for this possibility by testing for the players’ knowledge of basketball free throw shooting technique. As it turns out, there were no significant differences in knowledge between experts and non-experts.

So while both the top performers and the worst performers had the same level of knowledge to draw from, very few of the worst performers actually utilized this knowledge base. Meanwhile, the best performers were much more likely to utilize their knowledge to think, plan, and direct their practice time more productively.

It’s not necessary to have a brain disorder in order to control one’s fear

October 31st, 2018

Scientists are starting to understand the biology of bravery:

Most of the science focuses on the amygdala, the almond-shaped structure deep in the brain (one on each side) that generates such feelings as fear and anxiety. In 2005, a team led by Gleb Shumyatsky at Rutgers University reported in the journal Cell that stathmin, a protein produced by the STMN1 gene, has an important role in the amygdala. Mice that were bred not to have the protein explored more of a new environment. They lacked what the researchers called “innate fear” and were unable to form memories of fear-inducing events.

The researchers also manipulated the gene as a kind of “volume” control, producing different levels of stathmin, which in turn resulted in different levels of fear in the mice. In 2010, researchers led by Burkhard Brocke at the Institute of Psychology II in Germany found that people with an exaggerated response to fear had mutations in the gene that controls this volume switch.

As for how we overcome fear, scientists have found brain structures that appear to resist the prompting of the amygdala. In a 2010 study published in the journal Neuron, the neurobiologist Uri Nili at the Weizmann Institute in Israel scanned the brains of research subjects who were afraid of snakes as they decided whether or not to move a live snake closer or farther away on a conveyor belt. The more people were able to overcome their fear and move the snake closer, the more activity they showed in the sgACC, a brain region that sits between the amygdala and the hypothalamus, which stimulates the release of hormones. A control group that wasn’t scared of snakes didn’t show such activity.

Hormones released in the amygdala itself also have been shown to affect bravery. Oliver Bosch, a neurobiologist at the University of Regensburg in Germany, studies maternal instinct in mammals and has found that oxytocin is released in the amygdala when a mother faces a danger to herself and her children. This hormone, in turn, blocks the production of a hormone called CRH, which primes the body for action but can generate feelings of fear and anxiety. It is this sort of hormonal override that would have given Angie Padron, the mother in Florida, the instant courage to confront her assailants. As she herself said of the incident, her instincts just kicked in.

Indeed, taking the amygdala entirely out of the picture can virtually eliminate fear. Justin Feinstein, a clinical neuropsychologist at the Laureate Institute for Brain Research at the University of Tulsa, works with three women, known in the literature just by their initials, who have Urbach-Wiethe disease, a rare genetic disorder that destroys the amygdala. One of them, SM, has never experienced fear in her adult life. A man once threatened her by putting a gun to her head and shouting “Bam!” She didn’t flinch.

Of course, it’s not necessary to have a brain disorder in order to control one’s fear, even in the face of heart-stopping danger. Consider Alex Honnold, the climber who has scaled the 3,000-foot El Capitan in Yosemite National Park without ropes (as featured in the new documentary, “Free Solo”) and made other notable ascents. In 2016, Mr. Honnold’s brain was scanned by neuroscientist Jane Joseph at the Medical University of South Carolina in Charleston. When exposed to images that excite the amygdala in most people, his brain scans showed no response. What’s unclear is whether this capacity predates and enables his daredevil climbing or has been created by it.

[...]

But the amygdala isn’t the only candidate for controlling fear. In a study published earlier this month in the journal Nature Communications, Sanja Mikulovic and colleagues at Uppsala University in Sweden showed that cells called OLM neurons produce theta brain waves, which are seen during meditation and when you feel safe despite a threat in the environment. By manipulating those cells in laboratory mice, the scientists were able to dial up a mouse’s willingness to venture into unexplored areas and tamp down its indications of anxiety, even when smelling a cat. Nicotine also stimulates OLM neurons in humans, a reason that some people chain-smoke to relieve stress.

We know, too, that training and conditioning alters pathways in the brain and can help to mitigate stress and promote calm in fearful situations. A study published in the journal PLOS Biology last year showed, for example, how training instills a kind of autopilot setting. Researcher Sirawaj Itthipuripat at the University of California, San Diego, measured brain activity when people were learning a task and found that less was needed after training, though improvement in performance remained. Another recent paper connected that idea to how people respond to uncertainty and threats. A team of German and Greek researchers completed a nine-month longitudinal study, published in the journal Science Advances, that showed some forms of training changed structures in the cortex and reduced secretions of the stress hormone cortisol.

Military training is partly designed to hold fear in check when carrying out missions that risk death and injury, as well as in the case of disaster. Dave Henson’s training before he deployed to Afghanistan helped him to stay composed while detecting and disarming improvised explosives. Then, a year into his tour, Mr. Henson stepped on an IED. He lost both of his legs.

Once the immediate shock of the blast receded, he found himself reciting the process that he had been trained to follow in the event of a casualty scenario. “The training definitely kicked in,” he says; it distracted him from the pain.

All Hallows’ Eve

October 31st, 2018

I’ve written a surprising amount about Halloween and horror over the years.

The fall of Big Data and the rise of the Blockchain economy

October 30th, 2018

George Gilder’s Life After Google predicts the fall of Big Data and the rise of the Blockchain economy:

Famously, Google gives most of its content away for free, or (in comments Gilder credits to Tim Cook) if it’s free, you’re not the customer; you’re the product. That’s the least of it. Spanish has two words for “free”–gratis and libre. In our context it means gratis.

Let’s count the ways gratis benefits Google:

  • They are completely immune from any antitrust prosecution and most other regulatory oversight.
  • They can roll out buggy, beta software to consumers and improve it over time.
  • They don’t have to take responsibility for security. Unlike a bank, Google is at no risk if somehow your data gets corrupted or stolen.
  • They provide no customer support.
  • Your data doesn’t belong to you. Instead it belongs to Google, which can monetize it with the help of AI.
  • You get locked into a Google world, where everything you own is now at their mercy. (I’m in that situation.) Your data is precisely not libre.

Note that Google didn’t even bother to show up at the recent Congressional hearings about “fake news.” They consider themselves above the law (or, perhaps more accurately, below the law). They can get away with this because it’s free.

There are some disadvantages.

  • It’s not really free, but instead of paying with money you pay with time. Attention is the basic currency of Google-world.
  • People hate ads. “[O]nly 0.06 percent of smartphone ads were clicked through. Since more than 50 percent of the clicks were by mistake, according to surveys, the intentional response rate was 0.03 percent.” This works only for spammers. Ad-blockers are becoming universal.
  • Google thinks it can circumvent that by using AI to generate ads that will interest the user. No matter–people still hate them. The result is that the value of advertising is declining. Gilder does not believe that AI will ever solve this problem. (I agree with him.)
  • Most important–Google loses any information about how valuable its products are. Airlines, for example, respond sensitively to price signals when determining which routes to fly, what equipment to use, what service levels to provide, etc. Price is the best communication mechanism known for conveying economic information. You immediately know what is valuable to consumers, and what isn’t. Google loses all that information by going gratis. Is Gmail more valuable than Waze? Google has no idea. As a result it has no way of knowing where to invest its money and resources. It’s just blindly throwing money at a dartboard.

Wired to look for chances to earn money

October 29th, 2018

Americans have a blind spot when it comes to saving:

Americans seem to excel at working. But saving? Not so much. As of last year, the median American household had only $1,100 saved for retirement, according to an analysis from the Federal Reserve Bank of St. Louis.

While many factors likely contribute to the poor U.S. savings rate, a recent Cornell University study published in the journal Nature Communications pointed to another factor that may be at least partially to blame: our brains. More specifically, the researchers found that our brains may be wired to look for chances to earn money — but fail to recognize chances to save, even when they are right in front of us.

The study measured something we can’t usually measure ourselves: how much attention we pay to earning and saving opportunities. First, participants had to identify colors shown quickly on a computer: one “earning” color that let them gain 30 cents, a neutral color that had no monetary effect and one “saving” color that let them avoid losing 30 cents.

When the “earning” color was shown, a staggering 87.5% of participants identified it more quickly and accurately than when the “saving” color was shown. Even in trials that framed “saving” as earnings that would come slightly later, participants were still better at immediate earning.

In the study’s second part, participants had to identify which color appeared first. Three out of four said they saw the “earning” color appear first — when in fact, the “saving” color did. This suggests our “earning” bias may even be strong enough to warp our perception of time.

Any idiot can train himself into the ground

October 28th, 2018

Performance psychologist Dr. Noa Kageyama discusses the importance of mentally disengaging from work and practice:

A group of German and US researchers conducted a study of 109 individuals. The setup was pretty simple, consisting of two surveys, spaced 4 weeks apart to see how participants’ mental and emotional states might change over time.

The researchers were primarily interested in the relationship between psychological detachment (our ability to disengage from work during our “off” hours — a key factor in greater well-being and performance), exhaustion (feeling fatigued, emotionally drained/overwhelmed, and unable to meet the demands of our work), time pressure, and pleasurable leisure activities (the degree to which we engage in activities that recharge our batteries and balance out our work demands).

There were a couple interesting findings that came out of the resulting data.

Exhaustion begets exhaustion

You would think that emotionally exhausted folks would be more detached and disengaged from work in their off-work hours. Paradoxically, the opposite seems to be true.

The data suggest that individuals who were exhausted had an increasingly difficult time disconnecting from work concerns as the weeks went by. The idea being, when we’re exhausted, we tend not to do our best work, which makes us feel less capable of meeting the demands of the situation, which makes us worry more and expend even more energy, effort, and time trying to make up for our sub-par work, which only keeps the cycle of worry/practice/exhaustion going.

To use a music example, when we have a big audition coming up, there’s a tendency to worry more about our level of preparation, which leads us to practice more, worry more, and obsess more, which in turn makes it harder to disengage, take a break, and recoup our energy outside of the practice room, so we can dive back in refreshed, recharged, and ready to do our most productive and focused work.

Indeed, someone recently suggested to me that while our instinct when behind in our work is to put in a few extra hours at the office after work to catch up, what ends up happening is that we get home late, feel even more tired and drained, get less rest and relaxation, and return to work tired yet again to repeat the cycle. Instead, she suggested that it’s more productive to go home early, get quality R&R, and go to work early the next morning, fresher, more productive, and more motivated to get things done.

Time pressure makes things worse

The other finding was that time pressure seems to make detaching from work more difficult if you’re already feeling exhausted. As in, exhausted folks find it increasingly difficult to mentally detach from work and get the mental/physical break they need when they feel like they’re on a time crunch.

This makes sense too, as the less time we have to prepare, and the closer we get to the day of a big audition, the more likely we are to worry, stress, and obsess about it, even when we’re not practicing.

[...]

As Olympic marathoner Keith Brantly once said, “Any idiot can train himself into the ground; the trick is working in training to get gradually stronger.”

If you’re going to practice, you might as well do it right

October 27th, 2018

The most valuable lesson Noa Kageyama learned from playing the violin was, if you’re going to practice, you might as well do it right:

I began playing the violin at age two, and for as long as I can remember, there was one question which haunted me every day.

Am I practicing enough?

I scoured books and interviews with great artists, looking for a consensus on practice time that would ease my conscience. I read an interview with Rubinstein, in which he stated that nobody should have to practice more than four hours a day. He explained that if you needed that much time, you probably weren’t doing it right.

And then there was violinist Nathan Milstein who once asked his teacher Leopold Auer how many hours a day he should be practicing. Auer responded by saying “Practice with your fingers and you need all day. Practice with your mind and you will do as much in 1 1/2 hours.”

Even Heifetz indicated that he never believed in practicing too much, and that excessive practice is “just as bad as practicing too little!” He claimed that he practiced no more than three hours per day on average, and that he didn’t practice at all on Sundays.

[...]

Here are the five principles I would want to share with a younger version of myself. I hope you find something of value on this list as well.

1. Focus is everything
Keep practice sessions limited to a duration that allows you to stay focused. This may be as short as 10-20 minutes, and as long as 45-60+ minutes.

2. Timing is everything, too
Keep track of times during the day when you tend to have the most energy. This may be first thing in the morning, or right before lunch. Try to do your practicing during these naturally productive periods, when you are able to focus and think most clearly. What to do in your naturally unproductive times? I say take a guilt-free nap.

3. Don’t trust your memory
Use a practice notebook. Plan out your practice, and keep track of your practice goals and what you discover during your practice sessions. The key to getting into “flow” when practicing is to constantly strive for clarity of intention. Have a crystal clear idea of what you want (e.g. the sound you want to produce, or particular phrasing you’d like to try, or specific articulation, intonation, etc. that you’d like to be able to execute consistently), and be relentless in your search for ever better solutions.

When you stumble onto a new insight or discover a solution to a problem, write it down! As you practice more mindfully, you’ll begin making so many micro-discoveries that you will need written reminders to remember them all.

4. Smarter, not harder
When things aren’t working, sometimes we simply have to practice more. And then there are times when it means we have to go in a different direction.

I remember struggling with the left-hand pizzicato variation in Paganini’s 24th Caprice when I was studying at Juilliard. I kept trying harder and harder to make the notes speak, but all I got was sore fingers, a couple of which actually started to bleed (well, just a tiny bit).

Instead of stubbornly persisting with a strategy that clearly wasn’t working, I forced myself to stop. I brainstormed solutions to the problem for a day or two, and wrote down ideas as they occurred to me. When I had a list of some promising solutions, I started experimenting.

I eventually came up with a solution that worked, and the next time I played for my teacher, he actually asked me to show him how I made the notes speak so clearly!

5. Stay on target with a problem-solving model
It’s extraordinarily easy to drift into mindless practice mode. Keep yourself on task using the 6-step problem-solving model below.

1. Define the problem (What result did I just get? What do I want this note/phrase to sound like instead?)
2. Analyze the problem (What is causing it to sound like this?)
3. Identify potential solutions (What can I tweak to make it sound more like I want?)
4. Test the potential solutions and select the most effective one (What tweaks seem to work best?)
5. Implement the best solution (Reinforce these tweaks to make the changes permanent)
6. Monitor implementation (Do these changes continue to produce the results I’m looking for?)

Or simpler yet, try out this model from Daniel Coyle’s excellent book The Talent Code.
1. Pick a target
2. Reach for it
3. Evaluate the gap between the target and the reach
4. Return to step one

It’s just plain good science fiction and it satisfies

October 26th, 2018

I haven’t read The Da Vinci Code — or any other conspiracy thrillers, now that I think of it — but I have to assume that Hans G. Schantz’s Hidden Truth series reads like Dan Brown’s bestselling novel — but with physics taking the place of theology.

Schantz can credibly weave physics into his story, because he is a trained physicist who literally “wrote the book” on The Art and Science of Ultra-Wideband Antennas. The first book definitely made me want to know more about the pioneers of electromagnetic theory — many of whom did die young or inexplicably left the field.

But the real draw — or drawback — of the novel is that it is unambiguously conservative and especially anti-Progressive. This makes it a bit of a guilty pleasure, if you subscribe to Jordan Peterson’s point about art versus propaganda.

Neovictorian reviewed the second book, and I think he reviewed it well:

It’s fun, it’s well written, it’s just plain good science fiction and it satisfies. Also, it’s a practical guide to understanding, infiltrating and grandly screwing with college SJWs. After you’ve read it, buy a copy (of both volumes) for your friends and children at school! Buy copies for younger kids, too. These books show how young people should conduct themselves with honor and perseverance, and not through preaching, but through example.

I may have to read Neovictorian’s own Sanity next.