The brain seems to be wired to be periodically distractible

Thursday, October 10th, 2019

To pay attention, the brain uses filters, not a spotlight:

For a long time, because attention seemed so intricately tied up with consciousness and other complex functions, scientists assumed that it was first and foremost a cortical phenomenon. A major departure from that line of thinking came in 1984, when Francis Crick, known for his work on the structure of DNA, proposed that the attentional searchlight was controlled by a region deep in the brain called the thalamus, parts of which receive input from sensory domains and feed information to the cortex. He developed a theory in which the sensory thalamus acted not just as a relay station, but also as a gatekeeper — not just a bridge, but a sieve — staunching some of the flow of data to establish a certain level of focus.

[...]

[Michael Halassa, a neuroscientist at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology] was drawn to a thin layer of inhibitory neurons called the thalamic reticular nucleus (TRN), which wraps around the rest of the thalamus like a shell. By the time Halassa was a postdoctoral researcher, he had already found a coarse level of gating in that brain area: The TRN seemed to let sensory inputs through when an animal was awake and attentive to something in its environment, but it suppressed them when the animal was asleep.

In 2015, Halassa and his colleagues discovered another, finer level of gating that further implicated the TRN as part of Crick’s long-sought circuit — this time involving how animals select what to focus on when their attention is divided among different senses. In the study, the researchers used mice trained to run as directed by flashing lights and sweeping audio tones. They then simultaneously presented the animals with conflicting commands from the lights and tones, but also cued them about which signal to disregard. The mice’s responses showed how effectively they were focusing their attention. Throughout the task, the researchers used well-established techniques to shut off activity in various brain regions to see what interfered with the animals’ performance.

As expected, the prefrontal cortex, which issues high-level commands to other parts of the brain, was crucial. But the team also observed that if a trial required the mice to attend to vision, turning on neurons in the visual TRN interfered with their performance. And when those neurons were silenced, the mice had more difficulty paying attention to sound. In effect, the network was turning the knobs on inhibitory processes, not excitatory ones, with the TRN inhibiting information that the prefrontal cortex deemed distracting. If the mouse needed to prioritize auditory information, the prefrontal cortex told the visual TRN to increase its activity to suppress the visual thalamus — stripping away irrelevant visual data.

The attentional searchlight metaphor was backward: The brain wasn’t brightening the light on stimuli of interest; it was lowering the lights on everything else.

[...]

With tasks similar to those they’d used in 2015, the team probed the functional effects of various brain regions on one another, as well as the neuronal connections between them. The full circuit, they found, goes from the prefrontal cortex to a much deeper structure called the basal ganglia (often associated with motor control and a host of other functions), then to the TRN and the thalamus, before finally going back up to higher cortical regions. So, for instance, as visual information passes from the eye to the visual thalamus, it can get intercepted almost immediately if it’s not relevant to the given task. The basal ganglia can step in and activate the visual TRN to screen out the extraneous stimuli, in keeping with the prefrontal cortex’s directive.

[...]

Halassa’s findings indicate that the brain casts extraneous perceptions aside earlier than expected. “What’s interesting,” said Ian Fiebelkorn, a cognitive neuroscientist at Princeton University, is that “filtering is starting at that very first step, before the information even reaches the visual cortex.”

[...]

According to his findings, the focus of the attentional spotlight seems to get relatively weaker about four times per second, presumably to prevent animals from staying overly focused on a single location or stimulus in their environment. That very brief suppression of what’s important gives other, peripheral stimuli an indirect boost, creating an opportunity for the brain to shift its attention to something else if necessary. “The brain seems to be wired to be periodically distractible,” he said.

It’s a very vulnerable point, and plants have targeted it

Wednesday, October 9th, 2019

Monarch butterflies eat only milkweed, a poisonous plant that should kill them, and even store the toxins in their own bodies as a defense against hungry birds:

Only three genetic mutations were necessary to turn the butterflies from vulnerable to resistant, the researchers reported in the journal Nature. When they introduced these mutations into fruit flies, the flies were suddenly able to eat milkweed, too.

[...]

Insects began dining on plants over 400 million years ago, spurring the evolution of many botanical defenses, including harsh chemicals. Certain plants, including milkweed, make particularly nasty toxins known as cardiac glycosides.

The right dose can stop a beating heart or disrupt the nervous system. For thousands of years, African hunters have put these poisons on the tips of arrows. Agatha Christie wrote a murder mystery featuring foxglove, which produces cardiac glycosides.

The toxins gum up so-called sodium pumps, an essential component of all animal cells. “It’s a very vulnerable point, and plants have targeted it,” said Susanne Dobler, a molecular biologist at the University of Hamburg in Germany.

These pumps move positively charged sodium atoms out of cells, giving their interiors a negative charge. Heart cells need sodium pumps to build enough electrical charge to deliver a heartbeat. Nerves use the pumps to produce signals to the brain. If the pumps fail, then those functions come to a halt.

[...]

The researchers compared the genes that serve as blueprints for the sodium pump in poison-resistant species, like the milkweed beetle and the milkweed bug. Most of these species, it turned out, had gained the same three mutations.

[...]

Monarchs share one of the mutations with a related butterfly that doesn’t eat milkweed, and a second mutation with a closer relative that eats milkweed but doesn’t store cardiac glycosides in its wings. The third mutation arose in an even more recent ancestor.

Gaining these mutations gradually altered the sodium pumps in the monarchs’ cells, Dr. Dobler suspected, so that the cardiac glycosides couldn’t disrupt them. As the butterflies became more resistant, they were able to enjoy a new supply of food untouched by most other insects.

[...]

Noah Whiteman, an evolutionary biologist at the University of California, Berkeley, led the effort to test this hypothesis. “These three mutations may be the thing that unlocked the door” for the butterflies, he said.

He and his colleagues figured out how to use Crispr, the gene-editing technology, to introduce the mutations into fruit flies. The flies survive on rotting fruit, and even a small dose of cardiac glycosides can be deadly to them.

The researchers began by giving the flies the first mutation to arise in the ancestors of monarchs. The larvae that carried this mutation were able to survive on a diet of yeast laced with low levels of cardiac glycosides.

The second mutation let the flies withstand even more toxins, and the third made them entirely resistant. With all three mutations, the flies even ate dried milkweed powder.

The third mutation had another striking effect. When the flies with the gene developed into adults, their bodies carried low levels of cardiac glycoside, useful as a defense against predation.

O brave new world that has such insects in it!

Heat training can boost your cool-weather performance

Monday, October 7th, 2019

A 2010 study from the University of Oregon found that 10 days of training in 104 degrees Fahrenheit boosted cyclists’ VO2max by 5 percent, Alex Hutchinson notes, even when the subjects were later tested in cool temperatures, and a new study out of Swansea University supports this finding:

The study involved 22 cyclists (all male, alas), all of whom were serious amateur cyclists training an average of 14 hours a week and competing regularly. The adaptation protocol was 10 consecutive days of cycling in the lab for 60 minutes at an intensity equal to 50 percent of their VO2max, with half of them in the heat group at a room temperature of 100.4 F (38 degrees Celsius) and the other half in a control group at 68 F (20 C). They also continued with their normal training outside the lab, subtracting their lab rides to maintain roughly the same training volume as usual. The outcome measure on the test days was VO2max, a marker of aerobic fitness that has a reasonably good correlation with race performance, tested at 68 F (20 C).

If you looked at the data right after the heat adaptation period, or even a couple of days later, you’d conclude that it makes you worse. The VO2max readings were lower. But three days after the heat adaptation, VO2max readings started to climb, and four days afterwards, they peaked at 4.9 percent higher than baseline, strikingly similar to the 2010 Oregon study. The control group, meanwhile, hardly saw any change.

Traumatic brain injury causes intestinal damage

Friday, October 4th, 2019

University of Maryland School of Medicine (UMSOM) researchers have found a two-way link between traumatic brain injury (TBI) and intestinal changes:

Researchers have known for years that TBI has significant effects on the gastrointestinal tract, but until now, scientists have not recognized that brain trauma can make the colon more permeable, potentially allowing harmful microbes to migrate from the intestine to other areas of the body, causing infection. People are 12 times more likely to die from blood poisoning, which is often caused by bacteria, after TBI, and 2.5 times more likely to die of a digestive system problem, compared with those without such injury.

In this study, the researchers examined mice that received an experimental TBI. They found that the intestinal wall of the colon became more permeable after trauma, changes that were sustained over the following month.

It is not clear how TBI causes these gut changes. A key factor in the process may be enteric glial cells (EGCs), a class of cells that exist in the gut. These cells are similar to brain astroglial cells, and both types of glial cells are activated after TBI. After TBI, such activation is associated with brain inflammation that contributes to delayed tissue damage in the brain. Researchers don’t know whether activation of EGCs after TBI contributes to intestinal injury or is instead an attempt to compensate for the injury.

The researchers also focused on the two-way nature of the process: how gut dysfunction may worsen brain inflammation and tissue loss after TBI. They infected the mice with Citrobacter rodentium, a species of bacteria that is the rodent equivalent of E. coli, which infects humans. In mice with a TBI that were infected with this bacterium, brain inflammation worsened. Furthermore, in the hippocampus, a key region for memory, the mice who had TBI and were then infected lost more neurons than animals without infection.

The CIA paid $240,000 to buy the world’s entire supply of LSD

Saturday, September 28th, 2019

The director of the CIA’s infamous MK-ULTRA program, Sidney Gottlieb, was the unwitting godfather of the entire LSD counterculture:

In the early 1950s, he arranged for the CIA to pay $240,000 to buy the world’s entire supply of LSD. He brought this to the United States, and he began spreading it around to hospitals, clinics, prisons and other institutions, asking them, through bogus foundations, to carry out research projects and find out what LSD was, how people reacted to it and how it might be able to be used as a tool for mind control.

Now, the people who volunteered for these experiments and began taking LSD, in many cases, found it very pleasurable. They told their friends about it. Who were those people? Ken Kesey, the author of One Flew Over the Cuckoo’s Nest, got his LSD in an experiment sponsored by the CIA, by MK-ULTRA, by Sidney Gottlieb. So did Robert Hunter, the lyricist for the Grateful Dead, which went on to become a great purveyor of LSD culture. Allen Ginsberg, the poet who preached the value of the great personal adventure of using LSD, got his first LSD from Sidney Gottlieb. Although, of course, he never knew that name.

CR is unpleasant to most humans

Thursday, September 26th, 2019

Rapamycin is an immunosuppressant for transplant patients, but it’s also been found to increase lifespan in lab animals. Dr. Alan Green, who prescribes rapamycin for anti-aging purposes, recommends Blagosklonny’s paper, Disease or not, aging is easily treatable:

Is aging a disease? It does not matter because aging is already treated using a combination of several clinically-available drugs, including rapamycin. Whether aging is a disease depends on arbitrary definitions of both disease and aging. For treatment purposes, aging is a deadly disease (or more generally, pre-disease), despite being a normal continuation of normal organismal growth. It must and, importantly, can be successfully treated, thereby delaying classic age-related diseases such as cancer, cardiovascular and metabolic diseases, and neurodegeneration.

[...]

As the simplest example, calorie restriction (CR) slows aging in diverse organisms, including primates [43-50]. Similarly, intermittent fasting (IF) and the ketogenic diet (severe carbohydrate restriction) extend life span in mammals [48, 51-54]. CR (as well as carbohydrate restriction and IF) improves health in humans [45, 48, 53, 55-62]. However, CR is unpleasant to most humans and its life-extending capacity is limited. Nutrients activate the mTOR (mechanistic Target of Rapamycin) nutrient-sensing pathway [63-65] and, as we will discuss, mTOR drives aging and is inhibitable by rapamycin. Rapamycin-based anti-aging therapies have recently been implemented by Dr. Alan Green (https://rapamycintherapy.com).

There’s a bit of circularity there.

Creativity is not an accident

Wednesday, September 25th, 2019

Creativity is not an accident, Scott Berkun argues — while listing a number of serendipitous accidents:

Microwave oven: In 1945 Percy Spencer, an engineer at Raytheon, noticed that a candy bar had melted in his pocket near radar equipment. He chose to do a series of experiments to isolate why this happened and discovered microwave cooking. It would take ~20 years before the technology developed sufficiently to reach consumers.

Safety Glass: In 1903 scientist Edouard Benedictus, while in his lab, dropped a flask by accident, and to his surprise it did not break. He discovered the flask held residual cellulose nitrate, which had formed a protective coating. It would be more than a decade before it was used commercially in gas masks.

Artificial Sweeteners: Constantine Fahlberg, a German scientist, discovered Saccharin, the first artificial sweetener, in 1879. After working in his lab he didn’t wash his hands, and at dinner discovered an exceptionally sweet taste. He returned to his lab, tasting his various experiments, until rediscovering the right one (literally risking his life in an attempt to understand his accident).

Smoke Detector: Walter Jaeger was trying to build a sensor to detect poison gas. It didn’t work, and as the story goes, he lit a cigarette and the sensor went off. It could detect smoke particles, but not gas. It took the work of other inventors to build on his discovery to make commercial smoke detectors.

X-Rays: Wilhelm Roentgen was already working on the effects of cathode rays in 1895, before he actually discovered X-rays. On November 8, 1895, during an experiment, he noticed crystals glowing unexpectedly. On investigation he isolated a new type of light ray.

[...]

The Myths of Innovation (the actual myths) will always be popular, which means for any inspiring story of a breakthrough, we must ask:

  1. How much work did the creator do before the accident/breakthrough happened?
  2. How much work did they do after the accident/breakthrough to understand it?
  3. What did they sacrifice (time/money/reputation) to convince others of the value of the discovery?

It’s answering these 3 questions about any creativity story in the news, however accidental or deliberate, that reveals the habits worth emulating if we want to follow in these creators’ footsteps.

Adding tea to milk is not the same as adding milk to tea

Monday, September 23rd, 2019

Ronald Fisher was working at an agricultural research station north of London in the 1920s, when he fixed a cup of tea for an algae biologist named Muriel Bristol:

He knew she took milk with tea, so he poured some milk into a cup and added the tea to it.

That’s when the trouble started. Bristol refused the cup. “I won’t drink that,” she declared.

Fisher was taken aback. “Why?”

“Because you poured the milk into the cup first,” she said. She explained that she never drank tea unless the milk went in second.

[...]

“Surely,” Fisher reasoned with Bristol, “the order doesn’t matter.”

“It does,” she insisted. She even claimed she could taste the difference between tea brewed each way.

Fisher scoffed. “That’s impossible.”

[...]

“Let’s run a test,” [chemist William Roach] said. “We’ll make some tea each way and see if she can taste which cup is which.”

Bristol declared she was game. Fisher was also enthusiastic. But given his background designing experiments he wanted the test to be precise. He proposed making eight cups of tea, four milk-first and four tea-first. They’d present them to Bristol in random order and let her guess.

[...]

By the eighth cup Fisher was goggle-eyed behind his spectacles. Bristol had gotten every single one correct.

It turns out adding tea to milk is not the same as adding milk to tea, for chemical reasons. No one knew it at the time, but the fats and proteins in milk—which are hydrophobic, or water hating—can curl up and form little globules when milk mixes with water. In particular, when you pour milk into boiling hot tea, the first drops of milk that splash down get divided and isolated.

Surrounded by hot liquid, these isolated globules get scalded, and the whey proteins inside them—which unravel at around 160°F—change shape and acquire a burnt-caramel flavor. (Ultra-high-temperature pasteurized milk, which is common in Europe, tastes funny to many Americans for a similar reason.) In contrast, pouring tea into milk prevents the isolation of globules, which minimizes scalding and the production of off-flavors.

[...]

Perhaps a little petulant, Fisher wondered whether Bristol had simply gotten lucky and guessed correctly all eight times. He worked out the math for this possibility and realized the odds were 1 in 70. So she probably could taste the difference.

But even then, he couldn’t stop thinking about the experiment. What if she’d made a mistake at some point? What if she’d switched two cups around, incorrectly identifying a tea-first cup as a milk-first cup and vice versa? He reran the numbers and found the odds of her guessing correctly in that case dropped from 1 in 70 to around 1 in 4. In other words, accurately identifying six of eight cups meant she could probably taste the difference, but he’d be much less confident in her ability—and he could quantify exactly how much less confident.

Furthermore, that lack of confidence told Fisher something: the sample size was too small. So he began running more numbers and found that 12 cups of tea, with 6 poured each way, would have been a better trial. An individual cup would carry less weight, so one data point wouldn’t skew things so much. Other variations of the experiment occurred to him as well (for example, using random numbers of tea-first and milk-first cups), and he explored these possibilities over the next few months.
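Fisher's arithmetic here is pure combinatorics, and it's easy to verify. A minimal sketch in Python (the variable names are mine, not Fisher's):

```python
from math import comb

# 8 cups, 4 poured milk-first: Bristol must pick which 4 of the 8
# are the milk-first cups.
total = comb(8, 4)                    # 70 equally likely ways to choose 4 of 8
p_all_correct = 1 / total             # perfect score by pure luck: 1 in 70

# Allowing one swapped pair: the single perfect guess, plus 4 * 4 ways
# to mislabel one milk-first cup as tea-first and vice versa.
near_misses = 1 + 4 * 4               # 17 of the 70 possible guesses score 6+ correct
p_six_or_better = near_misses / total # about 1 in 4, as Fisher found

# The larger design Fisher preferred: 12 cups, 6 poured each way.
print(comb(12, 6))                    # 924 possible guesses, so each cup carries less weight
```

Note how going from 8 cups to 12 inflates the space of possible guesses from 70 to 924, which is exactly why a single mistaken cup no longer swings the verdict so much.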

[...]

Fisher published the fruit of his research in two seminal books, Statistical Methods for Research Workers and The Design of Experiments. The latter introduced several fundamental ideas, including the null hypothesis and statistical significance, that scientists worldwide still use today.

Not a sonic attack, but a poisoning

Sunday, September 22nd, 2019

The mysterious ailments experienced by Canadian and U.S. diplomats and their families in Cuba may not have come from sonic attacks, but from poison — or, rather, pesticide:

A number of Canadians and Americans living in Havana fell victim to an unexplained illness starting in late 2016, complaining of concussion-like symptoms, including headaches, dizziness, nausea and difficulty concentrating. Some described hearing a buzzing or high-pitched sounds before falling sick.

[...]

The symptoms experienced by the diplomats and their families, rather, are consistent with low-dose exposure, leading researchers to examine the effects of cholinesterase (ChE) blockers in commercial products.

ChE is one of the key enzymes required for the proper functioning of the nervous system. Certain classes of pesticides work by inhibiting ChE.

Cuba, like other tropical countries, regularly sprays pesticides to kill insects that carry infectious diseases.

The researchers found that since 2016, Cuba launched an aggressive campaign against mosquitoes to stop the spread of the Zika virus.

The embassies actively sprayed in offices, as well as inside and outside diplomatic residences — sometimes five times more frequently than usual. Many times, spraying operations were carried out every two weeks, according to embassy records.

Toxicological analysis of the Canadian victims confirmed the presence of pyrethroids and organophosphates — two classes of compounds found in fumigation products.

There was also a correlation between the individuals most affected by the symptoms and the number of fumigations that were performed at their residence.

(Hat tip to our Slovenian Guest.)

The balls sink in and slowly decelerate

Thursday, September 19th, 2019

The Castillo de San Marcos is Florida’s cannonball-eating Spanish fort:

The fort guarded the Spanish empire’s trade routes as well as the surrounding city of St. Augustine, and the English wanted to run this politically and economically important outpost for themselves. Led by Carolina’s governor James Moore, the English boats dropped their anchors and laid siege.

But even after nearly two months of being shelled with cannonballs and gunfire, the fort’s walls wouldn’t give. In fact, they appeared to be “swallowing” the British cannonballs, which then became embedded within the stone. Precisely how the walls did this remained a mystery for the next three centuries.

Cannonball hole and bullet holes in Castillo de San Marcos

Built from coquina — sedimentary rock formed from compressed shells of dead marine organisms — the walls suffered little damage from the British onslaught. As one Englishman described it, the rock “will not splinter but will give way to cannon ball as though you would stick a knife into cheese.”

[...]

Jannotti and Sanika Subhash bought a few small coquina samples from the gift shop at Castillo de San Marcos, and shot small steel balls at them at speeds of 110 to 160 miles per hour. The idea was to mimic the collision conditions of a cannon firing, albeit in miniature. The researchers also used a high-speed camera that took 200,000 images per second to visualize how the coquina samples reacted to those impacts. They ran similar tests on other materials, namely sandstone and structural foam, in order to compare their properties with those of coquina.

[...]

On the contrary, coquina had a rare ability to absorb mechanical stress, which stemmed from its loosely connected inner structure. Although the little shell pieces that make up coquina have been piled and pressed into each other for thousands of years, they aren’t cemented together, so they can shuffle around a bit.

So when a cannonball slammed into the coquina walls of Castillo de San Marcos, it crushed the shells it directly hit, but the surrounding particles simply reshuffled to make space for the ball. “Coquina is very porous and its shells are weakly bonded together,” Jannotti says. “It acts almost as natural foam — the balls sink in, and slowly decelerate.”

Growth hormone, DHEA, and metformin reversed aging

Thursday, September 12th, 2019

A small clinical study suggests that it might be possible to reverse the body’s epigenetic clock, which measures a person’s biological age:

For one year, nine healthy volunteers took a cocktail of three common drugs — growth hormone and two diabetes medications — and on average shed 2.5 years of their biological ages, measured by analysing marks on a person’s genomes. The participants’ immune systems also showed signs of rejuvenation.

[…]

The latest trial was designed mainly to test whether growth hormone could be used safely in humans to restore tissue in the thymus gland. The gland, which is in the chest between the lungs and the breastbone, is crucial for efficient immune function. White blood cells are produced in bone marrow and then mature inside the thymus, where they become specialized T cells that help the body to fight infections and cancers. But the gland starts to shrink after puberty and increasingly becomes clogged with fat.

Evidence from animal and some human studies shows that growth hormone stimulates regeneration of the thymus. But this hormone can also promote diabetes, so the trial included two widely used anti-diabetic drugs, dehydroepiandrosterone (DHEA) and metformin, in the treatment cocktail.

One scary adverse event could cripple the whole enterprise

Tuesday, September 10th, 2019

Tim Ferriss has put aside many of his other projects to advance psychedelic medicine:

“It’s important to me for macro reasons but also deeply personal ones,” Mr. Ferriss, 42, said. “I grew up on Long Island, and I lost my best friend to a fentanyl overdose. I have treatment-resistant depression and bipolar disorder in my family. And addiction. It became clear to me that you can do a lot in this field with very little money.”

Mr. Ferriss provided funds for a similar center at Imperial College London, which was introduced in April, and for individual research projects at the University of California, San Francisco, testing psilocybin as an aid to therapy for distress in long-term AIDS patients.

[...]

Experiments using ecstasy and LSD, for end-of-life care, were underway by the mid-2000s. Soon, therapists began conducting trials of ecstasy for post-traumatic stress, with promising results. One of the most influential scientific reports appeared in 2006: a test of the effects of a strong dose of psilocybin on healthy adults. In that study, a team led by Roland Griffiths at Johns Hopkins found that the volunteers “rated the psilocybin experience as having substantial personal meaning and spiritual significance and attributed to the experience sustained positive changes in attitudes and behavior.”

At least as important as the findings, which were exploratory, was the source, Johns Hopkins, with all its reputational weight, and no history of institutional bias toward alternative treatments. “I got interested through meditation in altered states of consciousness, and I came into this field with no ax to grind,” said Dr. Griffiths, the director of the new center.

By late 2018, the Hopkins group had reported promising results using psilocybin for depression, nicotine addiction and cancer-related distress. Others around the world, including Dr. David Nutt at Imperial College London, were producing similar results.

Mr. Ferriss, who organized half the $17 million in commitments and contributed more than $2 million of his own for the new Hopkins center, said he approached wealthy friends who he knew had an interest in mental health. The new venture, he said he told them, “truly has the chance to bend the arc of history, and I’ve spent nearly five years looking at and testing options in this space to find the right bet. Would you have any interest in discussing?”

Mr. Ferriss said he met Dr. Griffiths in 2015, became intrigued with the research, and began thinking about the Hopkins group as he might an investment bet. He launched a crowdfunding campaign for a small depression study, to see how efficiently the Hopkins team used the money. “Essentially it was a seed investment,” Mr. Ferriss said. “I ran a beta test, and they really delivered.”

Craig Nerenberg, one of those friends and the founder of the hedge fund Brenner West Capital Partners, quickly agreed to contribute. “I have lost a family member to addiction and have felt the pain of loved ones who struggled through depression,” Mr. Nerenberg said by email. “It’s hard for me to imagine a contribution that I can make which — if the research data continues to bear out — will have a greater impact over the next decade.”

The remaining half of the commitments for the center came from the Steven & Alexandra Cohen Foundation and is earmarked for treatment of Lyme disease. Mr. Cohen is a billionaire investor; the foundation focuses on education, veterans issues, Lyme disease and children’s health, among other things. In an email, Ms. Cohen wrote, “I strongly believe that we must dare to change the minds of those who think this drug is for recreational purposes only and acknowledge that it is a miracle for many who are desperate for relief from their symptoms or for the ability to cope with their illnesses. It may even save lives.”

Investigators at the Hopkins center, its counterpart at Imperial College London and elsewhere still have an enormous amount of work to do to learn which mind-altering substances are beneficial for whom, at what doses, and when such treatment is dangerous. The same concerns that shut down similar research in the 1970s are audible in the caution expressed by many psychiatrists today: These are powerfully mind-altering substances, and administering them to people who are already unstable is uncertain work, to put it mildly. One scary adverse event could cripple the whole enterprise.

Do the work, and push pretty hard

Wednesday, September 4th, 2019

Lifting to failure is generally better, but not always:

Amid the confusing torrent of advice about the best ways to build strength, I’ve taken comfort from a series of reassuringly simple studies from McMaster University over the past decade. Researcher Stuart Phillips and his colleagues have repeatedly demonstrated that if you do a series of lifts to failure — that is, until you can’t do another rep — then it doesn’t much matter how heavy the weight is or how many reps you do. As long as you’re maxing out, you’ll gain similar amounts of strength with light or heavy weights.

But there’s an interesting caveat to this advice, according to a new study from a team at East Tennessee State University led by Kevin Carroll, published in Sports: just because you can lift to failure doesn’t mean you always should.

Researchers have previously pointed out that it takes longer to recover from a strength training session when you go to failure than when you stop a few reps short, with negative neuromuscular effects lasting 24 to 48 hours. You also recover more quickly even if you do the exact same number of reps but take a little extra rest halfway so that you don’t quite hit failure. On the surface, this is a trivially obvious point: of course it takes longer to recover if you work harder! The question, though, is whether there’s something particularly damaging or exhausting about going all the way to failure that outweighs the positive training effect you get from working harder.

[...]

So, in summary, two groups doing almost the same training, except one group was hitting failure on the last set of each exercise in every workout. The initial results from this study were published last year, showing that the relative intensity group had greater improvements in maximum strength and vertical jump. The new paper adds a bunch of information based on muscle biopsies and ultrasound, showing a greater increase for the relative intensity group in overall muscle size, the size of individual muscle fibers, and the presence of several key molecular signals of muscle growth.

Before we conclude that failure is bad, there’s one other detail of the training program that’s worth mentioning. While the failure group was hammering away three times a week, the relative intensity group was doing two harder (though not to failure) workouts and one easier workout each week. For example, a max strength workout of three sets of five reps might start at 85 percent for the two hard workouts, but then drop to 70 percent for the easier one.

This seems like a whole different variable thrown into the mix, and it reminds me of a study from Marcas Bamman’s group at the University of Alabama at Birmingham a couple of years ago. In a big study of older adults, he found that doing two harder workouts and one easier workout each week produced better strength gains than just two hard workouts or just three hard workouts a week. He suggested that lingering inflammation in the muscles made the subjects unable to fully benefit from three hard workouts a week. Doing a third easier workout instead added some fitness gains compared to just two weekly workouts, but still allowed the muscles to recover.

So to me, the message from the new study isn’t necessarily that lifting to failure is bad. It’s that lifting to failure all the time might be counterproductive (and especially so as you get older, Bamman’s results suggest). The point Phillips has been trying to make is that, for the vast majority of us, all the variables that make your head spin — sets, reps, one-rep max percentages, and so on — are utterly minor details compared to the main goal of simply doing the work, and sometimes pushing pretty hard.

You have your parents’ tendons

Monday, September 2nd, 2019

You have your parents’ tendons:

A study from Ritsumeikan University, home to one of the top collegiate running programs in Japan, looked at injury risk in 24 elite long-distance runners. The researchers weren’t concerned with mileage levels, shoe type, stretching routines, or any of the usual factors we associate with running injuries. Instead, they were focused on spit.

Over the past decade or so, a series of studies have suggested that certain gene variants can affect the structure of your collagen fibrils, the basic building blocks of tendons and ligaments. Some versions of these genes make you less likely to develop problems like Achilles tendinopathy; others make you more likely. Researchers have found, for example, that rugby players who make it to the elite level are more likely to have the tendon-protective gene variants, presumably because those who don’t are more likely to have their careers derailed by injury.

In the new Japanese study, the athletes were asked about their history of tendon and ligament inflammation and injuries during their university career, then gave a spit sample for DNA analysis. The injury data was compared to five specific variants in four different genes that have previously been associated with tendon and ligament structure. For three of the five variants, those with the “bad” version were indeed significantly more likely to have suffered tendon and ligament injuries. (The fourth variant didn’t have any predictive value in this group, and the fifth didn’t yield any information because all the runners in the study had the same version of the gene.)

Given previous research, these results aren’t particularly surprising. The question is what you do with this information. There are companies that offer personal genetic testing that includes some of these gene variants (COL5A1 was the best predictor in this study), so you can find out your status and…do what, exactly?

In a review of the field a few years ago, some of the leading researchers suggested that, rather than getting a DNA test, you should simply be aware of whether you have a personal or family history of tendon and ligament injuries. Either way, it’s worth thinking about what you would change in your training if you suddenly discovered that your tendons were, say, 10 or 20 percent more likely to get inflamed compared to the average person. If you think you would start doing more stretching or strengthening or icing or “listening to your body” or whatever, then my question is simple: why aren’t you doing that already?

Do ice baths suppress muscle gains?

Sunday, September 1st, 2019

Do ice baths suppress muscle gains?

Fuchs and his colleagues had 12 volunteers do a strength-training session, then hop into an ice tub—or actually, half an ice tub. One leg was submerged in cold water at 46 degrees Fahrenheit (8 Celsius), while the other leg was submerged in tepid water at 86 degrees Fahrenheit (30 Celsius), for 20 minutes. Then they chugged a recovery shake with 45 grams of carbohydrate and 20 grams of protein, the latter of which contained a tracer that allowed the researchers to determine how much of the protein was incorporated into new muscle. Over the following two weeks, the researchers took frequent blood samples and muscle biopsies to track their progress.

Sure enough, the rate of muscle protein synthesis was significantly lower in the cooled leg than in the leg that got the lukewarm bath, with a difference over the course of two weeks of about 13 percent. Now, lab measures like muscle protein synthesis are still not the same as measuring actual differences in strength over a longer period of time. It’s awfully suggestive, though, and bolsters the case that ice baths—and, presumably, other recovery enhancers—may come with a hidden cost to fitness gains.