Night Enhancement Eyedrops

Saturday, March 28th, 2015

Chlorin e6 is a chlorophyll analog used as a photosensitizer in laser-assisted cancer remediation, but, when mixed with DMSO and applied to the eyes, the photosensitizer can enhance night vision, too:

The Ce6 (Frontier Scientific, CAS 19660-77-6) was found to be a fine black powder which clung to all surfaces. To make manipulating the chemical easier, a large batch of the total solution was made and then aliquoted into separate containers for storage.

200mg of Ce6 was mixed with 400 units (4ml) of insulin (70/30 Lantus). To this was added 5.38ml of sterile saline solution (0.9% sodium chloride). The mixture was sonicated briefly (30 seconds) to allow for proper dispersal of the powder into saturated solution, and then 625 µl of DMSO (Amresco) was added. The solution was sealed with parafilm and sonicated for 150 seconds. The resulting liquid was thin and black in color. The solution was kept in glass aliquots wrapped in foil at 20°C.

For the application, the subject rested supine and his eyes were flushed with saline to remove any micro-debris or contaminants that might be present. Eyes were pinned open with a small speculum to remove the potential for blinking, which may force excess liquid out before it had a chance to absorb. Ce6 solution was added to the conjunctival sac via micropipette at 3 doses of 50 µl into each eye. After each application, pressure was applied to the canthus to stop liquid from moving from the eye to the nasal region. Each dose was allowed to absorb while the pipette was reloaded, with the black color disappearing after only a few seconds.
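The recipe and dosing above reduce to simple arithmetic. As a sanity check (this just restates the figures quoted, it is not a dosing guide), the mixture works out to roughly a 20 mg/ml Ce6 solution, delivering about 3 mg of Ce6 per eye:

```python
# Back-of-the-envelope check of the Ce6 mixture described above.
# All quantities come from the text; the results are simple arithmetic.

ce6_mg = 200.0      # Ce6 powder
insulin_ml = 4.0    # 400 units of insulin
saline_ml = 5.38    # 0.9% NaCl
dmso_ml = 0.625     # 625 µl DMSO

total_ml = insulin_ml + saline_ml + dmso_ml
conc_mg_per_ml = ce6_mg / total_ml

# Application: 3 doses of 50 µl per eye
dose_ml_per_eye = 3 * 0.050
ce6_mg_per_eye = conc_mg_per_ml * dose_ml_per_eye

print(f"total volume:      {total_ml:.3f} ml")            # ~10.005 ml
print(f"Ce6 concentration: {conc_mg_per_ml:.1f} mg/ml")   # ~20.0 mg/ml
print(f"Ce6 per eye:       {ce6_mg_per_eye:.1f} mg")      # ~3.0 mg
```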

After application was complete, the speculum was removed and black sclera lenses were placed into each eye to reduce the amount of light entering the eye. Black sunglasses were then worn at all times except during testing, to maintain low-light conditions and reduce the potential for bright-light exposure.

Chlorin e6 Eye Drops

The Ce6 solution has been shown to work in as little as one hour, with the effects lasting for “many hours” afterwards [3]. After 2 hours of adjustment, the subject and 4 controls were taken to a darkened area and subjected to testing. Three forms of subjective testing were performed: symbol recognition by distance, symbol recognition on varying background colors at a static distance, and the ability to identify moving subjects against a varied background at varied distances. Symbol recognition consisted of placing out a collection of objects marked with numbers, letters, or shapes. Subjects were then asked to identify the markings, each viewing the objects from the same location at a distance of 10 meters. The markings were not made until the moment of testing, so no subject could have seen them in advance.

For subject recognition, individuals moved into a small grove of trees. They were allowed to choose their own locations independently. Distances ranged from 25 to 50 meters from the observation point, and trees and brush were used for “blending”. Locations were chosen without being observed by the test subjects. The Ce6 subject and controls were handed a laser pointer and asked to identify the location of the people in the grove. After testing, the Ce6 subject replaced the sunglasses, which were not removed until sleep. Eyesight in the morning seemed to have returned to normal, and as of 20 days there have been no noticeable effects.

The Ce6 subject consistently recognized symbols that did not seem to be visible to the controls. The Ce6 subject identified the distant figures 100% of the time, with the controls showing a 33% identification rate.


Friday, March 27th, 2015

East Asia is growing increasingly myopic — literally:

Sixty years ago, 10–20% of the Chinese population was short-sighted. Today, up to 90% of teenagers and young adults are. In Seoul, a whopping 96.5% of 19-year-old men are short-sighted.

Other parts of the world have also seen a dramatic increase in the condition, which now affects around half of young adults in the United States and Europe — double the prevalence of half a century ago. By some estimates, one-third of the world’s population — 2.5 billion people — could be affected by short-sightedness by the end of this decade.


For many years, the scientific consensus held that myopia was largely down to genes. Studies in the 1960s showed that the condition was more common among genetically identical twins than non-identical ones, suggesting that susceptibility is strongly influenced by DNA. Gene-finding efforts have now linked more than 100 regions of the genome to short-sightedness.

But it was obvious that genes could not be the whole story. One of the clearest signs came from a 1969 study of Inuit people on the northern tip of Alaska whose lifestyle was changing [2]. Of adults who had grown up in isolated communities, only 2 of 131 had myopic eyes. But more than half of their children and grandchildren had the condition. Genetic changes happen too slowly to explain this rapid change — or the soaring rates of myopia that have since been documented all over the world (see ‘The march of myopia’). “There must be an environmental effect that has caused the generational difference,” says Seang Mei Saw, who studies the epidemiology and genetics of myopia at the National University of Singapore.

There was one obvious culprit: book work. That idea had arisen more than 400 years ago, when the German astronomer and optics expert Johannes Kepler blamed his own short-sightedness on all his study. The idea took root; by the nineteenth century, some leading ophthalmologists were recommending that pupils use headrests to prevent them from poring too closely over their books.

The modern rise in myopia mirrored a trend for children in many countries to spend more time engaged in reading, studying or — more recently — glued to computer and smartphone screens. This is particularly the case in East Asian countries, where the high value placed on educational performance is driving children to spend longer in school and on their studies. A report last year [3] from the Organisation for Economic Co-operation and Development showed that the average 15-year-old in Shanghai now spends 14 hours per week on homework, compared with 5 hours in the United Kingdom and 6 hours in the United States.

Researchers have consistently documented a strong association between measures of education and the prevalence of myopia. In the 1990s, for example, they found that teenage boys in Israel who attended schools known as Yeshivas (where they spent their days studying religious texts) had much higher rates of myopia than did students who spent less time at their books [4]. On a biological level, it seemed plausible that sustained close work could alter growth of the eyeball as it tries to accommodate the incoming light and focus close-up images squarely on the retina.

Attractive though the idea was, it did not hold up. In the early 2000s, when researchers started to look at specific behaviours, such as books read per week or hours spent reading or using a computer, none seemed to be a major contributor to myopia risk [5]. But another factor did. In 2007, Donald Mutti and his colleagues at the Ohio State University College of Optometry in Columbus reported the results of a study that tracked more than 500 eight- and nine-year-olds in California who started out with healthy vision [6]. The team examined how the children spent their days, and “sort of as an afterthought at the time, we asked about sports and outdoorsy stuff”, says Mutti.

It was a good thing they did. After five years, one in five of the children had developed myopia, and the only environmental factor that was strongly associated with risk was time spent outdoors [6]. “We thought it was an odd finding,” recalls Mutti, “but it just kept coming up as we did the analyses.” A year later, Rose and her colleagues arrived at much the same conclusion in Australia [7]. After studying more than 4,000 children at Sydney primary and secondary schools for three years, they found that children who spent less time outside were at greater risk of developing myopia.

Rose’s team tried to eliminate any other explanations for this link — for example, that children outdoors were engaged in more physical activity and that this was having the beneficial effect. But time engaged in indoor sports had no such protective association; and time outdoors did, whether children had played sports, attended picnics or simply read on the beach. And children who spent more time outside were not necessarily spending less time with books, screens and close work. “We had these children who were doing both activities at very high levels and they didn’t become myopic,” says Rose. Close work might still have some effect, but what seemed to matter most was the eye’s exposure to bright light.

[Figure: The march of myopia]

Based on epidemiological studies, Ian Morgan, a myopia researcher at the Australian National University in Canberra, estimates that children need to spend around three hours per day under light levels of at least 10,000 lux to be protected against myopia. This is about the level experienced by someone under a shady tree, wearing sunglasses, on a bright summer day. (An overcast day can provide less than 10,000 lux and a well-lit office or classroom is usually no more than 500 lux.) Three or more hours of daily outdoor time is already the norm for children in Morgan’s native Australia, where only around 30% of 17-year-olds are myopic. But in many parts of the world — including the United States, Europe and East Asia — children are often outside for only one or two hours.
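The lux figures quoted make the indoor/outdoor gap easy to quantify. A back-of-the-envelope comparison of daily light dose, using the article's numbers (the 8-hour school day is an assumed value for illustration, not from the article):

```python
# Compare cumulative daily light exposure (in lux-hours) for Morgan's
# recommended 3 hours outdoors vs. a full day in a well-lit classroom.

outdoor_lux = 10_000    # shade on a bright summer day, per the article
classroom_lux = 500     # upper bound for a well-lit classroom, per the article

outdoor_dose = outdoor_lux * 3   # 3 hours/day outdoors
indoor_dose = classroom_lux * 8  # an assumed 8-hour school day indoors

print(f"3 h outdoors: {outdoor_dose:,} lux-hours")           # 30,000
print(f"8 h indoors:  {indoor_dose:,} lux-hours")            # 4,000
print(f"ratio:        {outdoor_dose / indoor_dose:.1f}x")    # 7.5x
```

Even a short time outside swamps a whole day of classroom light, which is consistent with indoor time showing no protective association.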

Antibiotics found to have unexpected effects on mitochondria

Wednesday, March 25th, 2015

Mitochondria are descended from bacteria that evolved to live within our cells, so we shouldn’t be too surprised when researchers find that antibiotics affect them:

“After several days of treatment with high doses of doxycycline, mitochondrial respiration was visibly altered,” explains Moullan. More surprising still, the consequences were observed all the way down the food chain, from mammals to flies to nematode worms to plants. “The worms’ development was hindered. On the other hand, signs of aging appeared more slowly, something we had observed in earlier studies.”

The scientists also carried out growth tests on Arabidopsis thaliana, a common plant that’s frequently used in laboratory research. After growing for a week on a normal substrate, it was transplanted into soil with varying concentrations of doxycycline. “Delays in growth, some quite severe, were observed after a few days, even in soils in which the concentration of antibiotics was no stronger than is found in some agricultural soils today,” says Moullan.

This pollution, whose consequences are just beginning to be appreciated, is caused by the widespread administration of antibiotics to livestock. “Because they are given orally in feed, they are only partially digested and end up in manure, which is then spread on the fields,” explains Mouchiroud.

The quantities involved are huge, and the economic stakes equally sobering. In 2011, 5.6 million kg of tetracycline was administered to US livestock. A study showed that nearly half of the 210 million kg of antibiotics produced in China in 2007 were tetracyclines for veterinary use. “The effects on growth of plants other than A. thaliana have not yet been studied, but our work indicates a need for caution,” says Moullan.

The researchers also call on their scientific colleagues to be more careful when using antibiotics in experiments for modulating gene expression. “You observe the effect you’re looking for, but you lose sight of the fact that these substances have serious consequences for overall metabolic function,” says Mouchiroud.

Why We Reject Facts & Embrace Conflict

Wednesday, March 25th, 2015

Musa al-Gharbi explains why we reject facts & embrace conflict:

There is a growing body of research suggesting that when beliefs become tied to one’s sense of identity, they are not easily revised. Instead, when these axioms are threatened, people look for ways to outright dismiss inconvenient data. If this cannot be achieved by highlighting logical, methodological or factual errors, the typical response is to leave the empirical sphere altogether and elevate the discussion into the moral and ideological domain, whose tenets are much more difficult to outright falsify (generally evoking whatever moral framework best suits one’s rhetorical needs).

While often described in pejorative terms, these phenomena may be more akin to “features” than “bugs” of our psychology.

For instance, the Machiavellian Intelligence Hypothesis holds that the primary function of rationality is social, rather than epistemic. Specifically, our rational faculties were designed to mitigate social conflicts (or conflicting interests). But on this account, rationality is not a neutral mediator. Instead, it is deployed in the service of one’s own interests and desires — which are themselves heavily informed by our sense of identity.

This is because our identities are, among other things, prisms through which we interpret the world. These trends hold just as true for secular agents as religious ones, for liberal ideologues as conservatives (as for so-called “independents,” they are generally partisans in disguise) — the phenomenon is known in academic circles as “cultural cognition.”

Importantly, this identity-based reasoning does not reflect a lack of cognitive sophistication. Quite the reverse: the better an agent is at justifying their own beliefs and dismantling undesirable arguments or evidence from others, the more prone they tend to be to their biases, and the less aware of them; their beliefs are much more difficult to successfully challenge or revise.

As a result of these trends, identity-based disagreements often seem intractable: rather than leading to consensus, these clashes typically generate fundamentalism and polarization — often causing significant social dysfunction and instability, and not just in the ideological or political spheres. Identity-based armed conflicts, for instance, tend to be much more violent, and much more difficult to resolve, than other forms of war. And what’s worse, mediators, especially when they present themselves as objective or neutral, tend to exacerbate and prolong these struggles.

There is an analog in the socio-political sphere, namely the tendency to try to neutralize conflicts by framing issues in secular terms, appealing to “universal” truths or values. But of course, these interpretations tend to be highly controversial, relying on a host of implicit, and often problematic, assumptions about everything from how others think to what serves their interests.


But unless the dominant party (or the systems and institutions it has established) is beyond meaningful challenge, the typical effect of this approach is increased polarization; and the higher the perceived stakes, the stronger the “us v. them” effect will be (even to the point of radicalization). This is because fostering parochial altruism is essential for intergroup competition. And so when there is an opportunity for a meaningful shift in power (such as in the lead-up to an election or in the aftermath of a crisis), this cultural partisanship will be especially pronounced.

Accordingly, the best way to reduce polarization is not by obscuring critical differences under the pretense of universalism. Instead, societies should aspire to lower the perceived stakes of these identity conflicts.

For example, rigidity, polarization and groupthink are much less common, and more easily addressed, in deliberations within an identity group; closed-mindedness is largely a response to a perceived threat from outside. In heterogeneous contexts, many of the benefits of this enclave deliberation can be achieved by engaging interlocutors in terms of their own framing and narratives, mindful of their expressed concerns and grievances. That is, identity differences should not be suppressed, avoided or merely tolerated, but instead emphasized, encouraged and substantively respected — emphasizing pluralism over sectarianism. This can create a foundation where good-faith exchange and intergroup cooperation are feasible. Or put another way, the problem isn’t cultural cognition, it’s the lack of cross-cultural competence.

Are Psychedelics The Wonder Drug We’ve Been Waiting For?

Tuesday, March 24th, 2015

Two new studies have found no link between using psychedelic drugs and going crazy — developing schizophrenia, psychosis, depression, or anxiety disorders — and they may in fact be wonder drugs:

People who had tried LSD or psilocybin had lower lifetime rates of suicidal thoughts and attempts.

Of course, this isn’t the first positive mental health outcome to be attributed to these drugs. The research into psychedelics as a treatment for end-of-life anxiety (brought on by terminal illness) shows that these substances are effective in treating severe anxiety and — equally important — that these benefits persist over time.

Meanwhile, researchers at Imperial College London have also begun peeling back the veil on the so-called ‘mind-expanding’ nature of psychedelics, finding some serious scientific evidence for reasons why these drugs help users release longstanding narrow-minded, negative outlooks.

And, finally, there’s also a bevy of research dating back to the 1950s that shows strong correlations between psychedelics and enhanced creativity. This research helps explain why Steve Jobs said taking LSD was one of the most important things he’d done in his lifetime, why Francis Crick was high on low-dose acid when he discovered the double helix, and why Tim Ferriss, in a recent interview with CNN, said: “The billionaires I know, almost without exception, use hallucinogens on a regular basis. [They’re] trying to be very disruptive and look at the problems in the world… and ask completely new questions.”

But the larger point is that one in five adult Americans takes some kind of mental health drug — meaning anti-anxiety, anti-depressant, anti-psychotic, etc. What’s more, success rates are suspect. Only 15 percent of people treated for depression with drugs, for example, show long-term remission.

But psychedelics — a class of long-vilified substances — are not only much safer than we believed (i.e. they don’t appear to make you crazy), they also show significant long-term mental health benefits across multiple categories: anti-depressant, anti-anxiety and performance-enhancement (for creativity). What’s more, to receive these benefits, you only need to take these substances a few times (not every day, like other mental health medications).

And, really, you’re only messing with your brain. What could go wrong?

Why children differ in motivation to learn

Sunday, March 22nd, 2015

A recent study of 13,000 twins from 6 countries examined why children differ in motivation to learn:

Contrary to common belief, enjoyment of learning and children’s perceptions of their competence were no less heritable than cognitive ability. Genetic factors explained approximately 40% of the variance and all of the observed twins’ similarity in academic motivation. Shared environmental factors, such as home or classroom, did not contribute to the twins’ similarity in academic motivation. Environmental influences stemmed entirely from individual-specific experiences.
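The logic behind splitting variance into genetic, shared-environment, and unique-environment components can be sketched with Falconer's classic twin-study formulas, which compare identical (MZ) and fraternal (DZ) twin correlations. The correlations below are hypothetical values chosen to reproduce the reported pattern (roughly 40% genetic, no shared-environment contribution), not numbers from the study:

```python
# Falconer's variance decomposition for twin studies: a minimal sketch.
# r_mz and r_dz are hypothetical twin correlations, not the study's data.

def ace_decomposition(r_mz: float, r_dz: float) -> dict:
    """Estimate additive-genetic (A), shared-environment (C), and
    unique-environment (E) variance components from twin correlations."""
    a2 = 2 * (r_mz - r_dz)  # MZ twins share ~100% of genes, DZ ~50%
    c2 = r_mz - a2          # MZ similarity not explained by genes
    e2 = 1 - r_mz           # individual-specific experiences (plus error)
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical correlations matching the reported pattern:
print(ace_decomposition(r_mz=0.40, r_dz=0.20))
# roughly A = 0.4, C = 0.0, E = 0.6
```

The "shared environment contributed nothing" finding corresponds to a C component near zero, which falls out when the MZ correlation is about twice the DZ correlation.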

Chemical trick speeds up 3D printing

Saturday, March 21st, 2015

UNC chemists have harnessed a chemical trick to speed up 3D printing:

A team led by Joseph DeSimone, a chemist at the University of North Carolina at Chapel Hill, has now refined the liquid-resin process to make it go continuously rather than in fits and starts. They made the bottom of the container that holds the resin bath from a material that is permeable to oxygen. Because oxygen inhibits the solidification of resin, it creates a ‘dead zone’ — a layer just tens of microns thick at the bottom of the container — where the resin stays liquid even when ultraviolet rays are shining on it. The solidification reaction happens instead just above the dead zone. Because liquid is always present below the slowly forming object, the researchers can pull it up in a continuous manner, rather than waiting for new liquid resin to flow in.

One Woman’s Drive to Revolutionize Medical Testing

Saturday, March 21st, 2015

Elizabeth Holmes, the 30-year-old CEO of Theranos, is a driven young woman:

Her home is a two-bedroom condo in Palo Alto, and she lives an austere life. Although she can quote Jane Austen by heart, she no longer devotes time to novels or friends, doesn’t date, doesn’t own a television, and hasn’t taken a vacation in ten years. Her refrigerator is all but empty, as she eats most of her meals at the office. She is a vegan, and several times a day she drinks a pulverized concoction of cucumber, parsley, kale, spinach, romaine lettuce, and celery.

Growing up, Holmes was in constant motion. Her father, Chris, worked for government agencies, including, for much of his career, the U.S. Agency for International Development and the State Department, often travelling abroad, overseeing relief and disease-eradication efforts in developing nations; today, he is the global water coördinator for U.S.A.I.D. Her mother, Noel, worked for nearly a decade as a foreign-policy and defense aide on Capitol Hill, until Elizabeth and her brother Christian, two years younger, were born. The family moved several times, which meant there was little opportunity to develop lasting friendships. Holmes describes herself as a happy loner, collecting insects and fishing with her father.

“I was probably, definitely, not normal,” she said. “I was reading ‘Moby-Dick’ from start to finish when I was about nine. I read a ton of books. I still have a notebook with a complete design for a time machine that I designed when I must have been, like, seven. The wonderful thing about the way I was raised is that no one ever told me that I couldn’t do those things.”

Chris Holmes’s great-grandfather Christian Holmes emigrated from Denmark, studied engineering, settled in Cincinnati, and became a physician. When Elizabeth was eight, she was given a tour of the local hospital where he worked and which was named in his honor. He had married the daughter of a patient, Charles Fleischmann, who pioneered packaged yeast and built a baking empire around it. (A nephew, Raoul Fleischmann, started this magazine in 1925, with Harold Ross.) Not all of Fleischmann’s children shared his entrepreneurial drive, and this was a common subject of conversation in the Holmes household. “I grew up with those stories about greatness,” she said, “and about people deciding not to spend their lives on something purposeful, and what happens to them when they make that choice—the impact on character and quality of life.”

In 1993, when Elizabeth was nine, her father took a job in Houston, as executive assistant to the C.E.O. of Tenneco, which was then a manufacturing and energy conglomerate. She knew that her father felt guilty for uprooting the family, so she wrote a letter to console him: “What I really want out of life is to discover something new, something that mankind didn’t know was possible to do.” She reassured him that Texas suited her, because “it’s big on science.”

For several years in the nineteen-eighties, Chris Holmes spent two weeks a month in China, helping American companies invest in large-scale development projects. Soon after the family moved to Houston, Elizabeth started studying Mandarin; by the summer following her sophomore year of high school, she was intent on taking summer classes in Mandarin at Stanford. She repeatedly called the admissions office for information, only to be told, each time, that the program did not enroll high-school students. One day, her father recalls, the head of the program became so annoyed that he grabbed the phone from the employee who was talking to Holmes. “You’ve been calling constantly,” he told her. “I just can’t take it anymore. I’m going to give you the test right now!” He asked questions in Mandarin; she answered fluently, and he accepted her on the spot. She completed three years of college Mandarin while still in high school.

In 2001, in her senior year, Holmes applied to Stanford, was accepted, and then was named a President’s Scholar, which came with a small stipend to select her own research project. Her parents sent her off with a copy of Marcus Aurelius’ “Meditations,” her father said, “to convey to her: Live a purposeful life.” Holmes elected to study chemical engineering. She was drawn to the work of Channing Robertson, the chemical engineer and, at the time, a dean at the engineering school. Robertson is seventy-one and fit, with thinning hair and a relaxed smile; I visited him in his home on campus. Holmes’s first class with him was a seminar on devices designed to control the release of drugs into the human body. One day, in her freshman year, Robertson said, she came to his office to ask if she could work in his lab with the Ph.D. students. He hesitated, but she persisted and he gave in. At the end of the spring term, she told him that she planned to spend the summer working at the Genome Institute, in Singapore. He warned her that prospective students had to speak Mandarin.

“I’m fluent in Mandarin,” she said.

“I’m thinking, What’s next? She’s already coming into the research group meetings at the end of her freshman year with my Ph.D. students. I find myself listening to her more than to them about the next experiments to be done and the progress that’s been made. I realized she’s different.”

That summer, at the Genome Institute, Holmes worked on testing for severe acute respiratory syndrome, or SARS, an often fatal virus that had broken out in China. Testing was done in the traditional manner, by collecting blood samples with syringes and mucus with nasal swabs. These methods could detect who was infected, but a separate system was needed to dispense medication, and still another system to monitor results. Holmes questioned the approach. At Stanford, she had been exploring what has become known as lab-on-a-chip technology, which allows multiple measurements to be taken from tiny amounts of liquid on a single microchip. “With the type of engineering work and systems I had been focussing on at Stanford, it was quite clear that there were much better ways to do it,” she said.

Before returning to Stanford, Holmes conceived of a way to perform multiple tests at once, using the same drop of blood, and to wirelessly deliver the resulting information to a doctor. That summer, she filed a patent for the idea; it was ultimately approved, in November of 2007. Once back on campus, she went to see Robertson in his office and announced that she wanted to start a company. Robertson was impressed by the idea but urged her to at least consider finishing her degree first.

“Why?” she responded. “I know what I want to do.”

Holmes was consumed by the idea of developing a company. “I got to a point where I was enrolled in all these courses, and my parents were spending all this money, and I wasn’t going to any of them,” she said. “I was doing this full time.” Her parents allowed her to take the money they had set aside for tuition and use it to seed her company. In March, 2004, she dropped out of Stanford; one month later, she incorporated Theranos (the name is a combination of “therapy” and “diagnosis”). She persuaded Robertson to spend one day a week as a technical adviser to the company and to serve as her first board member. Eventually, he retired from his tenured position, and began working at Theranos full time.

Robertson introduced Holmes to several venture capitalists. She insisted that they abide by her terms, which included an understanding that she would retain control and pour the profits back into the company. By December of 2004, she had raised six million dollars from an assortment of investors. As she and the chemists and engineers dug deeper, she became convinced that they could accomplish five objectives: extract blood without syringes, make a diagnosis from a few drops of blood, automate the tests to minimize human error, do the test and get the results more quickly, and do this more economically.

A key to the company’s success was the hiring of Sunny Balwani, a software engineer, now forty-nine, whom Holmes had met in Beijing the summer after her senior year of high school. At the time, he was getting an M.B.A. from Berkeley. He had worked at Lotus and at Microsoft and been a successful entrepreneur, and in 2004 he began graduate studies in computer science at Stanford. He and Holmes spoke often, and they shared a belief that software, not just chemistry or biology, mattered. If Theranos was going to be able to analyze a few drops of blood, engineers would have to develop the software to do it. In 2009, Balwani joined as C.O.O. and president. “Our platform is about automation,” he says. “We have automated the process from start to finish.”

Theranos has managed to keep its technology a secret for much of its decade of existence in part because it occupies a regulatory gray area. Most other diagnostic labs, including Quest and Laboratory Corporation of America, perform blood tests on equipment that they buy from outside manufacturers, like Siemens and Roche Diagnostics. Before those devices can be sold, they must be approved by the F.D.A., a process that makes their tests’ performances more visible to the public. But, since Theranos manufactures its own testing equipment, the F.D.A. doesn’t need to approve it, as long as the company doesn’t sell it or move it out of its labs.

Train Your Breathing Muscles

Friday, March 20th, 2015

By the time they reached Base Camp, at just over 16,000 feet, the members of a British military expedition found that their arterial oxygen saturation was 20 percent lower than it had been at sea level — except for the members who had been assigned inspiratory muscle training for four weeks leading up to the expedition:

When the IMT group got to Base Camp, they had desaturated by only 14 percent, a significant six-percentage-point advantage over the control group that persisted as they kept climbing to the advanced base camp at over 18,000 feet.


Humans have between seven and eleven pounds of respiratory muscle, primarily the diaphragm and intercostal muscles around the ribcage, which consumes energy and fatigues just like the hamstrings or biceps. The idea of training this muscle — and particularly the muscles required for inhaling — originated with patients suffering from breathing-related conditions like chronic obstructive pulmonary disease.

The basic IMT protocol those patients followed hasn’t changed: you take a deep breath through a tube with variable resistance that makes it harder to inhale. Repeat 30 times a day, ramping up the resistance as your muscles get stronger.

For over a decade, researchers have been studying whether IMT can boost endurance performance at sea level. The evidence remains mixed, but a meta-analysis of 21 studies in 2013 concluded that it probably offers a small boost, particularly in breathing-constrained sports like swimming. At altitude, though, the situation is different: breathing takes a significantly higher proportion of your overall energy, consuming 20 to 30 percent more oxygen by 9,000 feet, so the breathing muscles fatigue more quickly.

The idea that IMT might be useful at altitude was first tested in a 2007 Kansas State study that found improvements in exercise at a simulated elevation of around 10,000 feet. After four weeks of IMT, blood oxygen levels during exercise were higher, and the strengthened respiratory muscles were able to handle the demands of breathing in thin air more easily, reducing their total oxygen usage. Ratings of effort and breathing discomfort were also reduced. Curiously, actual performance in a time-to-exhaustion trial was unchanged. Read: the subjects didn’t have better endurance, they just felt better.

More recently, Lomax, the author of the Makalu study, has followed up with a lab study of her own, also at a simulated altitude of around 10,000 feet. While the results haven’t been published yet, she found that four weeks of IMT produced higher arterial oxygen levels, reduced overall oxygen demand, increased breathing efficiency, and reduced breathing discomfort during exercise. As in the Kansas State study, the benefits were apparent only during exercise at altitude, not at sea level.

The training tool of choice is a resistance breathing trainer, like the PowerBreathe.

Why Adults Are More Likely To Be Hit Hard By The Measles Than Children

Thursday, March 19th, 2015

Dr. John Swartzberg, a clinical professor at the Berkeley School of Public Health, answered a few questions about childhood diseases, like, Why are adults more likely to get hit hard by measles?

He believes that these viruses and humans could be examples of “host-parasite interaction,” and that we have “adapted to each other” over long periods of time. Those adaptations are dependent on us contracting diseases at a specific time of life.

This isn’t as odd as it sounds. Although we think of our bodies as fighting invading viruses, the relationship isn’t adversarial. The measles virus isn’t “trying” to kill us any more than the polio virus was trying to kill us.

The polio virus came to be known as a fearsome killer of children. That, according to Swartzberg, was because no one understood the virus’s own, very skinny, U-shaped curve. Polio, when contracted by very young children, isn’t nearly the killer we think it is. The polio virus is spread through infected fecal matter. When the public water supply consisted of rivers, lakes, wells, and pumps, infected fecal matter and drinking water mixed regularly. Infants were exposed to the virus early. When the government cleaned up drinking water — saving infants and adults from many other diseases — young children no longer came into contact with the polio virus. It was only later in childhood, when they went swimming in pools and streams, that kids contracted the virus. The parasite and the host no longer had matching adaptations. The disease that, for the most part, had been mild, became devastating.

That being said, even a disease that is “for the most part” mild can have terrible consequences, and there’s no comfort in being in the shallow part of the U, if you’re below the mortality line. In 2013, there were 145,700 deaths due to the measles. Before the measles vaccine became widely available, there were 2.6 million measles deaths per year. Some of those deaths were of children who got measles at the “right” age, when the effects of the disease were supposed to be mild. Some of those deaths were of people at the wrong age, who caught measles from children at the right age.

Does neural crest development drive domestication syndrome?

Monday, March 16th, 2015

Altered neural crest development could be the reason mammals change in oddly consistent ways during domestication:

As first noted by Darwin more than 140 years ago, domestic mammals tend to share certain characteristics—a suite of traits called the domestication syndrome.

The syndrome includes increased docility and tameness, coat color changes, reductions in tooth size, changes in craniofacial morphology, alterations in ear and tail form, more frequent and nonseasonal estrus cycles, alterations in hormone levels, changed concentrations of neurotransmitters, prolonged juvenile behavior, and reduced forebrain size.

Wilkins and Wrangham set about listing these mysterious marks of domestication and trying to match them to tissues affected by the neural crest. Within half an hour they decided that neural crest changes could plausibly account for most of the syndrome’s traits.

The neural crest hypothesis builds on observations from the long-running fox domestication experiments started in 1959 in Novosibirsk, Siberia, by Dmitri Belyaev:

After generations of selection purely for tameness, Novosibirsk foxes today show not only a friendly, people-loving disposition reminiscent of dogs, but also seemingly unrelated traits like curly tails, floppier ears and patches of white fur.

One of the many changes seen in the tame foxes was reduced size and function of their adrenal glands, which release stress hormones during the “fight-or-flight” response. This dampened adrenal function may lie at the heart of the behavioral changes observed in domestication syndrome. Wilkins et al. argue that one way to end up with smaller adrenal glands is via mild deficits of the neural crest.

The neural crest is a cell population that pinches off from the edge of the developing neural tube during early embryogenesis. These cells migrate to many parts of the body and form the precursors of a plethora of tissue types, including pigment cells, parts of the skull, larynx, ears, teeth, sympathetic nervous system, and, of course, parts of the adrenal glands. So subtle changes in neural crest cell numbers, migration, or proliferation would lead to widespread phenotypic effects.

Neural Crest Domestication Syndrome Schematic

Wilkins et al. argue that their ideas dovetail with certain effects of human neural crest cell disorders, like the patches of depigmented skin and hair seen in Waardenburg syndrome or the jaw, ear and teeth phenotypes of Treacher Collins syndrome.

And even though neural crest cells don’t directly develop into the central nervous system, they could still partly explain why many domestic mammals have smaller forebrains than their wild ancestors. Experiments in chick embryos suggest that signals from neural crest cells play a crucial role in forebrain development. At this stage, not every component of the domestication syndrome can be firmly tied into the hypothesis. For example, the curly tails of dogs, pigs, and domestic foxes don’t have an obvious connection to neural crest deficits. Nonetheless, the authors believe enough links exist to warrant experimental tests of their predictions.

Aromatase Inhibitors

Saturday, March 14th, 2015

P.D. Mangan smashed his own chronic fatigue — through diet, exercise, sleep, vitamins and minerals — and expanded his view on health and fitness. When he looked into testosterone replacement therapy (TRT), his (carefully selected) doctor recommended an aromatase inhibitor instead:

Aromatase inhibitors work to boost T by decreasing the production of estrogens, especially estradiol, the most potent estrogen. Since estradiol feeds back to inhibit natural T production, lowering estradiol levels results in an increase in T. My estradiol level at the last reading had been 70, higher than the upper limit of normal for men. Why that was I don’t know, but estradiol levels increase with age in men, so maybe that was all there was to it.

So my doctor prescribed me anastrozole, the most commonly used aromatase inhibitor. Anastrozole, also known by the trade name of Arimidex, is generic and cheap: I pay about twelve dollars for a three month supply. The dosage is one-half milligram twice a week, which is quite low. Estradiol is necessary even in men, with things like bone composition depending on it, so you don’t want to drive it too low or abolish it altogether.

I noticed a difference shortly after starting to take it. For one thing, my exercise recovery appears a lot better. I used to need a solid three days off between weight sessions in order to recover fully; now I need only two. I haven’t gained any weight, but I’d say my body composition is better: a bit leaner, a bit more muscle. (I can’t seem to gain weight to save my life at this point.) And, yes, my sex drive increased noticeably.

Last time my T level was measured, it had increased to 700, a modest increase of about 20%. In some studies, using higher doses of anastrozole and in low T men, T levels have increased as much as 50%, and free T levels even more. Probably my modest but noticeable results came about because I use a low dose and wasn’t low T to begin with.
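As a sanity check on those numbers, using only the figures quoted above, a roughly 20% rise ending at 700 implies a starting level near 583 (the ng/dL units are an assumption; the post doesn't state them):

```python
def baseline_from_increase(final_level: float, pct_increase: float) -> float:
    """Back out the starting value implied by a final value and a percent increase."""
    return final_level / (1.0 + pct_increase / 100.0)

# T rose to 700 after "a modest increase of about 20%".
t_before = baseline_from_increase(700, 20)
print(round(t_before))  # 583
```

That starting figure is well within the typical reference range for adult men, consistent with his remark that he "wasn't low T to begin with."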

However, my estradiol level decreased to 40, well within the normal range of a young man. This may also account for what I consider a successful result of the treatment.

Workforce Science

Friday, March 13th, 2015

Michael Housman, chief analytics officer for Evolv, discusses workforce science with Stephen Dubner (Think Like a Freak):

We looked specifically at pay in a research study that we just finished. We found, there is no question that pay enables people to stay longer, and they perform better. But the magnitude of the effects was actually not as big as we had expected. So for every 10 percent increase in pay, there’s a 5 percent reduction in quitting behavior. So it’s a less than one-for-one offset. And what’s more, is that when someone receives a raise, there are kind of these warm fuzzies that are associated with receiving the raise. There’s this halo effect. We found that that effect lasts longer than a week, but not as long as a month.
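The "less than one-for-one offset" can be made concrete. Treating the quoted elasticity (a 5% relative drop in quitting per 10% raise) as scaling linearly, which is an assumption, and plugging in a purely hypothetical baseline quit rate:

```python
def quit_rate_after_raise(baseline_quit_rate: float, raise_pct: float) -> float:
    """Apply a 5% relative reduction in quitting per 10% raise (assumed linear)."""
    reduction = 0.05 * (raise_pct / 10.0)
    return baseline_quit_rate * (1.0 - reduction)

# Hypothetical example: a 40% annual quit rate and a 10% across-the-board raise.
new_rate = quit_rate_after_raise(0.40, 10.0)
print(f"{new_rate:.2%}")  # quitting drops from 40% to about 38%
```

A two-point drop in quitting for a ten-point rise in payroll is exactly why Housman calls the effect smaller than expected.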


Your supervisor alone accounts for about as much variance in terms of longevity in these roles as everything else combined. The effects are staggering. Anecdotally, this seems to resonate with people because everyone has had a bad boss that made them leave the job. And we’ve really made understanding that supervisor/employee relationship a priority of ours because I came into this thinking that it was all about raw talent. You get the right person in the job and everything will work itself out, and that’s really the key decision. Our research has actually shown that that’s actually a relatively small piece of the pie, something in the range of 10 to 15 percent.


What we found was that people who said they were honest actually were 33 percent more likely to be terminated for policy violations. So, learned our lesson, which is you don’t ask people if they’re honest because you tend not to get an honest answer.


We came up with a very creative way of measuring what we think is honesty and integrity, which is that we asked them upfront, early in the assessment, how are your computer skills, what’s your typing speed, do you feel comfortable with the keyboard and mouse, toggling between the screen and so on and so forth. And then guess what? About five or six screens later we tested them. We asked them what’s the shortcut for cutting and pasting text using a word processor. We actually measured their typing speed and accuracy. And what we found when we compared their self-assessed responses to their actual technical proficiency is that there were two groups of people that came out. One group was relatively honest. They were what they said they were in terms of the technical skills. And the other group we will call a little bit creative in that they claimed to be exceptional with the keyboard and mouse, but they couldn’t type more than 10 words a minute.

Evolv found that the honest employees tested better on just about every performance metric — except sales.

Orca Matriarchs

Wednesday, March 11th, 2015

Female orcas (killer whales) live into their nineties, even though they typically stop breeding at 40. Males only live to 50:

The only other species known to go through a menopause and live so long without reproducing are humans and short-finned pilot whales.

Croft and colleagues watched 750 hours of video of orca family pods. Over 100 individually recognisable orcas were filmed in the coastal Pacific waters off British Columbia and Washington since 1976.

The team found that post-menopausal females were 32 and 57 per cent more likely than non-menopausal adult females or adult males respectively to lead the group. They were also significantly more likely to lead the group in years when their staple food – chinook salmon – was in short supply.

“It’s probably accumulated experience,” says Croft. “Anyone who fishes for migratory trout or salmon will tell you that timing is key, that the fish return in particular cycles of tides and times of the year. Post-menopausal females probably get to know where to look and when.”


In most known animal species, males rapidly leave their parents, becoming completely independent. Male and female orcas, by contrast, stay in a family unit for life, with the males occasionally exchanging pods temporarily to breed. The upshot, says Croft, is that if females survive for many decades, breeding for the first three or four, their pod becomes increasingly replete with their descendants. Therefore, it becomes more and more in their own interests to safeguard the survival of the pod, and thereby their own genetic legacy.

“There’s a tipping point where they stop reproducing and help their offspring instead, as do grandmothers in the human context,” says Croft.

The findings seem to support the “grandmother hypothesis”, the idea that older women in hunter-gatherer communities evolved to go through the menopause so that they could carry on passing on their wisdom and experience about food sources and other survival tips without the added costs of having more children themselves.

Hunting with Wolves

Tuesday, March 10th, 2015

Modern humans formed an alliance with wolves soon after entering Europe:

We tamed some and the dogs we bred from them were then used to chase prey and to drive off rival carnivores, including lions and leopards, that tried to steal the meat.

“Early wolf-dogs would have tracked and harassed animals like elk and bison and would have hounded them until they tired,” said Shipman. “Then humans would have killed them with spears or bows and arrows.

“This meant the dogs did not need to approach these large cornered animals to finish them off — often the most dangerous part of a hunt — while humans didn’t have to expend energy in tracking and wearing down prey. Dogs would have done that. Then we shared the meat. It was a win-win situation.”

At that time, the European landscape was dominated by mammoths, rhinos, bison and several other large herbivores. Both Neanderthals and modern humans hunted them with spears and possibly bows and arrows. It would have been a tricky business made worse by competition from lions, leopards, hyenas, and other carnivores, including wolves.

“Even if you brought down a bison, within minutes other carnivores would have been lining up to attack you and steal your prey,” said Shipman. The answer, she argues, was the creation of the human-wolf alliance. Previously they separately hunted the same creatures, with mixed results. Once they joined forces, they dominated the food chain in prehistoric Europe — though this success came at a price for other species. First Neanderthals disappeared, to be followed by lions, mammoths, hyenas and bison over the succeeding millennia. Humans and hunting dogs were, and still are, a deadly combination, says Shipman.

Humans slowly changed wolves into dogs, but humans may have changed too:

Consider the whites of our eyes, she states. The wolf possesses white sclera, as does Homo sapiens, which, crucially, is the only primate that has them.

“The main advantage of having white sclera is that it is very easy to work out what another person is gazing at,” added Shipman. “It provides a very useful form of non-verbal communication and would have been of immense help to early hunters. They would have been able to communicate silently but very effectively.”

Thus the mutation conferring white sclera could have become increasingly common among modern humans 40,000 years ago and would have conferred an advantage on those who were hunting with dogs.

(Hat tip to HBD Chick.)