Hosting experiments in governance styles

Wednesday, October 18th, 2017

The Seasteading Institute and its for-profit spin-off, Blue Frontiers, have racked up some real-world achievements in the past year, Nature (!) reports:

They signed a memorandum of understanding with the government of French Polynesia in January that lays the groundwork for the construction of their prototype. And they gained momentum from a conference of interested parties in Tahiti in May, which hundreds of people attended. The project’s focus has shifted from building a libertarian oasis to hosting experiments in governance styles and showcasing a smorgasbord of sustainable technologies for, among other things, desalination, renewable energy and floating food-production. The shift has brought some gravitas to the undertaking, and some ecologists have taken interest in the possibilities of full-time floating laboratories.

But the project still faces some formidable challenges. The team must convince the people of French Polynesia that the synthetic islands will benefit them; it must raise enough money to actually build the prototype, which it estimates will cost up to US$60 million; and once it is built, the group must convince the world that artificial floating islands are more than just a gimmick. Producing solid science and broadly useful technology would go a long way towards making that case.

Brain drain is real

Friday, October 6th, 2017

Brain Drain via Lymphatic System

In 1816, an Italian anatomist reported finding lymphatic vessels on the surface of the brain, but for two centuries the dogma has remained that the brain is an exceptional organ, with no way to remove waste:

Then in 2015, two studies of mice found evidence of the brain’s lymphatic system in the dura. Coincidentally, that year, Dr. Reich saw a presentation by Jonathan Kipnis, Ph.D., a professor at the University of Virginia and an author of one of the mouse studies.

“I was completely surprised. In medical school, we were taught that the brain has no lymphatic system,” said Dr. Reich. “After Dr. Kipnis’ talk, I thought, maybe we could find it in human brains?”

To look for the vessels, Dr. Reich’s team used MRI to scan the brains of five healthy volunteers who had been injected with gadobutrol, a magnetic dye typically used to visualize brain blood vessels damaged by diseases, such as multiple sclerosis or cancer. The dye molecules are small enough to leak out of blood vessels in the dura but too big to pass through the blood-brain barrier and enter other parts of the brain.

At first, when the researchers set the MRI to see blood vessels, the dura lit up brightly, and they could not see any signs of the lymphatic system. But, when they tuned the scanner differently, the blood vessels disappeared, and the researchers saw that dura also contained smaller but almost equally bright spots and lines which they suspected were lymph vessels. The results suggested that the dye leaked out of the blood vessels, flowed through the dura and into neighboring lymphatic vessels.

To test this idea, the researchers performed another round of scans on two subjects after first injecting them with a second dye made up of larger molecules that leak much less out of blood vessels. In contrast with the first round of scans, the researchers saw blood vessels in the dura but no lymph vessels regardless of how they tuned the scanner, confirming their suspicions.

They also found evidence for blood and lymph vessels in the dura of autopsied human brain tissue. Moreover, their brain scans and autopsy studies of brains from nonhuman primates confirmed the results seen in humans, suggesting the lymphatic system is a common feature of mammalian brains.

“These results could fundamentally change the way we think about how the brain and immune system inter-relate,” said Walter J. Koroshetz, M.D., NINDS director.

Dr. Reich’s team plans to investigate whether the lymphatic system works differently in patients who have multiple sclerosis or other neuroinflammatory disorders.

It is to be feared that it may have become too popular

Friday, October 6th, 2017

Techniques of Systems Analysis presents a toy problem of allocating resources to planes and bombs in order to attack a couple potentially sheltered airfields with area defenses and local defenses. There are increasing returns to more planes attacking a particular airfield, as they can saturate defenses, but there are decreasing returns to hitting a particular airfield with more bombs:

It is probably clear to the reader that any reasonable person, including for example the ancient Greeks, could have followed our qualitative reasoning and understood that when one is poor

  1. most of the money should be spent on decreasing attrition (buying planes)
  2. that one should concentrate on one target,

and conversely that when one is rich

  1. one should spend more money on bombs because the enemy’s defenses are automatically saturated by the number of planes in the attack
  2. that one can now afford to attack both targets.

The exciting thing that we have done is to make the above qualitative remarks numerical; that is, we have changed what we called an “intuitive judgment” into what we called a “considered opinion.” How exciting this is can be seen from the fact that the ability to make this type of calculation and end up with Charts 17 and 18 is as much of an intellectual invention as the steam engine or the telegraph is a technical invention.

Techniques of Systems Analysis Charts 17 and 18

In fact, the concepts needed for this kind of analysis were invented in roughly the same time period as these two gadgets were. Moreover, they were not used for this kind of question until late in the nineteenth century. In fact, it is only in the post-World War II period, which saw a great expansion in the intellectual tools, computing ability, and suitable problems for this kind of analysis, that it really became popular as an aid to the military planner. It is to be feared that it may have become too popular. Many people got so excited about the possibilities that they went overboard and claimed entirely too much for the technique.

One trouble was that people did not generally realize that even modern computing methods are not really powerful enough to evaluate complicated systems without the aid of a good deal of skillful “intuitive” supervision and guidance and, even more to the point, that the problems of uncertainty can swamp or negate a good deal of straightforward analysis. In many cases it was necessary to idealize the problem so much to make it tractable to analysis that the resulting considered opinion was less valuable than almost any reasonable intuitive judgment which was based on an examination of the unidealized problem.
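The toy problem is easy to reproduce numerically. Below is a minimal sketch, not the book’s actual model: the unit costs, the defense-saturation curve, and the damage curve are all invented for illustration. It allocates a fixed budget between planes and bombs and compares attacking one airfield against splitting the force across two:

    import math

    PLANE_COST, BOMB_COST = 1.0, 0.25  # hypothetical unit costs

    def survival(planes):
        # Increasing returns: a bigger raid saturates the defenses,
        # so a larger fraction of the attacking planes gets through.
        return planes / (planes + 10.0)

    def damage(bombs_delivered):
        # Decreasing returns: each extra bomb on the same airfield adds less.
        return 1.0 - math.exp(-0.1 * bombs_delivered)

    def best_attack(budget, targets):
        # Split the budget evenly across targets, then search plane counts.
        per_target = budget / targets
        best = 0.0
        for planes in range(1, int(per_target / PLANE_COST) + 1):
            bombs = (per_target - planes * PLANE_COST) / BOMB_COST
            delivered = survival(planes) * bombs
            best = max(best, targets * damage(delivered))
        return best

    for budget in (10, 20, 40, 80, 160):
        one, two = best_attack(budget, 1), best_attack(budget, 2)
        print(f"budget {budget:3}: one target {one:.2f}, two targets {two:.2f}")

With these assumed curves, the qualitative claims fall out of the printout: a poor attacker does best concentrating on a single target, and only as the budget grows does splitting the force across both targets pay.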

The medieval period really shaped Europeans

Wednesday, September 20th, 2017

In an old interview, HBD Chick describes some of the ideas she has popularized:

In the 1960s, John Hajnal noticed a curious feature of European populations: compared to just about everybody else in the world, northwest Europeans have a history (going back to at least the 1500s) of marrying quite late (mid-20s+) and/or not marrying at all. The resulting Hajnal line divides eastern and western Europe, but some other areas — like southern Italy and Spain, Ireland, and parts of Finland — are also “outside” the line.

I picked up on it from an historian of medieval Europe and family history, Michael Mitterauer.  In his book, Why Europe?, Mitterauer discusses at some length how the Hajnal line coincides in space with the extent of manorialism in medieval Europe, the connection being that, because young people often had to wait to take possession of a farm within the medieval manor system, they also had to wait to marry.  I suspect that, over time, this led to the selection for, as they call it, “low time preference” in northwestern Europeans — or, at least, that this was the start of it in Europe. In other words, those individuals who could “restrain themselves” were eventually rewarded with reproductive success in the form of having access to a dedicated piece of farmland on a manor.  These are (some of) the people who successfully reproduced in the Middle Ages (along with the aristocracy).

Interestingly, the Hajnal line seems to coincide with other curious features of northwestern European society, too, such as little or no cousin marriage. Mitterauer makes the (convincing, I think) argument that the various bans on cousin marriage across medieval Europe enabled the spread of manors eastwards across the continent out of the Frankish heartland in northeast France/Belgium, since the cousin marriage ban weakened European clans, and clans and manorialism did not go together, the manor system being based around nuclear families.  Mitterauer points out the eastern limit of manorialism in Europe coincides with the Hajnal line and with the earliest and strongest bans on cousin marriage. Cousin marriage was, eventually, banned in eastern Europe (Russia, for example), but much later than in western Europe. Also, extended families seem to be more important “outside” the Hajnal line, in eastern Europe for example. Even average IQs appear to be generally higher “inside” the line than out, so I suspect that Hajnal’s discovery is much more important biologically than folks have supposed up ’til now.  Population geneticists and evolutionary biologists really ought to take a very close look at it.

Most folks out there who are interested in human biodiversity and the differences we see in American society today have probably read David Hackett Fischer’s Albion’s Seed, but I cannot recommend enough Mitterauer’s Why Europe? for really understanding where Europeans came from!  It should really be on everyone’s shelf next to Albion’s Seed (or also on their Kindles).  I think, taking a page out of The 10,000 Year Explosion, that the medieval period really shaped Europeans — even transformed them (us!) — especially northwest Europeans. And I think the population’s switch to regular outbreeding (i.e., the avoidance of cousin marriage) played a huge role in that transformation because it set the stage for a whole new range of selection pressures to act on the population. The loosening of genetic ties in medieval Europe led the population down a path towards greater individuality versus collectivity, greater feelings of universalism versus particularism, and less of an orientation towards the extended family and more of a focus on the commonweal. These are all really a very unique set of traits compared to most other human populations, and the roots of those traits are biological, and their origins not that old. At least that’s what I think!

Depression is a physical illness

Wednesday, September 13th, 2017

Depression is a physical illness, research suggests — one that could be treated with anti-inflammatory drugs:

A raft of recent papers, and unexpected results from clinical trials, have shown that treating inflammation seems to alleviate depression.

Likewise, when doctors give drugs that boost the immune system to fight illness, patients often experience depressive moods — in the same way that many people feel down after a vaccination.

Professor Ed Bullmore, Head of the Department of Psychiatry at the University of Cambridge, believes a new field of ‘immuno-neurology’ is on the horizon.

“It’s pretty clear that inflammation can cause depression,” he told a briefing in London to coincide with this week’s Academy of Medical Sciences FORUM annual lecture, which brought together government, the NHS, and academics to discuss the issue.

“In relation to mood, beyond reasonable doubt, there is a very robust association between inflammation and depressive symptoms. We give people a vaccination and they will become depressed. Vaccine clinics could always predict it, but they could never explain it.

“The question is does the inflammation drive the depression or vice versa or is it just a coincidence?

“In experimental medicine studies if you treat a healthy individual with an inflammatory drug, like interferon, a substantial percentage of those people will become depressed. So we think there is good enough evidence for a causal effect.”

[...]

The immune system triggers an inflammatory response when it feels it is under threat, sparking wide-ranging changes in the body such as increasing red blood cells, in anticipation that it may need to heal a wound soon.

Scientists believe that associated depression may have brought an evolutionary benefit to our ancestors. If an ill or wounded tribal member became depressed and withdrawn it would prevent a disease being passed on.

[...]

Around 60 per cent of people referred to cardiologists with chest pain do not have a heart problem but are suffering from anxiety.

Figures also show that around 30 per cent of people suffering from inflammatory diseases such as rheumatoid arthritis are depressed — more than four times higher than the normal population.

Likewise, people who are depressed after a heart attack are much more likely to suffer a second one, while the lifespan of people with cancer is hugely reduced if they also have a mental illness.

[...]

One promising treatment for depression on the horizon is the use of electrical stimulation to change the signals between the brain and the immune system.

Prof Kevin Tracey, President and CEO of the US Feinstein Institute for Medical Research, discovered that the brain controls production of a deadly inflammatory chemical called TNF, which if released in high doses can be fatal, causing people literally to die of shock.

He has recently developed an electrical device which reproduces the connection and switches off the chemical. Three-quarters of patients with rheumatoid arthritis recovered in trials.

Gregory Cochran revisits Guns, Germs, and Steel

Sunday, September 10th, 2017

Gregory Cochran can be an extremely uncharitable critic, but the middle of his review of Jared Diamond’s Guns, Germs, and Steel is measured:

Most significant domestic animals were domesticated somewhere in Eurasia or North Africa, only a couple in South America (llamas and vicuna), nothing in the rest of the world. Diamond argues that this wasn’t because populations varied in their interest in or aptitude for domestication. Instead, the explanation is that only a few large animals were suitable for domestication.

He’s unconvincing. Sure, there were places where this was true: what were the Maori in New Zealand going to domesticate — weta? And Australia didn’t have a lot of large mammals, at least not after people wiped out its megafauna. But there are plenty of large animals in Sub-Saharan Africa, yet none were domesticated. He argues that zebras were wilder, more untameable than horses — but people have tamed zebras, while the wild ancestors of horses (tarpans, which survived into the 19th century) were usually described as untameable. The wild ancestors of cows (aurochsen, which survived into the 17th century) were big and mean. They enjoyed impaling people on their horns and flinging them for distance. The eland is a large African antelope, and by Diamond’s argument it must be untameable, since the locals never tamed it. But in fact it’s rather easy to tame, and there’s now a domesticated version.

The key here is that one can select for disposition, for tameness, as well as obvious physical features, and an animal can go from totally wild to cuddly in ten generations — remember the selection experiment with Siberian foxes. In the long run disposition is not a big obstacle. Selection fixes it — selection applied to above-neck traits.

Diamond makes a similar argument about domesticating plants as crops: only a few plants were suitable for domestication, and part of the reason that some populations never developed crops was a lack of suitable plant species. I’ll give him Eskimos, but that’s about it.

Here his argument is far weaker: there are a buttload of plants that could be domesticated and might be quite useful, yet have not been. Enthusiastic agronomists keep trying to get funding for domestication of jojoba, or buffalo gourd, or guayule — usually government interest runs out well before success.

The reason that a few crops account for the great preponderance of modern agriculture is that a bird in the hand — an already-domesticated, already-optimized crop — feeds your family/makes money right now, while a potentially useful yet undomesticated crop doesn’t. One successful domestication tends to inhibit others that could flourish in the same niche. Several crops were domesticated in the eastern United States, but with the advent of maize and beans (from Mesoamerica) most were abandoned. Maybe if those Amerindians had continued to selectively breed sumpweed for a few thousand years, it could have been a contender: but nobody is quite that stubborn.

Teosinte was an unpromising weed: it’s hard to see why anyone bothered to try to domesticate it, and it took a long time to turn it into something like modern maize. If someone had brought wheat to Mexico six thousand years ago, likely the locals would have dropped maize like a hot potato. But maize ultimately had advantages: it’s a C4 plant, while wheat is C3: maize yields can be much higher.

Why didn’t people domesticate foxes, back in the day? Is it because foxes are solitary hunters, don’t have the right pack structure and thus can’t be domesticated, blah blah blah? No: they’re easy to domesticate. But we already had dogs: what was the point? You had to be crazy like a Russian.

One other factor has tended to suppress locally-domesticated plants — what you might call alien advantage. If you grow a crop near its origin, there will be local pests and pathogens that are adapted to it. If you try growing it in a distant land with a compatible climate, it often does very much better than in its own country. So… crops from Central and South America have done very well in Africa, or sometimes in Southeast Asia. Rubber tree plantations work fine in Malaysia and Liberia but fail in Brazil. Maize is the biggest crop in Africa, while manioc and peanuts are important. Most cocoa is grown in Africa: most coffee is grown in South America.

Sometimes, Diamond was wrong, but in a perfectly reasonable way, not in the devoted service of a flawed thesis, but just because the facts weren’t all in yet. We all need to worry about that.

He considered the disastrous impact of Eurasian and African diseases on the inhabitants of the New World, contrasted with a much smaller impact in the opposite direction, and concluded that a major factor had probably been transmission from domesticated animals. Eurasians domesticated quite a few animals, Amerindians not many — perhaps that was the explanation. In Guns, Germs, and Steel (p 207), he mentions measles, tuberculosis, smallpox, influenza, pertussis (whooping cough), and falciparum malaria as likely cases of transmission from domesticated animals.

We know a lot more about this than we did twenty years ago, since we’ve been sequencing the genes of everything in sight — and it appears that Diamond was mistaken about the most important members of that list. TB appears to be ancient in humans, smallpox probably came from some East African rodent, while falciparum malaria seems to have derived from a form of malaria carried by gorillas. Measles really does descend from rinderpest, a cattle plague, but then rinderpest (and mumps) probably descend from bat viruses. Domesticated animals do play a role in influenza, along with wild birds. I don’t think we know the origins of pertussis.

So why then was the Old World such a fount of infectious disease? Well, it’s bigger. Civilization was older, had had more time to pick up crowd diseases. Humans have close relatives in the Old World that carried important pathogens (chimps and gorillas), while Sasquatches are germ-free. Important pathogens, especially those with insect vectors like malaria, maybe couldn’t make it to the New World through ice-age Beringia. Transportation and trade were more advanced in the Old World, and spread disease more efficiently.

I don’t think that Diamond was making excuses for Amerindians in this, as he was when talking about domestication: having lots of plagues isn’t usually considered an accomplishment. Origination in livestock seemed like a reasonable idea at the time, considering the state of the art. It seemed so to others as well, like William McNeill. It’s not totally wrong — definitely true for measles — but it’s not a huge part of the explanation.

Sometimes Diamond was right. He says that it’s a lot easier for crops to spread east and west than north and south, and he’s correct. Middle Eastern crops worked in much of Europe, especially southern Europe, and also were important in India and China. On the other hand maize had to adapt to shorter growing seasons as it spread into North America: this took time. Post-Columbian spread of maize in Africa was much faster.

Geographical barriers were major factors in slowing the spread of civilization. Although a few distressed mariners must have occasionally crossed the Pacific in ancient times, nothing significant (in terms of crops or ideas) seems to have made it across before Columbus. Amerindians had to develop everything themselves, while populations in the Old World were sharing seeds and ideas (and plagues). Having to invent everything from scratch is a disadvantage, no question.

The geography of the Americas greatly inhibited contact between Mesoamerica and the Andean civilization: even today the Pan-American highway doesn’t go all the way through. The Sahara was even worse, but most of the budding civilizations of Eurasia did manage some contact.

Being bitten by an Australian tiger snake is a wholly unpleasant experience

Saturday, September 2nd, 2017

Being bitten by an Australian tiger snake is a wholly unpleasant experience:

Within minutes, you start to feel pain in your neck and lower extremities — symptoms that are soon followed by tingling sensations, numbness, and profuse sweating. Breathing starts to become difficult, paralysis sets in, and if left untreated, you’ll probably die. Remarkably, the venom responsible for these horrifying symptoms has remained the same for 10 million years — the result of a fortuitous mutation that makes it practically impossible for evolution to find a counter-solution.

[...]

The secret to tiger snake venom has to do with its biological target — a clotting protein called prothrombin. This critically important protein is responsible for healthy blood clotting, and it exists across a diverse array of animal species (humans included). Any changes to this protein and the way it works can be catastrophic to an animal, leading to life-threatening conditions such as hemophilia. It’s this vulnerable target that makes the tiger venom so potent, but at the same time, animals are under intense evolutionary pressure to maintain prothrombin in its default, functional state. As Fry explained in a release, if the animals had any variation in their blood clotting proteins, “they would die because they would not be able to stop bleeding.”

Influential and affable

Friday, September 1st, 2017

Researchers developed a six-item self-report measure of charisma — having influence over others (including being able to guide them) and coming across as affable (being able to make others feel comfortable and at ease):

Participants taking the new test are asked to rate their agreement, on a five-point scale from 1 (Strongly Disagree) to 5 (Strongly Agree), with statements beginning “I am someone who…”:

  • Has a presence in a room
  • Has the ability to influence people
  • Knows how to lead a group
  • Makes people feel comfortable
  • Smiles at people often
  • Can get along with anyone

(The first three items tap the influence factor of charisma and the last three items tap the affability factor.)
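To make the scoring concrete, here is a minimal sketch of how the two factor scores could be computed, assuming each factor is simply the mean of its three items (the authors’ exact scoring procedure may differ):

    def score(responses):
        # responses: six ratings in the order the items are listed above,
        # each from 1 (Strongly Disagree) to 5 (Strongly Agree).
        assert len(responses) == 6 and all(1 <= r <= 5 for r in responses)
        influence = sum(responses[:3]) / 3   # presence, influence, leading
        affability = sum(responses[3:]) / 3  # comfort, smiling, getting along
        return {"influence": round(influence, 2),
                "affability": round(affability, 2),
                "general_charisma": round((influence + affability) / 2, 2)}

    print(score([4, 5, 4, 3, 2, 3]))
    # {'influence': 4.33, 'affability': 2.67, 'general_charisma': 3.5}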

Having devised their test, the researchers put it through its paces in a number of ways. For example, they asked volunteers to complete the new charisma measure plus lots of other established psychological measures, and were able to show that scores on the new test are related to but distinct from established psychological constructs such as the Big Five personality traits, emotional intelligence and political skill. For instance, people’s scores on the affability factor of the new test correlated with their trait Agreeableness, which makes conceptual sense. On the other hand, charisma scores appeared to be completely separate from intelligence, suggesting that “individual differences in general charisma are not redundant with cognitive ability”.

In another study the researchers asked small groups of unacquainted students to chat to each other for five minutes and to rate themselves and other group members on the charisma test. This showed that individuals’ charisma self-ratings on the test correlated with the charisma ratings they received from others. In another similar study, students’ self-ratings on the charisma test correlated with ratings they received from friends or family.

The researchers also asked pairs of unacquainted students to chat to each other for ten minutes and then rate each other’s likability. The students also rated themselves on standard personality measures and on the new charisma measure. The higher the students scored on charisma (specifically the affability factor), the more likable they tended to be rated by their partners, even after taking into account their scores on the Big Five personality traits of Extraversion, Agreeableness etc.

In another demonstration of the test’s validity, the researchers asked more student volunteers to read out either a weak or a strong argument for wind energy and then to complete the charisma test. Next, participants on Amazon’s Mechanical Turk listened back to the recordings and rated how persuasive they found them. When it came to the weak arguments, they found participants who’d scored themselves higher on charisma (specifically the influence factor) to be more persuasive. In relation to the affability factor, women who scored higher on this were rated as more persuasive, whereas for men the affability scores were not relevant (the researchers speculated this has to do with cultural expectations for women to be warm).

Low explosives deflagrate

Thursday, August 31st, 2017

High explosives detonate, while low explosives deflagrate:

Low explosives are compounds where the rate of decomposition proceeds through the material at less than the speed of sound. The decomposition is propagated by a flame front (deflagration) which travels much more slowly through the explosive material than a shock wave of a high explosive. Under normal conditions, low explosives undergo deflagration at rates that vary from a few centimetres per second to approximately 400 metres per second.

[...]

Low explosives are normally employed as propellants. Included in this group are petroleum products such as propane and gasoline, gunpowder (both black and smokeless), and light pyrotechnics, such as flares and fireworks.

[...]

High explosives (HE) are explosive materials that detonate, meaning that the explosive shock front passes through the material at a supersonic speed. High explosives detonate with explosive velocity ranging from 3 to 9 km/s. For instance, TNT has a detonation (burn) rate of approximately 5.8 km/s (19,000 feet per second).
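The dividing line in both quoted definitions is whether the reaction front outruns sound in the unreacted material. A toy classifier makes this concrete; the sound speed used here is an assumed placeholder, since in a real solid explosive it runs to several kilometers per second:

    # Compare the reaction-front speed with the speed of sound in the
    # unreacted material. SOUND_SPEED is an assumed illustrative value.
    SOUND_SPEED = 1000.0  # m/s

    def classify(front_speed_m_s):
        if front_speed_m_s > SOUND_SPEED:
            return "detonates (high explosive)"
        return "deflagrates (low explosive)"

    print(classify(400))   # fastest low-explosive burn rate quoted above
    print(classify(5800))  # TNT's ~5.8 km/s detonation front

    # Sanity check on the quoted unit conversion:
    print(f"{5800 * 3.28084:,.0f} ft/s")  # ~19,029, i.e. the quoted 19,000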

Detonation has an interesting etymology:

Detonation (from Latin detonare, meaning “to thunder down”) is a type of combustion involving a supersonic exothermic front accelerating through a medium that eventually drives a shock front propagating directly in front of it.

[...]

In classical Latin, detonare means “to stop thundering”, as in weather. The modern meaning developed later.

They were warriors out of a very organized society

Saturday, August 26th, 2017

Archaeologists uncovered four ancient ring-shaped fortresses in Denmark in the 1930s and only recently discovered another:

Q: How did you discover the fortress?

A: It’s a bit of a detective story. I’ve been working with these ring fortresses for quite some time, and I came to the conclusion that their distribution didn’t make sense. There were gaps in the network of known fortresses where logically another fortress should have been. I went out looking for landscape features that matched those of the fortresses we knew already, namely accessibility to land and water routes. There were only a few locations in Denmark that really fit the pattern. The Danish state has made a high-resolution LIDAR image of the whole country, so we searched that and found this very, very big feature.

Q: How did it go unseen for so long?

A: The agricultural activity around it was extremely destructive. For hundreds of years throughout the Middle Ages, peasants ploughed and leveled the field. When we came, the fortress’s ramparts were less than half a meter above the average level of the field. You could walk the field, and I might have a hard time convincing you there was anything at all, but the LIDAR image was decisive.

Q: Why did Vikings build ring-shaped fortresses?

A: The ring is the perfect shape for a fortress. It’s the shape that encompasses the greatest area within the smallest circumference. But there’s no need to make it a perfect circle, and that’s what distinguishes the Viking Age ring fortresses in Denmark. Clearly the person who built these Viking ring fortresses—and we think that was King Harald “Bluetooth” Gormsson [who united Scandinavia, converted the Danes to Christianity and, more recently, lent his name to Bluetooth wireless technology], whose father was the first ruler of the Danish kingdom—wanted something more. All the fortresses share this strict geometry. Somebody with magnificent land-surveying skills was involved in this building work for no other reason than sheer prestige and to signal command and ability.

Q: Did they invent the ring-shaped fortress?

A: No, they probably learned it from their own invasions in England. The people there built a network of fortifications about 100 years before our structures as a defense against the Vikings. It worked so well that the invaders could not get a foothold and had to turn back. It was a huge success for the Anglo-Saxon kings. So we believe that when ring fortresses then pop up in Denmark, it’s a copying of that strategy.

Viking Ring Fort Reconstruction

Q: Why are these structures important?

A: These ring fortresses have been the biggest mystery in Viking archaeology since the 1930s. People couldn’t believe the Vikings in their own country built these structures. They thought foreign armies must have built them. But as we found more of these, we found it was indeed a Danish king and his Viking warriors, and for that reason they have been part of the most fundamental reassessment of what the Vikings were all about. They were warriors, obviously, but they were warriors out of a very organized society.

They aren’t nestled within the same part of the fungal family tree

Thursday, August 24th, 2017

Many mushrooms are magical, and they’re not all closely related:

Around 200 species [produce psilocybin], but they aren’t nestled within the same part of the fungal family tree. Instead, they’re scattered around it, and each one has close relatives that aren’t hallucinogenic. “You have some little brown mushrooms, little white mushrooms … you even have a lichen,” Slot says. “And you’re talking tens of millions of years of divergence between those groups.”

It’s possible that these mushrooms evolved the ability to make psilocybin independently. It could be that all mushrooms once did so, and most of them have lost that skill. But Slot thought that neither explanation was likely. Instead, he suspected that the genes for making psilocybin had jumped between different species.

These kinds of horizontal gene transfers, where genes shortcut the usual passage from parent to offspring and instead move directly between individuals, are rare in animals, but common among bacteria. They happen in fungi, too. In the last decade, Slot has found a couple of cases where different fungi have exchanged clusters of genes that allow the recipients to produce toxins and assimilate nutrients. Could a similar mobile cluster bestow the ability to make psilocybin?

To find out, Slot’s team first had to discover the genes responsible for making the drug. His student Hannah Reynolds searched for genes that were present in various hallucinogenic mushrooms, but not in their closest non-trippy relatives. A cluster of five genes fit the bill, and they seem to produce all the enzymes necessary to make psilocybin from its chemical predecessors.

After mapping the presence of these five genes in the fungal family tree, Slot’s team confirmed that they most likely spread by jumping around as a unit. That’s why they’re in the same order relative to each other across the various hallucinogenic mushrooms.

These genes seem to have originated in fungi that specialize in breaking down decaying wood or animal dung. Both materials are rich in hungry insects that compete with fungi, either by eating them directly or by going after the same nutrients. So perhaps, Slot suggests, fungi first evolved psilocybin to drug these competitors.

His idea makes sense. Psilocybin affects us humans because it fits into receptor molecules that typically respond to serotonin — a brain-signaling chemical. Those receptors are ancient ones that insects also share, so it’s likely that psilocybin interferes with their nervous system, too. “We don’t have a way to know the subjective experience of an insect,” says Slot, and it’s hard to say if they trip. But one thing is clear from past experiments: Psilocybin reduces insect appetites.

Researchers have quantified Muhammad Ali’s mental decline

Thursday, August 24th, 2017

Researchers have quantified Muhammad Ali’s mental decline as he took more punches throughout his career:

In 1968, Ali spoke at a rate of 4.1 syllables per second, which is close to average for healthy adults. By 1971, his rate of speech had fallen to 3.8 syllables per second, and it continued sliding steadily, year by year, fight by fight. An ordinary adult would see little or no decline in his speaking rate between the ages of 25 and 40, but Ali experienced a drop of more than 26% in that same period. Slowing his speaking rate couldn’t indefinitely compensate for the deterioration of signals between his brain and his speech muscles. The paper suggests that by 1978, six years before his Parkinson’s syndrome diagnosis and three years before his retirement from boxing, Ali was slurring his words.

In addition to this overall decline in speech, researchers found a strong relationship between Ali’s activity in the ring and his verbal skills. The more punches he took, the more steeply his speaking abilities declined. (Listen to a sample of Ali’s speech changes.)

In 1977, the 35-year-old Ali fought a brutal, 15-round bout with Earnie Shavers. One of the strongest punchers in boxing history, Shavers hit Ali with 266 punches, including 209 power punches, according to the new CompuBox data. Before his fight with Shavers, Ali spoke at a rate of 3.7 syllables per sec. After the fight, his speaking rate fell 16% to 3.1 syllables per sec. His voice also became less animated in the immediate aftermath of fights.
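The quoted figures check out with simple arithmetic. A quick sketch; note that the endpoint implied by the 26% career decline is an inference, not a number given in the article:

    # Post-Shavers drop: 3.7 -> 3.1 syllables per second.
    pre, post = 3.7, 3.1
    print(f"after Shavers: {(pre - post) / pre:.0%} slower")  # ~16%

    # The stated 26% decline from the 4.1-syllable baseline implies a
    # late-career rate near 3.0 syllables/s (assumed endpoint).
    baseline = 4.1
    print(f"implied endpoint: {baseline * (1 - 0.26):.1f} syllables/s")  # ~3.0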

Eating raw marine mammals isn’t the same as eating cooked land mammals

Tuesday, August 22nd, 2017

A couple decades ago, when I first became interested in evolutionary fitness and ketogenic diets, I read that the Eskimos had traditionally lived on a diet almost entirely bereft of carbohydrates — a diet that Vilhjalmur Stefansson tried to promote amongst non-Eskimos in magazine articles and then in his 1946 book, Not by Bread Alone.

Traditional Inuit Diet

But Stefansson Westernized the Inuit diet of raw marine mammals and instead promoted a cooked, all-animal-food-diet, including dairy and eggs — a difference that matters once you realize how marine mammals have adapted to diving and operating with limited air:

Stefansson — who died of a stroke at 82 (though, surprisingly, he lived longer than a lot of other VLC authors) — made the fatal assumption that land mammals and marine mammals are similar. They aren’t. They are entirely different, and the difference is tantamount to different species classification. The Inuit were exploiting unique carbohydrate properties in these marine mammals that aren’t found in land mammals.

It turns out that marine mammals that spend a good deal of their time diving to great depths have significant glycogen stores. Sperm whales make routine dives to 400 meters for 40 minutes and can reach a maximum depth of 2000 meters (6,560 feet, or 1.25 miles). Narwhals make some of the deepest dives recorded for a marine mammal, diving to at least 800 meters (2,600 feet) 18 to 25 times per day, every day, for 6 months, with many dives reaching 1,500 meters (4,900 feet). Narwhals have been recorded diving as deep as 1,800 meters (5,900 ft, over one mile). In addition to making remarkably deep dives, narwhals also spend more than 3 hours per day below 800 meters — an incredible amount of time at a depth where the pressure can exceed 2200 PSI (150 atmospheres).

During their deep dives these marine mammals run out of oxygen and switch to their unique glycogen-based energy stores. They store large quantities of glycogen in very odd places, but it typically gets concentrated in the skin and organs. Researchers have discovered significant “glycogen pools” in the narwhal’s arterial thoracic retia. Ringed seals have “large quantities of glycogen” in a gelatinous material near their sinuses. A sperm whale’s blubber ranges from 8–30% carbohydrates, mostly believed to be glycogen. The hearts and brains of Weddell seals have concentrations of glycogen that are two to three times those of land mammals. Furthermore, in marine mammals these organs tend to be larger in proportion to the total body weight than in land-based mammals.

In 1973, George and Ronald wrote about the harp seal, “All the fiber types contained considerable amounts of glycogen…it is postulated that the seal muscle is basically geared for anaerobic use of carbohydrate as an adaptation for the animal’s diving habit.”

In a paper on diving marine mammals Hochachka and Storey wrote, in 1975, “In the terminal stages of prolonged diving, however, even these organs must tolerate anoxia for surprisingly long times, and they typically store unusually large amounts of glycogen for this purpose.”

Perhaps what’s most disappointing is that Stefansson never bothered to clearly explain the Inuit’s favorite sweet-tasting whale skin dish (muktuk), that was already known by scientists to be a carbohydrate-rich food. In 1912, the Journal of the American Medical Association (JAMA) had reported, “the skin [of the narwhal] contains a remarkable amount of glycogen, thus supplying sufficient quantities of a carbohydrate to cure the scorbutus. The walrus liver also contains much glycogen.”

So, this idea that we can compare glycogen content of a [grilled, braised, stewed, or otherwise thoroughly cooked, long after dead] cow or human to that of what the Inuit were eating is entirely misguided. We’re talking about marine animals that need large quantities of glycogen to complete their extended deep dives.

It’s well known that glycogen does not survive very long post-mortem. So, it was no coincidence that the Inuit often consumed glycogen-rich foods quickly and froze whatever they couldn’t consume. Peter Freuchen, a Danish doctor and member of the 5th Thule expedition based at Melville Peninsula from 1919-1925, wrote that when a whale was brought to the beach at Repulse Bay everyone feasted on large quantities of the skin until their jaws became too sore to continue.

After a hunt, seals are quickly cut open to expose the internal organs. Kristen Borré writes in her 1991 report for the Medical Anthropology Quarterly that “one of the hunters slits the abdomen laterally, exposing the internal organs. Hunters first eat pieces of liver or they use a tea cup to gather some blood to drink.” This was no coincidence. The parts of the animals with the most glycogen were eaten quickly.

At the time of death, beef muscle contains approximately 6 g of glucose equivalents per pound, from glycogen and free glucose. As explained above, diving marine mammals have much more glycogen than land mammals. When we consider that the average Inuit consumed 5 to 10 pounds, or more, of raw fresh or flash-frozen meat per day, it should be clear that they were consuming a lot of glycogen.
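The back-of-the-envelope arithmetic the passage implies, using only the quoted beef figure (the marine-mammal multiple is left out, since the passage gives ranges rather than a single number):

    GLUCOSE_PER_LB = 6  # g glucose equivalents per pound at death (quoted)

    for pounds_per_day in (5, 10):
        grams = pounds_per_day * GLUCOSE_PER_LB
        print(f"{pounds_per_day} lb/day -> {grams} g glucose/day")
    # 30-60 g/day even at beef-like levels, before counting the far richer
    # glycogen stores of freshly killed, raw or flash-frozen marine mammals.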

But, of course, the Inuit consumed other carbs, too. They consumed berries, seaweed, nuts, corms, and tubers — such as yupik potatoes, boiled polysaccharide-rich seaweed, and glycogen-rich winter mussels. See the Disrupting Paleo series for a more in-depth discussion of these foods and their importance in the Inuit diet.

What about the glycogen in the foods that weren’t consumed rapidly? If only the Eskimos had access to extremely cold temperatures where they could rapidly freeze chunks of meats immediately after hunting… Hmmm… Kidding aside, the Inuit not only consumed fresh raw meat, blubber and skin that was rich in glycogen, but they also consumed it flash frozen — thus preserving and maximizing its glycogen.

Interestingly, Clarence Birdseye — who invented technology for “flash freezing” — learned about it from the Inuit. According to Wikipedia, “He was taught by the Inuit how to ice fish under very thick ice. In -40°C weather, he discovered that the fish he caught froze almost instantly, and, when thawed, tasted fresh.” He recognized immediately that the frozen seafood sold in New York was of lower quality than the frozen fish of Labrador, and saw that applying this knowledge would be lucrative.

While listening to the audio version of Endurance, about Shackleton’s failed attempt to cross the last uncharted continent on foot, I noted that the famished explorers found penguin liver surprisingly delicious.

Greeks are close to Jews, and Lebanese are far from Arabs

Monday, August 21st, 2017

Nassim Nicholas Taleb and Pierre Zalloua share some recent genetic discoveries that neither Antisemitic Nordic Supremacists nor Arab Nationalists will like:

1) Greeks were close to Jews, as the “Aryan” theory is genetically bogus & 2) Lebanese (both Christian and Moslem) are very far from being Arabs (the historical accounts of Arab migrations to Lebanon are fiction).

The Greeks had to learn civilization all over again

Sunday, August 20th, 2017

Without Classical Greece and its accomplishments, Razib Khan says, the West wouldn’t make any sense:

But here I have to stipulate Classical, because Greeks existed before the Classical period. That is, a people who spoke a language that was recognizably Greek and worshipped gods recognizable to the Greeks of the Classical period. But these Greeks were not proto-Western in any way. These were the Mycenaeans, a Bronze Age civilization which flourished in the Aegean in the centuries before the cataclysms outlined in 1177 B.C.

The issue with the Mycenaean civilization is that its final expiration in the 11th century ushered in a centuries-long Dark Age. During this period the population of Greece seems to have declined, and society reverted to a simpler structure. By the time the Greeks emerged from this Dark Age much had changed. For example, they no longer used Linear B writing. Presumably this technique was passed down along lineages of scribes, whose services were no longer needed, because the grand warlords of the Bronze Age were no longer there to patronize them and make use of their skills. In its stead the Greeks modified the alphabet of the Phoenicians.

To be succinct, the Greeks had to learn civilization all over again. The barbarian interlude had broken continuous cultural memory between the Mycenaeans and the Greeks of the developing polises of the Classical period. The fortifications of the Mycenaeans were assumed by their Classical descendants to be the work of a lost race which had the aid of monstrous cyclopes.

Of course not all memories were forgotten. Epic poems such as The Iliad retained the memory of the past through the centuries. The list of kings who sailed to Troy actually reflected the distribution of power in Bronze Age Greece, while boar’s tusk helmets mentioned by Homer were typical of the period. To be sure, much of the detail in Homer seems more reflective of a simpler society of petty warlords, so the nuggets of memory are encased in later lore accrued over the centuries.

A recent Nature paper looks at the genetic origins of the Minoans and Mycenaeans and shows that the two groups were genetically similar.