We know that fungi can infect humans

Tuesday, February 14th, 2023

I haven’t watched The Last of Us (yet), but it seems to be based on a scenario I’ve discussed before of how a zombie outbreak could (semi-plausibly) happen:

We know that fungi can infect humans. We also know that fungal networks exist in most of the world’s forests. These mycorrhizal networks have a symbiotic relationship with trees and other plants in the forest, exchanging nutrients for mutual benefit. These networks can be quite large, and there are studies that demonstrate the potential for chemical signals to be transmitted from one plant to another via the mycorrhizal network. If a comparable network could grow within a human body, fungal filaments could perform both vascular and neural functions within a corpse.

This leads us to the following scenario: microscopic spores are inhaled, ingested, or transmitted via zombie bite. The spores are eventually dispersed throughout the body via the bloodstream. Then they lie dormant. When the host dies, chemical signals (or, more accurately, the absence of chemical signals) within the body that occur upon death trigger the spores to activate, and begin growing. The ensuing fungal network carries nutrients to muscles in the absence of respiration or normal metabolism.

Part of the fungal network grows within the brain, where it interfaces with the medulla and cerebellum, as well as the parts of the brain involved in vision, hearing, and possibly scent. Chemicals released by the fungi activate basic responses within these brain areas. The fungi/brain interface is able to convert the electrochemical signals of neurons into chemical signals that can be transmitted along the fungal network that extends through much of the body. This signal method is slow and imperfect, which results in the uncoordinated movements of zombies. And this reliance on the host’s brain accounts for the “headshot” phenomenon, in which grievous wounds to the brain or spine seem to render zombies fully inert.

Egalitarianism is an uneasy compromise

Saturday, February 4th, 2023

For most of human history we have been egalitarian, Rob Henderson explains, with status equivalency among the decision-makers of a group and no powerful group leaders:

Behavior in these societies is maintained by “moral communities.” Both men and women are quick to judge the misdeeds of others, and compare such actions to how people should behave.

For upstarts, awareness of predictable and swift punishment tends to modify their behavior. And if they don’t adhere to moral norms, they get eliminated.

A band will use social control against any adult, usually male, who behaves too assertively in an aggressive way.

Generally, both sexes get to contribute to the decision of whether a person has been socially deviant. For severe infractions, people are either ostracized or killed. Though women have some say in the decisions, executions are typically carried out by men.

Within the groups, adult men tend to treat one another as equals, and women and children are treated as subordinate.

The book describes the indigenous Yanomamo tribe in Brazil. They have chiefs—leaders selected for their skill or bravery. But they cannot give direct orders to other men. They can simply make suggestions, which tend to hold more weight than the suggestions of others.

But this egalitarianism doesn’t apply within families, where men beat their wives without consequence. “On one occasion, though,” Boehm writes, “when a man was beating his wife so brutally that he was likely to kill her, a chief did intervene physically.”

Generally speaking, in hunter-gatherer communities, if there was a conflict of interest between men and women, the rules typically favored the men.

For example, male hunter-gatherers throughout Australia used women as political pawns. Wives could be required to have sex with multiple men at special ceremonies. They could also be loaned to a visiting man, or ordered by their husbands to have sex with another man in order to erase a debt or make peace. In 1938, the anthropologist A.P. Elkin reported that Australian Aboriginal women lived in terror of the expectations others had of them during ceremonies.

Generally speaking, across hunter-gatherer societies, status equivalency appears to apply only to adult males. Strict egalitarianism in making decisions for the community is practiced only among men.

[…]

The book [Hierarchy in the Forest: The Evolution of Egalitarian Behavior] states that both egalitarianism (status equivalency) and hierarchy are “natural conditions of humanity.” Everyone wants to dominate others, and no one wants to be dominated by others.

Egalitarianism is an uneasy compromise.

As the anthropologist Harold Schneider puts it: “All men seek to rule, but if they cannot rule they prefer to be equal.”

[…]

But even with the power of norms and social pressure, violence is far more common in hunter-gatherer bands than in modern societies. Bands and tribes strongly favor peace and cooperation and despise conflict, but violent outbreaks are not infrequent.

Perhaps the most important reason for this is that there is no formalized authority.

There is no strong leader or council of elders who have the power to arbitrate disputes. In fact, those who attempt to broker peace are often killed. As a consequence, once a serious conflict arises, there is no truly effective means of settling the dispute.

The most common cause of murder in hunter-gatherer communities involves matters of sex, adultery, or jealousy.

This form of capital punishment domesticated us

Thursday, February 2nd, 2023

Rob Henderson explains the self-domestication hypothesis, discussed by Harvard anthropologist Richard Wrangham in The Goodness Paradox:

The idea is that humans domesticated each other. Within hunter-gatherer communities, whenever aggressive or disagreeable males attempted to exert unwelcome dominance, other males would conspire to kill them.

Early human communities selected against reactive aggression: arrogance, bullying, random violence, and the monopolizing of food and females.

Over time, early humans eliminated those who were overtly aggressive. They killed or ostracized upstarts hungry for power: men with aggressive political ambitions. Other men would quietly conspire to collectively murder troublesome males.

They were good at this, because they were well-practiced at killing large-bodied mammals during a hunt. Humans are large-bodied mammals.

This form of capital punishment domesticated us.

Wrangham compared the level of within-group conflict among hunter-gatherer humans to that of chimpanzees. Chimps are 150 to 550 times more likely than humans to commit violence against their peers.

Humans are far gentler to members of their own community than chimps are, thanks to our ancestors and their ability to plan organized murder.

Our ancestors were polygynous until about three hundred thousand years ago

Tuesday, January 31st, 2023

Rob Henderson opens his piece on Reverse Dominance Hierarchies with a quote from Blueprint: The Evolutionary Origins of a Good Society, by Nicholas A. Christakis:

Our ancestors were polygynous until about three hundred thousand years ago, primarily monogamous until about ten thousand years ago, primarily polygynous again until about two thousand years ago, and primarily monogamous since then.

Henderson continues:

Homo sapiens in our current form arose around 300,000 years ago. Out of those 300,000 years, humans spent only about 8,000 in primarily polygynous arrangements.

So for 97% of our history, humans have primarily been monogamous.
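
A quick sketch of the arithmetic behind that 97% figure, using the period boundaries from the Christakis quote above (the figures are as rough as they are in the source):

```python
# Rough check of the percentages quoted above, using the period
# boundaries from the Christakis excerpt (approximate, like the source).
total_years = 300_000                 # modern Homo sapiens
polygynous_years = 10_000 - 2_000     # the primarily polygynous window
monogamous_share = (total_years - polygynous_years) / total_years
print(f"{monogamous_share:.0%}")      # ~97%
```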

From their perch in the heavens they could witness solemn oaths between the men of the steppe

Monday, January 23rd, 2023

Razib Khan describes the whirlwind of wagons that swept through Eurasia 5,000 years ago:

This eruption of warrior ferocity five thousand years ago was triggered by an economic revolution that swept across Eurasia, the advent of an unbeatable new cultural toolkit that finally harnessed the full productive potential of the cattle, sheep and goats that had long been viewed as simply mobile meat lockers in agricultural societies. Though these animals had already been domesticated by 8500 BC, it took millennia to perfect milk, cheese and wool production, and the harnessing of oxen as beasts of burden. North of the Black Sea, this revolution arrived around 3500 BC, as small groups of farmers huddling on river banks shifted from a mixed agro-pastoralist production system eking out a living cultivating wheat in an unforgivingly short growing season, to one of pure pastoral nomadism that turned over the vast grasslands around them to massive herds of animals.

Within a few generations, these people, known as the Yamnaya to archaeologists, were both grazing their cattle in the heart of Europe and driving their sheep up to the higher pastures of the Mongolian Altai uplands. This 5,000-mile distance (8,000 km) was spanned in just a few generations by the former farmers. Mobility was the first result of the switch to nomadism, as fleets of wagons began to roll across the steppe, like swarms of lumbering migratory villages, eternally bound for greener pastures. But far beyond a simple shift in aggregate economic production, many later knock-on effects were to reshape the culture of Eurasian societies, some of which continue to impact us down to the present.

Foremost, the status and power of males rose within these cultures, in tandem with the shift to nomadism. Almost all contemporary nomadic pastoralists are patrilineal and patriarchal, so identity and wealth are passed from father to son, just as with the Plains Indians. Men occupy all of the de jure political leadership positions, if not all de facto ones. This is in contrast to rooted farming cultures, which exhibit more diversity in social arrangements, from the patriarchal Eurasian river-valley civilizations to the matrilineal horticultural societies of tropical Africa and Asia. Even within India, the cultures of the wheat-based northern plains were strongly patrilineal, with wives being totally unrelated to their husbands, and always moving to the households of the men they were to marry. In contrast, in tropical Kerala far to the south many groups cultivating rice, bananas and coconuts were matrilineal, with husbands moving to the villages of their wives, and the primary male figure in some boys’ lives even routinely being their maternal uncle.

For nomads though, the switch to livestock as the primary source of wealth and status increased male clout and importance to universally high levels. Whether they are Asian Mongols or African Maasai, herder societies are dominated by male kindreds that control the movable wealth in the form of livestock, and it is their role to on the one hand protect the herds and drive them to more fertile pastures, and on the other steal animals from neighbors. In nomadic societies, paternal kin groups provided exclusively for their women and children. It was senior men in these groups that accumulated wealth and status they could pass on to sons, resulting in a very strong concern over paternity, so as to avoid investing in the offspring of men outside of their lineage. After all, these men strived for wealth and status in the first place to produce sons who would continue their legacy. And just as they were fixated on their sons, nomadic societies were also punctilious in revering the memories of their forefathers. The Bible’s older books are littered with “begats” a dozen deep, Norse sagas begin with a recitation of half a dozen steps of descent from father to son, and the earliest Indian texts are fixated on royal genealogies.

These ancient nomadic obsessions continue down to the present. The kingdom of Jordan is still ruled today by a direct paternal descendant of Hashim ibn Abd Manaf, Muhammad’s great-grandfather and the progenitor of the Banu Hashim clan to which he belonged, 1,525 years after he died. The lineage of Bodonchar Munkhag, Genghis Khan’s ancestor who founded the world conqueror’s clan two centuries before his conquests, still ruled Mongolia as late as 1920, nearly 700 years after Genghis Khan’s time.

But steppe patriarchy was reflected in more than just age-old customs and long-standing genealogies. It was more than an empire of ideas. Steppe patriarchy expressed itself in a material fashion. The Yamnaya nomads constructed massive burial mounds, kurgans, wherever they went. Within these vast mounds, they inhumed individuals of high status and greater power. The remains found skew heavily male. It is no surprise that just as they preferentially buried their honored male rulers under enormous mounds, these people worshiped male sky-gods. These male deities were culturally important, as from their perch in the heavens they could witness solemn oaths between the men of the steppe.

How do these families keep producing such talent, generation after generation?

Saturday, January 21st, 2023

Reading about Galton’s disappearance from collective memory reminded me of Scott Alexander’s piece on the secrets of the great families, which included this brief description of Galton’s great family:

Charles Darwin discovered the theory of evolution. His grandfather Erasmus Darwin also groped towards some kind of proto-evolutionary theory, made contributions in botany and pathology, and founded the influential Lunar Society of scientists. His other grandfather Josiah Wedgwood was a pottery tycoon who “pioneered direct mail, money back guarantees, self-service, free delivery, buy one get one free, and illustrated catalogues” and became “one of the wealthiest entrepreneurs of the 18th century”. Charles’ cousin Francis Galton invented the modern fields of psychometrics, meteorology, eugenics, and statistics (including standard deviation, correlation, and regression). Charles’ son Sir George Darwin, an astronomer, became president of the Royal Astronomical Society and another Royal Society fellow. Charles’ other son, Leonard Darwin, became a major in the army, a Member of Parliament, President of the Royal Geographical Society, and a mentor and patron to Ronald Fisher, another pioneer of modern statistics. Charles’ grandson Charles Galton Darwin invented the Darwin-Fowler method in statistics, the Darwin Curve in diffraction physics, Darwin drift in fluid dynamics, and was the director of the UK’s National Physical Laboratory (and vaguely involved in the Manhattan Project).

How, he asks, do these families keep producing such talent, generation after generation?

One obvious answer would be “privilege”. It’s not completely wrong; once the first talented individual makes a family rich and famous, it has a big leg up. And certainly once the actual talent in these families burns out, the next generation becomes semi-famous fashion designers and TV personalities and journalists, which seem like typical jobs for people who are well-connected and good at performing class, but don’t need to be amazingly bright. Sometimes they become politicians, another job which benefits from lots of name recognition.

But I’ve tried to avoid mentioning these careers, and focus on actually impressive achievements that are hard to fake. And also, none of these families except the Tagores were fantastically rich; there are thousands or millions of families richer than they are who don’t have any of their accomplishments. For example, Cornelius Vanderbilt’s many descendants are famous only for being very rich and doing rich people things very well (one of them won a yachting prize; another was an art collector; a third was Anderson Cooper).

The other obvious answer is “genetics!” I think this one is right, but there are some mysteries here that make it less of a slam dunk.

First, don’t genetics dilute quickly? You only share 6.25% of your genes with your great-great-grandfather.

[…]

The answer to the first question is really impressive assortative mating and having vast litters of children.

Take Niels Bohr. He’s a genius, but if he marries a merely does-well-at-Harvard level woman, his son will be less of a genius. But in fact he married Margrethe Nørlund. It’s not really clear how smart she was — she was described as Bohr’s “sounding-board” and “editor”, and that can hide a wide variety of different levels of contribution. But her brother was Niels Nørlund, a famous mathematician who invented the Nørlund–Rice integral and apparently got a mountain range named after him. He may have been the most mathematically gifted person in Denmark who was not himself a member of the Bohr family — so marrying his sister is a pretty big score on the “keep the family genetically good at math” front.

The Darwins were even more selective: they mostly married incestuously among themselves. Charles Darwin married his cousin Emma Wedgwood; Charles’ sister Caroline Darwin married her cousin Josiah Wedgwood III; their second cousin, Josiah Wedgwood IV, married his cousin, Ethel Bowen (and became a Baron!).

When the Darwins weren’t marrying each other, they were marrying others of their same intellectual caliber. There is at least one Darwin-Huxley marriage: that would be George Pember Darwin (a computer scientist, Charles’ great-grandson) and Angela Huxley (Thomas’ great-granddaughter) in 1964. But also, Margaret Darwin (Charles’ granddaughter) married Geoffrey Keynes (John Maynard Keynes’ brother, and himself no slacker — he pioneered blood transfusion in Britain). And John Maynard and Geoffrey’s sister, Margaret Keynes, married Archibald Hill, who won the Nobel Prize in Medicine. And let’s not forget Marie Curie’s daughter marrying a Nobel Peace Prize laureate.

If you find yourself marrying John Maynard Keynes’ brother, or Niels Nørlund’s sister, or future Nobel laureates, you’re going way above the bar of “just as selective as Harvard or Oxford”. In retrospect, maybe it was stupid of me to think these people would settle so low.

But also, all these people had massive broods, or litters, or however you want to describe it. Charles Darwin had ten children (insert “Darwinian imperative” joke here); Tagore family patriarch Debendranath Tagore had fourteen.

I said before that if an IQ 150 person marries an IQ 130 person, on average their kids will have IQ 124. But I think most of these people are doing better than IQ 130. I don’t know if Charles Darwin can find someone exactly as intelligent as he is, but let’s say IQ 145. And let’s say that instead of having one kid, they have 10. Now the average kid is 129, but the smartest of ten is 147 — ie you’ve only lost three IQ points per generation. And if you’re marrying other people from very smart families — not just other very smart people — then they might have already chopped off the non-genetic portion of their intelligence and won’t regress. This is starting to look more do-able.
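
As a rough illustration of that argument, here is a small Monte Carlo sketch. The regression coefficient toward the population mean (0.6) and the within-family spread of 12 IQ points are assumptions chosen to roughly reproduce the numbers quoted above, not published estimates:

```python
import numpy as np

# Monte Carlo sketch of the "smartest of ten children" argument above.
# The 0.6 regression coefficient and the within-family SD of 12 points
# are illustrative assumptions, picked to roughly match the quoted numbers.
rng = np.random.default_rng(0)

def simulate_children(parent_a, parent_b, n_children, n_trials=100_000):
    midparent = (parent_a + parent_b) / 2
    expected_child = 100 + 0.6 * (midparent - 100)  # regression to the mean
    kids = rng.normal(expected_child, 12, size=(n_trials, n_children))
    return kids.mean(), kids.max(axis=1).mean()

avg_kid, best_of_ten = simulate_children(150, 145, n_children=10)
print(f"average child: {avg_kid:.0f}")        # ~129
print(f"smartest of ten: {best_of_ten:.0f}")  # ~147
```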

[…]

One last thing, which I have no evidence for. Eliezer Yudkowsky sometimes talks about the idea of a Hero License — ie, most people don’t accomplish great things, because they don’t try to accomplish great things, because they don’t think of themselves as the kind of person who could accomplish great things.

[…]

It seems weird to think of “genius” as a career you can aim for. But maybe if your dad is Charles Darwin, you don’t just go into science. You also start making lots of big theories, speculating about lots of stuff. The fact that something is an unsolved problem doesn’t scare you; trying to solve the biggest unsolved problems is just what normal people do. Maybe if your dad founded a religion, and everyone else you know is named Somethingdranath Tagore and has accomplished amazing things, you start trying to write poetry to set the collective soul of your nation on fire.

What could be a more interesting question?

Friday, January 13th, 2023

There are people who are really trying to either kill or at least studiously ignore all of the progress in genomics, Stephen Hsu reports — from first-hand experience:

My research group solved height as a phenotype. Give us the DNA of an individual with no other information other than that this person lived in a decent environment—wasn’t starved as a child or anything like that—and we can predict that person’s height with a standard error of a few centimeters. Just from the DNA. That’s a tour de force.

Then you might say, “Well, gee, I heard that in twin studies, the correlation between twins in IQ is almost as high as their correlation in height. I read it in some book in my psychology class 20 years ago before the textbooks were rewritten. Why can’t you guys predict someone’s IQ score based on their DNA alone?”

Well, according to all the mathematical modeling and simulations we’ve done, we need somewhat more training data to build the machine learning algorithms to do that. But it’s not impossible. In fact, we predicted that if you have about a million genomes and the cognitive scores of those million people, you could build a predictor with a standard error of plus or minus 10 IQ points. So you can ask, “Well, since you guys showed you could do it for height, and since there are 30, or 40, or 50, different disease conditions that we now have decent genetic predictors for, why isn’t there one for IQ?”
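
As a toy illustration of the kind of sparse-regression predictor Hsu is describing, here is a minimal sketch on simulated genotypes. The sample size, SNP count, number of causal variants, and noise level are all illustrative assumptions, orders of magnitude smaller than real biobank-scale work:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

# Toy sketch of a sparse genomic predictor. All sizes below are
# illustrative assumptions, far smaller than a real biobank analysis.
rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 5_000, 2_000, 100

genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)  # 0/1/2 allele counts
effects = np.zeros(n_snps)
effects[rng.choice(n_snps, n_causal, replace=False)] = rng.normal(0, 1, n_causal)

genetic_value = genotypes @ effects
phenotype = genetic_value + rng.normal(0, genetic_value.std() * 0.7, n_people)  # non-genetic noise

X_tr, X_te, y_tr, y_te = train_test_split(genotypes, phenotype, random_state=0)
model = Lasso(alpha=0.05, max_iter=5_000).fit(X_tr, y_tr)

prediction_error = (y_te - model.predict(X_te)).std()
print(f"prediction standard error: {prediction_error:.1f} (phenotype SD: {phenotype.std():.1f})")
```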

Well, the answer is there’s zero funding. There’s no NIH, NSF, or any agency that would take on a proposal saying, “Give me X million dollars to genotype these people, and also measure their cognitive ability or get them to report their SAT scores to me.” Zero funding for that. And some people get very, very aggressive upon learning that you’re interested in that kind of thing, and will start calling you a racist, or they’ll start attacking you. And I’m not making this up, because it actually happened to me.

What could be a more interesting question? Wow, the human brain—that’s what differentiates us from the rest of the animal species on this planet. Well, to what extent is brain development controlled by DNA? Wouldn’t it be amazing if you could actually predict individual variation in intelligence from DNA just as we can with height now? Shouldn’t that be a high priority for scientific discovery? Isn’t this important for aging, because so many people undergo cognitive decline as they age? There are many, many reasons why this subject should be studied. But there’s effectively zero funding for it.

The group was elitist, but it was also meritocratic

Tuesday, January 10th, 2023

Sputnik’s success created an overwhelming sense of fear that permeated all levels of U.S. society, including the scientific establishment:

As John Wheeler, a theoretical physicist who popularized the term “black hole” would later tell an interviewer: “It is hard to reconstruct now the sense of doom when we were on the ground and Sputnik was up in the sky.”

Back on the ground, the event spurred a mobilization of American scientists unseen since the war. Six weeks after the launch of Sputnik, President Dwight Eisenhower revived the President’s Science Advisory Committee (PSAC). It was a group of 16 scientists who reported directly to him, granting them an unprecedented amount of influence and power. Twelve weeks after Sputnik, the Department of Defense launched the Advanced Research Projects Agency (ARPA), which was later responsible for the development of the internet. Fifteen months after Sputnik, the Office of the Director of Defense Research and Engineering (ODDRE) was launched to oversee all defense research. A 36-year-old physicist who worked on the Manhattan Project, Herb York, was named head of the ODDRE. There, he reported directly to the president and was given total authority over all defense research spending.

It was the beginning of a war for technological supremacy. Everyone involved understood that in the nuclear age, the stakes were existential.

It was not the first time the U.S. government had mobilized the country’s leading scientists. World War II had come to be known as “the physicists’ war.” It was physicists who developed proximity fuzes and the radar systems that rendered previously invisible enemy ships and planes visible, enabling them to be targeted and destroyed, and it was physicists who developed the atomic bombs that ended the war. The prestige conferred by their success during the war positioned physicists at the top of the scientific hierarchy. With the members of the Manhattan Project now aging, getting the smartest young physicists to work on military problems was of intense interest to York and the ODDRE.

Physicists saw the post-Sputnik era as an opportunity to do well for themselves. Many academic physicists more than doubled their salaries working on consulting projects for the DOD during the summer. A source of frustration to the physicists was that these consulting projects were awarded through defense contractors, who were making twice as much as the physicists themselves. A few physicists based at the University of California Berkeley decided to cut out the middleman and form a company they named Theoretical Physics Incorporated.

Word of the nascent company spread quickly. The U.S.’s elite physics community consisted of a small group of people who all went to the same small number of graduate programs and were faculty members at the same small number of universities. These ties were tightened during the war, when many of those physicists worked closely together on the Manhattan Project and at MIT’s Rad Lab.

Charles Townes, a Columbia University physics professor who would later win a Nobel Prize for his role in inventing the laser, was working for the Institute for Defense Analysis (IDA) at the time and reached out to York when he learned of the proposed company. York knew many of the physicists personally and immediately approved $250,000 of funding for the group. Townes met with the founders of the company in Los Alamos, where they were working on nuclear-rocket research. Appealing to their patriotism, he convinced them to make their project a department of IDA.

A short while later the group met in Washington D.C., where they fleshed out their new organization. They came up with a list of the top people they would like to work with and invited them to Washington for a presentation. Around 80 percent of the people invited joined the group; they were all friends of the founders, and they were all high-level physicists. Seven of the first members, or roughly one-third of its initial membership, would go on to win the Nobel Prize. Other members, such as Freeman Dyson, who published foundational work on quantum field theory, were some of the most renowned physicists to never receive the Nobel.

The newly formed group was dubbed “Project Sunrise” by ARPA, but the group’s members disliked the name. The wife of one of the founders proposed the name JASON, after the Greek mythological hero who led the Argonauts on a quest for the golden fleece. The name stuck and JASON was founded in December 1959, with its members being dubbed “Jasons.”

The key to the JASON program was that it formalized a unique social fabric that already existed among elite U.S. physicists. The group was elitist, but it was also meritocratic. As a small, tight-knit community, many of the scientists who became involved in JASON had worked together before. It was a peer network that maintained strict standards for performance. With permission to select their own members, the Jasons were able to draw from those who they knew were able to meet the expectations of the group.

This expectation superseded existing credentials; Freeman Dyson never earned a PhD, but he possessed an exceptionally creative mind. Dyson became known for his involvement with Project Orion, which aimed to develop a starship design that would be powered through a series of atomic bombs, as well as his Dyson Sphere concept, a hypothetical megastructure that completely envelops a star and captures its energy.

Another Jason was Nick Christofilos, an engineer who developed particle accelerator concepts in his spare time when he wasn’t working at an elevator maintenance business in Greece. Christofilos wrote to physicists in the U.S. about his ideas, but was initially ignored. He was later offered a job at an American research laboratory when physicists found that some of the ideas in his letters pre-dated recent advances in particle accelerator design. Dyson’s and Christofilos’s lack of formal qualifications would preclude an academic research career today, but the scientific community at the time was far more open-minded.

JASON was founded near the peak of what became known as the military-industrial complex. When President Eisenhower coined this term during his farewell address in 1961, military spending accounted for nine percent of the U.S. economy and 52 percent of the federal budget; 44 percent of the defense budget was being spent on weapons systems.

But the post-Sputnik era entailed a golden age for scientific funding as well. Federal money going into basic research tripled from 1960 to 1968, and research spending more than doubled overall. Meanwhile, the number of doctorates awarded in physics doubled. Again, meritocratic elitism dominated: over half of the funding went to 21 universities, and these universities awarded half of the doctorates.

With a seemingly unlimited budget, the U.S. military leadership had started getting some wild ideas. One general insisted a moon base would be required to gain the ultimate high ground. Project Iceworm proposed to build a network of mobile nuclear missile launchers under the Greenland ice sheet. The U.S. Air Force sought a nuclear-powered supersonic bomber under Project WS-125 that could take off from U.S. soil and drop hydrogen bombs anywhere in the world. There were many similar ideas and each military branch produced analyses showing that not only were the proposed weapons technically feasible, but they were also essential to winning a war against the Soviet Union.

Prior to joining the Jasons, some of its scientists had made radical political statements that could make them vulnerable to having their analysis discredited. Fortunately, JASON’s patrons were willing to take a risk and overlook political offenses in order to ensure that the right people were included in the group. Foreseeing the potential political trap, Townes proposed a group of senior scientific advisers, about 75 percent of whom were well-known conservative hawks. Among this group was Edward Teller, known as the “father of the hydrogen bomb.” This senior layer could act as a political shield of sorts in case opponents attempted to politically tarnish JASON members.

Every spring, the Jasons would meet in Washington D.C. to receive classified briefings about the most important problems facing the U.S. military, then decide for themselves what they wanted to study. JASON’s mandate was to prevent “technological surprise,” but no one at the Pentagon presumed to tell them how to do it.

In July, the group would reconvene for a six-week “study session,” initially alternating yearly between the east and west coasts. Members later recalled these as idyllic times for the Jasons, with the group becoming like an extended family. The Jasons rented homes near each other. Wives became friends, children grew up like cousins, and the community put on backyard plays at an annual Fourth of July party. But however idyllic their off hours, the physicists’ workday revolved around contemplating the end of the world. Questions concerning fighting and winning a nuclear war were paramount. The ideas the Jasons were studying approached the level of what had previously been science fiction.

Some of the first JASON studies focused on ARPA’s Defender missile defense program. Their analysis furthered ideas involving the detection of incoming nuclear attacks through the infrared signature of missiles, applied newly-discovered astronomical techniques to distinguish between nuclear-armed missiles and decoys, and worked on the concept of shooting what were essentially directed lightning bolts through the atmosphere to destroy incoming nuclear missiles.

The lightning bolt idea, known today as directed energy weapons, came from Christofilos, who was described by an ARPA historian as mesmerizing JASON physicists with the “kind of ideas that nobody else had.” Some of his other projects included a fusion machine called Astron, a high-altitude nuclear explosion test codenamed Operation Argus that was dubbed the “greatest scientific experiment ever conducted,” and explorations of a potential U.S. “space fleet.”

The Jasons’ analysis on the effects of nuclear explosions in the upper atmosphere, water, and underground, as well as methods of detecting these explosions, was credited with being critical to the U.S. government’s decision to sign the Limited Test Ban Treaty with the Soviet Union. Because of their analysis, the U.S. government felt confident it could verify treaty compliance; the treaty resulted in a large decline in the concentration of radioactive particles in the atmosphere.

The success of JASON over its first five years increased its influence within the U.S. military and spurred attempts by U.S. allies to copy the program. Britain tried for years to create a version of JASON, even enlisting the help of JASON’s leadership. But the effort failed: British physicists simply did not seem to desire involvement. Earlier attempts by British leaders like Winston Churchill to create a British MIT had run into the same problems.

The difference was not ability, but culture. American physicists did not have a disdain for the applied sciences, unlike their European peers. They were comfortable working as advisors on military projects and were employed by institutions that were dependent on DOD funding. Over 20 percent of Caltech’s budget in 1964 came from the DOD, and it was only the 15th largest recipient of funding; MIT was first and received twelve times as much money. The U.S. military and scientific elite were enmeshed in a way that had no parallel in the rest of the world then or now.

If you sense that NSF or NIH have a view on something, it’s best not to fight city hall

Saturday, January 7th, 2023

Stephen Hsu gives an example of how politics constrains the scientific process:

This individual is one of the most highly decorated, well-known climate simulators in the world. To give you his history, he did a PhD in general relativity in the UK and then decided he wanted to do something else, because he realized that even though general relativity was interesting, he didn’t feel like he was going to have a lot of impact on society. So he got involved in meteorology and climate modeling and became one of the most well known climate modelers in the world in terms of prizes and commendations. He’s been a co-author on all the IPCC reports going back multiple decades. So he’s a very well-known guy. But he was one of the authors of a paper in which he made the point that climate models are still far from perfect.

To do a really good job, you need to have a small basic cell size, small enough to capture the features being modeled inside the simulation. The cell size you actually need is quite a bit smaller still, because of all kinds of nonlinear phenomena: turbulence, convection, the transport of heat and moisture, and everything that goes into the making of weather and climate.

And so he made this point that we’re nowhere near actually being able to properly simulate the physics of these very important features. It turns out that the transport of water vapor, which is related to the formation of clouds, is important. And it turns out high clouds reflect sunlight, and have the opposite sign effect on climate change compared to low clouds, which trap infrared radiation. So whether moisture in the atmosphere or additional carbon in the atmosphere causes more high cloud formation versus more low cloud formation is incredibly important, and it carries the whole day in these models.

In no way are these microphysics of cloud formation being modeled right now. And anybody who knows anything knows this. And the people who really understand physics and do climate modeling know this.

So he wrote a paper saying that governments are going to spend billions, maybe trillions of dollars on policy changes or geoengineering. If you’re trying to fix the climate change problem, can you at least spend a billion dollars on the supercomputers that we would need to really do a more definitive job forecasting climate change?

And so that paper he wrote was controversial because people in the community maybe knew he was right, but they didn’t want him talking about this. But as a scientist, I fully support what he’s trying to do. It’s intellectually honest. He’s asking for resources to be spent where they really will make a difference, not in some completely speculative area where we’re not quite sure what the consequences will be. This is clearly going to improve climate modeling and is clearly necessary to do accurate climate modeling. But the anecdote gives you a sense of how fraught science is when there are large scale social consequences. There are polarized interest groups interacting with science.

[…]

It was controversial because, in a way, he was airing some well known dirty laundry that all the experts knew about. But many of them would say it’s better to hide laundry for the greater good, because a bad guy—somebody who’s very anti-CO2 emissions reduction—could seize on this guy’s article and say “Look, the leading guy in your field says that you can’t actually do the simulations he wants, and yet you’re trying to shove some very precise policy goal down my throat. This guy’s revealing those numbers have literally no basis.” That would be an extreme version of the counter-utilization of my colleague’s work.

[…]

In my lifetime, the way science is conducted has changed radically, because now it’s accepted—particularly by younger scientists—that we are allowed to make ad hominem attacks on people based on what could be their entirely sincere scientific belief. That was not acceptable 20 or 30 years ago. If you walked into a department, even if it had something to do with the environment or human genetics or something like that, people were allowed to have their contrary opinion as long as the arguments they made were rational and supported by data. There was not a sense that you’re allowed to impute bad moral character to somebody based on some analytical argument that they’re making. It was not socially acceptable to do that. Now people are in danger of losing their jobs.

[…]

I could list a bunch of factors that I think contributed, and one of them is that scientists are under a lot of pressure to get money to fund their labs and pay their graduate students. If you sense that NSF or NIH have a view on something, it’s best not to fight city hall. It’s like fighting the Fed—you’re going to lose. So that enforces a certain kind of conformism.

[…]

As far as how science relates to the outside world, here’s the problem: for some people, when science agrees with their cherished political belief, they say “Hey, you know what? This is the Vulcan Science Academy, man. These guys know what they’re doing. They debated it, they looked at all the evidence, that’s a peer-reviewed paper, my friend—it was reviewed by peers. They’re real scientists.” When they like the results, they’re going to say that.

When they don’t like it, they say, “Oh, come on, those guys know they have to come to that conclusion or they’re going to lose their NIH grant. These scientists are paid a lot of money now and they’re just feathering their own nests, man. They don’t care about the truth. And by the way, papers in this field don’t replicate. Apparently, if you do a study where you look back at the most prominent papers over the last 10 years, and you check to see whether subsequent papers which were better powered, had better technology, and more sample size actually replicated, the replication rate was like 50 percent. So, you can throw half the papers that are published in top journals in the trash.”

Strange things have been happening to the human body over the last few decades

Thursday, January 5th, 2023

Strange things have been happening to the human body over the last few decades:

Why have human body temperatures declined in the United States over the last 150 years? Or why has the age of first puberty been declining among teenagers since the mid-nineteenth century, from 16.5 years in 1840 to 13 years in 1995?

Or—to take a more troubling and immediate case—why have rates of autism been increasing so dramatically? After having been very rare a few decades prior, the rate has grown from about 1 in 150 children in 2000 to 1 in 44 in 2018, according to the Centers for Disease Control and Prevention. The standard explanation for this increase—changing diagnostic criteria and increased awareness—simply does not explain how sustained the uptick has been, nor does it explain the first-hand accounts of the increase by teachers. In fact, studies have found that changing diagnostic criteria account for only one-fourth of observed increases. Something else is causing the rest.

Or consider something as seemingly straightforward as obesity. In 1975, about 12 percent of American adults were obese; now that figure sits above 40 percent. The standard explanations of the remarkable increase in obesity over the last few decades—the “big two,” more calories and less physical exertion—have an intuitive appeal, but they do not seem to capture the full picture. Between 1999 and 2017, per capita caloric intake among Americans did not change, while the rate of obesity increased by about a third. The increase is so dramatic that a drop-off in physical exertion in so brief a period is unlikely to be the sole explanation, especially since the majority of human energy expenditure is non-behavioral.

Obesity thus remains, in the words of an article in the American Journal of Clinical Nutrition, an “unexplained epidemic.” This is why many scientists have sought to locate contributing factors to the secular increase in obesity, from the decline in cigarette use to increases in atmospheric CO2 levels.

There are many conditions like this: allergies, irritable bowel syndrome, eczema, and autoimmune conditions like juvenile arthritis are other notable examples. These are not the well-known “diseases of modernity,” like heart disease or Type 2 diabetes, whose causes are reasonably well-known. Disturbingly, there seem to be connections between all of these conditions: the “autistic enterocolitis” gut disorder that resembles Crohn’s disease in autistic children, the obesity-asthma link, the irritable bowel syndrome-eczema link, the eczema-allergies link. These “diseases of postmodernity” appear to be a package deal: autistic children report higher rates of stomach pain, and obese people are more likely to develop eczema-like skin diseases. There is some common root underlying these conditions.

A wealth of scientific work mostly done in the last decade by scientists like Martin Blaser of Rutgers may point to the answer. The origin lies in the extraordinary pressure we have been placing on a part of the body about which we know and think little: the microbiome of the human gut.

[…]

We have known for a long time that antibiotics induce rapid weight gain in everything from mice to humans. The specific dynamic—antibiotics cause gut dysbiosis, and gut dysbiosis leads to obesity and other diseases—is now becoming increasingly clear. Similar studies for conditions like asthma or juvenile arthritis, all conducted only in the last few years, have found the same link.

This is especially worrying because antibiotics are everywhere.

[…]

Consider animal agriculture, the main force for antibiotic pollution in the United States. Antibiotics are now crucial to the industrial production of chicken, pig, and cow protein; in recent years antibiotics have even begun to be used in aquaculture. The reasons are simple: antibiotics used prophylactically can prevent and suppress infectious diseases, like bovine footrot and anaplasmosis, that are common in the claustrophobic quarters of concentrated animal-feeding operations (CAFOs). More insidiously, antibiotics can make livestock larger by disrupting their gut biomes and metabolisms, allowing them to be slaughtered at younger ages and at greater weights. In 2019, of the antibiotics sold in the United States, only about a third went to humans, with the rest consumed by livestock.

Antibiotics have been used in American animal agriculture since the late 1940s. It was then that Thomas Jukes, a biologist for the pharmaceutical company Lederle Laboratories, discovered that treating chickens with even trace amounts of the antibiotic chlortetracycline—a drug that had been discovered in 1945 at Lederle—caused them to gain much more weight. The more chlortetracycline the birds got, the larger they were; the chickens that had gotten the highest doses weighed two and a half times more than the ones that hadn’t gotten anything.

[…]

Per capita consumption of chicken—once a rare and expensive kind of meat, typically consumed as a Sunday treat—more than tripled between 1960 and 2020, growing from a relatively marginal part of the American diet in the first few decades of the twentieth century into the country’s premier staple protein.

[…]

As with chickens, the biological effects on cows were significant. The year that monensin was licensed, the average weight of cows at slaughter was 1,047 pounds; by 2005, it had grown thirty percent, to 1,369 pounds. By 2017, American cattle producers used about 171 milligrams of antibiotic per kilogram of livestock—four times as much as in France, and six times as much as in the United Kingdom.

[…]

As a result of this mass pharmaceutical use in animal agriculture, natural bodies of water now contain remarkable amounts of antibiotic waste. One study of a river in Colorado found that “the only site at which no antibiotics were detected was the pristine site in the mountains before the river had encountered urban or agricultural landscapes.” Antibiotics like macrolides and tetracyclines have been found in chlorinated drinking water, while the antibiotic triclosan has been found in rivers and streams around the world. This effluent trickles into everything else: research has detected uptake of veterinary antibiotics in carrots and lettuce, as well as in human breast milk.

[…]

It was not until 2017, well after European countries had strictly limited the use of antibiotics, that the FDA was finally able to ban the use of antibiotics for growth promotion in livestock, mandating that all antibiotics given to cattle needed a prescription. After peaking in 2015, antibiotic use on farms has declined by about 40 percent, with most of the effect taking place in the year of the ban.

But antibiotic use remains elevated, above an average of 100 milligrams per kilogram per year—far more than the 50 milligram limit that reports on antibiotic resistance have proposed, and several times more than is normal in European countries like France or Norway. The reason, Lewis believes, goes back to his anaplasmosis episode. He believes that anaplasmosis is commonly used as a pretext for administering growth-promoting antibiotics, and that this is an open secret among farmers and livestock veterinarians. The “motorway veterinarian,” dependent on the business of growth-hungry farmers, remains alive and well.

[…]

One study in Science found that 42 percent of lots that were certified by the Department of Agriculture as “Raised Without Antibiotics” actually contained cattle that had been given antibiotics, with five percent of lots being composed entirely of cattle raised on antibiotics.

The Overfitted Brain Hypothesis explains why dreams are so dreamlike

Wednesday, January 4th, 2023

None of the leading hypotheses about the purpose of dreaming are convincing, Erik Hoel explains:

E.g., some scientists think the brain replays the day’s events during dreams to consolidate the day’s new memories with the existing structure. Yet, such theories face the seemingly insurmountable problem that only in the most rare cases do dreams involve specific memories. So if true, they would mean that the actual dreams themselves are merely phantasmagoric effluvia, a byproduct of some hazily-defined neural process that “integrates” and “consolidates” memories (whatever that really means). In fact, none of the leading theories of dreaming fit well with the phenomenology of dreams—what the experience of dreaming is actually like.

First, dreams are sparse in that they are less vivid and detailed than waking life. As an example, you rarely if ever read a book or look at your phone screen in dreams, because the dreamworld lacks the resolution for tiny scribblings or icons. Second, dreams are hallucinatory in that they are often unusual, either by being about unlikely events or by involving nonsensical objects or borderline categories. People who are two people, places that are both your home and a spaceship. Many dreams could be short stories by Kafka, Borges, Márquez, or some other fabulist. A theory of dreams must explain why every human, even the most unimaginative accountant, has within them a surrealist author scribbling away at night.

To explain the phenomenology of dreams I recently outlined a scientific theory called the Overfitted Brain Hypothesis (OBH). The OBH posits that dreams are an evolved mechanism to avoid a phenomenon called overfitting. Overfitting, a statistical concept, is when a neural network learns overly specifically, and therefore stops being generalizable. It learns too well. For instance, artificial neural networks have a training data set: the data that they learn from. All training sets are finite, and often the data comes from the same source and is highly correlated in some non-obvious way. Because of this, artificial neural networks are in constant danger of becoming overfitted. When a network becomes overfitted, it will be good at dealing with the training data set but will fail at data sets it hasn’t seen before. All learning is basically a tradeoff between specificity and generality in this manner. Real brains, in turn, rely on the training set of lived life. However, that set is limited in many ways, highly correlated in many ways. Life alone is not a sufficient training set for the brain, and relying solely on it likely leads to overfitting.

Common practices in deep learning, where overfitting is a constant concern, lend support to the OBH. One such practice is that of “dropout,” in which a portion of the training data or network itself is made sparse by dropping out some of the data, which forces the network to generalize. This is exactly like the sparseness of dreams. Another example is the practice of “domain randomization,” where during training the data is warped and corrupted along particular dimensions, often leading to hallucinatory or fabulist inputs. Other practices include things like feeding the network its own outputs when it’s undergoing random or biased activity.
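
For concreteness, here is a minimal sketch of the two practices mentioned above, written in PyTorch; the tiny network, the noise level, and the random rescaling are illustrative assumptions, not a model of anything biological:

```python
import torch
import torch.nn as nn

# Minimal sketch of the two regularization practices described above.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # "dropout": randomly zero half the activations during training
    nn.Linear(64, 10),
)

def domain_randomize(batch, noise_scale=0.3):
    """Warp and corrupt inputs along random dimensions, loosely
    analogous to 'domain randomization' during training."""
    jitter = torch.randn_like(batch) * noise_scale
    scale = 1 + 0.2 * torch.rand(batch.shape[0], 1)  # random per-example rescaling
    return batch * scale + jitter

x = torch.randn(8, 32)   # a stand-in training batch
model.train()            # dropout is active only in training mode
outputs = model(domain_randomize(x))
```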

What the OBH suggests is that dreams represent the biological version of a combination of such techniques, a form of augmentation or regularization that occurs after the day’s learning—but the point is not to reinforce the day’s memories, but rather to combat the detrimental effects of their memorization. Dreams warp and play with always-ossifying cognitive and perceptual categories, stress-testing and refining. The inner fabulist shakes up the categories of the plastic brain. The fight against overfitting every night creates a cyclical process of annealing: during wake the brain fits to its environment via learning, then, during sleep, the brain “heats up” through dreams that prevent it from clinging to suboptimal solutions and models and incorrect associations.

The OBH fits with the evidence from human sleep research: sleep seems to be associated not so much with assisting pure memorization, as other hypotheses about dreams would posit, but with an increase in abstraction and generalization. There’s also the famous connection between dreams and creativity, which also fits with the OBH. Additionally, if you stay awake too long you will begin to hallucinate (perhaps because your perceptual processes are becoming overfitted). Most importantly, the OBH explains why dreams are so, well, dreamlike.

This connects to another question. Why are we so fascinated by things that never happened?

If the OBH is true, then it is very possible writers and artists, not to mention the entirety of the entertainment industry, are in the business of producing what are essentially consumable, portable, durable dreams. Literally. Novels, movies, TV shows—it is easy for us to suspend our disbelief because we are biologically programmed to surrender it when we sleep.

[…]

Just like dreams, fictions and art keep us from overfitting our perception, models, and understanding of the world.

[…]

There is a sense in which something like the hero myth is actually more true than reality, since it offers a generalizability impossible for any true narrative to possess.

Galton’s disappearance from collective memory would have been surprising to his contemporaries

Tuesday, January 3rd, 2023

Some people get famous for discovering one thing, Adam Mastroianni notes, like Gregor Mendel:

Some people get super famous for discovering several things, like Einstein and Newton.

So surely if one person came up with a ton of different things — say, correlation, standard deviation, regression to the mean, “nature vs. nurture,” questionnaires, twin studies, the wisdom of the crowd, fingerprinting, the first map of Namibia, synesthesia, weather maps, anticyclones, the best method to cut a round cake, and eugenics (yikes) — they’d be super DUPER famous.

But most people have never heard of Sir Francis Galton (1822-1911). Psychologists still use many of the tools he developed, but the textbooks barely mention him. Charles Darwin, Galton’s half-cousin, seems to get a new biography every other year; Galton has had three in a century.

Galton’s disappearance from collective memory would have been surprising to his contemporaries. Karl Pearson (of regression coefficient fame) thought Galton might ultimately be bigger than Darwin or Mendel:

Twenty years ago, no one would have questioned which was the greater man [...] If Darwinism is to survive the open as well as covert attacks of the Mendelian school, it will only be because in the future a new race of biologists will arise trained up in Galtonian method and able to criticise from that standpoint both Darwinism and Mendelism, for both now transcend any treatment which fails to approach them with adequate mathematical knowledge [...] Darwinism needs the complement of Galtonian method before it can become a demonstrable truth…

So, what happened? How come this dude went from being mentioned in the same breath as Darwin to never being mentioned at all? Psychologists are still happy to talk about the guy who invented “penis envy,” so what did this guy do to get scrubbed from history?

I started reading Galton’s autobiography, Memories of My Life, because I thought it might be full of juicy, embarrassing secrets about the origins of psychology. I’m telling you about it today because it is, and it’s full of so much more. There are adventures in uncharted lands, accidental poisonings, brushes with pandemics, some dabbling in vivisection, self-induced madness, a dash of blood and gore, and some poo humor for the lads. And, ultimately, a chance to wonder whether moral truth exists and how to find it.

Readers of this blog — certainly the ones of proper breeding — will already know what Galton did “wrong” to end up down the memory hole.

I felt a bit embarrassed that I’d never read his biography, but I doubt I’ve ever come across a physical copy.

A young man entering full-time research interested in warfare would find himself stymied at every turn

Friday, December 30th, 2022

Why are archaeologists taking to anonymous online spaces to practice their craft?

In part because we have an inflation of young people, educated to around the postgraduate level, who no longer see a future in the academy, where jobs are almost non-existent, and who are acutely aware of the damage a single remark or online comment can do to a career. But also because we have a university research system that has drifted towards a political position that defies a common sense understanding of human nature and history. A young man entering full-time research interested in warfare, conflict, the origins of different peoples, how borders and boundaries have changed through time, grand narratives of conquest or expansion, would find himself stymied at every turn and regarded with great suspicion. If he didn’t embrace the critical studies fields of postcolonial thought, feminism, gender and queer politics or antiracism, he might find himself shut out from a career altogether. Much easier instead to go online and find the ten other people on Earth who share his interests, who are concerned with what the results mean, rather than their wider current political and social ramifications.

Science has been running an experiment on itself

Thursday, December 29th, 2022

For the last 60 years or so, Adam Mastroianni notes, science has been running an experiment on itself:

The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.

Most of those folks didn’t even realize they were in an experiment. Many of them, including me, weren’t born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”

This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.

(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)

That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.

Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.

The results are in. It failed.

[…]

Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?

It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In one such study reviewers caught 30% of the major flaws, in another they caught 25%, and in a third they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.
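
To make that study design concrete, here is a toy version of a planted-error experiment in Python. Every number in it is invented for illustration; the real studies used actual manuscripts and human reviewers rather than a fixed per-flaw probability.

    # Toy sketch of the planted-error studies described above.
    # All numbers are invented for illustration only.
    import random

    random.seed(0)

    def run_study(n_papers, flaws_per_paper, p_catch):
        """Plant known flaws, 'review' them, and report the fraction caught."""
        planted = n_papers * flaws_per_paper
        caught = sum(random.random() < p_catch for _ in range(planted))
        return caught / planted

    # With a per-flaw detection probability of 0.3, the simulated catch rate
    # should land near the ~30% figures reported in the studies quoted above.
    print(f"{run_study(n_papers=50, flaws_per_paper=8, p_catch=0.3):.0%}")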

In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time.

[…]

When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”

[…]

If you look at what scientists actually do, it’s clear they don’t think peer review really matters.

First: if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal.

[…]

Second: once a paper gets published, we shred the reviews. A few journals publish reviews; most don’t. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place.

And third: scientists take unreviewed work seriously without thinking twice. We read “preprints” and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, “So interesting! I can’t wait for it to be peer reviewed so I can find out if it’s true.”

[…]

Lack of effort isn’t the problem: remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn’t seem to make them any better. Neither does training them.
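
For a rough sense of where a “years of labor” figure like that comes from, here is some back-of-the-envelope arithmetic. The inputs are my own illustrative assumptions, not the data behind the actual estimate.

    # Back-of-the-envelope arithmetic for a "years of labor" figure.
    # Both inputs are illustrative assumptions, not measured data.
    reviews_per_year = 22_000_000   # assumed number of reviews written annually, worldwide
    hours_per_review = 6            # assumed time spent on each review
    hours_per_calendar_year = 24 * 365

    total_hours = reviews_per_year * hours_per_review
    print(f"{total_hours / hours_per_calendar_year:,.0f} years")  # roughly 15,000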

He got some nasty comments and came up with some reasons why people got so nasty:

First: the third-person effect, which is people’s tendency to think that other people are more susceptible to persuasion than they themselves are. I am a savvy consumer; you are a knucklehead who can be duped into buying Budweiser by a pair of boobs. I evaluate arguments rationally; you listen to whoever is shouting the loudest. I won’t be swayed by a blog post; you will.

[…]

And second: social dominance. Scientists may think they’re egalitarian because they don’t believe in hierarchies based on race, sex, wealth, and so on. But some of them believe very strongly in hierarchy based on prestige. In their eyes, it is right and good for people with more degrees, bigger grants, and fancier academic positions to be above people who have fewer of those things. They don’t even think of this as hierarchy, exactly, because that sounds like a bad word. To them, it’s just the natural order of things.

(To see this in action, watch what happens when two academic scientists meet. The first things they’ll want to know about each other are 1) career stage — grad student, postdoc, professor, etc., and 2) institution. These are the X and Y coordinates that allow you to place someone in the hierarchy: professor at elite institution gets lots of status, grad student at no-name institution gets none. Older-looking graduate students sometimes have the experience of being mistaken for professors, and professors will chat to them amiably until they realize their mistake, at which point they will, horrified, high-tail it out of the conversation.)

People who are all-in on a hierarchy don’t like it when you question its central assumptions. If peer review doesn’t work or is even harmful to science, it suggests the people at the top of the hierarchy might be naked emperors, and that’s upsetting not just to the naked emperors themselves, but also the people who are diligently disrobing in the hopes of becoming one. In fact, it’s more than upsetting — it’s dangerous, because it could tip over a ladder that has many people on it.

Vibrating the water has the effect of “frustrating” the water molecules nearest to the electrodes

Thursday, December 22nd, 2022

“Green hydrogen” is created through electrolysis, which goes much faster, RMIT researchers found, when you apply high-frequency sound waves:

So why does this process work so much better when the RMIT team plays 10-MHz hybrid sound waves? Several reasons, according to a research paper just published in the journal Advanced Energy Materials.

Firstly, vibrating the water has the effect of “frustrating” the water molecules nearest to the electrodes, shaking them out of the tetrahedral networks they tend to settle in. This results in more “free” water molecules that can make contact with catalytic sites on the electrodes.

Secondly, since the separate gases collect as bubbles on each electrode, the vibrations shake the bubbles free. That accelerates the electrolysis process, because those bubbles block the electrode’s contact with the water and limit the reaction. The sound also helps by generating hydronium (positively charged water ions), and by creating convection currents that help with mass transfer.

In their experiments, the researchers chose to use electrodes that ordinarily perform pretty poorly. Electrolysis is typically done using rare and expensive platinum or iridium metals and powerfully acidic or basic electrolytes for the best reaction rates, but the RMIT team went with cheaper gold electrodes and an electrolyte with a neutral pH level. As soon as the team turned on the sound vibrations, the current density and reaction rate jumped by a remarkable factor of 14.

So this isn’t a situation where, for a given amount of energy put into an electrolyzer, you get 14 times more hydrogen. It’s a situation where the water gets split into hydrogen and oxygen more quickly and easily. And that does have an impressive effect on the overall efficiency of an electrolyzer. “With our method, we can potentially improve the conversion efficiency leading to a net-positive energy saving of 27%,” said Professor Leslie Yeo, one of the lead researchers.
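
One way to see the difference between “more current” and “more efficient”: the electrical energy needed to make a kilogram of hydrogen scales with the cell voltage, so a 27% energy saving amounts to pushing the same current at roughly 27% lower voltage. Here is a minimal sketch of that arithmetic in Python; the two voltages are illustrative assumptions, not figures from the RMIT paper.

    # Energy to produce 1 kg of H2 scales linearly with cell voltage.
    # The two voltages below are illustrative assumptions, not measured values.
    F = 96485.0              # Faraday constant, C per mole of electrons
    MOLAR_MASS_H2 = 2.016    # g/mol

    def kwh_per_kg_h2(cell_voltage):
        """Electrical energy (kWh) to electrolyze 1 kg of H2 at a given cell voltage."""
        moles_h2 = 1000.0 / MOLAR_MASS_H2     # moles of H2 in 1 kg
        charge = moles_h2 * 2 * F             # 2 electrons transferred per H2 molecule
        return charge * cell_voltage / 3.6e6  # joules -> kWh

    baseline = kwh_per_kg_h2(2.00)    # assumed conventional operating voltage
    assisted = kwh_per_kg_h2(1.46)    # assumed voltage with acoustic assistance

    print(f"baseline: {baseline:.0f} kWh/kg, assisted: {assisted:.0f} kWh/kg")
    print(f"energy saving: {1 - assisted / baseline:.0%}")   # ~27%

On that reading, the factor of 14 is about how fast hydrogen comes off the electrodes at a given voltage, while the 27% is about how much electrical energy each kilogram of hydrogen ends up costing.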