The Overfitted Brain Hypothesis explains why dreams are so dreamlike

Wednesday, January 4th, 2023

None of the leading hypotheses about the purpose of dreaming are convincing, Erik Hoel explains:

E.g., some scientists think the brain replays the day’s events during dreams to consolidate the day’s new memories with the existing structure. Yet, such theories face the seemingly insurmountable problem that only in the most rare cases do dreams involve specific memories. So if true, they would mean that the actual dreams themselves are merely phantasmagoric effluvia, a byproduct of some hazily-defined neural process that “integrates” and “consolidates” memories (whatever that really means). In fact, none of the leading theories of dreaming fit well with the phenomenology of dreams—what the experience of dreaming is actually like.

First, dreams are sparse in that they are less vivid and detailed than waking life. As an example, you rarely if ever read a book or look at your phone screen in dreams, because the dreamworld lacks the resolution for tiny scribblings or icons. Second, dreams are hallucinatory in that they are often unusual, either by being about unlikely events or by involving nonsensical objects or borderline categories. People who are two people, places that are both your home and a spaceship. Many dreams could be short stories by Kafka, Borges, Márquez, or some other fabulist. A theory of dreams must explain why every human, even the most unimaginative accountant, has within them a surrealist author scribbling away at night.

To explain the phenomenology of dreams I recently outlined a scientific theory called the Overfitted Brain Hypothesis (OBH). The OBH posits that dreams are an evolved mechanism to avoid a phenomenon called overfitting. Overfitting, a statistical concept, is when a neural network learns overly specifically, and therefore stops being generalizable. It learns too well. For instance, artificial neural networks have a training data set: the data that they learn from. All training sets are finite, and often the data comes from the same source and is highly correlated in some non-obvious way. Because of this, artificial neural networks are in constant danger of becoming overfitted. When a network becomes overfitted, it will be good at dealing with the training data set but will fail at data sets it hasn’t seen before. All learning is basically a tradeoff between specificity and generality in this manner. Real brains, in turn, rely on the training set of lived life. However, that set is limited in many ways, highly correlated in many ways. Life alone is not a sufficient training set for the brain, and relying solely on it likely leads to overfitting.
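
Hoel’s point is easy to reproduce with a toy model. Here’s a minimal sketch of overfitting (my illustration, not from his paper): given ten noisy samples of a sine wave, a degree-9 polynomial memorizes the training set almost perfectly, yet generalizes worse than a modest degree-3 fit on fresh data from the same process.

```python
# A toy demonstration of overfitting (illustrative, not from the paper):
# the flexible model nails its small training set but fails on new data.
import numpy as np

rng = np.random.default_rng(0)

def true_process(x):
    return np.sin(2 * np.pi * x)   # the signal behind the noisy data

x_train = rng.uniform(0, 1, 10)
y_train = true_process(x_train) + rng.normal(0, 0.1, 10)  # "lived life"
x_test = rng.uniform(0, 1, 200)
y_test = true_process(x_test) + rng.normal(0, 0.1, 200)   # unseen data

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```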

Common practices in deep learning, where overfitting is a constant concern, lend support to the OBH. One such practice is “dropout,” in which a portion of the training data or of the network itself is randomly zeroed out, forcing the network to generalize. This is exactly like the sparseness of dreams. Another example is the practice of “domain randomization,” where during training the data is warped and corrupted along particular dimensions, often leading to hallucinatory or fabulist inputs. Other practices include things like feeding the network its own outputs when it’s undergoing random or biased activity.
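
For readers who haven’t met these tricks, here’s a hedged sketch of both in plain NumPy (illustrative only; production versions live in libraries like PyTorch):

```python
# Two standard regularizers, sketched by hand. Names and rates here are
# illustrative, not taken from any particular paper.
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p=0.5):
    # Randomly silence a fraction p of units and rescale the survivors
    # ("inverted dropout"), making each pass through the network sparse.
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

def domain_randomize(x, noise=0.2):
    # Warp the input along a couple of dimensions (random gain plus
    # additive noise) so no single, over-specific version is ever seen.
    gain = rng.uniform(0.7, 1.3)
    return gain * x + rng.normal(0.0, noise, x.shape)

batch = rng.random((2, 6))        # a toy batch of inputs or activations
print(dropout(batch))             # sparse, like the low-resolution dreamworld
print(domain_randomize(batch))    # warped, like the dream's fabulism
```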

What the OBH suggests is that dreams represent the biological version of a combination of such techniques, a form of augmentation or regularization that occurs after the day’s learning—but the point is not to enforce the day’s memories, but rather combat the detrimental effects of their memorization. Dreams warp and play with always-ossifying cognitive and perceptual categories, stress-testing and refining. The inner fabulist shakes up the categories of the plastic brain. The fight against overfitting every night creates a cyclical process of annealing: during wake the brain fits to its environment via learning, then, during sleep, the brain “heats up” through dreams that prevent it from clinging to suboptimal solutions and models and incorrect associations.
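
The annealing metaphor comes from optimization: simulated annealing injects noise (a “temperature”) so a search can escape poor local minima before it cools and settles. A minimal sketch of that algorithm, for reference (my illustration, not part of the OBH paper):

```python
# Simulated annealing on a bumpy 1-D landscape with a shallow local
# minimum and a deeper global one. Illustrative only.
import math, random

random.seed(0)

def loss(x):
    return x**4 - 3 * x**2 + x     # two basins; the deeper one is near x = -1.3

x, temperature = 1.5, 2.0          # start near the shallow basin, "hot"
for _ in range(5000):
    candidate = x + random.gauss(0, 0.3)
    delta = loss(candidate) - loss(x)
    # Always accept improvements; accept worsenings with probability
    # exp(-delta/T), so heat lets the search climb out of suboptimal minima.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature = max(temperature * 0.999, 1e-3)   # gradually cool
print(f"settled at x = {x:.2f}, loss = {loss(x):.2f}")
```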

The OBH fits with the evidence from human sleep research: sleep seems to be associated not so much with assisting pure memorization, as other hypotheses about dreams would posit, but with an increase in abstraction and generalization. There’s also the famous connection between dreams and creativity, which also fits with the OBH. Additionally, if you stay awake too long you will begin to hallucinate (perhaps because your perceptual processes are becoming overfitted). Most importantly, the OBH explains why dreams are so, well, dreamlike.

This connects to another question. Why are we so fascinated by things that never happened?

If the OBH is true, then it is very possible writers and artists, not to mention the entirety of the entertainment industry, are in the business of producing what are essentially consumable, portable, durable dreams. Literally. Novels, movies, TV shows—it is easy for us to suspend our disbelief because we are biologically programmed to surrender it when we sleep.

[…]

Just like dreams, fictions and art keep us from overfitting our perception, models, and understanding of the world.

[…]

There is a sense in which something like the hero myth is actually more true than reality, since it offers a generalizability impossible for any true narrative to possess.

Galton’s disappearance from collective memory would have been surprising to his contemporaries

Tuesday, January 3rd, 2023

Some people get famous for discovering one thing, Adam Mastroianni notes, like Gregor Mendel:

Some people get super famous for discovering several things, like Einstein and Newton.

So surely if one person came up with a ton of different things — say, correlation, standard deviation, regression to the mean, “nature vs. nurture,” questionnaires, twin studies, the wisdom of the crowd, fingerprinting, the first map of Namibia, synesthesia, weather maps, anticyclones, the best method to cut a round cake, and eugenics (yikes) — they’d be super DUPER famous.

But most people have never heard of Sir Francis Galton (1822-1911). Psychologists still use many of the tools he developed, but the textbooks barely mention him. Charles Darwin, Galton’s half-cousin, seems to get a new biography every other year; Galton has had three in a century.

Galton’s disappearance from collective memory would have been surprising to his contemporaries. Karl Pearson (of correlation coefficient fame) thought Galton might ultimately be bigger than Darwin or Mendel:

Twenty years ago, no one would have questioned which was the greater man [...] If Darwinism is to survive the open as well as covert attacks of the Mendelian school, it will only be because in the future a new race of biologists will arise trained up in Galtonian method and able to criticise from that standpoint both Darwinism and Mendelism, for both now transcend any treatment which fails to approach them with adequate mathematical knowledge [...] Darwinism needs the complement of Galtonian method before it can become a demonstrable truth…

So, what happened? How come this dude went from being mentioned in the same breath as Darwin to never being mentioned at all? Psychologists are still happy to talk about the guy who invented “penis envy,” so what did this guy do to get scrubbed from history?

I started reading Galton’s autobiography, Memories of My Life, because I thought it might be full of juicy, embarrassing secrets about the origins of psychology. I’m telling you about it today because it is, and it’s full of so much more. There are adventures in uncharted lands, accidental poisonings, brushes with pandemics, some dabbling in vivisection, self-induced madness, a dash of blood and gore, and some poo humor for the lads. And, ultimately, a chance to wonder whether moral truth exists and how to find it.

Readers of this blog — certainly the ones of proper breeding — will already know what Galton did “wrong” to end up down the memory hole.

I felt a bit embarrassed that I’d never read his biography, but I doubt I’ve ever come across a physical copy.

A young man entering full-time research interested in warfare would find himself stymied at every turn

Friday, December 30th, 2022

Why are archaeologists taking to anonymous online spaces to practice their craft?

In part because we have an inflation of young people, educated to around the postgraduate level, who no longer see a future in the academy, where jobs are almost non-existent, and who are acutely aware of the damage a single remark or online comment can do to a career. But also because we have a university research system that has drifted towards a political position that defies a common sense understanding of human nature and history. A young man entering full-time research interested in warfare, conflict, the origins of different peoples, how borders and boundaries have changed through time, grand narratives of conquest or expansion, would find himself stymied at every turn and regarded with great suspicion. If he didn’t embrace the critical studies fields of postcolonial thought, feminism, gender and queer politics or antiracism, he might find himself shut out from a career altogether. Much easier instead to go online and find the ten other people on Earth who share his interests, who are concerned with what the results mean, rather than their wider current political and social ramifications.

Science has been running an experiment on itself

Thursday, December 29th, 2022

For the last 60 years or so, Adam Mastroianni notes, science has been running an experiment on itself:

The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.

Most of those folks didn’t even realize they were in an experiment. Many of them, including me, weren’t born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”

This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.

(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)

That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.

Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.

The results are in. It failed.

[…]

Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?

It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In this study reviewers caught 30% of the major flaws, in this study they caught 25%, and in this study they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.

In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time.

[…]

When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”

[…]

If you look at what scientists actually do, it’s clear they don’t think peer review really matters.

First: if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal.

[…]

Second: once a paper gets published, we shred the reviews. A few journals publish reviews; most don’t. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place.

And third: scientists take unreviewed work seriously without thinking twice. We read “preprints” and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, “So interesting! I can’t wait for it to be peer reviewed so I can find out if it’s true.”

[…]

Lack of effort isn’t the problem: remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn’t seem to make them any better. Neither does training them.

He got some nasty comments and came up with some reasons why people got so nasty:

First: the third-person effect, which is people’s tendency to think that other people are more susceptible to persuasion than they themselves are. I am a savvy consumer; you are a knucklehead who can be duped into buying Budweiser by a pair of boobs. I evaluate arguments rationally; you listen to whoever is shouting the loudest. I won’t be swayed by a blog post; you will.

[…]

And second: social dominance. Scientists may think they’re egalitarian because they don’t believe in hierarchies based on race, sex, wealth, and so on. But some of them believe very strongly in hierarchy based on prestige. In their eyes, it is right and good for people with more degrees, bigger grants, and fancier academic positions to be above people who have fewer of those things. They don’t even think of this as hierarchy, exactly, because that sounds like a bad word. To them, it’s just the natural order of things.

(To see this in action, watch what happens when two academic scientists meet. The first things they’ll want to know about each other are 1) career stage — grad student, postdoc, professor, etc., and 2) institution. These are the X and Y coordinates that allow you to place someone in the hierarchy: professor at elite institution gets lots of status, grad student at no-name institution gets none. Older-looking graduate students sometimes have the experience of being mistaken for professors, and professors will chat to them amiably until they realize their mistake, at which point they will, horrified, high-tail it out of the conversation.)

People who are all-in on a hierarchy don’t like it when you question its central assumptions. If peer review doesn’t work or is even harmful to science, it suggests the people at the top of the hierarchy might be naked emperors, and that’s upsetting not just to the naked emperors themselves, but also the people who are diligently disrobing in the hopes of becoming one. In fact, it’s more than upsetting — it’s dangerous, because it could tip over a ladder that has many people on it.

Vibrating the water has the effect of “frustrating” the water molecules nearest to the electrodes

Thursday, December 22nd, 2022

“Green hydrogen” is created through electrolysis, which goes much faster, RMIT researchers found, when you apply high-frequency sound waves:

So why does this process work so much better when the RMIT team plays a 10-MHz hybrid sound? Several reasons, according to a research paper just published in the journal Advanced Energy Materials.

Firstly, vibrating the water has the effect of “frustrating” the water molecules nearest to the electrodes, shaking them out of the tetrahedral networks they tend to settle in. This results in more “free” water molecules that can make contact with catalytic sites on the electrodes.

Secondly, since the separate gases collect as bubbles on each electrode, the vibrations shake the bubbles free. That accelerates the electrolysis process, because those bubbles block the electrode’s contact with the water and limit the reaction. The sound also helps by generating hydronium (positively charged water ions), and by creating convection currents that help with mass transfer.

In their experiments, the researchers chose to use electrodes that typically perform pretty poorly. Electrolysis is typically done using rare and expensive platinum or iridium metals and powerfully acidic or basic electrolytes for the best reaction rates, but the RMIT team went with cheaper gold electrodes and an electrolyte with a neutral pH level. As soon as the team turned on the sound vibrations, the current density and reaction rate jumped by a remarkable factor of 14.

So this isn’t a situation where, for a given amount of energy put into an electrolyzer, you get 14 times more hydrogen. It’s a situation where the water gets split into hydrogen and oxygen more quickly and easily. And that does have an impressive effect on the overall efficiency of an electrolyzer. “With our method, we can potentially improve the conversion efficiency leading to a net-positive energy saving of 27%,” said Professor Leslie Yeo, one of the lead researchers.
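
The reason a 14-fold jump in current density translates directly into a 14-fold reaction rate is Faraday’s law of electrolysis: hydrogen output is proportional to the charge passed through the cell. A back-of-the-envelope sketch (my arithmetic, not the paper’s):

```python
# Faraday's law: moles of H2 = I * t / (2 * F), since each H2 molecule
# takes two electrons. Hydrogen output scales linearly with current.
F = 96485.0   # Faraday constant, coulombs per mole of electrons

def h2_liters_per_hour(current_amps, liters_per_mole=24.5):
    # liters_per_mole approximates the molar volume of a gas at 25 C.
    moles_h2_per_hour = current_amps * 3600.0 / (2.0 * F)
    return moles_h2_per_hour * liters_per_mole

for amps in (1.0, 14.0):
    print(f"{amps:>4} A -> {h2_liters_per_hour(amps):.2f} L of H2 per hour")
```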

All the twin pairs came in for physical examinations, and the results were pretty much what you’d expect

Wednesday, December 21st, 2022

Researchers in Finland looked at 17 pairs of identical twins who didn’t have similar exercise habits:

The first thing to note is just how unusual such twin pairs are. The twins in the study were drawn from two previous Finnish twin studies that included thousands of pairs of identical twins. The vast majority of them had similar levels of physical activity. The High Runner mouse line that’s often used in lab studies took mice that loved to run, bred them with each other, and produced mice that love to run even more. I’d like to think that human behavior (and mating patterns) are a little more complex than that, but the twin data certainly suggests that our genes influence our predilection for movement.

Still, they found these 17 pairs whose paths had diverged. There were two different subgroups: young twins in their thirties whose exercise habits had diverged for at least three years, and older twins in their fifties to seventies whose habits had diverged for at least 30 years. On average, the exercising twins got about three times as much physical activity, including active commuting, as the non-exercising ones: 6.1 MET-hours per day compared to 2.0 MET-hours per day. For context, running at a ten-minute-mile pace for half an hour consumes about 5 MET-hours.
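
For context on the unit, a MET-hour is just metabolic intensity (in METs, multiples of resting metabolic rate) multiplied by time, so the gap between the twins is easy to translate (my arithmetic, assuming the standard ~10-MET value for that running pace):

```python
# MET-hours = intensity (METs) x duration (hours).
def met_hours(mets, hours):
    return mets * hours

print(met_hours(10, 0.5))   # a half-hour run at ~10 METs: 5.0 MET-hours
print(6.1 - 2.0)            # the twins' daily gap: 4.1 MET-hours, i.e.
                            # roughly a 25-minute run's worth per day
```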

All the twin pairs came in for physical examinations, and the results were pretty much what you’d expect. The exercising twins had higher VO2 max (38.6 vs. 33.0 ml/kg/min), smaller waist circumference (34.8 vs. 36.3 inches), lower body fat (19.7 vs. 22.6 percent), significantly less abdominal fat and liver fat, and so on.

[…]

A 2018 case study from researchers at California State University Fullerton looked at a single identical twin pair, then aged 52. One was a marathoner and triathlete who had logged almost 40,000 miles of running between 1993 and 2015. The other was a truck driver who didn’t exercise. In this case, the exercising twin weighed 22 pounds less, and his resting heart rate was 30 percent lower. Most fascinatingly, muscle biopsies showed that the marathoner had 94 percent slow-twitch fibers while the truck driver had just 40 percent slow-twitch. No one before or since (as far as I know) has shown such a dramatic change in muscle properties.

Sprint speed starts declining after your 20s

Saturday, December 17th, 2022

Alex Hutchinson explains how to hold on to your sprint speed as you age:

Many of the challenges of daily living, once you hit your 70s and 80s and beyond, are essentially tests of all-out power rather than sustained endurance (though both are important).

The problem is that sprint speed starts declining after your 20s, and most endurance athletes have no clue how to preserve it.

[…]

Older sprinters take shorter steps and their feet spend longer in contact with the ground, presumably because they’re less able to generate explosive force with each step. That’s consistent with the finding that older sprinters have less muscle, and in particular less fast-twitch muscle, than younger sprinters.

But it’s not just a question of how much muscle you’ve got. In fact, some studies suggest that you lose strength more rapidly than you lose muscle, which means that the quality of your remaining muscle is reduced. There are a bunch of different reasons for muscle quality to decline, including the properties of the muscle fibers themselves, but the most interesting culprit is the neuromuscular system: the signals from brain to muscle get garbled.

[…]

The authors cover their bases by recommending that your resistance training routine should include workouts that aim to build muscle size (e.g. three sets of ten reps at 70 percent of one-rep max); workouts that aim to build strength (e.g. two to four sets of four to six reps at 85 percent of max); and workouts to build power (e.g. three sets of three to ten reps at 35 to 60 percent of max).

[…]

The authors suggest training to improve coordination through exercises that challenge balance, stability, and reflexes, such as single-leg balance drills. One advantage of this type of training: it’s not as draining as typical “reps to failure” strength workouts, so it may provide more bang for your buck if you can’t handle as many intense workouts as you used to.

[…]

On that note, the standard advice that veteran athletes give you when you hit your 40s is that you can no longer recover as quickly. Strangely, the authors point out, the relatively sparse data on this question doesn’t find any differences in physiological markers of post-workout recovery between younger and older athletes. The main difference is that older athletes feel less recovered—and in this case, it’s probably worth assuming that those feelings represent some kind of reality, even if we don’t know how to measure it.

Inertial confinement fusion involves bombarding a tiny pellet of hydrogen plasma with the world’s biggest laser

Monday, December 12th, 2022

The federal Lawrence Livermore National Laboratory in California achieved net energy gain in a fusion experiment, using a process called inertial confinement fusion that involves bombarding a tiny pellet of hydrogen plasma with the world’s biggest laser:

The fusion reaction at the US government facility produced about 2.5 megajoules of energy, which was about 120 per cent of the 2.1 megajoules of energy in the lasers, the people with knowledge of the results said, adding that the data was still being analysed.

The $3.5bn National Ignition Facility was primarily designed to test nuclear weapons by simulating explosions but has since been used to advance fusion energy research. It came the closest in the world to net energy gain last year when it produced 1.37 megajoules from a fusion reaction, which was about 70 per cent of the energy in the lasers on that occasion.
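
Those percentages are just the ratio of fusion yield to laser energy delivered to the target. A quick check (my arithmetic, assuming the widely reported 1.9 MJ of laser energy for the 2021 shot):

```python
# Target gain = fusion energy out / laser energy in at the target.
# (It ignores the far larger wall-plug energy, on the order of 300 MJ,
# needed to fire NIF's lasers, so "net gain" is a narrow claim here.)
def target_gain(fusion_mj, laser_mj):
    return fusion_mj / laser_mj

print(f"2022 shot: {target_gain(2.5, 2.1):.0%}")    # about 120 per cent
print(f"2021 shot: {target_gain(1.37, 1.9):.0%}")   # about 70 per cent
```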

Among the subjects was 17-year-old Ted Kaczynski

Monday, November 28th, 2022

I remember first finding out about the Unabomber in 1995 and being shocked that I hadn’t heard about a real-life mad-scientist supervillain mysteriously blowing up professors and industrialists.

I recently watched Unabomber: In His Own Words — in which Ted Kaczynski sounds like a bitter nerd, not Doctor Doom — and learned that his origin story involves another character who could have come out of a pulp novel, one Henry Murray:

During World War II, he left Harvard and worked as a lieutenant colonel for the Office of Strategic Services (OSS). James Miller, who was in charge of the selection of secret agents at the OSS during World War II, said the situation test was used by the British War Officer Selection Board and the OSS to assess potential agents.

In 1943 Murray helped complete Analysis of the Personality of Adolph Hitler, commissioned by OSS boss Gen. William “Wild Bill” Donovan. The report was done in collaboration with psychoanalyst Walter C. Langer, Ernst Kris, New School for Social Research, and Bertram D. Lewin, New York Psychoanalytic Institute. The report used many sources to profile Hitler, including informants such as Ernst Hanfstaengl, Hermann Rauschning, Princess Stephanie von Hohenlohe, Gregor Strasser, Friedelind Wagner, and Kurt Ludecke. The groundbreaking study was the pioneer of offender profiling and political psychology. In addition to predicting that Hitler would choose suicide if defeat for Germany was near, Murray’s collaborative report stated that Hitler was impotent as far as heterosexual relations were concerned and that there was a possibility that Hitler had participated in a homosexual relationship. The report stated: “The belief that Hitler is homosexual has probably developed (a) from the fact that he does show so many feminine characteristics, and (b) from the fact that there were so many homosexuals in the Party during the early days and many continue to occupy important positions. It is probably true that Hitler calls Albert Forster ‘Bubi’, which is a common nickname employed by homosexuals in addressing their partners.”

In 1947, he returned to Harvard as a chief researcher, lectured and established with others the Psychological Clinic Annex.

From late 1959 to early 1962, Murray was responsible for unethical experiments in which he used twenty-two Harvard undergraduates as research subjects. Among other goals, the experiments sought to measure individuals’ responses to extreme stress. The unwitting undergraduates were subjected to what Murray called “vehement, sweeping and personally abusive” attacks. Assaults specifically tailored to their egos, cherished ideas, and beliefs were used to cause high levels of stress and distress. The subjects then repeatedly viewed recorded footage of their reactions to this verbal abuse.

Among the subjects was 17-year-old Ted Kaczynski, a mathematician who went on to be known as the ‘Unabomber’, a domestic terrorist who targeted academics and technologists for 17 years. Alston Chase’s book Harvard and the Unabomber: The Education of an American Terrorist connects Kaczynski’s abusive experiences under Murray to his later criminal career.

In 1960, Timothy Leary started research in psychedelic drugs at Harvard, which Murray is said to have supervised.

Some sources have suggested that Murray’s experiments were part of, or indemnified by, the US Government’s research into mind control known as the MKUltra project.

Mechanochemical breakthrough unlocks cheap, safe, powdered gases

Wednesday, November 23rd, 2022

Nanotechnology researchers based at Deakin University’s Institute for Frontier Materials claim to have found a super-efficient way to mechanochemically trap and hold gases in powders, which could radically reduce energy use in the petrochemical industry, while making hydrogen much easier and safer to store and transport:

Mechanochemistry is a relatively recently coined term, referring to chemical reactions that are triggered by mechanical forces as opposed to heat, light, or electric potential differences. In this case, the mechanical force is supplied by ball milling – a low-energy grinding process in which a cylinder containing steel balls is rotated such that the balls roll up the side, then drop back down again, crushing and rolling over the material inside.

The team has demonstrated that grinding certain amounts of certain powders with precise pressure levels of certain gases can trigger a mechanochemical reaction that absorbs the gas into the powder and stores it there, giving you what’s essentially a solid-state storage medium that can hold the gases safely at room temperature until they’re needed. The gases can be released as required, by heating the powder up to a certain point.

[…]

This process, for example, could separate hydrocarbon gases out from crude oil using less than 10% of the energy that’s needed today. “Currently, the petrol industry uses a cryogenic process,” says Chen. “Several gases come up together, so to purify and separate them, they cool everything down to a liquid state at very low temperature, and then heat it all together. Different gases evaporate at different temperatures, and that’s how they separate them out.”

[…]

“The energy consumed by a 20-hour milling process is US$0.32,” reads the paper. “The ball-milling gas adsorption process is estimated to consume 76.8 kJ/s to separate 1,000 liters (220 gal) of olefin/paraffin mixture, which is two orders less than that of the cryogenic distillation process.”

[…]

Chen tells us the powder can store a hydrogen weight percentage of around 6.5%. “Every one gram of material will store about 0.065 grams of hydrogen,” he says. “That’s already above the 5% target set by the US Department of Energy. And in terms of volume, for every one gram of powder, we wish to store around 50 liters (13.2 gal) of hydrogen in there.”

Indeed, should the team prove these numbers, they’d represent an instant doubling of the best current solid-state hydrogen storage mass fractions, which, according to Air Liquide, can only manage 2-3%.

The sort of Life Support System required to nourish a generation ship to fly through space for millennia is beyond our current capabilities

Monday, November 21st, 2022

No life support system miracles are required to keep humans alive on Mars in the near future, Casey Handmer argues:

A common criticism of ambitious space exploration plans, such as building cities on Mars, is that life support systems (LSS) are inadequate to keep humans alive, ergo the whole idea is pointless. As an example, the space shuttle LSS could operate for about two weeks. The ISS LSS operates indefinitely but requires regular replenishment of stores launched from Earth, and regular and intense maintenance. Finally, all closed loop LSS, both conceptual and built, are incredibly complex pieces of machinery, and complexity tends to be at odds with reliability. The general consensus is that the sort of LSS required to nourish a generation ship to fly through space for millennia is beyond our current capabilities.

No matter how big the rocket, supplies launched to Mars are finite and will eventually be exhausted. These supplies include both bulk materials like oxygen or nitrogen, and replacement parts for machinery. This doesn’t bode well. Indeed, much of the dramatic tension in The Martian is due precisely to the challenges of getting a NASA-quality LSS to keep someone alive for much longer than originally intended.

[…]

On Earth, we breathe a mixture of nitrogen and oxygen, with bits of argon, water vapor, CO2, and other stuff mixed in. The LSS has to scrub CO2, regenerate oxygen, condense water vapor evaporated by our moist lungs, and filter out contaminants that are toxic, such as ozone and hydrazine.

With breathing gas sorted out, humans also drink water, consume food, and excrete waste. For extended habitation, these needs also need to be addressed by the LSS.

On Earth, these various elemental and chemical cycles are produced, and buffered by, the immensely large natural environment. I don’t think anyone thinks that a compact biological regeneration system is adequate to meet the needs of a growing city on Mars. Biosphere 2 had a really good go at this and failed for a variety of reasons. One major one was complexity. If the LSS depends on the good will of tonnes of microbes, most of which are undescribed by science, it is very easy to have a bad day.

The alternative is a physical/chemical system. Much simpler, it employs a glorified air conditioning system to process the air and recycle/sanitize waste products. Something like this exists on every spacecraft, and submarine, ever built. The difficulty arises when a simple, robust machine that is 90% efficient is asked to perform at 99.999% efficiency.
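
The jump from 90 percent to 99.999 percent matters because losses compound: each additional “nine” of loop closure makes a fixed reserve last roughly ten times longer. A hedged sketch with illustrative numbers (about 0.84 kg of oxygen per person per day is a standard figure; the 100 kg reserve is my own placeholder):

```python
# How long a fixed store of a consumable lasts when the recycling loop
# loses a fraction (1 - efficiency) of each day's use.
def days_until_empty(store_kg, daily_use_kg, efficiency):
    daily_makeup = daily_use_kg * (1.0 - efficiency)
    return store_kg / daily_makeup

for eff in (0.90, 0.99, 0.999, 0.99999):
    days = days_until_empty(100.0, 0.84, eff)
    print(f"{eff:.5f} closed: {days:>12,.0f} days per person")
```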

[…]

Once on the surface, there is an entire planet of atoms ready to harvest. Rocky planets such as the Earth or Mars are, to a physicist, a giant pile of iron atoms encapsulated by a giant pile of oxygen atoms, with other stuff in the gaps. Nearly all rocks, plus water, contain more oxygen than any other element. The Moon and Mars have a lot of water if one knows where to look. Nitrogen is another issue but does exist in the Martian atmosphere. The upshot is that the LSS on Mars doesn’t have to be closed loop. It can depend on constant air mining or environmental extraction to make up for losses, leaks, and inefficiencies. The machinery can be relatively simple, robust, and easy to maintain. The ISS LSS is, after all, 1980s technology at best.

American geneticists now face an even more drastic form of censorship

Thursday, October 27th, 2022

A policy of deliberate ignorance has corrupted top scientific institutions in the West, James Lee suggests:

It’s been an open secret for years that prestigious journals will often reject submissions that offend prevailing political orthodoxies — especially if they involve controversial aspects of human biology and behavior — no matter how scientifically sound the work might be. The leading journal Nature Human Behaviour recently made this practice official in an editorial effectively announcing that it will not publish studies that show the wrong kind of differences between human groups.

American geneticists now face an even more drastic form of censorship: exclusion from access to the data necessary to conduct analyses, let alone publish results. Case in point: the National Institutes of Health now withholds access to an important database if it thinks a scientist’s research may wander into forbidden territory. The source at issue, the Database of Genotypes and Phenotypes (dbGaP), is an exceptional tool, combining genome scans of several million individuals with extensive data about health, education, occupation, and income. It is indispensable for research on how genes and environments combine to affect human traits. No other widely accessible American database comes close in terms of scientific utility.

My colleagues at other universities and I have run into problems involving applications to study the relationships among intelligence, education, and health outcomes. Sometimes, NIH denies access to some of the attributes that I have just mentioned, on the grounds that studying their genetic basis is “stigmatizing.” Sometimes, it demands updates about ongoing research, with the implied threat that it could withdraw usage if it doesn’t receive satisfactory answers. In some cases, NIH has retroactively withdrawn access for research it had previously approved.

Note that none of the studies I am referring to include inquiries into race or sex differences. Apparently, NIH is clamping down on a broad range of attempts to explore the relationship between genetics and intelligence.

This machine-feeding regimen was just about as close as one can get to a diet with zero reward value and zero variety

Monday, October 24th, 2022

In The Hungry Brain, neuroscientist Stephan Guyenet references a 1965 study in which volunteers received all their food from a “feeding machine” that pumped a “liquid formula diet” through a “dispensing syringe-type pump which delivers a predetermined volume of formula through the mouthpiece.”

What happens to food intake and adiposity when researchers dramatically restrict food reward? In 1965, the Annals of the New York Academy of Sciences published a very unusual study that unintentionally addressed this question. …

The “system” in question was a machine that dispensed liquid food through a straw at the press of a button—7.4 milliliters per press, to be exact (see figure 15). Volunteers were given access to the machine and allowed to consume as much of the liquid diet as they wanted, but no other food. Since they were in a hospital setting, the researchers could be confident that the volunteers ate nothing else. The liquid food supplied adequate levels of all nutrients, yet it was bland, completely lacking in variety, and almost totally devoid of all normal food cues.

[…]

The researchers first fed two lean people using the machine—one for sixteen days and the other for nine. Without requiring any guidance, both lean volunteers consumed their typical calorie intake and maintained a stable weight during this period.

Next, the researchers did the same experiment with two “grossly obese” volunteers weighing approximately four hundred pounds. Again, they were asked to “obtain food from the machine whenever hungry.” Over the course of the first eighteen days, the first (male) volunteer consumed a meager 275 calories per day—less than 10 percent of his usual calorie intake. The second (female) volunteer consumed a ridiculously low 144 calories per day over the course of twelve days, losing twenty-three pounds. The investigators remarked that an additional three volunteers with obesity “showed a similar inhibition of calorie intake when fed by machine.”

The first volunteer continued eating bland food from the machine for a total of seventy days, losing approximately seventy pounds. After that, he was sent home with the formula and instructed to drink 400 calories of it per day, which he did for an additional 185 days, after which he had lost two hundred pounds — precisely half his body weight. The researchers remarked that “during all this time weight was steadily lost and the patient never complained of hunger.” This is truly a starvation-level calorie intake, and to eat it continuously for 255 days without hunger suggests that something rather interesting was happening in this man’s body. Further studies from the same group and others supported the idea that a bland liquid diet leads people to eat fewer calories and lose excess fat.

This machine-feeding regimen was just about as close as one can get to a diet with zero reward value and zero variety. Although the food contained sugar, fat, and protein, it contained little odor or texture with which to associate them. In people with obesity, this diet caused an impressive spontaneous reduction of calorie intake and rapid fat loss, without hunger. Yet, strangely, lean people maintained weight on this regimen rather than becoming underweight. This suggests that people with obesity may be more sensitive to the impact of food reward on calorie intake.

Environmental contamination by artificial, human-synthesized compounds fits this picture very well, and no other account does

Sunday, October 23rd, 2022

Only one theory can account for all of the available evidence about the obesity epidemic: it is caused by one or more environmental contaminants:

We know that this is biologically plausible because there are many compounds that reliably cause people to gain weight, sometimes a lot of weight.

[…]

We need a theory that can account for all of the mysteries we reviewed earlier. Another way to put this is to say that, based on the evidence, we’re looking for a factor that:

  1. Changed over the last hundred years
  2. With a major shift around 1980
  3. And whatever it is, there is more of it every year
  4. It doesn’t affect people living nonindustrialized lives, regardless of diet
  5. But it does affect lab animals, wild animals, and animals living in zoos
  6. It has something to do with palatable human snackfoods, unrelated to nutritional value
  7. It differs in its intensity by altitude for some reason
  8. And it appears to have nothing to do with our diets

Environmental contamination by artificial, human-synthesized compounds fits this picture very well, and no other account does.

We see a similar pattern of results in humans

Saturday, October 22nd, 2022

It used to be that if researchers needed obese rats for a study, they would just add fat to normal rodent chow, but it turns out that it takes a long time for rats to become obese on this diet:

A breakthrough occurred one day when a graduate student happened to put a rat onto a bench where another student had left a half-finished bowl of Froot Loops. Rats are usually cautious around new foods, but in this case the rat wandered over and began scarfing down the brightly-colored cereal. The graduate student was inspired to try putting the rats on a diet of “palatable supermarket food”; not only Froot Loops, but foods like Doritos, pork rinds, and wedding cake. Today, researchers call these “cafeteria diets”.

Sure enough, on this diet the rats gained weight at unprecedented speed. All this despite the fact that the high-fat and cafeteria diets have similar nutritional profiles, including very similar fat/kcal percentages, around 45%. In both diets, rats were allowed to eat as much as they wanted. When you give a rat a high-fat diet, it eats the right amount and then stops eating, and maintains a healthy weight. But when you give a rat the cafeteria diet, it just keeps eating, and quickly becomes overweight. Something is making them eat more. “Palatable human food is the most effective way to cause a normal rat to spontaneously overeat and become obese,” says neuroscientist Stephan Guyenet in The Hungry Brain, “and its fattening effect cannot be attributed solely to its fat or sugar content.”

Rodents eating diets that are only high in fat or only high in carbohydrates don’t gain nearly as much weight as rodents eating the cafeteria diet. And this isn’t limited to lab rats. Raccoons and monkeys quickly grow fat on human food as well.

We see a similar pattern of results in humans.