The important question in a political dispute is not “who is right?”

Thursday, August 15th, 2019

The number of anonymous Twitter handles worth reading can be counted on two hands, T. Greer says. One of them, @itrulyknowchina, shared this tweetstorm about the Chinese reaction to Hong Kong:

The overwhelming majority of Chinese mainlanders, including or especially the educated, comparatively liberal ones, have lost their brains on the issue of Hong Kong — genuinely buying into whatever the Party has been selling. And this makes me really frightened.

Many bought into the foreign incitement bullshit. What kind of foreign “black hand” can whip two million people onto the street on a single day and keep tens of thousands on the streets week after week? It’s just bullshit.

Plus, the “black hand” theory is so looking down on HK citizens — are they that stupid to be manipulated by a few “black hands”? What can drive these HK citizens except their own grievances and discontent?

There are so many bullshit theories that I just don’t want to go through one by one. Bottom line is the overwhelming majority of Chinese mainlanders including the elite ones have been brainwashed so thoroughly that they don’t have any critical thinking capabilities left on them.

They can’t tell black from white. They can’t tell right from wrong. And they don’t know what is good for Hong Kong and perhaps most importantly what is good for China (even within its most narrow definition) in the long run.

This phenomenon, namely that the hearts and minds of the overwhelming majority of Chinese mainlanders are under the fingertips and easily manipulated by the Party, is gonna have far reaching repercussions for China and the world in the long run.

Beijing is gonna feel ever emboldened, having been reassured by the “patriotism” it has seen on HK issue. It will therefore act more toughly and recklessly on external affairs. Nations across the world will find — have already found — China adopting a much tougher stance.

China doesn’t have checks and balances built into internal politics, so one of the few little things that could vaguely check Beijing’s hand is the elusive collective “feeling” of its citizens. If Beijing is confident in manipulating public opinion, it fears nothing (not even USA).

It is not accurate or especially helpful to chalk Chinese beliefs about Hong Kong up to state propaganda, T. Greer argues:

If it was all a matter of propaganda and censorship, then the whole thing could be resolved by exposing Chinese to the truth. There are obvious snags here. Take those Chinese students in New Zealand and Australia that attacked the pro-Hong Kong marchers. They have escaped the Chinese censorship machine. Are they any better off for it? They are exposed — quite directly — to opposing narratives. Have they been moderated by it?

Censorship is the wrong lens through which to view this issue.

American readers, an intellectual exercise: think for about thirty seconds about your partisan opposites. In that thirty seconds, tally up as many of the crazy, unconscionable, obviously false things commonly believed by the other side’s rank and file as you can.

Now: reflect on the American Great Fire Wall — oh, that is right, we do not have one. We are free to read whatever views we will. You cannot live in our country and not eventually come across arguments from the other side.

So why do so many Americans believe stupid things?

We know the answer to this query. I have written about it before. Hugo Mercier and Dan Sperber have written a superb book about it. Moshe Hoffman’s twitter feed (one of that service’s few other gems) is a daily exploration of it. Humans do not reason to find truth. Reasoning and rhetoric were useful adaptations in mankind’s evolutionary past because reason and rhetoric help us build coalitions. We argue to win. The telos of reason is victory. Every other application is a fortunate accident.

The important question in a political dispute is not “who is right?” but “who is on our side?”

He favors a motorcentric view of the brain

Wednesday, August 14th, 2019

Neuroscientist Shane O’Mara has written an entire book In Praise of Walking:

He favours what he calls a “motor-centric” view of the brain — that it evolved to support movement and, therefore, if we stop moving about, it won’t work as well.

This is neatly illustrated by the life cycle of the humble sea squirt which, in its adult form, is a marine invertebrate found clinging to rocks or boat hulls. It has no brain because it has eaten it. During its larval stage, it had a backbone, a single eye and a basic brain to enable it to swim about hunting like “a small, water-dwelling, vertebrate cyclops”, as O’Mara puts it. The larval sea squirt knew when it was hungry and how to move about, and it could tell up from down. But, when it fused on to a rock to start its new vegetative existence, it consumed its redundant eye, brain and spinal cord. Certain species of jellyfish, conversely, start out as brainless polyps on rocks, only developing complicated nerves that might be considered semi-brains as they become swimmers.


“Our sensory systems work at their best when they’re moving about the world,” says O’Mara. He cites a 2018 study that tracked participants’ activity levels and personality traits over 20 years, and found that those who moved the least showed malign personality changes, scoring lower in the positive traits: openness, extraversion and agreeableness. There is substantial data showing that walkers have lower rates of depression, too. And we know, says O’Mara, “from the scientific literature, that getting people to engage in physical activity before they engage in a creative act is very powerful. My notion — and we need to test this — is that the activation that occurs across the whole of the brain during problem-solving becomes much greater almost as an accident of walking demanding lots of neural resources.”

O’Mara’s enthusiasm for walking ties in with both of his main interests as a professor of experimental brain research: stress, depression and anxiety; and learning, memory and cognition. “It turns out that the brain systems that support learning, memory and cognition are the same ones that are very badly affected by stress and depression,” he says. “And by a quirk of evolution, these brain systems also support functions such as cognitive mapping,” by which he means our internal GPS system. But these aren’t the only overlaps between movement and mental and cognitive health that neuroscience has identified.

I witnessed the brain-healing effects of walking when my partner was recovering from an acute brain injury. His mind was often unsettled, but during our evening strolls through east London, things started to make more sense and conversation flowed easily. O’Mara nods knowingly. “You’re walking rhythmically together,” he says, “and there are all sorts of rhythms happening in the brain as a result of engaging in that kind of activity, and they’re absent when you’re sitting. One of the great overlooked superpowers we have is that, when we get up and walk, our senses are sharpened. Rhythms that would previously be quiet suddenly come to life, and the way our brain interacts with our body changes.”

From the scant data available on walking and brain injury, says O’Mara, “it is reasonable to surmise that supervised walking may help with acquired brain injury, depending on the nature, type and extent of injury — perhaps by promoting blood flow, and perhaps also through the effect of entraining various electrical rhythms in the brain. And perhaps by engaging in systematic dual tasking, such as talking and walking.”

One such rhythm, he says, is that of theta brainwaves. Theta is a pulse or frequency (seven to eight hertz, to be precise) which, says O’Mara, “you can detect all over the brain during the course of movement, and it has all sorts of wonderful effects in terms of assisting learning and memory, and those kinds of things”. Theta cranks up when we move around because it is needed for spatial learning, and O’Mara suspects that walking is the best movement for such learning. “The timescales that walking affords us are the ones we evolved with,” he writes, “and in which information pickup from the environment most easily occurs.”

Essential brain-nourishing molecules are produced by aerobically demanding activity, too. You’ll get raised levels of brain-derived neurotrophic factor (BDNF) which, writes O’Mara, “could be thought of as a kind of a molecular fertiliser produced within the brain because it supports structural remodelling and growth of synapses after learning … BDNF increases resilience to ageing, and damage caused by trauma or infection.” Then there’s vascular endothelial growth factor (VEGF), which helps to grow the network of blood vessels carrying oxygen and nutrients to brain cells.

Jon Peterson discusses the birth of wargaming

Tuesday, August 13th, 2019

I recently shared Invicta’s video, How did war become a game?, and now it looks like the show has brought on Jon Peterson, author of Playing at the World, to do a Q&A, since his book was the primary source for the original piece:

Jon Peterson also co-wrote Art & Arcana: A Visual History of D&D.

Stack your attackers

Monday, August 12th, 2019

About 40% of violent criminal attacks involve more than one attacker, Greg Ellifritz warns:

I’m seeing lots of recent news articles where groups of teens attack individuals and couples. The teens often beat the victims into unconsciousness. Take a look at these news articles that have been posted in the last couple weeks.

All of these events involved groups of three to eight criminals attacking a single person or a couple.  These group attacks seem to be increasing in frequency.

His advice:

  1. The best way to win the fight is to avoid it.
  2. Multiple attackers are more dangerous to you.
  3. Whenever possible, try to “stack” your attackers.
  4. If you end up grappling with one of your attackers, use him as a shield to keep between you and the other attackers.
  5. Chokes are important.
  6. Don’t go to the ground.
  7. If you can’t escape, stack your attackers, or manipulate one to be a shield, you must attack.

Netflix is spending hundreds of millions of dollars to produce big-budget films

Sunday, August 11th, 2019

Netflix is spending hundreds of millions of dollars to produce big-budget films:

Earlier this month, Netflix agreed to spend nearly $200 million to make the Dwayne Johnson action movie “Red Notice,” which will be filmed next year at exotic locations and also stars Ryan Reynolds and Gal Gadot, the people said. In addition, a person familiar with the matter said, Netflix plans to release later this year “6 Underground,” a Michael Bay-directed action film that is costing about $150 million, and Martin Scorsese’s “The Irishman.”

The latter film might be the company’s riskiest bet. “The Irishman,” a historical drama likely to appeal only to adults interested in serious subject matter, costs as much as some all-ages action-adventure movies because of cutting-edge visual effects that allow stars including Robert De Niro, Al Pacino and Joe Pesci to appear at different ages. People close to the picture said Netflix’s total commitment is at least $173 million, with some estimates putting it above $200 million, making “The Irishman” the most expensive adult drama in recent history.

Netflix has previously said about one-third of its total viewing is movies, rather than television series.


Netflix has been picking up many film projects Hollywood studios didn’t see as commercially viable at the box office, at least at the same budgets. Recent examples include Sandra Bullock’s post-apocalyptic movie “Bird Box” and the jungle-heist flick “Triple Frontier,” starring Ben Affleck. Neither was a standout with critics, but “Bird Box” drew 80 million viewers during its first month and “Triple Frontier” has been watched 63 million times since its March release, the company said, making them Netflix’s first and fifth most popular original films, respectively.

Netflix bought the rights to “The Irishman” after major studios passed because of concerns that it was too expensive for a drama, a genre that has struggled at the box office in recent years. The producers were in the midst of raising independent funds to make the film when Netflix entered. “Without Netflix, ‘Irishman’ would not have been made,” said one of the people close to the movie. “I just don’t see [other] studios wanting to dive into these projects any more. I think they are staying away from the riskier, more mature films, especially dramas.”

The Duffer Brothers explain every major movie reference in Stranger Things

Saturday, August 10th, 2019

The Duffer Brothers explain every major movie reference in Stranger Things:

Thank God for the Atom Bomb

Friday, August 9th, 2019

Thank God for the Atom Bomb, Paul Fussell said:

I bring up the matter because, writing on the forty-second anniversary of the atom-bombing of Hiroshima and Nagasaki, I want to consider something suggested by the long debate about the ethics, if any, of that ghastly affair. Namely, the importance of experience, sheer, vulgar experience, in influencing, if not determining, one’s views about that use of the atom bomb.

The experience I’m talking about is having to come to grips, face to face, with an enemy who designs your death. The experience is common to those in the marines and the infantry and even the line navy, to those, in short, who fought the Second World War mindful always that their mission was, as they were repeatedly assured, “to close with the enemy and destroy him.” Destroy, notice: not hurt, frighten, drive away, or capture. I think there’s something to be learned about that war, as well as about the tendency of historical memory unwittingly to resolve ambiguity and generally clean up the premises, by considering the way testimonies emanating from real war experience tend to complicate attitudes about the most cruel ending of that most cruel war.

“What did you do in the Great War, Daddy?” The recruiting poster deserves ridicule and contempt, of course, but here its question is embarrassingly relevant, and the problem is one that touches on the dirty little secret of social class in America. Arthur T. Hadley said recently that those for whom the use of the A-bomb was “wrong” seem to be implying “that it would have been better to allow thousands on thousands of American and Japanese infantrymen to die in honest hand-to-hand combat on the beaches than to drop those two bombs.” People holding such views, he notes, “do not come from the ranks of society that produce infantrymen or pilots.” And there’s an eloquence problem: most of those with firsthand experience of the war at its worst were not elaborately educated people. Relatively inarticulate, most have remained silent about what they know. That is, few of those destined to be blown to pieces if the main Japanese islands had been invaded went on to become our most effective men of letters or impressive ethical theorists or professors of contemporary history or of international law. The testimony of experience has tended to come from rough diamonds — James Jones’ is an example — who went through the war as enlisted men in the infantry or the Marine Corps.

Anticipating objections from those without such experience, in his book WWII Jones carefully prepares for his chapter on the A-bombs by detailing the plans already in motion for the infantry assaults on the home islands of Kyushu (thirteen divisions scheduled to land in November 1945) and ultimately Honshu (sixteen divisions scheduled for March 1946). Planners of the invasion assumed that it would require a full year, to November 1946, for the Japanese to be sufficiently worn down by land-combat attrition to surrender. By that time, one million American casualties was the expected price. Jones observes that the forthcoming invasion of Kyushu “was well into its collecting and stockpiling stages before the war ended.” (The island of Saipan was designated a main ammunition and supply base for the invasion, and if you go there today you can see some of the assembled stuff still sitting there.) “The assault troops were chosen and already in training,” Jones reminds his readers, and he illuminates by the light of experience what this meant:

What it must have been like to some old-timer buck sergeant or staff sergeant who had been through Guadalcanal or Bougainville or the Philippines, to stand on some beach and watch this huge war machine beginning to stir and move all around him and know that he very likely had survived this far only to fall dead on the dirt of Japan’s home islands, hardly bears thinking about.

Another bright enlisted man, this one an experienced marine destined for the assault on Honshu, adds his testimony. Former Pfc. E. B. Sledge, author of the splendid memoir With the Old Breed at Peleliu and Okinawa, noticed at the time that the fighting grew “more vicious the closer we got to Japan,” with the carnage of Iwo Jima and Okinawa worse than what had gone before. He points out that

what we had experienced [my emphasis] in fighting the Japs (pardon the expression) on Peleliu and Okinawa caused us to formulate some very definite opinions that the invasion… would be a ghastly bloodletting. It would shock the American public and the world. [Every Japanese] soldier, civilian, woman, and child would fight to the death with whatever weapons they had, rifle, grenade, or bamboo spear.

The Japanese pre-invasion patriotic song, “One Hundred Million Souls for the Emperor,” says Sledge, “meant just that.” Universal national kamikaze was the point. One kamikaze pilot, discouraged by his unit’s failure to impede the Americans very much despite the bizarre casualties it caused, wrote before diving his plane onto an American ship “I see the war situation becoming more desperate. All Japanese must become soldiers and die for the Emperor.” Sledge’s First Marine Division was to land close to the Yokosuka Naval Base, “one of the most heavily defended sectors of the island.” The marines were told, he recalls, that

due to the strong beach defenses, caves, tunnels, and numerous Jap suicide torpedo boats and manned mines, few Marines in the first five assault waves would get ashore alive — my company was scheduled to be in the first and second waves. The veterans in the outfit felt we had already run out of luck anyway…. We viewed the invasion with complete resignation that we would be killed — either on the beach or inland.

And the invasion was going to take place: there’s no question about that. It was not theoretical or merely rumored in order to scare the Japanese. By July 10, 1945, the prelanding naval and aerial bombardment of the coast had begun, and the battleships Iowa, Missouri, Wisconsin, and King George V were steaming up and down the coast, softening it up with their sixteen-inch shells.

On the other hand, John Kenneth Galbraith is persuaded that the Japanese would have surrendered surely by November without an invasion. He thinks the A-bombs were unnecessary and unjustified because the war was ending anyway. The A-bombs meant, he says, “a difference, at most, of two or three weeks.” But at the time, with no indication that surrender was on the way, the kamikazes were sinking American vessels, the Indianapolis was sunk (880 men killed), and Allied casualties were running to over 7,000 per week. “Two or three weeks,” says Galbraith.

Two weeks more means 14,000 more killed and wounded, three weeks more, 21,000. Those weeks mean the world if you’re one of those thousands or related to one of them. During the time between the dropping of the Nagasaki bomb on August 9 and the actual surrender on the fifteenth, the war pursued its accustomed course: on the twelfth of August eight captured American fliers were executed (heads chopped off); the fifty-first United States submarine, Bonefish, was sunk (all aboard drowned); the destroyer Callaghan went down, the seventieth to be sunk, and the Destroyer Escort Underhill was lost. That’s a bit of what happened in six days of the two or three weeks posited by Galbraith. What did he do in the war? He worked in the Office of Price Administration in Washington. I don’t demand that he experience having his ass shot off. I merely note that he didn’t.

(This came up a couple times, a few months back.)

This is now called conservatism

Thursday, August 8th, 2019

American politics can be considered a tale of three liberalisms, George Will argues, in The Conservative Sensibility:

[T]he first of which, classical liberalism, teaches that the creative arena of human affairs is society, as distinct from government. Government’s proper function is to protect the conditions of life and liberty, primarily for the individual’s private pursuit of happiness. This is now called conservatism. Until the New Deal, however, it was the Jeffersonian spirit of most of the Democratic Party.

FDR’s New Deal liberalism was significantly more ambitious. He said that until the emergence of the modern industrial economy, “government had merely been called upon to produce the conditions within which people could live happily, labor peacefully and rest secure.” Now it would be called upon to play a grander role. It would not just provide conditions in which happiness, understood as material well-being, could be pursued. Rather, it would become a deliverer of happiness itself. Government, FDR said, has “final responsibility” for it. This “middle liberalism” of the New Deal supplemented political rights with economic rights.

The New Deal, the modern state it created, and the class of people for whom the state provided employment led to the third liberalism, that of the 1960s and beyond. This “managerial liberalism” celebrates the role of intellectuals and other policy elites in rationalizing society from above, wielding the federal government and the “science” of public administration, meaning bureaucracy.

The apotheosis of the first phase of liberalism, in Will’s view, was the American Founding, as Arnold Kling explains:

Madison and the other Founders took it as given that human nature made us sufficiently equal to deserve identical treatment under the law, sufficiently different to benefit from liberty and autonomy, sufficiently bellicose to require a government that could resolve disputes peacefully, and sufficiently factional that preventing one coalition from dominating the rest required a system of checks and balances.

Read the whole thing.

You can’t be healthy unless the animals you eat are healthy

Wednesday, August 7th, 2019

The New York Times looks at the vegetarians who turned into (ethical) butchers:

“As soon as I started eating meat, my health improved,” she said. “My mental acuity stepped up, I lost weight, my acne cleared up, my hair got better. I felt like a fog lifted.” All of the meat was from healthy, grass-fed animals reared on the farms where she worked.

Other former vegetarians reported that they, too, felt better after introducing grass-fed meat into their diets: Ms. Kavanaugh said eating meat again helped with her depression. Mr. Applestone said he felt far more energetic.


Grass-fed and -finished meat has been shown to be more healthful to humans than that from animals fed on soy and corn, containing higher levels of omega-3 fatty acids, conjugated linoleic acid, beta carotene and other nutrients. Cows that are fed predominantly grass and forage also have better health themselves, requiring less use of antibiotics.

“There’s one health for animals and humans,” Ms. Fernald said. “You can’t be healthy unless the animals you eat are healthy.”

There’s another benefit to grass-fed and -pastured meat: It can be absolutely delicious, as that steak in Denver reminded me.

Mr. Applestone vividly remembers that first bacon sandwich (made with pasture-raised pork) in his post-vegetarian life, served on a soft Martin’s potato roll: “I thought it was the greatest thing that ever hit my mouth.”

The suspects had a history of threats or other troubling communications

Tuesday, August 6th, 2019

So, what role does mental illness play in these mass killings?

Multiple studies done between 2000 and 2015 suggest that about a third of mass killers have an untreated severe mental illness. If mental illness is defined more broadly, the percentage is higher. In 2018 the Federal Bureau of Investigation released a report titled “A Study of the Pre-Attack Behavior of Active Shooters in the United States Between 2008 and 2013.” It reported that 40% of the shooters had received a psychiatric diagnosis, and 70% had “mental health stressors” or “mental health concerning behaviors” before the attack.

Most recently, in July 2019, the U.S. Secret Service released its report “Mass Attacks in Public Spaces—2018.” The report covered 27 attacks that resulted in 91 deaths and 107 injuries. The investigators found that 67% of the suspects displayed symptoms of mental illness or emotional disturbance. In 93% of the incidents, the authorities found that the suspects had a history of threats or other troubling communications. The results were similar to those of another study published by the Secret Service on 28 such attacks in 2017.


It should be emphasized that mentally ill patients who are receiving treatment are no more at risk for violence than the general population. Yet it is also clear that without treatment some seriously mentally ill people are at greater risk for violent behavior than the general population.

It doesn’t seem like we take mere threats very seriously.

Bloom was on to something

Sunday, August 4th, 2019

José Luis Ricón presents a systematic review of the effectiveness of mastery learning, tutoring, and direct instruction and draws these conclusions about Bloom’s two-sigma problem:

Bloom noted that mastery learning had an effect size of around 1 (one sigma); while tutoring leads to d=2. This is mostly an outlier case.

Nonetheless, Bloom was on to something: Tutoring and mastery learning do have a degree of experimental support, and fortunately it seems that carefully designed software systems can completely replace the instructional side of traditional teaching, achieving better results, on par with one-to-one tutoring. However, designing them is a hard endeavour, and there is a motivational component of teachers that may not be as easily replicable purely by software.

Overall, it’s good news that the effects are present for younger and older students, and across subjects, but the effect sizes of tutoring, mastery learning or DI are not as good as they would seem from Bloom’s paper. That said, it is true that tutoring does have large effect sizes, and that properly designed software does as well. The DARPA case study shows what is possible with software tutoring; in that case the effect sizes went even beyond Bloom’s paper.

Other approaches to education have also shown large effect sizes, and so one shouldn’t privilege DI/ML here. The principles behind DI/ML (clarity in what is taught, constant testing, feedback, remediation) are sound, and they do seem more clearly effective for disadvantaged children, so for them they are worth trying. For gifted children, or in general intelligent individuals, the principles of the approaches do still make sense, but how much of an effect do they have? In this review I have not looked at this question, but suffice to say that I haven’t found numerous mentions of work targeting the gifted.

That aside, if what one has in mind is raising the average societal skill-level by improving education, that’s a different matter, and that’s where the evidence from the DI literature is less compelling, the effects that do seem to emerge are weaker, perhaps of a quarter of a standard deviation at best. ML does fare better in the general student population, and for college students too.

As for the effect of diverse variables on the effects, studies tend to find that the effects of DI/ML fade over time — but don’t fully disappear — and that less skilled students benefit more than highly capable ones, and the effects vary greatly on what is being tested. Mastery learning, it seems, works by overfitting to a test, and the chances that those skills do not generalise are nontrivial. As in Direct Instruction, if what is desired is mastery of a few key core concepts, especially with children with learning disabilities, it may be well suited for them. But it is yet unclear if DI is useful for average kids. For high SES kids, it seems unlikely that they would benefit.
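Bloom’s sigmas are standardized effect sizes — Cohen’s d, the difference between group means divided by their pooled standard deviation. A minimal sketch with invented scores (the numbers below are illustrative and do not come from the review):

```python
import math

def cohens_d(treatment, control):
    """Cohen's d: difference of group means over the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = sum(treatment) / n1, sum(control) / n2
    # Sample variances (Bessel-corrected)
    var1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented test scores: a tutored group versus a conventional classroom.
tutored = [88, 92, 85, 90, 95]
classroom = [70, 75, 72, 68, 74]
print(round(cohens_d(tutored, classroom), 1))  # → 5.4
```

A d of 2 — Bloom’s tutoring figure — means the average tutored student scores two pooled standard deviations above the average conventionally taught one, around the 98th percentile of the control distribution.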

(Hat tip to Gwern.)

They are unable to decipher compound sentences

Saturday, August 3rd, 2019

Rod Dreher shares this email from a college professor in a STEM field:

My students are unable to analyze, follow and understand written text. To be more specific, they are unable to decipher compound sentences, understand relationship between subordinate and main clauses. They can’t grasp the logical relationship between sentences, let alone paragraphs, which are totally opaque to them.

When I started to teach (only 2 years ago), I prepared material written in normal, rational, technical prose — for adults, or as I understood they would be. Immediately, it became apparent that there was zero comprehension. Well, thought I, let’s make it a bit simpler. So I reduced the paragraphs to bullet point lists.

Still nothing? Hmm.

I started to write step by step, basically cut-and-paste instructions, highlighted the important points, wrote in notes and cross references (like NOTE: you did this in step #2 please refer to #2). Abject failure.

So, especially in the exams, I started to write in answers in the follow up questions, like so: “If you correctly answered #1 as ABC what is the cause of …?”. Basically I give them the answers in followup questions, plus cut and paste documents. My exams are open book, open notes, Internet access.

95% of them fail.

This is what I attribute this phenomenon to: I don’t think that they are able to concentrate for more than a few seconds. Hence compound sentences become an enigma. Their brains are ‘trained’ to hold information for the minimum time possible and to move on to the next soundbite or tweet. They are unable to hold a thought in their minds long enough to abstract it, analyze it, and form required relationships. As a result they lack the fundamental building blocks for inductive and deductive reasoning. They want to be spoon-fed without ever having to resort to a single abstract thought. They have been ‘educated’ by quick turnaround, expensive and largely incorrect multiple choice question textbooks.

Imagine how this would (and soon will) affect the medical profession. “When you treat appendicitis you will remove a) spleen, b) heart, c) appendix, d) none of the above. “Well, done!” Here is your first patient … (or, in Dr. Zoidberg’s context: Scalpel!, Blood bucket! Priest!).

Their problem is that they are unable to formulate questions. It’s difficult to come up with answers if you don’t know what to ask. So I tell them that my ambition is to teach them how to ask questions. They love my classes but I am told repeatedly: “This was the best class we have had but by far the most difficult.”

Good grief. We have totally destroyed this generation.

A reproach to every existing government

Friday, August 2nd, 2019

The theory of market failure is a reproach to the free-market economy, but, Bryan Caplan notes, it’s also a reproach to every existing government:

How so? Because market failure theory recommends specific government policies — and actually-existing governments rarely adopt anything like them.

What do I have in mind?

1. When markets produce too much of something, market failure theory tells governments to impose corrective taxes that correspond to the severity of the excess — then let people do as they please. In the real world, in contrast, governments normally pass a phone book’s worth of regulations. They rarely consider the cost of the regulations, and almost never just let people bypass regulations by paying a high fee. Thus, you can’t buy your way out of an Environmental Impact Statement or heroin prohibition — and if the theory of market failure is right, this rigidity is deeply dysfunctional.

2. When markets produce too little of something, market failure theory tells governments to provide corrective subsidies that correspond to the severity of the shortfall — then let people do as they please. In the real world, in contrast, governments tend to directly run industries with alleged positive externalities. Public education and health care are the obvious example, but the same goes for national parks, government lands, etc. Furthermore, government firms routinely offer even non-rival products for gratis or next-to-gratis — even when the products have clear negative externalities such as road congestion and subsidized energy.

3. While the theory of market failure abhors monopoly, actually-existing governments do much to stifle competition. This is most grotesque for real estate and immigration, which most governments view with dire suspicion, but perhaps most blatant for occupational licensing. Again, if negative externalities were the real rationale for these restrictions, governments would just impose taxes — then let everyone build, move, and work as they please.
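Caplan’s first point — tax the externality at its marginal harm, then let people do as they please — can be put in toy numbers (every figure below is invented for illustration):

```python
# Toy Pigouvian-tax sketch (all numbers invented for illustration).
# A firm's private marginal cost understates the social cost by a constant
# external harm per unit. Market failure theory says: tax each unit by the
# external harm, then let the firm choose its own output.

def profit_max_output(price, marginal_cost, tax_per_unit):
    """Firm produces each additional unit while price exceeds
    its marginal cost plus the per-unit tax."""
    q = 0
    while price > marginal_cost(q) + tax_per_unit:
        q += 1
    return q

mc = lambda q: q       # marginal cost of the (q+1)-th unit rises by 1 each time
price = 10
external_harm = 3      # harm per unit borne by third parties

untaxed = profit_max_output(price, mc, 0)
taxed = profit_max_output(price, mc, external_harm)
print(untaxed, taxed)  # → 10 7
```

With the tax in place, the firm stops producing exactly where price no longer covers private cost plus external harm — the socially optimal output — with no regulator dictating how it gets there.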

They become analog computations instead of digital

Thursday, August 1st, 2019

University of Michigan engineers are claiming the first memristor-based programmable computer for AI that can work all on its own.

“Memory is really the bottleneck,” says University of Michigan professor Wei Lu. “Machine learning models are getting larger and larger, and we don’t have enough on-chip memory to store the weights.” Going off-chip for data, to DRAM, say, can take 100 times as much computing time and energy. Even if you do have everything you need stored in on-chip memory, moving it back and forth to the computing core also takes too much time and energy, he says. “Instead, you do the computing in the memory.”

His lab has been working with memristors (also called resistive RAM, or RRAM), which store data as resistance, for more than a decade and has demonstrated the mechanics of their potential to efficiently perform AI computations such as the multiply-and-accumulate operations at the heart of deep learning. Arrays of memristors can do these tasks efficiently because they become analog computations instead of digital.

The new chip combines an array of 5,832 memristors with an OpenRISC processor, 486 specially designed digital-to-analog converters, 162 analog-to-digital converters, and two mixed-signal interfaces that act as translators between the memristors’ analog computations and the main processor.
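The “analog instead of digital” point is concrete: in a memristor crossbar, each stored weight is a conductance, Ohm’s law does the multiplications, and Kirchhoff’s current law does the sums, so an entire matrix-vector product falls out in one step. A minimal idealized model (no device noise, wire resistance, or converter quantization — assumptions for illustration, not a description of the Michigan chip):

```python
# Idealized memristor crossbar: weights stored as conductances G[i][j].
# Applying input voltages V[j] to the columns yields, by Ohm's law
# (current = conductance * voltage) and Kirchhoff's current law
# (currents on a shared row wire sum), a row current
#   I[i] = sum_j G[i][j] * V[j]
# i.e. a matrix-vector multiply performed in one analog step.

def crossbar_matvec(G, V):
    """Row currents of an ideal memristor crossbar: I = G @ V."""
    return [sum(g * v for g, v in zip(row, V)) for row in G]

G = [[0.5, 1.0],
     [2.0, 0.0]]
V = [1.0, 2.0]
print(crossbar_matvec(G, V))  # → [2.5, 2.0]
```

A digital core would need one multiply-accumulate per weight; the crossbar produces every row current simultaneously, which is where the speed and energy savings come from.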

Scientists created the first memristor 11 years ago and foresaw its use in neural nets.