Trying to pretend that somehow the 20th century is still amongst us

June 4th, 2020

I mentioned recently that I somehow managed to go this whole time without reading a single Tom Clancy novel — or watching a single movie adaptation, except for The Hunt for Red October — and only just listened to the audiobook version of Patriot Games, which was originally published in 1987.

I’ve continued working through the Jack Ryan Universe: The Cardinal of the Kremlin, Clear and Present Danger, and The Sum of All Fears — and a point Clancy keeps making is how much the intelligence community relies on the mainstream media for information, especially cable news, which was new and exciting at the time.

When Russ Roberts interviewed Martin Gurri on a recent EconTalk, I was naturally interested, since Arnold Kling has been mentioning Gurri regularly, but I wasn’t expecting a connection to Tom Clancy. Then Roberts introduced Gurri:

Today is February 20th, 2020, and my guest is author Martin Gurri. He is a former CIA [Central Intelligence Agency] analyst and the author of The Revolt of the Public. Martin, welcome to EconTalk.

Gurri explains his position:

Well, as you say, I was an analyst in CIA. I probably had the least glamorous job there. I didn’t have my 00 [Double-O] license to kill or the beautiful girls. I was an analyst of global media, and for the earliest part of my career, that was very straightforward. There was a trickle of open information and every country had its equivalent of the New York Times, a source that set the agenda. So, if the president wanted to know how his policies were playing in France, you went to Le Monde or you went to Le Figaro. Just, literally, two newspapers.

Then things went haywire: just the world turned upside down. A digital earthquake epicenter, say, somewhere between Mountain View and Palo Alto, generated this tsunami of information that just swept over the world.

And, ‘tsunami,’ I think is a good word. Numbers can be boring, but sometimes they can be illustrative. Some very clever people from Berkeley tried to measure how the information of the world had developed, and they came up with the fact that in the year 2001, as you just tip into this era, that year produced double the amount of information of all previous human history going back to the cave paintings and the dawn of culture.

So, 2002 doubled 2001. So, if you chart that you do get something that looks like a gigantic wave; and I call it a tsunami. Now, for those of us who worked in CIA, that was like, ‘Now what the heck do we do with this enormous amount of information? Where do we get our stuff?’ But, what really mattered was the effect of the information in different nations of the world. We could see, as the tsunami swept across the world at different speeds in different countries, tremendously increased levels of social and political turbulence. And, the question was why? So, that was the seed of the book.

After I left government, the question that haunted me was the one that my CIA masters always asked, which was, like, ‘So, what? Okay, so the people get–they start to write bad things about government in Egypt. So what? What are they going to do when the cops come? Hit them with their laptops?’ That was an internal CIA joke: Are they going to hit them with their laptops?

[...]

2011 is a year I call the Phase Change Year, where it really showed the effect of this tide of information could affect power. And you had, of course, the Arab Spring in the Middle East, probably misnamed. You had the Indignados in Spain. You had a revolt called the Tent People Revolt or Social Justice Revolt in Israel. You had the Occupiers here in the United States. And, these all had similar origin. So the question, now, was what was going on? What caused these eruptions from below?

And, to my thinking, it has to do with the kinds of institutions that we have inherited from the 20th century, from the industrial age. They’re all–how many people are aware of Frederick Taylor? He’s sort of a forgotten figure in history. But he was sort of the prophet of industrialism and scientific management. And, if you read his writings, everything happens from the top down: the top manager figures everything is going to happen, all the tools that you need, and essentially what everybody, every layer below you–and there are many, many layers below you–is going to do. Everything is scripted.

Well, our institutions, which we think or tend to think were created in the 18th century by the Founders, in fact are the product of the industrial age, and of political Taylorism, in essence. And, one of the things that they required, to maintain their authority–and they had, in their day, a great deal of authority: that they believe in expertise, they believe in science–one of the things that they were–the primary foundation was a monopoly of information in their domains.

So that, if you’re in government you have a control over a certain set of government information. If you’re in politics, you and the media, you as the politicians and the media share a certain set of information that nobody else had access to in the 20th century. Nobody talked back.

And, what that tsunami has done was destroy that monopoly. In brief, it has destroyed that monopoly; and it turns out these institutions can’t seem to function without that and have lost their authority. Where, before there was a sort of instinctive reliance–the President says something at the age of JFK [John Fitzgerald Kennedy], somewhere between 70% and 80% of Americans trusted the government. Today, if you’re the President, you are instinctively distrusted: somewhere between 20% and 30% today, trust the Presidency and the Federal Government.

So, I think it has been a crisis that these institutions have lapsed into. And, I think the elites that manage and inhabit these institutions have reacted pretty badly in the sense of not really being aware of what’s happening, and trying to pretend that somehow the 20th century is still amongst us and that the internet and the web and the digital universe has never exploded around them.

Maxis didn’t want to make professional simulation games

June 3rd, 2020

SimCity wasn’t meant to be taken seriously:

The game was inspired by research on real-world urban planning concepts, and although it was created as a way for players to experiment running a city, the goal was to be fun rather than accurate. “I realized early on, because of chaos theory and a lot of other things,” said designer Will Wright, “that it’s kind of hopeless to approach simulations like that, as predictive endeavors. But we’ve kind of caricatured our systems. SimCity was always meant to be a caricature of the way a city works, not a realistic model of the way a city works.”

“I think if we tried to make it realistic, we would be doing something that we wouldn’t want to do,” Wright said in an interview in 1999. But that didn’t stop companies from believing Maxis could design realistic simulations. Will Wright didn’t believe that was even possible. “Many people come to us and say, ‘You should do the professional version,’” he continued. “That really scares me because I know how pathetic the simulations are, really, compared to reality. The last thing I want people to come away with is that we’re on the verge of being able to simulate the way that a city really develops, because we’re not.”

Maxis didn’t want to make professional simulation games. But for two brief, strange years, they did.

From 1992 to 1994, a division called Maxis Business Simulations was responsible for making serious professional simulations that looked and played like Maxis games. After Maxis cut the division loose, the company continued to operate independently, taking the simulation game genre in their own direction. Their games found their way into corporate training rooms and even went as far as the White House.

Almost nothing they developed was ever released to the public.

[...]

For Wright, games were a way of helping people create “mental models” for understanding parts of the world. The team at Maxis would research a topic like urban dynamics — or something like ant colony behavior, in the case of another game they made called SimAnt — and create a game where players could experiment with those ideas. The goal wasn’t to teach anything directly, but rather to help the player get the model of SimCity in their head, so that playing this game could help them understand how the different systems within a city interact.

For many people though, that nuance was lost, and instead they treated it like Maxis could build accurate simulations of the real world. And they wouldn’t stop asking about it. “In the first couple months after SimCity appeared,” Wright told Wired, “we were approached by a number of companies saying, ‘Hey that’s great! If you can do a city like that, we want you to do SimPizzaHut, or SimWhatever.’ We thought these things were so weird that we said no, but they kept coming in.”

“So at some point, as we got big enough, we decided to give it a go.”

John Hiles knew about SimCity. He also believed in the power of building mental models, and he saw something in SimCity that was missing from the simulation modeling work happening at Delta Logic: it was fun. It had an intuitive interface and friendly graphics. That was the missing ingredient. Hiles believed that if they teamed up — Maxis’s style with Delta Logic’s systems — they could create simulations that were fun and powerful. Maxis had been looking for new partners for software development, so Hiles used that as an opportunity to get in their orbit. He approached Jeff Braun, and in 1991, his company became a contractor for Maxis.

[...]

As part of the company’s restructuring in the wake of SimCity, in the summer of 1992, Maxis accepted a $10 million investment from Warburg Pincus Ventures, who received a 30% stake in the company and a seat on the board. According to Braun, Warburg Pincus wanted Maxis to start doing business simulation games more seriously.

With their new directive, Maxis decided to jump in all the way. That July, they purchased Delta Logic, turning them into a new division of the company — Maxis Business Simulations. John Hiles was named VP and general manager.

Their first project? Chevron wanted them to make a game about an oil refinery.

Oil refineries are really, really complicated. That’s why Chevron wanted Maxis to make them a game like SimCity, to teach the employees at their oil refinery in Richmond, California, how it all worked.

To be clear, they didn’t want a game that was supposed to accurately train people how to run an oil refinery or replace an education in chemical engineering. That would’ve been incredibly dangerous. What they wanted instead was something that showed you how the dynamics of the refinery worked, how all the different pieces invisibly fit together, like SimCity did for cities.

The operators at the refinery sometimes had trouble getting a big picture for what was happening at the plant beyond their particular area of focus. “The whole goal of this was to teach operators that they are part of a bigger system,” Skidmore said. “Their concern at the time was that operators tended to be very focused on their one plant, and their one thing they do, and so [they] weren’t keeping in mind that what they do affected other parts of the plant. So they wanted a training tool that allowed operators to manipulate inputs and outputs of the various pieces of the refinery process to see how they impact.”

The non-technical staff at the Richmond refinery needed to know how it worked too. The people in human resources and accounting weren’t chemical engineers, but it would help their work to see how the different areas of the plant were networked together, how one department affected another department.

Chevron paid Maxis $75,000 for a prototype of a refinery simulator. The project began even before Maxis bought Delta Logic, back when they were still just contractors.

How do you get started on a project like this? They did it the same way Maxis developed their own games: they did research.

John Hiles and the Bruces took a visit to the Chevron Richmond Refinery, where they met with a specialist who took them on a tour of the plant and explained how it worked. It was a collaborative relationship with Chevron throughout the development process; Chevron sent them the raw formulas they used at the refinery, and as Maxis Business Simulations turned that into a game, Chevron would double-check their work.

[...]

John Hiles said that most of the trainers at Chevron wanted to use it as a conventional training tool, “but some of the more astute teachers said, ‘Let’s just get you started here by seeing if you can wreck the oil refinery, if you can abuse the inputs and the settings and essentially get fired,’” he remembered.

That was a legitimate way to learn how a refinery worked: if you start breaking the refinery, you can see how ruining one part of the plant will affect the other parts of the plant. “The tool — the game — was agnostic,” Hiles explained, correcting himself. “It would work for someone trying to ruin an oil refinery just as well as somebody trying to run it efficiently.”

SimRefinery was finished in fall 1992, earlier than the 1993 date that’s usually reported online. The trademark registration for SimRefinery suggests that the game was officially handed over to Chevron on Monday, October 26, 1992. (It’s unusual to have a specific release date for a corporate training product, but that’s a result of Maxis trademarking the SimRefinery name almost a year after it was completed.)

Chevron liked it. They started testing the game with their staff in September, and Chevron reported that communication from marketing and finance staff “improved dramatically.” Speaking to The Plain Dealer in Cleveland, Chevron training specialist Susan Gustin praised the game’s effectiveness. “Just dumping information on people isn’t effective,” she said. “People only remember what they use.” She told Computerworld, “Some of these relationships aren’t at all obvious until you play the game a bit.”

It seems to have even won over one of its critics, Will Wright. “He was initially skeptical,” Skidmore said. “I think when we eventually finished SimRefinery, I think he approved of it.”

[...]

Whatever the long-term interest in SimRefinery, it wasn’t adopted at Chevron out of the gate, and that was the start of a pattern for the games by Maxis Business Simulations — a skepticism towards the idea that a simulation game could teach you something. Or should teach you something.

Once you grasp its lessons, you can never again be a normal citizen

June 2nd, 2020

Labor economics stands against the world, Bryan Caplan says:

Once you grasp its lessons, you can never again be a normal citizen.

What are these “central tenets of our secular religion” and what’s wrong with them?

Tenet #1: The main reason today’s workers have a decent standard of living is that government passed a bunch of laws protecting them.

Critique: High worker productivity plus competition between employers is the real reason today’s workers have a decent standard of living. In fact, “pro-worker” laws have dire negative side effects for workers, especially unemployment.

Tenet #2: Strict regulation of immigration, especially low-skilled immigration, prevents poverty and inequality.

Critique: Immigration restrictions massively increase the poverty and inequality of the world — and make the average American poorer in the process. Specialization and trade are fountains of wealth, and immigration is just specialization and trade in labor.

Tenet #3: In the modern economy, nothing is more important than education.

Critique: After making obvious corrections for pre-existing ability, completion probability, and such, the return to education is pretty good for strong students, but mediocre or worse for weak students.

Tenet #4: The modern welfare state strikes a wise balance between compassion and efficiency.

Critique: The welfare state primarily helps the old, not the poor — and 19th-century open immigration did far more for the absolutely poor than the welfare state ever has.

Tenet #5: Increasing education levels is good for society.

Critique: Education is mostly signaling; increasing education is a recipe for credential inflation, not prosperity.

Tenet #6: Racial and gender discrimination remains a serious problem, and without government regulation, would still be rampant.

Critique: Unless government requires discrimination, market forces make it a marginal issue at most. Large group differences persist because groups differ largely in productivity.

Tenet #7: Men have treated women poorly throughout history, and it’s only thanks to feminism that anything’s improved.

Critique: While women in the pre-modern era lived hard lives, so did men. The mating market led to poor outcomes for women because men had very little to offer. Economic growth plus competition in labor and mating markets, not feminism, is the main reason women’s lives improved.

Tenet #8: Overpopulation is a terrible social problem.

Critique: The positive externalities of population — especially idea externalities — far outweigh the negative. Reducing population to help the environment is using a sword to kill a mosquito.

Yes, I’m well-aware that most labor economics classes either neglect these points, or strive for “balance.” But as far as I’m concerned, most labor economists just aren’t doing their job. Their lingering faith in our society’s secular religion clouds their judgment — and prevents them from enlightening their students and laying the groundwork for a better future.

California trash-to-hydrogen plant promises dirt-cheap, super-green H2

June 1st, 2020

Lancaster, California will be home to a “greener than green” trash-to-hydrogen production plant three times the size of any other green H2 facility:

SGH2 says its process is the cleanest of all on the market, while matching the price of the cheapest producers — and pulling tens of thousands of tons of garbage out of landfills.

[...]

According to a recent memorandum of understanding, the city of Lancaster will host and co-own the SGH2 Lancaster plant, which will be capable of producing up to 11,000 kg of H2 per day, or 3.8 million kg per year, while processing up to 42,000 tons of recycled waste per year. Garbage to clean fuel, with a US$2.1 to $3.2 million saving on landfill costs per year as a sweetener.

[...]

The process, developed by SGH2’s parent company Solena, uses high-temperature plasma torches putting out temperatures between 3,500 and 4,000 °C (6,332 to 7,232 °F). This ionic heat, with oxygen-enriched gas fed in, catalyzes a “complete molecular dissociation of all hydrocarbons” in whatever fuel you’ve fed in, and as it rises and begins to cool, it forms “a very high quality, hydrogen-rich bio-syngas free of tar, soot and heavy metals.”

The process accepts a wide variety of waste sources, including paper, old tires, textiles, and notably plastics, which it can handle very efficiently without toxic by-products. The bio-syngas exits the top of a plenum chamber, and is sent to a cooling chamber, followed by a pair of acid scrubbers to remove particulate matter.

A centrifugal compressor further cleans the gas stream, leaving a mixture of hydrogen, carbon monoxide and carbon dioxide. This is run through a water-gas shift reactor that adds water vapor and converts the carbon monoxide to carbon dioxide and more hydrogen gas. The two are separated, neatly capturing all the CO2 as hydrogen comes out the other end.

A Berkeley Lab lifecycle carbon analysis concluded, says SGH2, that each ton of hydrogen produced by this process reduces emissions by between 23 and 31 tons of CO2 equivalent — presumably counting emissions that would be created if the garbage was burned instead of converted into hydrogen. That would be between 13 and 19 tons more carbon dioxide avoided than any other green hydrogen production process.

What’s more, while electrolysis requires some 62 kWh of energy to produce one kilogram of hydrogen, the Solena process is energy-positive, generating 1.8 kWh per kg of hydrogen, meaning the plant generates its own electricity and doesn’t require external power input.

The 5-acre facility, in a heavy industrial zone of Lancaster, will employ 35 people full-time and create some 600 jobs in construction. SGH2 is hoping to break ground in Q1 2021 and achieve full operational status by 2023. The company is in negotiations with “California’s largest owners and operators of hydrogen refueling stations” to buy the plant’s entire output for a 10-year period.
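The quoted capacity figures hang together. Here is a quick back-of-the-envelope consistency check — the arithmetic is mine, not SGH2's, and the implied utilization rate is an inference from their numbers:

```python
# Consistency check on the SGH2 Lancaster figures quoted above.
daily_h2_kg = 11_000        # peak daily output, kg of H2
annual_h2_kg = 3_800_000    # stated annual output, kg
waste_tons = 42_000         # recycled waste processed per year, tons

# Running flat-out would give 11,000 * 365 ~= 4.0 million kg/year,
# so the stated 3.8 million kg implies roughly 95% uptime.
utilization = annual_h2_kg / (daily_h2_kg * 365)

# Implied yield: roughly 90 kg of hydrogen per ton of waste.
yield_kg_per_ton = annual_h2_kg / waste_tons

print(f"Implied utilization: {utilization:.0%}")
print(f"Yield: {yield_kg_per_ton:.0f} kg H2 per ton of waste")
```

The ~95% implied capacity factor is plausible for a plant with scheduled maintenance downtime, which suggests the daily and annual figures were derived from each other rather than measured independently.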

Use the same tactics you would use with a power hungry and controlling supervisor in your place of employment

May 29th, 2020

How do you safely intervene when cops are mistreating a prisoner?

Violent action won’t help. You will be arrested and likely beaten or killed as well. If you physically attack the cop, it might actually make it worse for the guy you are trying to protect.

[...]

You always want to give your opponent a “face saving” way out. You want your opponent to think that your idea is his idea and to embrace that idea rather than to fight it. The best way to deal with these police officers is to use the same tactics you would use with a power hungry and controlling supervisor in your place of employment.

[...]

Don’t let your rage make you ineffective. To verbally convince these officers that they are acting in error, you need to provide them with a better solution and make them think that the decision is in their own best interests. You may have to soften your angry tone and think a bit to make that happen.

[...]

The best thing to do is to approach another officer on scene who has less ego involvement rather than approaching the officer kneeling on the man’s neck.

Say something like:

“Hey officer, I just want to let you know that the guy on the ground appears to be suffering from a medical condition. I don’t know if the officer controlling him knows he’s kneeling on the dude’s neck. People are videotaping and it doesn’t look good. I just don’t want you guys to get in trouble.”

If someone approached me at a similar scene in that manner, I would most certainly go check things out and ensure that the prisoner is OK.

You don’t care about the officers’ well-being. You openly hope that the officer does get in trouble. Remember, to be successful, you want him to think it was his own idea. You want the officer to think “Maybe that doesn’t look very good. I have to stop this before it gets worse.” Play the game.

If there is no one else on scene, I’d approach the officer and focus on the medical issues.

“Officer, let me help you. I’ve had advanced medical training and that guy doesn’t look so good. Let’s move him on to his side and away from the car so that he can breathe better and I’ll check him out for you.”

In that approach, the officer can yield authority to someone who is better qualified without losing face. Most cops know very little about medical treatment protocols. If you seem like you know more than he does, he may yield to your experience.

Another way that might work is:

“Officer, are you OK? I’m a martial arts instructor. Can I help you hold him down so that you don’t have to kneel on his neck? Just tell me what you want me to do and I’ll do it.”

That might get the officer thinking about the consequences of kneeling on someone’s neck and allow him the safety to “de-escalate” if he feels that you are helping him get a chaotic situation under control.

Heaviside can take advantage of slim and low-drag aerodynamic forms that are just not practical on cars

May 28th, 2020

Electric vertical takeoff and landing (eVTOL) aircraft can be surprisingly energy-efficient:

Under the EPA’s standard freeway driving test, a 2020 Nissan Leaf Plus uses about 275 watt-hours per mile when it averages 50 miles per hour. It can comfortably seat four, but its average occupancy is somewhere around 1.6. Thus, the Leaf’s energy consumption is about 171 Wh per passenger mile across all trips.

Our current Heaviside prototype uses about 120 Wh per passenger mile, and does so at twice the speed of the Leaf: 100 miles per hour (of course, we can fly much faster, if we choose). We can save another 15% of energy because while roads are not straight, flight paths usually are. All together, Heaviside requires 61% as much energy to go a mile.

Why is Heaviside this efficient — doesn’t it take more energy to go faster? Yes, and it makes the high efficiency we’ve achieved even more dramatic. The answer is that Heaviside can take advantage of slim and low-drag aerodynamic forms that are just not practical on cars.
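The arithmetic in the quote checks out, give or take rounding. A minimal worked version, using only the figures stated above (my calculation, not Kitty Hawk's):

```python
# Per-passenger-mile energy comparison, using the figures quoted above.
leaf_wh_per_mile = 275           # 2020 Nissan Leaf Plus, EPA freeway test
leaf_occupancy = 1.6             # average occupancy cited in the post
leaf_wh_per_pax_mile = leaf_wh_per_mile / leaf_occupancy   # ~172 Wh

heaviside_wh_per_pax_mile = 120  # Heaviside prototype figure
routing_factor = 0.85            # ~15% shorter trips: flight paths are straight

effective = heaviside_wh_per_pax_mile * routing_factor     # ~102 Wh
ratio = effective / leaf_wh_per_pax_mile                   # ~0.59

print(f"Leaf:      {leaf_wh_per_pax_mile:.0f} Wh per passenger mile")
print(f"Heaviside: {effective:.0f} Wh per passenger mile (routing-adjusted)")
print(f"Ratio:     {ratio:.0%}")
```

This comes out near 59%, close to the post's 61%; the small gap is down to rounding conventions in the original.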

Detonation is chaotic and much harder to control

May 27th, 2020

A type of rocket engine once thought impossible has just been fired up in the lab:

Engineers have built and successfully tested what is known as a rotating detonation engine, which generates thrust via a self-sustaining wave of detonations that travel around a circular channel.

As this engine requires far less fuel than the combustion engines currently used to power rockets, it could eventually mean a more efficient and much lighter means of getting our ships into space.

“The study presents, for the first time, experimental evidence of a safe and functioning hydrogen and oxygen propellant detonation in a rotating detonation rocket engine,” said aerospace engineer Kareem Ahmed of the University of Central Florida.

The idea of the rotating detonation engine goes back to the 1950s. It consists of a ring-shaped — annular — thrust chamber created by two cylinders of different diameters stacked inside one another, creating a gap in between.

Gas fuel and oxidiser are then injected into this chamber through small holes and ignited. This creates the first detonation, which produces a supersonic shockwave that bounces around the chamber. That shockwave ignites the next detonation, which ignites the next, and so forth, producing an ongoing supersonic shockwave to generate thrust.

This should produce more energy for less fuel compared to combustion, which is why the US Military is investigating and funding it; this new research was funded by the US Air Force, and it’s not the only such project the military are looking into.

In practice, however, there’s a reason rockets are generally powered by internal combustion instead, in which the fuel and oxidiser are mixed to produce a slower, controlled reaction to generate thrust.

Detonation is chaotic and much harder to control. In order for the whole thing to not blow up — very literally — in your face, everything needs to be precisely calibrated.

(Hat tip to Hans Schantz.)

Seismic technology to probe the Earth adapted to probe the brain

May 26th, 2020

For decades, geologists have used sound waves travelling through the Earth to search for oil, image fault lines and attempt to predict earthquakes:

But in recent years seismology has been supercharged by a computational technique called full waveform inversion (FWI), which uses complex computer algorithms to scavenge ever more information from seismic data, and make much more detailed and accurate 3D maps of the Earth’s crust.

Now scientists at Imperial College London have adapted the same technology into a prototype head-mounted scanner that produced imaging information they say could be used in the future to produce high-resolution 3D images of the brain.

[Figure: synthesized wavefield crossing the head]

The device uses a helmet fitted with an array of acoustic transducers that act as both sound transmitters and receivers. The system uses low frequency sound waves that are able to penetrate the skull and pass through the brain without harming brain tissue. The sound waves are altered as they pass through different brain structures, then the signals are read and run through the FWI algorithm. In simulations the team got results that make them confident they can produce high-resolution 3D images that may be as good, if not better, than more traditional approaches.

Such a device, because of its simplicity and presumably lower cost, could make brain imaging much more widely available.

If developed into a small, portable version, it could have a powerful impact on the diagnosis of brain injury. For example, doctors in emergency rooms or paramedics would be able to do instant brain scans of accident victims with head injuries, or stroke victims.

Current brain scanning technology is very expensive so its use is effectively rationed, with long wait times for non-emergency appointments. It’s also cumbersome, not very well suited to some emergency situations, and can’t be used on some patients. MRI, for example, can’t be used on patients with metallic medical implants or victims of accidents who might have metallic foreign bodies in them. They’re also huge, loud and confining, which can be a big issue for some patients.

Abu Hureyra had another story to tell

May 25th, 2020

Before the Taqba Dam impounded the Euphrates River in northern Syria in the 1970s, an archaeological site named Abu Hureyra bore witness to the moment ancient nomadic people first settled down and started cultivating crops — but Abu Hureyra had another story to tell:

A large mound marks the settlement, which now lies under Lake Assad.

But before the lake formed, archaeologists were able to carefully extract and describe much material, including parts of houses, food and tools — an abundance of evidence that allowed them to identify the transition to agriculture nearly 12,800 years ago. It was one of the most significant events in our Earth’s cultural and environmental history.

Abu Hureyra, it turns out, has another story to tell. Found among the cereals and grains and splashed on early building material and animal bones was meltglass, some features of which suggest it was formed at extremely high temperatures — far higher than what humans could achieve at the time, or that could be attributed to fire, lightning or volcanism.

“To help with perspective, such high temperatures would completely melt an automobile in less than a minute,” said James Kennett, a UC Santa Barbara emeritus professor of geology. Such intensity, he added, could only have resulted from an extremely violent, high-energy, high-velocity phenomenon, something on the order of a cosmic impact.

Based on materials collected before the site was flooded, Kennett and his colleagues contend Abu Hureyra is the first site to document the direct effects of a fragmented comet on a human settlement. These fragments are all part of the same comet that likely slammed into Earth and exploded in the atmosphere at the end of the Pleistocene epoch, according to Kennett. This impact contributed to the extinction of most large animals, including mammoths, American horses, and camels; the disappearance of the North American Clovis culture; and to the abrupt onset of the end-glacial Younger Dryas cooling episode.

The team’s findings are highlighted in a paper published in the Nature journal Scientific Reports.

“Our new discoveries represent much more powerful evidence for very high temperatures that could only be associated with a cosmic impact,” said Kennett, who with his colleagues first reported evidence of such an event in the region in 2012.

Abu Hureyra lies at the easternmost sector of what is known as the Younger Dryas Boundary (YDB) strewnfield, which encompasses about 30 other sites in the Americas, Europe and parts of the Middle East. These sites hold evidence of massive burning, including a widespread carbon-rich “black mat” layer that contains millions of nanodiamonds, high concentrations of platinum and tiny metallic spherules formed at very high temperatures. The YDB impact hypothesis has gained more traction in recent years because of many new discoveries, including a very young impact crater beneath the Hiawatha Glacier of the Greenland ice sheet, and high-temperature meltglass and other similar evidence at an archaeological site in Pilauco, located in southern Chile.

Rodents rely on restaurants for food

May 24th, 2020

The Centers for Disease Control and Prevention (CDC) is warning that rodents are becoming increasingly aggressive as they scavenge for food:

In an advisory posted to its website, the health agency noted that rodents rely on food and waste generated at commercial establishments such as restaurants. Closures and limits on service have caused rodents to search for new sources of food and to exhibit more erratic behavior.

Passive repetitive reading produces little or no benefit for learning

May 24th, 2020

Research dating back a century has shown that retrieval contributes to learning, but the past decade has seen a renewed, intense focus on exploring the benefits of retrieval for learning:

This recent research has established that repeated retrieval enhances learning with a wide range of materials, in a variety of settings and contexts, and with learners ranging from preschool ages into later adulthood (Balota, Duchek, Sergent-Marshall & Roediger, 2006; Fritz, Morris, Nolan & Singleton, 2007).

A word-learning experiment illustrates some key points about retrieval-based learning. In the experiment (Karpicke & Bauernschmidt, 2011), students learned a list of foreign language words (e.g., Swahili vocabulary words like “mashua — boat”) across cycles of study and recall trials. In study trials, the students saw a vocabulary word and its translation on the computer screen, and in recall trials, they saw a vocabulary word and had to recall and type its translation. The students studied a list of vocabulary words, then attempted to retrieve the whole list, studied it again, retrieved it again, and so on across alternating study and retrieval practice blocks.

There were several different conditions in the experiment. In one condition, students simply studied the words once, without trying to recall them at all. In a second condition, students continued studying and recalling the words until they had recalled all of them once. After a word was successfully retrieved once, it was “dropped” from further practice — the students did not see it again in the learning session.

Other conditions in the experiment examined the effects of repeated retrieval practice. Once a word was recalled, the computer program required the students to practice retrieving the items three more times. One repeated retrieval condition had the three recall trials happen immediately, three times in a row. This condition, referred to as massed retrieval practice, is akin to repeating a new piece of information over and over in your head right after you experience it. Finally, in the last condition highlighted here, the students also practiced retrieving the words three times, but the repeated retrievals were spaced throughout the learning session. For instance, once a student correctly recalled the translation for mashua, the program moved on to other vocabulary words, but prompts to practice retrieval of the translation for mashua would pop up later on in the program. In this way, the retrieval opportunities were spaced throughout the learning session.
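The spaced-retrieval condition described above amounts to a simple scheduling rule: after a correct recall, the item re-enters the practice queue a few prompts later, until it has been retrieved the required number of times. Here is a minimal toy sketch of that rule; the function name, queue model, and parameters are my own illustration, not the actual program from Karpicke and Bauernschmidt’s experiment:

```python
from collections import deque

def schedule_spaced(items, repetitions=3, gap=2):
    """Return the order of retrieval prompts under a spaced schedule.

    After an item is recalled once, it re-enters the queue `gap` prompts
    later, until it has been retrieved `repetitions` more times (so each
    item is retrieved repetitions + 1 times in total, as in the study).
    Recall is assumed to succeed on every attempt, for simplicity.
    """
    queue = deque((item, 0) for item in items)
    order = []
    while queue:
        item, times_recalled = queue.popleft()
        order.append(item)  # prompt a retrieval attempt for this item
        times_recalled += 1
        if times_recalled < repetitions + 1:
            # Re-insert the item a few prompts later (spaced practice);
            # setting gap=0 instead would reproduce massed practice.
            queue.insert(min(gap, len(queue)), (item, times_recalled))
    return order

prompts = schedule_spaced(["mashua", "rafiki"], repetitions=1, gap=1)
# each word is prompted twice, with the other word interleaved between
```

Setting `gap=0` collapses this into the massed condition (three recalls in a row), which, per the results below, adds nothing over a single retrieval.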

The key question in this research was, how well would students remember the vocabulary word translations in the long term? Figure 1 shows the proportion of translations that students remembered one week after the initial learning session.

[Figure 1: Proportion of vocabulary translations recalled one week later, by learning condition]

Merely studying the words once without ever recalling them produced extremely poor performance (average recall was 1 percent, barely visible on the figure). Practicing until each translation was recalled once was much better. But what about the effects of repeated retrieval practice? Massed retrieval — repeating the translations three times immediately — produced no additional gain in learning. Repeated retrieval enhanced learning only when the repetitions were spaced, and indeed, the effects of repeated spaced retrieval were very large. In a single experiment, simple changes that incorporated spaced retrieval practice took performance from nearly total forgetting to extremely good retention (about 80 percent correct) one week after an initial learning experience (see also Karpicke & Roediger, 2008; Pyc & Rawson, 2010).

[...]

In one survey (Karpicke, Butler & Roediger, 2009), college students were asked to list the strategies they use while studying and to rank-order the strategies. The results, shown in Figure 2, indicate that students’ most frequent study strategy, by far, is repetitive reading of notes or textbooks. Active retrieval practice lagged far behind repetitive reading and other strategies (for a review of several learning strategies, see Dunlosky, Rawson, Marsh, Nathan & Willingham, 2013). A wealth of research has shown that passive repetitive reading produces little or no benefit for learning (Callender & McDaniel, 2009). Yet not only was repetitive reading the most frequently listed strategy, it was also the strategy most often listed as students’ number one choice, by a large margin.

Grunts in the Sky

May 23rd, 2020

I don’t remember Grunts in the Sky from when it was leaked in 2015 or officially released a couple years after that.

Teachers don’t learn about learning

May 22nd, 2020

Many things that go on at schools are at odds with the conclusions of rigorous education research:

Teaching kids abstract critical thinking skills is unlikely to help them think critically. The length of lectures often exceeds children’s attention spans. Most anti-bullying programs don’t work.

[...]

The results were “sobering,” according to a March 2020 report, “Learning by Scientific Design: Early insights from a network informing teacher preparation.” By my math, teacher candidates scored an average of 57 percent or 31 questions correct on a 54-question test — an F.

[...]

Deans for Impact instead reported the results in three separate categories: 49 percent correct on 14 questions about basic cognitive science principles; 58 percent correct on 32 questions about applying these concepts in the classroom; and 67 percent correct on eight questions about beliefs about how kids learn.
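The 57 percent figure above can be reconstructed from these per-section results; a quick arithmetic check (the per-section question counts and percentages come from the report as quoted, the rest is my reconstruction):

```python
# Per-section results: (number of questions, fraction correct)
sections = {
    "basic cognitive science principles": (14, 0.49),
    "applying concepts in the classroom": (32, 0.58),
    "beliefs about how kids learn":       (8,  0.67),
}

correct = sum(n * frac for n, frac in sections.values())  # ~30.8 questions
total = sum(n for n, _ in sections.values())              # 54 questions
overall = correct / total                                 # ~0.57, i.e. 57%
```

The weighted average works out to about 31 of 54 questions correct, matching the 57 percent combined score.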

[...]

One common misunderstanding, according to the report, is mistaking student engagement for learning. In one question, student teachers were asked to pick between two classroom activities to teach students the difference between types of newspaper articles. One activity asked students to read the same three articles and answer three questions in small discussion groups. One example: “Make a list of differences between the news article and the opinion pieces. Which of these can be attributed to the authors’ differing purposes?” The second activity had students go on a newspaper scavenger hunt and sort articles into three categories: persuade, inform and entertain.

The question specifically asked teachers to pick the activity that would help students learn the ways that an author’s purpose influences their writing. And for the education researchers who helped create the assessment, it wasn’t a close call. “None of these are gotcha questions,” said Heal, a consultant with Deans for Impact.

But only 22 percent of future teachers picked the first activity, which was the correct answer, because it requires students to make their thinking visible and identify key features of each text. That helps students build a mental model that they can apply again in the future. The second activity doesn’t require much analysis but teacher candidates gravitated toward it. Why? “The first activity is very boring, I didn’t even want to read the questions,” wrote one test taker. “The second activity is more inviting, seems more hands on and is more inquiry learning.”

The test also revealed that many teacher candidates embrace the myth of learning styles, believing that individual students are either visual, auditory or kinesthetic learners. The research consensus is that differentiating instruction this way doesn’t boost learning.

[...]

Twenty-two teaching instructors at the six schools volunteered to take the test themselves. They also failed the section on basic cognitive science principles but they passed the section on practical applications in the classroom with an average score of 77 percent correct. Maybe you don’t need to know the details of the science as long as you know how to apply them.

Indochina got the worst of two worlds

May 21st, 2020

Bryan Caplan often feels the need to save pacifism from (other) pacifists:

Though the argument for pacifism is surprisingly solid, flesh-and-blood pacifists often make me cringe with their naive and even intellectually dishonest claims. Some even shamefully glide from pacifism to identification with heinous totalitarian regimes.

One striking example: the following panel from historian Howard Zinn‘s non-fiction graphic novel, A People’s History of American Empire.* After a history of the Vietnam War that barely mentions North Vietnam’s record of mass murder and oppression, Zinn claims complete vindication by events.

[Panel from Zinn’s A People’s History of American Empire depicting the Vietnam Veterans Memorial]

“Everything that radical critics had predicted”?! Did they predict a mass exodus of desperate boat people? Communist Vietnam’s imprisonment of millions in re-education camps? The untimely deaths of over 100,000 in those camps? The execution of another hundred thousand? The Khmer Rouge’s takeover and murder of 25% of the population of Cambodia? Defenders of the war who claimed that only America’s presence could prevent a bloodbath have a far stronger claim to vindication by the facts than its “radical critics.”

Zinn deserves credit for pointing out the crimes of the American and South Vietnamese governments. But the intellectually honest pacifist should be the first to admit that the North Vietnamese government’s crimes were far worse — and that Indochinese Communists’ post-war intentions were truly macabre.

If these are my views, why on earth would I have opposed the Vietnam War? The same reasons as usual: even the less-evil side engaged in mass murder of civilians and other human rights violations without any strong reason to believe these moral transgressions would lead to sharply better consequences. The American government did great evil in the name of a greater good that never materialized. In the end, Indochina got the worst of two worlds: all the horrors of war plus all the horrors of Communism.

What’s especially tragic is that the U.S. could have peacefully saved many millions of the intended victims of Indochinese Communism. How? By allowing their immigration. During a brief period of open borders between North and South Vietnam, a million intended victims of Communism escaped to the modestly freer, richer South. Imagine how many Indochinese would have gladly emigrated to the far freer, far richer United States if we’d only given them the option.

A crazy idea? Perhaps. But far less crazy than trying to save Vietnam by bombing it into the stone age.

How to create the best at-home videoconferencing setup, for every budget

May 20th, 2020

TechCrunch explains how to create the best at-home videoconferencing setup, for every budget:

Level 0
Turn on a light and put it in the right place
Be aware of what’s behind you
Know your system sound settings

Level 1
Get an external webcam (e.g. Logitech C922x)
Get a basic USB mic (e.g. Samson Meteor Mic)
Get some headphones

Level 2
Use a dedicated camera and an HDMI-to-USB interface (e.g. Elgato Cam Link 4K, IOGEAR Video Capture Adapter, Magewell USB 3.0 Capture)
Get a wired lav mic (e.g. Rode’s Lavalier GO)
Get multiple lights and position them effectively

Level 3
Use an interchangeable lens camera and a fast lens
Get a wireless lav mic (e.g. Rode Wireless Go)
Use in-ear monitors (e.g. Shure PSM300 Pro Wireless In-Ear Monitor System, Bang & Olufsen E8)
Use 3-point lighting (e.g. Elgato’s Key Lights)

Level 4
Get an HDMI broadcast switcher deck (e.g. Blackmagic ATEM Mini)
Use a broadcast-quality shotgun mic (e.g. Rode VideoMic NTG)
Add accent lighting (e.g. Hue Play Smart LED Light Bars)