There is a remarkable clustering of surface gravity levels in our solar system

Friday, December 17th, 2021

There is a remarkable clustering of surface gravity levels in our solar system:

All bodies with 9% to 250% of Earth gravity cluster near Earth, Mars, or Moon gravity. Those 3 gravity levels seem like the only levels available for us to live in this solar system. I stumbled onto this only after 34 years in aerospace.

Surface gravity clustering in our solar system

Four other planets have surface gravity within 12% of Earth’s. But all four have extreme temperatures and atmospheric pressures. And all but Venus have two to three times Earth’s escape velocity. Returning to Earth would be hard.

The eight smaller bodies near Moon or Mars gravity seem more practical. They also have much lower two-way delta-Vs. So, a key question for living beyond Earth is whether lunar or at least Martian gravity will let us avoid health problems like those we have seen in sustained free fall. But we have no health data between 0g and a full 1g!
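The clustering is easy to check from first principles, since surface gravity is just g = GM/R². A minimal sketch (mass and radius figures are standard reference values; the bodies chosen and the printout format are my own illustration):

```python
# Surface gravity g = G*M/R^2 for selected solar-system bodies.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    # name: (mass in kg, mean radius in m)
    "Earth":    (5.972e24, 6.371e6),
    "Venus":    (4.867e24, 6.052e6),
    "Mars":     (6.417e23, 3.390e6),
    "Mercury":  (3.301e23, 2.440e6),
    "Moon":     (7.342e22, 1.737e6),
    "Ganymede": (1.482e23, 2.634e6),
    "Titan":    (1.345e23, 2.575e6),
}

def surface_gravity(mass, radius):
    return G * mass / radius**2

g_earth = surface_gravity(*bodies["Earth"])
for name, (m, r) in bodies.items():
    g = surface_gravity(m, r)
    print(f"{name:8s} {g:5.2f} m/s^2  ({g / g_earth:.2f} g)")
```

Running this shows the clustering the post describes: Mars and Mercury come out within about 1% of each other near 0.38 g, and the big moons (Ganymede, Titan) land close to the Moon’s 0.17 g.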

[...]

Early in the space age, most planners assumed rotating crewed facilities would provide Earth-level artificial gravity. Apollo flights were planned to last only 6 to 12 days, so interest in artificial gravity faded after the Gemini 7 crew spent 14 days in free fall. Even 4-, 8-, and 12-week crew stays on Skylab caused few health issues. But crews who spent 6 to 12 months on Salyut, Mir, or ISS have had significant degradation of their bones, muscles, fluids, eyes, brain, and immune response. Exercise, diet, and drug “microgravity countermeasures” have slowed these trends but have not stopped them, despite decades of countermeasure refinements.

[...]

If we find that lunar gravity is enough for long-term health, humanity may expand to the six largest moons plus Mars and Mercury, and not just our Moon. If we need Mars gravity, we might settle Mars and Mercury, but not any moons. But even in 1g, exercise is critical. Any reduction in gravity is likely to require more exercise.

If we do need sustained gravity at levels higher than that of Mars, it seems easier to develop sustainable rotating settlements than to terraform any near-1g planet. And rotating settlements offer lower gravity inboard. A key attraction of such settlements may be the easy access to a wide range of gravity levels.

We’ve been talking about rotating settlements for a long, long time.

Most decisions along the way make individual sense, even if they lead to collective failure

Thursday, December 16th, 2021

Mancur Olson’s The Rise and Decline of Nations is one of Alex Tabarrok’s favorite books and a classic of public choice. He shares four of its nine implications:

2. Stable societies with unchanged boundaries tend to accumulate more collusions and organizations for collective action over time. The longer a country is stable, the more distributional coalitions it’s going to have.

6. Distributional coalitions make decisions more slowly than the individuals and firms of which they are comprised, tend to have crowded agendas and bargaining tables, and more often fix prices than quantities. Since there is so much bargaining, lobbying, and other interactions that need to occur among groups, the process moves more slowly in reaching a conclusion. In collusive groups, prices are easier to fix than quantities because it is easier to monitor whether other industries are selling at a different price, while it may be difficult to monitor the actual quantities they are producing.

7. Distributional coalitions slow down a society’s capacity to adopt new technologies and to reallocate resources in response to changing conditions, and thereby reduce the rate of economic growth. Since it is difficult to make decisions, and since many groups have an interest in the status quo, it will be more difficult to adopt new technologies, create new industries, and generally adapt to changing environments.

9. The accumulation of distributional coalitions increases the complexity of regulation, the role of government, and the complexity of understandings, and changes the direction of social evolution. As the number of distributional coalitions grows, it will make policy-making increasingly difficult, and social evolution will focus more on distributing wealth among groups than on economic efficiency and growth.

You can gauge the book’s continued relevance, he notes, from this thread by Ezra Klein, which gets at some of the consequences of the forces Olson explained:

A key failure of liberalism in this era is the inability to build in a way that inspires confidence in more building. Infrastructure comes in overbudget and late, if it comes in at all. There aren’t enough homes, enough rapid tests, even enough good government web sites. I’ve covered a lot of these processes, and it’s important to say: Most decisions along the way make individual sense, even if they lead to collective failure.

If the problem here was idiots and crooks, it’d be easy to solve. Sadly, it’s (usually) not. Take the parklets. There are fire safety concerns. SFMTA is losing revenue. Some pose disability access issues. It’s not crazy to try and take everyone’s concerns into account. But you end up with an outcome everyone kind of hates.

I’ve seen this happen again and again. Every time I look into it, I talk to well-meaning people able to give rational accounts of their decisions.

It usually comes down to risk. If you do X, Y might happen, and even if Y is unlikely, you really don’t want to be blamed for it. But what you see, eventually, is that our mechanisms of governance have become so risk averse that they are now running *tremendous* risks because of the problems they cannot, or will not, solve. And you can say: Who cares? It’s just parklets/HealthCare.gov/rapid tests/high-speed rail/housing/etc.

But it all adds up.

There’s both a political and a substantive problem here.

The political problem is if people keep watching the government fail to build things well, they won’t believe the government can build things well. So they won’t trust it to build. And they won’t even be wrong. The substantive problem, of course, is that we need government to build things, and solve big problems.

If it’s so hard to build parklets, how do you think that multi-trillion-dollar, breakneck investment in energy infrastructure is going to go?

The best deregulation lacks popular appeal

Wednesday, December 15th, 2021

The best deregulation lacks popular appeal, Bryan Caplan says, but when the stars align, specific forms of deregulation become potentially popular:

A politician today could loudly promise lots of deregulation — and win. Furthermore, he could fulfill his promises — and win again. Topping the list of potentially popular deregulation:

  1. An immediate end to all Covid rules. No more mask mandates — not in schools, not in airports, not on planes. No more distancing. No more Covid tests. No more travel restrictions on anyone. (The “anyone” phrasing is how you free foreigners, as well as natives, without calling attention to the fact).
  2. An immediate end to all government Covid propaganda. No more looping audio warnings at airports. No more signs or stickers. Indeed, a national campaign to tear down all the propaganda that’s been uglifying the country for almost two years.
  3. A radical and immediate reduction in airport security theater. End the rules that require the removal of shoes, jackets, and belts. End the rules that require you to remove electronic devices from your bags for extra screening. End the rules against travelling with liquids. Switch back to old-fashioned metal detectors instead of body scanners.
  4. An immediate end to all airline security theater. End federal rules for use of “large electronics” during takeoff and landing. End federal rules for tray tables and seat inclines. Stop turning flight attendants into sky deputies. Just say, “Let the airlines decide. Competition works.”
  5. End all traffic cameras. All of them.
  6. End all remaining laws against marijuana and psychedelic mushrooms.
  7. End FDA regulation of smoking and vaping for legal adults – and pass new laws banning such power grabs in the future.
  8. Full school choice, nation-wide: “Fund students, not systems.”
  9. Kill REAL ID. Forever.
  10. End mandatory vehicle safety and emissions inspections: “An annual pain in the neck and a complete waste of time.”
  11. Create an ironclad free speech limitation on discrimination law, which explicitly includes both (a) political speech, and (b) jokes. Along the lines of, “Expression of political opinions or jokes by co-workers, managers, or owners are Constitutionally protected free speech and can never be treated as evidence of discrimination or a hostile workplace environment.”
  12. Undermine Human Resource Departments by amending existing employment law to read, “Human Resource employee training or lack thereof can never be treated as evidence in employment lawsuits.” This removes the incentive to constantly ratchet up employee brainwashing to show that your firm takes the law seriously.

The political left and right share an interest in science in general, but not science in particular

Tuesday, December 14th, 2021

Millions of online book co-purchases reveal partisan differences in the consumption of science, researchers report:

Passionate disagreements about climate change, stem cell research and evolution raise concerns that science has become a new battlefield in the culture wars. We used data derived from millions of online co-purchases as a behavioural indicator for whether shared interest in science bridges political differences or selective attention reinforces existing divisions. Findings reveal partisan preferences both within and across scientific disciplines.

Across fields, customers for liberal or ‘blue’ political books prefer basic science (for example, physics, astronomy and zoology), whereas conservative or ‘red’ customers prefer applied and commercial science (for example, criminology, medicine and geophysics). Within disciplines, ‘red’ books tend to be co-purchased with a narrower subset of science books on the periphery of the discipline.

We conclude that the political left and right share an interest in science in general, but not science in particular. This underscores the need for research into remedies that can attenuate selective exposure to ‘convenient truth’, renew the capacity for science to inform political debate and temper partisan passions.

The novel is in conversation with classics like 1984

Monday, December 13th, 2021

When I first heard of Netflix’s Queen’s Gambit — which I still haven’t watched, despite hearing good things — it never occurred to me that it was based on a 40-year-old book — one that does not describe its protagonist as anything like Audrey Hepburn playing chess, by the way. What actually jumped out at me about the book, beyond its mere existence, was the author, Walter Tevis. I immediately recognized the name but couldn’t quite place it.

Tevis wrote the science-fiction classic Mockingbird, which I’ve been meaning to read — and which just showed up on Tor’s list of Golden Age and New Wave SF classics that should be adapted right now, in the wake of Dune and Foundation’s film and TV adaptations:

The novel is in conversation with classics like 1984 but is built on a reversal of empowering people through the power of books and literature. The high-concept, post-apocalyptic setting and narrative would make for great set pieces and visuals. Tevis, who also wrote The Hustler, has a knack for rich, compelling characters, and his work is ripe for adaptation. Given the recent success of Netflix’s adaptation of The Queen’s Gambit and the buzz over Showtime’s upcoming The Man Who Fell to Earth series, this is a perfect time to adapt the Nebula-nominated Mockingbird as well.

The Russian approach is to stick a machinegun and a rocket launcher on the mule and send it ahead of the troops

Sunday, December 12th, 2021

Ukraine’s defense minister promising a “bloody massacre” if Russia invades:

While Ukraine is heavily outmatched by Russian forces, the threat of heavy casualties is one which Russia cannot ignore. This is why uncrewed systems – remote-controlled robot warriors – could play an important part where the fighting is heaviest.

[...]

“Today Russia is more averse to casualties for military and political reasons,” Samuel Bendett, an expert on the Russian defense scene, and adviser to both the CNA and CNAS told me. “Both Chechnya wars are still fresh in many Russians’ memories and the casualties that Russian forces took in those wars have a very powerful and negative effect on the population’s overall support for such campaigns.”

[...]

While other nations have pursued armed drones, Russia has carved out a niche in developing and fielding a variety of armed ground robots, most notably the Uran-9, which was used extensively in Syria.

Uran-9

The Uran-9 is an uncrewed tracked vehicle the size of a large SUV, weighing ten tons. Usual armament is a 30mm automatic cannon, four anti-tank guided missiles, and rocket launchers firing unguided thermobaric rockets (the Russians describe this as a rocket-flamethrower), plus a machine gun. It can be remotely controlled from two miles away. A specific aim of fielding the Uran-9 is to “minimize battlefield casualties”: throwing expendable robots into the assault means less fire will be directed at humans.

“The Russian military is presenting the ongoing modernization as turning the military into a precise and high-tech force. Developing different types of unmanned systems speaks to that principle as making missions more effective and ultimately saving soldiers by removing troops from certain dangerous front line combat,” says Bendett.

This approach is seen as heresy in some military quarters. In the U.S. Army, for example, unmanned ground vehicles are seen more as auxiliaries, providing logistics support as robot truck drivers and battlefield mules to lug foot soldiers’ equipment, not replacing them. The Russian approach is to stick a machinegun and a rocket launcher on the mule and send it ahead of the troops, not have it trailing behind.

You still won’t be able to compete for attention with all of the other sensational crimes

Saturday, December 11th, 2021

Leighton Woodhouse and his wife are scrambling to find daycare for their 16-month-old son:

We’ve had a “nanny share” up until now, which means we and another couple employ a nanny for both couples’ kids and split the cost. Our nanny is wonderful, and she lives just a few blocks from us. But a few weeks ago, someone walked up her street spraying bullets into random houses. One of the bullets found its way into her living room, as she and her family ducked for cover. At that moment, she and her husband decided they were moving their family out of Oakland.

The shooting didn’t even make the local news. Apparently, in the Bay Area right now, you can walk up a residential street firing your gun into houses, and you still won’t be able to compete for attention with all of the other sensational crimes.

Woodhouse, who considers himself progressive, nonetheless agrees with Michael Shellenberger (author of San Fransicko) that progressives do ruin cities:

After a summer of protests against police violence, progressive cities like New York, Seattle, Minneapolis, Austin, and Denver cut their police budgets in 2020 even during a national surge in violent crime. That surge has only continued into 2021, in some places by wide margins. The wave of murders in American cities has provoked political backlashes to the cuts, which have forced some local governments to backtrack from their defund agenda.

But that hasn’t stopped demoralized departments from bleeding officers through attrition. Austin, for example, which voted in 2020 to cut its police budget by a third before restoring most of it this year, is losing 15 to 22 officers per month. Its homicide rate is up 88 percent over last year, blowing past a previous homicide record that was set nearly four decades ago.

How do activists justify hobbling cities’ ability to respond to the crime wave by gutting their police forces? Here’s Cat Brooks, perhaps Oakland’s most prominent police abolitionist, in The Guardian: “The goal is to interrupt and respond to state violence,” she explained. “We’re good at responding but the only way you get to interruption is to reduce the number of interactions with police.”

That’s true: If you have fewer interactions between police and civilians, you’ll likely have fewer acts of violence perpetrated on civilians by police. The obvious problem with this “solution,” though, is that you’ll also have more crime.

Fisher Price re-released their Music Box Record Player

Friday, December 10th, 2021

In 2010 Fisher Price re-released their Music Box Record Player in a new version that does not work like the original:

[Images 1–10: Fisher Price Music Box Record Player]

(Hat tip to commenter Chedolf.)

The attack helicopter becomes like a rapidly mobile SAM site

Thursday, December 9th, 2021

If it’s armed for air-to-air combat, an attack helicopter will defeat most fighter airplanes:

In 1978/79 the US Army and US Air Force conducted a joint experiment called Joint Countering Attack Helicopter (J-CATCH). J-CATCH focused on dissimilar air combat between jet fighters and attack helicopters. To the surprise of many involved in the program, the helicopters proved extremely dangerous to the fighters when they were properly employed, racking up a 5-to-1 kill ratio over the fighters when fighting at close ranges with guns.

‘Ironically, Army aviation dominated the air,’ explained Caleb Posey, AH-64E Crew Chief in the U.S. Army, on Quora. ‘Air Force pilots were “shot down” without even knowing the helicopters were there. Apaches can hide in the radar clutter at tree top level, and use the incredibly sophisticated Longbow system to track literally hundreds of targets simultaneously. If I remember the numbers, the helicopters shot down ~5 fixed wing for every chopper that got hit. Granted, this tested helos that were loaded with air to air weapons (NOT typical), but still… the Air Force left with the overall idea of “leave enemy helicopters the f**k alone.”’

‘A well equipped attack helicopter flown by a trained crew will defeat most fighter airplanes in 1v1 air combat, should the fighter be foolish enough to drop down to try and engage,’ Nick Lappos, Technical Fellow Emeritus at Sikorsky and former U.S. Army AH-1 Cobra attack helicopter pilot, said on Quora. ‘A helicopter immersed in ground clutter is very hard to detect by almost any means, and so is hard to engage. Meanwhile, the helicopter can be equipped with air to air missiles and large caliber guns that easily engage fighters as they maneuver at low altitudes against a blue sky in their attempts to engage the helicopter. The helicopter, if properly flown, will always maneuver to cut off the angle from the airplane, forcing impossibly steep closure maneuvers for the fighter. Typical helicopter turn rates are 30 to 40 degrees per second, three times that of the fighter, even at high g, so the fighter will find the helicopter’s weapons always engaging it during any serious contest. If the helicopter gun and missiles were selected for anti-aircraft (like the 30mm guns on the Mi-24 and KA-50/51), the results are that the attack helicopter becomes like a rapidly mobile SAM site, a very dangerous target.’

[...]

‘It must be said that the fighter is only vulnerable if it drops down from its normal altitude to engage the helicopter. If the fighter stays high and prosecutes its normal mission, it is nearly invulnerable to the helicopter’s weapons.’
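The turn-rate claim in the quote is easy to sanity-check with back-of-the-envelope arithmetic. Using the quoted figures (30–40°/s for the helicopter, roughly a third of that for the fighter), the time for each aircraft to swing its nose through a given angle falls straight out. The specific rates below are only an illustration of the quoted claim, not flight-test data:

```python
# Time to traverse a given angle at a constant turn rate.
def time_to_bear(angle_deg, turn_rate_deg_s):
    return angle_deg / turn_rate_deg_s

helo_rate = 35.0          # mid-range of the quoted 30-40 deg/s
fighter_rate = 35.0 / 3   # "three times that of the fighter"

# Nose-to-tail reversal: swing the nose through 180 degrees.
helo_time = time_to_bear(180, helo_rate)        # ~5.1 s
fighter_time = time_to_bear(180, fighter_rate)  # ~15.4 s
```

The ten-second gap is the whole argument: in any turning contest at low altitude, the helicopter can keep its weapons pointed at the fighter long before the fighter can bring its own nose to bear.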

Pearl Harbor Day caught me off guard

Tuesday, December 7th, 2021

Pearl Harbor Day snuck up on me. Here are some posts on the topic:

The Sunbeam Radiant Control Toaster from 1949 is still smarter than any toaster sold today

Monday, December 6th, 2021

The Sunbeam Radiant Control Toaster from 1949 is still smarter than any toaster sold today:

With the Sunbeam, the heat radiating from the bread itself warms up a bimetal strip (one of the simplest kinds of thermostats) which, being made of two different kinds of metal that expand at different rates, ends up bending backwards to sever the connection and stop the flow of electricity when the toast is done. And here’s the most ingenious part: when the heating wire shrinks as it cools down, that is what triggers the mechanical chain reaction that lifts your bread back up.
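The control loop described above can be caricatured in a few lines: heat flows until the bread-warmed bimetal strip opens the circuit, then the cooling wire’s contraction releases the carriage. This is a toy state machine, not Sunbeam’s actual mechanism, and all temperatures and rates here are invented for illustration:

```python
# Toy model of a radiant-control toaster. The sensor watches the bread's
# surface temperature rather than a timer, so the cycle ends when the
# toast is actually done.
def toast_cycle(done_temp=150.0, release_temp=60.0):
    bread = 20.0   # bread surface temperature, deg C (invented numbers)
    wire = 20.0    # heating-wire temperature, deg C
    events = []

    # Phase 1: heating. Radiant heat from the bread warms the bimetal
    # strip; at done_temp the strip bends back and breaks the circuit.
    while bread < done_temp:
        wire = min(wire + 40.0, 400.0)  # the wire heats quickly, then holds
        bread += 5.0                    # the bread warms more slowly
    events.append("circuit opened")

    # Phase 2: cooling. The de-energized wire shrinks as it cools; at
    # release_temp the contraction trips the latch and lifts the bread.
    while wire > release_temp:
        wire -= 20.0
    events.append("toast popped up")
    return events
```

The point of the model is the ordering: the pop-up is triggered by the *cooling* of the wire after the circuit opens, which is the “most ingenious part” the excerpt describes.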

They go for an average of $130 on eBay, with fully restored models fetching two to four times that at auction.

Toyota is poised to put affordability, not range, at the center of its EV play

Sunday, December 5th, 2021

Toyota is poised to put affordability, not range, at the center of its EV play:

“‘Nothing happens until you sell a car’ is an expression we have internally,” he summed. “To have a positive impact on the environment, you must sell a high volume of cars…so it’s really important that the price point is such that we can make an actual business model out of it.”

To that point, Toyota expects that it will be selling millions of electric vehicles by the end of the decade. In September, the automaker announced plans to spend $13.5 billion on battery development through then, with aims of cutting the battery cost per vehicle by 50% versus the bZ4X.

[...]

“The bottom line is, over time we view EV range similar to horsepower,” Ericksen said, comparing it to how almost any customer really wanted 400 horsepower but, from an affordability standpoint, might settle for 120 hp. “People who are affluent and can afford a really expensive vehicle can afford a lot of horsepower.”

“Batteries are expensive, and the bigger you make the battery, the more expensive it is,” Ericksen said. “So the trick, I think, long-term is not all about range, range, range; the trick is matching the range and the price point to what the consumer can afford.”

“And as people become more accustomed to operating an EV I think the anxiety over range is going to dissipate,” he continued, saying that many EV shoppers are going to understand they don’t need 300 or 400 miles—and certainly not in a second or third car.

Although we tend to agree that range is a red herring, especially for that second or third car, Toyota will face some headwinds if it dives into the “just enough” category. In a study released earlier this year, J.D. Power found that EVs with more than 200 miles of range had higher levels of satisfaction than those with less. And back in 2017, a comprehensive Autolist survey on minimum range found that only 14.6% of individuals saw 200 miles of range as enough, while the largest group, 38.9%, considered 300 miles of range to be enough. It emphasized, then, that a jump from 250 to 300 miles yielded a 30% increase in the number of people willing to buy an EV.

Range really is like horsepower.

Being asked to explain the experimenter’s reasoning produced considerably more learning

Saturday, December 4th, 2021

Five-year-olds whose pretest performance showed that they had not mastered number conservation were given four training sessions:

Some were just given feedback on their number conservation performance; others were given feedback and asked to explain their reasoning; yet others were given feedback and asked to explain the reasoning that led to the experimenter’s judgment. Being asked to explain the experimenter’s reasoning produced considerably more learning than either of the other two procedures.

Number conservation is kind of hard:

The Bulletproof Musician summarizes the results:

The kids who were asked to imagine what the expert’s perspective might be ultimately got 62 percent of the questions correct over the course of their four testing sessions. Whereas the group that provided their own reasoning for the answer only got 48 percent of the problems correct. And those who provided no rationale got 49 percent correct.

Praise curtails discussion and serves mainly to reinforce the teacher’s role as the authority who bestows rewards

Friday, December 3rd, 2021

Although error avoidance during learning appears to be the rule in American classrooms, Janet Metcalfe says, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students:

Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the beneficial effects are particularly salient when individuals strongly believe that their error is correct: Errors committed with high confidence are corrected more readily than low-confidence errors. Corrective feedback, including analysis of the reasoning leading up to the mistake, is crucial. Aside from the direct benefit to learners, teachers gain valuable information from errors, and error tolerance encourages students’ active, exploratory, generative engagement. If the goal is optimal performance in high-stakes situations, it may be worthwhile to allow and even encourage students to commit and correct errors while they are in low-stakes learning situations rather than to assiduously avoid errors at all costs.

[...]

It might seem intuitive that if one does not want errors on the test that counts, then one should avoid errors at all stages of learning. In this view, committing errors should make those errors more salient and entrench them into both the memory and the operating procedures of the person who makes them. Exercising the errors should make the errors themselves stronger, thus increasing their probability of recurrence. Such a view, which is consistent with a number of the oldest and most well established theories of learning and memory (Bandura 1986, Barnes & Underwood 1959, Skinner 1953), suggests that errors are bad and should be avoided at all costs.

[...]

However, Stevenson & Stigler (1994; see also Stigler & Hiebert 2009) and their colleagues conducted a landmark study in which they were able to videotape lessons in grade 8 mathematics classrooms in a variety of countries, including the United States, Taiwan, China, and Japan. Of most interest, given that Japan is by far outstripping the United States in math scores, is the striking difference in the teaching methods used in those two countries. Although there may be many other reasons for the differences in math scores, one highly salient difference is whether or not teachers engage with students’ errors. Videotapes show that, in the United States, set procedures for doing particular kinds of problems are explicitly taught. These correct procedures are rehearsed and emphasized; errors are avoided or ignored. The students are not passive in American classrooms. A teacher may ask for student participation in repeating, for example, a procedure for borrowing when subtracting. When asking a question such as, “Can you subtract 9 from 5?” to prompt students to answer, “No, you have to borrow to make the 5 a 15,” the teacher may fail to even acknowledge the deviant child who says, “Yes. It’s negative 4.” If the response does not fit with the procedure being exercised, it is not reinforced. Errors (as well as deviant correct answers) are neither punished nor discussed but are disregarded. Praise is given, but only for the “correct” answer.

As Stevenson & Stigler (1994) pointed out, praise curtails discussion and serves mainly to reinforce the teacher’s role as the authority who bestows rewards. It does not empower students to think, criticize, reconsider, evaluate, and explore their own thought processes. By way of contrast, in Japan praise is rarely given. There, the norm is extended discussion of errors, including the reasons for them and the ways in which they may seem plausible but nevertheless lead to the incorrect answer, as well as discussion of the route and reasons to the correct answer. Such in-depth discussion of the thought processes underlying both actual and potential errors encourages exploratory approaches by students.

Instead of beginning with teacher-directed classwork and explication, Japanese students first try to solve problems on their own, a process that is likely to be filled with false starts. Only after these (usually failed) attempts by students does teacher-directed discussion — interactively involving students and targeting students’ initial efforts and core mathematical principles — occur. It is expected that students will struggle and make errors, insofar as they rarely have available a fluent procedure that allows them to solve the problems. Nor are students expected to find the process of learning easy. But the time spent struggling on their own to work out a solution is considered a crucial part of the learning process, as is the discussion with the class when it reconvenes to share the methods, to describe the difficulties and pitfalls as well as the insights, and to provide feedback on the principles at stake as well as the solutions.

As Stevenson & Stigler (1994, p. 193) note, “Perhaps because of the strong influence of behavioristic teaching, which says conditions should be arranged so that the learner avoids errors and makes only a reinforceable response, American teachers place little emphasis on the constructive use of errors as a teaching technique. Learning about what is wrong may hasten understanding of why the correct procedures are appropriate, but errors may also be interpreted as failure. And Americans, reluctant to have such interpretations made of their children’s performance, strive to avoid situations where this might happen.”

The Japanese active learning approach well reflects the fundamental ideas of a learning-from-errors approach. Engaging with errors is difficult, but difficulty can be desirable for learning (Bjork 2012). In comparison with approaches that stress error avoidance, making training more challenging by allowing false starts and errors followed by feedback, discussion, and correction may ultimately lead to better and more flexible transfer of skills to later critical situations.

Considerable research now indicates that engagement with errors fosters the secondary benefits of deep discussion of thought processes and exploratory active learning and that the view that the commission of errors hurts learning of the correct response is incorrect. Indeed, many tightly controlled experimental investigations have now shown that in comparison with error-free study, the generation of errors, as long as it is followed by corrective feedback, results in better memory for the correct response.

[...]

Early studies by Izawa (1967, 1970) showed that multiple unsuccessful retrieval attempts led to better memory for the correct feedback than did a procedure producing fewer incorrect responses. Kane & Anderson (1978) showed similar results: Attempting the generation of the last word of the sentence, even if what was generated was wrong, led to enhanced correct performance compared to reading the sentence correctly from the outset. Slamecka & Fevreiski (1983) asked people to remember near antonyms, such as trivial-vital or oscillate-settle. Even failed attempts (followed by feedback containing the correct answer) improved later recall of the correct answers over simply reading the correct answer. Kornell et al. (2015) have conducted a recent investigation of the same issue and have reached similar conclusions.

[...]

It appears that to be beneficial, the guess needs to be somewhat informed rather than a shot in the dark.

[...]

Interestingly, in the related-pair case in which a large beneficial effect of committing errors was found, the participants were metacognitively unaware of the benefit. Even immediately after they had experienced the task and had evidenced a benefit of 20 percent (i.e., roughly the difference between a C-minus and an A, if it had been a course grade), participants thought that the error-free condition had resulted in better recall (Huelser & Metcalfe 2012). This lack of awareness of the benefits of error generation may contribute to the aversion to errors in the American teaching style evinced in Stigler’s work.