Technology will, by itself, degrade

Sunday, January 12th, 2020

I didn’t recognize Jonathan Blow by name — he’s the “indie” game designer behind Braid, which I haven’t played, but which I have mentioned — but he recently gave a speech about a topic that interests me, Preventing the Collapse of Civilization:

He presents the key point fifteen minutes in:

This is why technology degrades. It takes a lot of energy to communicate from generation to generation; there are losses.

Nikita Prokopov summarizes it this way:

The software crisis is systemic and generational. Say, the first generation works on thing X. After X is done and becomes popular, time passes and the next generation of programmers comes and works on Y, based on X. They do not need to know, exactly, how X is built, why it was built that way, or how to write an alternative X from scratch. They are not lesser people or lazier, they just have no real need to write X2 since X already exists and allows them to solve more pressing tasks.

The biggest a-ha moment of the talk was that if you are working on Y and Y is based on X, that does not imply automatically that you would know X also. Even if the people who build X are still around, knowledge does not spread automatically and, without actual necessity, it will go away with the people who originally possessed it.

This is counter-intuitive: most people would think that if we’ve built, for example, a space ship or a complex airplane in the past, we could build it again at any time. But no, if we weren’t building a particular plane uninterruptedly, then after just 50 years it is already easier to develop a new one from scratch rather than trying to revive old processes and documentation. Knowledge does not automatically transfer to the next generation.

In programming, we are developing abstractions at an alarming rate. When enough of those are stacked, it becomes impossible to figure out or control what’s going on down the stack. This is where my contribution begins: I believe I have found some pretty vivid examples of how the ladder of abstractions has started to fall and nobody can do anything about it now because we are all used to working only at the very tip of it.

I still think a good general education would teach how to rebuild civilization. (I haven’t read my copy of How to Invent Everything: A Survival Guide for the Stranded Time Traveler yet, but it looks promising.)

Imagine sending a five-year-old into combat

Saturday, January 11th, 2020

Hamilton Gregory, author of McNamara’s Folly, discusses the use of low-IQ troops in the Vietnam War:

I mentioned McNamara’s Folly when Gwern reviewed it.

Intelligence and character aren’t the same things at all

Sunday, January 5th, 2020

The problem with meritocracy, T. Greer notes, isn’t the merit; it’s the ocracy. He cites some passages from Andrew Yang’s book, of all places:

Intelligence and character aren’t the same things at all. Pretending that they are will lead us to ruin. The market is about to turn on many of us with little care for what separates us from each other. I’ve worked with and grown up alongside hundreds of very highly educated people for the past several decades, and trust me when I say that they are not uniformly awesome. People in the bubble think that the world is more orderly than it is. They overplan. They mistake smarts for judgment. They mistake smarts for character. They overvalue credentials. Head not heart. They need status and reassurance. They see risk as a bad thing. They optimize for the wrong things. They think in two years, not 20. They need other bubble people around. They get pissed off when others succeed. They think their smarts should determine their place in the world. They think ideas supersede action. They get agitated if they’re not making clear progress. They’re unhappy. They fear being wrong and looking silly. They don’t like to sell. They talk themselves out of having guts. They worship the market. They worry too much. Bubble people have their pluses and minuses like anyone else.

[...]

In coming years it’s going to be even harder to forge a sense of common identity across different walks of life. A lot of people who now live in the bubble grew up in other parts of the country. They still visit their families for holidays and special occasions. They were brought up middle-class in normal suburbs like I was and retain a deep familiarity with the experiences of different types of people. They loved the mall, too.

In another generation this will become less and less true. There will be an army of slender, highly cultivated products of Mountain View and the Upper East Side and Bethesda heading to elite schools that has been groomed since birth in the most competitive and rarefied environments with very limited exposure to the rest of the country.

When I was growing up, there was something of an inverse relationship between being smart and being good-looking. The smart kids were bookish and awkward and the social kids were attractive and popular. Rarely were the two sets of qualities found together in the same people. The nerd camps I went to looked the part.

Today, thanks to assortative mating in a handful of cities, intellect, attractiveness, education, and wealth are all converging in the same families and neighborhoods. I look at my friends’ children, and many of them resemble unicorns: brilliant, beautiful, socially precocious creatures who have gotten the best of all possible resources since the day they were born. I imagine them in 10 or 15 years traveling to other parts of the country, and I know that they are going to feel like, and be received as, strangers in a strange land. They will have thriving online lives and not even remember a car that didn’t drive itself. They may feel they have nothing in common with the people before them. Their ties to the greater national fabric will be minimal. Their empathy and desire to subsidize and address the distress of the general public will likely be lower and lower.

There is time to reflect on the story and to see its reverberations

Thursday, December 19th, 2019

Clinicians at the Cincinnati Children’s Reading and Literacy Discovery Center have used MRI scanners to find a Goldilocks effect in how children react to being read to:

For a small 2018 study involving 27 children around the age of 4, the researchers watched how the young brains responded to different stimuli. As with the first bowl of porridge that Goldilocks finds in the house of the Three Bears, the sound of the storytelling voice on its own seemed to be “too cold” to get the children’s brain networks to fully engage. Like the second bowl that Goldilocks samples, animation of the sort that children might see on a TV screen or tablet was “too hot.” There is just too much going on, too quickly, for the children to be able to participate in what they were seeing. Small children’s brains have no difficulty registering bright, fast-moving images, as experience teaches and MRI scanning confirms, but the giddy shock and awe of animation doesn’t give them time to exercise their deeper cognitive faculties.

Just as Goldilocks sighs with relief when she takes a spoonful from the third bowl of porridge and finds that it is “just right,” so a small child can relax into the experience of being read a picture book. There is a bit of pleasurable challenge in making sense of what he’s seeing and hearing. There is time to reflect on the story and to see its reverberations in his own life — a transaction that may be as simple as the flash of making a connection between a real donkey he once saw and the “honky tonky, winky wonky donkey” of Craig Smith’s picture book. The collaborative engagement that a child brings to the experience is so vital and productive that reading aloud “stimulates optimal patterns of brain development,” as a 2014 paper from the American Academy of Pediatrics put it, strengthening the neural connections that will enable him to process more difficult and complex stories as he gets older.

Much of the hidden magic of reading aloud has to do with those curious eyes and that devouring gaze. Looking at a book with an adult, a child increases his capacity for “joint attention,” noticing what others see and following their gaze. This phenomenon has a remarkable tempering power in children. It encourages the development of executive function, an array of skills that includes the ability to remember details and to pay attention. Children “learn to naturally regulate their attention when they are focusing on a task they find interesting in a context that is nurturing, warm and responsive,” as Vanderbilt University’s David Dickenson and colleagues put it in a paper summarizing the rich developmental value of reading aloud.

By contrast, fast-paced TV shows have been shown to impair executive function in young children after as little as nine minutes of viewing. Nor is that the only tech-related downside. Babies look at adults to see where we’re looking, so if we’re glued to our electronic devices, that’s what will draw their gaze too. What they see may not be what we want them to see. As the psychologist Catherine Steiner-Adair has written: “Babies are often distressed when they look to their parent for a reassuring connection and discover the parent is distracted or uninterested. Studies show that they are especially perturbed by a mother’s ‘flat’ or emotionless expression, something we might once have associated with a depressive caregiver but which now is eerily similar to the expressionless face we adopt when we stare down to text, stare away as we talk on our phones or stare into a screen as we go online.”

Their skill was in avoiding the same old patterns

Sunday, December 15th, 2019

One tool for avoiding cognitive entrenchment, David Epstein reports (in Range), is to keep one foot outside your world:

Scientists and members of the general public are about equally likely to have artistic hobbies, but scientists inducted into the highest national academies are much more likely to have avocations outside of their vocation. And those who have won the Nobel Prize are more likely still. Compared to other scientists, Nobel laureates are at least twenty-two times more likely to partake as an amateur actor, dancer, magician, or other type of performer. Nationally recognized scientists are much more likely than other scientists to be musicians, sculptors, painters, printmakers, woodworkers, mechanics, electronics tinkerers, glassblowers, poets, or writers, of both fiction and nonfiction. And, again, Nobel laureates are far more likely still. The most successful experts also belong to the wider world. “To him who observes them from afar,” said Spanish Nobel laureate Santiago Ramón y Cajal, the father of modern neuroscience, “it appears as though they are scattering and dissipating their energies, while in reality they are channeling and strengthening them.”

[...]

“When we were designing the first Macintosh computer, it all came back to me,” [Steve Jobs] said. “If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts.”

Or electrical engineer Claude Shannon, who launched the Information Age thanks to a philosophy course he took to fulfill a requirement at the University of Michigan. In it, he was exposed to the work of self-taught nineteenth-century English logician George Boole, who assigned a value of 1 to true statements and 0 to false statements and showed that logic problems could be solved like math equations. It resulted in absolutely nothing of practical importance until seventy years after Boole passed away, when Shannon did a summer internship at AT&T’s Bell Labs research facility. There he recognized that he could combine telephone call-routing technology with Boole’s logic system to encode and transmit any type of information electronically. It was the fundamental insight on which computers rely. “It just happened that no one else was familiar with both those fields at the same time,” Shannon said.
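
To make the Boole-to-Shannon step concrete, here is a minimal sketch of Boole’s trick (my own illustration, not anything from Range): encode true as 1 and false as 0, and familiar logic laws become arithmetic you can check mechanically, which is the property Shannon exploited in switching circuits.

```python
# A minimal sketch (illustration only, not from the book): Boole's idea of
# treating "true" as 1 and "false" as 0 so logic reduces to arithmetic.

def NOT(x):      # negation: 1 - x
    return 1 - x

def AND(x, y):   # conjunction: multiplication
    return x * y

def OR(x, y):    # disjunction: x + y - xy, so 1 OR 1 is still 1
    return x + y - x * y

# Check a logic law (De Morgan's) by exhaustive arithmetic over {0, 1}:
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("De Morgan's law holds for every 0/1 input")
```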

[...]

Connolly’s primary finding was that early in their careers, those who later made successful transitions had broader training and kept multiple “career streams” open even as they pursued a primary specialty.

[...]

They employed what Hogarth called a “circuit breaker.” They drew on outside experiences and analogies to interrupt their inclination toward a previous solution that may no longer work. Their skill was in avoiding the same old patterns.

No savant has ever been known to become a “Big-C creator,” who changed their field

Wednesday, December 11th, 2019

When we know the rules and answers, and they don’t change over time — chess, golf, playing classical music — an argument can be made for savant-like hyperspecialized practice from day one, David Epstein argues (in Range), but those are poor models of most things humans want to learn:

Chris Argyris, who helped create the Yale School of Management, noted the danger of treating the wicked world as if it is kind. He studied high-powered consultants from top business schools for fifteen years, and saw that they did really well on business school problems that were well defined and quickly assessed. But they employed what Argyris called single-loop learning, the kind that favors the first familiar solution that comes to mind. Whenever those solutions went wrong, the consultant usually got defensive. Argyris found their “brittle personalities” particularly surprising given that “the essence of their job is to teach others how to do things differently.”

[...]

Psychologist Barry Schwartz demonstrated a similar, learned inflexibility among experienced practitioners when he gave college students a logic puzzle that involved hitting switches to turn light bulbs on and off in sequence, and that they could play over and over. It could be solved in seventy different ways, with a tiny money reward for each success. The students were not given any rules, and so had to proceed by trial and error.* If a student found a solution, they repeated it over and over to get more money, even if they had no idea why it worked. Later on, new students were added, and all were now asked to discover the general rule of all solutions. Incredibly, every student who was brand-new to the puzzle discovered the rule for all seventy solutions, while only one of the students who had been getting rewarded for a single solution did. The subtitle of Schwartz’s paper: “How Not to Teach People to Discover Rules”—that is, by providing rewards for repetitive short-term success with a narrow range of solutions.

[...]

As psychologist Ellen Winner, one of the foremost authorities on gifted children, noted, no savant has ever been known to become a “Big-C creator,” who changed their field.

[...]

When experienced accountants were asked in a study to use a new tax law for deductions that replaced a previous one, they did worse than novices.

Cheerleaders are their number-one worshipers, high priestesses to the cult

Saturday, December 7th, 2019

American schools are uniquely focused on athletics, sociologist Randall Collins notes:

Murray Milner (University of Virginia sociologist) did a massive study of prestige hierarchies at high schools across the country. He went on to develop an explanation of why jocks and cheerleaders are at the top, and serious students near the bottom. Games by a school team are the one activity where everyone is assembled, focusing attention on a group of token individuals who represent themselves. Games also have drama, plot tension, and emotion, thus fitting the ingredients for a successful interaction ritual. Predictably, they create feelings of solidarity and identity; and they give prestige to the individuals who are in the center of attention. Jocks are the school’s heroes (especially when they are winning). Cheerleaders are their number-one worshipers, high priestesses to the cult, sharing the stage or at least the edge of it. And they are chosen to represent the top of the sexual attractiveness hierarchy, hence centers of the partying-celebration part of school life — out of the purview of adult teachers, administrators, and parents.

In contrast, outstanding students perform mostly alone. They are not the center of an audience gathered to watch them show off their skills. There are no big interaction rituals focusing attention on them. Their achievement is for themselves; they do not represent the school body, certainly not in any way that involves contagious emotional excitement. The jocks-&-partying channeling of attention in schools devalues the intellectuals. When it comes to a contest between the two, the athletic-centered sphere always dominates, at least in the public places where the action is. The social networks of intellectual students are backstage, even underground.

I wouldn’t disagree with that, but we should admit that being athletic is naturally more attractive than being studious.

I think he really misses the boat here, though:

This is why the average scores of American students in international comparisons of skills in reading, math, and other subjects tend to be at the bottom, far below countries in East Asia and in Europe. It is not a matter of talent, and certainly not a deficiency in school facilities, but a problem of social motivation.

European-Americans do about as well as Europeans, and Asian-Americans do about as well as Asians.

It frequently bred confidence but not skill

Sunday, December 1st, 2019

In Range: Why Generalists Triumph in a Specialized World, David Epstein notes that many different kinds of specialists make high-stakes decisions under time pressure:

Psychologist Gary Klein is a pioneer of the “naturalistic decision making” (NDM) model of expertise; NDM researchers observe expert performers in their natural course of work to learn how they make high-stakes decisions under time pressure.

[...]

Kasparov said he would bet that grandmasters usually make the move that springs to mind in the first few seconds of thought.

Klein studied firefighting commanders and estimated that around 80 percent of their decisions are also made instinctively and in seconds.

[...]

When he studied nonwartime naval commanders who were trying to avoid disasters, like mistaking a commercial flight for an enemy and shooting it down, he saw that they very quickly discerned potential threats. Ninety-five percent of the time, the commanders recognized a common pattern and chose a common course of action that was the first to come to mind.

One of Klein’s colleagues, psychologist Daniel Kahneman, studied human decision making from the “heuristics and biases” model of human judgment. His findings could hardly have been more different from Klein’s. When Kahneman probed the judgments of highly trained experts, he often found that experience had not helped at all. Even worse, it frequently bred confidence but not skill.

Kahneman included himself in that critique. He first began to doubt the link between experience and expertise in 1955, as a young lieutenant in the psychology unit of the Israel Defense Forces. One of his duties was to assess officer candidates through tests adapted from the British army. In one exercise, teams of eight had to get themselves and a length of telephone pole over a six-foot wall without letting the pole touch the ground, and without any of the soldiers or the pole touching the wall.* The differences in individuals’ performances were so stark, with clear leaders, followers, braggarts, and wimps naturally emerging under the stress of the task, that Kahneman and his fellow evaluators grew confident they could analyze the candidates’ leadership qualities and identify how they would perform in officer training and in combat. They were completely mistaken. Every few months, they had a “statistics day” where they got feedback on how accurate their predictions had been. Every time, they learned they had done barely better than blind guessing. Every time, they gained experience and gave confident judgments. And every time, they did not improve. Kahneman marveled at the “complete lack of connection between the statistical information and the compelling experience of insight.”

[...]

In those domains, which involved human behavior and where patterns did not clearly repeat, repetition did not cause learning. Chess, golf, and firefighting are exceptions, not the rule.

[...]

Narrow experience made for better chess and poker players and firefighters, but not for better predictors of financial or political trends, or of how employees or patients would perform.

The domains Klein studied, in which instinctive pattern recognition worked powerfully, are what psychologist Robin Hogarth termed “kind” learning environments. Patterns repeat over and over, and feedback is extremely accurate and usually very rapid.

[...]

Kahneman was focused on the flip side of kind learning environments; Hogarth called them “wicked.”

In wicked domains, the rules of the game are often unclear or incomplete, there may or may not be repetitive patterns and they may not be obvious, and feedback is often delayed, inaccurate, or both.

In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons.

Hogarth noted a famous New York City physician renowned for his skill as a diagnostician. The man’s particular specialty was typhoid fever, and he examined patients for it by feeling around their tongues with his hands. Again and again, his testing yielded a positive diagnosis before the patient displayed a single symptom. And over and over, his diagnosis turned out to be correct. As another physician later pointed out, “He was a more productive carrier, using only his hands, than Typhoid Mary.”

[...]

Expert firefighters, when faced with a new situation, like a fire in a skyscraper, can find themselves suddenly deprived of the intuition formed in years of house fires, and prone to poor decisions.

A concerned citizen is largely helpless

Saturday, November 30th, 2019

In Loserthink, Scott Adams cites a celebrity’s global warming/climate change tweet as an example of a bright person talking about something without training in economics or business:

Now let’s say you had experience in economics and business, as I do. In those domains, anyone telling you they can predict the future in ten years with their complicated multivariate models is automatically considered a fraud.

[...]

You might be debating me in your mind right now and thinking that, unlike the field of finance, the scientific process drives out bias over time. Studies are peer reviewed, and experiments that can’t be reproduced are discarded.

Is that what is happening?

Here I draw upon my sixteen years working in corporate America. If my job involved reviewing a complicated paper from a peer, how much checking of the data and the math would I do when I am already overworked? Would I travel to the original measuring instruments all over the world and check their calibrations? Would I compare the raw data to the “adjusted” data that is used in the paper? Would I do a deep dive on the math and reasoning, or would I skim it for obvious mistakes? Unless scientists are a different kind of human being than the rest of us, they would intelligently cut corners whenever they think they could get away with it, just like everyone else. Assuming scientists are human, you would expect lots of peer-reviewed studies to be flawed. And that turns out to be the situation. As the New York Times reported in 2018, the peer review process is defective to the point of being laughable.

[...]

My point is that a concerned citizen is largely helpless in trying to understand how settled the science of climate change really is. But that doesn’t stop us from having firm opinions on the topic.

[...]

Whenever you have a lot of money in play, combined with the ability to hide misbehavior behind complexity, you should expect widespread fraud to happen. Take, for example, the 2019 Duke University settlement in which the university agreed to pay $112.5 million for repeatedly submitting research grant requests with falsified data. Duke had a lot of grant money at stake, and lots of complexity in which to hide bad behavior. Fraud was nearly guaranteed.

If you have been on this planet for a long time, as I have, and you pay attention to science, you know that the consensus of scientists on the topic of nutrition was wrong for decades.

[...]

Over time, it became painfully obvious to me that nutrition science wasn’t science at all. It was some unholy marriage of industry influence, junk science, and government. Any one of those things is bad, but when you put those three forces together, people die. That isn’t hyperbole. Bad nutrition science has probably killed a lot of people in the past few decades.

Naming things can weaponize them

Thursday, November 28th, 2019

Scott Adams has some fun introducing his latest book, Loserthink: How Untrained Brains Are Ruining America:

I know from experience that many of you will give this book as a gift to the unproductive thinkers in your lives, and I wanted to create a complete picture for them, if not for you, O wise book-giver.

[...]

We humans give greater weight to things that have names. And giving loserthink its name creates a shorthand way of mocking people who practice unproductive thinking. Mockery gets a bad rap, but I think we can agree it can be useful when intelligently applied. For example, mocking people for lying probably helps to reduce future lies and make the world a better place, whereas mocking people for things they can’t change is just being a jerk.

[...]

Naming things can weaponize them.

[...]

The risk of mockery changes behavior. I would go so far as to say it is one of history’s most powerful forces.

[...]

Before I introduced the term loserthink, what word would you have used to describe a smart person who has a mental blind spot caused by a lack of exposure across different fields?

[...]

You would probably default to the closest word in your vocabulary, which might be stupid, dumb, idiot, and the like. I don’t have to tell you it’s hard to change someone’s mind after you call him an idiot. And if you take the high road and the intellectual path, describing a person’s mental blind spots with terms such as confirmation bias or cognitive dissonance, your target will claim you are actually the one suffering from those cognitive errors, and the discussion goes nowhere.

(The audiobook seems to be on sale right now.)

If they don’t turn up to school, it doesn’t make the slightest difference

Friday, November 15th, 2019

You often hear people lament that children are no longer allowed to roam free. Ed West’s radical proposal is that child labour should be reintroduced:

But it only sounds radical because we associate child labour with past times of extreme poverty and poor working conditions. For my generation it’s Rik from The Young Ones castigating an elderly woman about the “good old days” when you had “four-year-old kiddies digging coal”. And those days were indeed awful. Dan Jackson’s brilliant recent book The Northumbrians recalled the heart-breaking tragedy of the 1862 Hartley Mining Disaster where the bodies of young boys were found with their tiny arms around their brothers.

Not even an ironic reactionary like me would lament the decline of infant mortality and workplace fatalities brought about by health and safety legislation. We obviously wouldn’t allow children to do dangerous work in factories today, and many of the most horrific roles once done by kids are obsolete anyway.

[...]

But for children a bit older, the working environment allows them to interact with adults, adopt adult social norms and learn skills when their brain is rapidly absorbing information. They could also earn money at a time in life they really want it.

I suspect that a lot of teenage crime in London exists because boys reach an age when they want disposable income but there’s no way for them to legally earn it. They’re also mentally and physically under-stimulated by schoolwork they know brings them little tangible benefit. (This is arguably more acute among boys because they’re generally more goal-driven, respond when stakes are high, and easily give up when they’re not). Instead, during those crucial developmental years, they often learn negative behaviour through frustration and drift, so that by the time they’re finally allowed to enter the labour force, they’re already unsuited to it.

At the moment, almost half a million people aged 16-24 are unemployed, but many might not be, if they’d been allowed to start work a bit earlier — with a lower minimum wage. Experience would make them more attractive to employers; it would also get them in the habit of work, so they’d be more likely to adjust to working quickly and stick to it.

Prolonged education also cuts adolescents off from wider society. One of the worst aspects of British society — and where it contrasts poorly with Catholic cultures like Italy or Ireland (still, just about) — is that we have a great deal of generational separation. Young people benefit from working and socialising among those older than them, not only because they’re a calming influence but because they can subtly instruct them on how to behave.

Working young helps insulate children from one of the biggest pitfalls of modern life: extended or even permanent adolescence, which happens when people learn responsibility too late. It also helps a person form a place in a society. Not having enough money is pretty much the worst thing in the world — and reducing poverty should be the central “social justice” aim of governments — but not having a role or purpose is almost as bad.

Teenage boys like to feel needed. This really hit me a few years back when during an unexpected snowfall — there hadn’t been snow in London for well over a decade — all the cars in our area were stuck, and the drivers, many of them mothers with children, stranded. It was obvious that the boys on their way home from the nearby secondary school loved all this — for once, wider society actually needed them.

Having a job, going to an office and earning money — and with it the opportunity to work, and earn, even more — gives teenagers a role. If they don’t turn up for work, the company suffers; if they don’t turn up to school, it doesn’t make the slightest difference except for the purpose of government statistics.

Start with a big blatant neglected fact

Friday, November 8th, 2019

How does Bryan Caplan pick book topics?

How do I pick book topics? On reflection, I usually start with what appears to be a big blatant neglected fact. Then I try to discover whether anything in the universe is big enough to explain this alleged fact away. If a laborious search uncovers nothing sufficient, I am left with the seed of a book: One Big Fact that Overawes All Doubts.

Thus, my Myth of the Rational Voter starts with what appears to be a big blatant neglected fact: the typical voter seems highly irrational. He uses deeply flawed intellectual methods, and holds a wide range of absurd views. Twist and turn the issue as you please, and this big blatant neglected fact remains.

Selfish Reasons to Have More Kids, similarly, begins with a rather different big blatant neglected alleged fact: Modern parenting is obsessed with “investing” in kids’ long-run outcomes, yet twin and adoption researchers consistently conclude that the long-run effect of nurture is grossly overrated. Yes, the latter fact is only “blatant” after you read the research, but once you read it, you can’t unread it.

What’s the One Big Fact that Overawes All Doubts in The Case Against Education? This: education is highly lucrative even though the curriculum is highly irrelevant in the real world. Yes, it takes a book to investigate the many efforts to explain this One Big Fact away (“learning how to learn,” anyone?). But without One Big Fact, there’d be no book.

Finally, the big motivating fact behind Open Borders is that simply letting a foreigner move to the First World vastly multiplies his labor earnings overnight. A Haitian really can make twenty times as much money in Miami the week after he leaves Port-au-Prince – and the reason is clearly that the Haitian is vastly more productive in the U.S. Which really makes you wonder: Why would anyone want to stop another human being from escaping poverty by enriching the world? Given this starting point, anti-immigration arguments are largely attempts to explain this big blatant neglected fact away. Given what restrictionist arguments are up against, it’s hardly surprising that they don’t measure up.

On reflection, my current book project, Poverty: Who To Blame, doesn’t seem to fit this formula. The book will rest on three or four big blatant neglected facts rather than one. Yet perhaps as I write, One Big Fact that Overawes All Doubts will come into focus…

Half of Americans read a book in the last year

Sunday, October 13th, 2019

The size of the American reading public varies depending on one’s definition of reading:

In 2017, about 53 percent of American adults (roughly 125 million people) read at least one book not for school or for work in the previous 12 months, according to the National Endowment for the Arts (NEA). Five years earlier, the NEA ran a more detailed survey, and found that 23 percent of American adults were “light” readers (finishing one to five titles per year), 10 percent were “moderate” (six to 11 titles), 13 percent were “frequent” (12 to 49 titles), and a dedicated 5 percent were “avid” (50 books and up).

The College Board has been criticized for this so-called excellence gap

Tuesday, September 24th, 2019

Learning in the Fast Lane is both a history and a defense of the Advanced Placement program:

The No Child Left Behind Act, signed by President George W. Bush nearly 20 years ago, and the Race to the Top initiative, championed by President Obama, weren’t overly concerned with students who occupied the loftiest parts of the achievement spectrum. Schools were rewarded “for helping struggling kids meet proficiency standards but not for dealing with those already well beyond proficiency,” Mr. Finn said. Education policy makers respond to incentives like everyone else.

One bright spot is the Advanced Placement program, which got its start during the Eisenhower administration. Spooked by Sputnik, the government worried about the intellectual rigor of our schools. The country was trying to win a Cold War against communism, and the thinking was that a better-educated public would help ensure victory. After World War II, states made high school mandatory, and the GI Bill gave returning soldiers access to college. The goal was to locate and then nurture the nation’s best and brightest.

The AP program initially was funded by the Ford Foundation but today is run by the College Board, the same nonprofit entity that administers the SAT. Early on, fewer than a dozen AP courses existed, mainly in private schools or affluent suburban districts. By 2018, nearly 40 subjects were available to some 2.8 million students enrolled in more than 22,000 high schools. Students who complete the courses take a final exam, which is graded on a 5-point scale. Those who score 3 or higher are often eligible for college credit.

The downside of this expansion is that many low-income and minority students who complete the courses don’t score well enough on the exams to receive college credit. The College Board has been criticized for this so-called excellence gap, but Mr. Finn hopes that the outreach continues.

He said the proper response to underwhelming test scores is better preparation for disadvantaged students who enroll, and he commends the AP program for maintaining high standards.

The growth in high school enrollment tracks the growth in teen suicides

Friday, August 23rd, 2019

Michael Strong suggests evolutionary mismatch as a causal factor in adolescent dysfunction and mental illness:

Human beings evolved over many millions of years in diverse physical environments. But with respect to social structure, until the dawn of agriculture and empire, almost all adolescents:

1. Lived in a small tribal community of a few dozen to a few hundred with few interactions with other tribal groups.

2. These tribes would have shared one language, one belief system, one set of norms, one morality, and more generally a social and cultural homogeneity that is unimaginable for us today.

3. They would have been immersed in a community with a full range of ages present, from child to elder.

4. From childhood they would have been engaged in the work of the community, typically hunting and gathering, with full adult responsibilities typically being associated with puberty.

5. Their mating and status competitions would have mostly been within their tribe or occasionally with nearby groups, most of which would have been highly similar to themselves.

Contemporary adolescents in developed nations, by contrast:

1. Are often exposed to hundreds or thousands of age peers directly in addition to thousands of adults and thousands of electronic representations of diverse human beings (both social media and entertainment media).

2. Are exposed to many languages, belief systems, norms, moralities, and social and cultural diversity.

3. Are largely isolated with a very narrow range of age peers through schooling.

4. Have little or no opportunities for meaningful work in their community and no adult responsibilities until 18 or even into their 20s.

5. They are competing for mates and status with hundreds or thousands directly and with many thousands via electronic representations (both social media and entertainment media).

We do not know for certain exactly which of these differences between our environment of evolutionary adaptation and contemporary adolescence in developed nations result in which manifestations of mental illnesses and to what extent. But it would be surprising if these rather dramatic changes in the social and cultural environment did not have some impact.

[...]

Might the growth in teen suicides from 1950 to 1980 be a result of an increased evolutionary mismatch during that period?

We don’t know why, exactly, teen suicide increased 3x during that period. That said, the correlation with schooling is intriguing. For instance, white students completing high school increased from about 30% to about 70% from 1950 to 1980. “Black and other” students increased from about 10% completing high school in 1950 to about 60% in 1980.

If we look at suicide rates for ages 15–19, white males increase from 3.7 per 100,000 in 1950 to 15.0 per 100,000 in 1980, more than 4x. Black and other males increase from 2.2 per 100,000 in 1950 to 7.5 per 100,000 in 1980, more than 3.4x. Female rates did not increase at the same rate: White females age 15–19 went from 1.9 per 100,000 in 1950 to 3.3 in 1980, 1.7x. Black and other females increased from 1.5 per 100,000 in 1950 to a peak of 3.0 in 1970 before coming back down to 1.8 per 100,000 in 1980, 2x at the peak, 1.2x by 1980.
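
As a quick sanity check of those multipliers (my own arithmetic, using only the rates quoted above):

```python
# Ratios of the later rates to the 1950 rates (per 100,000, ages 15-19),
# taken straight from the figures quoted above.
rates = {
    "white males, 1950 -> 1980":                  (3.7, 15.0),
    "black and other males, 1950 -> 1980":        (2.2, 7.5),
    "white females, 1950 -> 1980":                (1.9, 3.3),
    "black and other females, 1950 -> 1970 peak": (1.5, 3.0),
    "black and other females, 1950 -> 1980":      (1.5, 1.8),
}

for group, (start, end) in rates.items():
    print(f"{group}: {end / start:.1f}x")
# Prints roughly 4.1x, 3.4x, 1.7x, 2.0x, and 1.2x, matching the text.
```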

For males, at least, the growth in high school enrollment tracks the growth in teen suicides.