What is it like for kids to play sports in adult-sized spaces?
It’s impossible to build on failure, Tony Robbins says:
You build only on success. I turned around the United States Army pistol shooting program. I made certain that the first time someone shot a pistol, instead of shooting the .45 caliber pistol from 50 feet away — which is what they were starting these guys out at — I brought the target literally five feet in front of the students. I wouldn’t let them fire the gun until they had rehearsed over and over again the exact perfect shooting form for two hours. By the time they held the gun, they had every technique perfected, so when they fired, they succeeded. BAM!
At first the Army thought it was stupid, but it put ignition into the students’ brain — “WOW! I’ve succeeded!” — versus shooting bullets into the ceiling or floor the first few times. It created an initial sense of certainty.
I believe in setting people up to win. Many instructors believe in setting them up to fail so they stay humble and they are more motivated. I disagree radically. There is a time for that but not in the beginning. People’s actions are very limited when they think they have limited potential. If you have limited belief, you are going to use limited potential, and you are going to take limited action.
I am shocked — shocked! — to find cheating going on at UNC!
A blistering report into an academic fraud scandal at the University of North Carolina released Wednesday found that for nearly two decades two employees in the African and Afro-American Studies department ran a “shadow curriculum” of hundreds of fake classes that never met but for which students, many of them Tar Heels athletes, routinely received A’s and B’s.
Nearly half the students in the classes were athletes, the report found, often deliberately steered there by academic counselors to bolster their worrisomely low grade-point averages and to allow them to continue playing on North Carolina’s teams.
I’m so glad we’ve ferreted out this one isolated program, and America’s student-athletes can continue their long tradition of academic excellence.
Gian-Carlo Rota of MIT shares ten lessons he wishes he had been taught:
- Blackboard Technique
- Publish the same result several times.
- You are more likely to be remembered by your expository work.
- Every mathematician has only a few tricks.
- Do not worry about your mistakes.
- Use the Feynman method.
- Give lavish acknowledgments.
- Write informative introductions.
- Be prepared for old age.
His lesson on lecturing:
The following four requirements of a good lecture do not seem to be altogether obvious, judging from the mathematics lectures I have been listening to for the past forty-six years.
Every lecture should make only one main point
The German philosopher G. W. F. Hegel wrote that any philosopher who uses the word “and” too often cannot be a good philosopher. I think he was right, at least insofar as lecturing goes. Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.
Never run overtime
Running overtime is the one unforgivable error a lecturer can make. After fifty minutes (one microcentury as von Neumann used to say) everybody’s attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute overtime can destroy the best of lectures.
Relate to your audience
As you enter the lecture hall, try to spot someone in the audience with whose work you have some familiarity. Quickly rearrange your presentation so as to manage to mention some of that person’s work. In this way, you will guarantee that at least one person will follow with rapt attention, and you will make a friend to boot.
Everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.
Give them something to take home
It is not easy to follow Professor Struik’s advice. It is easier to state what features of a lecture the audience will always remember, and the answer is not pretty. I often meet, in airports, in the street and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course, and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.
If there is a case to be made that unconventional schooling, without busywork or fixed schedules, helps unleash creativity, Palmer Luckey, creator of the Oculus Rift, might well be Exhibit A for the prosecution:
His mother, Julie, home-schooled all four of her children during a period of each of their childhoods (Luckey’s father, Donald, is a car salesman), but Palmer was the only one of the kids who never went back; he liked the flexibility too much. In his ample free time, he devoted most of his considerable energy to teaching himself how to build electronics from scratch.
No one else in Luckey’s family was especially interested in technology, but his parents were happy to give over half of the garage at their Long Beach, California, home to his experiments. There, Luckey quickly progressed from making small electronics to “high-voltage stuff” like lasers and electromagnetic coilguns. Inevitably, there were mishaps. While working on a live Tesla coil, Luckey once accidentally touched a grounded metal bed frame, and blew himself across the garage; another time, while cleaning an infrared laser, he burned a gray spot into his vision.
When Luckey was 15, he started “modding” video game equipment: taking consoles like the Nintendo GameCube, disassembling them, and modifying them with newer parts, to transform them into compact, efficient and hand-crafted devices. “Modding was more interesting than just building things entirely using new technologies,” Luckey told me. “It was this very special type of engineering that required deeply understanding why people had made the decisions they made in designing the hardware.”
Luckey soon became obsessed with PC gaming. How well, he wondered, could he play games? “Not skill level,” he clarified to me, “but how good could the experience be?” By this time, Luckey was making good money fixing broken iPhones, and he spent most of it on high-end gaming equipment in order to make the experience as immersive as possible. At one point, his standard gaming setup consisted of a mind-boggling six-monitor arrangement. “It was so sick,” he recalled.
But it wasn’t enough. Luckey didn’t just want to play on expensive screens; he wanted to jump inside the game itself. He knew the military sometimes trained soldiers using virtual reality headsets, so he set out to buy some — on the cheap, through government auctions. “You’d read that these VR systems originally cost hundreds of thousands of dollars, and you thought, clearly if they’re that expensive, they must be really good,” Luckey said. Instead, they fell miles short of his hopes. The field of view on one headset might be so narrow that he’d feel as if he was looking through a half-opened door. Another might weigh ten pounds, or have preposterously long lag between his head moving and the image reacting onscreen — a feature common to early VR that literally makes users nauseated.
So Luckey decided to do what he’d been doing for years with game consoles: He’d take the technology apart, figure out where it was falling short and modify it with new parts to improve it. Very quickly, he realized that this wasn’t going to be simple. “It turned out that a lot of the approaches the old systems were taking were dead ends,” he said.
The problem was one of fundamental design philosophy. In order to create the illusion of a three-dimensional digital world from a single flat screen, VR manufacturers had typically used complex optical apparatuses that magnified the onscreen image to fill the user’s visual field while also correcting for any distortion. Because these optics had to perform a variety of elaborate tricks to make the magnified image seem clear, they were extremely heavy and costly to produce.
Luckey’s solution to this dilemma was ingeniously simple. Why use bulky, expensive optics, he thought, when he could put in cheap, lightweight lenses and then use software to distort the image, so that it came out clear through them? Plus, he quickly realized that he could combine these lenses with screens from mobile phones, which the smartphone arms race had made bigger, crisper and less expensive than ever before. “That let me make something that was a lot lighter and cheaper, with a much wider field of view, than anything else out there,” he said.
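Luckey's trick of fixing the image in software rather than in glass can be sketched in a few lines. The sketch below assumes a simple polynomial radial-distortion model; the `k1`, `k2` coefficients are illustrative placeholders, not Oculus's actual values. The renderer warps each point radially by an amount that grows with distance from the lens center, so that the cheap lens's opposite distortion stretches the image back into shape.

```python
def barrel_predistort(x, y, k1=0.22, k2=0.24):
    """Pre-distort normalized screen coordinates (origin at the lens center).

    A simple magnifier lens distorts the image it magnifies, so the
    renderer applies the inverse warp first: each point is pushed along
    its radius by a polynomial factor in the squared distance from the
    center. Lens and software distortions then roughly cancel.
    k1 and k2 are made-up illustrative coefficients.
    """
    r2 = x * x + y * y                    # squared radius from lens center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # polynomial radial model
    return x * scale, y * scale
```

Because the correction is a cheap per-pixel warp, it can run as a post-processing step on ordinary graphics hardware, which is what lets the optics themselves stay light and simple.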
From 2009 to 2012, while also taking college classes and working at the University of Southern California’s VR-focused Institute for Creative Technologies, Luckey poured countless hours into creating a working prototype from this core vision. He tinkered with different screens, mixed and matched parts from his collection of VR hardware, and refined the motion tracking equipment, which monitored the user’s head movements in real-time. Amazingly, considering the eventual value of his invention, Luckey was also posting detailed reports about his work to a 3-D gaming message board. The idea was sitting there for anyone to steal.
But, as Brendan Iribe put it to me, “Maybe his name is Luckey for a reason.” By that point, no one was interested in throwing more money away on another doomed virtual reality project.
Then, in early 2012, luck struck again when the legendary video game programmer John Carmack stumbled onto his work online and asked Luckey if he could buy one of his prototypes. Luckey sent him one for free. “I played it super cool,” he assured me. Carmack returned the favor in a big way: At that June’s E3 convention — the game industry’s gigantic annual commercial carnival — he showed off the Rift prototype to a flock of journalists, using a repurposed version of his hit game “Doom 3” for the demonstration. The response was immediate and ecstatic. “I was in Boston at a display conference at the time,” Luckey said, “and people there were like, ‘Dude, Palmer, everyone’s writing articles about your thing!’”
The rest, as they say, is virtual history: Over the next 21 months, Luckey partnered with Iribe, Antonov and Mitchell, launched a Kickstarter campaign that netted $2.4 million in funding — nearly ten times its initial goal — and joined the Facebook empire, thereby ensuring the company the kind of financial backing that most early-stage tech companies can only dream of.
The Oculus Rift is now entering its final stages of development — it’s slated for commercial release next year — and this fall Samsung will release a scaled-down product for developers and enthusiasts, powered by Oculus technology, that will clip over the company’s Galaxy Note 4 smartphone. But Luckey knows that success is by no means assured. “To this point, there has never been a successful commercial VR product, ever,” Luckey told me. “Nobody’s actually managed to pull this off.” Spend a few minutes inside the Rift, though, and one can’t help but believe that Luckey will be the one to do it.
- Figuring stuff out is way hard.
- There is no general method.
- Selecting and formulating problems is as important as solving them; these each require different cognitive skills.
- Problem formulation (vocabulary selection) requires careful, non-formal observation of the real world.
- A good problem formulation includes the relevant distinctions, and abstracts away irrelevant ones. This makes problem solution easy.
- Little formal tricks (like Bayesian statistics) may be useful, but any one of them is only a tiny part of what you need.
- Progress usually requires applying several methods. Learn as many different ones as possible.
- Meta-level knowledge of how a field works — which methods to apply to which sorts of problems, and how and why — is critical (and harder to get).
I didn’t find that list as interesting as his pull-out points along the way:
- Understanding informal reasoning is probably more important than understanding technical methods.
- Finding a good formulation for a problem is often most of the work of solving it.
- Before applying any technical method, you have to already have a pretty good idea of what the form of the answer will be.
- Choosing a good vocabulary, at the right level of description, is usually key to understanding.
- Truth does not apply to problem formulations; what matters is usefulness.
- All problem formulations are “false,” because they abstract away details of reality.
- Work through several specific examples before trying to solve the general case. Looking at specific real-world details often gives an intuitive sense for what the relevant distinctions are.
- Problem formulation and problem solution are mutually-recursive processes.
- Heuristics for evaluating progress are critical not only during problem solving, but also during problem formulation.
- Solve a simplified version of the problem first. If you can’t do even that, you’re in trouble.
- If you are having a hard time, make sure you aren’t trying to solve an NP-complete problem. If you are, go back and look for additional sources of constraint in the real-world domain.
- You can never know enough mathematics.
- An education in math is a better preparation for a career in intellectual field X than an education in X.
- You should learn as many different kinds of math as possible. It’s difficult to predict what sort will be relevant to a problem.
- If a problem seems too hard, the formulation is probably wrong. Drop your formal problem statement, go back to reality, and observe what is going on.
- Learn from fields very different from your own. They each have ways of thinking that can be useful at surprising times. Just learning to think like an anthropologist, a psychologist, and a philosopher will beneficially stretch your mind.
- If all you have is a hammer, everything looks like an anvil. If you only know one formal method of reasoning, you’ll try to apply it in places it doesn’t work.
- Evaluate the prospects for your field frequently. Be prepared to switch if it looks like it is approaching its inherent end-point.
- It’s more important to know what a branch of math is about than to know the details. You can look those up, if you realize that you need them.
- Get a superficial understanding of as many kinds of math as possible. That can be enough that you will recognize when one applies, even if you don’t know how to use it.
- Math only has to be “correct” enough to get the job done.
- You should be able to prove theorems and you should harbor doubts about whether theorems prove anything.
- Try to figure out how people smarter than you think.
- Figure out what your own cognitive style is. Embrace and develop it as your secret weapon; but try to learn and appreciate other styles as well.
- Collect your bag of tricks.
- Find a teacher who is willing to go meta and explain how a field works, instead of lecturing you on its subject matter.
Peter Gray and Gina Riley surveyed 232 parents who unschool their children:
Getting into college was typically a fairly smooth process for this group; they adjusted to the academics fairly easily, quickly picking up skills such as class note-taking or essay composition; and most felt at a distinct advantage due to their high self-motivation and capacity for self-direction. “The most frequent complaints,” Gray notes on his blog, “were about the lack of motivation and intellectual curiosity among their college classmates, the constricted social life of college, and, in a few cases, constraints imposed by the curriculum or grading system.”
Most of those who went on to college did so without either a high school diploma or general education diploma (GED), and without taking the SAT or ACT. Several credited interviews and portfolios for their acceptance to college, but by far the most common route to a four-year college was to start at a community college (typically begun at age 16, but sometimes even younger).
None of the respondents found college academically difficult, but some found the rules and conventions strange and sometimes off-putting. Young people who were used to having to find things out on their own were taken aback, and even in some cases felt insulted, “when professors assumed they had to tell them what they were supposed to learn,” Gray says.
The range of jobs and careers was very broad—from film production assistant to tall-ship bosun, urban planner, aerial wildlife photographer, and founder of a construction company—but a few generalizations emerged. Compared to the general population, an unusually high percentage of the survey respondents went on to careers in the creative arts—about half overall, rising to nearly four out of five in the always-unschooled group. Similarly, a high number of respondents (half of the men and about 20 percent of the women) went on to science, technology, engineering or math (STEM) careers.
Grade inflation has led universities to offer context to students’ grades:
Starting this fall, UNC-Chapel Hill transcripts will provide a little truth in grading.
From now on, transcripts for university graduates will contain a healthy dose of context.
Next to a student’s grade, the record will include the median grade of classmates, the percentile range and the number of students in the class section. Another new measure, alongside the grade point average, is the schedule point average. A snapshot average grade for a student’s mix of courses, the SPA is akin to a sports team’s strength of schedule.
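The article doesn't spell out UNC's exact formula, but on the strength-of-schedule analogy a schedule point average presumably works something like a credit-weighted mean of each section's median grade. A minimal sketch under that assumption:

```python
def schedule_point_average(courses):
    """Credit-weighted average of the median grade in each of a
    student's course sections -- one plausible reading of UNC's SPA;
    the actual formula is not given in the article.

    courses: list of (median_grade_points, credit_hours) tuples.
    """
    total_points = sum(median * hours for median, hours in courses)
    total_hours = sum(hours for _, hours in courses)
    return total_points / total_hours

# A student whose sections have medians of A- (3.7) and B+ (3.3) carries
# an "easier" schedule than one whose sections have medians of B- (2.7),
# so the same GPA means different things next to the two SPAs.
```

The point of the metric is exactly the sports analogy: a 3.5 GPA earned against a 3.7-median schedule reads very differently from a 3.5 earned against a 2.7-median one.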
Researchers collected grade data for 135 U.S. colleges and universities, representing 1.5 million students. They found that A’s are now the most commonly awarded grade – 43 percent of all grades. Failure is almost unheard of, with D’s and F’s making up less than 10 percent of all college grades.
The study found that grade inflation has been most pronounced at elite private universities, trailed by public flagship campuses and then less selective schools. Grading tends to be higher in humanities courses, followed by social sciences. The lowest grades tend to occur in the science, math and engineering disciplines.
Indiana University used to do it, but stopped because of a software change. Dartmouth College and Cornell University include median grades on transcripts. Cornell used to publish the information online, but quit in 2011 after a study revealed that enrollment spiked in classes with a median grade of A.
But there is a larger move to transcripts with broader information about students’ learning outcomes, said Brad Myers, Ohio State University registrar and president of the American Association of Collegiate Registrars and Admissions Officers.
“We’re really trying to say, ‘Here’s what the student has mastered, and isn’t that what you’re after, more than whether the student got a B or a C or a D in this class?’ ”
Princeton University made headlines for a 2004 policy that sought to limit A’s to 35 percent in undergraduate courses – seen as a radical approach to regulate grades. Earlier this month, a faculty committee there recommended dropping the policy, saying it was too stressful for students and was misinterpreted as a quota system.
Sometimes, when we open a test, we see familiar questions on material we’ve studied — and yet we still do badly. Why does this happen?
Psychologists have studied learning long enough to have an answer, and typically it’s not a lack of effort (or of some elusive test-taking gene). The problem is that we have misjudged the depth of what we know. We are duped by a misperception of “fluency,” believing that because facts or formulas or arguments are easy to remember right now, they will remain that way tomorrow or the next day. This fluency illusion is so strong that, once we feel we have some topic or assignment down, we assume that further study won’t strengthen our memory of the material. We move on, forgetting that we forget.
Often our study “aids” simply create fluency illusions — including, yes, highlighting — as do chapter outlines provided by a teacher or a textbook. Such fluency misperceptions are automatic; they form subconsciously and render us extremely poor judges of what we need to restudy or practice again. “We know that if you study something twice, in spaced sessions, it’s harder to process the material the second time, and so people think it’s counterproductive,” Nate Kornell, a psychologist at Williams College, said. “But the opposite is true: You learn more, even though it feels harder. Fluency is playing a trick on judgment.”
The best way to overcome this illusion is testing, which also happens to be an effective study technique in its own right. This is not exactly a recent discovery; people have understood it since the dawn of formal education, probably longer. In 1620, the philosopher Francis Bacon wrote, “If you read a piece of text through twenty times, you will not learn it by heart so easily as if you read it ten times while attempting to recite it from time to time and consulting the text when your memory fails.”
Scientific confirmation of this principle began in 1916, when Arthur Gates, a psychologist at Columbia University, created an ingenious study to further Bacon’s insight. If someone is trying to learn a piece of text from memory, Gates wondered, what would be the ideal ratio of study to recitation (without looking)? To find out, he had more than 100 schoolchildren try to memorize text from Who’s Who entries. He broke them into groups and gave each child nine minutes to prepare, along with specific instructions on how to use that time. One group spent 1 minute 48 seconds memorizing and the remaining time rehearsing (reciting); another split its time roughly in half, equal parts memorizing and rehearsing; a third studied for a third and recited for two-thirds; and so on.
After a sufficient break, Gates sat through sputtered details of the lives of great Americans and found his ratio. “In general,” he concluded, “best results are obtained by introducing recitation after devoting about 40 percent of the time to reading. Introducing recitation too early or too late leads to poorer results.” The quickest way to master that Shakespearean sonnet, in other words, is to spend the first third of your time memorizing it and the remaining two-thirds of the time trying to recite it from memory.
In the 1930s, a doctoral student at the State University of Iowa, Herman F. Spitzer, recognized the broader implications of this insight. Gates’s emphasis on recitation was, Spitzer realized, not merely a study tip for memorization; it was nothing less than a form of self-examination. It was testing as study, and Spitzer wanted to extend the finding, asking a question that would apply more broadly in education: If testing is so helpful, when is the best time to do it?
He mounted an enormous experiment, enlisting more than 3,500 sixth graders at 91 elementary schools in nine Iowa cities. He had them study an age-appropriate article of roughly 600 words in length, similar to what they might analyze for homework. Spitzer divided the students into groups and had each take tests on the passages over the next two months, according to different schedules. For instance, Group 1 received one quiz immediately after studying, then another a day later and a third three weeks later. Group 6, by contrast, didn’t take one until three weeks after reading the passage. Again, the time the students had to study was identical. So were the quizzes. Yet the groups’ scores varied widely, and a clear pattern emerged.
The groups that took pop quizzes soon after reading the passage — once or twice within the first week — did the best on a final exam given at the end of two months, marking about 50 percent of the questions correct. (Remember, they had studied their peanut or bamboo article only once.) By contrast, the groups who took their first pop quiz two weeks or more after studying scored much lower, below 30 percent on the final. Spitzer’s study showed that not only is testing a powerful study technique, but it’s also one that should be deployed sooner rather than later. “Achievement tests or examinations are learning devices and should not be considered only as tools for measuring achievement of pupils,” he concluded.
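Spitzer's pattern is roughly what you would expect from a simple forgetting-curve model in which a successful quiz slows subsequent decay: the earlier the quiz comes, the more material is still retrievable when it happens, and the longer the slowed decay has to pay off. The decay rates below are made-up illustrative parameters, not values fitted to Spitzer's data:

```python
import math

def retention(quiz_day, final_day=60, decay=0.1, slow_factor=0.25):
    """Toy model: fraction of the passage retained at the final exam,
    given a single quiz on quiz_day. Memory decays exponentially;
    whatever is successfully retrieved at the quiz then decays at a
    slower rate for the rest of the interval. All parameters are
    illustrative, not fitted to Spitzer's results.
    """
    at_quiz = math.exp(-decay * quiz_day)  # fraction still retrievable when quizzed
    # retrieved material decays more slowly from the quiz to the final
    return at_quiz * math.exp(-decay * slow_factor * (final_day - quiz_day))

early = retention(quiz_day=1)
late = retention(quiz_day=21)
print(early > late)  # an early quiz leaves more intact at the final exam
```

In this toy model, as in Spitzer's classrooms, the quiz given within the first few days beats the quiz given at three weeks, because by then most of the material has already slipped out of reach of retrieval practice.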
The testing effect, as it’s known, is now well established, and it opens a window on the alchemy of memory itself. “Retrieving a fact is not like opening a computer file,” says Henry Roediger III, a psychologist at Washington University in St. Louis, who, with Jeffrey Karpicke, now at Purdue University, has established the effect’s lasting power. “It alters what we remember and changes how we subsequently organize that knowledge in our brain.”
What would it take to fix our wasteful and unjust system of university admissions?, Steven Pinker asks:
Let’s daydream for a moment. If only we had some way to divine the suitability of a student for an elite education, without ethnic bias, undeserved advantages to the wealthy, or pointless gaming of the system. If only we had some way to match jobs with candidates that was not distorted by the halo of prestige. A sample of behavior that could be gathered quickly and cheaply, assessed objectively, and double-checked for its ability to predict the qualities we value….
We do have this magic measuring stick, of course: it’s called standardized testing. I suspect that a major reason we slid into this madness and can’t seem to figure out how to get out of it is that the American intelligentsia has lost the ability to think straight about objective tests. After all, if the Ivies admitted the highest scoring kids at one end, and companies hired the highest scoring graduates across all universities at the other (with tests that tap knowledge and skill as well as aptitude), many of the perversities of the current system would vanish overnight. Other industrialized countries, lacking our squeamishness about testing, pick their elite students this way, as do our firms in high technology. And as Adrian Wooldridge pointed out in these pages two decades ago, test-based selection used to be the enlightened policy among liberals and progressives, since it can level a hereditary caste system by favoring the Jenny Cavilleris (poor and smart) over the Oliver Barretts (rich and stupid).
If, for various reasons, a university didn’t want a freshman class composed solely of scary-smart kids, there are simple ways to shake up the mixture. Unz suggests that Ivies fill a certain fraction of the incoming class with the highest-scoring applicants, and select the remainder from among the qualified applicant pool by lottery. One can imagine various numerical tweaks, including ones that pull up the number of minorities or legacies to the extent that those goals can be publicly justified. Grades or class rank could also be folded into the calculation. Details aside, it’s hard to see how a simple, transparent, and objective formula would be worse than the eye-of-newt-wing-of-bat mysticism that jerks teenagers and their moms around and conceals unknown mischief.
So why aren’t creative alternatives like this even on the table? A major reason is that popular writers like Stephen Jay Gould and Malcolm Gladwell, pushing a leftist or heart-above-head egalitarianism, have poisoned their readers against aptitude testing. They have insisted that the tests don’t predict anything, or that they do but only up to a limited point on the scale, or that they do but only because affluent parents can goose their children’s scores by buying them test-prep courses.
But all of these hypotheses have been empirically refuted. We have already seen that test scores, as far up the upper tail as you can go, predict a vast range of intellectual, practical, and artistic accomplishments. They’re not perfect, but intuitive judgments based on interviews and other subjective impressions have been shown to be far worse. Test preparation courses, notwithstanding their hard-sell ads, increase scores by a trifling seventh of a standard deviation (with most of the gains in the math component). As for Deresiewicz’s pronouncement that “SAT is supposed to measure aptitude, but what it actually measures is parental income, which it tracks quite closely,” this is bad social science. SAT correlates with parental income (more relevantly, socioeconomic status or SES), but that doesn’t mean it measures it; the correlation could simply mean that smarter parents have smarter kids who get higher SAT scores, and that smarter parents have more intellectually demanding and thus higher-paying jobs. Fortunately, SAT doesn’t track SES all that closely (only about 0.25 on a scale from -1 to 1), and this opens the statistical door to see what it really does measure. The answer is: aptitude. Paul Sackett and his collaborators have shown that SAT scores predict future university grades, holding all else constant, whereas parental SES does not. Matt McGue has shown, moreover, that adolescents’ test scores track the SES only of their biological parents, not (for adopted kids) of their adoptive parents, suggesting that the tracking reflects shared genes, not economic privilege.
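The confound Pinker describes is easy to see in a toy simulation (illustrative parameters and synthetic data, not real SAT figures). Give SES and SAT no direct link at all, only a shared dependence on an aptitude trait that is partly passed from parent to child, and a modest correlation falls out anyway:

```python
import random

random.seed(0)

# Toy model of the confound: parental SES tracks the parent's aptitude,
# the child's SAT tracks the child's aptitude, and the only connection
# between SES and SAT is the partly shared trait. Coefficients are
# made-up illustrative values.
n = 50_000
parent_apt = [random.gauss(0, 1) for _ in range(n)]
child_apt = [0.5 * p + random.gauss(0, 0.87) for p in parent_apt]  # partly heritable
ses = [p + random.gauss(0, 1) for p in parent_apt]   # SES never sees the child
sat = [c + random.gauss(0, 1) for c in child_apt]    # SAT never sees SES

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    vx = sum((x - mx) ** 2 for x in xs) / len(xs)
    vy = sum((y - my) ** 2 for y in ys) / len(ys)
    return cov / (vx * vy) ** 0.5

print(round(corr(ses, sat), 2))  # a modest positive correlation, near 0.25
```

A correlation of roughly 0.25 also means SES and SAT share only about six percent of their variance (0.25 squared), which is the "statistical door" Pinker mentions: most of what the SAT measures is something other than parental circumstances.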
Regardless of the role that you think aptitude testing should play in the admissions process, any discussion of meritocracy that pretends that aptitude does not exist or cannot be measured is not playing with a full deck.
What are the goals of a university education?, Steven Pinker asks:
It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They should understand the principles behind democratic governance and the rule of law. They should know how to appreciate works of fiction and art as sources of aesthetic pleasure and as impetuses to reflect on the human condition.
On top of this knowledge, a liberal education should make certain habits of rationality second nature. Educated people should be able to express complex ideas in clear writing and speech. They should appreciate that objective knowledge is a precious commodity, and know how to distinguish vetted fact from superstition, rumor, and unexamined conventional wisdom. They should know how to reason logically and statistically, avoiding the fallacies and biases to which the untutored human mind is vulnerable. They should think causally rather than magically, and know what it takes to distinguish causation from correlation and coincidence. They should be acutely aware of human fallibility, most notably their own, and appreciate that people who disagree with them are not stupid or evil. Accordingly, they should appreciate the value of trying to change minds by persuasion rather than intimidation or demagoguery.
I believe (and believe I can persuade you) that the more deeply a society cultivates this knowledge and mindset, the more it will flourish. The conviction that they are teachable gets me out of bed in the morning. Laying the foundations in just four years is a formidable challenge.
Teaching is fundamentally a performance art — real-time interactions in chaotic and complex human situations. There are no institutions in our society that provide an environment in which master practitioners of this performance art systematically transfer their expertise.
Imagine, instead, if Escalante had been a great martial arts teacher. He might have established his own school. Students from around the world would have flocked to learn directly from him. Gradually, some of his best students would open up their own schools. They would prominently display their lineage, the fact that they had studied directly with Escalante. People who were interested in becoming serious about a particular martial arts form would ask around to discover who were the best teachers. Those schools could charge a premium. Sometimes such schools would trace their lineage back through several generations of great teachers.
Child prodigies get a lot of attention, Daniel Coyle says, but adult prodigies are even more impressive:
I’m talking about people in their thirties, forties, and beyond — people who are miles past any of the “learning windows” for talent, and who yet succeed in building fantastically high-performing skill sets.
People like Dr. Mary Hobson, who took up Russian at 56, and became a prize-winning translator. Or Gary Marcus, a neuroscientist who took up guitar at the age of 38 and taught himself to rock, or pool player Michael Reddick, or Dan McLaughlin, a 31-year-old who took up golf for the first time four years ago and now plays to an outstanding 3.3 handicap (and who also keeps track of his practice hours — 4,530 and counting, if you wanted to know).
We tend to explain adult prodigies with the same magical thinking as we use to explain child prodigies: they’re special. They always possessed hidden talents.
However, some new science is shedding light on the real reasons adults are able to successfully learn new skills, and exploding some myths in the process. You should check out this article from New Scientist if you want to go deeper. Or read Marcus’s book Guitar Zero, or How We Learn, by Benedict Carey.
The takeaway from all this is that adult prodigies succeed because they’re able to work past two fundamental barriers: 1) the wall of belief that they can’t do it; and 2) the grid of adult routines that keeps them from spending time working intensively to improve their skills.
The most-read article in the history of the New Republic is not about war, politics, or great works of art, Steven Pinker notes, but about the admissions policies of a handful of elite universities:
At the admissions end, it’s common knowledge that Harvard selects at most 10 percent (some say 5 percent) of its students on the basis of academic merit. At an orientation session for new faculty, we were told that Harvard “wants to train the future leaders of the world, not the future academics of the world,” and that “We want to read about our student in Newsweek 20 years hence” (prompting the woman next to me to mutter, “Like the Unabomber”). The rest are selected “holistically,” based also on participation in athletics, the arts, charity, activism, travel, and, we inferred (Not in front of the children!), race, donations, and legacy status (since anything can be hidden behind the holistic fig leaf).
It would be an occasion for hilarity if anyone suggested that Harvard pick its graduate students, faculty, or president for their prowess in athletics or music, yet these people are certainly no shallower than our undergraduates. In any case, the stereotype is provably false. Camilla Benbow and David Lubinski have tracked a large sample of precocious teenagers identified solely by high performance on the SAT, and found that when they grew up, they not only excelled in academia, technology, medicine, and business, but won outsize recognition for their novels, plays, poems, paintings, sculptures, and productions in dance, music, and theater. A comparison to a Harvard freshman class would be like a match between the Harlem Globetrotters and the Washington Generals.
What about the rationalization that charitable extracurricular activities teach kids important lessons of moral engagement? There are reasons to be skeptical. A skilled professional I know had to turn down an important freelance assignment because of a recurring commitment to chauffeur her son to a resumé-building “social action” assignment required by his high school. This involved driving the boy for 45 minutes to a community center, cooling her heels while he sorted used clothing for charity, and driving him back — forgoing income which, judiciously donated, could have fed, clothed, and inoculated an African village. The dubious “lessons” of this forced labor as an overqualified ragpicker are that children are entitled to treat their mothers’ time as worth nothing, that you can make the world a better place by destroying economic value, and that the moral worth of an action should be measured by the conspicuousness of the sacrifice rather than the gain to the beneficiary.