Cliques form because people are often attracted to people of the same race, class, gender, and age as themselves—this is not a novel idea, and in sociology, this concept is called homophily (“love of the same”). But Daniel McFarland, an education professor at Stanford and the lead author of the study, discovered that this tendency to segregate is much more prevalent in large schools and schools that provide students with more academic freedom. A news release about the study explains: “Schools that offer students more choice — more elective courses, more ways to complete requirements, a bigger range of potential friends, more freedom to select seats in a classroom — are more likely to be rank-ordered, cliquish, and segregated.”
Little has changed in the 42 years Steve Sailer has been reading social scientists:
As I’ve joked before, when I became interested in the quantitative literature on educational achievement in ninth grade in 1972, the racial rankings went:
Today, the order is:
The letter r can be hard to pronounce, and some children never quite learn how, but a new tool could help:
Conventional speech therapy is often effective at helping to resolve speech errors from sounds that are made with the lips, such as “p,” “b,” “m” and “v.” Children can look in a mirror and imitate a therapist’s lips. But more complex sounds like “s,” “l” and “ch” are harder to fix because they involve movements of the tongue hidden inside the mouth.
Experts say “r” has a particularly complex tongue shape. Using ultrasound biofeedback allows children to see and visualize the tongue as it moves, something not possible in traditional speech therapy. Also, unlike other speech sounds, “r” isn’t always produced the same way; there are many different tongue variations that produce the same sound.
For some children, part of the problem may be an auditory-perceptual problem that makes it difficult for them to hear the difference between correct and incorrect “r” sounds, Dr. Byun said. Ultrasound images “replace the auditory channel with the visual channel,” she said.
To use the technology, an ultrasound probe is dabbed with gel and placed under a child’s chin. Sound waves capture real-time images of the tongue, which help patients and therapists see the outline of the tongue’s shape and position.
Among the most common tongue shapes for producing the correct “r” sound is the bunched “r,” where the tip of the tongue is pointed down or forward and the bulk of the tongue is raised up near the hard palate. Another is the retroflex “r,” where the tongue tip is curled up and slightly back.
In both these cases, parts of the tongue are doing different things at the same time. Generally the tongues of people who don’t pronounce the “r” sound correctly are making simpler or undifferentiated shapes.
“It’s a complicated sound to make. It requires some difficult and coordinated movements with the tongue,” said Jonathan Preston, an assistant professor in the department of communication sciences and disorders at Syracuse University. “Ultrasound makes it more obvious since people can visually adjust and they can learn to adjust in real time,” he said.
Preschoolers who seek stimulation — who physically explore their environment and engage in verbal and nonverbal stimulation with other children and adults — end up more intelligent:
The prediction that high stimulation seeking 3-year-olds would have higher IQs by 11 years old was tested in 1,795 children on whom behavioral measures of stimulation seeking were taken at 3 years, together with cognitive ability at 11 years. High 3-year-old stimulation seekers scored 12 points higher on total IQ at age 11 compared with low stimulation seekers and also had superior scholastic and reading ability. Results replicated across independent samples and were found for all gender and ethnic groups. Effect sizes for the relationship between age 3 stimulation seeking and age 11 IQ ranged from 0.52 to 0.87. Findings appear to be the first to show a prospective link between stimulation seeking and intelligence. It is hypothesized that young stimulation seekers create for themselves an enriched environment that stimulates cognitive development.
This salient bit went unmentioned in the abstract:
The larger population from which the participants were drawn consisted
of 1,795 children from the island of Mauritius (a country lying in the Indian
Ocean between Africa and India).
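As a quick sanity check on the abstract's numbers: IQ scores are conventionally normed to a standard deviation of 15 (that convention is an assumption here; the paper may report sample SDs), which puts the 12-point gap squarely inside the reported range of effect sizes.

```python
# IQ tests are normed to mean 100 and standard deviation 15 (a
# convention, not a figure taken from the paper), so the 12-point
# gap corresponds to a standardized effect size of:
iq_gap, iq_sd = 12, 15
cohens_d = iq_gap / iq_sd  # 0.8, inside the reported 0.52 to 0.87 range
assert 0.52 <= cohens_d <= 0.87
```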
(Hat tip to Richard Harper.)
Business schools don’t — but should — teach their students to become behaviorally fit, Lee Newman argues:
It’s a 9 a.m. meeting: Carolina is getting resistance from the team, and a rival is trying to derail her with subtle gibes. This is a typical moment of truth, and her success will depend largely on how well she listens, reveals hidden agendas, demonstrates openness to others’ ideas, and controls her emotions.
Business school graduates and rising young professionals are all smart and armed with knowledge and tools. What differentiates them is how well they can think and react, and the quality of what they say and do in these behavioral moments that populate every workday.
Behavioral science has shown very clearly that when under time pressure and stress, we resort to default behaviors. These are automatic ways of thinking and reacting that are too often unproductive. Behind closed doors, when I ask a group of executives or young professionals, “Who in this room thinks they could be a better listener?” 90% or more raise their hands. In my experience, the majority of smart professionals listen too little, micromanage too much, judge too quickly, give too little consideration to the ideas of others… and the list goes on.
Learning best practices in workplace behaviors (e.g., 10 steps for active listening, eight steps for leading change, and so on) is useful, but also easily forgotten. When push comes to shove in a high-conflict meeting at the end of a long day, it’s less about what you know and are capable of doing than it is about having well-tuned behaviors that allow you to actually make things happen.
This is what I call “behavioral fitness.” Business schools and corporate universities need to treat the workplace like a behavioral gym where professionals have a clear training plan for what behaviors they need to work on, and then they need to get down to it. Professionals need to sweat daily, in every meeting, every conversation, and every problem solving session.
Why wait until post-grad business school?
What is it like for kids to play sports in adult-sized spaces?
It’s impossible to build on failure, Tony Robbins says:
You build only on success. I turned around the United States Army pistol shooting program. I made certain that the first time someone shot a pistol, instead of shooting the .45 caliber pistol from 50 feet away — which is what they were starting these guys out at — I brought the target literally five feet in front of the students. I wouldn’t let them fire the gun until they had rehearsed over and over again the exact perfect shooting form for two hours. By the time they held the gun, they had every technique perfected, so when they fired, they succeeded. BAM!
At first the Army thought it was stupid, but it put ignition into the students’ brain — “WOW! I’ve succeeded!” — versus shooting bullets into the ceiling or floor the first few times. It created an initial sense of certainty.
I believe in setting people up to win. Many instructors believe in setting them up to fail so they stay humble and they are more motivated. I disagree radically. There is a time for that but not in the beginning. People’s actions are very limited when they think they have limited potential. If you have limited belief, you are going to use limited potential, and you are going to take limited action.
I am shocked — shocked! — to find cheating going on at UNC!
A blistering report into an academic fraud scandal at the University of North Carolina released Wednesday found that for nearly two decades two employees in the African and Afro-American Studies department ran a “shadow curriculum” of hundreds of fake classes that never met but for which students, many of them Tar Heels athletes, routinely received A’s and B’s.
Nearly half the students in the classes were athletes, the report found, often deliberately steered there by academic counselors to bolster their worrisomely low grade-point averages and to allow them to continue playing on North Carolina’s teams.
I’m so glad we’ve ferreted out this one isolated program, and America’s student-athletes can continue their long tradition of academic excellence.
Gian-Carlo Rota of MIT shares ten lessons he wishes he had been taught:
- Lecturing
- Blackboard Technique
- Publish the same result several times.
- You are more likely to be remembered by your expository work.
- Every mathematician has only a few tricks.
- Do not worry about your mistakes.
- Use the Feynman method.
- Give lavish acknowledgments.
- Write informative introductions.
- Be prepared for old age.
His lesson on lecturing:
The following four requirements of a good lecture do not seem to be altogether obvious, judging from the mathematics lectures I have been listening to for the past forty-six years.
Every lecture should make only one main point
The German philosopher G. W. F. Hegel wrote that any philosopher who uses the word “and” too often cannot be a good philosopher. I think he was right, at least insofar as lecturing goes. Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.
Never run overtime
Running overtime is the one unforgivable error a lecturer can make. After fifty minutes (one microcentury as von Neumann used to say) everybody’s attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute overtime can destroy the best of lectures.
Relate to your audience
As you enter the lecture hall, try to spot someone in the audience with whose work you have some familiarity. Quickly rearrange your presentation so as to manage to mention some of that person’s work. In this way, you will guarantee that at least one person will follow with rapt attention, and you will make a friend to boot.
Everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.
Give them something to take home
It is not easy to follow Professor Struik’s advice. It is easier to state what features of a lecture the audience will always remember, and the answer is not pretty. I often meet, in airports, in the street and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course, and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.
If there is a case to be made that unconventional schooling, without busywork or fixed schedules, helps unleash creativity, Palmer Luckey, creator of the Oculus Rift, might well be Exhibit A for the prosecution:
His mother, Julie, home-schooled all four of her children during a period of each of their childhoods (Luckey’s father, Donald, is a car salesman), but Palmer was the only one of the kids who never went back; he liked the flexibility too much. In his ample free time, he devoted most of his considerable energy to teaching himself how to build electronics from scratch.
No one else in Luckey’s family was especially interested in technology, but his parents were happy to give over half of the garage at their Long Beach, California, home to his experiments. There, Luckey quickly progressed from making small electronics to “high-voltage stuff” like lasers and electromagnetic coilguns. Inevitably, there were mishaps. While working on a live Tesla coil, Luckey once accidentally touched a grounded metal bed frame, and blew himself across the garage; another time, while cleaning an infrared laser, he burned a gray spot into his vision.
When Luckey was 15, he started “modding” video game equipment: taking consoles like the Nintendo GameCube, disassembling them, and modifying them with newer parts, to transform them into compact, efficient and hand-crafted devices. “Modding was more interesting than just building things entirely using new technologies,” Luckey told me. “It was this very special type of engineering that required deeply understanding why people had made the decisions they made in designing the hardware.”
Luckey soon became obsessed with PC gaming. How well, he wondered, could he play games? “Not skill level,” he clarified to me, “but how good could the experience be?” By this time, Luckey was making good money fixing broken iPhones, and he spent most of it on high-end gaming equipment in order to make the experience as immersive as possible. At one point, his standard gaming setup consisted of a mind-boggling six-monitor arrangement. “It was so sick,” he recalled.
But it wasn’t enough. Luckey didn’t just want to play on expensive screens; he wanted to jump inside the game itself. He knew the military sometimes trained soldiers using virtual reality headsets, so he set out to buy some — on the cheap, through government auctions. “You’d read that these VR systems originally cost hundreds of thousands of dollars, and you thought, clearly if they’re that expensive, they must be really good,” Luckey said. Instead, they fell miles short of his hopes. The field of view on one headset might be so narrow that he’d feel as if he was looking through a half-opened door. Another might weigh ten pounds, or have preposterously long lag between his head moving and the image reacting onscreen — a feature common to early VR that literally makes users nauseated.
So Luckey decided to do what he’d been doing for years with game consoles: He’d take the technology apart, figure out where it was falling short and modify it with new parts to improve it. Very quickly, he realized that this wasn’t going to be simple. “It turned out that a lot of the approaches the old systems were taking were dead ends,” he said.
The problem was one of fundamental design philosophy. In order to create the illusion of a three-dimensional digital world from a single flat screen, VR manufacturers had typically used complex optical apparatuses that magnified the onscreen image to fill the user’s visual field while also correcting for any distortion. Because these optics had to perform a variety of elaborate tricks to make the magnified image seem clear, they were extremely heavy and costly to produce.
Luckey’s solution to this dilemma was ingeniously simple. Why use bulky, expensive optics, he thought, when he could put in cheap, lightweight lenses and then use software to distort the image, so that it came out clear through them? Plus, he quickly realized that he could combine these lenses with screens from mobile phones, which the smartphone arms race had made bigger, crisper and less expensive than ever before. “That let me make something that was a lot lighter and cheaper, with a much wider field of view, than anything else out there,” he said.
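The trick can be sketched in a few lines: apply the inverse (barrel) warp in software so that the lens's pincushion distortion cancels it and the wearer sees a straight image. The following is only an illustrative radial-distortion model with made-up coefficients, not the Rift's actual correction:

```python
import numpy as np

def barrel_predistort(image, k1=0.22, k2=0.24):
    """Pre-warp an image so that a simple magnifying lens, which adds
    pincushion distortion, cancels the warp. Radial model:
    r_src = r * (1 + k1*r^2 + k2*r^4), with coordinates normalized to
    [-1, 1] from the image center. The coefficients k1 and k2 are
    illustrative, not taken from any real headset."""
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float64)
    x = (xs - w / 2) / (w / 2)   # normalized horizontal offset
    y = (ys - h / 2) / (h / 2)   # normalized vertical offset
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    # Each output pixel samples the source at an outward-pushed radius,
    # squeezing the picture toward the center (the barrel shape).
    src_x = np.clip((x * scale * (w / 2) + w / 2).astype(int), 0, w - 1)
    src_y = np.clip((y * scale * (h / 2) + h / 2).astype(int), 0, h - 1)
    out = image[src_y, src_x]
    # Black out pixels whose source would fall outside the frame.
    out[(np.abs(x * scale) > 1) | (np.abs(y * scale) > 1)] = 0
    return out
```

Because the correction happens per-pixel in software, the optics can be a single cheap lens; the GPU absorbs the complexity instead of the glass.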
From 2009 to 2012, while also taking college classes and working at the University of Southern California’s VR-focused Institute for Creative Technologies, Luckey poured countless hours into creating a working prototype from this core vision. He tinkered with different screens, mixed and matched parts from his collection of VR hardware, and refined the motion tracking equipment, which monitored the user’s head movements in real-time. Amazingly, considering the eventual value of his invention, Luckey was also posting detailed reports about his work to a 3-D gaming message board. The idea was sitting there for anyone to steal.
But, as Brendan Iribe put it to me, “Maybe his name is Luckey for a reason.” By that point, no one was interested in throwing more money away on another doomed virtual reality project.
Then, in early 2012, luck struck again when the legendary video game programmer John Carmack stumbled onto his work online and asked Luckey if he could buy one of his prototypes. Luckey sent him one for free. “I played it super cool,” he assured me. Carmack returned the favor in a big way: At that June’s E3 convention — the game industry’s gigantic annual commercial carnival — he showed off the Rift prototype to a flock of journalists, using a repurposed version of his hit game “Doom 3” for the demonstration. The response was immediate and ecstatic. “I was in Boston at a display conference at the time,” Luckey said, “and people there were like, ‘Dude, Palmer, everyone’s writing articles about your thing!’”
The rest, as they say, is virtual history: Over the next 21 months, Luckey partnered with Iribe, Antonov and Mitchell, launched a Kickstarter campaign that netted $2.4 million in funding — nearly ten times its initial goal — and joined the Facebook empire, thereby ensuring the company the kind of financial backing that most early-stage tech companies can only dream of.
The Oculus Rift is now entering its final stages of development — it’s slated for commercial release next year — and this fall Samsung will release a scaled-down product for developers and enthusiasts, powered by Oculus technology, that will clip over the company’s Galaxy Note 4 smartphone. But Luckey knows that success is by no means assured. “To this point, there has never been a successful commercial VR product, ever,” Luckey told me. “Nobody’s actually managed to pull this off.” Spend a few minutes inside the Rift, though, and one can’t help but believe that Luckey will be the one to do it.
- Figuring stuff out is way hard.
- There is no general method.
- Selecting and formulating problems is as important as solving them; these each require different cognitive skills.
- Problem formulation (vocabulary selection) requires careful, non-formal observation of the real world.
- A good problem formulation includes the relevant distinctions, and abstracts away irrelevant ones. This makes problem solution easy.
- Little formal tricks (like Bayesian statistics) may be useful, but any one of them is only a tiny part of what you need.
- Progress usually requires applying several methods. Learn as many different ones as possible.
- Meta-level knowledge of how a field works — which methods to apply to which sorts of problems, and how and why — is critical (and harder to get).
I didn’t find that list as interesting as his pull-out points along the way:
- Understanding informal reasoning is probably more important than understanding technical methods.
- Finding a good formulation for a problem is often most of the work of solving it.
- Before applying any technical method, you have to already have a pretty good idea of what the form of the answer will be.
- Choosing a good vocabulary, at the right level of description, is usually key to understanding.
- Truth does not apply to problem formulations; what matters is usefulness.
- All problem formulations are “false,” because they abstract away details of reality.
- Work through several specific examples before trying to solve the general case. Looking at specific real-world details often gives an intuitive sense for what the relevant distinctions are.
- Problem formulation and problem solution are mutually-recursive processes.
- Heuristics for evaluating progress are critical not only during problem solving, but also during problem formulation.
- Solve a simplified version of the problem first. If you can’t do even that, you’re in trouble.
- If you are having a hard time, make sure you aren’t trying to solve an NP-complete problem. If you are, go back and look for additional sources of constraint in the real-world domain.
- You can never know enough mathematics.
- An education in math is a better preparation for a career in intellectual field X than an education in X.
- You should learn as many different kinds of math as possible. It’s difficult to predict what sort will be relevant to a problem.
- If a problem seems too hard, the formulation is probably wrong. Drop your formal problem statement, go back to reality, and observe what is going on.
- Learn from fields very different from your own. They each have ways of thinking that can be useful at surprising times. Just learning to think like an anthropologist, a psychologist, and a philosopher will beneficially stretch your mind.
- If all you have is a hammer, everything looks like an anvil. If you only know one formal method of reasoning, you’ll try to apply it in places it doesn’t work.
- Evaluate the prospects for your field frequently. Be prepared to switch if it looks like it is approaching its inherent end-point.
- It’s more important to know what a branch of math is about than to know the details. You can look those up, if you realize that you need them.
- Get a superficial understanding of as many kinds of math as possible. That can be enough that you will recognize when one applies, even if you don’t know how to use it.
- Math only has to be “correct” enough to get the job done.
- You should be able to prove theorems and you should harbor doubts about whether theorems prove anything.
- Try to figure out how people smarter than you think.
- Figure out what your own cognitive style is. Embrace and develop it as your secret weapon; but try to learn and appreciate other styles as well.
- Collect your bag of tricks.
- Find a teacher who is willing to go meta and explain how a field works, instead of lecturing you on its subject matter.
Peter Gray and Gina Riley surveyed 232 parents who unschool their children:
Getting into college was typically a fairly smooth process for this group; they adjusted to the academics fairly easily, quickly picking up skills such as class note-taking or essay composition; and most felt at a distinct advantage due to their high self-motivation and capacity for self-direction. “The most frequent complaints,” Gray notes on his blog, “were about the lack of motivation and intellectual curiosity among their college classmates, the constricted social life of college, and, in a few cases, constraints imposed by the curriculum or grading system.”
Most of those who went on to college did so without either a high school diploma or general equivalency diploma (GED), and without taking the SAT or ACT. Several credited interviews and portfolios for their acceptance to college, but by far the most common route to a four-year college was to start at a community college (typically begun at age 16, but sometimes even younger).
None of the respondents found college academically difficult, but some found the rules and conventions strange and sometimes off-putting. Young people who were used to having to find things out on their own were taken aback, and even in some cases felt insulted, “when professors assumed they had to tell them what they were supposed to learn,” Gray says.
The range of jobs and careers was very broad—from film production assistant to tall-ship bosun, urban planner, aerial wildlife photographer, and founder of a construction company—but a few generalizations emerged. Compared to the general population, an unusually high percentage of the survey respondents went on to careers in the creative arts—about half overall, rising to nearly four out of five in the always-unschooled group. Similarly, a high number of respondents (half of the men and about 20 percent of the women) went on to science, technology, engineering or math (STEM) careers.
Grade inflation has led universities to offer context to students’ grades:
Starting this fall, UNC-Chapel Hill transcripts will provide a little truth in grading.
From now on, transcripts for university graduates will contain a healthy dose of context.
Next to a student’s grade, the record will include the median grade of classmates, the percentile range and the number of students in the class section. Another new measure, alongside the grade point average, is the schedule point average. A snapshot average grade for a student’s mix of courses, the SPA is akin to a sports team’s strength of schedule.
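The article doesn't publish UNC's exact formula, but one natural reading of "a snapshot average grade for a student's mix of courses" is the mean of the median grades in the sections a student took. A hypothetical sketch, where both the grade scale and the formula are assumptions:

```python
# The grade scale and the averaging rule below are assumptions for
# illustration; the article does not give UNC's actual method.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "D": 1.0, "F": 0.0}

def median_grade(section_grades):
    """Median grade points earned by everyone in one class section."""
    pts = sorted(GRADE_POINTS[g] for g in section_grades)
    n = len(pts)
    return pts[n // 2] if n % 2 else (pts[n // 2 - 1] + pts[n // 2]) / 2

def schedule_point_average(sections):
    """Mean of the section medians across a student's schedule: high
    when classmates were graded leniently, low when graded strictly."""
    return sum(median_grade(s) for s in sections) / len(sections)

# Two schedules, very different company: one student sat in sections
# with a median of A, the other in sections with a median near C+.
easy = schedule_point_average([["A", "A", "B"], ["A", "A-", "A"]])   # 4.0
hard = schedule_point_average([["C", "B", "C"], ["B-", "C+", "C"]])  # 2.15
```

Read against a transcript, the same B means more next to an SPA of 2.15 than next to an SPA of 4.0, which is exactly the strength-of-schedule analogy.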
Researchers collected grade data for 135 U.S. colleges and universities, representing 1.5 million students. They found that A’s are now the most commonly awarded grade – 43 percent of all grades. Failure is almost unheard of, with D’s and F’s making up less than 10 percent of all college grades.
The study found that grade inflation has been most pronounced at elite private universities, trailed by public flagship campuses and then less selective schools. Grading tends to be higher in humanities courses, followed by social sciences. The lowest grades tend to occur in the science, math and engineering disciplines.
Indiana University used to provide such context on transcripts, but stopped because of a software change. Dartmouth College and Cornell University include median grades on transcripts. Cornell used to publish the information online, but quit in 2011 after a study revealed that enrollment spiked in classes with a median grade of A.
But there is a larger move to transcripts with broader information about students’ learning outcomes, said Brad Myers, Ohio State University registrar and president of the American Association of Collegiate Registrars and Admissions Officers.
“We’re really trying to say, ‘Here’s what the student has mastered, and isn’t that what you’re after, more than whether the student got a B or a C or a D in this class?’ ”
Princeton University made headlines for a 2004 policy that sought to limit A’s to 35 percent in undergraduate courses – seen as a radical approach to regulate grades. Earlier this month, a faculty committee there recommended dropping the policy, saying it was too stressful for students and was misinterpreted as a quota system.
Sometimes, when we open a test, we see familiar questions on material we’ve studied — and yet we still do badly. Why does this happen?
Psychologists have studied learning long enough to have an answer, and typically it’s not a lack of effort (or of some elusive test-taking gene). The problem is that we have misjudged the depth of what we know. We are duped by a misperception of “fluency,” believing that because facts or formulas or arguments are easy to remember right now, they will remain that way tomorrow or the next day. This fluency illusion is so strong that, once we feel we have some topic or assignment down, we assume that further study won’t strengthen our memory of the material. We move on, forgetting that we forget.
Often our study “aids” simply create fluency illusions — including, yes, highlighting — as do chapter outlines provided by a teacher or a textbook. Such fluency misperceptions are automatic; they form subconsciously and render us extremely poor judges of what we need to restudy or practice again. “We know that if you study something twice, in spaced sessions, it’s harder to process the material the second time, and so people think it’s counterproductive,” Nate Kornell, a psychologist at Williams College, said. “But the opposite is true: You learn more, even though it feels harder. Fluency is playing a trick on judgment.”
The best way to overcome this illusion is testing, which also happens to be an effective study technique in its own right. This is not exactly a recent discovery; people have understood it since the dawn of formal education, probably longer. In 1620, the philosopher Francis Bacon wrote, “If you read a piece of text through twenty times, you will not learn it by heart so easily as if you read it ten times while attempting to recite it from time to time and consulting the text when your memory fails.”
Scientific confirmation of this principle began in 1916, when Arthur Gates, a psychologist at Columbia University, created an ingenious study to further Bacon’s insight. If someone is trying to learn a piece of text from memory, Gates wondered, what would be the ideal ratio of study to recitation (without looking)? To interrogate this question, he had more than 100 schoolchildren try to memorize text from Who’s Who entries. He broke them into groups and gave each child nine minutes to prepare, along with specific instructions on how to use that time. One group spent 1 minute 48 seconds memorizing and the remaining time rehearsing (reciting); another split its time roughly in half, equal parts memorizing and rehearsing; a third studied for a third and recited for two-thirds; and so on.
After a sufficient break, Gates sat through sputtered details of the lives of great Americans and found his ratio. “In general,” he concluded, “best results are obtained by introducing recitation after devoting about 40 percent of the time to reading. Introducing recitation too early or too late leads to poorer results.” The quickest way to master that Shakespearean sonnet, in other words, is to spend roughly the first 40 percent of your time memorizing it and the remaining 60 percent trying to recite it from memory.
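The splits Gates assigned are easy to recompute; the "1 minute 48 seconds" group, for instance, is exactly 20 percent of the nine-minute budget. A quick sketch (only the splits described above; Gates's full list of groups is not given here):

```python
def split(total_seconds, study_fraction):
    """Split a fixed time budget between silent study and recitation."""
    study = round(total_seconds * study_fraction)
    return study, total_seconds - study

def mmss(seconds):
    return f"{seconds // 60}:{seconds % 60:02d}"

BUDGET = 9 * 60  # Gates gave each child nine minutes in total.

# The splits described in the passage (other groups are implied):
for frac in (0.20, 0.50, 1 / 3):
    study, recite = split(BUDGET, frac)
    print(f"{frac:.0%} study -> {mmss(study)} studying, {mmss(recite)} reciting")
# The 20% line is the "1 minute 48 seconds" group: 1:48 studying, 7:12 reciting.

# Gates's optimum, about 40 percent reading before recitation begins:
best = split(BUDGET, 0.40)  # 216 and 324 seconds: 3:36 studying, 5:24 reciting
```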
In the 1930s, a doctoral student at the State University of Iowa, Herman F. Spitzer, recognized the broader implications of this insight. Gates’s emphasis on recitation was, Spitzer realized, not merely a study tip for memorization; it was nothing less than a form of self-examination. It was testing as study, and Spitzer wanted to extend the finding, asking a question that would apply more broadly in education: If testing is so helpful, when is the best time to do it?
He mounted an enormous experiment, enlisting more than 3,500 sixth graders at 91 elementary schools in nine Iowa cities. He had them study an age-appropriate article of roughly 600 words in length, similar to what they might analyze for homework. Spitzer divided the students into groups and had each take tests on the passages over the next two months, according to different schedules. For instance, Group 1 received one quiz immediately after studying, then another a day later and a third three weeks later. Group 6, by contrast, didn’t take one until three weeks after reading the passage. Again, the time the students had to study was identical. So were the quizzes. Yet the groups’ scores varied widely, and a clear pattern emerged.
The groups that took pop quizzes soon after reading the passage — once or twice within the first week — did the best on a final exam given at the end of two months, marking about 50 percent of the questions correct. (Remember, they had studied the 600-word article only once.) By contrast, the groups who took their first pop quiz two weeks or more after studying scored much lower, below 30 percent on the final. Spitzer’s study showed that not only is testing a powerful study technique, but it’s also one that should be deployed sooner rather than later. “Achievement tests or examinations are learning devices and should not be considered only as tools for measuring achievement of pupils,” he concluded.
The testing effect, as it’s known, is now well established, and it opens a window on the alchemy of memory itself. “Retrieving a fact is not like opening a computer file,” says Henry Roediger III, a psychologist at Washington University in St. Louis, who, with Jeffrey Karpicke, now at Purdue University, has established the effect’s lasting power. “It alters what we remember and changes how we subsequently organize that knowledge in our brain.”