Talent, training and performance

Tuesday, August 30th, 2011

Ericsson’s expert performance framework, which says that you need 10,000 hours of deliberate practice to become an expert, is already a simple framework, and it often gets oversimplified further — as in this video by table-tennis champion Matthew Syed, author of Bounce:

Tyler Cowen (and then Aretae) recently linked to a refutation of the expert performance framework — and especially of the oversimplified versions of it — by two exercise physiologists, Ross Tucker and Jonathan Dugas:

I have that study, and what is remarkable about it is that Ericsson presents no indication of variance — there are no standard deviations, no maximums, minimums, or ranges. And so all we really know is that average practice time influences performance, not whether the individual differences present might undermine that argument. Statistically, this is a crucial omission and it may undermine the 10,000 hour conclusion entirely.

While I strongly agree that we need distributions, not single average values, to characterize such things, Tucker and Dugas attack something of a straw man here:

If the theory is that 10,000 hours of practice are needed, and there is no innate ability, then you should not find a single person who has succeeded with fewer than 10,000 hours, and nor should anyone fail having done their 10,000 hours.

I have no trouble accepting the 10,000-hour rule as merely a rule of thumb that suggests the right order of magnitude.

Here’s where things get much more interesting — and data-driven:

Gobet and Campitelli studied 104 chess players and measured practice time and performance level, and looked at the time taken to reach the Master level. This is their finding:

So, the average time taken is 11,053 hours. That’s pretty much in agreement with Ericsson’s violin players. So far so good. But look at that Standard Deviation — 5,538 hours, and it gives a coefficient of variation of 50%. [...] One player reaches master level on 3,000 hours, another takes almost 24,000 hours, and some are still practicing but not succeeding. That’s a 21,000 hour difference, which is two entire practice lifetimes according to the model of practice.
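
To put those numbers in perspective, here is a quick back-of-the-envelope check in plain Python, using only the figures quoted above:

    # Figures quoted from the Gobet & Campitelli chess study above.
    mean_hours = 11_053                 # average practice time to reach Master level
    sd_hours = 5_538                    # standard deviation
    fastest, slowest = 3_000, 24_000    # extremes quoted for individual players

    cv = sd_hours / mean_hours
    print(f"coefficient of variation: {cv:.0%}")          # ~50%, as quoted
    print(f"spread: {slowest - fastest:,} hours "
          f"({slowest / fastest:.0f}x between fastest and slowest)")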

Darts, which has been studied by Duffy and Ericsson, offers more data:

They find the following when looking at darts scores and accumulated practice time:

The figure above shows how much of performance can be explained by deliberate practice. In chess, which I showed above, it’s 34%. In darts, 15 years of practice explains only 28% of the variation in performance between individuals! An extraordinary finding, because with all due respect, that’s in darts. What else is there that influences performance? Yet practice time accounts for only a quarter of the performance differences.

What else is there to influence dart performance? Plenty of random noise, I suspect, because of the peculiar scoring system. The same is true of poker: there is clearly skill involved, but noise means that skill explains a much smaller share of observed performance than it does in a game like chess.
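
To see how noise of that sort dilutes the share of performance that skill explains, here is a toy simulation. It is illustrative only: the half-skill, half-luck split is an assumption, and I am reading the quoted percentages as r² values, which the underlying papers may define differently.

    import random
    from statistics import pvariance

    # Reading the quoted "% of performance explained" figures as r^2 values
    # (an assumption) gives only modest correlations with practice time.
    for domain, r2 in [("chess", 0.34), ("darts", 0.28)]:
        print(f"{domain}: r^2 = {r2:.2f} -> correlation r ~ {r2 ** 0.5:.2f}")

    # Illustrative simulation: when observed scores are equal parts skill and
    # luck, skill can explain only about half of the observed variance,
    # however real the skill is.
    random.seed(0)
    skill = [random.gauss(0, 1) for _ in range(100_000)]
    observed = [s + random.gauss(0, 1) for s in skill]
    print(f"variance explained by skill: {pvariance(skill) / pvariance(observed):.0%}")

The noisier the scoring, the smaller that explained share gets, even if the underlying skill gap between players stays exactly the same.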

This also fails to disprove the importance of deliberate training, if we accept that there are degrees of deliberate-ness that are hard to measure. The original finding, after all, was that top-tier musicians hadn’t practiced music more than third-tier musicians, but that they had deliberately practiced more:

All expert musicians were found to spend about the same amount of time on all types of music related activities during the diary week — about 50–60 hours. The most striking difference was that the two most accomplished groups of expert musicians were found to spend more time (25 hours) in solitary practice than the least accomplished group, who only spent around 10 hours per week.

During solitary practice the experts reported working with full concentration on improving specific aspects of their music performance — often identified by their master teacher at their weekly lessons — thus meeting the criteria for deliberate practice. The best groups of expert musicians spent around four hours every day, including weekends, in this type of solitary practice.

From retrospective estimates of practice, Ericsson et al. (1993) calculated the number of hours of deliberate practice that five groups of musicians at different performance levels had accumulated by a given age, as is illustrated in Figure 3. By the age of 20, the most accomplished musicians had spent over 10,000 hours of practice, which is 2,500 and 5,000 hours more than two less accomplished groups of expert musicians, or 8,000 hours more than amateur pianists of the same age (Krampe & Ericsson, 1996).

As the contest moves away from pure skill to something more physical, the primacy of skill naturally drops:

Start with Olympic wrestling, football and field hockey. Below are the findings from research on the USA Olympic athletes.

Clearly, 10,000 hours are rarely required. A subsequent study on Australian athletes found that 28% had participated for fewer than four years in their sport — that’s probably 3,000 to 4,000 hours, at most. One netball player from Australia had made the international stage on 600 hours of play.
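
The hours estimate is easy to sanity-check. Assuming something like 15 to 20 training hours a week, which is my assumption rather than a figure from the study:

    # Rough check of the "3,000 to 4,000 hours in under four years" estimate.
    # The 15-20 hours/week training load is an assumption for illustration.
    years, weeks_per_year = 4, 50
    for hours_per_week in (15, 20):
        total = years * weeks_per_year * hours_per_week
        print(f"{hours_per_week} h/week for {years} years -> {total:,} hours")
    # 15 h/week -> 3,000 hours; 20 h/week -> 4,000 hours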

Clearly there is some overlap between the skills and attributes needed for success in various sports, and some sports — coughnetballcough — are nowhere near as competitive as others.

Their last point is one that immediately jumped out at me when I read about the original research: which way does the causality run?

Ericsson concludes that these children just accumulate more training time and that this explains performance. The difference between the “best experts” and the “least accomplished players” is the training time.

But what if it is exactly the other way around? Let’s take two children at nine years old. Do they have the same ability to play on first exposure? Ericsson’s model says yes, and that the difference comes later, when one child practices more, gets better teaching. But what if the difference is present from the very first note, the first exposure to the activity? The parents of a child who shows some ability encourage further practice, they invest in teaching and training, and this child, by virtue of the fact that he/she has more ability to begin with, accumulates more practice.

But the child who has little innate ability makes the violin sound like the death march of stray cats, and their parents do not encourage more play. In fact, they discourage it — the “go play outside” syndrome takes over, and the child is never exposed to teaching or practice. His trajectory is set precisely because he has less innate ability.

This Matthew effect was also popularized by the same Gladwell book that made the 10,000-hour rule so fashionable — but Outliers neglects to mention that this effect disappears past the junior level.

Tucker and Dugas tend to focus on sports with a strong metabolic component, like running, cycling, and swimming, where skill plays less of a role than endurance, which is highly trainable but has a strong genetic component nonetheless:

The study that is needed to answer this question is to take a large, random group of people and expose them to training, and then to measure how much they improve. And this has been done. There are four studies, summarized in the figure below, where big groups have been put through a supervised training programme, and their VO2max measured as an index of fitness.

So, on average, VO2max will improve by 15% as a result of training. In some studies, it’s been as high as 19%, in others, 9%. This may be due to differences in the training programme, or the people involved. However, what you should be asking, especially given our look at Ericsson’s violin study and the chess paper, is “What are the individual differences that make up that 15%, and what is the genetic impact in these studies?”

And for this, we have a paper by Claude Bouchard from earlier this year. In this study, 470 untrained volunteers were put through five months of training, and their fitness levels measured before and after. The figure below shows the result:

As you might expect, most people improve by average amounts — 38% of the volunteers improved by between 300 and 500 ml/min (shown by the yellow and green bars in the breakdown of responders section). But either side of these “typical responses”, you see the extremes — the “low responders” shown in reds and oranges, and the “high responders” shown in blues and purples. 4% of the volunteers improved by 800 ml/min or more, whereas 7% improved by less than 100 ml/min.

Overall, there was a range of changes in VO2max all the way from 100 ml/min (basically no improvement) to over 1,000 ml/min. That’s a 10-fold difference. You may recall that yesterday, we saw how chess expertise showed an 8-fold difference between the fastest and slowest to succeed at reaching Master level. It seems that a similar range of responses occurs for physiology.

The end result is that the bottom 5% of the sample, those who responded the least, improved their VO2max by less than 4%. On the other end, the high responders, the top 5%, improved by 40%. That is an astonishing difference, and the simple and obvious question is: where are you most likely to find an endurance athlete in this sample? The answer is on the far right — the individual who shows large adaptations to training, improves quickly and then reaches a higher ceiling. I am sure that every one of you reading this knows one of each of these people; perhaps you are one of them!
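
Here is a sketch of how such a response distribution gets summarized; the cut-offs mirror the bins quoted above, but the sample values are made up for illustration and are not the Bouchard data.

    def summarize_responses(changes_ml_min):
        """Summarize individual VO2max changes (ml/min) after a training block."""
        n = len(changes_ml_min)
        low = sum(c < 100 for c in changes_ml_min) / n        # "low responders"
        typical = sum(300 <= c < 500 for c in changes_ml_min) / n
        high = sum(c >= 800 for c in changes_ml_min) / n      # "high responders"
        fold = max(changes_ml_min) / max(min(changes_ml_min), 1)
        return low, typical, high, fold

    # Hypothetical sample, for illustration only.
    sample = [80, 150, 320, 350, 400, 420, 480, 520, 600, 850, 1_050]
    low, typical, high, fold = summarize_responses(sample)
    print(f"low responders: {low:.0%}, typical: {typical:.0%}, "
          f"high responders: {high:.0%}, max/min spread: {fold:.0f}x")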

We should expect to see similar patterns in strength, power, flexibility, etc. — different people start at different levels and then respond to training and conditioning differently.

Comments

  1. You’re being too harsh. It takes well over 15,000 hours of deliberate practice to become a world-class strawman.

  2. Kristófer Máni says:

    How many hours does it take to become a world-class soccer player?
