It conquered the office

Friday, April 21st, 2017

Writing just before the Industrial Revolution really kicked off, Adam Smith famously used a pin factory to illustrate the advantages of specialization, Virginia Postrel reminds us:

By improving workers’ skills and encouraging purpose-built machinery, the division of labor leads to miraculous productivity gains. Even a small and ill-equipped manufacturer, Smith wrote in The Wealth of Nations, could boost each worker’s output from a handful of pins a day to nearly 5,000.

In the early 19th century, that number jumped an order of magnitude with the introduction of American inventor John Howe’s pin-making machine. It was “one of the marvels of the age, reported on in every major journal and encyclopedia of the time,” writes historian of technology Steven Lubar. In 1839, the Howe factory had three machines making 24,000 pins a day — and the inventor was clamoring for pin tariffs to offset the nearly 25 percent tax that pin makers had to pay on imported brass wire, a reminder that punitive tariffs hurt domestic manufacturers as well as consumers.


Nowadays, we think of straight pins as sewing supplies. But they weren’t always a specialty product. In Smith’s time and for a century after, pins were a multipurpose fastening technology. Straight pins functioned as buttons, snaps, hooks and eyes, safety pins, zippers, and Velcro. They closed ladies’ bodices, secured men’s neckerchiefs, and held on babies’ diapers. A prudent 19th century woman always kept a supply at hand, leading a Chicago Tribune writer to opine that the practice encouraged poor workmanship in women’s clothes: “The greatest scorner of woman is the maker of the readymade, who would not dare to sew on masculine buttons with but a single thread, yet will be content to give the feminine hook and eye but a promise of fixedness, trusting to the pin to do the rest.”

Most significantly, pins fastened paper. Before Scotch tape or Command-V, authors including Jane Austen used them to cut and paste manuscript revisions. The Bodleian Library in Oxford maintains an inventory of “dated and datable pins” removed from manuscripts going as far back as 1617.


But a better solution was on its way. In 1899, an inventor in the pin-making capital of Waterbury, Connecticut, patented a “machine for making paper clips.” William Middlebrook’s patent application, observed Henry Petroski in The Evolution of Useful Things, “showed a perfectly proportioned Gem.”

It was that paper clip design that conquered the office and consigned pins to their current home in the sewing basket.

US healthcare is famous for three things

Wednesday, April 12th, 2017

US healthcare is famous for three things, Ben Southwood notes:

It’s expensive, it’s not universal, and it has poor outcomes. The US spends around $7,000 per person on healthcare every year, or roughly 18% of GDP; the next highest spender is Switzerland, which spends about $4,500. Before Obamacare, approximately 15% of the US population were persistently uninsured (8.6% still are). And as this chart neatly shows, their overall outcome on the most important variable — overall life expectancy — is fairly poor.

But some of this criticism is wrongheaded and simplistic: when you slice the data up more reasonably, US outcomes look impressive, but being the world’s outrider is much more expensive than following behind. What’s more, most of the solutions people offer just don’t get to the heart of the issue: if you give people freedom they’ll spend a lot on healthcare.

The US undoubtedly spends a huge amount on healthcare. One popular narrative is that because of market failures and/or extreme overregulation in healthcare, prices are excessively high. So Americans with insurance (or covered by Medicare, the universal system for the elderly, or Medicaid, the government system for the poor) get the same as other developed world citizens, but those without get very poor care and die younger. A system like the NHS solves the problem, according to this view, with bulk buying of land, labour, and inputs, better incentives, and universal coverage.

But there are some serious flaws in this theory. Firstly, extending insurance to the previously-uninsured doesn’t, in America, seem to have large benefits. For example, a recent NBER paper found no overall health gains from the massive insurance expansion under Obamacare.* A famous RAND study found minuscule benefits over decades from giving out free insurance to the previously uninsured in the 1970s. In fact, over and above the basics, insuring those who choose not to get insurance doesn’t ever seem to have large gains. Indeed, there is wide geographic variation in the life expectancy among the low income in the US, but this doesn’t even correlate with access to medical care! This makes it unlikely that the gap between the US and the rest is explained by universality.

To find the answer, consider the main two ingredients that go into health outcomes. One is health, and the other is treatment. If latent health is the same across the Western world, we can presume that any differences come from differences in treatment. But this is simply not the case. Obesity is far higher in the USA than in any other major developed country. Obviously it is a public health problem, but it’s unrealistic to blame it on the US system of paying for doctors, administrators, hospitals, equipment and drugs.

In fact, in the US case it’s not even obesity, or indeed their greater pre-existing disease burden, that is doing most of the work in dragging their life expectancy down; it’s accidental and violent deaths. It is tragic that the US is so dangerous, but it’s not the fault of the healthcare system; indeed, it’s an extra burden that US healthcare spending must bear. Simply normalising for violent and accidental death puts the USA right at the top of the life expectancy rankings.

One of our cultural problems, Arnold Kling adds, is that we spend too much on health care and not enough on public health.

Above-median income and close to zero saving

Tuesday, March 28th, 2017

There is a significant portion of the population with above-median income and close to zero saving, Arnold Kling notes:

I think it is hard to tell a story that explains that in terms of rational behavior. Remember, we are talking about a lot of people, not just a few random exceptions.

A Tale of Two Bell Curves

Monday, March 27th, 2017

Bo and Ben Winegard tell a tale of two Bell Curves:

To paraphrase Mark Twain, an infamous book is one that people castigate but do not read. Perhaps no modern work better fits this description than The Bell Curve by political scientist Charles Murray and the late psychologist Richard J. Herrnstein. Published in 1994, the book is a sprawling (872 pages) but surprisingly entertaining analysis of the increasing importance of cognitive ability in the United States.


There are two versions of The Bell Curve. The first is a disgusting and bigoted fraud. The second is a judicious but provocative look at intelligence and its increasing importance in the United States. The first is a fiction. And the second is the real Bell Curve. Because many, if not most, of the pundits who assailed The Bell Curve have not bothered to read it, the fictitious Bell Curve has thrived and continues to inspire furious denunciations. We have suggested that almost all of the proposals of The Bell Curve are plausible. Of course, it is possible that some are incorrect. But we will only know which ones if people responsibly engage the real Bell Curve instead of castigating a caricature.

Masters of reality, not big thinkers

Sunday, March 26th, 2017

Joel Mokyr’s A Culture of Growth attempts to answer the big question: Why did science and technology (and, with them, colonial power) spread west to east in the modern age, instead of the other way around?

He reminds us that the skirmishing of philosophers and their ideas, the preoccupation of popular historians, is in many ways a sideshow — that the revolution that gave Europe dominance was, above all, scientific, and that the scientific revolution was, above all, an artisanal revolution. Though the élite that gets sneered at, by Trumpites and neo-Marxists alike, is composed of philosophers and professors and journalists, the actual élite of modern societies is composed of engineers, mechanics, and artisans — masters of reality, not big thinkers.

Mokyr sees this as the purloined letter of history, the obvious point that people keep missing because it’s obvious. More genuinely revolutionary than either Voltaire or Rousseau, he suggests, are such overlooked Renaissance texts as Tommaso Campanella’s “The City of the Sun,” a sort of proto-Masonic hymn to people who know how to do things. It posits a Utopia whose inhabitants “considered the noblest man to be the one that has mastered the most skills… like those of the blacksmith and mason.” The real upheavals in minds, he argues, were always made in the margins. He notes that a disproportionate number of the men who made the scientific and industrial revolution in Britain didn’t go to Oxford or Cambridge but got artisanal training out on the sides. (He could have included on this list Michael Faraday, the man who grasped the nature of electromagnetic induction, and who worked some of his early life as a valet.) What answers the prince’s question was over in Dr. Johnson’s own apartment, since Johnson was himself an eccentric given to chemistry experiments — “stinks,” as snobbish Englishmen call them.

As in painting and drawing, manual dexterity counted for as much as deep thoughts — more, in truth, for everyone had the deep thoughts, and it took dexterity to make telescopes that really worked. Mokyr knows Asian history, and shows, in a truly humbling display of erudition, that in China the minds evolved but not the makers. The Chinese enlightenment happened, but it was strictly a thinker’s enlightenment, where Mandarins never talked much to the manufacturers. In this account, Voltaire and Rousseau are mere vapor, rising from a steam engine as it races forward. It was the perpetual conversation between technicians and thinkers that made the Enlightenment advance. TED talks are a licensed subject for satire, but in Mokyr’s view TED talks are, in effect, what separate modernity from antiquity and the West from the East. Guys who think big thoughts talking to guys who make cool machines — that’s where the leap happens.

Meaning, even a very small meaning, can matter a lot

Friday, March 17th, 2017

Dan Ariely’s studies can be darkly humorous:

In their first experiment, Ariely’s team asked college students to find sets of repeated letters on a sheet of paper. Some of the students’ work was reviewed by a “supervisor” as soon as it was turned in. Other students were told in advance that their work would be collected but not reviewed, and still others watched as their papers were shredded immediately upon completion.

Each of the students was paid 55 cents for completing the first sheet, and five cents less for each sheet thereafter, and allowed to stop working at any point. The research team found that people whose work was reviewed and acknowledged by the “supervisor” were willing to do more work for less pay than those whose work was ignored or shredded.

In a second experiment, participants assembled Bionicles, toy figurines made by Lego. The researchers made the Bionicle project somewhat meaningful for half of the students, whose completed toys were displayed on their desks for the duration of the experiment, while the students assembled as many Bionicles as they wished. “Even though this may not have been especially meaningful work, the students felt productive seeing all of those Bionicles lined up on the desk, and they kept on building them even when the pay was rather low,” Ariely said.

The rest of the participants, whose work was intended to be devoid of meaning, gave their completed Bionicles to supervisors in exchange for another box of parts to assemble. The supervisors immediately disassembled the completed figurines, and returned the box of parts to the students when they were ready for the next round. “These poor individuals were assembling the same two Bionicles over and over. Every time they finished one, it was simply torn apart and given back to them later.” The students in the meaningful and non-meaningful conditions were each paid according to a scale that began at $2.00 for the first Bionicle and decreased by 11 cents for each subsequent figurine assembled.
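The decreasing piece-rate schedules in both experiments are easy to work out; here is a minimal sketch (the `total_pay` helper and the zero floor are my assumptions for illustration, not details from the study):

```python
def total_pay(first_cents, step_cents, n_items):
    """Cumulative pay, in dollars, for n_items on a piece rate that
    starts at first_cents and falls by step_cents per item (floored at zero)."""
    return sum(max(first_cents - step_cents * i, 0) for i in range(n_items)) / 100

# Letter-search task: 55 cents for the first sheet, 5 cents less per sheet
print(total_pay(55, 5, 10))    # $3.25 for ten sheets
# Bionicle task: $2.00 for the first figurine, 11 cents less per figurine
print(total_pay(200, 11, 10))  # $15.05 for ten figurines
```

The falling rate gives every participant a natural stopping point; the experimental question was how far a sense of meaning pushed that point out.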

“Adding to the evidence from the first experiment, this experiment also showed that meaning, even a very small meaning, can matter a lot,” Ariely said. Students who were allowed to collect their assembled Bionicles built an average of 10.2 figurines, while those whose work was disassembled built an average of 7.2. Students whose work was not meaningful required a median level of pay 40 percent higher than students whose work was meaningful.

“These experiments clearly demonstrate what many of us have known intuitively for some time. Doing meaningful work is rewarding in itself, and we are willing to do more work for less pay when we feel our work has some sort of purpose, no matter how small,” Ariely said. “But it is also important to point out that when we asked people to estimate the effect of meaning on labor, they dramatically underestimated the effects. This means, that while we recognize the general effect of meaning on motivation, we are not sufficiently appreciating its magnitude and importance.”

Neoliberal management may reduce productivity

Thursday, March 16th, 2017

Chris Dillow suggests some ways that neoliberal management may reduce productivity:

Good management can be bad for investment and innovation. William Nordhaus has shown that the profits from innovation are small. And Charles Lee and Salman Arif have shown that capital spending is often motivated by sentiment rather than by cold-minded appraisal with the result that it often leads to falling profits. We can interpret the slowdowns in innovation and investment as evidence that bosses have wised up to these facts. Also, an emphasis upon cost-effectiveness, routine and best practice can deny employees the space and time to experiment and innovate. Either way, Joseph Schumpeter’s point seems valid: capitalist growth requires a buccaneering spirit which is killed off by rational bureaucracy.

As Jeffrey Nielsen has argued, “rank-based” organizations can demotivate more junior staff, who expect to be told what to do rather than use their initiative.

The high-powered incentives offered to bosses can backfire. They can incentivize rent-seeking, office politics and jockeying for the top job rather than getting on with one’s work. They can crowd out intrinsic motivations such as professional pride. And they can divert managers towards doing tasks that are easily monitored rather than ones which are important to an organization but harder to measure: for example, cost-cutting can be monitored and incentivized but maintaining a healthy corporate culture is less easily measured and so can be neglected by crude incentive schemes.

Empowering management can increase opposition to change. As McAfee and Brynjolfsson have shown, reaping the benefits of technical change often requires organizational change. But well-paid bosses have little reason to want to rock the boat by undertaking such change. The upshot is that we are stuck in what van Ark calls the “installation phase” of the digital economy rather than the deployment phase. As Joel Mokyr has said, the forces of conservatism eventually suppress technical creativity.

The Only Thing That’s Curbed Inequality

Saturday, March 11th, 2017

The only thing that’s curbed inequality has been catastrophe, Walter Scheidel notes:

Throughout history, only massive, violent shocks that upended the established order proved powerful enough to flatten disparities in income and wealth. They appeared in four different guises: mass-mobilization warfare, violent and transformative revolutions, state collapse, and catastrophic epidemics. Hundreds of millions perished in their wake, and by the time these crises had passed, the gap between rich and poor had shrunk.


But what of less murderous mechanisms of combating inequality? History offers little comfort. Land reform often foundered or was subverted by the propertied. Successful programs that managed to parcel out land to the poor and made sure they kept it owed much to the threat or exercise of violence, from Mexico during its revolution to postwar Japan, South Korea, and Taiwan. Just as with the financial crisis of 2008, macroeconomic downturns rarely hurt the rich for more than a few years. Democracy on its own does not consistently lower inequality. And while improving access to education can indeed narrow income gaps, it is striking to see that American wage premiums for the credentialed collapsed precisely during both world wars.

Disrupters of the world unite!

Friday, February 17th, 2017

Tyler Cowen’s The Complacent Class suggests that America lost its taste for risk, and Edward Luce opens his review by poking a bit of fun at innovative start-ups:

Walk into any start-up company in America and you will likely see an almost identical decor: the walls will have been dutifully stripped of paint; the workplace will be littered with the same multicoloured pouffes; and most of its denizens will be wearing a variation on the casual hipster uniform. In an age of hyper-individualism, entrepreneurs strike a remarkably similar pose. The same applies to those who have refurbished their university common areas, set up corporate “chill-out zones”, or stripped their downtown apartments to look like a Silicon Valley unicorn. Everyone wants that creative energy to rub off on them. Disrupters of the world unite!

We should spend less

Thursday, February 16th, 2017

Arnold Kling shares what he believes about education:

1. The U.S. leads the world in health care spending per person, but not in health care outcomes. Many people look at that and say that health care costs too much in the U.S., and we should be able to get the same or better outcomes by spending less. Maybe that is correct, maybe not. That is not the point here. But —

2. the U.S. leads the world in K-12 education spending per student, but not in student outcomes. Yet nobody says that education costs too much and that we should spend less. Except —

3. me. I believe that we spend way too much on K-12 education.

4. We spend as much as we do on education in part because it is a sacred cow. We want to show that we care about children. (Yes, “showing that you care” is also Robin Hanson’s explanation for health care spending.)


8. I do not expect educational outcomes to be any better under a voucher system. That is because I believe in the Null Hypothesis, which is that educational interventions do not make a difference.

9. However, a competitive market in education would drive down costs, so that the U.S. would get the same outcomes with much less spending.

The dangers of status competition

Sunday, February 12th, 2017

Status competition can have corrosive effects:

Neighbours of lottery winners often make extravagant status good purchases (Kuhn et al. 2011) and are more likely to go bankrupt (Agarwal, Mikhed, and Scholnick 2016). Card et al. (2012) and Ashraf et al. (2014) show that job satisfaction and performance suffer when there are direct rankings and explicit comparisons with others in the same group.

Status competition can kill you — if you’re a fighter pilot:

During the height of the [Battle of Britain], in the summer of 1940, two of Germany’s highest-scoring aces did something unexpected: they went deer hunting. Werner Mölders, commanding a squadron of fighters on the Channel Coast, was asked by Hermann Göring, head of the German air force, to confer with him for three days at Karinhall, his country retreat. Mölders at first refused, as he was competing against Adolf Galland for the honour of being the highest-scoring German ace. Mölders relented only on the condition that Galland would also be grounded for three days. Göring, who had also been a fighter ace in World War I, agreed and brought Galland along on the hunting trip (Galland 1993).

So, in the middle of the defining conflict for the German air force, two of its best pilots had been pulled from the front line – and one of them was not brought because there was an operational or administrative need, but to maintain a ‘level killing field’ with his competition. Competition for status was intense amongst German pilots. It was behind the elaborate systems of awards and medals that pervaded the military. Similar awards are also common in many other walks of life, from academia to the top ranks of business and politics.

Most air forces during WWII devoted considerable bureaucratic attention to filing, witnessing, adjudicating, and aggregating the victory claims made by their pilots. In the German system, pilots had to give the grid coordinates, aircraft type, type of destruction (pilot bail-out, impact, explosion, and so on) and time to file a claim. The claim would have to be witnessed by another pilot to stand a chance of being accepted. Claims would be sent to a central office of the Luftwaffe for adjudication, where many would be rejected.

This elaborate system was necessary because awards and medals were closely tied to victory scores. The Luftwaffe awarded medals based on informal quotas. For example, in early 1942 for a pilot to have a chance of receiving the Knight’s Cross with Oak Leaves and Swords, that pilot would have needed 100 victories.

We have data on the victory claims of more than 5,000 pilots for the entire conflict, 1939-45. These pilots filed claims that they had shot down 54,800 enemy planes. Victories were extremely unevenly distributed. The highest-scoring ace, Erich Hartmann, claimed more than 350 victories, and the top 100 pilots scored almost as many victories as the bottom 4,900. The maximum monthly victory score was 68, recorded in 1943 on the Eastern front.

These successes were bought at a high price (Figure 1). In an average month, 3.3% of pilots died. After two years of service, half the low-scoring pilots would have been killed. Amongst the better-performing pilots, only one-quarter would have survived. Towards the end of the war, loss rates became extremely high, averaging 25% or more from the spring of 1944 (Murray 1996).
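The survival arithmetic implied here can be checked in a couple of lines (a sketch: the 3.3% monthly loss rate is from the text; the constant-hazard assumption is mine):

```python
monthly_loss = 0.033          # average monthly death rate cited in the text
months = 24                   # two years of service
survival = (1 - monthly_loss) ** months
print(round(survival, 2))     # ~0.45: about half survive, matching the text
```

At the 25%-per-month loss rates of spring 1944, the same formula gives almost no survivors after a single year.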

Figure 1: Victory claims and exit rate among German fighter pilots, by month

Figure 2 summarises our key results. Good pilots – those whose average monthly victory score put them in the top 20% of the distribution – on average improved their victory score by 50%, from less than two to more than three a month, when the successes of their former peers were advertised. Pilots in the bottom 80% scored fewer victories overall, but also improved by a small margin. Strikingly, results are different for exit rates (‘exit’ usually meant death). Great pilots, on average, died more often, but they were not more likely to exit in times of peer recognition. The opposite was true for average and the poor pilots, whose exit rate increased by almost half. In other words, aces tried harder when a former colleague got a public pat on the back, but didn’t take many more risks. Average or poor pilots tried harder, were a bit more successful, but also tended to get themselves killed more often.

Figure 2: Victory and exit

German pilots during WWII had the highest numbers of aerial victories ever recorded:

The top 100 pilots of all time are all German.

What happened when the U.S. got rid of guest workers?

Friday, February 10th, 2017

What happened when the U.S. got rid of guest workers?

A team of economists looked at the mid-century “bracero” program, which allowed nearly half a million seasonal farm workers per year into the U.S. from Mexico. The Johnson administration terminated the program in 1964, creating a large-scale experiment on labor supply and demand.

The result wasn’t good news for American workers. Instead of hiring more native-born Americans at higher wages, farmers automated, changed crops or reduced production.

What companies get wrong about motivating their people

Sunday, January 22nd, 2017

Dan Ariely’s Payoff looks at what companies get wrong about motivating their people:

A few years ago, behavioral economist Dan Ariely conducted a study at an Intel semiconductor factory in Israel. Workers were given either a $30 bonus, a pizza voucher, or a complimentary text message from the boss at the end of the first workday of the week as an incentive to meet targets. (A separate control group received nothing.) Pizza, interestingly, was the best motivator on the first day, but over the course of a week the compliment had the best overall effect, even better than the cash. “When I get the money, I’m interested, when I’m not getting the money, I’m not so interested,” Ariely said in a recent interview. “Even relatively small bonuses can reframe to people how they think about work.”

“Purpose” has become a buzzword:

Often what it means is that the CEO picks a charity that they give money to. That’s often corporate social responsibility. But the reality is that a lot of meaning is about the small struggles in life and managing to overcome them and feeling a sense of progress.


Companies often don’t create this kind of sense of connection and meaning. They destroy it — unintentionally — with rules and regulations.


In many companies, in the name of bureaucracy and procedure and streamlining things, we’re basically eliminating people’s ability to use their own judgment. We think about people as cogs. And because of that we eliminate their motivation.

Ariely is largely against bonuses:

I don’t even think we should pay bonuses to CEOs. There’s lots of reasons to give bonuses. Some are for accounting purposes — a company says ‘Let’s not promise people a fixed amount of money: You’ll get at least x, above that we’ll do revenue sharing.’ I understand that. It depends on how much money we make. But when you have performance-contingent bonuses — and this goes back to the book — to motivate people, what you are assuming can hold people back. Imagine I paid you on a performance-contingent approach. What is my underlying assumption? My underlying assumption is that you know what you need to do but you’re too lazy to do it.


How many CEOs are just lazy? Who’d say, if they didn’t have the bonus, that I’m not interested in working? CEOs are deeply involved in their companies. Their egos are tied to it. The second thing they tell you after they say their name, often before they tell you how many kids they have and what hobbies they have, is what company they are leading. To think that they’re just working for a bonus is just completely crazy.

Populists are not fascists

Tuesday, January 17th, 2017

Comparisons between the United States today and Germany in the 1930s are becoming commonplace, Niall Ferguson notes, but there’s a better analogy:

Journalists are fond of saying that we are living in a time of “unprecedented” instability. In reality, as numerous studies have shown, our time is a period of remarkable stability in terms of conflict. In fact, viewed globally, there has been a small uptick in organized lethal violence since the misnamed Arab Spring. But even allowing for the horrors of the Syrian civil war, the world is an order of magnitude less dangerous than it was in the 1970s and 1980s, and a haven of peace and tranquility compared with the period between 1914 and 1945.

This point matters because the defining feature of interwar fascism was its militarism. Fascists wore uniforms. They marched in enormous and well-drilled parades and they planned wars. That is not what we see today.

So why do so many commentators feel that we are living through “unprecedented instability?” The answer, aside from plain ignorance of history, is that political populism has become a global phenomenon, and established politicians and political parties are struggling even to understand it, much less resist it. Yet populism is not such a mysterious thing, if one only has some historical knowledge. The important point is not to make the mistake of confusing it with fascism, which it resembles in only a few respects.

He lists the five ingredients for populism:

The first of these ingredients is a rise in immigration. In the past 45 years, the percentage of the population of the United States that is foreign-born has risen from below 5 percent in 1970 to over 13 percent in 2014—almost as high as the rates achieved between 1860 and 1910, which ranged between 13 percent and an all-time high of 14.7 percent in 1890.

So when people say, as they often do, that “the United States is a land based on immigration,” they are indulging in selective recollection. There was a period, between 1910 and 1970, when immigration drastically declined. It is only in relatively recent times that we have seen immigration reach levels comparable with those of a century ago, in what has justly been called the first age of globalization.

Ingredient number two is an increase in inequality. Drawing on the work done on income distribution by Thomas Piketty and Emmanuel Saez, we can see that we have recently regained the heights of inequality that were last seen in the pre-World War I period.

The share of income going to the top one percent of earners is back up from below 8 percent of total income in 1970 to above 20 percent of total income. The peak before the financial crisis, in 2007, was almost exactly the same as the peak on the eve of the Great Depression in 1928.

Ingredient number three is the perception of corruption. For populism to thrive, people have to start believing that the political establishment is no longer clean. Recent Gallup data on public approval of institutions in the United States show, among other things, notable drops in the standing of all institutions save the military and small businesses.

Just 9 percent of Americans have “a great deal” or “quite a lot” of confidence in the U.S. Congress—a remarkable figure. It is striking to see which other institutions are down near the bottom of the league. Big business is second-lowest, with just 21 percent of the public expressing confidence in it. Newspapers, television news, and the criminal justice system fare only slightly better. What is even more remarkable is the list of institutions that have fallen furthest in recent times: the U.S. Supreme Court now has just a 36 percent approval rating, down from a historical average of 44 percent, while the Presidency has dropped from 43 percent to 36 percent approval.

The financial crisis appears to have convinced many Americans—and not without good reason—that there is an unhealthy and likely corrupt relationship between political institutions, big business, and the media.

The fourth ingredient necessary for a populist backlash is a major financial crisis. The three biggest financial crises in modern history—if one uses the U.S. equity market index as the measure—were the crises of 1873, 1929, and 2008. Each was followed by a prolonged period of depressed economic performance, though these varied in their depth and duration.

In the most recent of these crises, the peak of the U.S. stock market was October 2007. With the onset of the financial crisis, we essentially replayed for about a year the events of 1929 and 1930. However, beginning in mid to late 2009, we bounced out of the crisis, thanks to a combination of monetary, fiscal, and Chinese stimulus, whereas the Great Depression was characterized by a deep and prolonged decline in stock prices, as well as much higher unemployment rates and lower growth.

The first of these historical crises is the least known: the post-1873 “great depression,” as contemporaries called it. What happened after 1873 was nothing as dramatic as 1929; it was more of a slow burn. The United States and, indeed, the world economy went from a financial crisis—which was driven by excessively loose monetary policy and real estate speculation, amongst other things—into a protracted period of deflation. Economic activity was much less impaired than in the 1930s. Yet the sustained decline in prices inflicted considerable pain, especially on indebted farmers, who complained (in reference to the then prevailing gold standard) that they were being “crucified on a cross of gold.”

We have come a long way since those days; gold is no longer a key component of the monetary base, and farmers are no longer a major part of the workforce. Nevertheless, in my view, the period after 1873 is much more like our own time, both economically and politically, than the period after 1929.

There is still one missing ingredient to be added. If one were cooking, this would be the moment when flames would leap from the pan. The flammable ingredient is, of course, the demagogue, for populist demagogues react vituperatively and explosively against all of the aforementioned four ingredients.

Populists are not fascists:

They prefer trade wars to actual wars; administrative border walls to more defensible fortifications. The maladies they seek to cure are not imaginary: uncontrolled rising immigration, widening inequality, free trade with “unfree” countries, and political cronyism are all things that a substantial section of the electorate have some reason to dislike. The problem with populism is that its remedies are wrong and, in fact, counterproductive.

What we most have to fear—as was true of Brexit—is not therefore Armageddon, but something more prosaic: an attempt to reverse certain aspects of globalization, followed by disappointment when the snake oil does not really cure the patient’s ills, followed by the emergence of a new and ostensibly more progressive set of remedies for our current malaise.

Building a 21st Century FDA

Monday, January 16th, 2017

Building a 21st Century FDA shouldn’t be hard:

A 2010 study in the Journal of Clinical Oncology by researchers from the M.D. Anderson Cancer Center in Houston, Texas found that the time from drug discovery to marketing increased from eight years in 1960 to 12 to 15 years in 2010. Five years of this increase results from new regulations boosting the lengths and costs of clinical trials. The regulators aim to prevent cancer patients from dying from toxic new drugs. However, the cancer researchers calculate that the delays caused by requirements for lengthier trials have instead resulted in the loss of 300,000 patient life-years while saving only 16 life-years. If true, this is a scandal.
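The trade-off the researchers describe can be checked with back-of-the-envelope arithmetic. A minimal sketch, using only the figures quoted from the study (not independently verified):

```python
# Figures as reported from the 2010 Journal of Clinical Oncology study.
life_years_lost_to_delay = 300_000  # life-years lost while drugs sat in lengthier trials
life_years_saved_by_delay = 16      # life-years saved by the added safety vetting

# Implied cost-benefit ratio of the extra trial requirements.
ratio = life_years_lost_to_delay / life_years_saved_by_delay
print(f"Life-years lost per life-year saved: {ratio:,.0f}")
```

On the study's own numbers, the delays cost 18,750 life-years for every life-year they save, which is why the author calls the result a scandal if true.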

How much higher are the costs of getting a new drug through the FDA gantlet? A new study, “Stifling New Cures: The True Cost of Lengthy Clinical Drug Trials,” by Manhattan Institute senior fellow Avik Roy points out that in 1975 the pharmaceutical industry spent about $100 million on research and development (R&D) before getting a new drug approved by the FDA. By 1987, that figure had tripled to $300 million, and it has since more than quadrupled to $1.3 billion. But even these figures may be too low. Roy cites calculations by Matthew Herper of Forbes, who divides the $802 billion in R&D spending by 12 big pharma companies since 1997 by the 139 drugs those companies have since gotten approved, yielding a cost of $5.8 billion per drug.
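Herper's estimate is simple division over the aggregate figures quoted above. A short sketch reproducing it:

```python
# Matthew Herper's per-drug cost estimate, as quoted: total R&D spending
# by 12 big pharma companies since 1997, divided by the drugs approved.
total_rnd_spending = 802e9  # $802 billion
drugs_approved = 139

cost_per_drug = total_rnd_spending / drugs_approved
print(f"Implied cost per approved drug: ${cost_per_drug / 1e9:.1f} billion")
```

The quotient comes out to roughly $5.8 billion per approved drug, far above the $1.3 billion figure from the narrower per-drug accounting.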

Currently, new pharmaceuticals typically go through Phase I trials using fewer than 100 patients to get preliminary information on the drug’s safety. Phase II trials involve a few hundred subjects and further evaluate a new drug’s safety and efficacy. Phase III trials enroll thousands of patients to see how well the drug works compared to a placebo and/or other therapies and to look for bad side effects.

“The biggest driver of this phenomenal increase has been the regulatory process governing Phase III clinical trials of new pharmaceuticals on human volunteers,” notes Roy. Between 1999 and 2005, the average number of procedures per clinical trial rose 65 percent, staff workload rose 67 percent, and trial length grew 70 percent.

Not only do FDA demands for bigger Phase III clinical trials delay the introduction of effective new medicines, they dramatically boost costs for bringing them to market. Roy acknowledges that pre-clinical research that aims to identify promising therapeutic compounds absorbs 28 percent of the R&D budgets of pharmaceutical companies. Setting those discovery costs aside, Roy calculates that the Phase III trials “typically represent 90 percent or more of the cost of developing an individual drug all the way from laboratory to market.”