Walk into any start-up company in America and you will likely see an almost identical decor: the walls will have been dutifully stripped of paint; the workplace will be littered with the same multicoloured pouffes; and most of its denizens will be wearing a variation on the casual hipster uniform. In an age of hyper-individualism, entrepreneurs strike a remarkably similar pose. The same applies to those who have refurbished their university common areas, set up corporate “chill-out zones”, or stripped their downtown apartments to look like a Silicon Valley unicorn. Everyone wants that creative energy to rub off on them. Disrupters of the world unite!
Arnold Kling shares what he believes about education:
1. The U.S. leads the world in health care spending per person, but not in health care outcomes. Many people look at that and say that health care costs too much in the U.S., and we should be able to get the same or better outcomes by spending less. Maybe that is correct, maybe not. That is not the point here. But —
2. the U.S. leads the world in K-12 education spending per student, but not in student outcomes. Yet nobody says that education costs too much and that we should spend less. Except —
3. me. I believe that we spend way too much on K-12 education.
4. We spend as much as we do on education in part because it is a sacred cow. We want to show that we care about children. (Yes, “showing that you care” is also Robin Hanson’s explanation for health care spending.)
8. I do not expect educational outcomes to be any better under a voucher system. That is because I believe in the Null Hypothesis, which is that educational interventions do not make a difference.
9. However, a competitive market in education would drive down costs, so that the U.S. would get the same outcomes with much less spending.
Status competition can have corrosive effects:
Neighbours of lottery winners often make extravagant status good purchases (Kuhn et al. 2011) and are more likely to go bankrupt (Agarwal, Mikhed, and Scholnick 2016). Card et al. (2012) and Ashraf et al. (2014) show that job satisfaction and performance suffer when there are direct rankings and explicit comparisons with others in the same group.
Status competition can kill you — if you’re a fighter pilot:
During the height of the [Battle of Britain], in the summer of 1940, two of Germany’s highest-scoring aces did something unexpected: they went deer hunting. Werner Mölders, commanding a squadron of fighters on the Channel Coast, was asked by Hermann Göring, head of the German air force, to confer with him for three days at Karinhall, his country retreat. Mölders at first refused, as he was competing against Adolf Galland for the honour of being the highest-scoring German ace. Mölders relented only on the condition that Galland would also be grounded for three days. Göring, who had also been a fighter ace in World War I, agreed and brought Galland along on the hunting trip (Galland 1993).
So, in the middle of the defining conflict for the German air force, two of its best pilots had been pulled from the front line – and one of them was not brought because there was an operational or administrative need, but to maintain a ‘level killing field’ with his competition. Competition for status was intense amongst German pilots. It was behind the elaborate systems of awards and medals that pervaded the military. Similar awards are also common in many other walks of life, from academia to the top ranks of business and politics.
Most air forces during WWII devoted considerable bureaucratic attention to filing, witnessing, adjudicating, and aggregating the victory claims made by their pilots. In the German system, pilots had to give the grid coordinates, aircraft type, type of destruction (pilot bail-out, impact, explosion, and so on) and time to file a claim. The claim would have to be witnessed by another pilot to stand a chance of being accepted. Claims would be sent to a central office of the Luftwaffe for adjudication, where many would be rejected.
This elaborate system was necessary because awards and medals were closely tied to victory scores. The Luftwaffe awarded medals based on informal quotas. For example, in early 1942 for a pilot to have a chance of receiving the Knight’s Cross with Oak Leaves and Swords, that pilot would have needed 100 victories.
We have data on the victory claims of more than 5,000 pilots for the entire conflict, 1939-45. These pilots filed claims that they had shot down 54,800 enemy planes. Victories were extremely unevenly distributed. The highest-scoring ace, Erich Hartmann, claimed more than 350 victories, and the top 100 pilots scored almost as many victories as the bottom 4,900. The maximum monthly victory score was 68, recorded in 1943 on the Eastern front.
These successes were bought at a high price (Figure 1). In an average month, 3.3% of pilots died. After two years of service, half the low-scoring pilots would have been killed. Amongst the better-performing pilots, only one-quarter would have survived. Towards the end of the war, loss rates became extremely high, averaging 25% or more from the spring of 1944 (Murray 1996).
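The two-year survival figures follow from compounding the monthly loss rate. A minimal sanity check, using only the 3.3% monthly death rate from the excerpt (the function name is mine):

```python
# Probability of surviving n months given a constant monthly death rate.
def survival_probability(monthly_death_rate: float, months: int) -> float:
    return (1 - monthly_death_rate) ** months

# 3.3% of pilots died in an average month; two years of service is 24 months.
print(f"{survival_probability(0.033, 24):.1%}")  # about 45% survive, i.e. roughly half killed
```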
Figure 2 summarises our key results. Good pilots – those whose average monthly victory score put them in the top 20% of the distribution – on average improved their victory score by 50%, from less than two to more than three a month, when the successes of their former peers were advertised. Pilots in the bottom 80% scored fewer victories overall, but also improved by a small margin. Strikingly, results are different for exit rates (‘exit’ usually meant death). Great pilots, on average, died more often, but they were not more likely to exit in times of peer recognition. The opposite was true for average and the poor pilots, whose exit rate increased by almost half. In other words, aces tried harder when a former colleague got a public pat on the back, but didn’t take many more risks. Average or poor pilots tried harder, were a bit more successful, but also tended to get themselves killed more often.
German pilots during WWII had the highest numbers of aerial victories ever recorded:
The top 100 pilots of all time are all German.
What happened when the U.S. got rid of guest workers?
A team of economists looked at the mid-century “bracero” program, which allowed nearly half a million seasonal farm workers per year into the U.S. from Mexico. The Johnson administration terminated the program in 1964, creating a large-scale experiment on labor supply and demand.
The result wasn’t good news for American workers. Instead of hiring more native-born Americans at higher wages, farmers automated, changed crops or reduced production.
Dan Ariely’s Payoff looks at what companies get wrong about motivating their people:
A few years ago, behavioral economist Dan Ariely conducted a study at an Intel semiconductor factory in Israel. Workers were given either a $30 bonus, a pizza voucher or a complimentary text message from the boss at the end of the first workday of the week as an incentive to meet targets. (A separate control group received nothing.) Pizza, interestingly, was the best motivator on the first day, but over the course of a week the compliment had the best overall effect, even better than the cash. “When I get the money, I’m interested, when I’m not getting the money, I’m not so interested,” Ariely said in a recent interview. “Even relatively small bonuses can reframe to people how they think about work.”
“Purpose” has become a buzzword:
Often what it means is that the CEO picks a charity that they give money to. That’s often corporate social responsibility. But the reality is that a lot of meaning is about the small struggles in life and managing to overcome them and feeling a sense of progress.
Companies often don’t create this kind of sense of connection and meaning. They destroy it — unintentionally — with rules and regulations.
In many companies, in the name of bureaucracy and procedure and streamlining things, we’re basically eliminating people’s ability to use their own judgment. We think about people as cogs. And because of that we eliminate their motivation.
Ariely is largely against bonuses:
I don’t even think we should pay bonuses to CEOs. There’s lots of reasons to give bonuses. Some are for accounting purposes — a company says ‘Let’s not promise people a fixed amount of money: You’ll get at least x, above that we’ll do revenue sharing.’ I understand that. It depends on how much money we make. But when you have performance-contingent bonuses — and this goes back to the book — to motivate people, what you are assuming can hold people back. Imagine I paid you on a performance-contingent approach. What is my underlying assumption? My underlying assumption is that you know what you need to do but you’re too lazy to do it.
How many CEOs are just lazy? Who’d say, if they didn’t have the bonus, that I’m not interested in working? CEOs are deeply involved in their companies. Their egos are tied to it. The second thing they tell you after they say their name, often before they tell you how many kids they have and what hobbies they have is what company they are leading. To think that they’re just working for a bonus is just completely crazy.
Comparisons between the United States today and Germany in the 1930s are becoming commonplace, but Niall Ferguson argues there’s a better analogy:
Journalists are fond of saying that we are living in a time of “unprecedented” instability. In reality, as numerous studies have shown, our time is a period of remarkable stability in terms of conflict. In fact, viewed globally, there has been a small uptick in organized lethal violence since the misnamed Arab Spring. But even allowing for the horrors of the Syrian civil war, the world is an order of magnitude less dangerous than it was in the 1970s and 1980s, and a haven of peace and tranquility compared with the period between 1914 and 1945.
This point matters because the defining feature of interwar fascism was its militarism. Fascists wore uniforms. They marched in enormous and well-drilled parades and they planned wars. That is not what we see today.
So why do so many commentators feel that we are living through “unprecedented instability?” The answer, aside from plain ignorance of history, is that political populism has become a global phenomenon, and established politicians and political parties are struggling even to understand it, much less resist it. Yet populism is not such a mysterious thing, if one only has some historical knowledge. The important point is not to make the mistake of confusing it with fascism, which it resembles in only a few respects.
He lists the five ingredients for populism:
The first of these ingredients is a rise in immigration. In the past 45 years, the percentage of the population of the United States that is foreign-born has risen from below 5 percent in 1970 to over 13 percent in 2014—almost as high as the rates achieved between 1860 and 1910, which ranged between 13 percent and an all-time high of 14.7 percent in 1890.
So when people say, as they often do, that “the United States is a land based on immigration,” they are indulging in selective recollection. There was a period, between 1910 and 1970, when immigration drastically declined. It is only in relatively recent times that we have seen immigration reach levels comparable with those of a century ago, in what has justly been called the first age of globalization.
Ingredient number two is an increase in inequality. Drawing on the work done on income distribution by Thomas Piketty and Emmanuel Saez, we can see that we have recently regained the heights of inequality that were last seen in the pre-World War I period.
The share of income going to the top one percent of earners is back up from below 8 percent of total income in 1970 to above 20 percent of total income. The peak before the financial crisis, in 2007, was almost exactly the same as the peak on the eve of the Great Depression in 1928.
Ingredient number three is the perception of corruption. For populism to thrive, people have to start believing that the political establishment is no longer clean. Recent Gallup data on public approval of institutions in the United States show, among other things, notable drops in the standing of all institutions save the military and small businesses.
Just 9 percent of Americans have “a great deal” or “quite a lot” of confidence in the U.S. Congress—a remarkable figure. It is striking to see which other institutions are down near the bottom of the league. Big business is second-lowest, with just 21 percent of the public expressing confidence in it. Newspapers, television news, and the criminal justice system fare only slightly better. What is even more remarkable is the list of institutions that have fallen furthest in recent times: the U.S. Supreme Court now has just a 36 percent approval rating, down from a historical average of 44 percent, while the Presidency has dropped from 43 percent to 36 percent approval.
The financial crisis appears to have convinced many Americans—and not without good reason—that there is an unhealthy and likely corrupt relationship between political institutions, big business, and the media.
The fourth ingredient necessary for a populist backlash is a major financial crisis. The three biggest financial crises in modern history—if one uses the U.S. equity market index as the measure—were the crises of 1873, 1929, and 2008. Each was followed by a prolonged period of depressed economic performance, though these varied in their depth and duration.
In the most recent of these crises, the peak of the U.S. stock market was October 2007. With the onset of the financial crisis, we essentially replayed for about a year the events of 1929 and 1930. However, beginning in mid to late 2009, we bounced out of the crisis, thanks to a combination of monetary, fiscal, and Chinese stimulus, whereas the Great Depression was characterized by a deep and prolonged decline in stock prices, as well as much higher unemployment rates and lower growth.
The first of these historical crises is the least known: the post-1873 “great depression,” as contemporaries called it. What happened after 1873 was nothing as dramatic as 1929; it was more of a slow burn. The United States and, indeed, the world economy went from a financial crisis—which was driven by excessively loose monetary policy and real estate speculation, amongst other things—into a protracted period of deflation. Economic activity was much less impaired than in the 1930s. Yet the sustained decline in prices inflicted considerable pain, especially on indebted farmers, who complained (in reference to the then prevailing gold standard) that they were being “crucified on a cross of gold.”
We have come a long way since those days; gold is no longer a key component of the monetary base, and farmers are no longer a major part of the workforce. Nevertheless, in my view, the period after 1873 is much more like our own time, both economically and politically, than the period after 1929.
There is still one missing ingredient to be added. If one were cooking, this would be the moment when flames would leap from the pan. The flammable ingredient is, of course, the demagogue, for populist demagogues react vituperatively and explosively against all of the aforementioned four ingredients.
Populists are not fascists:
They prefer trade wars to actual wars; administrative border walls to more defensible fortifications. The maladies they seek to cure are not imaginary: uncontrolled rising immigration, widening inequality, free trade with “unfree” countries, and political cronyism are all things that a substantial section of the electorate have some reason to dislike. The problem with populism is that its remedies are wrong and, in fact, counterproductive.
What we most have to fear—as was true of Brexit—is not therefore Armageddon, but something more prosaic: an attempt to reverse certain aspects of globalization, followed by disappointment when the snake oil does not really cure the patient’s ills, followed by the emergence of a new and ostensibly more progressive set of remedies for our current malaise.
Building a 21st Century FDA shouldn’t be hard:
A 2010 study in the Journal of Clinical Oncology by researchers from the M.D. Anderson Cancer Center in Houston, Texas found that the time from drug discovery to marketing increased from eight years in 1960 to 12 to 15 years in 2010. Five years of this increase results from new regulations boosting the lengths and costs of clinical trials. The regulators aim to prevent cancer patients from dying from toxic new drugs. However, the cancer researchers calculate that the delays caused by requirements for lengthier trials have instead resulted in the loss of 300,000 patient life-years while saving only 16 life-years. If true, this is a scandal.
How much higher are the costs of getting a new drug through the FDA gantlet? A new study, “Stifling New Cures: The True Cost of Lengthy Clinical Drug Trials,” by Manhattan Institute senior fellow Avik Roy points out that in 1975 the pharmaceutical industry spent about $100 million on research and development (R&D) before getting a new drug approved by the FDA. By 1987, that had tripled to $300 million and that has since quadrupled to $1.3 billion. But even these figures may be too low. Roy cites calculations done by Matthew Herper of Forbes, who divides up the R&D spending of $802 billion by 12 big pharma companies since 1997 by the 139 drugs that have since gotten FDA approval to yield costs of $5.8 billion per drug.
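Herper’s per-drug figure is a simple division of aggregate spending by approvals; a quick check using the numbers from the excerpt (variable names are mine):

```python
# Matthew Herper's back-of-the-envelope method: total big-pharma R&D
# spending divided by the number of drugs approved over the same period.
total_rd_billion = 802   # R&D spending by 12 big pharma companies since 1997
drugs_approved = 139     # FDA approvals by those companies over that span

cost_per_drug_billion = total_rd_billion / drugs_approved
print(f"${cost_per_drug_billion:.1f} billion per approved drug")  # $5.8 billion
```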
Currently, new pharmaceuticals typically go through Phase I trials using fewer than 100 patients to get preliminary information on the drug’s safety. Phase II trials involve a few hundred subjects and further evaluate a new drug’s safety and efficacy. Phase III trials enroll thousands of patients to see how well it works compared to placebo and/or other therapies and to look for bad side effects.
“The biggest driver of this phenomenal increase has been the regulatory process governing Phase III clinical trials of new pharmaceuticals on human volunteers,” notes Roy. Between 1999 and 2005, clinical trials saw average increases in trial procedures by 65 percent, staff work by 67 percent, and length by 70 percent.
Not only do FDA demands for bigger Phase III clinical trials delay the introduction of effective new medicines, they dramatically boost costs for bringing them to market. Roy acknowledges that pre-clinical research that aims to identify promising therapeutic compounds absorbs 28 percent of the R&D budgets of pharmaceutical companies. Setting those discovery costs aside, Roy calculates that the Phase III trials “typically represent 90 percent or more of the cost of developing an individual drug all the way from laboratory to market.”
There is no question that the Roman Empire reached its peak under the “five good emperors,” Peter Turchin explains:
There are literally dozens of quantitative measures for imperial might that all agree with each other: territorial extent, overall population, internal peace and political stability, economic activity proxied by shipwrecks and the amount of industrial pollution, monument building, production of literature and art … After the death of the last “good emperor” in 180 all these indicators headed south. Together they tell us a much more quantitative and nuanced history than an artificial binary construct of “the Fall of Rome”. As a single example, here’s the trajectory of the volume of imports of particularly fine ceramics from Africa to Italy:
If we follow these trajectories, we will learn that there were peaks and valleys. For example, a key indicator, social and political instability, went up after 180 and stayed high to the end of the third century. However, there were several peaks on top of this elevated level, recurring at roughly 50-year intervals. Such dynamical richness doesn’t fit the narrative of a “collapse.”
Most of the fourth century was relatively peaceful, but then the western half really disintegrated. The center of gravity moved east, to Byzantium, which experienced its own decline in the seventh century, followed by more cycles.
Thus, a much better question is not why Rome collapsed, but why the Roman Empire experienced those massive waves of social and political instability, accompanied by political fragmentation, population decline, and (later) dramatic loss of literacy, disappearance of monumental buildings, decrease of economic activity etc.
Turchin, of course, explains this through his structural-demographic theory:
Growing political instability is first and foremost a result of elite overproduction leading to excessive intra-elite competition and conflict. This main driver is supplemented by mass mobilization of non-elites resulting from popular immiseration and by failing fiscal health of the state.
Mike Munger discusses the dangers of safety equipment:
In high school, I played football and wore pads and a helmet. During that time, I endured two shoulder separations, a dislocated kneecap and several snapped tendons in my hand.
In college, I played rugby and wore heavy cotton shorts and a stiff jersey, while suffering only some scraped elbows and several memorable hangovers from parties with “rugger huggers” after matches.
More equipment, more injuries? Social scientists have seen that before; they call it the Peltzman effect, after the economist Sam Peltzman. The feeling of safety, it seems, induces us to be less careful. A famous illustration of the Peltzman effect is that the better sky diving gear becomes, the more chances sky divers take, keeping the fatality rate from sky diving roughly unchanged over time. Peltzman’s point was that though rule-makers can regulate safety, people choose their own level of risk.
There are three things going on in football, and it’s important to keep them separate. The first is the formal rules, which attempt to limit concussions. The second is conventional tackling practice, which has a high risk of concussion. And the third is the informal rules, or “the code.”
When formal rules and the informal norms of sports conflict, players (and the game) suffer. In football today, the rules (no head shots) and norms (head shots are part of the game) conflict. And then there’s the other factor, tackling practice: Almost everyone believes that the helmet-first tackling style is more effective. As Dierdorf said, sending a man to the bench has been a badge of honor, not a violation of the code, even if you intended to knock him out. Anyone who avoids delivering a blow to avoid ringing the guy’s bell is a wimp, and he also risks missing the tackle. Formal rules will never be enough to deter head shots under those conditions.
The sportswriter Jonathan Clegg has argued that adopting rugby tackling is the key to making football defense both safer and more effective. Clegg’s argument has had mixed reviews in the football establishment. But there have been some takers. Pete Carroll, the coach of the Seattle Seahawks, has used rugby principles for football tackling, as is demonstrated in a video.
Kidnapping is hard — because of problems of trust, problems of bargaining, and problems of execution — but there is a well-organized market for hostages:
The first principle that insurers adopt is that safe retrieval of hostages is paramount. The second guiding principle is that kidnapping cannot become too wildly profitable, for fear of further destabilization. In the language of economists, there must be no “supernormal profits.” If victims’ representatives quickly offer large ransoms, this information spreads like wildfire and triggers kidnapping booms. A good example is Somalia, where a few premium ransoms led to an explosion of piracy that could only be stopped by a costly military intervention.
Insurers have therefore created institutions to make sure that ransom offers meet kidnapper expectations and produce safe releases but that do not upset local criminal markets. Insured parties obtain immediate, free access to highly experienced crisis-response consultants in the event of a kidnapping. These consultants find out whether the person demanding the ransom actually holds a live hostage to bargain over, they advise on the appropriate negotiation strategy, and they reassure families when they inevitably receive dire threats of violence.
Because insurers can communicate outcomes confidentially, they can stabilize ransoms — as well as discipline rogue kidnappers. One kidnapper summarized this perception in the criminal community as “No one negotiates with a kidnapper who has a reputation for blowing his victims’ brains out.” Crisis responders also manage the ransom drop, removing a further obstacle to a successful conclusion. About 98 percent of insured criminal kidnapping victims are safely retrieved.
Of course, this “protocol” for ransom negotiations is costly. Tough bargaining takes time, imposing huge psychological costs on negotiators and on the victim’s family and tying up productive resources in firms. Experienced consultants are paid a substantial daily fee. It is very tempting to conclude negotiations early. Most of the cost of quick ransoms that are bigger than they ought to be is borne by future victims and their insurers, not the current victim’s stakeholders. An effective governance regime for kidnapping resolution therefore requires rules to prevent anyone’s taking shortcuts.
It would be impossible to prove beyond reasonable doubt that an insurer’s crisis responder deliberately cuts corners because ransoms are naturally variable. This makes it impossible for insurers to formally contract with each other and punish those who “overpay” kidnappers.
Insurers resolve this through an ingenious market structure. All kidnapping insurance is either written or reinsured at Lloyd’s of London. Within the Lloyd’s market, there are about 20 firms (or “syndicates”) competing for business. They all conduct resolutions according to clear rules. The Lloyd’s Corp. can exclude any syndicate that deviates from the established protocol and imposes costs on others. Outsiders do not have the necessary information to price kidnapping insurance correctly: Victims are very tight-lipped about their experiences to avoid attracting further criminal attention.
The private governance regime for resolving criminal kidnappings generally delivers low and stable ransoms and predictable numbers of kidnappings. Most kidnappings can be resolved for thousands or tens of thousands of dollars. This makes profitable kidnapping insurance possible. When the protocol fails, insurers sustain losses and must innovate to regain control.
The outcomes of privately governed “criminal” kidnappings (where private firms or individuals pay the ransoms) contrast starkly with those of “terrorist” kidnappings (where governments are asked to pay ransoms or to make concessions). Here, insurers are prevented by law from ordering the market, leaving governments in the firing line.
Governments struggle to contain ransoms, and they often end up making concessions to terrorists despite their public “no negotiation” commitments. Government negotiators have no obvious budget constraints. They often prioritize quick settlements over containing ransoms. Finally, there is no international regime for preventing spillovers to subsequent negotiations. Citizens of nations who refuse to negotiate with terrorists are often tortured or killed to raise the pressure in parallel negotiations. Multimillion dollar ransoms in terrorist cases are therefore not really surprising — and such settlements reliably trigger new kidnappings.
There’s inequality, Nassim Nicholas Taleb notes, and then there’s inequality:
The first is the inequality people tolerate, such as one’s understanding compared to that of people deemed heroes, say Einstein, Michelangelo, or the recluse mathematician Grisha Perelman, in comparison to whom one has no difficulty acknowledging a large surplus. This applies to entrepreneurs, artists, soldiers, heroes, the singer Bob Dylan, Socrates, the current local celebrity chef, some Roman Emperor of good repute, say Marcus Aurelius; in short those for whom one can naturally be a “fan”. You may like to imitate them, you may aspire to be like them; but you don’t resent them.
The second is the inequality people find intolerable because the subject appears to be just a person like you, except that he has been playing the system, and getting himself into rent seeking, acquiring privileges that are not warranted — and although he has something you would not mind having (which may include his Russian girlfriend), he is exactly the type of whom you cannot possibly become a fan. The latter category includes bankers, bureaucrats who get rich, former senators shilling for the evil firm Monsanto, clean-shaven chief executives who wear ties, and talking heads on television making outsized bonuses. You don’t just envy them; you take umbrage at their fame, and the sight of their expensive or even semi-expensive car triggers some feeling of bitterness. They make you feel smaller.
There may be something dissonant in the spectacle of a rich slave.
The author Joan Williams, in an insightful article, explains that the working class is impressed by the rich, as role models. Michèle Lamont, the author of The Dignity of Working Men, whom she cites, did a systematic interview of blue collar Americans and found resentment of professionals but, unexpectedly, not of the rich.
It is safe to accept that the American public — actually all publics — despise people who make a lot of money on a salary, or, rather, salarymen who make a lot of money. This is indeed generalized to other countries: a few years ago the Swiss, of all people, almost voted for a law capping salaries of managers. But the same Swiss hold rich entrepreneurs, and people who have derived their celebrity by other means, in some respect.
Further, in countries where wealth comes from rent seeking, political patronage, or what is called regulatory capture (by which the powerful use regulation to scam the public, or red tape to slow down competition), wealth is seen as zero-sum. What Peter gets is extracted from Paul. Someone getting rich is doing so at other people’s expense. In countries such as the U.S. where wealth can come from creative destruction, people can easily see that someone getting rich is not taking dollars from your pocket; perhaps even putting some in yours. On the other hand, inequality, by definition, is zero sum.
In this chapter I will propose that effectively what people resent — or should resent — is the person at the top who has no skin in the game, that is, because he doesn’t bear his allotted risk, is immune to the possibility of falling from his pedestal, exiting the income or wealth bracket, and getting to the soup kitchen. Again, on that account, the detractors of Donald Trump, when he was a candidate, failed to realize that, by advertising his episode of bankruptcy and his personal losses of close to a billion dollars, they removed the resentment (the second type of inequality) one may have towards him. There is something respectable in losing a billion dollars, provided it is your own money.
In addition, someone without skin in the game — say a corporate executive with upside and no financial downside (the type to speak clearly in meetings) — is paid according to some metrics that do not necessarily reflect the health of the company; these (as we saw in Chapter x) he can manipulate, hide risks, get the bonus, then retire (or go to another company) and blame his successor for the subsequent results.
Michael Jensen foresaw the failings of quoted companies decades ago:

Quoted companies, he wrote, have a grave flaw: “an absence of effective monitoring of managers”. Shareholders are too dispersed and too ill-informed to exercise proper control of chief executives. This causes several nasty problems.
One, said Professor Jensen, is that bosses will want to build up cash piles to give themselves freedom from capital markets. If companies held no cash, they’d need to raise funds in the market every time they wanted to invest. This would give investors control over the company’s plans. If, however, companies can invest internal funds, this control is lacking and so bosses are freer. Events have vindicated Professor Jensen; in both the UK and US, corporate cash holdings have soared in recent years.
Second, he said, when companies do invest, the job is likely to be badly done. Bosses will prefer grand schemes that gratify their egos rather than humdrum projects that maximize shareholder value. Perhaps the worst economic decision of our lifetime was RBS’s takeover of ABN Amro – a move that was due to shareholders’ failure to control Fred Goodwin’s megalomania.
To these failings we can add that bosses plunder directly from shareholders by extracting big wages for themselves. The High Pay Centre estimates that CEOs are now paid 150 times the salary of the average worker, a ratio that has tripled since the 1990s – an increase which, it says, can’t be justified by increased management efficiency. “No countervailing forces have been deployed to stop this,” it says.
Failures such as these, said Professor Jensen, would cause quoted companies to be supplanted by private equity, as this permits a few well-informed investors to properly oversee managers. This is what has happened.
Michael Lewis explains how two trailblazing psychologists turned the world of decision science upside down:
Danny was always sure he was wrong. Amos was always sure he was right. Amos was the life of every party; Danny didn’t go to the parties. Amos was loose and informal; even when Danny made a stab at informality, it felt as if he had descended from some formal place. With Amos you always just picked up where you left off, no matter how long it had been since you last saw him. With Danny there was always a sense you were starting over, even if you had been with him just yesterday. Amos was tone-deaf but would nevertheless sing Hebrew folk songs with great gusto. Danny was the sort of person who might be in possession of a lovely singing voice that he would never discover. Amos was a one-man wrecking ball for illogical arguments; when Danny heard an illogical argument, he asked, What might that be true of? Danny was a pessimist. Amos was not merely an optimist; Amos willed himself to be optimistic, because he had decided pessimism was stupid. When you are a pessimist and the bad thing happens, you live it twice, Amos liked to say. Once when you worry about it, and the second time when it happens. “They were very different people,” said a fellow Hebrew University professor. “Danny was always eager to please. He was irritable and short-tempered, but he wanted to please. Amos couldn’t understand why anyone would be eager to please. He understood courtesy, but eager to please—why?” Danny took everything so seriously; Amos turned much of life into a joke. When Hebrew University put Amos on its committee to evaluate all Ph.D. candidates, he was appalled at what passed for a dissertation in the humanities. Instead of raising a formal objection, he merely said, “If this dissertation is good enough for its field, it’s good enough for me. Provided the student can divide fractions!”
The piece is adapted from The Undoing Project: A Friendship That Changed Our Minds.
Arnold Kling just got mugged by reality when his Obamacare notice arrived:
Yesterday in the mail, my wife and I got our premium notice from the health care exchange. Our monthly premium is going up 70 percent, and our deductible is going up also.
I wonder if any of the pundits who claim that Obamacare is working are actually getting their health insurance through an exchange.
I wonder how many of us who have not supported Donald Trump are feeling mugged by reality.
The Economist has ranked colleges based on the gap between how much money students subsequently earn and how much they might have made had they studied elsewhere:
We wanted to know how a wide range of factors would affect the median earnings in 2011 of a college’s former students. Most of the data were available directly from the scorecard: for the entering class of 2001, we used average SAT scores, sex ratio, race breakdown, college size, whether a university was public or private, and the mix of subjects students chose to study. There were 1,275 four-year, non-vocational colleges in the scorecard database with available figures in all of these categories. We complemented these inputs with information from other sources: whether a college is affiliated with the Catholic Church or a Protestant Christian denomination; the wealth of its state (using a weighted average of Maryland, Virginia and the District of Columbia for Washington) and prevailing wages in its city (with a flat value for colleges in rural areas); whether it has a ranked undergraduate business school (and is thus likely to attract business-minded students); the percentage of its students who receive federal Pell grants given to working-class students (a measure of family income); and whether it is a liberal-arts college. Finally, to avoid penalising universities that tend to attract students who are disinclined to pursue lucrative careers, we created a “Marx and Marley index”, based on colleges’ appearances during the past 15 years on the Princeton Review’s top-20 lists for political leftism and “reefer madness”. (For technically minded readers, all of these variables were statistically significant at the 1% level, and the overall r-squared was .8538, meaning that 85% of the variation in graduate salaries between colleges was explained by these factors. We also tested the model using 2009 earnings figures rather than 2011, and for the entering class of 2003 rather than 2001, and got virtually identical results.)
For example, Caltech’s forecast earnings increase by $27,114 as a result of its best-in-the-country incoming SAT scores, another $9,234 thanks to its students’ propensity to choose subjects like engineering, and a further $2,819 for its proximity to desirable employers in the Los Angeles area.
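The mechanics of this value-added approach can be illustrated with a small sketch: regress colleges’ median graduate earnings on entry characteristics, then score each college by how far its actual earnings exceed the model’s prediction. The colleges, variables, and numbers below are invented for illustration; The Economist’s real model uses many more predictors (SAT scores, subject mix, Pell share, and so on).

```python
import numpy as np

# Hypothetical colleges, two predictors each: [average SAT score,
# share of students studying engineering]. Purely made-up data.
X = np.array([
    [1550, 0.45],   # an elite technical school
    [1200, 0.10],
    [1050, 0.05],
    [1350, 0.20],
])
actual_earnings = np.array([74_000.0, 46_000.0, 39_000.0, 55_000.0])

# Add an intercept column and fit ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual_earnings, rcond=None)

# Predicted earnings given each college's intake; the residual is the
# "value added": positive means students out-earn what their entry
# characteristics would predict.
predicted = A @ coef
value_added = actual_earnings - predicted

for i, va in enumerate(value_added):
    print(f"college {i}: expected {predicted[i]:,.0f}, value added {va:+,.0f}")
```

The per-factor contributions quoted for Caltech (so many dollars from SAT scores, so many from subject mix) correspond to each coefficient multiplied by the college’s value for that variable, so the decomposition falls straight out of a linear model like this one.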