The Greeks had to learn civilization all over again

Sunday, August 20th, 2017

Without Classical Greece and its accomplishments, Razib Khan says, the West wouldn’t make any sense:

But here I have to stipulate Classical, because Greeks existed before the Classical period. That is, a people who spoke a language that was recognizably Greek and worshipped gods recognizable to the Greeks of the Classical period. But these Greeks were not proto-Western in any way. These were the Mycenaeans, a Bronze Age civilization which flourished in the Aegean in the centuries before the cataclysms outlined in 1177 B.C.

The issue with the Mycenaean civilization is that its final expiration in the 11th century ushered in a centuries-long Dark Age. During this period the population of Greece seems to have declined, and society reverted to a simpler structure. By the time the Greeks emerged from this Dark Age much had changed. For example, they no longer used Linear B writing. Presumably this technique was passed down along lineages of scribes, whose services were no longer needed, because the grand warlords of the Bronze Age were no longer there to patronize them and make use of their skills. In its stead the Greeks modified the alphabet of the Phoenicians.

To be succinct, the Greeks had to learn civilization all over again. The barbarian interlude had broken continuous cultural memory between the Mycenaeans and the Greeks of the developing poleis of the Classical period. The fortifications of the Mycenaeans were assumed by their Classical descendants to be the work of a lost race which had the aid of monstrous cyclopes.

Of course not all memories were forgotten. Epic poems such as The Iliad retained the memory of the past through the centuries. The list of kings who sailed to Troy actually reflected the distribution of power in Bronze Age Greece, while boar’s tusk helmets mentioned by Homer were typical of the period. To be sure, much of the detail in Homer seems more reflective of a simpler society of petty warlords, so the nuggets of memory are encased in later lore accrued over the centuries.

A recent Nature paper, “Genetic origins of the Minoans and Mycenaeans,” shows that the two populations were genetically similar.

Europe made space at the political bargaining table for economic interests

Saturday, August 19th, 2017

In Rulers, Religion and Riches, Jared Rubin argues that differences in the way religion and government interact caused the economic fortunes of Europe and the Middle East to diverge:

The driving motivation of most rulers is not ideology or to do good, but to maintain and strengthen their hold on power: “to propagate their rule”. This requires “coercion” — the ability to enforce power — and, crucially, some form of “legitimacy”. In the medieval world, both Islamic and Christian rulers claimed part of their legitimacy from religious authorities, but after the Reformation, Rubin thinks that European governments had to turn away from religion as a source of political legitimacy.

By getting “religion out of politics”, Europe made space at the political “bargaining table” for economic interests, creating a virtuous cycle of “pro-growth” policy-making. Islamic rulers, by contrast, continued to rely on religious legitimation and economic interests were mostly excluded from politics, leading to governance that focused on the narrow interests of sultans, and the conservative religious and military elites who backed them.

The source of Europe’s success, then, lies in the Reformation, a revolution in ideas and authority spread by what Martin Luther called “God’s highest and ultimate gift of grace”: the printing press. Yet even though printers quickly discovered how to adapt movable type to Arabic lettering, there were almost no presses in the Middle East for nearly 300 years after Gutenberg’s invention. Conservative Islamic clerics did not want the press to undermine their power, and the state — still tied to religion not commerce — had no incentive to overrule them. Not until 1727 did the Ottoman state permit printing in Arabic script, with a decree that the device would finally be “unveiled like a bride and will not again be hidden”. The prohibition was “one of the great missed opportunities of economic and technological history”, a vivid example of the dead hand of religious conservatism.

By contrast, Europe was revolutionised. Rubin argues that the Dutch revolt against Catholic Spain and the English crown’s “search for alternative sources of legitimacy” after breaking with Rome empowered the Dutch and English parliaments: by the 1600s both countries were ruled by parliamentary governments that included economic elites. Their policies — such as promoting trade and protecting property rights — were conducive to broader economic progress. Decoupling religion from politics had created space for “pro-commerce” interests.

The Occupy Wall Street protesters and the bankers share a common delusion

Friday, August 18th, 2017

Eric Weinstein explains the crisis of late capitalism:

I believe that market capitalism, as we’ve come to understand it, was actually tied to a particular period of time where certain coincidences were present. There’s a coincidence between the marginal product of one’s labor and one’s marginal needs to consume at a socially appropriate level. There’s also the match between an economy mostly consisting of private goods and services that can be taxed to pay for the minority of public goods and services, where the market price of those public goods would be far below the collective value of those goods.

Beyond that, there’s also a coincidence between the ability to train briefly in one’s youth so as to acquire a reliable skill that can be repeated consistently with small variance throughout a lifetime, leading to what we’ve typically called a career or profession, and I believe that many of those coincidences are now breaking, because they were actually never tied together by any fundamental law.

Weinstein shares this anecdote about class warfare:

I reached a bizarre stage of my life in which I am equally likely to fly either economy or private. As such, I have a unique lens on this question. A friend of mine said to me, “The modern airport is the perfect metaphor for the class warfare to come.” And I asked, “How do you see it that way?” He said, “The rich in first and business class are seated first so that the poor may be paraded past them into economy to note their privilege.” I said, “I think the metaphor is better than you give it credit for, because those people in first and business are actually the fake rich. The real rich are in another terminal or in another airport altogether.”

The Occupy Wall Street protesters and the bankers share a common delusion, he says:

Both of them believe the bankers are more powerful in the story than they actually are. The real problem, which our society has yet to face up to, is that sometime around 1970, we ended several periods of legitimate exponential growth in science, technology, and economics. Since that time, we have struggled with the fact that almost all of our institutions that thrived during the post-World War II period of growth have embedded growth hypotheses into their very foundation.

That means that all of those institutions, whether they’re law firms or universities or the military, have to reckon with steady state [meaning an economy with mild fluctuations in growth and productivity] by admitting that growth cannot be sustained, by running a Ponzi scheme, or by attempting to cannibalize others to achieve a kind of fake growth to keep those particular institutions running. This is the big story that nobody reports. We have a system-wide problem with embedded growth hypotheses that is turning us all into scoundrels and liars.

Let’s say, for example, that I have a growing law firm in which there are five associates at any given time supporting every partner, and those associates hope to become partners so that they can hire five associates in turn. That formula of hierarchical labor works well while the law firm is growing, but as soon as the law firm hits steady state, each partner can really only have one associate, who must wait many years for that partner to retire before becoming partner himself. That economic model doesn’t work, because the long hours and decreased pay that one is willing to accept at an entry-level position are predicated on taking over a higher-level position in short order. That’s repeated with professors and their graduate students. It’s often repeated in military hierarchies.

It takes place just about everywhere, and when exponential growth ran out, each of these institutions had to find some way of either owning up to a new business model or continuing the old one with smoke and mirrors and the cannibalization of someone else’s source of income.
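Weinstein’s law-firm arithmetic can be made concrete with a rough back-of-the-envelope sketch. The associate-to-partner ratio, associate tenure, and partner career length below are illustrative assumptions, not figures from the interview:

```python
# Illustrative sketch (assumed numbers): how fast must a hierarchical
# firm grow for every associate cohort to make partner on schedule?
# R associates support each partner, associates expect promotion after
# A years, and partners retire after P years.

def required_growth(R=5, A=7, P=25):
    """Annual partner-headcount growth needed to honor the promotion promise.

    Each year, roughly R/A associates per partner come due for promotion,
    while retirements open up only 1/P partner slots. The shortfall must
    be covered by growth in the number of partner positions.
    """
    promotions_due = R / A          # associates per partner reaching seniority each year
    slots_from_retirement = 1 / P   # partner slots freed by retirement each year
    return promotions_due - slots_from_retirement

# With these assumptions the partnership must grow by roughly two-thirds
# per year to promote everyone -- impossible at steady state, where growth
# is near zero and most associates are therefore never promoted.
print(f"{required_growth():.0%}")  # prints "67%"
```

The exact numbers don’t matter; the point is that the required growth rate is enormous, so a zero-growth firm must break the implicit bargain one way or another.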

Then there’s the Wile E. Coyote effect — as long as Wile E. Coyote doesn’t look down, he’s suspended in air, even if he has just run off a cliff:

But the great danger is understanding that everything is flipped. During the 2008 crisis, many commentators said the markets have suddenly gone crazy, and it was exactly the reverse. The so-called great moderation that was pushed by Alan Greenspan, Timothy Geithner, and others was in fact a kind of madness, and the 2008 crisis represented a rare break in the insanity, where the market suddenly woke up to see what was actually going on. So the acute danger is not madness but sanity.

The problem is that prolonged madness simply compounds the disaster to come when sanity finally sets in.

Buy more time

Tuesday, August 15th, 2017

Spending money on time-saving services may result in greater life satisfaction, according to a new PNAS study:

An international team of researchers surveyed more than 6,000 men and women across the United States, Canada, Denmark and the Netherlands about their spending habits.

Those in the study who spent money on services to buy time — by paying other people to do the cleaning or cooking, for example — reported greater happiness compared to those who did not, regardless of their level of income.

[...]

“Some of our results are intuitive,” she continued. “For example, people should derive some satisfaction from outsourcing things like scrubbing the toilet or cleaning bathrooms. Yet just under half the millionaires we surveyed spent money to outsource disliked tasks.”

Guilt could be one factor explaining why more of the people who can afford these time-saving services don’t purchase them, the study authors suggest. Some people may feel guilty for paying someone to do tasks they simply don’t want to complete themselves.

[...]

“Busy-ness has become a status symbol in North America,” Whillans explained. “People want to feel they can manage all components of their lives.”

[...]

In addition to the first large study, Whillans and her colleagues performed a second, smaller experiment in a group of 60 working Canadian adults, giving them $40 to spend on a time-saving purchase one week and $40 to spend on a material purchase the second week. People who decided to spend money to save time, the researchers found, reported greater well-being than when money was spent on a material purchase.

“Here is a blind spot in human decision making: we don’t see the unhappiness from small annoying tasks,” Ariely said. “Part of it is we don’t experiment much. In order to figure out what works best for you, it’s not enough to have an intuition. You need to try out different things, whether for your health, relationships or saving money. The same goes for finding happiness.”

But it wasn’t easy for people in the second study to choose spending money on saving time — only two percent reported on the spot that they would make a time-saving purchase. The authors said part of the reason may be long-standing cultural and gender roles.

The economic benefits of the French Revolution came about while increasing inequality and consolidating wealth

Sunday, August 13th, 2017

The economic benefits of the French Revolution came about while increasing inequality and consolidating wealth:

In 1789, the revolutionary government seized French lands owned by the church, about 6.5% of the country, and redistributed them through auction. This move provided a useful experiment for the researchers—Susquehanna University’s Theresa Finley, Hebrew University of Jerusalem’s Raphaël Franck, and George Mason University’s Noel Johnson.

They tracked the agricultural outputs of the properties and the investment in infrastructure like irrigation, and found that areas with the most church property before the revolution—and thus the most redistribution afterward—saw higher output and more investment over the next 50-plus years. They also found more inequality in the size of farms, thanks to consolidation of previously fragmented land, than in areas with less redistribution.

[...]

Before the revolution, large landholders like the church tended to focus on renting out their land to small-holders, but these small plots didn’t reward investment in large-scale irrigation or other improvements, especially since feudal authorities would collect much of the results. They also faced numerous legal obstacles to selling their land to someone who might invest in it. The system put too many costs on smart investments to be effective.

[...]

“The auctioning-off of Church land during the Revolutionary period gave some regions a head-start in reallocating feudal property rights and adopting more efficient agricultural practices,” the researchers conclude. “The agricultural modernization enabled by the redistribution of Church land did not stem from a more equal land ownership structure, but by increasing land inequality.”

Some workers simply aren’t worth the trouble

Saturday, August 12th, 2017

Some workers simply aren’t worth the trouble, Tyler Cowen notes, and these “zero marginal product” workers account for a growing percentage of our workforce. Handle makes a similar point about military recruits:

During the surge and temporary force-builds, the Army and Marines had to lower standards and accept less impressive applicants in order to meet accession quotas for enlisted men. Usually that involved relaxing each of the many standards by a little bit. Actually, the system pretends the standards aren’t being changed at all, but that individuals are being granted discretionary ‘waivers’ of a typical standard on a one-by-one basis by commanders, which is the system ordinarily reserved for rare, exceptional cases: people with extreme talent or value in some area, but maybe just under the threshold for one of the standards. Well, suddenly these waivers were routine. Still, there is value to keeping the standards ‘in the book’ the same, since everybody still knows what they are supposed to do, and the waivers will eventually go away when the pressure is off.

But eventually you are going to be cutting into muscle and bone and not able to relax some standards any more. And someone is going to discover where you are going to get the most bang for your buck in terms of the greatest numbers resulting from a policy change in the other standards. That turned out to be in the background-check department, which gave rise to the whole ‘moral waivers’ problem. A lot of these guys were good soldiers, fit enough and smart enough to fit in, go fighting downrange, and get the job done well, but, inevitably, a huge number of them got into serious disciplinary trouble at some point. They were good workers who would get in trouble, which is a very different problem from the obedient and law-abiding ones who just aren’t up to snuff.

In times when men were desperately needed, when those men got in trouble, they’d get slapped on the wrist with minor penalties, or even just a good old-fashioned “smoke the shit out of him” extended painful-exertion session with an NCO. But as soon as Congress announced the numbers had to go down — by a lot, and quickly — then a very different message went out to commanders. Suddenly every little thing was a dischargeable offense, and it was, predictably, disproportionately the moral-waiver guys who were getting kicked out.

The people it prefers, it consumes

Wednesday, August 9th, 2017

The techno-commercial wing of the neoreactionary blogosphere, as Nick Land likes to call it, has an obvious fondness for Pacific Rim city-states like Singapore and Hong Kong, but these right-wing utopias have a problem. As Spandrell pointed out, Singapore is an IQ shredder:

How many bright Indians and bright Chinese are there, Harry? Surely they are not infinite. And what will they do in Singapore? Well, engage in the finance and marketing rat-race and depress their fertility to 0.78, wasting valuable genes just so your property prices don’t go down. Singapore is an IQ shredder.

The accusation is acute, Land says, and can be generalized:

Modernity has a fertility problem. When elevated to the zenith of savage irony, the formulation runs: At the demographic level, modernity selects systematically against modern populations. The people it prefers, it consumes. Without gross exaggeration, this endogenous tendency can be seen as an existential risk to the modern world. It threatens to bring the entire global order crashing down around it.

In order to discuss this implicit catastrophe, it’s first necessary to talk about cities, which is a conversation that has already begun. To state the problem crudely, but with confidence: Cities are population sinks. Historian William McNeill explains the basics. Urbanization, from its origins, has tended relentlessly to convert children from productive assets into objects of luxury consumption. All of the archaic economic incentives related to fertility are inverted.

[...]

Education expenses alone explain much of this. School fees are by far the most effective contraceptive technology ever conceived. To raise a child in an urban environment is like nothing that rural precedent ever prepared for. Even if responsible parenting were the sole motivation in play, the compressive effect on family size would be extreme. Under urban circumstances, it becomes almost an aggression against one’s own children for there to be many of them.

How do you get to Denmark?

Tuesday, August 8th, 2017

“Where do ‘good’ or pro-social institutions come from?” Pseudoerasmus asks:

Why does the capacity for collective action and cooperative behaviour vary so much across the world today? How do some populations transcend tribalism to form a civil society? How have some societies gone beyond personal relations and customary rules to impersonal exchange and anonymous institutions? In short, how do you “get to Denmark”?

[...]

So to answer the question at the head of this post, “where do pro-social institutions come from?” — if “bad” institutions represent coordination failures, then intelligence and patience must be a big part of the answer. This need not have the same relevance for social evolution from 100,000 BCE to 1500 CE. But for the emergence of modern, advanced societies, intelligence and patience matter.

It’s not that people’s norms and values do not or cannot change. They do. But that does not seem enough. Solving complex coordination failures and collective action problems requires a lot more than just “good” culture.

I am not saying intelligence and patience explain everything, just that they seem to be an important part of how “good” institutions happen. Nor am I saying that intelligence and patience are immutable quantities.

[...]

Intelligence and patience allow you to understand, and weigh, the intuitive risks and the counterintuitive benefits from collaborating with perfect strangers. With less intelligence and less patience you stick to what you know — intuit the benefits from relationships cultivated over a long time through blood ties or other intimate affiliations.

Your “moral circle” is wider with intelligence and patience than without.

In the 1990s, in the middle of free market triumphalism, it was widely assumed that if you let markets rip, the institutions necessary to their proper functioning would “naturally” follow. Those with a vested interest in protecting their property rights would demand them, politically. That assumption went up in flames in the former communist countries and the developing countries under economic restructuring.

They don’t learn, because they are not the victims of their own mistakes

Friday, August 4th, 2017

Nassim Nicholas Taleb shares his thoughts on interventionistas and their mental defects:

Their three flaws: 1) They think in statics not dynamics, 2) they think in low, not high dimensions, 3) they think in actions, never interactions.

The first flaw is that they are incapable of thinking in second steps and unaware of the need for it — and about every peasant in Mongolia, every waiter in Madrid, and every car service operator in San Francisco knows that real life happens to have second, third, fourth, nth steps. The second flaw is that they are also incapable of distinguishing between multidimensional problems and their single-dimensional representations — like multidimensional health and its stripped, cholesterol-reading reduced representation. They can’t get the idea that, empirically, complex systems do not have obvious one-dimensional cause-and-effect mechanisms, and that under opacity, you do not mess with such a system. An extension of this defect: they compare the actions of the “dictator” to the prime minister of Norway or Sweden, not to those of the local alternative. The third flaw is that they can’t forecast the evolution of those one helps by attacking.

And when a blow up happens, they invoke uncertainty, something called a Black Swan, after some book by a (very) stubborn fellow, not realizing that one should not mess with a system if the results are fraught with uncertainty, or, more generally, avoid engaging in an action if you have no idea of the outcomes. Imagine people with similar mental handicaps, who don’t understand asymmetry, piloting planes. Incompetent pilots, those who cannot learn from experience, or don’t mind taking risks they don’t understand, may kill many, but they will themselves end up at the bottom of, say, the Atlantic, and cease to represent a threat to others and mankind.

So we end up populating what we call the intelligentsia with people who are delusional, literally mentally deranged, simply because they never have to pay for the consequences of their actions, repeating modernist slogans stripped of all depth. In general, when you hear someone invoking abstract modernistic notions, you can assume that they got some education (but not enough, or in the wrong discipline) and too little accountability.

Now some innocent people, Yazidis, Christian minorities, Syrians, Iraqis, and Libyans had to pay a price for the mistakes of these interventionistas currently sitting in their comfortable air-conditioned offices. This, we will see, violates the very notion of justice from its pre-biblical, Babylonian inception. As well as the ethical structure of humanity.

Not only is the principle of healers first do no harm (primum non nocere), but, we will argue: those who don’t take risks should never be involved in making decisions.

This idea is woven into history: all warlords and warmongers were warriors themselves and, with few exceptions, societies were run by risk takers, not risk transferors. They took risks — more risks than ordinary citizens. Julian the Apostate, the hero of many, died on the battlefield fighting in the never-ending war on the Persian frontier. One of his predecessors, Valerian, after he was captured, was said to have been used as a human footstool by the Persian Shahpur when mounting his horse. Less than a third of Roman emperors died in their beds — and one can argue that, had they lived longer, they would have fallen prey to either a coup or a battlefield.

And, one may ask, what can we do since a centralized system will necessarily need people who are not directly exposed to the cost of errors? Well, we have no choice, but decentralize; have fewer of these. But not to worry, if we don’t do it, it will be done by itself, the hard way: a system that doesn’t have a mechanism of skin in the game will eventually blow up and fix itself that way. We will see numerous such examples.

For instance, bank blowups came in 2008 because of the hidden risks in the system: bankers could make steady bonuses from a certain class of concealed explosive risks, use academic risk models that don’t work (because academics know practically nothing about risk), then invoke uncertainty after a blowup, some unseen and unforecastable Black Swan, and keep past bonuses, what I have called the Bob Rubin trade. Robert Rubin collected one hundred million dollars in bonuses from Citibank, but when the latter was rescued by the taxpayer, he didn’t write any check. The good news is that in spite of the efforts of a complicit Obama administration that wanted to protect the game and the rent-seeking of bankers, the risk-taking business moved away to hedge funds. The move took place because of the overbureaucratization of the system. In the hedge fund space, owners have at least half of their net worth in the funds, making them more exposed than any of their customers, and they personally go down with the ship.

The interventionistas case is central to our story because it shows how absence of skin in the game has both ethical and epistemological effects (i.e., related to knowledge). Interventionistas don’t learn because they are not the victims of their mistakes, and, as we saw with pathemata mathemata:

The same mechanism of transferring risk also impedes learning.

Our sin tends to be timidity, not rashness

Saturday, July 22nd, 2017

Arthur C. Brooks’ advice for young people heading out into the world is to be prudent — because prudence means something more than what we’ve been led to believe:

When I finally read the German philosopher Josef Pieper’s “The Four Cardinal Virtues,” which had sat unread on my shelf for years, I was shocked to learn that I didn’t hate prudence; what I hated was its current — and incorrect — definition. The connotation of prudence as caution, or aversion to risk, is a modern invention. “Prudence” comes from the Latin “prudentia,” meaning sagacity or expertise. The earliest English uses from the 14th century had little to do with fearfulness or habitual reluctance. Rather, it signified righteous decision making that is rooted in acuity and practical wisdom. Mr. Pieper argued that we have bastardized this classical concept. We have refashioned prudence into an excuse for cowardice, hiding behind the language of virtue to avoid what he calls “the embarrassing situation of having to be brave.” The correct definition, Mr. Pieper argued, is the willingness to do the right thing, even if that involves fear and risk. In other words, to be rash is only one breach of true prudence. It is also a breach to be timid. So which offense is more common today?

[...]

Our sin tends to be timidity, not rashness. On average, we say “no” too much when faced with an opportunity or dilemma.

Unemployment is the greater evil

Thursday, July 13th, 2017

Policymakers seem intent on making the joblessness crisis worse, Ed Glaeser laments:

The past decade or so has seen a resurgent progressive focus on inequality — and little concern among progressives about the downsides of discouraging work. Advocates of a $15 minimum hourly wage, for example, don’t seem to mind, or believe, that such policies deter firms from hiring less skilled workers. The University of California–San Diego’s Jeffrey Clemens examined states where higher federal minimum wages raised the effective state-level minimum wage during the last decade. He found that the higher minimum “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points,” which accounted for “43 percent of the sustained, 13 percentage point decline in this skill group’s employment rate.”
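As a quick sanity check (my arithmetic, not the article’s), the two Clemens figures are mutually consistent — a 5.6-percentage-point drop is indeed about 43 percent of the 13-point decline:

```python
# Checking the quoted figures against each other:
# 5.6 percentage points attributed to the minimum wage, out of a
# 13-percentage-point total employment decline for the skill group.
attributed = 5.6
total_decline = 13.0
share = attributed / total_decline
print(f"{share:.0%}")  # prints "43%", matching the "43 percent" in the quote
```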

The decision to prioritize equality over employment is particularly puzzling, given that social scientists have repeatedly found that unemployment is the greater evil. Economists Andrew Clark and Andrew Oswald have documented the huge drop in happiness associated with unemployment — about ten times larger than that associated with a reduction in earnings from the $50,000–$75,000 range to the $35,000–$50,000 bracket. One recent study estimated that unemployment leads to 45,000 suicides worldwide annually. Jobless husbands have a 50 percent higher divorce rate than employed husbands. The impact of lower income on suicide and divorce is much smaller. The negative effects of unemployment are magnified because it so often becomes a semipermanent state.

Time-use studies help us understand why the unemployed are so miserable. Jobless men don’t do a lot more socializing; they don’t spend much more time with their kids. They do spend an extra 100 minutes daily watching television, and they sleep more. The jobless also are more likely to use illegal drugs. While fewer than 10 percent of full-time workers have used an illegal substance in any given week, 18 percent of the unemployed have done drugs in the last seven days, according to a 2013 study by Alejandro Badel and Brian Greaney.

Joblessness and disability are also particularly associated with America’s deadly opioid epidemic. David Cutler and I examined the rise in opioid deaths between 1992 and 2012. The strongest correlate of those deaths is the share of the population on disability. That connection suggests a combination of the direct influence of being disabled, which generates a demand for painkillers; the availability of the drugs through the health-care system; and the psychological misery of having no economic future.

Increasing the benefits received by nonemployed persons may make their lives easier in a material sense but won’t help reattach them to the labor force. It won’t give them the sense of pride that comes from economic independence. It won’t give them the reassuring social interactions that come from workplace relationships. When societies sacrifice employment for a notion of income equality, they make the wrong choice.

Politicians, when they do focus on long-term unemployment, too often advance poorly targeted solutions, such as faster growth, more infrastructure investment, and less trade. More robust GDP growth is always a worthy aim, but it seems unlikely to get the chronically jobless back to work. The booms of the 1990s and early 2000s never came close to restoring the high employment rates last seen in the 1970s. Between 1976 and 2015, Nevada’s GDP grew the most and Michigan’s GDP grew the least among American states. Yet the two states had almost identical rises in the share of jobless prime-age men.

Infrastructure spending similarly seems poorly targeted to ease the problem. Contemporary infrastructure projects rely on skilled workers, typically with wages exceeding $25 per hour; most of today’s jobless lack such skills. Further, the current employment in highway, street, and bridge construction in the U.S. is only 316,000. Even if this number rose by 50 percent, it would still mean only a small reduction in the millions of jobless Americans. And the nation needs infrastructure most in areas with the highest population density; joblessness is most common outside metropolitan America. (See “If You Build It…,” Summer 2016.)

Finally, while it’s possible that the rise of American joblessness would have been slower if the U.S. had weaker trade ties to lower-wage countries like Mexico and China, American manufacturers have already adapted to a globalized world by mechanizing and outsourcing. We have little reason to be confident that restrictions on trade would bring the old jobs back. Trade wars would have an economic price, too. American exporters would cut back hiring. The cost of imported manufactured goods would rise, and U.S. consumers would pay more, in exchange for — at best — uncertain employment gains.

The techno-futurist narrative holds that machines will displace most workers, eventually. Social peace will be maintained only if the armies of the jobless are kept quiet with generous universal-income payments. This vision recalls John Maynard Keynes’s 1930 essay “Economic Possibilities for Our Grandchildren,” which predicted a future world of leisure in which his grandchildren would satisfy their basic needs with a few hours of labor and then spend the rest of their waking hours edifying themselves with culture and fun.

But for many of us, technological progress has led to longer work hours, not playtime. Entrepreneurs conjured more products that generated more earnings. Almost no Americans today would be happy with the lifestyle of their ancestors in 1930. For many, work also became not only more remunerative but more interesting. No Pennsylvania miner was likely to show up for extra hours (without extra pay) voluntarily. Google employees do it all the time.

Joblessness is not foreordained, because entrepreneurs can always dream up new ways of making labor productive. Ten years ago, millions of Americans wanted inexpensive car service. Uber showed how underemployed workers could earn something providing that service. Prosperous, time-short Americans are desperate for a host of other services — they want not only drivers but also cooks for their dinners and nurses for their elderly parents and much more. There is no shortage of demand for the right kinds of labor, and entrepreneurial insight could multiply the number of new tasks that could be performed by the currently out-of-work. Yet over the last 30 years, entrepreneurial talent has focused far more on delivering new tools for the skilled than on employment for the unlucky. Whereas Henry Ford employed hundreds of thousands of Americans without college degrees, Mark Zuckerberg primarily hires highly educated programmers.

How the Democrats lost their way on immigration

Monday, July 3rd, 2017

Peter Beinart explains how the Democrats lost their way on immigration:

If the right has grown more nationalistic, the left has grown less so. A decade ago, liberals publicly questioned immigration in ways that would shock many progressives today.

In 2005, a left-leaning blogger wrote, “Illegal immigration wreaks havoc economically, socially, and culturally; makes a mockery of the rule of law; and is disgraceful just on basic fairness grounds alone.” In 2006, a liberal columnist wrote that “immigration reduces the wages of domestic workers who compete with immigrants” and that “the fiscal burden of low-wage immigrants is also pretty clear.” His conclusion: “We’ll need to reduce the inflow of low-skill immigrants.” That same year, a Democratic senator wrote, “When I see Mexican flags waved at pro-immigration demonstrations, I sometimes feel a flush of patriotic resentment. When I’m forced to use a translator to communicate with the guy fixing my car, I feel a certain frustration.”

The blogger was Glenn Greenwald. The columnist was Paul Krugman. The senator was Barack Obama.

[...]

Unfortunately, while admitting poor immigrants makes redistributing wealth more necessary, it also makes it harder, at least in the short term. By some estimates, immigrants, who are poorer on average than native-born Americans and have larger families, receive more in government services than they pay in taxes. According to the National Academies report, immigrant-headed families with children are 15 percentage points more likely to rely on food assistance, and 12 points more likely to rely on Medicaid, than other families with children. In the long term, the United States will likely recoup much if not all of the money it spends on educating and caring for the children of immigrants. But in the meantime, these costs strain the very welfare state that liberals want to expand in order to help those native-born Americans with whom immigrants compete.

What’s more, studies by the Harvard political scientist Robert Putnam and others suggest that greater diversity makes Americans less charitable and less willing to redistribute wealth. People tend to be less generous when large segments of society don’t look or talk like them. Surprisingly, Putnam’s research suggests that greater diversity doesn’t reduce trust and cooperation just among people of different races or ethnicities—it also reduces trust and cooperation among people of the same race and ethnicity.

One should pack the lunchbox with quinoa crackers and organic fruit

Thursday, June 29th, 2017

Thorstein Veblen’s conspicuous consumption has evolved into conspicuously inconspicuous consumption:

Eschewing an overt materialism, the rich are investing significantly more in education, retirement and health – all of which are immaterial, yet cost many times more than any handbag a middle-income consumer might buy. The top 1% now devote the greatest share of their expenditures to inconspicuous consumption, with education forming a significant portion of this spend (accounting for almost 6% of top 1% household expenditures, compared with just over 1% of middle-income spending). In fact, top 1% spending on education has increased 3.5 times since 1996, while middle-income spending on education has remained flat over the same time period.

[...]

While much inconspicuous consumption is extremely expensive, it shows itself through less expensive but equally pronounced signalling — from reading The Economist to buying pasture-raised eggs. Inconspicuous consumption, in other words, has become a shorthand through which the new elite signal their cultural capital to one another. In lockstep with the invoice for private preschool comes the knowledge that one should pack the lunchbox with quinoa crackers and organic fruit.

One might think these culinary practices are a commonplace example of modern-day motherhood, but one only needs to step outside the upper-middle-class bubbles of the coastal cities of the US to observe very different lunch-bag norms, consisting of processed snacks and practically no fruit. Similarly, while time spent in Los Angeles, San Francisco, and New York City might lead one to think that every American mother breastfeeds her child for a year, national statistics report that only 27% of mothers fulfil this American Academy of Pediatrics goal (in Alabama, that figure hovers at 11%).

[...]

Perhaps most importantly, the new investment in inconspicuous consumption reproduces privilege in a way that previous conspicuous consumption could not. Knowing which New Yorker articles to reference or what small talk to engage in at the local farmers’ market enables and displays the acquisition of cultural capital, thereby providing entry into social networks that, in turn, help to pave the way to elite jobs, key social and professional contacts, and private schools. In short, inconspicuous consumption confers social mobility.

The null hypothesis is not an iron law

Friday, June 23rd, 2017

Statistically, educational interventions tend to affect resource allocation much more than outcomes, Arnold Kling reminds us. So, for educational interventions within roughly the current institutional setting, the null hypothesis is not an iron law, but it is an empirical regularity. This led me to add:

What stands out to me is just how little variation we see between schooling options. Public schools are all run on the same basic plan. Catholic schools are too, but with stricter discipline. Private schools aren’t much different, but with a wealthier clientele.

Only a few niche alternatives, such as Montessori and Waldorf, offer something truly different, and they obviously attract unusual families.

At the beginning of the dynasty, taxation yields a large revenue from small assessments

Sunday, June 18th, 2017

Dániel Oláh looks back at the economic ideas of Ibn Khaldun — who is better known to most of us for his notion of assabiya, or social cohesion:

He states that the division of labor serves as the basis for any civilized society and identifies it not only at the factory level but also in social and international contexts. Using the example of obtaining grain, Khaldun shows that the division of labor creates surplus value: “Thus, he cannot do without a combination of many powers from among his fellow beings, if he is to obtain food for himself and for them. Through cooperation, the needs of a number of persons, many times greater than their own (number), can be satisfied” (Khaldun p. 87).

His example of the division of the production process has been completely forgotten by economists, though it is no less expressive than Smith’s pin factory: “such include, for instance, the use of carvings for doors and chairs. Or one skillfully turns and shapes pieces of wood in a lathe, and then one puts these pieces together, so that they appear to the eye to be of one piece” (Khaldun p. 519). What is more, as opposed to Smith, Khaldun doesn’t make any distinction between productive and unproductive work.

Based on this, it’s easy to see that Ibn Khaldun presented ideas very similar to Adam Smith’s, but hundreds of years before the Western philosopher. And Khaldun said even more about the economy.

He analyzed the markets that arise from the division of labor and examined market forces in a simple, didactic way very similar to Alfred Marshall’s. Supply and demand analysis wasn’t invented in the 19th century: the Islamic scholar also described the relationship between demand and supply, and took the role of inventories and merchandise trade into account. He divided the economy into three parts (production, trade, and the public sector), since market prices in his theory include wages, profits, and taxes (Boulakia 1971). At the same time, he analyzed the markets for goods, labor, and land as well. This structured approach led Khaldun to a labor theory of value, which makes him a pre-Marxian (or classical) thinker in this sense (Oweiss, 1988).
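The Marshallian supply-and-demand reasoning the passage credits Khaldun with anticipating can be sketched in a few lines. This is a minimal illustration of my own; the linear functional forms and all the numbers are assumptions for demonstration, not anything found in Khaldun or Marshall:

```python
def equilibrium(a, b, c, d):
    """Clear a market with linear demand Qd = a - b*p and
    linear supply Qs = c + d*p. Setting Qd = Qs and solving
    for price gives p* = (a - c) / (b + d)."""
    p = (a - c) / (b + d)   # equilibrium price
    q = a - b * p           # equilibrium quantity (from the demand curve)
    return p, q

# Illustrative numbers only: demand Qd = 100 - 2p, supply Qs = 10 + p.
p_star, q_star = equilibrium(a=100, b=2, c=10, d=1)  # p* = 30.0, q* = 40.0
```

A rise in demand (larger `a`) raises both the equilibrium price and quantity, which is the comparative-statics logic Marshall later formalized.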

His idea that the produced value is zero if the labor input is zero seems surprisingly classical, far ahead of his time.

In the dynamic Khaldunian model of economic development, the government plays a crucial role. Its policies, primarily taxation, have a great effect on the development of a civilization. After abandoning the nomadic way of life, tribes adopt a sedentary lifestyle, giving birth to urban civilization. The sedentary lifestyle demolishes the original group solidarity and creates the need for a new clientele. Creating a new group identity is costly and requires a new army as well.

So with the deepening of urban civilization, and thanks to the dynasty’s increasingly luxurious needs, the ruler has to raise taxes. In the end, tax rates become so high that the economy collapses. “It should be known that at the beginning of the dynasty, taxation yields a large revenue from small assessments. At the end of the dynasty, taxation yields a small revenue from large assessments” (Khaldun p. 352), he writes, describing the micro-incentives behind taxation as well. On the other hand, he rejects customs duties and government involvement in trade, since the economic and political power of government is disproportionately large.
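Khaldun’s observation (large revenue from small assessments early in the dynasty, small revenue from large assessments late) amounts to a revenue curve that rises and then falls as the rate climbs. A minimal sketch of that logic, in which the functional form and every number are illustrative assumptions of mine, not anything in Khaldun:

```python
def tax_revenue(rate, base=100.0):
    """Stylized revenue curve: the taxable base shrinks as the rate
    rises (people produce and trade less), so
    revenue = rate * base * (1 - rate).
    Revenue is zero at a 0% rate, peaks in between, and collapses
    as the rate approaches 100%."""
    return rate * base * (1.0 - rate)

# Early dynasty: a small assessment yields a large revenue.
early = tax_revenue(0.30)   # ~21.0 on a base of 100
# Late dynasty: a large assessment yields a small revenue.
late = tax_revenue(0.90)    # ~9.0 on the same base
```

This is also why, as the final paragraph notes, the Khaldunian story is really about a trajectory over the life of a dynasty rather than a static policy rule.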

These ideas are so singular for the Middle Ages that even Ronald Reagan quoted Khaldun’s work, stating that the two of them had some friends in common, referring to Arthur Laffer. Laffer himself regarded Khaldun as a forerunner of supply-side economics and the Laffer curve, although Khaldunian ideas actually have little in common with the Laffer curve: they should be interpreted in the time dimension, over the life of a dynasty, rather than as a policy rule of thumb.