But what do you want?

Sunday, March 2nd, 2014

Countless modern stories hinge on the protagonist’s response to the question, But what do you want?

The first two-thirds is about the individual struggling under the oppressive weight of expectations and rules imposed by their background, society, church, boss, parents, spouse or even children — then comes the decisive moment in which the protagonist asks themselves or gets asked “but what do you want?”

And then the dawning realization of failure to live up to the highest modern morality: self-fulfilment, self-expression (self-ishness)… after which the protagonist breaks free of their background, parents or family — and is met by the intoxicating joy of… whatever.

As Saul Bellow used to argue (and he would know; being a prime example of it), the masses are “The Romantics” now — an attitude which used to be expressed by a handful of Dichter und Denker (poets and thinkers) is now mainstream: the individual sees himself (more often herself) as standing against everybody and everything else; and as in the Economy chapter of Thoreau’s Walden, the prime question of life becomes how to get the most from the world in return for the least amount of effort.

At an instinctive level, most people recognize that this perspective is evil, but in a secular society there is no compelling reason not to reject demands and duties when they become aversive and if you can get away with it.

(After all, you only live once, you have a duty to make the most of your time, everyone is doing it, in fact it is our duty to fight oppression — so divorce is an act of heroic rebellion…)

Why not? If something makes me feel happier — if it is what I want — then why not do it?

Disappointed with modernity

Sunday, January 26th, 2014

Bruce Charlton is disappointed with modernity:

I have this strong feeling, which goes way back into my early teen years — that I was very lucky to live in a period of peace, abundance, comfort; and that the existence of this ‘safety net’ gave me great opportunities to strive to do the best work of which I was capable: to aim high, be idealistic, take the higher risk options.

As an atheist and an intellectual, I saw these opportunities in William Morrisite, or Emersonian terms of enhancement of the arts, architecture, natural beauty, the landscape; self-education; science and philosophy; dignity and creativity of labour; self-sufficiency; knowledge and participation in poetry and literature; establishing wholesome and free social arrangements — and the like.

And I have always been terribly disappointed that very few people even tried to do so.

Instead there was a societal obsession with material accumulation, with getting ever more of what they already had in abundance.

Even worse, there was the whole world of “fashion” — the mass willingness to be manipulated in pursuit of one manufactured triviality after another.

For example, when I first got a permanent job as a university lecturer, I recognized that I had one of the most secure positions in one of the most secure societies in history — and that this meant I could embark on long term projects in scholarship, writing and research; that my secure position made it easy to stand aside from trends; that I could be a model of teaching and scientific integrity and it was virtually impossible for my employer to sack me for it!

But in general colleagues refused to acknowledge the basic privilege and security of their position, and persisted in talking as if they could be thrown out into destitution and starvation at any moment — and therefore they had to go along with whatever fashion, trend and politically driven lunacies and lies were floating around the university — and work at terribly unambitious scholarly and research projects that were neither useful nor radical — but merely aspired to be microscopic incremental increases in what were already trivial and irrelevant backwaters of tedium.

[...]

Well, it is now clear for those with eyes to see that prosperity, peace, and comfort are not the natural state of all right-thinking persons — but an unearned privilege inherited from the genius and hard work of past generations; and we have now become so far advanced in dissipation that they cannot long continue.

But it is terribly disappointing to me that our civilization found nothing better to do with its vast opportunities than watch tv, participate in chit-chat, take foreign holidays, buy ever more new cars and clothes and gadgets; and occupy our minds with manufactured news, seduction and pornography, celebrity gossip, the pursuit and promotion of intoxication; cynically contrived point-and-click sentimentality; and idle malice and hatred (aka politics).

Treating one Disease by Causing Another

Tuesday, May 21st, 2013

Treating one disease by causing another is a mainstream therapeutic strategy, Bruce Charlton notes — especially in psychiatry:

Malarial Therapy of GPI (“General Paralysis of the Insane” — cerebral syphilis)

Patients with incurable and fatal GPI were deliberately infected with malaria. The very high pyrexia (temperature) killed the syphilis germ, but (hopefully) not the patient. The patient was then (hopefully) cured of their malaria using quinine.

Leucotomy/Lobotomy

Patients with chronic and incurable anxiety or tension were deliberately given brain damage, cutting off the frontal lobes of the cerebral cortex from the rest of the brain. This made the patients docile and indifferent – which was presumed to be an improvement. The procedure became so popular that brain damage was inflicted on patients with less severe and probably temporary anxiety and other conditions, too.

Neuroleptics/Antipsychotics create Parkinson’s disease (or, rather, Parkinsonism, which may be reversible) for the treatment of fear, agitation, delusions, hallucinations, and hyperactivity

Patients with a range of very distressing psychological and psychotic symptoms were deliberately made to suffer from Parkinson’s disease by giving them dopamine blocking drugs. As well as producing the physical symptoms of Parkinsonism (tremor, stiffness, movement disorders), the drugs produced the psychological symptoms of Parkinsonism – emotional blunting and demotivation. Patients could no longer be bothered to respond to delusions and hallucinations.

Unfortunately patients could no longer be bothered to do anything else, either, and became asocial, withdrawn, idle, and without the ability to experience pleasure. Also, when treatment was sustained, the drugs were found to have a permanent effect (tardive dyskinesia) and to create dependence — such that withdrawal often caused a psychotic breakdown.

War in the East

Thursday, December 6th, 2012

The books generally say that biological warfare is ineffective, Gregory Cochran notes — but then they would say that, wouldn’t they?

There is reason to think it has worked, and it may have made a difference.

Once upon a time, it was spring 1942, and the Germans were on a roll. Timoshenko had attacked from an already-established bridgehead across the Donets (the Izium salient) with about 750,000 men. He made a bad choice, since the Germans had already begun concentrating their forces for a planned southern offensive. After some initial Soviet gains, the Germans brought in Luftwaffe reinforcements and achieved air superiority. The 1st Panzer army counterattacked and cut off much of the Russian forces, who lost a quarter of a million prisoners (according to Beevor), many dead and wounded, and most of their armor. There was a huge hole in the front, and the Germans advanced towards Stalingrad.

We know of course that this offensive eventually turned into a disaster in which the German Sixth Army was lost. But nobody knew that then. The Germans were moving forward with little to stop them: they were scary SOBs. Don’t let anyone tell you otherwise. The Soviet leadership was frightened, enough so that they sent out a general backs-to-the-wall, no-retreat order that told the real scale of losses. That was the Soviet mood in the summer of ’42.

That’s the historical background. Now for the clues. First, Ken Alibek was a bioweapons scientist back in the USSR. In his book, Biohazard, he tells how, as a student, he was given the assignment of explaining a mysterious pattern of tularemia epidemics back in the war. To him, it looked artificial, whereupon his instructor said something to the effect of “you never thought that, you never said that. Do you want a job?” Second, Antony Beevor mentions the mysteriously poor health of German troops at Stalingrad — well before being surrounded (p210-211). Third, the fact that there were large tularemia epidemics in the Soviet Union during the war — particularly in the ‘oblasts temporarily occupied by the Fascist invaders’, described in History and Incidence of Tularemia in the Soviet Union, by Robert Pollitzer.

Fourth, personal communications from a friend who once worked at Los Alamos. Back in the ’90s, after the fall of the Soviet Union, there was a time when you could hire a whole team of decent ex-Soviet physicists for the price of a single American. My friend was having a drink with one of his Russian contractors, son of a famous ace, who started talking about how his dad had dropped tularemia here, here, and here near Leningrad (sketching it out on a napkin) during the Great Patriotic War. Not that many people spontaneously bring up stories like that in dinner conversation…

Fifth, the huge Soviet investment in biowarfare throughout the Cold War is a hint: they really, truly, believed in it, and what better reason could there be than decisive past successes? In much the same way, our lavish funding of the NSA strongly suggested that cryptanalysis and sigint must have paid off handsomely for the Allies in WWII — far more so than publicly acknowledged, until the revelations about Enigma in the 1970s and later.

We know that tularemia is an effective biological agent: many countries have worked with it, including the Soviet Union. If the Russians had had this capability in the summer of ’42 (and they had sufficient technology: basically just fermentation), it is hard to imagine them not using it. I mean, we’re talking about Stalin. You think he had moral qualms? But we too would have used germ warfare if our situation had been desperate.

In my picture, it probably wasn’t used in 1941 because of surprise, the fast-moving front, crushing German air superiority (after the initial airfield strikes), and winter. I think that the Soviets were probably hesitant in 1942, since detection would have probably led to German efforts along the same lines, doubly dangerous because Germany was the world leader in bacteriology in those days, and because Moscow was within easy reach of the Luftwaffe. Tularemia, though, is easy to misdiagnose, and the Germans didn’t have much experience with it. Moreover, Germans in Stalingrad never had a chance to be fully debriefed back in Germany. Risky in the long run, but you first have to survive in the short run.

Bruce Charlton has a bit to add.

Almost a Psychosis

Sunday, February 13th, 2011

As societies become more complex — moving from hunting and gathering, to herding and slash-and-burn agriculture, to complex agriculture with division of labor, to modern industrialism and post-industrialism — the individuals within those societies seem to become more intelligent and less impulsive, which, Bruce Charlton says, is both a blessing and a curse:

It is the middling societies, agriculturally-based and with an average IQ of around 80-90, which seem to be the most devoutly religious — whether pagan or monotheistic.

Hunter gatherer societies are animistic, with totemism coming-in with simple agriculture along with larger scale organization and technology — and the industrial societies with high IQ have a very abstract religion tending towards atheism.

As average intelligence in a society becomes higher, so religiousness becomes less spontaneous, less intuitive, less realistic, less supernatural, less personal.

This can even be seen at a relatively fine level of discrimination within Christianity, with a gradient in average IQ among the denominations.

I think it no coincidence that even in Catholicism, the more rational Roman Catholics tend to dominate higher IQ societies than the more mystical Eastern Orthodoxy.

This is all a part of my larger thesis that higher average intelligence drove modernization (including industrialization) — but, mainly due to its effect in weakening spontaneous religiousness, is also destroying it.

And it is part of my belief that high IQ is a curse as well as a benefit.

The benefits are clear, the curse is not appreciated: indeed, high IQ people pride themselves on their disability.

People with a high IQ (high, that is, by historical and international standards; by which I mean above about 90) should regard themselves as suffering from a mental illness — almost a psychosis — since their perception of the world is so distorted by a spontaneous, compulsive abstraction which is alien to humans.

But high IQ in and of itself (no matter how supported culturally) cannot lead to endogenous industrialization — modernization requires genius: which requires both high IQ and creativity.

I have not touched on personality here; but much of what I said about IQ applies also to personality.

Complex agricultural societies provide a strong selective force for re-shaping and taming personality, promoting conscientiousness, docility (reducing spontaneous aggression and violence) and reducing spontaneous creativity.

These are the marks of the ‘civilized’ personality.

And this is why genius is so rare: because creativity and intelligence are reciprocally correlated, yet both must be present for genius to happen.

Genius is necessary for modernity, for industrialization, because it is genius which produces ‘breakthroughs’; and modernity requires frequent breakthroughs in order to outrun Malthusian constraints.

Europeans produced, in the past, the most geniuses proportionately — but why?

I think it was because European society experienced a powerful and rapid selective force towards increased IQ, which left the creative personality trait more-intact than did the longer and slower selection for intelligence which happened in East Asia.

The longer and slower selection in East Asia led to (even) higher intelligence, but a greater taming/civilization of the personality.

Consequently the average East Asian personality is both more intelligent (and more civilized) and less creative than the European.

(However, genius is now apparently a thing-of-the-past — even in the West; and therefore — lacking breakthroughs — modernity will grind to a halt and reverse; indeed this has already begun.)

We should regard high IQ rather as we regard sickle cell anaemia — a useful specific adaptation to certain specific selection pressures in certain types of society, but one which takes its toll in many other ways and in other situations.

The most obvious disadvantage of high IQ is reduced fertility when fertility becomes controllable. In the past, any effect of IQ on lowering fertility was minimized by the lack of contraceptive technology, and was (at least in complex agricultural societies) more-than-compensated by the reduced mortality rate of more intelligent people.

So in complex agricultural societies with high age-adjusted mortality rates, high IQ is adaptive — because reduced death rates have a more powerful effect on the number of surviving children; but in modern industrial societies with low age-adjusted mortality rates, high IQ is maladaptive, because reduced birth rates have a more powerful effect on the number of surviving children (especially when fertility rates among the high IQ have fallen below replacement levels).

(See Why are women so intelligent?)
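To unpack the arithmetic behind the paragraph above: a toy calculation, using invented numbers purely for illustration (they are not Charlton’s figures or real demographic data), shows how the same fertility penalty for high IQ can pay off when child survival is far below 1.0, yet cannot be offset once survival is already near 1.0.

```python
# Toy model: expected surviving children = fertility x probability of
# surviving to adulthood. All numbers below are invented for illustration.

def surviving_children(fertility, survival_rate):
    """Expected number of children who reach adulthood."""
    return fertility * survival_rate

# High-mortality agricultural society: suppose high IQ slightly lowers
# fertility but substantially raises child survival.
baseline_agrarian = surviving_children(fertility=6.0, survival_rate=0.40)  # 2.40
high_iq_agrarian  = surviving_children(fertility=5.5, survival_rate=0.55)  # ~3.03

# Low-mortality industrial society: survival is already near 1.0, so the
# fertility reduction cannot be offset by further survival gains.
baseline_modern = surviving_children(fertility=2.2, survival_rate=0.98)    # ~2.16
high_iq_modern  = surviving_children(fertility=1.4, survival_rate=0.99)    # ~1.39

print(f"agrarian: baseline {baseline_agrarian:.2f} vs high-IQ {high_iq_agrarian:.2f}")
print(f"modern:   baseline {baseline_modern:.2f} vs high-IQ {high_iq_modern:.2f}")
```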

Clearly, the social selection pressures which led to increased IQ in stable complex agricultural societies have — for several generations — reversed; and the selection pressure is now to reduce IQ in industrialized countries.

But, fertility aside, the major disadvantage of high IQ (and one which works faster than genetic changes) is the compulsive abstraction of high IQ people.

High level abstraction, while enabling genius, is also mostly responsible for the profound and pervasive spiritual malaise of modernity: for alienation, relativism and nihilism.

This tendency to [alienation, relativism and nihilism] among individual intellectuals is amplified by IQ stratification and large population size which creates an IQ-meritocracy; within which abstraction becomes compulsive and mutually-reinforcing and finally (in some people) inescapable.

So that in an IQ-elite the intellectuals are often proud of their inability to perceive the obvious, and their lack of ability to perceive solid reality, and their compulsive tendency to live in a changing state of perpetually deferred judgment and lack of commitment.

But these are bad traits not virtues; intellectuals should be ashamed of them, and humble about their deficiencies — not proud of the inability to perceive and stand-by the obvious.

Bruce Charlton’s Concept of Political Correctness

Sunday, February 6th, 2011

Bruce Charlton’s concept of political correctness, or progressivism, starts from the notion that only the intelligent can truly internalize the abstractions of PC thought — but that doesn’t make PC thought right:

PC is atheistic and this-worldly — but is trying to be good.

For PC the ultimate evil is selfishness — therefore the highest good it can conceive is unselfishness: i.e. altruism.

This-worldly altruism is operationalized in terms of the allocation of ‘goods’ (money, power, status etc).

But PC sees humans as innately selfish — therefore the allocation of goods must be done impersonally — in practice, by rules and bureaucracies.

What governs the principles of PC? Reaction, rejection. The past is tainted. There must be a fresh start. The good is the opposite of what people used to believe. Hence moral inversion.

The fact that all this is anti-spontaneous, anti-natural, alien, scary — is actually taken as a sign of its virtuousness. The truly altruistic must sacrifice themselves.

The mass media is essential to this since it fills our minds every day, continually displacing the past — so whatever is in the mass media is reality.

(Bruce Charlton, Mencius Moldbug, and others discuss the ambiguous relationship between Christianity and Progressivism over at Foseti’s.)

The Carlylean Atheist’s God

Monday, January 17th, 2011

Recently I mentioned Bruce Charlton’s four tough questions for the secular right. What I didn’t realize — until Kalim Kassam mentioned it — is that Mencius Moldbug swooped in and answered them, by saying that he wants a sovereign corporation rather than a democracy, because coherent authority is not fissiparous:

Radicalism, etc, are tempting because these ideologies collectively empower their believers. In a state that does not leak power, they lose their attraction and disappear naturally.

Intellectuals are not inherently liberal. They are liberal if and only if liberalism is empowering. Intellectuals in Nazi Germany were attracted to Nazism, not democracy. Intellectuals in golden-age Spain were attracted to Catholicism, not democracy. Intellectuals (almost all) in Elizabethan England were attracted to the Virgin Queen, not democracy.

Divided authority is entropic and autocatalytic — like rust, cancer, etc. It can be cured, but it has to be cured all the way. The more of it you have, the harder it is to kill.

Present regimes have no trouble suppressing right-wing dissent, violent or nonviolent. They simply need to apply these mechanisms to the left.

Charlton has become disenchanted with — and alienated within — the modern bureaucratic world, which has led him [via neo-Paganism] to Christianity. Moldbug doesn’t disagree with this view of modernity, but he hasn’t exactly found Jesus:

Oh, I don’t at all disagree. My own strongest influence is Carlyle, and Carlyle as you know was a very Christian man — although one could say he had a Christianity of his own. He certainly went through a great crisis of faith in his youth. And he was no hedonist!

My ideal state (a) is run like a business, and (b) does the will of God. It seems to me that these criteria do not conflict, but reinforce each other from opposite perspectives — if you’ll pardon the cliche, a wave-particle duality. I think God wants his kingdoms on Earth to be run like businesses, and I think that if you run a kingdom like a business you’ll find yourself doing the will of God — whether or not you ascribe any sort of reality to Him.

“God” for the Carlylean atheist is a fictional character, like Hamlet. Dear atheist, do you believe in the material reality of Hamlet? Does this prevent you from (a) reading Shakespeare, (b) imagining the person of Hamlet, (c) describing certain actions as characteristic or uncharacteristic of Hamlet?

“God,” for instance, solves or at least greatly ameliorates the is-ought problem. What is good? What is justice? What is right? In each case, it is the will of God — for it’s clear that if we define an ungood, unjust, unrighteous deity as “God,” we are just abusing the English language. We certainly can’t define good as the will of the Flying Spaghetti Monster.

Does this solve anything? No, the secularist might say, because we cannot see or speak to God, at least not in any reproducible way. Wrong! We cannot see God, but we can imagine God — our post-ape brains are very good at (a) personifying imaginary characters, (b) submitting to higher authorities, (c) obeying moral codes.

Thus a fruitless debate of “ought” becomes a fruitful debate of the nature of God. One ought to eat babies, I say. You disagree. Can we continue conversing? We cannot, Hume tells us. Hume is right.

But if I say, God wants us to eat babies, I have to construct the character of a baby-munching God. You in turn can criticize my baroque construction — just as if I’d written a “Hamlet II” in which Hamlet ate babies. Thus the debate is fruitful, in that (a) we have stuff to talk about, (b) spectators can tell which of us is an ass.

In short, I simply don’t see any real conflict between atheist and Christian visions of reaction. For all sorts of reasons (child-rearing among them), I would much rather be a Christian, or even a Muslim — but I’m not, and I can’t change that.

There’s a story that Oriana Fallaci spoke to John Paul II and asked His Holiness how, as an atheist, she should live her life. “You don’t believe in God?” the Pope said. “No problem — just act as if you did.” I suspect there are precious few atheists who are physically incapable of understanding or following these instructions — and even fewer who could act as if they believed in the Flying Spaghetti Monster.

(Foseti found the same passage interesting.)

Should Western Civilization be saved?

Wednesday, January 5th, 2011

Should Western Civilization be saved?, Bruce Charlton asks — even if it could be saved:

It is purportedly the baseline belief of the Secular Right that the major goal of conservative or reactionary politics should be to ‘save’ Western Civilization.

Yet this is not a coherent belief, nor is it possible, nor is it desirable.
[...]
The big problem is that it is precisely Western Civilization which created Communism, Socialism, Liberalism, and Political Correctness; ‘modern art’; ‘human rights’; pacifism — it is Western Civilization which is destroying itself.

The counter-currents have always been there — at least since the Great Schism of a millennium ago — and the counter-current has now overwhelmed the main current.
[...]
Furthermore, all of those abstract attributes which the Secular Right wants to preserve in Western Civilization are complicit in the decline: freedom of choice/selfishness; democracy/mob rule; freedom of consciousness/secularism; philosophy-science/rational bureaucracy; art/subversion; freedom of lifestyle/moral inversion; kindness/cowardice; an open and accessible mass media/the primacy of virtual reality… the whole lot.
[...]
The Secular Right is, I am afraid, merely Saruman attempting to use Sauron’s Ring to fight Sauron; all its tactics to defend what it regards good are simultaneously (but in other places) strengthening the forces of destruction.

There is enough to suggest that the Left is indeed the main line of a Western Civilization which is pre-programmed to self-destruction; while the Right is merely imposing temporary corrections which save the West in the short term but only at the cost of entrenching its long-term and underlying errors.

The West cannot be saved.

He revisits these ideas in the second of his four tough questions for the secular right:

What are the mechanisms by which your ideal society would be maintained? Are they plausible? Are they strong enough?

Or are you just engaged in day-dreaming?

(Anyone can come up with their own ideal utopia — but in the real world, stable options are heavily constrained.)

That’s obviously not just a question for the secular right.

Foseti took a stab at answering Charlton’s questions, but I think he side-stepped the crux of that one:

Sure. Take Singapore. It’s a lot closer to my ideal than the current American form of government. It exists — it’s therefore possible to get a whole lot better.

As I said there, I don’t think Bruce Charlton would argue that Singapore can’t exist, but rather that it can’t last — not in its present form.

I was pleased to see Charlton himself respond to Foseti:

I’d like to emphasize that this is not really a matter of what I want, but of what we will get. And that I am thinking on a timescale of human generations (c. 25 year units), not of the next few years.

I was profoundly influenced by the analysis of Ernest Gellner who (in brief) divided all human societies into the 1. hunter-gatherer, 2. the agriculturally-based (dominated by warriors and priests, in various combinations), and 3. the post-industrial revolution modern societies — which depend on permanent growth (which means permanent increase in efficiency/productivity — largely by increasing functional specialization and coordination).

When (and not if) industrial civilization collapses (and this will happen sooner rather than later, not least because the politically correct ruling elites want to destroy The West and they are clearly succeeding); The West will (like it or not) revert to the agriculturally based societies run by combinations of warriors and priests which existed everywhere in the world (except among a handful of hunter gatherers) before the industrial revolution.

Our choices are between different balances of warriors and priests, and between different types of priests. The current default world religion is (obviously) Islam, not Christianity — due to its demographic growth and sustained assertive self-confidence.

The Beginning of Reality-Proof Morality

Friday, November 12th, 2010

Bruce Charlton presents the Abolition movement — which originated among English Quakers and spread to evangelicals, then to mass support in England — as a precursor to modern political correctness:

[T]he abolition movement was ruthless in its self-confidence, its desire to impose its reality globally: slavery was abolished everywhere in the world (except for some tiny, shrinking pockets in sub-Saharan Africa) in a long and dynamic (not to say ruthless) campaign stretching over many decades, and mostly by military coercion when the British Empire was at its height.

The passing of the acts of parliament to abolish the slave trade, then to abolish slavery in the Empire were merely the beginning of the process. The actual abolition of slavery everywhere had to be imposed by unrelenting, long-term political and military pressure, and backed-up by the guns of the Royal Navy which had a long reach.

In this respect the abolition movement was the antithesis of the feeble submissiveness of modern political correctness. Nonetheless, abolition shared the presumption of PC that ethics were susceptible of discovery and advancement — not by divine revelation, but by human social consensus.

Abolition showed that there might be an avant garde of elite opinion, and that the mass of the public might be brought around to views that they found initially incomprehensible, abhorrent or dangerous.

In particular, abolition was built on the “discovery” (initially by Nonconformist Protestants and Anglican evangelicals) that slavery was utterly unacceptable and must be stopped at any cost — a realization that entailed overthrowing 1800 years of Christian morality.

The “discovery” that Christianity ruled-out slavery entailed the assumption of moral progress, that modern abolitionists were more morally advanced than the ancient Greeks and Romans, than the Apostles, Saints and Holy Fathers and the greatest theologians of all previous eras.

Until the abolition movement, all societies in history had accepted slavery as a fact. Slavery was universal wherever it could be afforded.

It was only in England, among a small group of protestants in the late 1700s, that the discovery was made that slavery was intolerable, was indeed the worst of sins, and must be eradicated at any cost.

Abolition can thus be seen as an early example of progressivism — despite the contrast with PC that abolition was being advocated and implemented by muscular and militaristic Christians.

What was different about abolitionism was a fanaticism based on abstractness and universality of ethics.

Abolition was not primarily self-interested but was genuinely altruistic — in enforcing abolition upon the world the British Empire gave up a considerable amount of profitable enterprise, expended vast amounts of treasure in military action and in compensation of slave owners, expended prime manpower (and suffered heavy casualties) in the slave wars.

For instance the British military station in Sierra Leone, specifically for enforcing abolition, suffered a mortality rate of 50 percent per year due to tropical disease — a stunningly high number, such that to be stationed there was almost a death sentence — justifying its nickname of ‘the white man’s grave’.

And the costs were immense for many slaves, who were killed during these military actions, were slain by slavers and thrown overboard from ships to avoid incrimination, and who in many instances suffered death and extreme hardship following liberation.

So abolition has this dual face. In some ways it was the greatest altruistic moral achievement ever (in so far as costly altruism is supposedly the ultimate virtue for secular liberal morality).

In other ways abolition was the beginning of reality-proof morality, the morality of designated “good acts” (regardless of ensuing consequences) and of the modern-style, prideful, hate-filled, self-gratifying justification-by-motivations — and therefore a precursor to political correctness.

(Hat tip to Foseti.)

I suppose most Americans have no idea that (a) the British ended the slave trade, not Abraham Lincoln, and (b) only a tiny fraction of African slaves ended up in North America.

The Talk-Radio Host in Your Head

Tuesday, August 17th, 2010

Our rational faculty isn’t a scientist, Jonah Lehrer says — it’s a talk-radio host:

Wilson and Schooler took the 1st, 11th, 24th, 32nd, and 44th best-tasting jams (at least according to Consumer Reports) and asked the students for their opinion. In general, the preferences of the college students closely mirrored the preferences of the experts. Both groups thought Knott’s Berry Farm and Alpha Beta were the two best-tasting brands, with Featherweight a close third. They also agreed that the worst strawberry jams were Acme and Sorrel Ridge. When Wilson and Schooler compared the preferences of the students and the Consumer Reports panelists, they found that they had a statistical correlation of .55. When it comes to judging jam, we are all natural experts. We can automatically pick out the products that provide us with the most pleasure.

But that was only the first part of the experiment. The psychologists then repeated the jam taste test with a separate group of college students, only this time they asked them to explain why they preferred one brand over another. As the undergrads tasted the jams, they filled out written questionnaires, which forced them to analyze their first impressions, to consciously explain their impulsive preferences. All this extra analysis seriously warped their jam judgment. The students now preferred Sorrel Ridge — the worst tasting jam according to Consumer Reports — to Knott’s Berry Farm, which was the experts’ favorite jam. The correlation plummeted to .11, which means that there was virtually no relationship between the rankings of the experts and the opinions of these introspective students.

What happened? Wilson and Schooler argue that “thinking too much” about strawberry jam causes us to focus on all sorts of variables that don’t actually matter. Instead of just listening to our instinctive preferences, we start searching for reasons to prefer one jam over another.
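To make the reported correlations a little more tangible, here is a minimal sketch with made-up rankings (not Wilson and Schooler’s actual data, which I don’t have) showing how roughly-matched preferences yield a correlation in the .5–.6 range while scrambled preferences yield one near zero; it assumes numpy is available.

```python
# Hypothetical jam rankings, 1 = best. The numbers are invented to mimic the
# pattern Lehrer describes, not taken from Wilson & Schooler's study.
import numpy as np

experts            = np.array([1, 2, 3, 4, 5])  # Consumer Reports order
students_instinct  = np.array([2, 1, 4, 5, 3])  # gut ratings: roughly agree
students_analyzing = np.array([2, 5, 3, 1, 4])  # after "explaining why": scrambled

print(np.corrcoef(experts, students_instinct)[0, 1])   # 0.6 -- broad agreement
print(np.corrcoef(experts, students_analyzing)[0, 1])  # 0.0 -- no relationship
```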

Lehrer cites the abstract of a paper by Hugo Mercier and Dan Sperber:

Reasoning is generally seen as a means to improve knowledge and make better decisions. Much evidence, however, shows that reasoning often leads to epistemic distortions and poor decisions. This suggests rethinking the function of reasoning.

Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given human exceptional dependence on communication and vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views.

This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively with the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow the persistence of erroneous beliefs. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all of these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: look for arguments that support a given conclusion, and favor conclusions in support of which arguments can be found.

Through the scientific method, we’ve managed to hijack this argumentative tool for truth-seeking.

This also reminds me of Bruce Charlton’s explanation for why the high-IQ lack common sense.

(Hat tip to Aretae.)

Good professors make bad kings

Tuesday, August 10th, 2010

Good professors make bad kings, Bruce Charlton realized, after reading The decline of the German mandarins: the German academic community, 1890-1933, by Fritz K Ringer:

In other words, I had assumed, up to that point, that if only things were run by people ‘like me’, then things would inevitably be run better.

Before reading the book I had not been aware that I believed this, but although unarticulated, a belief in leadership by intellectuals had been a basic assumption.

It is, indeed, an assumption of the modern political elite, and has been the assumption of Dichter und Denker (poets and thinkers) for a couple of hundred years (since the Romantic era) — but it was not an assumption of traditional societies before this.

Indeed, as I read in Ernest Gellner at about the same time, in traditional societies the intellectual class (priests and clerks) was subordinated to the leadership — which was essentially military.

Intellectuals were — Gellner said — essentially ‘eunuchs’ — in the sense that they were not allowed to build dynastic, hereditary power — this was reserved for the military leadership.

So priests and other intellectuals with power were sometimes actual eunuchs, or servants and slaves, or celibate (legally, not sexually, celibate — i.e. they could not have legitimate heirs), or members of a legally circumscribed minority (such as Jewish merchants and money lenders), or — like the Chinese mandarins — they were prohibited from handing on their status to their children (entry to the mandarinate being controlled by competitive examinations).

The ‘natural’ leaders of human society throughout most of history are the military leaders — the ‘generals’. The aristocracy were essentially the military leaders.

But in modern societies, the Mandarins have progressively taken over the leadership.

People ‘like me’ run things; the military leadership (unless they are themselves mandarins — as increasingly is the case — and servile to political correctness) are officially feared, hated and despised; indeed any aspirant for power who is not ‘an intellectual’ is officially feared, hated and despised.

Fritz Ringer’s book was a revelation because he described a familiar and recent society that had indeed been a mandarinate — and this was Germany in the nineteenth century and leading up to the first and second world wars. Germany was at that time the academic intellectual centre of the West.

And ‘yet’ the mandarinate had been a disaster — leading to two world wars and National Socialism and also (ironically) to the eclipse of the German mandarins — who were purged virtually overnight in 1933 (only a few obedient Nazi mandarins were allowed to stay — like Martin Heidegger).

The German mandarins were nationalist, that was the focus of their ideology (the distinctive superiority of German culture) and that is one variety — very rare nowadays except in small nations and would-be nations like Scotland or Catalonia.

Of course the most widespread mandarinate was the Soviet Union whose ideology was (mostly) anti-nationalistic/ international communism. And international left-mandarinism is now the dominant form of government in the West.

Since reading Ringer, when my eyes were opened, my experience has hardened into conviction that — as a generalization — mandarins make very useful servants but very bad leaders. Good professors make bad kings.

The main problem is, I think, that mandarins are expert at ignoring common sense reality and focusing on abstraction.

Mandarins live ‘in culture’ — they are ‘Kultur’ experts. Culture is the source of their expertise and prestige — culture comes between mandarins and common sense.

When, as is normal, mandarin abstractions are substantially incomplete and significantly biased, then there is no limit to how bad mandarin leadership can be; because any feedback provided by ‘reality’ can be ignored by mandarins in ways which are impossible to normal people.

Power without Responsibility

Thursday, May 6th, 2010

While discussing the cancer of bureaucracy — in a piece I’ve already recommended — Bruce Charlton addresses the odd rise of the committee:

Committees now dominate almost all the major decision-making in modernizing societies — whether in the mass committee of eligible voters in elections, or such smaller committees as exist in corporations, government or in the US Supreme Court: it seems that modern societies always deploy a majority vote to decide or ratify all questions of importance. Indeed, it is all-but-inconceivable that any important decision be made by an individual person — it seems both natural and inevitable that such judgments be made by group vote.

Yet although nearly universal among Western ruling elites, this fetishizing of committees is a truly bizarre attitude; since there is essentially zero evidence that group voting leads to good, or even adequate, decisions — and much evidence that group voting leads to unpredictable, irrational and bad decisions.

The nonsense of majority voting was formally described by Nobel economics laureate Kenneth Arrow (1921-) in the 1960s, but it is surely obvious to anyone who has had dealings with committees and maintains independent judgement. It can be demonstrated using simple mathematical formulations that a majority vote may lead to unstable cycles of decisions, or a decision which not one single member of the committee would regard as optimal. For example, in a job appointments panel, it sometimes happens that there are two strong candidates who split the panel, so the winner is a third choice candidate whom no panel member would regard as the best candidate. In other words any individual panel member would make a better choice than derives from majority voting.

Furthermore, because of this type of phenomenon, and the way that majority decisions do not necessarily reflect any individual’s opinion, committee decisions carry no responsibility. After all, how could anyone be held responsible for outcomes which nobody intended and to which nobody agrees? So that committees exert de facto power without responsibility. Indeed most modern committees are typically composed of a variable selection from a number of eligible personnel, so that it is possible that the same committee may never contain the same personnel twice. The charade is kept going by the necessary but meaningless fiction of ‘committee responsibility’, maintained by the enforcement of a weird rule that committee members must undertake, in advance of decisions, to abide by whatever outcome (however irrational, unpredictable, unjustified and indefensible) the actual contingent committee deliberations happen to lead to. This near-universal rule and practice simply takes ‘irresponsibility’ and re-names it ‘responsibility’…
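The “simple mathematical formulations” Charlton alludes to include the classic Condorcet cycle. As a rough sketch of my own (not taken from Charlton’s essay), the few lines below show three panel members, each with a perfectly consistent individual ranking, whose pairwise majority votes nonetheless produce a preference cycle with no stable winner:

```python
# Condorcet-cycle demo: three voters with coherent individual rankings,
# yet pairwise majority voting yields a cycle.
from itertools import combinations

# Each voter lists candidates from most to least preferred.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Prints: A over B, C over A, B over C -- a cycle, so there is no candidate
# the committee as a whole prefers to every alternative, even though every
# individual member's ranking is perfectly consistent.
```

Charlton’s job-appointments example is a close cousin: the aggregation procedure, rather than any individual member’s judgment, ends up determining the outcome.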

If that sounds like anyone, Charlton’s postscript confirms it:

Although I do not mention it specifically above, the stimulus to writing this essay came from Mark A Notturno’s Science and the open society: the future of Karl Popper’s philosophy (Central European University Press: Budapest, 2000) — in particular the account of Popper’s views on induction. It struck me that committee decision-making by majority vote is a form of inductive reasoning, hence non-valid; and that inductive reasoning is in practice no more than a form of ‘authoritarianism’ (as Notturno terms it). In the event, I decided to exclude this line of argument from the essay because I found it too hard to make the point interesting and accessible. Nonetheless, I am very grateful to have had it explained to me.

I should also mention that various analyses of the pseudonymous blogger Mencius Moldbug, who writes at Unqualified Reservations, likely had a significant role in developing the above ideas.

Again, I recommend the whole thing.

The Cancer of Bureaucracy

Friday, April 30th, 2010

Bruce Charlton decries the cancer of bureaucracy:

Everyone living in modernizing ‘Western’ societies will have noticed the long-term, progressive growth and spread of bureaucracy infiltrating all forms of social organization: nobody loves it, many loathe it, yet it keeps expanding. Such unrelenting growth implies that bureaucracy is parasitic and its growth uncontrollable — in other words it is a cancer that eludes the host immune system.

Old-fashioned functional, ‘rational’ bureaucracy that incorporated individual decision-making is now all-but extinct, rendered obsolete by computerization. But modern bureaucracy evolved from it, the key ‘parasitic’ mutation being the introduction of committees for major decision-making or decision-ratification. Committees are a fundamentally irrational, incoherent, unpredictable decision-making procedure; which has the twin advantages that it cannot be formalized and replaced by computerization, and that it generates random variation or ‘noise’ which provides the basis for natural selection processes.

Modern bureaucracies have simultaneously grown and spread in a positive-feedback cycle; such that interlinking bureaucracies now constitute the major environmental feature of human society which affects organizational survival and reproduction. Individual bureaucracies must become useless parasites which ignore the ‘real world’ in order to adapt to rapidly changing ‘bureaucratic reality’.

Within science, the major manifestation of bureaucracy is peer review, which — cancer-like — has expanded to obliterate individual authority and autonomy. There has been local elaboration of peer review and metastatic spread of peer review to include all major functions such as admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing and the award of prizes.

Peer review eludes the immune system of science since it has now been accepted by other bureaucracies as intrinsically valid, such that any residual individual decision-making (no matter how effective in real-world terms) is regarded as intrinsically unreliable (self-interested and corrupt). Thus the endemic failures of peer review merely trigger demands for ever-more elaborate and widespread peer review.

Just as peer review is killing science with its inefficiency and ineffectiveness, so parasitic bureaucracy is an un-containable phenomenon; dangerous to the extent that it cannot be allowed to exist unmolested, but must be utterly extirpated. Or else modernizing societies will themselves be destroyed by sclerosis, resource misallocation, incorrigibly-wrong decisions and the distortions of ‘bureaucratic reality’. However, unfortunately, social collapse is the more probable outcome, since parasites can evolve more rapidly than host immune systems.

That’s the abstract. Read the whole thing.

After science: Has the tradition been broken?

Monday, April 26th, 2010

A few months ago I finally got around to reading A Canticle for Leibowitz, in part because Bruce Charlton mentions it while discussing the scientific tradition:

The classic science fiction novel A Canticle for Leibowitz by Walter Miller portrays a post-nuclear-holocaust world in which the tradition of scientific practice — previously handed-down from one generation of scientists to the next — has been broken. Only a few scientific artefacts remain, such as fragments of electronic equipment. It turns out that after the tradition has been broken, the scientific artefacts make no sense and are wildly misinterpreted. For instance a blueprint is regarded as if it was a beautiful illuminated manuscript, and components such as diodes are regarded as magical talismans or pills.

Charlton also cites Alasdair MacIntyre’s After Virtue:

Imagine that the natural sciences were to suffer the effects of a catastrophe. A series of environmental disasters are blamed by the general public on the scientists. Widespread riots occur, laboratories are burnt down, physicists are lynched, books and instruments are destroyed. Finally a know-nothing political movement takes power and successfully abolishes science teaching in schools and universities, imprisoning and executing the remaining scientists.

Later still there is a reaction against this destructive movement and enlightened people seek to revive science, although they have largely forgotten what it was. But all that they possess are fragments: a knowledge of experiments detached from any knowledge of the theoretical context which gave them significance; parts of theories unrelated either to the other bits and pieces of theory which they possess or to experiment; instruments whose use has been forgotten; half-chapters from books, single pages from articles, not always fully legible because torn and charred.

Nonetheless all these fragments are re-embodied in a set of practices which go under the revived names of physics, chemistry and biology. Adults argue with each other about the respective merits of relativity theory, evolutionary theory and phlogiston theory, although they possess only a very partial knowledge of each. Children learn by heart the surviving portions of the periodic table and recite as incantations some of the theorems of Euclid.

Nobody, or almost nobody, realizes that what they are doing is not natural science in any proper sense at all. For everything that they do and say conforms to certain canons of consistency and coherence and those contexts which would be needed to make sense of what they are doing have been lost, perhaps irretrievably.

Charlton, as you might imagine, isn’t concerned about what might happen so much as what he believes has happened in the sciences:

A theme associated with philosophers such as Polanyi and Oakeshott is that explicit knowledge — such as is found in textbooks and scientific articles — is only a selective summary, and that the most important capability derives from implicit, traditional or ‘tacit’ knowledge. It is this un-articulated knowledge that leads to genuine human understanding of the natural world, accurate prediction and the capacity to make effective interventions.

Tacit knowledge is handed on between and across generations by slow, assimilative processes which require extended, relatively unstructured and only semi-purposive human contact. What is being transmitted and inculcated is an over-arching purpose, a style of thought, a learned but then spontaneous framing of reality, a sense of how problems should be tackled, and a gut-feeling for evaluating the work of oneself, as well as others.

This kind of process was in the past achieved by such means as familial vocations, prolonged apprenticeship, co-residence and extended time spent in association with a Master — and by the fact that the Master and apprentice personally selected each other. The pattern was seen in all areas of life where independence, skill and depth of knowledge were expected: crafts, arts, music, scholarship — and science.
[...]
It is important to recognize that the discarding of traditions of apprenticeship and prolonged human contact in science was not due to any new discovery that apprenticeship was — after all — unnecessary, let alone that the new bureaucratic systems of free-standing explicit aims and objectives, summaries and lists of core knowledge and competencies etc. were superior to apprenticeship. Indeed there is nothing to suggest that they are remotely the equal of apprenticeship. Rather, the Master–apprentice system has been discarded despite the evidence of its superiority; and has been replaced by the growth of bureaucratic regulation.

The main reason is probably that scientific manpower, personnel or ‘human resources’ (as they are now termed) have expanded vastly over the past 60 years — probably about tenfold. So there was no possibility of such rapid and sustained quantitative expansion (accompanied, almost-inevitably, by massive decline in average quality) being achieved using the labour-intensive apprenticeship methods of the past. The tradition was discarded because it stood in the path of the expansion of scientific manpower.
[...]
It has now become implicitly accepted among the mass of professional ‘scientists’ that the decisions which matter most in science are those imposed upon science by outside forces: by employers (who gets the jobs, who gets promotion), funders (who gets the big money), publishers (who gets their work in the big journals), bureaucratic regulators (who gets allowed to do work), and the law courts (whose ideas get backed-up, or criminalized, by the courts). It is these bureaucratic mechanisms that constitute ‘real life’ and the ‘bottom line’ for scientific practice. The tradition has been broken.

The scholarly creative method of JRR Tolkien

Wednesday, November 11th, 2009

Bruce Charlton examines the scholarly creative method of JRR Tolkien:

Tolkien’s remarkable creative method has been elucidated by TA Shippey in his Road to Middle Earth; and amply confirmed by the evidence from the History of Middle Earth (HoME) edited by Christopher Tolkien.

In a nutshell, Tolkien treats his ‘first draft’ as if it were an historical text of which he is a scholarly editor. So when Tolkien is revising his first draft his approach is similar to that he would take when preparing (for example) an historically-contextualized edition of Sir Gawain and the Green Knight, or Beowulf.

So, as he reads his own first draft, he is trying to understand what ‘the author’ (himself) ‘meant’; he is aware of the possibility of errors in transcription, or errors which may have occurred during the historical transmission. He is also aware that ‘the author’ was writing from a position of incomplete knowledge, and was subject to bias.

This leads to some remarkable compositional occurrences. For example, in the HoME Return of the Shadow (covering the writing of the first part of Lord of the Rings — LotR) Tolkien wrote about the hobbits hiding from a rider who stopped and sniffed the air. The original intention was that this rider was to be Gandalf and they were hiding to give him a surprise ‘ambush’. In the course of revision the rider became a ‘Black Rider’ and the hobbits were hiding in fear — the Black Riders were later, over many revisions, and as the story progressed, developed into the most powerful servants of Sauron.

This is a remarkable way of writing. Most writers know roughly what they mean in their first draft, and in the process of revising and re-drafting they try to get closer to that known meaning. But Tolkien did the reverse: he generated the first draft, then looked at it as if that draft had been written by someone else, and he was trying to decide what it meant — and in this case eventually deciding that it meant something pretty close to the opposite of the original meaning.

In other words, Tolkien’s original intention counted for very little, but could be — and was — massively reinterpreted by the editorial decision.

The specifics of the incident (rider, sniffing) stayed the same; but the interpretation of the incident was radically altered.

By contrast, most authors maintain the interpretation of incidents throughout revisions, but change the specific details.

Actually, Charlton does not describe Tolkien’s method as simply scholarly, but as shamanistic:

By shamanistic, I mean that I believe much of Tolkien’s primary, first-draft creative, imaginative work was done in a state of altered consciousness — a ‘trance’ state or using ideas from dreams.

This is not unusual among creative people, especially poets. Robert Graves wrote about this a great deal. And neither is it unusual for poets to treat their ‘inspired’ first draft as material for editing. The first draft — if it truly is inspired — is interpreted as coming from elsewhere — from divine sources, from ‘the muse’, or perhaps from the creative unconscious; at any rate, the job of the alert and conscious mind is to ‘make sense’ of this material without destroying the bloom or freshness derived from its primary source.

This is, I believe, why Tolkien did not see himself as inventing, rather as understanding. If key evidence was missing, he could try and interpolate it like a historian by extrapolation from other evidence, or he could await poetic inspiration, which might provide the answer.