Humans can’t comprehend the magnitude of the insult that we pour into the ocean

Thursday, August 9th, 2018

What is it like to be a right whale? Not good:

They take their name from having been the “right” whale to hunt, because of the value of their blubber and baleen, and as such, they’d already been driven to rarity by the time of the American Revolution. Yet they do not die easy. The intentional killing of right whales was banned in 1935, but in March of that year, it took a group of fishermen — apparently not up to speed on international law — six hours, seven hand-thrown harpoons, and 150 rifle rounds to kill a 32-foot calf off Fort Lauderdale, Florida.

If right whales are threatened with extinction, it’s not from a lack of grit. It’s because their home — which spans 2,000 miles of coastline from southern Canada to northern Florida and cannot be described as small or niche — is one of the most human-modified and influenced regions on Earth. With due respect to Kraus, the North Atlantic right whale is not so much the urban whale as the Anthropocene whale.

[...]

One of the first people to start thinking about how we make whales miserable, as opposed to how we kill them, was the marine-acoustics scientist Chris Clark, now retired as a graduate professor at Cornell University. In the 1990s, with Cold War tensions subsiding, Clark was selected as the U.S. Navy’s marine-mammal scientist.

Using the Navy’s underwater listening posts, he was able to tune in to singing fin whales — second only to the blue whale in size — across a patch of sea larger than Oregon. In a data visualization he later created, the singing whales wink on and off: hotspots that arise, spread their sonic glow, and fade. Then enormous flares ripple across the entire space. That’s the acoustic imprint of a seismic air gun, used to probe for oil and gas deposits under the seafloor. “This was an epiphany,” Clark said. He had witnessed the way that human-made sounds could overwhelm, at enormous scales, whales’ ability to hear and be heard in the ocean.

I asked for his opinion about what day-to-day life is like for right whales now, two decades later. “Acoustic hell,” Clark replied. “Humans can’t comprehend the magnitude of the insult that we pour into the ocean.” While no one can say how an animal experiences its world, there are clues that Clark is correct. When the 9/11 attacks took place in 2001, researchers from the New England Aquarium happened to be in the Bay of Fundy, just across the U.S. border into Canada, testing right-whale feces for stress hormones. Over the following days, boat traffic abruptly dropped off. The scientists were struck by how clearly they could hear whale calls through their equipment, as though they’d been standing beside a freeway that fell silent and could suddenly hear birdsong. The whale stress levels measured in those quiet waters were the lowest by far that were recorded across four summers of sampling.

Noise is what biologists refer to as a “sublethal” impact, meaning it doesn’t directly cause death. The list of sublethal impacts has grown long, however. Right whales have the highest prevalence of infection with Giardia and Cryptosporidium ever recorded in any mammal, mainly from sewage and agricultural manure runoff. In humans, these cause the diseases known as beaver fever and crypto, respectively, which involve debilitating digestive complaints. No one knows what problems, if any, they cause in right whales.

The whales are similarly exposed to an alphabet soup of chemicals (DDT, PCBs, PAHs, etc.), oil and gas, flame retardants, pharmaceuticals, pesticides — all the effluvia of civilization. Then there are blooms of red tide and other toxic algae, which can cause paralysis and death in humans, and are increasingly common. One study found paralytic shellfish poisoning in the feces of all 16 right whales it sampled. Again, no one can say what effect these pollutants might be having on right whales.

[...]

For an endangered species, a lack of births is a kind of death, and this year, for the first time since reliable record-keeping began nearly 30 years ago, no calves at all were born in the right-whale population. The animals’ welfare may now be so poor, their suffering so serious, that sublethal impacts have turned lethal.

We call them flying saltshakers of death

Wednesday, August 8th, 2018

Imagine emerging into the sun after 17 long years spent lying underground, Ed Yong suggests, only for your butt to fall off:

That ignominious fate regularly befalls America’s cicadas. These bugs spend their youth underground, feeding on roots. After 13 or 17 years of this, they synchronously erupt from the soil in plagues of biblical proportions for a few weeks of song and sex. But on their way out, some of them encounter the spores of a fungus called Massospora.

A week after these encounters, the hard panels of the cicadas’ abdomens slough off, revealing a strange white “plug.” That’s the fungus, which has grown throughout the insect, consumed its organs, and converted the rear third of its body into a mass of spores. The de-derriered insects go about their business as if nothing unusual has happened. And as they fly around, the spores rain down from their exposed backsides, landing on other cicadas and saturating the soil. “We call them flying saltshakers of death,” says Matt Kasson, who studies fungi at West Virginia University.

Massospora and its butt-eating powers were first discovered in the 19th century, but Kasson and his colleagues have only just shown that it has another secret: It doses its victims with mind-altering drugs. Perhaps that’s why “the cicadas walk around as if nothing’s wrong even though a third of their body has fallen off,” Kasson says.

[...]

Greg Boyce, a member of Kasson’s team, looked at all the chemicals found in the white fungal plugs of the various cicadas. And to his shock, he found that the banger-wing cicadas were loaded with psilocybin—the potent hallucinogen found in magic mushrooms. “At first, I thought: There’s absolutely no way,” he says. “It seemed impossible.” After all, no one has ever detected psilocybin in anything other than mushrooms, and those fungi have been evolving separately from Massospora for around 900 million years.

The surprises didn’t stop there. “I remember looking over at Greg one night and he had a strange look on his face,” Kasson recalls. “He said, ‘Have you ever heard of cathinone?’” Kasson hadn’t, but a quick search revealed that it’s an amphetamine. It had never been found in a fungus before. Indeed, it was known only from the khat plant that has long been chewed by people from the Middle East and the Horn of Africa. But apparently, cathinone is also produced by Massospora as it infects periodical cicadas.

[...]

Infected cicadas behave strangely. Despite their horrific injuries, males become hyperactive and hypersexual. They frenetically try to mate with anything they can find, including with other males. They’ll even mimic the wing-flicking signals of females to lure males toward them. None of this does them any good—their genitals have either been devoured by the fungus or have fallen off with the rest of their butts. Instead, this behavior only benefits the fungus, allowing its spores to find new hosts.

Kasson suspects that cathinone and psilocybin are responsible for at least some of these behaviors. “If I had a limb amputated, I probably wouldn’t have a lot of pep in my step,” he said. “But these cicadas do. Something is giving them a bit more energy. The amphetamine could explain that.”

Psilocybin’s role is harder to explain. The drug might make humans hallucinate, but no one knows if cicadas would similarly trip. There is, however, a theory that magic mushrooms evolved psilocybin to reduce the appetites of insects that might compete with them for decaying wood. Perhaps by suppressing the appetites of cicadas, Massospora nudges them away from foraging and toward incessant mating.

There are many parasitic fungi that manipulate the behavior of insect hosts, including the famous Ophiocordyceps fungi, which can turn ants into zombies.

This is how a zombie outbreak could (semi-plausibly) happen.

No more than a few dozen excellent examples were ever published

Tuesday, August 7th, 2018

Ira Levin’s Rosemary’s Baby kicked off a wave of horror novels that flourished throughout the 1970s and 80s. James Clavell’s Shogun kicked off a similar, smaller wave of historical adventure novels set in Asia:

I’ve long been a big fan of these books, which, for lack of a better term, I refer to collectively as ‘The Children of Shogun.’

Alas, Shogun didn’t produce nearly as many bastard offspring as Rosemary’s Baby did. It was fairly easy for any professional writer with imagination and a passion for horror stories to turn out a handful of 300-page supernatural thrillers over the course of a couple of decades. Producing a 900-page Shogun-like epic is another matter entirely. The Children of Shogun were written mostly by men and women with years of personal experience in Asia. They tended to be journalists or academics with a profound interest in the history and culture of the East. If you were a horror fan in the 1970s and ’80s (and I was), it was easy to find titles to feed your hunger for demonic children, seductive witches, and haunted houses. If you craved massive historical epics featuring singsong girls, opium pipes, rickshaws, treaty ports, forbidden cities, warlords, seppuku, pillow dictionaries, foot binding, and godowns filled with tea or silk or jade, feeding your hunger took a bit more initiative.

Nevertheless, quite a few such books got published between the mid-1970s and mid-1990s, and I’ve read dozens of them. The phenomenon seems to have faded over the past 20 years, giving way to other literary booms: vampire novels, fantasy epics, young-adult dystopian series. It is unlikely the boom will ever be revived. In a critical review of Pearl S. Buck’s The Good Earth posted on Goodreads, author Celeste Ng probably spoke for many of today’s progressive readers when she complained about “the weirdness that arises from a Westerner writing about a colonized country.” Apparently it’s all right when an Asian author like Haruki Murakami (whose work I love) writes novels inspired by the likes of Raymond Carver, Raymond Chandler, and Franz Kafka. But Westerners who write about the adventures of English-speaking protagonists in Asia are likely to be shouted down with accusations of cultural appropriation.

[...]

One thing that stands out about these authors is that many of them led lives nearly as adventurous as their protagonists. Anthony Grey spent 27 months in a Chinese prison. During his long career in journalism, Noel Barber was stabbed five times and shot in the head once. James Clavell was a prisoner of war during WWII. Robert Elegant covered both the Korean and Vietnam wars as a journalist, and Richard Nixon once called him “my favorite China expert.” If books in this genre seem somewhat more convincing than horror novels of the same era, perhaps it’s because no horror novelists of the era were ever actually possessed by Satan, bitten by vampires, or capable of starting fires with their minds. But, while horror novels are still being churned out in large numbers, almost no one is writing Shogun-like sagas any longer. Soon the genre may cease to exist entirely. If you don’t believe me, consider the decline of the American Indian novel written by white authors.

During much of the twentieth century, white American authors produced some excellent novels featuring Native American characters. The list includes masterpieces such as Oliver La Farge’s Pulitzer Prize-winning Laughing Boy (1929) and Scott O’Dell’s Newbery Medal-winning Island of the Blue Dolphins (1960). Other prominent titles in the genre include Thomas Berger’s 1964 novel Little Big Man (subsequently adapted into a film directed by Arthur Penn and starring Dustin Hoffman and Faye Dunaway), Margaret Craven’s I Heard the Owl Call My Name (1967), and Douglas C. Jones’s A Creek Called Wounded Knee (1978).

But the production of such novels has dwindled markedly over the last 40 years or so. This probably has something to do with what happened to Ruth Beebe Hill after the publication of her 1978 novel Hanta Yo. The early reviews of the book were positive. A reviewer for the Harvard Crimson called Hanta Yo “the best researched novel yet written about an American Indian tribe.” N. Scott Momaday, the Native American author of House Made of Dawn, admired the book. David Wolper, the producer of the landmark TV miniseries Roots, purchased the film rights to Hanta Yo and planned to give it the same treatment as Roots. Alas, before Wolper could put his plan into action, the book began drawing criticism from Native American groups contending that it was an inaccurate portrayal of the Sioux.

[...]

If you haven’t yet experienced the joys of exploring ‘The Children of Shogun,’ a great literary pleasure still awaits you. But read slowly and linger over each book. No more than a few dozen excellent examples were ever published. And no new titles are likely to appear in the foreseeable future, if Celeste Ng and her ilk have their way.

Why does tech have so many political problems?

Monday, August 6th, 2018

Why does tech have so many political problems? Tyler Cowen suggests some reasons:

  • Most tech leaders aren’t especially personable. Instead, they’re quirky introverts. Or worse.
  • Most tech leaders don’t care much about the usual policy issues. They care about AI, self-driving cars, and space travel, none of which translate into positive political influence.
  • Tech leaders are idealistic and don’t intuitively understand the grubby workings of Washington, D.C.
  • People who could be “managers” in tech policy areas (for instance, they understand tech, are good at coalition building, etc.) will probably be pulled into a more lucrative area of tech. Therefore there is an acute talent shortage in tech policy areas.
  • The Robespierrean social justice terror blowing through Silicon Valley occupies most of tech leaders’ “political” mental energy. It is hard to find time to focus on more concrete policy issues.
  • By nature, tech leaders are disagreeable iconoclasts (with individualistic and, believe it or not, sometimes megalomaniacal tendencies). That makes them bad at uniting as a coalition.
  • The industry is so successful that it’s not very popular among the rest of U.S. companies and it lacks allies. (90%+ of S&P 500 market cap appreciation this year has been driven by tech.) Many other parts of corporate America see tech as a major threat.

They can hold two or more conflicting concepts in their mind

Monday, August 6th, 2018

Closed-minded people would never consider that they could actually be closed-minded. In his book Principles, Ray Dalio lays out ways you can tell the difference between the open- and closed-minded:

  • Closed-minded people don’t want their ideas challenged. They are typically frustrated that they can’t get the other person to agree with them instead of curious as to why the other person disagrees.
  • Closed-minded people are more likely to make statements than ask questions.
  • Open-minded people genuinely believe they could be wrong; the questions that they ask are genuine.
  • Closed-minded people focus much more on being understood than on understanding others.
  • Open-minded people feel compelled to see things through others’ eyes.
  • Closed-minded people say things like “I could be wrong … but here’s my opinion.” This is a classic cue I hear all the time. It’s often a perfunctory gesture that allows people to hold their own opinion while convincing themselves that they are being open-minded. If your statement starts with “I could be wrong …,” you should probably follow it with a question and not an assertion.
  • Open-minded people know when to make statements and when to ask questions.
  • Closed-minded people block others from speaking.
  • Open-minded people are always more interested in listening than in speaking.
  • Closed-minded people have trouble holding two thoughts simultaneously in their minds.
  • Open-minded people can take in the thoughts of others without losing their ability to think well — they can hold two or more conflicting concepts in their mind and go back and forth between them to assess their relative merits.
  • Closed-minded people lack a deep sense of humility.
  • Open-minded people approach everything with a deep-seated fear that they may be wrong.

Best Buy is like an arms dealer

Sunday, August 5th, 2018

Best Buy should be dead, but it’s thriving in the Age of Amazon:

There is, of course, one thing Best Buy has that Amazon doesn’t: more than 1,000 big-box stores. Joly saw the benefit of using them as showrooms — a word so fraught in retail that the company calls them showcases — for the big tech brands, Amazon included. Best Buy was among the first chains to feature Apple boutiques. In April 2013, Joly said there would be Samsung mini-shops in its 1,400 U.S. locations by June. That same month, Best Buy began adding 600 Microsoft stores-within-stores. Sony arrived in 2014. Last year, Best Buy turned over more space to Amazon and Google to better display their smart home technologies. The two are bitter rivals: Amazon doesn’t sell Google Home and offers a limited selection of Google’s Nest products. Best Buy is neutral ground.

The brands essentially pay rent to Best Buy (it’s cheaper than building stores) and either send in their own salespeople or train the blue shirts. No one at Best Buy would offer details about these partnerships. But even analyst Michael Pachter of Wedbush Securities Inc., who in almost 10 years has never recommended buying Best Buy’s stock, describes the partnerships as a phenomenal success because they ease the financial burden of operating stores while enhancing profit margins. “Best Buy is like an arms dealer,” he says. “They’re indifferent to what brand you buy as long as you buy it from them.”

[...]

In August 2013, the company recruited Rob Bass from Target Corp. to make it more efficient and to save a couple hundred million dollars to help cover the costs of Joly’s price-matching strategy. Bass discovered quickly why customers were frustrated: Best Buy’s distribution centers typically weren’t open on weekends or holidays, and its warehouse management software was at least two decades old. The software has been updated, the supply operations extended, and two-day free delivery is standard on orders of $35 or more. In April 2016, Best Buy announced it would offer same-day delivery in a few cities for a fee. Right after that, Amazon expanded same-day delivery to some Prime customers for free. Best Buy then lowered its price, which had been as high as $20, to $5.99. This past holiday season, Best Buy expanded its same-day service to 40 cities.

Bass also turned back to the stores. He started a system that allowed them to fulfill orders via delivery and pickup. Best Buy says 70 percent of Americans live within 15 miles of one of its locations, so it’s been encouraging customers to come collect their orders. Forty percent of the time they do, which “helps my budget a lot,” Bass says. To make those pickups faster, the company is testing an “On My Way” function on its app to ensure customers don’t arrive before their TVs are retrieved from the back of the store. Since 2012, online sales have more than doubled as a share of the company’s U.S. revenue, from 7 percent to 16 percent, well above the share at other big-box retailers.

As individual pieces of technology become simpler to use, connecting them gets more complicated and more important to their utility. To Joly, this was a missed opportunity. “The vision I had from the beginning is for us to be to the consumer what a company like Accenture is for a business,” he says.

To one longtime employee, this was an enticing idea: an elite group of salespeople who could offer more than the Geek Squad did. Corie Barry had tried to start an advisory program in 2010 when she was a senior director without a budget. Now she’s chief financial officer.

How do placebos work?

Saturday, August 4th, 2018

Using lab tools to activate the brain’s reward circuit in mice empowered the immune system to fight tumors:

Clues emerged in brain imaging experiments published a decade ago. Those analyses revealed that the same reward circuit activated by food, sex and social interactions (as well as gambling and addictive drugs) is also turned on in people who respond to placebos. Puzzling over those data, researchers in Israel turned the mind-body question into an easier-to-measure physiological one: Would activation of the reward circuit have any effect on the immune system?

It seemed fair to assume positive thoughts and emotions would alter the activity of neurons in the brain. “And neuronal activity is something we can manipulate,” says biologist Asya Rolls of Technion-Israel Institute of Technology, who was co-senior author of the current study.

In previous work her team stimulated the brains of mice with a relatively new technology called DREADD (designer receptors exclusively activated by designer drugs) that puts a molecular on-off switch on particular cells—in this case, neurons of the reward circuit. After activating a mouse’s reward system, the researchers analyzed immune cells in its spleen. The clearest effects showed up in monocytes—a group of white blood cells that chew up pathogens as part of the body’s non-specific immune defenses. Specifically, the team found that monocytes from brain-activated mice killed bacteria much more effectively than monocytes from untreated animals.

Seeing that the brain’s reward circuit could boost immune activity against pathogens, “our next thought was, what is a situation where the immune system fails?” says Tamar Ben-Shaanan, a biologist now at the University of California, San Francisco who published the bacterial experiments in 2016. Ben-Shaanan and Technion MD-PhD student Maya Schiller are co-lead authors on the new study.

The idea of looking into cancer came from study co-senior author Fahed Hakim, who directs the EMMS Nazareth Hospital and works as a pediatric pulmonologist and sleep specialist at Rambam Health Care Campus in Haifa, Israel. During a research stint at the University of Chicago, Hakim studied mouse models of cancer and published a 2014 study showing that fragmented sleep made the animals’ tumors grow faster. If bad sleep could trigger tumor-promoting changes in the brain, Hakim says, it seemed reasonable to think that activating the reward pathway might produce the opposite effect: brain changes that slow cancer.

And that’s what the researchers found. In mice with implanted cancer cells, two weeks of daily reward-circuit stimulation produced a powerful response: their tumors were 40 to 50 percent smaller than those in control mice that didn’t get the brain activation. Further experiments traced this effect to a specific group of immune cells made in the bone marrow called myeloid-derived suppressor cells (MDSCs). If left unhindered, MDSCs promote tumor growth by shutting down other immune cells that keep tumors in check. Activating the brain’s reward system, however, unleashes chemical signals that disable these pro-tumor MDSCs and loosen their hold on the rest of the immune system. That, in turn, allows typical anti-tumor immune responses to proceed.

What made Darth Vader such a visually iconic character?

Friday, August 3rd, 2018

Darth Vader was on screen for only 8 minutes in the original Star Wars movie and for 34 minutes in the whole original trilogy. What made Darth Vader such a visually iconic character?

The most expensive new public school in San Francisco history failed

Friday, August 3rd, 2018

Daniel Duane’s daughter was assigned to the most expensive new public school in San Francisco history, Willie Brown Middle School:

It cost $54 million to build and equip, and opened less than two years earlier. It was located less than a mile from my house, in the city’s Bayview district, where a lot of the city’s public housing sits and 20 percent of residents live below the federal poverty level. This new school was to be focused on science, technology, engineering, and math — STEM, for short. There were laboratories for robotics and digital media, Apple TVs for every classroom, and Google Chromebooks for students. A “cafetorium” offered sweeping views of the San Francisco Bay, flatscreen menu displays, and free breakfast and lunch. An on-campus wellness center was to provide free dentistry, optometry, and medical care to all students. Publicity materials promised that “every student will begin the sixth grade enrolled in a STEM lab that will teach him or her coding, robotics, graphic/website design, and foundations of mechanical engineering.” The district had created a rigorous new curriculum around what it called “design thinking” and a “one-to-one tech model,” with 80-minute class periods that would allow for immersion in complex subjects.

The money for Brown came from a voter-approved bond, as well as local philanthropists. District fund-raising materials proudly announced that, through their foundation, Twitter cofounder Evan Williams and his wife, Sara, had given a total of $400,000 for “STEM-focus” and “health and wellness.” (The foundation says that figure is incorrect.) Salesforce.org, the philanthropic arm of Marc Benioff’s company Salesforce, has given nearly $35 million to Bay Area public schools in the past five years alone; each year the organization also gives $100,000 to every middle school principal in San Francisco and Oakland. The Summit Public Schools network, an organization that runs charter schools in California and Washington state and has a board of directors filled with current and former tech heavy hitters (including Meg Whitman), made a $500,000 in-kind donation of its personalized learning platform, according to those fund-raising materials. That online tool, built to help students learn at their own pace and track their progress, was created in partnership with Priscilla Chan and Mark Zuckerberg’s funding organization.

As the school’s first principal, the district hired a charismatic man named Demetrius Hobson who was educated at Morehouse and Harvard and had been a principal in Chicago’s public schools. Students from four of the Bayview’s elementary schools, where more than 75 percent of kids are socio­economically disadvantaged, were given preference to enter Willie Brown Middle. To ensure that the place would also be diverse, the district lured families from other parts of town with a “golden ticket” that would make it easier for graduates from Brown to attend their first choice of public high school.

The message worked. Parents from all over the city — as well as parents from the Bayview who would otherwise have sent their kids to school elsewhere — put their kids’ names in for spots at the new school. Shawn Whalen, who was then the chief of staff at San Francisco State University, and Xander Shapiro, the chief marketing officer for a startup, had children in public elementary schools that fed into well-regarded middle schools. But, liking what they heard, both listed Brown as a top choice in the lottery. Kandace Landake — a Bayview resident and Uber driver who wanted her children to have a better education than she’d received, and whose children were in good public schools outside the neighborhood — likewise took a chance on Brown. One third-generation Bayview resident, whom I’ll call Lisa Green, works at a large biotech company and had been sending her daughter to private school. But she too was so enticed that she marked Brown as her first choice in the lottery, and her daughter got in.

On opening day in August of 2015, around two dozen staff members greeted the very first class. That’s when the story took an alarming turn. Newspapers reported chaos on campus. Landake was later quoted in the San Francisco Examiner: “The first day of school there were, like, multiple incidents of physical violence.”

Inconceivable!

Microfilm has a future?

Thursday, August 2nd, 2018

Microfilm is profoundly unfashionable in our modern information age, but it has quite a history — and may still have a future:

The first micrographic experiments, in 1839, reduced a daguerreotype image by a factor of 160. By 1853, the format was already being assessed for newspaper archives. The processes continued to be refined during the 19th century. Even so, microfilm was still considered a novelty when it was displayed at the 1876 Centennial Exposition in Philadelphia.

The contemporary microfilm reader has multiple origins. Bradley A. Fiske filed a patent on March 28, 1922, for a “reading machine,” a pocket-sized handheld device that could be held up to one eye to magnify columns of tiny print on a spooling paper tape. But the apparatus that gained traction was G. L. McCarthy’s 35mm scanning camera, which Eastman Kodak introduced as the Recordak in 1935, specifically to preserve newspapers. By 1938, universities began using it to microfilm dissertations and other research papers. During World War II, microphotography became a tool for espionage and for carrying military mail, and soon there was a recognition that massive archives of information and cross-referencing gave agencies an advantage. Libraries adopted microfilm by 1940, after realizing that they could not physically house an increasing volume of publications, including newspapers, periodicals, and government documents. As the war concluded in Europe, a coordinated effort by the U.S. Library of Congress and the U.S. State Department also put many international newspapers on microfilm as a way to better understand quickly changing geopolitical situations. Collecting and cataloging massive amounts of information, in microscopic form, from all over the world in one centralized location led to the idea of a centralized intelligence agency in 1947.

It wasn’t just spooks and archivists, either. Excited by the changing future of reading, in 1931, Gertrude Stein, William Carlos Williams, F. T. Marinetti, and 40 other avant-garde writers ran an experiment for Bob Brown’s microfilm-like reading machine. The specially processed texts, called “readies,” produced something between an art stunt and a pragmatic solution to libraries needing more shelf space and better delivery systems. Over the past decade, I have redesigned the readies for 21st-century reading devices such as smartphones, tablets, and computers.

By 1943, 400,000 pages had been transferred to microfilm by the U.S. National Archives alone, and the originals were destroyed. Millions more were reproduced and destroyed worldwide in an effort to protect the content from the ravages of war. In the 1960s, the U.S. government offered microfilm documents, especially newspapers and periodicals, for sale to libraries and researchers; by the end of the decade, copies of nearly 100,000 rolls (with about 700 pages on each roll) were available.

Their longevity was another matter. As early as May 17, 1964, as reported in The New York Times, microfilm appeared to degrade, with “microfilm rashes” consisting of “small spots tinged with red, orange or yellow” appearing on the surface. An anonymous executive in the microfilm market was quoted as saying they had “found no trace of measles in our film but saw it in the film of others and they reported the same thing about us.” The acetate in the film stock was decaying after decades of use and improper storage, and the decay also created a vinegar smell—librarians and researchers sometimes joked about salad being made in the periodical rooms. The problem was solved by the early 1990s, when Kodak introduced polyester-based microfilm, which promised to resist decay for at least 500 years.

Microfilm got a competitor when National Cash Register (NCR), a company known for introducing magnetic-strip and electronic data-storage devices in the late 1950s and early ’60s, marketed Carl O. Carlson’s microfiche reader in 1961. This storage system placed more than 100 pages on one four-by-six-inch sheet of film in a grid pattern. Because microfiche was introduced much later than microfilm, it played a reduced role in newspaper preservation and government archives; it was more widely used in emerging computer data-storage systems. Eventually, electronic archives replaced microfiche almost entirely, while its cousin microfilm remained separate.

Microfilm’s decline intensified with the development of optical-character-recognition (OCR) technology, which was initially used to search microfilm itself: in the 1930s, Emanuel Goldberg designed a system that could read characters on film and translate them into telegraph code. At MIT, a team led by Vannevar Bush designed a microfilm rapid selector capable of quickly finding information on reels of film. Ray Kurzweil further improved OCR, and by the end of the 1970s, he had created a computer program, later bought by Xerox, that was adopted by LexisNexis, which sells software for electronically storing and searching legal documents.

[...]

Today’s digital searches allow a reader to jump directly to a desired page and story, eliminating one downside of microfilm. But there’s a trade-off: Digital documents usually omit the context. The surrounding pages in the morning paper or the rest of the issue of a magazine or journal vanish when a single, specific article can be retrieved directly. That context includes more than a happenstance encounter with an abutting news story. It also includes advertisements, the position and size of one story in relation to others, and even the overall design of the page at the time of its publication. A digital search might retrieve what you are looking for (it also might not!), but it can obscure the historical context of that material.

[xkcd: “Digital Resource Lifespan”]

The devices are still in widespread use, and their mechanical simplicity could help them last longer than any of the current electronic technologies. As the webcomic xkcd once observed, microfilm has better lasting power than websites, which often vanish, or CD-ROMs, for which most computers don’t have readers anymore.

The xkcd comic gets a laugh because it seems absurd to suggest microfilm as the most reliable way to store archives, even though it will remain reliable for 500 years. Its lasting power keeps it a mainstay in research libraries and archives. But as recent cutting-edge technologies approach ever more rapid obsolescence, past (and passed-over) technologies such as the microfilm machine won’t go away. They’ll remain, steadily doing the same work they have done for the past century for at least five centuries more — provided the libraries they are stored in stay open, and the humans who would read and interpret their contents survive.

It’s a party in the sky

Wednesday, August 1st, 2018

Turkish Airlines has a new safety video — starring characters from The LEGO Movie:

An old tuberculosis vaccine can reverse Type 1 diabetes

Wednesday, August 1st, 2018

The 100-year-old BCG vaccine against tuberculosis can reverse Type 1 diabetes, lowering blood sugar to near-normal levels, an eight-year study has shown:

Used for almost a century to prevent tuberculosis, the BCG vaccine helps boost and regulate the immune system. The team also discovered that the jab speeds up the rate at which cells convert glucose into energy, and tests on mice show that it could also be beneficial against Type 2 diabetes.

The new study involved 52 people with Type 1 diabetes. After three years of treatment, average blood sugar levels had dropped by 10 per cent, and after four years by 18 per cent. Treated participants had an average blood glucose score of 6.65, close to the 6.5 considered the threshold for diabetes diagnosis.

In comparison, the blood sugar of those in the placebo group continued to rise over the trial period.