Why does tech have so many political problems?

Monday, August 6th, 2018

Why does tech have so many political problems? Tyler Cowen suggests some reasons:

  • Most tech leaders aren’t especially personable. Instead, they’re quirky introverts. Or worse.
  • Most tech leaders don’t care much about the usual policy issues. They care about AI, self-driving cars, and space travel, none of which translate into positive political influence.
  • Tech leaders are idealistic and don’t intuitively understand the grubby workings of Washington, D.C.
  • People who could be “managers” in tech policy areas (for instance, they understand tech, are good at coalition building, etc.) will probably be pulled into a more lucrative area of tech. Therefore there is an acute talent shortage in tech policy areas.
  • The Robespierrean social justice terror blowing through Silicon Valley occupies most of tech leaders’ “political” mental energy. It is hard to find time to focus on more concrete policy issues.
  • By nature, tech leaders are disagreeable iconoclasts (with individualistic and, believe it or not, sometimes megalomaniacal tendencies). That makes them bad at uniting as a coalition.
  • The industry is so successful that it’s not very popular among the rest of U.S. companies and it lacks allies. (90%+ of S&P 500 market cap appreciation this year has been driven by tech.) Many other parts of corporate America see tech as a major threat.

Microfilm has a future?

Thursday, August 2nd, 2018

Microfilm is profoundly unfashionable in our modern information age, but it has quite a history — and may still have a future:

The first micrographic experiments, in 1839, reduced a daguerreotype image down by a factor of 160. By 1853, the format was already being assessed for newspaper archives. The processes continued to be refined during the 19th century. Even so, microfilm was still considered a novelty when it was displayed at the 1876 Centennial Exposition in Philadelphia.

The contemporary microfilm reader has multiple origins. Bradley A. Fiske filed a patent for a “reading machine” on March 28, 1922, a pocket-sized handheld device that could be held up to one eye to magnify columns of tiny print on a spooling paper tape. But the apparatus that gained traction was G. L. McCarthy’s 35mm scanning camera, which Eastman Kodak introduced as the Recordak in 1935, specifically to preserve newspapers. By 1938, universities began using it to microfilm dissertations and other research papers. During World War II, microphotography became a tool for espionage, and for carrying military mail, and soon there was a recognition that massive archives of information and cross-referencing gave agencies an advantage. Libraries adopted microfilm by 1940, after realizing that they could not physically house an increasing volume of publications, including newspapers, periodicals, and government documents. As the war concluded in Europe, a coordinated effort by the U.S. Library of Congress and the U.S. State Department also put many international newspapers on microfilm as a way to better understand quickly changing geopolitical situations. Collecting and cataloging massive amounts of information, in microscopic form, from all over the world in one centralized location led to the idea of a centralized intelligence agency in 1947.

It wasn’t just spooks and archivists, either. Excited by the changing future of reading, in 1931, Gertrude Stein, William Carlos Williams, F. T. Marinetti, and 40 other avant-garde writers ran an experiment for Bob Brown’s microfilm-like reading machine. The specially processed texts, called “readies,” produced something between an art stunt and a pragmatic solution to libraries needing more shelf space and better delivery systems. Over the past decade, I have redesigned the readies for 21st-century reading devices such as smartphones, tablets, and computers.

By 1943, 400,000 pages had been transferred to microfilm by the U.S. National Archives alone, and the originals were destroyed. Millions more were reproduced and destroyed worldwide in an effort to protect the content from the ravages of war. In the 1960s, the U.S. government offered microfilm documents, especially newspapers and periodicals, for sale to libraries and researchers; by the end of the decade, copies of nearly 100,000 rolls (with about 700 pages on each roll) were available.

Their longevity was another matter. As early as May 17, 1964, as reported in The New York Times, microfilm appeared to degrade, with “microfilm rashes” consisting of “small spots tinged with red, orange or yellow” appearing on the surface. An anonymous executive in the microfilm market was quoted as saying they had “found no trace of measles in our film but saw it in the film of others and they reported the same thing about us.” The acetate in the film stock was decaying after decades of use and improper storage, and the decay also created a vinegar smell—librarians and researchers sometimes joked about salad being made in the periodical rooms. The problem was solved by the early 1990s, when Kodak introduced polyester-based microfilm, which promised to resist decay for at least 500 years.

Microfilm got a competitor when National Cash Register (NCR), a company now known for introducing magnetic-stripe and electronic data-storage devices in the late 1950s and early ’60s, marketed Carl O. Carlson’s microfiche reader in 1961. This storage system placed more than 100 pages on one four-by-six-inch sheet of film in a grid pattern. Because microfiche was introduced much later than microfilm, it played a reduced role in newspaper preservation and government archives; it was more widely used in emerging computer data-storage systems. Eventually, electronic archives replaced microfiche almost entirely, while its cousin microfilm remained separate.

Microfilm’s decline intensified with the development of optical-character-recognition (OCR) technology. In the 1930s, Emanuel Goldberg designed a system that could read characters on film and translate them into telegraph code, an early use of OCR to search microfilm. At MIT, a team led by Vannevar Bush designed a microfilm rapid selector capable of quickly locating information on microfilm. Ray Kurzweil further improved OCR, and by the end of the 1970s, he had created a computer program, later bought by Xerox, that was adopted by LexisNexis, which sells software for electronically storing and searching legal documents.

[...]

Today’s digital searches allow a reader to jump directly to a desired page and story, eliminating one downside of microfilm. But there’s a trade-off: Digital documents usually omit the context. The surrounding pages in the morning paper or the rest of the issue of a magazine or journal vanish when a single, specific article can be retrieved directly. That context includes more than a happenstance encounter with an abutting news story. It also includes advertisements, the position and size of one story in relation to others, and even the overall design of the page at the time of its publication. A digital search might retrieve what you are looking for (it also might not!), but it can obscure the historical context of that material.

xkcd Digital Resource Lifespan

The devices are still in widespread use, and their mechanical simplicity could help them last longer than any of the current electronic technologies. As the webcomic xkcd once observed, microfilm has better lasting power than websites, which often vanish, or CD-ROMs, for which most computers no longer have readers.

The xkcd comic gets a laugh because it seems absurd to suggest microfilm as the most reliable way to store archives, even though it will remain reliable for 500 years. Its lasting power keeps it a mainstay in research libraries and archives. But as recent cutting-edge technologies approach ever more rapid obsolescence, past (and passed-over) technologies such as the microfilm machine won’t go away. They’ll remain, steadily doing the same work they have done for the past century, for at least another five — provided the libraries they are stored in stay open, and the humans who would read and interpret their contents survive.

Google was not a normal place

Monday, July 23rd, 2018

Google was not a normal place, as this disjointed excerpt from Valley of Genius explains:

Charlie Ayers, Google’s first executive chef and, therefore, a member of an early executive team: I remember going in for an interview and Larry bounced on by on one of these big balls that have handles on them, like you buy at Toys “R” Us when you’re a kid. It was just a very unprofessional, uncorporation attitude. I have a pretty good understanding of doing things differently from the Grateful Dead—I’ve worked on and off with them over the years—but from my perspective, looking from the outside, it was an odd interview. I’d never had one like that. I left them thinking that these guys are crazy. They don’t need a chef!

Heather Cairns: I was very surprised that they hired this ex–Grateful Dead chef, since clearly everything that goes with that is coming with Charlie. Talk about a counterculture person!

Charlie Ayers: Larry’s dad was a big Deadhead; he used to run the Grateful Dead Hour talk show on the radio every Sunday night. Larry grew up in the Grateful Dead environment.

Larry Page: We do go out of our way to recruit people who are a little bit different.

Charlie Ayers: There was no under-my-thumb bullshit going on where you all had to dress and look and smell and act alike. Their unwritten tagline is like: You show up in a suit? You’re not getting hired! I remember people that they wanted showing up in suits and them saying, “Go home and change and be yourself and come back tomorrow.”

Heather Cairns: We said it was O.K. to bring pets to work one day a week. And what that did was encourage people to get lizards, cats, dogs—oh my God, everything was coming through the door! I was mortified because I know this much: if you have your puppy at work, you’re not working that much.

Douglas Edwards, Google employee #59: We would go up to Squaw Valley, California, and attendance was pretty much mandatory. That became the company thing.

Ray Sidney: The very first ski trip was in the first part of 1999. That was definitely a popular event over the years.

Charlie Ayers: On the ski trips in Squaw Valley, I would have these unsanctioned parties and finally the company was like, “All right, we’ll give Charlie what he wants.” And I created Charlie’s Den. I had live bands, D.J.s, and we bought truckloads of alcohol and a bunch of pot and made ganja goo balls. I remember people coming up to me and saying, “I’m hallucinating. What the fuck is in those?” . . . Larry and Sergey had like this gaggle of girls who were hot, and all become like their little harem of admins, I call them the L&S Harem, yes. All those girls are now different heads of departments in that company, years later. (A spokesperson for Google declined to comment.)

Heather Cairns: You kind of trusted Larry with his personal life. We always kind of worried that Sergey was going to date somebody in the company . . .

Charlie Ayers: Sergey’s the Google playboy. He was known for getting his fingers caught in the cookie jar with employees that worked for the company in the masseuse room. He got around.

Heather Cairns: And we didn’t have locks, so you can’t help it if you walk in on people if there’s no lock. Remember, we’re a bunch of twentysomethings except for me—ancient at 35, so there’s some hormones and they’re raging.

Charlie Ayers: H.R. told me that Sergey’s response to it was, “Why not? They’re my employees.” But you don’t have employees for fucking! That’s not what the job is.

Heather Cairns: Oh my God: this is a sexual harassment claim waiting to happen! That was my concern.

Charlie Ayers: When Sheryl Sandberg joined the company is when I saw a vast shift in everything in the company. People who came in wearing suits were actually being hired.

Heather Cairns: When Eric Schmidt joined, I thought, Well, now, we have a chance. This guy is serious. This guy is real. This guy is high-profile. And of course he had to be an engineer, too. Otherwise, Larry and Sergey wouldn’t have it.

Selling Ghost Gunners has been a lucrative business

Monday, July 16th, 2018

Crypto-provocateur Cody Wilson recently won his legal battle — the Department of Justice quietly offered him a settlement to end a lawsuit he and a group of co-plaintiffs had pursued since 2015 — and now posting gun designs online is recognized as free speech:

The Department of Justice’s surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber — with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition — and move their regulation to the Commerce Department, which won’t try to police technical data about the guns posted on the public internet.

[...]

Now Wilson is making up for lost time. Later this month, he and the nonprofit he founded, Defense Distributed, are relaunching their website Defcad.com as a repository of firearm blueprints they’ve been privately creating and collecting, from the original one-shot 3-D-printable pistol he fired in 2013 to AR-15 frames and more exotic DIY semi-automatic weapons. The relaunched site will be open to user contributions, too; Wilson hopes it will soon serve as a searchable, user-generated database of practically any firearm imaginable.

[...]

In the meantime, selling Ghost Gunners has been a lucrative business. Defense Distributed has sold roughly 6,000 of the desktop devices to DIY gun enthusiasts across the country, mostly for $1,675 each, netting millions in profit.

[...]

With the rule change their win entails, Defense Distributed has removed a legal threat to not only its project but an entire online community of DIY gunmakers. Sites like GrabCAD and FossCad already host hundreds of gun designs, from Defense Distributed’s Liberator pistol to printable revolvers and even semiautomatic weapons. “There’s a lot of satisfaction in doing things yourself, and it’s also a way of expressing support for the Second Amendment,” explains one prolific FossCad contributor, a West Virginian serial inventor of 3-D-printable semiautomatics who goes by the pseudonym Derwood. “I’m a conservative. I support all the amendments.”

[...]

Inside is a far quieter scene: A large, high-ceilinged, dimly fluorescent-lit warehouse space filled with half a dozen rows of gray metal shelves, mostly covered in a seemingly random collection of books, from The Decline and Fall of the Roman Empire to The Hunger Games. He proudly points out that it includes the entire catalog of Penguin Classics and the entire Criterion Collection, close to 900 Blu-rays. This, he tells me, will be the library.

And why is Defense Distributed building a library? Wilson, who cites Baudrillard, Foucault, or Nietzsche at least once in practically any conversation, certainly doesn’t mind the patina of erudition it lends to what is essentially a modern-day gun-running operation. But as usual, he has an ulterior motive: If he can get this room certified as an actual, official public library, he’ll unlock another giant collection of existing firearm data. The US military maintains the specs for thousands of firearms in technical manuals, stored on reels and reels of microfiche cassettes. But only federally approved libraries can access them. By building a library, complete with an actual microfiche viewer in one corner, Wilson is angling to access the US military’s entire public archive of gun data, which he eventually hopes to digitize and include on Defcad.com, too.

Liquid fluorine is spectacular

Saturday, July 14th, 2018

There was a time when rocket designers felt comfortable proposing propellants that would be considered insane today:

One of these was fluorine, an oxidizer so powerful that it will oxidize oxygen. Liquefied, it is denser than LOX and provides a higher specific impulse than LOX when burned with the same fuels. On paper, liquid fluorine is spectacular. In reality, fluorine is toxic and just about all of its combustion compounds are toxic (burn it with hydrogen and you get hydrofluoric acid, which will eat your bones). Fluorine has the added bonus that it will merrily combust with a whole lot of structural materials, so you have to be careful in your design and preparation for tanks, pumps, lines, etc.
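For reference, the combustion the passage warns about is the textbook hydrogen-fluorine reaction (standard chemistry, not a detail taken from the handbook):

    $\mathrm{H_2 + F_2 \rightarrow 2\,HF}$

Hydrogen fluoride dissolved in water is hydrofluoric acid, which is what makes a hydrogen/fluorine engine’s exhaust so vicious.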

Consequently, it was important to know your stuff. To that end, Douglas Missile & Space Systems Division produced a Fluorine Systems Handbook.

The best design uses gears from the middle of the list

Wednesday, July 11th, 2018

I was recently reminded of Feynman’s anecdote about an early wartime engineering job he had, and I finally got around to pulling my copy of Surely You’re Joking off the shelf to transcribe it:

Near the end of the summer I was given my first real design job: a machine that would make a continuous curve out of a set of points — one point coming in every fifteen seconds — from a new invention developed in England for tracking airplanes, called “radar.” It was the first time I had ever done any mechanical designing, so I was a little bit frightened.

I went over to one of the other guys and said, “You’re a mechanical engineer; I don’t know how to do any mechanical engineering, and I just got this job…”

“There’s nothin’ to it,” he said. “Look, I’ll show you. There’s two rules you need to know to design these machines. First, the friction in every bearing is so-and-so much, and in every gear junction, so-and-so much. From that, you can figure out how much force you need to drive the thing. Second, when you have a gear ratio, say 2 to 1, and you are wondering whether you should make it 10 to 5 or 24 to 12 or 48 to 24, here’s how to decide: You look at the Boston Gear Catalogue, and select those gears that are in the middle of the list. The ones at the high end have so many teeth they’re hard to make. If they could make gears with even finer teeth, they’d have made the list go even higher. The gears at the low end of the list have so few teeth they break easy. So the best design uses gears from the middle of the list.”

I had a lot of fun designing that machine. By simply selecting the gears from the middle of the list and adding up the little torques with the two numbers he gave me, I could be a mechanical engineer!
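Feynman’s two rules translate almost directly into code. Here is a minimal sketch in Python; the friction constants, the catalogue of tooth counts, and all the names are invented for illustration, not taken from Feynman or the Boston Gear Catalogue:

    # Rule 1: drive torque = load torque plus "so-and-so much" per bearing
    # and per gear mesh. The friction values here are assumed.
    BEARING_FRICTION = 0.02   # N·m lost per bearing (assumed)
    MESH_FRICTION = 0.05      # N·m lost per gear junction (assumed)

    def drive_torque(load_torque, n_bearings, n_meshes):
        return load_torque + n_bearings * BEARING_FRICTION + n_meshes * MESH_FRICTION

    # Rule 2: of all tooth-count pairs giving the desired ratio, take the
    # one in the middle of the list: not too fine, not too fragile.
    def pick_gear_pair(ratio, catalogue):
        pairs = sorted((big, small) for big in catalogue for small in catalogue
                       if big == ratio * small)
        return pairs[len(pairs) // 2]

    catalogue = [10, 12, 16, 20, 24, 32, 40, 48, 60, 80, 96]  # hypothetical
    print(pick_gear_pair(2, catalogue))                 # (40, 20), mid-list
    print(drive_torque(1.0, n_bearings=4, n_meshes=2))  # 1.18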

The Fourth Industrial Revolution will transform the character of war

Wednesday, June 27th, 2018

The U.S. military has extensive combat experience — in small wars — but it may not know what to expect from war in the Fourth Industrial Revolution:

Schwab’s book has generated some fascinating discussions about how the Fourth Industrial Revolution will affect governance, business, and society. But surprisingly little of this discussion seems to have penetrated the U.S. military and influenced its thinking about future wars. What will it mean to fight wars in a world characterized by the Fourth Industrial Revolution — and what will it take to win?

Just as it will disrupt and reshape society, the Fourth Industrial Revolution will transform the character of war. The fundamental nature of war may remain constant, as Clausewitz argued so many years ago, but the ways in which wars are fought constantly shift as societies evolve. The synergies among the elements of the Fourth Industrial Revolution are already transfiguring the battlefields of the 21st century, in several different ways:

Space and cyber. These two relatively new domains emerged from the third industrial revolution, but have never been fully contested during wartime. There are no lessons learned documents, no historic battles to study, no precedent for how warfare in these domains might play out — and no way to know how cripplingly destructive it could be to modern society. And any battles in those domains will also hinder — and could even debilitate — the U.S. military’s ability to fight in the more traditional domains of land, sea, and air, since vital communications and other support systems today depend almost entirely on space satellites and computer networks.

Artificial intelligence, big data, machine learning, autonomy, and robotics. Some of the most prominent leaders in these fields are publicly warning about the dangers in an unconstrained environment. Military operations enabled by these technologies, and especially by artificial intelligence, may unfold so quickly that effective responses require taking humans out of the decision cycle. Letting intelligent machines make traditionally human decisions about killing other humans is fraught with moral peril, but may become necessary to survive on the future battlefield, let alone to win. Adversaries will race to employ these capabilities and the powerful operational advantages they may confer.

The return of mass and the defensive advantage. T.X. Hammes convincingly argues that the U.S. military has traded mass for precision in recent decades, enabling smaller forces using guided weapons to fight successfully. But the technologies of the Fourth Industrial Revolution will enable a wide range of actors to acquire masses of inexpensive capabilities that they never could before, especially through advances in additive manufacturing (also known as 3D printing). That means the U.S. military must move away from today’s small numbers of exorbitantly expensive “exquisite” weapons systems toward smaller, smarter, and cheaper weapons — especially masses of autonomous drones with swarming destructive power. Hammes also argues that such swarms “may make defense the dominant form of warfare,” because they will make “domain denial much easier than domain usage.”

A new generation of high tech weapons. The United States and some of its potential adversaries are incorporating the technologies of the Fourth Industrial Revolution into a range of innovative new weapons systems, including railguns, directed energy weapons, hyper-velocity projectiles, and hypersonic missiles. These new weapons will dramatically increase the speed, range, and destructive power of conventional weapons beyond anything previously imaginable. However, the U.S. military remains heavily over-invested in legacy systems built upon late 20th century technologies, which compete against these newest technologies for scarce defense dollars. Here, rising powers such as China have a distinct new mover advantage. They can incorporate the very newest technologies without the huge financial burdens of supporting older systems and the military-industrial constituencies that promote them (and, for authoritarian states, without adhering to democratic norms of transparency and civilian oversight). This challenge is severely exacerbated by the broken U.S. acquisition system, in which the development timelines for new weapons systems extend across decades.

The unknown x-factor. Secret technologies developed by friend and foe alike will likely appear for the first time during the next major war, and it is impossible to predict how they will change battlefield dynamics. They could render current weapons inoperable or obsolete, or offer a surprise war-winning capability to one side. And it is entirely possible that technologies secretly guarded by one side or the other for surprise use on the first day of the next war may have already been compromised. The usual fog of war will become even denser, presenting all sorts of unanticipated, unfamiliar challenges to U.S. forces.

The emerging characteristics of the Fourth Industrial Revolution suggest we are on the precipice of profound changes to the character of war. While the next major conflict will unquestionably exhibit all of war’s enduring human qualities, its battles, weapons, and tactics may well be entirely unprecedented. Military officers today may be marching, largely unaware, to the end of a long and comfortably familiar era of how to fight a major war.

The study of warfare has always heavily relied upon scrutinizing past battles to discern the lessons of those as yet unfought. But in today’s world, that important historical lens should be augmented by one that focuses on the future. Fictional writings about future war can help military thinkers break free of the mental constraints imposed by linear thinking and identify unexpected dynamics, threats, and challenges of the future battlefield. Stories such as Ghost Fleet, Automated Valor, Kill Decision, and many others all can help creative military leaders imagine the unimaginable, and visualize how the battles of the next war may play out in ways the lens of the past fails to illuminate. This will help ensure the first war of the Fourth Industrial Revolution does not result from a failure of imagination, as the 9/11 attacks have been so memorably described.

Re-creating the first flip-flop

Friday, June 15th, 2018

The flip-flop was created 100 years ago — in the pre-digital age:

Many engineers are familiar with the names of Lee de Forest, who invented the amplifying vacuum tube, or John Bardeen, Walter Brattain, and William Shockley, who invented the transistor. Yet few know the names of William Eccles and F.W. Jordan, who applied for a patent for the flip-flop 100 years ago, in June 1918. The flip-flop is a crucial building block of digital circuits: It acts as an electronic toggle switch that can be set to stay on or off even after an initial electrical control signal has ceased. This allows circuits to remember and synchronize their states, and thus allows them to perform sequential logic.

The flip-flop was created in the predigital age as a trigger relay for radio designs. Its existence was popularized by an article in the December 1919 issue of The Radio Review [PDF], and two decades later, the flip-flop would find its way into the Colossus computer [PDF], used in England to break German wartime ciphers, and into the ENIAC in the United States.

Modern flip-flops are built in countless numbers out of transistors in integrated circuits, but, as the centenary of the flip-flop approached, I decided to replicate Eccles and Jordan’s original circuit as closely as possible.

This circuit is built around two vacuum tubes, so I started there. Originally, Eccles and Jordan most likely used Audion tubes or British-made knock-offs. The Audion was invented by de Forest, and it was the first vacuum tube to demonstrate amplification, allowing a weak signal applied to a grid to control a much larger electrical current flowing from a filament to a plate. But these early tubes were handmade and unreliable, and it would be impractical to obtain a usable pair today.

Instead I turned to the UX201A, an improved variant of the UV201 tube that General Electric started producing in 1920. While still close in time to the original patent, the UV201 marked the beginning of vacuum-tube mass production, and a consequent leap in reliability and availability. I was able to purchase two 01A tubes for about US $35 apiece.

Flip-Flop Circuit Diagram

In a flip-flop, the tubes are cross-coupled in a careful balancing act, using pairs of resistors to control voltages. This balancing act means that turning off one tube, even momentarily, turns the second tube on and keeps the first tube off. This state of affairs continues until the second tube is turned off with a control signal, which pushes the first tube on and keeps the second tube off.
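The cross-coupled feedback is easy to see in software. Below is a toy Python model with two NOR gates standing in for the two tubes; it captures the set/hold behavior only, not Eccles and Jordan’s actual triode-and-resistor circuit:

    # Each inverting stage stands in for one tube: a conducting tube
    # holds the other tube's grid low, keeping it switched off.
    def nor(a, b):
        return int(not (a or b))

    def settle(set_pulse, reset_pulse, q, q_bar):
        # Iterate the feedback loop a few times until it stabilizes.
        for _ in range(4):
            q, q_bar = nor(reset_pulse, q_bar), nor(set_pulse, q)
        return q, q_bar

    q, q_bar = 0, 1                     # start in the "off" state
    q, q_bar = settle(1, 0, q, q_bar)   # momentary SET pulse flips it on...
    q, q_bar = settle(0, 0, q, q_bar)   # ...and the state holds afterward
    print(q, q_bar)                     # 1 0: the circuit remembers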

Achieving the right balance means getting the values of the resistors just right. In their laboratory, Eccles and Jordan would have used resistor decade boxes, bulky pieces of equipment that would have let them dial in resistances at different points in their circuit. For reasons of space, I decided to use fixed resistors of a similar vintage as the patent.

I was able to obtain a set of such resistors from the collection of antique radios that I’ve accumulated over the years. In the 1920s, radio manufacturing exploded, and the result is that I have quite a few early radios that are pretty nondescript and beyond repair, so I didn’t feel too bad about cannibalizing them for parts. Resistors made before 1925 were generally placed into sockets, rather than soldered into a circuit board, so extracting them wasn’t hard.

The hard part was that these resistors are very imprecise. They were handmade with a resistive carbon element held between clips in a glass enclosure. One way to get their resistance closer to the desired value is to open up the enclosure, remove the strip of carbon, make notches in it to increase its resistance, and put it back in. I adjusted several of the resistors this way, but it was too tricky to do with others, so for those I cheated a little and placed modern resistors inside the vintage glass casing.

Flip-Flop Replica

I used modern battery supplies, in order to avoid the use of the numerous wet cells that the inventors probably used. One of the issues with tube-based circuits is that a range of voltages is required. Four D cells wired in series provide the 6 volts needed for the indicator lamps and the filaments of the tubes. Connecting eleven 9-V batteries in series provided the 99 V required for the tubes’ plates. A similarly constructed 63-V power supply is needed to negatively bias the tubes’ grids. Old-fashioned brass doorbell buttons let me tap a 9-V battery connection to provide the control pulses. To show the flip-flop’s state, I used sensitive antique telegraph relays that operate miniature incandescent lamps.
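The supply stacks are simple series arithmetic. This quick check uses nominal cell voltages, and the seven-battery count for the bias supply is my inference from the 63-V figure (the text gives only the total):

    d_cell, nine_volt = 1.5, 9.0   # nominal cell voltages
    print(4 * d_cell)              # 6.0 V: indicator lamps and filaments
    print(11 * nine_volt)          # 99.0 V: plate supply
    print(7 * nine_volt)           # 63.0 V: grid bias (count inferred)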

With a lot of trial and error and tweaking of my nearly century-old components, over the course of a year I was finally able to achieve stable operation of this venerable circuit!

It is a one-way conduit to bring another society into their living rooms

Wednesday, June 13th, 2018

The Amish have negotiated a pact with modernity:

It’s interesting that the Amish have different districts, and each district has different rules about what’s allowed and what’s not allowed. Yet it’s very clear there are two technologies such that, as soon as a community accepts them, it is no longer Amish. Those technologies are the television and the automobile.

They particularly see those two as having a fundamental impact on their society and daily lives.

I think a huge part is that they shape our relationships with other people. The reason the Amish rejected television is that it is a one-way conduit to bring another society into their living rooms. And they want to maintain the society as they have created it. And the automobile as well. As soon as you have a car, your ability to leave your local community becomes significantly easier.

You no longer have to rely on your neighbor for eggs when you run out. You can literally take half an hour and run to the store. In a horse and buggy, when you don’t have your own chickens, that’s a half-day process.

[...]

The Amish use us as an experiment. They watch what happens when we adopt new technology, and then decide whether that’s something they want to adopt themselves. I asked one Amish person why they didn’t use automobiles. He simply smiled and turned to me and said, “Look what they did to your society.” And I asked what do you mean? “Well, do you know your neighbor? Do you know the names of your neighbors?” And, at the time, I had to admit to the fact that I didn’t.

And he pointed out that my ability to simply bypass them with the windows closed meant I didn’t have to talk to them. And as a result, I didn’t.

His argument was that they were looking at us to decide whether or not this was something they wanted to do. I think that happens in our society as well. We certainly have this idea of alpha and beta testing. There are people very, very excited to play that role. I don’t know if they always frame themselves as guinea pigs, but that’s what they are.

It is your fault for following the wrong people

Sunday, June 3rd, 2018

Is surfing the internet dead?

Ten to fifteen years ago, I remember the joys of just finding things, clicking links through to other links, and in general meandering through a thick, messy, exhilarating garden.

Today you can’t do that as much. Many media sites are gated, a lot of the personal content is in the walled garden of Facebook, and blogs and personal home pages are not as significant as before.

[...]

That said, I do not feel that time on the internet has become an inferior experience. It’s just that these days you find most things by Twitter. You don’t have to surf, because this aggregator performs a surfing-like function for you. Scroll rather than surf, you could say (“scrolling alone,” said somebody on Twitter).

And if you hate Twitter, it is your fault for following the wrong people (try hating yourself instead!).

No one else was familiar with both fields at the same time

Sunday, May 27th, 2018

The history of computers is best understood as a history of ideas:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon’s “A Symbolic Analysis of Relay and Switching Circuits,” and Alan Turing’s “On Computable Numbers, With an Application to the Entscheidungsproblem.” In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.

A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.

Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.

Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.
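Shannon’s observation is easy to restate in modern terms: switches wired in series behave like Boolean AND, and switches wired in parallel like OR. A small sketch (the hallway-light example and the names are mine, not Shannon’s notation):

    def series(a, b):    # current flows only if both switches are closed
        return a and b

    def parallel(a, b):  # current flows if either switch is closed
        return a or b

    # A hallway light controlled from two three-way switches is XOR,
    # built purely from series/parallel compositions:
    def hallway_light(a, b):
        return parallel(series(a, not b), series(not a, b))

    for a in (False, True):
        for b in (False, True):
            print(a, b, hallway_light(a, b))  # on exactly when a != b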

I don’t think most computer science students learn even a fraction of this intellectual history.

Making everything else that was previously considered into obviously terrible ideas

Wednesday, May 16th, 2018

John Carmack shares some stories about Steve Jobs:

My wife once asked me “Why do you drop what you are doing when Steve Jobs asks you to do something? You don’t do that for anyone else.”

It is worth thinking about.

As a teenage Apple computer fan, I revered Jobs and Wozniak, and wanting an Apple 2 was a defining characteristic of several years of my childhood. Later on, seeing NeXT at a computer show just as I was selling my first commercial software felt like a vision into the future. (But $10k+, yikes!)

As id Software grew successful through Commander Keen and Wolfenstein 3D, the first major personal purchase I made wasn’t a car, but rather a NeXT computer. It turned out to be genuinely valuable for our software development, and we moved the entire company onto NeXT hardware.

We loved our NeXTs, and we wanted to launch Doom with an explicit “Developed on NeXT computers” logo during the startup process, but when we asked, the request was denied.

Some time after launch, when Doom had begun to make its cultural mark, we heard that Steve had changed his mind and would be happy to have NeXT branding on it, but that ship had sailed. I did think it was cool to trade a few emails with Steve Jobs.

Several things over the years made me conclude that, at his core, Steve didn’t think very highly of games, and always wished they weren’t as important to his platforms as they turned out to be. I never took it personally.

When NeXT managed to sort of reverse-acquire Apple and Steve was back in charge, I was excited by the possibilities of a resurgent Apple with the virtues of NeXT in a mainstream platform.

I was brought in to talk about the needs of games in general, but I made it my mission to get Apple to adopt OpenGL as their 3D graphics API. I had a lot of arguments with Steve.

Part of his method, at least with me, was to deride contemporary options and dare me to tell him differently. They might be pragmatic, but couldn’t actually be good. “I have Pixar. We will make something [an API] that is actually good.”

It was often frustrating, because he could talk, with complete confidence, about things he was just plain wrong about, like the price of memory for video cards and the amount of system bandwidth exploitable by the AltiVec extensions.

But when I knew what I was talking about, I would stand my ground against anyone.

When Steve did make up his mind, he was decisive about it. Dictates were made, companies were acquired, keynotes were scheduled, and the reality distortion field kicked in, making everything else that was previously considered into obviously terrible ideas.

I consider this one of the biggest indirect impacts on the industry that I have had. OpenGL never seriously threatened D3D on PC, but it was critical at Apple, and that meant that it remained enough of a going concern to be the clear choice when mobile devices started getting GPUs. While long in the tooth now, it was so much better than what we would have gotten if half a dozen SoC vendors rolled their own API back at the dawn of the mobile age.

It’s hardly the megawatt monster military scientists dreamed of

Wednesday, April 18th, 2018

The U.S. Navy’s most advanced laser weapon looks like a pricey amateur telescope, and, at just 30 kilowatts, it’s hardly the megawatt monster military scientists dreamed of decades ago to shoot down ICBMs, but it is a major milestone, built on a new technology:

The mission shift has been going on for years, from global defense against nuclear-armed “rogue states” to local defense against insurgents. The technology shift has been more abrupt, toward the hot new solid-state technology of optical-fiber lasers. These are the basis of a fast-growing US $2 billion industry that has reengineered the raw materials of global telecommunications to cut and weld metals, and the technology is now being scaled to even higher power with devastating effect.

Naval Laser by MCKIBILLO

Industrial fiber lasers can be made very powerful. IPG recently sold a 100-kilowatt fiber laser to the NADEX Laser R&D Center in Japan that can weld metal parts up to 30 centimeters thick. But such high power comes at the sacrifice of the ability to focus the beam over a distance. Cutting and welding tools need to operate only centimeters from their targets, after all. The highest power from single fiber lasers with beams good enough to focus onto objects hundreds of meters or more away is much less — 10 kW. Still, that’s adequate for stationary targets like unexploded ordnance left on a battlefield, because you can keep the laser trained on the explosive long enough to detonate it.

Of course, 10 kW won’t stop a speeding boat before it can deliver a bomb. The Navy laser demonstration on the USS Ponce was actually half a dozen IPG industrial fiber lasers, each rated at 5.5 kW, shot through the same telescope to form a 30-kW beam. But simply feeding the light from even more industrial fiber lasers into a bigger telescope would not produce a 100-kW beam that would retain the tight focus needed to destroy or disable fast-moving, far-off targets. The Pentagon needed a single 100-kW-class system for that. The laser would track the target’s motion, dwelling on a vulnerable spot, such as its engine or explosive payload, until the beam destroyed it.

Alas, that’s not going to happen with the existing approach. “If I could build a 100-kW laser with a single fiber, it would be great, but I can’t,” says Lockheed’s Afzal. “The scaling of a single-fiber laser to high power falls apart.” Delivering that much firepower requires new technology, he adds. The leading candidate is a way to combine the beams from many separate fiber lasers in a more controlled way than by simply firing them all through the same telescope.
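Some back-of-envelope numbers frame the problem: combining beams adds power easily, but the focusable spot at range is set by diffraction and beam quality, which is exactly what naive combining fails to preserve. The wavelength, aperture, and range below are my assumptions, not figures from the article:

    n_lasers, per_laser_kw = 6, 5.5
    print(n_lasers * per_laser_kw)   # 33 kW: the ~30-kW Ponce demonstration

    wavelength = 1.07e-6     # m, typical ytterbium fiber laser (assumed)
    aperture = 0.3           # m, telescope diameter (assumed)
    target_range = 1000.0    # m (assumed)
    # Ideal diffraction-limited (Airy) spot diameter; a real spot is larger
    # by the beam-quality factor, which crude beam combining degrades.
    spot = 2.44 * wavelength * target_range / aperture
    print(spot)              # ~0.0087 m: under a centimeter, if perfect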

There’s much, much more.

Kitty Hawk’s Cora

Sunday, March 18th, 2018

Kitty Hawk Corporation’s new Cora air taxi “is powered by 12 independent lift fans, which enable her to take off and land vertically like a helicopter” and has a range of “about 62 miles” while flying at “about 110 miles per hour” at an altitude “between 500 ft to 3000 ft above the ground.”
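Those “about” figures imply roughly half an hour of flight per charge; this is my inference, not a published endurance number:

    range_miles, speed_mph = 62, 110      # Kitty Hawk's "about" figures
    print(60 * range_miles / speed_mph)   # ~33.8 minutes aloft, implied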

A proton battery combines the best aspects of hydrogen fuel cells and conventional batteries

Wednesday, March 14th, 2018

Researchers from RMIT University in Melbourne, Australia have produced a working prototype of a proton battery, which combines the best aspects of hydrogen fuel cells and battery-based electrical power:

The latest version combines a carbon electrode for solid-state storage of hydrogen with a reversible fuel cell to provide an integrated rechargeable unit.

The successful use of an electrode made from activated carbon in a proton battery is a significant step forward and is reported in the International Journal of Hydrogen Energy.

During charging, protons produced by water splitting in a reversible fuel cell are conducted through the cell membrane and directly bond with the storage material with the aid of electrons supplied by the applied voltage, without forming hydrogen gas.

In electricity supply mode this process is reversed; hydrogen atoms are released from the storage and lose an electron to become protons once again. These protons then pass back through the cell membrane where they combine with oxygen and electrons from the external circuit to re-form water.

A major potential advantage of the proton battery is much higher energy efficiency than conventional hydrogen systems, making it comparable to lithium ion batteries. The losses associated with hydrogen gas evolution and splitting back into protons are eliminated.

Several years ago the RMIT team showed that a proton battery with a metal alloy electrode for storing hydrogen could work, but its reversibility and rechargeability were too low. Also, the alloy employed contained rare-earth elements, and was thus heavy and costly.

The latest experimental results showed that a porous activated-carbon electrode made from phenolic resin was able to store around 1 wt% hydrogen in the electrode. This is an energy per unit mass already comparable with commercially available lithium-ion batteries, even though the proton battery is far from being optimised. The maximum cell voltage was 1.2 volts.
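As a sanity check on the lithium-ion comparison, the quoted figures (1 wt% hydrogen, 1.2-V maximum cell voltage) imply an ideal ceiling of roughly 320 Wh per kilogram of electrode. The calculation below uses only those quoted numbers plus the Faraday constant, and ignores losses and the mass of everything except the carbon:

    FARADAY = 96485                    # coulombs per mole of electrons
    grams_h_per_kg = 10.0              # 1 wt% of a 1 kg electrode
    moles_h = grams_h_per_kg / 1.008   # each H atom supplies one proton/electron
    charge = moles_h * FARADAY         # coulombs per kg of electrode
    energy_wh = charge * 1.2 / 3600    # joules -> watt-hours at 1.2 V
    print(round(energy_wh))            # ~319 Wh/kg, in the Li-ion ballpark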