Re-creating the first flip-flop

Friday, June 15th, 2018

The flip-flop was created 100 years ago — in the pre-digital age:

Many engineers are familiar with the names of Lee de Forest, who invented the amplifying vacuum tube, or John Bardeen, Walter Brattain, and William Shockley, who invented the transistor. Yet few know the names of William Eccles and F.W. Jordan, who applied for a patent for the flip-flop 100 years ago, in June 1918. The flip-flop is a crucial building block of digital circuits: It acts as an electronic toggle switch that can be set to stay on or off even after an initial electrical control signal has ceased. This allows circuits to remember and synchronize their states, and thus allows them to perform sequential logic.

The flip-flop was created in the predigital age as a trigger relay for radio designs. Its existence was popularized by an article in the December 1919 issue of The Radio Review [PDF], and two decades later, the flip-flop would find its way into the Colossus computer [PDF], used in England to break German wartime ciphers, and into the ENIAC in the United States.

Modern flip-flops are built in countless numbers out of transistors in integrated circuits, but, as the centenary of the flip-flop approached, I decided to replicate Eccles and Jordan’s original circuit as closely as possible.

This circuit is built around two vacuum tubes, so I started there. Originally, Eccles and Jordan most likely used Audion tubes or British-made knock-offs. The Audion was invented by de Forest, and it was the first vacuum tube to demonstrate amplification, allowing a weak signal applied to a grid to control a much larger electrical current flowing from a filament to a plate. But these early tubes were handmade and unreliable, and it would be impractical to obtain a usable pair today.

Instead I turned to the UX201A, an improved variant of the UV201 tube that General Electric started producing in 1920. While still close in time to the original patent, the UV201 marked the beginning of vacuum-tube mass production, and a consequent leap in reliability and availability. I was able to purchase two 01A tubes for about US $35 apiece.

Flip-Flop Circuit Diagram

In a flip-flop, the tubes are cross-coupled in a careful balancing act, using pairs of resistors to control voltages. This balancing act means that turning off one tube, even momentarily, turns the second tube on and keeps the first tube off. This state of affairs continues until the second tube is turned off with a control signal, which pushes the first tube on and keeps the second tube off.
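
The cross-coupling is easier to see in code than in prose. Below is a minimal toy model (mine, not from the patent) that treats each tube as an inverting stage whose output is high only when the other stage and its own control input are both low — which is the set-and-hold behavior described above:

```python
# Toy model of the Eccles-Jordan circuit: two cross-coupled inverting
# stages, each conducting only while the other is off. A brief pulse on
# either control input flips the state, which then holds itself.

def settle(set_pulse, reset_pulse, q, q_bar):
    """Iterate the cross-coupled pair until the outputs stop changing."""
    for _ in range(10):  # a few passes are enough to reach equilibrium
        q_new = not (reset_pulse or q_bar)
        q_bar_new = not (set_pulse or q)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, q_bar = settle(set_pulse=True, reset_pulse=False, q=False, q_bar=True)
print(q, q_bar)  # True False: the "set" state
q, q_bar = settle(False, False, q, q_bar)
print(q, q_bar)  # True False: the state persists after the pulse ends
q, q_bar = settle(False, True, q, q_bar)
print(q, q_bar)  # False True: a reset pulse flips it
```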

Achieving the right balance means getting the values of the resistors just right. In their laboratory, Eccles and Jordan would have used resistor decade boxes, bulky pieces of equipment that would have let them dial in resistances at different points in their circuit. For reasons of space, I decided to use fixed resistors of a similar vintage as the patent.

I was able to obtain a set of such resistors from the collection of antique radios that I’ve accumulated over the years. In the 1920s, radio manufacturing exploded, and the result is that I have quite a few early radios that are pretty nondescript and beyond repair, so I didn’t feel too bad about cannibalizing them for parts. Resistors made before 1925 were generally placed into sockets, rather than soldered into a circuit board, so extracting them wasn’t hard.

The hard part was that these resistors are very imprecise. They were handmade with a resistive carbon element held between clips in a glass enclosure. One way to get their resistance closer to the desired value is to open up the enclosure, remove the strip of carbon, make notches in it to increase its resistance, and put it back in. I adjusted several of the resistors this way, but it was too tricky to do with others, so for those I cheated a little and placed modern resistors inside the vintage glass casing.
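
Why notching works: for a uniform resistive element with resistivity $\rho$, length $L$, and cross-sectional area $A$, the standard relation is

$$R = \frac{\rho L}{A}$$

so each notch shrinks the effective area $A$ along part of the strip and can only push the resistance upward. That is why the technique only helps when the element starts below its target value.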

Flip-Flop Replica

I used modern battery supplies in place of the numerous wet cells that the inventors probably used. One of the issues with tube-based circuits is that they require a range of voltages. Four D cells wired in series provide the 6 volts needed for the indicator lamps and the tubes’ filaments. Eleven 9-V batteries in series provide the 99 V required for the tubes’ plates. A similarly constructed 63-V supply (seven 9-V batteries) negatively biases the tubes’ grids. Old-fashioned brass doorbell buttons let me tap a 9-V battery connection to provide the control pulses. To show the flip-flop’s state, I used sensitive antique telegraph relays that operate miniature incandescent lamps.

With a lot of trial and error and tweaking of my nearly century-old components, over the course of a year I was finally able to achieve stable operation of this venerable circuit!

It is a one-way conduit to bring another society into their living rooms

Wednesday, June 13th, 2018

The Amish have negotiated a pact with modernity:

It’s interesting that the Amish have different districts, and each district has different rules about what’s allowed and what’s not allowed. Yet it’s very clear there are two technologies that, as soon as a community accepts them, mean it is no longer Amish. Those technologies are the television and the automobile.

They particularly see those two as having a fundamental impact on their society and daily lives.

I think a huge part is that they shape our relationships with other people. The reason the Amish rejected television is because it is a one-way conduit to bring another society into their living rooms. And they want to maintain the society as they have created it. And the automobile as well. As soon as you have a car, your ability to leave your local community becomes significantly easier.

You no longer have to rely on your neighbor for eggs when you run out. You can literally take half an hour and run to the store. In a horse and buggy, when you don’t have your own chickens, that’s a half-day process.

[...]

The Amish use us as an experiment. They watch what happens when we adopt new technology, and then decide whether that’s something they want to adopt themselves. I asked one Amish person why they didn’t use automobiles. He simply smiled, turned to me, and said, “Look what they did to your society.” I asked what he meant. “Well, do you know your neighbor? Do you know the names of your neighbors?” And, at the time, I had to admit that I didn’t.

And he pointed out that my ability to simply bypass them with the windows closed meant I didn’t have to talk to them. And as a result, I didn’t.

His argument was that they were looking at us to decide whether or not this was something they wanted to do. I think that happens in our society as well. We certainly have this idea of alpha and beta testing. There are people very, very excited to play that role. I don’t know if they always frame themselves as guinea pigs, but that’s what they are.

It is your fault for following the wrong people

Sunday, June 3rd, 2018

Is surfing the internet dead?

Ten to fifteen years ago, I remember the joys of just finding things, clicking links through to other links, and in general meandering through a thick, messy, exhilarating garden.

Today you can’t do that as much. Many media sites are gated, a lot of the personal content is in the walled garden of Facebook, and blogs and personal home pages are not as significant as before.

[...]

That said, I do not feel that time on the internet has become an inferior experience. It’s just that these days you find most things by Twitter. You don’t have to surf, because this aggregator performs a surfing-like function for you. Scroll rather than surf, you could say (“scrolling alone,” said somebody on Twitter).

And if you hate Twitter, it is your fault for following the wrong people (try hating yourself instead!).

No one else was familiar with both fields at the same time

Sunday, May 27th, 2018

The history of computers is best understood as a history of ideas:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon’s “A Symbolic Analysis of Relay and Switching Circuits,” and Alan Turing’s “On Computable Numbers, With an Application to the Entscheidungsproblem.” In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.

A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.

Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.

Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.
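
The connection Shannon drew is concrete enough to state in a few lines of code: switches wired in series behave like Boole’s AND, and switches wired in parallel behave like his OR. A small illustration of the idea (my own sketch, not from the thesis):

```python
# Shannon's observation: networks of switches obey Boole's algebra.
# A closed switch conducts (True); an open switch does not (False).

def series(*switches):
    """A series chain conducts only if every switch is closed: AND."""
    return all(switches)

def parallel(*switches):
    """Parallel branches conduct if any switch is closed: OR."""
    return any(switches)

# A relay selector that conducts when (a AND b) OR ((NOT a) AND c)
# can be analyzed, and simplified, as an ordinary Boolean expression.
def selector(a, b, c):
    return parallel(series(a, b), series(not a, c))

for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            assert selector(a, b, c) == (b if a else c)
```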

I don’t think most computer science students learn even a fraction of this intellectual history.

Making everything else that was previously considered into obviously terrible ideas

Wednesday, May 16th, 2018

John Carmack shares some stories about Steve Jobs:

My wife once asked me “Why do you drop what you are doing when Steve Jobs asks you to do something? You don’t do that for anyone else.”

It is worth thinking about.

As a teenage Apple computer fan, I revered Jobs and Wozniak, and wanting an Apple 2 was a defining characteristic of several years of my childhood. Later on, seeing NeXT at a computer show just as I was selling my first commercial software felt like a vision into the future. (But $10k+, yikes!)

As id Software grew successful through Commander Keen and Wolfenstein 3D, the first major personal purchase I made wasn’t a car, but rather a NeXT computer. It turned out to be genuinely valuable for our software development, and we moved the entire company onto NeXT hardware.

We loved our NeXTs, and we wanted to launch Doom with an explicit “Developed on NeXT computers” logo during the startup process, but when we asked, the request was denied.

Some time after launch, when Doom had begun to make its cultural mark, we heard that Steve had changed his mind and would be happy to have NeXT branding on it, but that ship had sailed. I did think it was cool to trade a few emails with Steve Jobs.

Several things over the years made me conclude that, at his core, Steve didn’t think very highly of games, and always wished they weren’t as important to his platforms as they turned out to be. I never took it personally.

When NeXT managed to sort of reverse-acquire Apple and Steve was back in charge, I was excited by the possibilities of a resurgent Apple with the virtues of NeXT in a mainstream platform.

I was brought in to talk about the needs of games in general, but I made it my mission to get Apple to adopt OpenGL as their 3D graphics API. I had a lot of arguments with Steve.

Part of his method, at least with me, was to deride contemporary options and dare me to tell him differently. They might be pragmatic, but couldn’t actually be good. “I have Pixar. We will make something [an API] that is actually good.”

It was often frustrating, because he could talk, with complete confidence, about things he was just plain wrong about, like the price of memory for video cards and the amount of system bandwidth exploitable by the AltiVec extensions.

But when I knew what I was talking about, I would stand my ground against anyone.

When Steve did make up his mind, he was decisive about it. Dictates were made, companies were acquired, keynotes were scheduled, and the reality distortion field kicked in, making everything else that was previously considered into obviously terrible ideas.

I consider this one of the biggest indirect impacts on the industry that I have had. OpenGL never seriously threatened D3D on PC, but it was critical at Apple, and that meant that it remained enough of a going concern to be the clear choice when mobile devices started getting GPUs. While long in the tooth now, it was so much better than what we would have gotten if half a dozen SoC vendors rolled their own API back at the dawn of the mobile age.

It’s hardly the megawatt monster military scientists dreamed of

Wednesday, April 18th, 2018

The U.S. Navy’s most advanced laser weapon looks like a pricey amateur telescope. At just 30 kilowatts, it’s hardly the megawatt monster military scientists dreamed of decades ago to shoot down ICBMs. But it is a major milestone, built on a new technology:

The mission shift has been going on for years, from global defense against nuclear-armed “rogue states” to local defense against insurgents. The technology shift has been more abrupt, toward the hot new solid-state technology of optical-fiber lasers. These are the basis of a fast-growing US $2 billion industry that has reengineered the raw materials of global telecommunications to cut and weld metals, and it is now being scaled to even higher power with devastating effect.

Naval Laser by MCKIBILLO

Industrial fiber lasers can be made very powerful. IPG recently sold a 100-kilowatt fiber laser to the NADEX Laser R&D Center in Japan that can weld metal parts up to 30 centimeters thick. But such high output comes at the cost of the ability to focus the beam over a distance. Cutting and welding tools need to operate only centimeters from their targets, after all. The highest power available from a single fiber laser with a beam good enough to focus onto objects hundreds of meters or more away is much lower: about 10 kW. Still, that’s adequate for stationary targets like unexploded ordnance left on a battlefield, because you can keep the laser trained on the explosive long enough to detonate it.

Of course, 10 kW won’t stop a speeding boat before it can deliver a bomb. The Navy laser demonstration on the USS Ponce was actually half a dozen IPG industrial fiber lasers, each rated at 5.5 kW, shot through the same telescope to form a 30-kW beam. But simply feeding the light from even more industrial fiber lasers into a bigger telescope would not produce a 100-kW beam that would retain the tight focus needed to destroy or disable fast-moving, far-off targets. The Pentagon needed a single 100-kW-class system for that. The laser would track the target’s motion, dwelling on a vulnerable spot, such as its engine or explosive payload, until the beam destroyed it.
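
The arithmetic behind that demonstrator is worth making explicit. Six 5.5-kW lasers supply 33 kW of raw power, so the quoted 30-kW beam implies that roughly nine-tenths of the light survives the combining optics (my inference from the figures above, not a published spec):

```python
# Back-of-envelope for incoherently combining industrial fiber lasers.
n_lasers = 6
power_each_kw = 5.5
nominal_kw = n_lasers * power_each_kw   # 33.0 kW of raw laser power
delivered_kw = 30.0                     # combined beam quoted for the Ponce system
efficiency = delivered_kw / nominal_kw  # fraction surviving combining and optics
print(f"{nominal_kw:.1f} kW nominal, {efficiency:.0%} reaches the combined beam")
```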

Alas, that’s not going to happen with the existing approach. “If I could build a 100-kW laser with a single fiber, it would be great, but I can’t,” says Lockheed’s Afzal. “The scaling of a single-fiber laser to high power falls apart.” Delivering that much firepower requires new technology, he adds. The leading candidate is a way to combine the beams from many separate fiber lasers in a more controlled way than by simply firing them all through the same telescope.

There’s much, much more.

Kitty Hawk’s Cora

Sunday, March 18th, 2018

Kitty Hawk Corporation’s new Cora air taxi “is powered by 12 independent lift fans, which enable her to take off and land vertically like a helicopter” and has a range of “about 62 miles” while flying at “about 110 miles per hour” at an altitude “between 500 ft to 3000 ft above the ground”:

A proton battery combines the best aspects of hydrogen fuel cells and conventional batteries

Wednesday, March 14th, 2018

Researchers from RMIT University in Melbourne, Australia, have produced a working prototype of a proton battery, which combines the best aspects of hydrogen fuel cells and battery-based electrical power:

The latest version combines a carbon electrode for solid-state storage of hydrogen with a reversible fuel cell to provide an integrated rechargeable unit.

The successful use of an electrode made from activated carbon in a proton battery is a significant step forward and is reported in the International Journal of Hydrogen Energy.

During charging, protons produced by water splitting in a reversible fuel cell are conducted through the cell membrane and directly bond with the storage material with the aid of electrons supplied by the applied voltage, without forming hydrogen gas.

In electricity supply mode this process is reversed; hydrogen atoms are released from the storage and lose an electron to become protons once again. These protons then pass back through the cell membrane where they combine with oxygen and electrons from the external circuit to re-form water.
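
In reaction form, the cycle described above looks like this (my summary, with C standing for a hydrogen-storage site on the activated-carbon electrode):

$$
\begin{aligned}
\text{charge:} \quad & \mathrm{H_2O \rightarrow 2\,H^+ + \tfrac{1}{2}O_2 + 2\,e^-}, \qquad \mathrm{C + H^+ + e^- \rightarrow C{-}H} \\
\text{discharge:} \quad & \mathrm{C{-}H \rightarrow C + H^+ + e^-}, \qquad \mathrm{2\,H^+ + \tfrac{1}{2}O_2 + 2\,e^- \rightarrow H_2O}
\end{aligned}
$$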

A major potential advantage of the proton battery is much higher energy efficiency than conventional hydrogen systems, making it comparable to lithium ion batteries. The losses associated with hydrogen gas evolution and splitting back into protons are eliminated.

Several years ago the RMIT team showed that a proton battery with a metal alloy electrode for storing hydrogen could work, but its reversibility and rechargeability were too low. Also, the alloy employed contained rare-earth elements and was thus heavy and costly.

The latest experimental results showed that a porous activated-carbon electrode made from phenolic resin was able to store around 1 wt% hydrogen in the electrode. This is an energy per unit mass already comparable with commercially available lithium-ion batteries, even though the proton battery is far from being optimised. The maximum cell voltage was 1.2 volts.
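
Those two figures, 1 wt% hydrogen and 1.2 V, are enough for a back-of-envelope check of the energy-density claim (my own arithmetic, counting electrode mass only and assuming one electron per stored hydrogen atom):

```python
# Rough specific energy of the electrode from the figures quoted above.
FARADAY = 96485.0        # coulombs per mole of electrons
H_MOLAR_MASS = 1.008     # grams per mole of hydrogen

h_mass_fraction = 0.01   # 1 wt% hydrogen stored in the carbon electrode
cell_voltage = 1.2       # maximum cell voltage, volts

moles_h_per_kg = 1000 * h_mass_fraction / H_MOLAR_MASS  # ~9.9 mol H per kg
charge_per_kg = moles_h_per_kg * FARADAY                # one electron per H atom
energy_wh_per_kg = charge_per_kg * cell_voltage / 3600  # joules -> watt-hours

print(f"~{energy_wh_per_kg:.0f} Wh per kg of electrode")  # ~319 Wh/kg
```

That lands in the same range as commercial lithium-ion cells, consistent with the claim, though a complete cell would add membrane, electrolyte, and packaging mass.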

Silly, fun things are important

Wednesday, February 7th, 2018

Yesterday’s Falcon Heavy test flight was impressive:

Launching a Tesla roadster into space was, of course, a ludicrous stunt. Kids these days may not get the allusion to the opening scene of Heavy Metal:

The South Park guys had quite a bit of fun spoofing that scene, and the rest of Heavy Metal, 10 years ago:

A few hours after the launch, Elon Musk answered some questions:

Project Plowshare

Saturday, February 3rd, 2018

Back in the “Atoms for Peace” era, the US’s Project Plowshare attempted to harness peaceful nuclear explosions for massive public works. The first test, Project Gnome, took place roughly 40 km (25 mi) southeast of Carlsbad, New Mexico, in an area of salt and potash mines along with oil and gas wells:

It was learned during the 1957 Plumbbob-Rainier tests that an underground nuclear detonation created large quantities of heat as well as radioisotopes, but most of these would quickly become trapped in the molten rock and rendered unusable as the rock resolidified. For this reason, it was decided that Gnome would be detonated in bedded rock salt. The plan was to then pipe water through the molten salt and use the generated steam to produce electricity. The hardened salt could subsequently be dissolved in water in order to extract the radioisotopes. Gnome was considered extremely important to the future of nuclear science because it could show that nuclear weapons might be used in peaceful applications. The Atomic Energy Commission invited representatives from various nations, the U.N., the media, interested scientists, and some Carlsbad residents.

“We’re going to set off an atomic bomb in a cave. You wanna come?”

Gnome was placed 361 m (1,184 ft) underground at the end of a 340 m (1,115 ft) tunnel that was supposed to be self-sealing upon detonation. Gnome was detonated on 10 December 1961, with a yield of 3.1 kilotons. Even though the Gnome shot was supposed to seal itself, the plan did not quite work. Two to three minutes after detonation, smoke and steam began to rise from the shaft. Consequently, some radiation was released and detected off site, but it quickly decayed.

The cavity volume was calculated to be 28,000 ± 2,800 cubic meters, with an average radius of 17.4 m measured in the lower portion. The Gnome detonation created a cavity about 170 ft (52 m) wide and almost 90 ft (27 m) high, with a floor of melted rock and salt. A new shaft was drilled near the original and, on 17 May 1962, crews entered the Gnome Cavity. Even though almost six months had passed since the detonation, the temperature inside the cavity was still around 140 °F (60 °C). Inside, they found stalactites made of melted salt, as well as the walls of the cavity covered in salt, which the intense radiation of the detonation had colored multiple shades of blue, green, and violet. Nonetheless, the explorers encountered only 5 milliroentgen, and it was considered safe for them to enter the cavern and cross its central rubble pile. While the three-kiloton explosion had melted 2,400 tons of salt, it had also collapsed the sides and top of the chamber, adding 28,000 tons of rubble that mixed with the molten salt and rapidly reduced its temperature. This was why the drilling program had originally been unsuccessful, finding temperatures of only 200 °F and no high-pressure steam, though the boreholes had encountered occasional pockets of molten salt at up to 1,450 °F deeper amid the rubble.

Today, all that exists on the surface to show what occurred below is a small concrete monument with two weathered and slightly vandalized plaques.

Other proposals under Project Plowshare included widening the Panama Canal, constructing a new sea-level waterway through Nicaragua nicknamed the Pan-Atomic Canal, cutting paths through mountainous areas for highways, and connecting inland river systems.

No mention of draining the Mediterranean though.

(Hat tip to commenter Sam J.)

Its rules are designed with one eye on how those rules might be exploited down the line

Thursday, February 1st, 2018

Steven Johnson looks beyond the Bitcoin bubble:

History is replete with stories of new technologies whose initial applications end up having little to do with their eventual use. All the focus on Bitcoin as a payment system may similarly prove to be a distraction, a technological red herring. Nakamoto pitched Bitcoin as a “peer-to-peer electronic-cash system” in the initial manifesto, but at its heart, the innovation he (or she or they) was proposing had a more general structure, with two key features.

First, Bitcoin offered a kind of proof that you could create a secure database — the blockchain — scattered across hundreds or thousands of computers, with no single authority controlling and verifying the authenticity of the data.

Second, Nakamoto designed Bitcoin so that the work of maintaining that distributed ledger was itself rewarded with small, increasingly scarce Bitcoin payments. If you dedicated half your computer’s processing cycles to helping the Bitcoin network get its math right — and thus fend off the hackers and scam artists — you received a small sliver of the currency. Nakamoto designed the system so that Bitcoins would grow increasingly difficult to earn over time, ensuring a certain amount of scarcity in the system. If you helped Bitcoin keep that database secure in the early days, you would earn more Bitcoin than later arrivals. This process has come to be called “mining.”
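
The work of “getting its math right” is a proof-of-work search: miners hunt for a nonce that makes the hash of a block fall below a difficulty target. A toy version of the mechanism (real Bitcoin hashes an 80-byte block header with double SHA-256; this sketch only shows the principle):

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Find a nonce so SHA-256(block_data + nonce) has the required leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # expensive to find, but anyone can verify with one hash
        nonce += 1

# Each added difficulty bit doubles the expected work: the knob that makes
# coins "increasingly difficult to earn over time."
print(mine("ledger entries go here", difficulty_bits=16))
```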

[...]

Token economies introduce a strange new set of elements that do not fit the traditional models: instead of creating value by owning something, as in the shareholder equity model, people create value by improving the underlying protocol, either by helping to maintain the ledger (as in Bitcoin mining), or by writing apps atop it, or simply by using the service. The lines between founders, investors and customers are far blurrier than in traditional corporate models; all the incentives are explicitly designed to steer away from winner-take-all outcomes. And yet at the same time, the whole system depends on an initial speculative phase in which outsiders are betting on the token to rise in value.

“You think about the ’90s internet bubble and all the great infrastructure we got out of that,” Dixon says. “You’re basically taking that effect and shrinking it down to the size of an application.”

[...]

So much of the blockchain’s architecture is shaped by predictions about how that architecture might be abused once it finds a wider audience. That is part of its charm and its power. The blockchain channels the energy of speculative bubbles by allowing tokens to be shared widely among true supporters of the platform. It safeguards against any individual or small group gaining control of the entire database. Its cryptography is designed to protect against surveillance states or identity thieves. In this, the blockchain displays a familial resemblance to political constitutions: Its rules are designed with one eye on how those rules might be exploited down the line.

Much has been made of the anarcho-libertarian streak in Bitcoin and other nonfiat currencies; the community is rife with words and phrases (“self-sovereign”) that sound as if they could be slogans for some militia compound in Montana. And yet in its potential to break up large concentrations of power and explore less-proprietary models of ownership, the blockchain idea offers a tantalizing possibility for those who would like to distribute wealth more equitably and break up the cartels of the digital age.

The blockchain worldview can also sound libertarian in the sense that it proposes nonstate solutions to capitalist excesses like information monopolies. But to believe in the blockchain is not necessarily to oppose regulation, if that regulation is designed with complementary aims. Brad Burnham, for instance, suggests that regulators should insist that everyone have “a right to a private data store,” where all the various facets of their online identity would be maintained. But governments wouldn’t be required to design those identity protocols. They would be developed on the blockchain, open source. Ideologically speaking, that private data store would be a true team effort: built as an intellectual commons, funded by token speculators, supported by the regulatory state.

Like the original internet itself, the blockchain is an idea with radical — almost communitarian — possibilities that at the same time has attracted some of the most frivolous and regressive appetites of capitalism. We spent our first years online in a world defined by open protocols and intellectual commons; we spent the second phase in a world increasingly dominated by closed architectures and proprietary databases. We have learned enough from this history to support the hypothesis that open works better than closed, at least where base-layer issues are concerned. But we don’t have an easy route back to the open-protocol era. Some messianic next-generation internet protocol is not likely to emerge out of Department of Defense research, the way the first-generation internet did nearly 50 years ago.

Yes, the blockchain may seem like the very worst of speculative capitalism right now, and yes, it is demonically challenging to understand. But the beautiful thing about open protocols is that they can be steered in surprising new directions by the people who discover and champion them in their infancy. Right now, the only real hope for a revival of the open-protocol ethos lies in the blockchain. Whether it eventually lives up to its egalitarian promise will in large part depend on the people who embrace the platform, who take up the baton, as Juan Benet puts it, from those early online pioneers. If you think the internet is not working in its current incarnation, you can’t change the system through think-pieces and F.C.C. regulations alone. You need new code.

Most people aren’t shoplifters

Sunday, January 28th, 2018

TechCrunch looks inside Amazon’s surveillance-powered no-checkout convenience store, which should work just fine as long as all the customers are Amazon employees:

In addition to the cameras, there are weight sensors in the shelves, and the system is aware of every item’s exact weight — so no trying to grab two yogurts at once and palm the second, as I considered doing. You might be able to do it Indiana Jones style, with a suitable amount of sand in a sack, but that’s more effort than most shoplifters are willing to put in.

And, as Kumar noted to me, most people aren’t shoplifters, and the system is designed around most people. Building a system that assumes ill intent rather than merely detecting discrepancies is not always a good design choice.
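
That “detecting discrepancies” framing suggests a simple shape for the shelf logic: compare the measured weight change against the catalog weight of whatever the cameras saw taken, and only flag what doesn’t add up. A purely speculative sketch (item names, weights, and tolerance are all invented for illustration; this is not Amazon’s system):

```python
# Hypothetical smart-shelf check: the weight change should match a whole
# number of units of the item the vision system reported.

CATALOG_GRAMS = {"yogurt": 150.0, "sandwich": 220.0}
TOLERANCE_GRAMS = 10.0  # allowance for sensor noise

def check_pick(item_seen: str, shelf_delta_grams: float) -> str:
    expected = CATALOG_GRAMS[item_seen]
    units = round(shelf_delta_grams / expected)
    if units >= 1 and abs(shelf_delta_grams - units * expected) <= TOLERANCE_GRAMS:
        return f"charge for {units} x {item_seen}"
    return "discrepancy: flag for review"

print(check_pick("yogurt", 152.0))  # one yogurt: charge normally
print(check_pick("yogurt", 301.0))  # two grabbed at once: charge for two
print(check_pick("yogurt", 73.0))   # doesn't add up: flag, don't accuse
```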

Space is open for business

Sunday, January 21st, 2018

Space is open for business, Rocket Lab has announced. Its Electron rocket, launched from the company’s own private pad in New Zealand, reached orbit and successfully deployed multiple small satellites that will map the earth’s surface and track weather systems and shipping.

The Electron rocket is disposable:

It is made of lightweight carbon composite material and has 3D-printed engines to reduce costs and assembly times. It is 17m long, roughly a quarter of the size of rivals such as SpaceX’s Falcon 9 rocket, which can carry satellites the size of a van into orbit. Each Rocket Lab launch costs about $5m, compared to $62m for SpaceX, the company founded by billionaire Elon Musk.

Sunday’s launch was the second test flight of the Electron rocket, following an earlier flight in May. On that occasion the rocket entered space but was unable to reach low earth orbit due to a technical fault. The company is planning a third test flight later this year.

Some satellite providers are willing to risk their products on test rockets due to the lengthy backlog in launches that has built up as the industry expands. Rocket Lab deployed the three small satellites on behalf of Planet and Spire Global, US-based satellite providers that are deploying constellations of nanosatellites in low earth orbit at about 500 km.

Rocket Lab says its private launch pad on the picturesque Mahia peninsula on New Zealand’s North Island gives it a commercial advantage over many competitors, who use government-run facilities such as Cape Canaveral in the US. The company is licensed to conduct a launch every 72 hours from the remote location, which benefits from the lack of air and ship traffic in the vicinity.

I first heard about Rocket Lab just last year.

An alternative to “old fashioned” deuterium-tritium fusion

Thursday, January 18th, 2018

HB11 Energy proposes laser-driven hydrogen-boron fusion as an alternative to “old fashioned” deuterium-tritium fusion:

A scientific paper accepted for publication describes the road map by which one of the founders and his team deemed the approach viable, based on experimentally confirmed reaction gains one billion times higher than the classical values, placing it far ahead of any DT fusion approach.

Other advantages: Unlike deuterium-tritium fusion and fission techniques, the HB11 reaction is clean, producing no harmful byproducts or radiation. It also has the potential to create electricity directly, without the heat exchanger and steam turbine required by coal or fission nuclear power stations. This would allow power stations to be built with a relatively small capital investment and footprint, based on presently achieved extreme laser technology.
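
The cleanliness claim follows from the reaction itself, which yields three charged alpha particles and no neutron (the Q-value below is the standard published figure, not from the article):

$$\mathrm{p} + {}^{11}\mathrm{B} \rightarrow 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV}$$

Because the products are charged particles rather than neutrons, their energy can in principle be harvested as electricity directly, which is the basis of the no-steam-turbine claim above.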

We expect to be able to provide energy for about one-quarter of the price of coal-fired power, without any carbon emissions or radioactive by-products, which will be disruptive to the power industry. With the small size and footprint of an HB11 power station, the addressable market is expected to reach beyond the power grid to applications such as ships, submarines, large factories, or remote locations such as isolated towns and mine sites.

The birth of the digital camera

Monday, January 8th, 2018

Former Kodak employee Steve Sasson tells the story of the birth of the digital camera:

I worked for Eastman Kodak Company for over 35 years. I began in July of 1973. I was a junior engineer. My supervisor said, ‘We’ve got a filler job for you. There’s a new type of imaging device called a charge-coupled device imager; we want someone to look at one of these and see if we can do anything useful with it.’

Our conversation probably lasted about 30 seconds, it was nothing.

Most of the parts I used to build it, I stole from around the factory: digital voltmeters and chips, a digital tape cassette, a prototype box. It looked like an Erector Set with a blue box on top and a lens stuck on it. And I would output to a television set. We took our first full images in December of 1975.

I folded the camera up, and I walked down a hallway, and there was a young lab technician, her name was Joy. I asked her, ‘Could I take a snapshot of you?’ She said, ‘sure, whatever.’ The tape started to move, that’s how I know I made a picture. I popped it out of the tape player, put it into the playback system. It was quite a moment, because this crazy thing actually worked. Up popped the image. We could see her black hair and a white background, but her face was complete static, completely unrecognizable. Jim and I were overjoyed at what we saw, because we knew so many reasons why we wouldn’t see anything at all.

Joy had followed us in, she looked at the picture and she said, ‘Needs work.’

We filed for a patent, and the first patent for a digital camera, U.S. Patent 4,131,919, was granted in 1978. We started to show it to people at Kodak. Then, it became more interesting.

I thought they’d spend all their time asking me how I got this to work. They didn’t ask me any of the hows; they asked me, ‘Why? Why would anyone want to do this?’