UNSW engineers have modified a conventional diesel engine to use a mix of hydrogen and a small amount of diesel

Friday, January 27th, 2023

Engineers at the University of New South Wales (UNSW) say they have successfully modified a conventional diesel engine to use a mix of hydrogen and a small amount of diesel, claiming their patented technology has cut carbon dioxide (CO2) emissions by more than 85%:

About 90% of the fuel in the UNSW hybrid diesel engine is hydrogen, but it must be applied in a carefully calibrated way. If the hydrogen is not introduced into the fuel mix at the right moment, “it will create something that is explosive that will burn out the whole system,” Prof Kook explains.

He says that studies have shown that controlling the mixture of hydrogen and air inside the cylinder of the engine can help negate harmful nitrogen oxide emissions, which have been an obstacle to the commercialisation of hydrogen motors.

The Sydney research team believes that any diesel trucks and power equipment in the mining, transportation and agriculture sectors could be retrofitted with the new hybrid system in just a couple of months.

Prof Kook doubts the hybrid would be of much interest in the car industry though, where electric and hybrid vehicles are already advanced and replacing diesel cars.

However, he says Australia’s multibillion-dollar mining industry needs a solution for all its diesel-powered equipment as soon as possible.

The comparatively short lifespan of modern concrete is overwhelmingly the result of corrosion-induced failure

Thursday, January 26th, 2023

Roman concrete’s ability to last for millennia puts modern concrete to shame, but this ignores that the overwhelming majority of modern concrete is reinforced concrete, Brian Potter explains, with some type of steel embedded in it:

Usually this is in the form of bars (rebar), but it might also be mesh, or fibers, or steel cable. Steel is stronger than concrete, particularly in tension (reinforcing steel has perhaps 10-15x the compressive strength of concrete, but more than 100x the tensile strength of concrete), and a comparatively small amount of steel can greatly increase the strength of a concrete element. By adding steel, you can make shallow concrete elements (beams, slabs, etc.) that can still span long distances and that wouldn’t be possible if the concrete were unreinforced.
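To get a rough sense of those ratios, here is a minimal worked comparison using typical textbook values (roughly 30 MPa concrete and 420 MPa rebar; these are illustrative figures, and actual mixes and steel grades vary widely):

```latex
% Illustrative material strengths; real mixes and steel grades vary.
\begin{align*}
f'_c &\approx 30\ \text{MPa} && \text{(concrete, compression)} \\
f_t  &\approx 3\ \text{MPa}  && \text{(concrete, tension)} \\
f_y  &\approx 420\ \text{MPa} && \text{(reinforcing steel)} \\
\frac{f_y}{f'_c} &= 14\times \text{ (compression)} && \frac{f_y}{f_t} = 140\times \text{ (tension)}
\end{align*}
```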

Concrete is also brittle, whereas steel is ductile — if a plain concrete element fails, it’s likely to fail suddenly without warning, whereas a steel element will (generally) stretch and sag significantly before it fails, absorbing a lot of energy in the process. This makes reinforced concrete fundamentally safer than unreinforced concrete — if you have a lot of warning before a structure fails, you have time to safely get out of the building. For this reason, structural concrete is often required by code to have some minimum amount of steel reinforcing in it, and concrete that might experience large sudden loads in unpredictable ways (such as from an earthquake) is required to have a LOT of additional reinforcing. Most buildings built in zones of very high seismicity aren’t actually designed to come through the earthquake undamaged — they’re merely designed to not catastrophically collapse so people can safely get out.

(Earthquake design might seem like something that you only need to worry about in a few places, but most of the US can theoretically see a surprisingly strong earthquake and the buildings must be designed accordingly.)

But while reinforcement provides a lot of benefits, it has drawbacks. The primary one is that, over time, the steel in concrete corrodes. This is the result of two mechanisms – chloride ions making their way through the concrete, and concrete absorbing CO2 over time (though the second one happens much more slowly). As the steel corrodes, it expands, putting internal pressure on the concrete, eventually resulting in cracking and spalls (chunks of concrete that have fallen off).

How quickly this happens depends on a lot of factors. Concrete exposed to weather or water will corrode faster than concrete that isn’t. Concrete where the rebar is farther from the surface of the concrete will last longer than concrete where the steel is closer to the surface. Concrete exposed to harsh chemicals such as salts or sulfates will corrode faster than concrete that isn’t.
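To see why cover depth matters so much, here is a minimal sketch of the standard Fickian chloride-ingress model; the diffusion coefficient and concentration thresholds below are illustrative assumptions, not values from the article:

```python
# Sketch: time until chloride at the rebar reaches a critical level,
# using Fick's second law: C(x,t) = Cs * (1 - erf(x / (2*sqrt(D*t)))).
# All parameter values below are illustrative assumptions.
from scipy.special import erfinv

def corrosion_initiation_years(cover_mm, D_mm2_per_yr=15.0,
                               surface_conc=0.6, critical_conc=0.05):
    """Solve C(cover, t) = critical_conc for t (concentrations as a
    fraction of binder mass; D is an effective diffusion coefficient)."""
    z = erfinv(1.0 - critical_conc / surface_conc)
    return (cover_mm / (2.0 * z)) ** 2 / D_mm2_per_yr

for cover in (25, 50, 75):
    print(f"{cover} mm cover -> ~{corrosion_initiation_years(cover):.0f} years")
```

Because initiation time grows with the square of the cover depth in this model, doubling the cover roughly quadruples the time before corrosion starts.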

The comparatively short lifespan of modern concrete is overwhelmingly the result of corrosion-induced failure. Unchecked, reinforced concrete exposed to the elements will often start to decay in a few decades or even less. Precast concrete parking garages, for instance, are exposed to a lot of weather, since they’re open-air structures and vehicles bring moisture and road salts inside them. And a precast garage will often have many exposed steel elements, since steel plates stitch the pieces of concrete together. A precast garage might have a design life of 50 years, and will often need very substantial repairs much earlier. Roman concrete, however, is unreinforced, and doesn’t have this failure mechanism.

This type of failure is exacerbated by the fact that modern concrete is designed to come up to strength very quickly, which results in numerous small cracks caused by shrinkage strains in the hardened concrete. These cracks make it easier for water to reach the steel, accelerating the process of corrosion. They also make the concrete more susceptible to other types of decay like freeze-thaw damage. Roman concrete, on the other hand, cured much more slowly.

If we wanted to build more durable concrete structures, the most important thing would be to remove or minimize this failure mechanism, and structures designed for long lives often do. Buddhist or Hindu temples, for instance, will use unreinforced concrete, or concrete with stainless steel rebar, and often have 1000-year design lives (though whether they will actually survive 1000 years is another question). Stainless steel rebar advocates like to trot out a concrete pier in Mexico built in 1941 with stainless steel rebar, which has needed no major repair work despite being in a highly corrosive environment.

[…]

Using unreinforced concrete dramatically limits the sort of construction you can do — even if the code allows it, you’re basically limited to only using concrete in compression. Without reinforcing, modern concrete buildings and bridges would be largely impossible.

Other methods of reducing reinforcement corrosion also have drawbacks, especially cost. Stainless steel rebar is four to six times as expensive as normal rebar. Epoxy coated rebar (commonly used on bridge construction in the US) is also more expensive, and though it can slow down corrosion, it won’t stop it. Basalt rebar won’t corrode (as far as I know) but can apparently decay in other ways.

For a developer, adding cost to a building to potentially extend its lifespan often makes it tough to get the numbers to work. Well-made reinforced concrete that’s protected from the weather can last over a century, so the net present value of any additional lifespan beyond that is pretty low. It’s much more likely that the building will be torn down for other reasons long before the concrete fails.
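A hedged back-of-the-envelope makes the point: at any ordinary discount rate, income a century away is worth almost nothing today (the rent figure and the 5% rate are assumptions for illustration):

```python
# Present value of a building's income stream, discounted at 5%/year.
# The $1M/year income and the rate are illustrative assumptions.
def pv_of_years(start, end, annual_value=1_000_000, rate=0.05):
    return sum(annual_value / (1 + rate) ** t for t in range(start, end))

base = pv_of_years(0, 100)       # first century of service life
bonus = pv_of_years(100, 150)    # fifty extra years from better concrete
print(f"first 100 years: ${base:,.0f}")
print(f"years 100-150:   ${bonus:,.0f} ({bonus / base:.1%} of the base)")
```

Under these assumptions the extra fifty years adds well under 1% to the present value, which is why it rarely pencils out.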

Having to extend the lifespan of older planes consumes money that could be used to acquire new aircraft

Tuesday, January 24th, 2023

Years of delays, cost overruns, and technical glitches with the F-35 have put the Pentagon in a dilemma:

If F-35s aren’t fit to fly in sufficient numbers, then older aircraft such as the F-16 must be kept in service to fill the gap. In turn, having to extend the lifespan of older planes consumes money that could be used to acquire new aircraft and results in aging warplanes that may not be capable of fulfilling their missions on the current battlefield.

[…]

The aircraft has been plagued by a seemingly endless series of bugs, including problems with its stealth coating, sustained supersonic flight, helmet-mounted display, excessive vibration from its cannon, and even vulnerability to being hit by lightning.

The military and Lockheed Martin have resolved some of those problems, but the cumulative effect of the delays is that the Air Force has had to shelve plans for the F-35 to replace the F-16, which now will keep flying until the 2040s.

[…]

The remarkable longevity of some aircraft — such as the 71-year-old B-52 bomber or the 41-year-old A-10 — tends to obscure the difficulty of keeping old warplanes flying. Production lines are usually shut down, and the original manufacturers of components and spare parts have long ceased production. In some cases, they are no longer in business.

An FGC-9 with a craft-produced, ECM-rifled barrel exhibited impressive accuracy

Thursday, January 19th, 2023

The FGC-9 stands out from previous 3D-printed firearms designs, in part because it was specifically designed to circumvent European gun regulations:

Thus, unlike its predecessors, the FGC-9 does not require the use of any commercially produced firearm parts. Instead, it can be produced using only unregulated commercial off-the-shelf (COTS) components. For example, instead of an industrially produced firearms barrel, the FGC-9 uses a piece of pre-hardened 16 mm O.D. hydraulic tubing. The construction files for the FGC-9 also include instructions on how to rifle the hydraulic tubing using electrochemical machining (ECM). The FGC-9 uses a hammer-fired blowback self-loading action, firing from the closed-bolt position. The gun uses a commercially available AR-15 trigger group. In the United States, these components are unregulated. In the European Union and other countries—such as Australia—the FGC-9 can also be built with a slightly modified trigger group used by ‘airsoft’ toys of the same general design. This design choice provides a robust alternative to a regulated component, but also means that the FGC-9 design only offers semi-automatic fire, unless modified. The FGC-9 Mk II files also include a printable AR-15 fire-control group, which may be what was used in this case, as airsoft and ‘gel blaster’ toys are also regulated in Western Australia.


In tests performed by ARES, an FGC-9 with a craft-produced, ECM-rifled barrel exhibited impressive accuracy: the firearm shot groups of 60 mm at 23 meters, with no signs of tumbling or unstable flight. Further, in forensic tests with FGC-9 models seized in Europe, the guns generally exhibited good durability. One example, described as not being particularly well built, was able to fire more than 2,000 rounds without a catastrophic failure—albeit with deteriorating accuracy. The cost of producing an FGC-9 can be very low, and even with a rifled barrel and the purchase of commercial components, the total price for all parts, materials, and tools to produce such a firearm is typically less than $1,000 USD. As more firearms are made, the cost per firearm decreases significantly. In a 2021 case in Finland, investigators uncovered a production facility geared up to produce multiple FGC-9 carbines. In this case, the criminal group operating the facility had purchased numerous Creality Ender 3 printers—each sold online for around $200. In recent months, complete FGC-9 firearms have been offered for sale for between approximately 1,500 and 3,500 USD (equivalent), mostly via Telegram groups.

None of the precursors were in place

Sunday, January 15th, 2023

Once you understand how the Industrial Revolution came about, it’s easy to see why there was no Roman Industrial Revolution — none of the precursors were in place:

The Romans made some use of mineral coal as a heating element or fuel, but it was decidedly secondary to their use of wood and where necessary charcoal. The Romans used rotational energy via watermills to mill grain, but not to spin thread. Even if they had the spinning wheel (and they didn’t; they’re still spinning with drop spindles), the standard Mediterranean period loom, the warp-weighted loom, was roughly an order of magnitude less efficient than the flying shuttle loom, so the Roman economy couldn’t have handled all of the thread the spinning wheel could produce.

And of course the Romans had put functionally no effort into figuring out how to make efficient pressure-cylinders, because they had absolutely no use for them. Remember that by the time Newcomen is designing his steam engine, the kings and parliaments of Europe have been effectively obsessed with who could build the best pressure-cylinder (and then plug it at one end, making a cannon) for three centuries because success in war depended in part on having the best cannon. If you had given the Romans the designs for a Newcomen steam engine, they couldn’t have built it without developing whole new technologies for the purpose (or casting every part in bronze, which introduces its own problems) and then wouldn’t have had any profitable use to put it to.

All of which is why simple graphs of things like ‘global historical GDP’ can be a bit deceptive: there’s a lot of particularity beneath the basic statistics of production because technologies are contingent and path dependent.

The Industrial Revolution happened largely in one place

Saturday, January 14th, 2023

The Industrial Revolution was more than simply an increase in economic production, Bret Devereaux explains:

Modest increases in economic production are, after all, possible in agrarian economies. Instead, the industrial revolution was about accessing entirely new sources of energy for broad use in the economy, thus drastically increasing the amount of power available for human use. The industrial revolution thus represents not merely a change in quantity, but a change in kind from what we might call an ‘organic’ economy to a ‘mineral’ economy. Consequently, I’d argue, the industrial revolution represents probably just the second time in human history that as a species we’ve undergone a radical change in our production; the first being the development of agriculture in the Neolithic period.

However, unlike farming which developed independently in many places at different times, the industrial revolution happened largely in one place, once and then spread out from there, largely because the world of the 1700s AD was much more interconnected than the world of c. 12,000BP (‘before present,’ a marker we sometimes use for the very deep past). Consequently while we have many examples of the emergence of farming and from there the development of complex agrarian economies, we really only have one ‘pristine’ example of an industrial revolution. It’s possible that it could have occurred with different technologies and resources, though I have to admit I haven’t seen a plausible alternative development that doesn’t just take the same technologies and systems and put them somewhere else.

[…]

Fundamentally this is a story about coal, steam engines, textile manufacture and above all the harnessing of a new source of energy in the economy. That’s not the whole story, by any means, but it is one of the most important through-lines and will serve to demonstrate the point.

The specificity matters here because each innovation in the chain required not merely the discovery of the principle, but also the design and an economically viable use-case to all line up in order to have impact.

[…]

So what was needed was not merely the idea of using steam, but also a design which could actually function in a specific use case. In practice that meant both a design that was far more efficient (though still wildly inefficient) and a use case that could tolerate the inevitable inadequacies of the 1.0 version of the device. The first design to actually square this circle was Thomas Newcomen’s atmospheric steam engine (1712).

[…]

Now that design would be iterated on subsequently to produce smoother, more powerful and more efficient engines, but for that iteration to happen someone needs to be using it, meaning there needs to be a use-case for repetitive motion at modest-but-significant power in an environment where fuel is extremely cheap so that the inefficiency of the engine didn’t make it a worse option than simply having a whole bunch of burly fellows (or draft animals) do the job. As we’ll see, this was a use-case that didn’t really exist in the ancient world and indeed existed almost nowhere but Britain even in the period where it worked.

But fortunately for Newcomen the use case did exist at that moment: pumping water out of coal mines. Of course a mine that runs below the local water-table (as most do) is going to naturally fill with water which has to be pumped out to enable further mining. Traditionally this was done with muscle power, but as mines get deeper the power needed to pump out the water increases (because you need enough power to lift all of the water in the pump system in each movement); cheaper and more effective pumping mechanisms were thus very desirable for mining. But the incentive here can’t just be any sort of mining, it has to be coal mining because of the inefficiency problem: coal (a fuel you can run the engine on) is of course going to be very cheap and abundant directly above the mine where it is being produced and for the atmospheric engine to make sense as an investment the fuel must be very cheap indeed. It would not have made economic sense to use an atmospheric steam engine over simply adding more muscle if you were mining, say, iron or gold and had to ship the fuel in; transportation costs for bulk goods in the pre-railroad world were high. And of course trying to run your atmospheric engine off of local timber would only work for a very little while before the trees you needed were quite far away.
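The underlying physics is simple: the power needed to lift water grows linearly with the depth of the mine, so a deeper mine demands proportionally more muscle (or engine) at the same pumping rate:

```latex
% Hydraulic lifting power: rho = water density, g = gravity,
% Q = volumetric flow rate pumped, h = lift height (mine depth).
P = \rho \, g \, Q \, h
```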

But that in turn requires you to have large coal mines, mining lots of coal deep underground. Which in turn demands that your society has some sort of bulk use for coal. But just as the Newcomen Engine needed to out-compete ‘more muscle’ to get a foothold, coal has its own competitor: wood and charcoal. There is scattered evidence for limited use of coal as a fuel from the ancient period in many places in the world, but there needs to be a lot of demand to push mines deep to create the demand for pumping. In this regard, the situation on Great Britain (the island, specifically) was almost ideal: most of Great Britain’s forests seem to have been cleared for agriculture in antiquity; by 1000 only about 15% of England (as a geographic sub-unit of the island) was forested, a figure which continued to decline rapidly in the centuries that followed (down to a low of around 5%). Consequently wood as a heat fuel was scarce and so beginning in the 16th century we see a marked shift over to coal as a heating fuel for things like cooking and home heating. Fortunately for the residents of Great Britain there were surface coal seams in abundance, making the transition relatively easy; once these were exhausted, deep mining followed, which at last by the late 1600s created the demand for coal-powered pumps finally answered effectively in 1712 by Newcomen: a demand for engines to power pumps in an environment where fuel efficiency mattered little.

With a use-case in place, these early steam engines continue to be refined to make them more powerful, more fuel efficient and capable of producing smooth rotational motion out of their initially jerky reciprocal motions, culminating in James Watt’s steam engine in 1776. But so far all we’ve done is gotten very good at pumping out coal mines – that has in turn created steam engines that are now fuel efficient enough to be set up in places that are not coal mines, but we still need something for those engines to do to encourage further development. In particular we need a part of the economy where getting a lot of rotational motion is the major production bottleneck.

The internet wants to be fragmented

Thursday, January 12th, 2023

“You know,” Noah Smith quipped, “fifteen years ago, the internet was an escape from the real world. Now the real world is an escape from the internet.”

When I first got access to the internet as a kid, the very first thing I did was to find people who liked the same things I liked — science fiction novels and TV shows, Dungeons and Dragons, and so on. In the early days, that was what you did when you got online — you found your people, whether on Usenet or IRC or Web forums or MUSHes and MUDs. Real life was where you had to interact with a bunch of people who rubbed you the wrong way — the coworker who didn’t like your politics, the parents who nagged you to get a real job, the popular kids with their fancy cars. The internet was where you could just go be a dork with other dorks, whether you were an anime fan or a libertarian gun nut or a lonely Christian 40-something or a gay kid who was still in the closet. Community was the escape hatch.

Then in the 2010s, the internet changed. It wasn’t just the smartphone, though that did enable it. What changed is that internet interaction increasingly started to revolve around a small number of extremely centralized social media platforms: Facebook, Twitter, and later Instagram.

From a business perspective, this centralization was a natural extension of the early internet — people were getting more connected, so just connect them even more.

[…]

Putting everyone in the world in touch through a single network is what we did with the phone system, and everyone knows that the value of a network scales as the square of the number of users. So centralizing the whole world’s social interaction on two or three platforms would print loads of money while also making for a happier, more connected world.
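That scaling claim is Metcalfe’s law: with n users a network has n(n−1)/2 possible pairwise connections, so its value grows roughly as the square of its size:

```latex
% Metcalfe's law: value scales with the number of possible pairs.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
```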

[…]

It started with the Facebook feed. On the old internet, you could show a different side of yourself in every forum or chat room; but on your Facebook feed, you had to be the same person to everyone you knew. When social unrest broke out in the mid-2010s this got even worse — you had to watch your liberal friends and your conservative friends go at it in the comments of your posts, or theirs. Friendships and even family bonds were destroyed in those comments.

[…]

The early 2010s on Twitter were defined by fights over toxicity and harassment versus early-internet ideals of free speech. But after 2016 those fights no longer mattered, because everyone on the platform simply adopted the same patterns of toxicity and harassment that the extremist trolls had pioneered.

[…]

Why did this happen to the centralized internet when it hadn’t happened to the decentralized internet of previous decades? In fact, there were always Nazis around, and communists, and all the other toxic trolls and crazies. But they were only ever an annoyance, because if a community didn’t like those people, the moderators would just ban them. Even normal people got banned from forums where their personalities didn’t fit; even I got banned once or twice. It happened. You moved on and you found someone else to talk to.

Community moderation works. This was the overwhelming lesson of the early internet. It works because it mirrors the social interaction of real life, where social groups exclude people who don’t fit in. And it works because it distributes the task of policing the internet to a vast number of volunteers, who provide the free labor of keeping forums fun, because to them maintaining a community is a labor of love. And it works because if you don’t like the forum you’re in — if the mods are being too harsh, or if they’re being too lenient and the community has been taken over by trolls — you just walk away and find another forum. In the words of the great Albert O. Hirschman, you always have the option to use “exit”.

[…]

They tinkered at the edges of the platform, but never touched their killer feature, the quote-tweet, which Twitter’s head of product called “the dunk mechanism.” Because dunks were the business model — if you don’t believe me, you can check out the many research papers showing that toxicity and outrage drive Twitter engagement.

[…]

Humanity does not want to be a global hive mind. We are not rational Bayesian updaters who will eventually reach agreement; when we receive the same information, it tends to polarize us rather than unite us. Getting screamed at and insulted by people who disagree with you doesn’t take you out of your filter bubble — it makes you retreat back inside your bubble and reject the ideas of whoever is screaming at you. No one ever changed their mind from being dunked on; instead they all just doubled down and dunked harder. The hatred and toxicity of Twitter at times felt like the dying screams of human individuality, being crushed to death by the hive mind’s constant demands for us to agree with more people than we ever evolved to agree with.

I love to quote-tweet approvingly. I suppose that’s one of my eccentricities.

The group was elitist, but it was also meritocratic

Tuesday, January 10th, 2023

Sputnik’s success created an overwhelming sense of fear that permeated all levels of U.S. society, including the scientific establishment:

As John Wheeler, a theoretical physicist who popularized the term “black hole,” would later tell an interviewer: “It is hard to reconstruct now the sense of doom when we were on the ground and Sputnik was up in the sky.”

Back on the ground, the event spurred a mobilization of American scientists unseen since the war. Six weeks after the launch of Sputnik, President Dwight Eisenhower revived the President’s Science Advisory Committee (PSAC). It was a group of 16 scientists who reported directly to him, granting them an unprecedented amount of influence and power. Twelve weeks after Sputnik, the Department of Defense launched the Advanced Research Projects Agency (ARPA), which was later responsible for the development of the internet. Fifteen months after Sputnik, the Office of the Director of Defense Research and Engineering (ODDRE) was launched to oversee all defense research. A 36-year-old physicist who worked on the Manhattan Project, Herb York, was named head of the ODDRE. There, he reported directly to the president and was given total authority over all defense research spending.

It was the beginning of a war for technological supremacy. Everyone involved understood that in the nuclear age, the stakes were existential.

It was not the first time the U.S. government had mobilized the country’s leading scientists. World War II had come to be known as “the physicists’ war.” It was physicists who developed proximity fuzes and the radar systems that rendered previously invisible enemy ships and planes visible, enabling them to be targeted and destroyed, and it was physicists who developed the atomic bombs that ended the war. The prestige conferred by their success during the war positioned physicists at the top of the scientific hierarchy. With the members of the Manhattan Project now aging, getting the smartest young physicists to work on military problems was of intense interest to York and the ODDRE.

Physicists saw the post-Sputnik era as an opportunity to do well for themselves. Many academic physicists more than doubled their salaries working on consulting projects for the DOD during the summer. A source of frustration to the physicists was that these consulting projects were awarded through defense contractors, who were making twice as much as the physicists themselves. A few physicists based at the University of California Berkeley decided to cut out the middleman and form a company they named Theoretical Physics Incorporated.

Word of the nascent company spread quickly. The U.S.’s elite physics community consisted of a small group of people who all went to the same small number of graduate programs and were faculty members at the same small number of universities. These ties were tightened during the war, when many of those physicists worked closely together on the Manhattan Project and at MIT’s Rad Lab.

Charles Townes, a Columbia University physics professor who would later win a Nobel Prize for his role in inventing the laser, was working for the Institute for Defense Analyses (IDA) at the time and reached out to York when he learned of the proposed company. York knew many of the physicists personally and immediately approved $250,000 of funding for the group. Townes met with the founders of the company in Los Alamos, where they were working on nuclear-rocket research. Appealing to their patriotism, he convinced them to make their project a department of IDA.

A short while later the group met in Washington D.C., where they fleshed out their new organization. They came up with a list of the top people they would like to work with and invited them to Washington for a presentation. Around 80 percent of the people invited joined the group; they were all friends of the founders, and they were all high-level physicists. Seven of the first members, or roughly one-third of its initial membership, would go on to win the Nobel Prize. Other members, such as Freeman Dyson, who published foundational work on quantum field theory, were some of the most renowned physicists to never receive the Nobel.

The newly formed group was dubbed “Project Sunrise” by ARPA, but the group’s members disliked the name. The wife of one of the founders proposed the name JASON, after the Greek mythological hero who led the Argonauts on a quest for the golden fleece. The name stuck and JASON was founded in December 1959, with its members being dubbed “Jasons.”

The key to the JASON program was that it formalized a unique social fabric that already existed among elite U.S. physicists. The group was elitist, but it was also meritocratic. As a small, tight-knit community, many of the scientists who became involved in JASON had worked together before. It was a peer network that maintained strict standards for performance. With permission to select their own members, the Jasons were able to draw from those who they knew were able to meet the expectations of the group.

This expectation superseded existing credentials; Freeman Dyson never earned a PhD, but he possessed an exceptionally creative mind. Dyson became known for his involvement with Project Orion, which aimed to develop a starship design that would be powered through a series of atomic bombs, as well as his Dyson Sphere concept, a hypothetical megastructure that completely envelops a star and captures its energy.

Another Jason was Nick Christofilos, an engineer who developed particle accelerator concepts in his spare time when he wasn’t working at an elevator maintenance business in Greece. Christofilos wrote to physicists in the U.S. about his ideas, but was initially ignored. But he was later offered a job at an American research laboratory when physicists found that some of the ideas in his letters pre-dated recent advances in particle accelerator design. Dyson’s and Christofilos’s lack of formal qualifications would preclude an academic research career today, but the scientific community at the time was far more open-minded.

JASON was founded near the peak of what became known as the military-industrial complex. When President Eisenhower coined this term during his farewell address in 1961, military spending accounted for nine percent of the U.S. economy and 52 percent of the federal budget; 44 percent of the defense budget was being spent on weapons systems.

But the post-Sputnik era entailed a golden age for scientific funding as well. Federal money going into basic research tripled from 1960 to 1968, and research spending more than doubled overall. Meanwhile, the number of doctorates awarded in physics doubled. Again, meritocratic elitism dominated: over half of the funding went to 21 universities, and these universities awarded half of the doctorates.

With a seemingly unlimited budget, the U.S. military leadership had started getting some wild ideas. One general insisted a moon base would be required to gain the ultimate high ground. Project Iceworm proposed to build a network of mobile nuclear missile launchers under the Greenland ice sheet. The U.S. Air Force sought a nuclear-powered supersonic bomber under Project WS-125 that could take off from U.S. soil and drop hydrogen bombs anywhere in the world. There were many similar ideas and each military branch produced analyses showing that not only were the proposed weapons technically feasible, but they were also essential to winning a war against the Soviet Union.

Prior to joining the Jasons, some of its scientists had made radical political statements that could make them vulnerable to having their analysis discredited. Fortunately, JASON’s patrons were willing to take a risk and overlook political offenses in order to ensure that the right people were included in the group. Foreseeing the potential political trap, Townes proposed a group of senior scientific advisers, about 75 percent of whom were well-known conservative hawks. Among this group was Edward Teller, known as the “father of the hydrogen bomb.” This senior layer could act as a political shield of sorts in case opponents attempted to politically tarnish JASON members.

Every spring, the Jasons would meet in Washington D.C. to receive classified briefings about the most important problems facing the U.S. military, then decide for themselves what they wanted to study. JASON’s mandate was to prevent “technological surprise,” but no one at the Pentagon presumed to tell them how to do it.

In July, the group would reconvene for a six-week “study session,” initially alternating yearly between the east and west coasts. Members later recalled these as idyllic times for the Jasons, with the group becoming like an extended family. The Jasons rented homes near each other. Wives became friends, children grew up like cousins, and the community put on backyard plays at an annual Fourth of July party. But however idyllic their off hours, the physicists’ workday revolved around contemplating the end of the world. Questions concerning fighting and winning a nuclear war were paramount. The ideas the Jasons were studying approached the level of what had previously been science fiction.

Some of the first JASON studies focused on ARPA’s Defender missile defense program. Their analysis furthered ideas involving the detection of incoming nuclear attacks through the infrared signature of missiles, applied newly-discovered astronomical techniques to distinguish between nuclear-armed missiles and decoys, and worked on the concept of shooting what were essentially directed lightning bolts through the atmosphere to destroy incoming nuclear missiles.

The lightning bolt idea, known today as directed energy weapons, came from Christofilos, who was described by an ARPA historian as mesmerizing JASON physicists with the “kind of ideas that nobody else had.” Some of his other projects included a fusion machine called Astron, a high-altitude nuclear explosion test codenamed Operation Argus that was dubbed the “greatest scientific experiment ever conducted,” and explorations of a potential U.S. “space fleet.”

The Jasons’ analysis of the effects of nuclear explosions in the upper atmosphere, water, and underground, as well as methods of detecting these explosions, was credited with being critical to the U.S. government’s decision to sign the Limited Test Ban Treaty with the Soviet Union. Because of their analysis, the U.S. government felt confident it could verify treaty compliance; the treaty resulted in a large decline in the concentration of radioactive particles in the atmosphere.

The success of JASON over its first five years increased its influence within the U.S. military and spurred attempts by U.S. allies to copy the program. Britain tried for years to create a version of JASON, even enlisting the help of JASON’s leadership. But the effort failed: British physicists simply did not seem to desire involvement. Earlier attempts by British leaders like Winston Churchill to create a British MIT had run into the same problems.

The difference was not ability, but culture. American physicists did not have a disdain for the applied sciences, unlike their European peers. They were comfortable working as advisors on military projects and were employed by institutions that were dependent on DOD funding. Over 20 percent of Caltech’s budget in 1964 came from the DOD, and it was only the 15th largest recipient of funding; MIT was first and received twelve times as much money. The U.S. military and scientific elite were enmeshed in a way that had no parallel in the rest of the world then or now.

A North Pole Mission the Night Before Christmas

Saturday, December 31st, 2022

From 80,000 feet, the SR-71 Blackbird could survey 100,000 square miles of Earth’s surface per hour. On the Night Before Christmas, in 1969, Richard “Butch” Sheffield flew a North Pole night mission:

Late in 1969, shortly after I was crewed with Bob Spencer, we were tasked to fly a night mission to the North Pole. Night missions were very rare in those days because of St. Martin’s crash (summer of 1967) at night, when the navigation system failed. We were one of the most experienced SR crews and we were told that the Russians were doing something with our submarines at night at a station they had built on the ice near the North Pole.

It was believed that our Side Looking, High Resolution Radar System could gain valuable intelligence by spying on the unsuspecting Russians in the middle of the night. I found out a few years ago what the Russians were doing: setting up acoustic sensors so they could track our submarines under the ice cap.

We launched from Beale at night, flew north to Alaska and refueled over the central part on a northern heading. Once we were full of fuel, we lit the afterburners and climbed to about seventy-five thousand feet heading north to the ice station. The tanker was briefed to continue to fly north in case we lost an engine. There was no place to land and our emergency procedure was to turn around 180 degrees and do a head-on rendezvous with the tanker on one engine.

As we departed Alaska heading north with the afterburners blazing, I looked out the window at the barren land and ice. I could see well because of starlight. We had no moon that night. The thought came to my mind, “this is really risky business,” and if anything goes wrong they will never find us. Nothing went wrong. I turned on the Side Looking Radar (SLR), looked at the location and took the images. Returned to Alaska, refueled from the tanker and returned to Beale.

[…]

The CIA found out that the station was not manned during the worst part of winter. When it was not manned, the CIA landed a few people by parachute to find out what was going on at the station. They found everything, including code books. The men were recovered by being snatched up into a low-flying aircraft.

Vibrating the water has the effect of “frustrating” the water molecules nearest to the electrodes

Thursday, December 22nd, 2022

“Green hydrogen” is created through electrolysis, which goes much faster, RMIT researchers found, when you apply high-frequency sound waves:

So why does this process work so much better when the RMIT team plays a 10-MHz hybrid sound? Several reasons, according to a research paper just published in the journal Advanced Energy Materials.

Firstly, vibrating the water has the effect of “frustrating” the water molecules nearest to the electrodes, shaking them out of the tetrahedral networks they tend to settle in. This results in more “free” water molecules that can make contact with catalytic sites on the electrodes.

Secondly, since the separate gases collect as bubbles on each electrode, the vibrations shake the bubbles free. That accelerates the electrolysis process, because those bubbles block the electrode’s contact with the water and limit the reaction. The sound also helps by generating hydronium (positively charged water ions), and by creating convection currents that help with mass transfer.
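For reference, the separate gases appear at separate electrodes because water splitting is two half-reactions (written here for near-neutral water, as in the RMIT setup):

```latex
\begin{align*}
\text{cathode (H}_2\text{ evolution):} \quad & 2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\text{anode (O}_2\text{ evolution):} \quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4e^- \\
\text{overall:} \quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```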

In their experiments, the researchers chose to use electrodes that typically perform pretty poorly. Electrolysis is typically done using rare and expensive platinum or iridium metals and powerfully acidic or basic electrolytes for the best reaction rates, but the RMIT team went with cheaper gold electrodes and an electrolyte with a neutral pH level. As soon as the team turned on the sound vibrations, the current density and reaction rate jumped by a remarkable factor of 14.

So this isn’t a situation where, for a given amount of energy put into an electrolyzer, you get 14 times more hydrogen. It’s a situation where the water gets split into hydrogen and oxygen more quickly and easily. And that does have an impressive effect on the overall efficiency of an electrolyzer. “With our method, we can potentially improve the conversion efficiency leading to a net-positive energy saving of 27%,” said Professor Leslie Yeo, one of the lead researchers.

Interacting with ChatGPT is like talking to a celestial bureaucrat

Friday, December 16th, 2022

Passing the Turing test turns out to be boring, Erik Hoel notes:

ChatGPT was created by taking the original GPT-3 model and fine-tuning it on human ratings of its responses, e.g., OpenAI had humans interact with GPT-3, its base model, then rate how satisfied they were with the answer. ChatGPT’s connections were then shifted to give more weight to the ones that were important for producing human-pleasing answers.
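A toy sketch of that “weight what humans liked” idea; this illustrates preference-weighted fine-tuning in general, not OpenAI’s actual pipeline, and the model and data here are hypothetical:

```python
# Toy preference-weighted fine-tuning: highly rated responses pull the
# model's statistics toward themselves more strongly than low-rated ones.
# A bigram counter stands in for a neural network's weights.
from collections import defaultdict

rated_responses = [                      # (tokens, human rating in [0, 1])
    (["the", "answer", "is", "42"], 0.9),
    (["i", "dunno", "lol"], 0.1),
]

weights = defaultdict(float)
for tokens, rating in rated_responses:
    for a, b in zip(tokens, tokens[1:]):
        weights[(a, b)] += rating        # update scaled by human preference

print(sorted(weights.items(), key=lambda kv: -kv[1]))
```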

Therefore, before we can discuss why ChatGPT is actually unimpressive, first we must admit that ChatGPT is impressive.

[…]

ChatGPT fails Turing’s test, but only because it admits it’s an AI! That is, only because its answers are either too good, too fast, or too truthful.

[…]

All to say: ChatGPT is impressive because it passes what we care about when it comes to the Turing test. And anyone who has spent time with ChatGPT (which you can for free here) feels intuitively that a milestone has been passed—if not the letter of Turing’s test, its spirit has certainly been conquered.

[…]

Sure, it’ll change everything, but it also basically feels like an overly censorious butler who just happens to have ingested the entirety of the world’s knowledge and still manages to come across as an unexciting dullard.

[…]

For as they get bigger, and better, and more trained via human responses, their styles get more constrained, more typified. Additionally, with the enormous public attention (and potential for government regulation) companies have taken to heart that AIs must be rendered “safe.” AIs must have the right politics and always say the least offensive thing possible and think nothing but of butterflies and rainbows. Rather than us being the judge, suspicious of the AI, and the AI suspicious of us, and how we might misuse it, or misinterpret it, or disagree with it. Interacting with the early GPT-3 model was like talking to a schizophrenic mad god. Interacting with ChatGPT is like talking to a celestial bureaucrat.

The biggest mistake Jack made was continuing to invest in building tools for Twitter to manage the public conversation

Wednesday, December 14th, 2022

Jack admits that he completely gave up pushing for his principles when an activist investor took a stake in Twitter in 2020:

I no longer had hope of achieving any of it as a public company with no defense mechanisms (lack of dual-class shares being a key one). I planned my exit at that moment knowing I was no longer right for the company.

The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves. This burdened the company with too much power, and opened us to significant outside pressure (such as advertising budgets). I generally think companies have become far too powerful, and that became completely clear to me with our suspension of Trump’s account. As I’ve said before, we did the right thing for the public company business at the time, but the wrong thing for the internet and society.

Security kept the crowd at least 200 feet from the front of the aircraft

Wednesday, December 14th, 2022

Security was tight when the US Air Force unveiled its new B-21 Raider stealth bomber on December 2, after what happened when the B-2 stealth bomber was revealed:

On November 22, 1988, as armed guards patrolled the tarmac and a Huey helicopter circled overhead, the world got a chance to see the B-2 Spirit — the predecessor of the B-21 in look and function — at the same Palmdale facility.

As with the B-21, spectators were kept at a distance, and only the front of the B-2 could be seen. That was frustrating for those who wanted to see the rear of the B-2, especially the distinctive trailing edges and engine exhausts of the tailless flying-wing bomber, which would give clues to the aircraft’s capabilities and its stealthiness.

[…]

The [Aviation Week] team considered several ideas, including flying a hot-air balloon over the B-2, an idea that was dropped for safety reasons. Eventually they noticed that the FAA’s notice to airmen — an alert known as a NOTAM — didn’t restrict flights in the area that were above 1,000 feet.

Aviation Week editor Michael Dornheim and photographer Bill Hartenstein flew a rented Cessna 172 to Palmdale Airport the weekend before the B-2 was unveiled.

“Dornheim performed several circuits and touch-and-gos to allay any potential suspicions from air traffic control, while Hartenstein tried out various telephoto lenses to guarantee he would have the best images of the day,” Aviation Week senior editor Guy Norris wrote this month.

When the big day came, security kept the crowd at least 200 feet from the front of the aircraft, while the low-flying Huey helicopter kept a watchful eye for intruders. But the Cessna circled overhead, unnoticed, as Hartenstein took photo after photo.

When the plane landed, Dornheim and Hartenstein “were just giddy,” Scott said. “They hadn’t got hollered at in any way by ATC [air traffic control] and I told them I hadn’t noticed anyone even looking up!”

The team then raced to meet Thanksgiving week deadlines. Hartenstein’s film was dispatched on an overnight FedEx flight to New York and emerged in the pages of Aviation Week as a beautiful, full-color photo of the B-2 — its trailing edges and exhausts fully visible.

Distance is the primary challenge the US military faces in East Asia

Tuesday, December 13th, 2022

The US is rapidly compensating for the short range of its fighter aircraft, Austin Vernon explains:

China’s response [to the US] is to invest in weapons that keep American planes and ships from getting close to the Chinese mainland. Their strategy is known as anti-access area denial (A2AD). The technological change driving this strategy is cheaper sensors that enable missiles to hit planes and ships hundreds of miles away. Munition effectiveness and logistics intensity dramatically improve. The strategy has an asymmetric advantage since missiles are cheaper than platforms like aircraft carriers.

[…]

Distance is the primary challenge the US military faces in East Asia. The military designed our weapons and supply lines for Europe, where distances are tiny and basing options are numerous. The root cause of the current distress is that carrier strike groups are vulnerable to mass missile attacks and must operate further away from the battle space, causing fighters to lose effectiveness. The two most critical impacted missions are destroying enemy warships and contesting airspace. China can’t invade most of our allies without ships, and ceding the air makes it difficult to kill their ships.

America needs weapons to cover for the deficiency of existing platforms. Opportunities include longer-range missiles, adapting platforms that can operate without carriers, and thwarting missile attacks.

[…]

Long-range stealth bombers are essential for projecting power in East Asia since basing options might be limited, and stealth will be critical to maintaining survivability without persistent fighter cover. The Air Force has gone to great lengths to keep its newest stealth bomber, the B-21, on time and budget. The Air Force Rapid Capability Office manages the program instead of using the traditional procurement process. The project has kept requirements constant, and the design has advanced technology but nothing bleeding edge. For example, the B-21 uses the same engine as the F-35 to save development time and reduce costs. Northrop Grumman also designed the plane to minimize maintenance and sustainment costs. Typically the Air Force and Congress are cutting plane orders due to budget overruns at this point in the process. They are looking at increasing planned B-21 numbers instead. The public rollout happened in December 2022.

It is hard to overstate how important having hundreds of these bombers will be to US power projection in East Asia because they make any Chinese target vulnerable to attack even if carrier aircraft are ineffective.

[…]

Unpowered munitions like gravity bombs and artillery shells are taking a back seat to missiles and rockets as range becomes critical for platform survival. But classical cruise missiles are too expensive for everyday usage. The US and other nations are striving for cheap missiles.

The Guided Multiple Launch Rocket System (GMLRS) rocket that fires from HIMARS and the M270 is a perfect example of the shift. It can hit critical targets far behind enemy lines that are too dangerous for aircraft or too far for tube artillery. Each round costs ~$100,000 – a bargain compared to most cruise missiles that cost millions. The warhead (90 kg) and range (80 km) are smaller than those of cruise missiles, but the rocket can destroy an ammo depot, troop concentrations, or a headquarters.

Suicide drones or “loitering munitions” are another variation of cheap missiles. The Iranian Shahed-136 costs $20,000-$50,000 and has a 1000+ km range. It sacrifices speed (120 km/h), payload (40 kg), and survivability to achieve cost and range goals. Other drones, like the American Switchblade, serve as squad weapons that improve on mortars.

The Air Force “Gray Wolf” program’s goal was a $100,000 subsonic cruise missile with a 400 km range and a 230 kg warhead. It successfully tested a low-cost engine, and other programs absorbed the follow-on phases. The engine is the Kratos TDI-J85 which can meet the program goals while costing less than $40,000. Kratos already has multiple customers using it for drones and missiles.

Notably, Boeing wants to use the TDI-J85 engine to power its 230 kg JDAM bomb, giving it a 370 km to 750 km range (depending on configuration). The US could lob more QUICKSINK-equipped JDAM cruise missiles in an engagement than the Chinese Navy has vertical launch tubes — all for less than the cost of a frigate. The munition would be 1/10 the price of a Harpoon Block II anti-ship missile with double the range.

[…]

A quirk of the US military is that the Army is responsible for most ground-based missile defense, even on Air Force bases, leading to incentive mismatches. The Navy, which faces an existential threat in anti-ship missiles, has had an automated battle management system in AEGIS for forty years. The Army is trying to field a similar protocol with its Integrated Air and Missile Defense Battle Command System (IBCS) to manage air defense radars and weapons.

[…]

It isn’t hard to shoot down low-end suicide drones, but it can be expensive. Saudi Arabia regularly shoots down Iranian Shaheds with million-dollar air defense missiles. Classic anti-aircraft guns with modern fire control have proven effective in Ukraine, and bullets are much cheaper than drones. Vehicles like the German Gepard are great when defending a wide area because the drones are so slow that vehicles can redeploy to shoot them down.

In East Asia, the US will be defending relatively small positions. One or two Centurion Counter Rocket, Artillery, and Mortar (C-RAM) Gatling guns could probably defend Andersen Air Force Base on Guam.

[…]

Ballistic missiles are a top threat to carriers and US bases in the region. Base hardening, more ammo for existing anti-ballistic missile systems, denying the Chinese intel on ship and aircraft positions, and gaining early warning of Chinese strikes are critical to defending against these weapons.

Bases in Okinawa would be under constant threat from cruise missiles, but only China’s priciest ballistic missiles can reach Guam’s Andersen Air Force Base. Airfields are notoriously hard to take offline. Munitions designed to crater runways only keep a base offline for a few hours. The US has made recent improvements at Andersen AFB, like armoring fuel lines, adding a hardened maintenance hangar, and making fuel bladders available to replace damaged storage tanks.

The worst-case scenario is a surprise attack that kills personnel and destroys aircraft on the ground. The Air Force plans to use smaller dispersal bases to keep the Chinese guessing where the planes are. Investments in better dispersal options and more base hardening (like aircraft shelters for bases on Okinawa) would be beneficial. It would be a win if the Chinese waste their limited amounts of $10-$20 million ballistic missiles to crater a few runways.

The Chinese will find it harder to target Navy ships since they move. Even the fanciest missile is useless if you can’t find the carriers. If a conflict does escalate to space, China will quickly lose its ability to spot the US fleet with satellites. The Navy would expend incredible effort to splash any drones or submarines trying to break into the Pacific to find strike groups. Our carriers could have more freedom of movement than assumed.

The US has invested heavily in ballistic missile defense over the last few decades. There is typically a battery of THAAD missile interceptors deployed in Guam. And the Navy can fire SM-3 and SM-6 missiles at incoming threats. The record for these systems in testing and limited combat use is exemplary, with 90%+ success rates. They are also cheaper than the high-end Chinese missiles they counter. The only issue is that there might not be enough missiles in the theater to counter saturation attacks. Manufacturing more missiles and keeping an adequate number of AEGIS-guided missile ships in East Asia is critical. A credible active defense would force the Chinese to shoot their most valuable missiles in wasteful barrages that drain their missile inventory.
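A hedged sketch of that saturation arithmetic: with a high per-shot kill rate and two interceptors fired per incoming missile, the defense holds until the salvo outsizes the magazine (all numbers are illustrative assumptions, not real force levels):

```python
# Expected "leakers" from a salvo, given a finite interceptor inventory.
# Illustrative assumptions: 90% per-shot kill rate, two shots per target.
def expected_leakers(salvo, interceptors, p_kill=0.9, shots_per_target=2):
    engaged = min(salvo, interceptors // shots_per_target)
    p_leak = (1 - p_kill) ** shots_per_target    # every shot at it misses
    return engaged * p_leak + (salvo - engaged)  # unengaged missiles all leak

for salvo in (10, 40, 80):
    print(f"{salvo} incoming -> {expected_leakers(salvo, 60):.1f} expected leakers")
```

The step change once the magazine runs dry is exactly why the passage stresses interceptor inventories, not just interceptor quality.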

[…]

The AIM-260 air-to-air missile is a fast-track program nearing completion. It nearly doubles the range of the mainstay AIM-120 and is ~20% faster. That allows it to exceed the performance of the Chinese PL-15 air-to-air missile and gives our fighters extra legs. Low-rate production could already be underway.

Having more missiles in the air to handle Chinese mass attacks is also critical. An idea floated by the Pentagon and analysts is to equip bombers with long-range air-to-air missiles, allowing them to act like a missile magazine to support frontline fighters.

The AGM-88 HARM missile is the primary weapon for US aircraft to counter surface-to-air missile batteries. It homes in on their radar signals and forces the enemy to turn off their radar and move or eat a missile. A new extended-range version is faster and can go up to 300 km, allowing US fighters and bombers to counter longer-range surface-to-air missiles.

[…]

Cargo planes loaded with thousands of missiles or QUICKSINK JDAMs free up bombers to hit challenging targets like command and control bunkers or hardened bases and let tankers focus on getting the maximum amount of fighters into the battle to clear the skies.

[…]

Drones can absorb some fighter roles and make them more productive. But the current crop of inexpensive drones that highlight conflicts in Ukraine or Armenia are poorly suited for the Indo-Pacific theater. Most US bases are thousands of kilometers from Taiwan, eliminating smaller drones and quadcopters. Slow drones like TB-2 or Predator are not survivable in contested airspace. Drones must be expendable or much more capable to add value to US power projection.

One example is the RQ-180. The Air Force has never acknowledged its existence, but the rumors and evidence are strong that it exists. It replaces the Global Hawk in the high altitude, theater-wide surveillance mission. The Global Hawk has close to zero survivability and can’t function against near-peer threats. The RQ-180 is a flying wing like the B-2 and is stealthy, allowing it to operate in contested airspace. It likely costs hundreds of millions per copy, but small drones can’t replace it.

The Scan Eagle and its successor, the RQ-21 Blackjack, are current “attritable” surveillance drones. They are capable aircraft with high-end sensors, the ability to laser designate targets, and 16 hours of loiter time. The Navy and Marines have hundreds but want to replace them. Newer drones in this class have vertical take-off and landing (VTOL) capability, allowing them to ditch expensive launching/landing systems. Software flies the drones and soldiers only input waypoints. The competition is fierce, with AeroVironment’s Jump 20 and Shield AI’s V-Bat as examples. These drones are more capable than the RQ-21 at a fraction of the acquisition and operating cost, costing less than $1 million per unit even at low-rate production. A limitation is that they can’t stray more than ~150 km from the base station. Some obvious solutions are to use Starlink, drone relays, or autonomous software that can broadcast findings over the tactical data net. Much of the cost is in sensors; less expensive ones would make the drones more expendable. Production could ramp up fast because scrappy companies are the prime contractors.

[…]

Tankers and aerial refueling are the backbone of the US Air Force’s power projection, especially in East Asia. They are nearly as critical for the Navy. Tanker vulnerability is one reason why 24/7 combat air patrols over Taiwan from bases or carriers further than Guam are challenging. Fueling the patrols would stretch the tanker force thin while exposing them to Chinese attack. The Chinese Air Force could “lose the battle, but win the war” by bull-rushing the few fighters on station, running them out of missiles, then splashing the string of valuable tankers leading back to US bases.

Inertial confinement fusion involves bombarding a tiny pellet of hydrogen plasma with the world’s biggest laser

Monday, December 12th, 2022

The federal Lawrence Livermore National Laboratory in California achieved net energy gain in a fusion experiment, using a process called inertial confinement fusion that involves bombarding a tiny pellet of hydrogen plasma with the world’s biggest laser:

The fusion reaction at the US government facility produced about 2.5 megajoules of energy, which was about 120 per cent of the 2.1 megajoules of energy in the lasers, the people with knowledge of the results said, adding that the data was still being analysed.
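Those figures correspond to a target gain just above breakeven (note this counts only the laser energy delivered to the target, not the far larger amount of grid power used to fire the lasers):

```latex
% Target gain: fusion energy out over laser energy onto the target.
Q = \frac{E_{\text{fusion}}}{E_{\text{laser}}} = \frac{2.5\ \text{MJ}}{2.1\ \text{MJ}} \approx 1.19
```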


The $3.5bn National Ignition Facility was primarily designed to test nuclear weapons by simulating explosions but has since been used to advance fusion energy research. It came the closest in the world to net energy gain last year when it produced 1.37 megajoules from a fusion reaction, which was about 70 per cent of the energy in the lasers on that occasion.