A fuel cell that runs on methane at practical temperatures

Monday, November 12th, 2018

Methane fuel cells usually require temperatures of 750 to 1,000 degrees Celsius to run, but a new fuel cell built around a novel catalyst can run at 500 degrees, cooler than an automobile engine:

That lower temperature could trigger cascading cost savings in the ancillary technology needed to operate a fuel cell, potentially pushing the new cell to commercial viability. The researchers feel confident that engineers can design electric power units around this fuel cell with reasonable effort, something that has eluded previous methane fuel cells.

“Our cell could make for a straightforward, robust overall system that uses cheap stainless steel to make interconnectors,” said Meilin Liu, who led the study and is a Regents’ Professor in Georgia Tech’s School of Materials Science and Engineering. Interconnectors are parts that help bring together many fuel cells into a stack, or functional unit.

“Above 750 degrees Celsius, no metal would withstand the temperature without oxidation, so you’d have a lot of trouble getting materials, and they would be extremely expensive and fragile, and contaminate the cell,” Liu said.

“Lowering the temperature to 500 degrees Celsius is a sensation in our world. Very few people have even tried it,” said Ben deGlee, a graduate research assistant in Liu’s lab and one of the first authors of the study. “When you get that low, it makes the job of the engineer designing the stack and connected technologies much easier.”

The new cell also eliminates the need for a major ancillary device called a steam reformer, which is normally needed to convert methane and water into hydrogen fuel.

[...]

Hydrogen is the best fuel for powering fuel cells, but its cost is exorbitant. The researchers figured out how to convert methane to hydrogen in the fuel cell itself via the new catalyst, which is made with cerium, nickel and ruthenium and has the chemical formula Ce0.9Ni0.05Ru0.05O2, abbreviated CNR.

When methane and water molecules come into contact with the catalyst and heat, nickel chemically cleaves the methane molecule. Ruthenium does the same with water. The resulting parts come back together as that very desirable hydrogen (H2) and carbon monoxide (CO), which the researchers surprisingly put to good use.
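The chemistry described above is internal steam reforming. The overall reforming reaction is standard; the anode half-reaction shown below is the typical way a solid-oxide fuel cell consumes CO, and is an assumption on my part rather than something the article states:

```latex
% Internal steam reforming over the CNR catalyst (standard stoichiometry):
\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2}

% The CO is then oxidized electrochemically at the anode
% (typical solid-oxide half-reaction; assumed, not stated in the article):
\mathrm{CO + O^{2-} \rightarrow CO_2 + 2\,e^-}
```

Note that each methane molecule yields three H2 plus a CO that is itself burned for current, which is why using the CO rather than venting it matters for efficiency.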

“CO causes performance problems in most fuel cells, but here, we’re using it as a fuel,” Chen said.

A proposal for an archive revisiter

Thursday, November 8th, 2018

In his long list of statistical notes, Gwern includes a proposal for an archive revisiter:

One reason to take notes/clippings and leave comments in stimulating discussions is to later benefit by having references & citations at hand, and gradually build up an idea from disparate threads and make new connections between them. For this purpose, I make extensive excerpts from web pages & documents I read into my Evernote clippings (functioning as a commonplace book), and I comment constantly on Reddit, LessWrong, HN, etc. While expensive in time & effort, I often go back, months or years later, and search for a particular thing and expand & integrate it into another writing or expand it out to an entire essay of its own. (I also value highly not being in the situation where I believe something but I do not know why I believe it other than the conviction I read it somewhere, once.)

This sort of personal information management using simple personal information managers like Evernote works well enough when I have a clear memory of what the citation/factoid was, perhaps because it was so memorable, or when the citations or comments are in a nice cluster (perhaps because there was a key phrase in them or I kept going back & expanding a comment), but it loses out on key benefits to this procedure: serendipity and perspective.

As time passes, one may realize the importance of an odd tidbit or have utterly forgotten something or events considerably changed its meaning; in this case, you would benefit from revisiting & rereading that old bit & experiencing an aha! moment, but you don’t realize it. So one thing you could do is reread all your old clippings & comments, appraising them for reuse.

But how often? And it’s a pain to do so. And how do you keep track of which you’ve already read? One thing I do for my emails is semi-annually I (try to) read through my previous 6 months of email to see what might need to be followed up on or mined for inclusion in an article. (For example, an ignored request for data, or a discussion of darknet markets with a journalist I could excerpt into one of my DNM articles so I can point future journalists at that instead.) This is already difficult, and it would be even harder to expand. I have read through my LessWrong comment history… once. Years ago. It would be more difficult now. (And it would be impossible to read through my Reddit comments as the interface only goes back ~1000 comments.)

Simply re-reading periodically in big blocks may work but is suboptimal: there is no interface easily set up to reread them in small chunks over time, no constraints which avoid far too many reads, nor is there any way to remove individual items which you are certain need never be reviewed again. Reviewing is useful but can be an indefinite timesink. (My sent emails are not too hard to review in 6-month chunks, but my IRC logs are bad – 7,182,361 words in one channel alone – and my >38k Evernote clippings are worse; any lifestreaming will exacerbate the problem by orders of magnitude.) This is probably one reason that people who keep journals or diaries don’t reread them. Nor can it be crowdsourced or done by simply ranking comments by public upvotes (in the case of Reddit/LW/HN comments), because the most popular comments are ones you likely remember well & have already used up, and the oddities & serendipities you are hoping for are likely unrecognizable to outsiders.

This suggests some sort of reviewing framework where one systematically reviews old items (sent emails, comments, IRC logs by oneself), putting in a constant amount of time regularly and using some sort of ever expanding interval between re-reads as an item becomes exhausted & ever more likely to not be helpful. Similar to the logarithmically-bounded number of backups required for indefinite survival of data (Sandberg & Armstrong 2012), Deconstructing Deathism – Answering Objections to Immortality, Mike Perry 2013 (note: this is an entirely different kind of problem than those considered in Freeman Dyson’s immortal intelligences in Infinite in All Directions, which are more fundamental), discusses something like what I have in mind in terms of an immortal agent trying to review its memories & maintain a sense of continuity, pointing out that if time is allocated correctly, it will not consume 100% of the agent’s time but can be set to consume some bounded fraction.

[...]

So you could imagine some sort of software along the lines of spaced repetition systems like Anki, Mnemosyne, or Supermemo which you spend, say, 10 minutes a day at, simply rereading a selection of old emails you sent, lines from IRC with n lines of surrounding context, Reddit & LW comments etc; with an appropriate backoff & time-curve, you would reread each item maybe 3 times in your lifetime (eg first after a delay of a month, then a year or two, then decades). Each item could come with a rating function where the user rates it as an important or odd-seeming or incomplete item and to be exposed again in a few years, or as totally irrelevant and not to be shown again – which, for many bits of idle chit-chat, mundane emails, or intemperate comments, is not an instant too soon! (More positively, anything already incorporated into an essay or otherwise reused likely doesn’t need to be resurfaced.)

This wouldn’t be the same as a spaced repetition system which is designed to recall an item as many times as necessary, at the brink of forgetting, to ensure you memorize it; in this case, the forgetting curve & memorization are irrelevant and indeed, the priority here is to try to eliminate as many irrelevant or useless items as possible from showing up again so that the review doesn’t waste time.

More specifically, you could imagine an interface somewhat like Mutt which reads in a list of email files (my local POP email archives downloaded from Gmail with getmail4, filename IDs), chunks of IRC dialogue (a grep of my IRC logs producing lines written by me +- 10 lines for context, hashes for ID), LW/Reddit comments downloaded by either scraping or API via the BigQuery copy up to 2015, and stores IDs, review dates, and scores in a database. One would use it much like a SRS system, reading individual items for 10 or 20 minutes, and rating them, say, upvote (this could be useful someday, show me this ahead of schedule in the future) / downvote (push this far off into the future) / delete (never show again). Items would appear on an expanding schedule.
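The scheduling logic described above is simple enough to sketch. This is a hypothetical illustration of the expanding-interval scheme, not code from any existing tool; the class and field names, the month-long first interval, and the roughly order-of-magnitude backoff are all my own assumptions based on the "month, then a year or two, then decades" progression:

```python
import datetime as dt

class Clipping:
    """One archived item (email, IRC line, comment) on an expanding review schedule."""

    def __init__(self, item_id, text, first_interval_days=30):
        self.id = item_id
        self.text = text
        self.interval = first_interval_days  # ~a month until the first re-read
        self.due = dt.date.today() + dt.timedelta(days=self.interval)
        self.retired = False

    def rate(self, verdict):
        """upvote: resurface ahead of schedule; downvote: push far into the
        future; delete: never show again."""
        if verdict == "delete":
            self.retired = True
            return
        if verdict == "upvote":
            self.interval = max(7, self.interval // 2)
        else:  # downvote or neutral: back off by roughly an order of magnitude
            self.interval *= 10
        self.due = dt.date.today() + dt.timedelta(days=self.interval)

def todays_queue(items, today=None):
    """Items due for re-reading, e.g. during a 10-minute daily session."""
    today = today or dt.date.today()
    return [i for i in items if not i.retired and i.due <= today]
```

With a tenfold backoff, an item surfaces after roughly a month, then a year, then a decade, so each item is indeed seen only about three times in a lifetime, and deletions steadily shrink the pool, unlike a forgetting-curve SRS that keeps showing everything.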

[...]

As far as I know, some to-do/self-help systems have something like a periodic review of past stuff, and as I mentioned, spaced repetition systems do something somewhat similar to this idea of exponential revisits, but there’s nothing like this at the moment.

The fall of Big Data and the rise of the Blockchain economy

Tuesday, October 30th, 2018

George Gilder’s Life After Google predicts the fall of Big Data and the rise of the Blockchain economy:

Famously, Google gives most of its content away for free, or (in comments Gilder credits to Tim Cook) if it’s free, you’re not the customer; you’re the product. That’s the least of it. Spanish has two words for “free”–gratis and libre. In our context it means gratis.

Let’s count the ways gratis benefits Google:

  • They are completely immune from any antitrust prosecution and most other regulatory oversight.
  • They can roll out buggy, beta software to consumers and improve it over time.
  • They don’t have to take responsibility for security. Unlike a bank, Google is at no risk if somehow your data gets corrupted or stolen.
  • They provide no customer support.
  • Your data doesn’t belong to you. Instead it belongs to Google, which can monetize it with the help of AI.
  • You get locked into a Google world, where everything you own is now at their mercy. (I’m in that situation.) Your data is precisely not libre.

Note that Google didn’t even bother to show up at the recent Congressional hearings about “fake news.” They consider themselves above the law (or, perhaps more accurately, below the law). They can get away with this because it’s free.

There are some disadvantages.

  • It’s not really free, but instead of paying with money you pay with time. Attention is the basic currency of Google-world.
  • People hate ads. “[O]nly 0.06 percent of smartphone ads were clicked through. Since more than 50 percent of the clicks were by mistake, according to surveys, the intentional response rate was 0.03 percent.” This works only for spammers. Ad-blockers are becoming universal.
  • Google thinks it can circumvent that by using AI to generate ads that will interest the user. No matter – people still hate them. The result is the value of advertising is declining. Gilder does not believe that AI will ever solve this problem. (I agree with him.)
  • Most important – Google loses any information about how valuable its products are. Airlines, for example, respond sensitively to price signals when determining which routes to fly, what equipment to use, what service levels to provide, etc. Price is the best communication mechanism known for conveying economic information. You immediately know what is valuable to consumers, and what isn’t. Google loses all that information by going gratis. Is Gmail more valuable than Waze? Google has no idea. As a result it has no way of knowing where to invest its money and resources. It’s just blindly throwing money at a dartboard.

It’s just plain good science fiction and it satisfies

Friday, October 26th, 2018

I haven’t read The Da Vinci Code — or any other conspiracy thrillers, now that I think of it — but I have to assume that Hans G. Schantz’s Hidden Truth series reads like Dan Brown’s bestselling novel — but with physics taking the place of theology.

Schantz can credibly weave physics into his story, because he is a trained physicist and “wrote the book” on The Art and Science of Ultra-Wideband Antennas, and the first book definitely made me want to know more about the pioneers of electromagnetic theory — many of whom did die young or inexplicably left the field.

But the real draw — or drawback — of the novel is that it is unambiguously conservative and especially anti-Progressive. This makes it a bit of a guilty pleasure, if you subscribe to Jordan Peterson’s point about art versus propaganda.

Neovictorian reviewed the second book, and I think he reviewed it well:

It’s fun, it’s well written, it’s just plain good science fiction and it satisfies. Also, it’s a practical guide to understanding, infiltrating and grandly screwing with college SJWs. After you’ve read it, buy a copy (of both volumes) for your friends and children at school! Buy copies for younger kids, too. These books show how young people should conduct themselves with honor and perseverance, and not through preaching, but through example.

I may have to read Neovictorian’s own Sanity next.

How precision engineers created the modern world

Wednesday, October 24th, 2018

Simon Winchester’s The Perfectionists explains how precision engineers created the modern world:

The story of precision begins with metal.

And the story begins, according to Winchester, at a specific place and time: North Wales, “on a cool May day in 1776.” The Age of Steam was getting underway. So was the Industrial Revolution — almost but not quite the same thing. In Scotland, James Watt was designing a new engine to pump water by means of the power of steam. In England, John “Iron-Mad” Wilkinson was improving the manufacture of cannons, which were prone to exploding, with notorious consequences for the sailors manning the gun decks of the navy’s ships. Rather than casting cannons as hollow tubes, Wilkinson invented a machine that took solid blocks of iron and bored cylindrical holes into them: straight and precise, one after another, each cannon identical to the last. His boring machine, which he patented, made him a rich man.

Watt, meanwhile, had patented his steam engine, a giant machine, tall as a house, at its heart a four-foot-wide cylinder in which blasts of steam forced a piston up and down. His first engines were hugely powerful and yet frustratingly inefficient. They leaked. Steam gushed everywhere. Winchester, a master of detail, lists the ways the inventor tried to plug the gaps between cylinder and piston: rubber, linseed oil–soaked leather, paste of soaked paper and flour, corkboard shims, and half-dried horse dung — until finally John Wilkinson came along. He wanted a Watt engine to power one of his bellows. He saw the problem and had the solution ready-made. He could bore steam-engine cylinders from solid iron just as he had naval cannons, and on a larger scale. He made a massive boring tool of ultrahard iron and, with huge iron rods and iron sleighs and chains and blocks and “searing heat and grinding din,” achieved a cylinder, four feet in diameter, which as Watt later wrote “does not err the thickness of an old shilling at any part.”

By “an old shilling” he meant a tenth of an inch, which is a reminder that measurement itself — the science and the terminology — was in its infancy. An engineer today would say a tolerance of 0.1 inches.

James Watt’s fame eclipses Iron-Mad Wilkinson’s, but it is Wilkinson’s precision that enabled Watt’s steam engine to power pumps and mills and factories all over England, igniting the Industrial Revolution. As much as the machinery itself, the discovery of tolerance is crucial to this story. The tolerance is the clearance between, in this case, cylinder and piston. It is a specification on which an engineer (and a customer) can rely. It is the foundational concept for the world of increasing precision. When machine parts could be made to a tolerance of one tenth of an inch, soon finer tolerances would be possible: a hundredth of an inch, a thousandth, a ten-thousandth, and less.

Watt’s invention was a machine. Wilkinson’s was a machine tool: a machine for making machines and their parts. More and better machines followed, some so basic that we barely think of them as machines: toilets, locks, pulley blocks for sailing ships, muskets. The history of machinery has been written before, of course, as has the history of industrialization. These can be histories of science or economics. By focusing instead on the arrow of increasing precision, Winchester is, in effect, walking us around a familiar object to expose an unfamiliar perspective.

Can precision really be a creation of the industrial world? The word comes from Latin by way of middle French, but first it meant “cutting off” or “trimming.” The sense of exactitude comes later. It seems incredible that the ancients lacked this concept, so pervasive in modern thinking, but they measured time with sundials and sandglasses, and they counted space with hands and feet, and the “stone” has survived into modern Britain as a measure of weight.

Any assessment of ancient technology has to include, however, a single extraordinary discovery — an archaeological oddball the size of a toaster, named the “Antikythera mechanism,” after the island near Crete where Greek sponge divers recovered it in 1900 from a shipwreck 150 feet deep. Archaeologists were astonished to find, inside a shell of wood and bronze dated to the first or second century BC, a complex clockwork machine comprising at least thirty bronze dials and gears with intricate meshing teeth. In the annals of archaeology, it’s a complete outlier. It displays a mechanical complexity otherwise unknown in the ancient world and not matched again until fourteenth-century Europe. To call it “clockwork” is an anachronism: clocks came much later. Yet the gears seem to have been made — by hand — to a tolerance of a few tenths of a millimeter.

After a century of investigation and speculation, scientists have settled on the view that the Antikythera mechanism was an analog computer, intended to demonstrate astronomical cycles. Dials seem to represent the sun, the moon, and the five planets then known. It might have been able to predict eclipses of the moon. Where planetary motion is concerned, however, it seems to have been highly flawed. The engineering is better than the underlying astronomy. As Winchester notes, the Antikythera mechanism represents a device that is amazingly precise, yet not very accurate.

What makes precision a feature of the modern world is the transition from craftsmanship to mass production. The genius of machine tools — as opposed to mere machines — lies in their repeatability. Artisans of shoes or tables or even clocks can make things exquisite and precise, “but their precision was very much for the few,” Winchester writes. “It was only when precision was created for the many that precision as a concept began to have the profound impact on society as a whole that it does today.” That was John Wilkinson’s achievement in 1776: “the first construction possessed of a degree of real and reproducible mechanical precision — precision that was measurable, recordable, repeatable.”

Perhaps the canonical machine tool — surely the oldest — is the lathe, a turning device for cutting and shaping table legs, gun barrels, and screws. Wooden lathes date back to ancient China and Egypt. However, metal lathes, enormous and powerful, turning out metal machine parts, did not come into their own until the end of the eighteenth century. You can explain that in terms of available energy: water wheels and steam engines. Or you can explain it as Winchester does, in terms of precision. The British inventor Henry Maudslay made the first successful screw-cutting lathe in 1800, and to Winchester the crucial part of his invention is a device known as a slide rest: the device that holds the cutting tools and adjusts their position as delicately as possible, with the help of gears. Maudslay’s lathe, described by one historian as “the mother tool of the industrial age,” achieved a tolerance of one ten-thousandth of an inch. Metal screws and other pieces could be turned out by the hundreds and then the thousands, every one exactly the same.

Because they were replicable, they were interchangeable. Because they were interchangeable, they made possible a world of mass production and the warehousing and distribution of component parts. A French gunsmith, Honoré Blanc, is credited with showing in 1785 that flintlocks for muskets could be made with interchangeable parts. Before an audience, he disassembled twenty-five flintlocks into twenty-five frizzle springs, twenty-five face plates, twenty-five bridles, and twenty-five pans, randomly shuffled the pieces, and then rebuilt “out of this confusion of components” twenty-five new locks. Particularly impressed was the American minister to France, Thomas Jefferson, who posted by packet ship a letter explaining the new method for the benefit of Congress:

It consists in the making every part of them so exactly alike that what belongs to any one, may be used for every other musket in the magazine…. I put several together myself taking pieces at hazard as they came to hand, and they fitted in the most perfect manner. The advantages of this, when arms need repair, are evident.

As it was, when a musket broke down in the field, a soldier needed to find a blacksmith.

Replication and standardization are so hard-wired into our world that we forget how the unstandardized world functioned. A Massachusetts inventor named Thomas Blanchard in 1817 created a lathe that made wooden lasts for shoes. Cobblers still made the shoes, but now the sizes could be systematized. “Prior to that,” says Winchester, “shoes were offered up in barrels, at random. A customer shuffled through the barrel until finding a shoe that fit, more or less comfortably.” Before long, Blanchard’s lathe was making standardized gun stocks at the Springfield Armory and then at its successor, the Harpers Ferry Armory, which began turning out muskets and rifles by the thousands on machines powered by water turbines at the convergence of the Shenandoah and Potomac Rivers. “These were the first truly mechanically produced production-line objects made anywhere,” Winchester writes. “They were machine-made in their entirety, ‘lock, stock, and barrel.’” It is perhaps no surprise that the military played from the first, and continues to play, a leading and deadly part in the development of precision-based technologies and methods.

Some Russian guy tried it 15 years ago

Thursday, October 18th, 2018

The origin of Blue Origin sounds fascinating:

Jeff Bezos remembers being 5 years old and watching the Apollo 11 moon landing on a black-and-white television. The event triggered a lifelong obsession. He spent his boyhood in Houston and moved to Florida by high school, but he passed his summers on his grandparents’ farm in rural Cotulla, Texas. There, his grandfather — a former top Defense Department official — introduced him to the extensive collection of science fiction at the town library. He devoured the books, gravitating especially to Robert Heinlein and other classic writers who explored the cosmos in their tales.

When he was a junior at Miami’s Palmetto Senior High School, his physics teacher, Deana Ruel, tasked the students with designing a piece of playground equipment. Bezos’ idea was to build one in low gravity. “One day I’m going to be the first one to have an amusement park on the moon,” he told Ruel. He promised her a ticket. For a newspaper profile, Bezos spouted O’Neillian talking points to a local reporter curious about his space obsession: “The Earth is finite, and if the world economy and population is to keep expanding, space is the only way to go.”

Bezos went to Princeton, where he attended seminars led by O’Neill and became president of the campus chapter of Students for the Exploration and Development of Space. At one meeting, Bezos was regaling attendees with visions of hollowing out asteroids and transforming them into space arks when a woman leapt to her feet. “How dare you rape the universe!” she said, and stormed out. “There was a pause, and Jeff didn’t make a public comment,” says Kevin Polk, another member of the club. “But after things broke up, Jeff said, ‘Did she really defend the inalienable rights of barren rocks?’”

After Princeton, Bezos put his energies toward finance, working at a hedge fund. He left it to move to Seattle and start Amazon. Not long after, he was seated at a dinner party with science fiction writer Neal Stephenson. Their conversation quickly left the bounds of Earth. “There’s sort of a matching game that goes on where you climb a ladder, figuring out the level of someone’s fanaticism about space by how many details they know,” Stephenson says. “He was incredibly high on that ladder.” The two began spending weekend afternoons shooting off model rockets.

In 1999, Stephenson and Bezos went to see the movie October Sky, about a boy obsessed with rocketry, and stopped for coffee afterward. Bezos said he’d been thinking for a long time about starting a space company. “Why not start it today?” Stephenson asked. The next year, Bezos incorporated a company called Blue Operations LLC. Stephenson secured space in a former envelope factory in a funky industrial area in south Seattle. Other early members of the team included Pablos Holman, a self-described computer hacker, and serial inventor Danny Hillis, who had crafted a proposal to build a giant mechanical clock that would run for 10,000 years. Bezos also recruited Amazon’s general counsel, Alan Caplan, a fellow space nerd. (“We both agreed we’d like to retire on Mars,” Caplan says.) These people were more thinkers than rocketeers, but at Blue Origin’s start the point was to brainstorm: Had any ideas been overlooked that could shake up space travel the way the internet had upended terrestrial commerce?

Another early participant was George Dyson, a science historian and son of physicist Freeman Dyson. At the 1999 PC Forum, an elite tech event run by Dyson’s sister, Esther, Bezos made a beeline for George, who had been writing about a little-known 1950s venture called Project Orion. Project Orion sought to propel space vehicles with atomic bomb explosions, and Bezos wanted to know all about it. As Dyson recalls, Bezos saw Orion as “his model for a small group of crazy people deciding to go into space without the restrictions of being an official government project.” (Bezos later reviewed Dyson’s book on Amazon—something he’s done only three times in the company’s history.) Some months later, Stephenson asked Dyson if he would consult for the company. Then he asked him to join Blue.

When Dyson signed on, he says, Blue Origin felt like Wernher von Braun’s Society for Space Travel. Like that amateur group of dazzling scientists, Blue resembled a club more than a company. Its members were obsessed with finding an alternative to chemical combustion, which is a woefully inefficient way to propel rockets on interplanetary journeys. “We went through a long list of not-quite-crazy but way-out-there projects at the beginning,” Dyson says.

Those were hashed out at Blue Origin’s monthly Saturday all-hands meetings. The sessions began at 9 and lasted all day. Bezos rarely missed one. “It was almost incomprehensible how technically engaged Jeff was in every part of the discussion,” Dyson says. “It wasn’t like, ‘Oh, we’ll leave the hydrogen-flow control valve question to the hydrogen-flow control valve people.’ Whatever the question was, Jeff would have technical knowledge and be involved.”

But as the Blue Origin team experimented with eccentric ways to heave things upward, they began to realize there was a reason big tubes full of chemical fuel had persisted. Every new tack proved infeasible, because of cost, risk, or technical complexity. “You can work really hard and come up with what you think is a super original idea, and you always find out that some Russian guy tried it 15 years ago,” Stephenson says.

Did China use a tiny chip to infiltrate U.S. companies?

Saturday, October 6th, 2018

Bloomberg claims that China used a tiny chip to infiltrate U.S. companies:

A Chinese military unit designed and manufactured microchips as small as a sharpened pencil tip. Some of the chips were built to look like signal conditioning couplers, and they incorporated memory, networking capability, and sufficient processing power for an attack.

The microchips were inserted at Chinese factories that supplied Supermicro, one of the world’s biggest sellers of server motherboards.

The compromised motherboards were built into servers assembled by Supermicro.

The sabotaged servers made their way inside data centers operated by dozens of companies.

When a server was installed and switched on, the microchip altered the operating system’s core so it could accept modifications. The chip could also contact computers controlled by the attackers in search of further instructions and code.

The claims are… incredible:

In emailed statements, Amazon (which announced its acquisition of Elemental in September 2015), Apple, and Supermicro disputed summaries of Bloomberg Businessweek’s reporting. “It’s untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications when acquiring Elemental,” Amazon wrote. “On this we can be very clear: Apple has never found malicious chips, ‘hardware manipulations’ or vulnerabilities purposely planted in any server,” Apple wrote. “We remain unaware of any such investigation,” wrote a spokesman for Supermicro, Perry Hayes. The Chinese government didn’t directly address questions about manipulation of Supermicro servers, issuing a statement that read, in part, “Supply chain safety in cyberspace is an issue of common concern, and China is also a victim.” The FBI and the Office of the Director of National Intelligence, representing the CIA and NSA, declined to comment.

The inventor who plans to build a city under the sea

Thursday, September 27th, 2018

Phil Nuytten has built submarines and diving suits, but now he’s planning to build a city under the sea:

An underwater city is cool, but I’m not sure how much sense it makes. He does mention siting it on a thermal vent though, for “free” energy via a Stirling engine.

How fighting wildfires works

Sunday, September 23rd, 2018

In case you were wondering how fighting wildfires works, this video explains the process:

Not as outlandish as the concepts from the 1970s

Wednesday, September 19th, 2018

Jeff Foust of The Space Review reviews The High Frontier: An Easier Way:

In space, as in other fields, ideas come and go, returning after past failures in the hopes that changes in technology, policy, or economics will allow people to accept a concept they previously rejected. That appears to be the case with space settlements. In the 1970s, “space colonies” were all the rage among space enthusiasts, attracted by the idea proposed by Princeton professor Gerard K. O’Neill that giant habitats, many kilometers in size, would be the best place for humanity to live in space. There were NASA-sponsored studies of space colonies with lavish illustrations of the concepts, and ideas to use such facilities to enable space-based solar power (another idea that comes and goes) and other space industries. But, within a few years the concept faded away, with NASA ending its support and predictions that the Space Shuttle would enable frequent low-cost access to space failing to come true.

In the last few years, though, there’s been a push to bring back the idea, now often called “free space settlements” (avoiding the negative perception many have of “colonies”). A new book by two space settlement advocates, Tom Marotta and Al Globus, offers an update of sorts of the original space colony concept O’Neill offered decades ago in his book The High Frontier, arguing that such settlements need not be as large and as expensive as O’Neill once thought.

As its subtitle suggests, the authors of The High Frontier: An Easier Way make the case that several changes in the original assumptions that drove the 1970s-era space colony concepts make such settlements more feasible today. One eschews the plan to place settlements at the Earth-Moon L-5 Lagrange point in favor of an equatorial low Earth orbit (ELEO) at an altitude of 500 to 600 kilometers. That orbit gives such a facility radiation protection from the Earth’s magnetic field while also avoiding the South Atlantic Anomaly, a major source of charged particles. Doing so, they conclude, drastically reduces the mass needed for radiation protection: from five to ten tons per square meter of the facility’s surface to as little as 10 kilograms per square meter.

A second design change is to speed up the rotation rate of the facility needed to produce Earth-equivalent gravity. Previous studies assumed humans could tolerate rotation rates of no more than 1–2 revolutions per minute (RPM), but research suggests people can tolerate speeds of 4 RPM without any long-term consequences. That reduces the diameter of the facility, and hence its mass and cost.
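The rotation-rate tradeoff follows directly from the centripetal acceleration formula a = ω²r. A quick sketch (plain physics, not from the review) shows how much the 4 RPM assumption shrinks the habitat:

```python
import math

def spin_radius_for_1g(rpm, g=9.81):
    """Radius (m) at which a habitat spinning at `rpm` produces
    1 g of centripetal acceleration at the rim: a = omega^2 * r."""
    omega = rpm * 2 * math.pi / 60  # convert RPM to rad/s
    return g / omega ** 2

for rpm in (1, 2, 4):
    r = spin_radius_for_1g(rpm)
    print(f"{rpm} RPM -> radius {r:.0f} m, diameter {2 * r:.0f} m")
```

At 1 RPM the habitat needs a diameter of nearly 1.8 kilometers; at 4 RPM, about 112 meters, which matches the Kalpana concept described below.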

Those changes, coupled with work to reduce launch costs, make a settlement more feasible — or, at least, less infeasible. An initial concept mentioned in the book, called Kalpana, would be 112 meters in diameter and 112 meters long, weighing about 16,800 metric tons: enough to be carried by a little more than 100 flights of SpaceX’s Big Falcon Rocket (BFR) vehicle, at least according to designs the company disclosed last year. It’s still an expensive proposition, but one not as outlandish as the concepts from the 1970s.
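Those figures can be sanity-checked with some back-of-envelope arithmetic. (The 150-ton BFR payload is an assumption based on SpaceX’s announced figures, not a number stated in the review, and the cylinder-area shielding estimate is mine.)

```python
import math

# Kalpana dimensions from the book: a cylinder 112 m in diameter, 112 m long.
radius, length = 56.0, 112.0
area = 2 * math.pi * radius**2 + 2 * math.pi * radius * length  # end caps + side wall

old_shield = area * 7500   # 1970s assumption: 5-10 t/m^2 (7.5 t/m^2 midpoint), in kg
new_shield = area * 10     # ELEO assumption: ~10 kg/m^2, in kg

bfr_payload_t = 150        # assumed BFR payload to LEO, metric tons
flights = 16_800 / bfr_payload_t

print(f"hull area: {area:,.0f} m^2")
print(f"shielding, 1970s assumptions: {old_shield / 1e3:,.0f} t")
print(f"shielding, ELEO assumptions:  {new_shield / 1e3:,.0f} t")
print(f"BFR flights for 16,800 t at {bfr_payload_t} t/flight: {flights:.0f}")
```

Under the old shielding assumptions, radiation protection alone would have massed hundreds of thousands of tons, dwarfing Kalpana’s entire 16,800-ton budget; in ELEO it drops to roughly 600 tons.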

Flown for recreational purposes over water and uncongested areas

Friday, September 14th, 2018

The Kitty Hawk Flyer does look like fun:

Flyer is Kitty Hawk’s first personal flying vehicle and the first step to make flying part of everyday life.

Flyer is designed to be easy to fly and flown for recreational purposes over water and uncongested areas. In just a couple of hours, you will experience the freedom and exhilaration of flight.

Flyer maintains an altitude of 3 meters/10 feet for our first riders’ flights.

We have adjusted the flight control system to limit the speed to 20 mph for our first riders’ flights.

Flyer creates thrust through all-electric motors that are significantly quieter than any fossil fuel based equivalent. When Flyer is in the air, depending on your distance, it will sound like a lawnmower (50ft) or a loud conversation (250ft).

In the US, Flyer operates under FAA CFR Part 103 – Ultralight. The FAA does not require aircraft registration or pilot certification, though flight training is highly encouraged. Ultralights may only be flown over uncongested areas.

More false positives among the hypochondriac set

Thursday, September 13th, 2018

The new ECG Apple Watch could do more harm than good:

“Do you wind up catching a few undiagnosed cases? Sure. But for the vast majority of people it will have either no impact or possibly a negative impact by causing anxiety or unnecessary treatment,” says cardiologist Theodore Abraham, director of the UCSF Echocardiography Laboratory. The more democratized you make something like ECG, he says, the more you increase the rate of false positives — especially among the hypochondriac set. “In the case of people who are very type-A, obsessed with their health, and fitness compulsive, you could see a lot of them overusing Apple’s tech to self-diagnose and have themselves checked out unnecessarily.”

The cases in which Apple’s new watch could be most helpful are obvious: People with atrial fibrillation, family histories of heart disease, heart palpitations, chest pain, shortness of breath, and so on. Sometimes, Abraham says, patients come in with vague cardiovascular symptoms that they can’t reproduce during their visit. Folks like that, he says, often require more expensive, prescription-based monitoring systems. If a doctor could ask that kind of patient to record their symptoms on a gadget they already own, that could be a win for the healthcare provider and the patient.

As for everyone else, it’s hard to say what benefit Apple Watch’s on-demand ECG could have, and existing evidence suggests it might actually do more harm than good.

There is, however, the matter of life-saving potential to consider, which AHA president Ivor Benjamin mentioned not once but twice in his presentation at yesterday’s Apple Event. If there’s a silver lining to putting electrocardiograms on every Apple Watch wearer’s wrist, it’s that their data (if they choose to share it — Apple emphasized at the event that your data is yours to do with as you please) could help researchers resolve the uncertainty surrounding ECG screening in seemingly healthy people. Apple’s new wearable might not be the handy heart-health tool it’s advertised as, but it could, with your permission, make you a research subject.

The ability to choose something simpler and more likely to endure

Tuesday, September 11th, 2018

Megan McArdle writes to a refrigerator dying young:

It turns out that refrigerators like the My First Fridge — the kind that quietly chug along decade after decade while needing only minor repairs — really are a thing of the past. According to the National Association of Home Builders, the average life span of a refrigerator is now just 13 years. And the German environmental agency found that between 2004 and 2013, the proportion of major appliances that had to be replaced in less than five years due to a defect rose from 3.5 percent to 8.3 percent. These days, we do not so much own our appliances as rent them from fate.

How did we become renters in our own homes? Peruse the Web, and you’ll discover a variety of explanations: outsourcing to suppliers who opt for cheapness rather than longevity; fancy computer-controlled features that add fancy problems; faster innovation cycles that leave inadequate time for testing; and government-imposed energy-efficiency standards that require a lot of fiddly engineering to comply with. But essentially, all of them boil down to one word: complexity. The more complicated something is, the more ways it can break.

When you are standing over the corpse of an appliance that died too young, it’s tempting to long for simpler days. But then, simpler isn’t the same as better. Replacement cycles may have shortened, but we can afford to replace our appliances sooner, because prices have fallen so dramatically. In 1979, a basic 17-cubic-foot Kenmore refrigerator cost $469 — or in today’s dollars, $1,735, which would have taken an average worker about 76 hours of labor to earn. It came with an ice maker, automatic defrost and some shelves. The nearest equivalent today has an extra cubic foot of storage, offers humidity-controlled crisper drawers and costs about a third as much to run. At $529, it represents under 20 hours of work at the average wage.
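The column’s figures hang together arithmetically. A quick check (the hourly wages below are derived from her numbers, not stated in the column):

```python
# McArdle's refrigerator figures.
price_1979, price_today_dollars, hours_1979 = 469, 1735, 76
price_now, hours_now = 529, 20

wage_1979 = price_1979 / hours_1979       # implied 1979 average hourly wage
wage_floor_now = price_now / hours_now    # "under 20 hours" implies a wage at least this high
inflation_multiple = price_today_dollars / price_1979

print(f"implied 1979 average wage: ${wage_1979:.2f}/hr")
print(f"implied floor on today's average wage: ${wage_floor_now:.2f}/hr")
print(f"price-level multiple, 1979 to today: {inflation_multiple:.1f}x")
```

The implied 1979 wage of about $6/hour and the implied current wage above $26/hour are both consistent with published average-hourly-earnings figures for those years.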

[...]

That’s the irony of modern life in so many ways, multiplying all our choices while taking away the most fundamental one: the ability to choose something simpler and more likely to endure.

Bulk metallic glasses can be readily extruded and 3D-printed

Wednesday, September 5th, 2018

The 3-D printing of thermoplastics is highly advanced, but the 3-D printing of metals is still challenging and limited:

The reason is that metals generally don’t exist in a state in which they can be readily extruded.

“We have shown theoretically in this work that we can use a range of other bulk metallic glasses and are working on making the process more practical and commercially-usable to make 3-D printing of metals as easy and practical as the 3-D printing of thermoplastics,” said Prof. Schroers.

Unlike conventional metals, bulk metallic glasses (BMGs) have a super-cooled liquid region in their thermodynamic profile and are able to undergo continuous softening upon heating — a phenomenon that is present in thermoplastics, but not conventional metals. Prof. Schroers and colleagues have thus shown that BMGs can be used in 3-D printing to generate solid, high-strength metal components under ambient conditions of the kind used in thermoplastic 3-D printing.

The new work could side-step the obvious compromises in choosing thermoplastic components over metal components, or vice versa, for a range of materials and engineering applications. Additive manufacturing of metal components has been developed previously using a powder bed fusion process, but that approach relies on a highly localized heating source followed by solidification of a powdered metal shaped into the desired structure. It is costly and complicated, and requires unwieldy support structures to keep parts from being distorted by the high temperatures of the fabrication process.

The approach taken by Prof. Schroers and colleagues simplifies additive manufacturing of metallic components by exploiting the unique-amongst-metals softening behavior of BMGs. Paired with this plastic-like behavior are high strength and elastic limits, high fracture toughness, and high corrosion resistance. The team has focused on a BMG made from zirconium, titanium, copper, nickel and beryllium, with alloy formula Zr44Ti11Cu10Ni10Be25. This is a well-characterized and readily available BMG material.

The team used amorphous rods of 1 millimeter (mm) diameter and 700 mm length. An extrusion temperature of 460 degrees Celsius and an extrusion force of 10 to 1,000 Newtons are used to force the softened fibers through a 0.5 mm diameter nozzle. The fibers are then extruded onto a 400°C stainless steel mesh, where crystallization does not occur for at least a day, allowing a robotically controlled extrusion to build up the desired object.
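Those process figures imply substantial stresses on the feedstock. A rough back-of-envelope (plain geometry, not from the paper; the calculation assumes the force acts over the 1 mm feed rod’s cross-section):

```python
import math

# Figures from the study: 1 mm amorphous feed rod, extrusion forces of 10 to 1,000 N.
rod_area = math.pi * (0.5e-3) ** 2  # cross-section of the 1 mm rod, in m^2

stress_low = 10 / rod_area / 1e6       # stress in MPa at the 10 N minimum
stress_high = 1_000 / rod_area / 1e6   # stress in MPa at the 1,000 N maximum

print(f"10 N    -> ~{stress_low:.0f} MPa on the feed rod")
print(f"1,000 N -> ~{stress_high:,.0f} MPa (~{stress_high / 1e3:.1f} GPa)")
```

So the extruder spans roughly 13 MPa to 1.3 GPa of applied stress, depending on how far the alloy has softened at the working temperature.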

(Hat tip to Jonathan Jeckell.)

Fitbit heart data reveals its secrets

Monday, September 3rd, 2018

Fitbit has now logged 150 billion hours’ worth of heart-rate data from tens of millions of people, all over the world:

The accompanying charts cover: resting heart rate by age; BMI vs. heart rate by gender; resting heart rate with exercise; the effect of activity on resting heart rate by age; resting heart rate with sleep; and activity vs. heart rate by country.