Why Saddam and Gaddafi Failed to get the Bomb

Saturday, November 26th, 2016

Målfrid Braut-Hegghammer, author of Unclear Physics, explains why Saddam and Gaddafi failed to get the Bomb:

While dictators with weak states can easily decide that they want nuclear weapons, they will find it difficult to produce them. Why? Personalist dictators like Saddam and Gaddafi weaken formal state institutions in order to concentrate power in their own hands. This helps them remain in power for longer, but makes their states inefficient. Weak states have fewer instruments to set up and manage complex technical programs. They lack the basic institutional capability to plan, execute, and review complicated technical projects. As a result, their leaders can be led to believe that the nuclear weapons program is doing great while, in fact, nothing is working out. In Libya, for example, scientists worked throughout the 1980s to produce centrifuges, with zero results.

[...]

As my book shows, these programs were afflicted with capacity problems at every stage, from initial planning to their final dismantlement. These problems were worse in Libya than in Iraq, because Gaddafi dismantled most state institutions as part of his Cultural Revolution during the 1970s. Saddam created a bloated state that was difficult to navigate for his officials, with competing agencies and programs blaming each other for various problems as these emerged. This made oversight difficult, from Saddam’s point of view, and caused endless infighting and backstabbing inside the Iraqi nuclear program. As a result, scientists spent days in endless meetings, blaming each other for delays, rather than working together as a team to solve problems they were facing.

Even when Saddam tried to put more pressure on his scientists to deliver results, he failed. After Israel destroyed a research reactor complex in Iraq in June 1981, Saddam became more determined to get nuclear weapons. But the program made little progress. In 1985, his leading scientists promised Saddam that they would achieve a major breakthrough by 1990 – without specifying what exactly they would achieve by that time. By 1987, it was clear that they would not be able to make a significant breakthrough by the deadline. This created plenty of shouting and conflict inside the program, and led to an in-house restructuring, but even at this stage no one was willing to tell Saddam the bad news. When the delays could no longer be denied, the scientists blamed another agency. This was a strategic blunder – because this agency was led by Saddam’s son-in-law, Hussein Kamil. Saddam put Kamil in charge of the nuclear weapons program. Even Kamil, who was notoriously brutal toward his employees, became so frustrated with the nuclear program that he threatened to imprison anyone found to intentionally cause delays. Tellingly, this threat was never implemented.

In contrast, Libyan scientists often did not show up for work. The regime couldn’t just fire them, partly because there were too few scientists in Libya to begin with. The regime was unable to educate enough scientists and engineers, and had to hire foreigners (including many Egyptians). Some of the Egyptian scientists went on strike during a 1977 conflict between the two states – and, apparently, managed to negotiate better conditions. Not quite what we would expect from a brutal dictator, is it? But, as the history of Libya’s nuclear program demonstrates, the regime invested enormous sums in buying equipment without getting significantly closer to the nuclear weapons threshold. In fact, nothing worked – including phones, photocopiers and expensive laboratory equipment. Some of the equipment broke, and no one knew how to fix it, while other equipment was left unopened because the technical staff worried that fluctuating voltage in the electrical system could damage it. The Soviet research reactor also faced problems, because the Libyans were unable to filter the water cooling the reactor system, which meant the pipes became clogged with sand.

The Iraqi and Libyan programs failed for different reasons. The Iraqi program was beginning to make some progress after the internal restructuring. Kamil decided to ignore Saddam’s rule to not seek help from abroad, and bought equipment for the nuclear weapons program from Germany and other countries in the late 1980s. But then, Saddam miscalculated badly and decided to invade Kuwait in the summer of 1990. After the invasion, the Iraqis launched a crash nuclear program. Kamil told Saddam that they were on the threshold of acquiring nuclear weapons in the fall of 1990, which wasn’t true. But, if Saddam hadn’t invaded Kuwait, which led to the 1991 Gulf War, he would most likely have acquired nuclear weapons. The Libyan program never even got close.

A Twist on Wing Design

Tuesday, November 15th, 2016

MIT researchers are testing a shape-changing wing that could replace the hinged flaps and ailerons of conventional flight controls:

They constructed the wing from tiny lightweight structural pieces made with Kapton foil on an aluminum frame, arranged in a lattice of cells like a honeycomb. The skin of the wing is made with overlapping strips of the flexible foil, layered like fish scales, allowing the pieces to slide across each other as the wing flexes, they said.

Flexible Wing from MIT

Two small motors apply a twisting pressure to each wingtip to control maneuvers in flight. They say this elastic airfoil can morph continuously to reduce drag, increase the stall angle, reduce vibration, and control flutter.

The Soft Robotics abstract:

We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness.

This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure’s intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system.

As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures.
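
That “calibrate each block, then synthesize” idea lends itself to a toy illustration. Here is a minimal sketch, not the authors’ model: it treats a spanwise strip of lattice cells as elastic elements loaded in series, with made-up per-cell stiffness values standing in for empirical calibrations.

```python
# Toy version of the per-block modeling idea: a spanwise strip of lattice
# cells acts as springs in series, so compliances (1/k) simply add.
# The stiffness values (N/m) are hypothetical stand-ins for calibrated data.
cell_stiffness = {"stiff": 5000.0, "compliant": 800.0}

def strip_stiffness(cells):
    """Effective stiffness of cells in series: 1/k_eff = sum(1/k_i)."""
    return 1.0 / sum(1.0 / cell_stiffness[c] for c in cells)

# Spatially tuned layup: stiff cells inboard, compliant cells at the tip,
# so the strip deforms mostly near the wingtip.
layup = ["stiff"] * 8 + ["compliant"] * 4
print(f"effective stiffness: {strip_stiffness(layup):.0f} N/m")
```

Because each cell type needs only one calibrated model, swapping the ordering in `layup` retunes the global behavior without re-deriving anything, which is the computational payoff the abstract describes.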

Starship Troupers

Saturday, November 12th, 2016

Starship research is enjoying something of a boom:

Serious work in the field dates back to 1968, when Freeman Dyson, an independent-minded physicist, investigated the possibilities offered by rockets powered by a series of nuclear explosions. Then, in the 1970s, the British Interplanetary Society (BIS) designed Daedalus, an unmanned vessel that would use a fusion rocket to attain 12% of the speed of light, allowing it to reach Barnard’s Star, six light-years away, in 50 years. That target, though not the nearest star to the sun, was the nearest then suspected of having at least one planet.

[...]

During the cold war America spent several years and much treasure (peaking in 1966 at 4.4% of government spending) to send two dozen astronauts to the Moon and back. But on astronomical scales, a trip to the Moon is nothing. If Earth — which is 12,742km, or 7,918 miles, across — were shrunk to the size of a sand grain and placed on the desk of The Economist’s science correspondent, the Moon would be a smaller sand grain about 3cm away. The sun would be a larger ball nearly 12 metres down the hall. And Alpha Centauri B would be around 3,200km distant, somewhere near Volgograd, in Russia.
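
A quick sanity check reproduces those figures. The one-millimeter grain size is my assumption (the article doesn’t give one), but it makes the other numbers fall out of the scale factor:

```python
# Scale model: Earth (12,742 km across) shrunk to a 1 mm sand grain.
scale = 1e-3 / 12_742e3                 # model metres per real metre

moon_km, sun_km = 384_400, 149.6e6      # real distances in km
alpha_cen_km = 4.37 * 9.461e12          # 4.37 light-years in km

print(f"Moon: {moon_km * 1e3 * scale * 100:.0f} cm")        # ~3 cm
print(f"Sun: {sun_km * 1e3 * scale:.0f} m")                 # ~12 m
print(f"Alpha Centauri: {alpha_cen_km * scale:,.0f} km")    # ~3,200 km
```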

Chemical rockets simply cannot generate enough energy to cross such distances in any sort of useful time. Voyager 1, a space probe launched in 1977 to study the outer solar system, has travelled farther from Earth than any other object ever built. A combination of chemical rocketry and gravitational kicks from the solar system’s planets has boosted its velocity to 17 km a second. At that speed, it would (were it pointing in the right direction) take more than 75,000 years to reach Alpha Centauri.
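
The 75,000-year figure checks out, taking Alpha Centauri at 4.37 light-years:

```python
# Time for Voyager 1, at 17 km/s, to cover the distance to Alpha Centauri.
dist_km = 4.37 * 9.461e12                     # 4.37 light-years in km
years = dist_km / 17.0 / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")                  # ~77,000 years
```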

Nuclear power can bring those numbers down. Dr Dyson’s bomb-propelled vessel would take about 130 years to make the trip, although with no ability to slow down at the other end (which more than doubles the energy needed) it would zip through the alien solar system in a matter of days. Daedalus, though quicker, would also zoom right past its target, collecting what data it could along the way. Icarus, its spiritual successor, would be able at least to slow down. Only Project Longshot, run by NASA and the American navy, envisages actually stopping on arrival and going into orbit around the star to be studied.

But nuclear rockets have problems of their own. For one thing, they tend to be big. Daedalus would weigh 54,000 tonnes, partly because it would have to carry all its fuel with it. That fuel itself has mass, and therefore requires yet more fuel to accelerate it, a problem which quickly spirals out of control. And the fuel in question, an isotope of helium called 3He, is not easy to get hold of. The Daedalus team assumed it could be mined from the atmosphere of Jupiter, by humans who had already spread through the solar system.
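
The spiral the article describes is the Tsiolkovsky rocket equation: the ratio of initial to final mass grows exponentially with the ratio of mission delta-v to exhaust velocity. A rough sketch, with an assumed Daedalus-like fusion exhaust velocity of 10,000 km/s (my number, for illustration):

```python
import math

c = 299_792                 # speed of light, km/s
delta_v = 0.12 * c          # Daedalus's 12% of light speed
v_exhaust = 10_000          # km/s: assumed fusion exhaust velocity

# Tsiolkovsky: initial mass / final mass = exp(delta_v / v_exhaust)
ratio = math.exp(delta_v / v_exhaust)
print(f"fly-by mission:  {ratio:.0f}x final mass")        # ~37x
# Braking at the target doubles the delta-v, which squares the ratio:
print(f"stop-at-target:  {ratio**2:,.0f}x final mass")    # ~1,300x
```

The squaring, not doubling, of the mass ratio is why stopping at the destination is so much harder than flying past it.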

A different approach, pioneered by the late Robert Forward, was championed by Dr Benford and his brother Gregory, who, like Forward, is both a physicist and a science-fiction author. The idea is to leave the troublesome fuel behind. Their ships would be equipped with sails. Instead of filling them with wind, an orbiting transmitter would fill them with energy in the form of lasers or microwave beams, giving them a ferocious push to a significant fraction of the speed of light which would be followed (with luck) by an uneventful cruise to wherever they were going.

“Cheaper”, though, is a relative term. Jim Benford reckons that even a small, slow probe designed to explore space just outside the solar system, rather than flying all the way to another star, would require as much electrical power as a small country — beamed, presumably, from satellites orbiting Earth. A true interstellar machine moving at a tenth of the speed of light would consume more juice than the entirety of present-day civilisation. The huge distances involved mean that everything about starships is big. Cost estimates, to the extent they mean anything at all, come in multiple trillions of dollars.
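
Back-of-the-envelope, the “more juice than civilisation” claim is easy to credit. For an assumed 1,000-tonne ship (my number, not Benford’s), the kinetic energy alone at a tenth of light speed is close to a year of global energy use, before counting any beam inefficiency:

```python
m = 1e6                     # kg: assumed 1,000-tonne ship
v = 0.1 * 3e8               # m/s: a tenth of the speed of light
ke = 0.5 * m * v**2         # classical KE; only ~1% off at 0.1c

world_j_per_year = 6e20     # J: rough global primary energy use (~19 TW)
print(f"{ke:.1e} J = {ke / world_j_per_year:.2f} years of world energy use")
```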

That illustrates another question about starships, beyond whether they are possible. Fifty years of engineering studies have yet to turn up an obvious technical reason why an unmanned starship could not be built (crewed ships might be doable too, although they throw up a host of extra problems). But they have not answered the question of why anyone would want to go to all the trouble of building one.

Jeff Bezos discusses space flight and his vision for Blue Origin

Sunday, October 30th, 2016

Jeff Bezos discusses space flight and his vision for Blue Origin at the 2016 Pathfinder Awards at Seattle’s Museum of Flight:

Colonizing Venus With Floating Cities

Wednesday, October 26th, 2016

Colonizing Venus with floating cities raises the obvious question: why build the cities floating in the atmosphere?

Because the atmosphere holds raw materials for construction and for a breathable atmosphere; because the planet provides gravity that you’d otherwise need a large rotating habitat or tether-and-counterweight system to achieve; and because it offers a human-friendly temperature range that removes the need for complex thermal-control systems. Venus is effectively the only source of nitrogen in the inner system aside from Earth itself, and although Venus is very dry, the atmosphere does contain water, as well as sulfuric acid, which can be converted into water or used directly in industrial processes. And the atmosphere is primarily CO2, which can provide carbon for polymers.

Once bootstrapped, small supplemental shipments of minerals and machinery would allow enormous expansion, the atmosphere itself providing building material and lifting gas. The Venusian habitats could provide atmospheric gases and polymer building materials to the inner system. In reality, it is probably one of the easiest places to establish large Earthlike habitats. The plentiful sunlight, ease of constructing large habitats under Earthlike gravity, and constant supply of CO2, nitrogen, and water from the atmosphere could make it an agricultural center for supplying the inner system as well…not only with food, but also chemicals and materials derived from plants. And yes, the rocket fuel for getting things into orbit could also be synthesized from the atmosphere, and the thick atmosphere makes Venus itself one of the easiest planets to land on (even if you never actually touch land).

Converting CO2 to building material sounds like science fiction, but it is a fact that plants do it all around us.

[...]

The advantages of Venus are Earthlike gravity for small, non-rotating habitats; an environment with Earthlike temperatures and pressures, plus protection from radiation and micrometeorites, which reduces both structural mass and the consequences of a breach (seal off the compromised area, then put on hazmat suits and patch it); plentiful nitrogen (which Mars or orbital habitats would need constantly resupplied); and sunlight for crops without concentrator mirrors.

Raining In High-Frequency Traders

Tuesday, October 25th, 2016

What is the relationship between high-frequency traders and liquidity?

Ever since high-frequency trading rose to prominence, a debate has raged over whether the ensuing arms race between super-fast traders helped or hindered markets. One side argues that it helps because the massive number of transactions the fastest traders engage in lowers costs by reducing the spreads between bids and offers. Critics counter that, in reality, spreads widen since slower traders need to charge higher spreads as insurance against getting caught flatfooted by a fast-moving event.

[...]

Starting in 2010, high-frequency traders began using ultrafast microwave links to relay prices and other information between Chicago and New York. To begin with, only some traders had access to microwave networks. Until 2013, others had to rely on less speedy fiber-optic cable.

But microwave transmissions are disrupted by water droplets and snowflakes, so during heavy storms traders using the networks switch to fiber. Messrs. Shkilko and Sokolov used weather-station data from along the microwaves’ paths to determine when storms occurred and then looked at what happened to bid-ask spreads in a variety of securities during those periods.

They narrowed, suggesting that the slowing down of the fastest high-frequency traders improved market liquidity.
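
The study design is simple enough to sketch. Given top-of-book quotes and a list of storm windows along the microwave route (the file names and columns below are hypothetical, not the authors’ data), the comparison reduces to a difference in mean spreads:

```python
import pandas as pd

# Hypothetical inputs:
# quotes.csv: timestamp, bid, ask   -- top-of-book quotes for one security
# storms.csv: start, end            -- periods of rain along the microwave path
quotes = pd.read_csv("quotes.csv", parse_dates=["timestamp"])
storms = pd.read_csv("storms.csv", parse_dates=["start", "end"])

# Relative bid-ask spread in basis points.
mid = (quotes["bid"] + quotes["ask"]) / 2
quotes["spread_bps"] = (quotes["ask"] - quotes["bid"]) / mid * 1e4

# Flag quotes that fall inside any storm window.
in_storm = pd.Series(False, index=quotes.index)
for s in storms.itertuples():
    in_storm |= quotes["timestamp"].between(s.start, s.end)

# The paper's finding: mean spreads are narrower during storms,
# when the fastest traders are forced onto slower fiber routes.
print(quotes.groupby(in_storm)["spread_bps"].mean())
```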

Someone Is Learning How to Take Down the Internet

Monday, October 24th, 2016

Someone is learning how to take down the Internet, Bruce Schneier suggests:

Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they’re used to seeing. They last longer. They’re more sophisticated. And they look like probing. One week, an attack would start at a particular level and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, as if the attacker were looking for the exact point of failure.

The attacks are also configured in such a way as to see what the company’s total defenses are. There are many different ways to launch a DDoS attack. The more attack vectors you employ simultaneously, the more different defenses the defender has to counter with. These companies are seeing more attacks using three or four different vectors. This means that the companies have to use everything they’ve got to defend themselves. They can’t hold anything back. They’re forced to demonstrate their defense capabilities for the attacker.

[...]

One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.

Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that. Furthermore, the size and scale of these probes — and especially their persistence — points to state actors. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar. It reminds me of the U.S.’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.

Brian Krebs offers some specifics:

At first, it was unclear who or what was behind the attack on Dyn. But over the past few hours, at least one computer security firm has come out saying the attack involved Mirai, the same malware strain that was used in the record 620 Gbps attack on my site last month. At the end of September 2016, the hacker responsible for creating the Mirai malware released the source code for it, effectively letting anyone build their own attack army using Mirai.

Mirai scours the Web for IoT devices protected by little more than factory-default usernames and passwords, and then enlists the devices in attacks that hurl junk traffic at an online target until it can no longer accommodate legitimate visitors or users.

According to researchers at security firm Flashpoint, today’s attack was launched at least in part by a Mirai-based botnet. Allison Nixon, director of research at Flashpoint, said the botnet used in today’s ongoing attack is built on the backs of hacked IoT devices — mainly compromised digital video recorders (DVRs) and IP cameras made by a Chinese hi-tech company called XiongMai Technologies. The components that XiongMai makes are sold downstream to vendors who then use them in their own products.

“It’s remarkable that virtually an entire company’s product line has just been turned into a botnet that is now attacking the United States,” Nixon said, noting that Flashpoint hasn’t ruled out the possibility of multiple botnets being involved in the attack on Dyn.

Many of these devices allow users to change the default usernames and passwords on a Web-based administration panel — but the devices also have default usernames and passwords for telnet and SSH, which aren’t editable from the Web-based admin tools:

“The issue with these particular devices is that a user cannot feasibly change this password,” Flashpoint’s Zach Wikholm told KrebsOnSecurity. “The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist.”

Flashpoint’s researchers said they scanned the Internet on Oct. 6 for systems that showed signs of running the vulnerable hardware, and found more than 515,000 of them were vulnerable to the flaws they discovered.

Colonizing Venus

Monday, October 24th, 2016

Colonizing Venus may be easier than colonizing Mars:

In many ways Venus is the hell planet. Results of spacecraft investigation of the surface and atmosphere of Venus are summarized by Bougher, Hunten, and Phillips [1997]:

  • Surface temperature 735 K: lead, tin, and zinc melt at the surface, with hot spots in excess of 975 K
  • Atmospheric pressure 96 Bar (1300 PSI); similar to pressure at a depth of a kilometer under the ocean
  • The surface is cloud covered; little or no solar energy
  • Poisonous atmosphere of primarily carbon dioxide, with nitrogen and clouds of sulfuric acid droplets.

However, viewed in a different way, the problem with Venus is merely that the ground level is too far below the one atmosphere level. At cloud-top level, Venus is the paradise planet. As shown in figure 2, at an altitude slightly above fifty km above the surface, the atmospheric pressure is equal to the Earth surface atmospheric pressure of 1 Bar. At this level, the environment of Venus is benign.

  • above the clouds, there is abundant solar energy
  • temperature is in the habitable “liquid water” range of 0–50°C
  • atmosphere contains the primary volatiles required for life (Carbon, Hydrogen, Oxygen, Nitrogen, and Sulfur)
  • Gravity is 90% of the gravity at the surface of Earth.

While the atmosphere contains droplets of sulfuric acid, technologies to avoid acid corrosion are well known and have been used by chemists for centuries. In short, the atmosphere of Venus is the most earthlike environment in the solar system. Although humans cannot breathe the atmosphere, pressure vessels are not required to maintain one atmosphere of habitat pressure, and pressure suits are not required for humans outside the habitat.

It is proposed here that, in the near term, human exploration of Venus could take place from aerostat vehicles in the atmosphere, and that, in the long term, permanent settlements could be made in the form of cities designed to float at about fifty-kilometer altitude in the atmosphere of Venus.

On Venus, breathable air (i.e., an oxygen-nitrogen mixture at roughly a 21:78 ratio) is a lifting gas. The lifting power of breathable air in the carbon dioxide atmosphere of Venus is about half a kilogram per cubic meter. Since air is a lifting gas on Venus, the entire lifting envelope of an aerostat can be breathable gas, allowing the full volume of the aerostat to be habitable volume. For comparison, on Earth, helium lifts about one kilogram per cubic meter, so a given volume of air on Venus will lift about half as much as the same volume of helium will lift on Earth.
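
Both figures follow directly from the ideal gas law and the molar masses involved. A quick check at the one-bar level (the ~350 K temperature at 50 km is my assumption; Landis’s paper gives the profile in its figure 2):

```python
# Net buoyancy of breathable air in Venus's CO2 atmosphere at the 1-bar level.
R = 8.314        # J/(mol K)
P = 101_325      # Pa: one bar, roughly the pressure ~50 km up
T = 350.0        # K: assumed temperature at that altitude

def density(molar_mass_kg):
    return P * molar_mass_kg / (R * T)   # ideal gas law

rho_co2 = density(0.0440)    # Venus atmosphere, overwhelmingly CO2
rho_air = density(0.0289)    # 21:78 oxygen-nitrogen breathing mix

print(f"air on Venus lifts {rho_co2 - rho_air:.2f} kg per m^3")   # ~0.5

# Compare helium on Earth at 15 C: about 1.1 kg of lift per m^3,
# so air on Venus lifts roughly half as much per unit volume.
rho_earth_air = 101_325 * 0.0289 / (R * 288.0)
rho_helium    = 101_325 * 0.0040 / (R * 288.0)
print(f"helium on Earth lifts {rho_earth_air - rho_helium:.2f} kg per m^3")
```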

Settling Venus sounds oddly feasible:

In the long term, permanent settlements could be made in the form of cities designed to float at about fifty kilometer altitude in the atmosphere of Venus.

The thick atmosphere provides about one kilogram per square centimeter of mass shielding from galactic cosmic radiation and from solar particle event radiation, eliminating a key difficulty in many other proposed space settlement locations. The gravity, slightly under one Earth gravity, is likely to be sufficient to prevent the adverse effects of microgravity. At roughly one atmosphere of pressure, a habitat in the atmosphere will not require a high-strength pressure vessel.

Humans would still require provision of oxygen, which is mostly absent from the Venusian atmosphere, but in other respects the environment is perfect for humans (although on the habitat exterior humans would still require sufficient clothing to avoid direct skin exposure to aerosol droplets).

Since breathable air is a lifting gas, the entire lifting envelope of an aerostat can be breathable gas, allowing the full volume of the aerostat to be habitable volume. For objects the size of cities, this represents an enormous amount of lifting power. A one-kilometer diameter spherical envelope will lift 700,000 tons (two Empire State Buildings). A two-kilometer diameter envelope would lift 6 million tons.

So, if the settlement is enclosed in an envelope of oxygen and nitrogen the size of a modest city, the mass that can be lifted will in fact be large enough to include the mass of a modest city. The result would be an environment as spacious as a typical city.

The lifting envelope does not need to hold a significant pressure differential. Since at the altitudes of interest the external pressure is nearly one bar, atmospheric pressure inside the envelope would be the same as the pressure outside. The envelope material itself would be a rip-stop material, with high-strength tension elements to carry the load. With zero pressure differential between interior and exterior, even a rather large tear in the envelope would take thousands of hours to leak significant amounts of gas, allowing ample time for repair. (For safety, the envelope would also consist of several individual units).

Solar power is abundant in the atmosphere of Venus, and, in fact, solar arrays can produce nearly as much power pointing downward (toward the reflective clouds) as they produce pointing toward the sun. The Venus solar day, 116.8 terrestrial days, is extremely long; however, the atmospheric winds circle the planet much more rapidly, rotating around the planet in four days. Thus, on the habitat, the effective solar “night” would be roughly fifty hours, and the solar “day” the same. This is longer than an Earth day, but is still comfortable compared to, for example, the six-month night experienced in terrestrial near-polar locations. If the habitat is located at high latitudes, the day and night duration could be shortened toward a 24-hour cycle.

Instructional Videos

Tuesday, October 18th, 2016

Instructional videos are popular and effective, because we’re designed to learn through imitation:

Last year, it was estimated that YouTube was home to more than 135 million how-to videos. In a 2008 survey, “instructional videos” were ranked the site’s third most popular content category — albeit a “distant third” behind “performance and exhibition” and “activism and outreach.” More recent data suggest that distance may have closed: In 2015, Google noted that “how to” searches on YouTube were increasing 70 percent annually. The genre is by now so mature that it makes for easy satire.

[...]

A 2014 study showed that when a group of marmosets were presented with an experimental “fruit” apparatus, most of those that watched a video of marmosets successfully opening it were able to replicate the task. They had, in effect, watched a “how to” video. Of the 12 marmosets who managed to open the box, just one figured it out sans video (in the human world, he might be the one making YouTube videos).

[...]

“We are built to observe,” as Proteau tells me. There is, in the brain, a host of regions that come together under a name that seems to describe YouTube itself, called the action-observation network. “If you’re looking at someone performing a task,” Proteau says, “you’re in fact activating a bunch of neurons that will be required when you perform the task. That’s why it’s so effective to do observation.”

[...]

This ability to learn socially, through mere observation, is most pronounced in humans. In experiments, human children have been shown to “over-imitate” the problem-solving actions of a demonstrator, even when superfluous steps are included (chimps, by contrast, tend to ignore these). Susan Blackmore, author of The Meme Machine, puts it this way: “Humans are fundamentally unique not because they are especially clever, not just because they have big brains or language, but because they are capable of extensive and generalised imitation.” In some sense, YouTube is catnip for our social brains. We can watch each other all day, every day, and in many cases it doesn’t matter much that there’s not a living creature involved. According to Proteau’s research, learning efficiency is unaffected, at least for simple motor skills, by whether the model being imitated is live or presented on video.

There are ways to learn from videos better:

The first has to do with intention. “You need to want to learn,” Proteau says. “If you do not want to learn, then observation is just like watching a lot of basketball on the tube. That will not make you a great free throw shooter.” Indeed, as Emily Cross, a professor of cognitive neuroscience at Bangor University told me, there is evidence — based on studies of people trying to learn to dance or tie knots (two subjects well covered by YouTube videos) — that the action-observation network is “more strongly engaged when you’re watching to learn, as opposed to just passively spectating.” In one study, participants in an fMRI scanner who were asked to watch a task being performed with the goal of learning how to do it showed greater brain activity in the parietofrontal mirror system, cerebellum and hippocampus than those simply asked to watch it. And one region, the pre-SMA (pre-supplementary motor area), a region thought to be linked with the “internal generation of complex movements,” was activated only in the learning condition — as if, knowing they were going to have to execute the task themselves, participants began internally rehearsing it.

It also helps to arrange for the kind of feedback that makes a real classroom work so well. If you were trying to learn one of Beyonce’s dance routines, for example, Cross suggests using a mirror, “to see if you’re getting it right.” When trying to learn something in which we do not have direct visual access to how well we are doing — like a tennis serve or a golf swing — learning by YouTube may be less effective.

[...]

The final piece of advice is to look at both experts and amateurs. Work by Proteau and others has shown that subjects seemed to learn sample tasks more effectively when they were shown videos of both experts performing the task effortlessly, and the error-filled efforts of novices (as opposed to simply watching experts or novices alone). It may be, Proteau suggests, that in the “mixed” model, we learn what to strive for as well as what to avoid.

Crowds and Technology

Monday, October 17th, 2016

Mobs, demagogues, and populist movements are obviously not new:

What is new and interesting is how social media has transformed age-old crowd behaviors. In the past decade, we’ve built tools that have reconfigured the traditional, centuries-old relationship between crowds and power, transforming what used to be sporadic, spontaneous, and transient phenomena into permanent features of the social landscape. The most important thing about digitally transformed crowds is this: unlike IRL crowds, they can persist indefinitely. And this changes everything.

[...]

To translate Canetti’s main observations to digital environments:

  1. The crowd always wants to grow — and always can, unfettered by physical limitations
  2. Within the crowd there is equality — but higher levels of deception, suspicion, and manipulation
  3. The crowd loves density — and digital identities can be more closely packed
  4. The crowd needs a direction — and clickbait makes directions cheap to manufacture

Translating Eric Hoffer’s ideas to digital environments is even simpler: the Internet is practically designed to enable the formation of self-serving patterns of “true belief.”

Chuck Yeager Describes How He Broke The Sound Barrier

Friday, October 14th, 2016

Chuck Yeager describes how he broke the sound barrier:

Everything was set inside X-1 as Cardenas started the countdown. Frost assumed his position and the mighty crack from the cable release hurled the X-1 into the abyss. I fired chamber No. 4, then No. 2, then shut off No. 4 and fired No. 3, then shut off No. 2 and fired No. 1. The X-1 began racing toward the heavens, leaving the B-29 and the P-80 far behind. I then ignited chambers No. 2 and No. 4, and under a full 6000 pounds of thrust, the little rocket plane accelerated instantly, leaving a contrail of fire and exhaust. From .83 Mach to .92 Mach, I was busily engaged testing stabilizer effectiveness. The rudder and elevator lost their grip on the thinning air, but the stabilizer still proved effective, even as speed increased to .95 Mach. At 35,000 ft., I shut down two of the chambers and continued to climb on the remaining two. We were really hauling! I was excited and pleased, but the flight report I later filed maintained that outward cool: “With the stabilizer setting at 2 degrees, the speed was allowed to increase to approximately .95 to .96 Mach number. The airplane was allowed to continue to accelerate until an indication of .965 on the cockpit Machmeter was obtained. At this indication, the meter momentarily stopped and then jumped up to 1.06, and the hesitation was assumed to be caused by the effect of shock waves on the static source.”

I had flown at supersonic speeds for 18 seconds. There was no buffet, no jolt, no shock. Above all, no brick wall to smash into. I was alive.

And although it was never entered in the pilot report, the casualness of invading a piece of space no man had ever visited was best reflected in the radio chatter. I had to tell somebody, anybody, that we’d busted straight through the sound barrier. But transmissions were restricted. “Hey Ridley!” I called. “Make another note. There’s something wrong with this Machmeter. It’s gone completely screwy!”

“If it is, we’ll fix it,” Ridley replied, catching my drift. “But personally, I think you’re seeing things.”

The Glow Puck Returns

Thursday, October 6th, 2016

One of professional hockey’s most hated innovations, the glow puck, is making a comeback:

The company [Sportvision] has developed new hockey pucks loaded with tracking chips and outfitted the players in the six-team tournament with sensors on their sweaters that track movement throughout the games. The sensors emit infrared signals that allow cameras circling Toronto’s Air Canada Centre to record data like the speed and trajectory of a shot, how fast and how far players skate, who is on the ice and the length of their shifts.

Sportvision had to develop new pucks to hold the sensors. To test them, the company shot the pucks out of a cannon at speeds up to 135 miles an hour, faster than the record 108.8-mile-per-hour shot by Boston defenseman Zdeno Chara in 2012.

First used during the 2015 All-Star skills competition and game, the sensors are getting their first real-game tryouts at the World Cup, which begins its final round Tuesday.

The sensors also allow Sportvision, which developed the computerized yellow first-down line used in NFL broadcasts and the virtual strike-zone shown in televised baseball games, to graphically enhance visuals for people watching on TV.

Broadcasters use the graphics to point out particularities about hockey that might get lost during games. For example, during a recent broadcast of a Team North America game against Team Finland, Canadian broadcaster Sportsnet showed a replay of a goal by defenseman Colton Parayko. “It’s not how hard the shot is, it’s just where it gets to,” said the announcer, as the onscreen graphics traced with a red tail the arc of the shot from Parayko’s stick into the net, displaying the speed at a relatively modest 50 mph.

A Schizophrenic Computer

Tuesday, October 4th, 2016

You can “teach” a neural net a series of simple stories, but if the neural net is set to “hyperlearn” from examples, you get a schizophrenic computer:

While there’s significant evidence that ordinary brains pretty much remember everything, they store memories differently. In particular, intense experiences, which are signaled to the brain by the presence of dopamine, are remembered differently than others. That’s why, for example, you probably can’t remember what you had for lunch last Tuesday, but you still have strong memories of your first kiss.

The hyperlearning hypothesis posits that for schizophrenics, this system of classifying experiences breaks down because of excessive levels of dopamine. Rather than classifying some memories as important and others as less essential, the brain classes everything as important. According to the hypothesis, this is what leads to schizophrenics getting trapped into seeing patterns that aren’t there, or simply drowning in so many memories that they can’t focus on anything.

In order to simulate the hyperlearning hypothesis, the team put the DISCERN network back through the paces of learning, only this time, they increased its learning rate — in other words, it wasn’t forgetting as many things. They “taught” it several stories, then asked it to repeat them back. They then compared the computer’s results to those of schizophrenic patients, as well as healthy controls.

What they discovered is that, like the schizophrenics, the DISCERN program had trouble remembering which story it was talking about, and got elements of the different stories confused with each other. The DISCERN program also showed other symptoms of schizophrenia, such as switching back and forth between third and first person, abruptly changing sentences, and just providing jumbled responses.
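
The “hyperlearning” knob is easy to caricature in code. Here is a minimal sketch, emphatically not the DISCERN architecture: a linear autoassociator stores three random “stories,” and cranking the learning rate up makes the weights diverge, so cued recall starts blending the stored patterns together.

```python
import numpy as np

rng = np.random.default_rng(0)
stories = rng.choice([-1.0, 1.0], size=(3, 64))   # three random "stories"

def recall_after_training(lr, epochs=50):
    """Delta-rule autoassociator: learn W so that W @ s ~ s for each story,
    then count how many stories are recalled correctly from noisy cues."""
    W = np.zeros((64, 64))
    for _ in range(epochs):
        for s in stories:
            W += lr * np.outer(s - W @ s, s) / 64.0   # delta-rule update
    correct = 0
    for i, s in enumerate(stories):
        cue = s * rng.choice([1, 1, 1, -1], size=64)  # corrupt ~25% of bits
        out = np.sign(W @ cue)
        correct += int(np.argmax(stories @ out) == i) # best-matching story
    return correct

print("normal rate: ", recall_after_training(lr=0.5), "of 3 stories recalled")
print("hyperlearning:", recall_after_training(lr=5.0), "of 3 stories recalled")
```

At the modest rate the updates converge and each noisy cue retrieves its own story; at the excessive rate the error term is amplified on every pass, and the network’s answers no longer stay tied to the story that cued them, a crude analogue of the confusions described above.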

Andrew Sullivan’s Distraction Sickness

Wednesday, September 21st, 2016

Andrew Sullivan doesn’t quite call for a Butlerian Jihad, but he does recognize that he developed a distraction sickness from modern technology:

Since the invention of the printing press, every new revolution in information technology has prompted apocalyptic fears. From the panic that easy access to the vernacular English Bible would destroy Christian orthodoxy all the way to the revulsion, in the 1950s, at the barbaric young medium of television, cultural critics have moaned and wailed at every turn. Each shift represented a further fracturing of attention — continuing up to the previously unimaginable kaleidoscope of cable TV in the late-20th century and the now infinite, infinitely multiplying spaces of the web. And yet society has always managed to adapt and adjust, without obvious damage, and with some more-than-obvious progress. So it’s perhaps too easy to view this new era of mass distraction as something newly dystopian.

But it sure does represent a huge leap from even the very recent past. The data bewilder. Every single minute on the planet, YouTube users upload 400 hours of video and Tinder users swipe profiles over a million times. Each day, there are literally billions of Facebook “likes.” Online outlets now publish exponentially more material than they once did, churning out articles at a rapid-fire pace, adding new details to the news every few minutes. Blogs, Facebook feeds, Tumblr accounts, tweets, and propaganda outlets repurpose, borrow, and add topspin to the same output.

We absorb this “content” (as writing or video or photography is now called) no longer primarily by buying a magazine or paper, by bookmarking our favorite website, or by actively choosing to read or watch. We are instead guided to these info-nuggets by myriad little interruptions on social media, all cascading at us with individually tailored relevance and accuracy. Do not flatter yourself in thinking that you have much control over which temptations you click on. Silicon Valley’s technologists and their ever-perfecting algorithms have discovered the form of bait that will have you jumping like a witless minnow. No information technology ever had this depth of knowledge of its consumers — or greater capacity to tweak their synapses to keep them engaged.

And the engagement never ends. Not long ago, surfing the web, however addictive, was a stationary activity. At your desk at work, or at home on your laptop, you disappeared down a rabbit hole of links and resurfaced minutes (or hours) later to reencounter the world. But the smartphone then went and made the rabbit hole portable, inviting us to get lost in it anywhere, at any time, whatever else we might be doing. Information soon penetrated every waking moment of our lives.

And it did so with staggering swiftness. We almost forget that ten years ago, there were no smartphones, and as recently as 2011, only a third of Americans owned one. Now nearly two-thirds do. That figure reaches 85 percent when you’re only counting young adults. And 46 percent of Americans told Pew surveyors last year a simple but remarkable thing: They could not live without one. The device went from unknown to indispensable in less than a decade. The handful of spaces where it was once impossible to be connected — the airplane, the subway, the wilderness — are dwindling fast. Even hiker backpacks now come fitted with battery power for smartphones. Perhaps the only “safe space” that still exists is the shower.

Am I exaggerating? A small but detailed 2015 study of young adults found that participants were using their phones five hours a day, at 85 separate times. Most of these interactions were for less than 30 seconds, but they add up. Just as revealing: The users weren’t fully aware of how addicted they were. They thought they picked up their phones half as much as they actually did. But whether they were aware of it or not, a new technology had seized control of around one-third of these young adults’ waking hours.

The interruptions often feel pleasant, of course, because they are usually the work of your friends. Distractions arrive in your brain connected to people you know (or think you know), which is the genius of social, peer-to-peer media. Since our earliest evolution, humans have been unusually passionate about gossip, which some attribute to the need to stay abreast of news among friends and family as our social networks expanded. We were hooked on information as eagerly as sugar. And give us access to gossip the way modernity has given us access to sugar and we have an uncontrollable impulse to binge. A regular teen Snapchat user, as the Atlantic recently noted, may have exchanged anywhere from 10,000 to as many as 400,000 snaps with friends. As the snaps accumulate, they generate publicly displayed scores that bestow the allure of popularity and social status. This, evolutionary psychologists will attest, is fatal. When provided a constant source of information and news and gossip about each other — routed through our social networks — we are close to helpless.

How does the mass media (including social media) control people?

Friday, September 2nd, 2016

How does the mass media (including social media) control people?

The most obvious way, and which gains nearly all of the attention, is in terms of propaganda. So the mass/social media is full of propaganda in favour of the sexual revolution, against Christianity; in favour of Leftism and against traditional values (e.g., marriage, family, biologically functional sexuality) and so forth.

But this is to miss the main point about content — which is the absence of content and the nature of assumptions.

The mass media simply eliminates all serious concerns.

[...]

But the main problem is the form not the content. The main problem with modern media addiction is that it shapes the way people think.

For a start, it takes up attention for a large and increasing proportion of the day.

[...]

Then the attention is grabbed, manipulated, switched — again and again, thousands of times a day. This trains the mind positively to expect and want such attention switching, and negatively to become unable to hold attention — and rapidly to become bored by situations that lack this stream of attention-grabbing and rapidly changing stimuli.