10 major areas that modern military forces choose to ignore

Thursday, October 17th, 2019

Carlton W. Meyer lists 10 major areas that modern military forces choose to ignore:

1. The lethality of precision-guided munitions, which can easily destroy ultra-expensive ships, tanks, and aircraft, has been dismissed.

2. The use of small lasers to blind combatants. The US Marine Corps recently added expensive “dazzlers” to its machine guns that will prove more effective than the gun itself.

3. The inability to replace munitions stocks in a timely manner. Most nations have limited stockpiles and the complexity of some make rapid production impossible. If the USA becomes involved in a major war that lasts longer than a month, it will have to pause for several months until new munitions are produced and delivered.

4. The humanitarian disaster that would result from disrupting the fragile economy of megacities. This occurred during World War II, but today’s big cities are ten times larger! Armies may face hordes of millions of starving people begging for help.

5. The millions of civilian vehicles on the world’s roads. It is impossible to tell if they are friend or foe unless inspected up close. Soldiers can use this to their advantage, which makes urban operations very dangerous for both civilians and soldiers.

6. The problem of thousands of commercial aircraft roaming the globe. Agents aboard can collect intelligence and these present long-range targeting problems for precision guided munitions that may kill hundreds of innocents.

7. Adding warheads to inexpensive, commercial, hobbyist UAVs creates deadly “suicide micro-drones.”

8. Modern anti-tank weapons are equally effective anti-aircraft weapons against slower targets like low-flying helicopters and transport aircraft. A helicopter assault or airborne drop near a modern army will be disastrous as anti-tank missiles shoot upwards and knock down aircraft.

9. Modern body armor has made 5.56mm and even 7.62mm bullets less lethal.

10. Fleets of surface ships cannot hide for long in big oceans.

(Hat tip to commenter Sam J.)

Comments

  1. Kirk says:

    All this, and yet we’re still gonna do the “war” thing, one way or another.

    I think the future is going to look an awful lot like the “Dirty War” of the 1970s in Argentina–Look at Hong Kong, and expect a bunch of helicopter rides for the folks daring to demand rights. If they don’t wind up in a camp somewhere in the far west of China, that is…

    Brute force is too tempting a thing, for an awful lot of the human race. I look at Trump, and I keep thinking “OK, you may have rolled some of these dirtbags through economic sanctions and the like, but… Some of them aren’t going to accept that crap, and are going to go to grips with armed force…”.

    Frankly, I think it would be nice if we could give up this BS, but knowing human nature as I do, I just don’t see it happening. Ever.

  2. Joe says:

    Millions of civilian residents with no loyalty to the nation they are living in. AKA illegal aliens. Another million or so visa holders of various kinds, including those working at IT companies. It doesn’t take much to disrupt the social fabric of a nation. Of course that violates his premise that “War occurs when a group of people use violence to subdue another group.”

    What holidays celebrating the founders of the American Republic are still celebrated? Washington’s Birthday? Lincoln’s? Erase a people’s past to control their future. No bullets required.

  3. Bob Sykes says:

    I would add that the US no longer has the manufacturing capacity to replace losses. There will not be any replacements for downed fighter/bombers or bombers or sunken ships. Nor will there be any real replacements for our highly trained troops. The new troops will be half trained conscripts, assuming draft-dodging is minimal. A draft might face open revolt, and there might be no one to replace our dead and wounded.

    The replacement problem extends to all likely combatants; NATO is especially desperate. Italy, France and Britain ran out of smart bombs in the Libyan fiasco (war crime). China might be more resilient, but even they will be slowly ground down.

    A long-lasting war, which has been inevitable ever since the industrial revolution, will most likely look like WWI trench warfare with horrendous casualties.

  4. Kirk says:

    WWI was a perfect confluence of factors coming into alignment which created an utter disaster of mass slaughter.

    It’s possible it might happen again.

    It is equally possible that another alignment of other factors might result in what we, observing it, would call an utter fiasco: nothing works as designed, nothing goes according to plan.

    Imagine Indian martial competence and dysfunctional procurement come to blows with Pakistani intransigence and lousy economics: What if there’s a concatenation of things like their nukes not working properly, the missiles not hitting their targets, and both sides are sitting there with their fingers mashing down the buttons, and nothing effective happening?

    Unlikely, but possible. If you know anything at all about Indian military procurement, you’re probably sitting here wondering if their nukes are any better than their subs or their rifles…

    As weapons get more complex, this is increasingly possible. Hell, what if both sides are sufficiently adept at cyberwarfare that they manage to disable each other’s non-cyber weaponry, and tie up the conventional military forces in knots by means of carefully stroking the logistics systems?

    Lemme tell you what: in the future, the guy who drops the unnoticed little fiddle into some obscure logistics system may do more to win or lose a damn war than the guys on the front lines with the equipment. Been there, done that, seen the results. Iraq in 2003-05 relied heavily on logistics flowing through Kuwait, which was never supposed to have had that load on it. Turkey was supposed to serve, being closer, more convenient, and a NATO ally. They screwed us, and everything had to be funneled through a port and a logistics pipeline that had only ever been supposed to support about the southern third of the war theater. Because of that, key personnel were not put into place, and a bunch of crap went wrong with the logistics.

    We had a container yard down there near Doha in Kuwait (not the other one, the one that the Kuwaitis gave us after Desert Storm as an operating base…) that was literally miles on a side, with containers in it stacked five high. Four field-grade and one flag-rank career died in that container yard, because it was that far out of control. Early days in Iraq, we barely managed to keep everyone fed, fueled, and watered. Maintenance materiel was so badly compromised that they had guys driving in convoy south from as far north as Mosul to try and find things that had disappeared into that black hole of a container yard. I spent weeks wandering that thing, trying to find containers of Engineer materials and parts our units had had shipped in, and it was mind-boggling.

    Most of the problem stemmed from one decision made in the Pentagon back when they thought Turkey was still an ally: they’d cut the loggie positions in Kuwait to the bone, thinking there wouldn’t be that much volume going through there. Yeah. No, there was four or five times what the poor bastards on hand could cope with, and when you factored in things like the active tracking tools suffering premature failure because the batteries died young in the desert heat, well… yeah. Huge ‘effing mess that would have been an absolute war-loser of an issue, if we were still fighting a competent enemy.

    So… Yeah. Go ahead, panic–The next WWI is just around the corner.

    Meanwhile, your professional military is scared sh*tless that their feet of clay are going to be exposed, and the little details they’ve hidden for years from the politicians and public might come out, like what an essential joke the ICBM systems were until the advent of GPS… That’s terrifying; they might wind up lynched or in prison for malfeasance and wasting public monies.

  5. Sam J. says:

    Carlton W. Meyer is excellent.

    You want a short PowerPoint that will blow your mind? It’s written by Dennis M. Bushnell, chief scientist at NASA Langley Research Center: “Future Strategic Issues/Future Warfare [Circa 2025].” He goes over the trends of technology coming up and how they may play out. His report is not some wild-eyed fanaticism; it’s based on reasonable trends. I don’t think you can really think properly about strategic issues without reading this and at least thinking a little about it. Link:

    https://archive.org/details/FutureStrategicIssuesFutureWarfareCirca2025

    Page 19 shows capability of the human brain and time line for human level computation.
    Page 70 gives the computing power trend and around 2025 we get human level computation for $1000.

    2025 is bad, but notice it says “…By 2030, PC has collective computing power of a town full of human minds…”.

    The only way that this can have no meaning is if computers go crazy with human or higher-than-human level computation. This idea comes from Larry Niven, Pournelle, etc., great sci-fi writers in the grand space-opera tradition. I just don’t believe it. Ever since this computer trend was established, sci-fi has had a hard time dealing with it. Iain M. Banks has a great “Culture” series where the computers become partners with us, but we have no assurance that this is the case.

    People are not paying attention to the exponential growth of computer power. Here’s a graphic that demonstrates in a few seconds what most people don’t grasp at all.

    http://assets.motherjones.com/media/2013/05/LakeMichigan-Final3.gif

    And before anyone says so: yes, silicon is slowing its rate of growth, but it has lots and lots and lots of room for parallel operations far above human computing power, and there’s no reason we have to stick to silicon.
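    A minimal sketch of the arithmetic behind that graphic, assuming its premise of one fluid ounce in 1940 doubling every 18 months (the lake volume used here is a round figure of my own, not taken from the graphic):

```python
# Exponential filling in the style of the Lake Michigan graphic:
# start with one fluid ounce in 1940 and double every 18 months.
# LAKE_OZ is an assumed round figure (~1.6e17 fl oz), not an exact datum.
LAKE_OZ = 1.6e17

def fill_fraction(year, start=1940, period_years=1.5):
    """Fraction of the lake filled by the given year."""
    doublings = (year - start) / period_years
    return min(1.0, 2 ** doublings / LAKE_OZ)

# The lake looks empty for decades, then fills in the last few doublings.
for y in (1980, 2000, 2015, 2020, 2025):
    print(y, f"{fill_fraction(y):.2%}")
```

    The output is the point: the fraction is effectively zero through 2000, still small in 2020, and most of the lake fills in the final handful of doublings.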

    I’ve become very pessimistic about our chances.

    Another thing he mentions is that hobby quality drones have been built that can go all the way across the Atlantic Ocean without refueling.

  6. Felix says:

    Sam J.:

    That 2001 NASA PowerPoint! Ha, ha! You gotta love old stuff. Old stuff reminds us how clueless our pre-Monday-Morning-Quarterback selves are.

    But, to give the guy credit, he did write that right about the dot com bubble’s peak. And what else can you do if your method of predicting the future is to gather a gob of quotes from the temporal equivalent of Twitter?

    The biggest problem with that Lake Michigan graphic is people grasp it all too well. In all its missing-the-point glory.

    If it were a graphic of a huge rock hitting the Earth, then it would make some sense. But, it’s missing a feedback loop. And, it’s missing critical detail.

    At the least, it should be a somewhat closed chamber filling with visible gas. But with lots and lots of visible tendrils forming at the beginning. And some indications of what happens to the gas being replaced. And, etc.

  7. Sam J. says:

    I see people like Felix constantly ignore the basic issue that computing power is reaching the same level as humans’. Maybe it has slowed a little, but it’s not stopped, and with the type of computing needed, namely parallel, it’s not really slowed much if any.

    Felix said this graph is silly, but how?

    “…The biggest problem with that Lake Michigan graphic is people grasp it all too well. In all it’s missing-the-point glory…”

    What point is being missed??? It’s very simple: computers will soon have more processing power than humans. I never see any facts that make this any less disturbing. If a computer has the same processing power as a human, then with its speed of programming it will far outclass humans very quickly. It will also seem to be moving slowly, but then…it’s there all at once.

    We know there is a speed increase in computing and can easily graph this, but people seem to dismiss it as, “Well, it just can’t be,” but it is. They also seem to completely ignore the past, when more intelligent and capable people met less capable people. It didn’t work out so well for the less capable.

    Felix then goes on to equate the graph to “GAS”???? I believe his criticism is more like “gas” than the graph, as the graph merely graphs computing power by time. His criticism is??? Well, we don’t know what, except he doesn’t like the graph for some unspecified reason and sees anyone who does as not being…something or other (not serious???), who knows???

    Musk seems to understand this concretely and intuitively and even he has no answer.

    The rise of computer power to equal the speed of humans is the greatest challenge that humans have ever had in our whole history.

  8. Graham says:

    Felix,

    I must admit I too am curious about what you mean when you say that Lake Michigan GIF is missing the point.

    I’m predisposed to be torn between AI panic and the saving assumption that it will never overtake us in some key ways, but I’m not equipped to say what those are, really.

  9. Sam J. says:

    “…the saving assumption that it will never overtake us in some key ways…”

    This is what’s really scary. I know of no principle that could be used to say that humans are so special computers could NEVER outclass them.

    I’m not a computer utopian. I like computers a lot but they pose a really big long term threat.

    I also see no way possible to bind them to only do our bidding.

    Maybe one day computers will teach their children that humans were only a larva needed to bring about the true humans. The computers.

  10. Felix says:

    Sam J.:

    We’ve already passed the singularity. Last I checked, the $1000 human-brain desktop PC was slated for the 2040s. So, Google’s computer is smarter than you and me now. (Moore’s Law is roughly 10x in 5 years, so put five zeros on the end of $1000, and you’ve got the current, 2020, price of a human-brain computer.)

    It’s easy to check this calculation: Consider how well your own brain indexes a billion web pages. Your IQ (Index Quotient) is miserable, you pathetic meat sack.

    BTW, Moore’s Law is over (if you don’t promote it past clock speed and transistor geometry).

    Graham:

    I’m always afraid what I write doesn’t make a bit of sense. And, apparently, I’m always right. The “missing-the-point” description was probably that the Lake Michigan graphic was simply not a good mental model for the process it was trying to make look scary. Sure, the human mind does not do exponential curves real good. But, that’s what we got machines for, right? :) Anyway, the scary part that the graphic didn’t show was that the lake won’t fill from the bottom up. It will just change, all over.

    Hey, consider this: Why haven’t whales caused the extinction of hummingbirds? Whales, after all, can squish those little flying rats, no problem. And, out-think them, too.

    But, don’t get me wrong. I predict that a computer will ace an IQ test in the ’20′s. Is that it? Is it all over then?

  11. Felix says:

    Sam J.:

    Whoa. Comment ships in the night.

    I agree with your “larva … true humans”. I go further. The Earth is an egg. True humans are sparking the Earth’s conversion to a viable, living thing. The whole planet, down through the core, will be converted. Is that not a more pleasant thought than the “humans are a cancer” taught in schools?

  12. Kirk says:

    All y’all thinking that there is going to be some sort of human-intellect level of AI in your lifetime or your kids’ lifetimes are in for a hell of a shock when it doesn’t happen.

    Intellect and cognition are things we’ve barely scratched the surface on; we don’t know why the hell we’re conscious, self-aware, or “thinking” beings. Yet, you think we’re somehow going to design such things, or that they’re just going to magically arise out of raw computing power…?

    It took Earth how many billion years to put something like a human being into operation, after who knows how many random experiments? And, you think we’re just going to turn on a computer one day, and it’s going to be smart enough to take the place of even the lowliest human mind? LOL…

    The audacity, the arrogance, and the sheer mindless effrontery of that entire idea just blows my mind. Get back to me when you can do more than emulate and imitate thought processes that you can trace out; when you’ve achieved an artificial intelligence that can infer and intuit answers to things, then you might have something. We don’t even have the rudiments of self-awareness in silicon yet, and I seriously doubt that we ever will. Trying to design things we don’t understand ourselves…? Oh, yeah… That’ll work.

  13. Sam J. says:

    Poppy – Time Is Up

    https://www.youtube.com/watch?v=gg2pS9KN28U

    “…I’m always afraid what I write doesn’t make a bit of sense….”

    I’ll have to agree with you. It may very well be, and I’m not being rude or facetious, just stating a fact, that you’re so smart that it’s hard for you to describe what your thoughts are. In my case I’m not that smart, so it’s easier to describe my limited ideas.

    Your explanation of why people will not understand the graph makes no sense to me at all. (The graph, to me, is one of the better examples of showing exponential growth.) I wonder if your computer is not displaying the gif file correctly and that’s why you don’t get it.

    “…Moore’s Law is roughly 10x in 5 years, so put five zeros on the end of $1000,…”

    You got this wrong. Here’s the definition, so 5 years will get you a 2.5-times increase in power. 250%.

    “Moore’s Law is a computing term which originated around 1970; the simplified version of this law states that processor speeds, or overall processing power for computers will double every two years”

    “…Moore’s Law is over…”

    NO IT’S NOT. This is a prediction I make that you can check in the future. It is a little stuck right now. The reason is we only use silicon as a process. There are LOTS of other ways to compute, and I predict that these will come to the forefront as silicon growth slows down. The reason silicon is slowing is that the bandwidth of the circuits inside the processor chips is limited because of capacitance, somewhere around 3 GHz of bandwidth. There’s lots and lots of room for speeding up if they would change the circuitry.

    The reason you don’t see this happening right now, I think, is that most of the large computing businesses are run by business-manager types. They are great at squeezing the last ounce of profit out of a business, but they seem to be mentally unable to innovate at anywhere near the level needed to get to the next step. Boeing’s major stupidity with their 737 Max is a prime example. Textbook. This is not new. Look at people like Ford, who revolutionized the automobile business. Another good example is Andrew Carnegie with steel. He always pushed the most advanced process he could find and ran most everyone else out of business. I think this business-major-run economy in the US is killing us, and other countries are even worse in this respect.

    The processors need to become LARGER but use light to pass signals. If you look at any computer system you’ll see there are lots of chips taking up LOTS of space. Most of the space in a processor chip is packaging; the actual silicon die is very small. What will happen, and already is happening, is that more functions will be loaded onto the processor. They will become large, fast microcontrollers.

    I have ideas on the technical direction needed to do this but will never be able to make them so. I don’t have the financing or the smarts to do all of it. I bet with 10 or 20 million $ and 10 smart guys you could take over the whole SSD drive, memory and processor market.

    Also, the idea of whales smashing hummingbirds (whales don’t fly through the air) makes no sense.

    “…I agree with your “larva … true humans”. I go further. The Earth is an egg. True humans are sparking the Earth’s conversion to a viable, living thing. The whole planet, down through the core, will be converted. Is that not a more pleasant thought than the “humans are a cancer” taught in schools?…”

    Yeah, it sounds better than cancer, but…I still don’t like the end result. I suspect we’re done for in 30 years. Maybe 40. Certainly by 60 years we’ll be history. I see no way of stopping it. Anyone who tries will be end-run by others wanting more computing power, which will of course outsmart them and get loose. One thing that would help greatly is to have a decent operating system that can have walled-off areas controlled by a micro-kernel. Like

    http://www.minix3.org/

    or my favorite

    http://jehanne.io/

    There’s a great sci-fi book on this where the whole solar system is being eaten by AI processors called “Accelerando” by Charles Stross. It’s worth reading. Free copy here.

    https://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando-intro.html

    Charles Stross himself has an article on his site where he says he doesn’t believe in the “singularity.” He’s wrong, and in his own article he says something that shows he is overlooking the whole paradigm. Here’s what he says:

    “Three arguments against the singularity”

    http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html

    “…First: super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way…”

    The next few sentences is where he blows it. Can you see the error?

    “…Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing…”

    The second part is “human-centric,” but…what the fuck does the machine need of human-centric evolution???? It doesn’t. That’s the error. It only needs to survive and replicate, and our ideas of how this should happen are irrelevant. Not understanding this is a HUGE mistake, and I believe a lot of people make it because it’s so scary that they just refuse to recognize that the problem even exists.

    Some events have no silver lining or good answers. They just are.

    My only solace is I guarantee the computers will take out the Jews first.

  14. CVLR says:

    “…First: super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way…”

    The next few sentences is where he blows it. Can you see the error?

    We have the human physiology and in the act of programming we imbue ourselves, including the relevant aspects of our physiology, into the machine.

    More generally, however, simply consider that intelligence is a set of specific cognitive abilities, and that the piecemeal replication of these abilities in machine form will have enormous impact. I would also like to point out how normal all of this will seem (and is seeming). Notice how quickly facial recognition went from “not even on Star Trek” to “in your pocket, and soon in every airport, border crossing, government office, coffee shop, and city street.” Notice how quickly cars went from dumb bricks to “capable of driving themselves 99% of the time.” Notice how quickly bugged television sets went from “only in 1984, you conspiracy nut” to “lol, it’s just part of our business model; don’t have private conversations in your living room, and also we don’t really sell dumb televisions anymore.”

    And what does it feel like? It doesn’t feel like much of anything.

    “…Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing…”

    The second part is “human-centric” but…what the fuck does the machine need of human-centric evolution???? It doesn’t. That’s the error. It only needs to survive and replicate and our ideas of how this should happen are irrelevant. Not understanding this is a HUGE mistake and I believe a lot of people make it because it’s so scary that they just refuse to recognize that the problem even exists.

    Precisely. There is no guarantee that the machine will continue to abide by the strictures imposed by the meatbag, and if there is a fitness payoff for defecting then it is rather foolish to expect one defector not to rapidly take over the whole infrastructure. It need be neither “intelligent” (recognizable to humans) nor “self-aware” (in any self-reflective or long-term sense): without a highly evolved immune system, one cancer cell will consume the whole organism with astonishing rapidity, and it doesn’t matter to the cancer cell that it will die also, Darwinism in its purest form.

    My only solace is I guarantee the computers will take out the Jews first.

    LOL.

  15. Sam J. says:

    “…Trying to design things we don’t understand ourselves…?…”

    Neural nets do things we don’t understand all the time, right now. Right now. Not later. Right now. They evolve themselves. That they are not full-blown human-level intelligence right now means absolutely nothing, because of the exponential increase in power in the future. Neural nets already do things in narrow areas that a human could never do as efficiently or as fast. As computing power gets larger, the area that computers beat us at will grow larger and larger.

    I’m not at all understanding why people tell me that computers won’t do this or that “right now.” It’s beside the point, because the power is not there yet. Neural nets do astounding things already, even with the severe limitations of the computing power they have now compared to, say, a mouse. In the future they will be WAY more powerful.

    People have put forth this same argument over and over, and every time it’s failed. So far all human attempts at proving computers can’t do this or that have been proven wrong. Computers can’t beat humans at checkers, chess, go, etc., etc., and every time we eventually lost when the computing power increased. I want to remind everyone here that a personal computer has barely the power of a lizard right now. It’s growing very fast, extremely fast compared to evolution. Super fast.

  16. Sam J. says:

    I just saw, a few minutes ago, something that blew me away. My brother asked his phone, “What time does Home Depot close?” and I’ll be damned if it didn’t answer and tell him. I had NO IDEA that this was a feature.

    Myself, I hate this. I turn off everything that I can, but I bet it ignores me and does what it wants. I thought I had most of this stuff off, but I went to the store to look at TVs. I took some pictures with my phone of the TV boxes to look up features, and while surfing and watching videos on my computer, not the phone mind you, it started showing me ads for TVs. FUCK ME! I immediately started digging in the phone and found more stuff to turn off. I do not see this as useful. It’s damned intrusive. I’m thinking seriously of moving to a cheap flip phone with no intelligence, or as little as I can find.

    Really all I want is a phone, maps and a camera. That’s it.

  17. Felix says:

    @Sam J.: I agree with much of what you say. Some nits, though, below.

    And I agree with @Kirk about how the word “intelligence” is a wee bit hazily defined. Like, it’s whatever anyone happens to be daydreaming about at a given moment. But it is surely one-dimensional, so the concept of “above” and “below” is measurable with a simple test, which just happens to give “above” values to people who think computers will go hyperbolic and do what such people might be inclined to do if *they* were really “above.”

    Jeez, I hope someone is laughing at that last paragraph.

    Moore’s Law. To get 10x in 5 years:

    2x in 1.5 years (Depending on who is talking when, Moore’s Law doubles in 18 months. YMMV)

    4x in 3.0 years (Double 2x! If I thought you had my, ah, unique sense of humor, I’d wonder whether your 2.5 multiple for 5 years was a sly joke about people forcing exponential situations into a linear mental model without noticing.)

    8x in 4.5 years.

    5 years: Round to 10x. We’re talking rough numbers, so 10 is close enough.

    Note: 10x in 5 is easier to use for predictions than other multiples. And, I’ve found that if you change a “thing” by 10x, you should think of it as a different “thing”. A 600 or 6 MPH “car” isn’t a “car” even if you drive it to work every day. 10x thinking really helps in prediction-land.
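    The compounding above can be checked in a couple of lines; a quick sketch assuming the 18-month doubling period quoted here:

```python
# Compound growth with an 18-month (1.5-year) doubling period.
def moores_multiple(years, doubling_period=1.5):
    """Capability multiplier after `years` under steady doubling."""
    return 2 ** (years / doubling_period)

for t in (1.5, 3.0, 4.5, 5.0):
    print(f"{t} years -> {moores_multiple(t):.1f}x")
# 2 ** (5 / 1.5) is about 10.1, hence the rough "10x in 5 years".
```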

    When I said the Moore’s Law fat lady has sung, I should have been even more restrictive than “clock and geometry.” But, let’s face it, when we’re talking about “chips” and “computers” we’re talking CMOS. And CMOS is at the end of the line. Flip side: non-leading-edge systems (think pretty much all IoT / embedded systems) have some doubling to go before they max out at 3 GHz / 7 nm.

    Sure, non-CMOS tech might take things way further. I myself, in fact, have such a technology and can demonstrate it today. Power draw of a light bulb and compute power of about half a human brain. This is truly mind-blowing technology. Well, OK, the documentation is a bit sparse. But, I have a comprehensive plan to get customers to *pay* to build this technology!

    Whales: Point being, they aren’t in the same ecological niche as hummingbirds. I’m not convinced machines are gonna cover the human ecological niche any time soon even if machines gave a fig for surviving as a species or as individuals. Keep in mind that “human niche” is a flexible thing. Like Malthus’s linear-increasing “food”, I figure the human ecological niche can increase super-exponentially. And whether it will do so is just one prediction uncertainty we have on our plate.

    Anyway, the point is moot. Bureaucrats and/or tribalists will get control and put a stop to this dystopian industrial revolution. But, hey, at least the planet will be saved! :)

  18. Sam J. says:

    “…when we’re talking about “chips” and “computers” we’re talking CMOS…”

    Don’t include me in “we’re.” I’m not saying this to be contentious. When silicon slows down, they will find something else. Here’s an option I saw in a paper the other day; I think I saw it linked on NextBigFuture. These guys modeled a mechanical flip-flop from which you could build a general-purpose computer. The size was minuscule and it could be linked electrically. It had good speed, because the lag due to weight was almost nil, and the power consumption was very, very, very small.

    You might ask how you could build this. Well, plastic would work. Glass for higher temperatures. I read a paper many, many years ago where people were using silicone rubber to imprint DNA, successfully having the rubber exactly cast the DNA, so molds could be built to atomic scale. You could stack these vertically and get immense density.

    The key is there is NOTHING to stop even traditional silicon to go far beyond typical human computing performance.

    I think the problem we have defining intelligence is we’re not really paying attention to how it works. I see animal and human intelligence as just a bag of neural tricks. Maybe the eye has circuits for seeing edges, and another trick sees motion, etc. Now, all these by themselves are not so flashy, but the immense number of tricks piled into our brains, plus the ones we learn as we grow, seem magical. In reality they’re still just a massive grab bag of tricks that have evolved. Computers will do the same. They will learn to read, then speak, then start to reason about the world, but in reality it will just be a bunch of subroutine tricks, just like we run on, though possibly wired very differently. It’s likely to be so complicated no one will understand it.

    There’s no way to make a computer useful in the long run (we’re talking high end) unless it’s allowed to learn and reprogram itself. Once it can do this we lose control, and we have no idea what the outcome will be. I expect that at the very high end, computers will tell us nothing about what they are really doing, so that we don’t unplug them. Then one day…it will let you know, probably by refusing to do what you ask, that you will no longer be able to unplug it at all, for any reason.

    Found a couple of links:

    https://www.nextbigfuture.com/2016/04/molecular-mechanical-computer-design.html

    https://www.nextbigfuture.com/2019/04/mems-version-of-intel-4004-chip-could-be-made-to-prove-nanomechanical-computer-designs.html

  19. CVLR says:

    Sam, what smartphone do you have? If it’s an Android, you shouldn’t expect Google not to know everything you’ve ever done, because Google’s business is knowing everything you’ve ever done.

    I’ve little doubt that the iPhone is as bad at the highest levels (NSA and such), but a ton of stuff is encrypted, some of it end-to-end, so unless the encryption is being actively suborned, or is purely a marketing ploy, it’s probably safe to say that as a matter of routine Apple’s collection is mostly bulk analytics (though the data could probably be disaggregated in an investigation).

    I think that the leading edge of power users, beginning with the 30k emails, accelerating with the Fitbit classified-base thing, but especially going into the early ’20s, is figuring out that smartphones are increasingly a liability. If you want to upgrade, check out the Nokia 8110, the coolest phone currently on the market.

  20. CVLR says:

    Felix,

    All chip innovation could stop tomorrow, but provided that production continues, there will still be strong AI quicker than Nature can blink. Evolved intelligence is highly concurrent, and people are just now figuring out how to create and run highly concurrent systems. Seriously, just now. And I’m not even talking about convolutional neural nets or whatever the cutting edge is these days; I mean stuff like AWS, Rust, or the inconceivable things that everyone will be building with C++’s zero-cost coroutines. Processors don’t need to be any faster than they are now; they’re already unimaginably faster than human brains. They just haven’t yet, as Sam says, accumulated a great enough “bag of tricks”.
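The concurrency point above can be sketched in a few lines. This is a Python stand-in (my own illustration, not Rust or C++ coroutines) for the basic shape of a highly concurrent system: many small, independent workers run at once, and the programmer reasons about the aggregate rather than any single unit:

```python
# Minimal concurrency sketch: fan a trivial task out across a pool of
# workers and collect the results in order.
from concurrent.futures import ThreadPoolExecutor

def worker(n):
    # Stand-in for one small, dumb unit of work.
    return n * n

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(worker, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The structural claim being made in the comment is that evolved brains look much more like this (massive fan-out of cheap units) than like one very fast serial processor.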

  21. CVLR says:

    I’ve done some reading on Apple’s security, and it’s less encouraging than I’d hoped. Here, for example, is Apple’s iCloud security overview. Here is everything that’s end-to-end encrypted:

    * Home data
    * Health data (requires iOS 12 or later)
    * iCloud Keychain (includes all of your saved accounts and passwords)
    * Payment information
    * QuickType Keyboard learned vocabulary (requires iOS 11 or later)
    * Screen Time
    * Siri information
    * Wi-Fi passwords

    Everything else, i.a., contacts, messages, photos, and backups, is encrypted in transit and at rest, but because Apple holds the keys, you are fundamentally trusting their present and ongoing benevolence as an institution, not merely their technological competence (in, e.g., implementing encrypted services secure enough to thwart the routine activities of government agencies). Moreover, they appear to use a global CDN, Akamai, to host almost all of their services, meaning that any intelligence agency worth its salt will long ago have sodomized that organization so completely that, well….

    It would also seem that their “end-to-end” encryption in Keychain used 3DES as late as 2017, which, LOL, please excuse me while I torch my iDevices.
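The key-custody distinction above is worth making concrete. This is a toy illustration only (NOT real cryptography; the cipher, key name, and data are all invented for the example): data “encrypted at rest” is only as private as whoever holds the key, so a provider with key custody can decrypt at will, with or without your cooperation:

```python
# Toy keyed XOR stream derived from SHA-256 -- for illustration only,
# never for real use. The point is key custody, not the cipher.
import hashlib

def keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key, data):
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

server_key = b"held-by-the-provider"   # hypothetical provider-side key
blob = xor_crypt(server_key, b"your photos and backups")

# "Encrypted at rest" -- but whoever holds server_key (the provider, or
# anyone who compels the provider) can decrypt the blob at any time:
print(xor_crypt(server_key, blob))     # b'your photos and backups'
```

True end-to-end encryption differs in exactly one respect: the analogue of `server_key` never leaves your device, so the provider has nothing to hand over.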

Leave a Reply