On IT Evolution and Moore’s Law

 

Intel now says that the technological “cadence” of Moore’s Law is “now closer to 2.5 years than two.” Irving Wladawsky-Berger thinks that a semiconductor stutter-step could be signaling that a new era is approaching:

The Cambrian geological period marked a profound change in life on Earth. Before it, most organisms were very simple: individual cells or simple multi-cell organisms, sometimes organized into colonies, such as sponges. After a couple of billion years, evolution deemed the cell to be good-enough, that is, its continued refinement did not translate into an evolutionary advantage.

Then around 550 million years ago a dramatic change took place, which is known as the Cambrian Explosion. Evolution essentially took off in a different direction, leading to the development of all kinds of complex life forms. “Over the following 70 to 80 million years, the rate of diversification accelerated by an order of magnitude and the diversity of life began to resemble that of today.”

The IT industry is now going through something similar. Over the past several decades, we’ve been perfecting our digital components – microprocessors, memory chips, disks, networking and the like – and we used them to develop families of computers – mainframes, minicomputers, servers, PCs, laptops and so on.

But around ten years ago or so, the digital components started to become powerful, reliable, inexpensive, ubiquitous … and good-enough to start moving to a new phase. The acceptance of the Internet introduced a whole new set of technologies and standards for interconnecting all these components. Today, digital components are becoming embedded into just about everything – smartphones, IoT devices, robots, consumer electronics, medical equipment, airplanes, cars, buildings, clothes and on and on and on.

The same is true even for data centers and large supercomputers. “The question is not how many transistors can be squeezed onto a chip, but how many can be fitted economically into a warehouse,” writes The Economist.

Technology continues to be very important to IT, but after 50 years it’s no longer the key driver of innovation, and computers are no longer its preponderant product families. The digital world has now entered its own Cambrian age. Innovation has now shifted to the creation of all kinds of digital life forms, and to the data-driven analytic algorithms and cognitive designs that infuse intelligence into these artificial life forms.

Published in Science & Technology

There are 27 comments.

  1. Valiuth Member
    Valiuth
    @Valiuth

    So now is the time to crush the machines before they take over?

    Technology, like life, is an emergent process. One cannot predict how it will flow forward from its constituent parts. In the Precambrian, no organism could have predicted the emergence of the liver or the exoskeleton, yet most of the constituent components for them were probably largely present at the time.

    The future is going to be a wild ride. That is assuming some savages don’t kill us all before we get there.

    • #1
  2. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    Moore’s Law taken in its “computing power” formulation hasn’t been true for a decade. Witness the rise of the multicore processor. We don’t have twice as much computing power in a single processor; we’ve maxed out the single processor and started putting multiple processors in one chip and calling them “cores,” then putting 100 of those chips together and calling that a “rack,” then putting 1,000 racks together and calling that a “data center.”

    The problem is that no one knows how to write software that works with more than one processor at a time. We fudge it; we kludge it; we bang two rocks together and sometimes we start a fire that way. But mostly it works by accident, and the rest of the time people like me pull our hair out over what we call, in a single node in a rack, “deadlock,” “livelock,” and “race conditions,” and at other scales “serializability issues.” Ricochet itself has seen them, with replies showing up before what they’re replying to, and like/unlike button states being the opposite of the 0/1 count right next to it.
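
    (For illustration, a minimal Scala sketch of the kind of race condition described above; the object and counter names are made up for the example, not taken from any real codebase.)

    ```scala
    // Two threads bump a shared, unsynchronized counter. "counter += 1" is a
    // read-modify-write, so updates from the two threads can interleave and be
    // lost: the program usually prints something less than 2,000,000, and the
    // exact result varies from run to run.
    object RaceConditionDemo {
      var counter: Long = 0L // shared mutable state, no synchronization

      def main(args: Array[String]): Unit = {
        def incrementer = new Runnable {
          def run(): Unit = for (_ <- 1 to 1000000) counter += 1
        }
        val t1 = new Thread(incrementer)
        val t2 = new Thread(incrementer)
        t1.start(); t2.start()
        t1.join(); t2.join()
        println(s"expected 2000000, got $counter")
      }
    }
    ```

    Run it a few times and the total drifts; that nondeterminism is exactly why these bugs are so hard to reproduce.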

    The current revolution in software development is actually an old one: a cadre of my colleagues and me insisting these problems are best solved by applying mathematical logic to programming (“functional programming”) vs. the now old guard (“object-oriented programming”). You would not believe the shouting matches.

    Wait… you’re an economist. So yes, you would.

    • #2
  3. Pilli Inactive
    Pilli
    @Pilli

    In order for this continuing “revolution” to take place, the communications techniques and technologies are going to have to improve dramatically.  I fear this will not happen when the gummint gets its paws fully into the Internet.  Net neutrality is just one aspect of government intervention that will cause issues.

    • #3
  4. Last Outpost on the Right Inactive
    Last Outpost on the Right
    @LastOutpostontheRight

    Moore’s Law is NOT an inevitability, like gravity or the late-season collapse of the Orioles.

    More than anything else, Moore’s Law was a promise made by Gordon Moore, Fairchild Semiconductor’s Director of R&D, back in 1965. Keeping that promise is only possible if we stop protecting incumbent businesses, and remove the federally imposed obstacles to innovation.

    • #4
  5. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    Last Outpost on the Right: More than anything else, Moore’s Law was a promise made by Gordon Moore, Fairchild Semiconductor’s Director of R&D, back in 1965. Keeping that promise is only possible if we stop protecting incumbent businesses, and remove the federally imposed obstacles to innovation.

    Eh, hold on. Moore himself has pointed out the obvious: there is ultimately a physical limitation involved. When you start to bump into limitations due to quantum tunneling, you’re near the end of the line.

    • #5
  6. Don Tillman Member
    Don Tillman
    @DonTillman

    Last Outpost on the Right: Moore’s Law is NOT an inevitability, like gravity or the late-season collapse of the Orioles.

    More than anything else, Moore’s Law was a promise made by Gordon Moore, Fairchild Semiconductor’s Director of R&D, back in 1965. Keeping that promise is only possible if we stop protecting incumbent businesses, and remove the federally imposed obstacles to innovation.

    Moore’s Law is not a promise; it’s an observation.

    More importantly, Moore’s Law has a very specific mechanism that drives it, and when you understand the mechanism, the future path is clear.

    I described the mechanism behind Moore’s Law here:

    http://ricochet.com/archives/moore-s-law/

    • #6
  7. Don Tillman Member
    Don Tillman
    @DonTillman

    Great Ghost of Gödel: Moore’s Law taken in its “computing power” formulation hasn’t been true for a decade. Witness the rise of the multicore processor. We don’t have twice as much computing power in a single processor; we’ve maxed out the single processor and started putting multiple processors in one chip and calling them “cores,”

    Moore’s Law never described the amount of power in a single processor.  Heck,  single-chip processors didn’t exist back then.

    While physical limitations have maxed out the clock rate, the use of multiple cores absolutely increases the density and computing power of chips.

    (And really, what’s the first thing that happens when someone comes out with a faster chip?  “Hey, let’s see it do all these random processes at once!”)

    And while software doesn’t currently support multiple cores very well, I think that’s likely going to be a future area of software growth.

    • #7
  8. Aaron Miller Inactive
    Aaron Miller
    @AaronMiller

    On a related note, one consequence of rapid advancement in processing power is that we are just two generations away from 1:1 visual fidelity in interactive virtual worlds.

    Our eyes are equivalent to the quality of a 30-megapixel camera. You don’t perceive improvements in frame rate beyond 72 frames per second. Many games already run at a rate of 60 frames per second. The best resolution for humans, then, is 8000 x 4000 pixels, or several times better than today’s best displays. That is about 20 billion to 40 billion triangles per second in terms of graphics rendering.
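
    (A back-of-the-envelope check of the arithmetic above, written as a small Scala sketch; the 20-40 billion triangles-per-second figure is the commenter’s own estimate and is not derived here.)

    ```scala
    // Pixel throughput implied by the figures in the comment:
    // 8000 x 4000 = 32 million pixels per frame (the ~30 megapixel figure),
    // and at 72 frames per second that is roughly 2.3 billion pixels per second.
    object EyeballMath {
      def main(args: Array[String]): Unit = {
        val width  = 8000L
        val height = 4000L
        val fps    = 72L

        val pixelsPerFrame  = width * height
        val pixelsPerSecond = pixelsPerFrame * fps

        println(f"pixels per frame:  ${pixelsPerFrame / 1e6}%.1f megapixels")
        println(f"pixels per second: ${pixelsPerSecond / 1e9}%.2f billion")
      }
    }
    ```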

    Resolution standards have been increasing rapidly along with processors.

    [Chart: digital video resolutions, from VCD to 4K]

    Additionally, environmental design, virtual physics-based character animations, and other necessary components of realistic simulations are being accomplished through procedural generation. That means that even while artists and modelers are empowered to craft virtual experiences with ever finer detail, programmers are gradually figuring out how to automate the process of creation, producing compelling virtual experiences without so much investment of time and money.

    • #8
  9. Troy Senik, Ed. Member
    Troy Senik, Ed.
    @TroySenik

    Last Outpost on the Right: Moore’s Law is NOT an inevitability, like gravity or the late-season collapse of the Orioles.

    Ricochet sentence of the week.

    • #9
  10. Whiskey Sam Inactive
    Whiskey Sam
    @WhiskeySam

    Mike Malone about a decade ago was talking about the coming end of technology racing ahead and how the future was in creating content for that technology.

    • #10
  11. Black Prince Inactive
    Black Prince
    @BlackPrince

    I’d love to hear anonymous’s take on this.

    • #11
  12. Boss Mongo Member
    Boss Mongo
    @BossMongo

    Yeah, the technology that we’ve allowed to become essential to our lives is about to go through a Cambrian Explosion.  Sounds cool.

    Until we get hit by a big enough EMP or Coronal Mass Ejection.

    Then Mongo, with his stubby pencil and notepad, won’t look so uncool.

    • #12
  13. Michael S. Malone Member
    Michael S. Malone
    @MichaelSMalone

    Whiskey Sam: Mike Malone about a decade ago was talking about the coming end of technology racing ahead and how the future was in creating content for that technology.

    Yep.  But Moore’s Law moving out to 2.5 years/generation isn’t the end of the world.  Remember, in the 1960s the Law was closer to 18 mos/generation.  It slipped to 2 years/gen in the ’90s, about the time the Law started getting a lot of attention from the general public — and so that’s how we think of it.  It is still an extraordinary pace of change — and there’s no guarantee it will continue to slow . . . a physical breakthrough with a new kind of non-silicon gate could accelerate the Law again.
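
    (For scale, a small Scala sketch of what the cadence change means over a decade, using the round numbers in the comment; this is only compounding arithmetic on the doubling period, not a claim about any particular product line.)

    ```scala
    // Growth over ten years for different doubling periods: roughly 100x at an
    // 18-month cadence, 32x at 2 years, and about 16x at 2.5 years. Slower,
    // but still exponential.
    object CadenceMath {
      def growth(years: Double, doublingPeriodYears: Double): Double =
        math.pow(2.0, years / doublingPeriodYears)

      def main(args: Array[String]): Unit = {
        for (period <- Seq(1.5, 2.0, 2.5))
          println(f"doubling every $period%.1f years -> ${growth(10, period)}%.0fx in a decade")
      }
    }
    ```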

    As for the Law being an observation, not a promise:  The fact is that it began as an observation, by Gordon in 1965.  But by 1985, the entire semiconductor industry had rebuilt itself around the Law, essentially making a contract with mankind to maintain the Law as long as possible.  Go ask Andy Grove or Craig Barrett, or Paul Otellini, if they ever felt, while running Intel, that Moore’s law was just an observation.

    • #13
  14. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    Don Tillman: Moore’s Law never described the amount of power in a single processor. Heck, single-chip processors didn’t exist back then.

    No, it referred to transistor count. But those transistors are used to form logic gates, and it turns out there’s a physical limit to how many transistors you can usefully arrange into gates per unit area of silicon, and you hit that limit before you hit the limit on transistors-per-wafer itself; hence multicore.

    While physical limitations have maxed out the clock rate, the use of multiple cores absolutely increases the density and computing power of chips.

    Again, sure, but that just reinforces my point:

    (And really, what’s the first thing that happens when someone comes out with a faster chip? “Hey, let’s see it do all these random processes at once!”)

    Only because OSes have promised to handle scheduling multiple processes efficiently for us since MULTICS (ironically, UNIX is “single-process MULTICS”).

    And while software doesn’t currently support multiple cores very well, I think that’s likely going to be a future area of software growth.

    Again, to my point: we actually do know how to do this; it’s just that the solutions involve math (Tony Hoare’s CSP, circa 1978, showed up in Go in 2007; JoCaml is a more obscure dialect of the obscure OCaml language; etc.). My team uses the more mainstream Scala language and tackles concurrency using purely functional programming.
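
    (A minimal sketch of the functional style of concurrency, not the commenter’s actual stack: plain Scala Futures composed with a for-comprehension. The point is that the two computations share no mutable state, so the combination is deterministic no matter which one finishes first. The fetchPrice and fetchQuantity names are invented for the example.)

    ```scala
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    // Two independent computations run concurrently; because neither touches
    // shared mutable state, combining their results with map/flatMap gives the
    // same answer regardless of scheduling.
    object FunctionalConcurrencySketch {
      def fetchPrice: Future[BigDecimal] = Future { BigDecimal("19.99") } // stand-in for real work
      def fetchQuantity: Future[Int]     = Future { 3 }                   // stand-in for real work

      def main(args: Array[String]): Unit = {
        val price    = fetchPrice    // both futures are started here, in parallel
        val quantity = fetchQuantity
        val total: Future[BigDecimal] =
          for { p <- price; q <- quantity } yield p * q

        println(Await.result(total, 5.seconds)) // prints 59.97
      }
    }
    ```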

    • #14
  15. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    Great Ghost of Gödel: Tony Hoare’s CSP, circa 1978, showed up in Go in 2007…

    BTW, progress along one dimension can mean regression along others…

    https://twitter.com/extempore2/status/613735221498806272

    • #15
  16. Don Tillman Member
    Don Tillman
    @DonTillman

    Michael S. Malone: As for the Law being an observation, not a promise: The fact is that it began as an observation, by Gordon in 1965. But by 1985, the entire semiconductor industry had rebuilt itself around the Law, essentially making a contract with mankind to maintain the Law as long as possible.

    Where do you get this “contract with mankind” from?

    Go ask Andy Grove or Craig Barrett, or Paul Otellini, if they ever felt, while running Intel, that Moore’s law was just an observation.

    Moore’s Law clearly became a lot more than just an observation.  It’s a hugely powerful mechanism that has been applied in other places, though to a lesser degree, as I describe here.

    • #16
  17. Michael S. Malone Member
    Michael S. Malone
    @MichaelSMalone

    Don Tillman:

    Where do you get this “contract with mankind” from?

    From the gentlemen themselves.  When I profiled Barrett and Otellini for the Wall Street Journal they both told me that their greatest fear, and what kept them up at night as CEOs of Intel, was that history would remember them as having let Moore’s Law fail on their watch.

    Grove told me, when I was writing The Intel Trinity, that he realized that if he could keep Intel moving at Moore’s Law he could eventually outrun and beat competitors like Zilog, AMD and Motorola — and he was right. But he also admitted that this commitment to the Law had taken on a momentum of its own, setting an expectation for this pace of change in the global economy that the semiconductor industry could never, voluntarily, shirk.

    And Moore himself has told me many times that his Law has taken on a life of its own in the larger culture — one he never intended for it.

    Are those sufficient sources?

    • #17
  18. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    Michael S. Malone: Grove told me, when I was writing The Intel Trinity, that he realized that if he could keep Intel moving at Moore’s Law he could eventually outrun and beat competitors like Zilog, AMD and Motorola — and he was right.

    At the cost of completely missing the low-power market and even fabbing ARM processors for other manufacturers.

    That’s gonna leave a mark.

    • #18
  19. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    If you want to keep your eye on the future of microprocessor architecture (both of you…), follow developments in RISC-V.

    • #19
  20. SParker Member
    SParker
    @SParker

    Comments on #2:

    What is the “computing power” formulation of Moore’s Law?  Moore’s observation applies to transistors on a chip.  “Computing power” necessarily depends on architecture which leads to benchmarking controversies which leads to SParker’s corollary to the 2nd law of thermodynamics:  Computer programmers will argue about the inconsequential until it’s time to go home.

    The problem you describe is a function of interrupts and has been happening on single-core systems since the operator dropped Og’s box of punch cards (a PL/1 compiler due the following Tuesday, legend has it) and Og built a time-sharing system to show that jerk something.

    Moreover, I just did a delightful six-core render of the Stanford bunny rabbit and my computer did not blow up once.  Thus I refute GoG.  I feel like Bishop Berkeley.

    Granted, interrupt-driven and parallel thingies are a pain in the ass to debug – because problems are generally hard to reproduce. Still smarting over a 4th of July weekend spent finding out that the thing intermittently killing my fault-tolerant industrial controller, due for safety certification by goddamned TUEV SUED (the hard-ass Germans from Munich, not the pushover Rhinelanders), was a programmer who couldn’t be troubled to save the BC register of an I/O processor’s Z80 on entry to the interrupt handler. (“I didn’t think anyone else was using it” was a crap excuse, Ken.)

    As for procedural vs. functional since 1957, see SParker’s corollary.

    • #20
  21. Miffed White Male Member
    Miffed White Male
    @MiffedWhiteMale

    SParker: What is the “computing power” formulation of Moore’s Law?  Moore’s observation applies to transistors on a chip.  “Computing power” necessarily depends on architecture which leads to benchmarking controversies which leads to SParker’s corollary to the 2nd law of thermodynamics:  Computer programmers will argue about the inconsequential until it’s time to go home.

    “What Moore giveth, Bill Gates taketh away”.

    • #21
  22. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    SParker: What is the “computing power” formulation of Moore’s Law?

    “Processor speed doubles every X,” as opposed to anything Moore actually said.

    Moore’s observation applies to transistors on a chip. “Computing power” necessarily depends on architecture which leads to benchmarking controversies which leads to SParker’s corollary to the 2nd law of thermodynamics: Computer programmers will argue about the inconsequential until it’s time to go home.

    Which is why “processor speed doubles every X” never was worthy of anything but eye-rolling.

    Moreover, I just did a delightful six-core render of the Stanford bunny rabbit and my computer did not blow up once. Thus I refute GoG. I feel like Bishop Berkeley.

    Covered by “banging two rocks together sometimes starting a fire.” I’ve used pthreads too.

    As for procedural vs. functional since 1957, see SParker’s corollary.

    Procedural vs. functional is not inconsequential. Since you mention 1957, I have to assume you know this, since you must be referring to FORTRAN vs. LISP (strictly speaking, 1958). Granted, “functional” doesn’t get you all the way to “if it compiles it doesn’t have deadlock, livelock, or race conditions” the way Go or JoCaml do—for that you need a process calculus in the type system—but it does give trivial parallelism.
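
    (For illustration, a hedged Scala sketch of the “trivial parallelism” point: mapping a pure function over a parallel collection. Because the function has no side effects, there is no shared state to race on, and the parallel result equals the sequential one. Note that .par is in the standard library up through Scala 2.12 and moves to the separate scala-parallel-collections module in 2.13.)

    ```scala
    // In Scala 2.13+ this needs the scala-parallel-collections module and
    //   import scala.collection.parallel.CollectionConverters._
    object TrivialParallelism {
      def f(n: Int): Long = n.toLong * n // pure: same input, same output, no side effects

      def main(args: Array[String]): Unit = {
        val xs = (1 to 1000000).toVector
        val sequential = xs.map(f).sum
        val parallel   = xs.par.map(f).sum // work is split across cores by the library
        println(sequential == parallel)    // always true; evaluation order doesn't matter
      }
    }
    ```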

    “All programming languages are equally powerful/good” is false.

    • #22
  23. Ball Diamond Ball Member
    Ball Diamond Ball
    @BallDiamondBall

    No need to optimize this.  The hardware will cover it up.

    • #23
  24. Dan Hanson Thatcher
    Dan Hanson
    @DanHanson

    Moore’s law is best thought of as a simple application of supply and demand.  Once we discovered how much ‘room at the bottom’ there was,  and the immense wealth to be found in exploiting it,  an economic race to the bottom started up.  And as subsequent generations of components were made,  they became more and more useful to the world and thus market pressure grew even stronger.  So strong that the technology is the only limiting factor – not available investment.

    Therefore,  we will continue to improve as fast as technology allows until we start hitting some limits.  It’s not a law of nature,  or a market contract or anything like that.  It’s simply an emergent property of a complex system.  Moore’s law is the technical equivalent of an industry that drives profits down through competition until they average just enough to keep the doors open, invest in the future,  and return enough capital to investors to compensate them for risk and time-value of money.

    What Gordon Moore contributed was a heuristic for understanding the nature of the race.  He realized roughly what it was going to take to go smaller,  looked at where the theoretical limits were and how far away from them we were,  and grasped the nature of the challenge and where it would take us.   But he didn’t discover a law of nature or describe some physical reason why transistors must be halved in size every two years.

    Btw,  if you haven’t seen Feynman’s Nanotechnology lecture  from which the phrase “there’s plenty of room at the bottom” comes,  I urge you to watch it.  He tells you the mechanism behind Moore’s law – that once you can build a smaller device,  you can use the power of that to build even smaller devices.  You’re building the technology that helps you build the technology, hence the exponential growth rate.  Anyway,  it’s a great lecture.  He first gave this lecture in 1959 (!!).

    • #24
  25. Don Tillman Member
    Don Tillman
    @DonTillman

    Great Ghost of Gödel:

    Don Tillman: And while software doesn’t currently support multiple cores very well, I think that’s likely going to be a future area of software growth.

    Again, to my point: we actually do know how to do this; it’s just that there’s math (Tony Hoare’s CSP, circa 1978, showed up in Go in 2007; JoCaml is a more obscure dialect of the obscure OCaml language; etc.) My team uses the more mainstream Scala language, and tackles concurrency using purely functional programming.

    Sure, but those are specialized and not in general use or generally applicable.  And different languages and approaches are necessary for different levels and different types of granularity.

    Parallelism wasn’t a big win in the past — spending N times as much on processors and getting less than N times the performance is not an especially compelling story.  But things are different now with clock speed limitations; all sorts of parallel architectures become more interesting.
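
    (A small Scala sketch of why “N processors, less than N times the performance” is the usual outcome: Amdahl’s law, with the serial fraction as the only input. The 10% serial fraction below is just an illustrative assumption.)

    ```scala
    // Amdahl's law: if a fraction s of the work is inherently serial, the best
    // possible speedup on n processors is 1 / (s + (1 - s) / n). With s = 0.10,
    // 16 cores give about 6.4x, and no number of cores can ever beat 10x.
    object AmdahlSketch {
      def speedup(serialFraction: Double, n: Int): Double =
        1.0 / (serialFraction + (1.0 - serialFraction) / n)

      def main(args: Array[String]): Unit = {
        val s = 0.10 // assumed serial fraction
        for (n <- Seq(2, 4, 8, 16, 64))
          println(f"$n%3d cores -> ${speedup(s, n)}%.1fx speedup")
      }
    }
    ```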

    • #25
  26. Don Tillman Member
    Don Tillman
    @DonTillman

    Michael S. Malone:

    Are those sufficient sources?

    Fine sources.  It just doesn’t sound like a “contract with mankind” to me.

    • #26
  27. Great Ghost of Gödel Inactive
    Great Ghost of Gödel
    @GreatGhostofGodel

    Don Tillman: Sure, but those are specialized and not in general use or generally applicable.

    That’s actually not true: they’re all general-purpose languages and can be used for anything any other language can. Scala, in particular, is perfectly mainstream, powering Twitter, LinkedIn, The Guardian, FourSquare, Sony Pictures Imageworks, and my employer, Verizon OnCue, among others. Granted, again, our concurrency story is a bit different from the others because concurrency is just one effect that we manage with a type called Task, and we make heavy use of a purely-functional dataflow library, scalaz-stream. This gets us easily 99% of the way to “if it compiles, it works,” and we benefit from easy performance wins with scalaz.

    And different languages and approaches are necessary for different levels and different types of granularity.

    Eh, not really. We can even run Scala on our GPUs if we want to. We can definitely run Scala in our browsers. Get the abstractions right and the different levels and granularity get solved in the process.

    Parallelism wasn’t a big win in the past — spending N times as much on processors and getting less than N times the performance is not an especially compelling story. But things are different now with clock speed limitations; all sorts of parallel architectures become more interesting.

    Right. And purely functional programming along with it. Also, compacting garbage collectors go from lose to win because of cache locality benefits as CPU speed crazily outstrips memory speed.

    • #27