Tech’s Productivity Problem

 

I am a computer graphics software engineer. Computer graphics is unusual in tech for having an appetite for performance that far outstrips what computer hardware can deliver. In many ways, we are the ones who drive computing performance forward, and our hardware is often used by other fields for unrelated tasks (for example, AI makes heavy use of the GPUs, or graphics processing units, that power computer games).

Most of the software world does not face these constraints. For most programmers (and the managers who employ them), the increases in computing power over the past 30 years were truly awe-inspiring, far in excess of what they thought they needed. It didn’t matter how crappy programmers or their tools were when, in a few years’ time, computers could be counted on to be exponentially faster. Programmers didn’t need to learn to deal with hard things like memory management, optimizing performance, or writing code that can run on multiple CPU cores simultaneously. Even worse, the people who write the tools programmers use (programming languages) felt they need not worry about these things either. A member of the standards committee for C++, a widely used programming language, admitted to me earlier this year that he had once thought this way.

But computers aren’t getting faster anymore. There is a physical limit to how small you can make transistors, and there is also a limit to how many transistors you can turn on at once and not melt the chip. We have probably reached both limits and we certainly will have reached them in a year or two’s time.

People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation. There is a wide scope to make computer software faster simply by rewriting old code. Most managers (and many programmers) fear this the way most people fear math, but I think they will be pleasantly surprised. Parallel programming and memory management are simply not as hard as they think, not when the right tools are used. Programmers who have spent 20 years thinking they can’t deal with managing memory or write parallel code are going to find that, actually, they can do these things.

Moore’s Law may be ending, but software will continue to advance.

Published in General

There are 64 comments.

  1. SkipSul Coolidge
    Ricochet Charter Member (joined in the first year of Ricochet)

    Joseph Eagar: Parallel programming and memory management are simply not as hard as they think, not when the right tools are used. Programmers who have spent twenty years thinking they can’t deal with managing memory or write parallel code are going to find that, actually, they can do these things. 

    Programmers, however, who learned how to code 30 and 40 years ago are getting enticing offers to come out of retirement and teach the youngsters. A common line I’ve used on reps over the years is “Yes yes, this new dev system lets you write programs quickly, but when compiled it’s just massive for what it does. So what the hell is it actually doing? My engineers can solve that same problem in Assembly, with about 1/100 the code, and they know what every line is doing.”

    The code bloat is an absolute menace, especially in my industry, automotive. Nobody seems to know what in heck is buried inside the code of all these modules on vehicles today. Often it’s legacy code that has been tweaked, hacked, spliced, recompiled, and stuffed to the gills, and nobody has actually done a real examination of what is in it for 10-20 years; they just keep adding to it.

    That mentality is part of what led to the Boeing 737 MAX problems, as well as Toyota’s accelerator problems and any number of less perilous recalls besides.

    • #1
    • October 25, 2020, at 8:09 PM PDT
    • 20 likes
  2. kedavis Member

    Makes me glad I first learned on a PDP-8. :-)

     

    • #2
    • October 25, 2020, at 8:21 PM PDT
    • 8 likes
  3. DonG (Biden is compromised) Coolidge

    Moore’s law has about 10 years left. 7nm is the current technology and we’ll have 5nm soon and then 3nm after that. Combined with better density techniques (vertical transistors, more metals,…) we should have another 6 doublings of speed. But the real difference is that most computing will be moving to the cloud. Many of the interesting problems like language translation will be done in the cloud, where answers to previously asked questions are quickly retrieved. At some point, there are no new questions.

    Computer graphics is a special problem. The need for more resolution, higher framerates, more depth, and more realism is nearly unlimited.

    • #3
    • October 25, 2020, at 8:47 PM PDT
    • 1 like
  4. Ontheleftcoast Member

    I used to love the Mac word processor WriteNow. It was very fast for its time. It was written in assembly language for the 680x0 family of processors. It plus the OS could fit on a bootable 400K floppy.

    • #4
    • October 25, 2020, at 8:48 PM PDT
    • 2 likes
  5. Joseph Eagar Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    DonG (skeptic) (View Comment):

    Moore’s law has about 10 years left. 7nm is the current technology and we’ll have 5nm soon and then 3nm after that. Combined with better density techniques (vertical transistors, more metals,…) we should have another 6 doublings of speed. But the real difference is that most computing will be moving to the cloud. Many of the interesting problems like language translation will be done in the cloud, where answers to previously asked questions are quickly retrieved. At some point, there are no new questions.

    Computer graphics are special problem. The need for more resolution, higher framerates, more depth, more realism is nearly unlimited.

    I’m skeptical that smaller transistor sizes will speed things up much. They produce just as much heat, and heat is really the limiting factor here. It’s useless to build a CPU with 32 highly advanced cores if you can only turn 16 of them on without melting the damn thing.

     

    • #5
    • October 25, 2020, at 8:57 PM PDT
    • 3 likes
  6. SkipSul Coolidge
    Ricochet Charter Member (joined in the first year of Ricochet)

    Joseph Eagar (View Comment):

    DonG (skeptic) (View Comment):

    Moore’s law has about 10 years left. 7nm is the current technology and we’ll have 5nm soon and then 3nm after that. Combined with better density techniques (vertical transistors, more metals,…) we should have another 6 doublings of speed. But the real difference is that most computing will be moving to the cloud. Many of the interesting problems like language translation will be done in the cloud, where answers to previously asked questions are quickly retrieved. At some point, there are no new questions.

    Computer graphics are special problem. The need for more resolution, higher framerates, more depth, more realism is nearly unlimited.

    I’m skeptical that smaller transistor sizes will speed things up much. They produce just as much heat, and heat is really the limiting factor here. It’s useless to build a CPU with 32 highly advanced cores if you can only turn 16 of them on without melting the damn thing.

     

    Give the engineers time to figure out the cooling system – to them this is likely no barrier, merely an excuse to get out the liquid nitrogen.

    • #6
    • October 25, 2020, at 8:58 PM PDT
    • 7 likes
  7. Joseph Eagar Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    SkipSul (View Comment):

    Joseph Eagar (View Comment):

    DonG (skeptic) (View Comment):

    Moore’s law has about 10 years left. 7nm is the current technology and we’ll have 5nm soon and then 3nm after that. Combined with better density techniques (vertical transistors, more metals,…) we should have another 6 doublings of speed. But the real difference is that most computing will be moving to the cloud. Many of the interesting problems like language translation will be done in the cloud, where answers to previously asked questions are quickly retrieved. At some point, there are no new questions.

    Computer graphics are special problem. The need for more resolution, higher framerates, more depth, more realism is nearly unlimited.

    I’m skeptical that smaller transistor sizes will speed things up much. They produce just as much heat, and heat is really the limiting factor here. It’s useless to build a CPU with 32 highly advanced cores if you can only turn 16 of them on without melting the damn thing.

     

    Give the engineers time to figure out the cooling system – to them this is likely no barrier, merely an excuse to get out the liquid nitrogen.

    Can’t use liquid nitrogen in consumer systems :) Seriously though, it’s hard to conduct that much heat from such a small surface area.

    • #7
    • October 25, 2020, at 9:23 PM PDT
    • 1 like
  8. kedavis Member

    Hm, did anyone ever try integrating that “cooling chip” technology into CPUs? If they did it like the old mainframe systems I remember (rows and rows of cabinets on raised floors, with about every third cabinet an A/C unit), maybe they could use that to bring interior heat out to the surface for removal?

    • #8
    • October 25, 2020, at 10:12 PM PDT
    • 2 likes
  9. namlliT noD Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    Joseph Eagar: But computers aren’t getting faster anymore. There is a physical limit to how small you can make transistors, and there is also a limit to how many transistors you can turn on at once and not melt the chip. We have probably reached both limits and we certainly will have reached them in a year or two’s time.

    I’ve been in the business for almost 40 years. It’s not a problem.

    We’ve actually run into physical limits many, many times before. And each time a new technology was developed to get around the physical limit. The original logic families were replaced by faster logic families, then they found a way to keep transistors from saturating to go faster, then they introduced Field Effect Transistors which were simpler and increased the density, then Complementary FETs addressed power consumption, and so forth. More recently we hit a clock speed physical limit, so now you see multiple CPUs.

    See my article here: Moore’s Law

     

    • #9
    • October 25, 2020, at 11:22 PM PDT
    • 9 likes
  10. namlliT noD Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    Joseph Eagar: People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation.

    Panicking? I wouldn’t be overly concerned about it.

    At the same time, we’re having difficulty coming up with practical uses for all this computational power.

    I mean, once you can deliver high resolution digital porn, it’s like, y’know, mission accomplished, what else do you need?

    • #10
    • October 25, 2020, at 11:27 PM PDT
    • 11 likes
  11. kedavis Member

    namlliT noD (View Comment):

    Joseph Eagar: People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation.

    Panicking? I wouldn’t be overly concerned about it.

    At the same time, we’re having difficulty coming up with practical uses for all this computational power.

    I mean, once you can deliver high resolution digital porn, it’s like, y’know, mission accomplished, what else do you need?

    What about holographic imaging directly into the brain? :-)

    As Dennis Miller sometimes says, the day a factory worker can sit in his Barcalounger and VR f*** Claudia Schiffer for $19.99 will make crack look like Sanka.

    • #11
    • October 26, 2020, at 12:02 AM PDT
    • 2 likes
  12. namlliT noD Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    kedavis (View Comment):

    Makes me glad I first learned on a PDP-8. :-)

    I’ve said it before… DEC had the best damn industrial design ever.

    The cool flip switches, the color palette, the fonts, the off-white bezel. Man, oh man, that’s good.

    • #12
    • October 26, 2020, at 12:11 AM PDT
    • 7 likes
  13. BastiatJunior Member

    I remember an upper limit of a 33MHz clock speed on PC motherboards. Couldn’t get around it, people said.

    • #13
    • October 26, 2020, at 12:17 AM PDT
    • 2 likes
  14. kedavis Member

    namlliT noD (View Comment):

    kedavis (View Comment):

    Makes me glad I first learned on a PDP-8. :-)

    I’ve said it before… DEC had the best damn industrial design ever.

    The cool flip switches, the color palette, the fonts, the off-white bezel. Man, oh man, that’s good.

     

    Did you ever see/use a PDP-12? I had one, for a while. Beautiful!

     

     

     

    • #14
    • October 26, 2020, at 12:27 AM PDT
    • 5 likes
  15. Joseph Eagar Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    namlliT noD (View Comment):

    Joseph Eagar: But computers aren’t getting faster anymore. There is a physical limit to how small you can make transistors, and there is also a limit to how many transistors you can turn on at once and not melt the chip. We have probably reached both limits and we certainly will have reached them in a year or two’s time.

    I’ve been in the business for almost 40 years. It’s not a problem.

    We’ve actually run into physical limits many, many times before. And each time a new technology was developed to get around the physical limit. The original logic families were replaced by faster logic families, then they found a way to keep transistors from saturating to go faster, then they introduced Field Effect Transistors which were simpler and increased the density, then Complementary FETs addressed power consumption, and so forth. More recently we hit a clock speed physical limit, so now you see multiple CPUs.

    See my article here: Moore’s Law

    But the rate of improvement has been slowing down. An SMP quad-core processor isn’t going to perform four times as fast as a single core in most cases; it will be limited by the relatively slow speed of main memory.

    • #15
    • October 26, 2020, at 3:07 AM PDT
    • Like
  16. Arahant Member

    Joseph Eagar: But computers aren’t getting faster anymore. There is a physical limit to how small you can make transistors, and there is also a limit to how many transistors you can turn on at once and not melt the chip. We have probably reached both limits and we certainly will have reached them in a year or two’s time.

    We’re running out of oil. I’ve been told that as long as I can remember, yet now we have more oil than we have ever had before. Moore’s Law is going to break soon. I’ve been told that as long as I’ve been associated with computers, which started in 1980. Forgive my skepticism.

    • #16
    • October 26, 2020, at 3:22 AM PDT
    • 8 likes
  17. Arahant Member

    SkipSul (View Comment):
    “Yes yes, this new dev system lets you write programs quickly, but when compiled it’s just massive for what it does. So what the hell is it actually doing? My engineers can solve that same problem in Assembly, with about 1/100 the code, and they know what every line is doing.”

    Amen, brother. Preach it. It’s been nigh on thirty years since I touched Assembler, but I am for that sort of efficiency. Another area with code generators is websites and other HTML pages. They cause horrible bloat. I keep my websites fast and efficient. I don’t trust anything where I can’t see and control the code.

    • #17
    • October 26, 2020, at 3:25 AM PDT
    • 5 likes
  18. Guruforhire Member

    Speaking of the C++ standards committee.

    The problem we have now is that ever since the end of the 486, every single line of code is a lie. Those are all lies you are writing, because we still write programs as if for the 486. Now we have branch predictors, out-of-order execution, caching, etc.

    So… reasoning about performance, beyond not doing more work than you have to, is pretty tricky business. Even the rudimentary stuff like “this one generates more assembly” is only a crude approximation of performance.

    So, it looks to me that the real work for high performance is moving from the program author to the compiler developer. And the arms race in tech is compiler optimizers.

    So, the best I can tell, as someone who is doing this for fun and lulz, the current consensus is to follow best practices and not try to out-think the compiler, because you can’t.

    Here is another good talk

    Just because I like that French guy

    • #18
    • October 26, 2020, at 5:22 AM PDT
    • Like
    • This comment has been edited.
  19. Aaron Miller Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    namlliT noD (View Comment):

    Joseph Eagar: People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation.

    Panicking? I wouldn’t be overly concerned about it.

    At the same time, we’re having difficulty coming up with practical uses for all this computational power.

    I mean, once you can deliver high resolution digital porn, it’s like, y’know, mission accomplished, what else do you need?

    AIs in porn?

    Seriously, in game design, the major publishers are pushing toward more complex simulations, bigger worlds, and more characters per zone. But the graphics push never stops either. “Realistic” ray tracing (light reflection and refraction) is the fad now.

    • #19
    • October 26, 2020, at 5:24 AM PDT
    • 3 likes
  20. Miffed White Male Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    The internet ruined quality software.

    Back in the days when patches required shipping physical media, software suppliers had to make a decent effort to put out working software. Now they can put out any garbage they want and just tell people to download the updates.

    Likewise, faster hardware ruined quality programmers.

    When you had to wait an hour or more to get your compile back before you could even start testing, you spent time doing “desk checking” of your code to make sure it worked right. Nothing concentrates the mind like only getting a couple chances a day to make fixes.

    • #20
    • October 26, 2020, at 5:49 AM PDT
    • 9 likes
  21. SkipSul Coolidge
    Ricochet Charter Member (joined in the first year of Ricochet)

    Joseph Eagar (View Comment):

    SkipSul (View Comment):

    Joseph Eagar (View Comment):

    DonG (skeptic) (View Comment):

    Moore’s law has about 10 years left. 7nm is the current technology and we’ll have 5nm soon and then 3nm after that. Combined with better density techniques (vertical transistors, more metals,…) we should have another 6 doublings of speed. But the real difference is that most computing will be moving to the cloud. Many of the interesting problems like language translation will be done in the cloud, where answers to previously asked questions are quickly retrieved. At some point, there are no new questions.

    Computer graphics are special problem. The need for more resolution, higher framerates, more depth, more realism is nearly unlimited.

    I’m skeptical that smaller transistor sizes will speed things up much. They produce just as much heat, and heat is really the limiting factor here. It’s useless to build a CPU with 32 highly advanced cores if you can only turn 16 of them on without melting the damn thing.

     

    Give the engineers time to figure out the cooling system – to them this is likely no barrier, merely an excuse to get out the liquid nitrogen.

    Can’t use liquid nitrogen in consumer systems :) Seriously though, it’s hard to conduct that much heat from such a small surface area.

    You’d be surprised. My company manufactures various DC power products, and the FETs keep shrinking, making them less and less able to shed heat as they lose their metal content. But we have a couple of patents up our sleeve that have solved this issue, at least for our designs.

    • #21
    • October 26, 2020, at 6:07 AM PDT
    • 3 likes
  22. Aaron Miller Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    Is there hope in automating subroutines? Is programming so often more like speech than like manufacturing that programmers cannot greatly benefit from software that handles the more primitive elements of code? 

    If a programmer can produce new functionality by arranging familiar premises with new variables and conditions, then those replicable premises could be provided in optimized form by pre-existing software (like a game engine). Does that at least limit the optimization necessary at the end? 

    You can tell I’m not a programmer.

    • #22
    • October 26, 2020, at 7:29 AM PDT
    • Like
  23. WillowSpring Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    Miffed White Male (View Comment):
    When you had to wait an hour or more to get your compile back before you could even start testing, you spent time doing “desk checking” of your code to make sure it worked right. Nothing concentrates the mind like only getting a couple chances a day to make fixes.

    That was life back when I started. One of the best programmers I knew would spend hours writing the code on a paper pad and literally “cutting and pasting” his corrections before ever entering his program – on punched cards.

    I went from mini-computers like the DEC pictured above through microprocessors to custom design. The chips were so slow and so memory-limited that there was always a doubt that you could solve the problem at all. One of the early products I worked on was a MOS 6502-based temperature control system. The memory was so limited that there would be only several bytes unused, and in order to add any new function, I had to go back and optimize existing code. Good times!

    I am retired now and feel lucky that my career came when getting the job done took real skill. Even though I eventually moved from assembly language through Forth to C, I never had an application where I didn’t own the code ‘all the way down’.

     

    • #23
    • October 26, 2020, at 7:30 AM PDT
    • 4 likes
  24. Arahant Member

    Aaron Miller (View Comment):

    Is there hope in automating subroutines? Is programming so often more like speech than like manufacturing that programmers cannot greatly benefit from software that handles the more primitive elements of code?

    If a programmer can produce new functionality by arranging familiar premises with new variables and conditions, then those replicable premises could be provided in optimized form by pre-existing software (like a game engine). Does that at least limit the optimization necessary at the end?

    You can tell I’m not a programmer.

    “Parts is parts.”

    • #24
    • October 26, 2020, at 7:32 AM PDT
    • 2 likes
  25. Ontheleftcoast Member

    kedavis (View Comment):

    namlliT noD (View Comment):

    Joseph Eagar: People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation.

    Panicking? I wouldn’t be overly concerned about it.

    At the same time, we’re having difficulty coming up with practical uses for all this computational power.

    I mean, once you can deliver high resolution digital porn, it’s like, y’know, mission accomplished, what else do you need?

    What about holographic imaging directly into the brain? :-)

    As Dennis Miller sometimes says, the day a factory worker can sit in his Barcolounger and VR f*** Claudia Schiffer for $19.99, will make crack look like Sanka.

    Philip Jose Farmer’s 1967 novella:

    Riders of the Purple Wage is an extrapolation of the mid-twentieth century’s tendency towards state supervision and consumer-oriented economic planning. In the story, all citizens receive a basic income (the purple wage) from the government, to which everyone is entitled just by being born. The population is self-segregated into relatively small communities, with a controlled environment, and keeps in contact with the rest of the world through the Fido, a combination television and videophone. The typical dwelling is an egg-shaped house, outside of which is a realistic simulation of an open environment with sky, sun, and moon. In reality, each community is on one level of a multi-level arcology. For those who dislike this lifestyle, there are wildlife reserves where they can join “tribes” of Native Americans and like-minded Anglos living closer to nature for a while. Some choose this lifestyle permanently. . . .

    For people who do not want to bother with social interaction, there is the fornixator, a device that supplies sexual pleasure on demand by direct stimulation of the brain’s pleasure centers. The fornixator is technically illegal, but tolerated by the government because its users are happy, never demand anything else, and usually do not procreate.

    • #25
    • October 26, 2020, at 7:51 AM PDT
    • 1 like
  26. DonG (Biden is compromised) Coolidge

    Aaron Miller (View Comment):

    I mean, once you can deliver high resolution digital porn, it’s like, y’know, mission accomplished, what else do you need?

    AIs in porn?

    Seriously, in game design, the major publishers are pushing toward more complex simulations, bigger worlds, and more characters per zone. But the graphics push never stops either. “Realistic” ray tracing (light reflection and refraction) is the fad now.

    Indeed, there is a lot of room to extend the industry. Realistic 3D renderings. Deep fakes of you having intelligent conversations and interactions with famous people. Think of it as a progression from passive, through interactive, to immersive experiences. The Matrix is a documentary of a future not yet created.

    • #26
    • October 26, 2020, at 9:04 AM PDT
    • 1 like
  27. David Foster Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    There was a story in 2010 about how James Cameron’s “completely immersive spectacle ‘Avatar’ may have been a little too real for some fans, who say they have experienced depression and suicidal thoughts after seeing the film because they long to enjoy the beauty of the alien world Pandora.”

    According to the article, there had been more than 1,000 posts to a forum for people trying to cope with the depression they experienced after seeing this film… and not being able to stay within it permanently.

    When I saw this story, I immediately thought of the old Chinese opium dens…which were largely inhabited by people whose lives were so miserable that their desire to ‘disappear in dreams’ (to borrow the words of Tom Russell’s song) was entirely understandable.

    But what misery or bleakness are the would-be permanent habitués of Avatar’s world, and of other virtual worlds, seeking to escape?

    • #27
    • October 26, 2020, at 9:14 AM PDT
    • 3 likes
    • This comment has been edited.
  28. Arahant Member

    David Foster (View Comment):
    (to borrow the words of Tom Russell’s song)

    Always.

    David Foster (View Comment):
    But what misery or bleakness are the would-be permanent habitués of the Avatar and of other virtual worlds seeking to escape?

    Many of them have no purpose in life. We have removed the old roles and purposes and not replaced them, and the majority of people are miserable because they feel they aren’t allowed to fulfill the old roles and purposes.

    • #28
    • October 26, 2020, at 9:27 AM PDT
    • 5 likes
  29. kidCoder Member

    I don’t think there is much room to rewrite most parts of the stack.

    Currently I write a lot of Clojure. I’m quite productive in Clojure, and get to write some neat systems at a very high level. For my Clojure programs to run, I need an entire JVM, and on top of that Java, and on top of that Clojure, and on top of that my libraries, and on top of that my own code. But man can I write that code easily.

    I started in C, and find things like manual memory management… also easy. It just takes me longer to get to the overall result because I need to write so much more of the program that Clojure could just intuit.

    The way I see it, there are four core parts of the stack: the hardware, the bottom, the middle, and the top.

    At the hardware level new stuff is written when new hardware is developed. It’s generally thin layers on top of hardware facilities. There is very little room for optimization there.

    At the bottom level you have things like kernel subsystems and display servers. These are manually managed. There is no opportunity to optimize because it’s already being done by the regular maintainers.

    In the middle you have things like programming languages, window managers, some servers. Often they give you the biggest bang for your optimizing buck. If you try to rewrite them, you will be faced with the Second System Syndrome and probably doomed to failure. Wayland has been Coming now for years, and yet people try the latest firefoxes and Blenders and still can’t use them except on a good old X.org server.

    At the top you have things like webapps, browsers, the MS Office Suite. Rewriting performance critical components here WORKS, and works well. However mass rewrites are themselves doomed to failure, again, Second System Syndrome. Attempts to write new software in memory managed languages can work, given good enough libraries (I am a C guy at heart), but for the most part this layer is fast enough the users don’t care. So there isn’t much benefit in optimizing this part of the stack anyway.

    There was some discussion on transistor sizes above. 7 nm seems to be the best we will get. 7 nm gives you transistors that are 14 silicon atoms across. Either we change to a smaller atom, or the quantum effects become overwhelming and the transistor becomes uncontrollable. Intel’s 7 nm process, by the way, seems to be more of a 10 nm process with some marketing thrown in. 7 nm is hard. What AMD is doing with their smarter cores is fascinating, and they are working hard on using hardware to optimize existing applications even without the applications necessarily knowing it, so there is in fact room for more CPU design, but not necessarily guaranteed “faster because we go smaller” speed gains.

    • #29
    • October 26, 2020, at 10:32 AM PDT
    • 3 likes
  30. WillowSpring Member
    Ricochet Charter Member (joined in the first year of Ricochet)

    kidCoder (View Comment):
    For my Clojure programs to run, I need an entire JVM, and on top of that Java, and on top of that Clojure, and on top of that my libraries, and on top of that my own code. But man can I write that code easily.

    I don’t mean to be critical, but this sentence and the rest of your post make me glad that I am retired. (And yes, I have looked into Clojure)

    Oh – and stay off my lawn :-)

    • #30
    • October 26, 2020, at 11:34 AM PDT
    • 5 likes