I am a computer graphics software engineer. Computer graphics is unusual in tech for having an appetite for performance that far outstrips what computer hardware can deliver. In many ways, we are the ones who drive computing performance forward, and our hardware is often used by other fields for unrelated tasks (for example, AI makes heavy use of the GPUs, the graphics processing units that power computer games).
Most of the software world does not face these constraints. For most programmers (and the managers who employ them), the increases in computing power over the past 30 years were truly awe-inspiring, far in excess of what they thought they needed. It didn't matter how crappy programmers or their tools were when, in a few years' time, computers could be counted on to be exponentially faster. Programmers didn't need to learn to deal with hard things like memory management, optimizing performance, or writing code that can run on multiple CPU cores simultaneously. Even worse, the people who write the tools programmers use, the programming languages themselves, felt they need not worry about these things either. A member of the standards committee for C++ (a widely used programming language) admitted to me earlier this year to having once thought this way.
But computers aren’t getting faster anymore. There is a physical limit to how small you can make transistors, and there is also a limit to how many transistors you can turn on at once and not melt the chip. We have probably reached both limits and we certainly will have reached them in a year or two’s time.
People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation. There is wide scope to make software faster simply by rewriting old code. Most managers (and many programmers) fear this the way most people fear math, but I think they will be pleasantly surprised. Parallel programming and memory management are simply not as hard as they think, not when the right tools are used. Programmers who have spent 20 years believing they cannot manage memory or write parallel code are going to find that, actually, they can.
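To make that concrete, here is a minimal sketch (my own illustration, not from any particular codebase) of what "the right tools" look like in modern C++. The container owns and frees its own memory, and a single extra argument to a standard algorithm asks the library to spread the work across every CPU core:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <execution>
#include <vector>

int main() {
    // Automatic memory management: the vector owns its storage and
    // releases it when 'samples' goes out of scope. No malloc/free.
    std::vector<double> samples(10'000'000, 0.5);

    // Parallelism: the std::execution::par policy (C++17) tells the
    // standard library to run this transform across multiple cores.
    std::transform(std::execution::par,
                   samples.begin(), samples.end(), samples.begin(),
                   [](double x) { return std::sqrt(x) * 2.0; });

    std::printf("first sample: %f\n", samples[0]);
    return 0;
}
```

Written by hand with raw threads and manual allocation, the same loop would be several times longer and far easier to get wrong; with the tools the language now provides, it is one line different from the single-core version.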
Moore's Law may be ending, but software will continue to advance.