Tech’s Productivity Problem

 

I am a computer graphics software engineer. Computer graphics is unusual in tech for having an appetite for performance that far outstrips what computer hardware can deliver. In many ways, we are the ones who drive computing performance forward, and our hardware is often borrowed by other fields for unrelated tasks (for example, AI makes heavy use of the GPUs, or graphics processing units, that power computer games).

Most of the software world does not face these constraints. For most programmers (and the managers who employ them), the increases in computing power over the past 30 years were truly awe-inspiring, far in excess of what they thought they needed. It didn’t matter how crappy programmers or their tools were when, in a few years’ time, computers could be counted on to be exponentially faster. Programmers didn’t need to learn to deal with hard things like memory management, optimizing performance, or writing code that can run on multiple CPU cores simultaneously. Even worse, the people who write the tools programmers use, the programming languages themselves, felt they need not worry about these things either. A member of the standards committee for C++ (a widely used programming language) admitted to me earlier this year to having once thought this way.

But computers aren’t getting faster anymore. There is a physical limit to how small you can make transistors, and there is also a limit to how many transistors you can turn on at once and not melt the chip. We have probably reached both limits and we certainly will have reached them in a year or two’s time.

People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation. There is wide scope for making computer software faster simply by rewriting old code. Most managers (and many programmers) fear this the way most people fear math, but I think they will be pleasantly surprised. Parallel programming and memory management are simply not as hard as they think, not when the right tools are used. Programmers who have spent 20 years believing they can’t manage memory or write parallel code are going to find that, actually, they can do these things.
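
To make that concrete, here is a minimal sketch of what “the right tools” can look like, written in C++17 (the example and its numbers are purely illustrative):

    #include <algorithm>
    #include <execution>
    #include <memory>
    #include <vector>

    int main() {
        // Memory management: the buffer is owned by a smart pointer and is
        // freed automatically when it goes out of scope; no manual delete,
        // no garbage collector.
        auto data = std::make_unique<std::vector<double>>(10'000'000, 1.0);

        // Parallelism: the std::execution::par policy asks the standard
        // library to spread this loop across the available CPU cores.
        std::for_each(std::execution::par, data->begin(), data->end(),
                      [](double& x) { x = x * x + 1.0; });
        return 0;
    }

Nothing in that sketch requires heroics.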

Moore’s Law may be ending, but software will continue to advance.

There are 64 comments.

  1. kedavis Coolidge
    kedavis
    @kedavis

    WillowSpring (View Comment):
    I am retired now and feel lucky that my career was when it was a real skill to get the job done. Even though I eventually moved from assembly language through Forth to C, I never had an application where I didn’t own the code ‘all the way down’.

    “Mega-dittos.”

    “Kids today” just don’t seem to understand how computers really work.

    • #31
  2. kedavis Coolidge
    kedavis
    @kedavis

    Aaron Miller (View Comment):

    Is there hope in automating subroutines? Is programming so often more like speech than like manufacturing that programmers cannot greatly benefit from software that handles the more primitive elements of code?

    If a programmer can produce new functionality by arranging familiar premises with new variables and conditions, then those replicable premises could be provided in optimized form by pre-existing software (like a game engine). Does that at least limit the optimization necessary at the end?

    You can tell I’m not a programmer.

    Indeed.  :-)

    One problem you have with that kind of “subroutine” situation is that they will always have to be designed to handle more possibilities than any given situation will need, which means they will contain more code than any given situation needs. Which amounts to bloat.

    • #32
  3. kedavis Coolidge
    kedavis
    @kedavis

    Ontheleftcoast (View Comment):

    kedavis (View Comment):

    namlliT noD (View Comment):

    Joseph Eagar: People are panicking. Industry leaders are wondering how they will manage, but their dependence on ever-faster CPUs will ultimately be their salvation.

    Panicking? I wouldn’t be overly concerned about it.

    At the same time, we’re having difficulty coming up with practical uses for all this computational power.

    I mean, once you can deliver high resolution digital porn, it’s like, y’know, mission accomplished, what else do you need?

    What about holographic imaging directly into the brain? :-)

    As Dennis Miller sometimes says, the day a factory worker can sit in his Barcolounger and VR f*** Claudia Schiffer for $19.99, will make crack look like Sanka.

    Philip Jose Farmer’s 1967 novella:

    Riders of the Purple Wage is an extrapolation of the mid-twentieth century’s tendency towards state supervision and consumer-oriented economic planning. In the story, all citizens receive a basic income (the purple wage) from the government, to which everyone is entitled just by being born. The population is self-segregated into relatively small communities, with a controlled environment, and keeps in contact with the rest of the world through the Fido, a combination television and videophone. The typical dwelling is an egg-shaped house, outside of which is a realistic simulation of an open environment with sky, sun, and moon. In reality, each community is on one level of a multi-level arcology. For those who dislike this lifestyle, there are wildlife reserves where they can join “tribes” of Native Americans and like-minded Anglos living closer to nature for a while. Some choose this lifestyle permanently. . . .

    For people who do not want to bother with social interaction, there is the fornixator, a device that supplies sexual pleasure on demand by direct stimulation of the brain’s pleasure centers. The fornixator is technically illegal, but tolerated by the government because its users are happy, never demand anything else, and usually do not procreate.

    These days, we actually need more procreation.

    • #33
  4. kedavis Coolidge
    kedavis
    @kedavis

    Arahant (View Comment):

    David Foster (View Comment):
    (to borrow the words of Tom Russell’s song)

    Always.

    David Foster (View Comment):
    But what misery or bleakness are the would-be permanent habitués of the Avatar and of other virtual worlds seeking to escape?

    Many of them have no purpose in life. We have removed the old roles and purposes and not replaced them, and the majority of people are miserable because they feel they aren’t allowed to fulfill the old roles and purposes.

    But they themselves may have played a part in getting rid of those roles and purposes because they felt them to be oppressive etc.

    • #34
  5. kidCoder Member
    kidCoder
    @kidCoder

    WillowSpring (View Comment):

    kidCoder (View Comment):
    For my Clojure programs to run, I need an entire JVM, and on top of that Java, and on top of that Clojure, and on top of that my libraries, and on top of that my own code. But man can I write that code easily.

    I don’t mean to be critical, but this sentence and the rest of your post make me glad that I am retired. (And yes, I have looked into Clojure)

    Yes. The modern Javascript stack is even worse. There are more pieces you can’t understand because you are meant to install it and move quickly. At least in my case I understand the stack pretty far down, and Clojure has a mentality of “go read your library if you want to know how stuff works.”

    Being able to bootstrap the world from just a working computer is an open problem. We need more people who can bootstrap different languages without having those languages to begin with. Try compiling gcc without gcc, it can get to be quite an adventure.

    • #35
  6. Guruforhire Inactive
    Guruforhire
    @Guruforhire

    kidCoder (View Comment):
    Try compiling gcc without gcc, it can get to be quite an adventure.

    use clang?

    • #36
  7. kidCoder Member
    kidCoder
    @kidCoder

    Guruforhire (View Comment):

    kidCoder (View Comment):
    Try compiling gcc without gcc, it can get to be quite an adventure.

    use clang?

    How do you bootstrap llvm?

    • #37
  8. E. Kent Golding Moderator
    E. Kent Golding
    @EKentGolding

    Joseph Eagar (View Comment):

    SkipSul (View Comment):

    Joseph Eagar (View Comment):

    DonG (skeptic) (View Comment):

    Moore’s law has about 10 years left. 7nm is the current technology and we’ll have 5nm soon and then 3nm after that. Combined with better density techniques (vertical transistors, more metals,…) we should have another 6 doublings of speed. But the real difference is that most computing will be moving to the cloud. Many of the interesting problems like language translation will be done in the cloud, where answers to previously asked questions are quickly retrieved. At some point, there are no new questions.

    Computer graphics is a special problem. The need for more resolution, higher framerates, more depth, more realism is nearly unlimited.

    I’m skeptical that smaller transistor sizes will speed things up much. They produce just as much heat, and heat is really the limiting factor here. It’s useless to build a CPU with 32 highly advanced cores if you can only turn 16 of them on without melting the damn thing.

     

    Give the engineers time to figure out the cooling system – to them this is likely no barrier, merely an excuse to get out the liquid nitrogen.

    Can’t use liquid nitrogen in consumer systems :) Seriously though, it’s hard to conduct that much heat from such a small surface area.

    Cool some of them with liquid acetylene and others with liquid oxygen.

    • #38
  9. Guruforhire Inactive
    Guruforhire
    @Guruforhire

    kidCoder (View Comment):

    Guruforhire (View Comment):

    kidCoder (View Comment):
    Try compiling gcc without gcc, it can get to be quite an adventure.

    use clang?

    How do you bootstrap llvm?

    use Microsoft visual studio.

    But you ask the question and there is a cppcon video on the subject

    • #39
  10. kedavis Coolidge
    kedavis
    @kedavis

    Speaking of bootstrapping…

    The one thing I really liked the most about the PDP-12 was that you could bootstrap it from a LINCtape/DECtape by just entering a single instruction on the switches, and then START.  I don’t think any other DEC system let you execute an instruction immediately from the front panel, without first “depositing” it into memory.  Certainly the PDP-8 models didn’t.  Entering even the simplest “bootstrap” program to start from paper tape took a few minutes, even if you had it memorized like I did.  (And most PDP-8 systems seemed to have that bootstrap program on a piece of paper taped to the front.)

    But in practical terms the most useful thing about the PDP-12 was probably the ability to use the “oscilloscope” display as a “console” and just the keyboard of a teletype for input, without using lots and lots of paper (and ribbons) like we did with the 8/L.

    The photo in my previous post makes it look deceptively simple, but in reality that text/number display was not a traditional terminal display like on even the simplest CRT video terminal.

    The PDP-12 display was really an oscilloscope, and everything seen in that photo is actually being DRAWN on the screen, basically one dot/pixel at a time.

    And for example in the LAP6/DIAL system, one of the A/D (Analog to Digital) control knobs could be used as a “cursor” for moving around in text to be edited, etc.  Pretty cool for the 1960s!

    • #40
  11. Headedwest Coolidge
    Headedwest
    @Headedwest

    I wrote a PhD dissertation based on solving a problem using Integer Dynamic Programming optimization. IDP iterates toward a solution by incremental choices along many paths of possible solutions. When you get to the end and see the least expensive (in my case; there could be other objectives) solution, you have to work your way back along the path to reveal all of the decisions that were made to get that value. That means you have to store all of the intermediate partial paths. This was a classic case of using up all the available CPU and storage capacity given to me, so I had to write the software to be as efficient as possible.

    I was fortunate to have been at a university that had an array of IBM supercomputers (well, super in the late 80s). So I had access to a lot of memory and CPU power. I could not afford the time it would have taken to write the partial paths to disk (or maybe tape!) and get it back. So it had to fit into RAM. My intermediate arrays were triangular, so I paired them and declared a rectangular array for both halves; when I read or wrote to each half I had to compute the row and column by context. 
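
    (For the curious, the pairing trick works out like this; a minimal sketch in modern C++ rather than the FORTRAN original, with illustrative names: two n-by-n lower-triangular arrays fit exactly into one n-by-(n+1) rectangular block, the second one stored transposed in the upper half.)

        #include <cstddef>
        #include <vector>

        // Two lower-triangular n x n arrays, A and B, share one n x (n + 1)
        // rectangular buffer: A(i, j) lives in the lower half (column <= row)
        // and B(i, j) is stored transposed in the upper half (column > row).
        class PairedTriangular {
        public:
            explicit PairedTriangular(std::size_t n)
                : n_(n), cells_(n * (n + 1), 0.0) {}

            // A(i, j), valid for j <= i.
            double& a(std::size_t i, std::size_t j) {
                return cells_[i * (n_ + 1) + j];
            }

            // B(i, j), valid for j <= i.
            double& b(std::size_t i, std::size_t j) {
                return cells_[j * (n_ + 1) + (i + 1)];
            }

        private:
            std::size_t n_;
            std::vector<double> cells_;  // exactly n * (n + 1) cells, none wasted
        };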

    I wrote the software in FORTRAN. Why? Because that language offered me a 1-bit memory type and I could store my 1-0 data in the smallest memory footprint possible.

    The procedure was basically to submit a text file with the program surrounded by JCL(!) statements as batch jobs to the supercomputer array. JCL (Job Control Language) came out of early computing, and if somebody really knew it, you wanted that person as a friend for life. Every keystroke counted: a missing space or a double space where one was called for would totally invalidate that run. JCL was based on the 80-character Hollerith card, so no statement could exceed 80 characters. Even in the 1980s this felt like horse-and-buggy technology.

    My procedure was this: when I was ready to leave the office for the day, I would submit about 8 of these jobs (with different data for the experiments I was running). Next morning (if I was lucky) I would have the results. The biggest problems I would submit took around 30 MINUTES of CPU time. My main competition for access on this array was the Meteorology Department running arcane weather models. My FORTRAN programs were very difficult (for me) to write, but the final one was only about 300 lines of code. I may have set a record for the most CPU time per line of code.

    I could pretty easily run those models today on my home computer; lots of RAM, so I could rewrite in a modern language. But each data point would still take quite a while to compute.

     

     

    • #41
  12. BastiatJunior Member
    BastiatJunior
    @BastiatJunior

    kedavis (View Comment):

    Aaron Miller (View Comment):

    Is there hope in automating subroutines? Is programming so often more like speech than like manufacturing that programmers cannot greatly benefit from software that handles the more primitive elements of code?

    If a programmer can produce new functionality by arranging familiar premises with new variables and conditions, then those replicable premises could be provided in optimized form by pre-existing software (like a game engine). Does that at least limit the optimization necessary at the end?

    You can tell I’m not a programmer.

    Indeed. :-)

    One problem you have with that kind of “subroutine” situation is that they will always have to be designed to handle more possibilities than any given situation will need, which means they will contain more code than any given situation needs. Which amounts to bloat.

    There is a set of programming principles known by the acronym “SOLID,” which would take a lot of writing to explain here.  Following them can increase portability and decrease bloat.  If you search for it on Big Brother Google, you’ll find good explanations of it.

    • #42
  13. kedavis Coolidge
    kedavis
    @kedavis

    Headedwest (View Comment):

    I wrote a PhD dissertation based on solving a problem using Integer Dynamic Programming optimization. IDP iterates toward a solution by incremental choices along many paths of possible solutions. When you get to the end and see the least expensive (in my case; there could be other objectives) solution, you have to work your way back along the path to reveal all of the decisions that were made to get that value. That means you have to store all of the intermediate partial paths. This was a classic case of using up all the available CPU and storage capacity given to me, so I had to write the software to be as efficient as possible.

    I was fortunate to have been at a university that had an array of IBM supercomputers (well, super in the late 80s). So I had access to a lot of memory and CPU power. I could not afford the time it would have taken to write the partial paths to disk (or maybe tape!) and get it back. So it had to fit into RAM. My intermediate arrays were triangular, so I paired them and declared a rectangular array for both halves; when I read or wrote to each half I had to compute the row and column by context.

    I wrote the software in FORTRAN. Why? Because that language offered me a 1-bit memory type and I could store my 1-0 data in the smallest memory footprint possible.

    That sounds a bit (har har) like my second-year CS project.  I started college when most students could get there without having SEEN, let alone actually USED, a real computer.  But the High School I went to had one – ONE, the PDP-8/L – and it wasn’t in the office!  In the office they used typewriters!  Big advantage!

    I started college with the usual mix of “bonehead” classes everyone had to take for any kind of degree.  Back then it was common not to take courses seriously related to your major until the 2nd or even 3rd year.

    By third term of first year I knew I needed some computer courses just to keep some sanity.  But going through the course catalog, nothing seemed the least bit (again!) challenging.  I wound up taking 3 senior-level (400) classes, and totally blew them away: 2-hour finals in 20 or 30 minutes with perfect scores, things like that…

    Before starting the second year, my advisor, who was also the head of the math department, had a special project he wanted done, a Normal Algorithm processor for their basic (at that time) AI classes.  He offered a full 12-credit-hours of “A” grade for it.  I met with him once a week for most of the term to work out what he wanted, I wrote it out long-hand on paper pads the last weekend of the term, and keypunched it myself.  It worked the first time.

    [continued for word limit]

    • #43
  14. OccupantCDN Coolidge
    OccupantCDN
    @OccupantCDN

    While CPUs have hit a wall (Intel has been stuck at 10nm since 2014), AMD and TSMC have successfully been on 7nm for several years and will soon shift to 5 nm. AMD’s Epyc ‘Rome’ CPU is a 7nm chipset with nearly 40 billion transistors in the CPU package. In order to keep yield up, the chip has been divided into 9 separate semiconductor components (“chiplets”). This is a return to the past; linked is a video of a teardown of an IBM mainframe CPU from the ’90s that had hundreds of chips inside its package:

    While it might be a while before we see new product lines launched with hundreds of chips encapsulated into a single CPU package, the technology to do that is quite old. Should a market require it, it could be done.

    Underneath the heat spreader – Nude photos of AMD’s Epyc server chip:

    There have been many details that AMD has just recently started revealing for their 2nd Gen EPYC Rome processors. The AMD EPYC Rome processors are composed of a 9 die design which is also to be referred to as MCM (Multi-Chip-Module). The 9 dies include eight CCD’s (Compute Core dies) & a single IOD (Input / Output die). Each CCD is composed of two CCX (Compute Core complexes) that feature four Zen 2 cores with their own respective L2 cache and a shared L3 cache. All eight CCD’s are connected to the I/O die using infinity fabric.

    The entire article is here:

    AMD Epyc Rome CPU

    With faster and faster CPUs, the bottleneck in overall system speed is again becoming memory. DDR4 is showing its age: higher-end desktop and workstation CPUs have quad-channel memory controllers, and some servers have 8-channel memory controllers. DDR5 has been finalized, and engineering samples have already been shipping to vendors for design verification and certification. DDR5 will double the speed of the memory channels into the PC; PC5-51200 speed memory should be available next year. (DDR5 is not backwards compatible and will not work in currently shipping DDR4 systems.)

    • #44
  15. kedavis Coolidge
    kedavis
    @kedavis

    [continued]

    However, the Computer Center admins were not happy about it, because to keep from tying up CPU power just unpacking and re-packing character text, I stored the “program” and “working” text one character per “word,” using just 6 bits out of 60.  (CDC Cyber mainframe system.)

    I suppose I could have packed things like you describe, but then my program would have been chewing up CPU time like yours did, essentially for the make-work of “optimizing” memory use.  And they were far more conservative of CPU time.  But nobody had ever encountered a situation where so much memory would be used at once.
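
    (For reference, the packing I skipped looks roughly like this; a minimal modern-C++ sketch, not the original CDC code, with a 64-bit word standing in for the Cyber’s 60-bit word.)

        #include <cstdint>

        // Pack ten 6-bit character codes into one word. Saving memory this
        // way costs a shift and a mask every time a character is touched,
        // which is exactly the CPU overhead described above.
        std::uint64_t put_char(std::uint64_t word, int pos, unsigned code) {
            const int shift = 6 * pos;                    // pos runs 0..9
            word &= ~(std::uint64_t{0x3F} << shift);      // clear the slot
            return word | (std::uint64_t{code & 0x3F} << shift);
        }

        unsigned get_char(std::uint64_t word, int pos) {
            return static_cast<unsigned>((word >> (6 * pos)) & 0x3F);
        }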

    And where most even rather complicated programs might be just a few hundred (60-bit) words at most, for my Normal Algorithm processor to handle a decent-size program might require upwards of 100k.  And the whole mainframe system only had 192k at that time.  So when someone wanted to run my program for their Normal Algorithm test, most or all of everything else previously running on the system had to get “swapped out.”

    To optimize it as much as possible, I wound up (after “encouragement” from the Computer Center admins, passed through the department head/my advisor) breaking up the main program into “segments” which meant that only the data part was always in system memory.  That at least allowed a FEW other programs to be resident and run at the same time…

    And that’s also when I discovered that the setup CDC had created for segmenting programs like that, didn’t actually work very well…  One of several times I’ve managed to uncover bugs in sometimes very important and complex systems.

    • #45
  16. kedavis Coolidge
    kedavis
    @kedavis

    WillowSpring (View Comment):
    I went from mini-computers like the DEC pictured above through microprocessors to custom design. The chips were so slow and so memory limited that there was always a doubt that you could solve the problem at all. One of the early products I worked on was a MOS 6502-based temperature control system. The memory was so limited that there would be only a few bytes unused, and in order to add any new function, I had to go back and optimize existing code. Good times!

    Oh, I also did a little embedded work.  I came in partway through the development of a microprocessor-controlled video effects unit.  The previous software guy hadn’t been able to get it to work, and just couldn’t figure out why.  Finally he just gave up and left.

    The basis of the hardware was a 64180 chip, basically a Z80 enhanced with some on-chip RAM (not very much, maybe 1k?) and I/O capability so it didn’t need a lot of external chips to be used in devices.

    The code was mostly in C, with some assembly for hardware-specific functions that couldn’t be handled by C.  The code development was done on a PC and then cross-compiled to Z80/64180, put on a ROM chip, and stuck in the device for testing.

    It didn’t take me long to see that he hadn’t included the initialization routine at the very start of the assembly code, to initialize the CPU stack etc.  How can someone look and look and not see that?  I suppose it’s a tree that people can miss if they’re in the forest of C and just assume things like that are done by the Software Elves.

    Cross-compiling was a big thing since it let people (at least theoretically) write portable code in C that could then be compiled for different hardware just by using a different version of the software.  What it actually did, at least in the case I was involved with, was compile the C code to Z80 assembly, and then a regular Z80 assembler took it to machine code level for programming the EPROM.

    And that was actually another case where I encountered a hidden bug:  At least in some cases, I don’t remember the circumstances now, the compiler would produce assembly code missing some linking designations, like EXTERNAL values between modules, and then the assembly and linking steps would fail.

    Fortunately in those days, if you had a technical problem like that, it wasn’t yet impossible to reach someone at a company who actually knew what they were doing.  I let someone at Aztec/Manx software know about the problem, including reproducible examples, and they fixed it.

    There was also an optimizer available which I used, but the things it optimized seemed to me like they should have been done by the compiler to start with. More profit from the optimizer, I guess.

     

    • #46
  17. namlliT noD Member
    namlliT noD
    @DonTillman

    Joseph Eagar (View Comment):

    namlliT noD (View Comment):

    Joseph Eagar: But computers aren’t getting faster anymore. There is a physical limit to how small you can make transistors, and there is also a limit to how many transistors you can turn on at once and not melt the chip. We have probably reached both limits and we certainly will have reached them in a year or two’s time.

    I’ve been in the business for almost 40 years. It’s not a problem.

    We’ve actually run into physical limits many, many times before. And each time a new technology was developed to get around the physical limit. The original logic families were replaced by faster logic families, then they found a way to keep transistors from saturating to go faster, then they introduced Field Effect Transistors which were simpler and increased the density, then Complementary FETs addressed power consumption, and so forth. More recently we hit a clock speed physical limit, so now you see multiple CPUs.

    See my article here: Moore’s Law

    But the rate of improvement has been slowing down. An SMP quad-core processor isn’t going to perform four times as fast as a single-core in most cases, it will be limited by the relatively slow speed of main memory.

    A couple issues here.  One is that it depends on what you mean by “improvement”.   If you measure just one parameter, sure.  But different types of improvements happen in different areas at different times.

    But Moore’s Law wasn’t about improvement in speed, or number of computations, or whatever.  Moore’s Law was only about the number of transistors on an IC product (i.e., not a lab curiosity).

    Here’s the chart from the Wikipedia entry (it’s nicely up to date):

    I think it’s lookin’ pretty good.  How amazing is that?

    • #47
  18. Joseph Eagar Member
    Joseph Eagar
    @JosephEagar

    OccupantCDN (View Comment):

     

    AMD Epyc Rome CPU

    With faster and faster CPUs, the bottleneck in overall system speed is again becoming memory. DDR4 is showing its age: higher-end desktop and workstation CPUs have quad-channel memory controllers, and some servers have 8-channel memory controllers. DDR5 has been finalized, and engineering samples have already been shipping to vendors for design verification and certification. DDR5 will double the speed of the memory channels into the PC; PC5-51200 speed memory should be available next year. (DDR5 is not backwards compatible and will not work in currently shipping DDR4 systems.)

    My understanding was that DDR4 has about the same latency as DDR5? 

    • #48
  19. OccupantCDN Coolidge
    OccupantCDN
    @OccupantCDN

    Joseph Eagar (View Comment):

    OccupantCDN (View Comment):

     

    AMD Epyc Rome CPU

    With faster and faster CPUs, the bottleneck in overall system speed is again becoming memory. DDR4 is showing its age: higher-end desktop and workstation CPUs have quad-channel memory controllers, and some servers have 8-channel memory controllers. DDR5 has been finalized, and engineering samples have already been shipping to vendors for design verification and certification. DDR5 will double the speed of the memory channels into the PC; PC5-51200 speed memory should be available next year. (DDR5 is not backwards compatible and will not work in currently shipping DDR4 systems.)

    My understanding was that DDR4 has about the same latency as DDR5?

    Yes and no. When counted in clock cycles, the latency of RAM has not changed in 15-20 years. The trick is to increase the clock speed: DDR5 will support clock speeds from 3200 MHz to 4800 MHz out of the box, and eventually up to 8400 MHz.
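
    As a rough back-of-the-envelope illustration (the kits below are typical retail figures, not numbers from this thread): absolute CAS latency in nanoseconds is CL × 2000 / data rate in MT/s, so doubling the clock while the cycle count doubles leaves the latency roughly where it was.

        #include <cstdio>

        int main() {
            // Absolute CAS latency in ns = CL * bus period
            //                            = CL * 2000 / (data rate in MT/s).
            struct Kit { const char* name; double cl; double rate; };
            const Kit kits[] = {
                {"DDR4-3200 CL16", 16.0, 3200.0},   // illustrative kit
                {"DDR5-6400 CL32", 32.0, 6400.0},   // illustrative kit
            };
            for (const Kit& k : kits)
                std::printf("%s -> %.1f ns\n", k.name, k.cl * 2000.0 / k.rate);
            return 0;                               // both come out to 10.0 ns
        }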

    • #49
  20. WillowSpring Member
    WillowSpring
    @WillowSpring

    Headedwest (View Comment):
    My procedure was this: when I was ready to leave the office for the day, I would submit about 8 of these jobs (with different data for the experiments I was running). Next morning (if I was lucky) I would have the results

    In my first company, we were doing a proof of concept IFF (Identify Friend or Foe) pattern recognition system for the Navy based on Radar return patterns.  The system was trained by repeatedly passing through a set of known data and then tested against a different set.  The “Train” took overnight and sometimes needed to be baby-sat.  I was just starting out and volunteered for the overnight runs.  I have always felt that my willingness to do that gave my career a kick-start.  And it also let me go to Engineering school full time (paid for by my company).  I can’t imagine having the energy for that again.

    also:

    The procedure was basically to submit a text file with the program surrounded by JCL(!) statements to submit as batch jobs to the supercomputer array. JCL (Job Control Language) came out of early computing, and if somebody really knew it, you wanted that person as a friend for life.

    I absolutely HATED JCL.  It might as well be called “Just Comeback Later”.  I went to Wright Patterson AFB to port one of our programs to their multiple IBM mainframe systems.  The head guy would show me what JCL I needed and I would type that up, submit the card deck, and the next day get a rejection notice.  The head guy would say “Oh, you forgot the gobbledygook”, so I would add that and rinse and repeat.  I think I was there a week, and probably 4 days of that was getting the JCL right.

    The other people you wanted to keep happy were the computer operators (when you weren’t doing it yourself).  My first programming course was at UNC, which had one of the first IBM 360 computers in a shared arrangement with Duke and NC State.  The professor was Fred Brooks, who had managed the IBM 360 and OS/360 projects.

    My first project was to print a 3D plot on a line printer using overprinting to alter the perceived density.  It was the typical big computer setup where you would punch the code on cards, submit the card deck and come back in a couple of hours to get the results.  Unfortunately, I misread the special code to reprint on the same line and used the one to eject the page and print at the top of the next page.

    The result was a stack of about 6″ of paper with one character at the top left of each page.  This was labelled with a very nasty note from the operator who managed to stop my program before it ran too long.  

    It took a long time to get that operator’s trust back.

    • #50
  21. Headedwest Coolidge
    Headedwest
    @Headedwest

    WillowSpring (View Comment):

    I was just starting out and volunteered for the overnight runs.

    Staying up all night used to be easy. Now it happens inadvertently via insomnia. But the next day is no fun. And you got a nice benefit out of your all-nighters.

    I absolutely HATED JCL. It might as well be called “Just Comeback Later”. I went to Wright Patterson AFB to port one of our programs to their multiple IBM mainframe systems. The head guy would show me what JCL I needed and I would type that up, submit the card deck and the next day get a rejection notice.

    The terrible part was that the syntax was incomprehensible, so you could never guess or intuit what was needed. You had to know it or look it up, and if you weren’t doing it constantly you needed to find the person who knew. I remember only one line of JCL:

    SYSIN DD *

    which means treat the lines (cards) that follow as data to be read. I remember it because I mis-heard my expert and I didn’t put the space between DD and *. Instant run failure, and no information came back as to why.

    The other people you wanted to keep happy were the computer operators (when you weren’t doing it yourself). My first programming course was at UNC which had one of the first IBM 360 computers in a shared arrangement with Duke and NC State. The professor was Fred Brooks who had managed the 360 and 360/OS projects.

    I own his book “The Mythical Man-Month”. Good read.

    My first project was to print a 3D plot on a line printer using overprinting to alter the perceived density. It was the typical big computer setup where you would punch the code on cards, submit the card deck and come back in a couple of hours to get the results.

    Always unpredictable. At Pitt’s data center you’d get a slip of paper with a number, and you could phone up and a recorded voice would tell you the last job run, so you wouldn’t have to guess. That was luxury, in the day.

    The result was a stack of about 6″ of paper with one character at the top left of each page. This was labelled with a very nasty note from the operator who managed to stop my program before it ran too long.

    It took a long time to get that operator’s trust back.

    I heard a story when I was an undergraduate about somebody who had done almost exactly that error. He got yelled at, too.

    Between waiting for an available keypunch machine and waiting for the run delay, you quickly learned to be a very careful pre-run debugger or you’d never get anything done. 

     

    • #51
  22. Miffed White Male Member
    Miffed White Male
    @MiffedWhiteMale

    Headedwest (View Comment):

     

    My first project was to print a 3D plot on a line printer using overprinting to alter the perceived density. It was the typical big computer setup where you would punch the code on cards, submit the card deck and come back in a couple of hours to get the results.

    Always unpredictable. At Pitt’s data center you’d get a slip of paper with a number, and you could phone up and a recorded voice would tell you the last job run, so you wouldn’t have to guess. That was luxury, in the day.

    The result was a stack of about 6″ of paper with one character at the top left of each page. This was labelled with a very nasty note from the operator who managed to stop my program before it ran too long.

    It took a long time to get that operator’s trust back.

    I heard a story when I was an undergraduate about somebody who had done almost exactly that error. He got yelled at, too.

    Between waiting for an available keypunch machine and waiting for the run delay, you quickly learned to be a very careful pre-run debugger or you’d never get anything done.

     

    ah, punch cards.  

    In High School, each person had a couple cards with a standard heading.  It printed your name, the class period, etc.

    A friend slipped a card into another friend’s deck that had a print command on it to print “I’m a porno freak” on the next line after his name.  Friend #2 never noticed, and friend #1 finally fessed up and told him to re-run it just before he was going to hand it in.

     

    • #52
  23. kedavis Coolidge
    kedavis
    @kedavis

    Miffed White Male (View Comment):

    Headedwest (View Comment):

     

    My first project was to print a 3D plot on a line printer using overprinting to alter the perceived density. It was the typical big computer setup where you would punch the code on cards, submit the card deck and come back in a couple of hours to get the results.

    Always unpredictable. At Pitt’s data center you’d get a slip of paper with a number, and you could phone up and a recorded voice would tell you the last job run, so you wouldn’t have to guess. That was luxury, in the day.

    The result was a stack of about 6″ of paper with one character at the top left of each page. This was labelled with a very nasty note from the operator who managed to stop my program before it ran too long.

    It took a long time to get that operator’s trust back.

    I heard a story when I was an undergraduate about somebody who had done almost exactly that error. He got yelled at, too.

    Between waiting for an available keypunch machine and waiting for the run delay, you quickly learned to be a very careful pre-run debugger or you’d never get anything done.

     

    ah, punch cards.

    In High School, each person had a couple cards with a standard heading. It printed your name, the class period, etc.

    A friend slipped a card into another friend’s deck that had a print command on it to print “I’m a porno freak” on the next line after his name. Friend #2 never noticed, and friend #1 finally fessed up and told him to re-run it just before he was going to hand it in.

     

    High School for me was all about paper tape.  Cards didn’t come until college.

    • #53
  24. FloppyDisk90 Member
    FloppyDisk90
    @FloppyDisk90

    All this discussion reminds me of this old Dilbert cartoon:  Link

     

    • #54
  25. kedavis Coolidge
    kedavis
    @kedavis

    FloppyDisk90 (View Comment):

    All this discussion reminds me of this old Dilbert cartoon: Link

     

    • #55
  26. WillowSpring Member
    WillowSpring
    @WillowSpring

    Miffed White Male (View Comment):
    A friend slipped a card into another friend’s deck that had a print command on it to print “I’m a porno freak” on the next line after his name. Friend #2 never noticed, and friend #1 finally fessed up and told him to re-run it just before he was going to hand it in.

    When running a compile, our system would print out error messages on a separate line starting with FRSOPN….. It made a particular sound when printing out the compile results, so we all were keyed to that particular sound.

    I have no idea what the FRSOPN stood for, but a great trick was to stick in a card which started

    #FRSOPN …. with some error message.

    The # meant it was a compiler comment and it would usually take the target programmer a while to figure out there wasn’t a problem.

    • #56
  27. Arahant Member
    Arahant
    @Arahant

    I rather liked JCL.

    • #57
  28. WillowSpring Member
    WillowSpring
    @WillowSpring

    Arahant (View Comment):

    I rather liked JCL.

    That might explain a lot!

    • #58
  29. Arahant Member
    Arahant
    @Arahant

    WillowSpring (View Comment):

    Arahant (View Comment):

    I rather liked JCL.

    That might explain a lot!

    🤣

    • #59
  30. WillowSpring Member
    WillowSpring
    @WillowSpring

    kidCoder (View Comment):
    Being able to bootstrap the world from just a working computer is an open problem. We need more people who can bootstrap different languages without having those languages to begin with. Try compiling gcc without gcc, it can get to be quite an adventure.

    I have very fond memories of gcc.  When I was working for a company which did a custom chip design for battery management, the first silicon had a bug which made branches work incorrectly.  We were able to modify gcc so that an extra location was added after each branch so they would work correctly.

    I think open source tools are a very important part of improving computer programming.

    • #60