Do You Remember Your First?

 

It was the mid-eighties. I had a degree in Literature from the College of Creative Studies at the University of California, Santa Barbara, and I was determined to make my living as a freelance writer. And I adored my typewriter—the noises it made, the satisfying push on the keys, the occasional ink on my hands from changing ribbons. I’d heard personal computers were the next big thing, but not for me. No self-respecting writer would give up a beautiful typewriter for that.

Then, at a writer’s conference, I was introduced to this bad boy:

[Photo: the Kaypro II]

And it seduced me with the convenience of never having to retype a whole page because of a typo or a reconsidered word choice, with the usefulness of having all my work on one slender floppy disk, with the smooth way the keyboard clipped onto the front so you could carry it by the handle in the back like a sewing machine. All my writerly pride went out the window. I had to have it.

Nowadays, I sling around two slim MacBooks like they were nothing, and I can hardly believe I once thought that heavy Kaypro was portable, or that dot-matrix printers were legible. But I’ll never forget you, Kaypro II. You turned my head, you metal-cased rascal you.

What was your first computer?

  1. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    Gary McVey (View Comment):

    The computer magazines of the day reflect the changes in the industry. Magazines from 1980-82 are pitched to hobbyists and buffs, people who are willing to install print spoolers and set elaborate DIP switches at both the computer and printer ends. People who rhapsodized over writing device drivers. From the mid-80s on, they became less geeky, more attuned to end users, particularly business users. More color illustrations. Fewer admonitions to use a little PEEK and POKE.

    In the MS-DOS world, the 286 was a big advance for graphics quality and ease of use, as was the 386 later.

     

    And I miss those 500+ page tomes, BYTE Magazine with Pournelle’s Chaos Manor column and John Dvorak’s industry musings. (I miss Jerry Pournelle, too, may he rest in peace.)

    I’d forgotten that he’d passed.  I was at a science-fiction convention in the late 70s, I think it was, where he and Larry Niven had arrived in a chartered 747 or something.  There was one discussion panel he was on that covered, among other things, the atomic-bomb-powered Orion spaceship plan, “Old Bang-Bang” as Pournelle called it.  I think I still have the tape of it somewhere.

    • #61
  2. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):

    Henry Racette (View Comment):

    Gary McVey (View Comment):

    The computer magazines of the day reflect the changes in the industry. Magazines from 1980-82 are pitched to hobbyists and buffs, people who are willing to install print spoolers and set elaborate DIP switches at both the computer and printer ends. People who rhapsodized over writing device drivers. From the mid-80s on, they became less geeky, more attuned to end users, particularly business users. More color illustrations. Fewer admonitions to use a little PEEK and POKE.

    In the MS-DOS world, the 286 was a big advance for graphics quality and ease of use, as was the 386 later.

     

    And I miss those 500+ page tomes, BYTE Magazine with Pournelle’s Chaos Manor column and John Dvorak’s industry musings. (I miss Jerry Pournelle, too, may he rest in peace.)

    I’d forgotten that he’d passed. I was at a science-fiction convention in the late 70s, I think it was, where he and Larry Niven had arrived in a chartered 747 or something. There was one discussion panel he was on that covered, among other things, the atomic-bomb-powered Orion spaceship plan, “Old Bang-Bang” as Pournelle called it. I think I still have the tape of it somewhere.

    Jerry was one-of-a-kind, a bit of a dinosaur, a bit of a visionary. He irritated me sometimes with his Chaos Manor musings, but I was touched by his personal account on his blog (arguably among the first blogs ever created) of his battle with brain cancer, happy to hear of its remission — and very sad to read of his passing.

    Someone should write a biography about him that intertwines the computer revolution with his life. I think it could be good.

    • #62
  3. Gary McVey Contributor
    Gary McVey
    @GaryMcVey

    Computer Currents used to feature parodies, like their version of “Singing in the Rain”:

    “I’m booting it again, just booting it again,

    What a horrible feeling, I’m going insane,

    I’m trapped in defeat, with Control-Alt-Delete

    And I’m booting, just booting it again”. 

    • #63
  4. kedavis Coolidge
    kedavis
    @kedavis

    Gary McVey (View Comment):

    Computer Currents used to feature parodies, like their version of “Singing in the Rain”:

    “I’m booting it again, just booting it again,

    What a horrible feeling, I’m going insane,

    I’m trapped in defeat, with Control-Alt-Delete

    And I’m booting, just booting it again”.

    How about this oldie?

     

    • #64
  5. Chuck Coolidge
    Chuck
    @Chuckles

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):

    Gary McVey (View Comment):

    The computer magazines of the day reflect the changes in the industry. Magazines from 1980-82 are pitched to hobbyists and buffs, people who are willing to install print spoolers and set elaborate DIP switches at both the computer and printer ends. People who rhapsodized over writing device drivers. From the mid-80s on, they became less geeky, more attuned to end users, particularly business users. More color illustrations. Fewer admonitions to use a little PEEK and POKE.

    In the MS-DOS world, the 286 was a big advance for graphics quality and ease of use, as was the 386 later.

     

    And I miss those 500+ page tomes, BYTE Magazine with Pournelle’s Chaos Manor column and John Dvorak’s industry musings. (I miss Jerry Pournelle, too, may he rest in peace.)

    I’d forgotten that he’d passed. I was at a science-fiction convention in the late 70s, I think it was, where he and Larry Niven had arrived in a chartered 747 or something. There was one discussion panel he was on that covered, among other things, the atomic-bomb-powered Orion spaceship plan, “Old Bang-Bang” as Pournelle called it. I think I still have the tape of it somewhere.

    Jerry was one-of-a-kind, a bit of a dinosaur, a bit of a visionary. He irritated me sometimes with his Chaos Manor musings, but I was touched by his personal account on his blog (arguably among the first blogs ever created) of his battle with brain cancer, happy to hear of its remission — and very sad to read of his passing.

    Someone should write a biography about him that intertwines the computer revolution with his life. I think it could be good.

    You write well…

    • #65
  6. kedavis Coolidge
    kedavis
    @kedavis

    Chuck (View Comment):

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):

    Gary McVey (View Comment):

    The computer magazines of the day reflect the changes in the industry. Magazines from 1980-82 are pitched to hobbyists and buffs, people who are willing to install print spoolers and set elaborate DIP switches at both the computer and printer ends. People who rhapsodized over writing device drivers. From the mid-80s on, they became less geeky, more attuned to end users, particularly business users. More color illustrations. Fewer admonitions to use a little PEEK and POKE.

    In the MS-DOS world, the 286 was a big advance for graphics quality and ease of use, as was the 386 later.

     

    And I miss those 500+ page tomes, BYTE Magazine with Pournelle’s Chaos Manor column and John Dvorak’s industry musings. (I miss Jerry Pournelle, too, may he rest in peace.)

    I’d forgotten that he’d passed. I was at a science-fiction convention in the late 70s, I think it was, where he and Larry Niven had arrived in a chartered 747 or something. There was one discussion panel he was on that covered, among other things, the atomic-bomb-powered Orion spaceship plan, “Old Bang-Bang” as Pournelle called it. I think I still have the tape of it somewhere.

    Jerry was one-of-a-kind, a bit of a dinosaur, a bit of a visionary. He irritated me sometimes with his Chaos Manor musings, but I was touched by his personal account on his blog (arguably among the first blogs ever created) of his battle with brain cancer, happy to hear of its remission — and very sad to read of his passing.

    Someone should write a biography about him that intertwines the computer revolution with his life. I think it could be good.

    You write well…

    But he probably doesn’t want to sift through all of those Chaos Manor columns again, as research.

    • #66
  7. WillowSpring Member
    WillowSpring
    @WillowSpring

    All of this makes me feel pretty old. 

    My first ‘personal’ computer was an ASR-33 Teletype. I had just been hired as an “Engineering Aide” (sounded good, but was basically the low guy on the team) and was assigned a desk in a large room with the terminal, since it was so noisy. I talked my boss into letting me use the timeshare system it was hooked to when I had idle time. I used that to write a simulation of the military pattern-recognition system the company was building. That started my programming career.*

    Along with Data General and DEC minicomputers, the next really small computer I used was an Intel development board using the 8008 chip (that’s right, 8008, not 8080). It was a large board with 1 KB of memory, provided by (I think) four rows of eight chips that you could fry eggs on.

    Next was a KIM-1. It used the MOS Technology 6502 – a rip-off of the Motorola 6800, I believe. I used that to build a prototype display for a control system. That got me a job doing the design for a multi-zone industrial temperature control system. I designed it so it could be configured and wrote software for an Apple II to program the configuration. Since the company was multinational, I ended up getting multiple Apple II computers for the different subsidiaries. Because of that, the local Apple dealer started referring customers to me for custom programming. That kept me busy for several years.

    After that, there was a long period of IBM/PC/Windows stagnation.  All of the design tools we needed ran on Windows.

    Now that I am retired, I have a Lenovo running Linux tied to multiple Raspberry Pi computers.  For about $35, you get more computing power than the first several mainframes I used.

    *I have to give credit to several of the programmers on the staff (all women – the men were Engineers) for helping me when I needed a hint.

    • #67
  8. kedavis Coolidge
    kedavis
    @kedavis

    WillowSpring (View Comment):
    Now that I am retired, I have a Lenovo running Linux tied to multiple Raspberry Pi computers.  For about $35, you get more computing power than the first several mainframes I used.

    That’s a common sentiment, but really, I’m sure even a fairly old mainframe could solve complex math problems faster than a Raspberry Pi.  That’s just not what the Pi is designed or intended to do.

    Also, the mainframe systems I worked on could handle up to hundreds of timesharing users.  Let’s see a Pi do THAT.

    • #68
  9. Doug Kimball Thatcher
    Doug Kimball
    @DougKimball

    kedavis (View Comment):

    Headedwest (View Comment):

    kedavis (View Comment):

    Headedwest (View Comment):

    kedavis (View Comment):
    I remember when I looked at it, there were other options that were more usable and less expensive.

    When I bought mine, it was on a big sale at one of the computer store chains at the time. I would have preferred a Kaypro for the larger screen, but at that moment it cost a lot more. Portability (however limited with the size and weight) was a crucial factor.

    The T3200 was portable, had a much larger screen (gas plasma, not LCD), probably a better keyboard, and I think a 20 meg hard drive, and it was Compaq so less worries. I don’t remember how much it cost, but if you got a big discount then maybe…

     

    That would have been a much later computer. When I was buying it was either Osborne or Kaypro or a box + monitor. I bought when portables with hard drives did not exist; the Osborne had two 5.25 inch floppies. Hard drives for desktops were physically big and expensive.

    Seems there was pretty close overlap, anyway. The Osborne 1 looks to have been available through 1985 or so, and the T3200 came out in ’87; I got one of the first available in my area.

    We had a couple of Osbornes in our Touche Ross computer room.  We only used them to run streams of random numbers (out of a string of given numbers) to be used for statistical random sampling arrays for the field.  They were useless otherwise.  

    • #69
  10. kedavis Coolidge
    kedavis
    @kedavis

    Doug Kimball (View Comment):

    kedavis (View Comment):

    Headedwest (View Comment):

    kedavis (View Comment):

    Headedwest (View Comment):

    kedavis (View Comment):
    I remember when I looked at it, there were other options that were more usable and less expensive.

    When I bought mine, it was on a big sale at one of the computer store chains at the time. I would have preferred a Kaypro for the larger screen, but at that moment it cost a lot more. Portability (however limited with the size and weight) was a crucial factor.

    The T3200 was portable, had a much larger screen (gas plasma, not LCD), probably a better keyboard, and I think a 20 meg hard drive, and it was Compaq so less worries. I don’t remember how much it cost, but if you got a big discount then maybe…

     

    That would have been a much later computer. When I was buying it was either Osborne or Kaypro or a box + monitor. I bought when portables with hard drives did not exist; the Osborne had two 5.25 inch floppies. Hard drives for desktops were physically big and expensive.

    Seems there was pretty close overlap, anyway. The Osborne 1 looks to have been available through 1985 or so, and the T3200 came out in ’87; I got one of the first available in my area.

    We had a couple of Osbornes in our Touche Ross computer room. We only used them to run streams of random numbers (out of a string of given numbers) to be used for statistical random sampling arrays for the field. They were useless otherwise.

    I don’t know if my vision was ever good enough to use an Osborne.  Reminds me of some of the Max Headroom/Terry Gilliam type productions where the computer terminals had tiny screens with a huge Fresnel lens in front.

     

    • #70
  11. WillowSpring Member
    WillowSpring
    @WillowSpring

    kedavis (View Comment):

    WillowSpring (View Comment):
    Now that I am retired, I have a Lenovo running Linux tied to multiple Raspberry Pi computers. For about $35, you get more computing power than the first several mainframes I used.

    That’s a common sentiment, but really, I’m sure even a fairly old mainframe could solve complex math problems faster than a Raspberry Pi. That’s just not what the Pi is designed or intended to do.

    Also, the mainframe systems I worked on could handle up to hundreds of timesharing users. Let’s see a Pi do THAT.

    I agree the Pi isn’t designed for timesharing, but I bet it could do it if you wanted to put in the programming effort.

    My first programming course was taught by Fred Brooks who had just come from the IBM 360 project.  UNC shared a 360 with Duke and NC State.  This was back in the ’64-’65 timeframe. (One of my first programs was an attempt to do a 3D function plot using over-prints on the line printer to get the 3rd dimension of ink density.  Unfortunately, I misread the printer manual and used the “eject page” command instead of the “print again on the same line” command.  The result was a 3″ stack of paper with one line on each page and a very nasty note from the computer operator)

    According to Wikipedia, the ‘low end’ 360 was the model 30 which could do 34,500 instructions/sec and had 8-64 KB of memory.  Higher end models could do 16.6 MIPS and had up to 8 MB of memory. 

    The lower end Raspberry Pi has a 4 core, 64 bit 1.2 GHz processor with 1 GB memory.  It also supports Ethernet, WiFi, Bluetooth and USB.  For $100 or so more, you can add a 1TB hard drive.

    In the other room, I have a stack about the size of a loaf of bread with 5 Raspberry Pi boards, a router, power supply and USB hub.  I use it to play with parallel programming (all in Python). 
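    For anyone curious what that tinkering looks like, here is a minimal sketch of the single-board case: CPU-bound work spread across one Pi’s four cores using only Python’s standard library. The prime-counting function and the work list are made-up examples, and fanning jobs out across several Pis (rather than across one Pi’s cores) would need something extra such as MPI or a job queue, which isn’t shown.

    from concurrent.futures import ProcessPoolExecutor

    def count_primes(limit: int) -> int:
        # Deliberately CPU-bound: naive trial-division count of primes below `limit`.
        count = 0
        for n in range(2, limit):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        limits = [50_000, 60_000, 70_000, 80_000]        # one chunk of work per core
        with ProcessPoolExecutor() as pool:              # defaults to one worker per core
            results = list(pool.map(count_primes, limits))
        for limit, primes in zip(limits, results):
            print(f"primes below {limit:,}: {primes}")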

    One note about Byte Magazine – I used to get it when it just started out as a stapled together set of copied pages.  Those were the days.

    • #71
  12. Henry Racette Member
    Henry Racette
    @HenryRacette

    WillowSpring (View Comment):
    In the other room, I have a stack about the size of a loaf of bread with 5 Raspberry Pi boards, a router, power supply and USB hub.  I use it to play with parallel programming (all in Python). 

    I like that. I have something similar, except they’re Beaglebone Blacks. And I don’t do anything useful at all with them. Not yet, anyway.

    • #72
  13. kedavis Coolidge
    kedavis
    @kedavis

    WillowSpring (View Comment):

    kedavis (View Comment):

    WillowSpring (View Comment):
    Now that I am retired, I have a Lenovo running Linux tied to multiple Raspberry Pi computers. For about $35, you get more computing power than the first several mainframes I used.

    That’s a common sentiment, but really, I’m sure even a fairly old mainframe could solve complex math problems faster than a Raspberry Pi. That’s just not what the Pi is designed or intended to do.

    Also, the mainframe systems I worked on could handle up to hundreds of timesharing users. Let’s see a Pi do THAT.

    I agree the Pi isn’t designed for timesharing, but I bet it could do it if you wanted to put in the programming effort.

    My first programming course was taught by Fred Brooks who had just come from the IBM 360 project. UNC shared a 360 with Duke and NC State. This was back in the ’64-’65 timeframe. (One of my first programs was an attempt to do a 3D function plot using over-prints on the line printer to get the 3rd dimension of ink density. Unfortunately, I misread the printer manual and used the “eject page” command instead of the “print again on the same line” command. The result was a 3″ stack of paper with one line on each page and a very nasty note from the computer operator)

    According to Wikipedia, the ‘low end’ 360 was the model 30 which could do 34,500 instructions/sec and had 8-64 KB of memory. Higher end models could do 16.6 MIPS and had up to 8 MB of memory.

    The lower end Raspberry Pi has a 4 core, 64 bit 1.2 GHz processor with 1 GB memory. It also supports Ethernet, WiFi, Bluetooth and USB. For $100 or so more, you can add a 1TB hard drive.

    In the other room, I have a stack about the size of a loaf of bread with 5 Raspberry Pi boards, a router, power supply and USB hub. I use it to play with parallel programming (all in Python).

    One note about Byte Magazine – I used to get it when it just started out as a stapled together set of copied pages. Those were the days.

    Yes, but, the 16.6 MIPS did a lot more work than any single instruction on any microprocessor, including a Pi.

    The CDC Cyber system that I’ve referenced before, did a 60-bit floating-point multiply or divide in a single “cycle.”  Check to see how many instructions a microprocessor has to do, for the same thing.  And consider that even the single instructions might take several clock cycles.

    Multi-core CPUs are much better, of course, but people were saying the same thing about “my desktop computer is more powerful than a mainframe!” back in the 70s and 80s when it was complete balderdash.  Not even counting timesharing with multiple terminals.

    • #73
  14. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    WillowSpring (View Comment):

    kedavis (View Comment):

    WillowSpring (View Comment):
    Now that I am retired, I have a Lenovo running Linux tied to multiple Raspberry Pi computers. For about $35, you get more computing power than the first several mainframes I used.

    That’s a common sentiment, but really, I’m sure even a fairly old mainframe could solve complex math problems faster than a Raspberry Pi. That’s just not what the Pi is designed or intended to do.

    Also, the mainframe systems I worked on could handle up to hundreds of timesharing users. Let’s see a Pi do THAT.

    I agree the Pi isn’t designed for timesharing, but I bet it could do it if you wanted to put in the programming effort.

    No effort required.  Comes capable of multi-user time-slicing out of the box.  It is a one-liner to create new users.  One more to assign a password.  Part of configuring it the first time allows you to turn on the supplied OpenSSH service.

    Voilà, a time-slicing multi-user host. With enough clock speed and multiple cores, it will blow any ’60s or ’70s mainframe out of the water by orders of magnitude. More orders of magnitude if you need floating-point operations. And, out of the box, it is capable of enforcing strict resource allocation amongst the connected users.

    If OpenSSH is too complicated for end-users, it is trivial to substitute multi-port serial concentrators via ethernet.

    • #74
  15. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    kedavis (View Comment):

    Yes, but, the 16.6 MIPS did a lot more work than any single instruction on any microprocessor, including a Pi.

    No, both X86 and ARM processors are “CISC”, not “RISC”.  That is, they are Complex Instruction Set CPUs, that dispatch multiple actual operations per instruction, and execute several sub-operations per clock tick.

    The CDC Cyber system that I’ve referenced before, did a 60-bit floating-point multiply or divide in a single “cycle.”  Check to see how many instructions a microprocessor has to do, for the same thing.  And consider that even the single instructions might take several clock cycles.

    Nope.  All modern processors execute multiple floating point operations per clock tick, per core.

    You are out of date.

    • #75
  16. kedavis Coolidge
    kedavis
    @kedavis

    Phil Turmel (View Comment):

    kedavis (View Comment):

    Yes, but, the 16.6 MIPS did a lot more work than any single instruction on any microprocessor, including a Pi.

    No, both X86 and ARM processors are “CISC”, not “RISC”. That is, they are Complex Instruction Set CPUs, that dispatch multiple actual operations per instruction, and execute several sub-operations per clock tick.

    The CDC Cyber system that I’ve referenced before, did a 60-bit floating-point multiply or divide in a single “cycle.” Check to see how many instructions a microprocessor has to do, for the same thing. And consider that even the single instructions might take several clock cycles.

    Nope. All modern processors execute multiple floating point operations per clock tick, per core.

    You are out of date.

    Fact check:  False.  Microprocessors never have executed a complete floating-point instruction in a single clock cycle, otherwise a GHz CPU would be rated in Giga-FLOPS which they aren’t.  (And if they were, the NSA would be VERY interested!)  They’re rated in Mega-FLOPS or MFLOPS and until recently were still on the low side.  For example, the chart in the following article shows that the CDC Cyber 205, running at 50MHz in 1981, performed 8 64-bit MFLOPs.  (I suppose the “degradation” from the 50MHz clock speed might have been model-specific or related to loading and saving registers before and after, or something.  That and the Cyber models used 60-bit words so meeting the 64-bit standard may have required some gyrations that slowed things down some.)

    Meanwhile, AMD microprocessors didn’t reach that level until the recent Ryzen series, and Intel didn’t get there until Core 2.

    https://en.wikipedia.org/wiki/FLOPS

    After that, it’s pretty simple math:  If a microprocessor performs 8 MFLOPS at, say, 4GHz, that means it takes 500 clock cycles per FLOP.

    • #76
  17. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    kedavis (View Comment):

    Phil Turmel (View Comment):

    kedavis (View Comment):

    Yes, but, the 16.6 MIPS did a lot more work than any single instruction on any microprocessor, including a Pi.

    No, both X86 and ARM processors are “CISC”, not “RISC”. That is, they are Complex Instruction Set CPUs, that dispatch multiple actual operations per instruction, and execute several sub-operations per clock tick.

    The CDC Cyber system that I’ve referenced before, did a 60-bit floating-point multiply or divide in a single “cycle.” Check to see how many instructions a microprocessor has to do, for the same thing. And consider that even the single instructions might take several clock cycles.

    Nope. All modern processors execute multiple floating point operations per clock tick, per core.

    You are out of date.

    Fact check: False. Microprocessors never have executed a complete floating-point instruction in a single clock cycle, otherwise a GHz CPU would be rated in Giga-FLOPS which they aren’t. (And if they were, the NSA would be VERY interested!) They’re rated in Mega-FLOPS or MFLOPS and until recently were still on the low side. For example, the chart in the following article shows that the CDC Cyber 205, running at 50MHz in 1981, performed 8 64-bit MFLOPs. (I suppose the “degradation” from the 50MHz clock speed might have been model-specific or related to loading and saving registers before and after, or something. That and the Cyber models used 60-bit words so meeting the 64-bit standard may have required some gyrations that slowed things down some.)

    Meanwhile, AMD microprocessors didn’t reach that level until the recent Ryzen series, and Intel didn’t get there until Core 2.

    https://en.wikipedia.org/wiki/FLOPS

    After that, it’s pretty simple math: If a microprocessor performs 8 MFLOPS at, say, 4GHz, that means it takes 500 clock cycles per FLOP.

    Go back to that chart. Its units are FLOPS per cycle per core. Everything Intel from the P6 on is >= 1 for 64-bit floats. All of the ARM processors are >= 1 for 64-bit floats. The latest top-of-the-line Intel Skylake and friends are 32 FLOPS per core, per cycle. For gigahertz clocks, that’s beaucoup GigaFLOPS.
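    To put the unit correction in concrete terms, here is the back-of-the-envelope arithmetic those per-cycle figures imply, written as a tiny Python sketch; the 3 GHz clock and the 8-core count are illustrative assumptions, not figures from the chart or this thread.

    # Peak throughput = FLOPs per cycle per core x cores x clock rate.
    flops_per_cycle_per_core = 32      # the Skylake-class figure quoted above
    cores = 8                          # assumed for illustration
    clock_hz = 3.0e9                   # assumed 3.0 GHz clock

    peak_flops = flops_per_cycle_per_core * cores * clock_hz
    print(f"theoretical peak: {peak_flops / 1e9:.0f} GFLOPS")   # 768 GFLOPS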

    Your fact check is a flop.

    • #77
  18. kedavis Coolidge
    kedavis
    @kedavis

    Phil Turmel (View Comment):

    kedavis (View Comment):

    Phil Turmel (View Comment):

    kedavis (View Comment):

    Yes, but, the 16.6 MIPS did a lot more work than any single instruction on any microprocessor, including a Pi.

    No, both X86 and ARM processors are “CISC”, not “RISC”. That is, they are Complex Instruction Set CPUs, that dispatch multiple actual operations per instruction, and execute several sub-operations per clock tick.

    The CDC Cyber system that I’ve referenced before, did a 60-bit floating-point multiply or divide in a single “cycle.” Check to see how many instructions a microprocessor has to do, for the same thing. And consider that even the single instructions might take several clock cycles.

    Nope. All modern processors execute multiple floating point operations per clock tick, per core.

    You are out of date.

    Fact check: False. Microprocessors never have executed a complete floating-point instruction in a single clock cycle, otherwise a GHz CPU would be rated in Giga-FLOPS which they aren’t. (And if they were, the NSA would be VERY interested!) They’re rated in Mega-FLOPS or MFLOPS and until recently were still on the low side. For example, the chart in the following article shows that the CDC Cyber 205, running at 50MHz in 1981, performed 8 64-bit MFLOPs. (I suppose the “degradation” from the 50MHz clock speed might have been model-specific or related to loading and saving registers before and after, or something. That and the Cyber models used 60-bit words so meeting the 64-bit standard may have required some gyrations that slowed things down some.)

    Meanwhile, AMD microprocessors didn’t reach that level until the recent Ryzen series, and Intel didn’t get there until Core 2.

    https://en.wikipedia.org/wiki/FLOPS

    After that, it’s pretty simple math: If a microprocessor performs 8 MFLOPS at, say, 4GHz, that means it takes 500 clock cycles per FLOP.

    Go back to that chart. Its units are FLOPS per cycle per core. Everything Intel from the P6 on is >= 1 for 64-bit floats. All of the ARM processors are >= 1 for 64-bit floats. The latest top-of-the-line Intel Skylake and friends are 32 FLOPS per core, per cycle. For gigahertz clocks, that’s beaucoup GigaFLOPS.

    Your fact check is a flop.

    I don’t think that chart is labeled correctly. And/or they didn’t adjust/normalize the figures for each processor correctly. I noticed some disputation about that on the Wikipedia “talk” page when I looked. Also, I noticed some comments mentioned that just doing FLOPS is not the be-all/end-all of computing, including such things as packing/unpacking the numbers themselves, which mainframe processors are much faster at.

    • #78
  19. Henry Racette Member
    Henry Racette
    @HenryRacette

    One thing big iron really was good at (and, I’m sure, still is) was managing hierarchical storage and performing truly impressive data transfer. I can remember watching people move huge datasets on the 3090 with almost unbelievable speed, like it was nothing at all. I’ve got a lot of respect for mainframes — though I don’t want to work on them.

    • #79
  20. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    kedavis (View Comment):
    I don’t think that chart is labeled correctly.  And/or they didn’t adjust/normalize the figures for each processor correctly.

    I remember marveling years ago at the pipeline architectures for ALUs that deliver one or more operations per tick.  What do you think they’ve been doing with the billions of transistors they now stuff onto the dies of modern CPUs?

    The chart is labeled correctly.

    kedavis (View Comment):
    just doing FLOPS is not the be-all/end-all of computing

    Moving the goalposts now?

    I’m enough of a man to admit when I’m wrong.  Including in various places and times here on Ricochet.  What about you?

    { In the late 80’s I worked on IBM’s 2nd generation “Compass” chipset, the core of the AS/400.  I am intimately familiar with mainframe and mid-range design principles of that era.  I marveled at the progress microprocessors made in the decades since because it is all the more impressive when you understand how it works. }

    • #80
  21. kedavis Coolidge
    kedavis
    @kedavis

    Phil Turmel (View Comment):

    kedavis (View Comment):
    I don’t think that chart is labeled correctly. And/or they didn’t adjust/normalize the figures for each processor correctly.

    I remember marveling years ago at the pipeline architectures for ALUs that deliver one or more operations per tick. What do you think they’ve been doing with the billions of transistors they now stuff onto the dies of modern CPUs?

    The chart is labeled correctly.

    kedavis (View Comment):
    just doing FLOPS is not the be-all/end-all of computing

    Moving the goalposts now?

    I’m enough of a man to admit when I’m wrong. Including in various places and times here on Ricochet. What about you?

    { In the late 80’s I worked on IBM’s 2nd generation “Compass” chipset, the core of the AS/400. I am intimately familiar with mainframe and mid-range design principles of that era. I marveled at the progress microprocessors made in the decades since because it is all the more impressive when you understand how it works. }

    There’s something going on that’s making things look weird, because if home-computer microprocessors were really as powerful as you seem to think, even “bloatware” such as Windows would never go slow, etc.

    • #81
  22. WillowSpring Member
    WillowSpring
    @WillowSpring

    Henry Racette (View Comment):

    WillowSpring (View Comment):
    In the other room, I have a stack about the size of a loaf of bread with 5 Raspberry Pi boards, a router, power supply and USB hub. I use it to play with parallel programming (all in Python).

    I like that. I have something similar, except they’re Beaglebone Blacks. And I don’t do anything useful at all with them. Not yet, anyway.

    Useful!? Nobody said anything about doing anything useful. I was useful for 50+ years. Now, it’s my turn.

    • #82
  23. Rōnin Coolidge
    Rōnin
    @Ronin

    Zenith Z-100

    A Zenith Z-100, 1983. The DoD had bought thousands of these that were never used. Few came with instructions, and no one seemed to know how to use them. One organization I belonged to would not allow anyone below the grade of E-6 to touch one, thus no one touched one. Finally, in late 1986, I came across a “Z” with its full complement of operational manuals (my first deployment to the RoK), and I bought an MS-DOS guide from the PX (and extra 5.25-inch disks) and finally learned enough to do word processing and some forms. Took me about a month to get the basics down. Kept me off the streets at night, but I was on the DMZ, so not a lot to do when off duty anyway.

    • #83
  24. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    kedavis (View Comment):
    There’s something going on that’s making things look weird, because if home-computer microprocessors were really as powerful as you seem to think, even “bloatware” such as Windows would never go slow, etc.

    Now that is relevant. Modern personal computers are choked by RAM bandwidth. Give any modern CPU a math problem that either entirely fits in the CPU’s on-die data cache, or involves many math operations per input data point, and it’ll peg the GigaFLOPS meter. Unfortunately, not many real-world problems present that way. Try to do a multiply-accumulate on a gigabyte of data, and you will be disappointed.

    RAM latency is an even worse curse.  Any cache miss that has to fetch all the way out to RAM costs hundreds of cycles.  RISC architectures are toast in the modern world purely due to RAM performance limitations.
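    As a rough illustration of that point (not a benchmark claim about any particular machine), here is a small Python/NumPy sketch that times the same multiply-accumulate twice: once streaming over arrays far too large for cache, once over a small array reused many times. The array sizes and repeat counts are arbitrary assumptions; on typical hardware the cache-resident case should report a noticeably higher rate.

    import time
    import numpy as np

    def mac_gflops(n: int, repeats: int) -> float:
        # One pass of a dot product is n multiplies + n adds = 2*n FLOPs.
        a = np.random.rand(n)
        b = np.random.rand(n)
        start = time.perf_counter()
        for _ in range(repeats):
            a @ b
        return 2.0 * n * repeats / (time.perf_counter() - start) / 1e9

    # ~1 GB of doubles, each touched once per pass: limited by RAM bandwidth.
    print(f"streaming from RAM: {mac_gflops(64_000_000, 5):.1f} GFLOPS")
    # ~1.6 MB of doubles reused over and over: lives in on-die cache.
    print(f"resident in cache : {mac_gflops(100_000, 20_000):.1f} GFLOPS")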

    • #84
  25. WillowSpring Member
    WillowSpring
    @WillowSpring

    Henry Racette (View Comment):
    though I don’t want to work on them.

    That’s my main issue

    • #85
  26. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    Rōnin (View Comment):

     

    Zenith Z-100

    A Zenith Z-100, 1983. The DoD had bought thousands of these that were never used. Few came with instructions, and no one seemed to know how to use them. One organization I belonged to would not allow anyone below the grade of E-6 to touch one, thus no one touched one. Finally, in late 1986, I came across a “Z” with its full complement of operational manuals (my first deployment to the RoK), and I bought an MS-DOS guide from the PX (and extra 5.25-inch disks) and finally learned enough to do word processing and some forms. Took me about a month to get the basics down. Kept me off the streets at night, but I was on the DMZ, so not a lot to do when off duty anyway.

    Zenith bought Heathkit, and this was the result. I built the H-100 version of this in high school. Twice: the first was my father’s, the second was mine for college.

    • #86
  27. kedavis Coolidge
    kedavis
    @kedavis

    Phil Turmel (View Comment):

    kedavis (View Comment):
    There’s something going on that’s making things look weird, because if home-computer microprocessors were really as powerful as you seem to think, even “bloatware” such as Windows would never go slow, etc.

    Now that is relevant. Modern personal computers are choked by RAM bandwidth. Give any modern CPU a math problem that either entirely fits in the CPU’s on-die data cache, or involves many math operations per input data point, and it’ll peg the GigaFLOPS meter. Unfortunately, not many real-world problems present that way. Try to do a multiply-accumulate on a gigabyte of data, and you will be disappointed.

    RAM latency is an even worse curse. Any cache miss that has to fetch all the way out to RAM costs hundreds of cycles. RISC architectures are toast in the modern world purely due to RAM performance limitations.

    And yet CPU load percentages sometimes max out at 100%, which shouldn’t be possible if the problem is RAM access.

    • #87
  28. kedavis Coolidge
    kedavis
    @kedavis

    Phil Turmel (View Comment):

    WillowSpring (View Comment):

    kedavis (View Comment):

    WillowSpring (View Comment):
    Now that I am retired, I have a Lenovo running Linux tied to multiple Raspberry Pi computers. For about $35, you get more computing power than the first several mainframes I used.

    That’s a common sentiment, but really, I’m sure even a fairly old mainframe could solve complex math problems faster than a Raspberry Pi. That’s just not what the Pi is designed or intended to do.

    Also, the mainframe systems I worked on could handle up to hundreds of timesharing users. Let’s see a Pi do THAT.

    I agree the Pi isn’t designed for timesharing, but I bet it could do it if you wanted to put in the programming effort.

    No effort required. Comes capable of multi-user time-slicing out of the box. It is a one-liner to create new users. One more to assign a password. Part of configuring it the first time allows you to turn on the supplied OpenSSH service.

    Voilà, time-slicing multi-user host. With enough clock speed and multiple cores that will blow any 60’s or 70’s mainframe out of the water by orders of magnitude. More orders of magnitude if you need floating point operations. And, out of the box, capable of enforcing strict resource allocation amongst the connected users.

    If OpenSSH is too complicated for end-users, it is trivial to substitute multi-port serial concentrators via ethernet.

    If someone is connecting via SSH, they already have a computer – dumb terminals can’t do SSH – and are probably only using it for data, not processing. (Especially in the olden days of much slower network/internet speeds.) They certainly aren’t using the SSH’d-into Pi for graphics on their own computer. So it’s really just file-sharing.

    • #88
  29. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    kedavis (View Comment):
    And yet CPU load percentages sometimes max out at 100%, which shouldn’t be possible if the problem is RAM access.

    CPU stalls caused by memory access patterns are considered part of a “busy” CPU’s load.  They can’t be avoided by any other hardware intervention.  Though they do get tallied up in the CPU’s “cache miss” performance counters.

    • #89
  30. TBA Coolidge
    TBA
    @RobtGilsdorf

    Terri Mauro

    What was your first computer?

    • #90