I've never worked with major enterprise or government systems built around aging mainframes, the type that get parodied for running COBOL. So I'm completely ignorant, although fascinated. Are they power hogs? Are they wildly cheap to run? Are they even run as they were back in the day?
Not all mainframes are ancient; new models are still designed and sold to this day. And the brand spanking new mainframes may still be running COBOL code and other such antiquities, as many new mainframes are installed as upgrades for older mainframes and inherit a lot of legacy software that way.
And to answer your question: a mainframe is just a server. A specific design of server with a particular specialism for a particular set of use cases, but the basics of the underlying technology are no different from any other server. Old machines (mainframes or otherwise) will always consume far more power per instruction than a newer machine, so any old mainframes still chugging along out there are likely consuming a lot of power relative to the work they're doing.
The value of mainframes is that they tend to have enormous redundancy and very high performance characteristics, particularly in terms of data access and storage. They're the machine of choice for things like financial transactions, where every transaction must be processed almost instantly, data loss is unacceptable, downtime is not an option, and spikes in load are extremely unpredictable. For a use case like that, the over-engineering of a mainframe is exactly what you need, and well worth the money over the alternative of a bodged-together cluster of standard rack servers.
See also machines like the HP NonStop line of fault-tolerant servers, which aren't usually called mainframes but which share a kinship with them, being enormously over-engineered and very expensive servers that serve a particular niche.
Mainframes are basically large rack-mounted computers, and typically require many kW of power to run.
They're still selling mainframes. A new IBM z16 takes 3-phase power and can use up to 30 kW, or about 1,000 times a typical laptop.
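If you want to sanity-check that ratio, it's trivial arithmetic (the ~30 W laptop figure is my assumption, not from the spec):

```python
# Back-of-envelope: z16 max draw vs. a typical laptop.
Z16_MAX_WATTS = 30_000  # up to 30 kW, per the figure quoted above
LAPTOP_WATTS = 30       # assumed typical laptop draw

print(f"{Z16_MAX_WATTS / LAPTOP_WATTS:.0f}x")  # -> 1000x
```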
I think that refers to present-day mainframes, while OP is asking about the behemoths of the 1970s. Those were housed in tall bays (not sure what they were called), used high-voltage 400-cycle(?) power provided by on-site motor-generators, and were water-cooled. Planning an installation involved arranging the facilities for all of this. You didn't just wheel them into an office and turn them on.
I just spent a few minutes with a web search, and it is surprisingly hard to find power-consumption figures. Two that I found were:
- IBM 360/91, a high-end scientific mainframe from 1965, used 74 kW for 1.9 MIPS of CPU performance (https://arxiv.org/pdf/1601.07133). This was a big increase in power efficiency compared to older machines that used vacuum tubes. (A rough MIPS-per-watt comparison follows this list.)
- The Cray-1 supercomputer from 1976 used 250 kW (https://www.cpushack.com/2018/05/27/mainframes-and-supercomputers-from-the-beginning-till-today/).
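To put those data points in perspective, here's a rough MIPS-per-watt sketch; the "modern" numbers are my own ballpark assumptions, not measurements:

```python
# Rough MIPS-per-watt comparison using the 360/91 figures above.
old_mips, old_watts = 1.9, 74_000   # IBM 360/91 (1965), from the arxiv paper
new_mips, new_watts = 500_000, 300  # assumed modern server CPU, order of magnitude only

old_eff = old_mips / old_watts      # ~2.6e-5 MIPS/W
new_eff = new_mips / new_watts      # ~1,700 MIPS/W

print(f"improvement: ~{new_eff / old_eff:.1e}x")  # roughly 6e7x under these assumptions
```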
Beyond the CPU itself, you generally had a room full of peripherals such as disk drives (they looked like washing machines, and a row of them looked like a laundromat), tape drives (old movies often depicted big computers as tape drives spinning back and forth), etc.
The 360 series had an “emergency stop” knob just in case. The one I saw had a sign next to it saying not to pull the knob unless the machine was literally on fire. It seemed that there was some kind of knife blade behind the knob, so when you pulled it, the blade would physically cut through a bundle of wires to make sure that power was disconnected from the machine. You couldn’t simply reset the emergency stop after someone pulled it. You had to call an IBM technician to replace the cable bundle that had been cut through.
The story about IBM technicians was that they always wore tie clips to hold down their neckties. That was to prevent the ties from getting caught in rotating machinery.
One thing I miss about the old IBM mainframes is the little LED "Wait" light on the console keyboard. If it was on, you weren't using much CPU. But if you got it to turn off, or mostly off, you were really giving the system a workout.
Modern hardware designed to run ancient software. Not all that special.
An older example that's still popular is the AS/400. IBM has since replaced these, but a lot of businesses refuse to acknowledge that and keep maintaining these beasts, sometimes paying more for parts than the original MSRP.
Interesting article that’s related.
https://www.gao.gov/blog/outdated-and-old-it-systems-slow-government-and-put-taxpayers-risk
If you ever talk with an insurance guy or a sysadmin, you will understand why the AS/400 can't be replaced that easily, and why most of the time people were unhappy when generic stuff replaced it.
Back when a split-up of IBM was on the table, Microsoft was only interested in the AS/400 line. They used to do a lot of critical things on them. Yes, even Microsoft.
One can emulate the AS/400, since the entire thing, hardware and OS included, has been a virtual platform from the start. I'm not into financial/insurance/travel, so I haven't investigated whether IBM offers a POWER or Xeon replacement. But you won't be able to justify throwing away millions of lines of working code to move to some currently fashionable framework/language. These people make their money on thousandths of a cent.
Modern systems do far more work per second than these old machines while drawing less power. If you were to collect enough old mainframes to equal the performance of a modern rack server, you would need 10-1000 times the power to run them, depending on how far back you go. Even 10-year-old hardware can cost 4-10 times more in electricity than a modern server with the same total performance.
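To put rough numbers on what that multiplier means for a power bill (every figure below, wattages and tariff alike, is an assumption for illustration):

```python
# Illustrative yearly electricity cost at equal throughput.
KWH_PRICE = 0.15              # USD/kWh, assumed
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts):
    return watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

modern = yearly_cost(500)     # one modern server, assumed 500 W
legacy = yearly_cost(3_000)   # decade-old gear at the same throughput, assumed 3 kW

print(f"modern ${modern:,.0f}/yr vs legacy ${legacy:,.0f}/yr "
      f"({legacy / modern:.0f}x)")  # -> about $657 vs $3,942, a 6x gap
```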
Kind of "older", I guess… the ones I have at work are from 2017. Each server has 36 10 TB 3.5" hard drives, and they're the main power hog. Each server eats around 1.2 kW. Each storage cluster holds four of these servers, for a total of 1.2 PB of storage space. The entire cluster is powered by a 5 kVA UPS.
Quite power hungry, but pretty lean when compared to other methods for running that amount of storage.
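Those figures hang together if you assume the 1.2 PB is what's left after redundancy overhead (my guess, not something stated above):

```python
# Sanity-checking the cluster figures in the comment above.
drives_per_server, drive_tb, servers = 36, 10, 4
server_watts = 1_200

raw_tb = drives_per_server * drive_tb * servers
total_watts = servers * server_watts

print(f"raw: {raw_tb} TB")       # 1440 TB raw; the quoted 1.2 PB is
                                 # presumably usable space after redundancy
print(f"draw: {total_watts} W")  # 4.8 kW, fitting under the 5 kVA UPS
```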
Huge power consumption… very expensive to run. I have regularly had the chance to take these things home, but it's like wanting to take home an old steam engine: it's huge, it's expensive, and to what end?
Ha, they are run exactly as they were back in the day; that's half the point. No one wants to pay to replace (i.e., rebuild from scratch) that system.
Newer systems are far more power-efficient than those of yesteryear. Systems design and engineering, while built on the principles of the past, have changed enormously in the last decade alone. Older mainframe systems are really no better than museum pieces and technological curiosities today.
Are those older systems largely virtualized now? When you hear about some old system at a government office not being able to keep up, is it the same hardware?
Definitely power hogs, by comparison: modern switch-mode power supplies are incredibly efficient, while older gear wastes a much larger fraction of its draw as heat.
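A quick sketch of what that efficiency gap means in waste heat; the 65% figure for older supplies is an assumption, while ~94% is roughly what an 80 PLUS Platinum unit manages at half load:

```python
# Wasted power for the same 500 W load at two PSU efficiencies.
LOAD_W = 500

for name, eff in [("older supply (assumed)", 0.65),
                  ("modern Platinum PSU", 0.94)]:
    draw = LOAD_W / eff
    print(f"{name}: draws {draw:.0f} W, "
          f"wastes {draw - LOAD_W:.0f} W as heat")
# -> the old supply burns ~269 W as heat vs ~32 W for the modern one
```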
I never really administered anything like that myself but I had a friend who took care of some old servers ~20 years ago in college. Multiple power drops in that small room went to fuse panels rated for several hundred amps each.
Unfortunately, all I know is that they were VAX mainframes and were already considered obsolete in the late '90s ;-)
A lot, just like today's mainframes and supercomputers. They're calculating complex formulas, running gigantic batch jobs, doing millisecond AI fraud detection, etc. A regular computer or server will throttle a lot, while these are designed to be loaded 100% of the time. Dave Plummer, formerly of MS, recently made a video about a 40 TB RAM monster.
Did you ever look at how much today's top-of-the-line gaming rigs consume? ;-)