Supercomputers

2010 August 16
by Chris Vernon

So the University of Southampton has a new supercomputer. The BBC made a little video here:

According to the university’s page it consists of:

  • 1008 Intel Nehalem compute nodes with two 4-core processors;
  • 8064 processor-cores providing over 72 TFlops;
  • Standard compute nodes have 22 GB of RAM per node;
  • 32 high-memory nodes with 45 GB of RAM per node;
  • All nodes are connected to a high-speed disk system with 110 TB of storage.

In the video Dr Oz Parchment suggests that this new system would place around 83rd in the world supercomputer rankings; interestingly, he also notes that 5-6 years ago it could have been number 1. That’s the pace of computer improvement. Let’s compare it with the basic office PC I’m writing this on, which cost around £600. It’s based around Intel’s Core i5-750 CPU, running at 2.66 GHz. The Intel specification sheet gives this CPU a floating point performance of 42.56 GFlops (billion floating point operations per second). This sounds reasonable when we consider that the supercomputer, with its 2016 CPUs, is reported to have 72 TFlops, suggesting around 36 GFlops per processor. After all, supercomputers are just large numbers of regular processors (and memory) connected together with a fast interconnect.
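
As a quick sanity check on those figures, here is the arithmetic as a few lines of Python. Only the 72 TFlops, the 1008 nodes with two CPUs each, and the 42.56 GFlops come from the numbers above; the rest is simple division, so treat it as a back-of-envelope sketch rather than anything rigorous.

    # Rough per-CPU performance of the new Southampton machine vs. a desktop i5-750.
    supercomputer_tflops = 72.0        # reported total peak
    cpus = 1008 * 2                    # 1008 nodes, two 4-core Nehalems each

    per_cpu_gflops = supercomputer_tflops * 1000 / cpus
    print(f"Per CPU: {per_cpu_gflops:.1f} GFlops")            # ~35.7 GFlops

    i5_750_gflops = 42.56              # Intel's quoted figure for the Core i5-750
    ratio = supercomputer_tflops * 1000 / i5_750_gflops
    print(f"Whole machine vs. one desktop: {ratio:.0f}x")     # ~1690x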

We can run Parchment’s rough calculation for my computer. How far back in time do we have to go for my standard desktop PC to be considered a supercomputer?

Since 1993 a list of the world’s fastest supercomputers has been maintained: the Top 500. Going back to the beginning, we see that in 1993 a CM-5/1024, developed by Thinking Machines Corporation and owned by Los Alamos National Laboratory in the US, held the top spot. This was also the computer used in the control room in the Jurassic Park film. Here’s what just a few nodes looked like; the Los Alamos system was far larger:

Thinking Machines' CM-5 Supercomputer

Being the fastest computer of its day, it would have cost millions, been staffed by a team of engineers and scientists, and been employed on the most computationally taxing investigations being carried out anywhere in the world. I expect it spent most of its time working on nuclear weapons. According to this, the CM-5 cost $46k per node in 1993, which would price the Los Alamos National Laboratory system at $47 million, or around $70 million in today’s money. Its performance? A theoretical peak of 131 GFlops, with a benchmark achieved performance of 59.7 GFlops. The same ballpark as my run-of-the-mill office computer today. It was also twice as fast as number two and ten times the power of the 20th-ranked system.
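
The sums behind those figures are easy to reproduce. In the sketch below, the 1.5× inflation multiplier from 1993 to today is my own assumption, chosen simply to land near the “around $70 million” figure quoted above; the node count, per-node price and GFlops numbers are the ones already given.

    # Back-of-envelope reconstruction of the CM-5 cost and performance comparison.
    nodes = 1024
    cost_per_node_1993 = 46_000
    cost_1993 = nodes * cost_per_node_1993
    print(f"1993 price: ${cost_1993 / 1e6:.0f} million")      # ~$47 million

    inflation = 1.5   # assumed 1993 -> 2010 multiplier, not an official index
    today = cost_1993 * inflation / 1e6
    print(f"In today's money: ${today:.0f} million")          # ~$71m, i.e. "around $70 million"

    cm5_achieved_gflops = 59.7   # benchmark figure quoted above
    i5_750_gflops = 42.56        # Intel's figure for the Core i5-750
    print(f"Desktop vs. CM-5 (achieved): {i5_750_gflops / cm5_achieved_gflops:.2f}x")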

What this means is that the computational resources available at the cutting edge just 17 years ago now sit on everyone’s desk running Office 2010.

In 1997 I was lucky enough to visit the European Centre for Medium-Range Weather Forecasts (ECMWF). They had recently taken delivery of a new Fujitsu VPP700/116 and had claimed the 8th spot in the Top 500 ranking with a theoretical peak of 255.2 GFlops. The system was used for 10-day weather forecasts. This image shows a 56-node VPP700 system; the ECMWF system was roughly twice the size:

Fujitsu VPP700 Supercomputer

With off-the-shelf components, a similarly powerful desktop computer could be built for a few thousand pounds using four Intel Xeon processors.
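
To put a rough number on that claim, dividing the VPP700/116’s quoted peak across four sockets gives the per-processor performance such a box would need. The 255.2 GFlops figure comes from the Top 500 entry above; the four-socket configuration is the one suggested in the sentence, and whether a particular Xeon actually delivers ~64 GFlops is left to the parts list.

    # What per-socket performance would a four-Xeon workstation need to match the VPP700/116?
    vpp700_peak_gflops = 255.2
    sockets = 4
    needed_per_socket = vpp700_peak_gflops / sockets
    print(f"Needed per socket: {needed_per_socket:.1f} GFlops")   # ~63.8 GFlops

    i5_750_gflops = 42.56   # the desktop chip discussed earlier, for scale
    print(f"Core i5-750 for comparison: {i5_750_gflops} GFlops")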

State-of-the-art computer performance from a little over a decade ago is now available to everyone able to afford a modern PC. We’re all using supercomputers. Could we be doing more with our computers than playing games and running Microsoft Office 2010?

4 Responses
  1. August 16, 2010

    A fun trip down memory lane, thank you. What caught my eye in the video was the water cooling. I’d always assumed pipes ran into the rack-mount servers. Air-cooled machines in water-cooled racks hadn’t occurred to me. Presumably it’s a compromise: lower purchase cost per node, but lower cooling efficiency compared with board-level water cooling.

  2. aid
    August 16, 2010

    You make a good point! I think it’s pretty clear that we have much more computing power than we use. It’s an inefficiency, but it’s not the only one. For example, we all have a biro on our desks, a technological leap ahead of Shakespeare’s quill, but very few of us are writing prose. I guess that’s slightly different, but my point is that the availability of the human brains which use the tools humans make has been a bottleneck for quite some time.

  3. Chris Vernon
    August 17, 2010

    Human brains are the bottleneck… yeah, I guess so. Our constraints aren’t the tools but rather our use of them. Interesting.

  4. August 18, 2010

    Nice work Chris.

    This has always struck me as the problem with ‘Singularity’-type arguments. The ability to process vastly more data does not in fact make us vastly better at doing most things.

    John Michael Greer wrote a lovely piece on this recently:
    http://thearchdruidreport.blogspot.com/2010/07/cybernetics-of-black-knights.html

    Still, the latest games sure do look pretty, so maybe these amazing computers mean we’re actually heading here…?
