DoE calls for 30 petaflop supercomputer, invites proposals from all comers
The Department of Energy wants to keep pushing the supercomputer envelope, and it’s calling for proposals on how to make that happen. Early drafts of a joint request for proposal call for two new supercomputing systems to be built at the Los Alamos National Laboratory and Sandia National Laboratory. The goal is to build both computers on the same core architecture, even though the two systems will differ in size, scope, and mission.
The first system, codenamed Trinity, will be used by the United States’ Stockpile Stewardship Program. The program is responsible for evaluating our aging nuclear arsenal (the youngest missiles are now over 20 years old), testing how the characteristics of the fissile materials have changed over the decades, and ensuring that our ICBMs and bombs A) go boom if told to do so and B) don’t go boom at any other time, or in an inappropriate fashion.
Trinity is tasked with maintaining backwards compatibility with older programs while providing a framework for the development of next-generation software. That’s the sort of capability that’s often bandied about in consumer circles, but at the supercomputing level, it’s absolutely essential. Our best software and hardware don’t currently scale well enough to allow for exascale supercomputers.
The DoE’s draft paper (link to .doc) states that Trinity “is expected to help technology development for the ASC (Advanced Simulation and Computing) Program… a design goal of Trinity is to achieve a balance between usability of current NNSA ASC simulation codes and adaptation to new computing technologies.”
Talk NERSCy to me
The other system the DoE is planning is earmarked for the National Energy Research Scientific Computing Center (NERSC). This new system will be NERSC-8 and must provide “10-30 times the sustained performance over the NERSC-6 Hopper system.” For reference, the NERSC-6 “Hopper” is an AMD Opteron-based supercomputer with a peak performance of 1.2 petaflops and a total of 153,408 CPU cores.
The DoE gives more concrete guidelines for NERSC-8 than for Trinity and its requirements are stringent. NERSC-8 has to support the development of more detailed models, be capable of running a greater number of independent simulations simultaneously, and still scale a single task across up to 50% of the entire supercomputer. Applications must be able to scale from launch to full size within 30 seconds. NERSC-8 must be capable of supporting hundreds of concurrent users and tens of thousands of batch jobs. It must be fully backwards compatible with the Message Passing Interface (MPI) 3, but must also anticipate the adoption of an “MPI + X” system in which “MPI continues to serve as the programming model for inter-node communication and X provides for finer-grain, on-node parallelism.”
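The two-level “MPI + X” model can be sketched in miniature. The toy below uses plain Python, with worker processes standing in for MPI ranks (inter-node communication) and a per-process thread pool standing in for “X” (finer-grain, on-node parallelism). This is purely illustrative; a real NERSC-8 code would pair actual MPI with something like OpenMP, not Python’s standard library.

```python
# Toy "MPI + X" sketch: processes play the role of MPI ranks,
# threads inside each process play the role of "X" (on-node parallelism).
# All names here are illustrative, not part of any DoE specification.
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool

N_THREADS = 4  # on-node parallelism per "rank"

def node_work(chunk):
    # "X" level: split this rank's chunk across on-node threads.
    with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
        partials = pool.map(lambda xs: sum(x * x for x in xs),
                            [chunk[i::N_THREADS] for i in range(N_THREADS)])
    return sum(partials)

def run(n_ranks=4, n=100_000):
    # "MPI" level: strided decomposition of the problem across ranks.
    chunks = [range(r, n, n_ranks) for r in range(n_ranks)]
    with Pool(n_ranks) as ranks:
        return sum(ranks.map(node_work, chunks))

if __name__ == "__main__":
    # Computes the sum of squares 0..n-1 across two parallel levels.
    print(run())
```

The point of the hybrid model is that message passing between ranks is comparatively expensive, so each rank exploits cheap shared-memory parallelism locally and only communicates coarse results across the interconnect.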
Both computers must support C, C++, Fortran 77, Fortran 2008, and Python on the compute partition. Would-be system builders must also submit optimized versions of libm, libgsl, FFTW, BLAS1-3, LAPACK/ScaLAPACK, HDF5 and netCDF.
Total power consumption for Trinity will be 15MW, while NERSC-8 is limited to 6MW. Trinity will support 2-4PB of RAM per compute partition (1-2PB for NERSC-8), and 20-30x that much disk storage.
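Taken at face value, those figures imply enormous storage systems. A quick back-of-the-envelope calculation (assuming the 20-30x disk multiplier applies directly to the stated RAM ranges, which the draft does not spell out):

```python
# Scratch math from the draft's figures. Assumption: disk capacity is
# simply 20-30x the RAM per compute partition.
DISK_MULTIPLIER = (20, 30)  # disk is 20-30x RAM

def disk_range_pb(ram_lo_pb, ram_hi_pb):
    """Return the (low, high) implied disk capacity in petabytes."""
    return (ram_lo_pb * DISK_MULTIPLIER[0], ram_hi_pb * DISK_MULTIPLIER[1])

print(disk_range_pb(2, 4))  # Trinity (2-4PB RAM): 40-120 PB of disk
print(disk_range_pb(1, 2))  # NERSC-8 (1-2PB RAM): 20-60 PB of disk
```

So even the smaller NERSC-8 system would pair its memory with tens of petabytes of disk.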
What’s interesting about the draft proposal is seeing the scale of the work to be done, the various challenges that must be met, and the proposed solutions for bringing exascale computing into reach. The links between supercomputing and ultramobile computing are much tighter than most realize; lowering the fundamental amount of energy required to complete an operation in a cell phone has direct applications to the same task in a compute node with 250,000 processors.