Supercomputers under a new lens: A Sandia-developed benchmark re-ranks top computers


A Sandia National Laboratories software program, now installed as an additional test for the widely watched TOP500 supercomputer rankings, has become increasingly prominent. The program’s full name — High Performance Conjugate Gradients, or HPCG — doesn’t come trippingly to the tongue, but word is seeping out that this relatively new benchmarking program is proving as valuable as its venerable partner — the High Performance LINPACK program — which some say has become less than adequate at measuring many of today’s computational challenges.

TOP500 LINPACK and HPCG charts of the fastest supercomputers of 2017. The rearranged order and steep reduction in estimated speed under the HPCG benchmark are the result of a different method of testing modern supercomputer programs. (Image courtesy of Sandia National Laboratories)

“The LINPACK program used to represent a broad spectrum of the core computations that needed to be performed, but things have changed,” said Sandia researcher Mike Heroux, who created and developed the HPCG program. “The LINPACK program performs compute-rich algorithms on dense data structures to identify the theoretical maximum speed of a supercomputer. Today’s applications often use sparse data structures, and computations are leaner.”

The term “sparse” means that the matrix under consideration has mostly zero values. “The world is really sparse at large sizes,” said Heroux. “Think about your social media connections: there may be millions of people represented in a matrix, but the entries in your row — the people who influence you — are few. So the effective matrix is sparse. Do other people on the planet still influence you? Yes, but through people close to you.”
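As an illustration only, not taken from the article, the following Python sketch (assuming NumPy and SciPy are installed) shows the idea: a sparse format stores just the nonzero values and their positions, so the zeros cost nothing.

```python
import numpy as np
from scipy.sparse import csr_matrix

# A toy 6x6 "influence" matrix: each row holds nonzero entries only
# for the few people who influence that person; everything else is 0.
dense = np.array([
    [2.0, 0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 2.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 2.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 2.0, 1.0],
    [0.0, 0.0, 1.0, 0.0, 1.0, 2.0],
])

# Compressed Sparse Row (CSR) storage keeps only the nonzero values
# plus small index arrays locating them; the zeros are never stored.
sparse = csr_matrix(dense)
print(sparse.nnz, "stored values instead of", dense.size)  # 14 instead of 36
```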

Sandia National Laboratories computational researcher Mike Heroux created the HPCG program that re-ranks supercomputers. (Photo courtesy of Sandia National Laboratories)

Similarly, for a scientific problem whose solution requires billions of equations, most of the matrix coefficients are zero. For example, when measuring pressure differentials in a 3-D mesh, the pressure at each node depends directly on its neighbors’ pressures; the pressure at faraway points is represented through the node’s near neighbors. “The cost of storing all matrix terms, as the LINPACK program does, becomes prohibitive, and the computational cost even more so,” said Heroux. A computer may be very fast at computing with dense matrices, and thus score highly on the LINPACK test, but in practical terms the HPCG test is more realistic.
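Some back-of-the-envelope arithmetic, with an assumed mesh size, shows how quickly dense storage becomes prohibitive; the 27-neighbor coupling below reflects the 27-point stencil of HPCG’s generated problem, but the figures are purely illustrative.

```python
# Illustrative storage arithmetic for a 3-D mesh problem where each
# node couples only to its immediate neighbors (a 27-point stencil).
n = 128                       # nodes along each mesh edge (assumed)
N = n ** 3                    # total nodes = total equations
bytes_per_term = 8            # one double-precision value

dense_bytes = N ** 2 * bytes_per_term    # LINPACK-style: every term stored
sparse_bytes = 27 * N * bytes_per_term   # ~27 nonzeros per row

print(f"dense : {dense_bytes / 1e12:.0f} TB")   # about 35 TB
print(f"sparse: {sparse_bytes / 1e9:.2f} GB")   # about 0.45 GB
```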

To better reflect the practical elements of current supercomputing application programs, Heroux developed HPCG’s preconditioned iterative method for solving systems containing billions of linear equations and billions of unknowns. “Iterative” means the program starts with an initial guess at the solution and then computes a sequence of improved answers. Preconditioning uses other properties of the problem to converge quickly to an acceptably close answer.
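The loop below is a textbook conjugate-gradient iteration in Python/NumPy, a minimal sketch of the kind of method HPCG exercises rather than HPCG’s actual implementation; the tolerance and iteration cap are illustrative defaults.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Textbook CG for a symmetric positive-definite A (sketch only)."""
    x = np.zeros_like(b, dtype=float)  # initial guess
    r = b - A @ x                      # residual: how wrong the guess is
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # step length along direction p
        x += alpha * p                 # the next, improved answer
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # acceptably close: stop iterating
            break
        p = r + (rs_new / rs) * p      # new direction, conjugate to the old
        rs = rs_new
    return x
```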

“To solve the problems we need to for our mission, which may range from a full weapons simulation to a wind farm, we need to describe physical phenomena with high fidelity, such as the pressure differential in a fluid flow simulation,” said Heroux. “For a mesh in a 3-D domain, we need to know, at each node on the grid, its relation to the values at all the other nodes. A preconditioner makes the iterative method converge more quickly, so a multigrid preconditioner is applied to the method at each iteration.”
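A full multigrid preconditioner is too involved for a short example, but the sketch below shows where any preconditioner enters the loop: it is applied to the residual once per iteration. The diagonal (Jacobi) preconditioner in the usage lines is a simple stand-in, not what HPCG uses.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """CG with a preconditioner applied at every iteration (sketch).

    M_inv(r) approximately solves M z = r. HPCG applies a multigrid
    preconditioner here; a Jacobi stand-in keeps this example short.
    """
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = M_inv(r)                       # preconditioner applied to residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)                   # ...and applied again each iteration
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Usage with an assumed SPD test matrix and a diagonal (Jacobi) stand-in:
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50.0 * np.eye(50)        # symmetric positive definite
b = rng.standard_normal(50)
x = preconditioned_cg(A, b, lambda r: r / np.diag(A))
```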

Supercomputer vendors like NVIDIA Corp., Fujitsu Ltd., IBM, Intel Corp. and Chinese companies write versions of the HPCG program that are optimal for their platforms. While it may seem odd for test-takers to modify the test to suit themselves, it is clearly desirable for supercomputers of various designs to personalize the test, as long as each competitor touches all the agreed-upon calculation bases.

“We have checks in the code to detect optimizations that are not allowed under the published benchmark policy,” said Heroux.

On the HPCG TOP500 list, the Sandia and Los Alamos National Laboratory supercomputer Trinity has risen to No. 3, and it is the top Department of Energy system. Trinity is No. 7 overall in the LINPACK ranking. HPCG better reflects Trinity’s design choices.

Heroux says he wrote the base HPCG code 15 years ago, originally as a teaching code for students and colleagues who wanted to learn the anatomy of an application that uses scalable sparse solvers. Jack Dongarra and Piotr Luszczek of the University of Tennessee have been essential collaborators on the HPCG project. In particular, Dongarra, whose prominence in the high-performance computing community is unrivaled, has been a strong backer of HPCG.

“His promotional contributions are essential,” said Heroux. “People respect Jack’s knowledge, and that helped immensely in spreading the word. But if the program weren’t solid, promotion alone wouldn’t be enough.”

Heroux invested his time in developing HPCG because he had a strong desire to help assure the U.S. stockpile’s safety and effectiveness. The supercomputing community needed a new benchmark that better reflected the needs of the national-security scientific computing community.

“I had worked at Cray Inc. for 10 years before joining Sandia in ’98,” he says, “when I saw the algorithmic work I cared about moving to the labs under the Accelerated Strategic Computing Initiative (ASCI). When the U.S. decided to observe the Comprehensive Nuclear Test Ban Treaty, we needed high-end computing to better ensure the nuclear stockpile’s safety and effectiveness. I thought it was a noble thing, that I would be happy to be part of it, and that my expertise could be applied to developing next-generation simulation capabilities. ASCI was the big new project of the late 1990s if I wanted to do something meaningful in my area of research and development.”

Heroux is now director of software technology for the Department of Energy’s Exascale Computing Project. There, he works to coordinate the computing work of the DOE national labs — Oak Ridge, Argonne, Lawrence Berkeley, Pacific Northwest, Brookhaven and Fermi — along with the three National Nuclear Security Administration labs.

“Today, we have an opportunity to create an integrated effort among the national labs,” said Heroux. “We now have daily forums at the project level, and the people I work with most closely are people from the other labs. Because the Exascale Computing Project is integrated, we have to deliver software to the applications and the hardware at all the labs. The Department of Energy’s venture into a multi-lab, multi-university project gives an organizational structure for us to work together as a cohesive unit so that software is delivered to fit the key applications.”

Source: Sandia



