The new high-performance computing (HPC) system at Stony Brook University has been a long time coming for computational astrophysicist Alan Calder.
Until now, Calder, who studies supernovae, had to be satisfied with 2D simulations of an exploding star. He has performed 3D simulations on supercomputers at national facilities, but that often means he gets just one crack at it, so everything has to be set up right from the start, with no tweaking allowed.
“Like climbing a mountain, it was a heroic act to try to get this thing to work,” Calder, professor of astronomy and physics and deputy director of the Institute for Advanced Computational Science (IACS) at Stony Brook, told EE Times.
That’s changed, though, with the $1.5 million HPC system, which uses Intel Xeon CPU Max series processors in Hewlett Packard Enterprise ProLiant DL360 Gen11 servers, both optimized for modeling, simulation, AI and analytics. The technology became available earlier this year, and Stony Brook’s IACS is the first in U.S. academia to put it to use.
“I think it’s going to be capable of doing 3D [simulations of] supernovae explosions, and that’s the next big step in our research [for which] we’ve been gearing up for a couple of years,” Calder said. “I’m optimistic that we will be able to perform a lot of science with this new capability that we can’t now.”
There’s certainly a lot of science going on at Stony Brook, and the new HPC system there has many prospective users. That is a result of the explosion in data-based research and the growing number of traditional disciplines now preceded by the word “computational.”
“Increased computing power is always of interest because it allows more computing in less time and thereby more productive research,” Calder said. “In a field like theoretical astrophysics, where one is simulating multi-scale, multi-physics events like stellar explosions, more computing means that one can include more detailed physics in the models and thus more realism and better results.”
Simulated supernovae are not the only things detonating on Stony Brook’s campus.
Computational science ‘just exploding’
“Computational science, data-centric science is just exploding, and it’s penetrating every single field, not just in science but scholarship in general,” Robert J. Harrison, director of the IACS, told EE Times.
In addition to Calder, one user will be Heather J. Lynch, whose field is ecology and evolution and whose work, in part, contributes to the development of international fishing quotas.
Lynch studies animal populations in Antarctica by assembling massive amounts of satellite imagery and video collected from drones to count the seals, penguins and various birds along the continent’s coast.
“So she’s got a data-centric pipeline that takes the images, maps them onto terrain models and uses machine learning to try to identify either individuals or other evidence of animals living there, like guano stains and so on,” Harrison said. “And so she’s working with datasets that are in the petabytes.”
Standing in line with Lynch and Calder for the new HPC system might be Joel Saltz, VP for clinical informatics at Stony Brook Medicine.
Saltz’s work includes creating 3D models of tissue that has been frozen and sliced into thin layers, Harrison said. After each layer is imaged, the images, each of which can be 30 GB, are assembled to give a full picture of the tissue being studied in all three dimensions.
Eliminating bottlenecks
The new HPC system is a major expansion of Stony Brook’s SeaWulf computing cluster and builds on experience gained from running the Ookami supercomputer testbed installed in 2020.
Ookami uses the same processor technology, Fujitsu’s A64FX, as Japan’s Fugaku supercomputer, which was deemed the world’s fastest until it was knocked out of the top spot by a supercomputer at Oak Ridge National Laboratory, according to Kyodo News.
“Unfortunately, the processor on the Ookami is a low-power Arm processor that was really built for exascale systems, where you have literally 150,000 of these things,” Harrison said. “And so power is really the No. 1 concern after performance.”
When Intel came out with its latest-generation Sapphire Rapids processors and the high-bandwidth memory attached to them, Harrison knew they would be a good fit: the IACS’s applications work well on x86-based processors, he said, and the institute can take full advantage of the high-bandwidth memory.
“Memory bandwidth is really the fundamental bottleneck for lots and lots of applications,” he said. “Not all but many of the important ones for us.”
The processors in the new system are next-generation Intel parts with significant advances in the instruction set, especially for machine learning, Harrison said. On their own, they represent only a 20% increase in per-core performance compared with the older system.
“But the real difference is the high-bandwidth memory, where you can fully utilize all of the cores and get a lot more bang for your buck,” he said. “We’ve got a whole bunch of applications that are running 2× to 4× faster on this high-bandwidth memory than they would on the older DDR [double data rate] memory.”
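To make the bandwidth point concrete, the sketch below is a minimal STREAM-style “triad” kernel in C with OpenMP. It is not taken from Stony Brook’s own benchmark suite; the array size, trial count and compile line are illustrative assumptions. The loop performs only two floating-point operations for every 24 bytes it moves, so its runtime is set almost entirely by how quickly memory can feed the cores, which is the resource that high-bandwidth memory expands relative to conventional DDR.

```c
/*
 * Minimal STREAM-style triad sketch: a[i] = b[i] + scalar * c[i].
 * The kernel does 2 flops per 24 bytes moved, so runtime is governed
 * almost entirely by memory bandwidth rather than core count or clock.
 * Array size and trial count are illustrative assumptions only.
 * Example build: gcc -O3 -fopenmp triad.c -o triad
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1L << 26)   /* 64M doubles per array, ~512 MB each */
#define NTRIALS 10     /* repeat and keep the best time       */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    const double scalar = 3.0;

    /* First-touch initialization so pages land near the threads that use them. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double best = 1e30;
    for (int t = 0; t < NTRIALS; t++) {
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];
        double dt = omp_get_wtime() - t0;
        if (dt < best) best = dt;
    }

    /* Three double arrays are touched per iteration: 24 bytes per element. */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("best triad time: %.4f s  ->  ~%.1f GB/s effective bandwidth\n",
           best, gbytes / best);

    free(a); free(b); free(c);
    return 0;
}
```

Kernels with this kind of low arithmetic intensity are the ones that stand to gain the 2× to 4× Harrison describes when their data streams from high-bandwidth memory instead of DDR.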