
Marquee NREL Partnership With Hewlett Packard Enterprise Continues To Build on Years of Groundbreaking Innovation


Eleven years ago, the buds of what would become one of the National Renewable Energy Laboratory's (NREL's) most impactful and successful partnerships began to bloom. NREL commissioned a data center in its new Energy Systems Integration Facility (ESIF), designed to house a high-performance computing (HPC) system, or supercomputer, and push NREL to the leading edge of HPC.

Shortly before the ESIF ribbon cutting in 2013, Hewlett Packard Enterprise installed NREL's first supercomputer, Peregrine, in the innovative new data center. The symbiotic relationship between the building and the supercomputer won a 2014 R&D 100 Award and a 2014 R&D 100 Editor's Choice Award for Sustainability. It also helped NREL's ESIF earn R&D World Magazine's 2014 Laboratory of the Year award and the U.S. Department of Energy's 2013 Sustainability Award. It has since set the standard for new data center installations housing supercomputers.

Data centers are estimated to consume about 1% of global electricity. Part of what made the NREL-HPE design so innovative at the time was how it captured the warm water used to cool Peregrine and pumped it through the building and other parts of the campus as a heat source, but that was not all. Evaporative cooling units on the ESIF roof handle data center cooling instead of traditional mechanical chillers, which are more expensive and energy demanding, further increasing efficiency and helping make the data center the most efficient in the world. It boasts an annualized power usage effectiveness (PUE) of 1.036. In 2018, Data Center Dynamics recognized NREL's efficiency work with its Data Center Eco-Sustainability Award.
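For context, PUE is the ratio of total facility energy to the energy delivered to the IT equipment itself, so an annualized PUE of 1.036 means only about 3.6% of the facility's energy goes to overhead such as cooling and power distribution. The minimal sketch below illustrates the arithmetic; the energy figures are hypothetical, chosen only to reproduce that ratio, and are not NREL measurements.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT equipment energy.

    A PUE of 1.0 would mean every kilowatt-hour reaches the computing hardware;
    everything above 1.0 is cooling, power-conversion, and lighting overhead.
    """
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures for illustration only (not NREL data).
it_load_kwh = 10_000_000   # energy delivered to servers, storage, and network
overhead_kwh = 360_000     # cooling, power distribution, lighting

print(f"PUE = {pue(it_load_kwh + overhead_kwh, it_load_kwh):.3f}")  # PUE = 1.036
```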

“We thought this was the right partnership for this,” said Steve Hammond, NREL Computational Science Center director at the time and current senior research advisor. “We did things neither of us had ever done before. We set aggressive performance metrics HPE had to meet both in energy efficiency and in compute capability. It’s worked better than we ever imagined. We changed the industry. With $10 million, we changed the industry.”

How It Worked and Why It Worked
Ray Grout, NREL’s current Computational Science Center director, said, “A decade ago, HPE wasn’t in the high-performance computing business, and neither was NREL. We created a product together that wasn’t there.”

According to Grout and Hammond, in order for the partnership to work the way it did—the way it needed to—they had to tear down traditional business models. Grout referred to it as a “tiny little budget, but a big meeting of the minds.” The focus was not purely about getting into the HPC business or making money; both parties were heavily interested in innovating and doing it together. Hammond laid it all out, detailing the risks—and rewards—associated with such aggressive goals, so many unknowns, and so much on the line.

“We did things together that neither one of us could have done alone,” Hammond said. “Usually, you take what you know and you do one new thing you’ve never done before. We did about four new things. We got lucky in more ways than we deserved. Liquid cooling had never been done at this scale before. We set up a data center, which we had never done before. NREL had not previously been in the HPC business, and HPE had fallen out of it. We figured it out as we went. It was the sheer effort of the vision. The liquid cooling approach worked. We weren’t going against physics and thermodynamics. We were working with it. We just needed the right way to engineer it.”

The collaboration between NREL and HPE was special from the start. After winning the R&D 100 award, the NREL and HPE teams met in Houston to celebrate.

Hammond recalled, “People came up to me and said they had never worked harder, longer hours and had never had more fun. Maybe once in your career, you get something people gravitate to because they can see it’s going to be special—it’s like a moonshot.”

What It Meant Then and What It Means Now
In the early 2010s, NREL and HPE codeveloped the award-winning HP Apollo system with liquid-cooling capabilities, helping to propel HPE's growth in HPC. For HPE, the triumph resulted in the delivery of a product to market about two years sooner than originally anticipated and in computer management software that is now several versions along and part of every other HPE supercomputer. Since 2012, HPE has acquired both SGI and Cray, combining strong technologies to deliver a robust, world-leading supercomputing portfolio.

For NREL, the partnership delivered the original, award-winning data center and a much-needed compute capability. That need has only grown: Eagle, NREL's current supercomputer and successor to Peregrine, is oversubscribed by more than 2.5 times. Some of NREL's most relevant and publicized projects make regular use of Eagle. Athena's digital twin of Dallas Fort Worth International Airport, the Los Angeles 100% Renewable Energy Study's forecasting, the city of Columbus' charging infrastructure analysis, ExaWind's wind power simulations, and many more have all leaned heavily on NREL's supercomputing capability.

“When we look back on our most impactful, marquee relationships, our work with HPE is certainly on the list,” said NREL Associate Laboratory Director for Innovation, Partnering, and Outreach Bill Farris. “Together we blazed a trail that innovated liquid cooling for data centers and changed the industry, and NREL’s ESIF was the perfect stage. Long-term research partnerships like what we have established with HPE are at the heart of NREL’s partnership strategy and just another example of the power of working together with motivated partners on our most urgent energy challenges,” Farris added.

Glen Rowe, HPE’s senior director of Civilian Agencies, who has worked closely with the NREL team for decades, noted, “HPE has a long-standing collaboration with the National Renewable Energy Laboratory where we have developed joint high-performance computing and AI solutions to innovate new approaches that reduce energy consumption and lower operating costs. Our collaboration began with the Peregrine system, which was the first installation of the HP Apollo liquid-cooled supercomputer.”

Since then, the NREL and HPE teams have collaborated on a number of projects that aim to improve sustainability in data centers.

Rowe added, “We have worked on projects that include a hydrogen fuel-cell initiative with Daimler and a collaboration with the HPE Data Sciences Institute at the University of Houston to develop and validate AI/ML in predicting renewable energy availability to support smart grid technology. Additionally, NREL and HPE partnered on a multiyear AIOps R&D project to develop AI and machine learning technologies to automate and improve operational efficiency, including resiliency and energy usage, in data centers. We look forward to continuing our long partnership with NREL to help strengthen the lab’s efforts in making new discoveries as we drive towards a clean energy future.”

Kestrel is next up for NREL's supercomputer ambitions, with the laboratory expecting to take delivery of cabinets in the fall of 2022, around the 10-year anniversary of Peregrine's installation, and hoping to bring it into production by mid-2023. Eagle's oversubscription has driven HPE to design Kestrel with future-proofing in mind. Built on the HPE Cray EX supercomputer, Kestrel is NREL's third HPE-built supercomputer. HPE won each supercomputer contract through a competitive process, and each time the compute capability has climbed steeply. Kestrel is five times the size of Eagle and has a completely new architecture. NREL has never before made a jump of this scale, nor moved to processors as different as Kestrel's are from Eagle's. Kestrel will balance capability between CPUs and GPUs, enabling more artificial intelligence (AI) workloads and speeding up other computing, but the process is ongoing and Kestrel is not the end of the line.

“In the HPC business you’re either acquiring a new system or retiring the old one,” Hammond said. “You’re always in this mode. As soon as you get one in, you immediately start planning for the next one. It’s this continual ramp-up. It takes about 36–40 months to plan for each successive HPC system, which is only marginally less than the life cycle of the one you’re going to replace.”

The work NREL and HPE put in 10 years ago is still paying off today. The partnership made history and changed the game. Supercomputers are now designed to work with data centers like NREL’s.

“We get to this next piece of the arc where we are in the computing business, and we’re going out to buy a product and there’s a product to buy,” Grout said. “We don’t have to create it this time.”

Grout said they are just now figuring out how the partnership will look across the next decade. NREL made history with HPE through the data center and Peregrine, but the partnership playbill was longer than just the headlining acts. Kris Munch, NREL laboratory program manager for Advanced Computing, said that, among other work, NREL had a three-year Collaborative Research and Development Agreement (CRADA) with HPE on AI centered on making the future data center even more intelligent: a smart data center.

“NREL was the perfect place to do it, because we had all of this data from our building all the way down to our power systems and our racks,” Munch said. “The three-year project was to work with them to design AI and ML (machine learning) to make sure we could handle these giant supercomputers and handle some of the automation of the data center itself.”

The future is bright for NREL, HPE, and the proven power of the partnership. Ten years after it shifted the paradigm, the team's work, born of true collaboration and sustained commitment, has stood the test of time.

“We’re still revered as the gold standard for energy-efficient data centers,” Grout said. “Things do not stand the test of time in the computing world. Think about a 10-year-old iPhone and what it looks like today, but our data center—it’s still right at the cutting edge.”
