
ORNL: ARM plans upgrades as it marks 30 years of collecting atmospheric data


As the Department of Energy’s Atmospheric Radiation Measurement user facility marks 30 years of collecting continuous measurements of the Earth’s atmosphere this year, the ARM Data Center at Oak Ridge National Laboratory is shepherding changes to its operations to make the treasure trove of data more easily accessible and useful to scientists around the world who study Earth’s climate.

The observations, comprising more than 3.3 petabytes of data thus far, start as raw data from more than 460 instruments worldwide. Observational measurements include daily records of temperature, wind speed, humidity, cloud cover, atmospheric particles called aerosols and dozens of other atmospheric properties and processes that are critically important to weather and climate.

The team at the ARM Data Center refines the data and ensures their quality so they are more useful to researchers. In some cases, experts use these processed data to create higher-end data products that sharpen high-resolution models.

In the past 30 years, the multi-laboratory ARM facility has amassed more than 11,000 data products. Its archive holds the equivalent of about 50,000 smartphones’ worth of storage, at 64 gigabytes per phone. With that much data on hand, ARM is taking steps over the next decade to upgrade its field measurements, data analytics, data-model interoperability and data services. Upgrades and aspirations are outlined in a 31-page Decadal Vision document, released last year.

ARM Data Services Manager Giri Prakash said that when he started at ORNL in 2002, ARM had about 16 terabytes of stored observational data.

“I looked at that as big data,” he said.

By 2010, the total was 200 terabytes. In 2016, ARM reached one petabyte of data.

Collecting those first 16 terabytes took nearly 10 years. Today, ARM, a DOE Office of Science user facility supported by nine national laboratories, collects that much data about every six days. Its data trove is growing at a rate of one petabyte a year.
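
As a quick back-of-the-envelope check, a few lines of Python reproduce those figures, treating one petabyte as 1,000,000 gigabytes:

```python
# Back-of-the-envelope check of the data-volume figures quoted above.
ARCHIVE_PB = 3.3                      # total archive size, in petabytes
PHONE_GB = 64                         # storage per smartphone, in gigabytes
phones = ARCHIVE_PB * 1_000_000 / PHONE_GB
print(f"{phones:,.0f} phones")        # ~51,600 -- "about 50,000 smartphones"

TB_COLLECTED = 16                     # ARM now collects about 16 terabytes ...
DAYS = 6                              # ... roughly every six days
tb_per_year = TB_COLLECTED / DAYS * 365
print(f"{tb_per_year:,.0f} TB/year")  # ~973 TB -- roughly one petabyte a year
```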

Prakash credits this meteoric rise to more complex data, more sophisticated instruments, more high-resolution measurements (mostly from radars), more field campaigns and more high-resolution models.

Rethinking data management

How should all these data be handled?

“We had to completely rethink our approach to data management and re-design much of it from the ground up,” said Prakash. “We need end-to-end data services competence to streamline and automate more of the data process. We refreshed almost 70 data-processing tools and workflows in the last four years.”

That effort has brought recognition. Since 2020, the ARM Data Center has been recognized as a CoreTrustSeal repository, was named a DOE Office of Science PuRe (Public Reusable Research) Data Resource and earned membership in the World Data System.

All these important professional recognitions require a rigorous review process.

“ARM is special,” said Prakash, who represents the United States on the International Science Council’s Committee on Data. “We have an operationally robust and mature data service, which allows us to process quality data and distribute them to users.”

ARM measurements, free to researchers worldwide, flow continuously from field instruments at six fixed and mobile observatories. The instruments operate in climate-critical regions across the world.

Jim Mather, ARM technical director at Pacific Northwest National Laboratory, said that as part of the Decadal Vision, increasingly complex ARM data will get a boost from emerging data management practices and ever more sophisticated hardware and software.

Data services, “as the name suggests,” said Mather, “is in direct service to enable data analysis.”

That service includes different kinds of ARM assets, he said, including physical infrastructure, software tools, and new policies and frameworks for software development.

Meanwhile, adds Prakash, ARM employs FAIR guidelines for its data management and stewardship. FAIR stands for Findability, Accessibility, Interoperability and Reuse. Following FAIR principles helps ensure that data are findable and useful for repeatable research as scientists increasingly rely on data digitization and artificial intelligence.
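
As a rough illustration of what those principles can look like in practice, the sketch below shows a FAIR-style metadata record for a single datastream. The field names and values are hypothetical placeholders, not ARM’s actual metadata schema.

```python
# Hypothetical sketch of a FAIR-style metadata record; field names and values
# are illustrative only and do not reflect ARM's actual metadata schema.
datastream_metadata = {
    # Findable: a persistent identifier and descriptive keywords
    "identifier": "doi:10.xxxx/example-datastream",
    "keywords": ["temperature", "wind speed", "aerosols", "cloud cover"],
    # Accessible: a standard, documented retrieval method
    "access_url": "https://example.org/data/example-datastream",
    "access_protocol": "HTTPS",
    # Interoperable: community-standard formats and conventions
    "format": "netCDF-4",
    "conventions": "CF-1.8",
    # Reusable: clear terms of use and provenance
    "license": "freely available to researchers worldwide",
    "provenance": "raw instrument data, quality-checked at the ARM Data Center",
}
```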

One step in ARM’s decadal makeover will be to improve its operational and research computing infrastructure. Greater computing, memory and storage assets will make it easier to couple high-volume data sets – from scanning radars, for instance – with high-resolution models. More computing power and new software tools will also support machine learning and other techniques required by big-data science.

The ARM Data Center already supports the user facility’s computational and data-access needs. But the data center is being expanded to strengthen its present mix of high-performance and cloud computing resources by providing seamless access to data and computing.

Mather laid out the challenge: ARM has more than 2,500 active datastreams rolling in from its hundreds of instruments. Add the pressure of those datastreams to the challenge of managing petabytes of information, and processing bottlenecks become possible. Volumes like that could make it harder to achieve science advances with ARM data.

To get around that, in the realm of computing hardware, said Mather, ARM will provide “more powerful computation services” for data processed and stored at the ARM Data Center.

The need continues to grow

Some of that ramped-up computing power came online in the last few years to support a new ARM modeling framework, where large-eddy simulations, or LES, require a lot of computational horsepower.

So far, the LES ARM Symbiotic Simulation and Observation, or LASSO, activity has created a large library of simulations informed by ARM data. To atmospheric researchers, these exhaustively screened and streamlined data bundles serve as proxies of the atmosphere. For example, they make it easier to test the accuracy of climate models.

Conceived in 2015, LASSO first focused on shallow cumulus clouds. Now, data bundles are being developed for a deep-convection scenario. Some of those data were made available through a beta release in May 2022.

Still, “the need continues to grow” for more computing power, said Mather. “Looking ahead, we need to continually assess the magnitude and nature of the computing need.”

ARM has a new Cumulus high-performance computing cluster at the Oak Ridge Leadership Computing Facility, which provides more than 16,000 processing cores to ARM users. The average laptop has four to six cores.

As needed, ARM users can apply for more computing power at other DOE facilities, such as the National Energy Research Scientific Computing Center. Access to external cloud computing resources is also available through DOE.

Prakash envisions a menu of user-friendly tools, including Jupyter Notebook, available to ARM users to work with ARM data. The tools are designed to let users move beyond a laptop or workstation and work with petabytes of ARM data at a time.
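
ARM data are distributed as netCDF files, so a quick look from a notebook can be as simple as the sketch below. The file name and variable name are hypothetical placeholders, and the example assumes the file has already been downloaded.

```python
# Minimal quick-look sketch for an ARM-style netCDF file in a Jupyter notebook.
# The file name and variable name below are hypothetical placeholders.
import matplotlib.pyplot as plt
import xarray as xr

ds = xr.open_dataset("sgpmetE13.b1.20220101.000000.nc")  # hypothetical local file
print(ds)                        # list the variables, dimensions and attributes

ds["temp_mean"].plot()           # time series of mean surface air temperature
plt.show()
```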

Prakash said, “Our aim is to provide ARM data wherever the computer power is available.”

Developing a data workbench

“Software tools are also critical,” says Mather. “We expect single cases of upcoming (LASSO) simulations of deep convection to be on the order of 100 terabytes each. Mining those data will require sophisticated tools to visualize, filter and manipulate data.”

Imagine, for instance, he said, trying to visualize LASSO’s convective cloud fields in three dimensions. It’s a daunting software challenge.

Challenges like that require more engagement than ever with the atmospheric research community to identify the right software tools.

More engagement helped shape the Decadal Vision document. To gather information for it, Mather drew from workshops and direct contact with users and staff to cull ideas on increasing ARM’s science impact.

Given the growth in data volume, there was a clear need to give a broader audience of data users even more seamless access to the ARM Data Center’s resources. They already have access to ARM data, analytics, computing resources and databases. ARM data users can also select data by date range or conditional statements.
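
As an illustration of that kind of selection, the sketch below applies a date range and a conditional filter to a dataset opened with xarray. The file and variable names are hypothetical placeholders.

```python
# Illustrative date-range and conditional selection on an ARM-style dataset;
# the file name and variable name are hypothetical placeholders.
import xarray as xr

ds = xr.open_dataset("example_arm_datastream.nc")

# Select by date range: keep only January 2022.
january = ds.sel(time=slice("2022-01-01", "2022-01-31"))

# Conditional selection: keep samples where mean wind speed exceeds 10 m/s.
windy = january.where(january["wind_spd_mean"] > 10.0, drop=True)

print(windy.sizes)               # how many samples remain after filtering
```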

For deeper access, ARM is developing an ARM Data Workbench.

Prakash envisions the workbench as an extension of the current Data Discovery interface, one that will “provide transformative knowledge discovery” by offering an integrated data-computing ecosystem. It would allow users to discover data of interest using advanced data queries. Users could perform advanced data analytics by using ARM’s vast trove of data as well as software tools and computing resources.

The workbench will allow users to tap into open-source visualization and analytic tools. Open-source code, free to anyone, can also be redistributed or modified. Users could also turn to technologies such as Apache Cassandra or Apache Spark for large-scale data analytics.
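
As a minimal sketch of what that could look like, the example below uses Spark to compute a simple aggregate over a large table of measurements. The file path and column names are hypothetical placeholders, not an ARM-provided dataset.

```python
# Minimal PySpark sketch; the Parquet path and column names are hypothetical
# stand-ins for ARM-style measurement records.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("arm-analytics-sketch").getOrCreate()

# Read a large, partitioned table of measurement records.
df = spark.read.parquet("/data/arm/measurements.parquet")

# Daily mean temperature per observatory site, computed across the cluster.
daily_means = (
    df.groupBy("site", F.to_date("time").alias("day"))
      .agg(F.avg("temperature").alias("mean_temperature"))
      .orderBy("site", "day")
)
daily_means.show(10)

spark.stop()
```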

By early 2023, said Prakash, a preliminary version of the workbench will be online. Getting there will require more hours of consultations with ARM data users to nail down their workbench needs.

From that point on, he adds, the workbench will be “continuously developed” until the end of fiscal year 2023.

Prakash calls the workbench, with its enhanced access and open-source tools, “a revolutionary way to interact with ARM data.”

ARM recently restructured its open-source code capabilities and has added organizations for its data services on the software-sharing site GitHub.

“Within ARM, we have a limited capacity to develop the processing and analysis codes that are needed,” said Mather. “But these open-source software practices offer a way for us to pool our development resources to implement the best ideas and minimize any duplication of effort.”

In the end, he added, “this is all about enhancing the impact of ARM data.”
