Your smartphone can feel like a lifeline, helping you navigate a new town or delivering an urgent message to a friend. Many people have a funny or embarrassing anecdote about an autocorrected text message or a roundabout route to a destination. But these artificial intelligence (AI) flaws exist on a spectrum, from minor inconveniences to unfair treatment or even risk to human life.
The people who create and use these AI technologies are also imperfect; we have our own biases, whether we are aware of them or not. Unconscious bias can influence our decisions and lead to unintended consequences; overt prejudice can result in our unethical and harmful exploitation of AI technologies.
The potential power of AI has not gone unnoticed by the United States and nations around the world. In 2019, members of the Organization for Economic Co-operation and Development adopted the “OECD Principles on Artificial Intelligence” in order to “promote AI that is innovative and trustworthy and that respects human rights and democratic values,” according to the OECD website.
Bias Is Too Risky To Ignore
In 2021, the city of Oakland banned predictive policing tools due to their disproportionate targeting of Black communities. Racial bias is one of many biases—including, but not limited to, those based on gender, geography, language, and socioeconomic status—compounded by biases that have yet to be documented or addressed directly.
For researchers at the National Renewable Energy Laboratory (NREL), the convergence of these known and unknown biases in AI is an urgent alarm that our vision—a clean energy future for the world—is impossible if we do not address this problem. If we want everyone to have access to clean energy, that means we must plan for equity and ensure solutions don’t create inequitable outcomes. And since AI and machine learning (ML) play a major part in our clean energy solutions, we need to address the potential for bias throughout the innovation process.
“NREL is focused on a clean energy future, and it’s an exciting time to find equitable transitions to that future. But we’ve seen bias at the algorithm and data set levels, so if we as researchers don’t address this directly, we can inadvertently exacerbate it at speed and scale,” said Roderick Jackson, NREL’s laboratory program manager for buildings research.
Bias can actively derail our progress. “It can also create problems that are costly or impossible to address in the later stages of tech progress,” Jackson added. “We’ve seen this already in the healthcare industry. This challenge isn’t only applicable to the clean energy space.”
NREL’s computational scientists are discovering AI and ML applications to help deliver clean energy solutions to communities across the country and around the world.
“We have had great success applying AI and ML breakthroughs to clean energy deployment and scale, but one grand challenge in the greater AI field at large is ensuring equitable outcomes. We at NREL see this as an opportunity to be bold and lead the way to more equitable solutions,” said Ray Grout, director of NREL’s Computational Science Center.
In Uncharted Territory, NREL Takes Initiative
As with all grand challenges, it is hard to know where to begin. But at NREL, we know people are at the heart of all we do. So NREL senior researcher Jennifer King dedicated herself to collecting perspectives—from NREL peers to those in the greater research field and beyond—about their challenges. Though many understand why they need to incorporate equity into their research practices, the outstanding question is: “How do we do that?”
“Jennifer King did an incredible amount of research—she really did her homework to understand the existing challenges, not just at NREL, but from people across the country in different disciplines and industries,” Jackson remarked.
In November 2021, Grout, Jackson, and King hosted an NREL workshop on the potential implications for AI bias in clean energy. NREL researchers shared their challenges and heard from Craig Watkins, a professor at University of Texas at Austin who designs computational models to address structural inequality. King synthesized that feedback and presented her findings at the U.S. Department of Energy’s (DOE’s) “AI@DOE” roundtable, hosted by DOE’s Office of Science, National Nuclear Security Administration, and Applied Energy Offices in collaboration with the Artificial Intelligence and Technology Office (AITO).
As the director of AITO, Pamela Isom took particular interest in NREL’s findings. “I am focused on equity, ethics, and the interlock with AI and at the same time concerned about the harms that can get infiltrated into communities via the inappropriate use of AI and irresponsible behaviors,” she explained. “That is why this workshop was so important to me and why AITO chose to champion and support the researchers.”
“We need a strong set of practices to support principles, and that is the AITO team’s focus,” Isom said. “NREL made it clear that getting external stakeholders from across industries was the vital next step. We need more voices to understand the existing challenges and to help us find solutions.”
King’s research findings guided the search for attendees for NREL’s March 2022 workshop, “Responsible and Trustworthy AI in Clean Energy.” “We reached out to people in energy equity as well as experts in AI ethics in other fields such as computer vision, healthcare, and finance,” King said.
With a hunger for guidance and a genuine desire to learn from each other, the workshop attendees had an enthusiasm that could have easily propelled the four-hour discussion into a multiday event.
Timely Workshop on Responsible and Trustworthy AI
The intersecting pathways of AI, clean energy, and equity brought DOE’s AITO and Office of Energy Efficiency and Renewable Energy to the co-hosting table with NREL.
“Advancements in AI tech are creating exciting opportunities, but there are risks and intended and unintended consequences,” said Isom, whose introductory statement foreshadowed what workshop attendees illustrated in their own remarks.
Each participant—from national laboratories, nonprofits, academia, government, and industry—briefly highlighted the challenges and outstanding questions around equity they observed in their fields, spanning research and data to manufacturing, deployment, policymaking, and workforce hiring.
An equity table set with these distinct, yet interconnected, questions served up a vibrant exchange of ideas to establish practices, principles, and behaviors needed for responsible and trustworthy AI for clean energy. Attendees discovered common challenges around needing language definitions; data metrics and representation; multidisciplinary input from psychologists, sociologists, and other human behavior experts; and inclusion of impacted communities and individuals.
These outstanding questions have not prevented some institutions from taking important steps forward. Keynote speaker Elham Tabassi—who is chief of staff for the National Institute of Standards and Technology’s (NIST’s) Information Technology Laboratory—discussed NIST’s AI Risk Management Framework to document terminology, metrics, and assessment for AI governance, with a goal to “improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
The “Promises and Perils” of AI in Clean Energy
Solar adoption is disproportionate between homeowners and renters, with predominantly white-owned households more likely to have solar energy. And during the Texas energy grid failure in February 2021, renters were slower to have their energy reconnected. These energy justice problems illustrate the potential for AI to help or hinder progress, according to keynote speaker Anjuli Jain Figueroa, an AAAS science technology and policy fellow at the DOE Office of Economic Impact and Diversity.
“There are promises and perils associated with AI in the clean energy space,” Jain Figueroa explained. “There are known risks associated with AI bias, nonrepresentative data, and lack of societal context. We’ve seen racial bias in face and voice recognition, gender bias in hiring, and the misuse of smart technology for domestic abuse.”
Jain Figueroa acknowledged how AI tools in a grid can balance electricity supply and demand and enable dynamic pricing; create empowered “prosumers” and peer-to-peer markets; and reduce rates and outages. “But dynamic pricing in a more personalized way can take advantage of someone’s need or desperation. And who will certify and regulate these power companies that are increasingly becoming software companies?”
Households are beginning to invest more in the Internet of Things—like smart meters and sensors—which can increase energy efficiency and empower customers with control. “But the peril is in data ownership and security of personal data,” Jain Figueroa said.
Big Equity Concerns Need Small Data
Researchers can introduce bias in the way they collect, model, and interpret data.
Researchers can inadvertently perpetuate bias when they rely on data that only represents a fraction of a given population. But in early-stage research, finding data that includes everyone is challenging.
Data sources such as smart thermostats, occupancy surveys, and research demonstration projects are used to understand a building’s energy demands. This data is used to develop AI technologies to manage energy load on a grid more efficiently, thereby reducing costs. But if these data sources are from homes representing specific populations (e.g., wealthy, suburban, and/or workers with routine 9-to-5 jobs), the benefits from AI innovations built on these data can be disproportionately distributed to only these communities.
Communities composed of older homes with residents who hold multiple jobs are far less likely to have smart thermostats and/or participate in research demonstrations and are thus less likely to be adequately represented in the data. Ironically, these homes are likely in more need of innovations that AI solutions could provide, given the wide variations in energy use throughout the day and evening. But there is often limited existing data on their thermal demands to inform future AI energy control designs.
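The effect described above can be sketched with a toy simulation. This is a hypothetical illustration, not an NREL model: it invents two groups of homes—one with regular 9-to-5 schedules and a sharp evening load peak, one with variable schedules and load spread across the day—then fits a simple mean-profile "predictor" using only the first group and compares its error on both.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)

def sample_homes(peak_hour, spread, n):
    """Simulate n daily load curves, each a noisy bump around its own peak hour."""
    peaks = rng.normal(peak_hour, spread, size=(n, 1))
    return np.exp(-0.5 * ((hours - peaks) / 2.0) ** 2) + rng.normal(0, 0.05, (n, 24))

# Hypothetical populations: represented homes have predictable evening peaks;
# underrepresented homes (e.g., residents working multiple jobs) vary widely.
represented = sample_homes(peak_hour=18, spread=0.5, n=500)
underrepresented = sample_homes(peak_hour=12, spread=6.0, n=500)

# "Model": the mean profile learned from training data drawn ONLY from the
# represented group -- a stand-in for an AI controller trained on skewed data.
train = represented[:400]
mean_profile = train.mean(axis=0)

def rmse(group):
    """Root-mean-square error of the learned profile against a group's curves."""
    return float(np.sqrt(((group - mean_profile) ** 2).mean()))

print(f"RMSE, represented homes:      {rmse(represented[400:]):.3f}")
print(f"RMSE, underrepresented homes: {rmse(underrepresented):.3f}")
```

The skewed training set fits the represented homes far better, so any control strategy tuned on it would serve them disproportionately—the benefit gap the paragraph above describes.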
Similarly, this data representation problem arises in the transportation sector. AI technologies have been built using GPS data pulled directly from vehicles or from the smartphones of riders in vehicles. Lacking a smartphone or a vehicle with GPS functions means your mobility behaviors are not captured in any data informing transportation technologies.
As an NREL researcher, King sees this data representation challenge as an opportunity to incorporate equity into early-stage research, to close the gap and get closer to NREL’s desired end-state: clean energy for everyone.
“This presents an opportunity to develop new foundational AI/ML approaches with innovative problem formulations that address needs of all communities,” King explained, noting that “small” data could be an untapped, vital resource for equity in early-stage research. “Data from hard-to-reach-communities is often small and sparse, but I think we can be creative and find a way to incorporate this data into our AI solutions.” NREL is investigating how AI/ML algorithms can generate synthetic data to accurately augment nonrepresentative data sets.
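One very simple form of augmentation—bootstrap resampling with Gaussian jitter scaled to each column's spread—gives a flavor of how a small, sparse dataset might be expanded. This is an illustrative stand-in, not the generative AI/ML approach NREL is investigating, and the load readings below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "small data": a handful of daily load readings (kWh) from a
# hard-to-reach community -- columns are daytime kWh and overnight kWh.
small_data = np.array([
    [14.2, 6.1], [18.7, 9.3], [11.5, 4.8], [16.0, 7.7], [13.1, 5.9],
])

def augment(data, n_samples, jitter=0.25):
    """Generate synthetic rows by resampling real rows with replacement,
    then adding Gaussian noise scaled to each column's standard deviation."""
    idx = rng.integers(0, len(data), size=n_samples)
    noise = rng.normal(0, jitter, size=(n_samples, data.shape[1]))
    return data[idx] + noise * data.std(axis=0)

synthetic = augment(small_data, n_samples=1000)
print("original mean: ", small_data.mean(axis=0).round(2))
print("synthetic mean:", synthetic.mean(axis=0).round(2))
```

The synthetic rows preserve the small dataset's per-column statistics while providing enough volume to train on; in practice, a generative model would be needed to capture structure this naive resampling misses.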
Beyond early-stage research, King and her colleagues are also identifying equity-focused steps for the development and deployment phases—from problem formulation, stakeholder engagement, and impact assessment to algorithm performance checks and application-specific analysis.
Insights Key To Unlocking the Right Actions
The AITO has been instrumental in leading DOE in AI, launching a department-wide Responsible and Trustworthy (R&T) AI Task Force to turn stakeholder insights into action and establishing the AI Advancement Council (AIAC). Workforce development and AI training that embeds responsible and trustworthy principles, practices, and behaviors is already in progress; other task force activities include planning an extended reality proof-of-concept exercise and gathering cross-industry input through stakeholder focus sessions.
The task force will incorporate outcomes from NREL’s March 2022 workshop into AITO’s AI Risk Management Playbook “to help guide good decision-making and stewardship of AI,” Isom explained, adding, “This isn’t about AI for clean energy only. R&T AI can be applied across dimensions such as humanitarian crisis and mental health.”
NREL’s vision is a clean energy future for the world, and the laboratory is focused on developing and delivering solutions that enable all people to participate in and benefit from the transition to sustainable energy. Learn more about how NREL places energy justice at the center of our mission-driven work.