by Yana A. St. Clair, Esq.
This article focuses on some crucial information regarding the dangers AI may pose to the energy sector.
Lately I have spent considerable time discussing the subject of AI, and quite understandably so, as it’s an extremely hot topic. However, it is time to lay the subject to rest, but not before following up with some crucial information which we were unable to address last time due to space constraints. In our June issue we introduced the Report on AI from the DOE Office of Cybersecurity, Energy Security, and Emergency Response (CESER), and most notably, the potential dangers, which the DOE breaks up into several Risk Categories.
As we were only able to mention them briefly, we will now lay out the details of the subcategories presented by the report. Since the language is rather technical and specific, and we wish to adequately inform our readers, we shall not take away from the spirit of CESER’s report, and will therefore present these areas of concern precisely as written.
Risk Category 1 is Unintentional Failure Modes of AI. It is broken up into the following:
- Bias in AI is the systematic shift of a decision-making process away from its goal, usually due to a mismatch between training data and real-world use. When applied to energy systems, training data that presents an incomplete picture of energy infrastructure based on limited sensor data could skew an AI model’s replication of system behavior.
- Extrapolation is the use of a model to make predictions about “unexpected” events – situations outside that model’s experience – which can lead to unpredictable or inaccurate behavior. For example, when confronted with an extreme weather event beyond its training, an AI model that informs the use of energy storage resources might over- or under-commit capacity or attempt to rely on resources impacted by the event (a brief illustrative sketch follows this list).
- Misalignment is when an AI model’s behavior deviates from the goals of its designers, typically due to poorly aligned training data or poorly defined objectives. Without robust and scalable human oversight, AI decision-support tools that are misaligned may end up prioritizing economic gain over, for example, grid reliability. This risk is especially pronounced when energy systems are under stress, as during extreme weather events.
- Energy Use of AI is a slightly different risk from the others presented here – not AI errors, but the implications for the energy system from the energy consumption of training and using large AI models. This is an area where significant further research can help characterize the shape and scale of expected AI load, identify opportunities for efficiency gains, and explore how AI deployment trends can help drive resilience and environmental benefits.
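To make the extrapolation risk above more tangible for our technically inclined readers, here is a minimal, hypothetical Python sketch. This is our own illustration, not anything from CESER’s report, and all the numbers are synthetic: a toy load forecaster is trained only on moderate temperatures, behaves sensibly inside that range, and becomes unreliable when queried about an extreme it has never seen.

```python
import numpy as np

# Hypothetical sketch of extrapolation risk (not from CESER's report):
# a toy load forecaster trained only on moderate weather.
rng = np.random.default_rng(0)

# Synthetic training data: temperatures of 5-30 C, with load (MW)
# rising roughly linearly with temperature (cooling demand).
temp = rng.uniform(5, 30, 200)
load = 1000 + 20 * temp + rng.normal(0, 30, 200)

# Fit a quadratic model; within the training range it fits well.
model = np.poly1d(np.polyfit(temp, load, deg=2))

print(model(25.0))  # interpolation: close to the true ~1500 MW
print(model(45.0))  # extrapolation: the quadratic term, fitted largely
                    # to noise, can bend the forecast far from reality
                    # during exactly the heat wave that matters most
```

An operator relying on such a forecast during a record heat wave would be consuming the model’s least trustworthy output at the moment of greatest stress, which is precisely the concern CESER raises.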
Risk Category 2 is Adversarial Attacks Against AI, broken up into the following:
- Poisoning attacks add, modify, or alter the data used to train an artificial intelligence model, in order to force the model to learn the wrong behavior. This can include modifying data on energy system operations, so that a model develops an incorrect conception of what “normal operations” look like. It can also include more sophisticated efforts to create a “backdoor,” which yields specific results when triggered – for example, poisoning the training data so that a model meant to detect physical wear in oil and gas equipment never declares a piece of equipment to need maintenance when presented with a very specific image (a sketch of this scenario follows this list).
- Evasion attacks use adversarial input data, which may look indistinguishable from regular data to a human, to produce a desired model output — typically counter to the wishes of the model creator. An evasion attack might slightly alter the data presented to a model trained to predict energy market prices, in a carefully engineered manner that causes the model to incorrectly overestimate or underestimate prices.
- Data extraction attacks seek to learn sensitive information about an artificial intelligence model, or the data it has been trained on. For AI tools that are customized for energy applications, or even tuned to specific energy infrastructure, a data extraction attack could allow an adversary to access the closely held information about an energy system of interest that is embedded in the AI tool.
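The poisoning scenario above can likewise be sketched in a few lines of Python. Again, this is purely our own hypothetical illustration, with invented numbers and names, not an example from the report: an adversary relabels a portion of the “worn equipment” training examples as healthy, and the resulting model learns that wear is normal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of a poisoning attack (not from CESER's report):
# a classifier that flags equipment needing maintenance, trained on
# two synthetic sensor features (vibration level, temperature).
rng = np.random.default_rng(1)

healthy = rng.normal([1.0, 60.0], [0.2, 2.0], size=(200, 2))
worn = rng.normal([2.0, 75.0], [0.2, 2.0], size=(200, 2))
X = np.vstack([healthy, worn])
y = np.array([0] * 200 + [1] * 200)  # 1 = needs maintenance

clean_model = LogisticRegression(max_iter=1000).fit(X, y)

# The attack: relabel a slice of the "worn" training examples as
# healthy, teaching the model that wear is "normal operations."
y_poisoned = y.copy()
y_poisoned[200:320] = 0  # flip 120 of the 200 worn labels

poisoned_model = LogisticRegression(max_iter=1000).fit(X, y_poisoned)

sample = np.array([[2.0, 75.0]])       # clearly worn equipment
print(clean_model.predict(sample))     # [1] -- maintenance flagged
print(poisoned_model.predict(sample))  # likely [0] -- wear now "normal"
```

The unsettling part, as the report notes, is that the poisoned model looks perfectly well behaved on the data its owners inspect; the damage only surfaces when the degraded equipment is actually in the field.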
Risk Category 3 is Hostile Applications of AI.
While CESER’s report touches on a number of scenarios where AI may be used nefariously, the possibilities are endless, and sadly we may let our imaginations run wild with regard to all the ways in which an evildoer, whether a hostile individual or State, may employ this handy technological tool to wreak havoc and cause substantial and irreparable harm. As space here is limited, we strongly urge you to visit the US Department of Energy’s website for further reading, and to familiarize yourselves with their views on the subject.
That being said, and in closing, it is indisputable that AI is here to stay, so we might as well accept this inevitable reality and learn to live with it, or him, or her, or whatever you choose to lovingly call our new Sci-Fi friend 🙂
Disclosure: Please note that none of the information contained within the above column is to be considered legal advice.
Biography
Yana is an American attorney licensed to practice in all State and Federal courts of California. Yana holds a Bachelor of Arts Degree in Political Science specializing in International Relations from UCLA, the Degree of Juris Doctor from Loyola Law School, and a Master of Business Administration Degree from Ashford University. Since the beginning of her undergraduate studies, Yana has been involved in various aspects of the field of Electrical Engineering, applying her business and legal knowledge to consulting for and advising businesses and individuals on relevant topics of concern. Yana also serves as Editor for PAC World magazine, having been with the publication since its inception. As an attorney, Yana specializes in criminal defense, where she devotes her talents and expertise to fighting for her clients’ rights and freedom.