Companies in the utilities industry face many challenges today. They are responsible for meeting government and private renewable energy goals and priorities, which vary from state to state. They need to incorporate new technologies that lie outside their traditional domains and absorb losses in revenue while still offering the same level of service. Most challenging of all, they must do all of this while meeting the regulatory requirement of providing highly reliable and affordable energy to the ratepayer.
To meet these challenges, utilities need technologies that allow them to respond rapidly and easily; virtualization is one of those technologies. Here, we discuss the core attributes of a virtualized environment as well as best-practice techniques that can help minimize the pain points of virtualizing a system.
Why virtualization matters
An increasing percentage of the critical and non-critical equipment used in the modern grid already comes equipped to leverage the benefits of virtualization. Proven use cases include remote accessibility that decreases O&M costs and "fleet" management of assets for rapid installation of software patches and upgrades, to name just two.
When virtualization is fully adopted, a digital twin of the physical system can be built. The digital twin can assist grid planners and operators in testing various grid events without affecting real-time operations, such as: what happens when 15 residential solar arrays are added to a single feeder; how territorial demand fluctuates between a sunny day and a cloudy day; or how grid operators can balance the behavior of assets the utility does not own but that are connected to its territory. This capability helps organizations not only answer these types of questions with confidence but also build the corresponding contingency plans, as the sketch below illustrates.
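As an illustration of the first question above, the following Python sketch runs a what-if study against a deliberately simplified feeder model. All of the numbers (the 400 kW feeder peak, the 5 kW array rating, the load and irradiance shapes, the cloud derating factor) are hypothetical assumptions, not data from any utility's actual twin; a real digital twin would replace them with measured profiles and a power-flow model.

```python
import math

PEAK_BASE_LOAD_KW = 400.0   # assumed feeder peak demand (hypothetical)
ARRAY_RATING_KW = 5.0       # assumed rating of one residential array

def base_load_kw(hour: float) -> float:
    """Illustrative residential load shape peaking around 18:00."""
    return PEAK_BASE_LOAD_KW * (0.55 + 0.45 * math.exp(-((hour - 18.0) ** 2) / 18.0))

def solar_output_kw(hour: float, arrays: int, cloud_factor: float) -> float:
    """Clipped-sine irradiance from 06:00 to 18:00, derated for cloud cover."""
    if not 6.0 <= hour <= 18.0:
        return 0.0
    irradiance = math.sin(math.pi * (hour - 6.0) / 12.0)
    return arrays * ARRAY_RATING_KW * irradiance * cloud_factor

def net_load_profile(arrays: int, cloud_factor: float) -> list[float]:
    """Hourly net load on the feeder: demand minus distributed solar."""
    return [base_load_kw(h) - solar_output_kw(h, arrays, cloud_factor)
            for h in range(24)]

if __name__ == "__main__":
    # Compare the feeder with 15 new arrays on a sunny vs. a cloudy day.
    for label, cloud in (("sunny", 1.0), ("cloudy", 0.3)):
        profile = net_load_profile(arrays=15, cloud_factor=cloud)
        print(f"{label:>6}: midday net load {profile[12]:6.1f} kW, "
              f"daily peak {max(profile):6.1f} kW")
```

Even this toy model surfaces the kind of insight planners look for: midday net load drops sharply on the sunny day, but the evening peak barely moves because the solar output has already fallen off, so the contingency plan still has to cover the full peak.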
What’s required for virtualization?
In a virtualized grid, most assets, if not all, would be controlled by specific lightweight software functions running inside a software element called a container. These assets feed data upstream into an edge computing platform, known as a node, that runs a larger number of containers; the nodes in turn feed further upstream to decentralized edge servers, which access shared storage technology to provide control of the complete grid (see Figure 1). From there, the data can be passed on to enterprise systems.
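To make that tiering concrete, the sketch below models the asset-to-node-to-edge-server flow in plain Python. The class names and aggregation logic are hypothetical illustrations of the pattern, not an actual grid-control API; in practice each tier would run in its own container and exchange data over a messaging protocol, with the edge servers writing into the shared storage layer.

```python
from dataclasses import dataclass, field

@dataclass
class AssetReading:
    """One measurement emitted by a containerized asset function."""
    asset_id: str
    feeder: str
    kw: float

@dataclass
class EdgeNode:
    """Edge platform (node) aggregating the asset containers on one feeder."""
    feeder: str
    readings: list[AssetReading] = field(default_factory=list)

    def ingest(self, reading: AssetReading) -> None:
        self.readings.append(reading)

    def summary_kw(self) -> float:
        # Roll up per-asset data before it travels further upstream.
        return sum(r.kw for r in self.readings)

@dataclass
class EdgeServer:
    """Decentralized edge server with a view across many nodes."""
    nodes: dict[str, EdgeNode] = field(default_factory=dict)

    def register(self, node: EdgeNode) -> None:
        self.nodes[node.feeder] = node

    def grid_view(self) -> dict[str, float]:
        # In a real deployment this grid-wide view would be persisted
        # to shared storage and exposed to enterprise systems.
        return {feeder: node.summary_kw() for feeder, node in self.nodes.items()}

if __name__ == "__main__":
    node = EdgeNode(feeder="feeder-12")
    node.ingest(AssetReading("inverter-01", "feeder-12", kw=4.8))
    node.ingest(AssetReading("recloser-07", "feeder-12", kw=0.2))

    server = EdgeServer()
    server.register(node)
    print(server.grid_view())  # {'feeder-12': 5.0}
```

The design point the sketch captures is the direction of aggregation: each tier summarizes the tier below it, so the containers stay lightweight at the asset, the node carries per-feeder state, and only the rolled-up view travels to the edge servers and, from there, to the enterprise.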