The Architectural Perspective
This issue of IEEE Power & Energy Magazine discusses a computational capability to extract a cause–effect understanding of power system events. It demonstrates associated data analytics for control center applications, enhanced security assessment and management, tuned state estimation, automated fault analysis, and renewable resource integration.
Because most utilities are forced to operate close to their security limits, they are constantly trying to better integrate data and processes and to make better decisions that reduce operating and maintenance costs and improve overall reliability. Consequently, the volume of measurements captured across power grid management systems has increased dramatically. The concept of automating knowledge and information represents a paradigm shift in current thinking: it is designed not only to improve the effectiveness of short-term operational planning and operators’ situational awareness but also to facilitate the decision-making steps that ensure a reliable power supply to customers. This includes capturing measurements, converting them to data, converting the data to information, and then distilling the information into knowledge that can be used to make faster and better decisions. To support this process, several industry standards and the interoperability infrastructure must be defined to streamline actual implementations. While the articles in this issue point to advanced knowledge extraction analytics, they do not address the overall implementation concept in which all the applications interact with each other using a common standards-based framework, and hence more attention to that issue is needed.
Besides the data analytics discussed in the articles, the applicability of the automating knowledge and information concept needs to be addressed from an architectural perspective. When new solutions must interoperate with legacy systems, integration approaches, technology choices, and the role of industry standards become crucial to the wider adoption of new data analytics.
Architecture and Technology Perspectives
The National Institute of Standards and Technology (NIST) has been tasked to coordinate the development of architectural frameworks that include protocols and data model standards for information management to achieve interoperability. The coordination tasks are carried out through an organization called the Smart Grid Interoperability Panel (SGIP), created in 2009 and now transitioning to SGIP 2.0 as a public-private partnership. Per SGIP, fundamental goals of the architecture include incorporating evolving technologies to work with legacy applications and devices in a standardized way. SGIP has also defined a conceptual model to support planning, requirements development, documentation, and organization of the diverse, expanding collection of interconnected networks and equipment that will compose the interoperable systems for power grid management.
Consistent with the SGIP framework’s vision of high-performance services for utility operations in real time and short-term planning, and leveraging automating knowledge and information concepts on a commodity platform, extreme transaction processing (XTP) is envisioned as an important technology choice. The key innovations of XTP include distributed, replicated memory spaces with persisted data storage; event-driven architecture for intra- and intersystem communications; microkernel-style extensible modularity of the platform technology; and dynamic server networks (dynamic grid). Applications that must deliver extreme, scalable performance with continuous availability for mission-critical systems share the same basic requirements and challenges.
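The event-driven style mentioned above decouples producers from consumers of grid events. A minimal sketch of that idea, with entirely hypothetical topic and component names, could look like this:

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal in-process publish/subscribe bus: components communicate
    by emitting and handling events rather than calling each other."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical usage: a fault-analysis component reacts to breaker events.
bus = EventBus()
log = []
bus.subscribe("breaker.trip", lambda e: log.append(f"analyze fault at {e['substation']}"))
bus.publish("breaker.trip", {"substation": "SUB-12"})
```

In a production XTP platform the bus would be a distributed, persistent messaging fabric rather than an in-process dictionary, but the decoupling principle is the same.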
There are several key areas where benefits of the automating knowledge and information concept are obvious.
- Improved quality of decisions: Computer and communication devices in substations can extract a huge amount of data, while operators can process only some of it and thus cannot consider all of the available data and information when making decisions. The article titled “The Situation Room” is an example where an advanced analytics and visualization framework enhances operators’ situational awareness, including an improved ability to monitor operating limits, a better understanding of complex events, and enhanced post-mortem analysis. As illustrated in “Measures of Value,” the results of the data analytics processing can tell operators not only basic information about the fault type and location but also whether the fault-clearing sequences were executed correctly. The software encodes expert knowledge through rules formulated by the experts themselves.
- Faster response: In the power grid management domain, human response is sometimes not fast enough, so some decisions must be fully automated without human intervention. The concept of synchrophasor-assisted state estimation (SPASE), which allows improvements based on the statistical properties of the measurements while taking model uncertainties into account, is seen as a prerequisite for more robust decision making.
- Data overload prevention: The idea is also to reduce the amount of information brought to the operators’ attention. The automating knowledge and information paradigm should be used in conjunction with visualization tools to implement management-by-exception strategies, in which operators are notified or alarmed less often and only in those situations where their involvement is required. The “Measures of Value” article describes a solution that processes a huge volume of information yet presents the operator with only what is necessary: cause-effect-action information obtained within seconds of the event.
- Improved reliability: This will enable operators to maintain a high reliability level through improved situational awareness and the ability to react promptly in complex situations that require, for example, corrective actions. As discussed in the “Operating in the Fog” article, a new way of handling uncertainties and security assessment tools is envisioned not only for online decisions but also as offline tools that help define security rules, validate dynamic models, and outline defense plans and restoration strategies.
- Reduced cost: The integration of renewable resources such as wind power presents opportunities to reduce overall generation cost (new data analytics for wind power forecasts that may be utilized for predictive control are presented in this issue). In addition, improved asset management reduces outage time and unsupplied-energy indices.
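The management-by-exception strategy from the data overload prevention item above can be sketched as a simple filter over a measurement stream. The signal names and limits here are invented for illustration only:

```python
def exceptions_only(measurements, limits):
    """Yield only measurements that violate their operating limits,
    implementing a management-by-exception notification policy."""
    for m in measurements:
        lo, hi = limits[m["signal"]]
        if not lo <= m["value"] <= hi:
            # Flag the out-of-limit measurement for the operator.
            yield {**m, "alarm": True}

# Hypothetical stream: only the out-of-limit voltage reaches the operator.
stream = [
    {"signal": "bus_voltage_kV", "value": 229.5},
    {"signal": "bus_voltage_kV", "value": 241.8},
    {"signal": "line_flow_MW", "value": 310.0},
]
limits = {"bus_voltage_kV": (225.0, 236.0), "line_flow_MW": (0.0, 400.0)}
alarms = list(exceptions_only(stream, limits))
```

A real control center would layer cause-effect analysis and visualization on top of such a filter, but the principle of surfacing only actionable exceptions is the same.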
In support of the automating knowledge and information concept, almost every new system or application, regardless of its physical location, will be required to interact with other applications or systems. Therefore, interoperability readiness is extremely important.
Interoperability in this context, related to distributed systems, is defined with the following common factors:
- an exchange of meaningful, actionable information between two or more systems across departmental and organizational boundaries
- a shared meaning (semantic) of the exchanged information
- an agreed expectation for the response to the information exchange
- a requisite quality of service in information exchange: reliability, fidelity, security
- an operationalized common semantic model at run-time to achieve near-plug-and-play
- key interoperability decisions made at the semantic layer.
Role of Standards and Common Semantic Understanding
To achieve the required level of interoperability readiness, a more rigorous and disciplined approach to standards development and adoption is critical.
Interoperability readiness is a necessary prerequisite to
- enhance the future grid’s reliability, interoperability, and extreme event protection for an increasingly complex system operation
- increase transmission transfer capabilities and power flow control
- use efficient, cost-effective, environmentally sound energy supply and demand
- maximize asset use.
Per SGIP, the key step in defining industry standards is reaching an adequate level of semantic understanding of all data and information exchanged between the various components. Therefore, to eliminate semantic ambiguities and set the foundation for defining industry standards at the syntactic level, a common semantic model should be developed and standardized as well. A common semantic understanding of raw and processed data, as well as of cause-effect-action knowledge, is seen as the key enabler of interoperability. A common semantic model that leverages existing industry standards as reference models [the International Electrotechnical Commission Common Information Model (IEC CIM) being the key reference model] can be operationalized at run time. For example, a common semantic model can serve as a vehicle to harmonize IEEE C37.118 with IEC 61850 and precision time synchronization.
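To make the harmonization idea concrete, consider mapping a raw synchrophasor record onto semantic-model terms. The field names below are illustrative stand-ins, not the exact C37.118 frame fields or official CIM attribute names; the point is that every consumer reads the same canonical representation:

```python
import cmath

def to_semantic(raw):
    """Map a raw synchrophasor record (C37.118-style fields; names are
    illustrative) onto CIM-flavored semantic-model terms so that every
    consumer shares one meaning for the measurement."""
    # Convert polar (magnitude, angle in degrees) to rectangular form.
    phasor = cmath.rect(raw["magnitude"], cmath.pi * raw["angle_deg"] / 180.0)
    return {
        # Second-of-century plus fractional second as one timestamp.
        "MeasurementValue.timeStamp": raw["soc"] + raw["fracsec"],
        "PowerSystemResource.name": raw["station"],
        "PhasorValue.real": round(phasor.real, 3),
        "PhasorValue.imag": round(phasor.imag, 3),
    }

record = to_semantic(
    {"magnitude": 1.02, "angle_deg": 90.0, "soc": 1700000000, "fracsec": 0.25, "station": "SUB-12"}
)
```

Once both the C37.118 side and the IEC 61850 side map into such a shared vocabulary, downstream applications no longer need pairwise format knowledge.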
To summarize, a common semantic model as a common vocabulary and model can be leveraged in the following ways:
- to provide a basis for the design of endpoints such as interfaces and staging areas between functions, systems, and vendors (all applications discussed in this issue must provide endpoints where each data element is clearly described)
- to standardize the design of data exchanges and convert data from a provider to a consumer using the semantic model as a logical intermediary (each element exchanged must have the same meaning to all integrated applications)
- to serve as a logical model for all integration patterns [for example, service design (e.g., Web Service Description Language), message payload design, database design (Data Definition Language or DDL)] (e.g., precise endpoint syntax can be forward engineered from the semantic model)
- to provide a platform-independent logical model for operational data store, data warehouse, data marts, staging area, and other data stores (a common semantic model that covers all data exchanges between components discussed here can be used to design data stores as well, e.g., generate DDLs)
- to operationalize a semantic model at run time, allowing key interoperability decisions to be made at the semantic layer
- to provide a basis for capturing expert knowledge and developing related business rules
- to provide a basis for effective network model management.
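The forward-engineering of endpoint syntax and DDL mentioned in the list above can be sketched in a few lines. The class and attribute names here are illustrative, not actual CIM definitions:

```python
def model_to_ddl(class_name, attributes):
    """Forward-engineer a CREATE TABLE statement from a semantic-model
    class definition (class and attribute names are illustrative)."""
    # Map logical, platform-independent types to one target SQL dialect.
    type_map = {"Float": "DOUBLE PRECISION", "String": "VARCHAR(64)", "DateTime": "TIMESTAMP"}
    cols = ",\n  ".join(f"{name} {type_map[t]}" for name, t in attributes.items())
    return f"CREATE TABLE {class_name} (\n  {cols}\n);"

ddl = model_to_ddl(
    "AnalogValue",
    {"value": "Float", "timeStamp": "DateTime", "sensorName": "String"},
)
```

The same logical model could equally be rendered into a WSDL interface or a message payload schema by swapping the target-specific type map, which is what makes the semantic model a single source of truth across integration patterns.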
The use of a cause-effect-action understanding of power system events for short-term operational planning and the real-time operation of the power grid is identified as a promising area where significant benefits can be achieved to enhance future power grid management solutions such as EMS. The automating knowledge and information concept can be applied wherever a stream of real-time event data is available from field devices, digital fault recorders, phasor measurement units, applications, the Web, and other sources. All of the potential solutions presented in this issue are in use either in real life or in lab environments. As the volume of event data increases, the automating knowledge and information concept becomes more important. To implement these concepts sooner rather than later, a common semantic model should be used to describe data and cause-effect-action information unambiguously.