PAC System Engineering Perspectives

Author: Luc Hossenlopp, Schneider Electric, France

This article considers a system that is governed first by unchanging fundamental laws (e.g., Ohm’s Law), then by practices that have progressively evolved over time, all while surrounded by a series of standards that guide some of its realizations (e.g., IEC, IEEE). Thus, although system engineering starts from a well-established basis, it must change as technology evolves, and this requires a novel approach. The article concentrates on the possible evolution of traditional practices to cope with new requirements, leverage new technologies, and contribute to cost and quality improvements in system engineering.

Context and Drivers
Maximizing System Value: It is well known that a system is more than the sum of its components. The value of a system is modeled by several laws, the simplest one (Metcalfe’s) stating that the potential value of a network grows with the square of the number of its connected components, i.e., with the number of possible interactions among them. This is about creating distributed applications that leverage local intelligence capabilities, beyond merely connecting devices to a network for basic monitoring and control. Take a simplified example. Twenty-five years ago, early substation automation systems used a master-slave communication protocol (e.g., IEC 60870-5-103) between an RTU or local SCADA (as master) and N protection relays (as slaves). This results in a system with N interactions. Now consider a modern system leveraging the IEC 61850 peer-to-peer mechanism: in addition to the N connections to the SCADA, the protection relays may communicate among themselves through up to N(N-1)/2 relations, avoiding dedicated wiring or additional devices for the functions executed among them.
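As a rough illustration of this scaling (a sketch written for this example; the function names and sample values of N are arbitrary assumptions, not from the article), the following Python snippet compares the interaction counts of the two architectures:

    # Illustrative sketch: interaction counts for a master-slave architecture
    # versus a peer-to-peer one, following the Metcalfe-style upper bound
    # discussed above. Names and sample values are hypothetical.

    def master_slave_links(n_relays: int) -> int:
        """One master polling n_relays slaves: N links in total."""
        return n_relays

    def peer_to_peer_links(n_relays: int) -> int:
        """SCADA links (N) plus all possible relay-to-relay relations: N + N(N-1)/2."""
        return n_relays + n_relays * (n_relays - 1) // 2

    for n in (5, 10, 20, 40):
        print(f"N={n:3d}  master-slave: {master_slave_links(n):4d}  "
              f"peer-to-peer: {peer_to_peer_links(n):4d}")

Even for a modest 20-relay substation, the potential interaction count grows from 20 to 210, which is why peer-to-peer mechanisms raise the value ceiling so sharply.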

Metcalfe’s law gives an upper limit on the (potential) value. It was later refined by Beckstrom’s law, which states that the “value of a network equals the net value added to each user’s transactions conducted through that network, summed over all users” (net value being the gain minus the cost linked to the transaction). Still, this value strongly increases with the interactions among the nodes (see Figure 2). Here, system engineering’s goal is to maximize value while considering not only initial system delivery and Factory Acceptance Testing, but the entire lifecycle. That includes minimizing system unavailability and the effect of upgrading components, e.g., those that become obsolete. Choosing the proper trade-off between simplicity and value is part of the engineering art, and requires a deep understanding of users’ needs.
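In a simplified rendering (the notation below is an assumption of this note, not a formula quoted from Beckstrom), the law can be written as the discounted benefits minus costs of all transactions k, summed over all users i of network j:

    % Simplified statement of Beckstrom's law; notation assumed for illustration:
    % V_j : value of network j
    % B_{i,j,k}, C_{i,j,k} : benefit and cost of transaction k for user i
    % r, t_k : discount rate and time of transaction k
    \[
    V_j = \sum_{i}\sum_{k} \frac{B_{i,j,k} - C_{i,j,k}}{(1 + r)^{t_k}}
    \]

Unlike Metcalfe’s pure connection count, this subtracts transaction costs, which is what makes the simplicity-versus-value trade-off an economic question rather than a purely topological one.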

System Value Growth: Looking at the evolution of PAC, it’s clear that system value has grown, and that this growth will likely accelerate. Hardware cost reduction, whether via new sensor/actuator or processing/communication technology, enables new uses and solutions while removing unnecessary redundancy. New control schemes, such as the 50 ms fast load shedding between generation and loads located in different substations, leverage fast communication speeds. Proper integration of fast-growing distributed energy resources, comprehensive asset performance management, cybersecurity schemes, financial market mechanisms, and application virtualization trends are some other ongoing extensions of the previous generation of PAC systems.
Communication cost reduction is obvious when comparing the volume of data transferred today over 100 Mbps Ethernet to the former 19.2 kbps serial links. The ubiquity of mobile devices carried by people offers a new means of non-permanent connection. Connecting all of this to the cloud offers unprecedented flexibility with respect to processing and database services, and follows the Internet of Things paradigm. Interactions are growing (Figure 1). Peer-to-peer communication and client-server mechanisms, as opposed to a simple master-slave protocol, enable fast and flexible exchanges. Access to remote information, whether in real time or through non-real-time communication, enables strong coordination with multiple databases and expands system capabilities. Systems of systems are being assembled, unifying previously isolated silos.
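As a quick order-of-magnitude check (my arithmetic, not a figure from the article):

    \[
    \frac{100\ \text{Mbps}}{19.2\ \text{kbps}} = \frac{100 \times 10^{6}}{19.2 \times 10^{3}} \approx 5200
    \]

i.e., roughly three and a half orders of magnitude more raw throughput per link.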

Traditional Engineering Processes are no Longer Sufficient: Engineering cost is linked to observability and controllability requirements and, for the sake of simplicity, is often related directly to the number of points. The first step is to identify these points and check that their meaning is the same for the supplier and the user, possibly conducting an audit in the case of a refurbishment. The usage of each point is then discussed, from basic alarming and labeling needs to its integration with automation schemes. The verification and validation step then provides the properly coordinated stimuli for testing.
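As a minimal sketch of that first step (the record structure and point names below are invented for illustration, not an actual engineering tool), one might cross-check a supplier’s point list against the user’s expected list:

    # Minimal sketch: cross-check a supplier's point list against the user's
    # expectations. The point names and descriptions are hypothetical.

    supplier_points = {
        "CB1_POS": "Circuit breaker 1 position",
        "CB1_TRIP": "Circuit breaker 1 trip command",
        "F87T_OP": "Transformer differential operated",
    }

    user_points = {
        "CB1_POS": "Circuit breaker 1 position",
        "CB1_TRIP": "Breaker 1 open command",   # description differs: needs review
        "U_BUS_A": "Busbar A voltage",          # missing on the supplier side
    }

    missing = sorted(set(user_points) - set(supplier_points))
    extra = sorted(set(supplier_points) - set(user_points))
    mismatched = sorted(
        p for p in set(user_points) & set(supplier_points)
        if user_points[p] != supplier_points[p]
    )

    print("Missing from supplier list:", missing)
    print("Not requested by user:", extra)
    print("Descriptions to reconcile:", mismatched)

Running such a check early surfaces exactly the semantic mismatches that otherwise appear during an audit or, worse, during commissioning.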

The increasing number of points, relationships among entities, and automation schemes, and thus, indirectly, the probability of a change (due to the growing system size), has several potential consequences when overly simplified system engineering practices are used. First, it can lead to non-linear expansion of the lead-time (i.e., increasing faster than the number of points). Quality can also suffer, with bugs resulting from the complex tasks requested of the solution engineers; system availability may be degraded in case of insufficient architectural design of the solution. Finally, extending applications to neighboring domains (from substation to micro-grid, for instance) requires communication among more experts than before. As systems grow in size and complexity, so does the expertise required to manage them. “Hardware” costs (including the embedded software of the IEDs) have decreased for a given functional perimeter. Each level of the engineering process requires new methods and tools to address the coming challenges and guarantee a solution’s competitiveness over its lifetime, i.e., the engineering total cost of ownership. To improve overall efficiency, the traditional boundaries among the different stakeholders may also have to change.
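To make the non-linearity concrete (a stylized model, not a result from the article): if engineering effort scales with the number of point-to-point relationships rather than with the number of points themselves, then

    \[
    T(N) \propto \binom{N}{2} = \frac{N(N-1)}{2} = O(N^{2}),
    \]

so doubling the number of points roughly quadruples the effort unless methods and tools evolve accordingly.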

Let’s start with the organization of protection testing.