May 2, 2010
Since the purpose of this blog is to advertise my personal brand and make connections with people in the industry, last week I posted a link to my first blog post on several LinkedIn groups related to Smart Grid technology, and that elicited a number of interesting responses. Those of you who are members of the SmartGrids: Energy & Water group on LinkedIn can follow the main thread of discussion here.
In future posts I will step back and take a look at what the term smart grid actually means and try to discuss the breadth and depth of transformation and opportunity that this represents. I also plan to report back from The Networked Grid 2010 conference which I will be attending in Palm Springs on May 18 and 19. However, today, I would like to address some follow up discussions related to the need for standardization to allow for interoperability in the smart grid domain.
I talked with Des Farren, CEO of ServusNet, a software company based in Cork, Ireland, that provides Operations & Maintenance (O&M) and Operational Intelligence (OI) solutions for the renewable energy industry. Des’s experience indicates a situation in the wind farm industry very similar to the one I warned of in my earlier posting.
The ServusNet Operations Platform provides a way for an operator of a windfarm portfolio to:
- Consolidate multi-vendor turbine fault and statistics data into one standard database.
- Generate operations-level and portfolio-wide dashboards and reports.
- Initiate pre-defined operational workflows directly from alarms, calendar events or trend thresholds.
- Utilise detailed equipment data to quickly build more accurate production forecasts.
- Allow operations centre or field access through any web browser.
- Integrate wind farm data with existing enterprise software systems.
What Des is seeing in the wind farm industry is that although an operator may start off with a single wind turbine supplier, after a few years, when they expand their operation, they bring in newer turbines whose fault and data reporting capabilities have been updated to provide new functionality. In some cases, wind farm operators select multiple vendors, and the capabilities of the equipment may vary substantially. Another challenge is that, in some cases, even the data within a single wind turbine implementation may be unnormalized, with the same object being referred to by different names in different parts of the data stream. This is indicative of the types of problems that will emerge as the smart grid build-out continues and we begin to integrate operations across multiple vendors.
The International Electrotechnical Commission (IEC) has done some work on creating a standard data model for wind turbine generators, but this model does not cover all of the required objects or the data associated with those objects, and not all vendors adhere to the model anyway. The ServusNet solution mediates the data from the various generations of technology or vendors within an operator’s portfolio and stores that data in a normalized database based on the IEC model, extended to address the shortcomings in that model. From there, ServusNet can provide a uniform view of the network to the operator regardless of the specific technology embedded within each turbine.
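The mediation idea described above can be sketched in a few lines: each vendor's native signal names are translated onto one canonical data model, and anything the base model doesn't cover is preserved in an extension area rather than discarded. All of the names below (the vendors, signal names and mapping tables) are hypothetical illustrations, not ServusNet's or the IEC's actual identifiers.

```python
# Per-vendor mapping tables from native signal names to canonical ones.
# Two fictional vendors that report the same physical quantities under
# different names, as described in the text.
VENDOR_SIGNAL_MAP = {
    "vendor_a": {"GenSpd": "generator_speed_rpm", "WSpd": "wind_speed_ms"},
    "vendor_b": {"gen_rpm": "generator_speed_rpm", "anemometer": "wind_speed_ms"},
}

def normalize(vendor: str, raw: dict) -> dict:
    """Translate a raw telemetry record into the canonical data model.

    Signals the canonical model does not cover are kept under an
    'extensions' key rather than dropped, mirroring the idea of
    extending the base model where it falls short.
    """
    mapping = VENDOR_SIGNAL_MAP[vendor]
    record, extensions = {}, {}
    for name, value in raw.items():
        if name in mapping:
            record[mapping[name]] = value
        else:
            extensions[name] = value
    if extensions:
        record["extensions"] = extensions
    return record
```

Once every record passes through a mediator like this, reports and dashboards can be written once against the canonical names, regardless of which turbine generation or vendor originated the data.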
Large system integrators (e.g. IBM, Accenture, Capgemini, Logica) can offer a similar solution for wind farm operators, but those solutions typically involve tailoring large enterprise management systems to the wind operators’ requirements and are very capital intensive. ServusNet provides a lower-cost alternative that still preserves the ability to integrate further into an enterprise-level solution if and when the need arises, and that integration can now be done against a single standardized interface provided by ServusNet rather than against each of the different turbine models that exist within the portfolio.
The connection back to my original blog: as is always the case with new technologies, the wind turbine vendors are providing a basic management capability for their equipment, based on the requirements inherent in the design of their unique solution. In the absence of common data models and operations models such as those provided by the TMN model, it is difficult for the operators of a heterogeneous network to get a homogeneous picture of the operations of their entire network. As a consequence, the implementation of the higher-layer management functions described in the network management, service management and business management layers of the TMN model is much more difficult to achieve and requires a lot of custom development effort.
I also had an interesting conversation with Dr Colin Fitzpatrick at the University of Limerick in Ireland. Colin described a project he is working on which is looking at the possibility of using demand response (the process of automatically reducing demand to match available supply) to allow integration of greater quantities of renewable power sources, which tend to be intermittent. The more common approach to dealing with the intermittency problem posed by renewable sources such as wind and solar power has been to regulate the voltage on the grid using other, more reliable power sources in order to ensure a consistent power supply. Colin’s team is looking at the alternative of reducing or shifting demand selectively for those devices that are not sensitive to instantaneous power availability. Examples of such devices include storage heaters, geothermal heat pumps, water heaters, dishwashers, electric vehicle charging stations, etc. Obviously it would be unacceptable to deny power to lighting, TVs, medical devices, etc. The solution being proposed introduces the concept of prioritization among devices for the available power supply.
There is an analogy here to another concept in the telecom world that could be re-used in the smart grid space to achieve these goals. In telecom networks, the finite resource that has to be allocated among all competing devices and services is bandwidth, and this allocation is accomplished using a suite of resource reservation control mechanisms known as Quality of Service (QoS). In the case of regulating the grid for intermittent generation sources, the managed resource is power rather than bandwidth, but the principles are essentially the same. This is another example of a problem that the smart grid space is starting to address for which the telecom industry already has a potential solution.
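To make the QoS analogy concrete, here is a minimal sketch of priority-based demand response: every load declares a priority, and when available supply drops, the most critical loads are served first while deferrable loads are shed, much like traffic classes in a QoS scheme. The load names, priority values and greedy allocation are assumptions for illustration, not Colin's actual scheme or any real utility's.

```python
def allocate(loads, supply_kw):
    """Return the names of the loads kept on, shedding low priority first.

    `loads` is a list of (name, demand_kw, priority) tuples, where a
    higher priority means more critical (lighting, medical devices).
    """
    kept, used = [], 0.0
    # Serve the most critical loads first, like high-QoS traffic classes.
    for name, demand, priority in sorted(loads, key=lambda l: -l[2]):
        if used + demand <= supply_kw:
            kept.append(name)
            used += demand
    return kept

# Illustrative household loads (name, demand in kW, priority).
loads = [
    ("medical_device", 0.5, 10),
    ("lighting", 1.0, 9),
    ("water_heater", 3.0, 2),   # deferrable
    ("ev_charger", 7.0, 1),     # deferrable
]
```

With only 5 kW of supply available, this sketch keeps the medical device, lighting and water heater (4.5 kW in total) and defers the EV charger; with ample supply, everything runs. A real demand response scheme would of course also shift deferred loads to later time slots rather than simply dropping them.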
This topic is gaining traction in other fora as well: Shahid Ahmed, Accenture’s Global Network Technology Practice Lead, writes in power-technology.com’s online magazine, The Future’s Smart, about four emerging trends affecting how the smart grid will be built and operated:
- A shift from a centralized to a decentralized model of command and control with an increase in peer-to-peer relationships and communications.
- A corresponding shift to more decentralized generation, with the attendant need to manage large numbers of discrete distributed energy sources, all of which need to be integrated into the grid in a reliable manner that preserves the stability of the grid.
- A migration of intelligence from a few nodes close to the center of the grid (e.g. substations) to many end points (transmission lines, switches, smart meters, even intelligent devices within the consumer premises), and an associated increase in the amount of intelligence at each of those nodes.
- A dramatic increase in the volume of data and a decrease in tolerance for latency (the amount of delay in collecting and accessing that data) within the network. This includes both synchronous data (where the volume and frequency of origination are known and can be planned for) and asynchronous data, such as event and alarm data, that may come in bursts and requires significant expertise in network sizing and latency design to plan for.
These are trends that we have seen in the telecom industry for some time. As telecom technologies moved to an all-IP model, the architectures were flattened, with an increase in peer-to-peer relationships. We have seen a corresponding trend to push intelligence to the edges of the network, where changes in the operational needs or behavior of the network can be managed more efficiently. This trend also provides the foundation for self-healing and self-optimization in the network, which is not possible with the centralized command and control models of old. And, of course, the telecom industry has been dealing with huge volumes of operational data, both synchronous and asynchronous, for years. We have mechanisms for throttling such data at the points of origination and for correlating and suppressing less important data so that the operator can clearly see the most important events occurring in the network at any given point in time.
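The throttling and correlation mechanisms mentioned above can be sketched very simply: duplicate events from the same node within a time window are counted rather than re-reported, so an alarm storm collapses into a handful of distinct alarms with occurrence counts. The event fields and the fixed-window logic are illustrative assumptions, far simpler than a real telecom network management system.

```python
from collections import defaultdict

def correlate(events, window_s=60):
    """Collapse a burst of raw events into the alarms an operator sees.

    `events` is a list of (timestamp_s, node, alarm_type, severity).
    A repeat of the same (node, alarm_type) within `window_s` seconds of
    its last report is suppressed but still counted.
    """
    seen = {}                   # (node, alarm_type) -> last reported time
    counts = defaultdict(int)   # occurrence counts, including suppressed
    reported = []
    for ts, node, alarm, severity in sorted(events):
        key = (node, alarm)
        counts[key] += 1
        last = seen.get(key)
        if last is None or ts - last >= window_s:
            reported.append((ts, node, alarm, severity))
            seen[key] = ts
    return reported, dict(counts)

# A burst of raw events: a flapping link plus one unrelated alarm.
events = [
    (0, "sub1", "link_down", "major"),
    (5, "sub1", "link_down", "major"),    # duplicate within the window
    (10, "sub2", "temp_high", "minor"),
    (90, "sub1", "link_down", "major"),   # outside the window, re-reported
]
```

Here four raw events reduce to three reported alarms, with the counts preserving how often each condition actually occurred; a real system would add root-cause correlation across alarm types as well.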
Ravi Raju, VP of Corporate Strategy at SmartSynch, a leading AMI provider, authored an article in the online journal Energy Pulse in which he forecasts that, if 2009 was the year of Smart Grid, 2010 will be the year of Smart Grid Collaboration. In the article, Raju quotes Dr. Aaron Snyder, principal consultant at Enernex, an electrical power engineering and consulting firm, as saying “If utilities spend [the US federal stimulus money] on proprietary technology, they may have to replace it in a few years. The vision of an interoperable, plug-and-play power grid cannot come to pass if each of the country’s 3,000 utilities is in its own incompatible island of non-compliant technology.” This article introduces another interesting challenge, which also came up in a recent conversation I had with an executive at a major system integrator working in the smart grid field: in addition to concerns about interoperability among competing vendors in the same space, the fact that many utilities are structured into operational silos (transmission, distribution, metering, etc.) results in a situation in which there are communication and interoperability issues between these silos too. Raju’s proposed solution is a standards-based, technology-agnostic communication platform that will enable integration of disparate vendors and silos of information while still allowing a path for new technologies to be integrated into the ecosystem as they emerge. In reality, such a platform would be defined by standards but instantiated by multiple competing vendors. This may sound like Nirvana, but it is the vision that we need to be aiming for if we are to avoid the scenario that Dr Snyder warns against.