While weighing an investment in forecast systems, plant owners have an opportunity to wait and see how the various methods are proven in the field, writes Dr Colin Bonner.
In 2016, wind farm operators, in a bid to reduce causer pays costs, led the push for semi-scheduled generators to provide self-forecasts to AEMO for dispatch instead of relying on AEMO’s forecasts. In response, ARENA provided funding for 11 projects to trial a range of forecasting technologies. The trials run for two years to allow for seasonal variations and to give providers time to satisfy AEMO’s stringent vetting process before a specific self-forecast is used for dispatch.
At the time of writing in late August, no self-forecasts have been used in AEMO’s dispatch, despite several forecast systems being installed at trial sites in the first half of 2019.
The 11 ARENA-funded projects use a range of technologies, including ground- and space-based remote sensing, artificial intelligence, sky cameras, thermal cameras and numerical weather prediction.
Forecasts are assessed by their “skill score”, a simple measure obtained by comparing the forecast parameter with what actually happened. A perfect forecast has a skill score of zero; the score becomes increasingly negative as the forecast loses accuracy or forecasting skill.
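As an illustration only (AEMO’s actual causer pays metric is more involved and is not reproduced here), a score with these properties can be sketched as a negative normalised error, so that zero is perfect and larger errors push the score further below zero. The function name and normalisation are assumptions for the sketch:

```python
# Illustrative skill score only, NOT AEMO's official metric:
# zero for a perfect forecast, increasingly negative as error grows.
def skill_score(forecast_mw, actual_mw, capacity_mw):
    """Negative mean absolute forecast error, normalised by plant capacity."""
    errors = [abs(f - a) for f, a in zip(forecast_mw, actual_mw)]
    return -sum(errors) / (len(errors) * capacity_mw)

perfect = skill_score([50, 60, 70], [50, 60, 70], capacity_mw=100)
worse = skill_score([40, 80, 55], [50, 60, 70], capacity_mw=100)
print(perfect, worse)  # the second score is more negative than the first
```

Normalising by capacity simply keeps scores comparable between plants of different sizes; any monotonic error measure would exhibit the same zero-is-perfect behaviour.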
Respected meteorologist Edward Zipser presented the graph below in 1990, which illustrates how the skill scores of various meteorological forecast methods trend against the time ahead they are trying to forecast. Zipser’s limit of predictability derives from 1960s work by renowned meteorologist Edward Lorenz, popularised as the “butterfly effect”: in essence, small changes in initial conditions can have larger effects as time progresses.
Numerical weather prediction models are complex computer models of the atmosphere. They can be erroneous at initialisation due to errors in their initial conditions and boundary effects, and are generally best used for forecasts from four to six hours out to days ahead. They are essential for long-term forecasts because they allow the state of the atmosphere to evolve. They are also computationally expensive: regional models can take hours to run. It is possible to run numerical weather prediction models much faster over smaller areas, but data sources for assimilation become problematic, resulting in a lower skill score.
The extrapolation region in Zipser’s graph applies to forecasting from SCADA, cloud forecasting from satellite images or ground-based sky cameras, and the assimilation of additional data sources such as lidars and sodars. Extrapolation forecasting uses these data sources to estimate the current state of the environment, such as cloud shape and velocity, wind speed gradient and optical turbidity.
The state of the environment is assumed to remain constant for the forecast period, and its impact on energy generation is calculated. None of the AI/machine learning, physical-model, image-processing and statistical forecasting methods currently being trialled in the NEM offers an inherent advantage: all have strengths and weaknesses, and it comes down to the vendors’ specific implementations.
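A toy sketch of that persistence assumption for a solar plant (the function name, numbers and attenuation factor are hypothetical, not any vendor’s method): a cloud edge observed approaching the plant is advected at its last-measured closing speed, and generation is assumed to drop by a fixed attenuation once it arrives:

```python
# Toy extrapolation (persistence) forecast: the cloud's last observed
# closing speed is held constant over the forecast horizon.
def persistence_forecast(current_mw, cloud_distance_km, closing_speed_kmh,
                         attenuation=0.5, horizon_min=30, step_min=5):
    """Forecast MW at each step: current output persists until the
    advected cloud edge arrives, then drops by the assumed attenuation."""
    if closing_speed_kmh > 0:
        eta_min = 60.0 * cloud_distance_km / closing_speed_kmh
    else:
        eta_min = float("inf")  # cloud not approaching: pure persistence
    return [current_mw * (1 - attenuation) if t >= eta_min else current_mw
            for t in range(0, horizon_min, step_min)]

# A cloud 6 km away closing at 24 km/h arrives in 15 minutes:
print(persistence_forecast(80, 6, 24))  # [80, 80, 80, 40.0, 40.0, 40.0]
```

The sketch also shows why skill decays with horizon: the constant-velocity and constant-attenuation assumptions hold for minutes, not hours.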
The trend before the results
As part of the ARENA short-term forecasting program, AEMO is publishing daily summaries of all solar and wind energy forecasting systems and generator self-forecasts. The summaries cover all generators and forecast providers, not just those included in the ARENA project. The dispatch forecast data is available to the public via the NEM website, and SCADA data, also available from the NEM website, provides summaries of actual generation. There is no doubt that these public data sets will be analysed by the industry to understand the performance and return on investment of the dozen or so suppliers.
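The kind of analysis that data enables can be sketched as below. The provider names and megawatt figures are entirely invented; a real study would join the published dispatch forecasts and SCADA actuals from the NEM website on dispatch interval:

```python
# Hypothetical data standing in for published dispatch forecasts and
# SCADA actuals for one plant over six dispatch intervals (MW).
actual = [42, 55, 61, 48, 39, 44]
forecasts = {
    "provider_a": [40, 57, 60, 50, 41, 43],  # made-up numbers
    "provider_b": [35, 60, 70, 40, 45, 50],
}

def mae(forecast, actual):
    """Mean absolute error between a forecast series and actuals (MW)."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

# Rank providers by mean absolute error, best first.
ranking = sorted(forecasts, key=lambda p: mae(forecasts[p], actual))
print([(p, round(mae(forecasts[p], actual), 2)) for p in ranking])
```

Over a full trial period the same comparison, run per dispatch interval and per plant, is what would reveal each supplier’s real-world performance.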
From an insider’s perspective, self-forecasting for wind generators is proceeding in a conservative manner, whereas the solar forecasting sector is peppered with marketing hype – driven, I suspect, by vendors and by generators’ need to reduce causer pays costs as soon as possible.
While the self-forecast trials will run for at least another year, most providers will have early results from AEMO-validated forecasts in late 2019. Although there is significant interest in self-forecasting, a forecast system can involve considerable setup costs and integration and validation time. I would caution asset owners and operators against adopting a forecast system before its performance has been proven.
Dr Colin Bonner is technical director and co-founder of Fulcrum3D.