The convenience of using performance ratios as benchmarks can leave project proponents with a blind spot, writes Michelle McCann.

The allure of a performance ratio (PR) lies in its simplicity as a power plant performance guarantee. It is a single number, expressed as a percentage, that is used to describe a complete, built system and appears to capture everything: optical losses, array losses and system losses, all in one. A PR is the ratio of delivered to input energy. It is a commonly used factor in PV power plants, from small rooftop systems through to large solar farms. Performance ratios may be as high as 90%, and the performance ratio even has its own IEC definition, IEC 61724, complete with a commonly agreed method of accounting for some of the tricky bits, like temperature correction: NREL/TP-5200-57991.

The performance ratio is sometimes thought to hold the EPC partner to account or to pick up on any underperforming component of a solar farm. It also lets the EPC make the most economically efficient choices among the many trade-offs in plant design and allocate losses in the plant across cables, panels, inverters and AC components in a way that achieves a holistic specification. The performance ratio would seem to be the ideal tool: a simple, single number that a financier can use without having to understand technical details and that a marketer can use to convey the value of a proposition. Almost everyone understands a percentage.

Got what you paid for?

A performance ratio is indeed a valuable and vital tool, but its use should not provide an excuse to neglect other aspects of quality control. As module testers, we at PV Lab Australia would argue that a performance ratio should, in particular, never be used as a reason to skip incoming module inspection.

To understand this more deeply, let us consider the limitations of a performance ratio.

Performance ratio is defined according to:

PR = outgoing electrical energy / incoming energy

Calculating the performance ratio therefore means knowing both the outgoing electrical energy and the incoming energy (DC). The outgoing electrical energy is easily measured at the farm electricity meter and is often simply what enters the electricity network.

Incoming energy (also called reference yield) is trickier. It needs to be corrected for temperature, wind conditions and the accuracy of the sunlight sensor (pyranometer or reference cell). If using bifacial panels, there is a world of complexity in calculating backside illumination. All these extra calculations mean that the true, deeper story has subtle and not-so-subtle complications. Remember, the model didn’t predict exact weather conditions, only plant performance assuming certain weather conditions.

This means that the actual weather conditions need to be measured and the period over which the performance ratio is assessed needs to be defined.

Measurement of the weather conditions requires accurate measurements of both local irradiance and temperature. The first-order simplification of the reference yield is:

incoming energy = nameplate DC power at STC × (measured in-plane irradiation / 1,000 W/m²)
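As a rough sketch, assuming hourly data and illustrative function and variable names (none of this comes from a PV standard library), the first-order calculation might look like this:

```python
G_STC = 1000.0  # W/m^2, irradiance at Standard Test Conditions

def performance_ratio(energy_out_kwh, poa_irradiance_w_m2, nameplate_kw):
    """First-order PR: delivered energy over nameplate times reference yield.

    energy_out_kwh      -- AC energy delivered per interval (kWh)
    poa_irradiance_w_m2 -- plane-of-array irradiance per interval (W/m^2),
                           assumed hourly, so each reading ~ Wh/m^2
    nameplate_kw        -- total DC nameplate capacity at STC (kW)
    """
    delivered = sum(energy_out_kwh)
    # Reference yield: equivalent full-sun hours seen by the array
    reference_yield_h = sum(g / G_STC for g in poa_irradiance_w_m2)
    incoming = nameplate_kw * reference_yield_h
    return delivered / incoming

# Toy example: 10 kW array, five full-sun hours, 41 kWh delivered
print(performance_ratio([41.0], [1000.0] * 5, 10.0))  # 0.82
```

Note that this simple form makes no temperature correction at all, which is exactly where the complications discussed here begin.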

Obviously, weather varies throughout the year and from year to year. This is a complication that is overcome contractually, since there is no absolute right answer that can be predicted in advance. The parties need to decide if they want to measure performance ratio over, say, two weeks or a longer period, such as one year. The longer period offers higher accuracy but obviously takes longer to reach a conclusion. Measurement over one year (or longer!) is complicated because the nameplate value of the panels could be considered to change due to several factors, including the different types of degradation that may occur: light-induced degradation (LID), light- and elevated-temperature-induced degradation (LeTID), potential-induced degradation (PID), etc.

Another problem with the definition of reference yield is that, in addition to degradation complications, the nameplate value at STC doesn’t take into account any difference in soiling between the modules and the sunlight sensor. Losses due to modules operating at temperatures higher than STC are usually corrected for with the NREL method, but the formula is then both more complicated and more precise: it contains a number of inputs that must be agreed between the parties.
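For reference, the NREL weather-corrected PR (NREL/TP-5200-57991) divides measured output by an expected output whose temperature term is evaluated against the annual average cell temperature. A minimal sketch, with all names illustrative and the agreed inputs (temperature coefficient, average cell temperature) supplied by the parties:

```python
def weather_corrected_pr(p_out_kw, g_poa, t_cell, p_stc_kw,
                         delta_pct_per_c, t_cell_typ_avg):
    """Sketch of the NREL weather-corrected PR (NREL/TP-5200-57991).

    p_out_kw        -- measured AC power per interval (kW)
    g_poa           -- plane-of-array irradiance per interval (W/m^2)
    t_cell          -- cell temperature per interval (deg C)
    p_stc_kw        -- DC nameplate at STC (kW)
    delta_pct_per_c -- temperature coefficient of power (%/C, usually negative)
    t_cell_typ_avg  -- annual average cell temperature (deg C), agreed input
    """
    G_STC = 1000.0  # W/m^2
    measured = sum(p_out_kw)
    # Expected power includes the same temperature effect the plant sees,
    # referenced to the annual average cell temperature
    expected = sum(
        p_stc_kw * (g / G_STC)
        * (1 - delta_pct_per_c / 100.0 * (t_cell_typ_avg - t))
        for g, t in zip(g_poa, t_cell)
    )
    return measured / expected
```

With this correction, hot intervals no longer drag the PR down, because the expected output in the denominator is reduced by the same temperature effect.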

The end effect is that the room for negotiation between the parties and the uncertainty on the final number both increase. This can be a problem, especially if the parties think that by agreeing on a performance ratio they have covered all aspects of quality control and are protected against poor module output. Unfortunately, PVsyst, the most commonly used modelling tool for solar farms, states that implementation of NREL’s ‘weather corrected PR’ in PVsyst is “not so convincing” and indeed “almost unuseable with tracking systems”.

Margin for error

While the error on a PR can be less than 3%, the calculation requires a sophisticated user, and the upper bound on the possible error is much higher. To give some real numbers, a study from Germany found that the monitored PR using irradiation data from pyranometers was systematically about 2-4% lower than that measured using reference cells.

Beyond the technicalities there is also a commercial aspect; there will usually be a buffer in the level of PR warranted by the EPC. This will typically be in the range of 2-4% below the modelled engineering estimates. 

Because of the uncertainties and buffers, it remains important to check that each part of the system is functioning as it should throughout procurement, construction and commissioning. Inspection test plans and commissioning measurements of string voltage and current, inverter efficiency and AC performance all remain important tools for ensuring that the plant is performing to specification.

But there is another check that is still overlooked in many farms – are you installing the number of watts for which you have paid?

The graph below shows two batches of solar panels sampled from the Australian market. The performance of panels in each batch was analysed by PV Lab Australia and ‘binned’ according to output relative to the manufacturer’s stated output. On average, batch A underperformed by 2.2% while batch B averaged 0.8% above nameplate: a net difference of 3%. These panels were from the same “tier 1” manufacturer.

By the time calculation uncertainties and other buffers have been included, the 3% difference shown in this graph is unlikely to trigger a PR breach, but it represents money that was paid and goods (watts) that were not delivered.
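The kind of binning described above can be sketched as follows; the bin width and the toy panel readings are illustrative, not PV Lab Australia’s actual procedure:

```python
from collections import Counter

def batch_average_deviation(measured_w, nameplate_w):
    """Average % deviation of measured Pmax from the manufacturer's nameplate."""
    return sum((m - nameplate_w) / nameplate_w * 100.0
               for m in measured_w) / len(measured_w)

def bin_deviation(measured_w, nameplate_w, bin_width_pct=1.0):
    """Count panels per deviation bin (e.g. -2%, -1%, 0%, ... vs nameplate)."""
    bins = Counter()
    for m in measured_w:
        dev_pct = (m - nameplate_w) / nameplate_w * 100.0
        bins[round(dev_pct / bin_width_pct) * bin_width_pct] += 1
    return dict(bins)

# Toy batch: two panels sold as 400 W, flash-tested at 391 W and 409 W
print(batch_average_deviation([391.0, 409.0], 400.0))  # 0.0 on average
```

A batch can average close to nameplate while still containing a long tail of underperforming panels, which is why the per-bin distribution matters as much as the mean.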

On a 10MW plant, this difference could amount to a lost revenue of $30,000 a year and could easily be picked up in an unambiguous way with incoming module inspections. Furthermore, the incoming module inspections are, by their nature, done at the front end of a project and any deficiency can be addressed upfront.
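As a back-of-envelope check on that figure, under assumed values for specific yield and energy price (both illustrative, not from the article):

```python
plant_dc_kw = 10_000                 # 10 MW plant
shortfall = 0.03                     # 3% fewer watts than paid for
specific_yield_kwh_per_kwp = 1700    # assumed annual yield, site-dependent
price_per_kwh = 0.06                 # assumed ~$60/MWh energy price

missing_kw = plant_dc_kw * shortfall                # 300 kW never delivered
lost_kwh = missing_kw * specific_yield_kwh_per_kwp  # 510,000 kWh per year
lost_revenue = lost_kwh * price_per_kwh
print(f"${lost_revenue:,.0f} per year")  # $30,600 per year
```

The exact figure obviously depends on site yield and contracted price, but the order of magnitude matches the $30,000 quoted above.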

In conclusion, performance ratios are an extremely valuable tool, and they do provide all parties with a measure of certainty on full project performance. This certainty can be more tightly bounded with complementary tests, including incoming module inspections, which serve to de-risk a project and thereby improve project returns for all parties involved.

Dr Michelle McCann has worked in solar energy since 1996. She is a Partner at PV Lab Australia. PV Lab Australia has tested solar panels used in more than 1GW of Australian solar farms. Michelle was the CEO and one of the founders of Spark Solar Australia. She also worked in the photovoltaic group at the University of Konstanz, where she led the novel devices group. She has a PhD from the photovoltaic group at the ANU. Michelle has twice held a world record for high efficiency solar cells.