Soiling confronts operators of solar power plants with the challenge of finding the right cleaning strategy for their solar fields. They must balance a trade-off: low cleanliness, and thus reduced revenue, on the one hand, and elevated cleaning costs with high field efficiency on the other.
In this study we address this problem using a reinforcement learning algorithm. Reinforcement learning is a trial-and-error learning process guided by a scalar reward; here, the reward is the profit of the CSP project. The algorithm improves with an increasing number of training runs, each performed on a different one-year data set. To prevent overfitting to a particular case, the training data set has to be sufficiently large. We therefore first present a method to create artificial long-term data sets, representative of the site's weather conditions, that extend our five-year soiling-rate and 25-year meteorological measurement data sets from CIEMAT's Plataforma Solar de Almeria (PSA). With the extended data sets we are able to train the algorithm sufficiently before testing it on the validation data set.
The algorithm is given the daily choice to deploy up to two cleaning units in day and/or night shifts. In a second step, it is additionally given soiling-rate forecasts with different forecast horizons. At PSA, our trained algorithm increases a project's profit by 1.28 % compared to a reference constant cleaning frequency (a relative profit increase, RPI) when only the current cleanliness of the solar field is known. Given a one-day soiling-rate forecast, the profit increase rises to 1.33 %, and with a three-day forecast to 1.37 %. Extending the forecast horizon further does not appear to increase the RPI. For sites with higher dust loads than PSA, the RPI is expected to be significantly higher.
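The daily decision problem described above can be sketched as a simple simulation loop. The following is a minimal, hypothetical illustration only: the cost, revenue, and soiling figures, the cleanliness-restoration model, and the naive threshold policy are all assumptions for demonstration, not the trained agent or the measured PSA data from this study.

```python
import random

SOILING_RATE = 0.01       # mean daily cleanliness loss (assumed)
CLEAN_COST = 500.0        # cost per cleaning-unit shift (assumed)
REVENUE_CLEAN = 10000.0   # daily revenue at 100 % cleanliness (assumed)
RESTORE_PER_SHIFT = 0.05  # cleanliness restored per unit shift (assumed)

def step(cleanliness, n_shifts, soiling_rate):
    """Advance one day: clean with n_shifts unit-shifts (0..4, i.e.
    up to two units in day and/or night shifts), then apply soiling."""
    cleanliness = min(1.0, cleanliness + n_shifts * RESTORE_PER_SHIFT)
    reward = cleanliness * REVENUE_CLEAN - n_shifts * CLEAN_COST
    cleanliness = max(0.0, cleanliness - soiling_rate)
    return cleanliness, reward

def run_year(policy, seed=0):
    """Simulate one year; the accumulated profit is the scalar
    reward signal a reinforcement learning agent would maximize."""
    rng = random.Random(seed)
    cleanliness, profit = 1.0, 0.0
    for _ in range(365):
        rate = rng.uniform(0.0, 2 * SOILING_RATE)  # synthetic soiling rate
        n_shifts = policy(cleanliness)
        cleanliness, reward = step(cleanliness, n_shifts, rate)
        profit += reward
    return profit

# A naive baseline policy: clean whenever cleanliness drops below 95 %.
threshold_policy = lambda c: 2 if c < 0.95 else 0
```

In the study itself, a trained agent replaces the fixed threshold rule, learning its cleaning decisions from many such one-year episodes drawn from the extended data sets.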
Reinforcement learning in combination with the data-extension algorithm can be a useful method to increase a CSP project's profit over its lifetime.