This Reinforcement Learning System Flies a Kite to Boost Wind Harvesting Capabilities
Through a trial-and-error approach, this ML system can boost the output of airborne wind energy harvesting systems with no prior knowledge.
A cross-university team of European researchers, led by Antonio Celani of the Abdus Salam International Centre for Theoretical Physics, has come up with a trial-and-error machine learning system that could help boost the output of airborne wind energy (AWE) kites and gliders.
"Airborne wind energy is a lightweight technology that allows power extraction from the wind using airborne devices such as kites and gliders, where the airfoil orientation can be dynamically controlled in order to maximize performance," Celani's team explains. "The dynamical complexity of turbulent aerodynamics makes this optimization problem unapproachable by conventional methods such as classical control theory, which rely on accurate and tractable analytical models of the dynamical system at hand."
Where classical control theory may be ill-suited, machine learning can help. In a recently published paper, the team details a reinforcement learning system which uses "repeated trial-and-error interactions with the environment" to learn which actions provide the best outcome, with no prior knowledge of the system.
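To make the trial-and-error idea concrete, here is a minimal tabular Q-learning sketch in Python. This is not the team's actual algorithm or environment: the states (discretised attack-angle settings), actions, and reward are invented stand-ins, chosen only to show how an agent can discover a good policy from repeated interaction alone, without any model of the dynamics.

```python
import random

N_STATES = 5            # discretised attack-angle settings (hypothetical)
ACTIONS = (-1, 0, +1)   # decrease, hold, or increase the angle
OPTIMAL_STATE = 3       # hidden optimum; unknown to the learner

def step(state, action):
    """Apply an action and return (next_state, reward)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == OPTIMAL_STATE else 0.0
    return nxt, reward

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected return for taking each action in each state
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = rng.randrange(N_STATES)
        for _ in range(20):  # steps per episode
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # standard Q-learning update: needs no model of the dynamics,
            # only the observed transition and reward
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = train()
# Greedy policy: the best learned action in each state
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

After training, the greedy policy steers every state toward the hidden optimum, even though the learner was never given the reward function or transition rules up front, which is the essential property the researchers exploit.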
To prove the concept, the team created a simulated environment in which an energy-harvesting kite was tethered to a ship by a cable of fixed length held permanently under tension. The reinforcement learning algorithm proved able to control the kite, offering insight into "simple control rules" for the system; an improved version, which switches from the learned policy to one that "distills" these rules, was claimed to be "indistinguishable" from a fully-learned policy.
"The application of our method beyond the simulated environment is a tantalizing perspective. However, several challenges lie ahead when training takes place in the real physical world," the team admits. "Among those, a prominent necessity is finding algorithms that learn faster. Encouraging results from robotics and unmanned aerial navigation offer some hope that these challenges can be overcome and that Reinforcement Learning can become an important algorithmic tool for AWE applications."
The team's work has been published under closed-access terms in The European Physical Journal E; an open-access preprint is available on Cornell's arXiv server.