Poster in Affinity Workshop: 4th MusIML Workshop at ICML'25
Stabilizing the Kuramoto–Sivashinsky Equation Using Deep Reinforcement Learning with a DeepONet Prior
Nadim Ahmed · Ashraful Babu · Md. Mortuza Ahmmed · M Rahman · Mufti Mahmud
Abstract:
This paper presents a novel reinforcement learning framework that leverages a DeepONet prior to stabilize the Kuramoto–Sivashinsky (KS) equation. A DeepONet first learns a generalized control operator offline, which is then refined online with Deep Deterministic Policy Gradient (DDPG) to adapt to trajectory-specific dynamics. The approach achieves a 55% energy reduction within 0.2 time units and substantially narrows chaotic fluctuations, outperforming traditional feedback control. The DeepONet prior reduces MSE by 99.3%, while the RL agent improves mean episode reward by 59.3%. The method offers a scalable and effective solution for controlling complex, high-dimensional nonlinear systems.
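For context, the one-dimensional KS equation with an additive control forcing f(x, t) is commonly written as below. The abstract does not specify the paper's exact domain, boundary conditions, or actuation layout, so this is the standard form rather than the authors' precise setup:

```latex
% 1D Kuramoto–Sivashinsky equation with additive control forcing f(x, t)
\partial_t u + u\,\partial_x u + \partial_x^2 u + \partial_x^4 u = f(x, t),
\qquad x \in [0, L]
```

With f = 0 and periodic boundary conditions, the equation exhibits spatiotemporal chaos for sufficiently large domain length L, which is the regime a stabilizing controller must suppress.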
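A minimal PyTorch sketch of the two-stage idea described above, purely illustrative: a DeepONet maps sensed state samples and actuator locations to a control field, is pretrained offline against a reference feedback law, and its weights then warm-start the DDPG actor for online refinement. All names (`DeepONetActor`, `n_sensors`, `ddpg_step`), network sizes, and the placeholder training targets are assumptions for illustration, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

class DeepONetActor(nn.Module):
    """Illustrative DeepONet: the branch net encodes sensed state samples
    u(x_1..x_m), the trunk net encodes actuator locations y, and the control
    field is their inner product over a shared latent width (an assumption)."""
    def __init__(self, n_sensors: int, width: int = 64):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, width),
        )
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, width),
        )

    def forward(self, u: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # u: (batch, n_sensors) state samples; y: (n_act, 1) actuator locations
        b = self.branch(u)          # (batch, width)
        t = self.trunk(y)           # (n_act, width)
        return b @ t.T              # (batch, n_act) control actions f(y)

n_sensors, n_act = 64, 8
y_act = torch.linspace(0.0, 1.0, n_act).unsqueeze(1)  # fixed actuator grid

# --- Stage 1: offline operator learning (the DeepONet prior) ---------------
prior = DeepONetActor(n_sensors)
opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
u_batch = torch.randn(32, n_sensors)                 # placeholder state snapshots
f_target = -0.5 * u_batch[:, :: n_sensors // n_act]  # placeholder feedback law
loss = nn.functional.mse_loss(prior(u_batch, y_act), f_target)
opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: online DDPG refinement, actor warm-started from the prior ----
actor = copy.deepcopy(prior)       # DeepONet prior initializes the actor
critic = nn.Sequential(nn.Linear(n_sensors + n_act, 128), nn.ReLU(),
                       nn.Linear(128, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma, tau = 0.99, 0.005

def ddpg_step(s, a, r, s2):
    """One DDPG update on a replay batch (state, action, reward, next state)."""
    with torch.no_grad():  # bootstrapped target Q-value from target networks
        q_tgt = r + gamma * critic_tgt(torch.cat([s2, actor_tgt(s2, y_act)], 1))
    c_loss = nn.functional.mse_loss(critic(torch.cat([s, a], 1)), q_tgt)
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    a_loss = -critic(torch.cat([s, actor(s, y_act)], 1)).mean()
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):  # Polyak averaging
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)
```

The design rationale, as suggested by the abstract, is that the offline operator prior places the actor near a stabilizing policy from the first episode, so DDPG only needs to adapt to trajectory-specific dynamics rather than discover a controller from scratch.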