Volume 4, Issue 1
Deep Reinforcement Learning for Infinite Horizon Mean Field Problems in Continuous Spaces

Andrea Angiuli, Jean-Pierre Fouque, Ruimeng Hu & Alan Raydan

J. Mach. Learn., 4 (2025), pp. 11-47.

Published online: 2025-03

[An open-access article; the PDF is free to any online user.]

  • Abstract

We present the development and analysis of a reinforcement learning algorithm designed to solve continuous-space mean field game (MFG) and mean field control (MFC) problems in a unified manner. The proposed approach pairs the actor-critic (AC) paradigm with a representation of the mean field distribution via a parameterized score function, which can be efficiently updated in an online fashion, and uses Langevin dynamics to obtain samples from the resulting distribution. The AC agent and the score function are updated iteratively until they converge either to the MFG equilibrium or to the MFC optimum of a given mean field problem, depending on the choice of learning rates. A straightforward modification of the algorithm allows us to solve mixed mean field control games. The performance of our algorithm is evaluated using linear-quadratic benchmarks in the asymptotic infinite horizon framework.
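The abstract mentions that samples from the learned mean field distribution are obtained via Langevin dynamics driven by a parameterized score function. As a generic illustration of that sampling idea only (not the authors' implementation; the step size, iteration count, and test score function below are illustrative assumptions), a minimal unadjusted Langevin sampler looks like this:

```python
import numpy as np

def langevin_sample(score, x0, step=1e-2, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics:
        x_{k+1} = x_k + step * score(x_k) + sqrt(2 * step) * noise,
    where `score` approximates the gradient of the log-density of the
    target distribution. Iterating drives x toward that distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * score(x) + np.sqrt(2.0 * step) * noise
    return x

# Sanity check: the score of a standard Gaussian is -x, so repeated
# runs should produce samples with mean near 0 and variance near 1.
samples = np.array(
    [langevin_sample(lambda x: -x, np.zeros(1)) for _ in range(200)]
).ravel()
```

In the paper's setting the score function is a trained network updated online alongside the actor and critic; the sketch above only shows how a fixed score turns into samples.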


  • Copyright

COPYRIGHT: © Global Science Press

  • BibTex
  • RIS
  • TXT
@Article{JML-4-11,
  author  = {Angiuli, Andrea and Fouque, Jean-Pierre and Hu, Ruimeng and Raydan, Alan},
  title   = {Deep Reinforcement Learning for Infinite Horizon Mean Field Problems in Continuous Spaces},
  journal = {Journal of Machine Learning},
  year    = {2025},
  volume  = {4},
  number  = {1},
  pages   = {11--47},
  issn    = {2790-2048},
  doi     = {10.4208/jml.230919},
  url     = {http://global-sci.org/intro/article_detail/jml/23890.html}
}
TY  - JOUR
T1  - Deep Reinforcement Learning for Infinite Horizon Mean Field Problems in Continuous Spaces
AU  - Angiuli, Andrea
AU  - Fouque, Jean-Pierre
AU  - Hu, Ruimeng
AU  - Raydan, Alan
JO  - Journal of Machine Learning
VL  - 4
IS  - 1
SP  - 11
EP  - 47
PY  - 2025
DA  - 2025/03
SN  - 2790-2048
DO  - 10.4208/jml.230919
UR  - https://global-sci.org/intro/article_detail/jml/23890.html
KW  - Actor-critic
KW  - Linear-quadratic control
KW  - Mean field game
KW  - Mean field control
KW  - Mixed mean field control game
KW  - Score matching
KW  - Reinforcement learning
KW  - Timescales
ER  -

Angiuli, Andrea, Fouque, Jean-Pierre, Hu, Ruimeng and Raydan, Alan. (2025). Deep Reinforcement Learning for Infinite Horizon Mean Field Problems in Continuous Spaces. Journal of Machine Learning. 4 (1). 11-47. doi:10.4208/jml.230919