Reinforcement Learning from Experience Feedback: Application to Economic Policy

Author/Editor:

Tohid Atashbar

Publication Date:

June 7, 2024

Disclaimer: IMF Working Papers describe research in progress by the author(s) and are published to elicit comments and to encourage debate. The views expressed in IMF Working Papers are those of the author(s) and do not necessarily represent the views of the IMF, its Executive Board, or IMF management.

Summary:

Learning from the past is critical for shaping the future, especially in economic policymaking. Building on current methods for applying Reinforcement Learning (RL) to large language models (LLMs), this paper introduces Reinforcement Learning from Experience Feedback (RLXF), a procedure that tunes LLMs on lessons drawn from past experience. RLXF integrates historical experience into LLM training in two key ways: by training reward models on historical data, and by using that knowledge to fine-tune the LLMs. As a case study, we applied RLXF to tune an LLM using the IMF's MONA database to generate historically grounded policy suggestions. The results demonstrate RLXF's potential to equip generative AI with a nuanced perspective informed by previous experience. Overall, RLXF could enable more informed applications of LLMs for economic policy, but heavy reliance on historical data carries risks and limitations, as it may perpetuate biases and outdated assumptions.
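The two-stage procedure sketched in the summary (a reward model trained on historical outcomes, then RL fine-tuning of a language model against it) can be illustrated with a minimal, self-contained PyTorch sketch. Everything below is a hypothetical toy stand-in: the model classes (RewardModel, PolicyLM), the random "historical" data, and the REINFORCE-style update are illustrative assumptions, not the paper's implementation, which uses the MONA database and a full-scale LLM.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 100        # toy vocabulary size
EMB, HID = 32, 64  # toy model widths

# Stage 1: train a reward model on historical (sequence, outcome) pairs.
class RewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.head = nn.Sequential(nn.Linear(EMB, HID), nn.ReLU(), nn.Linear(HID, 1))

    def forward(self, tokens):  # tokens: (batch, seq)
        # Mean-pool token embeddings and map to a scalar reward.
        return self.head(self.emb(tokens).mean(dim=1)).squeeze(-1)

# Hypothetical stand-in for historical records: token sequences paired with
# observed outcome scores (e.g., a normalized measure of program success).
history_x = torch.randint(0, VOCAB, (256, 16))
history_y = torch.rand(256)

reward_model = RewardModel()
opt_r = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
for _ in range(200):
    loss = F.mse_loss(reward_model(history_x), history_y)
    opt_r.zero_grad()
    loss.backward()
    opt_r.step()

# Stage 2: fine-tune a toy policy "LM" against the frozen reward model.
class PolicyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)  # next-token logits

policy = PolicyLM()
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-4)
reward_model.requires_grad_(False)

for _ in range(100):
    # Sample continuations token by token, keeping their log-probabilities.
    tokens = torch.randint(0, VOCAB, (32, 1))  # hypothetical prompts
    log_probs = []
    for _ in range(15):
        logits = policy(tokens)[:, -1, :]
        dist = torch.distributions.Categorical(logits=logits)
        nxt = dist.sample()
        log_probs.append(dist.log_prob(nxt))
        tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)
    # REINFORCE update: raise the likelihood of sequences that the
    # historically trained reward model scores above the batch average.
    reward = reward_model(tokens)
    advantage = reward - reward.mean()
    loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * advantage.detach()).mean()
    opt_p.zero_grad()
    loss.backward()
    opt_p.step()

A production variant would presumably replace the toy GRU with a pretrained LLM and the plain REINFORCE update with a KL-regularized method such as PPO, as is standard in RL-based LLM fine-tuning; the sketch only captures the two-stage structure the summary describes.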

Series:

Working Paper No. 2024/114

Subject:

Frequency:

regular

Language:

English

ISBN/ISSN:

9798400277320/1018-5941

Stock No:

WPIEA2024114

Format:

Paper

Pages:

23

Please address any questions about this title to publications@imf.org.