Market Making with Learned Beta Policies

Yongzhao Wang, Rahul Savani, Anri Gu, Chris Mascioli, Theodore Turocy, Michael Wellman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In market making, a market maker (MM) can concurrently place many buy and sell limit orders at various prices and volumes, resulting in a vast action space. To handle this large action space, beta policies were introduced, utilizing a scaled beta distribution to concisely represent the volume distribution of an MM's orders across different price levels. However, in these policies, the parameters of the scaled beta distributions are either fixed or adjusted only according to predefined rules based on the MM's inventory. As we show, this approach potentially limits the effectiveness of market-making policies and overlooks the significance of other market characteristics in a dynamic market. To address this limitation, we introduce a general adaptive MM based on beta policies by employing deep reinforcement learning (RL) to dynamically control the scaled beta distribution parameters and generate orders based on current market conditions. A sophisticated market simulator is employed to evaluate a wide range of existing market-making policies and to train the RL policy in markets with varying levels of inventory risk, ensuring a comprehensive assessment of their performance and effectiveness. By carefully designing the reward function and observation features, we demonstrate that our RL beta policy outperforms baseline policies across multiple metrics in different market settings. We emphasize the strong adaptability of the learned RL beta policy, underscoring its pivotal role in achieving superior performance compared to other market-making policies.
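
As a rough illustration of the beta-policy idea described in the abstract, the sketch below discretizes a scaled Beta(α, β) distribution over a grid of price levels to allocate a fixed total order volume. The function name, the equal-width binning, and the fixed number of levels are illustrative assumptions, not the paper's exact construction; the paper's contribution is to have an RL agent output the (α, β) parameters each step instead of fixing them or tying them to inventory rules.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_policy_volumes(alpha, b, total_volume, n_levels):
    """Allocate total_volume across n_levels price levels using a
    scaled Beta(alpha, b) distribution.

    Illustrative sketch only: level 0 is taken to be closest to the
    best quote, level n_levels-1 the deepest, and levels are mapped
    to equal-width bins on [0, 1].
    """
    # Evaluate the Beta CDF at the bin edges so the per-level masses
    # sum to exactly 1, then scale by the total volume.
    edges = np.linspace(0.0, 1.0, n_levels + 1)
    mass = np.diff(beta_dist.cdf(edges, alpha, b))
    return total_volume * mass

# A fixed-parameter beta policy concentrates volume near the touch:
print(beta_policy_volumes(alpha=1.0, b=3.0, total_volume=100, n_levels=5))
# An RL beta policy would instead emit (alpha, b) at each decision
# step, conditioned on observations such as inventory and book state,
# reshaping the allocation as market conditions change:
print(beta_policy_volumes(alpha=2.5, b=1.2, total_volume=100, n_levels=5))
```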
Original language: English
Title of host publication: ICAIF'24: The 5th ACM International Conference on AI in Finance
Publication status: Published - 2024
