Enlightenments, like accidents, happen only to prepared minds.
--Herbert Simon

Reinforcement Learning-Based Financial Applications

RL-powered, interpretable, and constraint-aware frameworks for robust financial decision-making, e.g., quantitative trading


Our research investigates reinforcement learning-based approaches for the discovery, combination, and optimization of formulaic alpha factors in quantitative investment strategies. We emphasize key aspects such as interpretability, adaptability, and constraint-aware performance to ensure the developed models align with practical investment constraints and offer transparent insights into decision-making processes. 

AlphaForge: A Framework to Mine and Dynamically Combine Formulaic Alpha Factors (AAAI 2025)


We propose AlphaForge, a two-stage framework for formulaic alpha factor mining and combination. AlphaForge uses a generative-predictive neural network to generate diverse factors and a dynamic weighting model for factor combination based on temporal performance. Experiments on real-world data show that AlphaForge outperforms existing benchmarks and significantly improves portfolio returns. [paper]
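As an illustration of the dynamic-weighting idea, the sketch below (a simplified assumption, not the actual AlphaForge weighting model) scores each mined factor by its recent rank-IC with forward returns and weights the combination accordingly, so factors that have performed well lately receive more weight.

```python
# Minimal illustrative sketch of performance-based dynamic factor weighting
# (an assumption for exposition, not the AlphaForge model): each factor's
# weight at time T is set from its recent rank-IC with forward returns.
import numpy as np

def rank_ic(factor: np.ndarray, fwd_ret: np.ndarray) -> float:
    """Spearman rank correlation between factor values and forward returns."""
    fr = factor.argsort().argsort().astype(float)
    rr = fwd_ret.argsort().argsort().astype(float)
    fr = (fr - fr.mean()) / (fr.std() + 1e-12)
    rr = (rr - rr.mean()) / (rr.std() + 1e-12)
    return float((fr * rr).mean())

def dynamic_weights(factor_panel: np.ndarray, fwd_returns: np.ndarray,
                    lookback: int = 20) -> np.ndarray:
    """factor_panel: (T, K, N) values of K factors over N stocks.
    fwd_returns: (T, N) next-period returns. Returns weights (K,) for time T."""
    T, K, _ = factor_panel.shape
    ics = np.zeros(K)
    for k in range(K):
        window = range(max(0, T - lookback), T)
        ics[k] = np.mean([rank_ic(factor_panel[t, k], fwd_returns[t]) for t in window])
    ics = np.clip(ics, 0.0, None)          # zero out factors with non-positive recent IC
    total = ics.sum()
    return ics / total if total > 0 else np.full(K, 1.0 / K)
```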

Generating Synergistic Formulaic Alpha Collections via Reinforcement Learning (KDD 2023)


We propose a new alpha-mining framework that optimizes for synergistic formulaic alpha sets by directly using the performance of the downstream combination model to guide alpha generation. Leveraging reinforcement learning for efficient exploration, our method assigns the combination model’s performance as the RL reward, enabling the discovery of alpha factors that work well together. Experiments on real-world stock data show that our framework outperforms previous methods in stock trend forecasting and achieves higher investment returns. [paper]
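The sketch below illustrates, under simplifying assumptions, how a downstream combination model can supply the RL reward: the reward for a newly generated factor is the marginal gain in the combined signal's IC once the factor is added to the current pool. The least-squares combination model here is a hypothetical stand-in, not the paper's exact architecture.

```python
# Illustrative sketch (not the paper's code) of using a downstream
# combination model's performance as the RL reward for alpha generation.
import numpy as np

def fit_linear_combo(factors: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Least-squares weights combining factors (M, K) to predict target (M,)."""
    w, *_ = np.linalg.lstsq(factors, target, rcond=None)
    return w

def combo_ic(factors: np.ndarray, target: np.ndarray) -> float:
    """Pearson correlation (IC) of the fitted combined signal with the target."""
    w = fit_linear_combo(factors, target)
    signal = factors @ w
    return float(np.corrcoef(signal, target)[0, 1])

def rl_reward(pool: np.ndarray, new_factor: np.ndarray, target: np.ndarray) -> float:
    """Reward for the generator: marginal IC gain from adding the new factor."""
    base = combo_ic(pool, target) if pool.shape[1] > 0 else 0.0
    extended = np.column_stack([pool, new_factor])
    return combo_ic(extended, target) - base
```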

Gradient-Adaptive Pareto Optimization for Constrained Reinforcement Learning (AAAI 2023)


Constrained Reinforcement Learning (CRL), which pursues maximizing long-term returns while keeping costs within prescribed limits, has attracted broad interest in recent years. Although CRL can be cast as a multi-objective optimization problem, it still faces a key challenge: gradient-based Pareto optimization methods tend to stick to known Pareto-optimal solutions even when they yield poor returns (e.g., the safest self-driving car that never moves) or violate the constraints (e.g., the record-breaking racer that crashes the car). In this paper, we propose Gradient-adaptive Constrained Policy Optimization (GCPO), a novel Pareto optimization method for CRL with two adaptive gradient recalibration techniques. First, to find Pareto-optimal solutions with balanced performance across all targets, we propose gradient rebalancing, which forces the agent to improve more on under-optimized objectives at every policy iteration. Second, to guarantee that the cost constraints are satisfied, we propose gradient perturbation, which can temporarily sacrifice returns for costs. Experiments on the SafetyGym benchmarks show that our method consistently outperforms previous CRL methods in reward while satisfying the constraints. [paper]
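The sketch below is a simplified illustration of the two recalibration ideas, not the exact GCPO update: the reward and cost gradients are reweighted toward the under-optimized objective, and an extra push toward cost reduction is added when the constraint is violated. The specific weighting scheme and function signature are assumptions for exposition.

```python
# Simplified illustration of gradient rebalancing and gradient perturbation
# (an assumed formulation, not the paper's exact GCPO update). g_reward and
# g_cost are the policy gradients of the return and cost objectives.
import numpy as np

def recalibrated_update(g_reward: np.ndarray, g_cost: np.ndarray,
                        reward_progress: float, cost_progress: float,
                        cost: float, cost_limit: float) -> np.ndarray:
    # Gradient rebalancing: up-weight whichever objective has improved less
    # so far, so no target is left under-optimized.
    total = reward_progress + cost_progress + 1e-8
    w_r = 1.0 - reward_progress / total
    w_c = 1.0 - cost_progress / total
    g = w_r * g_reward - w_c * g_cost   # cost is minimized, hence the minus sign

    # Gradient perturbation: if the constraint is violated, push the update
    # further toward reducing cost, temporarily sacrificing returns.
    if cost > cost_limit:
        g = g - (cost - cost_limit) * g_cost
    return g            # ascent direction for the policy parameters
```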

© Xiang Ao. Last modified in July 2025.
