FinSearch Research Competition
This repository contains the code and resources for a deep reinforcement learning (RL) based stock trading strategy built on ensemble methods. Three RL agents (DQN, DDQN, and DDPG) are trained on Nifty50 stock data from 2010 to 2019, and their predictions are ensembled to generate robust trading signals.
Repository Structure:
- Nifty50_data/: Contains the stock data for Nifty50 from 2010 to 2019.
- RL_Models_Weights/: Stores the trained weights for the DQN, DDQN, and DDPG models.
- Results & Plots/: Includes the results and plots generated during testing and evaluation.
- DDPG_Agent.py: Implementation of the Deep Deterministic Policy Gradient (DDPG) agent.
- DDQN_Agent.py: Implementation of the Double Deep Q-Network (DDQN) agent.
- DQN_Agent.py: Implementation of the Deep Q-Network (DQN) agent.
- Ensemble_Pipeline.py: Script to ensemble the outputs of the DQN, DDQN, and DDPG agents.
- Final_Script.ipynb: A Jupyter Notebook that brings everything together, from data preprocessing to final results.
- requirements.txt: List of Python dependencies required to run the project.
- train.py: Script to train the RL models.
- test.py: Script to test the trained RL models.
- utils.py: Utility functions used throughout the project.
The primary objective of this project is to develop an ensemble-based stock trading strategy using deep reinforcement learning. By combining the strengths of multiple RL agents, the ensemble model aims to improve the robustness and accuracy of trading signals.
Data Preprocessing:
- Historical stock prices for Nifty50 from 2010 to 2019 are preprocessed and used as input for the RL models (see the sketch below).
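As a rough illustration of this step, here is a minimal sketch that loads one constituent's history and builds fixed-length windows of daily returns as agent states. The CSV layout (Date and Close columns), the window length, and the use of returns are assumptions for illustration; the actual preprocessing lives in utils.py and Final_Script.ipynb.

```python
import pandas as pd

def load_prices(csv_path):
    """Load one Nifty50 constituent's history, restricted to 2010-2019."""
    df = pd.read_csv(csv_path, parse_dates=["Date"]).sort_values("Date")
    mask = (df["Date"] >= "2010-01-01") & (df["Date"] <= "2019-12-31")
    return df.loc[mask].reset_index(drop=True)

def make_states(close, window=10):
    """Convert a close-price series into overlapping windows of daily
    returns, a common state representation for trading agents."""
    returns = close.pct_change().dropna().to_numpy()
    return [returns[i - window:i] for i in range(window, len(returns) + 1)]
```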
Model Training:
- Three RL agents (DQN, DDQN, and DDPG) are trained separately on the processed data.
- Each agent learns to select an action (buy, sell, or hold) from the observed stock prices, as sketched below.
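For intuition, the sketch below shows how a value-based agent such as DQN or DDQN chooses among the three discrete actions with an epsilon-greedy policy; DDPG, being an actor-critic method, instead queries its actor network directly. The Keras-style q_network.predict call and the hyperparameters are assumptions, not the exact code in DQN_Agent.py.

```python
import random
import numpy as np

ACTIONS = ["buy", "sell", "hold"]  # discrete trading actions

def act(q_network, state, epsilon=0.1):
    """Epsilon-greedy selection: explore with probability epsilon,
    otherwise take the action with the highest predicted Q-value."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)  # explore
    q_values = q_network.predict(np.asarray(state)[None, :], verbose=0)[0]
    return ACTIONS[int(np.argmax(q_values))]  # exploit
```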
Ensembling:
- The predictions from the three models are combined using a majority-voting mechanism (sketched below).
- The ensemble determines the final trading action, which reduces the risk of a wrong prediction by any individual model.
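A minimal sketch of the voting step is shown below. The conservative fallback to hold on ties is an assumption for illustration; the actual rule is implemented in Ensemble_Pipeline.py.

```python
from collections import Counter

def majority_vote(actions):
    """Return the action chosen by most agents, e.g.
    majority_vote(["buy", "buy", "sell"]) -> "buy"."""
    counts = Counter(actions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "hold"  # no clear majority: stay flat
    return counts[0][0]
```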
Evaluation:
- The performance of the ensemble model is evaluated using key metrics such as cumulative return, Sharpe ratio, and maximum drawdown (defined in the sketch below).
- Plots and results are generated to visualize the model's performance.
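For reference, the three metrics can be computed from a strategy's daily returns and equity curve as sketched below, using their standard definitions and assuming 252 trading days per year for annualization; this is not necessarily the exact code in test.py.

```python
import numpy as np

def cumulative_return(daily_returns):
    """Total compounded return over the test period."""
    return float(np.prod(1.0 + np.asarray(daily_returns)) - 1.0)

def sharpe_ratio(daily_returns, risk_free=0.0, periods=252):
    """Annualized Sharpe ratio: mean excess return over its volatility."""
    excess = np.asarray(daily_returns) - risk_free / periods
    return float(np.sqrt(periods) * excess.mean() / excess.std())

def max_drawdown(equity_curve):
    """Worst peak-to-trough decline of portfolio value (a negative number)."""
    equity = np.asarray(equity_curve, dtype=float)
    peaks = np.maximum.accumulate(equity)
    return float(((equity - peaks) / peaks).min())
```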
Key Features:
- Multi-Agent RL: Combines the strengths of DQN, DDQN, and DDPG for better decision-making.
- Ensemble Learning: Uses majority voting to enhance the robustness of trading signals.
- Comprehensive Evaluation: Provides detailed analysis through results and plots.
To install the required dependencies, run:
pip install -r requirements.txt
Usage:
- Run train.py to train the DQN, DDQN, and DDPG models.
- The trained weights will be saved in the RL_Models_Weights/ directory.
- Run test.py to evaluate the performance of the trained models.
- Use Ensemble_Pipeline.py to combine the predictions from the three models and generate the final trading signals.
- Visualize the results using Final_Script.ipynb.
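Assuming the scripts run with their default settings (check each file for command-line arguments), an end-to-end run looks like:

```
python train.py
python test.py
python Ensemble_Pipeline.py
```

Then open Final_Script.ipynb in Jupyter to reproduce the plots.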
The ensemble model's performance is evaluated on the Nifty50 stock data and shows promising results in terms of cumulative return, risk-adjusted return, and overall robustness of the trading strategy; the generated figures are available in the Results & Plots/ directory.
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.
This project is licensed under the MIT License. See the LICENSE file for more details.
For any questions or inquiries, please contact Sahil Dhanraj Barbade.