🤖 Multi-Agent Reinforcement Learning Trading System

This repository contains trained Deep Reinforcement Learning agents for automated stock trading. The agents were trained with stable-baselines3 on a custom Gymnasium (formerly OpenAI Gym) environment simulating the US stock market (AAPL, MSFT, GOOGL).

🧠 Models

The following algorithms were used:

  1. DQN (Deep Q-Network): Off-policy RL algorithm suitable for discrete action spaces.
  2. PPO (Proximal Policy Optimization): On-policy gradient method known for stability.
  3. A2C (Advantage Actor-Critic): Synchronous actor-critic policy gradient method (the synchronous variant of A3C).
  4. Ensemble: A meta-voter that takes the majority decision of the three agents above (see the sketch below).
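
A minimal sketch of that majority vote. ensemble_action is a hypothetical helper, not the repository's actual ensemble code, which may break ties differently:

from collections import Counter

def ensemble_action(models, obs):
    # Collect each agent's deterministic action for the current observation
    actions = [int(m.predict(obs, deterministic=True)[0]) for m in models]
    # Majority vote; if all three agents disagree, fall back to HOLD (action 0)
    action, votes = Counter(actions).most_common(1)[0]
    return action if votes >= 2 else 0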

๐Ÿ‹๏ธ Training Data

The models were trained on technical indicators derived from historical daily price data (2018-2024); a few of these features are sketched in code after the list:

  • Returns: Daily percentage change.
  • RSI (14): Relative Strength Index.
  • MACD: Moving Average Convergence Divergence.
  • Bollinger Bands: Volatility bands around a moving average.
  • Volume Ratio: Relative volume intensity.
  • Market Regime: Bull/Bear trend classification.
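
A minimal pandas sketch of a few of these features. The column names (Close, Volume, rsi_14, etc.) and window choices are assumptions; the actual training pipeline may compute them differently:

import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Returns: daily percentage change of the closing price
    out["returns"] = out["Close"].pct_change()
    # RSI(14): average gains vs. average losses over a 14-day window
    delta = out["Close"].diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    out["rsi_14"] = 100 - 100 / (1 + gain / loss)
    # MACD: 12-day EMA minus 26-day EMA of the close
    out["macd"] = (out["Close"].ewm(span=12).mean()
                   - out["Close"].ewm(span=26).mean())
    # Volume ratio: today's volume relative to its 20-day average
    out["volume_ratio"] = out["Volume"] / out["Volume"].rolling(20).mean()
    return out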

🎮 Environment (TradingEnv)

  • Action Space: Discrete(3) - 0: HOLD, 1: BUY, 2: SELL.
  • Observation Space: Box(10,) - Normalized technical features + portfolio state.
  • Reward: Profit & Loss (PnL) minus transaction costs and drawdown penalties (a minimal skeleton follows this list).
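
The environment class itself is not bundled with the weights. Below is a minimal gymnasium skeleton matching the published spaces; the real TradingEnv's observation layout, bookkeeping, and reward weights are assumptions here:

import gymnasium as gym
import numpy as np

class TradingEnv(gym.Env):
    """Skeleton matching the published spaces: Discrete(3) actions, Box(10,) observations."""

    def __init__(self, df):
        super().__init__()
        self.df = df
        self.action_space = gym.spaces.Discrete(3)  # 0: HOLD, 1: BUY, 2: SELL
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(10,), dtype=np.float32)
        self.t = 0

    def _obs(self):
        # Normalized technical features + portfolio state for step t (placeholder)
        return np.zeros(10, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        self.t += 1
        pnl, costs, drawdown_penalty = 0.0, 0.0, 0.0  # placeholder bookkeeping
        reward = pnl - costs - drawdown_penalty
        terminated = self.t >= len(self.df) - 1
        return self._obs(), reward, terminated, False, {}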

🚀 Usage

import gymnasium as gym
from stable_baselines3 import PPO

# Build the environment (df is a price DataFrame with the indicator columns
# above; use your own TradingEnv or the skeleton sketched earlier)
env = TradingEnv(df)

# Load a trained model checkpoint
model = PPO.load("ppo_AAPL.zip")

# Query the policy for the next action (0: HOLD, 1: BUY, 2: SELL)
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
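
To backtest a full episode, step the environment with the policy's actions in a loop, continuing from the snippet above:

obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated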

📈 Performance

Performance varies by ticker and market condition. See the generated results/ CSVs for per-agent Sharpe ratio and maximum drawdown statistics; a sketch of how these metrics are typically computed follows.
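
A minimal sketch of the standard definitions of these two statistics from a daily returns series. The exact conventions in results/ may differ, e.g. the annualization factor or the risk-free rate:

import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    # Annualized mean return over annualized volatility (risk-free rate taken as 0)
    r = np.asarray(returns)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

def max_drawdown(returns):
    # Largest peak-to-trough decline of the cumulative equity curve (a negative number)
    equity = np.cumprod(1.0 + np.asarray(returns))
    peaks = np.maximum.accumulate(equity)
    return ((equity - peaks) / peaks).min()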

🛠️ Credits

Developed by Adityaraj Suman as part of the Multi-Agent RL Trading System project.
