A portfolio optimization framework leveraging Deep Reinforcement Learning (DRL) and a custom trading environment developed on AWS SageMaker. Supported by a YouTube playlist.
This repo is an extension of a three-term Independent Study by Daniel Fudge with Professor Yelena Larkin as part of a concurrent Master of Business Administration (MBA) and a Diploma in Financial Engineering from the Schulich School of Business.
Please see this repo for a detailed description of the previous three terms. A YouTube playlist was also created to document the implementation and results.
This term was spun off into a new repo for clarity, with less focus on background and theory and more focus on implementation and results.
As discussed in report 1, cloud computing is one of the key enablers that takes machine learning from theory and toy problems to real production applications. Amazon Web Services (AWS) is the current industry leader in cloud computing, and its SageMaker platform gives users the ability to rapidly scale applications from small experiments to massively parallel production solutions.
This project leverages the power of SageMaker. If you wish to run these models, it is recommended that you review this playlist from AWS. A Cloud Guru also has excellent training on AWS and other cloud providers.
In the previous term, a DRL framework was developed for portfolio optimization, but as discussed in the future work section, the standard DRL training and test process is not well suited to portfolio optimization because the rules of the game keep changing. To build a realistic process, we need a custom DRL algorithm and a custom train/test environment. Previously we relied on proven frameworks, but since we will be building everything from scratch, we will follow an incremental approach that proves the effectiveness of each component before increasing the complexity and difficulty. The first increment will be developed in a separate repo to isolate the development.
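To make the "changing rules" problem concrete, one common response is a walk-forward scheme that retrains on a rolling window of history and tests on the period immediately following it. The sketch below is purely illustrative; the window lengths are assumptions, not the values used in this project.

```python
# Illustrative walk-forward split (hypothetical window sizes, not the
# project's actual configuration): retrain on a rolling window of history,
# then evaluate on the period immediately following it.
import numpy as np

def walk_forward_splits(n_days, train_days=504, test_days=63):
    """Yield (train_idx, test_idx) pairs that roll forward through time."""
    start = 0
    while start + train_days + test_days <= n_days:
        train_idx = np.arange(start, start + train_days)
        test_idx = np.arange(start + train_days, start + train_days + test_days)
        yield train_idx, test_idx
        start += test_days  # advance by one test window

for train_idx, test_idx in walk_forward_splits(n_days=2520):
    print(f"train {train_idx[0]}-{train_idx[-1]}, test {test_idx[0]}-{test_idx[-1]}")
```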
Before developing the custom environment, the A Cloud Guru Docker Fundamentals and AWS ECS - Scaling Docker courses were completed. This repo then developed a custom DRL algorithm to solve the Tennis environment provided by Unity. The SageMaker Python SDK is normally used when training DRL models on AWS; unfortunately, it is limited to the Ray RLlib and Coach toolkits, which can't be used for the custom training and testing environment we wish to develop. Therefore we built a custom Docker container, registered it in the AWS ECR, and used the AWS BYOD functionality to demonstrate the capability required for the next increment.
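For orientation, launching a SageMaker training job from a custom image in ECR generally follows the pattern below. This is only a sketch: the image URI, IAM role, instance type, hyperparameters, and S3 path are placeholders, not the values used in this project.

```python
# Sketch of launching a SageMaker training job from a custom Docker image
# registered in ECR. The image URI, IAM role, instance type, hyperparameters,
# and S3 path below are placeholders, not this project's actual values.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/drl-portfolio:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    sagemaker_session=session,
    hyperparameters={"episodes": 1000, "gamma": 0.99},
)

# SageMaker copies the S3 channel into the container under
# /opt/ml/input/data/training before the training script starts.
estimator.fit({"training": "s3://my-bucket/price-data/"})
```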
This increment builds a custom training and test environment that replicates how the custom DRL algorithm would be deployed in production. To separate the environment development from the complexity of real trading data, simulated price data will be generated in a deterministic manner by functions that remain constant over time. This ensures that an effective process will be able to solve the environment.
Please see this notebook for the generation of the synthetic price data. If you wish to run on your local PC, please follow these instructions.
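As a rough illustration of what "deterministic functions that remain constant over time" means, the sketch below builds each asset's price from a fixed sinusoid around a drifting base. The functions and parameters are assumptions for illustration only; see the notebook for the actual generation code.

```python
# Minimal sketch of deterministic synthetic price generation (the functions
# and parameters here are illustrative; see the notebook for the actual ones).
# Each asset's price is a fixed function of time, so the "rules" never change.
import numpy as np
import pandas as pd

def synthetic_prices(n_days=1000, n_assets=4, seed=42):
    rng = np.random.default_rng(seed)      # fixed seed -> fully reproducible
    t = np.arange(n_days)
    prices = {}
    for i in range(n_assets):
        period = rng.integers(20, 120)      # days per cycle
        amplitude = rng.uniform(2.0, 10.0)
        drift = rng.uniform(0.0, 0.05)
        base = 100.0 + drift * t
        prices[f"asset_{i}"] = base + amplitude * np.sin(2 * np.pi * t / period)
    return pd.DataFrame(prices)

df = synthetic_prices()
print(df.head())
```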
In this final increment the process is applied to real signals as a final test.
As expected, the algorithm was able to achieve very high gains when trading the synthetic data as shown below. However, the true test is with the real pricing history.
As shown below, the gains from the real data were more modest and erratic.
We then had the system trade over two years and achieved the performance shown below.
Here we let the algorithm trade through the March 2020 market meltdown.
The following architecture drawings were made with the make-network.py custom script. Please feel free to customize it for your own application.
Below is a simplified network to illustrate the connections between each layer of the actor neural network.
Below is the actual actor network selected by the hyperparameter tuning.
Below is the actual critic network.
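For readers unfamiliar with actor-critic layouts, a generic actor/critic pair for continuous portfolio weights might look like the sketch below. The layer widths, activations, and softmax output are placeholder choices for illustration, not the hyperparameter-tuned architecture shown in the drawings above.

```python
# Generic actor/critic sketch in PyTorch for continuous actions (portfolio
# weights). Layer widths, activations, and the softmax output are placeholder
# choices, not the tuned architecture shown in the drawings above.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, n_assets, hidden=(128, 64)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], n_assets),
        )

    def forward(self, state):
        # Softmax keeps the portfolio weights positive and summing to one.
        return torch.softmax(self.net(state), dim=-1)

class Critic(nn.Module):
    def __init__(self, state_dim, n_assets, hidden=(128, 64)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_assets, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], 1),
        )

    def forward(self, state, action):
        # Q(s, a): estimated value of taking portfolio weights `action` in `state`.
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(state_dim=20, n_assets=4), Critic(state_dim=20, n_assets=4)
state = torch.randn(1, 20)
q = critic(state, actor(state))
print(q.shape)  # torch.Size([1, 1])
```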
This code is released under the MIT License.
Please feel free to raise issues against this repo if you have any questions or suggestions for improvement.