Start Adding tutorial for Reinforcement Learning #200

Open
wants to merge 12 commits into base: master
11 changes: 11 additions & 0 deletions _data/navigation.yml
@@ -181,6 +181,17 @@ wiki:
url: /wiki/machine-learning/mediapipe-live-ml-anywhere.md/
- title: NLP for robotics
url: /wiki/machine-learning/nlp_for_robotics.md/
- title: Reinforcement Learning
url: /wiki/reinforcement-learning/
children:
- title: Key Concepts in Reinforcement Learning (RL)
url: /wiki/reinforcement-learning/key-concepts-in-rl/
- title: Reinforcement Learning Algorithms
url: /wiki/reinforcement-learning/reinforcement-learning-algorithms/
- title: Policy Gradient Methods
url: /wiki/reinforcement-learning/intro-to-policy-gradient-methods/
- title: Foundation of Value-Based Reinforcement Learning
url: /wiki/reinforcement-learning/value-based-reinforcement-learning/
- title: State Estimation
url: /wiki/state-estimation/
children:
Binary file added assets/images/Humanoid robot.drawio.png
Binary file added assets/images/multi_contact_planning.png
152 changes: 152 additions & 0 deletions wiki/reinforcement-learning/intro-to-policy-gradient-methods.md
@@ -0,0 +1,152 @@
---
date: 2025-05-04
title: "Proximal Policy Optimization (PPO): Concepts, Theory, and Insights"
---

Proximal Policy Optimization (PPO) is one of the most widely used algorithms in modern reinforcement learning. It combines the benefits of policy gradient methods with a set of improvements that make training more stable, sample-efficient, and easy to implement. PPO is used extensively in robotics, gaming, and simulated environments like MuJoCo and OpenAI Gym. This article explains PPO from the ground up: its motivation, theory, algorithmic structure, and practical considerations.

## Motivation

Traditional policy gradient methods suffer from instability due to large, unconstrained policy updates. While they optimize the expected return directly, updates can be so large that they lead to catastrophic performance collapse.

Trust Region Policy Optimization (TRPO) proposed a solution by introducing a constraint on the size of the policy update using a KL-divergence penalty. However, TRPO is relatively complex to implement because it requires solving a constrained optimization problem using second-order methods.

PPO was designed to simplify this by introducing a clipped surrogate objective that effectively limits how much the policy can change during each update—while retaining the benefits of trust-region-like behavior.

## PPO Objective

Let the old policy be $\pi_{\theta_{\text{old}}}$ and the new policy be $\pi_\theta$. PPO maximizes the following clipped surrogate objective:

$$
L^{\text{CLIP}}(\theta) = \mathbb{E}_t \left[
\min\left(
r_t(\theta) \hat{A}_t,
\text{clip}(r_t(\theta), 1 - \epsilon, 1 + \epsilon) \hat{A}_t
\right)
\right]
$$

where:

- $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)}$ is the probability ratio,
- $\hat{A}_t$ is the advantage estimate at time step $t$,
- $\epsilon$ is a small hyperparameter (e.g., 0.1 or 0.2).

### Why Clipping?

Without clipping, large changes in the policy could lead to very large or small values of $r_t(\theta)$, resulting in destructive updates. The **clip** operation ensures that updates do not push the new policy too far from the old one.

This introduces a **soft trust region**: when $r_t(\theta)$ is within $[1 - \epsilon, 1 + \epsilon]$, the update proceeds normally. If $r_t(\theta)$ exceeds this range, the objective is "flattened", preventing further change.
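
As a concrete illustration, here is a minimal PyTorch sketch of the clipped surrogate objective, assuming the new and old log-probabilities and the advantage estimates are already available as tensors (the function and argument names are illustrative, not from a specific library):

```python
import torch

def clipped_surrogate_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """L^CLIP, negated so it can be minimized with a standard optimizer."""
    # Probability ratio r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Elementwise minimum of the two terms, averaged over the batch
    return -torch.min(unclipped, clipped).mean()
```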

## Full PPO Objective

In practice, PPO uses a combination of multiple objectives:

- **Clipped policy loss** (as above)
- **Value function loss**: typically a mean squared error between predicted value and empirical return
- **Entropy bonus**: to encourage exploration

The full loss function is:

$$
L^{\text{PPO}}(\theta) =
\mathbb{E}_t \left[
L^{\text{CLIP}}(\theta)
- c_1 \cdot (V_\theta(s_t) - \hat{V}_t)^2
+ c_2 \cdot \mathcal{H}[\pi_\theta](s_t)
\right]
$$

where:

- $c_1$ and $c_2$ are weighting coefficients,
- $\hat{V}_t$ is an empirical return (or bootstrapped target),
- $\mathcal{H}[\pi_\theta]$ is the entropy of the policy at state $s_t$.
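
Continuing the sketch above, the combined loss might be assembled as follows (again with illustrative names; `clipped_surrogate_loss` is the helper sketched earlier, and the default coefficients mirror $c_1$ and $c_2$):

```python
import torch.nn.functional as F

def ppo_total_loss(logp_new, logp_old, advantages,
                   value_pred, value_target, entropy,
                   clip_eps=0.2, c1=0.5, c2=0.01):
    """Clipped policy loss + value loss - entropy bonus (all minimized together)."""
    policy_loss = clipped_surrogate_loss(logp_new, logp_old, advantages, clip_eps)
    # Value loss: mean squared error against the empirical / bootstrapped return
    value_loss = F.mse_loss(value_pred, value_target)
    # Subtracting the entropy term rewards more stochastic (exploratory) policies
    return policy_loss + c1 * value_loss - c2 * entropy.mean()
```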

## Advantage Estimation

PPO relies on high-quality advantage estimates $\hat{A}_t$ to guide policy updates. The most popular technique is **Generalized Advantage Estimation (GAE)**:

$$
\hat{A}_t = \sum_{l=0}^{T - t - 1} (\gamma \lambda)^l \delta_{t+l}
$$

with:

$$
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)
$$

GAE balances the bias-variance trade-off via the $\lambda$ parameter (typically 0.95).
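
A simple backward-recursion implementation of truncated GAE over one rollout could look like this (a sketch; `values` is assumed to include a bootstrap value for the final state, and `dones` marks episode boundaries):

```python
import numpy as np

def compute_gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """rewards, dones: arrays of length T; values: array of length T + 1."""
    T = len(rewards)
    advantages = np.zeros(T, dtype=np.float32)
    gae = 0.0
    # Iterate backwards so each advantage accumulates discounted future TD errors
    for t in reversed(range(T)):
        nonterminal = 1.0 - dones[t]
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        gae = delta + gamma * lam * nonterminal * gae
        advantages[t] = gae
    returns = advantages + values[:T]  # value-function targets
    return advantages, returns
```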

## PPO Training Loop Overview

At a high level, PPO training proceeds in the following way:

1. **Collect rollouts** using the current policy for a fixed number of steps.
2. **Compute advantages** using GAE.
3. **Compute returns** for value function targets.
4. **Optimize the PPO objective** with multiple minibatch updates (typically using Adam).
5. **Update the old policy** to match the new one.

Unlike TRPO, PPO allows **multiple passes through the same data**, improving sample efficiency.
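
In practice this loop is usually taken from an existing implementation rather than written from scratch; for example, the Stable Baselines3 PPO referenced in Further Reading exposes the pieces above as constructor arguments (shown here with commonly used values, assuming `stable-baselines3` and a Gym/Gymnasium environment are installed):

```python
from stable_baselines3 import PPO

model = PPO(
    "MlpPolicy", "CartPole-v1",
    n_steps=2048,       # rollout length per update (step 1)
    gae_lambda=0.95,    # GAE lambda (step 2)
    gamma=0.99,         # discount factor (steps 2-3)
    n_epochs=10,        # passes over each batch (step 4)
    batch_size=64,      # minibatch size for Adam updates
    clip_range=0.2,     # epsilon in the clipped objective
    vf_coef=0.5,        # c1, value-loss weight
    ent_coef=0.01,      # c2, entropy-bonus weight
)
model.learn(total_timesteps=100_000)
```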

## Practical Tips

- **Clip epsilon**: Usually 0.1 or 0.2. Too large allows harmful updates; too small restricts learning.
- **Number of epochs**: PPO uses multiple SGD epochs (3–10) per batch.
- **Batch size**: Typical values range from 2048 to 8192.
- **Value/policy loss scales**: The constants $c_1$ and $c_2$ are often 0.5 and 0.01 respectively.
- **Normalize advantages**: Empirically improves stability.

> **Entropy Bonus**: Without sufficient entropy, the policy may prematurely converge to a suboptimal deterministic strategy.

## Why PPO Works Well

- **Stable updates**: Clipping constrains updates to a trust region without expensive computations.
- **On-policy training**: Leads to high-quality updates at the cost of sample reuse.
- **Good performance across domains**: PPO performs well in continuous control, discrete games, and real-world robotics.
- **Simplicity**: Easy to implement and debug compared to TRPO, ACER, or DDPG.

## PPO vs TRPO

| Feature | PPO | TRPO |
|---------------------------|--------------------------------------|--------------------------------------|
| Optimizer | First-order (SGD/Adam) | Second-order (constrained step) |
| Trust region enforcement | Clipping | Explicit KL constraint |
| Sample efficiency | Moderate | Low |
| Stability | High | Very high |
| Implementation | Simple | Complex |

## Limitations

- **On-policy nature** means PPO discards data after each update.
- **Entropy decay** can hurt long-term exploration unless tuned carefully.
- **Not optimal for sparse-reward environments** without additional exploration strategies (e.g., curiosity, count-based bonuses).

## PPO in Robotics

PPO has become a standard in sim-to-real training workflows:

- Robust to partial observability
- Easy to stabilize on real robots
- Compatible with parallel simulation (e.g., Isaac Gym, MuJoCo)

## Summary

PPO offers a clean and reliable solution for training RL agents using policy gradient methods. Its clipping objective balances the need for learning speed with policy stability. PPO is widely regarded as a default choice for continuous control tasks, and has been proven to work well across a broad range of applications.


## Further Reading
- [Proximal Policy Optimization Algorithms – Schulman et al. (2017)](https://arxiv.org/abs/1707.06347)
- [Spinning Up PPO Overview – OpenAI](https://spinningup.openai.com/en/latest/algorithms/ppo.html)
- [CleanRL PPO Implementation](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_continuous_action.py)
- [RL Course Lecture on PPO – UC Berkeley CS285](https://rail.eecs.berkeley.edu/deeprlcourse/)
- [OpenAI Gym PPO Examples](https://github.com/openai/baselines/tree/master/baselines/ppo2)
- [Generalized Advantage Estimation (GAE) – Schulman et al.](https://arxiv.org/abs/1506.02438)
- [PPO Implementation from Scratch – Andriy Mulyar](https://github.com/awjuliani/DeepRL-Agents)
- [Deep Reinforcement Learning Hands-On (PPO chapter)](https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On)
- [Stable Baselines3 PPO Documentation](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html)
- [OpenReview: PPO vs TRPO Discussion](https://openreview.net/forum?id=r1etN1rtPB)
- [Reinforcement Learning: State-of-the-Art Survey (2019)](https://arxiv.org/abs/1701.07274)
- [RL Algorithms by Difficulty – RL Book Companion](https://huggingface.co/learn/deep-rl-course/unit2/ppo)
85 changes: 85 additions & 0 deletions wiki/reinforcement-learning/key-concepts-in-rl.md
@@ -0,0 +1,85 @@
---
date: 2025-03-11 # YYYY-MM-DD
title: Key Concepts of Reinforcement Learning
---

This tutorial provides an introduction to the fundamental concepts of Reinforcement Learning (RL). RL involves an agent interacting with an environment to learn optimal behaviors through trial and feedback. The main objective of RL is to maximize cumulative rewards over time.

## Main Components of Reinforcement Learning

### Agent and Environment
The agent is the learner or decision-maker, while the environment represents everything the agent interacts with. The agent receives observations from the environment and takes actions that influence the environment's state.

### States and Observations
- A **state** (s) fully describes the world at a given moment.
- An **observation** (o) is a partial view of the state.
- Environments can be **fully observed** (complete information) or **partially observed** (limited information).

### Action Spaces
- The **action space** defines all possible actions an agent can take.
- **Discrete action spaces** (e.g., Atari, Go) have a finite number of actions.
- **Continuous action spaces** (e.g., robotics control) allow real-valued actions.
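
For example, in Gymnasium (assuming the `gymnasium` package is installed) the two kinds of action space look like this:

```python
import gymnasium as gym

print(gym.make("CartPole-v1").action_space)   # Discrete(2): push the cart left or right
print(gym.make("Pendulum-v1").action_space)   # Box(-2.0, 2.0, (1,), float32): continuous torque
```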

## Policies
A **policy** determines how an agent selects actions based on states:

- **Deterministic policy**: Always selects the same action for a given state.

$a_t = \mu(s_t)$

- **Stochastic policy**: Samples actions from a probability distribution.

$a_t \sim \pi(\cdot | s_t)$


### Example: Deterministic Policy in PyTorch
```python
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2   # example sizes; set these to match your environment

# A simple MLP policy that maps an observation directly to an action: a_t = mu(s_t)
pi_net = nn.Sequential(
    nn.Linear(obs_dim, 64),
    nn.Tanh(),
    nn.Linear(64, 64),
    nn.Tanh(),
    nn.Linear(64, act_dim),
)

# Query the policy for a single observation
obs = torch.randn(obs_dim)
action = pi_net(obs)
```
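
### Example: Stochastic Policy in PyTorch
A stochastic policy can reuse the same kind of network but interpret its outputs as the logits of a distribution and sample from it. This sketch assumes a discrete action space and reuses `pi_net`, `obs_dim`, and `obs` from the example above:

```python
from torch.distributions import Categorical

logits = pi_net(obs)                # interpret the MLP outputs as action logits
dist = Categorical(logits=logits)
action = dist.sample()              # a_t ~ pi(. | s_t)
log_prob = dist.log_prob(action)    # needed later for policy-gradient updates
```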

## Trajectories
A **trajectory** $\tau$ is a sequence of states and actions:
```math
\tau = (s_0, a_0, s_1, a_1, ...)
```
State transitions follow deterministic or stochastic rules:
```math
s_{t+1} = f(s_t, a_t)
```
or
```math
s_{t+1} \sim P(\cdot|s_t, a_t)
```
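
As a concrete example, a trajectory can be collected by repeatedly stepping a Gymnasium environment (a sketch using random actions in place of a learned policy):

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

trajectory = []                                  # will hold (s_t, a_t) pairs
for t in range(200):
    action = env.action_space.sample()           # placeholder for pi(.|s_t)
    next_obs, reward, terminated, truncated, info = env.step(action)
    trajectory.append((obs, action))
    obs = next_obs
    if terminated or truncated:
        break
```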

## Reward and Return
The **reward function (R)** determines the agent's objective:
```math
r_t = R(s_t, a_t, s_{t+1})
```
### Types of Return
1. **Finite-horizon undiscounted return**:
```math
R(\tau) = \sum_{t=0}^T r_t
```
2. **Infinite-horizon discounted return**:
```math
R(\tau) = \sum_{t=0}^{\infty} \gamma^t r_t
```
where $\gamma$ (the discount factor) balances immediate against future rewards.
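
For instance, the discounted return of a finite list of collected rewards can be computed directly from the definition (a small illustrative helper):

```python
def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum_t gamma^t * r_t over the collected rewards."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```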

## Summary
This tutorial introduced fundamental RL concepts, including agents, environments, policies, action spaces, trajectories, and rewards. These components are essential for designing RL algorithms.

## Further Reading
- Sutton, R. S., & Barto, A. G. (2018). *Reinforcement Learning: An Introduction*.

## References
- [Reinforcement Learning Wikipedia](https://en.wikipedia.org/wiki/Reinforcement_learning)