GitHub: hoverslam/rl-proximal-policy-optimization
Proximal Policy Optimization (PPO) implemented in PyTorch
Stars: 0
Forks: 0
Open issues: 0
License: MIT
Language: Jupyter Notebook
Size: 61.9 MB
Created at: 10 months ago
Updated at: 9 months ago
Pushed at: 9 months ago
Topics: ppo, procgen, proximal-policy-optimization, pytorch
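Since the repository's own code is not shown here, a minimal sketch of the core idea behind PPO may help orient readers: the clipped surrogate objective from Schulman et al. (2017), expressed in PyTorch. The function name and signature below are illustrative assumptions, not taken from this repository.

```python
import torch


def ppo_clip_loss(new_log_probs: torch.Tensor,
                  old_log_probs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate policy loss (illustrative sketch, not this repo's code)."""
    # Probability ratio r_t = pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = torch.exp(new_log_probs - old_log_probs)
    # Unclipped surrogate and its clipped counterpart.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the pessimistic (elementwise minimum) objective,
    # so the loss to minimize is its negation.
    return -torch.min(unclipped, clipped).mean()


# Example: a ratio of 1.5 with positive advantage is clipped to 1.2,
# limiting how far a single update can move the policy.
loss = ppo_clip_loss(torch.log(torch.tensor([1.5])),
                     torch.zeros(1),
                     torch.ones(1))
```

With `clip_eps=0.2`, the loss above evaluates to -1.2 rather than -1.5, illustrating how clipping bounds the incentive for large policy updates.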