GitHub topics: optimal-policy
ChaitanyaC22/Deep-RL-Project---Maximize-total-profits-earned-by-cab-driver
The goal of this project is to build an RL-based algorithm that helps cab drivers maximize their profits by improving their decision-making in the field. With long-term profit as the objective, a reinforcement-learning method is proposed to optimize taxi driving strategies. The optimization problem is formulated as a Markov Decision Process (MDP).
Language: Jupyter Notebook - Size: 1.61 MB - Last synced at: 5 months ago - Pushed at: about 4 years ago - Stars: 13 - Forks: 3
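The MDP formulation mentioned above can be illustrated with a small tabular sketch. This is not the repository's code (the project uses deep RL on a richer ride-request simulator); it assumes a hypothetical toy state space of (location, hour) pairs, hypothetical ride-acceptance actions, and a placeholder environment, and runs plain Q-learning toward a profit-maximizing policy.

```python
import numpy as np

# Hypothetical toy version of a cab-driver MDP (not the repository's actual code):
# state  = (location, hour-of-day) index, action = which ride request to accept,
# reward = fare revenue minus fuel/time cost for the chosen trip.
rng = np.random.default_rng(0)
n_locations, n_hours, n_actions = 5, 24, 6
n_states = n_locations * n_hours

gamma, alpha, epsilon = 0.95, 0.1, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Placeholder dynamics standing in for a ride-request simulator."""
    reward = rng.normal(loc=action - 2.0, scale=1.0)   # some actions pay more on average
    next_state = rng.integers(n_states)
    return next_state, reward

state = rng.integers(n_states)
for _ in range(50_000):                                # tabular Q-learning loop
    if rng.random() < epsilon:
        action = rng.integers(n_actions)               # explore
    else:
        action = int(np.argmax(Q[state]))              # exploit current estimate
    next_state, reward = step(state, action)
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    state = next_state

greedy_policy = Q.argmax(axis=1)                       # profit-maximizing action per state
```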

IsmaelMousa/mdp-value-iteration
Implementation of MDP solution algorithms for optimal decision-making, focusing on value iteration and extraction of the optimal policy.
Language: Python - Size: 114 KB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 1 - Forks: 0
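For reference, value iteration on a tabular MDP can be sketched in a few lines. This is a generic illustration, not the repository's implementation; the transition tensor P and reward matrix R below are randomly generated stand-ins.

```python
import numpy as np

# Hypothetical tabular MDP: P[s, a, s'] transition probabilities, R[s, a] expected rewards.
rng = np.random.default_rng(1)
n_states, n_actions, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # rows sum to 1
R = rng.normal(size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(1000):                        # value iteration: V <- max_a [R + gamma * P V]
    Q = R + gamma * P @ V                    # action values, shape (n_states, n_actions)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:     # stop once the Bellman backup has converged
        V = V_new
        break
    V = V_new

policy = (R + gamma * P @ V).argmax(axis=1)  # greedy policy w.r.t. the converged values
```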

etienneandre/ImpRator
ImpRator (Inverse Method for Policy with Reward AbstracT behaviOR) is a prototype implementation to compute parameter valuations in parametric Markov decision processes such that optimal policies remain optimal.
Language: OCaml - Size: 55.7 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 0 - Forks: 0

nicolaloi/Dynamic-Programming-and-Optimal-Control
Infinite horizon policy optimization for drone navigation. Graded project for the ETH course "Dynamic Programming and Optimal Control".
Language: MATLAB - Size: 758 KB - Last synced at: almost 2 years ago - Pushed at: almost 4 years ago - Stars: 3 - Forks: 2

raklokesh/ReinforcementLearning_Sutton-Barto_Solutions
Solutions and figures for problems from Reinforcement Learning: An Introduction by Sutton & Barto.
Language: Python - Size: 4.47 MB - Last synced at: about 2 years ago - Pushed at: about 6 years ago - Stars: 20 - Forks: 4

Megha-Bose/Markov-Decision-Process
Computing an optimal MDP policy using the Value Iteration algorithm and Linear Programming.
Language: Python - Size: 2.04 MB - Last synced at: over 2 years ago - Pushed at: over 4 years ago - Stars: 1 - Forks: 0
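The linear-programming route to the same optimal values can be sketched as follows. This is a generic illustration rather than this repository's code; it assumes the standard primal LP (minimize the sum of state values subject to the Bellman inequalities) and solves it with scipy.optimize.linprog. The LP form gives the same policy as value iteration; it is mainly convenient when additional linear constraints need to be imposed.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical tabular MDP (same shapes as above): P[s, a, s'], R[s, a], discount gamma.
rng = np.random.default_rng(2)
n_states, n_actions, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

# LP: minimize sum_s V(s)  subject to  V(s) >= R(s,a) + gamma * sum_s' P(s'|s,a) V(s').
# Rearranged into scipy's A_ub @ V <= b_ub form: gamma * P[s,a,:] @ V - V(s) <= -R(s,a).
A_ub = np.zeros((n_states * n_actions, n_states))
b_ub = np.zeros(n_states * n_actions)
for s in range(n_states):
    for a in range(n_actions):
        row = gamma * P[s, a]
        row[s] -= 1.0
        A_ub[s * n_actions + a] = row
        b_ub[s * n_actions + a] = -R[s, a]

res = linprog(c=np.ones(n_states), A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
V_star = res.x                                         # optimal state values
policy = (R + gamma * P @ V_star).argmax(axis=1)       # greedy policy from V*
```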
