GitHub / flowun / gardnerChessAi
Implementation of the Double Deep Q-Learning (DDQN) algorithm with a prioritized experience replay memory to train an agent to play the minichess variant Gardner Chess
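The two techniques named in the description can be sketched in a few lines. This is a minimal, generic illustration of proportional prioritized experience replay and the Double-DQN target rule (online network selects the next action, target network evaluates it), not the repository's actual TensorFlow implementation; all class and function names here are hypothetical.

```python
import random

class PrioritizedReplayBuffer:
    """Proportional prioritized replay (sketch): transitions are sampled
    with probability proportional to priority ** alpha."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.data = []        # stored transitions
        self.priorities = []  # one priority per transition
        self.pos = 0          # ring-buffer write position

    def add(self, transition, priority=1.0):
        p = priority ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:  # overwrite oldest entry once the buffer is full
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Priority-weighted sampling (with replacement).
        idxs = random.choices(range(len(self.data)),
                              weights=self.priorities, k=batch_size)
        return idxs, [self.data[i] for i in idxs]

    def update_priorities(self, idxs, td_errors, eps=1e-3):
        # After a training step, refresh priorities from the new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + eps) ** self.alpha


def ddqn_target(reward, done, next_q_online, next_q_target, gamma=0.99):
    """Double-DQN target: argmax over the online net's Q-values picks the
    action; the target net's Q-value for that action is used, which reduces
    the overestimation bias of vanilla DQN."""
    if done:
        return reward
    a = max(range(len(next_q_online)), key=lambda i: next_q_online[i])
    return reward + gamma * next_q_target[a]
```

In a full agent the Q-values would come from two neural networks (the target network being a periodically synced copy of the online one), and the sampled batch would also carry importance-sampling weights to correct the bias that prioritized sampling introduces.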
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/flowun%2FgardnerChessAi
PURL: pkg:github/flowun/gardnerChessAi
Stars: 3
Forks: 1
Open issues: 0
License: mit
Language: Python
Size: 3.56 MB
Dependencies parsed at: Pending
Created at: over 1 year ago
Updated at: about 2 months ago
Pushed at: over 1 year ago
Last synced at: 15 days ago
Commit Stats
Commits: 8
Authors: 2
Mean commits per author: 4.0
Development Distribution Score: 0.125
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/flowun/gardnerChessAi
Topics: ai, ai-projects, artificial-inteligence, chess, chess-ai, ddqn, deep-q-learning, double-deep-q-learning, double-dqn, minichess, prioritized-experience-replay, q-value, reinforcement-learning, tensorflow, tensorflow-tutorials, tutorials