An open API service providing repository metadata for many open source software ecosystems.

GitHub / aleksa-sukovic / iclr2024-reward-design-for-justifiable-rl

Code for the paper "Reward Design for Justifiable Sequential Decision-Making"; ICLR 2024

JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aleksa-sukovic%2Ficlr2024-reward-design-for-justifiable-rl
PURL: pkg:github/aleksa-sukovic/iclr2024-reward-design-for-justifiable-rl
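The JSON API endpoint above encodes the repository's `owner/name` pair as a single percent-encoded path segment (`%2F` in place of `/`). A minimal sketch of building that URL with Python's standard library, assuming the `http://repos.ecosyste.ms/api/v1` base shown above (the helper name `repo_metadata_url` is illustrative, not part of the API):

```python
from urllib.parse import quote

API_BASE = "http://repos.ecosyste.ms/api/v1"

def repo_metadata_url(host: str, owner: str, name: str) -> str:
    """Build the repository-metadata URL; owner/name is
    percent-encoded so it occupies one path segment."""
    full_name = quote(f"{owner}/{name}", safe="")  # "/" becomes "%2F"
    return f"{API_BASE}/hosts/{host}/repositories/{full_name}"

url = repo_metadata_url(
    "GitHub", "aleksa-sukovic", "iclr2024-reward-design-for-justifiable-rl"
)
print(url)
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) returns the repository metadata as JSON.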

Stars: 0
Forks: 0
Open issues: 0

License: MIT
Language: Jupyter Notebook
Size: 2.2 MB
Dependencies parsed at: Pending

Created at: over 1 year ago
Updated at: over 1 year ago
Pushed at: over 1 year ago
Last synced at: over 1 year ago

Topics: alignment, preference-based-reinforcement-learning, preference-learning, reinforcement-learning, reward-design
