GitHub topics: linearucb
ReinerJasin/Multi-Armed-Bandit
Implementation of the multi-armed bandit problem where each arm returns continuous numerical rewards. Covers Epsilon-Greedy, UCB1, and Thompson Sampling with detailed explanations (a minimal UCB1 sketch follows this entry).
Language: Jupyter Notebook - Size: 3.14 MB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 1 - Forks: 0
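
The entry above names UCB1 applied to arms with continuous rewards. As a rough, self-contained sketch of that idea (not the notebook's actual code), the snippet below assumes Gaussian arm rewards and hypothetical true means, and at each round pulls the arm with the highest empirical mean plus exploration bonus:

```python
import numpy as np

def ucb1(true_means, n_rounds=10000, noise_std=1.0, seed=0):
    """Minimal UCB1 sketch for arms with continuous (Gaussian) rewards."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    counts = np.zeros(n_arms)   # number of pulls per arm
    values = np.zeros(n_arms)   # running mean reward per arm
    total_reward = 0.0

    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1          # pull each arm once to initialise estimates
        else:
            # UCB1 index: empirical mean + sqrt(2 ln t / n_i) exploration bonus
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(values + bonus))

        # Continuous reward drawn around the arm's (assumed) true mean
        reward = rng.normal(true_means[arm], noise_std)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward

    return values, counts, total_reward

# Example with three hypothetical arms; means are illustrative only
means, pulls, total = ucb1([0.2, 0.5, 0.8])
print(pulls)
```

With these assumed means, the pull counts should concentrate on arm 2, whose true mean (0.8) is highest, while the other arms are sampled only often enough to keep their confidence bounds tight.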

cwuu/DataMining-LearningFromLargeDataSet-Task4
Task 4 of the Data Mining: Learning from Large Data Sets course, ETH Zurich, Fall 2017.
Language: Python - Size: 1.78 MB - Last synced at: about 2 years ago - Pushed at: about 7 years ago - Stars: 1 - Forks: 0
