Topic: "parallel-training"
NoteDance/Note
Machine learning library, Distributed training, Deep learning, Reinforcement learning, Models, TensorFlow, PyTorch
Language: Python - Size: 10.1 MB - Last synced at: about 10 hours ago - Pushed at: about 11 hours ago - Stars: 64 - Forks: 2

explosion/spacy-ray
☄️ Parallel and distributed training with spaCy and Ray
Language: Python - Size: 95.7 KB - Last synced at: 16 days ago - Pushed at: almost 2 years ago - Stars: 54 - Forks: 9

Tikquuss/meta_XLM
Cross-lingual Language Model (XLM) pretraining and Model-Agnostic Meta-Learning (MAML) for fast adaptation of deep networks
Language: Jupyter Notebook - Size: 32.7 MB - Last synced at: 4 days ago - Pushed at: about 4 years ago - Stars: 20 - Forks: 4

PinJhih/ddp-trainer
A simple package for distributed model training using Distributed Data Parallel (DDP) in PyTorch.
Language: Python - Size: 14.6 KB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 7 - Forks: 0
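The core idea behind Distributed Data Parallel training, as used by packages like ddp-trainer, can be sketched without any framework: each worker computes gradients on its own shard of the batch, the gradients are averaged (the all-reduce step), and every replica applies the same update. The names below are illustrative, not taken from the ddp-trainer package.

```python
# Framework-free sketch of synchronous data-parallel SGD: per-shard
# gradients are averaged so every "worker" applies an identical update.

def grad_mse_linear(w, shard):
    """Mean gradient of 0.5*(w*x - y)^2 over one data shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Stand-in for the collective all-reduce used by DDP."""
    return sum(grads) / len(grads)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
shards = [data[:2], data[2:]]          # one shard per "worker"
w = 0.0

for _ in range(200):                    # synchronous SGD steps
    local = [grad_mse_linear(w, s) for s in shards]
    w -= 0.05 * all_reduce_mean(local)  # same update on every replica

print(round(w, 3))  # -> 2.0
```

With equal-sized shards, averaging the per-shard mean gradients reproduces the full-batch gradient exactly, which is why DDP replicas stay in sync.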

jiankaiwang/distributed_training
This repository is a tutorial on training deep neural network models more efficiently. It focuses on two main frameworks: Keras and TensorFlow.
Language: Jupyter Notebook - Size: 58.6 KB - Last synced at: over 1 year ago - Pushed at: over 4 years ago - Stars: 6 - Forks: 4

taka-rl/tic-tac-toe_q_learning
Tic-tac-toe with Q-learning
Language: Python - Size: 1.31 MB - Last synced at: about 1 month ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0
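Tabular Q-learning, the technique behind projects like this one, boils down to one update rule: Q(s, a) ← Q(s, a) + α·(r + γ·max Q(s′, ·) − Q(s, a)). A minimal sketch of that update (illustrative; not code from the taka-rl repository):

```python
# Minimal tabular Q-learning update toward the bootstrapped target.
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward r + gamma * max Q(s', .)."""
    best_next = max((Q[(s_next, a2)] for a2 in actions), default=0.0)
    target = r + gamma * best_next
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = defaultdict(float)        # unseen (state, action) pairs start at 0.0
actions = [0, 1]
# Suppose action 1 in state "s0" wins the game (reward +1).
q_update(Q, "s0", 1, 1.0, "win", actions)
print(Q[("s0", 1)])  # -> 0.5, halfway toward the target of 1.0
```

Repeated over many self-play games, these updates propagate the terminal reward back through the board states that led to it.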
