An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: pipeline-parallelism

deepspeedai/DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python - Size: 217 MB - Last synced at: 1 day ago - Pushed at: 1 day ago - Stars: 39,507 - Forks: 4,485
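
To make the pipeline-parallelism angle concrete, here is a minimal sketch of DeepSpeed's pipeline API: the model is expressed as a flat list of layers that `deepspeed.pipe.PipelineModule` partitions into stages, and the engine's `train_batch` runs a micro-batched pipeline schedule. The layer sizes, stage count, loss function, and `ds_config.json` path are illustrative assumptions, not taken from the repo.

```python
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

# The model is given as a flat list of layers so DeepSpeed can partition
# it into pipeline stages (2 here). Run under the `deepspeed` launcher so
# torch.distributed is initialized before PipelineModule is constructed.
layers = [nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024)]
model = PipelineModule(layers=layers, num_stages=2, loss_fn=nn.MSELoss())

# The engine's train_batch() drives a micro-batched pipeline schedule
# (forward, backward, optimizer step) over one global batch.
engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",  # hypothetical config path
)
# loss = engine.train_batch(data_iter)  # data_iter yields (input, label) pairs
```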

bigscience-workshop/petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Language: Python - Size: 4.06 MB - Last synced at: 1 day ago - Pushed at: 11 months ago - Stars: 9,731 - Forks: 567
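
For context, Petals exposes a drop-in Transformers-style interface where most of the model's layers are served by volunteers across a swarm and only the embeddings and logits run locally. A minimal sketch (the model id is illustrative, and this assumes a reachable public swarm):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

name = "bigscience/bloom-7b1-petals"  # illustrative model id
tokenizer = AutoTokenizer.from_pretrained(name)
# Connects to the swarm; remote peers host the transformer blocks.
model = AutoDistributedModelForCausalLM.from_pretrained(name)

inputs = tokenizer("A quick test:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))
```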

hpcaitech/ColossalAI

Making large AI models cheaper, faster and more accessible

Language: Python - Size: 63.2 MB - Last synced at: 6 days ago - Pushed at: 6 days ago - Stars: 41,044 - Forks: 4,524

Shenggan/awesome-distributed-ml

A curated list of awesome projects and papers for distributed training or inference

Size: 44.9 KB - Last synced at: 7 days ago - Pushed at: 10 months ago - Stars: 239 - Forks: 28

1set-t/ai-model

Industrial-grade weather visualization system that transforms AI model predictions into professional meteorological plots, emphasizing operational forecasting capabilities.

Size: 1.95 KB - Last synced at: 11 days ago - Pushed at: 11 days ago - Stars: 0 - Forks: 0

gty111/gLLM

gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling

Language: Python - Size: 1.42 MB - Last synced at: 16 days ago - Pushed at: 16 days ago - Stars: 29 - Forks: 1

ai-decentralized/BloomBee

Decentralized LLM fine-tuning and inference with offloading

Language: Python - Size: 36.7 MB - Last synced at: 19 days ago - Pushed at: 19 days ago - Stars: 94 - Forks: 16

torchpipe/torchpipe

Serving inside PyTorch

Language: C++ - Size: 41.6 MB - Last synced at: 13 days ago - Pushed at: 13 days ago - Stars: 163 - Forks: 13

xrsrke/pipegoose

Large-scale 4D parallelism pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)*

Language: Python - Size: 1.26 MB - Last synced at: 18 days ago - Pushed at: over 1 year ago - Stars: 84 - Forks: 19

InternLM/InternEvo

InternEvo is an open-source, lightweight training framework that aims to support model pre-training without extensive dependencies.

Language: Python - Size: 6.8 MB - Last synced at: 25 days ago - Pushed at: 25 days ago - Stars: 393 - Forks: 69

kakaobrain/torchgpipe

A GPipe implementation in PyTorch

Language: Python - Size: 449 KB - Last synced at: 28 days ago - Pushed at: about 1 year ago - Stars: 843 - Forks: 99
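
As a reference point for the GPipe scheme, torchgpipe wraps an `nn.Sequential`, splits it into partitions according to a `balance` list, and scatters each input batch into `chunks` micro-batches that flow through the partitions concurrently. A minimal sketch (the layer shapes and two-device placement are assumptions):

```python
import torch
import torch.nn as nn
from torchgpipe import GPipe

# Four layers split into two partitions of two layers each; each forward
# pass pipelines 4 micro-batches through the partitions.
net = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
)
net = GPipe(net, balance=[2, 2], chunks=4)  # uses 2 CUDA devices by default

# Input is expected on the first partition's device; the output lands on
# the last partition's device.
x = torch.randn(32, 512, device=net.devices[0])
out = net(x)
```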

PaddlePaddle/PaddleFleetX

PaddlePaddle's large-model development kit, providing an end-to-end development toolchain for large language models, cross-modal large models, biocomputing large models, and other domains.

Language: Python - Size: 637 MB - Last synced at: 28 days ago - Pushed at: about 1 year ago - Stars: 470 - Forks: 165

Oneflow-Inc/libai

LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training

Language: Python - Size: 34.7 MB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 406 - Forks: 56

ParCIS/Chimera

Chimera: bidirectional pipeline parallelism for efficiently training large-scale models.

Language: Python - Size: 1.05 MB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 62 - Forks: 8
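
To see what bidirectional scheduling buys, a back-of-envelope bubble calculation helps: a synchronous pipeline with p stages and m micro-batches idles for roughly a (p-1)/(m+p-1) fraction of each training step, and the Chimera paper reports cutting the number of bubbles by up to about 50% by running two pipelines in opposite directions. A small sketch of that baseline arithmetic (the p and m values are illustrative):

```python
# Classic bubble fraction of a synchronous (GPipe-style) pipeline with
# p stages and m micro-batches; Chimera's bidirectional schedule targets
# exactly this idle time, reportedly reducing bubbles by up to ~50%.
def bubble_fraction(p: int, m: int) -> float:
    return (p - 1) / (m + p - 1)

for m in (4, 8, 16):
    print(f"p=4, m={m}: bubble ≈ {bubble_fraction(4, m):.2%}")
```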

alibaba/EasyParallelLibrary

Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.

Language: Python - Size: 771 KB - Last synced at: 4 months ago - Pushed at: over 2 years ago - Stars: 267 - Forks: 49

AlibabaPAI/DAPPLE

An Efficient Pipelined Data-Parallel Approach for Training Large Models

Language: Python - Size: 1.64 MB - Last synced at: 4 months ago - Pushed at: over 4 years ago - Stars: 73 - Forks: 17

Coobiw/MPP-LLaVA

Personal project: MPP-Qwen14B & MPP-Qwen-Next (multimodal pipeline parallelism based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on 24 GB RTX 3090/4090 GPUs.

Language: Jupyter Notebook - Size: 73.1 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 420 - Forks: 23

torchpipe/torchpipe.github.io

Docs for torchpipe: https://github.com/torchpipe/torchpipe

Language: MDX - Size: 7.86 MB - Last synced at: 12 months ago - Pushed at: 12 months ago - Stars: 4 - Forks: 1

saareliad/FTPipe

FTPipe and related pipeline model parallelism research.

Language: Python - Size: 11.4 MB - Last synced at: 4 months ago - Pushed at: about 2 years ago - Stars: 41 - Forks: 7

fanpu/DynPartition

Official implementation of DynPartition: Automatic Optimal Pipeline Parallelism of Dynamic Neural Networks over Heterogeneous GPU Systems for Inference Tasks

Language: Python - Size: 135 MB - Last synced at: over 1 year ago - Pushed at: about 2 years ago - Stars: 5 - Forks: 0

nawnoes/pytorch-gpt-x

Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism.

Language: Python - Size: 2.98 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 29 - Forks: 2

garg-aayush/model-parallelism

Model parallelism for NN architectures with skip connections (e.g., ResNets, UNets)

Language: Python - Size: 6.85 MB - Last synced at: over 2 years ago - Pushed at: about 3 years ago - Stars: 2 - Forks: 0
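
The wrinkle this repo targets is that a skip connection ties together activations that, once the network is split across devices, live on different GPUs, so the saved tensor must cross the device boundary along with the main path. A minimal two-GPU sketch (the module and shapes are hypothetical, not taken from the repo):

```python
import torch
import torch.nn as nn

class TwoDeviceSkipNet(nn.Module):
    """Hypothetical net split across two GPUs with one skip connection."""

    def __init__(self):
        super().__init__()
        self.front = nn.Linear(256, 256).to("cuda:0")
        self.back = nn.Linear(256, 256).to("cuda:1")

    def forward(self, x):
        h = self.front(x.to("cuda:0"))
        h1 = h.to("cuda:1")        # main path crosses the device boundary...
        return self.back(h1) + h1  # ...and the skip connection reuses the copy

# y = TwoDeviceSkipNet()(torch.randn(8, 256))
```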

LER0ever/HPGO

Development of Project HPGO | Hybrid Parallelism Global Orchestration

Size: 5.29 MB - Last synced at: 11 days ago - Pushed at: over 4 years ago - Stars: 3 - Forks: 0

Related Keywords
pipeline-parallelism (23), model-parallelism (11), pytorch (11), deep-learning (9), data-parallelism (8), machine-learning (6), tensor-parallelism (5), large-scale (4), inference (4), distributed-training (4), gpipe (3), transformer (3), nlp (3), distributed-systems (3), transformers (2), self-supervised-learning (2), sequence-parallelism (2), gpt (2), fine-tuning (2), pretraining (2), large-language-models (2), llama (2), tensorrt (2), neural-networks (2), deepspeed (2), deployment (2), mixture-of-experts (2), gpu (2), llm-serving (2), serving (2), benchmark (1), cloud (1), distributed-algorithm (1), elastic (1), fleet-api (1), tensorflow (1), parallelism (1), checkpointing (1), zero3 (1), transformers-models (1), ring-attention (1), multi-modal (1), llm-training (1), llm-framework (1), llava (1), llama3 (1), internlm2 (1), internlm (1), multimodal (1), rust (1), pipedream (1), treelstm (1), scheduling (1), reinforcement-learning (1), dynpartition (1), dynamic-neural-network (1), t5 (1), deep-neural-networks (1), video-large-language-models (1), video-language-model (1), qwen (1), multimodal-large-language-models (1), model-parallel (1), mllm (1), hybrid-parallelism (1), distribution-strategy-planner (1), memory-efficient (1), distributed-deep-learning (1), vision-transformer (1), oneflow (1), unsupervised-learning (1), paddlepaddle (1), paddlecloud (1), lightning (1), mlops (1), llms (1), image-classification (1), face-detection (1), data-science (1), high-performance-computing (1), hpc (1), heterogeneous-training (1), foundation-models (1), distributed-computing (1), big-model (1), ai (1), volunteer-computing (1), pretrained-models (1), mixtral (1), language-models (1), guanaco (1), falcon (1), chatbot (1), bloom (1), zero (1), trillion-parameters (1), compression (1), billion-parameters (1), gemma (1), flash-attention (1)