An open API service providing repository metadata for many open source software ecosystems.

Topic: "parameter-efficient-tuning"

adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

Language: Python - Size: 96.5 MB - Last synced at: 3 days ago - Pushed at: about 1 month ago - Stars: 2,783 - Forks: 372
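
For orientation, the adapters library wraps standard Hugging Face models and exposes a small API for attaching and training bottleneck adapters while the backbone stays frozen. A minimal sketch is below; it assumes a recent adapters release alongside transformers, and the adapter name and base checkpoint are illustrative, not taken from this repository.

# Minimal sketch (assumes `pip install adapters transformers`); adapter name is illustrative.
from transformers import AutoModelForSequenceClassification
import adapters
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
adapters.init(model)                               # patch the model with adapter support
model.add_adapter("my_adapter", config="seq_bn")   # add a sequential bottleneck adapter
model.train_adapter("my_adapter")                  # freeze the backbone, train only the adapter
print(model.adapter_summary())                     # report trainable parameter counts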

ttengwang/Awesome_Prompting_Papers_in_Computer_Vision

A curated list of prompt-based papers in computer vision and vision-language learning.

Size: 72.3 KB - Last synced at: 3 days ago - Pushed at: almost 2 years ago - Stars: 926 - Forks: 71

NVlabs/DoRA

[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation

Language: Python - Size: 3.06 MB - Last synced at: 3 months ago - Pushed at: about 1 year ago - Stars: 837 - Forks: 60
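
DoRA decomposes each pretrained weight into a magnitude and a direction component and applies a LoRA-style low-rank update to the direction only. For readers who just want to try the idea, Hugging Face PEFT exposes DoRA via a flag on its LoRA config; the sketch below uses that route rather than this repository's own code, and assumes peft >= 0.9 with an illustrative small model.

# Sketch using Hugging Face PEFT's DoRA flag (peft >= 0.9), not the NVlabs implementation.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # small model for illustration
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections
    use_dora=True,                        # enable weight-decomposed low-rank adaptation
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the low-rank (and magnitude) parameters train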

jianghaojun/Awesome-Parameter-Efficient-Transfer-Learning

A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.

Size: 165 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 372 - Forks: 23

HenryHZY/Awesome-Multimodal-LLM

Research Trends in LLM-guided Multimodal Learning.

Size: 17.6 KB - Last synced at: 7 days ago - Pushed at: about 2 years ago - Stars: 356 - Forks: 16

calpt/awesome-adapter-resources

Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning

Language: Python - Size: 213 KB - Last synced at: 10 days ago - Pushed at: over 1 year ago - Stars: 200 - Forks: 10

JieShibo/PETL-ViT

[ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass

Language: Python - Size: 18.5 MB - Last synced at: 8 months ago - Pushed at: over 2 years ago - Stars: 180 - Forks: 8

ZO-Bench/ZO-LLM

[ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".

Language: Python - Size: 156 KB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 105 - Forks: 14

eric-ai-lab/PEViT

Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"

Language: Python - Size: 2.96 MB - Last synced at: 8 months ago - Pushed at: over 2 years ago - Stars: 104 - Forks: 5

thunlp/Prompt-Transferability

On Transferability of Prompt Tuning for Natural Language Processing

Language: Python - Size: 629 MB - Last synced at: 5 months ago - Pushed at: over 1 year ago - Stars: 99 - Forks: 11

ShiZhengyan/DePT

[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"

Language: Python - Size: 3.71 MB - Last synced at: 8 months ago - Pushed at: over 1 year ago - Stars: 96 - Forks: 16
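
DePT builds on soft prompt tuning, where a small set of trainable virtual tokens is prepended to the input while the backbone stays frozen. For context, here is a minimal vanilla prompt-tuning sketch using Hugging Face PEFT; this shows only the baseline technique, not DePT's decomposed variant, and the model and token count are illustrative.

# Vanilla soft prompt tuning with Hugging Face PEFT (baseline only, not DePT itself).
from transformers import AutoModelForSeq2SeqLM
from peft import PromptTuningConfig, TaskType, get_peft_model
base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,                # trainable soft-prompt length
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the prompt embeddings are trainable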

changdaeoh/BlackVIP

Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning"

Language: Python - Size: 1.81 MB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 82 - Forks: 8

morningmoni/UniPELT

Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022

Language: Python - Size: 1.59 MB - Last synced at: 5 months ago - Pushed at: over 3 years ago - Stars: 61 - Forks: 5

LeapLabTHU/Cross-Modal-Adapter

[Pattern Recognition 2025] Cross-Modal Adapter for Vision-Language Retrieval

Language: Python - Size: 10.5 MB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 55 - Forks: 2

Paranioar/UniPT

[CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"

Language: Python - Size: 15.6 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 49 - Forks: 0

zhangyikaii/LAMDA-ZhiJian

ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse

Language: Python - Size: 13 MB - Last synced at: about 2 months ago - Pushed at: about 2 years ago - Stars: 49 - Forks: 2

WillDreamer/Aurora

[NeurIPS2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model

Language: Python - Size: 118 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 48 - Forks: 3

OSU-MLB/ViT_PEFT_Vision

[CVPR'25 (Highlight)] Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition

Language: Jupyter Notebook - Size: 3.58 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 39 - Forks: 0

jaisidhsingh/LoRA-CLIP

Easy wrapper for inserting LoRA layers in CLIP.

Language: Python - Size: 60.5 KB - Last synced at: about 2 months ago - Pushed at: over 1 year ago - Stars: 38 - Forks: 2


bighuang624/VoP

[CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval

Size: 2.93 KB - Last synced at: 5 months ago - Pushed at: over 2 years ago - Stars: 38 - Forks: 3

ImKeTT/AdaVAE

[Preprint] PyTorch implementation of "AdaVAE: Exploring Adaptive GPT-2s in VAEs for Language Modeling"

Language: Python - Size: 888 KB - Last synced at: 7 months ago - Pushed at: about 2 years ago - Stars: 35 - Forks: 4

AuroraZengfh/RobustMerge

[NeurIPS'25 Spotlight🔥] Official Implementation of RobustMerge: Parameter-Efficient Model Merging for MLLMs with Direction Robustness

Language: Python - Size: 1.98 MB - Last synced at: 8 days ago - Pushed at: 8 days ago - Stars: 29 - Forks: 2

mlvlab/ProMetaR

Official implementation of CVPR 2024 paper "Prompt Learning via Meta-Regularization".

Language: Python - Size: 2.85 MB - Last synced at: 8 months ago - Pushed at: 8 months ago - Stars: 27 - Forks: 1

auniquesun/PPT

[ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"

Language: Jupyter Notebook - Size: 11 MB - Last synced at: 9 months ago - Pushed at: 9 months ago - Stars: 22 - Forks: 5

danelpeng/Awesome-Continual-Leaning-with-PTMs

A curated list of research on continual learning with pretrained models.

Size: 351 KB - Last synced at: 12 days ago - Pushed at: 6 months ago - Stars: 19 - Forks: 0

Allen0307/AdapterBias

Code for the Findings of NAACL 2022 (Long Paper): "AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks"

Language: Python - Size: 4.46 MB - Last synced at: 5 months ago - Pushed at: over 3 years ago - Stars: 18 - Forks: 0

westlake-repl/Adapter4Rec

Multi-domain Recommendation with Adapter Tuning

Language: Python - Size: 46.9 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 17 - Forks: 1

gauss5930/AlpaGasus2-QLoRA

AlpaGasus2-QLoRA: a LLaMA2-based reimplementation of the AlpaGasus approach, fine-tuned with QLoRA.

Language: Python - Size: 3 MB - Last synced at: 4 days ago - Pushed at: almost 2 years ago - Stars: 15 - Forks: 3

yunqing-me/AdAM

The Thirty-Sixth Annual Conference on Neural Information Processing Systems (NeurIPS) 2022

Language: Python - Size: 44.1 MB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 13 - Forks: 1

daskol/lotr

Low Tensor Rank adaptation of large language models

Language: Python - Size: 132 KB - Last synced at: 22 days ago - Pushed at: over 1 year ago - Stars: 10 - Forks: 1

declare-lab/domadapter

Code for EACL'23 paper "Udapter: Efficient Domain Adaptation Using Adapters"

Language: Python - Size: 538 KB - Last synced at: 7 months ago - Pushed at: over 2 years ago - Stars: 10 - Forks: 1

siyi-wind/AViT

[MICCAI ISIC Workshop 2023] AViT: Adapting Vision Transformers for Small Skin Lesion Segmentation Datasets (an official implementation)

Language: Python - Size: 593 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 9 - Forks: 1

louisc-s/QLoRA-Fine-tuning-for-Film-Character-Styled-Responses-from-LLM

Code for fine-tuning the Llama2 LLM on a custom text dataset to produce film-character-styled responses

Language: Python - Size: 63.1 MB - Last synced at: 9 days ago - Pushed at: almost 2 years ago - Stars: 8 - Forks: 1
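
QLoRA keeps the base model weights quantized to 4-bit precision and trains small LoRA adapters on top, which is what makes fine-tuning a Llama-2-class model feasible on a single GPU. A minimal sketch with the Hugging Face transformers/peft/bitsandbytes stack follows; it is not this repository's exact script, and the model ID and hyperparameters are illustrative.

# QLoRA sketch: 4-bit base model + LoRA adapters (assumes transformers, peft, bitsandbytes).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # illustrative model ID (gated on the Hub)
    quantization_config=bnb,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()        # only the LoRA matrices are trainable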

joaopauloschuler/kEffNetV1

This repository contains the source code for the paper "Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks".

Language: Jupyter Notebook - Size: 131 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 6 - Forks: 1

Paranioar/GSSF

[TIP2024] The code of "GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning"

Size: 5.86 KB - Last synced at: 11 months ago - Pushed at: 11 months ago - Stars: 5 - Forks: 0

adarobustness/adaptation_robustness

Evaluate robustness of adaptation methods on large vision-language models

Language: Shell - Size: 1.66 MB - Last synced at: about 2 years ago - Pushed at: about 2 years ago - Stars: 5 - Forks: 0

yejoon-lee/kr3

KR3: Korean Restaurant Review with Ratings / Experiments on Parameter-efficient Tuning and Task-adaptive Pre-training

Language: Jupyter Notebook - Size: 600 KB - Last synced at: over 2 years ago - Pushed at: almost 3 years ago - Stars: 5 - Forks: 2

gimpong/ICCV23-IDPT Fork of zyh16143998882/ICCV23-IDPT

The code for the paper "Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models" (ICCV'23).

Language: Python - Size: 931 KB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 4 - Forks: 0

sergio11/llm_finetuning_and_evaluation

The LLM FineTuning and Evaluation project 🚀 enhances FLAN-T5 models for tasks like summarizing Spanish news articles 🇪🇸📰. It features detailed notebooks 📚 on fine-tuning and evaluating models to optimize performance for specific applications. 🔍✨

Language: Jupyter Notebook - Size: 499 KB - Last synced at: 7 months ago - Pushed at: 8 months ago - Stars: 2 - Forks: 1

Paranioar/SHERL

[ECCV2024] The code of "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning"

Size: 8.79 KB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 2 - Forks: 0

sebastianpinedaar/finetuning_text_classifiers

A simple codebase for fine-tuning large language models on classification tasks.

Language: Jupyter Notebook - Size: 52.7 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 2 - Forks: 0

MIFA-Lab/SAFE

Implementation for NeurIPS 2024 paper "SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models" (https://arxiv.org/abs/2411.02175)

Language: Python - Size: 497 KB - Last synced at: 11 months ago - Pushed at: 11 months ago - Stars: 1 - Forks: 0

lucalila/fishpal

Master Thesis on "Comparing Modular Approaches for Parameter-Efficient Fine-Tuning"

Language: Python - Size: 9.05 MB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 1 - Forks: 0

kantkrishan0206-crypto/LoRAForge-

Build a production-grade, modular pipeline for fine-tuning large language models with LoRA on domain-specific tasks (e.g., legal QA, medical summarization, financial reasoning).

Language: Makefile - Size: 1.25 MB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 0 - Forks: 0

safeernabi4/language-learning-prompts

LLM prompts for language learning.

Size: 7.81 KB - Last synced at: 6 months ago - Pushed at: 6 months ago - Stars: 0 - Forks: 0

nikhil-chigali/AdapterBERT

An implementation of the paper "Parameter-Efficient Transfer Learning for NLP" (Houlsby et al., Google, ICML 2019).

Language: Python - Size: 130 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 0 - Forks: 0

adarobustness/corruption

The code for generating natural distribution shifts on image and text datasets.

Language: Python - Size: 4.47 MB - Last synced at: over 2 years ago - Pushed at: over 2 years ago - Stars: 0 - Forks: 0

Related Topics
transfer-learning (13), parameter-efficient-learning (10), nlp (9), pytorch (9), lora (9), natural-language-processing (7), deep-learning (7), prompt-tuning (7), peft (7), adapter (6), adapters (6), large-language-models (6), fine-tuning (5), transformers (5), parameter-efficient-fine-tuning (5), machine-learning (4), llm (3), qlora (3), prompt-learning (3), visual-prompting (3), pretrained-models (3), vision-transformer (3), multimodal (3), vision-and-language (3), few-shot-learning (3), text-classification (3), bert (3), huggingface-transformers (2), iccv2023 (2), awesome (2), prefix-tuning (2), text-generation (2), huggingface (2), bert-fine-tuning (2), domain-adaptation (2), low-rank-adaptation (2), pre-trained-model (2), transformer (2), robustness (2), vision-language (2), continual-learning (2), llama2 (2), memory-efficient-tuning (2), generative-ai (2), computer-vision (2), multimodal-deep-learning (2), instruction-tuning (2), memory-efficient-learning (2), vision-language-pretraining (2), cross-modal (2), model-merging (2), in-context-learning (2), image-text-matching (2), experiment-tracking (1), gpt-finetuning (1), industrial-nlp (1), legal-nlp (1), vision-language-models (1), bitfit (1), adapter-tuning (1), variational-autoencoder (1), vae (1), representation-learning (1), gpt-2 (1), controllable-generation (1), lotr (1), vision-language-model (1), testcase-generator (1), image-recognition (1), clip-model (1), cvpr2024 (1), visual-recognition (1), vision-recognition (1), keras-neural-network (1), ethical-ai (1), flan-t5 (1), ppo (1), prompt-engineering (1), python (1), rlhf (1), tinyllama (1), transformer-models (1), cvpr2023 (1), chatbot (1), finetuning-llms (1), alpaca (1), zeroth-order-optimization (1), multi-modal-large-language-model (1), multi-task-learnning (1), visual-prompt (1), zero-shot-learning (1), llama (1), medical-nlp (1), mistral (1), mixed-precision (1), quantization (1), tuning (1), knowledge-distillation (1), model-reuse (1), regularization (1)