An open API service providing repository metadata for many open source software ecosystems.

Topic: "parameter-efficient-tuning"

adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

Language: Jupyter Notebook - Size: 96.9 MB - Last synced at: about 3 hours ago - Pushed at: 3 days ago - Stars: 2,689 - Forks: 360
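
For readers unfamiliar with adapter-based tuning: small bottleneck modules are inserted into a frozen pre-trained model and only those modules (plus a task head) are trained. A minimal sketch with the adapters library; treat the exact API names here (AutoAdapterModel, the "seq_bn" config string, train_adapter) as version-dependent assumptions rather than a definitive recipe:

from adapters import AutoAdapterModel

# Load a backbone with adapter support and attach a bottleneck adapter plus a head.
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("sst2", config="seq_bn")           # sequential bottleneck adapter
model.add_classification_head("sst2", num_labels=2)

# Freeze the pre-trained weights; only the adapter and head remain trainable.
model.train_adapter("sst2")
model.set_active_adapters("sst2")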

ttengwang/Awesome_Prompting_Papers_in_Computer_Vision

A curated list of prompt-based papers in computer vision and vision-language learning.

Size: 72.3 KB - Last synced at: 11 days ago - Pushed at: over 1 year ago - Stars: 918 - Forks: 72

NVlabs/DoRA

[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation

Language: Python - Size: 3.06 MB - Last synced at: 4 months ago - Pushed at: 7 months ago - Stars: 668 - Forks: 44
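
DoRA, in brief, re-parameterizes each pre-trained weight matrix into a learnable magnitude and a direction, applies a LoRA-style low-rank update only to the direction, and renormalizes. A schematic PyTorch sketch of the idea (illustrative only, not the NVlabs implementation; the class name and rank are assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    # Schematic weight-decomposed low-rank adaptation of a frozen nn.Linear.
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.register_buffer("weight", base.weight.detach())          # frozen W0
        self.bias = base.bias                                         # bias handling simplified
        out_f, in_f = base.weight.shape
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)    # trainable
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))          # trainable, delta starts at 0
        # Magnitude initialized to the per-output-row norm of W0.
        self.magnitude = nn.Parameter(base.weight.detach().norm(p=2, dim=1, keepdim=True))

    def forward(self, x):
        direction = self.weight + self.lora_B @ self.lora_A           # W0 + B A
        direction = direction / direction.norm(p=2, dim=1, keepdim=True)
        return F.linear(x, self.magnitude * direction, self.bias)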

jianghaojun/Awesome-Parameter-Efficient-Transfer-Learning

A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.

Size: 165 KB - Last synced at: 9 months ago - Pushed at: 9 months ago - Stars: 372 - Forks: 23

HenryHZY/Awesome-Multimodal-LLM

Research Trends in LLM-guided Multimodal Learning.

Size: 17.6 KB - Last synced at: 3 days ago - Pushed at: over 1 year ago - Stars: 358 - Forks: 16

calpt/awesome-adapter-resources

Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning

Language: Python - Size: 213 KB - Last synced at: 6 days ago - Pushed at: 12 months ago - Stars: 189 - Forks: 11

JieShibo/PETL-ViT

[ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass

Language: Python - Size: 18.5 MB - Last synced at: 18 days ago - Pushed at: over 1 year ago - Stars: 180 - Forks: 8

eric-ai-lab/PEViT

Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"

Language: Python - Size: 2.96 MB - Last synced at: 18 days ago - Pushed at: over 1 year ago - Stars: 104 - Forks: 5

thunlp/Prompt-Transferability

On Transferability of Prompt Tuning for Natural Language Processing

Language: Python - Size: 629 MB - Last synced at: 19 days ago - Pushed at: 12 months ago - Stars: 99 - Forks: 11
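
Prompt tuning, which several of the entries above and below build on, keeps the backbone frozen and learns only a small matrix of "virtual token" embeddings prepended to the input. A schematic sketch (illustrative, not code from these repositories):

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # The only trainable parameters: one embedding per virtual token.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) from a frozen backbone's embedding layer.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)   # (batch, P + seq_len, hidden)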

ShiZhengyan/DePT

[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"

Language: Python - Size: 3.71 MB - Last synced at: 15 days ago - Pushed at: about 1 year ago - Stars: 96 - Forks: 16

changdaeoh/BlackVIP

Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning"

Language: Python - Size: 1.81 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 82 - Forks: 8

zhangyikaii/LAMDA-ZhiJian

ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse

Language: Python - Size: 13 MB - Last synced at: 4 days ago - Pushed at: over 1 year ago - Stars: 51 - Forks: 2

Paranioar/UniPT

[CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"

Language: Python - Size: 15.6 MB - Last synced at: 12 months ago - Pushed at: 12 months ago - Stars: 49 - Forks: 0

WillDreamer/Aurora

[NeurIPS2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model

Language: Python - Size: 118 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 48 - Forks: 3

morningmoni/UniPELT

Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022

Language: Python - Size: 1.59 MB - Last synced at: over 1 year ago - Pushed at: about 3 years ago - Stars: 45 - Forks: 2

bighuang624/VoP

[CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval

Size: 2.93 KB - Last synced at: about 2 months ago - Pushed at: about 2 years ago - Stars: 38 - Forks: 3

ImKeTT/AdaVAE

[Preprint] PyTorch implementation of "AdaVAE: Exploring Adaptive GPT-2s in VAEs for Language Modeling"

Language: Python - Size: 888 KB - Last synced at: 10 days ago - Pushed at: over 1 year ago - Stars: 35 - Forks: 4

ZO-Bench/ZO-LLM

[ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".

Language: Python - Size: 156 KB - Last synced at: 10 months ago - Pushed at: 10 months ago - Stars: 34 - Forks: 3
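
Zeroth-order methods like those benchmarked here estimate gradients from forward passes only, so no activation memory for backpropagation is needed. A schematic two-point estimator and update in the MeZO style, with an assumed loss closure loss_fn (illustrative, not the benchmark's code):

import torch

def zo_step(params, loss_fn, lr=1e-6, eps=1e-3):
    # One random perturbation direction per parameter tensor.
    zs = [torch.randn_like(p) for p in params]
    with torch.no_grad():
        for p, z in zip(params, zs):
            p.add_(eps * z)
        loss_plus = loss_fn()                       # forward pass at theta + eps*z
        for p, z in zip(params, zs):
            p.sub_(2 * eps * z)
        loss_minus = loss_fn()                      # forward pass at theta - eps*z
        grad_scale = (loss_plus - loss_minus) / (2 * eps)
        for p, z in zip(params, zs):
            p.add_(eps * z)                         # restore original parameters
            p.sub_(lr * grad_scale * z)             # SGD step along the estimated gradient
    return loss_plus, loss_minus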

OSU-MLB/ViT_PEFT_Vision

[CVPR'25] Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition

Language: Jupyter Notebook - Size: 3.54 MB - Last synced at: 18 days ago - Pushed at: 18 days ago - Stars: 31 - Forks: 0

jaisidhsingh/LoRA-CLIP

Easy wrapper for inserting LoRA layers in CLIP.

Language: Python - Size: 60.5 KB - Last synced at: 5 days ago - Pushed at: 10 months ago - Stars: 31 - Forks: 3
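
LoRA itself, which this wrapper applies to CLIP, adds a trainable low-rank delta to a frozen linear projection. A generic sketch of the kind of module being inserted (illustrative; not this repository's wrapper API):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # freeze the original projection W0
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))  # delta starts at 0
        self.scaling = alpha / rank

    def forward(self, x):
        # y = x W0^T + scaling * x A^T B^T  (equivalent to adding B @ A to W0)
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)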

LeapLabTHU/Cross-Modal-Adapter

[arXiv] Cross-Modal Adapter for Text-Video Retrieval

Size: 3.39 MB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 29 - Forks: 2

mlvlab/ProMetaR

Official implementation of CVPR 2024 paper "Prompt Learning via Meta-Regularization".

Language: Python - Size: 2.85 MB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 27 - Forks: 1

auniquesun/PPT

[ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"

Language: Jupyter Notebook - Size: 11 MB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 22 - Forks: 5

westlake-repl/Adapter4Rec

Multi-domain Recommendation with Adapter Tuning

Language: Python - Size: 46.9 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 17 - Forks: 1

Allen0307/AdapterBias

Code for the Findings of NAACL 2022 (Long Paper): AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

Language: Python - Size: 4.46 MB - Last synced at: about 2 years ago - Pushed at: almost 3 years ago - Stars: 17 - Forks: 0

danelpeng/Awesome-Continual-Leaning-with-PTMs

This is a curated list of "Continual Learning with Pretrained Models" research.

Size: 254 KB - Last synced at: 7 days ago - Pushed at: 30 days ago - Stars: 16 - Forks: 0

gauss5930/AlpaGasus2-QLoRA

AlpaGasus2-QLoRA: a LLaMA2-based implementation of the AlpaGasus mechanism, fine-tuned with QLoRA.

Language: Python - Size: 3 MB - Last synced at: 4 days ago - Pushed at: over 1 year ago - Stars: 15 - Forks: 3

yunqing-me/AdAM

The Thirty-Sixth Annual Conference on Neural Information Processing Systems (NeurIPS) 2022

Language: Python - Size: 44.1 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 13 - Forks: 1

declare-lab/domadapter

Code for EACL'23 paper "Udapter: Efficient Domain Adaptation Using Adapters"

Language: Python - Size: 538 KB - Last synced at: 9 days ago - Pushed at: about 2 years ago - Stars: 10 - Forks: 1

daskol/lotr

Low Tensor Rank adaptation of large language models

Language: Python - Size: 132 KB - Last synced at: 10 days ago - Pushed at: 11 months ago - Stars: 9 - Forks: 1

siyi-wind/AViT

[MICCAI ISIC Workshop 2023] AViT: Adapting Vision Transformers for Small Skin Lesion Segmentation Datasets (an official implementation)

Language: Python - Size: 593 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 9 - Forks: 1

louisc-s/QLoRA-Fine-tuning-for-Film-Character-Styled-Responses-from-LLM

Code for fine-tuning a Llama2 LLM with a custom text dataset to produce film-character-styled responses

Language: Python - Size: 63.1 MB - Last synced at: 4 days ago - Pushed at: over 1 year ago - Stars: 8 - Forks: 1
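
QLoRA combines a 4-bit quantized, frozen base model with LoRA adapters trained in higher precision. A hedged configuration sketch using transformers, peft, and bitsandbytes (the model id, rank, and target modules are illustrative choices, not this repository's exact settings):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",                      # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                     # illustrative base model
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)          # only the LoRA weights are trainable
model.print_trainable_parameters()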

joaopauloschuler/kEffNetV1

This repository contains the source code for the paper "Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks".

Language: Jupyter Notebook - Size: 131 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 6 - Forks: 1

oppolla/Self-Organizing-Virtual-Lifeform

SOVL System (Self-Organizing Virtual Lifeform): an AI agent with autonomous learning capabilities that combines a base LLM with a scaffolded second, dynamic LLM for continuous learning via sleep and dream mechanisms

Language: Python - Size: 2.8 MB - Last synced at: 6 days ago - Pushed at: 6 days ago - Stars: 5 - Forks: 1

Paranioar/GSSF

[TIP2024] The code of "GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning"

Size: 5.86 KB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 5 - Forks: 0

adarobustness/adaptation_robustness

Evaluate robustness of adaptation methods on large vision-language models

Language: Shell - Size: 1.66 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 5 - Forks: 0

yejoon-lee/kr3

KR3: Korean Restaurant Review with Ratings / Experiments on Parameter-efficient Tuning and Task-adaptive Pre-training

Language: Jupyter Notebook - Size: 600 KB - Last synced at: almost 2 years ago - Pushed at: over 2 years ago - Stars: 5 - Forks: 2

gimpong/ICCV23-IDPT Fork of zyh16143998882/ICCV23-IDPT

The code for the paper "Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models" (ICCV'23).

Language: Python - Size: 931 KB - Last synced at: 6 months ago - Pushed at: 6 months ago - Stars: 4 - Forks: 0

Jaso1024/Self-Attention-Factor-Tuning

Code for SAFT: Self-Attention Factor-Tuning, a 16x more efficient solution for fine-tuning neural networks

Language: Python - Size: 75.2 KB - Last synced at: 12 months ago - Pushed at: 12 months ago - Stars: 3 - Forks: 0

sergio11/llm_finetuning_and_evaluation

The LLM FineTuning and Evaluation project enhances FLAN-T5 models for tasks like summarizing Spanish news articles. It features detailed notebooks on fine-tuning and evaluating models to optimize performance for specific applications.

Language: Jupyter Notebook - Size: 499 KB - Last synced at: 5 days ago - Pushed at: about 1 month ago - Stars: 2 - Forks: 1

Paranioar/SHERL

[ECCV2024] The code of "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning"

Size: 8.79 KB - Last synced at: 7 months ago - Pushed at: 7 months ago - Stars: 2 - Forks: 0

sebastianpinedaar/finetuning_text_classifiers

A simple codebase for fine-tuning large language models on classification tasks.

Language: Jupyter Notebook - Size: 52.7 KB - Last synced at: 9 months ago - Pushed at: 9 months ago - Stars: 2 - Forks: 0

MIFA-Lab/SAFE

Implementation for NeurIPS 2024 paper "SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models" (https://arxiv.org/abs/2411.02175)

Language: Python - Size: 497 KB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 1 - Forks: 0

lucalila/fishpal

Master's thesis on "Comparing Modular Approaches for Parameter-Efficient Fine-Tuning"

Language: Python - Size: 9.05 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 1 - Forks: 0

nikhil-chigali/AdapterBERT

An implementation of the paper "Parameter-Efficient Transfer Learning for NLP" (Houlsby et al., Google, ICML 2019).

Language: Python - Size: 130 KB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 0 - Forks: 0

adarobustness/corruption

The code for generating natural distribution shifts on image and text datasets.

Language: Python - Size: 4.47 MB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 0 - Forks: 0

Related Topics
transfer-learning (12), parameter-efficient-learning (10), pytorch (9), lora (8), nlp (8), prompt-tuning (7), natural-language-processing (7), deep-learning (7), adapters (6), fine-tuning (6), peft (6), large-language-models (6), adapter (5), parameter-efficient-fine-tuning (5), vision-transformer (4), transformers (4), llm (3), qlora (3), prompt-learning (3), bert (3), multimodal (3), visual-prompting (3), machine-learning (3), text-classification (3), pretrained-models (3), bert-fine-tuning (2), memory-efficient-learning (2), image-text-matching (2), cross-modal (2), pre-trained-model (2), memory-efficient-tuning (2), awesome (2), instruction-tuning (2), transformer (2), text-generation (2), huggingface-transformers (2), vision-and-language (2), robustness (2), continual-learning (2), few-shot-learning (2), generative-ai (2), computer-vision (2), multimodal-deep-learning (2), llama2 (2), vision-language-pretraining (2), controllable-generation (1), visual-prompt (1), gpt-2 (1), zero-shot-learning (1), lotr (1), cross-attention (1), neural-network (1), neural-networks (1), cvpr2023 (1), zeroth-order-optimization (1), low-rank-adaptation (1), commonsense-reasoning (1), deep-neural-networks (1), large-vision-language-models (1), cvpr2025 (1), vision-recognition (1), visual-recognition (1), in-context-learning (1), multimodal-large-language-models (1), multimodal-learning (1), language-model (1), nlp-machine-learning (1), cvpr2024 (1), dreaming-ai (1), dynamic-fusion (1), gestational-learning (1), lifelong-learning (1), lora-adapters (1), memory-augmented-ai (1), multi-model-systems (1), neurosymbolic-ai (1), self-organizing-systems (1), temperament-model (1), ethical-ai (1), flan-t5 (1), ppo (1), prompt-engineering (1), python (1), rlhf (1), tinyllama (1), transformer-models (1), representation-learning (1), vae (1), variational-autoencoder (1), domain-adaptation (1), knowledge-distillation (1), model-merging (1), model-reuse (1), regularization (1), toolbox (1), pretrained-language-model (1), pretrained-language-models (1), prompt (1), diffusion-models (1), embodied-ai (1)