An open API service providing repository metadata for many open source software ecosystems.

Topic: "parameter-efficient-fine-tuning"

NVlabs/DoRA

[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation

Language: Python - Size: 3.06 MB - Last synced at: 11 days ago - Pushed at: 11 months ago - Stars: 837 - Forks: 60
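
The title names the core mechanism, which can be sketched in a few lines of PyTorch. This is an illustrative reconstruction, not the official NVlabs code: the frozen pretrained weight is factored into a magnitude vector and a unit-norm direction matrix, and a LoRA update is trained on the direction only; the class name and rank default are assumptions.

```python
import torch
import torch.nn as nn

class DoRALinearSketch(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.register_buffer("w0", base.weight.detach())   # frozen pretrained W0
        # magnitude m, initialized to the per-column norm of W0 (paper notation)
        self.m = nn.Parameter(self.w0.norm(dim=0, keepdim=True))
        # LoRA factors that update only the direction component
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x):
        w = self.w0 + self.B @ self.A                 # W0 + BA
        w = w / w.norm(dim=0, keepdim=True)           # renormalize columns (direction)
        return nn.functional.linear(x, self.m * w)    # scale by learned magnitude
```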

synbol/Awesome-Parameter-Efficient-Transfer-Learning

Collection of awesome parameter-efficient fine-tuning resources.

Size: 205 KB - Last synced at: 10 days ago - Pushed at: about 2 months ago - Stars: 570 - Forks: 15

Paranioar/Awesome_Matching_Pretraining_Transfering

A paper list covering large multi-modality models (perception, generation, unification), parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, for preliminary insight.

Size: 369 KB - Last synced at: 11 days ago - Pushed at: 9 months ago - Stars: 428 - Forks: 49

Chongjie-Si/Subspace-Tuning

A generalized framework for subspace tuning methods in parameter-efficient fine-tuning.

Language: Python - Size: 53.3 MB - Last synced at: 1 day ago - Pushed at: 3 months ago - Stars: 154 - Forks: 5

juzhengz/LoRI

LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

Language: Python - Size: 23.5 MB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 131 - Forks: 7

liuqidong07/MOELoRA-peft

[SIGIR'24] The official implementation code of MOELoRA.

Language: Python - Size: 10.2 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 105 - Forks: 11
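
MOELoRA pairs LoRA with a mixture-of-experts router conditioned on the task. The sketch below shows that combination in generic PyTorch; the shapes, names, and task-embedding gate are assumptions, not the SIGIR'24 code.

```python
import torch
import torch.nn as nn

class MoELoRASketch(nn.Module):
    def __init__(self, base: nn.Linear, n_experts: int = 4, r: int = 4, n_tasks: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)          # frozen backbone layer
        self.A = nn.Parameter(torch.randn(n_experts, r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, base.out_features, r))
        self.gate = nn.Embedding(n_tasks, n_experts)    # task id -> expert logits

    def forward(self, x, task_id):
        w = torch.softmax(self.gate(task_id), dim=-1)   # (batch, n_experts)
        h = torch.einsum("bi,eri->ber", x, self.A)      # per-expert down-projection
        delta = torch.einsum("ber,eor->beo", h, self.B) # per-expert up-projection
        return self.base(x) + torch.einsum("be,beo->bo", w, delta)
```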

ShiZhengyan/DePT

[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"

Language: Python - Size: 3.71 MB - Last synced at: 5 months ago - Pushed at: over 1 year ago - Stars: 96 - Forks: 16
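
For context, plain soft prompt tuning trains a small matrix of prompt vectors prepended to the input embeddings while the backbone stays frozen; per its abstract, DePT further decomposes that soft prompt into a shorter prompt plus a low-rank update of the word embeddings. A minimal sketch of the baseline (illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens: int, d_model: int):
        super().__init__()
        # the only trainable parameters: n_tokens soft prompt vectors
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds):                   # (batch, seq, d_model)
        batch = input_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, input_embeds], dim=1)     # prepend trainable prompt
```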

ziplab/SPT

[ICCV 2023 Oral] This is the official repository for our paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".

Language: Python - Size: 15.5 MB - Last synced at: 5 months ago - Pushed at: almost 2 years ago - Stars: 67 - Forks: 2

miccunifi/KDPL

[ECCV 2024] - Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation

Language: Python - Size: 5.06 MB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 62 - Forks: 1

astra-vision/FAMix

[CVPR 2024] Domain generalization by interpolating original feature styles with styles obtained using random descriptions in natural language

Language: Python - Size: 54.3 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 51 - Forks: 3

iboing/CorDA

CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024)

Language: Python - Size: 1.96 MB - Last synced at: 8 months ago - Pushed at: 8 months ago - Stars: 41 - Forks: 1

OSU-MLB/ViT_PEFT_Vision

[CVPR'25 (Highlight)] Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition

Language: Jupyter Notebook - Size: 3.58 MB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 39 - Forks: 0

umbertocappellazzo/PETL_AST

This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".

Language: Python - Size: 3.09 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 32 - Forks: 1

astra-vision/ProLIP

An extremely simple method for validation-free efficient adaptation of CLIP-like VLMs that is robust to the learning rate.

Language: Shell - Size: 3.9 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 24 - Forks: 2

auniquesun/PPT

[ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"

Language: Jupyter Notebook - Size: 11 MB - Last synced at: 7 months ago - Pushed at: 7 months ago - Stars: 22 - Forks: 5

fredzzhang/atlas

Official PyTorch implementation for NeurIPS'24 paper "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling"

Language: Python - Size: 689 KB - Last synced at: 7 months ago - Pushed at: 7 months ago - Stars: 18 - Forks: 0
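
A task vector is the difference between fine-tuned and pretrained weights, and composition adds scaled task vectors back onto the base model. The sketch below shows the generic arithmetic; the per-parameter `alpha` coefficients stand in for the learned anisotropic scaling that is the paper's actual contribution.

```python
import torch

def task_vector(pretrained: dict, finetuned: dict) -> dict:
    """tau = theta_ft - theta_pre, computed per parameter tensor (state dicts)."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained: dict, taus: list, alphas: list) -> dict:
    """Compose task vectors onto the base model; alphas[i][k] scales task i's
    update for parameter k (per-parameter scaling = anisotropic; placeholder
    values here, learned in the paper)."""
    merged = {k: v.clone() for k, v in pretrained.items()}
    for tau, alpha in zip(taus, alphas):
        for k in merged:
            merged[k] += alpha[k] * tau[k]
    return merged
```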

PurdueDigitalTwin/MACP

[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.

Language: Python - Size: 106 MB - Last synced at: 5 days ago - Pushed at: over 1 year ago - Stars: 18 - Forks: 1

CASE-Lab-UMD/Router-Tuning-Mixture-of-Depths

The open-source Mixture of Depths code and the official implementation of the paper "Router-Tuning: A Simple and Effective Approach for Enabling Dynamic Depth in Transformers" (EMNLP 2025).

Language: Python - Size: 467 MB - Last synced at: 21 days ago - Pushed at: 21 days ago - Stars: 15 - Forks: 2

cityuhkai/SBoRA

Language: Python - Size: 4.24 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 9 - Forks: 0

Raman1121/FairTune

A framework to optimize Parameter-Efficient Fine-Tuning for Fairness in Medical Image Analysis

Language: Python - Size: 235 KB - Last synced at: 9 months ago - Pushed at: over 1 year ago - Stars: 7 - Forks: 1

ssfgunner/SNELL

[NeurIPS 2024] This is the official repository for our paper "Expanding Sparse Tuning for Low Memory Usage".

Language: Python - Size: 431 KB - Last synced at: 6 months ago - Pushed at: 6 months ago - Stars: 6 - Forks: 0

YuanheZ/LoRA-One

LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently (ICML2025 Oral)

Language: Jupyter Notebook - Size: 4.61 MB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 5 - Forks: 0

fork123aniket/LLM-RAG-powered-QA-App

A Production-Ready, Scalable RAG-powered LLM-based Context-Aware QA App

Language: Python - Size: 22.5 KB - Last synced at: about 1 month ago - Pushed at: 8 months ago - Stars: 5 - Forks: 1

ltlhuuu/PSEC

[ICLR 2025] The official implementation of "PSEC: Skill Expansion and Composition in Parameter Space"

Language: Python - Size: 67.5 MB - Last synced at: 7 months ago - Pushed at: 7 months ago - Stars: 3 - Forks: 0

GeorgeVern/lmcor

Code for the EACL 2024 paper: "Small Language Models Improve Giants by Rewriting Their Outputs"

Language: Python - Size: 401 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 3 - Forks: 0

punpunzaz10/TADFormer

Efficient multi-task learning with TADFormer, a task-adaptive dynamic transformer.

Language: Python - Size: 126 KB - Last synced at: 4 days ago - Pushed at: 4 days ago - Stars: 2 - Forks: 0

Paranioar/SHERL

[ECCV2024] The code of "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning"

Size: 8.79 KB - Last synced at: 12 months ago - Pushed at: 12 months ago - Stars: 2 - Forks: 0

Md-Emon-Hasan/Fine-Tuning

End-to-end fine-tuning of Hugging Face models using LoRA, QLoRA, quantization, and PEFT techniques, optimized for low-memory environments and efficient model deployment.

Language: Jupyter Notebook - Size: 5.53 MB - Last synced at: about 2 months ago - Pushed at: 3 months ago - Stars: 1 - Forks: 0

Lake-Wang/NLP_Adapter_Parameter_Allocation

This project investigates the robustness of parameter allocation strategies in Mix-and-Match (MAM) Adapters for PEFT across different tunable budgets. Our ablation study reveals that optimal allocation ratios vary by task and scale, challenging the generalizability of default MAM configurations.

Language: Python - Size: 498 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 1 - Forks: 0

Sid3503/LoRA

A beginner-friendly guide to Low-Rank Adaptation (LoRA) - the efficient fine-tuning technique for LLMs. Explains core concepts with intuitive visuals, math, and minimal code.

Language: Jupyter Notebook - Size: 1.27 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 1 - Forks: 0
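
The core idea such guides explain fits in one module: y = W0 x + (alpha/r) * B A x, with the pretrained weight frozen, A Gaussian-initialized, and B zero-initialized so training starts from the pretrained function. A minimal sketch with illustrative defaults:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base.requires_grad_(False)   # frozen pretrained layer
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scaling = alpha / r

    def forward(self, x):
        # low-rank update: x -> A x -> B (A x), scaled by alpha / r
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```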

rochitasundar/Generative-AI-with-Large-Language-Models

This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".

Language: Jupyter Notebook - Size: 218 KB - Last synced at: over 1 year ago - Pushed at: almost 2 years ago - Stars: 1 - Forks: 0

Thiraput01/QwenMed

Qwen3 fine-tuned on medical datasets with reasoning data

Language: Jupyter Notebook - Size: 182 KB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 0 - Forks: 0

wahabzh/lora-smollm-finetuning

⚙️ LoRA implementation for efficient SmolLM fine-tuning. Achieves comparable performance with only 0.24% trainable parameters.

Language: Jupyter Notebook - Size: 155 KB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 0 - Forks: 0
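
Trainable-parameter percentages like the 0.24% quoted here are typically read off from the Hugging Face peft library; a sketch under assumed hyperparameters (the model id and target modules are assumptions):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
# prints e.g. "trainable params: ... || all params: ... || trainable%: 0.24"
model.print_trainable_parameters()
```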

samay-jain/Fine_tuning_Distilbert_Model_using_LoRA_Low-Rank-Adaptation

Parameter-efficient fine-tuning of DistilBERT using LoRA for sentiment and topic classification, with CLI, API, and interactive chatbot interfaces.

Language: Python - Size: 171 MB - Last synced at: about 1 month ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0

AkshaySyal/Parameter-Efficient-Fine-Tuning-with-LoRA

Demonstrates fine-tuning the Flan-T5 model for dialogue summarization using both full fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) with LoRA, evaluating the performance improvements with ROUGE metrics.

Language: Jupyter Notebook - Size: 30.3 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0
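
The ROUGE comparison such notebooks run can be reproduced with the Hugging Face evaluate library; the prediction and reference strings below are placeholders:

```python
import evaluate

rouge = evaluate.load("rouge")
full_ft_preds = ["the customer asked for a refund"]    # from full fine-tuning
peft_preds = ["the customer requested a refund"]       # from the LoRA model
references = ["customer requests a refund for the order"]

for name, preds in [("full", full_ft_preds), ("peft", peft_preds)]:
    scores = rouge.compute(predictions=preds, references=references)
    print(name, scores["rouge1"], scores["rougeL"])
```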

tachyon-beep/quicksilver

A PyTorch-based morphogenetic engine enabling frozen neural networks to undergo localized, seed-driven structural evolution to adapt to new tasks.

Size: 5.86 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0

surabhiwaingankar/NeutralABSA

Enhancing Neutral Sentiment Classification in Aspect-Based Sentiment Analysis (ABSA)

Language: Jupyter Notebook - Size: 1.55 MB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 0 - Forks: 0

Vamsi-Dath/QLoRA_for_Software_Bug_Detection

Fine-tuning for bug detection and bug fixing using low-rank adapters. Preprocessed buggy–fixed code pairs into structured AST representations, highlighting semantic differences to optimize fine-grained model training. Fine-tuned CodeLlama-7B using QLoRA (4-bit), updating only 0.1% of model weights (q_proj, v_proj).

Language: Jupyter Notebook - Size: 2.98 MB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 0 - Forks: 0
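
The entry's recipe (CodeLlama-7B, 4-bit QLoRA, q_proj/v_proj adapters) maps onto standard transformers and peft calls; the rank and alpha values below are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NF4 quantization from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf",
                                             quantization_config=bnb)
model = prepare_model_for_kbit_training(model)   # freeze + cast for k-bit training
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"]))
```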

XelfXendr/peft_unlearning

Repository exploring the use of parameter-efficient fine-tuning methods for unlearning sensitive information from LLMs

Language: Jupyter Notebook - Size: 1.56 MB - Last synced at: 4 months ago - Pushed at: 4 months ago - Stars: 0 - Forks: 0

iurada/talos-task-arithmetic

Official repository of our work "Efficient Model Editing with Task-Localized Sparse Fine-tuning" accepted at ICLR 2025

Language: Python - Size: 84 KB - Last synced at: 7 months ago - Pushed at: 7 months ago - Stars: 0 - Forks: 0

lliutianc/roselora

Language: Python - Size: 14.3 MB - Last synced at: 7 months ago - Pushed at: 7 months ago - Stars: 0 - Forks: 0

Hamid-Nasiri/EDoRA

EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition

Language: Python - Size: 1.69 MB - Last synced at: 8 months ago - Pushed at: 8 months ago - Stars: 0 - Forks: 0
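
The title points to SVD-based initialization of low-rank factors. A hedged sketch of that generic step (the actual EDoRA method lives in the repository; this only shows how top singular triplets yield LoRA-style factors):

```python
import torch

def svd_lowrank_factors(W: torch.Tensor, r: int):
    """Initialize LoRA-style factors from the top-r singular triplets of W."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    B = U[:, :r] * S[:r].sqrt()                # (out_features, r)
    A = S[:r].sqrt().unsqueeze(1) * Vh[:r]     # (r, in_features)
    return A, B                                # B @ A is the best rank-r approx of W
```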

RuvenGuna94/Dialogue-Summary-PEFT-Fine-Tuning

This notebook fine-tunes the FLAN-T5 model for dialogue summarization, comparing full fine-tuning with Parameter-Efficient Fine-Tuning (PEFT). It evaluates performance using ROUGE metrics, demonstrating PEFT's efficiency while achieving competitive results.

Language: Jupyter Notebook - Size: 289 KB - Last synced at: 3 days ago - Pushed at: 8 months ago - Stars: 0 - Forks: 0

qiqinyi/GenAI-with-LLMs

My lab work for the “Generative AI with Large Language Models” course offered by DeepLearning.AI and Amazon Web Services on Coursera.

Language: Jupyter Notebook - Size: 28.9 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 0 - Forks: 0

giuseppedipoce/Task-Arithmetic-Tuning-of-MobileNetV2-

This repository contains a project whose goal is to find a new parameter-efficient fine-tuning framework that improves the performance of deep neural networks on out-of-distribution (OOD) data. In this specific case, it addresses a multi-task learning problem.

Language: Jupyter Notebook - Size: 2.08 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 0 - Forks: 0

architkaila/Fine-Tuning-LLMs-for-Medical-Entity-Extraction

Exploring the potential of fine-tuning Large Language Models (LLMs) like Llama2 and StableLM for medical entity extraction. This project focuses on adapting these models using PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side-effects from pharmaceutical texts.

Language: Python - Size: 761 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 0 - Forks: 0

Andy-LZH/peft4clip

Parameter Efficient Fine-Tuning for CLIP

Language: Python - Size: 19.7 MB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 0 - Forks: 0

alinourian/Fine-tuning-Mistral-7b-QA

Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans)

Language: Jupyter Notebook - Size: 20.5 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 0 - Forks: 0

Related Topics
large-language-models (17), lora (16), peft (13), fine-tuning (11), low-rank-adaptation (11), deep-learning (11), transfer-learning (10), machine-learning (8), pytorch (8), nlp (6), parameter-efficient-tuning (5), natural-language-processing (5), transformer (5), huggingface (4), adapter (4), computer-vision (4), large-vision-language-models (3), llm (3), flan-t5 (3), prompt-tuning (3), llms (3), transformers (3), commonsense-reasoning (3), task-arithmetic (3), qlora (3), peft-fine-tuning-llm (3), vision-transformer (3), generative-ai (2), python (2), mixture-of-experts (2), vision-language-model (2), prompt-learning (2), model-merging (2), few-shot-learning (2), scene-understanding (2), test-time-adaptation (2), awesome-list (2), pretrained-models (2), continual-learning (2), memory-efficient-tuning (2), vision-and-language (2), instruction-tuning (2), language-model (2), fine-tuning-llm (2), hugging-face (2), sentiment-analysis (2), reinforcement-learning (2), proximal-policy-optimization (2), kl-divergence (2), mixture-of-depths (1), kdpl (1), visual-recognition (1), llama (1), autonomous-vehicles (1), cooperative-perception (1), cvf-conference (1), model-adaptation (1), object-tracking (1), perception (1), wacv (1), wacv2024 (1), data-augmentation (1), bert-model (1), clip (1), decision-making (1), aspect-term-extraction (1), robotics (1), text-summarization (1), neural-network-search (1), neural-networks (1), context-aware-system (1), eleutherai (1), llm-inference (1), llm-serving (1), llm-training (1), llmops (1), question-answering (1), glue (1), ray (1), ray-serve (1), knowledge-distillation (1), retrieval-augmented-generation (1), ai (1), finetuning (1), cvpr2025 (1), pre-trained-model (1), vision-recognition (1), aspect-polarity-classification (1), style-mixing (1), vision-language (1), natural-language-understanding (1), dialogue-summarization (1), rouge (1), unlearning (1), topic-classification (1), text-classification (1), contrastive-language-image-pretraining (1), few-shot-classifcation (1), open-classification (1), ag-news (1)