Topic: "p-tuning"
liucongg/ChatGLM-Finetuning
Fine-tuning of the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models on specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.
Language: Python - Size: 1.35 MB - Last synced at: 5 days ago - Pushed at: over 1 year ago - Stars: 2,744 - Forks: 311

PhoebusSi/Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use. This fine-tuning platform is built to help researchers get started with large models quickly; we welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible.
Language: Jupyter Notebook - Size: 137 MB - Last synced at: 6 days ago - Pushed at: over 1 year ago - Stars: 2,744 - Forks: 253

THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
Language: Python - Size: 1.41 MB - Last synced at: 5 days ago - Pushed at: over 1 year ago - Stars: 2,037 - Forks: 203
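
P-Tuning v2's "deep" prompts are trainable vectors prepended at every transformer layer rather than only at the input. As an orientation aid, the following is a minimal sketch of that style of deep prompt tuning using the Hugging Face peft library's prefix-tuning implementation (not the repository's own code); the backbone name and hyperparameters are illustrative assumptions.

# Deep prompt tuning sketch via Hugging Face peft; model name and
# hyperparameter values are assumptions, not taken from THUDM/P-tuning-v2.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PrefixTuningConfig, TaskType, get_peft_model

model_name = "bert-base-uncased"  # illustrative backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Trainable prefix key/value vectors are injected at every layer;
# the backbone weights stay frozen.
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prompt parameters require gradients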

THUDM/P-tuning
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
Language: Python - Size: 5.98 MB - Last synced at: 4 days ago - Pushed at: over 2 years ago - Stars: 932 - Forks: 112
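
In contrast to v2, the original P-tuning learns continuous prompt embeddings at the input only, re-parameterized through a small prompt encoder (LSTM or MLP). Below is a minimal sketch using the Hugging Face peft library's PromptEncoderConfig (again, not the paper's released code; the backbone and hyperparameter values are assumptions).

# Original P-tuning sketch via Hugging Face peft; backbone and
# hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptEncoderConfig, TaskType, get_peft_model

model_name = "gpt2"  # illustrative backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Virtual prompt tokens pass through an MLP prompt encoder; only these
# prompt parameters are updated during training.
peft_config = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
    encoder_reparameterization_type="MLP",
    encoder_hidden_size=128,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()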

yuanjie-ai/ChatLLM
Work with LLMs with ease; compatible with OpenAI & LangChain, and supports ERNIE Bot (Wenxin Yiyan), iFLYTEK Spark, Tencent Hunyuan, Zhipu ChatGLM, and more.
Language: Jupyter Notebook - Size: 40.9 MB - Last synced at: 4 days ago - Pushed at: 8 months ago - Stars: 445 - Forks: 56

openhackathons-org/End-to-End-LLM
This repository contains AI Bootcamp material consisting of an end-to-end workflow for LLMs.
Language: Jupyter Notebook - Size: 24.3 MB - Last synced at: 28 days ago - Pushed at: 28 days ago - Stars: 84 - Forks: 36

FreedomIntelligence/DPTDR
Code for the COLING 2022 paper "DPTDR: Deep Prompt Tuning for Dense Passage Retrieval".
Language: Python - Size: 40 KB - Last synced at: 20 days ago - Pushed at: almost 2 years ago - Stars: 25 - Forks: 5

avnlp/llm-finetuning
Language: Python - Size: 817 KB - Last synced at: 30 days ago - Pushed at: 3 months ago - Stars: 2 - Forks: 0

HROlive/Poland-End-To-End-LLM-Bootcamp
This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It also includes hands-on exercises complemented by tutorials, code snippets, and presentations to help researchers get started with NeMo LLM Service and Guardrails.
Language: Jupyter Notebook - Size: 20.6 MB - Last synced at: 10 days ago - Pushed at: about 1 year ago - Stars: 2 - Forks: 1

yuchengml/Adaptation-Tuning-PEFT
Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks.
Language: Python - Size: 130 KB - Last synced at: 12 months ago - Pushed at: over 1 year ago - Stars: 1 - Forks: 0

NJUxlj/p-tuning-v2-reproduce
A reproduction of the prompt-learning method P-Tuning v2 from the paper "P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks"; models used: DeBERTa and ChatGLM2; additional task: RACE.
Language: Python - Size: 249 KB - Last synced at: 9 days ago - Pushed at: 9 days ago - Stars: 0 - Forks: 0

bugface/P-tuning-v2-MRC-NER
P-Tuning v2 integrated with MRC (machine reading comprehension) for NER.
Language: Python - Size: 1.77 MB - Last synced at: almost 2 years ago - Pushed at: about 2 years ago - Stars: 0 - Forks: 0
