GitHub topics: prompt-compression
atjsh/llmlingua-2-js
JavaScript/TypeScript implementation of LLMLingua-2 (Experimental)
Language: TypeScript - Size: 349 KB - Last synced at: 3 days ago - Pushed at: 4 days ago - Stars: 7 - Forks: 0

centminmod/or-cli
Python command-line tool for interacting with AI models through the OpenRouter API, Cloudflare AI Gateway, or a local self-hosted Ollama instance. Optionally supports Microsoft LLMLingua prompt token compression.
Size: 17.3 MB - Last synced at: 3 days ago - Pushed at: 12 days ago - Stars: 7 - Forks: 2
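
The or-cli entry describes a common pattern: compress a long context with Microsoft LLMLingua, then send it to a model behind OpenRouter's OpenAI-compatible endpoint. The sketch below is not or-cli's own code; the model id, compression rate, and file name are illustrative assumptions, while the `llmlingua` calls follow the upstream package's documented `PromptCompressor` API.

```python
# Minimal sketch (not or-cli itself): LLMLingua-2 compression + OpenRouter call.
import os
import requests
from llmlingua import PromptCompressor  # pip install llmlingua

long_context = open("report.txt", encoding="utf-8").read()  # any long document
question = "Summarize the key findings in three bullet points."

# LLMLingua-2 token-classification compressor, per the llmlingua README
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)
result = compressor.compress_prompt(long_context, rate=0.33)  # keep ~1/3 of tokens

# OpenRouter exposes an OpenAI-compatible chat completions endpoint
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "openai/gpt-4o-mini",  # illustrative model id
        "messages": [
            {"role": "user",
             "content": result["compressed_prompt"] + "\n\n" + question},
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```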

kaistAI/GenPI
This repository is the official implementation of Generative Context Distillation.
Language: Python - Size: 2.57 MB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 4 - Forks: 0

ksm26/Prompt-Compression-and-Query-Optimization
Enhance the performance and cost-efficiency of large-scale Retrieval Augmented Generation (RAG) applications. Learn to integrate vector search with traditional database operations and apply techniques like prefiltering, postfiltering, projection, and prompt compression.
Language: Jupyter Notebook - Size: 88.9 KB - Last synced at: 3 months ago - Pushed at: 11 months ago - Stars: 0 - Forks: 0
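
For the techniques this course repo names (vector search combined with prefiltering, postfiltering, and projection ahead of prompt compression), here is a minimal sketch using MongoDB Atlas Vector Search via pymongo. It is not the notebook's actual code: the connection string, database, collection, index name, field names, and the SentenceTransformer embedding model are assumptions.

```python
# Sketch: vector search + prefilter/postfilter/projection before prompt compression.
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
coll = client["airbnb"]["listings"]

query_vector = encoder.encode("pet-friendly apartment near the beach").tolist()

pipeline = [
    {   # Atlas $vectorSearch with a prefilter applied before scoring
        "$vectorSearch": {
            "index": "listings_vector_index",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 200,
            "limit": 20,
            "filter": {"accommodates": {"$gte": 2}},   # prefiltering
        }
    },
    {"$match": {"price": {"$lte": 150}}},              # postfiltering
    {"$project": {"_id": 0, "name": 1,                 # projection: keep only
                  "summary": 1, "price": 1}},          # fields the LLM needs
]
docs = list(coll.aggregate(pipeline))
context = "\n\n".join(d["summary"] for d in docs)
# A prompt compressor (e.g. LLMLingua) would then shrink `context`
# before it is placed into the final RAG prompt.
```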

contextcrunch-ai/contextcrunch-python
Compress LLM prompts and save 80%+ on GPT-4 costs, in Python.
Language: Python - Size: 7.81 KB - Last synced at: about 1 month ago - Pushed at: over 1 year ago - Stars: 3 - Forks: 0
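
The "80%+ savings" claim follows from input cost being linear in prompt tokens: cutting the prompt's token count by 80% cuts its input cost by roughly 80%. A back-of-the-envelope check with tiktoken is sketched below; the file names and the per-token price are illustrative assumptions, and the compressed text is assumed to come from any prompt compressor, not necessarily ContextCrunch.

```python
# Rough cost-savings arithmetic: token counts before/after compression.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
original = open("long_conversation.txt", encoding="utf-8").read()
compressed = open("compressed_conversation.txt", encoding="utf-8").read()  # compressor output

orig_tokens = len(enc.encode(original))
comp_tokens = len(enc.encode(compressed))
price_per_1k_input = 0.03  # illustrative USD figure, not current pricing

saving = 1 - comp_tokens / orig_tokens
print(f"{orig_tokens} -> {comp_tokens} tokens ({saving:.0%} fewer)")
print(f"prompt input cost: ${orig_tokens / 1000 * price_per_1k_input:.2f} -> "
      f"${comp_tokens / 1000 * price_per_1k_input:.2f}")
```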
