An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: generate-text

inferless/qwen3-8b

Qwen3-8B is a language model that supports seamless switching between a "thinking" mode for advanced math, coding, and logical inference, and a "non-thinking" mode for fast, natural conversation. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 28.3 KB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 0 - Forks: 0

inferless/phi-4-GGUF

A 14B model optimized in GGUF format for efficient inference, designed to excel in complex reasoning tasks. <metadata> gpu: A100 | collections: ["llama.cpp","GGUF"] </metadata>

Language: Python - Size: 14.6 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 6

inferless/tinyllama-1-1b-chat-v1-0

A chat model fine-tuned on TinyLlama, a compact 1.1B Llama model pretrained on 3 trillion tokens. <metadata> gpu: T4 | collections: ["vLLM"] </metadata>

Language: Python - Size: 23.4 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 1 - Forks: 2

inferless/llama-2-13b-chat-hf

A 13B model fine-tuned with reinforcement learning from human feedback, part of Meta’s Llama 2 family for dialogue tasks. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 43.9 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 1

inferless/falcon-7b-instruct

A 7B instruction-tuned language model that excels in following detailed prompts and effectively performing a wide variety of natural language processing tasks. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 25.4 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 2

inferless/vicuna-7b-8k

A GPTQ‑quantized variant of Vicuna 7B v1.3, optimized for conversational AI and instruction‑following with efficient, robust performance. <metadata> gpu: T4 | collections:["GPTQ"] </metadata>

Language: Python - Size: 26.4 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 2

inferless/vicuna-13b-8k

A GPTQ‑quantized, 13‑billion‑parameter uncensored language model with an extended 8K context window, designed for dynamic, high‑performance conversational tasks. <metadata> gpu: T4 | collections: ["GPTQ"] </metadata>

Language: Python - Size: 19.5 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 1

inferless/meditron-7b-gptq

A quantized open-source medical LLM designed for exam question answering, differential diagnosis support, and providing comprehensive disease, symptom, cause, and treatment information. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 38.1 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0

inferless/openhermes-2-5-mistral-7b

A quantized model fine-tuned for rapid, efficient, and robust conversational and instruction tasks. <metadata> gpu: A100 | collections: ["vLLM","AWQ"] </metadata>

Language: Python - Size: 46.9 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 3 - Forks: 0

inferless/starling-lm-7b-alpha-gptq

A GPTQ‑quantized 7B model optimized for efficient, high‑quality text generation across diverse tasks. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 38.1 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 1

inferless/tenyxchat-7b

A 7B chat model fine-tuned for robust conversational AI, delivering efficient, context-aware dialogue responses. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 28.3 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 3

inferless/llama-2-7b-gptq

A 7B conversational model fine-tuned with RLHF, deployable efficiently via vLLM for low-latency serving. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>

Language: Python - Size: 43.9 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 12

inferless/tinyllama-1-1b-chat-vllm-gguf

Deploys a GGUF-quantized version of TinyLlama-1.1B with vLLM for efficient inference. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 31.3 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 2 - Forks: 7

inferless/ministral-8b-instruct

An 8B instruction-tuned model optimized for generating coherent, context-rich responses across diverse applications. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 19.5 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0

inferless/llama-2-7b-chat

An AWQ-quantized model offering significant memory savings and faster inference while maintaining strong conversational quality. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 23.4 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 2

inferless/phi-2

A small language model delivering robust text generation and instruction following with efficient long-context comprehension. <metadata> gpu: T4 | collections: ["vLLM","Batch Input Processing"] </metadata>

Language: Python - Size: 44.9 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 5

inferless/mixtral-echo

Tryecho's Mixtral-echo is an adapter for Mixtral-8x7B, a pretrained generative sparse Mixture-of-Experts model that outperforms Llama 2 70B on most benchmarks. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 30.3 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0

inferless/vicuna-7b-1.1

Open-source chatbot fine-tuned from LLaMA on 70K ShareGPT conversations, optimized for research and conversational tasks. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 33.2 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 2

inferless/phi-3-5-moe-instruct

An instruction-tuned variant of Phi-3.5, delivering efficient, context-aware responses across diverse language tasks. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 26.4 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 1

inferless/llama-2-7b-hf

A 7B-parameter model fine-tuned for dialogue using supervised learning and RLHF; supports a context length of up to 4k tokens. <metadata> gpu: A10 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 32.2 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 1 - Forks: 3

inferless/llama-3-1-70b-awq

An AWQ-quantized version of the 70B decoder-only Llama 3.1 Transformer, fine-tuned on instruction and dialogue data (via RLHF-style techniques) to excel at instruction following, chat, and multi-turn Q&A tasks. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 17.6 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 2

inferless/smaug-72b

Smaug-72B topped the Hugging Face Open LLM Leaderboard as the first model with an average score of 80, making it one of the strongest open-source foundation models at the time. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 33.2 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 17 - Forks: 5

inferless/mixtral-8x7b Fork of rbgo404/Mixral-8x7B

A Mixture‑of‑Experts model featuring eight experts and a lightweight gating network, delivering state‑of‑the‑art text-generation efficiency and quality. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 52.7 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 1

inferless/mistral-7b-v0-1-multi-lora-adapter

Mistral‑7B‑v0.1 with four LoRA adapters (French, SQL, DPO, and ORCA), letting you swap adapters instantly per inference request. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 29.3 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 1

inferless/gemma-7b

Decoder‑only Transformer model by Google pretrained on diverse web and code corpora and optimized for zero and few‑shot text generation tasks including summarization, translation, and conversational agents. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 49.8 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 1 - Forks: 2

inferless/huatuogpt-o1-70b

A medical LLM built on LLaMA-3.1-70B, employing detailed step-by-step reasoning for complex medical problem-solving. <metadata> gpu: A100 | collections: ["HF Transformers","Variable Inputs"] </metadata>

Language: Python - Size: 37.1 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0

inferless/phi4-vllm-gptq Fork of inferless/phi4-vllm-gguf

A 14B model optimized in GPTQ format for efficient inference, designed to excel in complex reasoning tasks. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>

Language: Python - Size: 56.6 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 2

inferless/gpt-neo-125m

An autoregressive transformer model that replicates GPT‑3's architecture and is trained on The Pile, enabling versatile text generation and experimentation in NLP.

Language: Python - Size: 20.5 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 23

inferless/solar-10.7b-instruct

A 10.7B language model by UpStage, fine-tuned for advanced text generation, precise instruction-following, and diverse NLP applications, delivering remarkably robust performance across creative and enterprise tasks at scale. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 17.6 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 0

inferless/flan-ul2

A large, instruction-tuned model built on UL2, optimized for diverse text-to-text tasks like summarization, translation, and question answering.

Language: Python - Size: 29.3 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 1

inferless/llama-2-13b-chat-awq

A conversational variant of Meta's Llama-2 model with 13 billion parameters, optimized for chat and instruction-following tasks. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 74.2 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 0 - Forks: 3

inferless/llama3-tenyxchat-70b

A fine-tuned, conversational variant of the Llama3-70B model. It uses Direct Preference Optimization (DPO) with the UltraFeedback dataset for alignment and is optimized for multi-turn chat interactions. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 21.5 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 1 - Forks: 2

inferless/facebook-bart-cnn

A variant of the BART model designed specifically for natural language summarization. It was pre-trained on a large corpus of English text and later fine-tuned on the CNN/Daily Mail dataset. <metadata> gpu: T4 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 18.6 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 8 - Forks: 3

inferless/llama2-13b-8bit-gptq

A quantized version of the 13B fine-tuned Llama 2 model, optimized for dialogue use cases. <metadata> gpu: T4 | collections: ["HF Transformers","GPTQ"] </metadata>

Language: Python - Size: 29.3 KB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 0 - Forks: 3

inferless/llama-2-7b-chat-gguf

Quantized GGUF model which dramatically reduces memory requirements while preserving conversational quality. <metadata> gpu: A100 | collections: ["Using NFS Volumes", "llama.cpp"] </metadata>

Language: Python - Size: 16.6 KB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 0 - Forks: 1

inferless/Command-r-v01

A 35B model delivering high performance in reasoning, summarization, and question answering. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 25.4 KB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 2 - Forks: 4

inferless/mistral-7b-instruct-v0.2

A 7B model with a 32k-token context window and optimized attention mechanisms for superior dialogue and reasoning. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 27.3 KB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 0 - Forks: 0

inferless/phi4-vllm-gguf Fork of rbgo404/phi4-vllm-gguf

A 14B model optimized in GGUF format for efficient inference, designed to excel in complex reasoning tasks. <metadata> gpu: A100 | collections: ["vLLM","GGUF"] </metadata>

Language: Python - Size: 44.9 KB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 0 - Forks: 1

inferless/mistral-small-3.1-24b-instruct

Advanced multimodal language model developed by Mistral AI with enhanced text performance, robust vision capabilities, and an expanded context window of up to 128,000 tokens. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 28.3 KB - Last synced at: 2 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 7

inferless/gemma-2b-it

A 2B instruction-tuned model delivering coherent, instruction-following responses across a wide range of tasks. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 30.3 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 1 - Forks: 2

inferless/llama-3.1-8b-instruct-gguf

An 8B-parameter, instruction-tuned variant of Meta's Llama-3.1 model, optimized in GGUF format for efficient inference. <metadata> gpu: A100 | collections: ["llama.cpp"] </metadata>

Language: Python - Size: 30.3 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 6

inferless/gemma-3-27b-it

Gemma-3-27B-it is a multimodal model that handles both text and image inputs, supports over 140 languages, and features a context window of up to 128,000 tokens. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 15.6 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 2

inferless/gemma-2-9b-it

Instruct-tuned model for instruction following, delivering coherent, high-quality responses across a broad spectrum of tasks. <metadata> gpu: A10 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 21.5 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 2

inferless/openchat-3.5

A chat model fine-tuned with C-RLFT, a strategy inspired by offline reinforcement learning; optimized for natural, context-aware conversations and excelling at instruction following and text generation. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 16.6 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 3

inferless/neuralhermes-2.5-mistral-7b-gptq

A GPTQ‑quantized 7B language model based on Mistral, fine‑tuned for robust, efficient conversational and text generation tasks. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>

Language: Python - Size: 20.5 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 2

inferless/mixtral-8x7b-v0.1

A GPTQ-quantized variant of the Mixtral 8x7B model, fine-tuned for efficient text generation and conversational applications. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>

Language: Python - Size: 30.3 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 2 - Forks: 4

inferless/llama-3.2-3b-instruct

A compact 3B instruction-tuned model that generates detailed responses across a range of tasks. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 43 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 2

inferless/dolphin-2.5-mixtral-8x7b-gptq

A GPTQ‑quantized version of Eric Hartford’s Dolphin 2.5 Mixtral 8x7B model, fine‑tuned for coding and conversational tasks. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>

Language: Python - Size: 11.7 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 5 - Forks: 10

inferless/mistral-7b

A 7B autoregressive language model by Mistral AI, optimized for efficient text generation and robust reasoning. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 34.2 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 3 - Forks: 11

inferless/qwq-32b-preview

A 32B experimental reasoning model for advanced text generation and robust instruction following. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 30.3 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 13 - Forks: 5

inferless/mistral-small-24b-instruct

A 24B instruction-tuned model delivering context-aware, reliable responses, optimized for performance and efficiency. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 28.3 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0

inferless/deepseek-r1-distill-qwen-32b

A distilled DeepSeek-R1 variant built on Qwen2.5-32B, fine-tuned with curated data for enhanced performance and efficiency. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 23.4 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 4 - Forks: 13

inferless/zephyr-7b-beta

A 7B fine-tuned model for instruction-following and context-aware text generation across a wide range of diverse applications. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 21.5 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 2

inferless/qwen2-72b-instruct

A 72B instruct-tuned language model, AWQ-quantized for efficient inference and robust performance on diverse instruction tasks. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 39.1 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 3

inferless/neural-chat-7b-v3-1

A fine-tuned 7B model based on mistralai/Mistral-7B-v0.1, aligned with DPO on Open-Orca/SlimOrca via Intel Gaudi 2, optimized for high-performance chat. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 18.6 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 1

inferless/llama-3

A robust 8B parameter base model for diverse language tasks, offering strong performance in multilingual scenarios. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 26.4 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 2 - Forks: 7

inferless/llama-2-70b-chat-gptq

GPTQ quantized model fine-tuned for dialogue applications. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>

Language: Python - Size: 15.6 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 1

inferless/llama-2-13b-chat-gptq

GPTQ quantized model, fine-tuned to deliver efficient dialogue performance and human-like responses. <metadata> gpu: A100 | collections: ["vLLM","GPTQ"] </metadata>

Language: Python - Size: 15.6 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 2

inferless/llama-3.1-8b-instruct

An 8B multilingual instruction model fine-tuned with RLHF for chat completion, supporting up to 128k tokens. <metadata> gpu: A100 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 19.5 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 5

inferless/phi-3-128k

An instruction-tuned mini LLM with a 128k token context window, enabling efficient long-context comprehension and generation. <metadata> gpu: T4 | collections: ["HF Transformers"] </metadata>

Language: Python - Size: 23.4 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 4

inferless/gpt-neo-dynamic-batching

Deploys a GPT-Neo model with dynamic batching, where concurrent requests are grouped into a single batch for inference. <metadata> collections: ["Dynamic Batching","HF Transformers"] </metadata>

Language: Python - Size: 38.1 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0
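The dynamic-batching pattern behind the entry above can be sketched in plain Python: requests queue up, and a batch is flushed either when it is full or when a short wait deadline expires. This is a minimal illustration under assumed names (`DynamicBatcher`, `max_batch_size`, `max_wait_s` are invented here), not the Inferless implementation.

```python
import time
from queue import Queue, Empty

class DynamicBatcher:
    """Collects incoming requests into batches, flushing when the batch
    is full or a timeout elapses -- a minimal sketch of the pattern."""

    def __init__(self, max_batch_size=4, max_wait_s=0.05):
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_s
        self.queue = Queue()

    def submit(self, request):
        self.queue.put(request)

    def next_batch(self):
        batch = []
        deadline = time.monotonic() + self.max_wait_s
        while len(batch) < self.max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self.queue.get(timeout=remaining))
            except Empty:
                break
        return batch

batcher = DynamicBatcher(max_batch_size=3, max_wait_s=0.1)
for prompt in ["hello", "world", "foo", "bar"]:
    batcher.submit(prompt)
first = batcher.next_batch()   # fills up to max_batch_size: ["hello", "world", "foo"]
second = batcher.next_batch()  # drains the remainder: ["bar"]
```

In a real serving loop, each flushed batch would be passed to the model in one forward call, which is where the throughput gain comes from.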

inferless/mistral-7b-instruct-v0.3

7B model fine-tuned for precise instruction following and robust contextual understanding. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 32.2 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0

inferless/phi-4

A 14B model designed to excel in complex reasoning tasks, particularly within STEM domains. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>

Language: Python - Size: 44.9 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 4

inferless/zephyr-7b-streaming

Zephyr-7B with Server-Sent Events (SSE), enabling real-time token streaming for chat-based applications. <metadata> gpu: A100 | collections: ["Streaming LLMs", "SSE Events"] </metadata>

Language: Python - Size: 30.3 KB - Last synced at: 3 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 1
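SSE streaming, as used in the entry above, is just a text wire format: each message is one or more `data:` lines (optionally preceded by an `event:` field) terminated by a blank line. A minimal formatter, assuming a hypothetical `sse_event` helper name:

```python
def sse_event(data, event=None):
    """Format a payload as a Server-Sent Events message: an optional
    'event:' field, one 'data:' line per payload line, and a blank-line
    terminator that marks the end of the message."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    for chunk in str(data).splitlines() or [""]:
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"

# Streaming generated tokens to a client one event at a time:
stream = "".join(sse_event(tok) for tok in ["Hel", "lo", "!"])
```

A streaming LLM server would yield one such event per generated token over a `text/event-stream` HTTP response.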

birdhouses/ArticleAI

Shopify AI blogger. Generate blog posts with ChatGPT!

Language: Python - Size: 20.5 KB - Last synced at: 3 months ago - Pushed at: 10 months ago - Stars: 1 - Forks: 0

mukul-mschauhan/image-to-text

Transform your images into valuable insights and creative content using Google Gemini

Language: Python - Size: 31.3 KB - Last synced at: 9 months ago - Pushed at: 9 months ago - Stars: 0 - Forks: 0

ttop32/KoGPT2novel

Generates novel text; fine-tuned from skt KoGPT2 base v2 (Korean)

Language: Jupyter Notebook - Size: 138 KB - Last synced at: about 2 months ago - Pushed at: over 2 years ago - Stars: 12 - Forks: 2

SFLazarus/HiddenMarkovModel

Implements a Hidden Markov Model to generate new text and complete sentences

Language: Jupyter Notebook - Size: 9.77 KB - Last synced at: about 1 year ago - Pushed at: about 4 years ago - Stars: 1 - Forks: 0

MuntahaShams/Character-level-LSTM-Pytorch

In this notebook, I construct a character-level LSTM with PyTorch. The network trains character by character on some text (Anna Karenina as the example), then generates new text character by character based on the book.

Language: Jupyter Notebook - Size: 13.3 MB - Last synced at: 3 months ago - Pushed at: about 5 years ago - Stars: 3 - Forks: 0
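The first step of any character-level model like the one above is mapping characters to integer ids and back, so the network can train on integer sequences. A small sketch of that preprocessing (function names are illustrative, not from the notebook):

```python
def build_vocab(text):
    """Map each unique character to an integer id, and back again."""
    chars = sorted(set(text))
    char2id = {c: i for i, c in enumerate(chars)}
    id2char = {i: c for c, i in char2id.items()}
    return char2id, id2char

def encode(text, char2id):
    """Turn a string into the integer sequence the network trains on."""
    return [char2id[c] for c in text]

def decode(ids, id2char):
    """Turn generated integer ids back into text."""
    return "".join(id2char[i] for i in ids)

char2id, id2char = build_vocab("anna karenina")
ids = encode("anna", char2id)   # [1, 5, 5, 1]
text = decode(ids, id2char)     # "anna"
```

Training then slides a window over the encoded corpus, predicting each next character id from the ones before it.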

FancySnacks/Sequence_Generator

Generates sequences of combinations assembled from ASCII characters or custom text files

Language: Python - Size: 102 KB - Last synced at: 11 months ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0

MKamelll/nietzsche-markov-chain

A Markov chain attempt for text generation 🐔

Language: JavaScript - Size: 48.8 KB - Last synced at: 2 months ago - Pushed at: about 6 years ago - Stars: 0 - Forks: 0
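The Markov-chain approach in the entry above generates text by recording, for each word, which words follow it in the corpus, then sampling a random successor at each step. A minimal word-level sketch in Python (the repository itself is JavaScript; names here are illustrative):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from a start word, sampling a successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no recorded successor
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat the cat ran")
text = generate(chain, "the", 5, seed=1)
```

Because successors are stored with repetition, frequent transitions are sampled proportionally more often, which is what gives the output its corpus-like flavor.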