Topic: "text-tokenization"
alasdairforsythe/tokenmonster
Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript
Language: Go - Size: 734 KB - Last synced at: 27 days ago - Pushed at: 10 months ago - Stars: 575 - Forks: 20
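An "ungreedy" tokenizer does not simply take the longest vocabulary match at each position; it weighs alternative splits to find a better overall segmentation. The sketch below is purely illustrative (it is not TokenMonster's actual algorithm): it contrasts greedy longest-match against a dynamic-programming search for the fewest-tokens segmentation over a toy vocabulary.

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match: always take the longest vocab entry at each position."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

def fewest_tokens(text, vocab):
    """Dynamic programming: find a segmentation with the fewest tokens overall."""
    n = len(text)
    best = [None] * (n + 1)  # best[i] = shortest token list covering text[:i]
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            piece = text[j:i]
            if (piece in vocab or len(piece) == 1) and best[j] is not None:
                cand = best[j] + [piece]
                if best[i] is None or len(cand) < len(best[i]):
                    best[i] = cand
    return best[n]
```

With `vocab = {"abcd", "abc", "def"}` and input `"abcdef"`, greedy matching grabs `"abcd"` and is left with two single-character fallbacks (3 tokens), while the global search finds `["abc", "def"]` (2 tokens).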

twardoch/split-markdown4gpt
A Python tool for splitting large Markdown files into smaller sections based on a specified token limit. This is particularly useful for processing large Markdown files with GPT models, as it allows the models to handle the data in manageable chunks.
Language: Python - Size: 80.1 KB - Last synced at: 20 days ago - Pushed at: about 2 months ago - Stars: 23 - Forks: 2
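The core idea of token-limit splitting can be sketched in a few lines: break the Markdown into blocks at blank lines, then pack blocks into chunks until adding the next block would exceed the limit. This is a simplified sketch, not split-markdown4gpt's implementation; the whitespace-based `count_tokens` default is a stand-in for a real model tokenizer such as tiktoken.

```python
def split_markdown(text, max_tokens, count_tokens=lambda s: len(s.split())):
    """Split Markdown into chunks whose token count stays under max_tokens.

    count_tokens is a stand-in: a real tool would count tokens with the
    target model's tokenizer rather than by splitting on whitespace.
    """
    blocks = [b for b in text.split("\n\n") if b.strip()]
    chunks, current, current_count = [], [], 0
    for block in blocks:
        n = count_tokens(block)
        if current and current_count + n > max_tokens:
            chunks.append("\n\n".join(current))  # flush the full chunk
            current, current_count = [], 0
        current.append(block)
        current_count += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Note that a single block longer than `max_tokens` still becomes its own chunk; a production splitter would recurse into such blocks (e.g., by sentence).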

SayamAlt/Resume-Classification-using-fine-tuned-BERT
Developed a resume classification model, built by fine-tuning BERT, that classifies a resume into its corresponding job category with accuracy above 99%.
Language: Jupyter Notebook - Size: 1.19 MB - Last synced at: 20 days ago - Pushed at: over 2 years ago - Stars: 7 - Forks: 4

markiskorova/Machine-Learning-NLP-Predict-Author
Machine Learning & Natural Language Processing: Reads Classic Novels and Predicts the Author of a Phrase
Language: Python - Size: 3.49 MB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 1 - Forks: 0

adilrasheed139/AI-Powered-Resume-Screening-using-BERT
Developed a resume classification model, built by fine-tuning BERT, that classifies a resume into its corresponding job category with accuracy above 99%.
Language: Jupyter Notebook - Size: 1.19 MB - Last synced at: 18 days ago - Pushed at: 4 months ago - Stars: 1 - Forks: 0

SayamAlt/Fake-News-Classification-using-fine-tuned-BERT
Developed a text classification model that predicts whether a given news text is fake by fine-tuning a pretrained BERT transformer model from Hugging Face.
Language: Jupyter Notebook - Size: 18 MB - Last synced at: 16 days ago - Pushed at: 4 months ago - Stars: 1 - Forks: 0

katanabana/Nihotip
Nihotip is a web app that lets users explore Japanese text through interactive tokenization and detailed insights. Built with React and Python, it offers a dynamic way to analyze words and symbols with tooltips for deeper understanding.
Language: JavaScript - Size: 44.6 MB - Last synced at: 12 days ago - Pushed at: 7 months ago - Stars: 1 - Forks: 0

victoryosiobe/kingchop
Kingchop ⚔️ is a JavaScript library for tokenizing (chopping) English text. It applies an extensive set of tokenization rules, which you can easily adjust.
Language: JavaScript - Size: 85 KB - Last synced at: 8 days ago - Pushed at: 9 months ago - Stars: 1 - Forks: 0
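Rule-based tokenizers like this are commonly built from an ordered set of patterns tried at each position. The sketch below (in Python rather than JavaScript, and with an illustrative rule set that is not kingchop's) shows the pattern: numbers with decimals, words with contractions, and standalone punctuation, tried in that order.

```python
import re

# An illustrative rule set (not kingchop's actual rules): keep decimal
# numbers, contractions, and words intact; emit punctuation separately.
TOKEN_RULES = re.compile(r"""
    \d+(?:\.\d+)?      # numbers, including decimals like 3.14
  | \w+(?:'\w+)?       # words, including contractions like don't
  | [^\w\s]            # any single punctuation mark
""", re.VERBOSE)

def tokenize(text):
    """Return all tokens matched by the rule set, in order of appearance."""
    return TOKEN_RULES.findall(text)
```

Because the rules live in one regex, adjusting the tokenizer means editing or reordering the alternatives, which mirrors the "adjustable rules" design the library describes.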

SayamAlt/Financial-News-Sentiment-Analysis
Developed a fine-tuned DistilBERT transformer model that predicts the overall sentiment of a piece of financial news with an accuracy of nearly 81.5%.
Language: Jupyter Notebook - Size: 745 KB - Last synced at: 2 months ago - Pushed at: 12 months ago - Stars: 1 - Forks: 0

Software-Research-Lab/dropsuit-tok
The tok function is a JavaScript/Node.js function that processes object instances and tokenizes text arrays. It returns the token count, the array of tokens, and the tokens concatenated into a single string. It is part of the open-source DropSuit NLP library, released under the Apache License 2.0.
Language: JavaScript - Size: 375 KB - Last synced at: 5 months ago - Pushed at: almost 2 years ago - Stars: 1 - Forks: 0
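The described return shape (count, token array, joined string) is easy to picture with a minimal analogue. This is a Python sketch of that interface, not DropSuit's code, and the whitespace split stands in for its tokenization rules.

```python
def tok(texts):
    """Tokenize a list of strings; return (count, tokens, joined string).

    A Python analogue of the described return shape, not DropSuit's
    implementation; splitting on whitespace is an assumption here.
    """
    tokens = [word for text in texts for word in text.split()]
    return len(tokens), tokens, " ".join(tokens)
```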

SayamAlt/News-Category-Classification
Developed a news category classification model using fine-tuned BERT that classifies a news text into its respective category, i.e., Politics, Business, Technology, or Entertainment.
Language: Jupyter Notebook - Size: 3.69 MB - Last synced at: 2 months ago - Pushed at: over 2 years ago - Stars: 1 - Forks: 0

SayamAlt/Mental-Health-Classification-using-fine-tuned-DistilBERT
Built a multiclass text classification model by fine-tuning a pretrained DistilBERT transformer model to classify distinct mental health statuses such as anxiety, stress, and personality disorder, reaching an accuracy of 77%.
Language: Jupyter Notebook - Size: 2.07 MB - Last synced at: about 2 months ago - Pushed at: 3 months ago - Stars: 0 - Forks: 0

SayamAlt/Luxury-Apparel-Product-Category-Classification-using-fine-tuned-DistilBERT
Developed a multiclass text classification model by fine-tuning a pretrained DistilBERT transformer model to classify luxury apparel items into their respective categories, e.g., pants, accessories, underwear, and shoes.
Language: Jupyter Notebook - Size: 3.7 MB - Last synced at: about 2 months ago - Pushed at: 4 months ago - Stars: 0 - Forks: 0

SayamAlt/Cyberbullying-Classification-using-fine-tuned-DistilBERT
Fine-tuned a pretrained DistilBERT transformer model to classify social media text into one of four cyberbullying labels (ethnicity/race, gender/sexual, religion, or not cyberbullying) with 99% accuracy.
Language: Jupyter Notebook - Size: 7.24 MB - Last synced at: 2 months ago - Pushed at: 10 months ago - Stars: 0 - Forks: 0

SayamAlt/English-to-Spanish-Language-Translation-using-Seq2Seq-and-Attention
Built a Seq2Seq model with attention that performs English-to-Spanish translation with an accuracy of almost 97%.
Language: Jupyter Notebook - Size: 1.18 MB - Last synced at: 2 months ago - Pushed at: 12 months ago - Stars: 0 - Forks: 0

SayamAlt/Global-News-Headlines-Text-Summarization
Built a text summarization model using Seq2Seq modeling with Luong attention that produces short, concise summaries of global news headlines.
Language: Jupyter Notebook - Size: 513 KB - Last synced at: 2 months ago - Pushed at: 12 months ago - Stars: 0 - Forks: 0

SayamAlt/Symptoms-Disease-Text-Classification
Developed a fine-tuned BERT transformer model that classifies symptom descriptions into their corresponding diseases with up to 89% accuracy.
Language: Jupyter Notebook - Size: 860 KB - Last synced at: 2 months ago - Pushed at: 12 months ago - Stars: 0 - Forks: 0

Aalaa4444/Text_Processing-and-Unique_Word_Extraction_fromHTML
Extract text content from an HTML page, process it, and extract unique words from the processed text. This notebook utilizes various text processing techniques including cleaning, normalization, tokenization, lemmatization or stemming, and stop words removal.
Language: Jupyter Notebook - Size: 12.7 KB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 0 - Forks: 0
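The pipeline this notebook describes (extract text from HTML, clean and normalize it, tokenize, drop stop words, collect unique words) can be sketched with only the standard library. This is an assumed minimal version, not the notebook's code: the stop-word list is a tiny sample, and stemming/lemmatization (which would need a library like NLTK) is omitted.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text content, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

STOP_WORDS = {"a", "an", "the", "is", "and", "of", "to"}  # tiny sample list

def unique_words(html):
    """Extract, normalize, tokenize, remove stop words, return unique words."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.parts).lower()          # normalization: lowercase
    words = [w.strip(".,!?;:") for w in text.split()]  # cleaning + tokenization
    return sorted({w for w in words if w and w not in STOP_WORDS})
```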

cedrickchee/tokenizers Fork of huggingface/tokenizers
💥Fast State-of-the-Art Tokenizers optimized for Research and Production
Size: 717 KB - Last synced at: about 1 year ago - Pushed at: over 5 years ago - Stars: 0 - Forks: 0
