GitHub: tranquoctrinh/huggingface-transformers-examples
Fine-tuning (or training from scratch) the library models for language modeling on a text dataset, for GPT, GPT-2, ALBERT, BERT, DistilBERT, RoBERTa, XLNet... GPT and GPT-2 are trained or fine-tuned using a causal language modeling (CLM) loss, while ALBERT, BERT, DistilBERT and RoBERTa are trained or fine-tuned using a masked language modeling (MLM) loss; XLNet uses a permutation language modeling (PLM) loss. A minimal training sketch appears after the metadata below.
Stars: 0
Forks: 0
Open issues: 0
License: MIT
Language: Python
Size: 38.1 KB
Created at: about 3 years ago
Updated at: 24 days ago
Pushed at: 24 days ago
Topics: causal-language-modeling, huggingface, language-model, text-classification, text-generation, transformers
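
The description names the two objectives but not the training loop itself. Below is a rough sketch (not this repository's actual scripts) of how CLM fine-tuning typically looks with the Hugging Face Trainer API; the gpt2 checkpoint, the file train.txt, and all hyperparameters are illustrative assumptions.

```python
# Minimal CLM fine-tuning sketch with the Hugging Face Trainer API.
# The checkpoint, data file, and hyperparameters are placeholders, not
# values taken from this repository.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Plain-text corpus, one example per line (train.txt is a placeholder path).
dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False yields the causal LM objective: the collator copies input_ids
# into labels, and the model shifts them internally to predict each next
# token. mlm=True would randomly mask tokens for the MLM objective instead.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="clm-output",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

Swapping in AutoModelForMaskedLM with a BERT-family checkpoint and setting mlm=True in the collator turns the same loop into MLM fine-tuning for ALBERT, BERT, DistilBERT, or RoBERTa.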