pradeepdev-1995/BERT-models-finetuning (GitHub)
BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based method for learning language representations. It is a bidirectional transformer model pre-trained on a large corpus with two objectives: masked language modeling (MLM) and next-sentence prediction (NSP). The pre-trained model is then fine-tuned on downstream tasks such as multi-class text classification.
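As a rough illustration of the fine-tuning step the repository is about, here is a minimal sketch using the Hugging Face transformers library. This is an assumption about tooling, not the repository's own code; the toy texts, labels, and hyperparameters are hypothetical.

```python
# Minimal BERT fine-tuning sketch for multi-class classification.
# Assumes the Hugging Face `transformers` library; the data is a toy example.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

texts = ["great movie", "terrible plot", "average acting"]  # hypothetical data
labels = [2, 0, 1]                                           # 3 classes

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

# Tokenize the whole toy corpus into padded tensors.
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
dataset = TensorDataset(
    enc["input_ids"], enc["attention_mask"], torch.tensor(labels)
)
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        # Passing `labels` makes the model compute a cross-entropy loss.
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
```

In practice, a held-out validation set and learning-rate warmup are typically added on top of a loop like this.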
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pradeepdev-1995%2FBERT-models-finetuning
Stars: 1
Forks: 0
Open issues: 0
License: None
Language: Python
Size: 124 KB
Dependencies parsed at: Pending
Created at: almost 5 years ago
Updated at: almost 4 years ago
Pushed at: almost 5 years ago
Last synced at: about 2 years ago
Topics: albert, bert, data-science, deep-learning, distilbert, fine-tuning, machine-learning, multi-class-classification, nlp, pretrained-models, python, roberta, transformer, xlnet, xlnet-pytorch
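The topic list suggests the repository covers several transformer families (BERT, ALBERT, DistilBERT, RoBERTa, XLNet). One common pattern, sketched below under the same transformers-library assumption as above, is to drive the identical fine-tuning loop from the Auto* classes so that only the checkpoint name changes per model.

```python
# Hedged sketch: reuse one fine-tuning loop across checkpoints via Auto* classes.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoints = [
    "bert-base-uncased",
    "distilbert-base-uncased",
    "roberta-base",
    "albert-base-v2",
    "xlnet-base-cased",
]

for name in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)
    # ...reuse the training loop from the earlier sketch with this
    # tokenizer/model pair, then compare validation metrics per checkpoint...
```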