BERT Transformers – How Do They Work? | Exxact Blog
[https://www.exxactcorp.com/blog/Deep-Learning/how-do-bert-transformers-work] - - public:mzimmerm
Excellent document about BERT transformer models and their parameters: L = number of layers (Transformer blocks); H = hidden size, i.e., the dimensionality of the vector representing each token; A = number of self-attention heads; plus the resulting total parameter count. See the sketch below for how these map to code.
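As an illustrative sketch (not taken from the linked post), these hyperparameters correspond to fields of Hugging Face's BertConfig, and instantiating a model from a config lets you check the total parameter count; the BERT-base values below are standard, but the mapping shown is an assumption about how the post's notation lines up with the library:

```python
from transformers import BertConfig, BertModel

# BERT-base hyperparameters expressed via Hugging Face's BertConfig.
# L -> num_hidden_layers, H -> hidden_size, A -> num_attention_heads.
config = BertConfig(
    num_hidden_layers=12,    # L: number of Transformer blocks
    hidden_size=768,         # H: dimensionality of each token's vector
    num_attention_heads=12,  # A: self-attention heads per layer
)

# Build the model from the config and count its total parameters
# (roughly 110M for BERT-base).
model = BertModel(config)
print(sum(p.numel() for p in model.parameters()))
```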
Fine-tune a pretrained model
[https://huggingface.co/docs/transformers/training] - - public:mzimmerm
Use a pretrained BERT model and fine-tune it on the Yelp reviews dataset.
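A minimal sketch of that workflow, following the linked tutorial's use of the yelp_review_full dataset and the Trainer API; the small subsets and default TrainingArguments here are illustrative choices to keep the example quick:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Yelp reviews with 5 star-rating classes.
dataset = load_dataset("yelp_review_full")

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # Pad/truncate every review to the model's maximum length.
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)

# Small random subsets keep the sketch fast to run.
small_train = tokenized["train"].shuffle(seed=42).select(range(1000))
small_eval = tokenized["test"].shuffle(seed=42).select(range(1000))

# BERT with a 5-way classification head for the star ratings.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=5)

args = TrainingArguments(output_dir="test_trainer")

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=small_train,
    eval_dataset=small_eval,
)
trainer.train()
```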