Your own AI chatbot in a few clicks: Why you should know this tool
[https://t3n.de/news/lokaler-ki-chatbot-lm-studio-1642969/] - - public:mzimmerm
Train and use my model
Who needs GitHub Copilot when you can roll your own AI code assistant at home • The Register
7 steps to master large language models (LLMs) | Data Science Dojo
LLM for a new language : MachineLearning
[https://www.reddit.com/r/MachineLearning/comments/12xu5ls/p_llm_for_a_new_language/] - - public:mzimmerm
High-level overview of how to train a model
LLaMA 7B GPU Memory Requirement - Transformers - Hugging Face Forums
[https://discuss.huggingface.co/t/llama-7b-gpu-memory-requirement/34323/6] - - public:mzimmerm
With the optimizers of bitsandbytes (like 8 bit AdamW), you would need 2 bytes per parameter, or 14 GB of GPU memory.
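The memory figure quoted in the forum thread can be checked with simple arithmetic. A minimal sketch (the function name and the 1 GB = 1e9 bytes convention are my own choices; activations and gradients are deliberately not counted, matching the quote's scope):

```python
# Rough GPU memory estimate for the weights of LLaMA 7B, per the
# forum thread above: with a bitsandbytes 8-bit optimizer, the quote
# budgets about 2 bytes per parameter, giving 14 GB for 7B parameters.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory in GB (1 GB = 1e9 bytes) at a given byte cost per parameter."""
    return n_params * bytes_per_param / 1e9

print(weight_memory_gb(7e9, 2))  # → 14.0
```

The same helper makes it easy to compare precisions, e.g. `weight_memory_gb(7e9, 4)` for fp32 weights.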
BigCode - Open and responsible development of LLMs for code
[https://www.bigcode-project.org/] - - public:mzimmerm
BigCode is an open scientific collaboration working on the responsible development and use of large language models for code
Replit — How to train your own Large Language Models
[https://blog.replit.com/llm-training] - - public:mzimmerm
High-level overview; only discusses training a model for a language
How to train a new language model from scratch using Transformers and Tokenizers
[https://huggingface.co/blog/how-to-train] - - public:mzimmerm
Describes how to train a new language model from scratch (for Esperanto).
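The first step in that blog post is training a byte-level BPE tokenizer on a raw-text corpus. A minimal sketch of that step using the `tokenizers` library (the tiny inline corpus, file name, and small vocabulary size here are placeholders; the post itself trains on an Esperanto OSCAR dump with a 52,000-token vocabulary):

```python
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

# Tiny stand-in corpus; the blog post trains on a large Esperanto dump.
Path("corpus.txt").write_text("la hundo kuras. la kato dormas.\n" * 100)

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],
    vocab_size=1000,          # the post uses 52,000 on a real corpus
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

enc = tokenizer.encode("la hundo")
print(enc.tokens)
```

Once trained, `tokenizer.save_model(".")` writes the `vocab.json` and `merges.txt` files that the model-training step of the post loads back in.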