OpenAI’s GPT-4o Mini isn’t much better than rival LLMs • The Register
Optimum
[https://huggingface.co/docs/optimum/index] - - public:mzimmerm
Optimum is an extension of Transformers that provides a set of performance optimization tools for training and running models on targeted hardware with maximum efficiency. It also serves as a repository of small, mini, and tiny models.
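A minimal sketch of what Optimum usage looks like, assuming the optimum[onnxruntime] extra is installed; the model id here is just an illustrative small checkpoint, not one recommended by the docs:

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative small model
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

inputs = tokenizer("Optimum makes small models run fast.", return_tensors="pt")
print(model(**inputs).logits)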
google/bert_uncased_L-4_H-256_A-4 · Hugging Face
[https://huggingface.co/google/bert_uncased_L-4_H-256_A-4] - - public:mzimmerm
Repository of all BERT models, including the small ones. Start with this model for testing.
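For a quick smoke test, the checkpoint loads with plain transformers; in the name, L-4 means 4 layers, H-256 means hidden size 256, and A-4 means 4 attention heads:

from transformers import AutoTokenizer, AutoModel

model_id = "google/bert_uncased_L-4_H-256_A-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("hello world", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, 256)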
Are there any tiny (1-3b) models finetuned for coding available in GGUF format? : LocalLLaMA
[https://www.reddit.com/r/LocalLLaMA/comments/16csdq6/are_there_any_tiny_13b_models_finetuned_for/] - - public:mzimmerm
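A hedged sketch of running such a GGUF model locally with llama-cpp-python; the model file name is a placeholder, substitute whichever 1-3B coding checkpoint you download:

from llama_cpp import Llama

# model_path is hypothetical; point it at any downloaded 1-3B GGUF file
llm = Llama(model_path="./tiny-coder-1.3b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])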
bigcode (BigCode)
[https://huggingface.co/bigcode] - - public:mzimmerm
Research community developing various code models, small and big. These models may not be instruction-tuned.
WizardLM (WizardLM)
deepseek-ai (DeepSeek)
[https://huggingface.co/deepseek-ai] - - public:mzimmerm
They have a 1.3B version! This may be the best model to start with for Newspeak. Training should work even on Hugging Face.
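A hedged sketch of loading the 1.3B coder checkpoint; the exact model id is an assumption (check the org page for the current name). At roughly 1.3B parameters, the fp16 weights fit in about 3 GB, small enough for a free hosted GPU:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "deepseek-ai/deepseek-coder-1.3b-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "def reverse_string(s):"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))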
stabilityai (Stability AI) - Stable Diffusion running on Hugging Face
[https://huggingface.co/stabilityai] - - public:mzimmerm
Chat models. Not open source, but instruction-tuned and relatively small (3B). The 3B instruct model may be the best to try on Newspeak.
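A hedged sketch of chatting with a 3B Stability instruct model through the transformers chat template; the model id is an assumption, and older transformers versions may additionally need trust_remote_code=True:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "stabilityai/stablelm-zephyr-3b"  # assumed 3B instruct checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize Newspeak in one sentence."}]
ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(ids, max_new_tokens=64)[0]))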