Bridgy Fed
Core documentation for Fediverse bridging (bluesky, web, etc)
Document about LM Studio
Optimum is an extension of Transformers that provides a set of performance-optimization tools for training and running models on targeted hardware with maximum efficiency. It is also the repository of the small, mini, and tiny models.
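A minimal sketch of how Optimum extends the Transformers API, assuming the ONNX Runtime backend; the model id below is an illustrative assumption, not from the note:

```python
# Hypothetical sketch: exporting a Transformers model to ONNX Runtime via Optimum.
# Requires `pip install optimum[onnxruntime]`; the model id is an assumed example.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes ONNX Runtime inference easy."))
```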
Excellent document about BERT transformers / models and their parameters: L = number of layers, H = size of the hidden layer (the dimension of the vector for each token in the sentence), A = number of self-attention heads, plus total parameter counts.
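As a rough sanity check on those numbers, a back-of-the-envelope parameter count (my own sketch, assuming the standard BERT architecture: 30,522-token vocabulary, 512 positions, feed-forward inner size 4H) lands near the published ~110M for BERT-Base:

```python
def bert_param_count(L=12, H=768, vocab=30_522, max_pos=512, type_vocab=2):
    """Approximate parameter count for a BERT-style encoder (weights + biases)."""
    embeddings = (vocab + max_pos + type_vocab) * H + 2 * H  # token/pos/type embeddings + LayerNorm
    attention  = 4 * (H * H + H)                             # Q, K, V and output projections
    ffn        = (H * 4 * H + 4 * H) + (4 * H * H + H)       # two dense layers, inner size 4H
    layer      = attention + ffn + 2 * 2 * H                 # plus two LayerNorms per layer
    pooler     = H * H + H
    return embeddings + L * layer + pooler

print(f"BERT-Base:  ~{bert_param_count() / 1e6:.0f}M parameters")          # ~109M (published ~110M)
print(f"BERT-Large: ~{bert_param_count(L=24, H=1024) / 1e6:.0f}M")         # ~335M (published ~340M)
```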
LM Studio can be installed on Linux with an APU or GPU (looks like it needs an AI-capable CPU though??) and can run LLMs. Install it on the laptop and test whether it works.
Top of the guide describing ROCm on Linux. There are two core approaches: using RPM (the package manager) or using the AMD installer. I should use the package manager. There is also single-version vs. multi-version; I should use single-version, latest.
GGML quantized models. They let you leverage the CPU and system RAM instead of relying on a GPU's VRAM. This could save you a fortune, especially if you go for a used AMD Epyc platform. It could be more viable for the larger models, especially the 30B/65B-parameter models, which would still strain or exceed the VRAM of a P40.
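A minimal sketch of CPU-only inference with a quantized model via llama-cpp-python (one common way to run GGML/GGUF files; the file path and thread count are illustrative assumptions, not from the note):

```python
# Hypothetical sketch: running a quantized model on CPU with llama-cpp-python.
# Requires `pip install llama-cpp-python`; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-30b.Q4_K_M.gguf",  # assumed local quantized checkpoint
    n_ctx=2048,    # context window
    n_threads=16,  # CPU threads; tune to the Epyc core count
)

out = llm("Q: What is GGML quantization good for?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```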
Training a model like Llama with 2.7 billion parameters outperformed a larger model like Vicuna with 13 billion parameters. Especially when considering resource consumption, a 7B foundation model might be a good alternative to a full-blown ChatGPT. The best price-to-performance base model for our use case turned out to be Mistral 7B. The model is compact enough to fit into an affordable GPU with 24 GB of VRAM and outperforms the other models with 7B parameters.
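A minimal sketch of loading Mistral 7B in half precision so it fits within a 24 GB card (my own illustration, assuming the Hugging Face `mistralai/Mistral-7B-v0.1` checkpoint and the transformers + accelerate stack):

```python
# Hypothetical sketch: fitting a 7B model onto a single 24 GB GPU with fp16 weights.
# 7B parameters * 2 bytes (fp16) ≈ 14 GB, leaving headroom for activations and KV cache.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the weights at ~14 GB
    device_map="auto",          # requires `accelerate`; places layers on the GPU
)

inputs = tokenizer("The best price-to-performance 7B model is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```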
Simplest start with AI. Use the GitHub code linked there.
High-level only; talks about training a model for a language.
Describes how to train a new language model from scratch (Esperanto).
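The first step in that kind of from-scratch training is usually building a tokenizer on the new-language corpus. A rough sketch with the Hugging Face tokenizers library (file paths, vocabulary size, and special tokens are illustrative assumptions):

```python
# Hypothetical sketch: training a byte-level BPE tokenizer on an Esperanto corpus
# before pre-training a new language model from scratch.
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

paths = [str(p) for p in Path("./eo_corpus/").glob("*.txt")]  # assumed corpus location

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=paths,
    vocab_size=52_000,  # illustrative vocabulary size
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("esperberto_tokenizer")  # writes vocab.json and merges.txt
```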
Wants, WantedBy, Before, After, etc.: systemd unit dependency and ordering directives.
Emscripten is a complete open-source compiler toolchain to WebAssembly. Using Emscripten you can compile C and C++ code, or any other language that uses LLVM, into WebAssembly and run it on the Web, Node.js, or other Wasm runtimes.
Main man/doc page of systemctl
Good guide and doc for Bash arrays. Also clearly states the difference between an array (indexed array = list) and an associative array (keyed array = map).
Great summary of org-babel especially all the tricky header variables.
Core site for Material design documentation
Example of testing a Flutter app.