The Israeli National AI Program - artificial intelligence
caddy webserver to return 404 on file not found
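A minimal Caddyfile sketch of one way to do this (hypothetical domain and root path; the file matcher, respond, and file_server are standard directives, and file_server alone already returns 404 for missing files):

    example.com {
        root * /var/www
        # requests that match no file on disk get an explicit 404
        @missing not file
        respond @missing 404
        file_server
    }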
How LLMs Work, Explained Without Math - miguelgrinberg.com
abi/secret-llama: Fully private LLM chatbot that runs entirely with a browser with no server needed. Supports Mistral and LLama 3.
3Blue1Brown - Learn math and AI with videos
Ilus AI - AI illustration Generator
Meta AI
Understanding AI from Scratch – Neural Networks Course - YouTube
210,000 CODERS lost jobs as NVIDIA released NEW coding language. - YouTube
GPT in 60 lines of NumPy
GPT in 500 lines of SQL
10 Legal AI Tools for Legal Practices and Professionals in 2024
What Kind of Mind Does ChatGPT Have? | The New Yorker 👻
Resume Parser
A resume parser extracts, analyzes, and organizes data from resumes to identify suitable candidates. This tool streamlines the recruitment process, minimizes errors, and saves time, thus enhancing recruiters' efficiency.
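A minimal Python sketch of the extraction step only (hypothetical helper; real parsers also handle PDF/DOCX input and use NLP for names, skills, and work history):

    import re

    def extract_contacts(text: str) -> dict:
        # Pull email addresses and phone-like numbers out of plain resume text
        emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
        phones = re.findall(r"\+?\d[\d\s().-]{7,}\d", text)
        return {"emails": emails, "phones": phones}

    print(extract_contacts("Jane Doe, jane@example.com, +1 416 555 0199"))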
Introducing DBRX: A New State-of-the-Art Open LLM | Databricks
The Best GPUs for Deep Learning in 2023 — An In-depth Analysis
Run an LLM Locally with LM Studio - KDnuggets
Document about LM Studio
Optimum
Optimum is an extension of Transformers that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency. It is also the repository of the small, mini, and tiny models.
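A minimal sketch of the Optimum workflow, assuming the onnxruntime extra is installed and a recent version that supports export=True; the model name is just an example:

    from optimum.onnxruntime import ORTModelForSequenceClassification
    from transformers import AutoTokenizer, pipeline

    model_id = "distilbert-base-uncased-finetuned-sst-2-english"
    # Export the model to ONNX and run it through ONNX Runtime
    model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
    print(classifier("Optimum makes inference faster."))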
google-research/bert: TensorFlow code and pre-trained models for BERT
BERT Transformers – How Do They Work? | Exxact Blog
Excellent document about BERT transformers/models and their parameters:
- L = number of layers
- H = hidden size, i.e., the size of the vector representing each word in the sentence
- A = number of self-attention heads
- Total parameters
google/bert_uncased_L-4_H-256_A-4 · Hugging Face
Repository of all BERT models, including the small ones. Start using this model for testing.
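A back-of-the-envelope Python sketch of how L and H determine the parameter count (hypothetical helper; standard BERT vocab/position sizes assumed, layer norms and the pooler ignored, and A only splits H across heads without changing the total):

    def bert_params(L: int, H: int, vocab: int = 30522, max_pos: int = 512) -> int:
        embeddings = (vocab + max_pos + 2) * H    # token + position + segment tables
        attention = 4 * H * H + 4 * H             # Q, K, V, output projections
        ffn = 2 * (H * 4 * H) + 4 * H + H         # two dense layers, 4H inner size
        return embeddings + L * (attention + ffn)

    # google/bert_uncased_L-4_H-256_A-4 ("BERT-mini"): prints 11,100,160,
    # close to the ~11M parameters reported for this model
    print(f"{bert_params(L=4, H=256):,}")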
Generative pre-trained transformer - Wikipedia
AMD Ryzen AI CPUs & Radeon 7000 GPUs Can Run Localized Chatbots Using LLMs Just Like NVIDIA's Chat With RTX
LM Studio can be installed on Linux with an APU or GPU (though it looks like it needs the AI CPU??) to run LLMs. Install on the laptop and test whether it works.
What is Epoch in Machine Learning?| UNext | UNext
Training and Validation Loss in Deep Learning | Baeldung on Computer Science
A Step-by-Step Guide to Model Evaluation in Python | by Shreya Singh | Medium
Copilot
Red Quill - AI generated sex stories
Free Artificial Intelligence (AI) Courses Online with Certificates [2024]
Our free, hands-on data science courses
SageMaker Studio Lab
My account on SageMaker Studio. They give out 4 hours of GPU a day!
AMD Unveils Ryzen 8000G Series Processors: Zen 4 APUs For Desktop with Ryzen AI
8000G is the APU series for AI
Solving Transformer by Hand: A Step-by-Step Math Example | by Fareed Khan | Level Up Coding
Working through what a transformer does, by hand
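A minimal NumPy sketch of the same computation the article works by hand: scaled dot-product self-attention on toy numbers (identity projections stand in for learned weight matrices):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    # Three tokens, embedding dimension 4 (toy values)
    X = np.array([[1., 0., 1., 0.],
                  [0., 2., 0., 2.],
                  [1., 1., 1., 1.]])
    Wq, Wk, Wv = np.eye(4), np.eye(4), np.eye(4)

    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot products
    weights = softmax(scores)                # each row sums to 1
    print(weights @ V)                       # each output is a weighted mix of values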
Kaggle: Your Home for Data Science
Kaggle is like Hugging Face: it can run notebooks and provides GPU power for them.
Statistical Foundations of Machine Learning | Kaggle
Mini course on the statistical foundations of ML
stabilityai (Stability AI)
My account on Stability AI - it is just a link to Hugging Face
Open LLM Leaderboard - a Hugging Face Space by HuggingFaceH4
Comparison of the performance of all LLM models on Hugging Face
6 Ways to Run LLMs Locally (also how to use HuggingFace)
Various methods to run LLM models locally; Hugging Face is only one of them.
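A minimal sketch of the Hugging Face route, using a small model so it runs on CPU (gpt2 is just an example):

    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    out = generator("Running an LLM locally is", max_new_tokens=30)
    print(out[0]["generated_text"])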
Training Bert on Yelp - Copy of training.ipynb - Colaboratory
Interesting cheap GPU option: Instinct Mi50 : LocalLLaMA
AMD seems to sell these accelerators, which are like video cards.
Ditching CUDA for AMD ROCm for more accessible LLM training and inference. | by Rafael Manzano Masapanta | Medium
Train an LLM on an AMD APU. In this scenario, we’ll use an APU because most laptops with a Ryzen CPU include an iGPU; specifically, this post should work with iGPUs based on the “GCN 5.0” architecture, also known as “Vega”. The post uses an AMD Ryzen 2200G, an entry-level processor with 4C/4T and an integrated GPU.
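A minimal sketch for verifying the setup, assuming the ROCm build of PyTorch is installed; ROCm reuses the torch.cuda API, so the device name should report the APU's iGPU:

    import torch

    print("GPU visible:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))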
Most cost effective GPU for local LLMs? : LocalLLaMA
GGML quantized models. They would let you leverage the CPU and system RAM instead of having to rely on a GPU's VRAM. This could save you a fortune, especially if you go for some used AMD Epyc platforms. It could be more viable for the larger models, especially the 30B/65B-parameter models, which would still press or exceed the VRAM on the P40.
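A minimal sketch of CPU-plus-RAM inference via llama-cpp-python, the Python binding for the GGML/GGUF runtime (the model path is a placeholder for any quantized file):

    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Q: Why quantize a model? A:", max_tokens=64)
    print(out["choices"][0]["text"])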
Optimizing LLMs for Speed and Memory
7 Steps to Mastering Large Language Models (LLMs) - KDnuggets
A Step-by-Step Guide to Training Your Own Large Language Models (LLMs). | by Sanjay Singh | GoPenAI
GenAI Stack Exchange
7 steps to master large language models (LLMs) | Data Science Dojo
LLM for a new language : MachineLearning
High-level overview of how to train a model
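A minimal sketch of the usual first step for a new language: training a tokenizer on raw text with the tokenizers library (corpus path and output directory are placeholders):

    import os
    from tokenizers import ByteLevelBPETokenizer

    tokenizer = ByteLevelBPETokenizer()
    tokenizer.train(files=["corpus.txt"], vocab_size=30000, min_frequency=2)
    os.makedirs("new-language-tokenizer", exist_ok=True)
    tokenizer.save_model("new-language-tokenizer")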
Up-to-date list of LLM models
OSCAR dataset
The OSCAR project (Open Super-large Crawled Aggregated coRpus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications.
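A minimal sketch for sampling OSCAR through the Hugging Face datasets library, streaming so nothing is downloaded in full (the config name follows the OSCAR scheme for the English subset; newer datasets versions may require trust_remote_code=True):

    from datasets import load_dataset

    oscar = load_dataset("oscar", "unshuffled_deduplicated_en",
                         split="train", streaming=True)
    for record in oscar.take(1):
        print(record["text"][:200])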