Search Results
AMD Radeon 880M RDNA3.5 iGPU appears in Geekbench OpenCL test matching 780M score - VideoCardz.com
Integrated GPU support · Issue #2637 · ollama/ollama
Latest (0.1.27) docker image with ROCm works for me on Ryzen 5600G with 8GB VRAM allocation. Prompt processing is 2x faster than with CPU. Generation runs at max speed even if CPU is busy running other processes. I am on Fedora 39. Container setup: HSA_OVERRIDE_GFX_VERSION=9.0.0
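The container setup described in that comment could be sketched roughly like this (hedged: the image tag, volume name, and port are assumptions based on Ollama's standard ROCm instructions, not stated in the comment itself):

```shell
# Sketch of the reported setup. /dev/kfd and /dev/dri expose the ROCm compute
# and render nodes to the container; HSA_OVERRIDE_GFX_VERSION=9.0.0 makes the
# runtime treat the 5600G's Vega iGPU as gfx900, as in the comment above.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=9.0.0 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```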
AMD Ryzen 5 2500U Specs | TechPowerUp CPU Database
SDB:AMD GPGPU - openSUSE Wiki
Use this to install ROCm on Tumbleweed. Does NOT talk about PyTorch; the tag is for completeness.
CPU Database | TechPowerUp
SHOWS EXACT SPECS OF CPU OR APU INCLUDING GRAPHICS
GeForce GTX 1060 (Mobile) vs GeForce GTX 1060 vs Ryzen 7 5700G with Radeon Graphics vs Radeon Ryzen 5 5600GT vs Ryzen 5 2500U with Radeon Vega [videocardbenchmark.net] by PassMark Software
Compares discrete graphics cards with AMD APUs. Result: buy the Ryzen 7 5700G or the Ryzen 5 5600GT, as the iGPU is the same. BUT the 5700G has a ~25% better CPU.
AMD Releases ROCm 6.0.2 With Improved Stability For Instinct MI300 Series - Phoronix Forums
This guy claims he got both gfx900 and gfx902 working. I asked him to try my test.
GPU Database | TechPowerUp
Database of AMD GPUs. Here we can see that the 2500U has Vega 8 mobile graphics, which is GCN 5.0. This is only supported in ROCm up to 4.5.2!!
Is Rusticl the future of OpenCL for AMD cards on Linux? - LuxCoreRender Forums
Rusticl (Mesa's OpenCL implementation) seems to be an alternative to ROCm's OpenCL runtime.
nktice/AMD-AI: AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 22.04 / 23.04
[rocm] No GPU support after rebuild with ROCm 6.0 (#6) · Issues · Arch Linux / Packaging / Packages / python-pytorch · GitLab
This page seems to claim there is a known fix. Looks like someone is working on it?
Installing PyTorch for ROCm — ROCm installation (Linux)
Installing PyTorch for ROCm - this document claims gfx900 compatibility
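If that document's gfx900 compatibility claim holds, the install itself would look something like this (hedged: the rocm5.7 index path matches the PyTorch/ROCm 5.7 announcement further down in these notes; verify the current index for the ROCm version actually installed):

```shell
# Hedged sketch: install a ROCm build of PyTorch from the official wheel index.
# The rocm5.7 path is an assumption; match it to your installed ROCm version.
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.7
```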
Syntax of gfx900, gfx902, gfx909 and gfx90c Instructions — LLVM 19.0.0git documentation
Looks like gfx900, gfx902, gfx909, and gfx90c share the same instruction syntax.
Need help with getting results on Ryzen 5600G with RoCm 5.5 - PyTorch Forums
ROCm 5.xx ever planning to include gfx90c GPUs? · Issue #1743 · ROCm/ROCm
The suggested git build of PyTorch on gfx90c FAILED for me.
Document describing installation. The only configuration marked as supported in the matrix: Ubuntu 22.04.3 Desktop with HWE kernel 6.5.
Is there a support plan for Renoir apu ? · Issue #1101 · ROCm/ROCm
LM Studio on Discord
AMD Ryzen AI CPUs & Radeon 7000 GPUs Can Run Localized Chatbots Using LLMs Just Like NVIDIA's Chat With RTX
LM Studio can be installed on Linux with an APU or GPU (though it looks like it may require a Ryzen AI CPU??) to run LLMs. Install it on the laptop and test whether it works.
bash - How can I fetch VRAM and GPU cache size in Linux? - Stack Overflow
If you have an AMD GPU, as I do, you can grab the PCI ID for the device with the lspci command executed with the -D flag (shows the PCI domain), then read the file /sys/bus/pci/devices/${pci_slot}/mem_info_vram_total, which contains the GPU VRAM size in bytes.
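The recipe above, wrapped as a small helper (hedged: the awk pattern for picking out the AMD VGA device is an assumption; check the actual lspci output on your machine):

```shell
# vram_mib FILE: FILE is an amdgpu sysfs mem_info_vram_total file (size in
# bytes); prints the size in MiB.
vram_mib() {
    local bytes
    bytes=$(cat "$1")
    echo "$(( bytes / 1024 / 1024 )) MiB"
}

# On real hardware (hypothetical device selection shown):
# pci_slot=$(lspci -D | awk '/VGA.*AMD/ {print $1; exit}')
# vram_mib "/sys/bus/pci/devices/${pci_slot}/mem_info_vram_total"
```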
ROCm installation for Linux — ROCm installation (Linux)
Top of the guide describing ROCm on Linux. There are two core approaches: using the native package manager (RPM) or the AMD installer. I should use the package manager. Also single-version vs. multi-version; I should use single-version, latest.
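The single-version package-manager route might look like this (hedged: Ubuntu 22.04 example since that is the distro the docs list as supported; repository setup per the ROCm guide is assumed already done, and package names can differ between releases):

```shell
# Hedged sketch of a single-version ROCm install via the package manager,
# assuming the ROCm apt repository has already been configured per the docs.
sudo apt update
sudo apt install rocm-hip-sdk            # meta-package used by recent releases; verify
sudo usermod -aG render,video "$USER"    # typical group membership needed for GPU access
```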
AMD Extends Support for PyTorch Machine Learning Development on Select RDNA™ 3 GPUs with ROCm™ 5.7 | PyTorch
The links in the “How to guide” provide instructions that are hopeful. Maybe start with those instructions!
Installing ROCM in TW and Leap - English / Hardware - openSUSE Forums
This guy seems to claim ROCm can run on Tumbleweed using Distrobox. But what is Distrobox?
Doesn't ROCm support AMD's integrated GPU (APU)? · Issue #2216 · ROCm/ROCm
This guy claims a successful installation of ROCm on Ubuntu; this seems workable for Tumbleweed as well. See the comment “nav9 commented on Jul 16, 2023”.
Ditching CUDA for AMD ROCm for more accessible LLM training and inference. | by Rafael Manzano Masapanta | Medium
Train LLM on AMD APU. In this scenario, we’ll use an APU because most laptops with a Ryzen CPU include an iGPU; specifically, this post should work with iGPUs based on the “GCN 5.0” architecture, or “Vega” for friends. We’ll use an AMD Ryzen 2200G in this post, an entry-level processor equipped with 4C/4T and an integrated GPU.
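A quick smoke test for that approach (hedged: assumes a ROCm build of PyTorch is installed; the override value follows the gfx900 spoof mentioned elsewhere in these notes, and ROCm builds of PyTorch expose the GPU through the torch.cuda API):

```shell
# Check whether PyTorch's ROCm backend can see the Vega iGPU.
HSA_OVERRIDE_GFX_VERSION=9.0.0 python3 - <<'EOF'
import torch
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No ROCm GPU visible")
EOF
```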
Configuring UMA Frame Buffer Size on Desktop Systems with Integrated Graphics | AMD
The UMA frame buffer size is the amount of system memory carved out for the APU. It is set in the motherboard firmware (BIOS/UEFI) and often limited to 2 GB, but an LLM could need 16 GB or more.
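Before changing the BIOS setting, the current carve-out can be read from sysfs (hedged: assumes a single amdgpu device at card0; the GTT file shows system RAM the GPU can additionally map, which matters when the carve-out is small):

```shell
# Dedicated UMA carve-out (what the BIOS setting controls), in bytes:
cat /sys/class/drm/card0/device/mem_info_vram_total
# GTT: system RAM the GPU can additionally map, in bytes:
cat /sys/class/drm/card0/device/mem_info_gtt_total
```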