LLM

Earlier this year, I purchased my first desktop in a while so that I could get a “decent” graphics card that would support running local large language models. I bought an MSI system with a refurbished NVIDIA 4090.

The thing about buying refurbished equipment, though, is that it doesn't come with the full warranty. Mine had only a six-month warranty.

So, right on cue, about 11 months after I bought the card (five months out of warranty), it failed.

Since the card was providing graphics output for the system, its failure meant I couldn't use my desktop at all.

I pined for my local LLMs, but in the meantime I saw that not only are AMD cards less expensive, they are also much better supported for LLM work than they were even recently. And, hey, AMD's ROCm is open source while CUDA is proprietary.

So I got an AMD Radeon™ AI PRO R9700, which has 32 GB of memory: 33% more than my old NVIDIA 4090's 24 GB.

With this card, I've set up Ollama running in Docker (to keep its memory use contained), using qwen3-coder:30b with a 256k context. As a first test, I had the LLM generate some clocks.
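
For reference, here is a minimal sketch of what that setup might look like. It assumes the official ollama/ollama:rocm image (which needs the /dev/kfd and /dev/dri devices passed through for AMD GPUs) and the OLLAMA_CONTEXT_LENGTH environment variable supported by recent Ollama releases; the --memory cap and its 48g value are my guess at what "control the memory" means here, not something the post specifies.

  # Start Ollama's ROCm image with the AMD GPU devices passed through.
  # --memory=48g caps the container's RAM (illustrative value, adjust to taste);
  # OLLAMA_CONTEXT_LENGTH=262144 requests a 256k-token default context
  # on Ollama versions that support the variable.
  docker run -d --name ollama \
    --device /dev/kfd --device /dev/dri \
    --memory=48g \
    -e OLLAMA_CONTEXT_LENGTH=262144 \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    ollama/ollama:rocm

  # Pull and chat with the coding model:
  docker exec -it ollama ollama run qwen3-coder:30b

Running Ollama inside Docker like this also makes it easy to tear the whole stack down or pin a known-good version, which is handy while ROCm support is still moving quickly.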