Tech blog
  • Blog
  • Github
  • Twitter
  • luigi@grigio.org

llama

A collection of 3 posts
localai

llama.cpp benchmarks on AMD Ryzen 7 7700

Inspired by this reddit post, here are my results:

# HIP_VISIBLE_DEVICES=1 llama-bench --model /models/Qwen3-4B-IQ4_NL.gguf -ngl 99 --flash-attn --no-mmap
ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected

| model | size | params | backend | ngl | test | t/s |
| ----- | ---: | -----: | ------- | --: | ---: | --: |
| qwen3
06 Jul 2025 2 min read
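The excerpt above shows llama-bench failing because ROCm could not see a GPU at the requested index. As a minimal sketch (not from the post itself), HIP_VISIBLE_DEVICES controls which GPUs the ROCm runtime exposes; the index 0 below is an assumption about which device is actually ROCm-capable:

```shell
# HIP_VISIBLE_DEVICES selects the GPUs visible to the ROCm runtime, indexed from 0.
# Pointing it at a non-ROCm device produces the
# "no ROCm-capable device is detected" error quoted above.
# GPU index 0 here is an assumption; model path is the one from the post.
HIP_VISIBLE_DEVICES=0 llama-bench \
  --model /models/Qwen3-4B-IQ4_NL.gguf \
  -ngl 99 --flash-attn --no-mmap
```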
Stable Diffusion SDXL and LLAMA2 webui on Docker
docker

Running artificial intelligence on your own computer is fun, but it is also a pain. Installing Stable Diffusion WebUI or Oobabooga Text Generation UI differs depending on your operating system and your hardware (NVIDIA, AMD ROCm, Apple M2, CPU, ...). Docker lets you isolate the Python packages as much as possible,
02 Oct 2023 3 min read
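As a hedged illustration of the isolation the excerpt describes, a container run like the following keeps the UI's entire Python stack off the host, sharing only a model directory and a web port. The image name, port, and volume path are assumptions for the sketch, not taken from the post:

```shell
# Sketch: all Python dependencies live inside the container;
# only the mounted model directory and the published port touch the host.
docker run --rm --gpus all -p 7860:7860 \
  -v "$PWD/models:/models" \
  example/stable-diffusion-webui:latest   # hypothetical image name
```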
Running the most powerful artificial intelligence on your home computer: LLAMA2 70B and Stable Diffusion SDXL
ai Featured

AI is here to stay; the real question is who controls it. In this post we look at how to run it on your own computer.
17 Aug 2023 5 min read
Tech blog © 2026
Powered by Ghost