3 min read

Stable Diffusion SDXL and LLAMA2 webui on Docker

Running artificial intelligence on your own computer is fun, but it is also a pain. Installing Stable Diffusion WebUI or the Oobabooga text generation UI works differently depending on your operating system and your hardware (NVIDIA, AMD ROCm, Apple M2, CPU, ...).

Docker lets you isolate Python packages, ROCm libraries and more, so your host computer stays clean. In my situation I had no choice but to install a prerelease of PyTorch and enable weird ROCm-specific environment variables, but IT WORKS!
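
Before touching Docker, it's worth checking that the host actually exposes the GPU. A quick sanity check (assuming the ROCm userspace tools are installed on the host):

# the GPU should show up here
rocm-smi
# the devices that get passed through to the containers below
ls -l /dev/kfd /dev/dri
# your user should be in the 'video' group (on some setups also 'render')
groups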

Stable Diffusion is the harder of the two to get working on Docker, because some dependencies require other dependencies, so you have to choose whether to bake these deps into the Docker image or install them at runtime every single time.
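
For example, a dependency like onnxruntime (used here purely as an illustration) can either be baked into the image or installed at container start; the first costs a rebuild whenever it changes, the second costs startup time on every run:

# Option 1 - baked into the image (Dockerfile): rebuild on change
RUN python -m pip install onnxruntime

# Option 2 - installed at container start (docker-compose.yml command): no rebuild, slower startup
command: >
  "python -m pip install onnxruntime; python launch.py"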

Stable Diffusion SDXL

# Dockerfile.rocm
FROM rocm/dev-ubuntu-22.04
ENV DEBIAN_FRONTEND=noninteractive \
    PYTHONUNBUFFERED=1 \
    PYTHONIOENCODING=UTF-8
WORKDIR /sdtemp
RUN apt-get update &&\
    apt-get install -y \
    wget \
    git \
    python3 \
    python3-pip \
    python-is-python3
RUN python -m pip install --upgrade pip wheel
RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /sdtemp

## NEW: TORCH_COMMAND is also read by the webui's launch.py, so the ROCm nightly wheels are used instead of the default CUDA build
ENV TORCH_COMMAND="pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6"

RUN python -m $TORCH_COMMAND

EXPOSE 7860

# run launch.py once at build time so it installs the webui's remaining Python deps into the image, then exit
RUN python launch.py --skip-torch-cuda-test --exit
RUN python -m pip install opencv-python-headless

# extensions deps / roop
RUN apt-get install python-dev-is-python3 -y
RUN python -m pip install --use-deprecated=legacy-resolver insightface==0.7.3 onnxruntime ifnude # roop extension dep

# the webui gets copied from /sdtemp into this bind-mounted directory on first start
WORKDIR /stablediff-web

# docker-compose.yml

version: '3'

services:
  stablediff-rocm:
    build: 
      context: .
      dockerfile: Dockerfile.rocm
    container_name: stablediff-rocm-runner
    environment:
      TZ: "Asia/Jakarta"
      ROC_ENABLE_PRE_VEGA: 1
      COMMANDLINE_ARGS: "--listen --precision full --no-half --enable-insecure-extension-access"
      ## IMPORTANT: make ROCm treat the card as gfx1030 (RDNA2); needed for consumer GPUs the official ROCm build doesn't target
      HSA_OVERRIDE_GFX_VERSION: "10.3.0"
      ROCR_VISIBLE_DEVICES: 1
      #PYTORCH_HIP_ALLOC_CONF: "garbage_collection_threshold:0.6,max_split_size_mb:128"

    entrypoint: ["/bin/sh", "-c"]
    command: >
      "rocm-smi; . /stablediff.env; echo launch.py $$COMMANDLINE_ARGS;
      if [ ! -d /stablediff-web/.git ]; then
        cp -a /sdtemp/. /stablediff-web/
      fi;
      if ! ls /stablediff-web/models/Stable-diffusion/ | grep -qE '\.(ckpt|safetensors)$$'; then
        echo 'Please copy a Stable Diffusion model (.ckpt or .safetensors) to the stablediff-models directory'
        echo 'You may need sudo to perform this action'
        exit 1
      fi;
      python launch.py"
    ports:
      - "7860:7860"
    devices:
      - "/dev/kfd:/dev/kfd"
      - "/dev/dri:/dev/dri"
    group_add:
      - video
    ipc: host
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
    volumes:
      - ./insightface:/root/.insightface
      - ./ifnude:/root/.ifnude
      - ./cache:/root/.cache
      - ./stablediff.env:/stablediff.env
      - ./stablediff-web:/stablediff-web
      - ./stablediff-models:/stablediff-web/models/Stable-diffusion
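
With both files in place, create the env file the startup command sources, drop a checkpoint into ./stablediff-models, and bring the stack up:

# sourced at startup by the compose command above; may stay empty or export extra variables
touch stablediff.env
# build the image and start the container, then follow the first launch
docker compose up -d --build
docker compose logs -f
# the UI listens on http://localhost:7860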


Oobabooga text generation webui

# Dockerfile.rocm
FROM rocm/dev-ubuntu-22.04
ENV DEBIAN_FRONTEND=noninteractive \
    PYTHONUNBUFFERED=1 \
    PYTHONIOENCODING=UTF-8
WORKDIR /sdtemp
RUN apt-get update &&\
    apt-get install -y \
    wget \
    git \
    python3 \
    python3-pip \
    python-is-python3
RUN python -m pip install --upgrade pip wheel

RUN git clone https://github.com/oobabooga/text-generation-webui /sdtemp


EXPOSE 7860

RUN python -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6

# install the UI's Python dependencies (the non-CUDA set) plus llama-cpp-python for GGUF models
RUN python -m pip install -r requirements_nocuda.txt && python -m pip install llama-cpp-python
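
Installed like this, llama-cpp-python runs inference on the CPU. In principle it can be rebuilt with hipBLAS so llama.cpp offloads to the GPU through ROCm; a sketch I haven't tested in this setup:

# optional and untested here: rebuild llama-cpp-python against hipBLAS/ROCm
RUN CMAKE_ARGS="-DLLAMA_HIPBLAS=on" python -m pip install --force-reinstall --no-cache-dir llama-cpp-python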


# docker-compose.yml

version: '3'

services:

  oobabooga-rocm:
    build:
      context: .
      dockerfile: Dockerfile.rocm
    container_name: oobabooga-rocm-runner
    restart: unless-stopped
    environment:
      TZ: "Europe/Rome"
      ROC_ENABLE_PRE_VEGA: 1
      ## IMPORTANT: same gfx1030 override as in the Stable Diffusion compose file
      HSA_OVERRIDE_GFX_VERSION: "10.3.0"
      ROCR_VISIBLE_DEVICES: 1
      #PYTORCH_HIP_ALLOC_CONF: "garbage_collection_threshold:0.6,max_split_size_mb:128"

    entrypoint: ["/bin/sh", "-c"]
    command: >
      "rocm-smi;
       python server.py --listen"
    ports:
      - "7861:7860"
    devices:
      - "/dev/kfd:/dev/kfd"
      - "/dev/dri:/dev/dri"
    group_add:
      - video
    ipc: host
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
    volumes:
      - /mnt/<the-dir-with-models>:/sdtemp/models
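
Same drill as for Stable Diffusion: point the models volume at the directory with your LLaMA 2 / GGUF weights, then build and start. Note the port mapping: the UI is reachable on host port 7861.

docker compose up -d --build
# host port 7861 maps to 7860 inside the container
# open http://localhost:7861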


I'm not completely satisfied with this solution, since some extensions need more deps at runtime, but it works and my Debian 12 host stays clean of experimental dependencies and AMD ROCm hacks. A similar technique should also work for CPU and NVIDIA, but I didn't test those scenarios. Enjoy your local AI.