<aside> 🗓️ Linux host set-up instructions for Dockge, a self-hosted Docker Compose stacks management tool with a feature-rich interface for self-hosting and home lab setups. It provides access to an all-in-one view of logs, a YAML editor, a web terminal, container controls, and monitoring.
</aside>
Revision: 20250121-0 (init: 20240706)
This post details installing the Dockge docker compose manager on a host with Docker and, optionally, an Nvidia GPU. We will deploy a few stacks to demonstrate the tool's use. Please note that although we will mention HTTPS reverse proxies (which would upgrade http://127.0.0.1:5001/ to https://dockge.example.com, for example), their setup is not covered in this post.
Dockge is a self-hosted Docker management tool created by louislam, the developer behind the popular Uptime Kuma project. It's designed to manage Docker Compose stacks, offering a user-friendly and feature-rich interface for self-hosting and home lab setups.
Dockge’s WebUI is designed with user convenience in mind, providing easy access to many functions that streamline stack management: a unified view of logs, a YAML editor, a web terminal, container controls, and monitoring, all from a single interface.
Dockge follows a file-based structure: each stack has its own subdirectory under the stacks directory, and paths within a stack are relative to the directory where that stack’s `compose.yaml` is placed.
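For illustration, a hypothetical stacks directory with two stacks might look as follows (stack and file names are examples only):

```
/opt/stacks/
├── dozzle/
│   ├── compose.yaml
│   └── .env          # per-stack environment file, if any
└── watchtower/
    └── compose.yaml
```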
The tool provides a few other features, such as an easy “docker run to docker compose” interface that takes a `docker run` command and proposes a matching `compose.yaml` file.
For an up-to-date listing of capabilities, please see the GitHub page at https://github.com/louislam/dockge.
In addition to installing Dockge using `docker`, we will use it with a few stacks, among which watchtower, dashdot, and CTPO. This setup is done on an Ubuntu 24.04 host but should be adaptable to other Linux distributions with minor alterations. GPU setups require the Nvidia runtime installed on the host system; for details, see:
Setting up NVIDIA docker & podman (Rev: 20250326-0)
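As a quick sanity check that the Nvidia runtime is registered with Docker (output details vary by installation):

```bash
# "nvidia" should appear among the listed runtimes if the
# NVIDIA Container Toolkit is installed and Docker was restarted:
docker info | grep -i runtimes
```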
Using `docker` and following the “basic” installation process, the default stacks directory will be in `/opt/stacks`, and the default port `5001`.
Adapting from Dockge’s GitHub page, on an Ubuntu 24.04 host, we need `sudo` to create the directories and `curl` the `compose.yaml` file (feel free to confirm that its content matches the one from the official Dockge page before starting the service):
```bash
# Create the directories that store the stacks and Dockge's own compose file
sudo mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge
# Download the compose.yaml
sudo curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml
# Start the server
# (add sudo before the command if the user is not in the docker group)
docker compose up -d
```
As long as no error occurred, the service is now running on http://localhost:5001. If a firewall is not blocking the port, it can also be accessed from other hosts on the subnet. With a reverse proxy configured against that IP and port (serving, for example, https://dockge.example.com), Dockge’s WebUI can be accessed securely to perform the initial setup.
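Before heading to the WebUI, a couple of optional sanity checks (assuming the default port 5001):

```bash
cd /opt/dockge
docker compose ps               # the dockge container should show as "running"
curl -I http://localhost:5001   # should return an HTTP response from Dockge
```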
Set a preferred username and password to access Dockge, and let’s install some services on it.
Watchtower describes itself on its GitHub page as “a process for automating Docker container base image updates”: the tool checks for new container image versions at intervals and updates the containers if a new image is available. Its quick start section gives the following invocation:
```bash
docker run --detach \
    --name watchtower \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower
```
Dockge has a useful “Docker Run” entry box in its UI. When pasting the above `docker run` command and using the `Convert to Compose` button, we get an already populated UI with automatically converted `compose.yaml` content:
```yaml
# ignored options for 'watchtower'
# --detach
version: "3.3"
services:
  watchtower:
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    image: containrrr/watchtower
networks: {}
```
In the UI, set `General -> Stack Name` to `watchtower`. This will create a directory named `watchtower` in `/opt/stacks`, and all content relative to this “stack” will be placed within, such as `.env`, if any.
We do not need other components from the UI, but looking at the watchtower documentation (at https://containrrr.dev/watchtower/arguments/), we will alter the `compose.yaml` file to:

- add `restart: unless-stopped`
- add `command: --cleanup --interval 86400 --include-stopped`
- mount `/etc/timezone` and `/etc/localtime` into the container (as `ro`)
- add `labels` (more on this shortly)

With those changes, the final `compose.yaml` is:
```yaml
services:
  # Watchtower - Auto update containers
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: --cleanup --interval 86400 --include-stopped --label-enable
    labels:
      com.centurylinklabs.watchtower.enable: "true"
```
This service neither exposes ports nor has a WebUI, so all that is left is to use the `Save` button.
Once this is done, we will see the service as `Inactive`. Using the `Start` button will show it as `active`. Container logs can be seen in the `Terminal` section to investigate if a problem has occurred within the newly run container.
Use the `>_ bash` button to get a running `shell` within the terminal, which might be helpful for some containers. `watchtower` has neither `bash` nor `sh` available, so the button will not function for this container.
From the watchtower documentation:

> By default, watchtower will monitor all containers running within the Docker daemon to which it is pointed […] you can restrict watchtower to monitoring a subset of the running containers by specifying the container names as arguments when launching watchtower.
Depending on how many containers will run under Dockge, and how we will update those, it might be desirable to either use `--label-enable` (and its disable counterpart) and label all such containers, or use `--disable-containers` followed by a list of the containers to skip.
If using labels, those are added as follows (here, not to update this container but all others):

```yaml
services:
  builtcontainer:
    image: localbuild:local
    labels:
      com.centurylinklabs.watchtower.enable: "false"
```
With `--label-enable`, we would set the label to `true` to update only those containers.
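For completeness, a sketch of the `--disable-containers` alternative; the container name here is hypothetical, and the exact list format is documented on watchtower’s arguments page:

```bash
# Update everything except the named container(s):
docker run --detach --name watchtower \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower --disable-containers builtcontainer
```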
<aside> ℹ️
This mode is an alternative deployment option enabling the tool’s metric collection.
</aside>
It is also possible to run watchtower in API mode, which is needed to expose some metrics. A token (please update its value) is required so that arbitrary calls cannot trigger an update. Using API mode disables periodic updates unless another flag is passed; we therefore also use the `schedule` flag (which takes a cron-like expression with a leading seconds field) to request daily updates at 1:30 a.m. local time. When in API mode, watchtower exposes port 8080; in this updated `compose.yaml`, we map it to host port 28080.
```yaml
services:
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: --cleanup --schedule "0 30 1 * * *" --include-stopped --label-enable --http-api-update --http-api-metrics --http-api-periodic-polls
    environment:
      - WATCHTOWER_HTTP_API_TOKEN=secret-token
    ports:
      - 28080:8080
```
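With the stack running, the API can be exercised from the host. Per watchtower’s HTTP API documentation, updates and metrics are served under `/v1/`; the bearer token must match `WATCHTOWER_HTTP_API_TOKEN`:

```bash
# Trigger an immediate update check:
curl -H "Authorization: Bearer secret-token" http://localhost:28080/v1/update
# Fetch Prometheus-style metrics:
curl -H "Authorization: Bearer secret-token" http://localhost:28080/v1/metrics
```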
Dozzle is a handy tool for viewing the logs of running Docker containers. It is a log viewer designed to simplify the process of monitoring and debugging containers: a lightweight, web-based application that provides real-time log streaming, filtering, and searching capabilities through an intuitive user interface. Running it in compose can be done following the instructions at https://dozzle.dev/guide/getting-started. We will create a new “dozzle” stack with the following `compose.yaml`:
```yaml
services:
  dozzle:
    container_name: dozzle
    image: amir20/dozzle:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data
    ports:
      - 8008:8080
    environment:
      DOZZLE_AUTH_PROVIDER: simple
      DOZZLE_ENABLE_ACTIONS: "true"
    labels:
      com.centurylinklabs.watchtower.enable: "true"
```
Our changes: we listen on an alternate port (8008) and authorize the tool to perform actions (start, stop, restart) on our containers. Because of this, we will require a username and password to access the WebUI, and we will use Watchtower to update the container.
“Save” the stack and create a `data` directory in the stack location (in `/opt/stacks/dozzle`; we will likely need `sudo` to do this). Then, create and edit a `data/users.yml` file containing content adapted from Dozzle’s “File Based User Management” page.
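A sketch of those steps; the username, name, and email below are placeholders, and the exact `users.yml` format (including the expected password hash and a `generate` helper that can produce the file) is described on Dozzle’s “File Based User Management” page:

```bash
sudo mkdir -p /opt/stacks/dozzle/data
# Write a minimal users file; replace the placeholder values,
# including the hashed password, per Dozzle's documentation:
sudo tee /opt/stacks/dozzle/data/users.yml > /dev/null <<'EOF'
users:
  admin:
    name: "Admin"
    email: admin@example.com
    password: "<hashed password, per Dozzle's docs>"
EOF
```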
After “Start”ing the stack, we can see the logs of other running containers in its WebUI at http://IP:8008.
Dashdot is “a modern server dashboard” that displays views of the running system’s resources on a webpage. Dashdot integrates with server “main page” tools such as HomePage, Homarr, or Heimdall, which is a preferred use case (note that an HTTPS reverse proxy needs to be available for this integration to be functional).
Per the instructions at https://getdashdot.com/docs/installation/docker-compose (the non-GPU `compose.yaml` file is also at this link):
```yaml
services:
  dash:
    image: mauricenino/dashdot:nvidia
    restart: unless-stopped
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    ports:
      - '80:3001'
    volumes:
      - /:/mnt/host:ro
    environment:
      DASHDOT_WIDGET_LIST: 'os,cpu,storage,ram,network,gpu'
```
To get more details on why the `privileged` and `/` mount are present, please see https://getdashdot.com/docs/installation.

Per those settings, dashdot would use port 80; we prefer to use a different port. The `deploy:` section defines the device access, here to a GPU.

Many configuration options can be added to dashdot, per https://getdashdot.com/docs/configuration/basic.
The final `dashdot` stack’s `compose.yaml` in use for this setup uses port 3001 and adds a few environment variables:
```yaml
services:
  dash:
    image: mauricenino/dashdot:nvidia
    container_name: dashdot-nvidia
    restart: unless-stopped
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    ports:
      - '3001:3001'
    volumes:
      - /:/mnt/host:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      DASHDOT_WIDGET_LIST: 'os,cpu,storage,ram,network,gpu'
      DASHDOT_SHOW_HOST: 'true'
      DASHDOT_CUSTOM_HOST: hostname
      DASHDOT_OVERRIDE_OS: 'Ubuntu 24.04'
    labels:
      com.centurylinklabs.watchtower.enable: "true"
```
with:

- `DASHDOT_CUSTOM_HOST` is used to control the hostname displayed.
- `DASHDOT_OVERRIDE_OS` is used to prevent the tool from reporting details about the running container’s OS (as opposed to the Docker host’s; here, running `Ubuntu 24.04`).

The “CTPO: CUDA + TensorFlow + PyTorch + OpenCV” Docker containers provide easy-to-use Jupyter Notebooks with TensorFlow, PyTorch, and OpenCV built with CUDA enabled. A CUDA-optimized version and a CPU-bound version are available.
Because those services are used on demand, we will use the ability Dockge grants us to start and stop them from its WebUI as needed. The `latest` Docker image is available as `infotrend/ctpo-jupyter-tensorflow_pytorch_opencv:latest`. The `README.md`’s `docker compose` section gives us the following:
```yaml
services:
  jupyter_ctpo:
    container_name: jupyter_ctpo
    image: infotrend/ctpo-jupyter-cuda_tensorflow_pytorch_opencv:latest
    restart: unless-stopped
    ports:
      - 8888:8888
    volumes:
      - ./iti:/iti
      - ./home:/home/jupyter
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    labels:
      com.centurylinklabs.watchtower.enable: "true"
```
In this setup, we note the following:

- The port mapping follows the `local_port:container_port` convention.
- Two `volumes` (mount points) are created: `/iti`, where the Jupyter interface starts from, and `/home/jupyter`, where user configurations are stored. Because Dockge places those local to the stack’s directory in `/opt/stacks`, it is convenient to access their content.
- With the `environment:` settings, we ensure that the container has full access to the `NVIDIA` device(s), and point the `resources` (in `deploy:`) to the first GPU available on the host (adapt as needed).

From the Dockge main menu, use `+ Compose`, paste the above `compose.yaml` content, name the stack `jupyter_ctpo`, and `Deploy` it. We wait for the `docker pull` to complete before going to http://127.0.0.1:8888/, entering the Jupyter access token (here set as `iti`), and confirming access to the GPU by opening a new terminal and typing `nvidia-smi`.
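The same check can be run from the host without opening the WebUI terminal, assuming the container name used above:

```bash
# Should print the host's GPU table if passthrough is working:
docker exec jupyter_ctpo nvidia-smi
```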
Following the same concepts as in the “GPU setup”, the CPU-bound version can be deployed as `jupyter_tpo` using a `compose.yaml` such as:
```yaml
services:
  jupyter_tpo:
    container_name: jupyter_tpo
    image: infotrend/ctpo-jupyter-tensorflow_pytorch_opencv:latest
    restart: unless-stopped
    ports:
      - 8889:8888
    volumes:
      - ./iti:/iti
      - ./home:/home/jupyter
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NVIDIA_VISIBLE_DEVICES=void
    labels:
      com.centurylinklabs.watchtower.enable: "true"
```
Note that we changed the base container image and the `container_name`, and are using a local port different from the CTPO version’s. The `NVIDIA_VISIBLE_DEVICES=void` entry matters only if the host’s default Docker runtime is set to the Nvidia one; it is only used in that case, so keeping it is benign.
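To verify which runtime the host defaults to, a quick optional check:

```bash
# Prints e.g. "runc" or "nvidia" depending on the daemon configuration:
docker info --format '{{.DefaultRuntime}}'
```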
This time, the Jupyter Lab is accessed from http://127.0.0.1:8889/.
Syncthing is a free, open-source, continuous file synchronization program that allows users to synchronize files and folders across multiple devices. It works on multiple platforms, including some NAS systems, providing a decentralized approach to file syncing without relying on cloud services or central servers. It focuses on simplicity, reliability, privacy, and security: users have full control over their data, as files are transferred directly between devices using end-to-end encryption, while complex sync setups remain possible, including one-way syncs and selective file syncing. For new users, it is recommended to look at https://docs.syncthing.net/intro/getting-started.html to decide if the tool matches your requirements (versus tools such as Time Machine for Macs, Borg backup, or similar).
For the following to be pertinent, a data source and a data destination need to exist on your network; i.e., a system with storage to sync data from another client, or a NAS on which Syncthing is installed to sync data to.
The following `syncthing` Dockge stack matches this second case of a `Send Only` configuration to the NAS. For this use, we prefer being able to run as `root` so that the tool can read every single file it encounters (which is not the recommended way, per the “Please consider using a normal user account” message that we will encounter; see the additions to the `environment` section). We will mount two directories to be shared with Syncthing peers (in our case, the local NAS): `/opt` (where Dockge’s stacks are located, which might include models for AI tools) and `/home` (where the different user directories are present). The `compose.yaml` with those settings is as follows (please adapt `hostname` as preferred):