<aside> 💡 A guide detailing how to install OpenStack on an Ubuntu 22.04 server Linux operating system using a private network.
</aside>
Revision: 20240512-0 (init: 20240220)
Kolla Ansible provides production-ready containers (here, Docker) and deployment tools for operating OpenStack clouds. This guide explains how to install a single-host (all-in-one) OpenStack cloud on an Ubuntu 22.04 server Linux operating system using a private network. We specify values and variables that can easily be adapted to other networks. We do not address encryption for the different OpenStack services; we will use an HTTPS reverse proxy to access the dashboard. Please note that this setup requires two physical NICs in the computer you will use.
An alternative worth knowing about is the microstack (sunbeam) project.

Since the dashboard will sit behind an HTTPS reverse proxy, review the `SECURE_PROXY_SSL_HEADER` guidance, as detailed at https://docs.openstack.org/security-guide/dashboard/https-hsts-xss-ssrf.html.
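For reference, the `SECURE_PROXY_SSL_HEADER` hardening mentioned above lives in Horizon's Django settings (its `local_settings`); a sketch based on the linked security guide (verify the values against your Horizon version before relying on them):

```python
# Horizon (Django) settings for a dashboard served behind a TLS-terminating
# reverse proxy. Values follow the OpenStack security guide; adapt as needed.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
```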
Our host has a small `/` and a larger `/data` disk, so we prefer the VM disk images and volumes to be created on the Cinder-mounted NFS drive. If you have a large `/` that can accommodate all your VMs, you can ignore the "NFS for Cinder" steps and the `cinder`-related variables in `globals.yml`. Listing the `/data/nfs` directory will show all the volumes created there; the share Cinder mounts is declared in `/etc/kolla/config/nfs_shares`.

We recommend obtaining the source for this document. Once you have obtained the source markdown file, open it in an editor and perform a find and replace for the different values you will need to customize for your setup. This will allow you to copy/paste directly from the source file.
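As an illustration of that find-and-replace step, a hypothetical `sed` pass over a scratch sample (the file name and the substitute values below are invented for the demo; use your own):

```shell
# Write a two-line sample in the globals.yml style, then swap in demo values
printf 'network_interface: "eno1"\nkolla_internal_vip_address: "10.0.0.254"\n' > /tmp/globals-sample.yml
sed -i -e 's/eno1/eth0/' -e 's/10\.0\.0\.254/192.168.1.254/' /tmp/globals-sample.yml
cat /tmp/globals-sample.yml
```

The same substitutions can be applied to the guide's markdown source before copy/pasting commands.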
Values to adjust (in no particular order):

- `eno1` is the host's primary NIC.
- `10.0.0.17` is the DHCP (or manual) IP of that primary NIC.
- `enp1s0` is the secondary NIC of the host; it should not have an IP and will be used for neutron.
- `kaosu` is the user we are using for installation.
- `/data` is the location where we prepare the installation (in a `kaos` directory) and store Cinder's NFS disks.
- `10.0.0.1` is your network's gateway.
- `10.0.0.75` is the start IP of the OpenStack Floating IPs range.
- `10.0.0.89` is the end IP of the OpenStack Floating IPs range.
- `10.0.0.254` is the OpenStack internal VIP address.
- `os.example.com` is the URL for OpenStack behind our HTTPS upgrading reverse proxy.

We are not addressing user choices like Cinder or values for disk size/memory/number of cores/quotas in the `my-init-runonce.sh` script or later command lines. Most steps in the "Post-installation" section require you to select your preferred user/project/IPs; adapt as needed in those steps.
- `eno1` is the primary NIC, with IP `10.0.0.17` (and `dhcp6: false` set in the netplan, see below).
- `enp1s0` is the secondary NIC, which should not have an IP assigned. Edit `/etc/netplan/00-installer-config.yaml`, set `dhcp4: false` and `dhcp6: false` for `enp1s0`, then run `sudo netplan apply`.
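As a sketch, assuming the interface names above (your generated netplan file will likely differ, so merge rather than overwrite), `/etc/netplan/00-installer-config.yaml` could end up looking like:

```yaml
# Sketch of /etc/netplan/00-installer-config.yaml for this guide's host
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true    # primary NIC, receives 10.0.0.17 (or configure it statically)
      dhcp6: false
    enp1s0:
      dhcp4: false   # secondary NIC reserved for neutron: no IP assigned
      dhcp6: false
```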
We assume the following are in place:

- `ssh` set up.
- a `kaosu` user for our OpenStack Kolla Ansible installation.
- a `/data` directory for the different components to install.
- a gateway at `10.0.0.1`.
- a primary NIC (`eno1` on `10.0.0.17`).
- a Floating IPs range of `10.0.0.75` - `10.0.0.89`.
- `10.0.0.254` as the OpenStack internal VIP address.

To enable the 6.x kernel:

```bash
sudo apt-get install -y linux-generic-hwe-22.04
sudo reboot
```
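After the reboot, a quick check that the HWE kernel is active:

```shell
# Print the running kernel release; after installing the HWE stack it should be a 6.x version
uname -r
```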
Latest instructions from https://docs.docker.com/engine/install/ubuntu/.

```bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER

# logout from ssh and log back in, then test that a sudo-less docker is available to your user
docker run hello-world
```
To make our `kaosu` user use the `sudo` command without being prompted for a password:

```bash
sudo visudo -f /etc/sudoers.d/kaos-Overrides
```

In the editor, add (and adapt `kaosu` as needed):

```text
kaosu ALL=(ALL) NOPASSWD:ALL
```

Save the file, then test in a new terminal or login:

```bash
sudo echo works
```
Additional details at https://docs.openstack.org/kolla-ansible/latest/reference/storage/cinder-guide.html and https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-22-04.

We want to use NFS on `/data/nfs` to store Cinder-created volumes:

```bash
# Install the NFS server
sudo apt-get install -y nfs-kernel-server

# Create the destination directory and make it nfs-permissions ready
mkdir -p /data/nfs
sudo chown nobody:nogroup /data/nfs

# Edit the `exports` configuration file
sudo nano /etc/exports
# Within this file: add the directory and the access host (ourselves, ie, our 10. IP) to the authorized list
/data/nfs 10.0.0.17(rw,sync,no_subtree_check)

# After saving, restart the nfs server
sudo systemctl restart nfs-kernel-server

# Prepare the cinder configuration to enable the NFS mount
sudo mkdir -p /etc/kolla/config
sudo nano /etc/kolla/config/nfs_shares
# Add the "remote" to mount in the file and save
10.0.0.17:/data/nfs
```
Latest instructions at https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html.

For this install, we will work from `/data/kaos` as the `kaosu` user.

```bash
cd /data
# if the directory creation fails, create the directory as root: sudo mkdir kaos
# and chown it: sudo chown $USER:$USER kaos
mkdir kaos
cd kaos

sudo apt-get install -y git python3-dev libffi-dev gcc libssl-dev build-essential
sudo apt-get install -y python3-venv python3-pip

python3 -m venv venv
source venv/bin/activate
pip install -U pip

# Install a few things that might otherwise fail during ansible prechecks
sudo apt-get install -y build-essential libpython3-dev libdbus-1-dev cmake libglib2.0-dev
pip install docker pkgconfig dbus-python

pip install 'ansible-core>=2.14,<2.16'
pip install git+https://opendev.org/openstack/kolla-ansible@master

sudo mkdir -p /etc/kolla
sudo chown $USER:$USER /etc/kolla
cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
# we are going to do an all-in-one (single host) install
cp venv/share/kolla-ansible/ansible/inventory/all-in-one /etc/kolla/.

# Install Ansible Galaxy requirements
kolla-ansible install-deps

# generate passwords (into /etc/kolla/passwords.yml)
kolla-genpwd
```
Edit and adapt the `/etc/kolla/globals.yml` file (`sudo nano /etc/kolla/globals.yml`) as follows (search for the matching keys):

```yaml
kolla_base_distro: "ubuntu"
kolla_internal_vip_address: "10.0.0.254"
network_interface: "eno1"
neutron_external_interface: "enp1s0"
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
```
Before we try the deployment, let's ensure Ansible uses the `venv` Python interpreter: at the top of the `/etc/kolla/all-in-one` inventory file, add `localhost ansible_python_interpreter=/data/kaos/venv/bin/python`. Then run the usual quickstart sequence: `kolla-ansible -i /etc/kolla/all-in-one bootstrap-servers`, then `prechecks`, then `deploy`.
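A quick generic sanity check (a throwaway demo, not tied to this host's paths): a venv's interpreter reports the venv directory as its `sys.prefix`, which is how you can confirm the inventory points at the interpreter you expect:

```shell
# Create a disposable venv (--without-pip keeps the demo minimal) and
# ask its python where it lives
python3 -m venv --without-pip /tmp/demo-venv
/tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'
```

For the real install, the same one-liner against `/data/kaos/venv/bin/python` should print `/data/kaos/venv`.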
If all goes well, you will have a `PLAY RECAP` at the end of a successful install, which might look similar to the following:

```text
PLAY RECAP ******************************************************************************************************************
localhost : ok=352 changed=242 unreachable=0 failed=0 skipped=258 rescued=0 ignored=1
```
The Dashboard will be on our host's port 80, so at http://10.0.0.17/. The `admin` user password can be found using `fgrep keystone_admin_password /etc/kolla/passwords.yml`.
(still using the `venv`)

Install the python `openstack` command: `pip install python-openstackclient -c https://releases.openstack.org/constraints/upper/master`

Running `kolla-ansible post-deploy -i /etc/kolla/all-in-one` will create a `clouds.yaml` file that can be added to your default config: `mkdir -p ~/.config/openstack && cp /etc/kolla/clouds.yaml ~/.config/openstack`
(requires the `venv`, the `openstack` command line, the `clouds.yaml` file, and the generated `/etc/kolla/admin-openrc.sh` script)
In `/data/kaos`, there is a `venv/share/kolla-ansible/init-runonce` script to create some of the basic configurations for your cloud. Most people will be fine with modifying its `EXT_NET_CIDR`, `EXT_NET_RANGE`, and `EXT_NET_GATEWAY` variables.

The file below is adapted to our configuration as a `my-init-runonce.sh` executable script. It uses larger `tiny` images (5GB; an Ubuntu server image is over 2GB), has each larger instance use a base image of only 20GB (since you can specify your preferred disk image size during instance creation), names its instances following the `m<number_of_cores>` convention, and adds `xxlarge` and `xxxlarge` memory instances.