It Only Took Me 12 Years!

As many of you know, I tend to run a lot of interesting things in my “home lab” network. Over the years, I have used lots of different techniques for this. Sometimes I ran on “bare metal,” as the kids call it these days; other times I used a variety of virtualization techniques, with LXD at the top of that list most recently.

About twelve years ago, I discovered a really interesting open source project called OpenStack that had been started by NASA and Rackspace Hosting. The goal of the project was to create a cloud computing infrastructure that could rival the “big boys” and run on commodity hardware. Back then, I had three spare “old” work laptops in my office, and I tried like hell to get it running on them. It turned out to be really, really hard at the time.

Fast forward to a couple of years ago. In the intervening time, Canonical (maker of Ubuntu Linux, among other things) had leaned in hard on OpenStack, and I thought I’d give it another go. This time they had done a lot of the heavy lifting for me using the Juju operator framework and a bunch of charms. I managed (after many, many attempts where I learned something new and then had to start over to get a clean install) to get OpenStack up and running.

Then I had a power failure in my house and my server rebooted. Unfortunately, my config got “borked” as a result and I threw up my hands in frustration again and went back to LXD. So much for me having my own private cloud.

Well, a few months ago I had a problem with my LXD cluster where it lost access to its database, and instead of trying to recover the config (I had backups of everything anyhow), I decided to try the latest “cool thing”(TM) from Canonical: OpenStack Sunbeam. I had seen lots of advertisements about “Install OpenStack Sunbeam in Five Easy Steps” and thought I’d give it a whirl. This version of OpenStack still used Juju under the covers, but it was focused on setting things up on a single machine using MicroK8s, a lightweight version of Kubernetes.

The machine I am running on is based on an AMD Threadripper 2950X processor (16 cores, 32 threads) with 128 GB of RAM and an NVIDIA RTX 4070 Ti video card for GPU work. I call it “beast” affectionately. The goal was to get OpenStack Sunbeam running on beast and move all of my workloads to machines there. Not to spoil the ending, but I have managed to do this with two exceptions. The first is my UniFi Console (the software that runs my Ubiquiti wireless network), because it had a subtle networking problem onboarding the devices that the OVN network on OpenStack struggled with. The second is my QEMU-emulated s390x and ppc64el versions of Ubuntu, due to their need for a network tap and bridge configuration that could probably have been done on OVN but was too “thinky” for me at the time.

Here’s how I did it. First, I started with a clean install of Ubuntu 22.04 Server on the machine. Given that 24.04 is just around the corner, I gave a fleeting thought to using it but decided not to press my luck and to stick to the script as much as I could. I have three Ethernet interfaces available on this machine (along with a WiFi one that makes four), so I plugged two of them into my switch and put a static IP on one of them to serve as my primary access point to the machine over ssh. The other one I just let dhcp4 autoconfigure during the install.

After rebooting the fresh machine, I applied all of the updates and enrolled it in Ubuntu Pro. If you aren’t using Pro, you really should be. It’s an amazing free service from Canonical for up to five machines that gets you things like Livepatch (a magical way of patching your running kernel without a reboot) and extended support for the OS and a ton of open source software. I also enrolled the machine in my local instance of Landscape, Canonical’s systems management service.
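
Attaching to Pro is a one-liner; the token placeholder below stands in for the real one you get from your Ubuntu Pro dashboard:

$ sudo pro attach <YOUR-PRO-TOKEN>
$ pro status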

I turned off dhcp4 on the unused network connection using netplan, by editing the /etc/netplan/00-installer-config.yaml file and making this change:

enp4s0:
    dhcp4: false

I also marked the unplugged port as “optional: true” so that the boot process wouldn’t hang waiting for it to time out trying to connect.
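
For reference, here is a minimal sketch of how those stanzas might look. The only name taken from my actual setup is enp4s0; the unplugged port’s name here is a stand-in, and yours will differ:

network:
  version: 2
  ethernets:
    enp4s0:
      dhcp4: false
    enp6s0:            # the unplugged port (stand-in name)
      dhcp4: false
      optional: true

Apply the change with “sudo netplan apply,” or just let it take effect on the next reboot.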

I’m not sure why, but the Ubuntu Server install creates a pretty small LVM volume to install the OS in, so I extended it to take up the entire disk by doing the following:

# Grow the root logical volume into all remaining free space in the volume group
$ sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
# Grow the ext4 filesystem to match the enlarged volume (safe to run online)
$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
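
To confirm the resize took, check that the root filesystem now spans the whole disk:

$ df -h /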

I then rebooted the machine, installed the OpenStack snap, and ran the “prepare-node” script as instructed:

$ sudo snap install openstack --channel 2023.2/edge
$ sunbeam prepare-node-script | bash -x && newgrp snap_daemon

I then set the FQDN for the machine by editing the /etc/hosts file and adding:

127.0.0.1 beast.example.com beast

I then ran the following command as well for good measure:

$ sudo hostnamectl set-hostname beast.example.com
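
A quick check with hostname -f should now show the fully qualified name:

$ hostname -f
beast.example.com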

With the changes accepted by the OS, it was time to bootstrap the machine:

$ sunbeam cluster bootstrap

I managed to run into an error that the Discourse forum was happy to help me address: I had to add a “-m /snap/openstack/current/etc/manifests/edge.yml” switch, which corrected it. That might not be an issue by the time you give it a try, but I documented it here for completeness. Also, if you want Ceph up and running on the new machine, simply add “--role compute --role storage” to the command as well, as shown below.
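
Putting that together, the command I actually ran looked like this (the commented variant is what you would use to get Ceph):

$ sunbeam cluster bootstrap -m /snap/openstack/current/etc/manifests/edge.yml

# With Ceph storage as well:
# sunbeam cluster bootstrap -m /snap/openstack/current/etc/manifests/edge.yml --role compute --role storage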

When the bootstrap process prompted me for configuration items, I said I didn’t have a proxy and provided my “regular” network as the management network. For the MetalLB address allocation range, I gave it 10 IPs out of that same network.
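
Purely for illustration (these are made-up addresses, not my real ones), those answers took roughly this shape:

Management network:               192.168.1.0/24
MetalLB address allocation range: 192.168.1.50-192.168.1.59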

The next step was to configure the OpenStack instance on the machine:

$ sunbeam configure --openrc demo-openrc

I told it I wanted remote network access and gave it my “regular” network as the external network, with my gateway and a 20-IP block in that same network as the start and end of the allocation range. I asked for a flat network and had OpenStack populated with a demo user, providing the credentials to be used for it. I gave it a different subnet for the project network and told it what my nameserver was. Finally, I enabled ping and ssh and gave it the name of that unused Ethernet interface as the free network interface.
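
Again purely for illustration, with made-up values standing in for my real ones, my answers looked something like this:

External network:         192.168.1.0/24 (flat)
Gateway:                  192.168.1.1
Allocation range:         192.168.1.100-192.168.1.119
Project network:          10.20.20.0/24
Nameserver:               192.168.1.1
Enable ping and ssh:      yes
Free network interface:   enp4s0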

I generated the admin credentials into a configuration file I could “source” from the command line later and displayed the dashboard URL as follows:

$ sunbeam openrc > sunbeam-admin
$ sunbeam dashboard-url
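
With the credentials file sourced, you can also poke at the new cloud from the command line. I’m assuming the openstackclients snap here for the openstack CLI:

$ sudo snap install openstackclients
$ source ~/sunbeam-admin
$ openstack network list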

I then logged into that URL with the credentials from the ~/sunbeam-admin file so that I could further configure things.

I created the production domain from the Identity -> Domains link (press the Set Domain Context button next to it after it is created in order to use it) and the production project from the Identity -> Projects link. Then I created an admin user under that domain and project from the Identity -> Users link. I then logged out and back in as my new user.
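
If you prefer the command line, a rough equivalent with the admin credentials sourced would be something like this (the user name is illustrative):

$ openstack domain create production
$ openstack project create --domain production production
$ openstack user create --domain production --password-prompt produser
$ openstack role add --user produser --user-domain production --project production --project-domain production admin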

From the Project -> Network -> Network Topology link, I created a router and a network with a new subnet. I then clicked on the router and added an interface on that new subnet. You should now see a line from the router to the subnet, indicating that the subnet is connected to the external network through that router.
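
The same topology can be built from the CLI. I’m assuming here that Sunbeam named the external network “external-network” (check with “openstack network list”), and the other names are illustrative:

$ openstack network create prod-net
$ openstack subnet create prod-subnet --network prod-net --subnet-range 10.30.30.0/24
$ openstack router create prod-router
$ openstack router set prod-router --external-gateway external-network
$ openstack router add subnet prod-router prod-subnet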

I created a floating IP address on my “regular” network using the Admin -> Network -> Floating IPs link, and created an ssh key pair from Project -> Compute -> Key Pairs. I saved the downloaded PEM file to my ~/.ssh directory to use to ssh into machines on the new server.
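
Or, from the command line (again with illustrative names):

$ openstack floating ip create external-network
$ openstack keypair create prod-key > ~/.ssh/prod-key.pem
$ chmod 600 ~/.ssh/prod-key.pem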

I finally created a test instance and assigned it a floating IP address. One trick I discovered was to answer “No” to “Create Volume” when creating the server; without doing that, you get an error trying to create it. Perhaps it had something to do with me not setting up Ceph? I don’t know. I was then able to use that PEM file to ssh into the newly created server, and everything worked!
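
The CLI version of that last test would look roughly like this; the image, flavor, and floating IP values are illustrative, and “openstack image list” and “openstack flavor list” will show what your cloud actually has:

$ openstack server create test-vm --image jammy --flavor m1.small --network prod-net --key-name prod-key
$ openstack server add floating ip test-vm 192.168.1.105
$ ssh -i ~/.ssh/prod-key.pem ubuntu@192.168.1.105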

I hope you have enjoyed this little walk through me finally getting a win on OpenStack after more than a decade. It has certainly come a long way in terms of capabilities as well as usability. Oh, and I have rebooted this server multiple times without losing the config, so I hope it will be super stable for me in the coming years.
