It Only Took Me 12 Years!

As many of you know, I tend to run a lot of interesting things in my “home lab” network. Over the years, I have used lots of different techniques for this. Sometimes I ran on “bare metal” as the kids call it these days, other times I have used a variety of virtualization techniques with LXD being at the top of that list most recently.

About twelve years ago, I discovered a really interesting open source project called OpenStack that had been started by NASA and Rackspace Hosting. The goal of the project was to create a cloud computing infrastructure that would rival the “big boys” and run on commodity hardware. Back then, I had three spare “old” work laptops in my office and I tried like hell to get it running on them. It turned out to be really really hard at the time.

Fast forward to a couple of years ago. In the intervening time, Canonical (maker of Ubuntu Linux among other things) had really leaned in hard to OpenStack and I thought I’d give it another go. This time they had done a lot of the heavy lifting for me using the juju operator framework and a bunch of charms. I managed (after many, many attempts where I learned something new and then had to start over again to get a clean install) to get OpenStack up and running.

Then I had a power failure in my house and my server rebooted. Unfortunately, my config got “borked” as a result and I threw up my hands in frustration again and went back to LXD. So much for me having my own private cloud.

Well, a few months ago I had a problem with my LXD cluster where it lost access to its database and instead of trying to recover the config (I had backups of everything anyhow), I decided to try the latest “cool thing”(TM) from Canonical – OpenStack Sunbeam. I had seen lots of advertisements about “Install OpenStack Sunbeam in Five Easy Steps” and thought I’d give it a whirl. This version of OpenStack did still use juju under the covers, but it was focused on setting things up on a single machine using MicroK8s – a lightweight version of Kubernetes.

The machine I am running on is based on an AMD Threadripper 2950X processor with 16 cores (32 threads) and has 128 GB of RAM and an NVIDIA RTX 4070 Ti video card for GPU work. I call it “beast” affectionately. The goal was to get OpenStack Sunbeam running on beast and move all of my workloads to machines there. Not to share a spoiler, but I have managed to do this with two exceptions. One is my UniFi Console (the software that runs my Ubiquiti wireless network), because it had a subtle networking problem onboarding the devices that the OVN network on OpenStack struggled with. The other is my QEMU-emulated s390x and ppc64el versions of Ubuntu, due to their need for a network tap and bridge configuration that could probably have been done on OVN but was too “thinky” for me at the time.

Here’s how I did it. First, I started with a clean install of Ubuntu 22.04 Server on the machine. Given that 24.04 is just around the corner I gave a fleeting thought to using it but decided not to press my luck and try to stick to the script as much as I could. I have three ethernet interfaces available to me on this machine (along with a WiFi one that makes four) so I plugged two of them into my switch and put a static IP on one of them that will be used as my primary access point to the machine over ssh. The other one I just let dhcp4 autoconfigure during the install.

After rebooting the fresh machine, I applied all of the updates and enrolled it into UbuntuPro. If you aren’t using Pro, you really should. It’s an amazing free service from Canonical for up to 5 machines to get things like Livepatch (a magical way of updating your running kernel without a reboot) and extended support for the OS and a ton of open source software. I also enrolled the machine in my local instance of Landscape, Canonical’s systems management service.

I turned off the dhcp4 on my unused networking connection using netplan by editing the /etc/netplan/00-installer-config.yaml file and making this change:

    dhcp4: false

I also marked the unplugged port as “optional: true” so that it wouldn’t hang the boot process until it timed out trying to connect.
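For reference, here is roughly the shape the netplan file ends up with. The interface names and address below are hypothetical placeholders (yours will have whatever names the installer detected):

```yaml
network:
  version: 2
  ethernets:
    enp5s0:              # primary NIC with the static management address
      addresses: [192.168.1.10/24]
    enp6s0:              # second NIC, left on dhcp4 from the install
      dhcp4: true
    enp7s0:              # the unplugged NIC
      dhcp4: false
      optional: true     # don't hang the boot waiting for a carrier
```

A `sudo netplan try` before `sudo netplan apply` will roll the change back automatically if it cuts off your ssh session.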

I’m not sure why, but the Ubuntu Server install creates a pretty small LVM volume to install the OS in, so I extended it to take up the entire disk by doing the following:

$ sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
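It is worth a quick sanity check afterwards that the root filesystem really does now report (close to) the full disk size:

```shell
# After lvextend + resize2fs, the root filesystem's reported size should
# match the disk rather than the small installer default.
df -h /
```

If the size still looks wrong, re-run `resize2fs` against the mapper device; `lvextend` only grows the volume, not the filesystem inside it.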

I then rebooted the machine and installed the OpenStack snap and ran the “prepare-node” script as instructed:

$ sudo snap install openstack --channel 2023.2/edge
$ sunbeam prepare-node-script | bash -x && newgrp snap_daemon

I then set the FQDN for the machine by editing the /etc/hosts file and adding an entry that maps the machine’s IP address to its fully qualified and short names (beast).
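With a made-up address and domain, the hosts entry looks something like this (substitute your own):

```
192.168.1.10    beast
```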

I then ran the following command as well for good measure (example.com is a placeholder; substitute your machine’s actual FQDN):

$ sudo hostnamectl set-hostname beast.example.com

A quick check of hostname -f reveals that the changes were accepted by the OS. Now I bootstrapped the machine:

$ sunbeam cluster bootstrap

I managed to run into an error that the Discourse forum was happy to help me address: I had to add a “-m /snap/openstack/current/etc/manifests/edge.yml” switch, which corrected it. It might not be an issue by the time you give it a try, but I documented it here for completeness. Also, if you want Ceph up and running on the new machine, simply add “--role compute --role storage” to the command as well.

When the bootstrap process prompted me for configuration items, I said I didn’t have a proxy and provided my “regular” network as the management network. For the MetalLB address allocation range, I gave it 10 IPs out of that same network.

The next step was to configure the OpenStack instance on the machine:

$ sunbeam configure --openrc demo-openrc

I told it I wanted remote network access, used my “regular” network as the external network, and gave it my gateway plus a 20-IP block in that same network as the start and end range. I told it I wanted a flat network and that I wanted OpenStack populated with a demo user, whose credentials I provided. I gave it a different subnet for the project network and told it what my nameserver was. Finally, I told it I wanted to enable ping and ssh, and gave it the name of that unused ethernet interface as the free network interface.

I generated the admin credentials into a configuration file I could “source” from the command line later and displayed the dashboard URL as follows:

$ sunbeam openrc > sunbeam-admin
$ sunbeam dashboard-url

I then logged into that URL with the credentials from the ~/sunbeam-admin file so that I could further configure things.

I created the production domain from the Identity -> Domains link (press the Set Domain Context button next to it after it is created to use it) and the production project from the Identity -> Projects link. Then I created an admin user that used those from the Identity -> Users link. I then logged out and back in as my new user.

From the Project -> Network -> Network Topology link, I created a router and a network with a new subnet. I then clicked on the Router and added an interface to that new subnet. You should see a line now from the router to the subnet indicating that it is connected to the external network through that router.

I created a floating IP address on my “regular” network using the Admin -> Network -> Floating IPs link and created an ssh key pair from Project -> Compute -> Key Pairs. I saved the downloaded PEM file to my ~/.ssh directory to use to ssh into machines on the new server.

I finally created a test instance and assigned it a floating IP address. One trick I discovered was to answer “No” to “create volume” when you are creating the server. Without doing that you get an error trying to create the server. Perhaps it had something to do with me not setting up ceph? I don’t know. I was then able to use that PEM file to ssh into the newly created server and everything worked!

I hope you have enjoyed this little walk through me finally getting a win on OpenStack after more than a decade. It certainly has come a long way in terms of capabilities as well as usability. Oh, and I have rebooted this server multiple times without losing the config, so I hope it will be super-stable for me in the coming years.

Posted in Uncategorized | Leave a comment

YubiKey All The Things

As I continue to learn how to secure my various “things”, I’m getting more and more of a fan of physical two factor authentication that doesn’t involve sending six digit codes over the public SMS network. As such, I’ve been playing around with the YubiKey 5 NFC device, a little USB second factor that costs about $50 US and is really handy.

The first thing I wanted to secure with my YubiKey was logins to my various devices. Long-time readers of this blog know that means I need things to work in Windows, Linux and OpenBSD. I thought it would be helpful to outline below how I did that.


OpenBSD

Now for the fun part: setting up the YubiKey on OpenBSD! I followed this guide to set things up.

If you intend to use your YubiKey on OpenBSD, you will want to do this first, before anything else. The reason for that is that you will need to capture the private ID and private key for your YubiKey slot which can only be done at the time that you generate it. After it has been written to the key, it can’t be retrieved (otherwise, cloning a YubiKey would be a trivial exercise).

Another downside to the implementation of login_yubikey is that it acts as the sole credential to log you in – in other words, it replaces your password and there is no second factor.

First off, you will want to install the YubiKey personalization app:

$ doas pkg_add yubikey-personalization-gui

The limitation to how login_yubikey is implemented currently (as of 7.3) is that you can only have one key. There is no ability to register a second one. However, you can make a backup of the private identity and secret key at the time you generate them and store them in the same place you keep your backup YubiKey.

So, launch the YubiKey Personalization Tool GUI application and insert your YubiKey that you will be using as your only key for OpenBSD. In the UI, click on Yubico OTP from the upper left-hand menu and press the “Quick” button that shows up on the screen. Uncheck the “Hide values” and copy off to a safe place the Public Identity, Private Identity and Secret Key. Select that slot you want to use (in my case, slot 1) and press the “Write Configuration” button and it should write to your YubiKey.

Now create a file called /var/db/yubikey/user.uid and put your private identity value in there (replacing “user” in the filename with your userid). Put the secret key into one called /var/db/yubikey/user.key (again, replacing “user” with your userid). Set up the right permissions on the two files:

# chown root:auth /var/db/yubikey/*
# chmod o-rw /var/db/yubikey/*
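Pulling those steps together, here is a rehearsal you can run in a scratch directory first. The hex values are obviously fake placeholders; substitute the private identity and secret key you copied out of the personalization tool, and use /var/db/yubikey (as root, with the chown to root:auth) for the real thing:

```shell
# Rehearse the file layout in a scratch directory before touching /var/db/yubikey.
DB=$(mktemp -d)                                       # use DB=/var/db/yubikey for real
printf '%s\n' 'd0d1d2d3d4d5' > "$DB/user.uid"         # fake 12-hex-char private identity
printf '%s\n' '000102030405060708090a0b0c0d0e0f' > "$DB/user.key"  # fake secret key
chmod o-rw "$DB"/user.*                               # plus chown root:auth for real
ls -l "$DB"
```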

Finally, edit the /etc/login.conf file and add ‘yubikey’ at the beginning of the auth-defaults entry like this:

# Default allowed authentication styles
auth-defaults:auth=yubikey,passwd,skey:
Now if you reboot, you will find that your password no longer works. Touching your YubiKey after you insert it, however, should replace your password and log you in just fine. According to a wonderful contributor, there is a challenge-response capability that I might be able to use to meet my 2FA requirement; however, I’ll have to tackle that in another post sometime later.

Windows 11

The first thing I needed to do was to make sure that all of my Windows machines were running local accounts. I’m not a fan of Microsoft’s strong push to force everyone to use a Microsoft cloud login for their local machines. Windows 10 at least allowed you the option to ignore the strong push in the UI to set things up with a Microsoft login. For Windows 11, if that option exists, I cannot find it in the UI, so I have to initially set up the machine with a Microsoft cloud authentication and then, after the OS install is complete, switch it over to a local account.

By the way, if you are curious about how to switch from a Microsoft account to a local account, you need to bring up the settings pane in the UI, then navigate to “Accounts”. From there, click on “Your info” and select the item under “Account settings”. That will allow you to convert your existing Microsoft account to a local one. Or you can just create a new local account, set it as admin and delete the Microsoft one.

After you have either verified you are running a local account or migrated your account and rebooted / logged back in, you will want to create a second admin account that you can use without a YubiKey to prevent you from locking your keys in the car. While I acknowledge this makes things less secure, I created mine with a 20+ character password and no password recovery questions that make sense to anyone (just gibberish in the answers) and rolled with it. If you are feeling that your threat model wouldn’t support such a “back door”, there is no requirement to create such an account, just be warned that you could potentially lose access if you screw up.

That said, I then logged out of my normal user account and logged into this backup admin account. From there, I installed the “Login Confirmation” application from Yubico and rebooted the machine.

Upon reboot, the login screen looks like it requires your YubiKey to log in. Actually, it doesn’t yet, not until you configure things. Once you log in, run the “Login Confirmation” application you just installed. I switched to “Advanced configuration” so that I could control the behavior of the application as I set it up. On the next screen, I selected the following options:

  • Slot 2
  • Use existing secret if configured – generate if not configured
  • Generate recovery code (I do this for the first machine I setup and then save it off, after that I don’t generate a new code)
  • Create backup device for each user (only do this if you have purchased 2 separate YubiKeys – I have and I keep my backup in a safe / secure location in case I lose the primary)

You then need to pick the accounts you want to secure with your YubiKey(s) (again, I only pick my primary account, not the new admin account I created in case I’m locked out) and click “Next”. You’ll then be prompted to insert your primary YubiKey, then your secondary one. At this point, you should be good to go. I reboot just for grins, and then verify that (1) I cannot log into my primary account unless one of the two YubiKeys is inserted; (2) I can log into my emergency admin account without a YubiKey; and (3) that both YubiKeys work for logging into my primary account.

Linux (Ubuntu)

First things first, we need to add the official PPA for Yubico to apt:

$ sudo add-apt-repository ppa:yubico/stable
$ sudo apt update

Now go ahead and install all of the Yubico software:

$ sudo apt install yubikey-manager yubikey-personalization libpam-yubico libpam-u2f yubikey-manager-qt yubioath-desktop

Next, you will need to set the PIN for the FIDO2 function on both of your YubiKeys. To do this, run the Yubico Authenticator app and select “YubiKey” from the hamburger menu. Insert your primary YubiKey and scroll down to the Configuration section in the GUI. If you click on the right arrow next to WebAuthn (FIDO2/U2F), you can program the PIN. Do this for your backup key as well.

To associate the U2F key(s) with your Ubuntu account, open a terminal and insert your YubiKey:

$ mkdir -p ~/.config/Yubico
$ pamu2fcfg > ~/.config/Yubico/u2f_keys

You will be prompted to enter your PIN that you set above and then when the YubiKey lights up, touch the “y” symbol on the physical key and it will save the information on your account. Now repeat the process with your backup YubiKey:

$ pamu2fcfg -n >> ~/.config/Yubico/u2f_keys

Now, let’s add some additional security by moving the config files to a root-only accessible location and update the configuration for PAM to point to it:

$ sudo mkdir /etc/Yubico
$ sudo mv ~/.config/Yubico/u2f_keys /etc/Yubico/u2f_keys

Now we will need to edit the configuration file /etc/pam.d/sudo and add the following line after the “@include common-auth” line:

auth    required pam_u2f.so authfile=/etc/Yubico/u2f_keys

Now, let’s test things to be sure that sudo is working with the YubiKey. To do this, open a fresh terminal window, insert your YubiKey and run “sudo echo test”. You should have to enter your password and then touch the YubiKey’s metal button, and it will work. Without the YubiKey inserted, the sudo command (even with your password) should fail.

So now we need to repeat this process for the following commands and their corresponding PAM files:

runuser    /etc/pam.d/runuser
runuser -l    /etc/pam.d/runuser-l
su    /etc/pam.d/su
sudo -i    /etc/pam.d/sudo-i
su -l    /etc/pam.d/su-l
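Rather than hand-editing each file, the insertion can be scripted. This sketch rehearses the sed edit on a scratch copy; for the real files you would run the same sed expression (with sudo and a backup suffix such as -i.bak) against each path in the list above. The pam_u2f.so module name is the one shipped by libpam-u2f:

```shell
# Rehearse the PAM edit on a scratch file; point it at /etc/pam.d/<file>
# (with sudo and -i.bak) when you do it for real.
target=$(mktemp)
printf '%s\n' '@include common-auth' > "$target"
sed -i '/@include common-auth/a auth    required pam_u2f.so authfile=/etc/Yubico/u2f_keys' "$target"
cat "$target"
```

Because PAM evaluates modules in order, the line must land after the common-auth include, which is why this uses sed's append-after-match rather than simply appending to the end of the file.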

Now we need to configure the system to require the YubiKey for login. To do this, we perform the same thing as above but in the /etc/pam.d/gdm-password file. Do this also for the /etc/pam.d/login file if you want to protect console text-based login. Finally, create a log file for the system to use by touching /var/log/pam_u2f.log and add this to the end of the lines above that you want to debug:

auth    required pam_u2f.so authfile=/etc/Yubico/u2f_keys debug debug_file=/var/log/pam_u2f.log

Reboot your system and you should be pretty much locked down to use the YubiKey for anything important.

LUKS Full Disk Encryption (Ubuntu)

OK, if you really want to take your 2FA to the next level, you can make it so that your YubiKey is required as a second factor to unlock your LUKS-encrypted disk. Not a replacement for your password, but a true second factor that is required in addition to your password in order to unlock the disk. I was able to get this to work by following the instructions in this post as well as the GitHub repo. To summarize, see below.

If you are using your YubiKey for Windows 11 login as I outlined above, your second slot in the key is already in use so DO NOT do what I am about to tell you. If, however, you are not using that second slot for Windows login, you need to install the YubiKey personalization software and then initialize slot 2 (for both your primary and backup YubiKeys if you have two):

$ sudo apt install yubikey-personalization
$ ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible

Now, with either your existing slot 2 configuration for Windows 11 login, or with the new one that you did above, you will need to enroll your YubiKey to the LUKS slot. Figure out first what your partition name is using the lsblk command. You are looking for a partition labeled as “crypt”. In my case it is /dev/nvme0n1p3 (not the /dev/nvme0n1p3_crypt):

$ sudo apt install yubikey-luks
$ sudo yubikey-luks-enroll -d /dev/nvme0n1p3 -s 1

You will be prompted for a challenge passphrase to use to unlock your drive as the first factor, with the YubiKey being the second factor. Since you are using a higher security (2FA) mechanism to unlock the drive, there is no need for this challenge passphrase to be crazy long. You can use a much longer passphrase for slot 0 to unlock the drive without the YubiKey as a failsafe.

Repeat this in a different slot with your backup YubiKey. If you would like to see which slots are in use in your current LUKS partition, use the command:

$ sudo cryptsetup luksDump /dev/nvme0n1p3

That is also a good way to confirm you have the right partition name.

Now, you will need to update /etc/crypttab to add a keyscript:

cryptroot /dev/nvme0n1p3 none    luks,keyscript=/usr/share/yubikey-luks/ykluks-keyscript

After editing the file, you will need to transfer the changes to initramfs:

$ sudo update-initramfs -u

At this point, you should be able to reboot your machine and verify that you can unlock the disk with your original LUKS passphrase (what is now your fallback) as well as your new challenge passphrase and the YubiKey.

If you would like to update your YubiKey challenge passphrase in the future, simply use the same command you used to enroll it initially, but append a “-c” to clear out the old LUKS slot:

$ sudo yubikey-luks-enroll -d /dev/nvme0n1p3 -s 1 -c

When you are asked to “Enter any remaining passphrase”, that is where you enter your (hopefully) much longer fallback passphrase that doesn’t require the YubiKey, then you are asked to supply the new passphrase twice.

If you would like to update your fallback passphrase that doesn’t require a YubiKey, you can use the command:

$ sudo cryptsetup luksChangeKey /dev/nvme0n1p3 -S 0

You should be prompted for your old password and the new password twice.

UbuntuOne Single Sign On

For us Ubuntu users, eventually you end up creating an UbuntuOne account for things like access to Launchpad, access to your free UbuntuPro tokens, etc. Part of the setup for this is to supply a second factor for 2FA logins to increase the security of your account. The way I have mine set up, I have added Google Authenticator as one of my additional factors, but I’d like to have my YubiKey be my primary second factor with Authenticator as my fallback if I don’t have access to the YubiKey (either primary or secondary).

To set this up, navigate to the UbuntuOne login page and log into your account. Then navigate to My Account -> Authentication Devices. On that screen you will see that you can “Add a New Authentication Device”. When you click on that, select YubiKey and follow the on-screen instructions. I would recommend doing this with your primary YubiKey as well as your backup one.


GitHub

If you would like to use your YubiKey as a second factor when logging into GitHub, it’s pretty easy to do. Simply log into your GitHub account, click on your picture in the upper right header and select Settings from the dropdown menu. On the settings screen, select “Password and Authentication” to navigate to that settings page. On this page, you will need to enable 2FA if you haven’t already done so. I use Google Authenticator as my fallback 2FA method here as well.

To add your YubiKey(s), click on the “Edit” button next to the “Security Keys” section and press the “Register new security key” button. You will be prompted to name your key (i.e. “Primary YubiKey” or “Secondary YubiKey” are what I used) and then you will be prompted to insert your key and press its metal button. Repeat this process with any additional YubiKey(s) you might have and then you have added this as a second factor for GitHub.


SSH

Adding a second factor to your SSH key infrastructure is fundamentally a really (did I say really?) good idea. The way you set this up server-side depends on the operating system that is hosting the ssh server. I’ll break it down below:


I followed the howto on the Yubico site, using the instructions for “non-discoverable” keys, which are stated as being higher security than “discoverable” keys. Starting out, I first checked the firmware version of my YubiKey as follows:

$ lsusb | grep Yubico

# Get the two 4-digit numbers separated by a colon and use them in place of the xxxx:xxxx below
$ lsusb -d xxxx:xxxx -v 2>/dev/null | grep -i bcddevice

In my case, my firmware was version 5.12. This actually turned out to be 5.1.2 which put me below the minimum firmware version 5.2.3 for the stronger encryption. Also, you can’t update the firmware on your YubiKey – it is set at the factory. Ah well.
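The bcdDevice value is easy to misread, as I did. This tiny helper is my own invention (not part of any Yubico tooling) and shows the conversion plus the comparison against the 5.2.3 minimum:

```shell
# bcdDevice "5.12" really means firmware 5.1.2; split the digits out and
# compare against the 5.2.3 minimum needed for ed25519-sk keys.
bcd_to_fw() {
  major=${1%%.*}
  rest=${1#*.}
  printf '%s.%s.%s\n' "$major" "${rest%?}" "${rest#?}"
}
supports_ed25519() {
  # success if the dotted version sorts at or above 5.2.3
  [ "$(printf '%s\n5.2.3\n' "$1" | sort -V | head -n 1)" = "5.2.3" ]
}
bcd_to_fw 5.12                                                 # prints 5.1.2
supports_ed25519 "$(bcd_to_fw 5.12)" && echo yes || echo no    # no
supports_ed25519 5.4.3 && echo yes || echo no                  # yes
```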

Given that, it was time to generate my keypair. On your desktop machine, generate the U2F/FIDO2-protected key pair:

$ ssh-keygen -t ecdsa-sk # Older YubiKey firmware
$ ssh-keygen -t ed25519-sk # Firmware version 5.2.3+

When I ran the first of those commands (because my YubiKey had the older 5.1.2 firmware), I got an error that said “You may need to touch your authenticator to authorize key generation” and yet I was never prompted to do so. Therefore, I added the -vvv switch to the command and saw an error saying that my device “does not support credprot, refusing to create unprotected resident/verify-required key”.

After doing some more digging, I discovered that there is a command I can run to validate the capabilities of my YubiKey:

$ sudo apt install fido2-tools
$ fido2-token -I /dev/hidraw1

What I discovered to my disappointment is that my primary YubiKey (which I have had for several years) does not support the “credProtect” feature (it should show up in the extension strings in the output of that command). My new secondary key, however, did. Therefore, I worked using my newer key and placed an order with Yubico for another one to use as my new primary. Sigh… My newer YubiKey also had firmware 5.4.3, so I’ll be able to use the newer crypto. Probably better in the long run.

I chose the filename of “id_primary_yubikey” for my primary key and “id_backup_yubikey” for my backup key. This generated a pair of files for each of my two YubiKeys, one without a suffix and the other with a .pub suffix (indicating that it is the public key).

So. We now have a new primary and backup keypair in our local .ssh directory. How do we get this to work on our remote server(s)?

It’s pretty straightforward. Take the contents of the .ssh/ file and append it to the ~/.ssh/authorized_keys file on the remote server you are trying to ssh into. Repeat this process with the .ssh/ file. Now when you ssh into the remote system using the identity you generated:

$ ssh -i ~/.ssh/id_primary_yubikey user@remote-system

You should notice that the Yubico symbol lights up on your YubiKey asking you to touch it. When you do so, you should be logged into the remote system.

If you would like to add your new identity to your SSH agent:

$ ssh-add ~/.ssh/id_primary_yubikey

Now you should be able to ssh directly into your remote system without having to supply the identity file.
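Alternatively, instead of ssh-add, you can point ssh at the identities per-host in ~/.ssh/config. The host alias and hostname here are made-up examples:

```
Host beastbox
    HostName remote-system.example.com
    User myuser
    IdentityFile ~/.ssh/id_primary_yubikey
    IdentityFile ~/.ssh/id_backup_yubikey
```

With that in place, `ssh beastbox` will try both YubiKey-backed identities automatically, whichever key happens to be plugged in.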

If you still can’t get into the remote system, it is possible that it is not configured to support the algorithm. To see what algorithms your remote system accepts, log into it and run the following command:

$ ssh -Q PubkeyAcceptedAlgorithms


Congratulations. You have now YubiKey’ed “All the Things!” Take that secondary YubiKey you bought and lock it in a safe somewhere. Keep the other one with you and you are now more secure than you were when you started.


Imitation is the Sincerest Form of Flattery

As many long-term readers of this blog know, I am pretty firmly entrenched into my Gnome 3 workflow and I try to keep my desktop experience as consistent as possible between the various machines and operating systems that I run. Over the years, I have been spending more and more time using Ubuntu and have become a real fan of the look and feel of the Yaru theme.

I thought it would be fun to document how I make my OpenBSD 7.3 system look and feel as close as I can to my current Ubuntu 23.10 (daily) system. As a result, this blog post might not be too useful for most of my readers so feel free to bail out. If, however, you have an interest in knowing how to tweak OpenBSD in this way, then by all means press on!

Installing the Basics

First, we assume a fresh install of OpenBSD that is fully patched up. We need to get some housekeeping out of the way to give us a basic system to start customizing from, so I’ll detail the steps below:

# Set up APMD on the laptop for power management
$ doas rcctl enable apmd
$ doas rcctl set apmd flags -A
$ doas rcctl start apmd

# Add my user to the staff login class
$ doas usermod -L staff USERNAME

# Modify the following in /etc/login.conf

# Modify /etc/sysctl.conf

# Install the base software needed
$ doas pkg_add gnome gnome-tweaks gnome-extras vim
$ doas rcctl disable xenodm
$ doas rcctl enable multicast messagebus avahi_daemon gdm

# Install additional software and utilities
$ doas pkg_add firefox chromium libreoffice nextcloudclient
$ doas pkg_add keepassxc aisleriot evolution evolution-ews
$ doas pkg_add tor-browser shotwell gimp

Tweaking the Themes and Extensions

OK. After rebooting and logging into a vanilla Gnome desktop through gdm, it’s time to add the Yaru theme. This can be found by searching for “yaru-remix” online and installing it manually as follows:

# Download yaru-remix-complete-20.10.tar.xz
$ cd ~
$ mkdir .themes
$ cd .themes
$ mv ~/Downloads/yaru-remix-complete-20.10.tar.xz .
$ unxz yaru-remix-complete-20.10.tar.xz
$ tar xf yaru-remix-complete-20.10.tar
$ mv themes/* .
$ rmdir themes
$ doas mv icons/* /usr/local/share/icons
$ rmdir icons
$ doas mv wallpaper/* /usr/local/share/backgrounds/gnome
$ rmdir wallpaper
$ rm yaru-remix-complete-20.10.tar

At this point, launch “Extensions” and turn on “User Themes”. To pick up this change, you will have to restart Gnome, so I normally just do a reboot. Once I’m back, I fire up “Tweaks” and on the “Appearance” tab, I select “Yaru-remix-dark” for Icons, Shell and Legacy Applications. I also turn on minimize and maximize on the “Window Titlebars” page and enable “Mouse Click Emulation – Fingers” on the “Keyboard & Mouse” page.

Now we have something that is starting to look like Ubuntu. The next step will be to use the wonderful Gnome extension “Dash to Dock” to get that good old “Unity”-looking launcher on the left. First, download the latest version of the extension (matching your version of Gnome Shell – see Settings -> About) from the Gnome extensions website and drop to a terminal.

$ mkdir -p ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ cd ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ unzip ~/Downloads/*.zip

Now fire up “Extensions” and turn on Dash to Dock. Press the “Settings” button to get to the settings for the extension. Position the dock on the screen to the “Left”, select “Panel mode” and set the icon size limit to what works for you. On the “Launchers” tab, I turn off the “Show trash can” and “Show volumes and devices” because I don’t use that functionality and would rather have room for more stuff that I can “pin” to the dock.

I typically pin Chromium, Firefox, Evolution, Files and Terminal to my dock. To accomplish this, I just launch the application with the Meta key and type its name, then hit Enter. It should now have an icon as a running application in the dock. I right-click on it and select “Pin to Dock”. I then position it by dragging it to where I’d like to see it.

Don’t Forget the Terminal

OK. So now we have the Yaru theme and a dock on the left that looks a lot like what you have in Ubuntu. It’s time to start tweaking other aspects of the setup. To get the Ubuntu font, you will need to:

$ doas pkg_add ubuntu-fonts
$ doas fc-cache

Launch tweaks again and set the Interface Text to “Ubuntu Medium” and the “Legacy Window Titles” to “Ubuntu Bold”. I also change my Antialiasing to Subpixel because it is a laptop with an LCD screen.

Now for the terminal window. From the hamburger menu on the right of the titlebar of the terminal, select “Preferences” and then the “Unnamed” profile. On the “Text” tab, click the “Custom font” checkbox. Switch to the “Colors” tab. Here, we are going to change a lot of things. The first thing I change is to uncheck the “Use colors from system theme” and select “GNOME dark” from the “Built-in schemes”.

Then, I set the following colors:

  • Background color: #481036
  • Palette Color 4 (the blue one in the top row): #1572E6

Now, I need to modify things so that when I do an “ls”, I actually get colors. To accomplish that, I install the “colorls” package and alias “ls” to “colorls -G”:

$ doas pkg_add colorls

# Add the following to ~/.profile:
export ENV=$HOME/.kshrc

# Create ~/.kshrc
alias ls="colorls -G"

If you reboot, you should notice that typing “ls” shows your files color-coded based on the file type. Now, we need to edit the PS1 environment variable to get our terminal’s command prompt to look like the one in Ubuntu. To do this, add the following to your ~/.profile:

export PS1='\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

Now, a final reboot should have you looking pretty darned similar to Ubuntu. I’m not doing anything nutty like changing my default shell from ksh to bash (although I guess you could do that). Happy OpenBSD-ing!

Posted in Uncategorized | Tagged | Leave a comment

Fast Follower – Unlimited POWER!!!!!

In my most recent post, I covered how I was able to successfully install Ubuntu 20.04 LTS for s390x on an emulated mainframe using QEMU. While I have physical hardware for AMD64, ARM64 and RISC-V, there is another currently-supported processor architecture for Ubuntu that I don’t have the hardware for, and that is IBM POWER. This CPU is the latest evolution of the PowerPC RISC architecture that was developed by Apple, IBM and Motorola in the 1990s and that was the core of Apple laptops and desktops for many years.

The current version of this architecture is sold by IBM in its Power Systems line (the successor to the pSeries and iSeries) and has many variants still used in industrial and automotive applications, as well as in some gaming consoles in the recent past. Given my success with Ubuntu 20.04 LTS on s390x with QEMU, I’m going to try replicating that for this platform as well. If I am successful, I will then have all of the processor architectures covered for Ubuntu Server.

First off, I installed the QEMU bits I needed:

$ sudo apt install qemu-system-ppc64 qemu

A quick peek in /usr/bin shows that “qemu-system-ppc64le” is installed. The “little endian” (a CS term that refers to how bytes in a word are stored physically in memory with the least significant byte first and the most significant byte last) version is how the POWER architecture version of Ubuntu is implemented.
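Byte order is easy to see for yourself. A small demonstration using only `printf` and `od` (on a little-endian machine such as x86-64, or POWER running in ppc64le mode):

```shell
# Write the four bytes 04 03 02 01 to a pipe, then dump them two ways.
printf '\004\003\002\001' | od -An -tx1   # the raw bytes, in storage order
printf '\004\003\002\001' | od -An -tx4   # read back as one native 32-bit word
```

On a little-endian host the second command prints `01020304`: the byte stored first (04) is the least significant byte of the word.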

In order to get our POWER guest connected to the network, we need to reconfigure this machine to use a bridge interface just like we did for the s390x. Do this by editing your /etc/netplan/*.yaml file as follows:

# This is the network config written by 'subiquity'
network:
  ethernets:
    eth0:
      dhcp4: no
  bridges:
    br0:
      dhcp4: no
      interfaces:
        - eth0
      # static address details for br0 elided
  version: 2

Apply the changes:

$ sudo netplan apply --debug

Set up the tunneling support needed by QEMU:

$ sudo apt install qemu-kvm bridge-utils
$ sudo ip tuntap add tap0 mode tap
$ sudo brctl addif br0 tap0

Now create the virtual storage device that we will be installing Ubuntu LTS onto:

$ qemu-img create -f qcow2 ubuntu-run.qcow2 10G

At this point, we need to download the installer and extract the kernel and initrd images from it, just like we did with s390x:

$ wget

Then, create a script to launch the installer:

#! /usr/bin/bash
qemu-system-ppc64le -machine pseries -cpu power9 -nodefaults -nographic -serial telnet::4441,server -m 8192 -smp 2 -cdrom ubuntu-22.04-beta-live-server-ppc64el.iso -drive file=ubuntu-run.qcow2,format=qcow2 -net nic,model=virtio-net-pci -net tap,ifname=tap0,script=no,downscript=no

After you run this, it will wait for a telnet connection before proceeding. In a separate ssh session on the host, you need to enable the network:

$ sudo ip link set up dev tap0
$ ip a # confirm that tap0 shows "up"

Now, you can telnet into the virtual serial console and run the install as you normally do:

$ telnet localhost 4441

After the install finishes and reboots, kill the qemu session with a CTRL+C in its ssh terminal and modify your run script:

#! /usr/bin/bash

qemu-system-ppc64le -machine pseries -cpu power9 -nodefaults -nographic -serial telnet::4441,server -m 8192 -smp 2 -drive file=ubuntu-run.qcow2,format=qcow2 -net nic,model=virtio-net-pci -net tap,ifname=tap0,script=no,downscript=no

Then, create a systemd service file for it as /etc/systemd/system/ppc64le.service:

Description=Launch ppc64le emulator session
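Only the Description line of the unit survived here. As a minimal sketch of what a complete unit might look like — assuming the launch script above was saved as /root/run-ppc64le.sh, a hypothetical name not from the original:

```ini
[Unit]
Description=Launch ppc64le emulator session
After=network-online.target

[Service]
Type=simple
# Hypothetical path; point this at wherever you saved the qemu launch script.
ExecStart=/root/run-ppc64le.sh

[Install]
WantedBy=multi-user.target
```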

Create a script file called /root/ to bring up the qemu network interface (don’t forget to chmod +x the file):

#! /usr/bin/bash
sleep 15
brctl addif br0 tap0
ip link set up dev tap0

Then create a systemd service file for it as /etc/systemd/system/ppc64le-network.service:

Description=Enable ppc64le networking

Finally enable and start the services:

$ sudo systemctl enable ppc64le-network.service
$ sudo systemctl start ppc64le-network.service
$ sudo systemctl enable ppc64le.service
$ sudo systemctl start ppc64le.service

A reboot should automatically show the two new services running and you should be able to ssh into your new POWER machine running Ubuntu 22.04 over the network as if it were physical hardware. It might be a bit slow, but it works!


What’s a Mainframe?

For those of you born… shall we say… more recently than some of us, you might not be familiar with the term “mainframe” or think that it is some ancient server lost to the mists of time. The generic term refers to any large single or multi-user computer that was typically larger than one single cabinet in a datacenter. In the vernacular of today, this almost exclusively refers to multi-user hardware from IBM running either a proprietary operating system such as MVS (today’s z/OS) or Linux.

Mainframes are still very much in use today, running old legacy applications and serving as large servers that (interestingly enough) might run a variant of the Linux operating system. In fact, Ubuntu has been available for this platform for some time, and I decided I wanted to add this unique architecture to the collection of machines at my disposal.

Unfortunately I don’t have gigawatts of power at my house nor the necessary external watercooling that some of these beasts require (plus, my wife would have had my head) so I decided to go out on a limb and try to spin a modern Ubuntu LTS up on emulated “zSeries” (that’s what IBM calls it these days) hardware.

I initially tried getting this working using the v4 version of Hercules based on an interesting blog post, but was unsuccessful. If you are interested in the details, jump to the end of this post and maybe you can figure out what I was doing wrong. In the meanwhile, here is what I did to get things working using QEMU.

Ubuntu 20.04 LTS on zSeries using QEMU

First things first, I did a search to see if I could find a simple how-to that walked me through the process. I did find some, but they were a bit out of date and also didn’t go into the networking aspects of QEMU to a level where I could successfully spin things up. Here are the posts that I based most of this blog’s work upon:

The obvious initial step is to install QEMU as well as the special features that allow it to emulate a zSeries processor from IBM:

$ sudo apt install qemu-system-s390x qemu

The next step in the process is to create a network bridge on the machine that you are using as the host. To do this, edit your /etc/netplan/*.yaml file (substitute your NIC name and IP information):

# This is the network config written by 'subiquity'
network:
  ethernets:
    enx000ec6306fb8:
      dhcp4: no
  bridges:
    br0:
      dhcp4: no
      interfaces:
        - enx000ec6306fb8
      # static address details for br0 elided
  version: 2

You then have to activate the changes to your network. Note that running the command below could cost you remote access to the machine you are doing this on, so you might have to re-ssh into the box from another terminal window.

$ sudo netplan apply --debug

Then, set up the qemu tap device that uses the bridge you created above:

$ sudo apt install qemu-kvm bridge-utils
$ sudo ip tuntap add tap0 mode tap
$ sudo brctl addif br0 tap0

Now you need to create a disk image to use as your virtual storage device to install Ubuntu on:

$ sudo qemu-img create -f qcow2 ubuntu-run.qcow2 10G

You can make the image any size you want. I also tried this with a “raw” formatted image but ended up having better luck using the qcow2 format instead. Now you have a place to install your Ubuntu s390x machine.

Now, you need to download the latest install image. I tried to get this working with 22.04 LTS but the installer kept crashing on me at various points in the process. I suspect it might have a dislike for the virtual serial console over telnet business. Therefore, I proceeded with 20.04 LTS instead:

$ wget

Now you will need to mount the ISO and extract the kernel and initrd images because those will be used by QEMU in its command-line:

$ mkdir tmpmnt
$ sudo mount -o loop ./ubuntu-22.04.1-live-server-s390x.iso tmpmnt
$ cp tmpmnt/boot/kernel.ubuntu .
$ cp tmpmnt/boot/initrd.ubuntu .
$ sudo umount tmpmnt

To make things easier for myself, I created a script to launch the emulated s390x environment named

#! /usr/bin/bash

qemu-system-s390x -machine s390-ccw-virtio -cpu max,zpci=on,msa5-base=off -serial telnet::4441,server -display none -m 8192 --cdrom ubuntu-22.04.1-live-server-s390x.iso -kernel kernel.ubuntu -initrd initrd.ubuntu -drive file=ubuntu-run.qcow2,format=qcow2 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no

Once I was ready to install things, I ran the script:

$ chmod +x
$ sudo ./

At this point, the emulator is paused, waiting for you to connect (via telnet port 4441 on the localhost address) to a virtual serial console. Therefore, from another terminal window on this machine, connect to the virtual serial console:

$ telnet localhost 4441

At this point you should see the system boot up. It takes some time in the emulated environment. Eventually you get to the Ubuntu installer. From yet another terminal window on this machine, bring the tap0 interface up (I find that it doesn’t come up on its own until something is actually attached to it from qemu):

$ sudo ip link set up dev tap0
$ ip a # confirm that tap0 shows "up"

Back in the installer, I chose to run it in “rich mode” and chose the following options:

  • English
  • Ubuntu Server
  • On the “ccw screen” – just hit “Continue”
  • Take the network defaults (should be DHCP from your network) or configure to match a static IP address
  • No proxy
  • Take the default mirror address
  • Skip the 3rd party drivers

The install took a while but eventually completed successfully. Modify the launch script to be as follows (take out the installer bits, etc.):

#! /usr/bin/bash

cd /root
ip tuntap add tap0 mode tap
brctl addif br0 tap0

qemu-system-s390x -machine s390-ccw-virtio -cpu max,zpci=on,msa5-base=off -smp 2 -serial telnet::4441,server -display none -m 8192 -drive file=ubuntu-run.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-ccw,devno=fe.0.0001,drive=drive-virtio-disk0,bootindex=1 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no

I then created a systemd service to start the qemu session at boot time after the network is operational by editing /etc/systemd/system/s390x.service:

Description=Launch s390x emulator session



Next, create another script to bring up the tap0 interface (I called mine /root/

#! /usr/bin/bash

sleep 15
ip link set up dev tap0

Set the execute bit on the script:

$ sudo chmod +x /root/

Then, create a second systemd service to start networking after qemu has attached to tap0:

Description=Enable s390x networking
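Again, only the Description line survived. A sketch of what the full unit might look like — assuming the interface script above was saved as /root/s390x-net.sh (a hypothetical name) and should run once the emulator service has started:

```ini
[Unit]
Description=Enable s390x networking
After=s390x.service

[Service]
Type=oneshot
# Hypothetical path; point this at the tap0 bring-up script created above.
ExecStart=/root/s390x-net.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```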



Finally enable and start the service:

$ sudo systemctl enable s390x-network.service
$ sudo systemctl start s390x-network.service

Rebooting then confirmed that the service is started at boot time and reachable from the network. At this point you should be able to ssh into the s390x “machine” as if it were a real mainframe running Ubuntu on your network.

Using Hercules instead of QEMU

This turned out to be a bit of a dead-end, for me at least. The newer installer (22.04) had a kernel fault (I suspect because the virtual processor was too “old” as configured to be supported – probably fixable) so I used an older version. I managed to get it to ping the network, but the DNS wasn’t working and even with a hardcoded manual IP address for the Ubuntu server, it didn’t work.


Started with 22.04 server

Install Hercules v4 (the one that installs with apt is v3)

$ sudo apt install git wget time build-essential cmake flex gawk m4 autoconf automake libtool-bin libltdl-dev libbz2-dev zlib1g-dev libcap2-bin libregina3-dev net-tools
$ git clone
$ cd hyperion
$ ./util/bldlvlck # make sure everything is “OK”
$ ./configure
$ make
$ sudo make install

Installing the helper scripts

$ cd ~
$ wget
$ mkdir ubuntu-hercules
$ cd ubuntu-hercules
$ tar xvf ../ubuntuOnHercules.tar.gz

Install Ubuntu

$ cd ubuntu-hercules
$ wget
$ LD_LIBRARY_PATH=~/hyperion/.libs ./makeNewUbuntuDisk -c 48000 -v 22 # makes a 32g disk for Ubuntu 22.04
# Modify ./hercules-ubuntu.cfg to have the DASD show up with a -v22.disk filename
$ LD_LIBRARY_PATH=~/hyperion/.libs ./ --help
# In my case, I need to set the default gateway and DNS server and change the hostname
$ sudo LD_LIBRARY_PATH=~/hyperion/.libs ./ --iso ubuntu-22.04-live-server-s390x.iso --dns 192.168.x.x --gw 192.168.x.x --host s390x
# When you get asked questions in the install, prefix your answer with a period (‘.’)
# Choose CTC for network and then pick id #1 for read and #2 for write (at least that’s the one that worked for me)
# Use Linux communication protocol when prompted
# Do not autoconfigure the network but do it manually
# Enter your IP address, your gateway and your DNS server manually
# The system seems to have a problem resolving DNS so I used the IP address of the Ubuntu archive mirror


Fiber + Static IP = Self-Hosting Glory!

Recently, a new Internet Service Provider (ISP) became available in my area. Now, no longer confined to a choice between the cable TV company and the telephone company to supply the bits to my house, I had the option of true gigabit fiber to my house as a choice! Needless to say, I had some questions.

The first question was, “How difficult is it to get a static IP address?” I wanted to know this because the cable TV company wanted you to switch from a residential service to a business service, and then some sort of biological sampling, signing over your firstborn child and some “feats of strength” were required to get one of these magical things. For the new ISP, the answer was simple – send us an email asking for one and it will cost you $10 US per month to keep it. Wow. That was easy. On to the next question.

The next question was the tricky one. My cable TV provider purposely blocked certain ports such as port 25 (SMTP) and there was no way around that. I asked the new ISP if they blocked any ports and the answer was, “No. Why would we do that?” Again – amazing! At this point, I was ready to start moving all of my stuff from the cloud to my house. First things first, I had multiple HTTPS-secured websites to move. Uh oh. How do I serve up multiple websites with multiple different certificates from a single public IP address? Time to test my Google Fu.

Turns out, my OpenBSD 7.1 router could come to the rescue. By doing a reverse-proxy setup with Apache2 and SSL termination, I could accept HTTPS traffic for multiple sites on my single IP address, serve up the right certificate to the browser on the other side of the communication and then pass along the traffic in the clear (HTTP) on port 80 to various servers on my home network. Finding blog posts about this was easy. Making it work proved to be a bit tricky. I’m sure I could have done this with the OpenBSD httpd daemon (which has a much smaller attack surface than massive old Apache2) but that will be some research and investigation for another post (hopefully) in the future.

OpenBSD Reverse Proxy + SSL Termination

First off, something rare for this blog – a picture! This is the logical traffic flow for my setup:

SSL Termination / Reverse Proxy

To pull this off, I have to first install and enable Apache2 on my OpenBSD Octeon router:

$ doas pkg_add apache2
$ doas rcctl disable httpd
$ doas rcctl enable apache2
$ doas rcctl start apache2

Next, I have to get HTTPS certificates for my various sites. While I would have loved to have done this using certbot, I couldn’t because a C language library needed by Python3 wasn’t available on the Octeon build (my router doesn’t use an Intel/AMD CPU). I then tried using acme-client but found the configuration to be too challenging to pull off right away. Perhaps another blog post in the future. Anyhow, I used a Linux box and ran certbot to generate each of my certificates. I then wrote a little bash script to use scp to copy them to the right folder on my OpenBSD router and scheduled it with cron. Kickin’ it old school!

After that, it was time to write the necessary configuration in /etc/apache2/httpd2.conf for each of the sites. As you can see, this assumes that the SSL certificates are in the /etc/ssl/private directory on my OpenBSD router:

<VirtualHost *:80>

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=permanent]

    ProxyPass "/" ""
    ProxyPassReverse "/" ""
    ProxyPreserveHost On

</VirtualHost>

<VirtualHost *:443>

    ProxyPass "/" ""
    ProxyPassReverse "/" ""
    ProxyPreserveHost On

    SSLEngine On
    SSLCertificateFile /etc/ssl/private/
    SSLCertificateKeyFile /etc/ssl/private/
    SSLCertificateChainFile /etc/ssl/private/

    SSLProxyEngine On

    <Location "/">
        RequestHeader set X-Forwarded-Proto "https"
        RequestHeader set X-Forwarded-Ssl on
        RequestHeader set X-Url-Scheme https
        RequestHeader set X-Forwarded-Port "443"
    </Location>

</VirtualHost>
It is also necessary to further edit the /etc/apache2/httpd2.conf file to uncomment the “LoadModule” configuration lines for the modules used in the above configuration: ssl_module, proxy_module, proxy_connect_module, proxy_http_module, rewrite_module and headers_module (for the RequestHeader directives). After this, simply do an “rcctl restart apache2” and ensure that it was successful. If not, go back and double-check the configuration file.
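Because the server names and proxy targets were stripped from the listing above, here is a sketch of what one complete pair of VirtualHost blocks might look like for a hypothetical site blog.example.com proxied to an internal server at 192.168.1.50 — every name, path and address here is a placeholder, not from the original:

```apache
<VirtualHost *:80>
    ServerName blog.example.com

    # Redirect all plain-HTTP requests to HTTPS
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=permanent]
</VirtualHost>

<VirtualHost *:443>
    ServerName blog.example.com

    # Terminate SSL here; pass plain HTTP to the internal web server
    ProxyPass "/" "http://192.168.1.50/"
    ProxyPassReverse "/" "http://192.168.1.50/"
    ProxyPreserveHost On

    SSLEngine On
    SSLCertificateFile /etc/ssl/private/blog.example.com.fullchain.pem
    SSLCertificateKeyFile /etc/ssl/private/blog.example.com.privkey.pem

    <Location "/">
        RequestHeader set X-Forwarded-Proto "https"
        RequestHeader set X-Forwarded-Port "443"
    </Location>
</VirtualHost>
```

Duplicating this pair per site, each with its own ServerName, certificate files and back-end address, is how one public IP serves many HTTPS sites.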

Next, you will need to make sure that your pf firewall allows port 80 and 443 through so that your site can be reached from off of the OpenBSD machine. To do this, add the following to your /etc/pf.conf file:

# Allow serving of HTTP
pass in on { $wan } proto tcp from any to any port 80
# Allow serving of HTTPS
pass in on { $wan } proto tcp from any to any port 443

Reload the rules for pf using “$ doas pfctl -f /etc/pf.conf” and that step is done. You will also likely need to map ports 80 and 443 from your residential gateway (provided by your ISP) to send them to the OpenBSD router. At this point you should be able to hit your SSL-protected site from outside of your network. I always test this by turning off the wifi on my cell phone and using its browser on the telco’s network. As you add more “internal” websites, simply duplicate those two sections above and restart your Apache2 daemon on the OpenBSD router.

What About Email?

This one turned out to be very, very interesting. And by that I mean really stinking hard! The basics of it weren’t that bad. Here, I was able to use the wonderful “relayd” service that is native to OpenBSD to take all of the traffic I receive for the various email communication ports and fan them out to the appropriate back-end servers.

At first, I thought I would have to create a separate server for each email domain I wanted to host. Each of those servers would have to have its own SMTP server and each would have to have its own IMAP server. Also, if I wanted to have webmail for a particular domain, I would have to set it up to be an additional pair of entries in the http/https configuration in the previous section.

However, when I started configuring the DNS entries for all of this, I realized the error in my thinking. I only had a single public IP address so I needed the moral equivalent of that reverse proxy magic that I built using Apache2 on my OpenBSD router. How does one do this in the world of SMTP and IMAP? Well, it turns out there is a solution called Server Name Indication (or SNI) that is supported by the major SMTP and IMAP services in the Linux world. Therefore, I elected to host my email on Linux. Perhaps I will do a future blog post on how I migrated this to OpenBSD?

First things first, I needed to set up the necessary DNS entries to ensure that not only will my mail get routed to me, but that it will be considered deliverable and not “spammy” in any way. These included the following entries for each domain:

A * 15 min TTL
A * 15 min TTL
A 15 min TTL
MX @ 10 mail 15 min TTL
@ IN TXT "v=spf1 mx a -all"
_dmarc IN TXT "v=DMARC1;p=quarantine;"
mail._domainkey IN TXT "v=DKIM1; h=sha256; k=rsa; p=*"
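The record names and addresses above were stripped out in extraction. In BIND zone-file style, with a hypothetical domain example.com and the documentation address 203.0.113.10 standing in for the static IP (all placeholders, not the original values), the set might read:

```text
@               IN A    203.0.113.10
mail            IN A    203.0.113.10
@               IN MX   10 mail.example.com.
@               IN TXT  "v=spf1 mx a -all"
_dmarc          IN TXT  "v=DMARC1;p=quarantine;"
mail._domainkey IN TXT  "v=DKIM1; h=sha256; k=rsa; p=<DKIM public key>"
```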

For the above, the A records point at your static IP address from your ISP, and you obviously need to fill in the bits with your domain name as well as the DKIM content represented by the p=* section in the last entry. Perhaps I’ll do a full setup post in the future on this topic.

After setting up DNS, you will then need to configure your mail server. I chose postfix for the SMTP server as it supports SNI and dovecot for the IMAP server for the same reason. Once that was done and I could access things securely from within my private network, I then set up relayd on my OpenBSD router:

$ doas rcctl enable relayd
$ doas rcctl start relayd

I then wrote the following configuration file in /etc/relayd.conf to map the necessary ports to the mail server:

ext_addr=""  # private IP address of OpenBSD Router
mail_host="" # private IP address of mail server

relay smtp {
    listen on $ext_addr port 25
    forward to $mail_host port 25
}

relay submission_tls {
    listen on $ext_addr port 465
    forward to $mail_host port 465
}

relay submission_starttls {
    listen on $ext_addr port 587
    forward to $mail_host port 587
}

relay imaps {
    listen on $ext_addr port 993
    forward to $mail_host port 993
}

After restarting relayd, we need to add some entries to /etc/pf.conf to ensure that the traffic actually gets through the OpenBSD firewall and hits relayd:

# Allow servicing of SMTP
pass in on { $wan } proto tcp from any to any port 25
# Allow servicing of Submission TLS
pass in on { $wan } proto tcp from any to any port 465
# Allow servicing of Submission startTLS
pass in on { $wan } proto tcp from any to any port 587
# Allow servicing of IMAPS
pass in on { $wan } proto tcp from any to any port 993

Now reload your pf rules with “$ doas pfctl -f /etc/pf.conf” and your machine should be relaying traffic. Finally, you will need to port map ports 25, 465, 587 and 993 on your residential gateway provided to you by your ISP and traffic should start flowing through. Test this from outside of your network and verify that everything is working as expected.


Using these techniques, you should be able to host any number of SSL enabled websites and properly secured email domains on private servers within your home network. This means that you can save some money by not having to use virtual servers in the cloud and also increase the privacy of your services because you physically control the servers themselves.

Don’t forget to back up your data from these servers and then store it somewhere offsite (preferably in two places) in an encrypted fashion. One thing the cloud does make simple is just checking a couple of checkboxes and you suddenly have snapshots of your virtual server stored offsite. You can never have too many backups.

Anyhow, I hope this was helpful for everyone!


The Most Metal Thing I’ve Done Today

As a middle-aged electric bass player, the “metal moments” of my life have been coming with less frequency than they did when I was younger. As a result, I tend to look for opportunities to be “metal” on any given day. To that end, I want to explore Canonical’s Metal as a Service or MaaS. Yeah, I know, I went for the cheap pun!

For those of you who aren’t familiar with this awesome piece of software, it essentially allows you to take a collection of physical servers on a private network and turn them into a cluster that allows you to pass out physical or virtual servers to users and then reclaim them when you are done. It does all of this using standard protocols that make life very, very easy. For example, the MaaS servers boot off of DHCP/PXE from an image hosted on the controller so that the OS image doesn’t live on the physical disk of the machine, freeing its built-in storage up for use by the cluster. Additionally, the software supports things like the Intel Active Management Technology (AMT) and its ability to allow remote power on / power off of machines that have this capability (along with many other more enterprise-y technologies for similar control).

For the purpose of this post, I’m going to create a MaaS cluster out of six machines that I have dedicated to the purpose and will be using them to host various projects in my home lab. As long-time readers of this blog know, I am a fan of the Lenovo Thinkpad family of laptops, so (like many in my cult) I have quite a stack of them lying around at any given time. For this project, I will be harnessing the power of my W520, W530 and W541 machines – all of which support AMT (and, more importantly, none of which I have CoreBooted yet, so AMT is still enabled).

In addition, I have what I call my “Beast”, a tower machine with a Threadripper CPU that has 32 virtual cores, my NAS box (another AMD cpu machine that has a bunch of spinning physical disks) and finally the machine I’m using for my controller. For that purpose, I dragged out an old Dell laptop I had lying around. It only has one NIC (a WiFi card that I used to attach to my home network) but I picked up a USB-3 gigabit Ethernet adapter that is well supported by Linux to use to run the private network.

The controller machine connects to my home network as well as to a small 8-port managed Gigabit switch that all five of the worker nodes will be solely attached to. That’s the physical network layout. Pretty simple. I also took the time to put a proper AMT password on the machines that support this technology, which the MaaS controller will use to reboot them as needed. For the two AMD machines, I have to physically press the power button – at some point I might get an IP-enabled power strip that is supported by MaaS and use it to allow them to be “remote controlled” as well, but this works just fine for the time being. You might also want to check that virtualization is turned on in the BIOS for any of the machines you are using.

I’m using Ubuntu 22.04 Server for the controller machine and am running it pretty much vanilla except for some network configuration to allow it to serve as a packet router from the private network to my home network so that machines in the cluster can download packages as needed. I could work around that by hosting a mirror on my controller with the packages I needed (I think) but this was easier. For most of this post, I’m basing my configuration on the MaaS 3.2 documentation.

I downloaded the latest 22.04 server from the Ubuntu website and then used the “Startup Disk Creator” application that ships as part of the base OS on my laptop to create a bootable USB drive. After booting from the USB drive on the Dell laptop, the only configuration change I made to the default install was to enable an SSH server on the machine so I can remote in and do everything I need to from my laptop (except for pressing the power buttons a few times on the worker nodes).

Once the controller is installed and booted up, I have to make some network configuration changes to allow it to have a static IP address on both the home network side (WiFi) as well as on the private network that it will be managing. To do this, I edit the /etc/netplan/00-installer-config.yaml file to look like the following:

network:
  ethernets:
    enx000ec6306fb8:
      dhcp4: false
      optional: true
      addresses: []
  wifis:
    wlp1s0:
      dhcp4: false
      optional: true
      addresses: []
      nameservers:
        addresses: []
      routes:
        - to: default
          # gateway (via:) elided
      access-points:
        "<home-ssid>":
          password: "********"
  version: 2

After saving these changes, I ran “sudo netplan try” to test the configuration and ensure that everything was working the way I wanted it to. Once I was satisfied with the network, I updated the machine (“sudo apt update” and then “sudo apt upgrade”). After that, I rebooted the machine to pick up the new kernel that came down in the updates.

I want my machines on the private network to be able to reach the Internet through the MaaS controller. To make things simple, I’m just going to set up a basic router on this machine using a guide I found here:

# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl net.ipv4.ip_forward=1
# iptables -A FORWARD -i enx000ec6306fb8 -o wlp1s0 -j ACCEPT
# iptables -A FORWARD -i wlp1s0 -o enx000ec6306fb8 -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -t nat -A POSTROUTING -o wlp1s0 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o enx000ec6306fb8 -j MASQUERADE
# apt install iptables-persistent

After running the “apt install…” command, make sure you tell it to persist the IPV4 and IPV6 rules and they will be stored in /etc/iptables under files called “rules.v4” and “rules.v6”. At this point, because I’m old-school, I do a reboot.
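Once iptables-persistent has saved them, /etc/iptables/rules.v4 holds the rules in iptables-save format. An abridged sketch of what that file should contain for the two interfaces used above:

```text
*filter
-A FORWARD -i enx000ec6306fb8 -o wlp1s0 -j ACCEPT
-A FORWARD -i wlp1s0 -o enx000ec6306fb8 -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT

*nat
-A POSTROUTING -o wlp1s0 -j MASQUERADE
-A POSTROUTING -o enx000ec6306fb8 -j MASQUERADE
COMMIT
```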

For my lab, I want to be as close to a “production” environment as I can get. Therefore, I’m opting for a “region+rack” configuration. Using snaps, installing MaaS is… well… a snap:

$ sudo snap install --channel=3.2 maas

The next thing we need to do is set up a PostgreSQL database for this instance of MaaS:

$ sudo snap install maas-test-db

At this point, it is time to initialize your instance of MaaS:

$ sudo maas init region+rack --database-uri maas-test-db:///

I took the default for my MaaS URL. I then ran “$ sudo maas createadmin” and provided my admin credentials and my Launchpad user for my ssh keys.

At this point, I logged into my MaaS instance from that URL and did some configuration. First, I named my instance and set the DNS forwarder to one that I liked. Next, we need to enable DHCP for the private network so that it can PXE boot new machines on the network. To do this, navigate to the Subnets tab and click on the hyperlink in the “vlan” column that corresponds to the private network. Click “Configure DHCP” and then fill in the Subnet dropdown to correspond to the IP address range of your private network then save the change. You should now notice the warning about DHCP not being configured has gone away from the MaaS user interface.

The next thing we need to do is set up the default gateway that is shared by the MaaS DHCP server to the machines. To do this, navigate to the “Subnets” tab and click on the hyperlink in the “subnet” column for your private network. Click “Edit” and fill in the Default Gateway IP address and the DNS address if you’d like. After clicking “Save” your machines will be automatically configured to use the default gateway you provided (in my case, the private network IP address of my MaaS controller).

I first boot up the Thinkpads (the ones that have Intel AMT) on the private network; they PXE boot off of the MaaS controller and eventually show up under the “Machines” tab of the MaaS user interface. I click on each of them in the MaaS user interface and configure their names and their power setup to be Intel AMT, providing the passwords and IP addresses that I set up in the firmware on each of them. I then booted up the AMD machines and, in their configuration, just set their power type to “Manual”.

At this point, you will need to get the machines into a “usable” state for MaaS so to do that, check the box next to each one on the “Machines” tab and select “Commission” from the actions menu. You’ll have to physically power on any machines that don’t have Intel’s AMT and then they will go through the commissioning process. When done, they will show up as “Ready” on the “Machines” tab.

Now I need to get the default gateway working for each of the machines. There might be an easier way of doing this; however, I haven’t figured it out yet so I’m following part of a guide found here. For each machine, click on it and then navigate to the network tab. When there, check the box next to the network interface that is connected to the private network’s switch and press the “Create Bridge” button. Name the bridge “br-ex”, the type is “Open vSwitch”, select the fabric and subnet corresponding to your private network and pick “auto assign” for the ip mode.

Now, check the boxes next to your “Ready” machines and select “Deploy” from the actions menu. Be sure to check the “Auto Assign as KVM host” to make them available to host virtual machines for you. Press the “Start deployment…” button and be sure to power on any that don’t have Intel AMT technology to control their power state. At this point you should be done with the power button pushing business unless you need to do maintenance on the machines.

This seemed as good a time as any to create a MaaS user for myself. To do this, I navigated to the “Settings” tab and selected “Users” and then clicked “Add User”. I filled in the details (by the way, MaaS enforces no duplicate email addresses among its users so if you are like me and want an admin account and a separate user account, you’ll have to use two email addresses) and clicked “Save” and I was good to go. I logged in as that user and supplied my SSH key from Launchpad.

If you now switch to the main MaaS “KVM” tab, you should see your machines available and be able to add virtual machines. You do this by clicking on one of the hosts and then clicking the “Add Virtual Machine” button. It then shows up as a “New” machine in the MaaS user interface.

I then log in as my “user” account in MaaS and deploy the “New” virtual machines. Once they are completely deployed, you can then ssh into them from a machine that has connectivity to the private network. The only trick I discovered is that you have to log in as the “ubuntu” user, NOT the user you have set up in MaaS.

At this point, I have a working MaaS home lab that I can use for a variety of projects. I hope that you found this post helpful!


Active Directory Needs Friends!

For those of you who didn’t read my predecessor post on setting up a full-blown Active Directory infrastructure on my home network with home directories, roaming user profiles and group policy using only open source software, take a read through that. This is a follow-on post where I have added a second Active Directory domain controller in a private cloud environment and then bridged that private cloud network to my secure home network using WireGuard.

Bridging The Networks

To start off, since I’m running the bleeding-edge Ubuntu version on my primary domain controller, I set up a virtual server at my cloud provider of choice using 21.10 as well. I put it on its own private network, one that does not collide with my home network’s address range.

My VPS provider allows me to supply SSH keys at their web console, restricting ssh access to the remote virtual machine to holders of the private keys that correspond to the public keys you upload and select. This ensures that I can securely log into the machine with root-level access without fear. The first thing to do when I log into the new server, however, is to update the packages installed on it:

# apt update
# apt upgrade
# reboot

Now for the wireguard setup on the remote virtual machine. For the purposes of this section, we will call it the “server”:

# apt install wireguard wireguard-tools
# wg genkey | tee /etc/wireguard/server_private.key
# wg pubkey < /etc/wireguard/server_private.key | tee /etc/wireguard/server_public.key
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# echo "net.ipv6.conf.all.forwarding=1" >> /etc/sysctl.conf
# sysctl -p
# vim /etc/wireguard/wg0.conf
Address =
ListenPort = 51820
PrivateKey = *** contents of /etc/wireguard/server_private.key ***
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT

PublicKey = *** contents of /etc/wireguard/server_public.key from the other end (the OpenBSD router) ***
Endpoint = # IP address of remote
AllowedIPs =,

Since my local network is on a residential ISP, I need to use the tools on my ISP’s router to port-forward the WireGuard port from the public IP address to the OpenBSD router. Next, we need to set up the WireGuard configuration on the OpenBSD 7.0 router that I use for my secure network at home:

# pkg_add wireguard-tools
# sysctl net.inet.ip.forwarding=1
# echo 'net.inet.ip.forwarding=1' | tee -a /etc/sysctl.conf
# mkdir /etc/wireguard
# chmod 700 /etc/wireguard
# wg genkey > /etc/wireguard/server_private.key
# wg pubkey < /etc/wireguard/server_private.key > /etc/wireguard/server_public.key
# vim /etc/hostname.wg0
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg0.conf
!route add -inet
# vim /etc/wireguard/wg0.conf
PrivateKey = *** contents of /etc/wireguard/server_private.key ***
ListenPort = 51820

PublicKey = *** contents of /etc/wireguard/server_public.key from the other end (the Linux server) ***
Endpoint = # public IP address of remote
AllowedIPs =,
# vim /etc/pf.conf
... add to end...
pass in on egress proto udp from any to any port 51820 keep state
pass on wg0
pass out on egress inet from (wg0:network) to any nat-to (egress:0)
# pfctl -f /etc/pf.conf
# sh /etc/netstart wg0

Now, run the following command on the remote Linux box to start the Wireguard service:

# systemctl enable wg-quick@wg0.service
# systemctl start wg-quick@wg0.service

At this point, you should be able to check the status of the WireGuard tunnel on both sides with the command wg show; each end should list the other as a connected peer with a recent handshake. You should be able to ping hosts on the remote network from each end.

So far, the only problem I have found with this setup for bridging the networks is that my multi-homed Windows machines (i.e. one interface – wired ethernet – connected to my ISP’s network and one – wireless – connected to my secure network) need to have a route manually added as follows:

C:\WINDOWS\system32> route add -p MASK

In this case, the network is the remote network and the IP references my OpenBSD 7.0 router.

Remote Samba Active Directory Server

Now that we have a remote network that is securely bridged to our local private network on which the current Samba Active Directory infrastructure is running, it is time to create the VPC virtual server that will be running our Active Directory remote server. My particular VPC service allows me to create a server that is on the same private network as my remote “router” that is running Wireguard, so I create such a server and call it (put in your own AD domain name there).

First things first, the remote AD server must have a route to the Wireguard network. This is not a necessary step on the home network side because the Wireguard server is running on the OpenBSD 7.0 router and by definition is the default route for the servers on that network. This is not the case for the servers on the private network at the VPC. To do this, we simply need to add a persistent route. So as to not mess things up with the default network configuration on the remote host, I decided to create a (yuck) SystemD (blech) service:

# apt update
# apt upgrade
# apt install net-tools
# vim /usr/sbin/
#! /bin/sh
/usr/sbin/route add -net gw eth1
# chmod +x /usr/sbin/route
# vim /etc/systemd/system/MY-NETWORK.service
Description=Route to Wireguard server


# systemctl daemon-reload
# systemctl enable MY-NETWORK.service
# systemctl start MY-NETWORK.service
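For reference, a complete oneshot unit of the shape described above would look something like the following sketch. The script path, network, gateway and interface here are placeholders standing in for whatever your VPC assigned, not values from this post:

# /etc/systemd/system/MY-NETWORK.service (all values hypothetical)
Description=Route to Wireguard server

ExecStart=/usr/sbin/route add -net gw eth1


The oneshot type with RemainAfterExit=yes is the idiomatic systemd pattern for a run-once command like adding a route, and After= keeps it from firing before the interfaces are up.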

At this point, you should be able to ping the domain controller on the remote (home) network and from that domain controller, you should be able to ping the new host.

Now we need to do the standard networking configuration ‘stuff’ that Samba likes. First, edit the /etc/hosts file to remove the “ DC2” line and replace it with one tying it to the static private IP address that has been assigned to this virtual host. In this case, “ DC2”.

Here we need to add the necessary packages to host an Active Directory domain controller:

# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user dnsutils net-tools smbclient

Next, disable systemd’s resolver and add the remote AD server as the DNS name server and also add the Active Directory domain:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf

Now, go ahead and reboot the remote machine and when you log back into it, test to see if DNS is working properly:

# nslookup
# nslookup    name =
# host -t SRV has SRV record 0 100 389

Rename the /etc/krb5.conf file and the /etc/samba/smb.conf file like you did when you created the domain controller on your local network. Then, create a new /etc/krb5.conf file:

    default_realm = AD.EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true

At this point, we need to set up an NTP server and sync it to the one at our original Active Directory domain controller:

# apt install chrony ntpdate
# ntpdate
# echo "server minpoll 0 maxpoll 5 maxdelay .05" > /etc/chrony/chrony.conf
# systemctl enable chrony
# systemctl start chrony

Now we need to authenticate against Kerberos and get a ticket:

# kinit administrator
... provide your AD\Administrator password ...
# klist

At this point, it’s time to join the domain as a new domain controller:

# samba-tool domain join DC -U"AD\administrator"

After the tool finishes (it produces a lot of output), you need to copy the generated Kerberos configuration file to the /etc directory:

# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf

You need to manually create the systemd service and set things up so that everything fires up when you reboot the server:

# systemctl mask smbd nmbd winbind
# systemctl disable smbd nmbd winbind
# systemctl stop smbd nmbd winbind
# systemctl unmask samba-ad-dc
# vim /etc/systemd/system/samba-ad-dc.service
Description=Samba Active Directory Domain Controller

ExecStart=/usr/sbin/samba -D
ExecReload=/bin/kill -HUP $MAINPID

# systemctl daemon-reload
# systemctl enable samba-ad-dc
# systemctl start samba-ad-dc

OK. At this point we have a Samba Active Directory domain controller running. We need to get SysVol replication going now to ensure that the two controllers are bidirectionally synchronized.

Bidirectional SysVol Replication

To get the SysVol replication going bidirectionally, I followed the guide here. First, you need some tools installed on both DCs:

# apt install rsync unison

Generate an ssh key on both domain controllers:

# ssh-keygen -t rsa

Now, copy the /root/.ssh/ contents from one server into the /root/.ssh/authorized_keys file on the other and vice-versa. Verify that you can log in without passwords from one server to the other. If you are prompted for a password, then edit your /etc/ssh/sshd_config file and add the line “PasswordAuthentication no” and then restart the ssh service. Now you should be able to log in just using public keys and no password from one server to the other and back.

On your new remote DC (DC2 in my example), do the following to ensure that your incoming ssh connection isn’t rate limited:

# mkdir /root/.ssh/ctl
# cat << EOF > /root/.ssh/ctl/config
Host *
ControlMaster auto
ControlPath ~/.ssh/ctl/%h_%p_%r
ControlPersist 1

Now, to be able to log what happens during the sync on the local DC (DC1 in my example), do the following to create the appropriate log files:

# touch /var/log/sysvol-sync.log
# chmod 640 /var/log/sysvol-sync.log

Now, do the following on the local DC (DC1 in my example):

# install -o root -g root -m 0750 -d /root/.unison
# cat << EOF > /root/.unison/default.prf
# Unison preferences file
# Roots of the synchronization
# copymax & maxthreads params were set to 1 for easier troubleshooting.
# Have to experiment to see if they can be increased again.
root = /var/lib/samba
# Note the doubled "//" after DC2; it is required
root = ssh://root@DC2//var/lib/samba
# Paths to synchronize
path = sysvol
#ignore = Path stats    ## ignores /var/www/stats
copyprog = /usr/bin/rsync -XAavz --rsh='ssh -p 22' --inplace --compress
copyprogrest = /usr/bin/rsync -XAavz --rsh='ssh -p 22' --partial --inplace --compress
copyquoterem = true
copymax = 1
logfile = /var/log/sysvol-sync.log

Now, run the following command on your local DC (DC1 in my example):

# /usr/bin/rsync -XAavz --log-file /var/log/sysvol-sync.log --delete-after -f"+ */" -f"- *"  /var/lib/samba/sysvol root@DC2:/var/lib/samba  &&  /usr/bin/unison
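The -f"+ */" -f"- *" filter pair in that rsync invocation replicates only the directory tree, leaving the actual file transfer to unison. A safe local illustration of what those filters do (temp directories and file names are hypothetical):

```shell
# '+ */' includes every directory; '- *' excludes everything else (all files).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sysvol/example.com/Policies"
touch "$src/sysvol/example.com/Policies/GPT.INI"
rsync -a -f'+ */' -f'- *' "$src/sysvol" "$dst"
find "$dst" -type f | wc -l    # 0: the tree is copied, the files are not
```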

This should synchronize the two sysvols. If you followed my previous how-to and set up Group Policy, this can take some time as there are a lot of files involved that are stored on the SysVol. After it is complete, you can verify this by doing the following on your remote DC (DC2 in my example):

# ls /var/lib/samba/sysvol/

You should see the same file structure under that directory on both servers. This will copy everything including your group policy stuff as well.

Now that you have done the initial sync, just add the following to your crontab on the local DC (DC1 in my example):

# crontab -e
*/5 * * * * /usr/bin/unison -silent

You should monitor /var/log/sysvol-sync.log on your local DC (DC1 in my example) to ensure that everything is synchronizing and staying that way over time.

Hope this little “how-to” helps folks!


Active Directory Says What?

Many long-time readers of this blog are probably going to have a panic attack when they read this article, asking themselves, “Why in the heck does he want to install Active Directory in his life?” The reason, like the answer to so many of the questions I ask myself, is “Because I can!” LOL!!

So I have a small home network that is my playground for learning new technologies and practicing and growing my security skills. I try to keep it segregated from my true home network that my family uses because I don’t want my latest experiment to get in the way of any of them connecting to the Internet successfully.

Just for fun, however, I’m going to start on a path to try a new experiment – I’d like to have the ability to add a new machine to my network and not have to spend half a day setting it up. Furthermore, I’d like to put everything I can either on a local file server that backs up to the cloud or in the cloud that backs up to a local file server in such a way that I can totally destroy any of my machines and be able to reproduce it at the push of a button. The ultimate in home disaster recovery.

What does this buy me? Well, for one, it lets me be even more aggressive in my experimentation. If I lay waste to a machine because of a failed experiment, no big deal – I just nuke and automatically repave it. For another, it makes it way easier to recover a family member’s setup when something goes wrong. I can just rebuild the machine and know they won’t lose anything. That alone will save me lots of time troubleshooting the latest problems with stuff.

So, why Active Directory? I chose this technology because pretty much everything (OpenBSD is going to be interesting) will authenticate centrally with it. And yes, I do have to run some Windows and Mac machines on my network; I can’t do it all on OpenBSD and Linux, so AD is a good common ground.

Now, I will die before installing a Windows Server in my infrastructure (LOL) so I have been very careful saying “Active Directory” and not “Windows Server”, or “Azure AD”. I’m going to see how far Samba 4 has come since the last time I played with it. If I can do the full meal deal of authentication, authorization, roaming user profiles and network home directories on a Windows machine, then I can fill in around the edges on my non Windows machines using NFS and other techniques.

Setting up Ubuntu

First things first, I want to start with a clean install of my domain controller. To this end, I’ll nuke and repave my 32-core Threadripper box in my basement with the latest Ubuntu 21.10 build on it and install samba on bare metal. I had originally thought about doing this on a VM or on a Docker container, but I want the reliability and control-ability of a bare metal install with a static IP address, etc. Therefore, after carefully backing up the local files that I wanted to save off of this machine (ha – that’s a lie, I just booted from a USB thumb drive and Gparted the drives with new partition tables), I installed a fresh copy of Ubuntu 21.10 with 3rd party drivers for my graphics card.

Once I had the base OS laid down, I used the canonical documentation from (not documentation from Canonical, the owner of Ubuntu <g>), along with some blog posts (1), (2), and (3) to determine my full course of action. I’ll outline the various steps below.

Active Directory Domain Controller

First things first, we need to get the network set up the way Samba wants it on this machine. That consists of setting up a static IP address on the two NICs in my server (one for my “secure” home network and one for my insecure “family” network), setting the hostname, and making changes to the /etc/hosts file. Specifically, I used NetworkManager from the Ubuntu desktop to set the static IPs, the gateway and the netmasks, and then modified /etc/hosts as follows:

   localhost    DC1

It is important to note that Ubuntu will put in an additional line for your host and you need to (apparently, per the documentation) remove that. I then modified my /etc/hostname file as follows:

Now for a fun one. We need to permanently change /etc/resolv.conf and not have Ubuntu overwrite it on the next boot. To do that, we have to:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf

At this point, you should have the networking changes in place that you need for now. You’ll have to loop back later and change /etc/resolv.conf to use this machine’s IP address as the nameserver once Samba is running with its built-in DNS server, but we don’t want to lose name resolution in the meanwhile, so I’ve hard-coded it to point to my local DNS server on OpenBSD.

Now it’s time to install the necessary packages to make this machine an active directory domain controller:

# apt update
# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user dnsutils net-tools smbclient

Specify the FQDN of your server when prompted on the ugly purple screens for things like your Kerberos server and your Administrative server.

Now, it’s time to create the configuration files for Kerberos and Samba. To do this, I ran the following commands:

# mv /etc/krb5.conf /etc/krb5.conf.orig
# mv /etc/samba/smb.conf /etc/samba/smb.conf.orig
# samba-tool domain provision --use-rfc2307 --interactive

I took the defaults, being careful to double-check the DNS forwarder IP address (that’s where the DNS server serving your AD network will forward requests it cannot resolve itself), and then entered my Administrator password. Keep in mind that by default the password complexity requirements are set pretty high (which I like), so pick a good one.

Now use the following command to move the Kerberos configuration file that was generated by the Samba provisioning process to its correct location:

# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf

Next, we need to set things up so that the right services are started when you reboot the machine. To do that, issue the following commands:

# systemctl mask smbd nmbd winbind
# systemctl disable smbd nmbd winbind
# systemctl stop smbd nmbd winbind
# systemctl unmask samba-ad-dc
# vim /etc/systemd/system/samba-ad-dc.service
Description=Samba Active Directory Domain Controller

ExecStart=/usr/sbin/samba -D
ExecReload=/bin/kill -HUP $MAINPID

# systemctl daemon-reload
# systemctl enable samba-ad-dc
# systemctl start samba-ad-dc

Now go back and update the /etc/resolv.conf file to use the new Samba-supplied DNS service:

# vim /etc/resolv.conf

This is probably a good time to reboot your machine. When you do so, don’t forget to check that /etc/resolv.conf hasn’t been messed with by Ubuntu. If it has, double-check the work you did above and keep trying reboots until it sticks.

Now we need to create the reverse zone for DNS:

# samba-tool dns zonecreate -U Administrator
# samba-tool dns add 2.1 PTR -U Administrator

If you have multiple NICs in your AD server, you will need to repeat this process for their networks. At this point, double-check that the DNS responder is coming back with what it needs to in order to serve the black magic of the Active Directory clients:

# nslookup


# nslookup        name =

# host -t SRV has SRV record 0 100 389
# host -t SRV has SRV record 0 100 88
# host -t A has address

If you have multiple NICs in your AD server, you might want to double-check the DNS A records that are returned are reachable from the networks your clients typically use. Since I have a “home” network and a “secure” network, I can manage DNS and DHCP on my secure network so I tend to make sure that my domain controller hostname resolves to an IP address on the secure network. The Windows DHCP admin tools are pretty handy for checking on this and making changes.

Verify that the Samba service has file serving running correctly by listing all of the shares from this server as an anonymous user:

# smbclient -L localhost -N

You should see sysvol, netlogon and IPC$ listed. Any error about SMB1 being disabled is actually a good thing. Validate that a user can successfully log in:

# smbclient //localhost/netlogon -UAdministrator -c 'ls'

You should see a listing of the netlogon share directory which should be empty. Now check that you can successfully authenticate against Kerberos:

# kinit administrator
# klist

You should see a message about when your administrator password will expire if you are successfully authenticated by Kerberos. The klist command should show the ticket that was generated by you logging in as Administrator.

If you look at the documentation in the Samba Wiki, you’ll see that ntp seems to be preferred over chrony or openntpd. If you look at the documentation for chrony (which everyone seems to use), you’ll get a different story. However, when I used chrony, I kept getting NTP errors on my Windows clients, so in this post I’m configuring ntp.

# apt install ntp
# samba -b | grep 'NTP'
    NTP_SIGND_SOCKET_DIR: /var/lib/samba/ntp_signd
# chown root:ntp /var/lib/samba/ntp_signd/
# chmod 750 /var/lib/samba/ntp_signd/
# vim /etc/ntp.conf
restrict mask nomodify notrap
disable auth
# systemctl restart ntp

To be clear, the lines I’m showing after editing the ntp.conf file are lines that you ADD to the file. Also, if you have more than one NIC in the server, you’ll need to add them in on the restrict and broadcast lines as a second line for each.

Now, let’s test that everything is working by enrolling a Windows 10 machine into the domain. Ensure first that you are on the right network and just for safety’s sake, do a reboot so you pick up the DNS server, etc. I have modified the DHCP server on my network to pass the correct information that a client needs as follows (from /etc/dhcpd.conf in OpenBSD):

option domain-name "";
option domain-name-servers;
option ntp-servers;

Microsoft has done a bang-up job of hiding this in the UI compared to where it has been for literally decades (“get off my lawn!!”). I prefer the old-fashioned way so I ran the following using Windows key + R to get the old UI I’m most comfortable with:


Press the “Change” button and then select “Domain” and enter “” as the name of your domain. That should prompt you for your admin credentials. I typically use AD\administrator as my userid just to be safe. In a matter of seconds, you should be welcomed to the domain.

For safety’s sake, I recommend clearing out your application and system event logs on that machine, rebooting and logging in as your domain admin. Once that’s done, examine the event viewer to ensure that you aren’t seeing any errors that might indicate something isn’t configured correctly on the server. Remember to click the “other user” button on the Windows 10 login screen and use the AD\Administrator to tell Windows which domain you want to log into.

There is a warning (DNS Client Events, Event ID: 8020) that I see in the System event log. This appears to be a case where the Windows machine tries to re-register its dynamic DNS record in Samba with exactly the same info that is already registered, and Samba returns an error. Since you can still resolve the client machine from the server (so registration worked the first time), I think it can be safely ignored for now.

For ease of maintenance you might want to install the “Windows RSAT Tools” on your Windows machine that give you a good UI for managing all of the fun stuff that Active Directory brings to the table. They are a free download.

I really do NOT recommend using your domain controller as a file server. To set that up on another machine, please see the next section.

Samba File Server in a Domain

Thankfully, the wonderful documentation on the Samba WIKI has an entire entry dedicated to setting up Samba as a domain member. First things first, we need to configure the network settings on our file server to use the Active Directory server as the DNS server.

As I did with the domain controller above, I used NetworkManager from the Ubuntu desktop to set the static IPs, the gateway and the netmasks, and then modified /etc/hosts as follows:

   localhost    NAS

It is important to note that Ubuntu will put in an additional line for your host and you need to (apparently, per the documentation) remove that. I then modified my /etc/hostname file as follows:

We need to permanently change /etc/resolv.conf and not have Ubuntu overwrite it on the next boot. To do that, we have to:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf

After a quick reboot and verification that the resolv.conf changes survived, we need to install some packages:

# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user smbclient

Now we need to now configure Kerberos and Samba. First, if there are files currently at /etc/krb5.conf and/or /etc/samba/smb.conf, remove them. Create a new /etc/krb5.conf file with the following contents:

    default_realm = AD.EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true

Next, it will be necessary to synchronize time to the domain controller. Since this server won’t be broadcasting network time to client machines (i.e. it isn’t a domain controller), I’ll be setting it up with chrony which is built into Ubuntu.

# apt install chrony ntpdate
# ntpdate
# vim /etc/chrony/chrony.conf
server minpoll 0 maxpoll 5 maxdelay .05
# systemctl enable chrony
# systemctl start chrony

That line under the vim command should be the only line in the file. To validate that everything is working, a call to systemctl status chrony should show that it is active and running. First things first, we need to set up the /etc/samba/smb.conf file:

    workgroup = AD
    security = ADS
    realm = AD.EXAMPLE.COM
    netbios name = NAS
    domain master = no
    local master = no
    preferred master = no

    idmap config * : backend = tdb
    idmap config * : range = 50000-100000
    vfs objects = acl_xattr
    map acl inherit = Yes
    store dos attributes = Yes

    winbind use default domain = true
    winbind offline logon = false
    winbind nss info = rfc2307
    winbind refresh tickets = Yes
    winbind enum users = Yes
    winbind enum groups = Yes

Now we will need to join the domain:

# kinit administrator
# samba-tool domain join AD -U AD\\Administrator
# net ads join -U AD\\Administrator

You’ll probably get a DNS error when you join the domain. Regardless, add an A record and a PTR record for the server into the DNS as follows:

# samba-tool dns add 3.1 PTR -U Administrator
# samba-tool dns add A

If you have multiple NICs in your file server, make sure you repeat the process for the IP address ranges assigned to them. Now, add the “winbind” parameter as follows to /etc/nsswitch.conf:

# vim /etc/nsswitch.conf
passwd: files winbind systemd
group: files winbind systemd
shadow: files winbind

Next, we need to enable and start some services and update the PAM configuration:

# systemctl enable smbd nmbd winbind
# systemctl start smbd nmbd winbind
# pam-auth-update

Before proceeding any further, you should probably reboot the machine. Now for some tests to make sure that everything is working ok:

# wbinfo --ping-dc
checking the NETLOGON for domain[AD] dc connection to "" succeeded.
# wbinfo -g
... list of domain groups ...
# wbinfo -u
... list of domain users ...
# getent group
... list of Linux groups and Windows groups...
# getent passwd
... list of Linux users and Windows users...

Windows Home Directories

A common configuration done by Windows Domain administrators is to create a default “Home” drive (typically mapped to the H: drive letter) for users. To do this, we will want to first set up a file share on the server. The goal will be to set up a mapped “HOME” directory for each domain user. We’ll start off by adding the following to the /etc/samba/smb.conf file:

    comment = Home directories
    path = /path/to/folder
    read only = no
    acl_xattr:ignore system acls = yes
After issuing an “smbcontrol all reload-config” on the file server to reload the changes to the config file, you should now be able to see a share called \\nas\users. After creating the directory on the filesystem, set its ownership and permissions with the following commands:

# chown "Administrator":"Domain Users" /path/to/folder/
# chmod 0770 /path/to/folder/

It is important to grant the “SeDiskOperatorPrivilege” to the “Domain Admins” group as follows. This has to be done on the file server itself.

# net rpc rights grant "AD\\Domain Admins" SeDiskOperatorPrivilege -U "AD\administrator"

Finally, from “Active Directory Users and Computers”, select the user in the “Users” folder, right-click and select “Properties”. On the “Profile” tab, select the “Connect” radio button under “Home folder”, choose H: as the drive letter and put \\nas\users\{user name} in the “To:” entry field. This should automatically create the directory and set the correct permissions on it.

Now log out of the domain and back in as the user account you modified above and you should automatically get an H: drive that maps to that folder on the file server.

User Profiles

OK, so the cool kids on their Windows networks also have this thing called a “Roaming User Profile” that allows you to put their user profile on a file server and then they can move from one machine to another and simply access their stuff as if it was all the same machine. I wanted to see how Samba handled this and sure enough, I got a hit in the Samba wiki that indicated it was possible.

First things first, we need to create a share on our file server to hold the profiles, so I added this to my /etc/samba/smb.conf file:

    comment = Users profiles
    path = /path/to/profile/directory
    browseable = No
    read only = No
    csc policy = disable
    vfs objects = acl_xattr
    acl_xattr:ignore system acls = yes

After making that change, I need to create the directory to hold the profiles and set the UNIX ownership and permissions like I did with the home directories above:

# mkdir /path/to/profile/directory
# chown "AD\Administrator":"AD\Domain Users" /path/to/profile/directory
# chmod 0700 /path/to/profile/directory

After a quick “smbcontrol all reload-config” to pull the new changes in, we now have a share on the file server called “profiles” that will hold the resulting Windows user profiles. I used the “Active Directory Users & Computers” tool on my Windows machine (logged in as Administrator), opened the property dialog for my users, navigated to the “Profile” tab and entered the UNC name for the profile directory: \\NAS\profiles\{user-name}. The key thing to know is that, depending on the version of Windows, the system will add a suffix (in my case “.v6”) to that directory name, and the directory will initially be created empty. When you log out, the system actually copies your profile into the directory and you should see the directories and files show up on your file server. This is the consistent behavior: for example, a file saved into the “Documents” directory on the Windows machine isn’t propagated to the server’s file system until that user logs out.
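Because that version suffix depends on which Windows build created the profile, it can be handy to see at a glance what has landed on the server. A purely illustrative shell sketch (my own, not part of Samba):

```shell
# List each roaming profile directory and the Windows-added ".vN"
# version suffix (e.g. "jdoe.v6" -> "jdoe -> v6").
list_profile_versions() {
  for d in "$1"/*.v[0-9]*; do
    [ -d "$d" ] || continue
    name=${d##*/}
    printf '%s -> v%s\n' "${name%.v*}" "${name##*.v}"
  done
}
```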

It really was that easy!

Group Policy

Given that I had, at this point, a fully functional Active Directory infrastructure with network home directories and roaming user profiles, all of it running on open source platforms, I thought I’d really try to push it over the edge and dip my toe in the water around Group Policy. Group Policy is some magic stuff based on LDAP that, in the Windows world, allows you to automatically configure an end-user’s workstation. I found documentation in the Samba wiki that indicated it was possible to make this work, so I thought I’d give it a try and see what I needed to do.

It looked like the first thing I needed to do was load the Samba “ADMX” templates into the AD domain controller. To do that, I used the following command:

# samba-tool gpo admxload -H -U Administrator

Sure enough, logging into my Windows machine as a domain admin, I was able to see that the command had indeed injected the Samba files into the Sysvol:

H:\> dir \\DC1\SysVol\{your-domain}\Policies\PolicyDefinitions

That command above should show you the en-US directory and the samba.admx file. Now we need to download the Microsoft ADMX templates and install them:

# apt install msitools
# cd /tmp
# wget ''
# msiextract Administrative\ Templates\ \(.admx\)\ for\ Windows\ 10\ October\ 2020\ Update.msi
# samba-tool gpo admxload -U Administrator --admx-dir=Program\ Files/Microsoft\ Group\ Policy/Windows\ 10\ October\ 2020\ Update\ \(20H2\)/PolicyDefinitions/

The last line will take a few seconds as it processes the files and loads them into the SysVol. You can again confirm the presence of the new policies using the “dir” command above from your Windows machine. At this point, you have the group policies set up and installed into your environment and should be able to manipulate them using the “Group Policy Management Console” on your Windows workstation.
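Before heading back to Windows, you can also sanity-check the result on the DC itself. A hedged shell sketch (it assumes Samba's usual layout, where the domain's sysvol directory lives under /var/lib/samba/sysvol/{your-domain}):

```shell
# Check that admxload actually dropped the templates into the
# PolicyDefinitions directory of the given sysvol domain directory.
check_admx() {
  defs="$1/Policies/PolicyDefinitions"
  if [ -f "$defs/samba.admx" ]; then
    echo "samba.admx present"
  else
    echo "samba.admx missing"
  fi
}
```

For example, check_admx /var/lib/samba/sysvol/{your-domain} on the domain controller.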


While this is probably one of my stranger and more technical posts, I think it is a cool example of how you can totally eliminate paid software from your server infrastructure and still have the full functionality of something like Active Directory in your tool belt.

Posted in Uncategorized | 5 Comments

Thinkpad T14 (AMD) Gen 2 – A Brave New World!

As long-time readers of this blog are aware, I’m a bit of a Thinkpad fanatic. I fell in love with these durable machines when I was working for IBM back in the late 90’s and accidentally had one fall out of my bag, bounce down the jetway stairs and hit the runway hard – amazingly enough it had a few scuffs but zero damage! After the purchase of the brand by Lenovo, I was a bit worried, but they continue to crank out (at least in the Thinkpad T and X model lines) high-quality, powerful machines.

Thinkpad T480 – RIP

I ran into a nasty problem with my Thinkpad T480 where the software on the machine actually physically damaged the hardware. I know! I thought that was impossible too (other than the 70’s PET machine that had a software-controlled relay on the motherboard that you could trigger continuously until it burned out) but nope – the problem is real.

Essentially, the Thunderbolt I/O port on the machine is driven by firmware running out of an NVRAM chip on the motherboard that can be software-updated as new firmware comes out. As with any NVRAM chip, there are a finite number of write-cycles before the chip dies, but the number of times you will update your firmware is pretty small so it works out well.

Unfortunately, Lenovo pushed out a firmware update that wrote continuously to the NVRAM chip and if you didn’t patch fast enough (they did release an urgent/critical update), then the write-cycles would be exceeded, the chip would fail and the bring-up code would not detect the presence of the bus and thus you had no more Thunderbolt on the laptop. Well, I didn’t update fast enough so “boom” – it is now a Thunderbolt-less laptop.

The New T14 (AMD) Gen 2

Well, enter the need for a new laptop. I decided to jump ship from the Intel train and try life out on the “other side” by ordering a Thinkpad T14 (AMD) Gen 2 machine with 16gb of soldered RAM (there is a slot that I will be populating today that can take it up to 48gb max – I’m going with 32gb total by installing an $80 16gb DIMM) and the Ryzen Pro 5650U that has 6 cores and 12 threads of execution. The screen is a 1920×1080 400 nit panel and looks really nice.

When the laptop showed up, I booted the OpenBSD installer from 6.9-current and grabbed a dmesg, only to discover that I had lost the Lenovo lottery and had a Realtek WiFi card in the machine. The good news was that I had previously upgraded the card in my T480 to an Intel AX200, so I pulled the AX200 out of the T480 and used it in the T14 to replace the Realtek card. Worked like a charm.

The Ethernet interface on this machine is a bit odd. It’s a Realtek chipset as well, but it shows up as two interfaces (re0 and re1). The deal is that re0 is the interface that is exposed when the machine is plugged into a side-connecting docking station and re1 is the interface that is connected to the built-in Ethernet port. The device driver code that is in 6.9-current as of this writing works just fine with it, however, so I’m happy.

Now for the bad news. Every Thinkpad I have owned for the last decade allows me to plug an m.2 2242 SATA drive into the WWAN slot and it works great. I assumed that would be the case with this machine. While I had the bottom off to replace the WiFi card, I slipped the 1TB drive from the WWAN slot of my T480 into the WWAN slot of the T14 and booted up. I was immediately presented with an error message effectively stating that the WWAN slot was white-listed by Lenovo and would only accept “approved” network cards. I was beyond frustrated by this.

Given that I want to get this machine into my production workflow, I decided that I’d slog along for the time being by putting a larger m.2 2280 NVMe drive in, installing rEFInd to allow me to boot multiple partitions from a single drive, and then cloning the 512gb drive that is in the machine to the 1TB drive out of the T480. Then, the remaining space on the new drive will contain an encrypted partition for my OpenBSD install.

Installing rEFInd

I followed the instructions from the rEFInd site on how to manually install under Windows 10 and the steps I followed included downloading and unpacking the ZIP file and then running the following commands from an administrative command prompt:

C:\Users\xxxx\Downloads\refind-bin-0.13.2\> mountvol R: /s
C:\Users\xxxx\Downloads\refind-bin-0.13.2\> xcopy /E refind R:\EFI\refind\
C:\Users\xxxx\Downloads\refind-bin-0.13.2\> r:
R:\> cd \EFI\refind
R:\EFI\refind\> del /s drivers_aa64
R:\EFI\refind\> del /s drivers_ia32
R:\EFI\refind\> del /s tools_aa64
R:\EFI\refind\> del /s tools_ia32
R:\EFI\refind\> del refind_aa64.efi
R:\EFI\refind\> del refind_ia32.efi
R:\EFI\refind\> rmdir drivers_aa64
R:\EFI\refind\> rmdir drivers_ia32
R:\EFI\refind\> rmdir tools_aa64
R:\EFI\refind\> rmdir tools_ia32
R:\EFI\refind\> rename refind.conf-sample refind.conf
R:\EFI\refind\> mkdir images
R:\EFI\refind\> copy C:\Users\xxx\Pictures\mtstmichel.jpg images
R:\EFI\refind\> bcdedit /set "{bootmgr}" path \EFI\refind\refind_x64.efi

The next-to-last line is there because I wanted to have a picture of my “happy place” (Mount Saint Michel off of the northern coast of France) as the background for rEFInd. I edited the refind.conf file and added the following lines:

banner images\mtstmichel.jpg
banner_scale fillscreen

A quick reboot shows that rEFInd is installed correctly and has my customized background. Don’t be alarmed if the first boot with rEFInd is slow; it appears to do some scanning, processing and caching, because the second and subsequent boots are faster.

Cloning the Drives

The process that I am going to follow, at a high level, is to first clone the contents of the primary 1TB 2280 NVMe drive in my T480 to a spare 256GB drive. I will then erase the 1TB drive and clone the contents of my T14’s drive to it (it’s only 512GB). I will then erase the 512GB drive and clone the 256GB drive back to it. Finally, for good operational security (OpSec) purposes, I’ll use the open source Windows program Eraser to erase the 256GB drive. At this point I should have a bootable T480 (with a fried Thunderbolt bus – grr…) on the 512GB drive, and a bootable T14 on the 1TB drive.

I’m using Clonezilla, an open source tool that I burn to a bootable USB drive, to do the cloning. As for the hardware I am using to accomplish all of this, first I use a StarTech adapter that allows me to plug m.2 drives into a little box that then acts as a 2.5 inch SSD drive. I plug that into a Wavlink USB docking station that can hold either 3.5″ or 2.5″ drives.

Another piece of software that I use as part of this process is GParted Live – an open source tool that allows you to create a USB drive that boots into the GParted software (the GNOME Partition Editor). This allows me to view the partition structure of one drive and create an analogous partition structure on another drive. The built-in Windows tools for this work (Disk Management, for example) can create hidden partitions under the covers that can cause problems with this process. I prefer to use GParted to ensure that I can see and control everything that is going on.

Step One is to take the T480, boot it into Windows and connect the Wavlink device to it with the 256GB NVMe drive plugged in via the StarTech adapter. While I’m using Eraser to wipe the 256GB drive, I also go into Windows settings and decrypt the Windows disk by turning off BitLocker. This may not be necessary, but it makes me more comfortable to do the cloning with unencrypted Windows drives, because the encryption key is stored in the TPM device on the motherboard and I’m not sure whether the change in underlying hardware would muck that up. After the erase and decrypt are finished, I shrank the partition using “Disk Management” on Windows to be smaller than the new physical disk. If you don’t do this, then Clonezilla won’t allow you to clone from a larger partition to a smaller one.
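Since Clonezilla will refuse the clone if the source partition is even slightly larger than the destination, it is worth doing the arithmetic before rebooting. A trivial shell sketch (sizes in MiB, read off of Disk Management):

```shell
# Guard against Clonezilla rejecting the clone: the (shrunken) source
# partition must fit on the destination. Sizes are in MiB.
fits() {
  src_mib="$1"; dst_mib="$2"
  if [ "$src_mib" -le "$dst_mib" ]; then
    echo "fits"
  else
    echo "too big by $((src_mib - dst_mib)) MiB"
  fi
}
```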

Next we will need to reboot the machine into GParted Live. For the destination drive, you will need to use the “Device” menu and create a new GPT partition table. Take a look at the source drive and make a note of the various partitions, their flags, and their sizes. On the destination drive, recreate that partition structure with the same flags and the same or slightly larger sizes. I generally bump up the size of each partition by just a bit in order to avoid getting into trouble with rounding of the sizes displayed on the screen. If you get it wrong, don’t worry, Clonezilla will yell at you and you’ll have to go back and do this over again. 🙂

When launching Clonezilla, since I have the high resolution display on the T480 (a mistake I’ll never make again, HiDPI is a PITA in everything but Windows) I had to use my cell phone to zoom in on the microscopic text and select the “use 800×600 with large fonts from RAM” option. With readable text, I then make sure that I’m choosing “device-device” from the first menu (not the default). Next, select “Beginner Mode” to reduce the complexity of the choices you’ll have to make. After that, you want to select “part_to_local_part” to clone from one partition on the source drive to the corresponding partition on the destination drive. Finally, select the source partition and the destination partition. I recommend you do the smaller partitions first and then let the main C: partition (the largest one) grind because it can take a long time to clone.

After cloning the T480 drive, I removed it from the machine and was ready to clone the T14’s drive to it. This is where I ran into a “keying” problem with m.2 drives. Some are “B” keyed, and some are “B+M” keyed. This refers to the number of cutouts where they plug into the slot. Well, it turns out that the NVMe drives in both the T480 and the T14 don’t fit the StarTech adapter. After some juggling around, I found an old 256GB drive that I was able to use to get the swap completed.

Creating the OpenBSD Partition

To do this, I will use “Disk Management” on Windows and shrink the NTFS partition (if necessary) to make room for OpenBSD, and then create a new partition on the drive that takes up the remaining space. If you check the “don’t assign a drive letter” box and the “don’t format the partition” box, you’ll get a raw, unformatted partition that takes up the remaining space on the disk.

That new raw partition will be changed in OpenBSD to be the home of the encrypted slice on which I’ll be installing the operating system. After creating that partition, it’s time to download the 6.9-current .IMG file for the latest snapshot and use Rufus on Windows to create the USB drive and reboot from it.

Once in the OpenBSD installer, drop immediately to the shell and convert that new raw partition into an OpenBSD partition. That will be where we put the encrypted slice that we will be installing to. To do this, run the following commands:

# cd /dev
# sh ./MAKEDEV sd0
# fdisk -E sd0

sd0: 1> print
sd0: 1> edit 4
Partition id: A6
Partition offset <ENTER>
Partition size <ENTER>
Partition name: OpenBSD
sd0*: 1> write
sd0: 1> exit

The print command above should show you the 4 partitions on your drive (the EFI partition, the Windows partition, the WindowsRecovery partition and your fourth partition that will hold OpenBSD that you created above).

Now that you have a partition for OpenBSD, you’ll want to copy the EFI bootloader over to your EFI drive. You’ll later make a configuration change in rEFInd to not only display it on the screen, but also show a cool OpenBSD “Puffy” logo for it!

# cd /dev
# sh ./MAKEDEV sd1
# mount /dev/sd1i /mnt
# mkdir /mnt2
# mount /dev/sd0i /mnt2
# mkdir /mnt2/EFI/OpenBSD
# cp /mnt/efi/boot/* /mnt2/EFI/OpenBSD
# umount /mnt
# umount /mnt2

Now that you have an OpenBSD EFI bootloader in its own directory on the EFI partition, you’ll want to create the encrypted slice for the operating system install:

# disklabel -E sd0

sd0> a a
sd0> offset: <ENTER>
sd0> size: <ENTER>
sd0> FS type: RAID
sd0*> w
sd0> q

# bioctl -c C -l sd0a softraid0
New passphrase: <your favorite passphrase>
Re-type passphrase: <your favorite passphrase>

Pay attention to the virtual device name that bioctl spits out for your new encrypted “drive”. That’s what you will tell the OpenBSD installer to use. To re-enter the installer, type “exit” at the command prompt. Do your install of the operating system as you normally do. When you reboot, go into Windows.

First, download an icon for OpenBSD from here (or pick your favorite elsewhere). Next, bring up an administrative command prompt and use the following commands to mount the EFI partition and add the icon for OpenBSD:

C:\Windows\system32> mountvol R: /s
C:\Windows\system32> r:
R:> cd \EFI\refind
R:\EFI\refind> copy "C:\Users\<YOUR USER>\Downloads\495_openbsd_icon.png" icons\os_openbsd.png

Exit the command prompt and reboot. rEFInd is smart enough to find your OpenBSD partition and use the icon you just added. When you select it from the rEFInd UI, you should be prompted for your OpenBSD disk-encryption passphrase and be able to boot for the first time. I ran into a weird thing with my snapshot where it couldn’t download the firmware. I formatted a USB thumb drive as FAT32, downloaded the amdgpu, iwx, uvideo and vmm firmware from the site, mounted the drive on my OpenBSD system and ran fw_update -p /mnt to get the firmware.

At this point, you should be able to reboot and select either Windows or OpenBSD from your rEFInd interface. My hope is that Lenovo will remove this absurd white-listing of the WWAN devices from their UEFI/BIOS code and I’ll be able to plug drives into it again; however, if (and this is more likely) they do not, I’ll at some point buy a 2TB m.2 NVMe drive for this machine, repeat this process and be able to add Linux to it.

I hope folks find this guide helpful.
