YubiKey All The Things

As I continue to learn how to secure my various “things”, I’m becoming more and more of a fan of physical two-factor authentication that doesn’t involve sending six-digit codes over the public SMS network. As such, I’ve been playing around with the YubiKey 5 NFC device, a little USB second factor that costs about $50 US and is really handy.

The first thing I wanted to secure with my YubiKey was logins to my various devices. Long-time readers of this blog know that means I need things to work in Windows, Linux and OpenBSD. I thought it would be helpful to outline below how I did that.

OpenBSD

Now for the fun part. Setting up the YubiKey on OpenBSD! I followed this guide to set things up.

If you intend to use your YubiKey on OpenBSD, you will want to do this first, before anything else. The reason is that you need to capture the private identity and secret key for your YubiKey slot, which can only be done at the time you generate them. After they have been written to the key, they can’t be retrieved (otherwise, cloning a YubiKey would be a trivial exercise).

One downside to the implementation of login_yubikey is that it acts as the sole credential to log you in – in other words, it replaces your password and there is no second factor.

First off, you will want to install the YubiKey personalization app:

$ doas pkg_add yubikey-personalization-gui

Another limitation of how login_yubikey is currently implemented (as of 7.3) is that you can only have one key registered. There is no ability to register a second one. However, you can make a backup of the private identity and secret key at the time you generate them and store them in the same place you keep your backup YubiKey.

So, launch the YubiKey Personalization Tool GUI application and insert the YubiKey that you will be using as your only key for OpenBSD. In the UI, click on Yubico OTP in the upper left-hand menu and press the “Quick” button that shows up on the screen. Uncheck “Hide values” and copy the Public Identity, Private Identity and Secret Key off to a safe place. Select the slot you want to use (in my case, slot 1), press the “Write Configuration” button, and it should write the configuration to your YubiKey.

Now create a file called /var/db/yubikey/user.uid and put your private identity value in there (replacing “user” in the filename with your userid). Put the secret key into one called /var/db/yubikey/user.key (again, replacing “user” with your userid). Set up the right permissions on the two files:

# chown root:auth /var/db/yubikey/*
# chmod o-rw /var/db/yubikey/*
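If you prefer to do the file-creation step from the shell as well, here is a minimal sketch; the hex strings are placeholders for the Private Identity and Secret Key you copied out of the personalization tool, and “user” is your userid:

# echo "private_identity_hex_here" > /var/db/yubikey/user.uid
# echo "secret_key_hex_here" > /var/db/yubikey/user.key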

Finally, edit the /etc/login.conf file and add ‘yubikey’ at the beginning of the auth-defaults entry like this:

# Default allowed authentication styles
auth-defaults:auth=yubikey,passwd,skey:
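If you would rather not change the default for every account on the box, login.conf also lets you scope the authentication styles to a single login class instead. A sketch of what that could look like, assuming your user is in the staff class (merge the auth line into your existing staff entry if you already have one):

staff:\
    :auth=yubikey,passwd,skey:\
    :tc=default: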

Now if you reboot, you will find that your password no longer works. Touching your YubiKey after you insert it, however, should replace your password and log you in just fine. According to a wonderful contributor on DaemonForums.org, there is a challenge-response capability that I might be able to use to meet my 2FA requirement; however, I’ll have to tackle that in another post sometime later.

Windows 11

The first thing I needed to do was make sure that all of my Windows machines were running local accounts. I’m not a fan of Microsoft’s strong push to force everyone to use a Microsoft cloud login for their local machines. Windows 10 at least allowed you the option to ignore the strong push in the UI to set things up with a Microsoft login. For Windows 11, if that option exists, I cannot find it in the UI, so I have to initially set up the machine with a Microsoft cloud account and then, after the OS install is complete, switch it over to a local account.

By the way, if you are curious about how to switch from a Microsoft account to a local account, you need to bring up the settings pane in the UI, then navigate to “Accounts”. From there, click on “Your info” and select the item under “Account settings”. That will allow you to convert your existing Microsoft account to a local one. Or you can just create a new local account, set it as admin and delete the Microsoft one.

After you have either verified you are running a local account or migrated your account and rebooted / logged back in, you will want to create a second admin account that you can use without a YubiKey, to keep you from locking your keys in the car. While I acknowledge this makes things less secure, I created mine with a 20+ character password and no password recovery questions that make sense to anyone (just gibberish in the answers) and rolled with it. If you feel that your threat model wouldn’t support such a “back door”, there is no requirement to create such an account; just be warned that you could potentially lose access if you screw up.

That said, I then logged out of my normal user account and logged into this backup admin account. From there, I installed the “Login Confirmation” application from Yubico and rebooted the machine.

Upon reboot, the login screen looks as though your YubiKey is required to log in; it actually isn’t until you configure things. Once you log in, run the “Login Confirmation” application you just installed. I switched to “Advanced configuration” so that I could control the behavior of the application as I set it up. On the next screen, I selected the following options:

  • Slot 2
  • Use existing secret if configured – generate if not configured
  • Generate recovery code (I do this for the first machine I setup and then save it off, after that I don’t generate a new code)
  • Create backup device for each user (only do this if you have purchased 2 separate YubiKeys – I have and I keep my backup in a safe / secure location in case I lose the primary)

You then need to pick the accounts you want to secure with your YubiKey(s) (again, I only pick my primary account, not the new admin account I created in case I’m locked out) and click “Next”. You’ll then be prompted to insert your primary YubiKey, then your secondary one. At this point, you should be good to go. I reboot just for grins, and then verify that (1) I cannot log into my primary account unless one of the two YubiKeys is inserted; (2) I can log into my emergency admin account without a YubiKey; and (3) that both YubiKeys work for logging into my primary account.

Linux (Ubuntu)

First things first, we need to add the official PPA for Yubico to apt:

$ sudo add-apt-repository ppa:yubico/stable
$ sudo apt update

Now go ahead and install all of the Yubico software:

$ sudo apt install yubikey-manager yubikey-personalization libpam-yubico libpam-u2f yubikey-manager-qt yubioath-desktop

Next, you will need to set the PIN for the FIDO2 capability on both of your YubiKeys. To do this, run the Yubico Authenticator app and select “YubiKey” from the hamburger menu. Insert your primary YubiKey and scroll down to the Configuration section in the GUI. If you click on the right arrow next to WebAuthn (FIDO2/U2F), you can program the PIN. Do this for your backup key as well.
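If you would rather stay in the terminal, the yubikey-manager CLI installed above can set the FIDO2 PIN as well. The exact subcommand depends on your ykman version, so treat the following as an assumption to verify against ykman’s own help output:

$ ykman fido access change-pin   # recent ykman releases
$ ykman fido set-pin             # older releases use this form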

To associate the U2F key(s) with your Ubuntu account, open a terminal and insert your YubiKey:

$ mkdir -p ~/.config/Yubico
$ pamu2fcfg > ~/.config/Yubico/u2f_keys

You will be prompted to enter your PIN that you set above and then when the YubiKey lights up, touch the “y” symbol on the physical key and it will save the information on your account. Now repeat the process with your backup YubiKey:

$ pamu2fcfg -n >> ~/.config/Yubico/u2f_keys

Now, let’s add some additional security by moving the config file to a root-only accessible location and updating the PAM configuration to point to it:

$ sudo mkdir /etc/Yubico
$ sudo mv ~/.config/Yubico/u2f_keys /etc/Yubico/u2f_keys
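Since that file was created by your regular user, it doesn’t hurt to hand ownership to root and make it read-only for everyone else; a small sketch:

$ sudo chown root:root /etc/Yubico/u2f_keys
$ sudo chmod 644 /etc/Yubico/u2f_keys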

Now we need to edit /etc/pam.d/sudo and add the following line after the “@include common-auth” line:

auth    required    pam_u2f.so authfile=/etc/Yubico/u2f_keys

Now, let’s test things to be sure that sudo is working with the YubiKey. To do this, open a fresh terminal window, insert your YubiKey and run “sudo echo test”. You should have to enter your password and then touch the YubiKey’s metal button, and it will work. Without the YubiKey inserted, the sudo command (even with your password) should fail.

So now we need to repeat this process (adding the same pam_u2f line) in the PAM files for the following commands; a scripted shortcut follows the list:

  • runuser – /etc/pam.d/runuser
  • runuser -l – /etc/pam.d/runuser-l
  • su – /etc/pam.d/su
  • sudo -i – /etc/pam.d/sudo-i
  • su -l – /etc/pam.d/su-l
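If you’d rather script those edits than open each file by hand, here is a sketch that appends the same pam_u2f line right after the “@include common-auth” line in each file. It assumes every one of those files actually contains that include (check first), and you should back the files up before running it – a broken PAM stack can lock you out:

$ for f in runuser runuser-l su sudo-i su-l; do
    sudo sed -i '/^@include common-auth/a auth    required    pam_u2f.so authfile=/etc/Yubico/u2f_keys' /etc/pam.d/$f
  done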

Now we need to configure the system to require the YubiKey for login. To do this, we make the same change as above, but in the /etc/pam.d/gdm-password file. Do this also for the /etc/pam.d/login file if you want to protect console text-based login. Finally, create a log file for the system to use by touching /var/log/pam_u2f.log, and append the debug options shown below to any of the pam_u2f.so lines above that you want to debug:

auth    required    pam_u2f.so authfile=/etc/Yubico/u2f_keys debug debug_file=/var/log/pam_u2f.log

Reboot your system and you should be pretty much locked down to use the YubiKey for anything important.

LUKS Full Disk Encryption (Ubuntu)

OK, if you really want to take your 2FA to the next level, you can make it so that your YubiKey is required as a second factor to unlock your LUKS-encrypted disk. Not a replacement for your password, but a true second factor that is required in addition to your password in order to unlock the disk. I was able to get this to work by following the instructions in this post as well as the GitHub repo. To summarize, see below.

If you are using your YubiKey for Windows 11 login as I outlined above, the second slot on the key is already in use, so DO NOT run the slot initialization below – it would overwrite that configuration. If, however, you are not using that second slot for Windows login, you need to install the YubiKey personalization software and then initialize slot 2 (for both your primary and backup YubiKeys if you have two):

$ sudo apt install yubikey-personalization
$ ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible

Now, with either your existing slot 2 configuration for Windows 11 login, or with the new one that you did above, you will need to enroll your YubiKey to the LUKS slot. Figure out first what your partition name is using the lsblk command. You are looking for a partition labeled as “crypt”. In my case it is /dev/nvme0n1p3 (not the /dev/nvme0n1p3_crypt):

$ sudo apt install yubikey-luks
$ sudo yubikey-luks-enroll -d /dev/nvme0n1p3 -s 1

You will be prompted for a challenge passphrase to use to unlock your drive as the first factor, with the YubiKey being the second factor. Since you are using a higher security (2FA) mechanism to unlock the drive, there is no need for this challenge passphrase to be crazy long. You can use a much longer passphrase for slot 0 to unlock the drive without the YubiKey as a failsafe.

Repeat this in a different slot with your backup YubiKey. If you would like to see which slots are in use in your current LUKS partition, use the command:

$ sudo cryptsetup luksDump /dev/nvme0n1p3

That is also a good way to confirm you have the right partition name.
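If the plain lsblk output is hard to read, asking for specific columns makes the crypt layering more obvious (a sketch; these are standard util-linux column names, and the device showing “crypt” in the TYPE column is the mapper device sitting on top of the partition you actually want):

$ lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT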

Now, you will need to update /etc/crypttab to add a keyscript:

cryptroot /dev/nvme0n1p3 none    luks,keyscript=/usr/share/yubikey-luks/ykluks-keyscript

After editing the file, you will need to transfer the changes to initramfs:

$ sudo update-initramfs -u

At this point, you should be able to reboot your machine and verify that you can unlock the disk with your original LUKS passphrase (what is now your fallback) as well as your new challenge passphrase and the YubiKey.

If you would like to update your YubiKey challenge passphrase in the future, you simply use the same command you used to enroll it initially, but append a “-c” to clear out the old LUKS slot:

$ sudo yubikey-luks-enroll -d /dev/nvme0n1p3 -s 1 -c

When you are asked to “Enter any remaining passphrase”, that is where you enter your (hopefully) much longer fallback passphrase that doesn’t require the YubiKey, then you are asked to supply the new passphrase twice.

If you would like to update your fallback passphrase that doesn’t require a YubiKey, you can use the command:

$ sudo cryptsetup luksChangeKey /dev/nvme0n1p3 -S 0

You should be prompted for your old password and the new password twice.

UbuntuOne Single Sign On

For us Ubuntu users, eventually you end up creating a UbuntuOne account for things like access to Launchpad, access to your free UbuntuPro tokens, etc. Part of the setup for this is to supply a second factor for 2FA logins to increase the security of your account. The way I have mine set up, I have added Google Authenticator as one of my additional factors, but I’d like to have my YubiKey be my primary second factor with Authenticator as my fallback if I don’t have access to the YubiKey (either primary or secondary).

To set this up, navigate to https://login.ubuntu.com/ and log into your account. Then navigate to My Account -> Authentication Devices. On that screen you will see that you can “Add a New Authentication Device”. When you click on that, select YubiKey and follow the on-screen instructions. I would recommend doing this with your primary YubiKey as well as your backup one.

GitHub

If you would like to use your YubiKey for a second factor when logging into GitHub, it’s pretty easy to do. Simply log into your GitHub account, click on your picture in the upper right header and select Settings from the dropdown menu. On the settings screen, select “Password and Authentication” to navigate to that settings page. On this page, you will need to enable 2FA if you haven’t already done so. I use Google Authenticator as my fallback 2FA method here as well.

To add your YubiKey(s), click on the “Edit” button next to the “Security Keys” section and press the “Register new security key” button. You will be prompted to name your key (i.e. “Primary YubiKey” or “Secondary YubiKey” are what I used) and then you will be prompted to insert your key and press its metal button. Repeat this process with any additional YubiKey(s) you might have and then you have added this as a second factor for GitHub.

SSH 2FA

Adding a second factor to your SSH key infrastructure is fundamentally a really (did I say really?) good idea. The way you set this up server-side depends on the operating system that is hosting the ssh server. I’ll break it down below:

Ubuntu

I followed the howto on the Yubico site, using the instructions for “Non-discoverable” keys, which are stated as being higher security than “discoverable” keys. Starting out, I first checked the firmware of my YubiKey as follows:

$ lsusb | grep Yubico

# Get the two 4-digit numbers separated by a colon and use them in place of the xxxx:xxxx below
$ lsusb -d xxxx:xxxx -v 2>/dev/null | grep -i bcddevice

In my case, my firmware was version 5.12. This actually turned out to be 5.1.2 which put me below the minimum firmware version 5.2.3 for the stronger encryption. Also, you can’t update the firmware on your YubiKey – it is set at the factory. Ah well.

Given that, it’s time to generate the keypair. On your desktop machine, generate the U2F/FIDO2-protected key pair:

$ ssh-keygen -t ecdsa-sk # Older YubiKey firmware
$ ssh-keygen -t ed25519-sk # Firmware version 5.2.3+

When I ran the first command (because my YubiKey had the older 5.1.2 firmware), I got a message that said “You may need to touch your authenticator to authorize key generation” and yet I was never actually prompted to do so. Therefore, I added the -vvv switch to the command and saw an error saying that my device “does not support credprot, refusing to create unprotected resident/verify-required key”.

After doing some more digging, I discovered that there is a command I can run to validate the capabilities of my YubiKey:

$ sudo apt install fido2-tools
$ fido2-token -I /dev/hidraw1

What I discovered to my disappointment is that my primary YubiKey (which I have had for several years) does not support the “credProtect” feature (it should show up in the extension strings in the output of that command). My new secondary key, however, did. Therefore, I proceeded with my newer key and placed an order to Yubico for another one to use as my new primary. Sigh… My newer YubiKey also had firmware 5.4.3 so I’ll be able to use the newer crypto. Probably better in the long run.

I chose the filename of “id_primary_yubikey” for my primary key and “id_backup_yubikey” for my backup key. This generated a pair of files, one without a suffix and the other with a .pub suffix (indicating that it is the public key), for each of my two YubiKeys.

So. We now have a new primary and backup keypair in our local .ssh directory. How do we get this to work on our remote server(s)?

It’s pretty straightforward. Take the contents of the .ssh/id_primary_yubikey.pub file and append it to the ~/.ssh/authorized_keys file on the remote server you are trying to ssh into. Repeat this process with the .ssh/id_backup_yubikey.pub file. Now when you ssh into the remote system using the identity you generated:

$ ssh -i ~/.ssh/id_primary_yubikey user@remote-system

You should notice that the Yubico symbol lights up on your YubiKey asking you to touch it. When you do so, you should be logged into the remote system.
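By the way, if password authentication is still enabled on the remote server, ssh-copy-id can handle the appending step for you; a sketch:

$ ssh-copy-id -i ~/.ssh/id_primary_yubikey.pub user@remote-system
$ ssh-copy-id -i ~/.ssh/id_backup_yubikey.pub user@remote-system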

If you would like to add your new identity to your SSH agent:

$ ssh-add ~/.ssh/id_primary_yubikey

Now you should be able to ssh directly into your remote system without having to supply the identity file.
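Another option, instead of (or in addition to) the agent, is to point ~/.ssh/config at the key so you never need the -i flag; a sketch where the host alias and hostname are made up:

Host remote-system
    HostName remote-system.example.com
    User user
    IdentityFile ~/.ssh/id_primary_yubikey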

If you still can’t get into the remote system, it is possible that it is not configured to support the sk-ssh-ed25519@openssh.com algorithm. To see what algorithms your remote system accepts, log into it and run the following command:

$ ssh -Q PubkeyAcceptedAlgorithms
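If the sk- types are missing from that list and the server is running a reasonably recent OpenSSH (8.5 or newer, where the option goes by this name), one hedged fix is to append them in /etc/ssh/sshd_config and restart sshd:

# Add to /etc/ssh/sshd_config on the server
PubkeyAcceptedAlgorithms +sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com

$ sudo systemctl restart ssh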

Conclusion

Congratulations. You have now YubiKey’ed “All the Things!” Take that secondary YubiKey you bought and lock it in a safe somewhere. Keep the other one with you and you are now more secure than you were when you started.


Imitation is the Sincerest Form of Flattery

As many long-term readers of this blog know, I am pretty firmly entrenched into my Gnome 3 workflow and I try to keep my desktop experience as consistent as possible between the various machines and operating systems that I run. Over the years, I have been spending more and more time using Ubuntu and have become a real fan of the look and feel of the Yaru theme.

I thought it would be fun to document how I make my OpenBSD 7.3 system look and feel as close as I can to my current Ubuntu 23.10 (daily) system. As a result, this blog post might not be too useful for most of my readers so feel free to bail out. If, however, you have an interest in knowing how to tweak OpenBSD in this way, then by all means press on!

Installing the Basics

First, we assume a fresh install of OpenBSD that is fully patched up. We need to get some housekeeping out of the way so that we have a basic system on which to start customizing, so I’ll detail the steps below:

# Set up APMD on the laptop for power management
$ doas rcctl enable apmd
$ doas rcctl set apmd flags -A
$ doas rcctl start apmd

# Add my user to the staff login class
$ doas usermod -L staff USERNAME

# Modify the following in /etc/login.conf
...
staff:\
    :datasize-cur=4096M:\
    :datasize-max=infinity:\
    :maxproc-max=512:\
    :maxproc-cur=256:\
    :openfiles-max=102400:\
    :openfiles-cur=102400:\
    :tc=default:

# Modify /etc/sysctl.conf
kern.maxfiles=102400

# Install the base software needed
$ doas pkg_add gnome gnome-tweaks gnome-extras vim
$ doas rcctl disable xenodm
$ doas rcctl enable multicast messagebus avahi_daemon gdm

# Install additional software and utilities
$ doas pkg_add firefox chromium libreoffice nextcloudclient
$ doas pkg_add keepassxc aisleriot evolution evolution-ews
$ doas pkg_add tor-browser shotwell gimp

Tweaking the Themes and Extensions

OK. After rebooting and logging into a vanilla Gnome desktop through gdm, time to add the yaru theme. This can be found by searching for “yaru-remix” on https://www.gnome-look.org and installing it manually as follows:

# Download yaru-remix-complete-20.10.tar.xz
$ cd ~
$ mkdir .themes
$ cd .themes
$ mv ~/Downloads/yaru-remix-complete-20.10.tar.xz .
$ unxz yaru-remix-complete-20.10.tar.xz
$ tar xf yaru-remix-complete-20.10.tar
$ mv themes/* .
$ rmdir themes
$ doas mv icons/* /usr/local/share/icons
$ rmdir icons
$ doas mv wallpaper/* /usr/local/share/backgrounds/gnome
$ rmdir wallpaper
$ rm yaru-remix-complete-20.10.tar

At this point, launch “Extensions” and turn on “User Themes”. To pick up this change, you will have to restart Gnome, so I normally just do a reboot. Once I’m back, I fire up “Tweaks” and on the “Appearance” tab, I select “Yaru-remix-dark” for Icons, Shell and Legacy Applications. I also turn on minimize and maximize on the “Window Titlebars” page and enable “Mouse Click Emulation – Fingers” on the “Keyboard & Mouse” page.

Now we have something that is starting to look like Ubuntu. The next step will be to use the wonderful Gnome extension “Dash to Dock” to get that good old “Unity” looking launcher on the left. First, download the latest version of the extension (for your version of Gnome Shell – Settings -> About) from https://extensions.gnome.org/extension/307/dash-to-dock and drop to a terminal.

$ mkdir -p ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ cd ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ unzip ~/Downloads/dash-to-dockmicxgx.gmail.com*.zip

Now fire up “Extensions” and turn on Dash to Dock. Press the “Settings” button to get to the settings for the extension. Position the dock on the screen to the “Left”, select “Panel mode” and set the icon size limit to what works for you. On the “Launchers” tab, I turn off the “Show trash can” and “Show volumes and devices” because I don’t use that functionality and would rather have room for more stuff that I can “pin” to the dock.

I typically pin Chromium, Firefox, Evolution, Files and Terminal to my dock. To accomplish this, I typically just launch the application with the Meta key and type its name, then hit Enter. It should now have an icon as a running application in the dock. I right-click on it and select “Pin to Dock”. I then position it by dragging it to where I’d like to see it.

Don’t Forget the Terminal

OK. So now we have the Yaru theme and a dock on the left that looks a lot like what you have in Ubuntu. It’s time to start tweaking other aspects of the setup. To get the Ubuntu font, you will need to:

$ doas pkg_add ubuntu-fonts
$ doas fc-cache

Launch tweaks again and set the Interface Text to “Ubuntu Medium” and the “Legacy Window Titles” to “Ubuntu Bold”. I also change my Antialiasing to Subpixel because it is a laptop with an LCD screen.

Now for the terminal window. From the hamburger menu on the right of the titlebar of the terminal, select “Preferences” and then the “Unnamed” profile. On the “Text” tab, click the “Custom font” checkbox. Switch to the “Colors” tab. Here, we are going to change a lot of things. The first thing I change is to uncheck “Use colors from system theme” and select “GNOME dark” from the “Built-in schemes”.

Then, I set the following colors:

  • Background color: #481036
  • Palette Color 4 (the blue one in the top row): #1572E6

Now, I need to modify things so that when I do an “ls”, I actually get colors. To accomplish that, I install the “colorls” package and alias “ls” to “colorls -G”:

$ doas pkg_add colorls

# Add the following to ~/.profile:
export ENV=$HOME/.kshrc

# Create ~/.kshrc
alias ls="colorls -G"

If you reboot, you should notice that typing “ls” shows your files color-coded based on the file type. Now, we need to edit the PS1 environment variable to get our terminal’s command prompt to look like the one in Ubuntu. To do this, add the following to your ~/.profile:

export PS1='\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

Now, a final reboot should have you looking pretty darned similar to Ubuntu. I’m not doing anything nutty like changing my default shell from ksh to bash (although I guess you could do that). Happy OpenBSD-ing!


Fast Follower – Unlimited POWER!!!!!

In my most recent post, I covered how I was able to successfully install Ubuntu 20.04 LTS for s390x on an emulated mainframe using QEMU. While I have physical hardware for AMD64, ARM64 and RISC-V, there is another currently-supported processor architecture for Ubuntu that I don’t have the hardware for, and that is IBM POWER. This CPU is the latest evolution of the PowerPC RISC CPU that was developed by Apple, IBM and Motorola in the ’90s and that was the core of Apple laptops and desktops for many years.

The current version of this architecture is available from IBM in its Power Systems servers, and PowerPC variants are still used in industrial and automotive applications as well as in some gaming consoles of the recent past. Given my success with Ubuntu 20.04 LTS on s390x with QEMU, I’m going to try replicating that for this platform as well. If I am successful, I will then have all of the processor architectures covered for Ubuntu Server.

First off, I installed the QEMU bits I needed:

$ sudo apt install qemu-system-ppc64 qemu

A quick peek in /usr/bin shows that “qemu-system-ppc64le” is installed. The “little endian” (a CS term that refers to how bytes in a word are stored physically in memory with the least significant byte first and the most significant byte last) version is how the POWER architecture version of Ubuntu is implemented.
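If you ever want to confirm which byte order a given system is running, lscpu reports it directly (this works on the host now and inside the guest later):

$ lscpu | grep "Byte Order"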

In order to get our POWER guest connected to the network, we need to reconfigure this machine to use a bridge interface just like we did for the s390x. Do this by editing your /etc/netplan/*.yaml file as follows:

# This is the network config written by 'subiquity'
network:
  ethernets:
    eth0:
      dhcp4: no
  bridges:
    br0:
      dhcp4: no
      addresses:
        - 192.168.1.199/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
          - 192.168.1.1
        search:
          - example.com
      interfaces:
        - eth0
  version: 2

Apply the changes:

$ sudo netplan apply --debug

Set up the tunneling support needed by QEMU:

$ sudo apt install qemu-kvm bridge-utils
$ sudo ip tuntap add tap0 mode tap
$ sudo brctl addif br0 tap0

Now create the virtual storage device that we will be installing Ubuntu LTS onto:

$ qemu-img create -f qcow2 ubuntu-run.qcow2 10G

At this point, we need to download the installer ISO. Unlike s390x, the emulated pseries machine can boot the ISO directly, so there is no kernel and initrd extraction step this time:

$ wget https://old-releases.ubuntu.com/releases/22.04/ubuntu-22.04-beta-live-server-ppc64el.iso

Then, create a script to launch the installer:

#! /usr/bin/bash
 
qemu-system-ppc64le -machine pseries -cpu power9 -nodefaults -nographic -serial telnet::4441,server -m 8192 -smp 2 -cdrom ubuntu-22.04-beta-live-server-ppc64el.iso -drive file=ubuntu-run.qcow2,format=qcow2 -net nic,model=virtio-net-pci -net tap,ifname=tap0,script=no,downscript=no

After you run this, it will wait for a telnet connection before proceeding. From a separate ssh session into the host, bring up the network interface:

$ ip link set up dev tap0
$ ip a # confirm that tap0 shows "up"

Now, you can telnet into the virtual serial console and run the install as you normally do:

$ telnet localhost 4441

After the install finishes and reboots, kill the qemu session with a CTRL+C in its ssh terminal and modify your run script:

#! /usr/bin/bash

qemu-system-ppc64le -machine pseries -cpu power9 -nodefaults -nographic -serial telnet::4441,server -m 8192 -smp 2 -drive file=ubuntu-run.qcow2,format=qcow2 -net nic,model=virtio-net-pci -net tap,ifname=tap0,script=no,downscript=no

Then, create a systemd service file for it as /etc/systemd/system/ppc64le.service:

[Unit]
Description=Launch ppc64le  emulator session
After=getty.target
Before=ppc64le-network.service
 
[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/root/run-ubuntu.sh
TimeoutStartSec=0
 
[Install]
WantedBy=default.target

Create a script file called /root/tuntap.sh to bring up the qemu network interface (don’t forget to chmod +x the file):

#! /usr/bin/bash
 
sleep 15
brctl addif br0 tap0
ip link set up dev tap0

Then create a systemd service file for it as /etc/systemd/system/ppc64le-network.service:

[Unit]
Description=Enable ppc64le networking
After=getty.target
 
[Service]
Type=simple
RemainAfterExit=no
ExecStart=/root/tuntap.sh
TimeoutStartSec=0
 
[Install]
WantedBy=default.target

Finally enable and start the services:

$ sudo systemctl enable ppc64le-network.service
$ sudo systemctl start ppc64le-network.service
$ sudo systemctl enable ppc64le.service
$ sudo systemctl start ppc64le.service

A reboot should automatically show the two new services running and you should be able to ssh into your new POWER machine running Ubuntu 22.04 over the network as if it were physical hardware. It might be a bit slow, but it works!


What’s a Mainframe?

For those of you born… shall we say… more recently than some of us, you might not be familiar with the term “mainframe”, or you might think it refers to some ancient server lost to the mists of time. The generic term refers to any large single or multi-user computer that was typically larger than a single cabinet in a datacenter. In the vernacular of today, it almost exclusively refers to multi-user hardware from IBM running either a proprietary operating system such as MVS or, as we’ll see below, Linux.

Mainframes are still much in use today, running old legacy applications and serving as large servers that (interestingly enough) might run a variant of the Linux operating system. Interestingly, Ubuntu has been available for this platform for some time and I decided I wanted to add this unique architecture to my collection of machines at my disposal.

Unfortunately I don’t have gigawatts of power at my house nor the necessary external watercooling that some of these beasts require (plus, my wife would have had my head) so I decided to go out on a limb and try to spin a modern Ubuntu LTS up on emulated “zSeries” (that’s what IBM calls it these days) hardware.

I initially tried getting this working using the v4 version of Hercules based on an interesting blog post, but was unsuccessful. If you are interested in the details, jump to the end of this post and maybe you can figure out what I was doing wrong. In the meanwhile, here is what I did to get things working using QEMU.

Ubuntu 20.04 LTS on zSeries using QEMU

First things first, I did a search to see if I could find a simple how-to that walked me through the process. I did find some, but they were a bit out of date and also didn’t go into the networking aspects of QEMU to a level where I could successfully spin things up. Most of what follows is based on those posts, updated with what actually worked for me.

The obvious initial step is to install QEMU as well as the special features that allow it to emulate a zSeries processor from IBM:

$ sudo apt install qemu-system-s390x qemu

The next step in the process is to create a network bridge on the machine that you are using as the host. To do this, edit your /etc/netplan/*.yaml file (substitute your NIC name and IP information):

# This is the network config written by 'subiquity'
network:
  ethernets:
    enx000ec6306fb8:
      dhcp4: no
  bridges:
    br0:
      dhcp4: no
      addresses:
        - 192.168.1.99/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
          - 192.168.1.1
        search:
          - example.com
      interfaces:
        - enx000ec6306fb8
  version: 2

You then have to activate the changes to your network. Note that running the command below could result in losing remote access to the machine you are doing this on, so you might have to re-ssh into the box from another terminal window.

$ sudo netplan apply --debug

Then, set up the qemu tap device that uses the bridge you created above:

$ sudo apt install qemu-kvm bridge-utils
$ sudo ip tuntap add tap0 mode tap
$ sudo brctl addif br0 tap0

Now you need to create a disk image to use as your virtual storage device to install Ubuntu on:

$ sudo qemu-img create -f qcow2 ubuntu-run.qcow2 10G

You can make the image any size you want. I also tried this with a “raw” formatted image but ended up having better luck using the qcow2 format instead. Now you have a place to install your Ubuntu s390x machine.

Now, you need to download the latest install image. I tried to get this working with 22.04 LTS but the installer kept crashing on me at various points in the process. I suspect it might have a dislike for the virtual serial console over telnet business. Therefore, I proceeded with 20.04 LTS instead:

$ wget https://old-releases.ubuntu.com/releases/20.04/ubuntu-20.04.4-live-server-s390x.iso

Now you will need to mount the ISO and extract the kernel and initrd images because those will be used by QEMU in its command-line:

$ mkdir tmpmnt
$ sudo mount -o loop ./ubuntu-20.04.4-live-server-s390x.iso tmpmnt
$ cp tmpmnt/boot/kernel.ubuntu .
$ cp tmpmnt/boot/initrd.ubuntu .
$ sudo umount tmpmnt

To make things easier for myself, I created a script to launch the emulated s390x environment named run-s390x.sh:

#! /usr/bin/bash

qemu-system-s390x -machine s390-ccw-virtio -cpu max,zpci=on,msa5-base=off -serial telnet::4441,server -display none -m 8192 -cdrom ubuntu-20.04.4-live-server-s390x.iso -kernel kernel.ubuntu -initrd initrd.ubuntu -drive file=ubuntu-run.qcow2,format=qcow2 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no

Once I was ready to install things, I ran the script:

$ chmod +x run-s390x.sh
$ sudo ./run-s390x.sh

At this point, the emulator is paused, waiting for you to connect (via telnet port 4441 on the localhost address) to a virtual serial console. Therefore, from another terminal window on this machine, connect to the virtual serial console:

$ telnet localhost 4441

At this point you should see the system boot up. It takes some time in the emulated environment. Eventually you get to the Ubuntu installer. From yet another terminal window on this machine, bring the tap0 interface up (I find that it doesn’t come up on its own until something is actually attached to it from qemu):

$ ip link set up dev tap0
$ ip a # confirm that tap0 shows "up"

Back in the installer, I chose to run it in “rich mode” and chose the following options:

  • English
  • Ubuntu Server
  • On the “ccw screen” – just hit “Continue”
  • Take the network defaults (should be DHCP from your network) or configure to match a static IP address
  • No proxy
  • Take the default mirror address
  • Skip the 3rd party drivers

The install took a while but eventually completed successfully. Modify the run-s390x.sh file to be as follows (take out the installer, etc.):

#! /usr/bin/bash

cd /root
ip tuntap add tap0 mode tap
brctl addif br0 tap0

qemu-system-s390x -machine s390-ccw-virtio -cpu max,zpci=on,msa5-base=off -smp 2 -serial telnet::4441,server -display none -m 8192 -drive file=ubuntu-run.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-ccw,devno=fe.0.0001,drive=drive-virtio-disk0,bootindex=1 -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no

I then created a systemd service to start the qemu session at boot time after the network is operational by editing /etc/systemd/system/s390x.service:

[Unit]
Description=Launch s390x  emulator session
After=getty.target
Before=s390x-network.service

[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/root/run-s390x.sh
TimeoutStartSec=0

[Install]
WantedBy=default.target

Next, create another script to bring up the tap0 interface (I called mine /root/tuntap.sh):

#! /usr/bin/bash

sleep 15
ip link set up dev tap0

Set the execute bit on the script:

$ sudo chmod +x /root/tuntap.sh

Then, create a second systemd service, /etc/systemd/system/s390x-network.service, to start networking after qemu has attached to tap0:

[Unit]
Description=Enable s390x networking
After=getty.target

[Service]
Type=simple
RemainAfterExit=no
ExecStart=/root/tuntap.sh
TimeoutStartSec=0

[Install]
WantedBy=default.target

Finally enable and start both services:

$ sudo systemctl enable s390x.service
$ sudo systemctl start s390x.service
$ sudo systemctl enable s390x-network.service
$ sudo systemctl start s390x-network.service

Rebooting then confirmed that the service is started at boot time and reachable from the network. At this point you should be able to ssh into the s390x “machine” as if it were a real mainframe running Ubuntu on your network.

Using Hercules instead of QEMU

This turned out to be a bit of a dead-end, for me at least. The newer installer (22.04) had a kernel fault (I suspect because the virtual processor was too “old” as configured to be supported – probably fixable) so I used an older version. I managed to get it to ping the network, but the DNS wasn’t working and even with a hardcoded manual IP address for the Ubuntu server, it didn’t work.

From:
https://sdl-hercules-390.github.io/html/hercinst.html#install
http://www.fargos.net/packages/README_UbuntuOnHercules.html

Started with 22.04 server

Install Hercules v4 (the one that installs with apt is v3)

$ sudo apt install git wget time build-essential cmake flex gawk m4 autoconf automake libtool-bin libltdl-dev libbz2-dev zlib1g-dev libcap2-bin libregina3-dev net-tools
$ git clone https://github.com/SDL-Hercules-390/hyperion.git
$ cd hyperion
$ ./util/bldlvlck # make sure everything is “OK”
$ ./configure
$ make
$ sudo make install

Installing the helper scripts

$ cd ~
$ wget http://www.fargos.net/packages/ubuntuOnHercules.tar.gz
$ mkdir ubuntu-hercules
$ cd ubuntu-hercules
$ tar xvf ../ubuntuOnHercules.tar.gz

Install Ubuntu


$ cd ubuntu-hercules
$ wget https://old-releases.ubuntu.com/releases/22.04/ubuntu-22.04-live-server-s390x.iso
$ LD_LIBRARY_PATH=~/hyperion/.libs ./makeNewUbuntuDisk -c 48000 -v 22 # makes a 32g disk for Ubuntu 22.04
# Modify ./hercules-ubuntu.cfg to have the DASD show up with a -v22.disk filename
$ LD_LIBRARY_PATH=~/hyperion/.libs ./boot_ubuntu.sh --help
# In my case, I need to set the default gateway and DNS server and change the hostname
$ sudo LD_LIBRARY_PATH=~/hyperion/.libs ./boot_ubuntu.sh --iso ubuntu-22.04-live-server-s390x.iso --dns 192.168.x.x --gw 192.168.x.x --host s390x
# When you get asked questions in the install, prefix your answer with a period (‘.’)
# Choose CTC for network and then pick id #1 for read and #2 for write (at least that’s the one that worked for me)
# Use Linux communication protocol when prompted
# Do not autoconfigure the network but do it manually
# Select 10.1.1.2/24 for your IP address and 10.1.1.1 for your gw and 8.8.8.8 for dns server
# The system seems to have a problem resolving DNS so I used the IP address of the Ubuntu archive mirror


Fiber + Static IP = Self-Hosting Glory!

Recently, a new Internet Service Provider (ISP) became available in my area. No longer confined to a choice between the cable TV company and the telephone company to supply the bits to my house, I now had true gigabit fiber as an option! Needless to say, I had some questions.

The first question was, “How difficult is it to get a static IP address?” I wanted to know this because the cable TV company wanted you to switch from a residential service to a business service, and then there was some sort of biological sampling, signing over your firstborn child and some “feats of strength” required to get one of these magical things. For the new ISP, the answer was simple – send us an email asking for one and it will cost you $10 US per month to keep it. Wow. That was easy. On to the next question.

The next question was the tricky one. My cable TV provider purposely blocked certain ports such as port 25 (SMTP) and there was no way around that. I asked the new ISP if they blocked any ports and the answer was, “No. Why would we do that?” Again – amazing! At this point, I was ready to start moving all of my stuff from the cloud to my house. First things first, I had multiple HTTPS-secured websites to move. Uh oh. How do I serve up multiple websites with multiple different certificates from a single public IP address? Time to test my Google Fu.

Turns out, my OpenBSD 7.1 router could come to the rescue. By doing a reverse-proxy setup with Apache2 and SSL termination, I could accept HTTPS traffic for multiple sites on my single IP address, serve up the right certificate to the browser on the other side of the communication and then pass along the traffic in the clear (HTTP) on port 80 to various servers on my home network. Finding blog posts about this was easy. Making it work proved to be a bit tricky. I’m sure I could have done this with the OpenBSD httpd daemon (which has a much smaller attack surface than massive old Apache2) but that will be some research and investigation for another post (hopefully) in the future.

OpenBSD Reverse Proxy + SSL Termination

First off, something rare for this blog – a picture! This is the logical traffic flow for my setup:

SSL Termination / Reverse Proxy

To pull this off, I have to first install and enable Apache2 on my OpenBSD Octeon Router:

$ doas pkg_add apache2
$ doas rcctl disable httpd
$ doas rcctl enable apache2
$ doas rcctl start apache2

Next, I have to get HTTPS certificates for my various sites. While I would have loved to have done this using certbot, I couldn’t because a C language library needed by Python3 to make that work isn’t available on the Octeon build (my router doesn’t use an Intel/AMD CPU). I then tried using acme-client but found the configuration to be too challenging to pull off right away. Perhaps another blog post in the future. Anyhow, I used a Linux box and ran certbot to generate each of my certificates. I then wrote a little bash script to use scp to copy them to the right folder on my OpenBSD router and scheduled it with cron. Kickin’ it old school!
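That script looks roughly like the sketch below. Treat it as an outline only: the domain list, the remote host name and the assumption that the account you scp as can write to /etc/ssl/private on the router are all specific to my setup.

#!/usr/bin/env bash
# Copy renewed Let's Encrypt certificates to the OpenBSD router.
DOMAINS="www.example1.com www.example2.com"
for d in $DOMAINS; do
    scp /etc/letsencrypt/live/$d/cert.pem \
        /etc/letsencrypt/live/$d/privkey.pem \
        /etc/letsencrypt/live/$d/fullchain.pem \
        root@openbsd-router:/etc/ssl/private/$d/
done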

After that, it was time to write the necessary configuration in /etc/apache2/httpd2.conf for each of the sites. As you can see, this assumes that the SSL certificates are in the /etc/ssl/private directory on my OpenBSD router:

<VirtualHost *:80>
    ServerName www.example1.com
    ServerAlias www.example1.com

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]

    ProxyPass "/" "http://192.168.1.101/"
    ProxyPassReverse "/" "http://192.168.1.101/"
    ProxyPreserveHost On
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example1.com
    ServerAlias www.example1.com

    ProxyPass "/" "http://192.168.1.101/"
    ProxyPassReverse "/" "http://192.168.1.101/"
    ProxyPreserveHost On

    SSLEngine On
    SSLCertificateFile /etc/ssl/private/www.example1.com/cert.pem
    SSLCertificateKeyFile /etc/ssl/private/www.example1.com/privkey.pem
    SSLCertificateChainFile /etc/ssl/private/www.example1.com/fullchain.pem

    SSLProxyEngine On

    <Location "/">
        SSLRequireSSL
        RequestHeader set X-Forwarded-Proto "https"
        RequestHeader set X-Forwarded-Ssl on
        RequestHeader set X-Url-Scheme https
        RequestHeader set X-Forwarded-Port "443"
    </Location>
</VirtualHost>

It is also necessary to further edit the /etc/apache2/httpd2.conf file to uncomment the “LoadModule” configuration lines for the modules used by the above configuration: ssl_module, proxy_module, proxy_connect_module, proxy_http_module, rewrite_module and headers_module (that last one provides the RequestHeader directive). After this, simply do an “rcctl restart apache2” and ensure that you were successful. If not, go back and double-check the configuration file.
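A quick way to find the relevant commented-out lines before uncommenting them in your editor (a sketch; the exact module list in your httpd2.conf may differ):

$ doas grep -nE "LoadModule.*(ssl|proxy|rewrite|headers)" /etc/apache2/httpd2.conf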

Next, you will need to make sure that your pf firewall allows port 80 and 443 through so that your site can be reached from off of the OpenBSD machine. To do this, add the following to your /etc/pf.conf file:

# Allow serving of HTTP
pass in on { $wan } proto tcp from any to any port 80
# Allow serving of HTTPS
pass in on { $wan } proto tcp from any to any port 443

Reload the rules for pf using “$ doas pfctl -f /etc/pf.conf” and that step is done. You will also likely need to map ports 80 and 443 from your residential gateway (provided by your ISP) to send them to the OpenBSD router. At this point you should be able to hit your SSL-protected site from outside of your network. I always test this by turning off the wifi on my cell phone and using its browser on the telco’s network. As you add more “internal” websites, simply duplicate those two sections above and restart your Apache2 daemon on the OpenBSD router.

What About Email?

This one turned out to be very, very interesting. And by that I mean really stinking hard! The basics of it weren’t that bad. Here, I was able to use the wonderful “relayd” service that is native to OpenBSD to take all of the traffic I receive for the various email communication ports and fan them out to the appropriate back-end servers.

At first, I thought I would have to create a separate server for each email domain I wanted to host. Each of those servers would have to have its own SMTP server and each would have to have its own IMAP server. Also, if I wanted to have webmail for a particular domain, I would have to set it up to be an additional pair of entries in the http/https configuration in the previous section.

However, when I started configuring the DNS entries for all of this, I realized the error in my thinking. I only had a single public IP address so I needed the moral equivalent of that reverse proxy magic that I built using Apache2 on my OpenBSD router. How does one do this in the world of SMTP and IMAP? Well, it turns out there is a solution called Server Name Indication (or SNI) that is supported by the major SMTP and IMAP services in the Linux world. Therefore, I elected to host my email on Linux. Perhaps I will do a future blog post on how I migrated this to OpenBSD?

First things first, I needed to set up the necessary DNS entries to ensure that not only will my mail get routed to me, but that it will be considered deliverable and not “spammy” in any way. These included the following entries for each domain:

A * 1.2.3.4 15 min TTL
A mail.example1.com 1.2.3.4 15 min TTL
MX @ 10 mail 15 min TTL
@ IN TXT "v=spf1 mx a -all"
_dmarc IN TXT "v=DMARC1;p=quarantine;rua=mailto:admin@example1.com"
mail._domainkey IN TXT "v=DKIM1; h=sha256; k=rsa ; p=*"

For the above, the “1.2.3.4” is your static IP address from your ISP and you obviously need to fill in bits with your domain name as well as the DKIM content represented by the p=* section in the last entry. Perhaps I’ll do a full setup post in the future on this topic.

After setting up DNS, you will then need to configure your mail server. I chose postfix for the SMTP server as it supports SNI and dovecot for the IMAP server for the same reason. Once that was done and I could access things securely from within my private network, I then set up relayd on my OpenBSD router:

$ doas rcctl enable relayd
$ doas rcctl start relayd

I then wrote the following configuration file in /etc/relayd.conf to map the necessary ports to the mail server:

ext_addr="192.168.1.2"  # private IP address of OpenBSD Router
mail_host="192.168.1.201" # private IP address of mail server

relay smtp {
    listen on $ext_addr port 25
    forward to $mail_host port 25
}

relay submission_tls {
    listen on $ext_addr port 465
    forward to $mail_host port 465
}

relay submission_starttls {
    listen on $ext_addr port 587
    forward to $mail_host port 587
}
relay imaps {
    listen on $ext_addr port 993
    forward to $mail_host port 993
}

After restarting relayd, we need to add some entries to /etc/pf.conf to ensure that the traffic actually gets through the OpenBSD firewall and hits relayd:

# Allow servicing of SMTP
pass in on { $wan } proto tcp from any to any port 25
# Allow servicing of Submission TLS
pass in on { $wan } proto tcp from any to any port 465
# Allow servicing of Submission startTLS
pass in on { $wan } proto tcp from any to any port 587
# Allow servicing of IMAPS
pass in on { $wan } proto tcp from any to any port 993

Now reload your pf rules with “$ doas pfctl -f /etc/pf.conf” and your machine should be relaying traffic. Finally, you will need to port map ports 25, 465, 587 and 993 on your residential gateway provided to you by your ISP and traffic should start flowing through. Test this from outside of your network and verify that everything is working as expected.

Conclusion

Using these techniques, you should be able to host any number of SSL enabled websites and properly secured email domains on private servers within your home network. This means that you can save some money by not having to use virtual servers in the cloud and also increase the privacy of your services because you physically control the servers themselves.

Don’t forget to back up your data from these servers and then store it somewhere offsite (preferably in two places) in an encrypted fashion. One thing the cloud does make simple is just checking a couple of checkboxes and you suddenly have snapshots of your virtual server stored offsite. You can never have too many backups.

Anyhow, I hope this was helpful for everyone!


The Most Metal Thing I’ve Done Today

As a middle-aged electric bass player, the “metal moments” of my life have been coming with less frequency than they did when I was younger. As a result, I tend to look for opportunities to be “metal” on any given day. To that end, I want to explore Canonical’s Metal as a Service or MaaS. Yeah, I know, I went for the cheap pun!

For those of you who aren’t familiar with this awesome piece of software, it essentially allows you to take a collection of physical servers on a private network and turn them into a cluster that allows you to pass out physical or virtual servers to users and then reclaim them when you are done. It does all of this using standard protocols that make life very, very easy. For example, the MaaS servers boot off of DHCP/PXE from an image hosted on the controller so that the OS image doesn’t live on the physical disk of the machine, freeing its built-in storage up for use by the cluster. Additionally, the software supports things like the Intel Active Management Technology (AMT) and its ability to allow remote power on / power off of machines that have this capability (along with many other more enterprise-y technologies for similar control).

For the purpose of this post, I’m going to create a MaaS cluster out of six machines that I have dedicated to the purpose and will be using them to host various projects in my home lab. As long-time readers of this blog know, I am a fan of the Lenovo Thinkpad family of laptops so as a result (like many in my cult) I have quite a stack of them lying around at any given time. For the purpose of this, I will be harnessing the power of my W520, W530 and W541 machines – all of which support the AMT (and more importantly I haven’t CoreBoot-ed yet so it still is enabled).

In addition, I have what I call my “Beast”, a tower machine with a Threadripper CPU that has 32 virtual cores, my NAS box (another AMD cpu machine that has a bunch of spinning physical disks) and finally the machine I’m using for my controller. For that purpose, I dragged out an old Dell laptop I had lying around. It only has one NIC (a WiFi card that I used to attach to my home network) but I picked up a USB-3 gigabit Ethernet adapter that is well supported by Linux to use to run the private network.

The controller machine connects to my home network (10.0.0.0/24) as well as to a small 8-port managed Gigabit switch that all five of the worker nodes will be solely attached to (192.168.100.0/24). That’s the physical network layout. Pretty simple. I also took the time to put a proper AMT password on the machines that support this technology which the MaaS controller will use to reboot them as needed. For the two AMD machines, I have to physically press the power button – at some point I might get an IP enabled power strip that is supported by MaaS and use it to allow them to be “remote controlled” as well but this works just fine for the time being. You might also want to check that virtualization is turned on in the BIOS for any of the machines you are using.

I’m using Ubuntu 22.04 Server for the controller machine and am running it pretty much vanilla except for some network configuration to allow it to serve as a packet router from the private network to my home network so that machines in the cluster can download packages as needed. I could work around that by hosting a mirror on my controller with the packages I needed (I think) but this was easier. For most of this post, I’m basing my configuration on the MaaS 3.2 documentation.

I downloaded the latest 22.04 server from the Ubuntu website and then used the “Startup Disk Creator” application that ships as part of the base OS on my laptop to create a bootable USB drive. After booting from the USB drive on the Dell laptop, the only configuration change I made to the default install was to enable an SSH server on the machine so I can remote in and do everything I need to from my laptop (except for pressing the power buttons a few times on the worker nodes).

Once the controller is installed and booted up, I have to make some network configuration changes to allow it to have a static IP address on both the home network side (WiFi) as well as on the private network that it will be managing. To do this, I edit the /etc/netplan/00-installer-config.yaml file to look like the following:

network:
  ethernets:
    enx000ec6306fb8:
      dhcp4: false
      optional: true
      addresses: [192.168.100.1/24]
  wifis:
    wlp1s0:
      dhcp4: false
      optional: true
      addresses: [10.0.0.5/24]
      nameservers:
        addresses: [8.8.8.8]
      routes:
        - to: default
          via: 10.0.0.1
      access-points:
        "my_ssid_name":
          password: "********"
  version: 2

After saving these changes, I ran “sudo netplan try” to test the configuration and ensure that everything is working the way I wanted it to. Once I was satisfied with the network, I updated the machine (“sudo apt update” and then “sudo apt upgrade”). After that, I reboot the machine to pick up the new kernel I downloaded in the updates.

I want my machines on the private network to be able to reach the Internet through the MaaS controller. To make things simple, I’m just going to set up a basic router on this machine using a guide I found here:

# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl net.ipv4.ip_forward=1
# iptables -A FORWARD -i enx000ec6306fb8 -o wlp1s0 -j ACCEPT
# iptables -A FORWARD -i wlp1s0 -o enx000ec6306fb8 -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -t nat -A POSTROUTING -o wlp1s0 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o enx000ec6306fb8 -j MASQUERADE
# apt install iptables-persistent

After running the “apt install…” command, make sure you tell it to persist the IPV4 and IPV6 rules and they will be stored in /etc/iptables under files called “rules.v4” and “rules.v6”. At this point, because I’m old-school, I do a reboot.

For my lab, I want to be as close to a “production” environment as I can get. Therefore, I’m opting for a “region+rack” configuration. Using snaps, installing MaaS is… well… a snap:

$ sudo snap install --channel=3.2 maas

The next thing we need to do is set up a PostgreSQL database for this instance of MaaS:

$ sudo snap install maas-test-db

At this point, it is time to initialize your instance of MaaS:

$ sudo maas init region+rack --database-uri maas-test-db:///

I took the default for my MaaS URL (http://10.0.0.5:5240/MAAS). I then ran the command “$ sudo maas createadmin” and provided my admin credentials and my Launchpad user for my ssh keys.

At this point, I logged into my MaaS instance from that URL and did some configuration. First, I named my instance and set the DNS forwarder to one that I liked. Next, we need to enable DHCP for the private network so that it can PXE boot new machines on the network. To do this, navigate to the Subnets tab and click on the hyperlink in the “vlan” column that corresponds to the private network. Click “Configure DHCP” and then fill in the Subnet dropdown to correspond to the IP address range of your private network then save the change. You should now notice the warning about DHCP not being configured has gone away from the MaaS user interface.

The next thing we need to do is set up the default gateway that is shared by the MaaS DHCP server to the machines. To do this, navigate to the “Subnets” tab and click on the hyperlink in the “subnet” column for your private network. Click “Edit” and fill in the Default Gateway IP address and the DNS address if you’d like. After clicking “Save” your machines will be automatically configured to use the default gateway you provided (in my case, the private network IP address of my MaaS controller).

I first booted up the Thinkpads (the ones with Intel AMT) on the private network; they PXE boot off of the MaaS controller and eventually show up under the “Machines” tab of the MaaS user interface. I clicked on each of them in the MaaS user interface, configured their names, set their power type to Intel AMT, and provided the passwords and IP addresses I had set up in the firmware on each of them. I then booted up the AMD machines and, in their configuration, just set their power type to “Manual”.

At this point, you need to get the machines into a “usable” state for MaaS. To do that, check the box next to each one on the “Machines” tab and select “Commission” from the actions menu. You’ll have to physically power on any machines that don’t have Intel AMT, and then they will go through the commissioning process. When done, they will show up as “Ready” on the “Machines” tab.

Now I need to get the default gateway working for each of the machines. There might be an easier way of doing this; however, I haven’t figured it out yet so I’m following part of a guide found here. For each machine, click on it and then navigate to the network tab. When there, check the box next to the network interface that is connected to the private network’s switch and press the “Create Bridge” button. Name the bridge “br-ex”, the type is “Open vSwitch”, select the fabric and subnet corresponding to your private network and pick “auto assign” for the ip mode.

Now, check the boxes next to your “Ready” machines and select “Deploy” from the actions menu. Be sure to check the “Auto Assign as KVM host” to make them available to host virtual machines for you. Press the “Start deployment…” button and be sure to power on any that don’t have Intel AMT technology to control their power state. At this point you should be done with the power button pushing business unless you need to do maintenance on the machines.

This seemed as good a time as any to create a MaaS user for myself. To do this, I navigated to the “Settings” tab and selected “Users” and then clicked “Add User”. I filled in the details (by the way, MaaS enforces no duplicate email addresses among its users so if you are like me and want an admin account and a separate user account, you’ll have to use two email addresses) and clicked “Save” and I was good to go. I logged in as that user and supplied my SSH key from Launchpad.

If you now switch to the main MaaS “KVM” tab, you should see your machines available and be able to add virtual machines. You do this by clicking on one of the hosts and then clicking the “Add Virtual Machine” button. It then shows up as a “New” machine in the MaaS user interface.

I then log in as my “user” account in MaaS and deploy the “New” virtual machines. Once they are completely deployed, you can then ssh into them from a machine that has connectivity to the private network. The only trick I discovered is that you have to log in as the “ubuntu” user, NOT the user you have set up in MaaS.
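
For example, from a machine with connectivity to the private network (the address is a made-up example of whatever MaaS assigned to the virtual machine):

$ ssh ubuntu@192.168.100.20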

At this point, I have a working MaaS home lab that I can use for a variety of projects. I hope that you found this post helpful!

Posted in Uncategorized | 2 Comments

Active Directory Needs Friends!

For those of you who didn’t read my predecessor post on setting up a full-blown Active Directory infrastructure on my home network with home directories, roaming user profiles and group policy using only open source software, take a read through that. This is a follow-on post where I have added a second Active Directory domain controller in a private cloud environment and then bridged that private cloud network to my secure home network using WireGuard.

Bridging The Networks

To start off, since I’m using the bleeding-edge Ubuntu version on my primary domain controller, I set up a virtual server at my cloud provider of choice using 21.10 as well. On the cloud side, I put it on its own private network that does not collide with my home network (192.168.1.0/24); in this case it is 192.168.2.0/24.

My VPS provider allows me to supply SSH keys at their web console, which restricts who can ssh into the remote virtual machine to only those who hold the private key corresponding to the public keys you upload and select. This lets me log into the machine with root-level access without fear. The first thing to do when I log into the new server, however, is to update the packages installed on it:

# apt update
# apt upgrade
# reboot

Now for the wireguard setup on the remote virtual machine. For the purposes of this section, we will call it the “server”:

# apt install wireguard wireguard-tools
# wg genkey | sudo tee /etc/wireguard/server_private.key
# wg pubkey < /etc/wireguard/server_private.key | sudo tee /etc/wireguard/server_public.key
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# echo "net.ipv6.conf.all.forwarding=1" >> /etc/sysctl.conf
# sysctl -p
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
# vim /etc/wireguard/wg0.conf
[Interface]
Address = 10.10.10.1/32
ListenPort = 51820
PrivateKey = *** contents of /etc/wireguard/server_private.key ***
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT

[Peer]
PublicKey = *** public key of the OpenBSD router (the other peer; its /etc/wireguard/server_public.key, generated below) ***
Endpoint = 1.2.3.4:51820 # IP address of remote
AllowedIPs = 10.10.10.2/32, 192.168.1.0/24

Since my local network is on a residential ISP, I need to use the tools on my ISP’s router to port map the Wireguard port that comes in on the public IP address to the OpenBSD router. Now, we will need to set up the WireGuard configuration on the OpenBSD 7.0 router that I use for my secure network at home (private IP is 192.168.1.1):

# pkg_add wireguard-tools
# sysctl net.inet.ip.forwarding=1
# echo 'net.inet.ip.forwarding=1' | tee -a /etc/sysctl.conf
# mkdir /etc/wireguard
# chmod 700 /etc/wireguard
# openssl rand -base64 32 > /etc/wireguard/server_private.key
# wg pubkey < /etc/wireguard/server_private.key > /etc/wireguard/server_public.key
# vim /etc/hostname.wg0
inet 10.10.10.2 255.255.255.0
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg0.conf
!route add -inet 192.168.2.0/24 10.10.10.2
# vim /etc/wireguard/wg0.conf
[Interface]
PrivateKey = *** contents of /etc/wireguard/server_private.key ***
ListenPort = 51820

[Peer]
PublicKey = *** contents of /etc/wireguard/server_public.key from the remote Linux server ***
Endpoint = 2.3.4.5:51820 # public IP address of remote
AllowedIPs = 10.10.10.1/32, 192.168.2.0/24
# vim /etc/pf.conf
... add to end...
pass in on egress proto udp from any to any port 51820 keep state
pass on wg0
pass out on egress inet from (wg0:network) to any nat-to (egress:0)
# pfctl -f /etc/pf.conf
# sh /etc/netstart wg0

Now, run the following commands on the remote Linux box to start the WireGuard service:

# systemctl enable wg-quick@wg0.service
# systemctl start wg-quick@wg0.service

At this point, you should be able to check the status of the Wireguard network on both sides with the command wg show and that should show both ends connected. You should be able to ping hosts on the remote network from each end.
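
For example, from the remote Linux server (the OpenBSD side is analogous), wg show should list the peer with a recent handshake, and pings of the other end’s tunnel address and of a host on the home network should succeed:

# wg show
# ping 10.10.10.2
# ping 192.168.1.1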

So far, the only problem I have found with this setup for bridging the networks is that my Windows machines that are multi-homed (i.e. one interface – wired ethernet – connected to my ISP’s network and one – wireless – connected to my secure network) need to have a route manually added as follows:

C:\WINDOWS\system32> route add -p 192.168.2.0 MASK 255.255.255.0 192.168.1.1

In this case, the 192.168.2.0/24 network is the remote network and the 192.168.1.1 IP references my OpenBSD 7.0 router.

Remote Samba Active Directory Server

Now that we have a remote network that is securely bridged to our local private network on which the current Samba Active Directory infrastructure is running, it is time to create the virtual server that will run our remote Active Directory domain controller. My VPS provider allows me to create a server that is on the same private network as my remote “router” that is running WireGuard, so I create such a server and call it DC2.ad.example.com (put in your own AD domain name there).

First things first, the remote AD server must have a route to the WireGuard network. This is not a necessary step on the home network side because the WireGuard server is running on the OpenBSD 7.0 router, which by definition is the default route for the servers on that network. That is not the case for the servers on the private network at the VPS. To fix this, we simply need to add a persistent route. So as to not mess things up with the default network configuration on the remote host, I decided to create a (yuck) SystemD (blech) service:

# apt update
# apt upgrade
# apt install net-tools
# vim /usr/sbin/MY-NETWORK.sh
#! /bin/sh
/usr/sbin/route add -net 192.168.1.0/24 gw 192.168.2.2 eth1
# chmod +x /usr/sbin/MY-NETWORK.sh
# vim /etc/systemd/system/MY-NETWORK.service
[Unit]
Description=Route to Wireguard server
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/sbin/MY-NETWORK.sh

[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl enable MY-NETWORK.service
# systemctl start MY-NETWORK.service

At this point, you should be able to ping the domain controller on the remote (home) network and from that domain controller, you should be able to ping the new host.

Now we need to do the standard networking configuration ‘stuff’ that Samba likes. First, edit the /etc/hosts file to remove the “127.0.1.1 DC2.ad.example.com DC2” line and replace it with one tying it to the static private IP address that has been assigned to this virtual host. In this case, “192.168.2.3 DC2.ad.example.com DC2”.
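
The resulting /etc/hosts on the new controller ends up looking roughly like this (using the addresses from this example):

127.0.0.1    localhost
192.168.2.3  DC2.ad.example.com    DC2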

Here we need to add the necessary packages to host an Active Directory domain controller:

# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user dnsutils net-tools smbclient

Next, disable systemd’s resolver, point DNS at the existing AD server (DC1, reachable across the WireGuard tunnel) and add the Active Directory domain to the search list:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf
nameserver 192.168.1.2
search ad.example.com

Now, go ahead and reboot the remote machine and when you log back into it, test to see if DNS is working properly:

# nslookup DC1.ad.example.com
Server:     192.168.1.2
Name:      DC1.ad.example.com
# nslookup 192.168.1.2
2.1.168.192.in-addr.arpa    name = DC1.ad.example.com
# host -t SRV _ldap._tcp.ad.example.com
_ldap._tcp.ad.example.com has SRV record 0 100 389 dc1.ad.example.com

Rename the /etc/krb5.conf file and the /etc/samba/smb.conf file like you did when you created the domain controller on your local network. Then, create a new /etc/krb5.conf file:

[libdefaults]
    default_realm = AD.EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true

At this point, we need to set up an NTP server and sync it to the one at our original Active Directory domain controller:

# apt install chrony ntpdate
# ntpdate 192.168.1.2
# echo "server 192.168.1.2 minpoll 0 maxpoll 5 maxdelay .05" > /etc/chrony/chrony.conf
# systemctl enable chrony
# systemctl start chrony
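
To confirm that the new controller is actually syncing against DC1, something like the following should show the service as active and list 192.168.1.2 as a source:

# systemctl status chrony
# chronyc sources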

Now we need to authenticate against Kerberos and get a ticket:

# kinit administrator
... provide your AD\Administrator password ...
# klist

At this point, it’s time to join the domain as a new domain controller:

# samba-tool domain join ad.example.com DC -U"AD\administrator"

After the tool finishes (it produces a lot of output), you need to copy the generated Kerberos configuration file to the /etc directory:

# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf

You need to manually create the systemd service and set things up so that everything fires up when you reboot the server:

# systemctl mask smbd nmbd winbind
# systemctl disable smbd nmbd winbind
# systemctl stop smbd nmbd winbind
# systemctl unmask samba-ad-dc
# vim /etc/systemd/system/samba-ad-dc.service
[Unit]
Description=Samba Active Directory Domain Controller
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/sbin/samba -D
PIDFile=/run/samba/samba.pid
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl enable samba-ad-dc
# systemctl start samba-ad-dc

OK. At this point we have a Samba Active Directory domain controller running. We need to get SysVol replication going now to ensure that the two controllers are bidirectionally synchronized.

Bidirectional SysVol Replication

To get the SysVol replication going bidirectionally, I followed the guide here. First, you need some tools installed on both DCs:

# apt install rsync unison

Generate an ssh key on both domain controllers:

# ssh-keygen -t rsa

Now, copy the /root/.ssh/id_rsa.pub contents from one server into the /root/.ssh/authorized_keys file on the other and vice-versa. Verify that you can log in without passwords from one server to the other. If you are prompted for a password, then edit your /etc/ssh/sshd_config file and add the line “PasswordAuthentication no” and then restart the ssh service. Now you should be able to log in just using public keys and no password from one server to the other and back.
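
One way to do the copy and the test from DC1 is shown below (assuming password logins are still allowed for this first hop; otherwise paste the key in by hand, and reverse the host names for the other direction):

# cat /root/.ssh/id_rsa.pub | ssh root@DC2 'cat >> /root/.ssh/authorized_keys'
# ssh root@DC2 hostname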

On your new remote DC (DC2 in my example), do the following to ensure that your incoming ssh connection isn’t rate limited:

# mkdir /root/.ssh/ctl
# cat << EOF > /root/.ssh/ctl/config
Host *
ControlMaster auto
ControlPath ~/.ssh/ctl/%h_%p_%r
ControlPersist 1
EOF

Now, to be able to log what happens during the sync on the local DC (DC1 in my example), do the following to create the appropriate log file:

# touch /var/log/sysvol-sync.log
# chmod 640 /var/log/sysvol-sync.log

Now, do the following on the local DC (DC1 in my example):

# install -o root -g root -m 0750 -d /root/.unison
# cat << EOF > /root/.unison/default.prf
# Unison preferences file
# Roots of the synchronization
#
# copymax & maxthreads params were set to 1 for easier troubleshooting.
# Have to experiment to see if they can be increased again.
root = /var/lib/samba
# Note the double slash (//) after DC2 – it is required
root = ssh://root@DC2//var/lib/samba
# 
# Paths to synchronize
path = sysvol
#
#ignore = Path stats    ## ignores /var/www/stats
auto=true
batch=true
perms=0
rsync=true
maxthreads=1
retry=3
confirmbigdeletes=false
servercmd=/usr/bin/unison
copythreshold=0
copyprog = /usr/bin/rsync -XAavz --rsh='ssh -p 22' --inplace --compress
copyprogrest = /usr/bin/rsync -XAavz --rsh='ssh -p 22' --partial --inplace --compress
copyquoterem = true
copymax = 1
logfile = /var/log/sysvol-sync.log
EOF

Now, run the following command on your local DC (DC1 in my example):

# /usr/bin/rsync -XAavz --log-file /var/log/sysvol-sync.log --delete-after -f"+ */" -f"- *"  /var/lib/samba/sysvol root@DC2:/var/lib/samba  &&  /usr/bin/unison

This should synchronize the two sysvols. If you followed my previous how-to and set up Group Policy, this can take some time as there are a lot of files involved that are stored on the SysVol. After it is complete, you can verify this by doing the following on your remote DC (DC2 in my example):

# ls /var/lib/samba/sysvol/ad.example.com

You should see the same file structure under that directory on both servers. This will copy everything including your group policy stuff as well.

Now that you have done the initial sync, just add the following to your crontab on the local DC (DC1 in my example):

# crontab -e
*/5 * * * * /usr/bin/unison -silent

You should monitor /var/log/sysvol-sync.log on your local DC (DC1 in my example) to ensure that everything is synchronizing and staying that way over time.
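
For example:

# tail -f /var/log/sysvol-sync.log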

Hope this little “how-to” helps folks!

Posted in Uncategorized | Leave a comment

Active Directory Says What?

Many of the long-time readers of this blog are going to probably have a panic attack when they read this article because they are going to be asking themselves the question, “Why in the heck does he want to install Active Directory in his life?” The reason, like so many answers to so many of these questions I ask myself is “Because I can!” LOL!!

So I have a small home network that is my playground for learning new technologies and practicing and growing my security skills. I try to keep it segregated from my true home network that my family uses because I don’t want my latest experiment to get in the way of any of them connecting to the Internet successfully.

Just for fun, however, I’m going to start on a path to try a new experiment – I’d like to have the ability to add a new machine to my network and not have to spend half a day setting it up. Furthermore, I’d like to put everything I can either on a local file server that backs up to the cloud or in the cloud that backs up to a local file server in such a way that I can totally destroy any of my machines and be able to reproduce it at the push of a button. The ultimate in home disaster recovery.

What does this buy me? Well, for one, it lets me be even more aggressive in my experimentation. If I lay waste to a machine because of a failed experiment, no big deal – I just nuke and automatically repave it. For another, it makes it way easier to recover a family member’s setup when something goes wrong. I can just rebuild the machine and know they won’t lose anything. That alone will save me lots of time troubleshooting the latest problems with stuff.

So, why Active Directory? I chose this technology because pretty much everything (OpenBSD is going to be interesting) will authenticate centrally against it. And yes, I do have to run some Windows and Mac machines on my network; I can’t do it all on OpenBSD and Linux, so it’s a good common ground.

Now, I will die before installing a Windows Server in my infrastructure (LOL), so I have been very careful to say “Active Directory” and not “Windows Server” or “Azure AD”. I’m going to see how far Samba 4 has come since the last time I played with it. If I can do the full meal deal of authentication, authorization, roaming user profiles and network home directories on a Windows machine, then I can fill in around the edges on my non-Windows machines using NFS and other techniques.

Setting up Ubuntu

First things first, I want to start with a clean install of my domain controller. To this end, I’ll nuke and repave my 32-core Threadripper box in my basement with the latest Ubuntu 21.10 build on it and install samba on bare metal. I had originally thought about doing this on a VM or on a Docker container, but I want the reliability and control-ability of a bare metal install with a static IP address, etc. Therefore, after carefully backing up the local files that I wanted to save off of this machine (ha – that’s a lie, I just booted from a USB thumb drive and Gparted the drives with new partition tables), I installed a fresh copy of Ubuntu 21.10 with 3rd party drivers for my graphics card.

Once I had the base OS laid down, I used the canonical documentation from wiki.samba.org (not documentation from Canonical, the owner of Ubuntu <g>), along with some blog posts (1), (2), and (3) to determine my full course of action. I’ll outline the various steps below.

Active Directory Domain Controller

First things first, we need to get the network set up the way Samba wants it on this machine. That consists of setting up a static IP address on the two NICs in my server (one for my “secure” home network and one for my insecure “family” network) and setting the hostname and /etc/hosts file changes. Specifically, I used NetworkManager from the Ubuntu desktop to set the static IPs, the gateway and the netmasks and then modified /etc/hosts as follows:

127.0.0.1    localhost
192.168.1.2  DC1.ad.example.com    DC1

It is important to note that Ubuntu will put in an additional 127.0.0.1 line for your host and you need to (apparently, per the documentation) remove that. I then modified my /etc/hostname file as follows:

DC1.ad.example.com

Now for a fun one. We need to permanently change /etc/resolv.conf and not have Ubuntu overwrite it on the next boot. To do that, we have to:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf
nameserver 192.168.1.1
search ad.example.com

At this point, you should have the networking changes in place that you need for now. You’ll have to loop back around later and change /etc/resolv.conf to use this machine’s IP address as the nameserver once Samba’s built-in DNS server is up and running, but we don’t want to lose name resolution in the meanwhile, so I’ve hard-coded it to point to my local DNS server on OpenBSD.

Now it’s time to install the necessary packages to make this machine an active directory domain controller:

# apt update
# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user dnsutils net-tools smbclient

Specify the FQDN of your server when prompted on the ugly purple screens for things like your Kerberos server and your Administrative server.

Now, it’s time to create the configuration files for Kerberos and Samba. To do this, I ran the following commands:

# mv /etc/krb5.conf /etc/krb5.conf.orig
# mv /etc/samba/smb.conf /etc/samba/smb.conf.orig
# samba-tool domain provision --use-rfc2307 --interactive

I took the defaults, being careful to double-check the DNS forwarder IP address (that’s where the DNS server serving your AD network will forward requests it cannot resolve itself), and then entered my Administrator password. Keep in mind that by default, the password complexity requirements are set pretty high (which I like), so pick a good one.

Now use the following command to move the Kerberos configuration file that was generated by the Samba provisioning process to its correct location:

# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf

Next, we need to set things up so that the right services are started when you reboot the machine. To do that, issue the following commands:

# systemctl mask smbd nmbd winbind
# systemctl disable smbd nmbd winbind
# systemctl stop smbd nmbd winbind
# systemctl unmask samba-ad-dc
# vim /etc/systemd/system/samba-ad-dc.service
[Unit]
Description=Samba Active Directory Domain Controller
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/sbin/samba -D
PIDFile=/run/samba/samba.pid
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl enable samba-ad-dc
# systemctl start samba-ad-dc

Now go back and update the /etc/resolv.conf file to use the new Samba-supplied DNS service:

# vim /etc/resolv.conf
nameserver 192.168.1.2
search ad.example.com

This is probably a good time to reboot your machine. When you do so, don’t forget to check that /etc/resolv.conf hasn’t been messed with by Ubuntu. If it has, double-check the work you did above and keep trying reboots until it sticks.

Now we need to create the reverse zone for DNS:

# samba-tool dns zonecreate 192.168.1.2 168.192.in-addr.arpa -U Administrator
# samba-tool dns add 192.168.1.2 168.192.in-addr.arpa 2.1 PTR DC1.ad.example.com -U Administrator

If you have multiple NICs in your AD server, you will need to repeat this process for their networks. At this point, double-check that the DNS responder is coming back with what it needs to in order to serve the black magic of the Active Directory clients:

# nslookup DC1.ad.example.com
Server:        192.168.1.2
Address:       192.168.1.2#53

Name:    DC1.ad.example.com
Address: 192.168.1.2

# nslookup 192.168.1.2
2.1.168.192.in-addr.arpa        name = DC1.ad.example.com

# host -t SRV _ldap._tcp.ad.example.com
_ldap._tcp.ad.example.com has SRV record 0 100 389 dc1.ad.example.com
# host -t SRV _kerberos._udp.ad.example.com
_kerberos._udp.ad.example.com has SRV record 0 100 88 dc1.ad.example.com
# host -t A dc1.ad.example.com
dc1.ad.example.com has address 192.168.1.2

If you have multiple NICs in your AD server, you might want to double-check the DNS A records that are returned are reachable from the networks your clients typically use. Since I have a “home” network and a “secure” network, I can manage DNS and DHCP on my secure network so I tend to make sure that my domain controller hostname resolves to an IP address on the secure network. The Windows DHCP admin tools are pretty handy for checking on this and making changes.

Verify that the Samba service has file serving running correctly by listing all of the shares from this server as an anonymous user:

# smbclient -L localhost -N

You should see sysvol, netlogon and IPC$ listed. Any error about SMB1 being disabled is actually a good thing. Validate that a user can successfully log in:

# smbclient //localhost/netlogon -UAdministrator -c 'ls'

You should see a listing of the netlogon share directory which should be empty. Now check that you can successfully authenticate against Kerberos:

# kinit administrator
# klist

You should see a message about when your administrator password will expire if you are successfully authenticated by Kerberos. The klist command should show the ticket that was generated by you logging in as Administrator.

If you look at the documentation in the Samba wiki, you’ll see that ntp seems to be the preferred service over chrony or openntpd. If you look at the documentation for chrony (which everyone seems to use), you’ll get a different story. However, when I used chrony, I kept getting NTP errors on my Windows clients, so I’m using ntp in this post.

# apt install ntp
# samba -b | grep 'NTP'
    NTP_SIGND_SOCKET_DIR: /var/lib/samba/ntp_signd
# chown root:ntp /var/lib/samba/ntp_signd/
# chmod 750 /var/lib/samba/ntp_signd/
# vim /etc/ntp.conf
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
broadcast 192.168.1.255
disable auth
broadcastclient
# systemctl restart ntp

To be clear, the lines I’m showing after editing the ntp.conf file are lines that you ADD to the file. Also, if you have more than one NIC in the server, you’ll need to add a second restrict line and a second broadcast line for each additional network.
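
For example, if the server also sat on a hypothetical 192.168.0.0/24 network, the additional lines would look like this:

restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap
broadcast 192.168.0.255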

Now, let’s test that everything is working by enrolling a Windows 10 machine into the domain. Ensure first that you are on the right network and just for safety’s sake, do a reboot so you pick up the DNS server, etc. I have modified the DHCP server on my network to pass the correct information that a client needs as follows (from /etc/dhcpd.conf in OpenBSD):

option domain-name "ad.example.com";
option domain-name-servers 192.168.1.2;
option ntp-servers 192.168.1.2;

Microsoft has done a bang-up job of hiding the domain-join setting in the UI compared to where it has been for literally decades (“get off my lawn!!”). I prefer the old-fashioned way, so I used Windows key + R to run the following and get the old UI I’m most comfortable with:

sysdm.cpl

Press the “Change” button and then select “Domain” and enter “ad.example.com” as the name of your domain. That should prompt you for your admin credentials. I typically use AD\administrator as my userid just to be safe. In a matter of seconds, you should be welcomed to the domain.

For safety’s sake, I recommend clearing out your application and system event logs on that machine, rebooting and logging in as your domain admin. Once that’s done, examine the event viewer to ensure that you aren’t seeing any errors that might indicate something isn’t configured correctly on the server. Remember to click the “other user” button on the Windows 10 login screen and use the AD\Administrator to tell Windows which domain you want to log into.

There is a warning (DNS Client Events, Event ID: 8020) that I see in the System event log. This appears to be a case where the Windows machine tries to re-register itself with Samba’s dynamic DNS using exactly the same information that is already registered for it, and Samba returns an error. Since you can still resolve the client machine from the server, the registration clearly worked the first time, so I think this warning can be safely ignored for now.

For ease of maintenance you might want to install the “Windows RSAT Tools” on your Windows machine that give you a good UI for managing all of the fun stuff that Active Directory brings to the table. They are a free download.
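
On recent Windows 10 builds they can also be added from an elevated command prompt with DISM; the capability name below is an assumption based on current builds, so list what is available with “DISM /Online /Get-Capabilities” if it doesn’t match:

C:\WINDOWS\system32> DISM /Online /Add-Capability /CapabilityName:Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0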

I really do NOT recommend using your domain controller as a file server. To set that up on another machine, please see the next section.

Samba File Server in a Domain

Thankfully, the wonderful documentation on the Samba WIKI has an entire entry dedicated to setting up Samba as a domain member. First things first, we need to configure the network settings on our file server to use the Active Directory server as the DNS server.

As I did with the domain controller above, I used NetworkManager from the Ubuntu desktop to set the static IPs, the gateway and the netmasks and then modified /etc/hosts as follows:

127.0.0.1    localhost
192.168.1.3  NAS.ad.example.com    NAS

It is important to note that Ubuntu will put in an additional 127.0.0.1 line for your host and you need to (apparently, per the documentation) remove that. I then modified my /etc/hostname file as follows:

NAS.ad.example.com

We need to permanently change /etc/resolv.conf and not have Ubuntu overwrite it on the next boot. To do that, we have to:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf
nameserver 192.168.1.2
search ad.example.com

After a quick reboot and verification that the resolv.conf changes survived, we need to install some packages:

# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user smbclient

Now we need to now configure Kerberos and Samba. First, if there are files currently at /etc/krb5.conf and/or /etc/samba/smb.conf, remove them. Create a new /etc/krb5.conf file with the following contents:

[libdefaults]
    default_realm = AD.EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true

Next, it will be necessary to synchronize time to the domain controller. Since this server won’t be broadcasting network time to client machines (i.e. it isn’t a domain controller), I’ll be setting it up with chrony which is built into Ubuntu.

# apt install chrony ntpdate
# ntpdate 192.168.1.2
# vim /etc/chrony/chrony.conf
server 192.168.1.2 minpoll 0 maxpoll 5 maxdelay .05
# systemctl enable chrony
# systemctl start chrony

That line under the vim command should be the only line in the file. To validate that everything is working, a call to systemctl status chrony should show that it is active and running. Next, we need to set up the /etc/samba/smb.conf file:

[global]        
    workgroup = AD        
    security = ADS        
    realm = AD.EXAMPLE.COM       
    netbios name = NAS
    domain master = no
    local master = no
    preferred master = no

    idmap config * : backend = tdb
    idmap config * : range = 50000-100000
 
    vfs objects = acl_xattr        
    map acl inherit = Yes        
    store dos attributes = Yes

    winbind use default domain = true
    winbind offline logon = false
    winbind nss info = rfc2307
    winbind refresh tickets = Yes
    winbind enum users = Yes
    winbind enum groups = Yes

Now we will need to join the domain:

# kinit administrator
# samba-tool domain join AD -U AD\\Administrator
# net ads join -U AD\\Administrator

You’ll probably get a DNS error when you join the domain. Regardless, add an A record and a PTR record for the server into the DNS as follows:

# samba-tool dns add 192.168.1.2 168.192.in-addr.arpa 3.1 PTR NAS.ad.example.com -U Administrator
# samba-tool dns add 192.168.1.2 ad.example.com NAS A 192.168.1.3 -U Administrator

If you have multiple NICs in your file server, make sure you repeat the process for the IP address ranges assigned to them. Now, add the “winbind” parameter as follows to /etc/nsswitch.conf:

# vim /etc/nsswitch.conf
passwd: files winbind systemd
group: files winbind systemd
shadow: files winbind

Next, we will need to enable and start and restart some services:

# systemctl enable smbd nmbd winbind
# systemctl start smbd nmbd winbind
# pam-auth-update

Before proceeding any further, you should probably reboot the machine. Now for some tests to make sure that everything is working ok:

# wbinfo --ping-dc
checking the NETLOGON for domain[AD] dc connection to "dc1.ad.example.com" succeeded.
# wbinfo -g
... list of domain groups ...
# wbinfo -u
... list of domain users ...
# getent group
... list of Linux groups and Windows groups...
# getent passwd
... list of Linux users and Windows users...

Windows Home Directories

A common configuration done by Windows Domain administrators is to create a default “Home” drive (typically mapped to the H: drive letter) for users. To do this, we will want to first set up a file share on the server. The goal will be to set up a mapped “HOME” directory for each domain user. We’ll start off by adding the following to the /etc/samba/smb.conf file:

[users]
    comment = Home directories
    path = /path/to/folder
    read only = no
    acl_xattr:ignore system acls = yes

After issuing an “smbcontrol all reload-config” on the file server to reload the changes to the config file, you should now be able to see a share called \\nas\users. When you create the directory on the filesystem, issue the following commands:

# chown "Administrator":"Domain Users" /path/to/folder/
# chmod 0770 /path/to/folder/

It is important to grant the “SeDiskOperatorPrivilege” to the “Domain Admins” group as follows. This has to be done on the file server itself.

# net rpc rights grant "AD\\Domain Admins" SeDiskOperatorPrivilege -U "AD\administrator"

Finally, from “Active Directory Users and Computers”, select the user in the “Users” folder, right click and select “Properties”. After changing to the “Profile” tab, select the “Connect” radio button under “Home folder”, choose H: as the drive letter and put in \\nas\users\{user name} for the “To:” entry field. This should automatically create the directory and set the correct permissions on it.

Now log out of the domain and back in as the user account you modified above and you should automatically get an H: drive that maps to that folder on the file server.
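
A quick way to confirm the mapping from the client is to list the connected drives from a command prompt:

C:\> net use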

User Profiles

OK, so the cool kids on their Windows networks also have this thing called a “Roaming User Profile” that allows you to put their user profile on a file server and then they can move from one machine to another and simply access their stuff as if it was all the same machine. I wanted to see how Samba handled this and sure enough, I got a hit in the Samba wiki that indicated it was possible.

First things first, we need to create a share on our file server to hold the profiles, so I added this to my /etc/samba/smb.conf file:

[profiles]
    comment = Users profiles
    path = /path/to/profile/directory
    browseable = No
    read only = No
    csc policy = disable
    vfs objects = acl_xattr
    acl_xattr:ignore system acls = yes

After making that change, I need to create the directory to hold the profiles and set the UNIX ownership and permissions like I did with the home directories above:

# mkdir /path/to/profile/directory
# chown "AD\Administrator":"AD\Domain Users" /path/to/profile/directory
# chmod 0700 /path/to/profile/directory

After a quick “smbcontrol all reload-config” to pull the new changes in, we now have a share on the file server called “profiles” that will hold the resulting Windows user profiles. I used the “Active Directory Users & Computers” tool on my Windows machine (logged in as Administrator), opened the property dialog for my users, navigated to the “Profile” tab and entered the UNC name for the profile directory \\NAS\profiles\{user-name}. The key is to know that, depending on the version of Windows, the system will add a suffix (in my case “.v6”) to that directory name and it will initially be created empty. When you log out, it will actually copy the stuff into the directory and you should see the directories and files show up on your file server. It seems this is the consistent behavior. For example, saving a file into the “Documents” directory on the Windows machine isn’t propagated to the server’s file system until that user logs out.

It really was that easy!

Group Policy

Given the fact that I had, at this point a fully functional Active Directory infrastructure with network home directories, roaming user profiles and all of it was running on Open Source platforms, I thought I’d really try to push it over the edge and dip my toe in the water around Group Policy. Group Policy is some magic stuff based on LDAP that, in the Windows world, allows you to automatically configure an end-user’s workstation. I found documentation in the Samba wiki that indicated it was possible to make this work so I thought I’d give it a try and see what I needed to do.

It looked like the first thing I needed to do was load the Samba “ADMX” templates into the AD domain controller. To do that, I used the following command:

# samba-tool gpo admxload -H dc1.ad.example.com -U Administrator

Sure enough, logging into my Windows machine as a domain admin, I was able to see that the command had indeed injected the Samba files into the Sysvol:

H:\> dir \\DC1\SysVol\ad.example.com\Policies\PolicyDefinitions

That command above should show you the en-US directory and the samba.admx file. Now we need to download the Microsoft ADMX templates and install them:

# apt install msitools
# cd /tmp
# wget 'https://download.microsoft.com/download/3/0/6/30680643-987a-450c-b906-a455fff4aee8/Administrative%20Templates%20(.admx)%20for%20Windows%2010%20October%202020%20Update.msi'
# msiextract Administrative\ Templates\ \(.admx\)\ for\ Windows\ 10\ October\ 2020\ Update.msi
# samba-tool gpo admxload -U Administrator --admx-dir=Program\ Files/Microsoft\ Group\ Policy/Windows\ 10\ October\ 2020\ Update\ \(20H2\)/PolicyDefinitions/

The last line will take a few seconds as it processes the files and loads them into the SysVol. You can again confirm the presence of the new policies using the “dir” command above from your Windows machine. At this point, you have the group policies set up and installed into your environment and should be able to manipulate them using the “Group Policy Management Console” on your Windows workstation.

Conclusion

While this is probably one of my stranger, and more technical posts, I think this is a cool example of how you can totally eliminate paid software from your server infrastructure and yet still have the full functionality of something like Active Directory in your tool belt.

Posted in Uncategorized | 5 Comments

Thinkpad T14 (AMD) Gen 2 – A Brave New World!

As long-time readers of this blog are aware, I’m a bit of a Thinkpad fanatic. I fell in love with these durable machines when I was working for IBM back in the late 90’s and accidentally had one fall out of my bag, bounce down the jetway stairs and hit the runway hard – amazingly enough it had a few scuffs but zero damage! After the purchase of the brand by Lenovo, I was a bit worried, but they continue to crank out (at least in the Thinkpad T and X model lines) high-quality, powerful machines.

Thinkpad T480 – RIP

I ran into a nasty problem with my Thinkpad T480 where the software on the machine actually physically damaged the hardware. I know! I thought that was impossible too (other than the 70’s PET machine that had a software-controlled relay on the motherboard that you could trigger continuously until it burned out) but nope – the problem is real.

Essentially, the Thunderbolt I/O port on the machine is driven by firmware running out of an NVRAM chip on the motherboard that can be software-updated as new firmware comes out. As with any NVRAM chip, there are a finite number of write-cycles before the chip dies, but the number of times you will update your firmware is pretty small so it works out well.

Unfortunately, Lenovo pushed out a firmware update that wrote continuously to the NVRAM chip and if you didn’t patch fast enough (they did release an urgent/critical update), then the write-cycles would be exceeded, the chip would fail and the bring-up code would not detect the presence of the bus and thus you had no more Thunderbolt on the laptop. Well, I didn’t update fast enough so “boom” – it is now a Thunderbolt-less laptop.

The New T14 (AMD) Gen 2

Well, enter the need for a new laptop. I decided to jump ship from the Intel train and try life out on the “other side” by ordering a Thinkpad T14 (AMD) Gen 2 machine with 16gb of soldered RAM (there is a slot that I will be populating today that can take it up to 48gb max – I’m going with 32gb total by installing an $80 16gb DIMM) and the Ryzen Pro 5650U that has 6 cores and 12 threads of execution. The screen is a 1920×1080 400 nit panel and looks really nice.

When the laptop showed up, I booted the OpenBSD installer from 6.9-current, grabbed a dmesg and discovered that I had lost the Lenovo lottery: there was a Realtek WiFi card in the machine. The good news was that I had previously upgraded my T480 to an Intel AX200, so I pulled that card out of the T480 and used it to replace the Realtek card in the T14. Worked like a charm.

The Ethernet interface on this machine is a bit odd. It’s a Realtek chipset as well, but it shows up as two interfaces (re0 and re1). The deal is that re0 is the interface that is exposed when the machine is plugged into a side-connecting docking station and re1 is the interface that is connected to the built-in Ethernet port. The device driver code that is in 6.9-current as of this writing works just fine with it, however, so I’m happy.

Now for the bad news. Every Thinkpad I have owned for the last decade has let me plug an m.2 2242 SATA drive into the WWAN slot, and it works great. I assumed that would be the case with this machine. While I had the bottom off to replace the WiFi card, I slipped the 1TB drive from the WWAN slot of my T480 into the WWAN slot of the T14 and booted up. I was immediately presented with an error message stating, effectively, that the WWAN slot was white-listed by Lenovo and would only accept “approved” network cards. I was beyond frustrated by this.

Given that I want to get this machine into my production workflow, I decided that I’d slog along for the time being by putting in a larger m.2 2280 NVMe drive, installing rEFInd to allow me to boot multiple partitions from a single drive, and then cloning the 512GB drive that came in the machine onto the 1TB drive out of the T480. The remaining space on the new drive will then hold an encrypted partition for my OpenBSD install.

Installing rEFInd

I followed the instructions from the rEFInd site on how to manually install under Windows 10 and the steps I followed included downloading and unpacking the ZIP file and then running the following commands from an administrative command prompt:

C:\Users\xxxx\Downloads\refind-bin-0.13.2\> mountvol R: /s
C:\Users\xxxx\Downloads\refind-bin-0.13.2\> xcopy /E refind R:\EFI\refind\
C:\Users\xxxx\Downloads\refind-bin-0.13.2\> r:
R:\> cd \EFI\refind
R:\EFI\refind\> del /s drivers_aa64
R:\EFI\refind\> del /s drivers_ia32
R:\EFI\refind\> del /s tools_aa64
R:\EFI\refind\> del /s tools_ia32
R:\EFI\refind\> del refind_aa64.efi
R:\EFI\refind\> del refind_x64.efi
R:\EFI\refind\> rmdir drivers_aa64
R:\EFI\refind\> rmdir drivers_ia32
R:\EFI\refind\> rmdir tools_aa64
R:\EFI\refind\> rmdir tools_ia32
R:\EFI\refind\> rename refind.conf-sample refind.conf
R:\EFI\refind\> mkdir images
R:\EFI\refind\> copy C:\Users\xxx\Pictures\mtstmichel.jpg images
R:\EFI\refind\> bcdedit /set "{bootmgr}" path \EFI\refind\refind_x64.efi

That next-to-last line is there because I wanted to have a picture of my “happy place” (Mont Saint-Michel off the northern coast of France) as the background for rEFInd. I edited the refind.conf file and added the following lines:

banner images\mtstmichel.jpg
banner_scale fillscreen

A quick reboot shows that rEFInd is installed correctly and has my customized background. Don’t be alarmed that the first time you boot up with rEFInd is slow, I think it is doing some scanning and processing and caching because the second and subsequent boots are faster.

Cloning the Drives

The process that I am going to follow, at a high level, is to first clone the contents of the 1TB 2280 NVMe drive in my T480 to a spare 256GB drive. I will then erase the 1TB drive and clone the contents of my T14’s drive to it (it’s only 512GB). I will then erase the 512GB drive and clone the 256GB drive back to it. Finally, for good operational security (OpSec) purposes, I’ll use the open source Windows program Eraser to erase the 256GB drive. At this point I should have a bootable T480 (with a fried Thunderbolt bus – grr…) on the 512GB drive, and a bootable T14 on the 1TB drive.

I’m using Clonezilla, an open source tool that I burn to a bootable USB drive to do the cloning. For hardware that I am using to accomplish all of this, first I use a Star Tech device that allows me to plug m.2 drives into a little box that then acts as a 2.5 inch SSD drive. I plug that into a Wavlink USB drive docking station that can hold either 3.5″ or 2.5″ drives.

Another piece of software that I use as part of this process is GPartEd Live – an open source tool that allows you to create a USB drive that boots into the GPartEd software (the GNU Partition Editor). This allows me to view the partition structure of one drive and create an analogous partition structure on another drive. The built-in Windows tools for this work (Disk Manager, for example) can create hidden partitions under the covers that can cause problems with this process. I prefer to use GPartEd to ensure that I can see and control everything that is going on.

Step One is to take the T480, boot it into Windows and connect the Wavlink device to it with the 256GB NVMe drive plugged into it via the StarTech adapter. While I’m using Eraser to wipe the 256GB drive, I also go into Windows settings and decrypt the Windows disk by turning off BitLocker for it. This may not be necessary, but it makes me feel more comfortable to do the cloning with unencrypted Windows drives because the key for the encryption is stored in the TPM device on the motherboard and I’m not sure whether the change in underlying hardware would muck that up. After the erase and decrypt finished, I shrank the partition using “Disk Management” on Windows to be smaller than the new physical disk. If you don’t do this, Clonezilla won’t allow you to clone from a larger partition to a smaller one.
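
If you prefer the command line for the BitLocker piece, it can be checked and turned off from an administrative prompt with something like:

C:\WINDOWS\system32> manage-bde -status C:
C:\WINDOWS\system32> manage-bde -off C: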

Next we will need to reboot the machine to GPartEd Live. For the destination drive, you will need to use the “Device” menu and create a new GPT partition table. Take a look at the source drive and make a note of the various partitions, their flags, and their sizes. On the destination drive, recreate that partition structure with the same flags and the same or slightly larger size. I generally bump up the size of the partition by just a bit in order to avoid getting into trouble with rounding the size for display on the screen. If you get it wrong, don’t worry, Clonezilla will yell at you and you’ll have to go back and do this over again. 🙂

When launching Clonezilla, since I have the high resolution display on the T480 (a mistake I’ll never make again, HiDPI is a PITA in everything but Windows) I had to use my cell phone to zoom in on the microscopic text and select the “use 800×600 with large fonts from RAM” option. With readable text, I then make sure that I’m choosing “device-device” from the first menu (not the default). Next, select “Beginner Mode” to reduce the complexity of the choices you’ll have to make. After that, you want to select “part_to_local_part” to clone from one partition on the source drive to the corresponding partition on the destination drive. Finally, select the source partition and the destination partition. I recommend you do the smaller partitions first and then let the main C: partition (the largest one) grind because it can take a long time to clone.

After cloning the T480 drive, I removed it from the machine and was ready to clone the T14’s drive to it. This is where I ran into a “keying” problem with m.2 drives. Some are “B” keyed, and some are “B+M” keyed. This refers to the number of cutouts where they plug into the slot. Well, it looks like the NVMe drives in both the T480 and the T14 don’t fit the StarTech adapter. After some juggling around I found an old 256GB drive that I was able to use to get the swap completed.

Creating the OpenBSD Partition

To do this, I will use “Disk Manager” on Windows and shrink the NTFS partition (if necessary) to make room for OpenBSD and then create a new partition on the drive that takes up the remaining space. If you check the “don’t assign a drive letter” box and the “don’t format the partition” box, you’ll get a raw, unformatted partition that takes up the remaining space on the disk.

That new raw partition will be changed in OpenBSD to be the home of the encrypted slice on which I’ll be installing the operating system. After creating that partition, it’s time to download the 6.9-current .IMG file for the latest snapshot and use Rufus on Windows to create the USB drive and reboot from it.

Once in the OpenBSD installer, drop immediately to the shell and convert that new raw partition into an OpenBSD partition. That will be where we put the encrypted slice that we will be installing to. To do this, run the following commands:

# cd /dev
# sh ./MAKEDEV sd0
# fdisk -E sd0

sd0: 1> print
sd0: 1> edit 4
Partition id: A6
Partition offset <ENTER>
Partition size <ENTER>
Partition name: OpenBSD
sd0*: 1> write
sd0: 1> exit

The print command above should show you the 4 partitions on your drive (the EFI partition, the Windows partition, the WindowsRecovery partition and your fourth partition that will hold OpenBSD that you created above).

Now that you have a partition for OpenBSD, you’ll want to copy the EFI bootloader over to your EFI drive. You’ll later make a configuration change in rEFInd to not only display it on the screen, but also show a cool OpenBSD “Puffy” logo for it!

# cd /dev
# sh ./MAKEDEV sd1
# mount /dev/sd1i /mnt
# mkdir /mnt2
# mount /dev/sd0i /mnt2
# mkdir /mnt2/EFI/OpenBSD
# cp /mnt/efi/boot/* /mnt2/EFI/OpenBSD
# umount /mnt
# umount /mnt2

Now that you have an OpenBSD EFI bootloader in its own directory on the EFI partition, you’ll want to create the encrypted slice for the operating system install:

# disklabel -E sd0

sd0> a a
sd0> offset: <ENTER>
sd0> size: <ENTER>
sd0> FS type: RAID
sd0*> w
sd0> q

# bioctl -c C -l sd0a softraid0
New passphrase: <your favorite passphrase>
Re-type passphrase: <your favorite passphrase>

Pay attention to the virtual device name that bioctl spits out for your new encrypted “drive”. That’s what you will tell the OpenBSD installer to use. To re-enter the installer, type “exit” at the command prompt. Do your install of the operating system as you normally do. When you reboot, go into Windows.

First, download an icon for OpenBSD from here (or pick your favorite elsewhere). Next, bring up an administrative command prompt and use the following commands to mount the EFI partition and add the icon for OpenBSD:

C:\Windows\system32> mountvol R: /s
C:\Windows\system32> r:
R:> cd \EFI\refind
R:\EFI\refind> copy "C:\Users\<YOUR USER>\Download\495_openbsd_icon.png" icons\os_openbsd.png

Once the icon is in place, reboot. rEFInd is smart enough to find your OpenBSD partition and use the icon you just added. When you select it from the rEFInd UI, you should be prompted for your OpenBSD encrypted disk passphrase and be able to boot for the first time. I ran into a weird thing with my snapshot where it couldn’t download the firmware, so I formatted a USB thumb drive as FAT32, downloaded the amdgpu, iwx, uvideo and vmm firmware from the site, mounted the drive on my OpenBSD system and ran fw_update -p /mnt to get the firmware.
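
Roughly, that firmware side-load looked like this (the USB stick’s device name and partition letter will vary; check dmesg):

# mount -t msdos /dev/sd2i /mnt
# fw_update -p /mnt
# umount /mnt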

At this point, you should be able to reboot and select either Windows or OpenBSD from your rEFInd interface. My hope is that Lenovo will remove this absurd white-listing of the WWAN devices from their UEFI/BIOS code and I’ll be able to plug drives into it again; however, if (and this is more likely) they do not, I’ll at some point buy a 2TB m.2 NVMe drive for this machine, repeat this process and be able to add Linux to it.

I hope folks find this guide helpful.

Posted in Uncategorized | 2 Comments

OpenBSD 6.9 – Help with the “Failed to install bootblocks” issue

Hi everyone!

I purposely chose a non-catchy title so that it would be more easily found by the search engines as this one has been a challenge for me in my last several laptop installs and I always manage to fix it after fiddling around for a while. This time around, I thought I’d actually produce a decent (hopefully!) write-up on just how I go about addressing the problem from scratch. This will provide two benefits: 1) I’ll have a nice step by step the next time I install my machine <grin>; and 2) It might help some other intrepid soul who is running into the same issue!

While the FAQ is always the best place to go for the most up to date steps on formatting and installing a system, I tend to run a “weird” setup that seems to confound the installer and most of the easily-accessible information. What I normally do in my Thinkpad laptops is install a second (or third) SSD or NVMe drive and then dedicate the entire disk to a given operating system. For example, if I’m running Windows 10 and OpenBSD 6.9 on my Thinkpad T480, I install Windows on the first drive (so that if my machine falls into evil hands and they power it on, it will just default boot into Windows and they might not even suspect OpenBSD is on the machine) and then I install OpenBSD onto the second drive. I then use the UEFI or BIOS boot menu to choose the OpenBSD drive to boot from.

Install Windows

I started off by installing Windows from a USB key to the primary drive in the laptop. As is my custom, after install, I put on all of the drivers and used the group policy editor to increase the BitLocker encryption from 128-bit AES to 256-bit AES. I also edited the registry to allow Outlook’s OST file to expand beyond the pitiful limit that it defaults to. After a reboot, I start the BitLocker encryption process and connect my email accounts.

If you are installing OpenBSD on a drive that has previously had something on it, it’s always a good idea to erase that drive first. I use an open source tool for Windows called Eraser if I’m on Windows, or good old dd if I’m on Linux. Eraser’s UI is a bit weird: it requires that you create a task that you can “run manually”, select the disk to be erased (in my case “Hard disk 1”), select an erasure method (I use Pseudorandom 1-pass), and then run the task manually.

I then download the install69.img file from my favorite mirror (https://openbsd.cs.toronto.edu/pub/OpenBSD) and use Rufus to transfer it to a bootable USB drive. I reboot, hit <F12> to get a boot menu from the UEFI, select my USB drive and then boot into the OpenBSD installer.

Install OpenBSD

The first thing I do is look at my dmesg to see what devices my drives have been attached to:

# dmesg | grep -i sd

This shows (in my case) that my Windows drive is connected to sd0, my blank drive that I will put OpenBSD on is connected to sd1 and my USB installer device is connected to sd2. Next, I need to create the necessary /dev devices:

# cd /dev
# sh ./MAKEDEV sd1
# sh ./MAKEDEV sd2

If you do a quick ls, you should see that the MAKEDEV script created the necessary device files and you should be good to proceed to the next step. Next, we want to initialize the sd1 drive to a GPT partitioning scheme and create the initial EFI partition on the disk. Fun fact, the EFI partition (while its own partition type) is formatted using FAT32 so thanks Windows 95! Here’s how you do this:

# fdisk -iy -g -b 960 sd1
# newfs_msdos /dev/rsd1i

Note my use of the /dev/r device (the raw device) and not /dev/sd1i (the normal device) in that second command. I'm not entirely sure it's necessary, but the nice Reddit post that sparked me to think about how to do this did it that way, so why not, eh? If you get a weird error message when trying to run newfs_msdos, it is likely that you have some previous partitioning data on that drive, and it would be a good idea to completely erase it (see above).
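
If you'd rather not boot back into Windows or Linux just to clear it, a quick-and-dirty alternative is to zero out the start of the disk from the installer shell so the stale partition data is gone. This is only a sketch and it is destructive to sd1, so make sure the device matches your dmesg:

# dd if=/dev/zero of=/dev/rsd1c bs=1m count=64

That clears the MBR and primary GPT at the start of the disk; dropping the count and letting it run over the whole disk is the thorough option.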

Now, we need to mount the new partition, create the necessary directory structure that UEFI looks for and put the UEFI loader file from our installer USB drive into that directory:

# mount /dev/sd2i /mnt
# mount /dev/sd1i /mnt2
# mkdir -p /mnt2/efi/boot
# cp /mnt/efi/boot/* /mnt2/efi/boot
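
If you want to sanity-check that step, listing the target directory should show the loader files that were copied over from the install media (named something like bootx64.efi and bootia32.efi on amd64, if memory serves):

# ls /mnt2/efi/boot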

Now, we need to create the slice in the OpenBSD partition for the encrypted filesystem (you can skip this if you don't want an encrypted drive):

# disklabel -E sd1

a a [ENTER]
offset: [ENTER to accept the default]
size: * [ENTER]
FS type: RAID [ENTER]
w [ENTER]
q [ENTER]

At this point, we have a slice set up as type “RAID” so we need to use the bioctl program to set up the encryption information along with the drive’s encryption password:

# bioctl -c C -l /dev/sd1a softraid0

You should see in the response to the above command the name of the new "virtual" encrypted disk. That is the disk you will be installing OpenBSD onto. When you reach the question in the installation program about "Which disk is the root disk?", enter that value (in my case, sd4). When it asks whether you want to "Use (W)hole disk MBR, whole disk (G)PT or (E)dit?", pick the MBR option (I know, this is counterintuitive, but trust me here).
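
If the bioctl output scrolled by before you caught the name, you can list the disks the kernel currently knows about; the new softraid volume will be the extra sd device that wasn't there before (sd4 in my case):

# sysctl hw.disknames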

After the installer reboots the system, I press the [F12] key to get the boot menu (your key might be different if you aren't running a Thinkpad) and select the disk I installed OpenBSD on. I am immediately presented with the passphrase prompt to decrypt the encrypted "virtual" disk and, upon entering it, I get the boot prompt. Everything proceeds as normal from that point forward and I am presented with the login prompt for my new system.

Updated Laptop Setup

If you are still with me, here is how I set up my OpenBSD desktop. (I get criticized slightly for making it "too heavy" with "too many packages", but I have to use Ubuntu as well for what I do, and I like the UI to be as consistent across the two operating systems as I can make it.) Therefore, I install Gnome 3 along with some Gnome tweaks and extensions that give me the same theme and dock as Ubuntu.

To start out, I log in as root and enable my user account:

# echo "permit persist keepenv [my_non_root_user] as root" > /etc/doas.conf
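
Before logging out, it doesn't hurt to have doas parse the file and confirm the syntax is valid (a broken doas.conf is an annoying thing to discover after you've dropped root):

# doas -C /etc/doas.conf && echo "config OK"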

At this point, I log out and back in as my unprivileged user account and work from there using the doas command to escalate privileges when needed. I start out by updating my system:

$ doas syspatch

Now, set up power management (this is a laptop):

$ doas rcctl enable apmd
$ doas rcctl set apmd flags -A
$ doas rcctl start apmd

I also add the following line to /etc/rc.conf.local (I haven’t cracked the code on how to do this with rcctl yet):

ntpd_flags=""
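
For what it's worth, I suspect the rcctl equivalent is simply:

$ doas rcctl enable ntpd

but I haven't verified what it writes on a system where ntpd is already on by default, so I keep the manual line for now.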

Now I need to make sure that I have the right level of resources available to my non-privileged user for tools like nextcloudclient (which opens a TON of files during its synchronization process). To do this, I add myself to the "staff" and "operator" groups and switch my login class to "staff":

$ doas usermod -G staff MY_USERNAME
$ doas usermod -G operator MY_USERNAME
$ doas usermod -L staff MY_USERNAME
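
A quick way to confirm the group membership and login class took effect (groups and userinfo are both in base):

$ groups MY_USERNAME
$ doas userinfo MY_USERNAME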

I then make the following changes to the “staff” section in /etc/login.conf:

...
staff:\
  :datasize-cur=4096M:\
  :datasize-max=infinity:\
  :maxproc-max=512:\
  :maxproc-cur=256:\
  :openfiles-max=102400:\
  :openfiles-cur=102400:

I then have to add a line to /etc/sysctl.conf to complete the work of allowing more open files on this system:

kern.maxfiles=102400
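
That file is only read at boot, so if you want the higher limit before the reboot that's coming up anyway, sysctl can set it on the fly:

$ doas sysctl kern.maxfiles=102400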

Now that I have modified all of this stuff and patched the system, it’s a good time to reboot.

Next, I add all of the packages I can’t live without (I know it seems like a small list, but they pull in a lot of others):

$ doas pkg_add gnome gnome-tweaks gnome-extras firefox chromium libreoffice nextcloudclient keepassxc \
   aisleriot evolution evolution-ews tor-browser shotwell gimp vim colorls cups reposync

A few service changes are needed to boot into Gnome 3 (rcctl writes these into /etc/rc.conf.local):

$ doas rcctl disable xenodm
$ doas rcctl enable multicast messagebus avahi_daemon gdm cupsd

To avoid taking a kernel panic in my use case (I have multiple monitors connected through a Lenovo Thunderbolt/USB-C dock), I have to manually switch X to the intel driver in my /etc/X11/xorg.conf by adding the following section:

Section "Device"
  Identifier "Intel Graphics"
  Driver "intel"
EndSection
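
Once X has started, you can confirm which driver it actually loaded by grepping the Xorg log (standard log location shown; adjust if yours ends up elsewhere):

$ grep -i intel /var/log/Xorg.0.log | head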

At this point, it’s time to reboot and go into GUI land. If you run into a situation where you have a monitor mirrored and no way to turn that feature off, I have found that turning all of the monitors off and back on generally fixes things. Once I have everything the way I would like it, I then download the yaru-remix-complete theme and install it manually by doing this:

$ cd ~
$ mkdir .themes
$ cd .themes
$ mv ~/Downloads/yaru-remix-complete-20.04.tar.xz .
$ unxz yaru-remix-complete-20.04.tar.xz
$ tar xf yaru-remix-complete-20.04.tar
$ mv themes/* .
$ rmdir themes
$ doas mv icons/* /usr/local/share/icons
$ rmdir icons
$ doas mv wallpaper/* /usr/local/share/backgrounds/gnome
$ rmdir wallpaper
$ rm yaru-remix-complete-20.04.tar

Now launch gnome-tweaks and, from the "Extensions" tab, turn on "user-themes". Restart gnome-tweaks, go to the "Appearance" tab and select "Yaru-remix" for applications, icons, and shell. On the "Top Bar" tab, enable "Battery Percentage" and "Weekday". In the "Window Titlebars" tab, enable "Maximize" and "Minimize".

Next, we want to put the wonderful Dash-To-Dock extension into the environment. To download it, go to https://extensions.gnome.org/extension/307/dash-to-dock/ and pick the shell version and extension version that match your install of Gnome shell. You will have to install it manually because the Gnome shell extension browser integration doesn't appear to be enabled for OpenBSD:

$ cd ~/Downloads
$ unzip dash-to-dockmicxgx.gmail.com.v67.shell-extension.zip
$ cat metadata.json

The value for “uuid” in that file is what you want to use in the next step:

$ mkdir -p ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ cd ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ unzip ~/Downloads/dash-to-dockmicxgx.gmail.com.v67.shell-extension.zip
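
If your Gnome install includes the gnome-extensions command-line tool (it ships with recent gnome-shell versions), you should also be able to enable the extension from a terminal after logging back in, using the uuid you pulled from metadata.json:

$ gnome-extensions enable dash-to-dock@micxgx.gmail.com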

At this point, reboot to pick up the changes you’ve made, log in and launch gnome-tweaks again. On the “Extensions” tab, enable dash to dock. From the settings gear icon, select “extend to edge” and “show on all monitors” and you should have a very serviceable dock that is quite similar to the one in Ubuntu.

I then switch the terminal profile to "White on Black" with a 16-point font for a better look, and pin my favorite apps to the dock. Now for some terminal-level tweaks. I typically edit my ~/.profile file and add a couple of things:

export PS1="\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$ "
export ENV=$HOME/.kshrc
export CVSROOT=/home/cvs

I then edit the ~/.kshrc file to add some aliases:

alias ls="colorls -G"
alias vi="vim"

A couple of other changes I typically make include turning off suspend when I’m plugged in (Settings | Power | Automatic Suspend), setting Firefox as my default browser (Settings | Default Applications), and setting my Time Format to “AM/PM” instead of “24-hour” (Settings | Date & Time).

I also take a moment to switch to “View -> User Interface -> Tabbed” in the Write, Calc, and Present applications in LibreOffice. This gives an interface reminiscent of the one in Microsoft Office – which I find helpful in terms of standardizing my workflow across operating systems.

After installing the appropriate browser security plugins and making the configuration changes recommended by my favorite site, https://privacytools.io, it's time to set up CVS on my system for development purposes. To do this, I always double-check the AnonCVS link in the OpenBSD website's left navigation panel and follow the steps to:

  1. Pre-load the source tree (for src, sys, ports and xenocara)
  2. Follow the instructions to give your non-root user write access to the src, ports and xenocara directories
  3. Mirror the repository with reposync (Note: I have had the best luck using anoncvs.comstyle.com as my mirror; an example one-off run is shown below)
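
For reference, once the cvs user and /home/cvs exist per those instructions, the initial mirror is just a one-off reposync run as that user, along the lines of the cron job below (the mirror and paths here are the ones I use; adjust to taste):

$ doas su -m cvs -c "reposync rsync://anoncvs.comstyle.com/cvs /home/cvs"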

I then typically add a root crontab entry to keep things in sync:

$ doas crontab -e

...
0    */4    *    *    *    -n su -m cvs -c "reposync rsync://anoncvs.comstyle.com/cvs /home/cvs"

After syncing up my NextCloud data and my email data, I now have what I consider to be a secure, fully-functional OpenBSD laptop, configured the way I like it.
