Fiber + Static IP = Self-Hosting Glory!

Recently, a new Internet Service Provider (ISP) became available in my area. No longer confined to a choice between the cable TV company and the telephone company to supply the bits to my house, I now had the option of true gigabit fiber. Needless to say, I had some questions.

The first question was, “How difficult is it to get a static IP address?” I wanted to know this because the cable TV company wanted you to switch from a residential service to a business service, and then some sort of biological sampling, the signing over of your firstborn child and some “feats of strength” were required to get one of these magical things. For the new ISP, the answer was simple – send us an email asking for one and it will cost you $10 US per month to keep it. Wow. That was easy. On to the next question.

The next question was the tricky one. My cable TV provider purposely blocked certain ports such as port 25 (SMTP) and there was no way around that. I asked the new ISP if they blocked any ports and the answer was, “No. Why would we do that?” Again – amazing! At this point, I was ready to start moving all of my stuff from the cloud to my house. First things first, I had multiple HTTPS-secured websites to move. Uh oh. How do I serve up multiple websites with multiple different certificates from a single public IP address? Time to test my Google Fu.

Turns out, my OpenBSD 7.1 router could come to the rescue. By doing a reverse-proxy setup with Apache2 and SSL termination, I could accept HTTPS traffic for multiple sites on my single IP address, serve up the right certificate to the browser on the other side of the communication and then pass along the traffic in the clear (HTTP) on port 80 to various servers on my home network. Finding blog posts about this was easy. Making it work proved to be a bit tricky. I’m sure I could have done this with the OpenBSD httpd daemon (which has a much smaller attack surface than massive old Apache2) but that will be some research and investigation for another post (hopefully) in the future.

OpenBSD Reverse Proxy + SSL Termination

First off, something rare for this blog – a picture! This is the logical traffic flow for my setup:

SSL Termination / Reverse Proxy

To pull this off, I have to first install and enable Apache2 on my OpenBSD Octeon Router:

$ doas pkg_add apache2
$ doas rcctl disable httpd
$ doas rcctl enable apache2
$ doas rcctl start apache2

Next, I had to get HTTPS certificates for my various sites. While I would have loved to have done this using certbot, I couldn’t because a C language library needed by Python3 wasn’t available on the Octeon build (my router doesn’t use an Intel/AMD CPU). I then tried using acme-client but found the configuration too challenging to pull off right away. Perhaps another blog post in the future. Anyhow, I used a Linux box and ran certbot to generate each of my certificates. I then wrote a little bash script to use scp to copy them to the right folder on my OpenBSD router and scheduled it with cron. Kickin’ it old school!

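
As a sketch of that cron job (the schedule, paths and `router` host alias here are all hypothetical – adjust them to your certbot layout and your router's hostname):

```
# /etc/crontab entry on the Linux certbot box (sketch; paths are assumptions):
# every Sunday at 03:00, push the renewed certs to the OpenBSD router
0 3 * * 0 root scp -qr /etc/letsencrypt/live/www.example1.com/ router:/etc/ssl/private/
```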
After that, it was time to write the necessary configuration in /etc/apache2/httpd2.conf for each of the sites. As you can see, this assumes that the SSL certificates are in the /etc/ssl/private directory on my OpenBSD router:

<VirtualHost *:80>
    ServerName www.example1.com
    ServerAlias example1.com

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=permanent]

    ProxyPass "/" "http://192.168.1.101/"
    ProxyPassReverse "/" "http://192.168.1.101/"
    ProxyPreserveHost On
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example1.com
    ServerAlias example1.com

    ProxyPass "/" "http://192.168.1.101/"
    ProxyPassReverse "/" "http://192.168.1.101/"
    ProxyPreserveHost On

    SSLEngine On
    SSLCertificateFile /etc/ssl/private/www.example1.com/cert.pem
    SSLCertificateKeyFile /etc/ssl/private/www.example1.com/privkey.pem
    SSLCertificateChainFile /etc/ssl/private/www.example1.com/fullchain.pem

    SSLProxyEngine On

    <Location "/">
        SSLRequireSSL
        RequestHeader set X-Forwarded-Proto "https"
        RequestHeader set X-Forwarded-Ssl on
        RequestHeader set X-Url-Scheme https
        RequestHeader set X-Forwarded-Port "443"
    </Location>
</VirtualHost>

It is also necessary to further edit the /etc/apache2/httpd2.conf file to uncomment the “LoadModule” configuration lines for the modules used in the above configuration: ssl_module, proxy_module, proxy_connect_module, proxy_http_module, rewrite_module and headers_module (that last one is what provides the RequestHeader directives). After this, simply do an “rcctl restart apache2” and ensure that you were successful. If not, go back and double-check the configuration file.

Next, you will need to make sure that your pf firewall allows ports 80 and 443 through so that your site can be reached from outside the OpenBSD machine. In the rules below, $wan is a macro naming your external interface (e.g. wan="em0"). To do this, add the following to your /etc/pf.conf file:

# Allow serving of HTTP
pass in on { $wan } proto tcp from any to any port 80
# Allow serving of HTTPS
pass in on { $wan } proto tcp from any to any port 443

Reload the rules for pf using “$ doas pfctl -f /etc/pf.conf” and that step is done. You will also likely need to map ports 80 and 443 on your residential gateway (provided by your ISP) to send them to the OpenBSD router. At this point you should be able to hit your SSL-protected site from outside of your network. I always test this by turning off the wifi on my cell phone and using its browser on the telco’s network. As you add more “internal” websites, simply duplicate those two sections above and restart your Apache2 daemon on the OpenBSD router.
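
That duplication is mechanical, so here is a purely illustrative helper (a sketch, not part of my actual setup – the site name and backend IP are placeholders) that stamps out both vhost sections for a new site:

```shell
#!/bin/sh
# Illustrative helper: print the HTTP-redirect and HTTPS-terminating
# vhost pair for a new internal site. Arguments are placeholders.
# Usage: gen_vhosts <server-name> <backend-ip>
gen_vhosts() {
    name=$1
    backend=$2
    cat <<EOF
<VirtualHost *:80>
    ServerName ${name}

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=permanent]
</VirtualHost>

<VirtualHost *:443>
    ServerName ${name}

    ProxyPass "/" "http://${backend}/"
    ProxyPassReverse "/" "http://${backend}/"
    ProxyPreserveHost On

    SSLEngine On
    SSLCertificateFile /etc/ssl/private/${name}/cert.pem
    SSLCertificateKeyFile /etc/ssl/private/${name}/privkey.pem
    SSLCertificateChainFile /etc/ssl/private/${name}/fullchain.pem
</VirtualHost>
EOF
}

gen_vhosts www.example2.com 192.168.1.102
```

Append the output to /etc/apache2/httpd2.conf and restart Apache2.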

What About Email?

This one turned out to be very, very interesting. And by that I mean really stinking hard! The basics of it weren’t that bad. Here, I was able to use the wonderful “relayd” service that is native to OpenBSD to take all of the traffic I receive for the various email communication ports and fan them out to the appropriate back-end servers.

At first, I thought I would have to create a separate server for each email domain I wanted to host. Each of those servers would have to have its own SMTP server and each would have to have its own IMAP server. Also, if I wanted to have webmail for a particular domain, I would have to set it up to be an additional pair of entries in the http/https configuration in the previous section.

However, when I started configuring the DNS entries for all of this, I realized the error in my thinking. I only had a single public IP address so I needed the moral equivalent of that reverse proxy magic that I built using Apache2 on my OpenBSD router. How does one do this in the world of SMTP and IMAP? Well, it turns out there is a solution called Server Name Indication (or SNI) that is supported by the major SMTP and IMAP services in the Linux world. Therefore, I elected to host my email on Linux. Perhaps I will do a future blog post on how I migrated this to OpenBSD?
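
For flavor, here is a minimal sketch of what that SNI configuration looks like on the mail server side – the certificate paths and hostnames are assumptions based on a certbot layout, not a drop-in config:

```
# /etc/postfix/main.cf (Postfix 3.4+)
tls_server_sni_maps = hash:/etc/postfix/sni_maps

# /etc/postfix/sni_maps - key file first, then the certificate chain;
# run "postmap -F hash:/etc/postfix/sni_maps" after editing
mail.example1.com /etc/letsencrypt/live/mail.example1.com/privkey.pem /etc/letsencrypt/live/mail.example1.com/fullchain.pem
mail.example2.com /etc/letsencrypt/live/mail.example2.com/privkey.pem /etc/letsencrypt/live/mail.example2.com/fullchain.pem

# /etc/dovecot/conf.d/10-ssl.conf - per-name certificates
local_name mail.example1.com {
  ssl_cert = </etc/letsencrypt/live/mail.example1.com/fullchain.pem
  ssl_key = </etc/letsencrypt/live/mail.example1.com/privkey.pem
}
```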

First things first, I needed to set up the necessary DNS entries to ensure that not only will my mail get routed to me, but that it will be considered deliverable and not “spammy” in any way. These included the following entries for each domain:

A @ 1.2.3.4 (15 min TTL)
A * 1.2.3.4 (15 min TTL)
A mail.example1.com 1.2.3.4 (15 min TTL)
MX @ 10 mail (15 min TTL)
@ IN TXT "v=spf1 mx a -all"
_dmarc IN TXT "v=DMARC1;p=quarantine;rua=mailto:admin@example1.com"
mail._domainkey IN TXT "v=DKIM1; h=sha256; k=rsa; p=*"

For the above, “1.2.3.4” is your static IP address from your ISP; you obviously need to fill in your own domain name as well as the DKIM public key represented by the p=* section in the last entry. Perhaps I’ll do a full setup post in the future on this topic.

After setting up DNS, you will then need to configure your mail server. I chose postfix for the SMTP server as it supports SNI and dovecot for the IMAP server for the same reason. Once that was done and I could access things securely from within my private network, I then set up relayd on my OpenBSD router:

$ doas rcctl enable relayd
$ doas rcctl start relayd

I then wrote the following configuration file in /etc/relayd.conf to map the necessary ports to the mail server:

ext_addr="192.168.1.2"  # private IP address of OpenBSD Router
mail_host="192.168.1.201" # private IP address of mail server

relay smtp {
    listen on $ext_addr port 25
    forward to $mail_host port 25
}

relay submission_tls {
    listen on $ext_addr port 465
    forward to $mail_host port 465
}

relay submission_starttls {
    listen on $ext_addr port 587
    forward to $mail_host port 587
}

relay imaps {
    listen on $ext_addr port 993
    forward to $mail_host port 993
}

After restarting relayd, we need to add some entries to /etc/pf.conf to ensure that the traffic actually gets through the OpenBSD firewall and hits relayd:

# Allow servicing of SMTP
pass in on { $wan } proto tcp from any to any port 25
# Allow servicing of Submission TLS
pass in on { $wan } proto tcp from any to any port 465
# Allow servicing of Submission startTLS
pass in on { $wan } proto tcp from any to any port 587
# Allow servicing of IMAPS
pass in on { $wan } proto tcp from any to any port 993
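
The relayd relays and pf rules above come in matching port pairs, so as a purely illustrative aside (not something this setup requires), a throwaway script can generate both halves from a single port list:

```shell
#!/bin/sh
# Illustrative generator (not part of my actual setup): emit a relayd
# relay block and the matching pf rule for each mail port in one list.
# $ext_addr, $mail_host and $wan are macros defined in the real configs.
gen_mail_conf() {
    for entry in smtp:25 submission_tls:465 submission_starttls:587 imaps:993; do
        name=${entry%%:*}
        port=${entry##*:}
        printf 'relay %s {\n    listen on $ext_addr port %s\n    forward to $mail_host port %s\n}\n' \
            "$name" "$port" "$port"
        printf 'pass in on { $wan } proto tcp from any to any port %s\n\n' "$port"
    done
}

gen_mail_conf
```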

Now reload your pf rules with “$ doas pfctl -f /etc/pf.conf” and your machine should be relaying traffic. Finally, you will need to port map ports 25, 465, 587 and 993 on your residential gateway (provided to you by your ISP) over to the OpenBSD router, and traffic should start flowing through. Test this from outside of your network and verify that everything is working as expected.

Conclusion

Using these techniques, you should be able to host any number of SSL enabled websites and properly secured email domains on private servers within your home network. This means that you can save some money by not having to use virtual servers in the cloud and also increase the privacy of your services because you physically control the servers themselves.

Don’t forget to back up your data from these servers and store it somewhere offsite (preferably in two places) in an encrypted fashion. One thing the cloud does make simple: check a couple of boxes and you suddenly have snapshots of your virtual server stored offsite. You can never have too many backups.

Anyhow, I hope this was helpful for everyone!


The Most Metal Thing I’ve Done Today

As a middle-aged electric bass player, the “metal moments” of my life have been coming with less frequency than they did when I was younger. As a result, I tend to look for opportunities to be “metal” on any given day. To that end, I want to explore Canonical’s Metal as a Service or MaaS. Yeah, I know, I went for the cheap pun!

For those of you who aren’t familiar with this awesome piece of software, it essentially allows you to take a collection of physical servers on a private network and turn them into a cluster that lets you hand out physical or virtual servers to users and reclaim them when they are done. It does all of this using standard protocols that make life very, very easy. For example, the MaaS worker machines boot via DHCP/PXE from an image hosted on the controller, so the OS image doesn’t live on the physical disk of the machine, freeing its built-in storage up for use by the cluster. Additionally, the software supports things like Intel Active Management Technology (AMT) and its ability to allow remote power on / power off of machines that have this capability (along with many other, more enterprise-y technologies for similar control).

For the purpose of this post, I’m going to create a MaaS cluster out of six machines that I have dedicated to the purpose and will be using them to host various projects in my home lab. As long-time readers of this blog know, I am a fan of the Lenovo Thinkpad family of laptops so, as a result (like many in my cult), I have quite a stack of them lying around at any given time. For this build, I will be harnessing the power of my W520, W530 and W541 machines – all of which support AMT (and, more importantly, none of which I have CoreBoot-ed yet, so it is still enabled).

In addition, I have what I call my “Beast” – a tower machine with a Threadripper CPU that has 32 virtual cores – my NAS box (another AMD CPU machine with a bunch of spinning physical disks) and finally the machine I’m using for my controller. For that purpose, I dragged out an old Dell laptop I had lying around. It only has one NIC (a WiFi card that I used to attach to my home network) but I picked up a USB 3 gigabit Ethernet adapter that is well supported by Linux to run the private network.

The controller machine connects to my home network (10.0.0.0/24) as well as to a small 8-port managed gigabit switch that the five worker nodes are solely attached to (192.168.100.0/24). That’s the physical network layout. Pretty simple. I also took the time to put a proper AMT password on the machines that support that technology; the MaaS controller will use it to reboot them as needed. For the two AMD machines, I have to physically press the power button – at some point I might get an IP-enabled power strip that is supported by MaaS and use it to allow them to be “remote controlled” as well, but this works just fine for the time being. You might also want to check that virtualization is turned on in the BIOS of any machines you are using.

I’m using Ubuntu 22.04 Server for the controller machine and am running it pretty much vanilla except for some network configuration to allow it to serve as a packet router from the private network to my home network so that machines in the cluster can download packages as needed. I could work around that by hosting a mirror on my controller with the packages I needed (I think) but this was easier. For most of this post, I’m basing my configuration on the MaaS 3.2 documentation.

I downloaded the latest 22.04 server from the Ubuntu website and then used the “Startup Disk Creator” application that ships as part of the base OS on my laptop to create a bootable USB drive. After booting from the USB drive on the Dell laptop, the only configuration change I made to the default install was to enable an SSH server on the machine so I can remote in and do everything I need to from my laptop (except for pressing the power buttons a few times on the worker nodes).

Once the controller is installed and booted up, I have to make some network configuration changes to allow it to have a static IP address on both the home network side (WiFi) as well as on the private network that it will be managing. To do this, I edit the /etc/netplan/00-installer-config.yaml file to look like the following:

network:
  version: 2
  ethernets:
    enx000ec6306fb8:
      dhcp4: false
      optional: true
      addresses: [192.168.100.1/24]
  wifis:
    wlp1s0:
      dhcp4: false
      optional: true
      addresses: [10.0.0.5/24]
      nameservers:
        addresses: [8.8.8.8]
      routes:
        - to: default
          via: 10.0.0.1
      access-points:
        "my_ssid_name":
          password: "********"

After saving these changes, I ran “sudo netplan try” to test the configuration and ensure that everything was working the way I wanted it to. Once I was satisfied with the network, I updated the machine (“sudo apt update” and then “sudo apt upgrade”). After that, I rebooted the machine to pick up the new kernel that came down in the updates.

I want my machines on the private network to be able to reach the Internet through the MaaS controller. To make things simple, I’m just going to set up a basic router on this machine using a guide I found here:

# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl net.ipv4.ip_forward=1
# iptables -A FORWARD -i enx000ec6306fb8 -o wlp1s0 -j ACCEPT
# iptables -A FORWARD -i wlp1s0 -o enx000ec6306fb8 -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -t nat -A POSTROUTING -o wlp1s0 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o enx000ec6306fb8 -j MASQUERADE
# apt install iptables-persistent

After running the “apt install…” command, make sure you tell it to persist the IPV4 and IPV6 rules and they will be stored in /etc/iptables under files called “rules.v4” and “rules.v6”. At this point, because I’m old-school, I do a reboot.
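
For reference, the saved /etc/iptables/rules.v4 should then look roughly like this sketch (chain policy lines and packet counters trimmed):

```
*filter
-A FORWARD -i enx000ec6306fb8 -o wlp1s0 -j ACCEPT
-A FORWARD -i wlp1s0 -o enx000ec6306fb8 -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
*nat
-A POSTROUTING -o wlp1s0 -j MASQUERADE
-A POSTROUTING -o enx000ec6306fb8 -j MASQUERADE
COMMIT
```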

For my lab, I want to be as close to a “production” environment as I can get. Therefore, I’m opting for a “region+rack” configuration. Using snaps, installing MaaS is… well… a snap:

$ sudo snap install --channel=3.2 maas

The next thing we need to do is set up a PostgreSQL database for this instance of MaaS:

$ sudo snap install maas-test-db

At this point, it is time to initialize your instance of MaaS:

$ sudo maas init region+rack --database-uri maas-test-db:///

I took the default for my MaaS URL (http://10.0.0.5:5240/MAAS). I then ran the command “$ sudo maas createadmin” and provided my admin credentials and my Launchpad user for my ssh keys.

At this point, I logged into my MaaS instance from that URL and did some configuration. First, I named my instance and set the DNS forwarder to one that I liked. Next, we need to enable DHCP for the private network so that it can PXE boot new machines on the network. To do this, navigate to the Subnets tab and click on the hyperlink in the “vlan” column that corresponds to the private network. Click “Configure DHCP” and then fill in the Subnet dropdown to correspond to the IP address range of your private network then save the change. You should now notice the warning about DHCP not being configured has gone away from the MaaS user interface.

The next thing we need to do is set up the default gateway that is shared by the MaaS DHCP server to the machines. To do this, navigate to the “Subnets” tab and click on the hyperlink in the “subnet” column for your private network. Click “Edit” and fill in the Default Gateway IP address and the DNS address if you’d like. After clicking “Save” your machines will be automatically configured to use the default gateway you provided (in my case, the private network IP address of my MaaS controller).

I first booted up the Thinkpads (the ones with Intel AMT) on the private network; they PXE boot off of the MaaS controller and eventually show up under the “Machines” tab of the MaaS user interface. I clicked on each of them in the MaaS user interface and configured their names and their power setup to be Intel AMT, providing the passwords and IP addresses that I set up in the firmware on each of them. I then booted up the AMD machines and, in their configuration, just set their power type to “Manual”.

At this point, you will need to get the machines into a “usable” state for MaaS so to do that, check the box next to each one on the “Machines” tab and select “Commission” from the actions menu. You’ll have to physically power on any machines that don’t have Intel’s AMT and then they will go through the commissioning process. When done, they will show up as “Ready” on the “Machines” tab.

Now I need to get the default gateway working for each of the machines. There might be an easier way of doing this; however, I haven’t figured it out yet so I’m following part of a guide found here. For each machine, click on it and then navigate to the network tab. When there, check the box next to the network interface that is connected to the private network’s switch and press the “Create Bridge” button. Name the bridge “br-ex”, the type is “Open vSwitch”, select the fabric and subnet corresponding to your private network and pick “auto assign” for the ip mode.

Now, check the boxes next to your “Ready” machines and select “Deploy” from the actions menu. Be sure to check the “Auto Assign as KVM host” to make them available to host virtual machines for you. Press the “Start deployment…” button and be sure to power on any that don’t have Intel AMT technology to control their power state. At this point you should be done with the power button pushing business unless you need to do maintenance on the machines.

This seemed as good a time as any to create a MaaS user for myself. To do this, I navigated to the “Settings” tab and selected “Users” and then clicked “Add User”. I filled in the details (by the way, MaaS enforces no duplicate email addresses among its users so if you are like me and want an admin account and a separate user account, you’ll have to use two email addresses) and clicked “Save” and I was good to go. I logged in as that user and supplied my SSH key from Launchpad.

If you now switch to the main MaaS “KVM” tab, you should see your machines available and be able to add virtual machines. You do this by clicking on one of the hosts and then clicking the “Add Virtual Machine” button. It then shows up as a “New” machine in the MaaS user interface.

I then log in as my “user” account in MaaS and deploy the “New” virtual machines. Once they are completely deployed, you can then ssh into them from a machine that has connectivity to the private network. The only trick I discovered is that you have to log in as the “ubuntu” user, NOT the user you have set up in MaaS.

At this point, I have a working MaaS home lab that I can use for a variety of projects. I hope that you found this post helpful!


Active Directory Needs Friends!

For those of you who didn’t read my predecessor post on setting up a full-blown Active Directory infrastructure on my home network with home directories, roaming user profiles and group policy using only open source software, take a read through that. This is a follow-on post where I have added a second Active Directory domain controller in a private cloud environment and then bridged that private cloud network to my secure home network using WireGuard.

Bridging The Networks

To start off, since I’m using the bleeding-edge Ubuntu version on my primary domain controller, I set up a virtual server running 21.10 in my cloud provider of choice. I put it on its own private network that does not collide with my home network (192.168.1.0/24) – in this case, 192.168.2.0/24.

My VPS provider allows me to supply SSH keys at their web console, restricting who can ssh into the remote virtual machine to those who hold a private key corresponding to one of the public keys you upload and select. This ensures that I can securely log into the machine with root-level access without fear. The first thing to do, however, when I log into the new server is to update the packages installed on it:

# apt update
# apt upgrade
# reboot

Now for the WireGuard setup on the remote virtual machine. For the purposes of this section, we will call it the “server”:

# apt install wireguard wireguard-tools
# wg genkey | tee /etc/wireguard/server_private.key
# wg pubkey < /etc/wireguard/server_private.key | tee /etc/wireguard/server_public.key
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# echo "net.ipv6.conf.all.forwarding=1" >> /etc/sysctl.conf
# sysctl -p
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
# vim /etc/wireguard/wg0.conf
[Interface]
Address = 10.10.10.1/32
ListenPort = 51820
PrivateKey = *** contents of /etc/wireguard/server_private.key ***
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT

[Peer]
PublicKey = *** contents of /etc/wireguard/server_public.key from remote ***
Endpoint = 1.2.3.4:51820 # IP address of remote
AllowedIPs = 10.10.10.2/32, 192.168.1.0/24

Since my local network is on a residential ISP, I need to use the tools on my ISP’s router to port map the Wireguard port that comes in on the public IP address to the OpenBSD router. Now, we will need to set up the WireGuard configuration on the OpenBSD 7.0 router that I use for my secure network at home (private IP is 192.168.1.1):

# pkg_add wireguard-tools
# sysctl net.inet.ip.forwarding=1
# echo 'net.inet.ip.forwarding=1' | tee -a /etc/sysctl.conf
# mkdir /etc/wireguard
# chmod 700 /etc/wireguard
# wg genkey > /etc/wireguard/server_private.key
# wg pubkey < /etc/wireguard/server_private.key > /etc/wireguard/server_public.key
# vim /etc/hostname.wg0
inet 10.10.10.2 255.255.255.0
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg0.conf
!route add -inet 192.168.2.0/24 10.10.10.2
# vim /etc/wireguard/wg0.conf
[Interface]
PrivateKey = *** contents of /etc/wireguard/server_private.key ***
ListenPort = 51820

[Peer]
PublicKey = *** contents of /etc/wireguard/server_public.key from remote ***
Endpoint = 2.3.4.5:51820 # public IP address of remote
AllowedIPs = 10.10.10.1/32, 192.168.2.0/24
# vim /etc/pf.conf
... add to end...
pass in on egress proto udp from any to any port 51820 keep state
pass on wg0
pass out on egress inet from (wg0:network) to any nat-to (egress:0)
# pfctl -f /etc/pf.conf
# sh /etc/netstart wg0

Now, run the following commands on the remote Linux box to enable and start the WireGuard service:

# systemctl enable wg-quick@wg0.service
# systemctl start wg-quick@wg0.service

At this point, you should be able to check the status of the Wireguard network on both sides with the command wg show and that should show both ends connected. You should be able to ping hosts on the remote network from each end.

So far, the only problem I have found with this setup is that my Windows machines that are multi-homed (i.e. one interface – wired ethernet – connected to my ISP’s network and one – wireless – connected to my secure network) need to have a route manually added as follows:

C:\WINDOWS\system32> route add -p 192.168.2.0 MASK 255.255.255.0 192.168.1.1

In this case, the 192.168.2.0/24 network is the remote network and the 192.168.1.1 IP references my OpenBSD 7.0 router.

Remote Samba Active Directory Server

Now that we have a remote network that is securely bridged to our local private network on which the current Samba Active Directory infrastructure is running, it is time to create the VPC virtual server that will be running our remote Active Directory server. My particular VPC service allows me to create a server that is on the same private network as my remote “router” that is running Wireguard, so I create such a server and call it DC2.ad.example.com (substitute your own AD domain name).

First things first, the remote AD server must have a route to the Wireguard network. This is not a necessary step on the home network side because the Wireguard server is running on the OpenBSD 7.0 router and by definition is the default route for the servers on that network. This is not the case for the servers on the private network at the VPC. To do this, we simply need to add a persistent route. So as to not mess things up with the default network configuration on the remote host, I decided to create a (yuck) SystemD (blech) service:

# apt update
# apt upgrade
# apt install net-tools
# vim /usr/sbin/MY-NETWORK.sh
#! /bin/sh
/usr/sbin/route add -net 192.168.1.0/24 gw 192.168.2.2 eth1
# chmod +x /usr/sbin/MY-NETWORK.sh
# vim /etc/systemd/system/MY-NETWORK.service
[Unit]
Description=Route to Wireguard server
After=network.target
StartLimitIntervalSec=0

[Service]
Type=oneshot
RemainAfterExit=yes
User=root
ExecStart=/usr/sbin/MY-NETWORK.sh

[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl enable MY-NETWORK.service
# systemctl start MY-NETWORK.service

At this point, you should be able to ping the domain controller on the remote (home) network and from that domain controller, you should be able to ping the new host.

Now we need to do the standard networking configuration ‘stuff’ that Samba likes. First, edit the /etc/hosts file to remove the “127.0.1.1 DC2.ad.example.com DC2” line and replace it with one tying it to the static private IP address that has been assigned to this virtual host. In this case, “192.168.2.3 DC2.ad.example.com DC2”.

Here we need to add the necessary packages to host an Active Directory domain controller:

# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user dnsutils net-tools smbclient

Next, disable systemd’s resolver, point DNS at the existing domain controller on the home network and add the Active Directory domain to the search path:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf
nameserver 192.168.1.2
search ad.example.com

Now, go ahead and reboot the remote machine and when you log back into it, test to see if DNS is working properly:

# nslookup DC1.ad.example.com
Server:     192.168.1.2
Name:      DC1.ad.example.com
# nslookup 192.168.1.2
2.1.168.192.in-addr.arpa    name = DC1.ad.example.com
# host -t SRV _ldap._tcp.ad.example.com
_ldap._tcp.ad.example.com has SRV record 0 100 389 dc1.ad.example.com

Rename the /etc/krb5.conf file and the /etc/samba/smb.conf file like you did when you created the domain controller on your local network. Then, create a new /etc/krb5.conf file:

[libdefaults]
    default_realm = AD.EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true

At this point, we need to set up an NTP server and sync it to the one at our original Active Directory domain controller:

# apt install chrony ntpdate
# ntpdate 192.168.1.2
# echo "server 192.168.1.2 minpoll 0 maxpoll 5 maxdelay .05" > /etc/chrony/chrony.conf
# systemctl enable chrony
# systemctl start chrony

Now we need to authenticate against Kerberos and get a ticket:

# kinit administrator
... provide your AD\Administrator password ...
# klist

At this point, it’s time to join the domain as a new domain controller:

# samba-tool domain join ad.example.com DC -U"AD\administrator"

After the tool finishes (it produces a lot of output), you need to copy the generated Kerberos configuration file to the /etc directory:

# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf

You need to manually create the systemd service and set things up so that everything fires up when you reboot the server:

# systemctl mask smbd nmbd winbind
# systemctl disable smbd nmbd winbind
# systemctl stop smbd nmbd winbind
# systemctl unmask samba-ad-dc
# vim /etc/systemd/system/samba-ad-dc.service
[Unit]
Description=Samba Active Directory Domain Controller
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/sbin/samba -D
PIDFile=/run/samba/samba.pid
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl enable samba-ad-dc
# systemctl start samba-ad-dc

OK. At this point we have a Samba Active Directory domain controller running. We need to get SysVol replication going now to ensure that the two controllers are bidirectionally synchronized.

Bidirectional SysVol Replication

To get the SysVol replication going bidirectionally, I followed the guide here. First, you need some tools installed on both DCs:

# apt install rsync unison

Generate an ssh key on both domain controllers:

# ssh-keygen -t rsa

Now, copy the /root/.ssh/id_rsa.pub contents from one server into the /root/.ssh/authorized_keys file on the other and vice-versa. Verify that you can log in without passwords from one server to the other. If you are prompted for a password, then edit your /etc/ssh/sshd_config file and add the line “PasswordAuthentication no” and then restart the ssh service. Now you should be able to log in just using public keys and no password from one server to the other and back.
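
If you prefer to script that key generation, here is a non-interactive sketch (purely illustrative; the temporary directory stands in for /root/.ssh on a real DC):

```shell
#!/bin/sh
# Illustrative sketch: generate an RSA key pair non-interactively.
# The mktemp directory stands in for /root/.ssh on a real DC.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N "" -f "$keydir/id_rsa"
# This public key is what gets appended to the other DC's
# /root/.ssh/authorized_keys file:
cat "$keydir/id_rsa.pub"
```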

On your new remote DC (DC2 in my example), do the following to ensure that your incoming ssh connection isn’t rate limited:

# mkdir /root/.ssh/ctl
# cat << EOF > /root/.ssh/ctl/config
Host *
ControlMaster auto
ControlPath ~/.ssh/ctl/%h_%p_%r
ControlPersist 1
EOF

Now, to be able to log what happens during the sync on the local DC (DC1 in my example), do the following to create the appropriate log files:

# touch /var/log/sysvol-sync.log
# chmod 640 /var/log/sysvol-sync.log

Now, do the following on the local DC (DC1 in my example):

install -o root -g root -m 0750 -d /root/.unison
cat << EOF > /root/.unison/default.prf
# Unison preferences file
# Roots of the synchronization
#
# copymax & maxthreads params were set to 1 for easier troubleshooting.
# Have to experiment to see if they can be increased again.
root = /var/lib/samba
# Note the double slash after DC2; it is required
root = ssh://root@DC2//var/lib/samba
# 
# Paths to synchronize
path = sysvol
#
#ignore = Path stats    ## ignores /var/www/stats
auto=true
batch=true
perms=0
rsync=true
maxthreads=1
retry=3
confirmbigdeletes=false
servercmd=/usr/bin/unison
copythreshold=0
copyprog = /usr/bin/rsync -XAavz --rsh='ssh -p 22' --inplace --compress
copyprogrest = /usr/bin/rsync -XAavz --rsh='ssh -p 22' --partial --inplace --compress
copyquoterem = true
copymax = 1
logfile = /var/log/sysvol-sync.log
EOF

Now, run the following command on your local DC (DC1 in my example):

# /usr/bin/rsync -XAavz --log-file /var/log/sysvol-sync.log --delete-after -f"+ */" -f"- *"  /var/lib/samba/sysvol root@DC2:/var/lib/samba  &&  /usr/bin/unison

This should synchronize the two sysvols. If you followed my previous how-to and set up Group Policy, this can take some time as there are a lot of files involved that are stored on the SysVol. After it is complete, you can verify this by doing the following on your remote DC (DC2 in my example):

# ls /var/lib/samba/sysvol/ad.example.com

You should see the same file structure under that directory on both servers. This will copy everything, including your group policy stuff.
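An ls only shows the top level; to convince yourself the two trees are truly identical, you can checksum every file on both sides and compare. A minimal sketch of my own (compare_trees is a made-up helper name, not a Samba tool; DC2 is the remote controller's hostname as above):

```shell
# compare_trees DIR1 DIR2: checksum every file under both trees and compare
# the sorted results; succeeds only when the trees are identical.
compare_trees() {
    a=$(cd "$1" && find . -type f -exec md5sum {} + | sort -k 2)
    b=$(cd "$2" && find . -type f -exec md5sum {} + | sort -k 2)
    [ "$a" = "$b" ]
}

# Across the two DCs, pull the remote listing over ssh instead:
#   find /var/lib/samba/sysvol -type f -exec md5sum {} + | sort -k 2 > /tmp/local.sums
#   ssh root@DC2 'find /var/lib/samba/sysvol -type f -exec md5sum {} + | sort -k 2' > /tmp/remote.sums
#   diff /tmp/local.sums /tmp/remote.sums && echo "sysvols match"
```
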

Now that you have done the initial sync, just add the following to your crontab on the local DC (DC1 in my example):

# crontab -e
*/5 * * * * /usr/bin/unison -silent

You should monitor /var/log/sysvol-sync.log on your local DC (DC1 in my example) to ensure that everything is synchronizing and staying that way over time.
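One caveat with a five-minute interval: if a big sync is still running when cron fires again, two unison instances can fight over the same files. A common guard is util-linux's flock; a sketch (the lock-file path is my own choice):

```shell
# crontab entry using flock so an overlapping run is skipped (-n = don't
# wait for the lock) rather than stacked on top of the previous one
*/5 * * * * /usr/bin/flock -n /run/lock/sysvol-sync.lock /usr/bin/unison -silent
```
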

Hope this little “how-to” helps folks!


Active Directory Says What?

Many of the long-time readers of this blog are probably going to have a panic attack when they read this article because they are going to be asking themselves, “Why in the heck does he want to install Active Directory in his life?” The reason, like the answer to so many of these questions I ask myself, is “Because I can!” LOL!!

So I have a small home network that is my playground for learning new technologies and practicing and growing my security skills. I try to keep it segregated from my true home network that my family uses because I don’t want my latest experiment to get in the way of any of them connecting to the Internet successfully.

Just for fun, however, I’m going to start on a path to try a new experiment – I’d like to have the ability to add a new machine to my network and not have to spend half a day setting it up. Furthermore, I’d like to put everything I can either on a local file server that backs up to the cloud or in the cloud that backs up to a local file server in such a way that I can totally destroy any of my machines and be able to reproduce it at the push of a button. The ultimate in home disaster recovery.

What does this buy me? Well, for one, it lets me be even more aggressive in my experimentation. If I lay waste to a machine because of a failed experiment, no big deal – I just nuke and automatically repave it. For another, it makes it way easier to recover a family member’s setup when something goes wrong. I can just rebuild the machine and know they won’t lose anything. That alone will save me lots of time troubleshooting the latest problems with stuff.

So, why Active Directory? I chose this technology because pretty much everything (OpenBSD is going to be interesting) will authenticate centrally against it. Yes, I do have to run some Windows and Mac machines on my network; I can’t do it all on OpenBSD and Linux, so it’s a good common ground.

Now, I will die before installing a Windows Server in my infrastructure (LOL), so I have been very careful to say “Active Directory” and not “Windows Server” or “Azure AD”. I’m going to see how far Samba 4 has come since the last time I played with it. If I can do the full meal deal of authentication, authorization, roaming user profiles and network home directories on a Windows machine, then I can fill in around the edges on my non-Windows machines using NFS and other techniques.

Setting up Ubuntu

First things first, I want to start with a clean install of my domain controller. To this end, I’ll nuke and repave my 32-core Threadripper box in my basement with the latest Ubuntu 21.10 build and install Samba on bare metal. I had originally thought about doing this in a VM or in a Docker container, but I want the reliability and controllability of a bare-metal install with a static IP address, etc. Therefore, after carefully backing up the local files that I wanted to save off of this machine (ha – that’s a lie, I just booted from a USB thumb drive and GParted the drives with new partition tables), I installed a fresh copy of Ubuntu 21.10 with 3rd-party drivers for my graphics card.

Once I had the base OS laid down, I used the canonical documentation from wiki.samba.org (not documentation from Canonical, the owner of Ubuntu <g>), along with some blog posts (1), (2), and (3) to determine my full course of action. I’ll outline the various steps below.

Active Directory Domain Controller

First things first, we need to get the network set up the way Samba wants it on this machine. That consists of setting up a static IP address on the two NICs in my server (one for my “secure” home network and one for my insecure “family” network) and setting the hostname and /etc/hosts file changes. Specifically, I used NetworkManager from the Ubuntu desktop to set the static IPs, the gateway and the netmasks and then modified /etc/hosts as follows:

127.0.0.1    localhost
192.168.1.2  DC1.ad.example.com    DC1

It is important to note that Ubuntu will put in an additional 127.0.0.1 line for your host and you need to (apparently, per the documentation) remove that. I then modified my /etc/hostname file as follows:

DC1.ad.example.com

Now for a fun one. We need to permanently change /etc/resolv.conf and not have Ubuntu overwrite it on the next boot. To do that, we have to:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf
nameserver 192.168.1.1
search ad.example.com

At this point, you should have the networking changes in place that you need for now. You’ll have to loop back around later and change /etc/resolv.conf to use this machine’s IP address as the nameserver once Samba is running with its built-in DNS server, but we don’t want to lose name resolution in the meanwhile, so I’ve hard-coded it to point to my local DNS server on OpenBSD.
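The Samba docs just say to edit the file; as an extra, strictly optional safeguard of my own, you can mark the file immutable so no service can rewrite it behind your back:

```shell
# Make /etc/resolv.conf immutable so nothing can overwrite it on boot.
chattr +i /etc/resolv.conf

# Remember to drop the flag before editing it again (e.g. when pointing it
# at Samba's DNS later):
#   chattr -i /etc/resolv.conf
```
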

Now it’s time to install the necessary packages to make this machine an active directory domain controller:

# apt update
# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user dnsutils net-tools smbclient

Specify the FQDN of your server when prompted on the ugly purple screens for things like your Kerberos server and your Administrative server.

Now, it’s time to create the configuration files for Kerberos and Samba. To do this, I ran the following commands:

# mv /etc/krb5.conf /etc/krb5.conf.orig
# mv /etc/samba/smb.conf /etc/samba/smb.conf.orig
# samba-tool domain provision --use-rfc2307 --interactive

I took the defaults, being careful to double-check the DNS forwarder IP address (that’s where the DNS server that will be serving your AD network will forward requests it cannot resolve) and then entered my Administrator password. Keep in mind that by default, the password complexity requirements are set pretty high (which I like), so pick a good one.

Now use the following command to move the Kerberos configuration file that was generated by the Samba provisioning process to its correct location:

# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf

Next, we need to set things up so that the right services are started when you reboot the machine. To do that, issue the following commands:

# systemctl mask smbd nmbd winbind
# systemctl disable smbd nmbd winbind
# systemctl stop smbd nmbd winbind
# systemctl unmask samba-ad-dc
# vim /etc/systemd/system/samba-ad-dc.service
[Unit]
Description=Samba Active Directory Domain Controller
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/sbin/samba -D
PIDFile=/run/samba/samba.pid
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
# systemctl daemon-reload
# systemctl enable samba-ad-dc
# systemctl start samba-ad-dc

Now go back and update the /etc/resolv.conf file to use the new Samba-supplied DNS service:

# vim /etc/resolv.conf
nameserver 192.168.1.2
search ad.example.com

This is probably a good time to reboot your machine. When you do so, don’t forget to check that /etc/resolv.conf hasn’t been messed with by Ubuntu. If it has, double-check the work you did above and keep trying reboots until it sticks.

Now we need to create the reverse zone for DNS:

# samba-tool dns zonecreate 192.168.1.2 168.192.in-addr.arpa -U Administrator
# samba-tool dns add 192.168.1.2 168.192.in-addr.arpa 2.1 PTR DC1.ad.example.com -U Administrator

If you have multiple NICs in your AD server, you will need to repeat this process for their networks. At this point, double-check that the DNS responder is coming back with what it needs to in order to serve the black magic of the Active Directory clients:

# nslookup DC1.ad.example.com
Server:        192.168.1.2
Address:       192.168.1.2#53

Name:    DC1.ad.example.com
Address: 192.168.1.2

# nslookup 192.168.1.2
2.1.168.192.in-addr.arpa        name = DC1.ad.example.com

# host -t SRV _ldap._tcp.ad.example.com
_ldap._tcp.ad.example.com has SRV record 0 100 389 dc1.ad.example.com
# host -t SRV _kerberos._udp.ad.example.com
_kerberos._udp.ad.example.com has SRV record 0 100 88 dc1.ad.example.com
# host -t A dc1.ad.example.com
dc1.ad.example.com has address 192.168.1.2

If you have multiple NICs in your AD server, you might want to double-check the DNS A records that are returned are reachable from the networks your clients typically use. Since I have a “home” network and a “secure” network, I can manage DNS and DHCP on my secure network so I tend to make sure that my domain controller hostname resolves to an IP address on the secure network. The Windows DHCP admin tools are pretty handy for checking on this and making changes.

Verify that the Samba service has file serving running correctly by listing all of the shares from this server as an anonymous user:

# smbclient -L localhost -N

You should see sysvol, netlogon and IPC$ listed. Any error about SMB1 being disabled is actually a good thing. Validate that a user can successfully log in:

# smbclient //localhost/netlogon -UAdministrator -c 'ls'

You should see a listing of the netlogon share directory which should be empty. Now check that you can successfully authenticate against Kerberos:

# kinit administrator
# klist

You should see a message about when your administrator password will expire if you are successfully authenticated by Kerberos. The klist command should show the ticket that was generated by you logging in as Administrator.

If you look at the documentation in the Samba Wiki, you’ll see that ntp seems to be the preferred service over chrony or openntpd. If you look at the documentation for chrony (which everyone seems to use), you’ll get a different story. However, when I used chrony, I kept getting NTP errors on my Windows clients, so I’m configuring ntp in this post.

# apt install ntp
# samba -b | grep 'NTP'
    NTP_SIGND_SOCKET_DIR: /var/lib/samba/ntp_signd
# chown root:ntp /var/lib/samba/ntp_signd/
# chmod 750 /var/lib/samba/ntp_signd/
# vim /etc/ntp.conf
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
broadcast 192.168.1.255
disable auth
broadcastclient
# systemctl restart ntp

To be clear, the lines I’m showing after editing the ntp.conf file are lines that you ADD to the file. Also, if you have more than one NIC in the server, you’ll need to add a second restrict and broadcast line for each of their networks.

Now, let’s test that everything is working by enrolling a Windows 10 machine into the domain. Ensure first that you are on the right network and just for safety’s sake, do a reboot so you pick up the DNS server, etc. I have modified the DHCP server on my network to pass the correct information that a client needs as follows (from /etc/dhcpd.conf in OpenBSD):

option domain-name "ad.example.com";
option domain-name-servers 192.168.1.2;
option ntp-servers 192.168.1.2;

Microsoft has done a bang-up job of hiding this in the UI compared to where it has been for literally decades (“get off my lawn!!”). I prefer the old-fashioned way so I ran the following using Windows key + R to get the old UI I’m most comfortable with:

sysdm.cpl

Press the “Change” button and then select “Domain” and enter “ad.example.com” as the name of your domain. That should prompt you for your admin credentials. I typically use AD\administrator as my userid just to be safe. In a matter of seconds, you should be welcomed to the domain.

For safety’s sake, I recommend clearing out your application and system event logs on that machine, rebooting and logging in as your domain admin. Once that’s done, examine the event viewer to ensure that you aren’t seeing any errors that might indicate something isn’t configured correctly on the server. Remember to click the “other user” button on the Windows 10 login screen and use the AD\Administrator to tell Windows which domain you want to log into.

There is a warning (DNS Client Events, Event ID: 8020) that I see in the System event log. This appears to be a problem where the Windows machine tries to re-register with dynamic DNS in Samba using exactly the same info that is already registered for it, and Samba returns an error. Since you can still resolve the client machine from the server, the registration clearly worked the first time, so I think it can be safely ignored for now.

For ease of maintenance, you might want to install the “Windows RSAT Tools” on your Windows machine; they give you a good UI for managing all of the fun stuff that Active Directory brings to the table and are a free download.

I really do NOT recommend using your domain controller as a file server. To set that up on another machine, please see the next section.

Samba File Server in a Domain

Thankfully, the wonderful documentation on the Samba WIKI has an entire entry dedicated to setting up Samba as a domain member. First things first, we need to configure the network settings on our file server to use the Active Directory server as the DNS server.

As I did with the domain controller above, I used NetworkManager from the Ubuntu desktop to set the static IPs, the gateway and the netmasks and then modified /etc/hosts as follows:

127.0.0.1    localhost
192.168.1.3  NAS.ad.example.com    NAS

It is important to note that Ubuntu will put in an additional 127.0.0.1 line for your host and you need to (apparently, per the documentation) remove that. I then modified my /etc/hostname file as follows:

NAS.ad.example.com

We need to permanently change /etc/resolv.conf and not have Ubuntu overwrite it on the next boot. To do that, we have to:

# systemctl stop systemd-resolved
# systemctl disable systemd-resolved
# unlink /etc/resolv.conf
# vim /etc/resolv.conf
nameserver 192.168.1.2
search ad.example.com

After a quick reboot and verification that the resolv.conf changes survived, we need to install some packages:

# apt install acl attr samba samba-dsdb-modules samba-vfs-modules winbind libpam-winbind libnss-winbind libpam-krb5 krb5-config krb5-user smbclient

Now we need to configure Kerberos and Samba. First, if there are files currently at /etc/krb5.conf and/or /etc/samba/smb.conf, remove them. Create a new /etc/krb5.conf file with the following contents:

[libdefaults]
    default_realm = AD.EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true

Next, it will be necessary to synchronize time to the domain controller. Since this server won’t be broadcasting network time to client machines (i.e. it isn’t a domain controller), I’ll be setting it up with chrony which is built into Ubuntu.

# apt install chrony ntpdate
# ntpdate 192.168.1.2
# vim /etc/chrony/chrony.conf
server 192.168.1.2 minpoll 0 maxpoll 5 maxdelay .05
# systemctl enable chrony
# systemctl start chrony
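Once chrony is up, chronyc (chrony's control utility) can confirm that the DC is actually being tracked as the time source:

```shell
# Show configured time sources; the line marked '^*' is the source chrony
# is currently synchronized to (it should be 192.168.1.2 here).
chronyc sources -v

# Show overall sync state and current offset.
chronyc tracking
```
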

The server line added with the vim command should be the only line in the file. To validate that everything is working, a call to systemctl status chrony should show that it is active and running. Now we need to set up the /etc/samba/smb.conf file:

[global]        
    workgroup = AD        
    security = ADS        
    realm = AD.EXAMPLE.COM       
    netbios name = NAS
    domain master = no
    local master = no
    preferred master = no

    idmap config * : backend = tdb
    idmap config * : range = 50000-100000
 
    vfs objects = acl_xattr        
    map acl inherit = Yes        
    store dos attributes = Yes

    winbind use default domain = true
    winbind offline logon = false
    winbind nss info = rfc2307
    winbind refresh tickets = Yes
    winbind enum users = Yes
    winbind enum groups = Yes
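Before going further, testparm (which ships with Samba) will catch any typos in the config file:

```shell
# Check /etc/samba/smb.conf for syntax errors and dump the parsed
# configuration without pausing for a keypress (-s).
testparm -s
```
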

Now we will need to join the domain:

# kinit administrator
# samba-tool domain join AD -U AD\\Administrator
# net ads join -U AD\\Administrator

You’ll probably get a DNS error when you join the domain. Regardless, add an A record and a PTR record for the server into the DNS as follows:

# samba-tool dns add 192.168.1.2 168.192.in-addr.arpa 3.1 PTR NAS.ad.example.com -U Administrator
# samba-tool dns add 192.168.1.2 ad.example.com NAS A 192.168.1.3 -U Administrator

If you have multiple NICs in your file server, make sure you repeat the process for the IP address ranges assigned to them. Now, add the “winbind” parameter as follows to /etc/nsswitch.conf:

# vim /etc/nsswitch.conf
passwd: files winbind systemd
group: files winbind systemd
shadow: files winbind

Next, we will need to enable and start and restart some services:

# systemctl enable smbd nmbd winbind
# systemctl start smbd nmbd winbind
# pam-auth-update

Before proceeding any further, you should probably reboot the machine. Now for some tests to make sure that everything is working ok:

# wbinfo --ping-dc
checking the NETLOGON for domain[AD] dc connection to "dc1.ad.example.com" succeeded.
# wbinfo -g
... list of domain groups ...
# wbinfo -u
... list of domain users ...
# getent group
... list of Linux groups and Windows groups...
# getent passwd
... list of Linux users and Windows users...

Windows Home Directories

A common configuration done by Windows Domain administrators is to create a default “Home” drive (typically mapped to the H: drive letter) for users. To do this, we will want to first set up a file share on the server. The goal will be to set up a mapped “HOME” directory for each domain user. We’ll start off by adding the following to the /etc/samba/smb.conf file:

[users]
    comment = Home directories
    path = /path/to/folder
    read only = no
    acl_xattr:ignore system acls = yes

After issuing an “smbcontrol all reload-config” on the file server to reload the changes to the config file, you should now be able to see a share called \\nas\users. After creating the directory on the filesystem, set its ownership and permissions with the following commands:

# chown "Administrator":"Domain Users" /path/to/folder/
# chmod 0770 /path/to/folder/

It is important to grant the “SeDiskOperatorPrivilege” to the “Domain Admins” group as follows. This has to be done on the file server itself.

# net rpc rights grant "AD\\Domain Admins" SeDiskOperatorPrivilege -U "AD\administrator"

Finally, from “Active Directory Users and Computers”, select the user in the “Users” folder, right-click and select “Properties”. After changing to the “Profile” tab, select the “Connect” radio button under “Home folder”, choose H: as the drive letter and put \\nas\users\{user name} in the “To:” entry field. This should automatically create the directory and set the correct permissions on it.

Now log out of the domain and back in as the user account you modified above and you should automatically get an H: drive that maps to that folder on the file server.

User Profiles

OK, so the cool kids on their Windows networks also have this thing called a “Roaming User Profile” that allows you to put their user profile on a file server and then they can move from one machine to another and simply access their stuff as if it was all the same machine. I wanted to see how Samba handled this and sure enough, I got a hit in the Samba wiki that indicated it was possible.

First things first, we need to create a share on our file server to hold the profiles, so I added this to my /etc/samba/smb.conf file:

[profiles]
    comment = Users profiles
    path = /path/to/profile/directory
    browseable = No
    read only = No
    csc policy = disable
    vfs objects = acl_xattr
    acl_xattr:ignore system acls = yes

After making that change, I need to create the directory to hold the profiles and set the UNIX ownership and permissions like I did with the home directories above:

# mkdir /path/to/profile/directory
# chown "AD\Administrator":"AD\Domain Users" /path/to/profile/directory
# chmod 0700 /path/to/profile/directory

After a quick “smbcontrol all reload-config” to pull the new changes in, we now have a share on the file server called “profiles” that will hold the resulting Windows user profiles. I used the “Active Directory Users & Computers” tool on my Windows machine (logged in as Administrator), opened the property dialog for my users, navigated to the “Profile” tab and entered the UNC name for the profile directory \\NAS\profiles\{user-name}. The key is to know that, depending on the version of Windows, the system will add a suffix (in my case “.v6”) to that directory name and it will initially be created empty. When you log out, it will actually copy the stuff into the directory and you should see the directories and files show up on your file server. It seems this is the consistent behavior. For example, saving a file into the “Documents” directory on the Windows machine isn’t propagated to the server’s file system until that user logs out.

It really was that easy!

Group Policy

Given the fact that I had, at this point a fully functional Active Directory infrastructure with network home directories, roaming user profiles and all of it was running on Open Source platforms, I thought I’d really try to push it over the edge and dip my toe in the water around Group Policy. Group Policy is some magic stuff based on LDAP that, in the Windows world, allows you to automatically configure an end-user’s workstation. I found documentation in the Samba wiki that indicated it was possible to make this work so I thought I’d give it a try and see what I needed to do.

It looked like the first thing I needed to do was load the Samba “ADMX” templates into the AD domain controller. To do that, I used the following command:

# samba-tool gpo admxload -H dc1.ad.example.com -U Administrator

Sure enough, logging into my Windows machine as a domain admin, I was able to see that the command had indeed injected the Samba files into the Sysvol:

H:\> dir \\DC1\SysVol\ad.example.com\Policies\PolicyDefinitions

That command above should show you the en-US directory and the samba.admx file. Now we need to download the Microsoft ADMX templates and install them:

# apt install msitools
# cd /tmp
# wget 'https://download.microsoft.com/download/3/0/6/30680643-987a-450c-b906-a455fff4aee8/Administrative%20Templates%20(.admx)%20for%20Windows%2010%20October%202020%20Update.msi'
# msiextract Administrative\ Templates\ \(.admx\)\ for\ Windows\ 10\ October\ 2020\ Update.msi
# samba-tool gpo admxload -U Administrator --admx-dir=Program\ Files/Microsoft\ Group\ Policy/Windows\ 10\ October\ 2020\ Update\ \(20H2\)/PolicyDefinitions/

The last line will take a few seconds as it processes the files and loads them into the SysVol. You can again confirm the presence of the new policies using the “dir” command above from your Windows machine. At this point, you have the group policies set up and installed into your environment and should be able to manipulate them using the “Group Policy Management Console” on your Windows workstation.
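You can also verify from the DC side; samba-tool can enumerate the group policy objects directly:

```shell
# List all GPOs currently stored in the domain, run on the DC itself:
samba-tool gpo listall -U Administrator
```
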

Conclusion

While this is probably one of my stranger and more technical posts, I think it is a cool example of how you can totally eliminate paid software from your server infrastructure and still have the full functionality of something like Active Directory in your tool belt.


Thinkpad T14 (AMD) Gen 2 – A Brave New World!

As long-time readers of this blog are aware, I’m a bit of a Thinkpad fanatic. I fell in love with these durable machines when I was working for IBM back in the late 90’s and accidentally had one fall out of my bag, bounce down the jetway stairs and hit the runway hard – amazingly enough it had a few scuffs but zero damage! After the purchase of the brand by Lenovo, I was a bit worried, but they continue to crank out (at least in the Thinkpad T and X model lines) high-quality, powerful machines.

Thinkpad T480 – RIP

I ran into a nasty problem with my Thinkpad T480 where the software on the machine actually physically damaged the hardware. I know! I thought that was impossible too (other than the 70’s PET machine that had a software-controlled relay on the motherboard that you could trigger continuously until it burned out) but nope – the problem is real.

Essentially, the Thunderbolt I/O port on the machine is driven by firmware running out of an NVRAM chip on the motherboard that can be software-updated as new firmware comes out. As with any NVRAM chip, there are a finite number of write-cycles before the chip dies, but the number of times you will update your firmware is pretty small so it works out well.

Unfortunately, Lenovo pushed out a firmware update that wrote continuously to the NVRAM chip and if you didn’t patch fast enough (they did release an urgent/critical update), then the write-cycles would be exceeded, the chip would fail and the bring-up code would not detect the presence of the bus and thus you had no more Thunderbolt on the laptop. Well, I didn’t update fast enough so “boom” – it is now a Thunderbolt-less laptop.

The New T14 (AMD) Gen 2

Well, enter the need for a new laptop. I decided to jump ship from the Intel train and try life out on the “other side” by ordering a Thinkpad T14 (AMD) Gen 2 machine with 16gb of soldered RAM (there is a slot that I will be populating today that can take it up to 48gb max – I’m going with 32gb total by installing an $80 16gb DIMM) and the Ryzen Pro 5650U that has 6 cores and 12 threads of execution. The screen is a 1920×1080 400-nit panel and looks really nice.

When the laptop showed up, I booted the OpenBSD installer from 6.9-current and grabbed a dmesg and discovered that I lost the Lenovo lottery and had a Realtek WiFi card in the machine. Well, the good news was that I had upgraded the card in my T480 to an Intel AX200 so I swapped it for the one I took out of the T480 and then used it in the T14 to replace the Realtek card. Worked like a charm.

The Ethernet interface on this machine is a bit odd. It’s a Realtek chipset as well, but it shows up as two interfaces (re0 and re1). The deal is that re0 is the interface that is exposed when the machine is plugged into a side-connecting docking station and re1 is the interface that is connected to the built-in Ethernet port. The device driver code that is in 6.9-current as of this writing works just fine with it, however, so I’m happy.

Now for the bad news. Every Thinkpad I have owned for the last decade allows me to plug an m.2 2242 SATA drive into the WWAN slot and it works great. I assumed that would be the case with this machine. While I had the bottom off to replace the WiFi card, I slipped the 1TB drive from the WWAN slot of my T480 into the WWAN slot of the T14 and booted up. I was immediately presented with an error message stating effectively that the WWAN slot was white-listed by Lenovo and would only accept “approved” network cards. I was beyond frustrated by this.

Given that I want to get this machine into my production workflow, I decided that I’d slog along for the time being by putting a larger m.2 2280 NVMe drive in, installing rEFInd to allow me to boot multiple partitions from a single drive, and then cloning the 512gb drive that is in the machine to the 1TB drive out of the T480. Then, the remaining space on the new drive will contain an encrypted partition for my OpenBSD install.

Installing rEFInd

I followed the instructions from the rEFInd site on how to manually install under Windows 10 and the steps I followed included downloading and unpacking the ZIP file and then running the following commands from an administrative command prompt:

C:\Users\xxxx\Downloads\refind-bin-0.13.2\> mountvol R: /s
C:\Users\xxxx\Downloads\refind-bin-0.13.2\> xcopy /E refind R:\EFI\refind\
C:\Users\xxxx\Downloads\refind-bin-0.13.2\> r:
R:\> cd \EFI\refind
R:\EFI\refind\> del /s drivers_aa64
R:\EFI\refind\> del /s drivers_ia32
R:\EFI\refind\> del /s tools_aa64
R:\EFI\refind\> del /s tools_ia32
R:\EFI\refind\> del refind_aa64.efi
R:\EFI\refind\> del refind_ia32.efi
R:\EFI\refind\> rmdir drivers_aa64
R:\EFI\refind\> rmdir drivers_ia32
R:\EFI\refind\> rmdir tools_aa64
R:\EFI\refind\> rmdir tools_ia32
R:\EFI\refind\> rename refind.conf-sample refind.conf
R:\EFI\refind\> mkdir images
R:\EFI\refind\> copy C:\Users\xxx\Pictures\mtstmichel.jpg images
R:\EFI\refind\> bcdedit /set "{bootmgr}" path \EFI\refind\refind_x64.efi

That next-to-last copy command is there because I wanted to have a picture of my “happy place” (Mont Saint-Michel off of the northern coast of France) as the background for rEFInd. I edited the refind.conf file and added the following lines:

banner images\mtstmichel.jpg
banner_scale fillscreen

A quick reboot shows that rEFInd is installed correctly and has my customized background. Don’t be alarmed that the first boot with rEFInd is slow; I think it is doing some scanning, processing and caching, because the second and subsequent boots are faster.

Cloning the Drives

The process that I am going to follow, at a high level, is to first clone the contents of my primary 1TB 2280 NVMe drive in my T480 to a spare 256GB drive. I will then erase the 1TB drive and clone the contents of my T14’s drive to it (it’s only 512GB). I will then erase the 512GB drive and clone the 256GB drive back to it. Finally, for good operational security (OpSec) purposes, I’ll use the open source Windows program Eraser to erase the 256GB drive. At this point I should have a bootable T480 (with a fried Thunderbolt bus – grr…) on the 512GB drive, and a bootable T14 on the 1TB drive.

I’m using Clonezilla, an open source tool that I burn to a bootable USB drive to do the cloning. For the hardware that I am using to accomplish all of this, first I use a StarTech device that allows me to plug m.2 drives into a little box that then acts as a 2.5 inch SSD drive. I plug that into a Wavlink USB drive docking station that can hold either 3.5″ or 2.5″ drives.

Another piece of software that I use as part of this process is GPartEd Live – an open source tool that allows you to create a USB drive that boots into the GPartEd software (the Gnu Partition Editor). This allows me to view the partition structure of one drive and create an analogous partition structure on another drive. The built-in Windows tools for this work (Disk Management, for example) can create hidden partitions under the covers that can cause problems with this process. I prefer to use GPartEd to ensure that I can see and control everything that is going on.

Step One is to take the T480, boot it into Windows and connect the Wavlink device to it with the 256GB NVMe drive plugged in via the StarTech adapter. While Eraser wipes the 256GB drive, I also go into Windows settings and decrypt the Windows disk by turning off BitLocker. This may not be necessary, but it makes me feel more comfortable doing the cloning with unencrypted Windows drives, because the encryption key is stored in the TPM device on the motherboard and I’m not sure whether the change in underlying hardware would muck that up. After the erase and decrypt finished, I shrank the partition using “Disk Management” on Windows to be smaller than the new physical disk. If you don’t do this, Clonezilla won’t allow you to clone from a larger partition to a smaller one.

Next we will need to reboot the machine to GPartEd Live. For the destination drive, you will need to use the “Device” menu and create a new GPT partition table. Take a look at the source drive and make a note of the various partitions, their flags, and their sizes. On the destination drive, recreate that partition structure with the same flags and the same or slightly larger size. I generally bump up the size of the partition by just a bit in order to avoid getting into trouble with rounding the size for display on the screen. If you get it wrong, don’t worry, Clonezilla will yell at you and you’ll have to go back and do this over again. 🙂

When launching Clonezilla, since I have the high-resolution display on the T480 (a mistake I’ll never make again; HiDPI is a PITA in everything but Windows), I had to use my cell phone to zoom in on the microscopic text and select the “use 800×600 with large fonts from RAM” option. With readable text, I then made sure to choose “device-device” from the first menu (not the default). Next, select “Beginner Mode” to reduce the complexity of the choices you’ll have to make. After that, select “part_to_local_part” to clone from one partition on the source drive to the corresponding partition on the destination drive. Finally, select the source partition and the destination partition. I recommend you do the smaller partitions first and then let the main C: partition (the largest one) grind away, because it can take a long time to clone.

After cloning the T480 drive, I removed it from the machine and was ready to clone the T14’s drive to it. This is where I ran into a “keying” problem with m.2 drives. Some are “B” keyed, and some are “B+M” keyed. This refers to the number of cutouts where they plug into the slot. Well, it looks like the NVMe drives in both the T480 and the T14 don’t fit the StarTech adapter. After some juggling around I found an old 256MB drive that I was able to use to get the swap completed.

Creating the OpenBSD Partition

To do this, I will use “Disk Management” on Windows to shrink the NTFS partition (if necessary) to make room for OpenBSD and then create a new partition on the drive that takes up the remaining space. If you check the “don’t assign a drive letter” box and the “don’t format the partition” box, you’ll get a raw, unformatted partition that takes up the remaining space on the disk.
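If you prefer the command line, the same shrink-and-create steps can be scripted with diskpart from an administrative prompt. This is a hedged sketch: the disk and partition numbers and the shrink amount are placeholders, so check the output of list disk and list partition against your own machine before running anything.

```
rem Hypothetical diskpart session; numbers and sizes are illustrative only.
list disk
select disk 0
list partition
select partition 2
rem Shrink the NTFS partition (value is in MB; ~40 GB here).
shrink desired=40960
rem A bare "create partition primary" assigns no drive letter and does not
rem format, leaving the raw partition that OpenBSD will take over.
create partition primary
```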

That new raw partition will be changed in OpenBSD to be the home of the encrypted slice on which I’ll be installing the operating system. After creating that partition, it’s time to download the 6.9-current .IMG file for the latest snapshot and use Rufus on Windows to create the USB drive and reboot from it.

Once in the OpenBSD installer, drop immediately to the shell and convert that new raw partition into an OpenBSD partition. That will be where we put the encrypted slice that we will be installing to. To do this, run the following commands:

# cd /dev
# sh ./MAKEDEV sd0
# fdisk -E sd0

sd0: 1> print
sd0: 1> edit 4
Partition id: A6
Partition offset <ENTER>
Partition size <ENTER>
Partition name: OpenBSD
sd0*: 1> write
sd0: 1> exit

The print command above should show you the 4 partitions on your drive (the EFI partition, the Windows partition, the WindowsRecovery partition and your fourth partition that will hold OpenBSD that you created above).

Now that you have a partition for OpenBSD, you’ll want to copy the EFI bootloader over to your EFI drive. You’ll later make a configuration change in rEFInd to not only display it on the screen, but also show a cool OpenBSD “Puffy” logo for it!

# cd /dev
# sh ./MAKEDEV sd1
# mount /dev/sd1i /mnt
# mkdir /mnt2
# mount /dev/sd0i /mnt2
# mkdir /mnt2/EFI/OpenBSD
# cp /mnt/efi/boot/* /mnt2/EFI/OpenBSD
# umount /mnt
# umount /mnt2
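rEFInd will usually auto-detect a loader placed under \EFI\OpenBSD\ on its own, but if you would rather pin it down explicitly, a manual stanza along these lines can be added to refind.conf. This is a sketch: the loader path matches the copy above, and the icon path assumes the os_openbsd.png icon that gets added later in this post, so adjust names to match your setup.

```
menuentry "OpenBSD" {
    icon   \EFI\refind\icons\os_openbsd.png
    loader \EFI\OpenBSD\bootx64.efi
}
```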

Now that you have an OpenBSD EFI bootloader in its own directory on the EFI partition, you’ll want to create the encrypted slice for the operating system install:

# disklabel -E sd0

sd0> a a
sd0> offset: <ENTER>
sd0> size: <ENTER>
sd0> FS type: RAID
sd0*> w
sd0> q

# bioctl -c C -l sd0a softraid0
New passphrase: <your favorite passphrase>
Re-type passphrase: <your favorite passphrase>
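For reference, the attach message bioctl prints looks something like this (the sd4 device number is an example and will vary with how many disks have already attached on your system):

```
softraid0: CRYPTO volume attached as sd4
```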

Pay attention to the virtual device name that bioctl spits out for your new encrypted “drive”. That’s what you will tell the OpenBSD installer to use. To re-enter the installer, type “exit” at the command prompt. Do your install of the operating system as you normally do. When you reboot, go into Windows.

First, download an icon for OpenBSD from here (or pick your favorite elsewhere). Next, bring up an administrative command prompt and use the following commands to mount the EFI partition and add the icon for OpenBSD:

C:\Windows\system32> mountvol R: /s
C:\Windows\system32> r:
R:> cd \EFI\refind
R:\EFI\refind> copy "C:\Users\<YOUR USER>\Downloads\495_openbsd_icon.png" icons\os_openbsd.png

Save any changes you made to refind.conf, exit Notepad, and reboot. rEFInd is smart enough to find your OpenBSD partition and use the icon you just added. When you select it from the rEFInd UI, you should be prompted for your OpenBSD disk encryption passphrase and be able to boot for the first time. I ran into a weird issue with my snapshot where it couldn’t download the firmware. I formatted a USB thumb drive as FAT32, downloaded the amdgpu, iwx, uvideo and vmm firmware from the firmware site, mounted the drive on my OpenBSD system and ran fw_update -p /mnt to install the firmware.

At this point, you should be able to reboot and select either Windows or OpenBSD from your rEFInd interface. My hope is that Lenovo will remove this absurd white-listing of the WWAN devices from their UEFI/BIOS code and I’ll be able to plug drives into it again; however, if (and this is more likely) they do not, I’ll at some point buy a 2TB m.2 NVMe drive for this machine, repeat this process and be able to add Linux to it.

I hope folks find this guide helpful.


OpenBSD 6.9 – Help with the “Failed to install bootblocks” issue

Hi everyone!

I purposely chose a non-catchy title so that it would be more easily found by the search engines, as this one has been a challenge for me in my last several laptop installs and I always manage to fix it after fiddling around for a while. This time around, I thought I’d actually produce a decent (hopefully!) write-up on just how I go about addressing the problem from scratch. This will provide two benefits: 1) I’ll have a nice step-by-step the next time I install my machine <grin>; and 2) it might help some other intrepid soul who is running into the same issue!

While the FAQ is always the best place to go for the most up to date steps on formatting and installing a system, I tend to run a “weird” setup that seems to confound the installer and most easily-accessible information. What I normally do in my Thinkpad laptops is install a second (or third) SSD or NVMe drive and then dedicate the entire disk to a given operating system. For example, if I’m running Windows 10 and OpenBSD 6.9 on my Thinkpad T480, I install Windows on the first drive (so that if my machine falls into evil hands and they power it on, it will just default boot into Windows and they might not even suspect OpenBSD is on the machine) and then I install OpenBSD onto the second drive. I then use the UEFI or BIOS boot menu to choose the OpenBSD drive to boot from.

Install Windows

I started off by installing Windows from a USB key to the primary drive in the laptop. As is my custom, after install, I put on all of the drivers and used the group policy editor to increase the BitLocker encryption from 128-bit AES to 256-bit AES. I also edited the registry to allow Outlook’s OST file to expand beyond the pitiful limit that it defaults to. After a reboot, I start the BitLocker encryption process and connect my email accounts.

If you are installing OpenBSD on a drive that has previously had something on it, it’s always a good idea to erase that drive first. I use the open source tool Eraser if I’m on Windows or good old dd if I’m on Linux. Eraser’s UI is a bit weird: it requires that you create a task that you can “run manually”, select the disk to be erased (in my case “Hard disk 1”), select an erasure method (I use Pseudorandom 1-pass), and then run the task manually.
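For the dd route on Linux, here is a hedged sketch of a one-pass random overwrite. The /dev/sdX device name is a placeholder (check lsblk first), and the live command is left commented out because it is irreversibly destructive; the demonstration below runs against a throwaway file-backed image instead:

```shell
# Real wipe (DESTRUCTIVE -- verify the device name with lsblk first):
#   dd if=/dev/urandom of=/dev/sdX bs=1M status=progress
# Safe demonstration against a 1 MiB file-backed image:
truncate -s 1M disk.img
dd if=/dev/urandom of=disk.img bs=64k count=16 conv=notrunc status=none
ls -l disk.img
```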

I then download the install69.img file from my favorite mirror (https://openbsd.cs.toronto.edu/pub/OpenBSD) and use Rufus to transfer it to a bootable USB drive. I reboot, hit <F12> to get a boot menu from the UEFI, select my USB drive and then boot into the OpenBSD installer.

Install OpenBSD

The first thing I do is look at my dmesg to see what devices my drives have been attached to:

# dmesg | grep -i sd

This shows (in my case) that my Windows drive is connected to sd0, my blank drive that I will put OpenBSD on is connected to sd1 and my USB installer device is connected to sd2. Next, I need to create the necessary /dev devices:

# cd /dev
# sh ./MAKEDEV sd1
# sh ./MAKEDEV sd2

If you do a quick ls, you should see that the MAKEDEV script created the necessary device files and you should be good to proceed to the next step. Next, we want to initialize the sd1 drive with a GPT partitioning scheme and create the initial EFI partition on the disk. Fun fact: the EFI partition (while its own partition type) is formatted as FAT32, so thanks, Windows 95! Here’s how you do this:

# fdisk -iy -g -b 960 sd1
# newfs_msdos /dev/rsd1i

Note my use of the /dev/r device (the raw device) and not /dev/sd1i (the block device) in that second command. I’m not entirely sure that it is necessary, but the nice Reddit post that got me thinking about how to do this used it, so why not, eh? If you get a weird error message running newfs_msdos, it is likely that you have some previous partitioning data on that drive and it would be a good idea to completely erase it (see above).

Now, we need to mount the new partition, create the necessary directory structure that UEFI looks for and put the UEFI loader file from our installer USB drive into that directory:

# mount /dev/sd2i /mnt
# mkdir /mnt2
# mount /dev/sd1i /mnt2
# mkdir -p /mnt2/efi/boot
# cp /mnt/efi/boot/* /mnt2/efi/boot

Now, we need to create the slice in the OpenBSD partition for the encrypted filesystem (you can skip this if you want to not have an encrypted drive):

# disklabel -E sd1

a a [ENTER]
offset: the default given
size: *
type: RAID
w [ENTER]
q [ENTER]

At this point, we have a slice set up as type “RAID” so we need to use the bioctl program to set up the encryption information along with the drive’s encryption password:

# bioctl -c C -l /dev/sd1a softraid0

You should see in the response to the above command the name of the new “virtual” encrypted disk. That is the disk that you will be installing OpenBSD onto. When you reach the question in the installation program about “Which disk is the root disk?”, enter that value (in my case, sd4). When it asks whether you want to “Use (W)hole disk MBR, whole disk (G)PT or (E)dit?”, pick the MBR option (I know, this is counterintuitive, but trust me here).

After the installer reboots the system, I press the [F12] key to get the boot menu (your key might be different if you aren’t running a Thinkpad) and select the disk I have installed OpenBSD on. I am immediately presented with the password prompt to decrypt the encrypted slice “virtual” disk and, upon entering it, I get the boot prompt. Everything proceeds as normal from that point forward and I am presented with the login prompt for my new system.

Updated Laptop Setup

If you are still with me, here is how I set up my OpenBSD desktop. (I get criticized slightly for making it “too heavy” with “too many packages”, but I have to use Ubuntu as well for what I do and I like the UI to be as consistent across the two operating systems as I can.) Therefore I install Gnome 3 along with some Gnome tweaks and plugins that give me the same theme and dock as Ubuntu.

To start out, I log in as root and enable my user account:

# echo "permit persist keepenv [my_non_root_user] as root" > /etc/doas.conf

At this point, I log out and back in as my unprivileged user account and work from there using the doas command to escalate privileges when needed. I start out by updating my system:

$ doas syspatch

Now, set up power management (this is a laptop):

$ doas rcctl enable apmd
$ doas rcctl set apmd flags -A
$ doas rcctl start apmd

I also add the following line to /etc/rc.conf.local (I haven’t cracked the code on how to do this with rcctl yet):

ntpd_flags=""

Now I need to make sure that I have the right level of resources available to my non-privileged user for tools like nextcloudclient (which opens a TON of files during its synchronization process). To do this I typically put myself in the “staff” and “operator” groups:

$ doas usermod -G staff,operator MY_USERNAME
$ doas usermod -L staff MY_USERNAME

I then make the following changes to the “staff” section in /etc/login.conf:

...
staff:\
  :datasize-cur=4096M:\
  :datasize-max=infinity:\
  :maxproc-max=512:\
  :maxproc-cur=256:\
  :openfiles-max=102400:\
  :openfiles-cur=102400:\
  :tc=default:

I then have to add a line to /etc/sysctl.conf to complete the work of allowing more open files on this system:

kern.maxfiles=102400

Now that I have modified all of this stuff and patched the system, it’s a good time to reboot.

Next, I add all of the packages I can’t live without (I know it seems like a small list, but they pull in a lot of others):

$ doas pkg_add gnome gnome-tweaks gnome-extras firefox chromium libreoffice nextcloudclient keepassxc \
   aisleriot evolution evolution-ews tor-browser shotwell gimp vim colorls cups reposync

A few changes to /etc/rc.conf.local are needed to boot into Gnome3:

$ doas rcctl disable xenodm
$ doas rcctl enable multicast messagebus avahi_daemon gdm cupsd

To avoid a kernel panic in my use case (I have multiple monitors connected through a Lenovo Thunderbolt/USB-C dock), I have to manually switch to the Intel DRM driver by adding the following section to my /etc/X11/xorg.conf:

Section "Device"
  Identifier "Intel Graphics"
  Driver "intel"
EndSection

At this point, it’s time to reboot and go into GUI land. If you run into a situation where you have a monitor mirrored and no way to turn that feature off, I have found that turning all of the monitors off and back on generally fixes things. Once I have everything the way I would like it, I then download the yaru-remix-complete theme and install it manually by doing this:

$ cd ~
$ mkdir .themes
$ cd .themes
$ mv ~/Downloads/yaru-remix-complete-20.04.tar.xz .
$ unxz yaru-remix-complete-20.04.tar.xz
$ tar xf yaru-remix-complete-20.04.tar
$ mv themes/* .
$ rmdir themes
$ doas mv icons/* /usr/local/share/icons
$ rmdir icons
$ doas mv wallpaper/* /usr/local/share/backgrounds/gnome
$ rmdir wallpaper
$ rm yaru-remix-complete-20.04.tar

Now launch gnome-tweaks and from the “Extensions” tab, turn on “user-themes”. Restart gnome-tweaks, go to the “Appearance” tab and select “Yaru-remix” for applications, icons, and shell. On the “Top Bar” tab, enable “Battery Percentage” and “Weekday”. In the “Window Titlebars” tab, enable “Maximize” and “Minimize”.

Next, we want to put the wonderful extension Dash-To-Dock into the environment. To download it, go to https://extensions.gnome.org/extension/307/dash-to-dock/ and pick the right shell version and extension version to match your install of Gnome Shell. You will have to install it manually because the Gnome Shell extension integration doesn’t appear to be enabled for OpenBSD:

$ cd ~/Downloads
$ unzip dash-to-docmicxgx.gmail.com.v67.shell-extension.zip
$ cat metadata.json

The value for “uuid” in that file is what you want to use in the next step:

$ mkdir -p ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ cd ~/.local/share/gnome-shell/extensions/dash-to-dock@micxgx.gmail.com
$ unzip ~/Downloads/dash-to-docmicxgx.gmail.com.v67.shell-extension.zip

At this point, reboot to pick up the changes you’ve made, log in and launch gnome-tweaks again. On the “Extensions” tab, enable dash to dock. From the settings gear icon, select “extend to edge” and “show on all monitors” and you should have a very serviceable dock that is quite similar to the one in Ubuntu.

I then switch the terminal to “White on Black” for a better look and a 16-point font, and pin my favorite apps to the dock. Now for some terminal-level tweaks. I typically edit my ~/.profile file and add a couple of things:

export PS1="\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$ "
export ENV=$HOME/.kshrc
export CVSROOT=/home/cvs

I then edit the ~/.kshrc file to add some aliases:

alias ls="colorls -G"
alias vi="vim"

A couple of other changes I typically make include turning off suspend when I’m plugged in (Settings | Power | Automatic Suspend), setting Firefox as my default browser (Settings | Default Applications), and setting my Time Format to “AM/PM” instead of “24-hour” (Settings | Date & Time).

I also take a moment to switch to “View -> User Interface -> Tabbed” in the Write, Calc, and Present applications in LibreOffice. This gives an interface reminiscent of the one in Microsoft Office – which I find helpful in terms of standardizing my workflow across operating systems.

After installing the appropriate browser security plugins and configuration changes from my favorite https://privacytools.io site, it’s time to set up CVS on my system for development purposes. To do this, I always double-check the AnonCVS link from the OpenBSD website left navigation panel and follow the steps to:

  1. Pre-load the source tree (for src, sys, ports and xenocara)
  2. Follow the instructions to give your non-root user write access to the src, ports and xenocara directories
  3. Mirror the repository with reposync (Note: I have had the best luck using anoncvs.comstyle.com as my mirror)

I then typically will add a crontab entry to keep things in sync:

$ doas crontab -e

...
0    */4    *    *    *    -n su -m cvs -c "reposync rsync://anoncvs.comstyle.com/cvs /home/cvs"

After syncing up my NextCloud data and my email data, I now have what I consider to be a secure, fully-functional OpenBSD laptop, configured the way I like it.


Let’s Talk Password Vaults

When “civilians” ask me what the most important thing they can do to protect the security of their home computers is, I always answer the same way – make sure you patch, and do so automatically! However, as Windows 10 has finally started defaulting to this behaviour (and they seem to be taking security way more seriously at Microsoft these days), my next favourite recommendation for folks is that they invest energy in a password vault.

For the uninitiated, a password vault is a piece of software that stores the passwords you use on various services and then encrypts them with a master password so that they are safe. “Since I use password123 as the password for all of the sites I visit, why would I need that”, you might be saying. Arrrrggggghhhh!!!!

You should use a unique, long, complex and randomly-generated password for every site you visit! How can anyone who is not superhuman do that? Well, it’s a bit circular but see above – a password vault. The good ones will even help you generate passwords and give you a health report on the ones you store in it, indicating that they might not be long enough, etc.
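As a rough illustration of what “randomly generated” means here, a hedged shell one-liner in the same spirit (the vault’s built-in generator remains the better tool, since it tracks character classes and entropy for you):

```shell
# Draw bytes from the kernel CSPRNG, keep only password-safe characters,
# and emit a 24-character password.
tr -dc 'A-Za-z0-9!@#%^&*' < /dev/urandom | head -c 24; echo
```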

Ah, but my fellow paranoids might be thinking that this puts all of your eggs in one basket. And if you store them in the cloud (someone else’s computer) then OMG! Doomsday scenario! Well, I have a plan for that (no, I’m not secretly Elizabeth Warren)! Be all self-hosted with it!

So, how do I recommend setting things up? First things first, you need a place to store the password file. You could put it on your local hard drive but that would make it difficult to use it across your multiple devices (most everyone has a smart phone these days and you want to be as secure on that device as you are on your home computer). As I always recommend, put that file on a server in a country that has strong privacy laws and isn’t part of the dreaded Fourteen Eyes. Switzerland is a good choice and there are Swiss owned VPS providers who will give you a small virtual server for a reasonable monthly fee.

I recommend the open source project “NextCloud” as a good self-hosted service to run for this purpose. It is incredibly flexible and has a very active community around it creating all sorts of plugins, etc. You can buy space on a public NextCloud server but that would defeat the whole purpose of having the control of the server yourself and putting it in a country that is safer. There is a great tutorial on DigitalOcean for setting up NextCloud on an Ubuntu LTS release that I’d recommend you reference. While you are at it, take a look at their “initial Ubuntu server setup” for some other security recommendations. Add to it a LetsEncrypt certificate with automatic renewal set up and you have a pretty decent platform for storing your files.

OK. There are two ways you can get your password file to/from the server. You can either share it directly from your NextCloud server using WebDAV or you can just install the NextCloud desktop client software (available for pretty much every operating system) to sync a local folder with a folder on your NextCloud server. Typically I use the sync solution for desktops/laptops and the WebDAV solution for my mobile devices.

Now, you have a place to store files, but what file are you going to store there? More specifically, what password vault software do I recommend if you want to go the self-hosted route? Well, I really, really, really like KeePassXC as my password vault software and file format of choice. It’s well-written, free and open source – what isn’t there to love about it?

To set it up, install the software on your desktop/laptop and create a new password database in your directory you are syncing to NextCloud. Make sure you pick a complex, random, long password that you can remember without writing it down as the master password for the vault. If you want to get even more secure, check out the Yubikey option for two factor authentication for your vault. You can also set up the browser extension for it if you want the convenience, but keep in mind it does increase your attack surface so you might just want to go old school and copy paste the credentials from the KeePassXC client software.

For Android and iOS, I use the app “Strongbox” and, as I mentioned above, use WebDAV over https as the way I read and write the file from my NextCloud server. The end result is that I have a single, secure password file that, even if my NextCloud server is compromised, is encrypted and would be a nightmare to try and hack your way into given the length, complexity and randomness of my master password.

KeePassXC has some really nice features you probably want to start leveraging right away. It has a great random password generator so that you can create crazy complex passwords that are unique for each service you use. In addition, it has a “Health Check” report that you can run to check up on your stable of credentials to make sure you aren’t re-using any of them or have some that are not complex enough.

In addition there is an integration with “HaveIBeenPwned” that allows you to check to see if any of the credentials you use have been exposed in a data breach. It does so by sending a secure, cryptographic hash of part of your password to the service so your risk is minimal other than your IP address being exposed to the service. All in all, I trust the author of the service and think it’s a great thing to do periodically.
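That k-anonymity scheme is simple enough to sketch in the shell: only the first five hex characters of the SHA-1 digest ever leave your machine, and the service answers with every known suffix sharing that prefix. The endpoint shown is HIBP’s public range API; the network call is left commented out to keep the sketch offline:

```shell
# SHA-1 the candidate password locally; only the first five hex characters
# of the digest are ever sent to the service.
hash=$(printf '%s' 'password' | sha1sum | awk '{print toupper($1)}')
prefix=$(printf '%s' "$hash" | cut -c1-5)
suffix=$(printf '%s' "$hash" | cut -c6-)
echo "prefix sent to HIBP:    $prefix"
echo "suffix checked locally: $suffix"
# The actual query (commented out to keep this sketch offline):
# curl -s "https://api.pwnedpasswords.com/range/$prefix" | grep -i "$suffix"
```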

Finally, I recommend taking a look at the security settings in your KeePassXC client or your Strongbox app. There is a feature that clears your clipboard and logs you out of the application after a period of inactivity. That’s literally the first thing I turn on when I install either application, because it keeps a thief who steals your device from finding it still logged into what is probably the most security-sensitive thing you have.

All in all, I hope you enjoyed this post. I really do think that password vaults are an incredibly important development in the field of cybersecurity and would encourage everyone to use them, even if you want to go with a commercial one that you don’t have to self host.


Fast Follower – Even More Privacy Centric DNS!

After posting this blog entry, I had a number of people reach out to tell me that, while my configuration recommendations were good, there was something new in the DNScrypt protocol that I could take advantage of to make my DNS even more obfuscated – “Anonymized DNS”.

The way this protocol works is that you send your encrypted DNS request to a relay (which can’t decrypt it). The relay then sends the encrypted request to your resolver. The resolver decrypts the request, resolves it and sends the encrypted answer back to the relay. The relay (which can’t decrypt the answer either), sends the encrypted answer back to you where you can decrypt it and stick it in the cache.

If you followed all of that, the relay (even if it is evil and logging the heck out of everything you do) can’t know what your request is. It only passes it along. The resolver (again, it could be evil and logging stuff too) does know your request but doesn’t know that it is coming from you! It only knows that the request came from the relay.

This is a pretty cool thing when it comes to making your DNS resolution even more private!

To make this work with the setup (regardless of the operating system), you simply need to add one more section to your dnscrypt-proxy.toml configuration file:

[anonymized_dns]
routes = [
    { server_name='cs-ch', via=['sdns://gRE1MS4xNTguMTY2Ljk3OjQ0Mw'] },
    { server_name='faelix-ch-ipv4', via=['sdns://gRMxODUuMjUzLjE1NC42Njo0MzQz'] },
    { server_name='yofiji-se-ipv6', via=['sdns://gS5bMmEwMjoxMjA1OjM0ZTc6OGUzMDpiMjZlOmJmZmY6ZmUxZDplMTliXTo4NDQz'] }
]

The server_name fields should be recognizable because they are the ones that you listed for your resolvers. The sdns mumbo-jumbo is the name of the relay you are asking to use for each one of them. They are published on this page. I’d recommend picking ones in countries you like and that are different providers from the resolvers. If you have an evil service provider that runs both a relay and a resolver, they could piece the traffic back together. If they are different entities, it is far less likely that they would.
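Those sdns:// stamps aren’t actually opaque: they are base64url-encoded binary, and for a relay stamp that means a one-byte stamp type followed by a length-prefixed address string. A hedged sketch of peeling one open with standard tools (this particular stamp happens to contain no base64url-specific characters, so plain base64 works once the padding is restored by hand):

```shell
# Decode sdns://gRE1MS4xNTguMTY2Ljk3OjQ0Mw (the first relay above).
# Byte 1 is the stamp type (0x81 = DNSCrypt relay) and byte 2 the address
# length; the remainder is the relay's IP:port in plain ASCII, so we skip
# the first two bytes and print the rest.
stamp='gRE1MS4xNTguMTY2Ljk3OjQ0Mw=='   # '=' padding restored by hand
printf '%s' "$stamp" | base64 -d | tail -c +3
echo
```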

Enjoy!


Fixing an OpSec Hole…

As return readers of this blog know, I try pretty hard to maximize my privacy and security online and also share what I’ve learned with the readers. One (in retrospect) painfully obvious hole, however, in my operational security (OpSec for the cool kids) is that I use the same bloody username for most of my online accounts. It doesn’t take a data brokerage genius to figure out that all of these accounts are owned by the same person. Duh!

So if you are a creature of habit like I am and would like to improve your operational security, I thought this would be a helpful post. I’m going to outline just how to do this along with some tools that are pretty useful as well. First off, what’s the best way to come up with a new username for a service that won’t give away who you are? Turns out, there are a variety of websites that will generate readable, random usernames for you. One that I found particularly helpful was from LastPass, a password vault application.

Thus armed, it’s now down to the laborious process of figuring out if you can rename your account on a variety of services and, if not, deleting and recreating said account by hand. You might also want to consider deleting some of these accounts if you don’t use them any more. That will reduce your personal attack surface in the event that one of these services is breached.

Just for fun, I have a set of links below that I discovered that should save you some time if you frequent these sites / services. What’s surprising is how many of these services do not let you rename your account. For those, it’s best to delete the account unless there are specific digital purchases tied to it (damn you, Steam!).

After you think you are done, do yourself a favour and do a DuckDuckGo search of your commonly-used username. You might find some accounts out there that you had forgotten about. Chances are, if you forgot them you probably don’t use them so take the opportunity to delete them and decrease your attack surface further.


How To: Privacy-centric DNS

For those who aren’t as technically minded, it’s worth talking about what happens when you type a URL into your web browser and how that impacts privacy. The first thing is that the string you type in, say https://mycoolsite.com, needs to be turned into a numeric IP address. This is done using a protocol called the “Domain Name System”, or DNS, that is as old as the hills. When you get an IP address from your Internet Service Provider (ISP), it’s usually done using something called Dynamic Host Configuration Protocol, or DHCP. At the same time you get that IP address, the ISP’s DHCP server will generally pass along other configuration parameters, such as your DNS server.

OK, so that’s the mechanics of it. What is the privacy implication? Well, ISPs like to make money. And while they make money from you by providing you with Internet service, they also like to make money in other ways. One of those is selling information about you to data brokers and advertisers. Since they are turning your URLs into IP addresses using their servers, it’s a pretty easy thing for them to sell the URLs that you like to visit to those data brokers and advertisers. Given that they have to log this information in order to sell it, they can also turn it over to government agencies when they are either subpoenaed or (worst case) handed a national security letter, which they can’t even disclose that they were given.

In other words, your ISP knows every web site you visit and happily tells other people about it that you might not want told. Let that sink in for a moment. All of the work you might (or might not <sigh>) do to stay private online goes out the window when you type that address into your URL bar. By the way, this also goes for your mobile service provider. When you are out and about (i.e. not on WiFi), your cell service provider is your ISP and they have all of that same power and information. Yikes, eh?

So, what can you do about it? Well, switching to some other mainstream DNS provider like Google (with the famous 8.8.8.8 server) or Cloudflare doesn’t really help, because who trusts their motives? You could use a privacy-respecting DNS provider that doesn’t log queries and has a warrant canary (a mechanism where they regularly post on their website that they haven’t been issued a national security letter – until they stop posting it, which means they have), but since the DNS protocol is as old as the hills, that traffic is sent unencrypted over the Internet and an adversary could capture it anyhow.

What to do? Well, there is a clever multi-part solution that I’m going to outline here for Linux, OpenBSD (of course) and Windows. It involves running a local DNS resolver on your laptop or desktop machine (why let your ISP resolve names when you can do so yourself?) and, when your local resolver doesn’t know the answer, using a protocol called DNScrypt to send that DNS request, encrypted, upstream to a DNS server that doesn’t log and is privacy respecting. It’s not airtight (that upstream server could be lying about being privacy respecting, or could be compromised and not even know it) but like most privacy and security work, the goal isn’t to be perfect – it’s to make yourself a much harder target so that the bad guys look for easier sheep to fleece.
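
The division of labour can be sketched in a few lines of Python. Here `encrypted_upstream` is a hypothetical stand-in for the DNScrypt hop, not a real client – the point is just that only cache misses ever generate upstream (i.e. observable) traffic:

```python
# Sketch of the two-layer design: a local caching resolver that only
# goes upstream (over the encrypted transport) on a cache miss.

class CachingResolver:
    def __init__(self, upstream):
        self.upstream = upstream       # callable: hostname -> IP string
        self.cache = {}
        self.upstream_queries = 0      # how many queries left the machine

    def resolve(self, hostname):
        if hostname not in self.cache:         # cache miss: ask upstream
            self.upstream_queries += 1
            self.cache[hostname] = self.upstream(hostname)
        return self.cache[hostname]            # cache hit: no network traffic

def encrypted_upstream(hostname):
    # Placeholder for a DNScrypt query to a no-log resolver (fake data)
    return {"mycoolsite.com": "203.0.113.7"}.get(hostname, "0.0.0.0")

resolver = CachingResolver(encrypted_upstream)
resolver.resolve("mycoolsite.com")
resolver.resolve("mycoolsite.com")   # second lookup served from the cache
print(resolver.upstream_queries)     # only one upstream query was needed
```

In the real setup below, dnsmasq (or unbound on OpenBSD) plays the caching role and dnscrypt-proxy plays the encrypted-upstream role.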

Linux

For my example in Linux, I’ll be using Ubuntu 20.10 as the target operating system and version, so if you are on a different distribution your mileage may vary, but the building blocks will be the same. I based this how-to on a combination of a Reddit post that I found while searching DuckDuckGo (you REALLY should stop using Google, and who uses Bing anyhow?) and a great article on LinuxConfig that I found the same way.

First things first, you need to install the necessary open source tools:

# apt install dnscrypt-proxy dnsmasq

Next, add the line “dns=127.0.0.1” to the [main] section in your local /etc/NetworkManager/NetworkManager.conf file. This tells NetworkManager to hand DNS duties to the resolver running locally on your machine (hence the localhost address).

After that is out of the way (please don’t reboot now because you won’t have the necessary other bits configured, please be patient <grin>), edit your local /etc/dnscrypt-proxy/dnscrypt-proxy.toml configuration file to set up the upstream server you want to use. I’d recommend taking a look at this list because 1) it’s from the documentation site for DNScrypt; and 2) it has a list of servers that are privacy respecting and don’t log connections. I’d also recommend refreshing your memory as to who the “Fourteen Eyes” nations are who do intelligence sharing with the US and try to pick a resolver that is in a country not on that list. Just sayin’…

Anyhow, add the following lines in the global section of the file:

listen_addresses = ['127.0.0.1:53000', '[::1]:53000']
server_names = ['cs-ch', 'faelix-ch-ipv4', 'yofiji-se-ipv6']

I chose to use multiple servers in my configuration to make it resilient in case one of them happens to fail. I’m making the assumption (check my math on this) that the first two (the ones in Switzerland – my privacy-respecting country of choice) are the primary servers and the third is there as a back-up in case the first two don’t respond.
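
For illustration, the failover idea looks roughly like this in Python – a hypothetical sketch, since dnscrypt-proxy’s real server selection also weighs measured latency rather than strictly walking the list:

```python
# Hypothetical sketch of failing over between configured servers in order.

def resolve_with_failover(hostname, servers, query):
    """Try each server in order; return (server, answer) from the first success."""
    for server in servers:
        try:
            return server, query(server, hostname)
        except OSError:
            continue  # server down or unreachable: fall through to the next one
    raise OSError("no configured server responded")

def fake_query(server, hostname):
    # Simulate the first Swiss server being unreachable (fake data throughout)
    if server == "cs-ch":
        raise OSError("timeout")
    return "203.0.113.7"

servers = ["cs-ch", "faelix-ch-ipv4", "yofiji-se-ipv6"]
used, ip = resolve_with_failover("mycoolsite.com", servers, fake_query)
print(used, ip)  # faelix-ch-ipv4 203.0.113.7
```

Either way, with three servers configured, any single failure upstream shouldn’t leave you without name resolution.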

Because I couldn’t get the “listen_addresses” bit working from the config file (I left it in there just in case), I also edited the /etc/systemd/system/sockets.target.wants/dnscrypt-proxy.socket file to specify the port I want this service running on. In the [Socket] section, I modified the two lines to use the port 53000 setting from the config file:

ListenStream=127.0.2.1:53000
ListenDatagram=127.0.2.1:53000

Now that we have DNScrypt set up, we need to set up the dnsmasq service to be our local, lightweight caching resolver that forwards its upstream resolution requests to DNScrypt. To do this, we need to edit its configuration file /etc/dnsmasq.conf and add the following lines (the default file has everything commented out so instead of searching for them and un-commenting them, I just slammed this at the end of the file):

no-resolv
server=::1#53000
server=127.0.0.1#53000
listen-address=::1,127.0.0.1
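
If you script this step across several machines, you probably want it to be idempotent so re-running it doesn’t duplicate lines. A hedged Python sketch (writing to a scratch temp file here rather than the real /etc/dnsmasq.conf, which would need root):

```python
import os
import tempfile

# The four dnsmasq settings from above
DNSMASQ_LINES = [
    "no-resolv",
    "server=::1#53000",
    "server=127.0.0.1#53000",
    "listen-address=::1,127.0.0.1",
]

def append_missing(path, lines):
    """Append only the config lines not already present; return what was added."""
    try:
        existing = open(path).read().splitlines()
    except FileNotFoundError:
        existing = []
    missing = [line for line in lines if line not in existing]
    with open(path, "a") as f:
        for line in missing:
            f.write(line + "\n")
    return missing

# Demo against a throwaway path standing in for /etc/dnsmasq.conf
path = os.path.join(tempfile.mkdtemp(), "dnsmasq.conf")
added = append_missing(path, DNSMASQ_LINES)
print(f"added {len(added)} line(s)")  # a second run would add 0
```

The check-before-append is what keeps repeated runs from stacking up duplicate `server=` entries.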

After setting this up, there is a REALLY IMPORTANT STEP you need to do and that is to disable systemd-resolved:

# sudo systemctl disable systemd-resolved

After you have done this, you should be able to reboot and resolve DNS names (easy test, go to a site you rarely visit in your browser). To verify that it is your privacy-respecting configuration that is doing this, try the following tests:

# sudo lsof -iTCP:53000 -sTCP:LISTEN

You should see several lines showing that the process ‘dnscrypt-proxy’ running as user ‘_dnscrypt-proxy’ is listening on this socket.

Next, check who is listening on port 53 (the “regular” DNS port):

# sudo lsof -iTCP:53 -sTCP:LISTEN

You should see that the process ‘dnsmasq’ running as user ‘dnsmasq’ is listening on this socket. So far so good.

Now, use the ‘dig’ command to force the resolution of a domain name that you don’t normally visit:

# dig -t A microsoft.com @127.0.0.1

You should get a display of the IPs that match that name. Finally, if you are really paranoid (and I am), verify that with dnscrypt-proxy turned off you CANNOT resolve DNS queries:

# systemctl stop dnscrypt-proxy
# systemctl stop dnscrypt-proxy.socket
# dig -t A oracle.com @127.0.0.1
# dig -t A oracle.com

This should fail to resolve things. Make sure you restart the two systemd services after you have verified this.
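
Under the hood, dig is just sending a DNS packet over UDP and waiting for a reply. The following self-contained Python sketch shows that round trip against a throwaway in-process responder (so it doesn’t depend on your resolver being up – a real test would point at 127.0.0.1 port 53):

```python
import socket
import threading

def udp_responder(sock):
    """Toy stand-in for a resolver: echo the query back to the sender."""
    data, addr = sock.recvfrom(512)
    sock.sendto(data, addr)  # a real resolver would append answer records

# Throwaway "resolver" on an ephemeral loopback port (like 53000 in the setup)
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=udp_responder, args=(server,), daemon=True).start()

# The client side of what dig does: fire a datagram, wait for the answer
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
query = b"\x12\x34\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00"  # bare 12-byte DNS header
client.sendto(query, ("127.0.0.1", port))
reply, _ = client.recvfrom(512)
print("got", len(reply), "bytes back")
```

When dnscrypt-proxy is stopped, nothing answers that datagram upstream of dnsmasq – which is exactly why the dig commands above time out.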

Congratulations. You have just significantly improved the privacy of your machine. Now go do this on all of your Linux systems.

OpenBSD

I listed these sections in alphabetical order so don’t read into my choice of having “Linux” first as an indication that I don’t love me some OpenBSD! For this part, I used my coreboot+tianocore Thinkpad T440p running OpenBSD 6.8 off of a secondary drive (the primary drive had the fresh Windows 10 install on it for the Windows version of this how-to).

First things first, we need to install dnscrypt-proxy:

# pkg_add dnscrypt-proxy

The configuration file is logically /etc/dnscrypt-proxy.toml and you need to edit it and add our server_names and listen_addresses to it:

server_names = ['cs-ch', 'faelix-ch-ipv4', 'yofiji-se-ipv6']
listen_addresses = ['127.0.0.1:53000', '[::1]:53000']

Once you have that saved, you should enable the daemon using the following command:

# rcctl enable dnscrypt_proxy

Note the underscore instead of a dash there for the daemon name. Now start it up:

# rcctl start dnscrypt_proxy

To verify that the daemon is running and listening on our desired port, simply use netstat:

# netstat -an | grep LISTEN

You should see that there is a process listening on port 53000 in both localhost IPv4 and IPv6.
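
If you prefer to script that check, here’s a rough Python equivalent of the netstat test. It probes a TCP port on localhost (dnscrypt-proxy also listens on UDP, which this doesn’t cover), demonstrated against a throwaway listener rather than the real daemon:

```python
import socket

def is_listening(port, host="127.0.0.1"):
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded

# Throwaway listener on an ephemeral port, standing in for port 53000
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(is_listening(port))   # True while the listener is up
listener.close()
```

On the real box you’d call `is_listening(53000)` and `is_listening(53)` to confirm both daemons are up.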

Now we need to get a local resolver running and forwarding to DNScrypt. Fortunately, OpenBSD has a really nice built-in one called unbound. Enable and start it using the following commands:

# rcctl enable unbound
# rcctl start unbound

Now we need to tweak our dhcp configuration (if you are not running a static IP address) to override any “suggested” name server from your dhcp server. To do that, edit the /etc/dhclient.conf file and add the line:

supersede domain-name-servers 127.0.0.1;

Restart your network with the following command:

# sh /etc/netstart

And then check your /etc/resolv.conf file to verify that 127.0.0.1 is listed as the name server and that no others are.

Use the following commands to verify that your DNS resolution is working using unbound locally:

# dig -t A microsoft.com @127.0.0.1
# dig -t A microsoft.com

Assuming that goes well, now we need to edit unbound’s configuration file to use localhost port 53000 for our upstream resolver. That file is in /var/unbound/etc/unbound.conf and you need to edit the forward-zone section to look like this:

forward-zone:
        name: "."
        forward-addr: 127.0.0.1@53000

In addition, in the “server:” section of the file, you need to add the following line in order to get the forwarding working properly between the unbound daemon and the dnscrypt-proxy daemon. Without it, localhost will be ignored for queries:

do-not-query-localhost: no

Use the following commands to restart the service and validate that name resolution is working:

# rcctl restart unbound
# dig -t A oracle.com @127.0.0.1
# dig -t A oracle.com

Now, to verify that you have unbound listening on port 53 and dnscrypt-proxy listening on port 53000, run the following:

# netstat -an | grep LISTEN

If you see listening processes on both the IPv4 and IPv6 ports, you should be good. Finally, to verify that the upstream requests are being forwarded to dnscrypt-proxy, stop its daemon, test using dig (the queries should now fail) and then restart it:

# rcctl stop dnscrypt_proxy
# dig -t A oracle.com @127.0.0.1
# rcctl start dnscrypt_proxy

Windows

OK. Whew. I’m going to be a bit out of my depth on this one (I’m not a big-time Windows power user any more) but the Internet has some good stuff on it so here goes nothing. For my setup, I had a dead-fresh install of Windows 10 on my coreboot + tianocore Thinkpad T440p that I’m using. I figured that way no extra configuration would get in there that might mess things up.

I figured there are a lot of bad actors on the Internet who might post some bad info on how to set this up in such a way as to route all of your traffic to them (forgive me, I’m paranoid <grin>), so I thought the best place to go would be the GitHub repository for the DNScrypt-proxy project.

From there, I clicked the link to download the latest version of DNScrypt-proxy for Windows. Given that, I read the documentation on how to install the service myself, not using a possibly compromised “helper”. Also, this helps me learn where the files are and how to configure things which is always good. I’ll document my process here. From this point forward, assume that I’m running a PowerShell command prompt with the “run as Administrator” option.

I copied the contents of the “win64” directory to C:\Program Files\DNScrypt-proxy to start things off. Best to have it in a “standard” (I think) place. I then moved to that as my current directory. Next, I copied the example-dnscrypt-proxy.toml file to “dnscrypt-proxy.toml”:

PS > copy example-dnscrypt-proxy.toml dnscrypt-proxy.toml

Edit the file using notepad (I guess) to have the same server_names line that we had for the Linux install above:

server_names = ['cs-ch', 'faelix-ch-ipv4', 'yofiji-se-ipv6']

Now, let’s make sure everything is working by running ./dnscrypt-proxy from that PowerShell prompt:

PS > ./dnscrypt-proxy

You should see some diagnostic information finishing up with “dnscrypt-proxy is ready – live servers: 2” in your PowerShell prompt.

The instructions are a little off on the version of the page I was looking at (or maybe my Windows 10 fu is weak). Anyhow, I went old-school and went to my “Wi-Fi” settings in the UI and selected “Change adapter options” from the link on the right. I then went to my “Ethernet” and “Wi-Fi” icons, right clicked on them, selected “Properties” and then went to the “Internet Protocol Version 4 (TCP/IPv4)” item and clicked the “Properties” button.

For both adapters (Ethernet and Wi-Fi), I selected the “Use the following DNS server addresses” radio button and entered 127.0.0.1 as the “Preferred DNS server” address and 9.9.9.9 as the “Alternate DNS server” per the instructions. Just for shits and giggles I checked the “Validate settings upon exit” checkbox and hit OK. I didn’t get any error messages so I’m assuming it’s good.

I then repeated the process with the IPv6 properties, setting the “Preferred DNS Server” to “::1” (without the quotes) – the equivalent of localhost in IPv6 land. I also set the “Alternate DNS server” to “0:0:0:0:0:ffff:909:909” which is the IPv6 address of that server.

Back to the PowerShell prompt, I hit “Ctrl+C” to break out of the dnscrypt-proxy process that was running and executed the following command:

PS > ./dnscrypt-proxy -resolve example.com

This successfully verified that I was able to resolve the DNS query using my configuration file. Now for the fun part, installing the service. To do this, run the following command:

PS > ./dnscrypt-proxy -service install

If you don’t get any error messages, start the service:

PS > ./dnscrypt-proxy -service start

Assuming you are error free there, you actually have a working (if without a local caching resolver) install of DNScrypt-proxy. The final step is to fiddle with a Windows 10 group policy setting for the Network Connectivity Status Indicator (NCSI) to prevent it from showing your network as “offline”.

To do this, run “gpedit.msc” to launch the Group Policy Editor. From there, drill down to:

Computer Configuration -> Administrative Templates -> Network -> Network Connectivity Status Indicator

In the right-hand pane, select the “Specify global DNS” policy and click the “Enabled” radio button. Now check the “Use global DNS” checkbox and hit OK.

At this point you should be safe to reboot and verify that things are working. After the reboot, bring up a “run as Administrator” PowerShell prompt and run the following command:

PS > netstat -a -b

You should see dnscrypt-proxy.exe listening on port 53 of the localhost IP address. Bring up a browser and hit a site you rarely visit to verify that you have name resolution working.

I don’t have the local cache part of this working on Windows yet but all I’m losing there is efficiency because I’m doing the DNS lookup each time I need to resolve something. All said, I think I improved the privacy of my Windows 10 install by a non-zero amount! 🙂

Mobile?

So what about mobile? That tends to be a platform we all spend a lot of time on these days and I’d hate to leave it out. I can’t truly test that this works the way I can on desktop operating systems (where I can use tcpdump, turn services on and off, etc.), but some sites that I trust lay down recommendations that appear to be working when I test them. Therefore, use this at your own risk and do whatever testing makes sense for you and your threat model.

For iOS, I found the following link from PrivacyTools.IO (one of my favorite sites) that walks you through how to use an app on the iPhone to run dnscrypt-proxy as your local DNS resolver on iOS. Essentially, you need to install the app, click on the “edit” button in the toolbar when you launch it and add our standard “server_names” line below, then hit the checkmark button in the toolbar to save it:

server_names = ['cs-ch', 'faelix-ch-ipv4', 'yofiji-se-ipv6']

Then, you will want to click on the “hamburger button” in the toolbar on the left and turn on the “Connect On Demand” general option. After that I rebooted my phone just for good measure and I appear to be up and running.

For Android, I found an application on the Google Play store called Quad9Connect. It appears to use secure DNS to access servers but it lacks the fine-grained control I would like in order to specify which countries I want those DNS queries to be tunnelled through.

Another Android app that I have seen recommended on a variety of sites is InviZible Pro. This app allows you to have more control over your DNScrypt-proxy setup. The settings I’m running with are to turn off the Tor support, leave the DNSCrypt support on and set it to run at boot-time. I also turn on the “require_nolog” feature in the DNSCrypt Settings page. Finally, I go into the Network & Internet settings in Android, select VPN, select settings for InviZible Pro and enable “Always-on VPN”.

Posted in Uncategorized | 2 Comments