Tinker Take Two

I bought another Raspberry Pi 4B to replace an old 3B+ that did not want to play along any more. It had been acting as a web server, so it will need less software than the job scheduling server. The old server had been running Raspbian, but I am so satisfied with Alpine that I decided to switch, so I followed the first tinkering guide, but only installed:

apk add nano nodejs npm screen sudo

I only need Node.js, since that is what the web server runs on. After that I wanted to harden the system, but it turns out that ufw has moved to the edge community repository. To activate it, edit /etc/apk/repositories.

nano /etc/apk/repositories

Add a repository tagged @community for it, and if you, like me, want the kakoune text editor, then also add one tagged @testing, making the contents look as follows.

#/media/mmcblk0p1/apks
http://ftp.acc.umu.se/mirror/alpinelinux.org/v3.12/main
#http://ftp.acc.umu.se/mirror/alpinelinux.org/v3.12/community
#http://ftp.acc.umu.se/mirror/alpinelinux.org/edge/main
@community http://ftp.acc.umu.se/mirror/alpinelinux.org/edge/community
@testing http://ftp.acc.umu.se/mirror/alpinelinux.org/edge/testing

Update to get the new package lists, then add and configure ufw.

apk update
apk add ufw@community
rc-update add ufw default 
ufw allow 2222 
ufw limit 2222/tcp
ufw allow 80
ufw allow 443

After that I followed the guide to disallow root login, enable ufw, and reboot, with one exception. When editing sshd_config I also changed to a non-standard port to get rid of most script kiddie attempts to hack the server. Find the line with:

#Port 22

and uncomment and change this to a port of your liking, for example:

Port 2222
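On the client side you can avoid typing the non-standard port every time by putting an entry in ~/.ssh/config. This is just a sketch; the alias, address, and username are examples to be adjusted to your own setup.

```
Host pi
    HostName 192.168.1.50
    Port 2222
    User myusr
```

After that, connecting is just ssh pi.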

Trust by Certificate

After logging in as the non-root user I created when following the guide, I can still switch to root by using su. I need to add certbot, which keeps the server’s certificate up to date, and restore the contents of the www folder.

su
apk add certbot@community
cd /var
mount -t cifs //nas/backup /mnt -o username=myusr,password=mypwd
tar xvzf /mnt/www.tar.gz

With that in place, it’s time to update the certificates.

certbot certonly

Since I haven’t started any web servers yet, it’s safe to select option 1 and let certbot spin up its own. After entering the necessary information (you probably want to say “No” to releasing your email address to third parties), it’s time to schedule certbot to run daily. It will renew any certificates that are about to expire within the next 30 days.

cd /etc/periodic/daily
nano certbot.sh

The contents of this file should be (note that Alpine uses ash and not bash):

#!/bin/ash
/usr/bin/certbot renew --quiet

After that, make that file executable.

chmod +x certbot.sh

With that in place I can start my own web server. It’s an extremely simple static server. The Node.js code uses the express framework and is found in a script named static.js with the following contents.

// Minimal static file server using the express framework
var express = require('express');
var server = express();
// Serve everything under the "static" subdirectory from the site root
server.use('/', express.static('static'));
server.listen(80);

The HTML files reside in a subdirectory named “static”. For now I run the server in a screen, but will likely add a startup script at some point.
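Since Alpine uses OpenRC, such a startup script could look something like the following sketch, in the same spirit as the other service scripts in these guides. The path to static.js is an assumption; point it at wherever the script actually lives.

```
#!/sbin/openrc-run

name="static"
# Path to the server script is an example; adjust to your setup
command="/usr/bin/node /var/www/static.js"
pidfile="/var/run/$SVCNAME.pid"
command_background="yes"
```

Saved as /etc/init.d/static and made executable, it could then be added to the default runlevel with rc-update add static default.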

Superuser Do and Terminal Multiplexing

Since the server will listen on the default port 80 I need sudo privileges to start it. The recommended way is to let members of the wheel group use sudo. Depending on what you picked for a username, exemplified by “myusr” here, run the following.

echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel
adduser myusr wheel
exit
whoami
exit

The first exit returns you to your normal user, since you had been root from running su earlier. The second exit ends your session; you will have to log in again for the wheel membership to take effect. Once logged back in, start the server inside a screen session.

screen
sudo node static.js

This will run the server in the foreground, so to detach the screen without cancelling the running command, press “Ctrl+a” followed by “d”. To check which screens are running you can list them.

screen -ls

This will list all screens:

There is a screen on:
3428.pts-0.www (Detached)
1 Socket in /tmp/uscreens/S-myusr.

To reattach to one of the listed screens, use its session number.

screen -r 3428

Encrypted Backup to the Cloud

I will be hosting some things that I want to have a backup of, and this web server will not be running on a separate subnet, so my NAS is not accessible. I’ll therefore be backing up to OneDrive (in the cloud) using rclone. You will need access to rclone on a computer with a regular web browser to complete these steps. For this, I download rclone on my Windows PC. I will elevate privileges using su first.

su
apk add curl bash unzip
curl https://rclone.org/install.sh | bash

With rclone installed it is time to set it up for access to OneDrive.

rclone config

Select “New Remote” and give it a name; I named mine “onedrive”. Then choose the number corresponding to Microsoft OneDrive. Leave client_id and client_secret blank (the default values). Select “No” to advanced config and again “No” to auto config. This is where you need to follow the instructions and move to your computer with the web browser to get an access_token. Once this is pasted back into the config dialogue, select the option for “OneDrive Personal”. Select the drive it finds, confirm it is the right one, and confirm again to finish the setup. Quit the config using “q” and test that the remote is working properly.

rclone ls onedrive:

Provided that worked, it is now time to enable encryption of the data we will be storing on OneDrive. Start the config again.

rclone config

Select “New Remote” and give this one a different name, in my case “encrypted”, then choose the number corresponding to Encrypt/Decrypt. You will then need to decide on a path where the encrypted data will reside. I chose “onedrive:encrypted” so that it ends up in a folder named “encrypted” on my OneDrive. I then selected “Encrypt filenames” and “Encrypt directory names”. I provide my own password rather than a generated one, since this Raspberry Pi will surely not last forever and I will need the password to restore the backup elsewhere. I won’t remember a salt, so I opted to leave it blank. Choose “No” to advanced config and “Yes” to finish the setup.
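For reference, the resulting ~/.config/rclone/rclone.conf should end up with two sections along these lines (token and password abbreviated here, and the exact keys may vary slightly between rclone versions):

```ini
[onedrive]
type = onedrive
token = {"access_token":"...","expiry":"..."}
drive_type = personal

[encrypted]
type = crypt
remote = onedrive:encrypted
filename_encryption = standard
directory_name_encryption = true
password = <obscured>
```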

With that in place I will create a script that performs the backup, placed in the folder that I want to backup. I am going to run this manually and only when I’ve been editing any of the files I need to backup.

nano backup.sh

This file will have the following contents.

#!/bin/sh
/usr/bin/rclone --links --filter "- node_modules/**" sync . encrypted:

It filters out the Node.js modules, since they can and will be restored by npm install anyway. After testing this script, the backed-up files appear in the encrypted folder on my OneDrive, with scrambled file and directory names as expected.
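Should the list of exclusions grow, the filter rules can be moved into a file of their own and referenced with rclone’s --filter-from option instead. A sketch, where the extra patterns are purely examples:

```
# backup-filters.txt, used as:
# /usr/bin/rclone --links --filter-from backup-filters.txt sync . encrypted:
- node_modules/**
- *.log
+ **
```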

Prerequisites for Node.js Development

Since I moved from a 32-bit to a 64-bit operating system, some npm modules may be built for the wrong architecture. I will clean out and refresh all module dependencies using the following. There are lots of modules in my system, since it actually does more than just run a static web server, like being the foundation for Rita (our robotic intelligent telephone agent). Some modules may need to be built, which is why we need to add the necessary software to do so.

rm -Rf node_modules
apk add --virtual build-dependencies build-base gcc wget git
npm install
npm audit fix

For editing actual code, which nano is less suited for, I will be using kakoune.

apk add kakoune@testing

Now, if you will be running this from Windows I highly recommend using a terminal with true color capabilities, such as Alacritty. Colors will otherwise not look as nice as in the screenshot below (using the zenburn colorscheme).

I believe that is all, and this server has everything it needs now. Those paying particular attention to the code in the screenshot will notice that the underlying SQLite database is Anchor modeled.

I am writing these guides mostly for my own benefit, as something to lean on the next time one of my servers calls it quits, but they could very well prove useful for someone else in the same situation.

Tinker, Tailor, Raspberry Pi

I went ahead and got myself a Raspberry Pi 4B with 4GB RAM, which I intend to use as a job scheduling server, only to find out that the suggested OS, Raspberry Pi OS, is 32-bit. Fortunately, the Linux distro Alpine, which I’ve grown very fond of lately, is available for Raspberry Pi as aarch64, meaning both the kernel and the userland are 64-bit. Unfortunately the distro is currently, as of version 3.12, not set up for persistent storage and is more of a live playground. Gathering bits and pieces from various guides online, this can however be remedied with some tinkering. In this article you will find out how to set up a persistent 64-bit OS on the Raspberry Pi and share a USB-attached disk, while also adding some interesting software.

If you go ahead and buy the Pi 4, note that it has micro-HDMI ports. I thought they were mini, for which I already had cabling, but alas, another adapter had to be purchased. Also, when attaching a USB disk it is better if it is externally powered. The Pi can however power newer external SSD drives that have low power consumption. I tried with a magnetic disk based one powered over USB first, but it behaved somewhat strangely. With that said, let’s go ahead and look at how to get yourself a shiny tiny new server.

Tinkering for Persistence

After downloading the v3.12 tarball from Alpine on my Mac, it’s time to set up the SDHC card for the Pi. I actually borrowed my old hand-me-down MacBook Air that I gave to my daughter a few years ago, since it has a built-in card reader, as opposed to my newer Air. The Pi boots off a FAT32 partition, but we want the system to reside in an ext4 partition later, so we will start by reserving a small portion of the card for the boot partition. This is done using Terminal in macOS with the following commands.

diskutil list
diskutil partitionDisk /dev/disk2 MBR "FAT32" ALP 256MB "Free Space" SYS R
sudo fdisk -e /dev/disk2
> f 1
> w
> exit

The tarball should have decompressed once it hit your download folder. If not, use the option “xvzf” for tar.

cd /Volumes/ALP
tar xvf ~/Downloads/alpine-rpi-3.12.0-aarch64.tar
nano usercfg.txt

The newly created file usercfg.txt should contain the following:

enable_uart=1
gpu_mem=32
disable_overscan=1

The minimum amount of GPU memory, suitable for headless use, is 32MB. The UART setting is beyond me, but seems to be recommended. Removing overscan gives you more screen real estate. If you intend to use this as a desktop computer rather than a headless server, you probably want to allot more memory to the GPU and enable sound. The full specification of the options can be found on the official Raspberry Pi homepage.
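For the desktop case, the corresponding lines in usercfg.txt could look something like this instead (both options come from the official documentation; tune gpu_mem to taste):

```
gpu_mem=128
dtparam=audio=on
disable_overscan=1
```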

After that we just need to make sure the card is not busy, so we change to a safe directory and thereafter eject the card (making sure that any pending writes are finalized).

cd
diskutil eject /dev/disk2

Put the SDHC card in the Pi and boot. Login with “root” as username and no password. This presumes that you have connected everything else, such as a keyboard and monitor.

setup-alpine

During setup, select your keymap, hostname, etc, as desired. However, when asked where to store configs, type “none”, and the same for the apk cache directory. If you want to follow this guide to the point, you should also select “chrony” as the NTP client. The most important part here though is to get your network up and running. A full description of the setup programs can be found on the Alpine homepage.

apk update
apk upgrade
apk add cfdisk
cfdisk /dev/mmcblk0

In cfdisk, select “Free space” and the option “New”. It will suggest using the entire available space, so just press enter, then select the option “primary”, followed by “Write”. Type “yes” to write the partition table to disk, then select “Quit”.

apk add e2fsprogs
mkfs.ext4 /dev/mmcblk0p2
mount /dev/mmcblk0p2 /mnt
setup-disk -m sys /mnt
mount -o remount,rw /media/mmcblk0p1

Ignore the warnings about extlinux. This and the following trick were found in the Alpine Wiki, though in a somewhat confusing order.

rm -f /media/mmcblk0p1/boot/*
cd /mnt
rm boot/boot
mv boot/* /media/mmcblk0p1/boot/
rm -Rf boot
mkdir media/mmcblk0p1
ln -s media/mmcblk0p1/boot boot

Now the mountpoints need fixing, so run:

apk add nano
nano etc/fstab

If you prefer some other editor (since people tend to become religious about these things) then feel free to use whatever makes you feel better than nano. Add the following line:

/dev/mmcblk0p1   /media/mmcblk0p1   vfat   defaults   0 0

Now the kernel needs to know where the root filesystem is.

nano /media/mmcblk0p1/cmdline.txt

Append the following at the end of the one and only line in the file:

root=/dev/mmcblk0p2

After exiting nano, it’s safe to reboot, so:

reboot

After rebooting, login using “root” as username, and the password you selected during setup-alpine earlier. Now you have a persistent system and everything that is done will stick, as opposed to how the original distro was configured.

Tailoring for Remote Access

OpenSSH should already be installed, but it will not allow remote root login. We will initially relax this constraint. Last in this article is a section on hardening where we disallow root login again. If you intend to have this box accessible from the Internet, I strongly advise hardening the Pi.

nano /etc/ssh/sshd_config

Uncomment and change the line (about 30 lines down) with PermitRootLogin to:

PermitRootLogin yes

Then restart the service:

rc-service sshd restart

Now you should be able to ssh to your Pi. The following steps are easier when you can cut and paste things into a terminal window. Feeling lucky? Then now is a good time to disconnect your keyboard and monitor.

Keeping the Time

If you selected chrony as your NTP client it may take a long time for it to actually correct the clock. Since the Pi does not have a hardware clock, it’s necessary to have time corrected at boot time, so we will change the configuration such that the clock is set if it is more than 60 seconds off during the first 10 lookups. 

nano /etc/chrony/chrony.conf

Add the following line at the bottom of the file.

makestep 60 10

Check the date, restart the service, and check the (now hopefully corrected) date again.

date
rc-service chronyd restart
date

Having the correct time is a good thing, particularly when building a job scheduling server.

Silencing the Fan

Together with the Pi I also bought a fan, the Pimoroni Fan Shim. According to reviews it is one of the better ways to cool your Pi, but it’s still too soon for me to have an opinion. Unless controller software is installed, it will always run at full speed. It’s not noisy, but still noticeable sitting a metric meter from the Pi. Again, some tinkering will be needed since the controller software needs some prerequisites installed. We lost nano between reboots, so we will go ahead and add it again.

apk update
apk upgrade
apk add nano

Other software we need is in the “community” repository of Alpine. In order to activate that repository we need to edit a file:

nano /etc/apk/repositories

Uncomment the second line (ending in v3.12/community), exit, then install the necessary packages.

apk update
apk add git bash python3 python3-dev py3-pip py3-wheel build-base

After those prerequisites are in place, install the fan shim software using:

git clone https://github.com/pimoroni/fanshim-python
cd fanshim-python
./install.sh

apk add py3-psutil
cd examples
./install-service.sh

The last script will fail with “systemctl: command not found”, since Alpine uses OpenRC as its init system, and not systemd which this script presumes. We will instead write our own startup script:

nano /etc/init.d/fanshim

This new file should have the following contents:

#!/sbin/openrc-run

name="fanshim"
command="/usr/bin/python3 /root/fanshim-python/examples/automatic.py"
command_args="--on-threshold 65 --off-threshold 55 --delay 2"
pidfile="/var/run/$SVCNAME.pid"
command_background="yes"

There are a lot of interesting options for fanshim that you can explore, like tuning its RGB LED. Now we want this to run at boot time, so add it to the default runlevel, then start it.

rc-update add fanshim default
rc-service fanshim start

Enjoy the silence!

Adding and Sharing a Disk

Some of the files we will be transferring are going to be quite large. It would also be neat to be able to access files easily from the Finder in macOS, so I am adding a USB3-connected hard disk with 4TB of storage. What follows will be very similar to setting up a NAS, and in fact, the way I fell in love with Alpine was by building my own NAS from scratch (with the minor differences being more disks and using zfs).

First we need to change the filesystem. The disk comes formatted as FAT32, which is very poorly suited for a networked disk. Samba, which is what we will be using for sharing, more or less requires a filesystem that supports extended attributes. After plugging in the drive, we will therefore repartition the drive and format it to ext4. 

cfdisk /dev/sda

Using cfdisk, delete any existing partitions and create one new partition. It should become “Linux filesystem” by default. Don’t forget to “Write” before “Quit”. Then format it:

mkfs.ext4 /dev/sda1

Now we need to add autofs to get automatic mounting. This package is in edge/testing though, so we need to enable that branch and repository, but still have main and community take preference. This can be done by labelling a repository.

nano /etc/apk/repositories

Change the line with the testing repository (the last line in my file) to the following. Note that yours will have some server.from.setup/path depending on what you selected in setup-alpine. In other words, you only uncomment the line and add the @testing label.

@testing http://<server.from.setup/path>/edge/testing

Now autofs can be installed from the labelled repo.

apk add autofs@testing

Note that dependencies are still pulled from main/community to the extent it is possible. In order to configure autofs, first:

nano /etc/autofs/auto.master

Add the following line after the uncommented line starting with /misc. The timeout will also disconnect the hard disk after 5 minutes of inactivity to save energy:

/-   /etc/autofs/auto.hdd   --timeout=300

Then create this new config file:

nano /etc/autofs/auto.hdd

Add the following line to the empty file.

/hdd   -fstype=ext4   :/dev/sda1

Now, the user pi needs to be created.

adduser pi
smbpasswd -a pi

Select desirable passwords for the pi user. The latter one will later be stored in the macOS keychain and is therefore easy to forget, so make a note of it somewhere.

Add autofs to startup and start it now. Change the ownership of /hdd to pi.

rc-update add autofs default
rc-service autofs start
chown -R pi:pi /hdd

With that in place (disk can be accessed through /hdd) it is time to set up the sharing. For this we will use samba and avahi for network discovery.

apk add samba avahi dbus
nano /etc/samba/smb.conf

Now, this is what my entire smb.conf file looks like, with all the tweaks to get things running well from macOS.

[global]

  create mask = 0664
  directory mask = 0775
  veto files = /.DS_Store/lost+found/
  delete veto files = true
  nt acl support = no
  inherit acls = yes
  ea support = yes
  security = user
  passdb backend = tdbsam
  map to guest = Bad User
  vfs objects = catia fruit streams_xattr recycle
  acl_xattr:ignore system acls = yes
  recycle:repository = .recycle
  recycle:keeptree = yes
  recycle:versions = yes
  fruit:aapl = yes
  fruit:metadata = stream
  fruit:model = MacSamba
  fruit:veto_appledouble = yes
  fruit:posix_rename = yes 
  fruit:zero_file_id = yes
  fruit:wipe_intentionally_left_blank_rfork = yes 
  fruit:delete_empty_adfiles = yes 
  server max protocol = SMB3
  server min protocol = SMB2
  workgroup = WORKGROUP    
  server string = NAS      
  server role = standalone server
  dns proxy = no

[Harddisk]
  comment = Raspberry Pi Removable Harddisk                     
  path = /hdd    
  browseable = yes          
  writable = yes            
  spotlight = yes           
  valid users = pi       
  fruit:resource = xattr 
  fruit:time machine = yes
  fruit:advertise_fullsync = true

Those last two lines can be removed if you are not interested in using the disk as a Time Machine backup for your Apple devices. I will likely not use it, but since this is how I configured my NAS, and it was a hassle to figure out how to get it working, I thought I’d leave it here for reference. It doesn’t hurt to keep them there in any case.

Let us also configure the avahi-daemon by creating a config file for the samba service. Avahi will announce the server using Bonjour, making it easily discoverable from macOS (where it automagically shows up in the Finder).

nano /etc/avahi/services/samba.service

This new file should have the following contents:

<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_smb._tcp</type>
<port>445</port>
</service>
<service>
<type>_device-info._tcp</type>
<port>0</port>
<txt-record>model=RackMac</txt-record>
</service>
<service>
<type>_adisk._tcp</type>
<txt-record>sys=waMa=0,adVF=0x100</txt-record>
<txt-record>dk0=adVN=HDD,adVF=0x82</txt-record>
</service>
</service-group>

Note that the txt-record containing adVN=HDD can be removed if you are not interested in using the disk as a Time Machine backup. Still, leaving it won’t hurt.

Finally, it’s time to add samba and avahi to the startup and start the services.

rc-update add samba default
rc-update add avahi-daemon default
rc-service samba start
rc-service avahi-daemon start

The disk should now be visible from macOS. Remember to click “Connect as…” and enter “pi” as the username and your selected smbpasswd from earlier. Check the box “Remember this password in my keychain” for quicker access next time. Sometimes, due to a bug in Catalina, you may get “The original item cannot be found” when accessing the remote disk. If that happens, force quit Finder, and you should be good to go again. If anyone knows of any other fix to this issue, let me know!

Automation

Now, this server will be used as a job server. Some of the jobs running will need the psql command from PostgreSQL and some others will be R jobs. Let’s install both, or whatever you need to satisfy your desires. The dev and headers are needed when R wants to compile packages from source code. You can skip this step for now if you are undecided about what to run or just need basic services like the built-in shell scripting. However, in order to run programs as different users within Cronicle, sudo is necessary.

apk add R R-doc postgresql
apk add R-dev postgresql-dev linux-headers libxml2-dev 
apk add sudo

In order to automate these jobs, we will be using Cronicle. It depends on Node.js, so we need to install the prerequisites. Its install script is fetched using curl, which therefore also needs to be installed.

apk add nodejs npm curl

The installation is done as follows (it is a one-liner even if it looks broken here).

curl -s https://raw.githubusercontent.com/jhuckaby/Cronicle/master/bin/install.js | node

I want to use standard ports, so I need to change the config slightly.

nano /opt/cronicle/conf/config.json

Change base_app_url from port 3012 to 80. Much further down, change http_port from 3012 to 80, and https_port from 3013 to 443. If you want mails to be sent, change smtp_hostname in the beginning of the file to the mail relay you are using. After that an initialization script needs to be run.

/opt/cronicle/bin/control.sh setup
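Before moving on, it is worth double-checking the edits. The changed parts of my config.json look roughly like the fragment below, with the surrounding keys omitted and the hostnames being examples only.

```json
"base_app_url": "http://pi.example.com",
"smtp_hostname": "smtp.example.com",

"http_port": 80,
"https_port": 443,
```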

Now we just need to get it running at boot time. This is, however, a service that we do not want to “kill” using a PID, so we are going to enable local scripts that start and stop the service in a controlled manner instead.

rc-update add local default
nano /etc/local.d/cronicle.start

This new file should have the following line in it:

/opt/cronicle/bin/control.sh start

Now we need to create a stop file as well:

nano /etc/local.d/cronicle.stop

This file should have the contents:

/opt/cronicle/bin/control.sh stop

In order for the local script daemon to run these, they need to be executable.

chmod +x /etc/local.d/cronicle.*

With that, let’s secure things.

Hardening

Now that most configuring is done, it’s time to harden the Pi. First we will install a firewall with some basic login protection using the builtin ‘limit’ in iptables. Assuming you are in the 192.168.1.0/24 range, which was set during setup-alpine, the following should be run. Only clients on the local network are allowed access to shared folders.

apk add ufw@testing
rc-update add ufw default
ufw allow 22
ufw limit 22/tcp
ufw allow 80
ufw allow 443
ufw allow from 192.168.1.0/24 to any app CIFS
ufw allow Bonjour

With the rules in place, it’s time to disallow root login over ssh, and make sure that only fresh protocols are used.

nano /etc/ssh/sshd_config

Change the line that previously said yes to no, and add the other lines at the bottom of the file (borrowed from this security site):

PermitRootLogin no

PrintMotd no
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

After that, enable ufw and restart sshd. Note that if something goes wrong here you will need to plug in a monitor and keyboard again to login locally and fix things.

ufw enable
rc-service sshd restart

Now is a good time to reboot and reconnect to check that everything is working.

reboot

With root no longer able to log in, you will instead log in as “pi”. It is possible for this user to elevate privileges (temporarily, until exit) with the following command:

su

Another option is to use sudo, but I will leave it like this for now, and go ahead with setting up some jobs. That’s a story for another article though.

I hope this guide has been of help. It should be of use for anyone tinkering with Alpine on their Raspberries, and likely some parts for those running other Linux flavors on different hardware as well.