Poddery server upgraded from Debian 11 to 12

Pirate Bady

After nearly 4 days of downtime, some mistakes and a little bit of troubleshooting, we're finally on Debian 12 (Bookworm).

Preparation

@Kannan V M created a collaborative pad with a rough list of todo items. Basically we just needed to back up the important data (databases, media files and config files), perform the dist-upgrade, restart the server and make sure all services are running fine. @Pirate Praveen had specifically mentioned that it's good to do only one thing at a time to reduce the chances of messing things up. For example, it was suggested to update just the virtual environment to work with the latest Python version (3.11) available in Bookworm, without upgrading the matrix-synapse pip package. The fewer changes we make, the fewer things we have to revert if something goes wrong. So it was decided to keep other tasks like the matrix-synapse upgrade, PostgreSQL upgrade and replication, periodic backup of media files, etc. for later.

We checked the size of the databases and static files and updated the pad with the list of files to be backed up. The Postgres database of matrix-synapse alone is ~536 GB. We decided to use a 1 TB storage box from Hetzner, which costs us €3.20/month; we can ditch it within a week or so if the dist-upgrade goes fine. We found that there are multiple ways to access the storage box, including FTP, SAMBA and WebDAV, and decided to go with SAMBA since it would allow us to mount the partition and use it like any other local partition. But that was a bad decision, as we'll see soon (mistake 1).
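
For anyone who wants to run the same checks, something like this works (a sketch; the database name `synapse` and the media path are assumptions and may differ on the actual server):

# Size of the matrix-synapse database (database name assumed to be "synapse")
sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size('synapse'));"
# Size of the media store (path assumed)
sudo du -sh /var/lib/matrix-synapse/media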

Backup and upgrade

Ordered a 1 TB storage box and mounted it using SAMBA following the Hetzner docs:

sudo apt install cifs-utils
mount.cifs -o user=<username>,pass=<password> //<username>.your-storagebox.de/backup /mnt

The next step was to take a backup of the databases. We tried taking an SQL dump directly to the storage box, but realized that it was too slow (~5 MB/s). So we tried copying the postgresql directory of ~536 GB directly, but there was no noticeable change in the copying speed. So we just copied it to the local system; even that took ~12 hours, with the copying speed varying between 10 and 15 MB/s.
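
For context, the two approaches looked roughly like this (a sketch rather than the exact commands we ran; the database name, data directory and destination paths are assumptions):

# Option 1: SQL dump written directly to the storage box (too slow over the SAMBA mount)
sudo -u postgres pg_dump -Fc synapse | sudo tee /mnt/synapse.dump > /dev/null
# Option 2: copy the PostgreSQL data directory itself; stopping PostgreSQL first keeps the copy consistent
sudo systemctl stop postgresql
sudo rsync -a /var/lib/postgresql/ /backup/postgresql/
sudo systemctl start postgresql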

Now that we had the PostgreSQL database backup, we went ahead with the database compression options mentioned here (mistake 2). The purge room API mentioned in that post is outdated; we need to use the new delete room API as mentioned here instead. But since it's a manual task that needs to be done one by one for each empty room, I skipped this step. I also skipped purging the history of large rooms, assuming it wouldn't have much impact on the database size (mistake 3).
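
For reference, the newer delete room admin API can be called roughly like this (a sketch; the access token and room ID are placeholders, the local listener address is an assumption, and the exact endpoint version may differ depending on the Synapse release):

# Delete an empty room and purge its data from the database
curl -X DELETE \
  -H "Authorization: Bearer <admin_access_token>" \
  -H "Content-Type: application/json" \
  -d '{"purge": true}' \
  "http://localhost:8008/_synapse/admin/v2/rooms/<room_id>"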

According to Levan's article, the next step is to use a tool called rust-synapse-compress-state to compress Synapse's state data. The tool needs to be built using cargo, but since the cargo version available on Debian 11 was too old, we went with installing it using rustup as mentioned in cargo's official docs.

# Installing Rust using rustup will also install cargo.
curl https://sh.rustup.rs -sSf | sh
# Clone and build compression tool
git clone https://github.com/matrix-org/rust-synapse-compress-state
cd rust-synapse-compress-state/synapse_auto_compressor
cargo build

rust-synapse-compress-state consists of two tools: synapse_auto_compressor (automated) and synapse_compress_state (manual). Levan's article makes use of the manual tool, but we went with the automated one since it makes the job easier. For more details about how to use the tool, refer to their README. We went with the example usage mentioned there:

synapse_auto_compressor -p "user=<username> dbname=<dbname> password=<password> host=/var/run/postgresql" -c 500 -n 100

The compression started, but there was no option to track the overall progress. We also noticed that there was no significant change in memory or processor usage, so we went through their README again and retried the compression, tweaking the `-c` and `-n` options until we saw a reasonable spike in resource usage. While this was running on the actual database, we also copied the backed-up database directory from the local partition to /mnt using rsync.
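
That copy was a plain rsync from the local backup to the mounted storage box; something like this (a sketch; the paths are assumptions):

# Copy the local backup to the mounted storage box; --partial lets an interrupted transfer resume
sudo rsync -a --partial --progress /backup/postgresql/ /mnt/postgresql/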

Two days passed by and the database size actually started to increase by a couple of GB. This is probably because the compression tool creates some tables of its own to record the changes and their status. PostgreSQL also does not automatically free up space after compression; we need to reclaim it manually by debloating the indexes and tables using the `REINDEX` and `VACUUM` commands, as mentioned in Levan's article. But since we were using the automated version of the compression tool, there's a chance that it handles this automatically. We couldn't confirm it because the tool somehow stopped working in between. This might have been due to resource constraints, but there wasn't any error message, so we couldn't be sure. Since it works on chunks and stores its progress as it goes, we restarted the process in the hope that it would continue from where it stopped. Another day passed by and it just kept running, seemingly forever.
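
For completeness, reclaiming the space manually would look something like this (a sketch; the database name is an assumption, and `VACUUM FULL` takes exclusive locks, so Synapse should be stopped while it runs):

# Rebuild indexes and return unused space to the operating system
sudo -u postgres psql synapse -c "REINDEX DATABASE synapse;"
sudo -u postgres psql synapse -c "VACUUM FULL;"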

It was late when I came to know from @Akhil that deleting the history of large rooms like matrix-hq, the step which I skipped, could've actually made the compression take much less time. I also didn't know that the compression tool can run without stopping synapse, which means we didn't have to do this during the dist-upgrade and could've done it any time later, once the service was up and running on the new Debian version. So, given this and the time already lost, we decided to retry the compression later.
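
For reference, purging a room's old history goes through the purge history admin API, roughly like this (a sketch; the token, room ID and timestamp are placeholders):

# Purge all events in the room older than the given timestamp (milliseconds since epoch)
curl -X POST \
  -H "Authorization: Bearer <admin_access_token>" \
  -H "Content-Type: application/json" \
  -d '{"delete_local_events": false, "purge_up_to_ts": <timestamp_ms>}' \
  "http://localhost:8008/_synapse/admin/v1/purge_history/<room_id>"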

Meanwhile the media files got backed up. So we went ahead with copying the remaining files in the list and started the dist-upgrade following the steps in the Debian wiki. We skipped `autoremove` and did a restart instead of a shutdown. We made sure to run `systemctl disable` for the diaspora, prosody and matrix-synapse services before the restart. They wouldn't start automatically anyway, since the encrypted partition needs to be decrypted manually first, but we still went with disabling them to keep the logs clean.
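
The upgrade itself was the standard Debian procedure; roughly like this (a sketch of the steps from the Debian wiki rather than a verbatim transcript; the sources.list edit and the service names are assumptions and may differ on the actual server):

# Make sure the current system is fully up to date
sudo apt update && sudo apt upgrade
# Point APT at bookworm instead of bullseye (also check /etc/apt/sources.list.d)
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
# Minimal upgrade first, then the full upgrade
sudo apt update
sudo apt upgrade --without-new-pkgs
sudo apt full-upgrade
# Disable the services that live on the encrypted partition, then reboot
sudo systemctl disable diaspora prosody matrix-synapse
sudo reboot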

During the dist-upgrade, grub reported some errors, but they looked harmless. We did some searching online and couldn't find any exact reason for them, so we just made sure, by reading their documentation, that Hetzner provides a rescue system in case the server fails to boot. Anyway, the restart went fine. We decrypted the hard disk and mounted the partitions. Since root was running short of space, we deleted some logs. Exim was one of the main culprits, and its logrotate config was updated to reduce the maximum size.
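
The logrotate change was small; something along these lines (a sketch; the file path, log names and size limit shown here are assumptions, not the exact values we used):

# /etc/logrotate.d/exim4-base (assumed path): rotate more aggressively and cap the size
/var/log/exim4/mainlog /var/log/exim4/rejectlog {
    daily
    maxsize 100M
    rotate 4
    compress
    missingok
    notifempty
}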

The next step was to upgrade the virtual environment of matrix-synapse. Did it using `python3 -m venv --upgrade </path/to/venv>`. But matrix-synapse failed to start, and after some trial and error and going through the logs, we found that the optional dependencies `hiredis` and `txredisapi`, needed for redis support, were not installed. Installing them fixed the synapse startup. When testing the service, we found that media files were not loading, not even the ones from the same homeserver. After checking the log of the synapse media repository worker, we found that another optional dependency called `lxml` was missing. Installing it fixed the issue.
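
Put together, the fix was along these lines (a sketch; the venv path is a placeholder):

# Recreate the venv against the system Python 3.11 from bookworm
python3 -m venv --upgrade /path/to/venv
# Install the optional dependencies that turned out to be missing
/path/to/venv/bin/pip install hiredis txredisapi lxml
sudo systemctl restart matrix-synapse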

The Diaspora and Prosody services started without any issues, but I had trouble logging in to my poddery XMPP account. So I checked the prosody logs and found that the `bcrypt` lua module was missing. Installed it following the documentation here (step 2), which fixed the issue.
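
For anyone hitting the same error, the module can be installed via luarocks, roughly like this (a sketch; it assumes the linked documentation installs the `bcrypt` rock this way):

sudo apt install luarocks
sudo luarocks install bcrypt
sudo systemctl restart prosody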

Thanks to @Kannan V M for coordinating this and staying throughout the process making sure everything went fine. Thanks to @Akhil and @Pirate Praveen for providing valuable pointers. Special thanks to @Akhil for the blog links and joining calls in between for live troubleshooting.

Thanks to all community members for your cooperation.

Things left to do

1. Compress synapse database.

2. Upgrade PostgreSQL from v11 to v15 (Debian won't automatically handle the database upgrade; we have to do it manually as mentioned here, see the sketch after this list).

3. Set up backup of databases and media files.

4. Upgrade synapse.

5. Remove storage box if no critical issues are found within a couple of days.
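
For the PostgreSQL upgrade in item 2, the usual Debian way is `pg_upgradecluster` from postgresql-common; roughly like this (a sketch; cluster names assume the default `main`):

# The bookworm upgrade installs PostgreSQL 15 alongside 11 with a fresh, empty cluster.
# Drop the empty 15 cluster, then upgrade the 11 cluster in place.
sudo pg_dropcluster 15 main --stop
sudo pg_upgradecluster 11 main
# Once everything looks fine, drop the old cluster and remove the old packages
sudo pg_dropcluster 11 main
sudo apt purge postgresql-11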

Lessons learned

Mistake 1: Decision to go with the SAMBA mount instead of using ssh. With SAMBA, the copying speed was ~5 MB/s, whereas it was ~25 MB/s using ssh. We should've tested both methods as soon as we found the copying speed was so pathetic; it was late when we realized the issue was with the SAMBA method.

Mistake 2: Decision to compress database during dist-upgrade. Avoiding it could've helped in reducing the downtime by nearly half.

Mistake 3: Decision to skip purging the history of large matrix rooms. This could've helped to reduce the database compression time.

Mistake 4: Not documenting and learning from the mistakes.

Karthik Thu 17 Oct 2024 12:45PM

@Pirate Bady very well documented. It is worth making as a blog post on fsci.in too

sahilister Sun 20 Oct 2024 2:39PM

Would love to join con-calls over the coming days to complete the checklist. It can also serve as a good opening for newcomers to watch and learn how servers are managed, and they can join in too.

@Pirate Bady @Kannan V M

Pirate Praveen Mon 9 Dec 2024 12:21PM

It'd be good to document the major points of how the service was fixed (nginx rewrite). This was a really great effort from @Kannan V M and @Pirate Bady