A chronological documentation test project, nothing serious, really!

11 Mar 2021 Dovecot backup over SSH using doveadm

This is just a short post about how to do a Dovecot Maildir backup using the command doveadm backup, initiated from your home server (which is not on the Internet) towards your Internet-facing mail server, using SSH as a secure transport.

The post is not tied to any particular Linux distribution and can be used without modifications as long as you have access to bash. My particular setup is based on Ubuntu 20.04 and CentOS 8 in my home lab.

The servers have been named host-A, host-B and host-C to make the configuration easier to follow.

I have installed Dovecot with a config similar to my Internet-facing installation so that all email accounts can be backed up safely. The home lab is behind NAT and a firewall and is, by choice, not accessible from the Internet.

The Dovecot mail server on the Internet sits behind a reverse proxy (HAProxy) and is not directly accessible from the Internet. Direct SSH access to the mail server is not allowed either, but you can reach it by jumping through the bastion host. To make this as simple and automated as possible I have added the needed configuration to my .ssh/config file so that doveadm can reach the Dovecot server without any problems.

SSH config

To allow my home lab server (host-A) to access the bastion host (host-B) over SSH, I have created a custom .ssh/config file and set up SSH keys. The SSH key setup itself is not described here.

Host A – .ssh/config

Host host-B
User username
IdentityFile ~/.ssh/id_rsa
Host host-C
User username
HostName <address of host C>
IdentityFile ~/.ssh/id_rsa
ProxyJump host-B

Host B – .ssh/config

Host host-C
Hostname <address of host C>
IdentityFile ~/.ssh/id_rsa

To verify that the SSH connection is working, we start an SSH session from host-A with the command

$ ssh host-C

And if everything is working as expected you are now logged into the mail server over SSH.
This was made possible by the ProxyJump directive in .ssh/config file defined on host-A.
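For a one-off connection the same jump can be expressed directly on the command line; -J is the command-line form of the ProxyJump directive (OpenSSH 7.3 and later):

```shell
# One-off equivalent of the ProxyJump config: hop through host-B to host-C.
ssh -J host-B host-C
```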

Doveadm backup

The doveadm command is versatile and can be used for many tasks, but here I use it to solve my Dovecot Maildir backup needs. doveadm backup performs one-way synchronization: if there are any changes on the destination, they will be deleted, so the destination ends up looking exactly like the source.

You can also use doveadm sync to perform two-way synchronization. It merges all changes without losing anything, and both mailboxes end up identical after the synchronization finishes.
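Side by side, the two modes look like this. This is a sketch with a <user> placeholder, written in the normal, non-reversed direction (run from the source host); my actual reversed command is shown further down.

```shell
# One-way: make the remote side an exact mirror of the local side.
doveadm backup -u "<user>" ssh host-C doveadm dsync-server -u "<user>"

# Two-way: merge changes from both sides without losing anything.
doveadm sync -u "<user>" ssh host-C doveadm dsync-server -u "<user>"
```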

Backup of Dovecot

We are now ready to do the actual backup of Dovecot using the doveadm backup command. Usually the doveadm command is run from the source towards the target host, but in my case I reverse the direction because my home lab is not reachable from the Internet.

The command to initiate a backup of a single user account using doveadm over SSH, with <user> as a placeholder for the account name

# doveadm backup -R -u <user> ssh host-C doveadm dsync-server -u <user>

When the backup command is running you will see the following process on the source, host-C

doveadm dsync-server -u <user>

Similarly, you will see the following three processes on the target host, host-A in my home lab

doveadm -v backup -R -u <user> ssh host-C doveadm dsync-server -u <user>
ssh host-C doveadm dsync-server -u <user>
ssh -W [IP-address of host-C]:port host-B

To automate things and back up all user emails, I use a simple bash script that queries Dovecot for all users and backs up the accounts one by one using doveadm backup over SSH.

List all Dovecot users

# doveadm user '*@*'

The script to back up mail from all user accounts

doveadm user '*@*' | while read -r user; do
    doveadm -v backup -R -u "$user" ssh host-C doveadm dsync-server -u "$user"
done

-v option makes doveadm verbose
-R option performs a reverse backup, i.e. one initiated from the target host

If you do not have the same mailbox format in both ends, you can perform a conversion from the source to the target. I am using Maildir on both servers so a conversion is not necessary.

The doveadm backup command can be a little tricky if you abort the initial sync of email accounts before it finishes. If this happens, just delete the target directory and start the backup operation again.
To keep your backup updated regularly, create a cron job with your doveadm backup command and you are all set.
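As a sketch, such a crontab entry could look like this, assuming the backup loop above has been saved to a script (the path and log file name here are hypothetical):

```shell
# Hypothetical example: run the backup loop every night at 02:30
# and append its verbose output to a log file.
30 2 * * * /usr/local/bin/dovecot-backup.sh >> /var/log/dovecot-backup.log 2>&1
```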


09 Feb 2020 Modify Rspamd throughput (RRD) graph

This short post describes how to remove data from the Rspamd throughput RRD graph, usually stored in the rspamd.rrd file. The location depends on the Linux distribution, but on Ubuntu 18.04 it is /var/lib/rspamd/rspamd.rrd.

This procedure can most likely be used on all types of RRD files and is not exclusive to Rspamd.

It is recommended to stop the Rspamd daemon and make a backup of your rspamd.rrd file before you continue.


$ sudo systemctl stop rspamd.service
$ sudo cp -ax /var/lib/rspamd/rspamd.rrd /var/lib/rspamd/rspamd.rrd-$(date -I)

We have now created a backup of our RRD file.

Dump RRD-file

Next we dump the RRD file to an XML file before doing any modifications to the data.

$ sudo rrdtool dump /var/lib/rspamd/rspamd.rrd /tmp/rspamd.rrd.xml

Structure of the RRD-file

The Rspamd file is the basis for the graphs and is ordered in archives based on the datasets By day, By week, By month and By year; you will find the same structure in the RRD file if you search for 60, 300, 600 or 3600 seconds.

    <!-- Round Robin Archives -->
            <pdp_per_row>60</pdp_per_row> <!-- 60 seconds -->
            <pdp_per_row>300</pdp_per_row> <!-- 300 seconds -->
            <pdp_per_row>600</pdp_per_row> <!-- 600 seconds -->
            <pdp_per_row>3600</pdp_per_row> <!-- 3600 seconds -->
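If you only want to confirm which archive holds which resolution, rrdtool info prints the header fields directly without a full dump (a sketch against the same file path):

```shell
# Show the base step plus each archive's pdp_per_row and row count.
rrdtool info /var/lib/rspamd/rspamd.rrd | grep -E 'step|pdp_per_row|rows'
```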


The XML dump of the RRD file is now stored in /tmp/rspamd.rrd.xml and can be edited with your preferred editor.

I removed several months of empty data points by searching for them and deleting the lines I did not want.
The values I deleted were inside the <database> tags and looked like this

                    <!-- 2019-02-08 23:00:00 CET / 1549663200 --> <row><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v></row>
                    <!-- 2019-02-09 00:00:00 CET / 1549666800 --> <row><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v></row>
                    <!-- 2019-02-09 01:00:00 CET / 1549670400 --> <row><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v></row>
                    <!-- 2019-02-09 02:00:00 CET / 1549674000 --> <row><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v><v>NaN</v></row>
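Instead of deleting such rows by hand, a sed one-liner can drop every row whose cells are all NaN from the dump. This is a sketch with two assumptions: one <row> per line (as rrdtool dump produces) and GNU sed for the in-place edit; rows with at least one real value are left untouched.

```shell
# Delete every line whose <row> contains nothing but NaN cells.
sed -i '/<row>\(<v>NaN<\/v>\)\{1,\}<\/row>/d' /tmp/rspamd.rrd.xml
```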

Restore RRD-file

Restore the modified RRD-file and start rspamd

$ sudo rm -f /var/lib/rspamd/rspamd.rrd
$ sudo rrdtool restore -f /tmp/rspamd.rrd.xml /var/lib/rspamd/rspamd.rrd
$ sudo chown _rspamd:_rspamd /var/lib/rspamd/rspamd.rrd
$ sudo systemctl start rspamd.service

Check if Rspamd complains

$ sudo tail -f /var/log/rspamd/rspamd.log 

You either have wrong file permissions or have done something wrong with the RRD file if you see the following error

2020-02-08 22:51:24 #15878(controller) ; csession; rspamd_controller_handle_graph: no rrd configured

Restore backup

This is the procedure if you made a mistake and want to restore your backup (you did remember to create a backup before you started?).

Stop the Rspamd daemon and restore your RRD backup. I assume you are doing this on the same day you created the backup file.

$ sudo systemctl stop rspamd.service
$ sudo cp -ax /var/lib/rspamd/rspamd.rrd-$(date -I) /var/lib/rspamd/rspamd.rrd
$ sudo systemctl start rspamd

Check your rspamd.log and see if you have any error messages

$ sudo systemctl status rspamd.service
$ sudo tail -f /var/log/rspamd/rspamd.log

And that's all.


20 Mar 2019 Installing Vagrant on CentOS 7

This short post describes how to install the latest version of Vagrant with the libvirt provider on a fresh CentOS 7 (minimal install). I will not apply any security hardening to this setup, at least not in this post. Vagrant supports other providers in addition to libvirt, like VirtualBox and VMware. I prefer libvirt because I am used to using virt-manager and KVM.

I assume you know what Vagrant is and basic usage of it. If you do not know what Vagrant is, please visit the Hashicorp website.

I used Vagrant as a sandbox for my Puppet development several years ago, but somewhere along the way I stopped using it. The interest in picking Vagrant up again came after doing some Ansible playbook development. The easy way of setting up and tearing down server boxes really helps when you develop and test.

My code examples usually start with # or $: # tells you that I am running as the root user, $ as a normal user.

First we need to get the latest packages on our installation and reboot the server.

# yum -y update && shutdown -r now

We are now ready to add the prerequisites to the installation.

It is easier to work with Vagrant from a graphical interface (GUI), so we install the "Server with GUI" package group.

# yum -y group install "Server with GUI"

This command takes a while to finish, take a short break while it finishes the installation.

Now we are going to determine the latest version of Vagrant and install it. Open your web browser, visit the Vagrant downloads page and copy the URL of the latest version available. In my case that was version 2.2.4.

Installing Vagrant

# yum -y install 

Package Arch Version Repository Size
vagrant x86_64 1:2.2.4-1 /vagrant_2.2.4_x86_64 110 M
Transaction Summary
Install 1 Package
Total size: 110 M
Installed size: 110 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:vagrant-2.2.4-1.x86_64 1/1
Verifying : 1:vagrant-2.2.4-1.x86_64 1/1
vagrant.x86_64 1:2.2.4-1

Please note that when we install a package with yum like this, updates will not be offered automatically. You need to manually download a newer version when desired.

Now we have Vagrant installed, but we have not chosen the provider type we would like to use to run our VMs. I prefer libvirt (KVM) as the provider for my VMs for stability reasons. Install KVM as the provider:

# yum -y install libvirt libvirt-devel qemu-kvm virt-install virt-manager virt-top libguestfs-tools bridge-utils

The virt-manager package will give us a GUI for our VMs, including console access if needed.

Start the libvirt daemon and enable it at startup.

# systemctl start libvirtd && systemctl enable libvirtd

As a convenience I usually install the Development Tools package group as well

# yum -y group install "Development Tools"

It is now time to choose the Vagrant provider and start using Vagrant. We are using the vagrant-libvirt provider. Make sure to run the following command as the user you are going to use with Vagrant; I am using a regular user to install the plugin.

$ vagrant plugin install vagrant-libvirt
Installing the 'vagrant-libvirt' plugin. This can take a few minutes…
Fetching: excon-0.62.0.gem (100%)
Fetching: formatador-0.2.5.gem (100%)
Fetching: fog-core-1.43.0.gem (100%)
Fetching: fog-json-1.2.0.gem (100%)
Fetching: mini_portile2-2.4.0.gem (100%)
Fetching: nokogiri-1.10.1.gem (100%)
Building native extensions. This could take a while…
Fetching: fog-xml-0.1.3.gem (100%)
Fetching: ruby-libvirt-0.7.1.gem (100%)
Building native extensions. This could take a while…
Fetching: fog-libvirt-0.6.0.gem (100%)
Fetching: vagrant-libvirt-0.0.45.gem (100%)
Installed the plugin 'vagrant-libvirt (0.0.45)'!

It is now time to download an OS image and create a VM using Vagrant. First we create a directory as an environment for our VMs.

$ mkdir vagrant-example
$ cd vagrant-example

We are now ready to start using Vagrant, and it is time to get the OS of our choice. You can search for available boxes on the Vagrant Cloud website.

I will download Ubuntu 18.04 (generic unmodified image) and CentOS 7 box images by issuing the following commands

$ vagrant box add generic/ubuntu1804 
==> box: Loading metadata for box 'generic/ubuntu1804'
box: URL:
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.
1) hyperv
2) libvirt
3) parallels
4) virtualbox
5) vmware_desktop
Enter your choice: 2

Choose option 2) libvirt as provider since that is what I installed earlier in this post.

==> box: Adding box 'generic/ubuntu1804' (v1.9.6) for provider: libvirt
box: Downloading:
box: Download redirected to host:
==> box: Successfully added box 'generic/ubuntu1804' (v1.9.6) for 'libvirt'!

Next we add a CentOS 7 box image

$ vagrant box add centos/7

==> box: Loading metadata for box 'centos/7'
box: URL:
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.
1) hyperv
2) libvirt
3) virtualbox
4) vmware_desktop

Choose option 2) libvirt

==> box: Adding box 'centos/7' (v1902.01) for provider: libvirt
box: Downloading:
box: Download redirected to host:
==> box: Successfully added box 'centos/7' (v1902.01) for 'libvirt'!

If you are behind a proxy, tell Vagrant to use it. If not, ignore the next line.

$ export

To create a Vagrantfile and get started with the CentOS 7 box we just added

$ vagrant init centos/7
A Vagrantfile has been placed in this directory.
You are now ready to vagrant up your first virtual environment! Please read the comments in the Vagrantfile as well as documentation on vagrantup.com for more information on using Vagrant.

The content of the Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
end

It is now time to start our first virtual machine using Vagrant.

To start up our CentOS 7 box we run the following command

$ vagrant up

Bringing machine 'default' up with 'libvirt' provider…
==> default: Checking if box 'centos/7' version '1902.01' is up to date…
==> default: Uploading base box image as volume into libvirt storage…
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings…
==> default: -- Name: vagrant-example_default
==> default: -- Domain type: kvm
==> default: -- Cpus: 1
==> default: -- Feature: acpi
==> default: -- Feature: apic
==> default: -- Feature: pae
==> default: -- Memory: 512M
==> default: -- Management MAC:
==> default: -- Loader:
==> default: -- Nvram:
==> default: -- Base box: centos/7
==> default: -- Storage pool: default
==> default: -- Image: /var/lib/libvirt/images/vagrant-example_default.img (41G)
==> default: -- Volume Cache: default
==> default: -- Kernel:
==> default: -- Initrd:
==> default: -- Graphics Type: vnc
==> default: -- Graphics Port: -1
==> default: -- Graphics IP:
==> default: -- Graphics Password: Not defined
==> default: -- Video Type: cirrus
==> default: -- Video VRAM: 9216
==> default: -- Sound Type:
==> default: -- Keymap: en-us
==> default: -- TPM Path:
==> default: -- INPUT: type=mouse, bus=ps2
==> default: Creating shared folders metadata…
==> default: Starting domain.
==> default: Waiting for domain to get an IP address…
==> default: Waiting for SSH to become available…
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default: Inserting generated public key within guest…
default: Removing insecure key from the guest if it's present…
default: Key inserted! Disconnecting and reconnecting using new SSH key…
==> default: Configuring and enabling network interfaces…
default: SSH address:
default: SSH username: vagrant
default: SSH auth method: private key
==> default: Rsyncing folder: /home/hanshj/vagrant-example/ => /vagrant

You have to type your password to complete this command.

We have now created a new VM using Vagrant and it is available at our disposal. To access it we can run the command

$ vagrant ssh
[vagrant@localhost ~]$

We are now presented with the Vagrant box prompt, logged in as the user vagrant. The default for all Vagrant boxes is username vagrant and password vagrant.

To exit the SSH session to the Vagrant box, just press Ctrl+D or log out as you normally do.

To list the available boxes that we have downloaded

$ vagrant box list
centos/7 (libvirt, 1902.01)
generic/ubuntu1804 (libvirt, 1.9.6)

To get a list of all VMs running on libvirt run the following command

$ sudo virsh list --all
1 vagrant-example_default running

The Vagrantfile can be modified to add extra disks, NICs, memory, or several VMs. There are many options available, but here are some of the basics I usually add.

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = "centos7-01.acme"
  config.vm.define "centos7.acme"
end
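After editing the Vagrantfile, vagrant validate is a quick way to catch syntax mistakes before running vagrant up:

```shell
# Run inside the directory containing the Vagrantfile.
vagrant validate
```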

When you have done some tests on your VM and would like a fresh start, just destroy it and begin again.

$ vagrant destroy
default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Removing domain…

To start the VM again, fresh and ready just issue the command

$ vagrant up


30 Oct 2018 Puppet gotchas when using SSSD-module and network Team with NetworkManager

I have been using Puppet on some of my servers to keep my SSSD configuration in the state I want it to be. There is one thing I learned this summer, later confirmed by Red Hat bug 1414573: the Puppet SSSD module I have been using triggers a service refresh when the sssd.conf file changes, and it currently restarts messagebus, sssd and oddjobd. On RHEL 7 this results in two issues:

  1. SSH connections become really, really slow
  2. NetworkManager starts spewing errors.

A side effect of issue number 2 is that if you have configured your network NICs as members of a network team, the team stops working and is shut down. The member NICs will not rejoin the network team until you restart the NetworkManager daemon.

“Restarting “messagebus” means to restart dbus. In general, many components don’t handle restart of dbus properly, so if you try to restart the dbus daemon, you effectively would have to restart a range of service — which amounts to a reboot. NetworkManager doesn’t support restarting dbus. Afterwards it will not reconnect to the message-bus and is effectively unreachable.”

Source: Bug 1414573 - 'systemctl restart messagebus sssd oddjobd' results in slow logins and NetworkManager errors


30 Oct 2018 Email notification on SSH login using PAM

There are cases where you are interested in getting an email message on every successful login through SSH. This could be solved by adding a simple line to .bash_profile for every user, but that does not catch all SSH logins. The preferred way is to use PAM and a custom email notification script.

Add the following line to the bottom of file /etc/pam.d/sshd

session optional pam_exec.so seteuid /usr/local/bin/

This is the contents of /usr/local/bin/


#!/bin/sh
# Change these two lines (the addresses below are placeholders):
sender="sender@example.com"
recipient="recipient@example.com"

if [ "$PAM_TYPE" != "close_session" ]; then
    host="$(hostname)"
    subject="SSH Login: $PAM_USER from $PAM_RHOST on $host"
    # Message to send, e.g. the current environment variables.
    message="$(env)"
    echo "$message" | mailx -r "$sender" -s "$subject" "$recipient"
fi

Make the script executable

# chmod 0700 /usr/local/bin/
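pam_exec hands the session context to the script through environment variables, so you can dry-run the logic before touching PAM by setting them yourself (the values here are made up):

```shell
# Simulate a login: export the variables pam_exec would set and build
# the same subject line the script uses.
PAM_TYPE=open_session PAM_USER=alice PAM_RHOST=198.51.100.7 sh -c '
  if [ "$PAM_TYPE" != "close_session" ]; then
    echo "SSH Login: $PAM_USER from $PAM_RHOST on $(hostname)"
  fi
'
```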

This is the email message you receive the next time you or someone else logs in using SSH

SSH Login: username from on


This has been tested on CentOS 7 and Ubuntu 18.04, but I guess most recent distributions support this.

Sending emails on login may conflict with data privacy on multiuser systems. This can be circumvented by sending emails only for specific users or for root (if at all accessible via SSH). I might cover that in a later post.
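One way to limit the notifications, assuming you keep the pam_exec approach, is an early-exit whitelist near the top of the script (a sketch; the account names are made up, and in the real script the mailx part would follow the esac):

```shell
# Early-exit whitelist: only the listed accounts trigger a notification.
case "$PAM_USER" in
    root|admin) ;;      # watched accounts: fall through
    *) exit 0 ;;        # everyone else: exit quietly, no email
esac
echo "would notify about $PAM_USER"
```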
