This is a short post on how to back up a Dovecot Maildir with the doveadm backup command, initiated from a home server that is not reachable from the Internet, pulling from your Internet-facing mail server over SSH as a secure transport.
The post is not tied to any particular Linux distribution and the commands can be used without modification as long as you have access to bash. My particular config is based on Ubuntu 20.04 and CentOS 8 in my home lab.
The servers have been named host-A, host-B and host-C to make the configuration easier to follow.
I have installed Dovecot with a config similar to my Internet-facing installation so that all email accounts can be backed up safely. The home lab sits behind NAT and a firewall and is, by choice, not accessible from the Internet.
The Dovecot mail server on the Internet sits behind a reverse proxy (HAProxy) in a secure manner and is not directly accessible from the Internet. Direct SSH access to the mail server is not allowed either, but you can reach it by jumping through the Bastion host. To make this as simple and automated as possible I have added the needed configuration to my .ssh/config file so that doveadm can reach the Dovecot server without any problems.
To allow my home lab server (host-A) to access the Bastion host (host-B) over SSH I have created a custom .ssh/config file using SSH keys. Setting up the SSH keys is not described here.
Host host-B
    User username
    HostName b.example.com
    IdentityFile ~/.ssh/id_rsa

Host host-C
    User username
    HostName <address of host C>
    IdentityFile ~/.ssh/id_rsa
    ProxyJump host-B
To verify that our SSH connection is working we start an SSH session from host-A with the command
$ ssh host-C
And if everything is working as expected you are now logged into the mail server over SSH.
This was made possible by the ProxyJump directive in the .ssh/config file defined on host-A.
The doveadm command is versatile and can be used for many tasks, but here I use it to solve my Dovecot Maildir backup needs. doveadm backup performs one-way synchronization: any changes present only in the destination are deleted, so the destination ends up looking exactly like the source.
You can also use doveadm sync to perform two-way synchronization. It merges all changes without losing anything, and both mailboxes end up identical after the synchronization finishes.
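For comparison, a two-way sync of the same account over the same SSH transport would look like the sketch below, using the host-C alias from my .ssh/config. Only run this if merging changes in both directions is really what you want:

```shell
# Two-way merge instead of one-way backup: both mailboxes end up identical,
# and nothing is deleted on either side.
doveadm sync -u username@example.com ssh host-C doveadm dsync-server -u username@example.com
```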
We are now ready to do the actual backup of Dovecot using the doveadm backup command. Usually the doveadm command is run from the source towards the target host, but in my case I reverse it because my home lab is not reachable from the Internet.
The command to initiate a backup of a single user account using doveadm over SSH:
# doveadm backup -R -u username@example.com ssh host-C doveadm dsync-server -u username@example.com
When the backup command is running you will see the following process running on the source host-C
doveadm dsync-server -u username@example.com dsync-server
Similarly you will see the following three processes on the target host, host-A, in my home lab
doveadm -v backup -R -u username@example.com ssh host-C doveadm dsync-server -u username@example.com
ssh host-C doveadm dsync-server -u username@example.com dsync-server
ssh -W [IP-address of host-C]:port host-B
To automate things and back up all user emails I use a simple bash script that queries Dovecot for all users and backs up every account, one by one, using doveadm backup over SSH.
List all Dovecot users
# doveadm user '*@*'
user1@example.com
user2@example.com
user3@example.com
The script to backup mail from all users accounts
#!/bin/bash
# Quote the pattern so the shell does not expand it as a glob
doveadm user '*@*' | while read -r user; do
    doveadm -v backup -R -u "$user" ssh host-C doveadm dsync-server -u "$user"
done
The -v option makes doveadm verbose.
The -R option performs a reverse backup, i.e. one initiated from the target host.
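If the script runs unattended it can be handy to log what happens. Here is a sketch of the same loop with simple logging and per-user error reporting; the log file path is my own arbitrary choice, adjust as needed:

```shell
#!/bin/bash
# Same backup loop as above, but with timestamps and failures
# written to a log file (the path is just an example).
LOG=/var/log/doveadm-backup.log

doveadm user '*@*' | while read -r user; do
    echo "$(date -Is) starting backup of $user" >> "$LOG"
    if ! doveadm -v backup -R -u "$user" ssh host-C doveadm dsync-server -u "$user" >> "$LOG" 2>&1; then
        echo "$(date -Is) backup FAILED for $user" >> "$LOG"
    fi
done
```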
If the two ends do not use the same mailbox format, doveadm can convert from the source format to the target format. I use Maildir on both servers, so no conversion is necessary.
The doveadm backup command can be a bit tricky if you abort the initial sync of an email account before it finishes. If that happens, just delete the target directory and start the backup operation again.
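For a single aborted account that could look like the following. The Maildir path below is only an example; check the mail_location setting in your own Dovecot config before deleting anything:

```shell
# The path below is an EXAMPLE ONLY, verify your mail_location first!
rm -rf /home/vmail/example.com/username/Maildir
# ...then start the backup for that account again
doveadm -v backup -R -u username@example.com ssh host-C doveadm dsync-server -u username@example.com
```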
To keep your backup updated regularly, create a cron job with your doveadm backup command and you are all set.
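A crontab entry could look like this, assuming the script above is saved as /usr/local/bin/dovecot-backup.sh (both the path and the schedule are just examples):

```shell
# Run the Dovecot backup script every night at 02:30
30 2 * * * /usr/local/bin/dovecot-backup.sh >> /var/log/dovecot-backup.log 2>&1
```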
Tags: backup, CentOS, doveadm, dsync-server, ssh, Ubuntu
Posted by Hans-Henry Jakobsen
This short post describes how to install the latest version of Vagrant using the libvirt provider on a fresh CentOS 7 (Minimal install). I will not take any security measures to harden this config, at least not in this post. Vagrant supports several providers besides libvirt, such as VirtualBox and VMware. I prefer libvirt because I am used to working with virt-manager and KVM.
I assume you know what Vagrant is and basic usage of it. If you do not know what Vagrant is, please visit the Hashicorp website.
I used Vagrant as a sandbox for my Puppet development several years ago, but somewhere along the way I stopped using it. My interest in Vagrant returned while doing some Ansible playbook development. The easy way of setting up and tearing down server boxes really helps when you develop and test.
My code examples usually start with # or $: # tells you that I am running as the root user, and $ as a normal user.
First we need to get the latest packages on our installation and reboot the server.
# yum -y update && shutdown -r now
We are now ready to add the prerequisites to the installation.
It is easier to work with Vagrant from a graphical interface (GUI), so we install the “Server with GUI” package group.
# yum -y group install "Server with GUI"
This command takes a while to finish, so take a short break while the installation completes.
Now we determine the latest version of Vagrant and install it. Open your web browser, visit https://releases.hashicorp.com/vagrant/ and copy the URL of the latest version available. In my case that was https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.rpm
Installing Vagrant
# yum -y install https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.rpm
=========================================================================================================
Package Arch Version Repository Size
Installing:
vagrant x86_64 1:2.2.4-1 /vagrant_2.2.4_x86_64 110 M
Transaction Summary
Install 1 Package
Total size: 110 M
Installed size: 110 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:vagrant-2.2.4-1.x86_64 1/1
Verifying : 1:vagrant-2.2.4-1.x86_64 1/1
Installed:
vagrant.x86_64 1:2.2.4-1
Please note that when we install a package directly from a URL with yum like this, updates will not become available automatically. You need to download a newer version manually when desired.
Now we have Vagrant installed, but we have not yet chosen the provider type we would like to use to run our VMs. I prefer libvirt (KVM) as the provider for my VMs because of its stability. Installing KVM as the provider.
# yum -y install libvirt libvirt-devel qemu-kvm virt-install virt-manager virt-top libguestfs-tools bridge-utils
The virt-manager package will give us a GUI to our VMs and gives us console access if needed.
Start the libvirt daemon and enable default KVM virtualization during startup.
# systemctl start libvirtd && systemctl enable libvirtd
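Before going further it can be worth verifying that the host actually supports KVM and that libvirtd came up properly. These checks are optional:

```shell
# A non-zero count means the CPU exposes virtualization extensions
# (vmx = Intel VT-x, svm = AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo

# virt-host-validate ships with libvirt and checks the host's
# virtualization setup end to end
virt-host-validate

# Confirm the daemon is running
systemctl status libvirtd
```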
As a convenience I usually install the Development Tools package group as well; it also provides the compilers needed later when the vagrant-libvirt plugin builds its native extensions.
# yum -y group install "Development Tools"
It is now time to choose the Vagrant provider and start using Vagrant. We are using the vagrant-libvirt provider. Make sure to run the following command as the user you are going to use with vagrant. I am using a regular user to install the plugin.
$ vagrant plugin install vagrant-libvirt
Installing the 'vagrant-libvirt' plugin. This can take a few minutes…
Fetching: excon-0.62.0.gem (100%)
Fetching: formatador-0.2.5.gem (100%)
Fetching: fog-core-1.43.0.gem (100%)
Fetching: fog-json-1.2.0.gem (100%)
Fetching: mini_portile2-2.4.0.gem (100%)
Fetching: nokogiri-1.10.1.gem (100%)
Building native extensions. This could take a while…
Fetching: fog-xml-0.1.3.gem (100%)
Fetching: ruby-libvirt-0.7.1.gem (100%)
Building native extensions. This could take a while…
Fetching: fog-libvirt-0.6.0.gem (100%)
Fetching: vagrant-libvirt-0.0.45.gem (100%)
Installed the plugin 'vagrant-libvirt (0.0.45)'!
It is now time to download an OS image and create a VM using Vagrant. First we create a directory that will hold our Vagrant environment.
$ mkdir vagrant-example
$ cd vagrant-example
We are now ready to start using Vagrant, and it is time to get the OS of our choice. You can search for available boxes at https://app.vagrantup.com/boxes/search
I will download Ubuntu 18.04 (generic unmodified image) and CentOS 7 box images by issuing the following commands
$ vagrant box add generic/ubuntu1804
==> box: Loading metadata for box 'generic/ubuntu1804'
box: URL: https://vagrantcloud.com/generic/ubuntu1804
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.
1) hyperv
2) libvirt
3) parallels
4) virtualbox
5) vmware_desktop
Enter your choice: 2
Choose option 2) libvirt as provider since that is what I installed earlier in this post.
==> box: Adding box 'generic/ubuntu1804' (v1.9.6) for provider: libvirt
box: Downloading: https://vagrantcloud.com/generic/boxes/ubuntu1804/versions/1.9.6/providers/libvirt.box
box: Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com
==> box: Successfully added box 'generic/ubuntu1804' (v1.9.6) for 'libvirt'!
Next we add a CentOS 7 box image
$ vagrant box add centos/7
==> box: Loading metadata for box 'centos/7'
box: URL: https://vagrantcloud.com/centos/7
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.
1) hyperv
2) libvirt
3) virtualbox
4) vmware_desktop
Choose option 2) libvirt
==> box: Adding box 'centos/7' (v1902.01) for provider: libvirt
box: Downloading: https://vagrantcloud.com/centos/boxes/7/versions/1902.01/providers/libvirt.box
box: Download redirected to host: cloud.centos.org
==> box: Successfully added box 'centos/7' (v1902.01) for 'libvirt'!
If you are behind a proxy, tell Vagrant to use it. If not, ignore the next line.
$ export https_proxy=proxy.example.com:8080
To create a Vagrantfile and get started with the CentOS 7 image we just added
$ vagrant init centos/7
A Vagrantfile has been placed in this directory.
You are now ready to vagrant up your first virtual environment! Please read the comments in the Vagrantfile as well as documentation on
https://vagrantup.com for more information on using Vagrant.
The content of the Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
end
It is now time to start our first virtual machine using Vagrant.
To start up our CentOS 7 box we run the following command
$ vagrant up
Bringing machine 'default' up with 'libvirt' provider…
==> default: Checking if box 'centos/7' version '1902.01' is up to date…
==> default: Uploading base box image as volume into libvirt storage…
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings…
==> default: -- Name: vagrant-example_default
==> default: -- Domain type: kvm
==> default: -- Cpus: 1
==> default: -- Feature: acpi
==> default: -- Feature: apic
==> default: -- Feature: pae
==> default: -- Memory: 512M
==> default: -- Management MAC:
==> default: -- Loader:
==> default: -- Nvram:
==> default: -- Base box: centos/7
==> default: -- Storage pool: default
==> default: -- Image: /var/lib/libvirt/images/vagrant-example_default.img (41G)
==> default: -- Volume Cache: default
==> default: -- Kernel:
==> default: -- Initrd:
==> default: -- Graphics Type: vnc
==> default: -- Graphics Port: -1
==> default: -- Graphics IP: 127.0.0.1
==> default: -- Graphics Password: Not defined
==> default: -- Video Type: cirrus
==> default: -- Video VRAM: 9216
==> default: -- Sound Type:
==> default: -- Keymap: en-us
==> default: -- TPM Path:
==> default: -- INPUT: type=mouse, bus=ps2
==> default: Creating shared folders metadata…
==> default: Starting domain.
==> default: Waiting for domain to get an IP address…
==> default: Waiting for SSH to become available…
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest…
default: Removing insecure key from the guest if it's present…
default: Key inserted! Disconnecting and reconnecting using new SSH key…
==> default: Configuring and enabling network interfaces…
default: SSH address: 192.168.121.32:22
default: SSH username: vagrant
default: SSH auth method: private key
==> default: Rsyncing folder: /home/hanshj/vagrant-example/ => /vagrant
You may have to type your password to complete this command.
We have now created a new VM using Vagrant and it is at our disposal. To access it we run the command
$ vagrant ssh
[vagrant@localhost ~]$
We are now presented with the Vagrant box prompt logged in as the user vagrant. Default for all Vagrant boxes is username vagrant and password vagrant.
To exit the SSH session to the Vagrant box just press Ctrl+D or just logout as you normally do.
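Some other everyday Vagrant subcommands that are useful at this point, all run from the project directory:

```shell
$ vagrant status    # show the state of the VM(s) defined in this directory
$ vagrant halt      # shut the VM down gracefully
$ vagrant suspend   # save the VM state and stop it
$ vagrant resume    # wake up a suspended VM
```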
To list the available boxes that we have downloaded
$ vagrant box list
centos/7 (libvirt, 1902.01)
generic/ubuntu1804 (libvirt, 1.9.6)
To get a list of all VMs running on libvirt run the following command
$ sudo virsh list --all
 Id    Name                      State
----------------------------------------------------------------
 1     vagrant-example_default   running
The Vagrantfile can be modified to add extra disks, NICs, memory, or even several VMs. There are many options available, but here are some of the basics I usually add.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = "centos7-01.acme"
  config.vm.define "centos7.acme"
end
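After editing the Vagrantfile you can check it for errors before bringing the machine up. The vagrant validate subcommand is available in recent Vagrant versions:

```shell
# Validate the Vagrantfile in the current directory
$ vagrant validate
```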
When you have done some tests on your VM and would like to start over with a fresh one, just destroy it and begin again.
$ vagrant destroy
default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Removing domain…
To start the VM again, fresh and ready just issue the command
$ vagrant up
Tags: CentOS, howto, libvirt, ssh, vagrant
Posted by Hans-Henry Jakobsen
This is just a short write-up on installing HAProxy version 1.8 on CentOS 7 using Software Collections. HAProxy is an application layer (Layer 7) load balancing and high availability solution that you can use to implement a reverse proxy for HTTP and TCP-based Internet services. I am using it to expose my webservices through a reverse proxy.
If the default HAProxy version 1.5 is installed, it should be removed because it blocks the new version we are going to install.
# yum remove haproxy
...
warning: /etc/haproxy/haproxy.cfg saved as /etc/haproxy/haproxy.cfg.rpmsave
...
This warning indicates that your old HAProxy config file has been renamed. This is useful to know if you plan to reuse the same file with HAProxy version 1.8.
Install the Software Collections (SCL) repository to get access to the new HAProxy version
# yum install centos-release-scl
Update your repositories and accept the new repository.
Installing HAProxy 1.8
# yum install rh-haproxy18-haproxy rh-haproxy18-haproxy-syspaths
The rh-haproxy18-haproxy-syspaths package is a system-wide wrapper for the rh-haproxy18-haproxy package and allows us to run HAProxy 1.8 as a service. It conflicts with the base haproxy package, so the two cannot be installed on the same system.
If we now look in /etc/haproxy/ we will see that the config file is a symlink into the new package
# ls -l /etc/haproxy/
lrwxrwxrwx. 1 root root 44 Jul 17 18:19 haproxy.cfg -> /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
If you had HAProxy 1.5 installed previously and would like to continue using its config file, copy it to the new location. First we preserve the original HAProxy 1.8 config file by renaming it, or just copy the rules you need from the old config.
# mv /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg.original
# cp /etc/haproxy/haproxy.cfg.rpmsave /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
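Before starting the service it is a good idea to syntax-check the old config against the new HAProxy version. With the syspaths package installed the haproxy binary should be on the normal PATH, but I have not verified the exact wrapper path, so adjust if needed:

```shell
# -c checks the configuration file for errors without starting HAProxy
haproxy -c -f /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
```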
We are now ready to start HAProxy 1.8 with our old config file
# systemctl start rh-haproxy18-haproxy
# systemctl status rh-haproxy18-haproxy
If we would like the new HAProxy version to start automatically on reboot
# systemctl enable rh-haproxy18-haproxy
Done.
Installing HAProxy 1.8 on Red Hat Enterprise Linux 7 is similar, except that you use subscription-manager to enable the Software Collections repository.
Tags: CentOS, haproxy, scl, software_collections
Posted by Hans-Henry Jakobsen
This post is very similar to the previous one, where I created a team with two network NICs as members using NetworkManager's nmcli from a console. This time I add a VLAN on top of my LACP network team with two member NICs.
First we need to install the teamd package if it is not already installed.
# yum install teamd
Using the console command nmcli (NetworkManager) and a JSON config file with the default config for the team, filename team-master-conf.json:
{
    "runner": {
        "active": true,
        "fast_rate": true,
        "name": "lacp",
        "tx_hash": ["eth", "ipv4"]
    },
    "tx_balancer": {
        "name": "basic"
    },
    "link_watch": {
        "name": "ethtool"
    }
}
# nmcli con add type team con-name team0 ifname team0 config team-master-conf.json
# nmcli con add type team-slave con-name team0-em1 ifname em1 master team0
# nmcli con add type team-slave con-name team0-em2 ifname em2 master team0
I have not added an IP-address to the new team since I will add that on the VLAN interface.
# nmcli con show
NAME       UUID                                  TYPE            DEVICE
team0      7f0c0038-b8c1-45bb-a286-501d02304700  team            team0
team0-em1  0394e2ae-6610-4997-92db-775876866d0d  802-3-ethernet  em1
team0-em2  7050d641-83bb-497a-ae23-6af029386117  802-3-ethernet  em2
Check the state of the team
# teamdctl team0 state
setup:
  runner: lacp
ports:
  em1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 1
    runner:
      aggregator ID: 12, Selected
      selected: yes
      state: current
  em2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 12, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: yes
# nmcli con add type vlan con-name team0-vlan12 dev team0 id 12 ip4 10.1.0.20/24 gw4 10.1.0.1
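To verify that the VLAN interface came up with the expected VLAN ID and address, something like the following can be used:

```shell
# -d prints protocol details, including the VLAN id of team0.12
ip -d link show team0.12
# Show the IP address assigned to the VLAN interface
ip addr show team0.12
```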
The new config looks like this
# nmcli con s | grep team
team0         7f0c0038-b8c1-45bb-a286-501d02304700  team            team0
team0-vlan12  d5de0d83-d490-4535-915c-4cbdcf39830b  vlan            team0.12
team0-em1     0394e2ae-6610-4997-92db-775876866d0d  802-3-ethernet  em1
team0-em2     7050d641-83bb-497a-ae23-6af029386117  802-3-ethernet  em2
This config is confirmed to work on RHEL 7.4 and CentOS.
I assume the switch is configured as needed before starting this config on the server.
Tags: CentOS, lacp, nmcli, RedHat, rhel7, teamd, teamdctl, vlan
Posted by Hans-Henry Jakobsen
This is a short post on how to create an LACP network team with two member NICs using NetworkManager and nmcli. Configuring a network team is very similar to creating a bond.
First we need to install the teamd package if it is not already installed.
# yum install teamd
I have also included a json-config file with the default config for the team, filename team-master-conf.json:
{
    "runner": {
        "active": true,
        "fast_rate": true,
        "name": "lacp",
        "tx_hash": ["eth", "ipv4"]
    },
    "tx_balancer": {
        "name": "basic"
    },
    "link_watch": {
        "name": "ethtool"
    }
}
# nmcli con add type team con-name team0 ifname team0 config team-master-conf.json ip4 10.0.0.10/24 gw4 10.0.0.1
# nmcli con add type team-slave con-name team0-em1 ifname em1 master team0
# nmcli con add type team-slave con-name team0-em2 ifname em2 master team0
# nmcli con show
NAME       UUID                                  TYPE            DEVICE
team0      7f0c0038-b8c1-45bb-a286-501d02304700  team            team0
team0-em1  0394e2ae-6610-4997-92db-775876866d0d  802-3-ethernet  em1
team0-em2  7050d641-83bb-497a-ae23-6af029386117  802-3-ethernet  em2
Check the state of the team
# teamdctl team0 state
setup:
  runner: lacp
ports:
  em1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 1
    runner:
      aggregator ID: 12, Selected
      selected: yes
      state: current
  em2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 12, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: yes
Take down a network interface
# nmcli con down em1
Bring up a network interface
# nmcli con up em1
Delete a network interface
# nmcli con delete em1
Add a new ethernet connection for a device
# nmcli con add type ethernet con-name em1 ifname em1
This config is confirmed to work on RHEL 7.4 and CentOS.
I assume the switch is configured as needed before starting this config on the server.
Tags: CentOS, lacp, nmcli, RedHat, rhel7, teamd, teamdctl
Posted by Hans-Henry Jakobsen