Puppet gotchas when using the SSSD module and a network team with NetworkManager

I have been using Puppet on some of my servers to keep my SSSD configuration in the state I want it to be in. One thing I learned this summer, and later found confirmed in Red Hat bug 1414573, is that the Puppet SSSD module I have been using triggers a service refresh when the sssd.conf file changes. It currently restarts messagebus, sssd and oddjobd. On RHEL 7 this results in two issues:

  1. SSH connections become really, really slow
  2. NetworkManager starts spewing errors.

A side effect of issue number 2 is that if you have configured your network NICs as members of a network team, the team will stop working and be shut down. The NICs will not rejoin the team until you restart the NetworkManager daemon.

“Restarting ‘messagebus’ means to restart dbus. In general, many components don’t handle restart of dbus properly, so if you try to restart the dbus daemon, you effectively would have to restart a range of services — which amounts to a reboot. NetworkManager doesn’t support restarting dbus. Afterwards it will not reconnect to the message-bus and is effectively unreachable.”

Source: Red Hat Bug 1414573 - 'systemctl restart messagebus sssd oddjobd' results in slow logins and NetworkManager errors
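
Until the module is fixed, a workaround is to make sure a change to sssd.conf only triggers a restart of sssd itself, and to bring NetworkManager back manually if the team has already been torn down. A minimal sketch, assuming systemd on RHEL 7 and the team name team0 used later in this post:

# systemctl restart sssd            # restart only sssd, never messagebus
# systemctl restart NetworkManager  # recover NetworkManager if dbus was already restarted
# teamdctl team0 state              # verify the team members are back up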

Email notification on SSH login using PAM

There are cases where you want an email message on every successful login through SSH. This could be solved by adding a line to .bash_profile for every user, but that approach does not catch all SSH logins. The preferred way is to use PAM and a custom email notification script.

Add the following line at the bottom of /etc/pam.d/sshd

session optional pam_exec.so seteuid /usr/local/bin/login-notify.sh

This is the contents of /usr/local/bin/login-notify.sh

#!/bin/sh

# Change these two lines:
sender="root@example.com"
recipient="root"

if [ "$PAM_TYPE" != "close_session" ]; then
    host="`hostname`"
    subject="SSH Login: $PAM_USER from $PAM_RHOST on $host"
    # Message to send, e.g. the current environment variables.
    message="`env`"
    echo "$message" | mailx -r "$sender" -s "$subject" "$recepient"
fi

Make the script executable

# chmod 0700 /usr/local/bin/login-notify.sh

This is the email message you will receive the next time you or someone else logs in using SSH

SSH Login: username from hostname-remote.user.com on target-host.example.com

XDG_SESSION_ID=775
SELINUX_ROLE_REQUESTED=
PAM_SERVICE=sshd
SELINUX_USE_CURRENT_RANGE=
PAM_RHOST=hostname-remote.user.com
PAM_USER=username
PWD=/
SELINUX_LEVEL_REQUESTED=
SHLVL=1
PAM_TYPE=open_session
PAM_TTY=ssh
XDG_RUNTIME_DIR=/run/user/9000
_=/usr/bin/env

This has been tested on CentOS 7 and Ubuntu 18.04, but most recent distributions should support it as well.
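
To test it, open a new SSH session against the host and check the recipient's mailbox. Assuming you can SSH to the machine itself and root is the recipient:

# ssh localhost true
# mailx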

DATA PRIVACY
Sending emails on login may conflict with data privacy on multiuser systems. This can be mitigated by only sending emails for specific users, or for root (if root is accessible via SSH at all). A minimal sketch follows; I might cover it in more depth in a later post.
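
A minimal sketch of that filtering, added near the top of login-notify.sh; the account list (root plus a hypothetical admin user) is just an example:

# Only notify for selected accounts, silently skip everyone else.
case "$PAM_USER" in
    root|admin) ;;
    *) exit 0 ;;
esac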

Reinstall GRUB using a live CD

After a failed upgrade to Ubuntu 14.04 the server complained about a missing disk at boot. The solution was to reinstall GRUB and reboot. The procedure is loosely described below.

Boot your Ubuntu server using a live CD you have downloaded from Ubuntu. Choose Try Ubuntu, since installation is not the desired option at this time.

Open a terminal window, become root using the sudo command, and make the LVM disks available

# lvm vgscan -v
# vgchange -a y vgdisk-for-root-partition
# mkdir /mnt/root
# mount /dev/vgdisk-for-root-partition/root /mnt/root
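
If you are unsure what the volume group and logical volume are called on your system (vgdisk-for-root-partition above is just a placeholder), list them first:

# lvm vgs
# lvm lvs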

Create a chroot environment where you can run the grub-install command

# mount --bind /dev /mnt/root/dev
# mount --bind /proc /mnt/root/proc
# mount --bind /sys /mnt/root/sys
# chroot /mnt/root /bin/bash
# mount /boot
# grub-install /dev/sda

Always make sure you are working on the right disk before using the grub-install command since it overwrites the boot loader.
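
To double-check which disk you are about to write to, list the block devices and partition tables first:

# lsblk
# fdisk -l /dev/sda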

Reboot the server when the grub-install command has completed successfully. The server should now boot and work again as it used to.

Install HAProxy 1.8 on CentOS 7

This is just a short write-up on installing HAProxy version 1.8 on CentOS 7 using Software Collections. HAProxy is an application layer (Layer 7) load balancing and high availability solution that you can use to implement a reverse proxy for HTTP and TCP-based Internet services. I am using it to expose my web services through a reverse proxy.

If the default HAProxy version 1.5 is installed, it should be removed first because it blocks the new version we are going to install.

# yum remove haproxy
...
warning: /etc/haproxy/haproxy.cfg saved as /etc/haproxy/haproxy.cfg.rpmsave
...

This warning indicates that your old HAProxy config file has been renamed. This is useful to know if you are planning to reuse the file with HAProxy version 1.8.

Install the Software Collections (SCL) repository to get access to the new HAProxy version

# yum install centos-release-scl

Update your repositories and accept the new repository.
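
You can verify that the new repository is active before continuing:

# yum repolist | grep -i scl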

Install HAProxy 1.8

# yum install rh-haproxy18-haproxy rh-haproxy18-haproxy-syspaths

The rh-haproxy18-haproxy-syspaths package is a system-wide wrapper for the rh-haproxy18-haproxy package and allows us to run HAProxy 1.8 as a service. It conflicts with the base haproxy package, so the two cannot be installed on the same system.

If we now look in /etc/haproxy/ we will see that the config file is a symlink into the new package

# ls -l /etc/haproxy/
lrwxrwxrwx. 1 root root 44 Jul 17 18:19 haproxy.cfg -> /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg

If you had HAProxy 1.5 installed previously and would like to continue using its config file, copy it to the new location. First preserve the original HAProxy 1.8 config file by renaming it, or just copy the rules you need from the old config into the new one.

# mv /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg.original
# cp /etc/haproxy/haproxy.cfg.rpmsave /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
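
Since the old file was written for version 1.5, it is worth validating it against the new binary before starting the service. Assuming the syspaths wrapper has put haproxy on your PATH:

# haproxy -c -f /etc/haproxy/haproxy.cfg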

We are now ready to start HAProxy 1.8 with our old config file

# systemctl start rh-haproxy18-haproxy
# systemctl status rh-haproxy18-haproxy

If we would like the new HAProxy version to auto-start on reboot

# systemctl enable rh-haproxy18-haproxy

Done.

Installing HAProxy 1.8 on Red Hat Enterprise Linux 7 is similar, except you use subscription-manager to add the Software Collections repository.
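
On RHEL 7 that would look roughly like this; verify the repository ID against your subscription, as it may differ:

# subscription-manager repos --enable rhel-server-rhscl-7-rpms
# yum install rh-haproxy18-haproxy rh-haproxy18-haproxy-syspaths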

Configure VLAN on top of network team using nmcli / NetworkManager

This post is quite similar to the previous one, where I created a team with two network NICs as members using nmcli / NetworkManager from a console. This time I have added a VLAN on top of my LACP network team with two member NICs.

First we need to install the teamd package if it is not already installed.

# yum install teamd

Create the team using the console command nmcli / NetworkManager and a JSON config file containing the config for the team, filename team-master-conf.json:

{
        "runner":       {
                "active": true,
                "fast_rate": true,
                "name": "lacp",
                "tx_hash": [ "eth", "ipv4" ]
        },
        "tx_balancer":  { "name": "basic" },
        "link_watch":   { "name": "ethtool" }
}
# nmcli con add type team con-name team0 ifname team0 config team-master-conf.json
# nmcli con add type team-slave con-name team0-em1 ifname em1 master team0
# nmcli con add type team-slave con-name team0-em2 ifname em2 master team0

I have not added an IP address to the new team since I will add that on the VLAN interface.

Check the status of the team

# nmcli con show
NAME               UUID                                  TYPE            DEVICE
team0              7f0c0038-b8c1-45bb-a286-501d02304700  team            team0
team0-em1          0394e2ae-6610-4997-92db-775876866d0d  802-3-ethernet  em1
team0-em2          7050d641-83bb-497a-ae23-6af029386117  802-3-ethernet  em2

Check the state of the team

# teamdctl team0 state
setup:
  runner: lacp
ports:
  em1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 1
    runner:
      aggregator ID: 12, Selected
      selected: yes
      state: current
  em2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 12, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: yes

Add a VLAN to the network team

# nmcli con add type vlan con-name team0-vlan12 dev team0 id 12 ip4 10.1.0.20/24 gw4 10.1.0.1

The new config looks like this

# nmcli con s | grep team
team0              7f0c0038-b8c1-45bb-a286-501d02304700  team            team0
team0-vlan12       d5de0d83-d490-4535-915c-4cbdcf39830b  vlan            team0.12
team0-em1          0394e2ae-6610-4997-92db-775876866d0d  802-3-ethernet  em1
team0-em2          7050d641-83bb-497a-ae23-6af029386117  802-3-ethernet  em2
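
To activate the VLAN connection and verify that the tagged interface got its address:

# nmcli con up team0-vlan12
# ip -d addr show team0.12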

This config is confirmed working on RHEL 7.4 and CentOS 7.
I assume the switch is configured as needed (LACP and VLAN tagging) before applying this config on the server.