Installing Red Hat Satellite 6 with Letsencrypt certificates

Red Hat Satellite 6 is a nice tool for system life cycle management. It can get complex, and even the installation is sometimes tricky. This article is about how to install Satellite; it does not explain the principles and concepts behind it.

Requirements

A valid subscription for the Satellite (and optionally for the capsule).

The system requirements are listed here.

There is one important thing the install guide is missing: Satellite 6.4 will not work in IPv6-only environments. There must be an IPv4 address configured, even if it is just an RFC 1918 private address, and you need to add this IP to /etc/hosts. The reason is that several daemons listen on IPv4 addresses only. One of them is important: Apache QPID, which is used e.g. for errata application via katello-agent. Update: there are two undocumented installer parameters that help, see below.
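A minimal sketch of such an /etc/hosts entry (the RFC 1918 address and the names are placeholders, use your own):

[root@sat6 ~]# echo "192.168.0.10   sat6.example.com sat6" >> /etc/hosts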

A proxy is mandatory in IPv6-only environments

If your Satellite is only able to connect to the internet via IPv6, you need an IPv4-capable proxy to talk to subscription.rhsm.redhat.com, which is not reachable over IPv6. That is the host the subscription manager talks to.
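subscription-manager can be pointed at such a proxy like this (proxy.example.com and port 3128 are placeholders for your proxy):

[root@sat6 ~]# subscription-manager config --server.proxy_hostname=proxy.example.com --server.proxy_port=3128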

Install EPEL and certbot

It makes sense to use officially valid certs since they are available free of charge from Letsencrypt. Certbot is available from EPEL and is a handy way to request certificates using the ACME protocol.

[root@sat6 ~]# yum -y install http://ftp.tu-chemnitz.de/pub/linux/epel/7Server/x86_64/Packages/e/epel-release-7-11.noarch.rpm

It is important to disable EPEL by default to avoid conflicts with RPMs from other repositories. Just enable EPEL when needed and double-check.

[root@sat6 ~]# yum-config-manager --disable epel

Install the certbot package

[root@sat6 ~]# yum -y install certbot --enablerepo=epel

Issue the cert

[root@sat6 ~]# certbot certonly -n --standalone --agree-tos --domains sat6.example.com -m user@example.com

Download the CA-Certs

Root-CA

[root@sat6 ~]# wget https://www.identrust.com/node/935 -O trustidrootx3_chain.p7b

Convert the p7b to PEM format.

[root@sat6 ~]# openssl pkcs7 -in trustidrootx3_chain.p7b -inform DER -print_certs -out trustidrootx3_chain.pem

Intermediate CA-Cert

[root@sat6 ~]# wget https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt

Create the CA bundle file

[root@sat6 ~]# cp trustidrootx3_chain.pem bundle-ca-cert.pem
[root@sat6 ~]# cat lets-encrypt-x3-cross-signed.pem.txt >> bundle-ca-cert.pem

Check the certificate

To verify that everything is as expected, run the check command.

[root@sat6 ~]# katello-certs-check -c "/etc/letsencrypt/live/sat6.example.com/fullchain.pem" -k "/etc/letsencrypt/live/sat6.example.com/privkey.pem" -b "/root/bundle-ca-cert.pem"

Running the installer

Note: The parameters --foreman-proxy-content-qpid-router-hub-addr :: and --foreman-proxy-content-qpid-router-agent-addr :: are not documented and are only needed if the capsule and/or clients should be able to communicate over IPv6.

[root@sat6 ~]# satellite-installer --scenario satellite \
--certs-server-cert "/etc/letsencrypt/archive/sat6.example.com/fullchain2.pem" \
--certs-server-key "/etc/letsencrypt/archive/sat6.example.com/privkey2.pem" \
--certs-server-ca-cert "/root/bundle-ca-cert.pem" \
--certs-update-server --certs-update-server-ca \
--katello-proxy-url=http://proxy.example.com \
--katello-proxy-port=3128 \
--foreman-proxy-content-qpid-router-hub-addr :: \
--foreman-proxy-content-qpid-router-agent-addr ::

Installing a Capsule Server

If you want to use a capsule server within an environment with Letsencrypt certificates, it's a bit more complex, but it works.

Install the software

Needless to say, your capsule needs to have the correct repos enabled. For details, please see here.

[root@capsule ~]# yum install satellite-capsule

Request the Certificate

On the Satellite, request a certificate for the capsule. Note: This only works with the DNS challenge, so you need access to your DNS server.

[root@sat6 ~]# certbot -d capsule.example.com --manual --preferred-challenges dns certonly
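Certbot will ask you to publish a validation token as a TXT record named _acme-challenge.capsule.example.com. If you manage the zone with IPA (as in the DNS examples later in this article), adding it looks roughly like this; the placeholder must be replaced with the token certbot prints:

[root@ipa1 ~]# ipa dnsrecord-add example.com _acme-challenge.capsule --txt-rec="<token printed by certbot>"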

Prepare the certificates and key

Create a directory and copy the certificates

[root@sat6 ~]# mkdir /root/capsule.example.com
[root@sat6 ~]# cp /etc/letsencrypt/live/capsule.example.com/privkey.pem capsule.example.com
[root@sat6 ~]# cp /etc/letsencrypt/live/capsule.example.com/cert.pem capsule.example.com
[root@sat6 ~]# cp /root/bundle-ca-cert.pem capsule.example.com

Validate the certificate

[root@sat6 ~]# katello-certs-check -c /root/capsule.example.com/cert.pem -b /root/capsule.example.com/bundle-ca-cert.pem -k /root/capsule.example.com/privkey.pem

If all is fine, run the capsule generator:

[root@sat6 ~]# capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar /root/capsule.example.com-certs.tar --server-cert /root/capsule.example.com/cert.pem --server-key /root/capsule.example.com/privkey.pem --server-ca-cert /root/capsule.example.com/bundle-ca-cert.pem

Copy the resulting tarball to the capsule server:

[root@sat6 ~]# scp capsule.example.com-certs.tar capsule.example.com:/root/

Running the installer

[root@capsule ~]# satellite-installer --scenario capsule \
--foreman-proxy-content-parent-fqdn      "sat6.example.com" \
--foreman-proxy-register-in-foreman      "true" \
--foreman-proxy-foreman-base-url         "https://sat6.example.com" \
--foreman-proxy-trusted-hosts            "sat6.example.com" \
--foreman-proxy-trusted-hosts            "capsule.example.com" \
--foreman-proxy-oauth-consumer-key       "the key" \
--foreman-proxy-oauth-consumer-secret    "the secret" \
--foreman-proxy-content-certs-tar        "/root/capsule.example.com-certs.tar" \
--puppet-server-foreman-url              "https://sat6.example.com" \
--foreman-proxy-content-qpid-router-hub-addr :: \
--foreman-proxy-content-qpid-router-agent-addr ::

That's it 🙂

Do not ask me how certificate renewal works, I'll let you know in three months 😉

Workaround for IPv6-only Networks

Unfortunately, the satellite-installer does not configure Apache QPID correctly: it is set up to use IPv4 only by default. That means IPv6-only hosts (including the capsule) are unable to communicate with the Satellite.

There is a workaround: add two additional listeners on the Satellite and one on the Capsule. Be aware: the Satellite installer overwrites your changes every time you run it, e.g. for upgrades or for adding new features. Create a backup of the config file.
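For example, to back up the QPID dispatch router configuration (the path is an assumption based on a default Satellite 6.4 install; back up whichever file you changed):

[root@sat6 ~]# cp -a /etc/qpid-dispatch/qdrouterd.conf /etc/qpid-dispatch/qdrouterd.conf.bak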

There are two undocumented parameters for the satellite installer: --foreman-proxy-content-qpid-router-hub-addr :: and --foreman-proxy-content-qpid-router-agent-addr ::. You can add them during the first run as well as after an initial installation.

It is the same procedure in the Satellite as well as on the Capsule.

satellite-installer --foreman-proxy-content-qpid-router-hub-addr :: --foreman-proxy-content-qpid-router-agent-addr ::

This behavior is already fixed upstream, as you can see here: https://github.com/theforeman/puppet-foreman_proxy_content/commit/89b4ea988d18f100b806e7cddc2dca623b68f084

Using Data Deduplication and Compression with VDO on RHEL 7 and 8

Storage deduplication technology has been on the market for quite some time now. Unfortunately, all of the implementations have been vendor-specific proprietary software. With VDO, there is now an open source, Linux-native solution available.

Red Hat introduced VDO (Virtual Data Optimizer) in RHEL 7.5, a storage deduplication technology that came with the acquisition of Permabit in 2017. Of course it has been open sourced since then.

In contrast to ZFS, which provides the same functionality on the file system level, VDO is an inline data reduction technology that works on the block device level; it is file system agnostic.

Use cases

There are basically two major use cases: VM Storage and Object Storage Backends.

VM Storage

The main use case is storage for virtual machines, where a lot of data is redundant, e.g. the base operating system of the VMs. This allows deduplicating the data on disk on a large scale: think of 100 VMs whose operating systems take about 5 GByte each, which will be reduced to approx. 5 GByte on disk instead of 500 GByte.

Typically, VM storage can be overcommitted by a factor of 10.

Object and block storage backends

As a backend for Ceph and GlusterFS, it is recommended not to overcommit by more than a factor of 3. The reason for the lower overcommitment is that the storage administrator usually does not know what kind of data will be stored on it.

Availability

VDO is available since RHEL 7.5 and is included in the base subscription. It is not available for Fedora (yet).

The source code is available on GitHub.

The kernel code is not yet in the upstream mainline kernel; work to get it merged is ongoing.

Typical setup

Physical disk -> VDO -> volume group -> logical volume -> file system.

The underlying block device can be a physical disk (or a partition on it), a multipath device, a LUKS device, or a software RAID device (md or LVM RAID).

Restrictions

You cannot use LVM cache, LVM snapshots, or thin provisioned logical volumes on top of VDO. Theoretically you can use LUKS on top of VDO, but it makes no sense because encrypted data does not deduplicate. Needless to say, VDO on top of a VDO device does not make any sense either. Also be aware that you cannot use partitioning or (LVM) RAID on top of VDO devices; all of that should be done in the layer below VDO.

When using a SAN, check whether your storage box already does deduplication. In that case VDO is useless for you.

Installation

It's straightforward:

[root@vdotest ~]# yum -y install vdo kmod-kvdo

Create the VDO volume

In this test case, I attached a 110 GByte disk, created a 100 GByte partition, and will overcommit it by a factor of 10.

Warning! As of writing this article, never use a whole physical disk; use a partition instead and leave some spare space on the disk to avoid data loss! (see further below)
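A possible way to create such a partition with parted, leaving about 10 GByte of the 110 GByte disk as spare space (a sketch matching this test setup; mklabel wipes the partition table, so adapt the device and sizes to your system):

[root@vdotest ~]# parted -s /dev/vdb mklabel msdos mkpart primary 1MiB 100GiB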

[root@vdotest ~]# vdo create --name=vdo1 --device=/dev/vdb1 --vdoLogicalSize=1T

Creating volume group, logical volume and file system on top of the VDO volume

[root@vdotest ~]# pvcreate /dev/mapper/vdo1
[root@vdotest ~]# vgcreate vg_vdo /dev/mapper/vdo1
[root@vdotest ~]# lvcreate -n lv_vdo vg_vdo -L 900G
[root@vdotest ~]# mkfs.xfs -K /dev/vg_vdo/lv_vdo
[root@vdotest ~]# echo "/dev/mapper/vg_vdo-lv_vdo       /mnt    xfs     defaults,x-systemd.requires=vdo.service 0 0" >> /etc/fstab
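Mount the file system; thanks to the fstab entry just added, mounting by mount point is enough:

[root@vdotest ~]# mount /mnt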

Display the whole stack

[root@vdotest ~]# lsblk /dev/vdb
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                 252:16   0  110G  0 disk 
└─vdb1              252:17   0  100G  0 part 
  └─vdo1            253:7    0    1T  0 vdo  
    └─vg_vdo-lv_vdo 253:8    0  900G  0 lvm  /mnt
[root@vdotest ~]#

Populate the disk with data

The ideal test for VDO is to put some real-life VM images on the file system on top of it. In this case I scp'ed three IPA servers and some related instances to that file system. These systems are all quite similar, so the disk space saved is tremendous. The total size of the VM images is 105 GByte.

Let's have a look:

[root@vdotest ~]# df -h /mnt
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_vdo-lv_vdo  900G  105G  800G  12% /mnt
[root@vdotest ~]# 
[root@vdotest ~]# ll -h /mnt
total 101G
-rw-------. 1 root root 21G Dec 17 09:41 ipa1.lab.delouw.ch.qcow2
-rw-------. 1 root root 21G Dec 17 09:47 ipa1.ldelouw.ch
-rw-------. 1 root root 21G Dec 17 09:54 ipa2.ldelouw.ch
-rw-r--r--. 1 root root 21G Dec 17 09:58 ipaclient-rhel6.home.delouw.ch
-rw-------. 1 root root 21G Dec 17 10:03 ipatest.delouw.ch.qcow2
[root@vdotest ~]#

Let's use the vdostats utility to display the storage actually used on disk:

[root@vdotest ~]# vdostats --si
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo1        107.4G     15.2G     92.2G  14%           89%
[root@vdotest ~]#

Performance Tuning

There are a lot of parameters that can be changed. Unfortunately, the documentation available at the moment is rudimentary, so tuning is more guesswork than facts.

  • Number of worker threads of different kind
  • Enable or Disable compression

On machines with a lot of CPUs, using more threads than the defaults can dramatically boost performance. man 8 vdo gives a glimpse of the different thread-related parameters.

Compression is quite an expensive operation. On top of that, depending on the kind of data you are storing, it may not make much sense to use compression (well, deduplication is a kind of compression as well).
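A hedged sketch of both tuning knobs; the option names are taken from man 8 vdo, so please verify them against your version before relying on them:

# note: thread settings changed via "vdo modify" take effect at the next volume start
[root@vdotest ~]# vdo modify --name=vdo1 --vdoCpuThreads=4 --vdoLogicalThreads=4
[root@vdotest ~]# vdo stop --name=vdo1 && vdo start --name=vdo1
# compression can be toggled at runtime
[root@vdotest ~]# vdo disableCompression --name=vdo1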

Pitfalls

Be aware! Every storage deduplication solution comes with a big pitfall: the logical volume on top of VDO shows free disk space while the actual space on the physical disk can be (almost) exhausted. You need to carefully monitor the actual disk usage.
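As a minimal sketch of such monitoring (the awk column matches the vdostats output shown above; the 80% threshold is an arbitrary example):

[root@vdotest ~]# vdostats --si /dev/mapper/vdo1 | awk 'NR==2 && $5+0 > 80 {print "WARNING: VDO physical usage at " $5}'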

The fill grade can change rapidly if the data to be stored contains a lot of data that cannot be deduplicated and/or compressed. A good example is virtual machine images containing a LUKS-encrypted disk. In such a case, use LUKS on the storage, not on the VM level.

Even if you just update one virtual machine, the delta to the other machine images will grow and less physical space will be available.

VDO comes with a few Nagios plugins which are very useful for alerting administrators in case the available physical disk space is filling up. They are located in /usr/share/doc/vdo/examples/nagios.

According to df -h, there are still 800 GByte available on my test system. What happens if I store my 700 GByte Satellite 6 image? The data is mostly RPMs, which are already compressed quite well. Let's see…

After a transfer of approx. 155 GByte, the physical disk got full and the file system became inaccessible. I hit the worst case that can happen: complete and unrecoverable data loss.

The df command still shows 660 GByte of free space.

[root@vdotest ~]# df -h |grep mnt
/dev/mapper/vg_vdo-lv_vdo                900G  241G  660G  27% /mnt
[root@vdotest ~]#

The vdostats command tells a different story, as expected.

[root@vdotest ~]# vdostats --si
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo1        107.4G    107.4G      0.0B 100%           59%
[root@vdotest ~]# 

When attempting to access the data, there will be an I/O error.

[root@vdotest ~]# ll -h /mnt
ls: cannot access /mnt: Input/output error
[root@vdotest ~]# 

That's bad. I mean really bad. The device is not accessible anymore.

xfs_repair does not work. Do not attempt to use the -L option! Your file system will be gone.

Recovering from a full physical disk

Let's resize the partition instead. First, unmount the file system:

[root@vdotest ~]# umount /mnt

Delete and recreate the partition using fdisk:

[root@vdotest ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): d
Selected partition 1
Partition 1 is deleted

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@vdotest ~]# partprobe
[root@vdotest ~]#
[root@vdotest ~]# vdo growPhysical -n vdo1

Run a file system check.
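For XFS as used here, that means xfs_repair; with the grown physical space it now has a chance to succeed:

[root@vdotest ~]# xfs_repair /dev/mapper/vg_vdo-lv_vdo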

Now you are able to mount the file system and your data is available again.

Documentation

Red Hat maintains nice documentation about storage administration; VDO is covered in its own chapter: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/storage_administration_guide/#vdo

Conclusion

The technology is very interesting and will kick some ass. Storage deduplication will become more and more important, and with VDO there is now a Linux-native solution for it.

At the moment it is quite dangerous to use VDO in production. Filling up the physical disk without spare space is an unrecoverable error that means complete data loss. So: always create the VDO device on top of a partition that does not use the whole disk, or on another device that can grow in size, to prevent data loss.

If you plan to use VDO in production make sure you have a proper monitoring in place that alerts quite ahead of time to be able to take corrective action.

Nevertheless: it's cool stuff and I'm sure the current situation will be fixed soon.

Using MTA-STS to enhance email transport security and privacy

Overview

SMTP is broken by design. It comes from a time when communication partners trusted each other and the NSA was intercepting facsimiles and phone calls instead of internet traffic.

To enhance privacy, in 2002 RFC 3207 added STARTTLS to the SMTP protocol. Unfortunately STARTTLS is only optional; a server is not allowed to accept encrypted connections exclusively.

The RFC states: "A publicly-referenced SMTP server MUST NOT require use of the STARTTLS extension in order to deliver mail locally."

That is really problematic because it leaves SMTP vulnerable to MITM (man-in-the-middle) attacks: the connection is first established in clear text, and only then does the STARTTLS command negotiate an encrypted session. In that time window, an attacker can force clear text communication by suppressing the STARTTLS command. DNS cache poisoning to hijack mail transfer via fake MX records is another attack vector.

What can be done to solve the problem? Well, DANE is a good solution, but it requires DNSSEC to work as expected. Unfortunately DNSSEC is not available for all domains, so "SMTP MTA Strict Transport Security" (RFC 8461) was introduced.

It works similarly to HTTP Strict Transport Security (HSTS), on a "Trust on First Use" (TOFU) basis. The policy is announced by a DNS record (which will be cached) and retrieved via HTTPS.

MTA-STS is less secure than DANE, but it is a huge step forward.

Who is using it?

There are already some large scale mail providers that make use of MTA-STS. Here are a few of them:

  • Google mail
  • mail.de
  • Yahoo
  • GMX

All of them are using the testing mode in the policy.

Announce your policy

The first step is to create the DNS records needed: a TXT record to announce the policy, and an A (or AAAA) record for the host serving the policy. For this example I use my test domain ldelouw.ch. The data starts with v=STSv1, where v1 stands for the protocol version; one is the first and only version at the moment. The id is used to identify changes in the policy; it is good practice to use a timestamp in Zulu time format.

If you are using IPA for your DNS management, it's very easy:

[root@ipa1 ~]# ipa dnsrecord-add ldelouw.ch _mta-sts --txt-rec="v=STSv1;id=$(date -u +'%Y%m%d%H%M%S')Z;"
  Record name: _mta-sts
  TXT record: v=STSv1;id=20181216095025Z
[root@ipa1 ~]#

Create a _smtp._tls record to announce where error reports are sent. This is usually the postmaster of the domain. The record is optional but recommended.

[root@ipa1 ~]# ipa dnsrecord-add ldelouw.ch _smtp._tls --txt-rec="v=TLSRPTv1;rua=mailto:postmaster@example.com"
  Record name: _smtp._tls
  TXT record: v=TLSRPTv1;rua=mailto:postmaster@example.com
[root@ipa1 ~]#

Be aware that for the A/AAAA record(s), no _ (underscore) is needed.

[root@ipa1 ~]# ipa dnsrecord-add ldelouw.ch mta-sts --aaaa-rec="2a01:4f8:141:14ce::9"
  Record name: mta-sts
  AAAA record: 2a01:4f8:141:14ce::9
[root@ipa1 ~]# 

The second step is to create a web server instance for https://mta-sts.<your-domain>:443. The X.509 certificate must be valid (makes sense, right? 😉), otherwise it will not work. Free certificates can be created using Letsencrypt.org. The name can also be a SAN (Subject Alternative Name) on an existing certificate. If you have webmail software installed on your mail server, its web server can be reused.

In the web server instance you need to create a file containing your MTA-STS policy. The file contains the protocol version (STSv1), the mode, a list of your mail exchange servers, and the maximum age for caching your policy. The mode should initially be set to testing to see if everything is working properly.

mkdir -p /var/www/html/mta-sts.ldelouw.ch/.well-known

cat > /var/www/html/mta-sts.ldelouw.ch/.well-known/mta-sts.txt << EOF
version: STSv1
mode: testing
mx: your-smtp-server.example.com
mx: secondary-smtp.example.com
max_age: 86400
EOF
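Once the record and the policy file are in place, you can verify both pieces yourself (the output matches the ldelouw.ch example above; your values will differ):

dig +short TXT _mta-sts.ldelouw.ch
"v=STSv1;id=20181216095025Z"

curl https://mta-sts.ldelouw.ch/.well-known/mta-sts.txt
version: STSv1
mode: testing
…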

Testing your Policy

There are two web based tests available.

  • https://aykevl.nl/apps/mta-sts/
    Does not work with IPv6-only setups. It caches DNS requests, which is bad when you are testing and need to correct DNS entries.
  • https://www.hardenize.com
    You need to register to be able to run multiple tests. Checks a lot of related configuration parameters as well.

Fetch policies for Postfix on your SMTP Server

Postfix itself does not support MTA-STS. You need a little helper for that: postfix-mta-sts-resolver.

The software is in an early state and lacks a bit of documentation; that will probably improve over time. The software itself works nicely, but I don't have any experience with it at large scale.

Unfortunately the stock Python version in RHEL 7 is too old, but it works with Python 3.6, which is available in the Software Collections.

yum -y install rh-python36-python-pip.noarch

Install postfix-mta-sts-resolver using PIP

/opt/rh/rh-python36/root/bin/pip install postfix-mta-sts-resolver

Create a user for the MTA-STS daemon:

useradd -c "Daemon for MTA-STS policy checks" -s /sbin/nologin mta-sts

Let's install a systemd unit file:

cat > /etc/systemd/system/postfix-mta-sts.service << EOF
[Unit]
Description=Postfix MTA STS daemon
After=syslog.target network.target 

[Service]
Type=simple
User=mta-sts
Group=mta-sts
# This is the ExecStart path for RHEL7 using python 36 from the Software collections.
# You may use a different python interpreter on other distributions
ExecStart=/opt/rh/rh-python36/root/bin/mta-sts-daemon

[Install]
WantedBy=multi-user.target
EOF

Enable the service on system startup:

systemctl enable postfix-mta-sts.service
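Start it right away as well, so the daemon is listening before Postfix is reconfigured:

systemctl start postfix-mta-sts.service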

There is some configuration needed for the MTA-STS daemon itself:

cat > /etc/postfix/mta-sts-daemon.yml << EOF
host: 127.0.0.1
port: 8461
cache:
  type: internal
  options:
    cache_size: 10000
default_zone:
  strict_testing: true
  timeout: 4
zones:
  myzone:
    strict_testing: false
    timeout: 4
EOF

I'm not sure what the configuration statement strict_testing means. Without setting it to true, it does not work when the policy is set to testing. It looks like it overrides the policy from testing to enforcing, handle with care!

Configuring Postfix

The last step is to let Postfix know about MTA-STS. This is done with the configuration statement smtp_tls_policy_maps.

smtp_tls_policy_maps = socketmap:inet:127.0.0.1:8461:postfix

Also ensure that smtp_tls_CAfile is set to your global CA cert bundle; otherwise, sending emails to TLS-enabled servers will completely fail, since STARTTLS is not opportunistic anymore!

smtp_tls_CAfile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

Nice to have: Increase the TLS log level to see if it is working:

smtp_tls_loglevel = 1
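All three settings can be applied with postconf followed by a reload; a minimal sketch:

postconf -e 'smtp_tls_policy_maps = socketmap:inet:127.0.0.1:8461:postfix'
postconf -e 'smtp_tls_CAfile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'
postconf -e 'smtp_tls_loglevel = 1'
systemctl reload postfix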

Testing your setup

mail:~# postmap -q ldelouw.ch socketmap:inet:127.0.0.1:8461:postfix

It should return something like:

secure match=mail.delouw.ch

Now send an email to a domain using MTA-STS and verify the Postfix log.

Dec 16 13:32:24 mail postfix/smtp[3583]: Verified TLS connection established to gmail-smtp-in.l.google.com[2a00:1450:400c:c0c::1a]:25: TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)

It states Verified, not just Trusted 🙂

Conclusion

MTA-STS enhances transport security without requiring DNSSEC. It is still in an early stage; the IETF released RFC 8461 only in September 2018.

The critical point is the MTA-STS lookup done by the MTA. There is not much choice of software to achieve an MTA-STS-capable MTA. I only ran tests with the most popular MTA, Postfix; solutions for others like Sendmail and Exim may or may not exist.

Another problem is the MTA-STS agent itself. It is a possible new attack vector for the bad guys; input sanitization for fetched policies is key.

At the end of the day, let's give it a try and enable our MTAs out there.

Have fun 🙂