Deploying RHEL as ESX guests – Kickstarting or using ESX templates?

Some time ago I asked myself whether it is better to kickstart systems or to work with ESX templates when deploying RHEL as ESX guests. I also discussed this with friends working in the same industry. I tried both approaches and came to the following conclusion:

Kickstarting the systems is the way to go.

Pros:

  • Kickstarted systems are already up to date after installation.
  • Proper SSH host keys. Using ESX templates results in identical SSH host keys on every guest, which is not acceptable from a security standpoint; they need to be re-created manually.
  • Kickstarting means lean deployment; much less data needs to be transferred.
  • Very fast: kickstarted systems are deployed in ~3 min instead of ~10 min (depending on I/O and network performance).
  • Systems are automatically registered with RHN or an RHN Satellite with the help of cobbler snippets.
  • Better customization.

Cons:

  • The ESX template used for kickstarting must have no disks configured, otherwise the full nominal disk size is transferred over the network.

When kickstarting virtual systems, only the data actually needed (the RPMs) is transferred. The best way is to have an “empty” ESX template with just the network defined, but no disks. The reason: ESX creates a checksum of the disk files, so even empty sparse disk files (in the case of “thin provisioning”) are transferred over the network at their full nominal size.

When using ESX templates, one needs to register the system manually after deployment and also update it manually by invoking “yum -y update”. In contrast, kickstarted systems are always up to date automatically. To work around this, one needs to keep the templates up to date, a manual task which cannot be automated easily.
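The automatic registration mentioned above can live in the kickstart file itself. A minimal sketch of a %post section, assuming a hypothetical Satellite URL and activation key (in a cobbler setup this is typically done via the shipped redhat_register snippet rather than hand-written commands):

```
%post
# Register the freshly installed guest with RHN or an RHN Satellite.
# Server URL and activation key below are placeholders.
rhnreg_ks --serverUrl=https://satellite.example.com/XMLRPC \
          --activationkey=1-examplekey
# In cobbler, the equivalent is usually pulled in with $SNIPPET('redhat_register')
```

This way every kickstarted guest comes up registered and patched without any manual step.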

Have fun!

What is possibly going into RHEL6 GA and what is not

As I have written several times before, RHEL6 is going to have a kernel based on the upstream 2.6.32 kernel. Meanwhile, Linus Torvalds and his fellows have released 2.6.34. Since then – from a system engineer's point of view – there have been some “minor” changes which affect daily work in enterprise environments.

I think Red Hat is aware that RHEL6 is one of the most important releases it has made so far. RHEL6 beta testers have acknowledged that it is one of the best Linux distributions made so far.

So let's have a look at http://bit.ly/98yNsk (a https://bugzilla.redhat.com search for RHEL6, all states selected, sorted by bug ID, with RFE (Request For Enhancement) in the summary).

Unrar
I requested adding “unrar” to RHEL; unfortunately they refused because of the strange license of unrar. This is hard to understand, because *ALL* major Linux distros such as SLES, Debian and Ubuntu provide a package for it. Red Hat considers it (rightly) a non-free license. From my point of view it does not hurt, because nobody is forced to use its libs in their own software. Unfortunately SAP distributes a lot of software components in RAR-compressed files, so this is a problem.

virtio net/vhost net speed enhancements from upstream kernel
This was reported as bug #593158 and later reappeared as #595287. Since Red Hat is keen to improve virtualization, I think this is going into GA.

DRBD
DRBD got into the upstream kernel with 2.6.33. DRBD (Distributed Replicated Block Device) is a kind of RAID-1 over TCP/IP and has been rock solid for years. From my point of view it is the best invention since sliced bread when it comes to cluster technologies. It is widely used, also on RHEL. Have a look at Florian Haas' comment about support, and further at Alan Robertson's comment. Florian, who works at Linbit (the company developing DRBD), points to support problems with the current releases on RHEL, while Alan, a veteran (German: “Urgestein”, meant in a very positive manner) of Linux clustering, would also like to see DRBD in RHEL6. Quite a lot of people are on the bug's CC list (37 people as I write this). This puts quite some pressure on Red Hat to include DRBD in RHEL6. @Red Hat: Do it! Include DRBD! If not as a “supported” product, deliver it and find a way with Linbit for the support.
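To illustrate the “RAID-1 over TCP/IP” idea: a minimal DRBD resource definition mirrors one local disk per node over a dedicated replication link. This is an illustrative sketch only; node names, devices and addresses are placeholders:

```
# /etc/drbd.conf -- illustrative sketch, all names and addresses are placeholders
resource r0 {
  protocol C;                  # synchronous replication: writes confirmed by both nodes
  on node1 {
    device    /dev/drbd0;      # replicated block device presented to the system
    disk      /dev/sdb1;       # local backing disk
    address   10.0.0.1:7788;   # replication link endpoint
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

File systems (or cluster services) then sit on /dev/drbd0 instead of the raw disk, and every write is mirrored to the peer node.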

Getting rid of the crappy VMware-tools
For people urged to use VMware's ESX stuff as virtualization technology, there is another important change: in the 2.6.34 upstream kernel, Linus Torvalds accepted VMware's ballooning driver (vmmemctl). In 2.6.33 he accepted VMware's vmxnet3 and pvscsi drivers, which have already been backported to Red Hat's 2.6.32-EL kernel. So backporting vmmemctl as well is *THE* chance to get rid of those crappy VMware Tools. For companies relying on ESX this would be a *VERY* important feature. I have made a service request (SR 2021028) with Red Hat and will file an RFE bug at Bugzilla ASAP. Please vote for it!

Other stuff
There are other RFEs pending. Most of them are not really important for enterprise computing (from my point of view). Mostly these RFEs are about virtualization and bound to libvirt. Most of them seem trivial and are in state “ON_QA”, which means they are most probably included in RHEL6.

What is your favorite RFE bug? Please let me know…

Have fun!

Luc

IUS Community RPMs for Red Hat's RHEL

Quite soon after its release I criticized that the software in RHEL is too outdated for web servers, see my blog post http://blog.delouw.ch/2010/05/02/rhel6-as-a-web-server/. While this is true for a system fully supported by Red Hat, I learned about an alternative from a comment on that post: the so-called IUS community repository.

About the IUS Community Project
The project was launched in September 2009. Despite being a young project, it has a history: at Rackspace, a large hosting company operating thousands of production (web) servers, it had been an internal project since 2006. They decided to build a community around it, like Fedora is for RHEL. Quote: “IUS is The Fedora of Rackspace RPMS”.

Support
As with other community repositories out there, you cannot expect “official” support, neither from Red Hat nor from IUS or Rackspace. Of course there are the usual community support channels such as forums, IRC, a bug tracker etc.

The difference to other repositories
While most community repositories such as EPEL, rpmforge etc. focus on providing missing software, IUS focuses on providing upgrades for web-server-related software which is included in RHEL. This includes PHP, Python, MySQL and others.

Package conflicts with the stock distribution
One may think that replacing stock software with newer versions is tricky and creates conflicts. There is one way to find out: let's give it a try…

The test
The server is a basic install of CentOS 5.5, released yesterday. The following installation turns this machine into a lightweight LAMP server:

yum install httpd php-mysql php php-cli php-common php-pgsql php-dba php-pdo php-gd mysql-server perl-DBD-MySQL

Now we have the situation as it exists in many companies: an outdated web server. We want to upgrade PHP to 5.3.x. Let's see what happens.


[root@centos5 ~]# rpm -i http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
warning: /var/tmp/rpm-xfer.o6JH6k: Header V3 DSA signature: NOKEY, key ID 9cd4953f
[root@centos5 ~]# rpm -i http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
warning: /var/tmp/rpm-xfer.MRnuo8: Header V3 DSA signature: NOKEY, key ID 9cd4953f
package epel-release-5-3.noarch (which is newer than epel-release-1-1.ius.el5.noarch) is already installed
[root@centos5 ~]#

Hmm… no GPG key…
The second output confuses me. Is the package just a clone of epel-release-5-3.noarch? Let's move on and see if it works.

“yum clean all && yum check-update” did not show any pending updates, so far so good. Now let's try to upgrade PHP.


[root@centos5 ~]# yum install php53
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* addons: mirror.netcologne.de
* base: mirror.netcologne.de
* epel: mirror.andreas-mueller.com
* extras: mirror.netcologne.de
* ius: ftp.astral.ro
* updates: mirror.netcologne.de
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package php53.x86_64 0:5.3.2-3.ius.el5 set to be updated
--> Processing Dependency: php53-common = 5.3.2-3.ius.el5 for package: php53
--> Processing Dependency: php53-cli = 5.3.2-3.ius.el5 for package: php53
--> Processing Dependency: php53-pear >= 1:1.8 for package: php53

[omitted output]

--> Processing Conflict: php53 conflicts php < 5.3
--> Finished Dependency Resolution
php53-5.3.2-3.ius.el5.x86_64 from ius has depsolving problems
--> php53 conflicts with php
Error: php53 conflicts with php
You could try using --skip-broken to work around the problem
You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest
The program package-cleanup is found in the yum-utils package.

Correct behaviour, since it is a replacement package. After removing only php, yum complained about further conflicts, so all PHP-related packages installed for this test had to be removed first. Afterwards the dependencies were resolved properly. The installation of related stock distribution packages such as “php-pgsql” was also successfully prevented.

Conclusion
The IUS community repositories work as expected. With such a basic test I cannot promise that there are no hidden conflicts between stock RHEL/CentOS packages and those from IUS. Long-term experience will bring more clarity. I think it is sane to do some real-life tests with servers that are in an early project phase.

Further readings:
http://iuscommunity.org/
http://wiki.iuscommunity.org/
http://saferepo.iuscommunity.org/specification/

Have fun!