RHEL compute nodes in Mirantis OpenStack

One of the “killer features” of OpenStack is the ability to combine multiple flavors of resources under a single Control Plane. For example, the same set of OpenStack controllers can easily manage multiple types of hypervisors, such as KVM, vSphere, and XenServer, enabling cloud users to provision a workload on the Compute resources best suited for it.

Until recently, the discussion about “heterogeneous clouds” mostly covered clouds that combined KVM hypervisors with vSphere clusters. This configuration enables the cloud owner to solve a number of challenges, including support for “non-cloud-ready workloads” that require High Availability features from the infrastructure. Mirantis OpenStack has supported these types of configurations since version 7.0.

We recently found out about one more challenge that cloud owners face – support for RHEL-certified workloads. It’s quite common for Enterprise environments to use RHEL servers with the KVM hypervisor to host virtualized workloads, and many application vendors actually certify the RHEL flavor of KVM as the “supported configuration” for running their product in VMs. Simply put, this means that if you run the app on KVM anywhere but RHEL, it’s “not a supported configuration”: you get no support from the vendor, you break compliance requirements, and so on.

That got us thinking: how can we solve this challenge for Mirantis OpenStack users? We run Controller nodes on Ubuntu, deploy Compute nodes with Ubuntu, and allow you to easily combine these Ubuntu Compute nodes with vSphere and Kubernetes – but can we add RHEL Compute nodes to the mix?

It turns out that we can, and in this blog post we’ll explain how this works in Mirantis OpenStack 8.0.

Combining RHEL and Ubuntu KVM under a single Control plane

From the 10,000-foot view, the RHEL Compute v1 feature allows cloud owners to:

  1. Deploy a MOS cloud with Fuel, having all Controller and Ceph Storage nodes running on an Ubuntu Host OS, then
  2. Integrate RHEL nodes to be managed by MOS Controllers as Compute nodes. (Note that the RHEL nodes must be pre-provisioned: the basic installation of the host OS must be accomplished by other tooling prior to starting integration into the MOS cloud.)

In the v1 release of the RHEL Compute feature we did a number of things to enable this approach. We:

  1. Packaged Mirantis OpenStack in RPMs for the Compute role, enabling easy deployment on RHEL. (This applies only to OpenStack components and dependencies that are absent from the RHEL repos; the RHEL native baseline packages, such as libvirt and kvm, are still used.)
  2. Modified the Puppet manifests shipped with fuel-library (which are based on the upstream Puppet-OpenStack project) to support installation of Mirantis OpenStack packages on RHEL systems.
  3. Developed a Solution Guide for integrating RHEL into a MOS cloud using the packages and Puppet scripts from the previous two steps.

The Solution Guide provides a complete step-by-step guide to this procedure, but in this blog post we’ll do a high level overview.

7 steps towards RHEL Compute in your MOS/Ubuntu-based cloud

So, we have a MOS cloud running, with some Controllers, some Ceph storage nodes, maybe even some Ubuntu-based Compute nodes. Let’s now add some pre-provisioned RHEL nodes to it!

We’ll go through a number of steps to achieve this:

  • Step 0: Check the Limitations and supported configurations page to make sure that your MOS cloud conforms to the required parameters for RHEL integration
  • Step 1: Get yourself some RHEL servers. These have to be pre-provisioned with RHEL by your tooling of choice, such as Foreman
  • Step 2: Validate the configuration of RHEL servers. Make sure you have enough disk space, the right partitioning schema, NICs, access to repos, and so on
  • Step 3: Prepare RHEL servers for MOS deployment
  • Step 4: Configure SELinux on the RHEL nodes
  • Step 5: Deploy the MOS components on the RHEL nodes
  • Step 6: Validate the deployed RHEL Compute node(s)
  • Step 7: Segregate RHEL nodes into an Availability Zone or Host Aggregate for easier VM scheduling

Let’s take a closer look at these steps.

Validate the configuration of the RHEL servers

Here we make sure that the RHEL servers have a configuration suitable for running the Compute service. Check the requirements below:

  • Disk partition:
    • If you have separate partitions:
      • The root file system (/) must have at least 10 GB of disk space.
      • /var/log must have at least 10 GB of disk space.
      • /var/lib/nova uses the rest of the disk space; allocate at least 30 GB of free space to it.
    • If you have a single partition, assign at least 50 GB.
  • Network: The configuration of the networking equipment connected to the RHEL compute nodes must match the networking configuration in the Fuel environment. A connection to the Fuel Admin (PXE) network is not required.
  • Domain name resolution: The RHEL nodes must be able to resolve domain names.
  • RHEL subscription: The RHEL compute nodes must have a valid RHEL subscription.
  • Access to the Mirantis OpenStack repository: The RHEL compute nodes must have access to the Mirantis OpenStack repository over the Internet, or to a local repository mirror available on the company’s internal network.

For each of these requirements there’s a CLI command to perform the check – please see the Solution Guide for full details.
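For illustration, the spirit of these checks can be captured in a few shell commands. This is a sketch only: the mirror hostname is a placeholder, and the Solution Guide has the authoritative commands.

    df -h / /var/log /var/lib/nova     # enough disk space on each partition?
    getent hosts mirror.example.com    # does domain name resolution work?
    subscription-manager status        # is the RHEL subscription valid?
    yum repolist enabled               # is the MOS repository (or local mirror) visible?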

Prepare the RHEL servers for MOS deployment

Once you’ve deployed Red Hat Enterprise Linux itself to your servers, there are a few additional steps you need to take to ensure that the server is ready to join your MOS deployment. You can find complete instructions in the Solution Guide, but the basic sequence of steps is:

  1. Enable root or sudo access with public SSH keys
  2. Configure the mos-8.0 repository to access the MOS Packages for RHEL and import the repository key
  3. Enable extra RHEL mirrors for the OpenStack dependencies
  4. Generate public SSH keys for VM migration (yes, VM migration will work too)
  5. Copy the Ceph SSH keys to the RHEL compute node (if you are using Ceph)
  6. Install Puppet 3.x and Ruby 2.1.x
  7. Verify that the needed KVM modules are enabled
  8. Install fuel-library8.0 from the Mirantis repo. (That’s where all the Puppet magic will come from.) You can do this with the command:
       yum install fuel-library8.0 -y

The Solution Guide includes detailed command listings for every step.
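To make steps 2 and 4 concrete, here is a minimal sketch; the repository baseurl and GPG key URL are placeholders, so substitute the real values from the Solution Guide:

    # Step 2 (sketch): define the MOS 8.0 repository; URLs below are placeholders
    cat > /etc/yum.repos.d/mos8.0.repo <<'EOF'
    [mos8.0]
    name=MOS 8.0 packages for RHEL
    baseurl=http://mirror.example.com/mos-repos/8.0/rhel/x86_64/
    gpgcheck=1
    gpgkey=http://mirror.example.com/mos-repos/8.0/RPM-GPG-KEY-mos8.0
    enabled=1
    EOF

    # Step 4 (sketch): generate the key pair used for VM migration between computes
    ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa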

Next we need to properly configure SELinux.

Configure SELinux on RHEL nodes

Security-Enhanced Linux (SELinux) is a Linux kernel security mechanism that enables mandatory access control (MAC). Since SELinux is enabled in “enforcing” mode by default in Red Hat Enterprise Linux, we need to configure SELinux so it does not block OpenStack services. Otherwise, they simply won’t work properly.

We can configure SELinux in one of the following modes:

  • Custom permissive – This mode is preferred for all OpenStack environments. SELinux manages system security by allowing permitted operations and denying insecure ones. You must configure SELinux so that the operations of the OpenStack services are allowed.
  • Permissive – SELinux is enabled and allows all operations. Operations that SELinux would typically deny in enforcing mode are logged in the SELinux log file located in /var/log/avc.log.
  • Disabled – SELinux is switched off in the kernel and all operations are permitted.

Check the Solution Guide for detailed instructions on how to configure SELinux.
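As a minimal sketch of the plain permissive option (the custom permissive policy is more involved and is covered in the Solution Guide):

    getenforce       # show the current mode; stock RHEL reports Enforcing
    setenforce 0     # switch to permissive immediately, until the next reboot
    # persist the permissive mode across reboots
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config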

Deploy MOS on RHEL nodes

Now that you’ve prepared your RHEL nodes, it’s time to actually deploy OpenStack on them. This process involves several steps.

Configure astute.yaml

To turn your RHEL server into a proper Compute node, we’ll first need to edit astute.yaml. The Puppet manifests shipped with fuel-library fetch cloud configuration parameters from this file by default, so edit astute.yaml to address the requirements of your configuration and place it in the /etc/ directory before applying the Puppet manifests. If you’re deploying additional components, such as Sahara, Murano, or Ceilometer, you must also update the astute.yaml network section with the corresponding network roles and parameters.

Check the Solution Guide for a full listing of astute.yaml options, along with ready-to-use YAML examples for different cloud configurations.
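As a small convenience before running Puppet, you can sanity-check the file placement and syntax; this is our own suggestion rather than a step from the guide, and it assumes python with PyYAML is available on the node:

    cp astute.yaml /etc/astute.yaml    # the Puppet manifests read it from /etc/
    # quick syntax check: fails loudly if the YAML is malformed
    python -c "import yaml; yaml.safe_load(open('/etc/astute.yaml'))" \
      && echo "astute.yaml parses cleanly"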

Apply Puppet manifests on RHEL node

Deploying a RHEL compute node now boils down to applying Puppet manifests on it in a specific order. In this example, all Puppet manifests are located in /etc/puppet/modules/.

The following manifests must be applied on each RHEL compute node, in the order listed:

  • hiera.pp: Configures the hiera packages and their dependencies.
    Path: /etc/puppet/modules/osnailyfacter/modular/hiera/hiera.pp
  • globals.pp: Optimizes the hiera configuration file structure for Puppet.
    Path: /etc/puppet/modules/osnailyfacter/modular/globals/globals.pp
  • firewall.pp: Configures the firewall to accept connections from the OpenStack components.
    Path: /etc/puppet/modules/osnailyfacter/modular/firewall/firewall.pp
  • tools.pp: Adds the following tools for debugging and deployment: man, atop, tmux, screen, tcpdump, strace.
    Path: /etc/puppet/modules/osnailyfacter/modular/tools/tools.pp
  • netconfig.pp: Configures network interfaces and bridges according to the settings specified in the network_metadata (incoming data), network_scheme, and transformations sections of the astute.yaml file.
    Path: /etc/puppet/modules/osnailyfacter/modular/netconfig/netconfig.pp
  • compute.pp: Installs the required packages for nova-compute and configures the libvirt and nova-compute services.
    Path: /etc/puppet/modules/osnailyfacter/modular/roles/compute.pp
  • common-config.pp: Installs and configures the required packages for Neutron.
    Path: /etc/puppet/modules/osnailyfacter/modular/openstack-network/common-config.pp
  • ml2.pp: Configures the Neutron ML2 plugin and service.
    Path: /etc/puppet/modules/osnailyfacter/modular/openstack-network/plugins/ml2.pp
  • l3.pp (optional): If you use Neutron DVR, configures the Neutron L3 agent on the RHEL compute node. Otherwise, do not apply this manifest.
    Path: /etc/puppet/modules/osnailyfacter/modular/openstack-network/agents/l3.pp
  • metadata.pp (optional): If you enabled Neutron DVR, configures the Neutron metadata agent on the RHEL compute node. Otherwise, do not apply this manifest.
    Path: /etc/puppet/modules/osnailyfacter/modular/openstack-network/agents/metadata.pp
  • compute-nova.pp: Applies common configuration for Nova and Neutron, and starts the nova-compute service.
    Path: /etc/puppet/modules/osnailyfacter/modular/openstack-network/compute-nova.pp
  • enable_compute.pp: Configures the nova-compute service to start on boot.
    Path: /etc/puppet/modules/osnailyfacter/modular/astute/enable_compute.pp
  • ceilometer/compute.pp (optional): If you deploy Ceilometer, applies the Ceilometer configuration to the RHEL compute node. Otherwise, do not apply this manifest.
    Path: /etc/puppet/modules/osnailyfacter/modular/ceilometer/compute.pp
  • ceph/ceph_compute.pp (optional): If you deploy Ceph, applies the Ceph configuration to the RHEL compute node. Otherwise, do not apply this manifest.
    Path: /etc/puppet/modules/osnailyfacter/modular/ceph/ceph_compute.pp

See the Solution Guide for the specific CLI listings to help apply the manifests and configure OpenStack services on the RHEL node.
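As a rough illustration of what applying them looks like, here is a sketch that assumes a plain puppet apply per manifest is sufficient (the guide’s exact invocations may differ, and the optional manifests are omitted):

    # apply the manifests in the documented order; stop at the first failure
    MOD=/etc/puppet/modules/osnailyfacter/modular
    for m in hiera/hiera.pp globals/globals.pp firewall/firewall.pp tools/tools.pp \
             netconfig/netconfig.pp roles/compute.pp \
             openstack-network/common-config.pp openstack-network/plugins/ml2.pp \
             openstack-network/compute-nova.pp astute/enable_compute.pp; do
      puppet apply --modulepath=/etc/puppet/modules "$MOD/$m" || break
    done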

Validate deployed RHEL Compute node(s)

Now you’re done with deployment, so it’s time to actually see if the newly introduced RHEL nodes will work. Since Fuel is not managing the RHEL Compute nodes, it’s not possible to use the usual Health Check. Luckily, we’ve published some CLI examples in the Solution Guide that will help you check your newly introduced RHEL nodes using the scenarios below:

  • Verify the state of OpenStack services:
    • The RHEL compute node is up and enabled in the nova hypervisor list on the controller node.
    • The nova-compute service on the RHEL compute node is up and enabled in the nova services list.
    • The status of the OVS agent on the RHEL compute node is alive in the list of neutron agents.
  • Verify the launch of a virtual machine instance on the RHEL compute node. After the launch, the output of the nova list command returns the following properties for the created instance:
    • The Status column has the value ACTIVE.
    • The Power State column has the value Running.
    In addition, the output of the nova show command returns the OS-EXT-SRV-ATTR:host and OS-EXT-SRV-ATTR:hypervisor_hostname properties pointing to the RHEL compute node.
  • Verify network connectivity of a virtual machine instance without an assigned floating IP address. You must be able to:
    • Ping the internal IP address of the instance.
    • Establish a TCP connection between the controller node and the internal IP address of the instance.
  • Verify network connectivity of an instance with an assigned floating IP address. You must be able to:
    • Ping the floating IP address of the instance.
    • Establish a TCP connection between the controller node and the floating IP address of the instance.
    • Access the Internet from the instance.
  • Verify that an instance can access metadata. The instance must be able to receive metadata, such as a public key.
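As an illustration, the service-state checks map to CLI commands along these lines, run from the controller node (the instance name, image, and network ID are hypothetical):

    nova hypervisor-list                      # RHEL node should be up and enabled
    nova service-list --binary nova-compute   # nova-compute on the RHEL node: up, enabled
    neutron agent-list                        # OVS agent on the RHEL node: alive
    # boot a test instance and confirm it landed on the RHEL host
    nova boot --flavor m1.small --image TestVM --nic net-id=<internal-net-id> rhel-test
    nova show rhel-test | grep OS-EXT-SRV-ATTR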

Segregate RHEL Compute node(s) for easier workload scheduling

Chances are, you added RHEL nodes to your MOS cloud so you can expose them to cloud users in a way that will simplify scheduling RHEL-specific workloads on them. Host Aggregates and Availability Zones are two ways that OpenStack makes it possible to guide the placement of workloads, and the Solution Guide can give you some examples of how to put them to use.
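For instance, grouping the RHEL nodes into their own availability zone might look like the following sketch (the aggregate, zone, host, and image names are hypothetical):

    # create a host aggregate that doubles as an availability zone
    nova aggregate-create rhel-hosts rhel-az
    nova aggregate-add-host rhel-hosts rhel-compute-01
    # cloud users can now target the RHEL nodes explicitly
    nova boot --availability-zone rhel-az --flavor m1.small --image rhel7 my-rhel-vm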

What’s next?

So now that RHEL Compute v1 allows Mirantis OpenStack to manage multiple flavors of the KVM hypervisor (one from Ubuntu and one from RHEL), we’re thinking about evolving this Pluggable Compute Host OS idea in a number of directions, such as:

  • Enabling an automated flow for integrating pre-provisioned non-Ubuntu Compute nodes into MOS clouds via Fuel
  • Introducing support for additional Compute Host OS options, such as Oracle Linux, CentOS, and openSUSE

If we’ve piqued your interest, please feel free to check out the full documentation.
