OpenStack - Centos Lab3 : Nova – Kilo

 

In this Lab we will deploy the OpenStack Compute Service, aka Nova.

Nova is the cloud compute controller, the core of any IaaS system. Nova interacts with Keystone for authentication, Glance for images, Neutron for network services (though it still has its own embedded networking option as well), and Horizon as a user and administrative graphical (web-based) interface. Nova can manage a number of different underlying compute, storage, and network services, and is in the process of adding the ability to manage physical, non-virtualized compute components as well!

In this lab, we’ll focus on deploying the compute control components (API servers, etc.) as well as a compute agent that will run on the same server (All-In-One mode). In a later lab, we will add a separate second compute node to highlight how additional services are added and how cloud capacity can be scaled.

Compute Service Installation

Step 1: As with the previous labs, you will need to SSH into the aio node.

If you have logged out, SSH into your AIO node:

ssh centos@aio151

If prompted, the user password (like the sudo password) is centos. Then become root:

sudo su -

Then we’ll source the OpenStack administrative user credentials. As you’ll recall from the previous lab, this sets a series of environment variables (OS_USERNAME, etc.) that are picked up by the command line tools (like the keystone and glance tools we’ll be using in this lab) so that we don’t have to pass the equivalent --os-username style options with each command we run:

source ~/openrc.sh
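
For reference, an openrc.sh like the one built in the earlier lab would look roughly like the following. The exact contents depend on how you created yours; the values below simply follow this lab’s conventions (admin user, password pass, Keystone on aio151):

export OS_USERNAME=admin
export OS_PASSWORD=pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://aio151:35357/v2.0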

Install Compute Controller Service packages

Step 2: You will now install a number of nova packages that will provide the Compute services on the aio node:

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y

You have just installed:

  • openstack-nova-api: Accepts and responds to end-user compute API calls.
  • openstack-nova-cert: Manages x509 certificates.
  • openstack-nova-conductor: Acts as an intermediary between compute nodes and the nova database.
  • openstack-nova-console: Authorizes tokens for users that console proxies provide.
  • openstack-nova-novncproxy: Provides a proxy for accessing running instances through a VNC connection in a web browser.
  • openstack-nova-scheduler: Determines how to dispatch compute and volume requests.
  • python-novaclient: Client library for the OpenStack Compute API.

Install Compute Node packages

Step 3: While the previous step installed the service components, we also want to configure a local compute agent to manage our local KVM hypervisor. We’ll also install the sysfsutils package, which provides the local tools required for managing virtual disk connectivity.

yum install openstack-nova-compute sysfsutils -y

As with our previous steps, we’ll create the database in which nova stores its state, and configure the nova user’s access credentials (again, the super-secret password: pass):

Create Database for Compute Service

Step 4: Create the nova database for OpenStack Nova by logging into MariaDB (the root password is pass):

mysql -uroot -ppass
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'pass';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'pass';
exit
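
As a quick optional sanity check (not part of the original lab steps), you can confirm the grants took effect by connecting as the nova user and listing databases; nova should appear in the output:

mysql -unova -ppass -e "SHOW DATABASES;"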

Step 5: Create a nova service user in Keystone

We need to create the user that Nova uses to authenticate with the Identity Service. As with Glance, we’ll add the nova user to the service tenant and give the user the admin role (the email address below is just a placeholder; substitute one of your own):

openstack user create nova --password pass --email nova@example.com

Associate the user with the tenant and role:

openstack role add --project service --user nova admin
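
If you want to double-check the result (an optional verification), the openstack client can show the new user and its role assignment:

openstack user show nova
openstack role list --project service --user nova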

While we’re at it, we also need to configure the service and endpoint catalog entries in Keystone.

The service endpoint is just like the one we created for Glance, but now we’re using the well-known name of nova and the type tag of compute:

openstack service create --name nova --description "Compute service" compute
openstack endpoint create --publicurl http://aio151:8774/v2/%\(tenant_id\)s --internalurl http://aio151:8774/v2/%\(tenant_id\)s --adminurl http://aio151:8774/v2/%\(tenant_id\)s --region RegionOne compute

Example output:

+--------------+-------------------------------------+
| Field        | Value                               |
+--------------+-------------------------------------+
| adminurl     | http://aio151:8774/v2/%(tenant_id)s |
| id           | a741f82c58ac475d8519cf8e9431ec0c    |
| internalurl  | http://aio151:8774/v2/%(tenant_id)s |
| publicurl    | http://aio151:8774/v2/%(tenant_id)s |
| region       | RegionOne                           |
| service_id   | c6f1f6c038f648448e560b6cb5075556    |
| service_name | nova                                |
| service_type | compute                             |
+--------------+-------------------------------------+
Note: This endpoint is a little more complicated than the glance endpoint, which was effectively just a hostname and a port. In this case a tenant ID must also be mapped into the path, or the API will not function properly, so we’ve passed a substitution pattern that client applications (like the default python CLI tools) can use to properly format their API requests.
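
To make the substitution concrete: if your tenant ID were, say, the made-up value d41d8cd98f00b204 (yours will differ), a client listing servers would expand the publicurl into a request like:

GET http://aio151:8774/v2/d41d8cd98f00b204/servers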

Configure Compute Service

Step 6: Configure the common compute services’ connections to the internal components of Nova (RabbitMQ), the database, and Keystone, along with a few less common settings.

As with glance, we configure RabbitMQ connectivity to allow the nova processes to leverage the message queue for communication. We’ll also configure the database connection for those services that talk directly to the database (principally the API service, Scheduler, and the Compute Conductor). We’ll also establish a connection to Keystone so that Nova can authenticate itself for communications with other services (e.g. talking to Glance), or to accept and validate client communications (nova CLI authenticating with Nova via Keystone).

We’ll also need to configure the VNC server (Keyboard Video Mouse via web browser for “console” access to our virtual machines) and a connection to Glance (we’ll need to be able to fetch the images we’re going to store in Glance for our virtual machines).

In this case we’ll edit the nova.conf file using the openstack-config tool rather than editing the file directly. This reduces the likelihood that we place a value in the wrong location (e.g. under the wrong [heading]). We’ll operate on the /etc/nova/nova.conf file for the service configuration(s), and then modify the same configuration file for the compute service configuration.

First we’ll establish the required communications parameters for RabbitMQ. These parameters go in the [DEFAULT] section.

Note: You do not include the [] around the section name on the openstack-config command line, which is convenient, as you would otherwise have to “escape” them to keep the Linux shell from interpreting them!

The format is:

openstack-config --set {config_file} {section} {parameter} {value}

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host aio151
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password pass
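
After these three commands, the relevant portion of /etc/nova/nova.conf should read as follows (you can confirm with a text editor, or with openstack-config --get in place of --set):

[DEFAULT]
rpc_backend = rabbit
rabbit_host = aio151
rabbit_password = pass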

Next we’ll configure the database communications, again using the openstack-config tool.

openstack-config --set /etc/nova/nova.conf database connection 'mysql://nova:pass@aio151/nova'

As should be obvious, this is a much more efficient method than manually editing the files, and does reduce the likelihood of “placement” errors. It’s still important to get the actual parameters right as well!

We’ll carry on with the Keystone config. Much like in Glance, we tell Nova “where” Keystone lives, but in this case we differentiate between the authorization and identity endpoints: one is a validation endpoint (“I’d like a token for myself, please”) and the other is used to verify client tokens (“is this client/token valid?”).

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri 'http://aio151:5000/v2.0'
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri 'http://aio151:35357'
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password pass
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
Note: That last parameter actually lives in the [DEFAULT] section, but we’ve included it here as it’s part of enabling Keystone, telling the system to use Keystone (rather than a local file) for its authentication needs. This is one of the benefits of the openstack-config tool, as this is exactly the sort of parameter that might easily get added to the wrong section of the nova.conf file!
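
Once applied, the new settings render in /etc/nova/nova.conf as follows (the [DEFAULT] section also still carries the RabbitMQ settings from earlier):

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://aio151:5000/v2.0
identity_uri = http://aio151:35357
admin_tenant_name = service
admin_user = nova
admin_password = pass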

Next we’ll provide the configuration for the VNC proxy process, which provides a web based ‘Keyboard Video Mouse’ interface for interacting with the console of our virtual compute devices.

Note: The my_ip parameter really does want an IP address, not a host name.

openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip '10.1.64.151'
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen '10.1.64.151'
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address '10.1.64.151'
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url 'http://10.1.64.151:6080/vnc_auto.html'
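
Once the services are started in Step 8, you can sanity-check that the proxy answers on that URL (an optional check; an HTTP 200 means the page is being served):

curl -s -o /dev/null -w "%{http_code}\n" http://10.1.64.151:6080/vnc_auto.html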

Next we’ll add the pointer to Glance so that Nova can interoperate with the Image service.

openstack-config --set /etc/nova/nova.conf glance host 'aio151'

Modify the Hypervisor configuration for Nova-Compute

Step 7: Determine the hypervisor type

You must determine whether your system’s processor and/or hypervisor support hardware acceleration for virtual machines, as this determines whether we can use the KVM virtualization engine or must instead leverage the QEMU emulator. The interfaces and management of these two systems are now identical, but there are backend differences, and it is in order to address those differences that we need to determine _what_ the right configuration is.

Run the following command to determine if KVM will function on your machine:

egrep -c '(vmx|svm)' /proc/cpuinfo
Note: If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration, as the OpenStack default is hardware-accelerated KVM. For completeness, we would configure virt_type=kvm in the [libvirt] section of /etc/nova/nova.conf.

As our systems are already virtualized, we will get a value of zero, and so we must configure libvirt to use qemu instead of kvm in the [libvirt] section of /etc/nova/nova.conf. Again with the openstack-config client:

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
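
If you were scripting this step, a small sketch combining the egrep test above with the two possible settings might look like:

if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -ge 1 ]; then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
else
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
fi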

That should complete the “edits” we need to make to the configuration file. We have a few more tasks to complete, now that the nova tools can find the right connection parameters for communications.

Step 8: Populate the database tables for the nova database.

We’ll use the same model we used with glance, and leverage the nova-manage tool to migrate the database from nothing to “current” state.

su -s /bin/sh -c "nova-manage db sync" nova
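
To confirm the sync populated the schema (an optional check), list a few of the tables the migration created:

mysql -unova -ppass nova -e "SHOW TABLES;" | head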

Then we’ll enable and start (or restart) the services that we’ve configured thus far.

systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

And we also need to start the Nova compute services so that we can eventually turn on a VM!

sudo systemctl enable libvirtd.service openstack-nova-compute.service
sudo systemctl start libvirtd.service openstack-nova-compute.service
sudo systemctl status libvirtd.service openstack-nova-compute.service
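
If anything reports as failed, the compute log is the first place to look; you can also ask libvirt directly for its version as a quick health check (virsh ships with libvirt):

virsh version
tail -n 20 /var/log/nova/nova-compute.log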

Step 9: Verify Nova Operations:

Unfortunately, even though we have Nova properly configured, we can’t yet turn on a VM. This is because we have no network service yet, and we’ve not enabled the Nova Network model services at this point. In the next lab we’ll enable Neutron so that we finally have network functionality, and will then be able to actually _use_ this OpenStack environment. Until then, we can at least ensure that the OpenStack Compute service is healthy and ready to start serving us as soon as the network comes online.

First, we can see whether the services that make up Nova (api, scheduler, conductor, consoleauth, cert, and at least our first compute node) have checked in with the API service. This will tell us whether our inter-process messaging (RabbitMQ), database (MariaDB), and Keystone connections are functional:

nova service-list

Example output:

+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | aio151 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 2  | nova-consoleauth | aio151 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 3  | nova-scheduler   | aio151 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 4  | nova-cert        | aio151 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 5  | nova-compute     | aio151 | nova     | enabled | up    | 2015-04-27T12:54:19.000000 | -               |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+

We also previously configured a connection to Glance, so we should be able to ask Nova to query Glance for the available images:

nova image-list

Example output:

+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| ff10d15d-d75d-4bda-b9bc-342213a95b03 | CirrOS 0.3.2        | ACTIVE |        |
| 8f90a562-e995-4f86-a7c1-b76a901f12b5 | cirros_0.3.2_direct | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
You have successfully installed the OpenStack Compute Service and verified that its internal state is functional. Even though the compute controller is now enabled, we still need to install networking before actually using Nova, so that the VMs we create can be accessed.

If time permits, review the lab to get a reminder of what you have accomplished.

In the next Lab, we’ll install the Neutron controller, and connect Nova and Neutron together so that we can spin up a VM!