OpenStack - Centos Lab10 : Telemetry – Kilo


 

In this lab you will set up the core OpenStack Telemetry (Ceilometer) module on your AIO node and then enable a set of Ceilometer agents on both your control and compute nodes so that we can capture some initial metrics from both nodes. While Ceilometer was initially intended to support metering as a precursor input to a billing system, it has also been leveraged to provide a level of monitoring and even analytics for event-triggering capabilities (as with Heat). We will enable basic data capture in this configuration and collect a simple metric.

Telemetry Module Installation on AIO Node

Step 1: Log into the AIO node and install the core Ceilometer components

If you have not already, SSH to your AIO node and run:

Copy
ssh centos@aio151
sudo su -
source ~/openrc.sh

Step 2: Install the core Ceilometer components:

Copy
yum install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient -y

Now you’ve installed:

  • ceilometer-api – service to query and view data recorded by collector
  • ceilometer-collector – daemon designed to gather and record event and metering data created by notification and polling agents.
  • ceilometer-notification – daemon designed to listen to notifications on message queue and convert them to Events and Samples.
  • ceilometer-central – polls the public REST APIs of other OpenStack services such as nova and glance, in order to keep tabs on resource existence.
  • ceilometer-alarm – daemons to evaluate and notify based on defined alarming rules.
  • python-ceilometerclient – the python CLI and SDK components for Ceilometer

We will want to gather data on VMs running on our compute nodes using the ceilometer-compute agent. This agent polls the local libvirt daemon to gather performance data for the instances on a given node and emits this information as AMQP notifications. Since you have installed nova-compute on aio151 and are using this node to provide compute services, you will also want to install the ceilometer-compute agent and related python packages to enable compute process monitoring on that node:

Copy
yum install openstack-ceilometer-compute -y

Create Database for Telemetry Service

Step 3: The Telemetry service uses a database to store information. Unlike the other services installed so far, Ceilometer uses MongoDB by default. Mongo is a “NoSQL” database that provides much greater capacity and performance than MySQL/MariaDB for this class of use (data collection) and was chosen because of the massive amount of data that could potentially be gathered in even a moderately sized OpenStack deployment. Install the database and specify its location in the configuration file.

In our lab we will deploy MongoDB on the AIO node:

Copy
yum install mongodb-server mongodb -y

This installs the server code and the client tools and libraries.

Edit the MongoDB configuration file /etc/mongod.conf to create smaller default files (useful in this environment where the system automatically creates databases per service), and to bind the database server to the management/public IP address of our AIO node:

Copy
openstack-config --set /etc/mongod.conf '' smallfiles true
openstack-config --set /etc/mongod.conf '' bind_ip 10.1.64.151

Enable and start the mongoDB service:

Copy
systemctl enable mongod.service
systemctl start mongod.service
systemctl status mongod.service

Now that we have installed and configured the MongoDB service, we can create the actual Database we will use with Ceilometer.

Copy
mongo --host aio151 --eval ' db = db.getSiblingDB("ceilometer"); db.createUser({user: "ceilometer", pwd: "pass", roles: [ "readWrite", "dbAdmin" ]})'

Example output:

MongoDB shell version: 2.6.9
connecting to: aio151:27017/test
Successfully added user: { "user" : "ceilometer", "roles" : [ "readWrite", "dbAdmin" ] }
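
If you want to double-check that the new account works, you can optionally authenticate against the ceilometer database with the mongo client (this is just a sanity check and not part of the required steps):

Copy
# Optional sanity check: authenticate as the ceilometer user and print database stats
mongo --host aio151 -u ceilometer -p pass --authenticationDatabase ceilometer --eval 'db.getSiblingDB("ceilometer").stats()'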

Create User and User-Roles

Step 4: Create a ceilometer user that Ceilometer uses to authenticate with Keystone.

As with all our services, we once again create a service specific user and associate them with the service tenant and provide an admin role. First create the user:

Copy
openstack user create ceilometer --password pass

And then associate the user with the tenant and associate the role:

Copy
openstack role add --project service --user ceilometer admin

Define services and service endpoints

Step 5: Register Ceilometer with Keystone so that other OpenStack services, clients, and SDKs can locate it.

Register the service name:

Copy
openstack service create --name ceilometer --description "Telemetry" metering

Create the service endpoint for the service:

Copy
openstack endpoint create --publicurl http://aio151:8777 --internalurl http://aio151:8777 --adminurl http://aio151:8777 --region RegionOne metering
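
If you would like to confirm the registration, the following read-only commands list the services and endpoints that Keystone now knows about; the ceilometer service of type metering should appear in both:

Copy
# Optional: confirm the Telemetry service and endpoint registration in Keystone
openstack service list
openstack endpoint list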

Step 6: Configure the Ceilometer Service

We’ll use the openstack-config tool to update the specific configuration options in the /etc/ceilometer/ceilometer.conf file. As before, we’ll configure the rabbit and keystone settings. Since we’re not using a MySQL database, we instead point to the MongoDB database we created.

In the Ceilometer case, in addition to the usual keystone admin configuration, we also configure a set of OS ‘user’ credentials. In our environment this is the same user, but these credentials are used by the agents to communicate back to Ceilometer; in that case they act in a fashion more similar to a user connecting to Ceilometer.

We also create a metering secret for services to pass information into the metering sub-system directly via the AMQP service as opposed to an agent talking in a “CLI-like” fashion.

Copy
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/ceilometer/ceilometer.conf database connection mongodb://ceilometer:pass@aio151:27017/ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_uri http://aio151:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken identity_uri http://aio151:35357
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password pass
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_auth_url http://aio151:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_username ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_password pass
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_endpoint_type internalURL
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_region_name RegionOne
openstack-config --set /etc/ceilometer/ceilometer.conf publisher telemetry_secret pass
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_host aio151
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_userid guest
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_password pass
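
Because openstack-config writes silently, it can be worth reading back one or two of the keys to catch typos early. This is optional; the --get mode simply prints the stored value:

Copy
# Optional: read back a couple of the values we just set
openstack-config --get /etc/ceilometer/ceilometer.conf database connection
openstack-config --get /etc/ceilometer/ceilometer.conf publisher telemetry_secret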

Configure Compute agent for Telemetry

Ceilometer gathers data through both an AMQP-based collector and polling agents (which query service APIs in a CLI-like fashion) for most of the OpenStack services. We will now configure Nova so that the compute agent and collector can gather its data:

Step 7: Update the /etc/nova/nova.conf file with the openstack-config tool. Here we define some default collection parameters for the agent to consume.

Copy
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit True
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour
openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
openstack-config --set /etc/nova/nova.conf DEFAULT notification_driver messagingv2

Configure the Image Service for Telemetry

Step 8: We will also configure the Glance Agent by updating the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files. We just need to configure the notification driver.

Copy
openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver messagingv2

Edit /etc/glance/glance-registry.conf

Copy
openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver messagingv2

Add Block Storage service agent for Telemetry

Step 9: To retrieve data related to Cinder volumes, you must configure the Block Storage service to send notifications to the bus. Edit /etc/cinder/cinder.conf

Copy
openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver messagingv2
openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder

Step 10: Enable and Start the Ceilometer services

First we can enable all the core Ceilometer services:

Copy
systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
systemctl status openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

We’ll also start the compute specific Ceilometer service:

Copy
systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl status openstack-ceilometer-compute.service

Then we’ll restart the other OpenStack services that we modified to enable their embedded Ceilometer notification services:

Copy
systemctl restart openstack-glance-registry.service openstack-glance-api.service openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service openstack-nova-compute.service

Step 11: Validate that Ceilometer is enabled and that an initial set of metering elements is available.
Use the ceilometer meter-list command to test access to Ceilometer:

Copy
ceilometer meter-list

Example output:

+----------------+-------+---------+--------------------------------------+----------------------------------+-------------+
| Name           | Type  | Unit    | Resource ID                          | User ID                          | Project ID  |
+----------------+-------+---------+--------------------------------------+----------------------------------+-------------+
| image          | gauge | image   | 3470f821-8483-4dd6-a55f-d0c102348c7e | None                             | e0eb7387... |
| image          | gauge | image   | 80ef3d94-c959-4ea4-af59-57933fed4bf5 | None                             | e0eb7387... |
| image.size     | gauge | B       | 3470f821-8483-4dd6-a55f-d0c102348c7e | None                             | e0eb7387... |
| image.size     | gauge | B       | 80ef3d94-c959-4ea4-af59-57933fed4bf5 | None                             | e0eb7387... |
| network        | gauge | network | c2e66fbf-672b-4cf6-8b5a-395c3b776c0f | dea011bd9e8449099707b7d18048f795 | e0eb7387... |
| network.create | delta | network | c2e66fbf-672b-4cf6-8b5a-395c3b776c0f | dea011bd9e8449099707b7d18048f795 | e0eb7387... |
+----------------+-------+---------+--------------------------------------+----------------------------------+-------------+
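
Once meters appear, you can optionally drill into the raw samples behind any one of them, for example the image meter shown above:

Copy
# Optional: list the individual samples recorded for the image meter
ceilometer sample-list -m image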

Telemetry Service Installation on Compute Node

To install Ceilometer on the compute node (compute161) you will need to log in to the compute node in your lab.

Step 12: Log on to compute161 via ssh from the lab-gateway:

Copy
ssh centos@compute161
sudo su -
source ~/openrc.sh

Install the telemetry compute agent to collect information from the compute node:

Copy
yum install openstack-ceilometer-compute python-ceilometerclient -y

As with the AIO node, edit the /etc/nova/nova.conf file:

Copy
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit True
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour
openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
openstack-config --set /etc/nova/nova.conf DEFAULT notification_driver messagingv2

Configure Ceilometer Service

Step 13: Edit /etc/ceilometer/ceilometer.conf

Copy
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_uri http://aio151:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken identity_uri http://aio151:35357
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password pass
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_auth_url http://aio151:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_username ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_password pass
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_endpoint_type internalURL
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_region_name RegionOne
openstack-config --set /etc/ceilometer/ceilometer.conf publisher telemetry_secret pass

Step 14: Restart the following services:

Copy
systemctl restart openstack-nova-compute.service
Copy
systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl status openstack-ceilometer-compute.service

Verify the Telemetry service installation

Step 15: Download an image from Glance:

We’ll download an image to create an event against the glance download meter.

Copy
glance image-download "CirrOS 0.3.2" > cirros.img

You can now get usage statistics for the various meters:

Copy
ceilometer statistics -m image.download -p 60

This command displays statistics for the image.download meter, grouped into 60-second periods.
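
Once the compute agent has been polling for a while, compute-side meters can be queried the same way. For example, assuming at least one instance is running and the agent has completed a polling cycle, the following optional command summarizes CPU utilization in 10-minute periods:

Copy
# Optional: statistics for the cpu_util meter gathered by the compute agent
ceilometer statistics -m cpu_util -p 600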

Step 16: Log on to the OpenStack Dashboard by opening a web browser and entering:

Copy
http://localhost:8080/dashboard

Type user name as admin and password as pass.

Go to Admin –> System Panel –> Resource Usage. Click Stats and check which metrics are available.

You have enabled a basic two node OpenStack cloud by hand. As you will learn in the lecture, this is highly relevant to customizing and troubleshooting an OpenStack environment, but most basic aspects of installation are typically automated.

If time permits, review the labs you’ve completed and consider what you have accomplished over the past days. Now that you don’t have to pause for lectures, how quickly do you think you could run through the lab a second time?

The last lab exercise introduces you to DevStack, an automated installation project that will condense all your effort from the previous days into about one hour (depending on the speed of your laptop and internet connection). As you build an OpenStack environment with DevStack, consider when this might be useful, and when you might rather want to take the more hands-on approach of manual installation.

OpenStack - Centos Lab9 : Orchestration – Kilo


 

The OpenStack Orchestration Service (Heat) provides a template-based orchestration service for OpenStack. The system leverages all of the other OpenStack services via direct API calls, resulting in a deployed (and re-deployable) cloud application infrastructure. The templates enable configuration and creation of all of the core OpenStack services and capabilities, and integrate closely with the Ceilometer service to enable auto-scaling of deployed templates.

In this lab, we will enable the OpenStack Heat engine, and create and deploy a simple template based system.

Heat Installation on AIO Node

Step 1: As with previous labs, ensure you are logged in to your AIO node from the lab-gateway, and have elevated your privileges to the root user.

Copy
ssh centos@aio151
sudo su -
source ~/openrc.sh

Step 2: Install the Heat modules on your AIO node.

Copy
yum install openstack-heat-api openstack-heat-engine openstack-heat-api-cfn python-heatclient -y

You have just installed:

  • openstack-heat-api – Accepts and responds to end-user Orchestration API calls.
  • openstack-heat-engine – The heat engine does all of the orchestration work and is the layer in which the resource integration is implemented.
  • python-heatclient – The command line interface and client libraries for Heat.
  • openstack-heat-api-cfn – Provides an AWS Query API that is compatible with AWS CloudFormation and, perhaps more importantly, enables the Ceilometer scale up/down trigger mechanisms.

Create Database for Orchestration Service

Step 3: As with prior service installations, we need to create a database named “heat” for Heat by logging in to MariaDB and creating both the database and the user (heat) with password (pass):

Copy
mysql -uroot -ppass <<EOF
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'pass';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'pass';
exit
EOF
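
If you want to verify the grants, you can optionally log back in as the new heat user; the heat database should be listed:

Copy
# Optional: confirm the heat user can see its database
mysql -uheat -ppass -e 'SHOW DATABASES;'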

Define Users and User Roles

Step 4: As with the other services we have installed, we need to create a heat user that Heat uses to authenticate with Keystone:

Copy
openstack user create heat --password pass

We will then assign the admin role as a part of the service tenant:

Copy
openstack role add --user heat admin --project service

In the default Heat policy file (/etc/heat/policy.json), an additional set of roles is referenced, including the “heat_stack_user” role that Heat automatically assigns to users it creates during stack deployment. We will also create a “heat_stack_owner” role for users who manage stacks; to make use of this role, Keystone needs to be aware of it so that users can be associated with it.

Copy
openstack role create heat_stack_owner

We will then assign the role to our admin user so that we can leverage Heat functions:

Copy
openstack role add --user admin heat_stack_owner --project admin

Define services and service endpoints

Now register the Heat API with Keystone so that other OpenStack services and clients can locate the API.

Step 5: Register the services and specify the endpoints

First we create the orchestration service “heat”:

Copy
openstack service create --name heat --description "OpenStack Orchestration" orchestration

Then we add the endpoint mapping to the orchestration service:

Copy
openstack endpoint create --publicurl http://aio151:8004/v1/%\(tenant_id\)s --internalurl http://aio151:8004/v1/%\(tenant_id\)s --adminurl http://aio151:8004/v1/%\(tenant_id\)s --region RegionOne orchestration

Similarly register a service and endpoint for heat-cfn (the AWS model service and endpoint):

Copy
openstack service create --name heat-cfn --description "Orchestration CloudFormation" cloudformation

And register the endpoint with Keystone:

Copy
openstack endpoint create --publicurl http://aio151:8000/v1 --internalurl http://aio151:8000/v1 --adminurl http://aio151:8000/v1 --region RegionOne cloudformation
Note: Early implementations of Heat, prior to the maturation of the HOT format, used Amazon Web Services CloudFormation-compatible templates. Although CloudFormation template compatibility was deprecated in the Icehouse release of OpenStack, the naming conventions of some Heat components continue to bear evidence of this former functional relationship. In addition, some of the automation code elements are still only available through the CFN compatibility process, hence the configuration of the CFN component.

Configure Heat Service

Now that we have the base components installed, and have the database created, and our service endpoints created, we need to configure the processes.

As one might start to expect, we will point the system to our AMQP service (RabbitMQ), configure Keystone access and the database connection, and then configure EC2 access credentials (as part of the CFN process configuration) and two CFN-specific service connections: the metadata service (this being different from the Neutron metadata service) and the call-back endpoint for Ceilometer integration (aka the waitcondition server).

Step 6: Edit /etc/heat/heat.conf

Copy
openstack-config --set /etc/heat/heat.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/heat/heat.conf DEFAULT rabbit_host aio151
openstack-config --set /etc/heat/heat.conf DEFAULT rabbit_password pass
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://aio151:5000/v2.0
openstack-config --set /etc/heat/heat.conf keystone_authtoken identity_uri http://aio151:35357
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_user heat
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_password pass
openstack-config --set /etc/heat/heat.conf database connection mysql://heat:pass@aio151/heat
openstack-config --set /etc/heat/heat.conf ec2authtoken auth_uri http://aio151:5000/v2.0
openstack-config --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url http://10.1.64.151:8000
openstack-config --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url http://10.1.64.151:8000/v1/waitcondition

Step 7: Final configuration: database migration and service startup:

Now that the heat processes know how to talk to the database, we can trigger the migration of the database to the current state:

Copy
su -s /bin/sh -c "heat-manage db_sync" heat
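
If you would like to confirm that the migration populated the schema, an optional quick look at the tables in the heat database will show them:

Copy
# Optional: list the tables created by heat-manage db_sync
mysql -uheat -ppass heat -e 'SHOW TABLES;'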

We also want to start the services, and enable their auto-start on boot if the control server (AIO node) reboots:

Copy
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl status openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
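
As an optional sanity check before building anything, heat stack-list should now return an empty table rather than an error:

Copy
# Optional: verify the Heat API answers (expect an empty stack list at this point)
heat stack-list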

Verify the Orchestration service installation

Step 8: The resources described in a Heat Orchestration Template (HOT) are collectively known as a ‘stack’. We’ll create a simple template that launches a VM and attaches it to our private network in order to ensure that the system functions.

The following will create a file called test-stack.yml.

Copy
cat > ~/test-stack.yml << EOF
heat_template_version: 2013-05-23
description: |
  Simple template to deploy a single compute instance, 
  and associate it with our private network.

parameters:
  Priv_Net:
    type: string
    description: Private Network Name for the server
    default: private-net

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: Stack-VM
      key_name: mykey
      image: CirrOS 0.3.2
      flavor: m1.tiny
      networks:
      - network:  { get_param: Priv_Net }

outputs:
  private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ my_instance, first_address ] }
EOF

The file above contains a header section that provides the template version and a description, a parameters section for user-defined variables, a resources section that specifies exactly what Heat is to create, and an outputs section that returns the internal IP address allocated to the VM after it has been created.

This test file automates the creation of a VM, but the private network name for the server was left as a user-defined parameter. While we did include a default for this parameter, we will pass it explicitly to Heat when we create the stack. Recall that we can determine the names of our networks from the output of the “net-list” command if using the neutron CLI, or from Horizon.
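
Optionally, before creating the stack you can ask Heat to syntax-check the template and confirm the network name it references; both commands are read-only:

Copy
# Optional: validate the template and confirm the private network name
heat template-validate -f ~/test-stack.yml
neutron net-list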

So now, let’s create your first stack:

Copy
heat stack-create -f test-stack.yml -P "Priv_Net=private-net" Stack1

While it may take a minute or two for the stack automation to complete, we can check on the current status with the “stack-list” command:

Copy
heat stack-list

Example output:  

+--------------------------------------+---------------+-----------------+----------------------+
|                  ID                  | stack_name    | stack_status    | creation_time        | 
+--------------------------------------+---------------+-----------------+----------------------+
| 847ee6a4-61ff-4bbe-953a-7d080cbac2f8 |   Stack1      | CREATE_COMPLETE | 2014-08-30T15:08:15Z |                    
+--------------------------------------+---------------+-----------------+----------------------+
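
If you want to see the full details of the stack, including the private_ip output we defined in the template, stack-show prints the complete record (optional):

Copy
# Optional: show the full stack record, including the outputs section
heat stack-show Stack1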

Step 9: Deploy another version of the stack via Horizon:

If you don’t already have a browser pointing to Horizon, you can log on to Horizon by opening a web browser on your laptop and type the following (assuming you followed the tunnel configuration instructions for connecting to the jumpbox):

Copy
http://127.0.0.1:8080/dashboard

Log in with the admin user and pass password.

If you were already logged in, you will need to log out and back in for the dashboard to pick up the Orchestration menu and pages.

Go to Project > Orchestration > Stacks. Click on Stack1; you should see the same information about Stack1 as we received from the CLI command previously.

You can also create a new stack, give it a different name (e.g. “Stack2”), and use the same HOT template. If you save the following to a file on your laptop, you can point Horizon to it. Or, you can just paste the template into the browser window (copy the code below and paste it into the template section). You don’t have to include anything in the environment section.

Copy
heat_template_version: 2013-05-23
description: |
  Simple template to deploy a single compute instance, 
  and associate it with our private network.

parameters:
  Priv_Net:
    type: string
    description: Private Network Name for the server
    default: private-net

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: Stack-VM
      key_name: mykey
      image: CirrOS 0.3.2
      flavor: m1.tiny
      networks:
      - network:  { get_param: Priv_Net }

outputs:
  private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ my_instance, first_address ] }

Start the stack, and fill in the name, a user password (this isn’t used in these systems but does apply to other hypervisor types), and ensure that the Network name is still private-net.

This stack should be created in the same fashion as our command line created stack.

Go to Project > Compute > Instances to confirm that this worked (you should now see two instances with Stack-VM names).

Now that you have installed Heat and had a chance to work with it, consider how Heat “orchestration” differs from Nova “orchestration.”

In the next lab, you will install Ceilometer – a metering project that integrates with Heat to provide a means of triggering Heat actions. Can you think of any use of such functionality?

OpenStack - Centos Lab8 : Horizon – Kilo


 

Horizon is a Web interface that enables cloud administrators and users to manage most of the commonly used OpenStack resources and services through a graphical user interface (GUI). The dashboard enables web-based interactions with the OpenStack services through the OpenStack APIs.

OpenStack Dashboard Installation

Step 1: Although Horizon is a web-based graphical interface, we first need to install this service on our control node in the same way as any other OpenStack service. If you are not already logged into the lab environment, SSH to your AIO node and then:

Copy
ssh centos@aio151
sudo su -
source ~/openrc.sh

Step 2: You will now install Horizon, as root, on the node that can contact Keystone.

Copy
yum install openstack-dashboard httpd mod_wsgi memcached python-memcached -y

You just installed:

  • openstack-dashboard – the Django-based web interface to OpenStack
  • httpd – Apache HTTP Server and binaries
  • mod_wsgi – An Apache module that provides a WSGI (Web Server Gateway Interface) compliant interface for hosting Python-based web applications within Apache.
  • memcached – Memory caching system
  • python-memcached – Provides a Python interface to the memcached daemon

Configure OpenStack Dashboard

Step 3: Edit /etc/openstack-dashboard/local_settings

Copy
vi /etc/openstack-dashboard/local_settings

Change OPENSTACK_HOST to the hostname of your Identity Service:

Copy
OPENSTACK_HOST = "aio151"

Replace ALLOWED_HOSTS line as given below, to allow all hosts to access the dashboard:

Copy
ALLOWED_HOSTS = ['*']

Configure the memcached session storage service: uncomment the following lines and comment out any duplicates that are present.

CACHES = {
   'default': {
       'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
       'LOCATION': '127.0.0.1:11211',
   }
}

Now save and exit the file.

To finalize installation:

On CentOS (and RHEL), configure SELinux to permit the web server to connect to OpenStack services:

Copy
setsebool -P httpd_can_network_connect on

Due to a packaging bug, the dashboard CSS fails to load properly. Run the following command to resolve this issue:

Copy
chown -R apache:apache /usr/share/openstack-dashboard/static

Step 4: Start the Apache web server and memcached:

Copy
systemctl enable httpd.service memcached.service
systemctl start httpd.service memcached.service
systemctl status httpd.service memcached.service
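
Optionally, from the AIO node itself you can confirm that Apache is serving the dashboard before switching to a browser; expect an HTTP 200 or a redirect to the login page:

Copy
# Optional: check that the dashboard URL responds (expect HTTP 200 or 30x)
curl -sI http://10.1.64.151/dashboard | head -n 1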

You have now installed Horizon. Next you will log in to the dashboard through a browser to verify that it is properly installed and to complete the rest of the activities in this lab.

Login to the dashboard and explore Project and Admin Tabs

The dashboard is available on the node with the nova-dashboard server role; in your case this will be the AIO or Controller node. If you logged into your environment via the jumpbox, you are already set up for local access to your dashboard by typing the following in a web browser that has JavaScript and cookies enabled:

Copy
http://localhost:8080/dashboard

Type user name as admin and password as pass and sign in.

If you are accessing your environment through a VNC session, open a web browser that has JavaScript and cookies enabled and type:

Copy
http://10.1.64.151/dashboard

Type user name as admin and password as pass and sign in.

OpenStack dashboard – Project tab

The top of the window displays your user name. You can also access Settings or sign out of the dashboard. The visible tabs and functions in the dashboard depend on the access permissions, or roles, of the user you are logged in as.

  • If you are logged in as an end user, only the Project tab is displayed.
  • If you are logged in as an administrator, both the Project tab and Admin tab are displayed.

Projects (=tenants) are organizational units in the cloud. Each user is a member of one or more projects. Within a project, a user creates and manages instances. From the Project tab, you can view and manage the resources in a selected project, including instances and images. You can select from multiple projects (if your user has more than one project) from the current project dropdown list at the top of the page next to the OpenStack logo.

From the Project tab, you can access the following tabs:

Compute tab

  • Overview – View reports for the project.
  • Instances – View, launch, create a snapshot from, stop, pause, or reboot instances, or connect to them through VNC.
  • Volumes – Use the following tabs to complete these tasks:
    • Volumes – View, create, edit, and delete volumes.
    • Volume Snapshots – View, create, edit, and delete volume snapshots.
  • Images – View images and instance snapshots created by project users, plus any images that are publicly available. Create, edit, and delete images, and launch instances from images and snapshots.
  • Access & Security – Use the following tabs to complete these tasks:
    • Security Groups – View, create, edit, and delete security groups and security group rules.
    • Key Pairs – View, create, edit, import, and delete key pairs.
    • Floating IPs – Allocate an IP address to or release it from a project.
    • API Access – View API endpoints.

Network tab

  • Network Topology – View the network topology.
  • Networks – Create and manage public and private networks.
  • Routers – Create and manage routers.

Object Store tab

  • Containers – Create and manage containers and objects.

Orchestration tab

  • Stacks – Use the REST API to orchestrate multiple composite cloud applications.

OpenStack dashboard—Admin tab

Administrative users have access to all of the functionality of the Project Tab, but can also use the Admin tab to view usage and to manage instances, volumes, flavors, images, projects, users, services, and quotas. Access the following categories to complete these tasks:

System Panel tab

  • Overview – View basic reports.
  • Resource Usage – Use the following tabs to view the following usages:
    • Daily Report – View the daily report.
    • Stats – View the statistics of all resources.
  • Hypervisors – View the hypervisor summary.
  • Host Aggregates – View, create, and edit host aggregates. View the list of availability zones.
  • Instances – View, pause, resume, suspend, migrate, soft or hard reboot, and delete running instances that belong to users of some, but not all, projects. Also, view the log for an instance or access an instance through VNC.
  • Volumes – View, create, edit, and delete volumes and volume types.
  • Flavors – View, create, edit, view extra specifications for, and delete flavors. A flavor is the size of an instance.
  • Images – View, create, edit properties for, and delete custom images.
  • Networks – View, create, edit properties for, and delete networks.
  • Routers – View, create, edit properties for, and delete routers.
  • System Info – Use the following tabs to view the service information:
    • Services – View a list of the services.
    • Compute Services – View a list of all Compute services.
    • Network Agents – View the network agents.
  • Default Quotas – View default quota values. Quotas are hard-coded in OpenStack Compute and define the maximum allowable size and number of resources.

Identity Panel tab

  • Projects – View, create, assign users to, remove users from, and delete projects.
  • Users – View, create, enable, disable, and delete users.

Upload and manage image

A virtual machine image, which we simply refer to as an image, is a single file that contains a virtual disk that has a bootable operating system installed on it. As we’ve already discussed in our treatments of Glance and Nova services, images are used to create VM instances within OpenStack. Depending on your user role, you may have permission to upload and manage images. Operators may restrict the upload and management of images to cloud administrators or operators only. If you have the appropriate privileges, as you do in this lab, you can use the dashboard to upload and manage images in the admin project. So now, we will go through the process of image upload and management.

Activity 1: Upload an image to a project

  1. If you are not already logged in, log in to Horizon.
  2. On the Project tab, click Images.
  3. Click Create Image. The Create an Image dialog box will appear.
  4. Enter the following values:
    • Name – Enter a name for the image.
    • Description – Optionally, enter a brief description of the image.
    • Image Source – Choose the image source from the list. Your choices are Image Location and Image File.
    • Image File or Image Location – Based on your selection for Image Source, you either enter the location URL of the image in the Image Location field, or browse to the image file on your system and add it. (e.g. an HTTP source such as http://10.1.1.92/images/cirros-0.3.2-x86_64-disk.img)
    • Format – Select the correct format (for example, QCOW2) for the image.
    • Architecture – Specify the architecture. For example, i386 for a 32-bit architecture or x86-64 for a 64-bit architecture.
    • Minimum Disk (GB) – Leave this optional field empty.
    • Minimum RAM (MB) – Leave this optional field empty.
    • Public – Select this check box to make the image public to all users.
    • Protected – Select this check box to ensure that only users with permissions can delete the image.
  5. Click Create Image. The image is queued to be uploaded. It might take some time before the status changes from Queued to Active.

Activity 2: Update an existing image

  1. Log in to the dashboard.
  2. On the Project tab, click Images.
  3. Select the image that you want to edit.
  4. In the Actions column, click More and then select Edit from the list.
  5. In the Update Image dialog box, you can perform the following actions:
    • Change the name of the image.
    • Select the Public check box to make the image public.
    • Clear the Public check box to make the image private.
  6. Click Update Image.

Activity 3 – Delete an image

Deletion of images is permanent and cannot be reversed. Only users with the appropriate permissions can delete images.

  1. Log in to the dashboard.
  2. On the Project tab, click Images.
  3. Select the images that you want to delete.
  4. Click Delete Images.
  5. In the Confirm Delete Image dialog box, click Delete Images to confirm the deletion.

Configure access and security for instances

Before you launch an instance, you should add security group rules to enable users to ping and use SSH to connect to the instance. To do so, you either add rules to the default security group or add a security group with rules.

Key pairs are SSH credentials that are injected into an instance when it is launched. To use key pair injection, the image that the instance is based on must contain the cloud-init package. Each project should have at least one key pair. If you have generated a key pair with an external tool, you can import it into OpenStack with Horizon. The same key pair can be used for multiple instances that belong to a project.

When an instance is created in OpenStack, it is automatically assigned a fixed IP address in the network to which the instance is assigned. This IP address is permanently associated with the instance until the instance is terminated. However, in addition to the fixed IP address, a floating IP address can also be attached to an instance. Unlike fixed IP addresses, floating IP addresses are able to have their associations modified at any time, regardless of the state of the instances involved.

Activity 1 – Add a rule to the default security group

This procedure enables SSH and ICMP (ping) access to instances. The rules apply to all instances within a given project, and should be set for every project unless there is a reason to prohibit SSH or ICMP access to the instances. This procedure can be adjusted as necessary to add additional security group rules to a project, if your cloud environment requires them.

  1. Log in to the dashboard, choose a project, and click Access & Security. The Security Groups tab shows the security groups that are available for this project.
  2. Select the default security group and click Edit Rules.
  3. To allow SSH access, click Add Rule.
  4. In the Add Rule dialog box, enter the following values:
    Copy
    Rule - SSH
    Remote - CIDR
    CIDR - 0.0.0.0/0

    To accept requests from a particular range of IP addresses, specify the IP address block in the CIDR box.

  5. Click Add. Instances will now have SSH port 22 open for requests from any IP address.
  6. To add an ICMP rule, click Add Rule.
  7. In the Add Rule dialog box, enter the following values:
    Copy
    Rule - All ICMP
    Direction - Ingress
    Remote - CIDR
    CIDR - 0.0.0.0/0
  8. Click Add. Instances will now accept all incoming ICMP packets.

Activity 2 – Add a key pair

You should create at least one key pair for each project in OpenStack.

  1. Log in to the dashboard, choose a project, and click Access & Security.
  2. Click the Keypairs tab, which shows the key pairs that are available for this project.
  3. Click Create Keypair.
  4. In the Create Keypair dialog box, enter a name for your key pair, and click Create Keypair.
  5. Respond to the prompt to download the key pair.

Activity 3 – Import a key pair

If you don’t have a keypair on your local machine (e.g. OS X, Linux, or the AIO control host), check whether one already exists:

Copy
cat ~/.ssh/id_rsa.pub

If that is blank, then:

Copy
ssh-keygen -b 1024 -t rsa -f ~/.ssh/id_rsa -P ''

Now that you have a keypair to import:

  1. Log in to the dashboard, choose a project, and click Access & Security.
  2. Click the Keypairs tab, which shows the key pairs that are available for this project.
  3. Click Import Keypair.
  4. In the Import Keypair dialog box, enter the name of your key pair, copy the public key into the Public Key box, and then click Import Keypair. If you are using the dashboard from a Windows computer, use PuTTYgen to load the .pem file and convert and save it as .ppk. (For more information see the WinSCP web page for PuTTYgen.) The Compute database registers the public key of the key pair. The dashboard lists the key pair on the Access & Security tab, though you cannot download the public keypair directly from Horizon.

Launch and manage instances

You can launch an instance (VM) from the following sources:

  • Images uploaded to the OpenStack Image Service, as described in the section 8b “Upload and manage image”.
  • Image that you have copied to a persistent volume. The instance launches from the volume, which is provided by the cinder-volume API through iSCSI.

Activity 1 – Launch an instance

When you launch an instance from an image, OpenStack creates a local copy of the image on the compute node where the instance starts. When you launch an instance from a volume, note the following steps:

  • To select the volume from which to launch, launch an instance from an arbitrary image on the volume. The image that you select does not boot. Instead, it is replaced by the image on the volume that you choose in the next steps. (To boot a Xen image from a volume, the image you launch in must be the same type, fully virtualized or paravirtualized, as the one on the volume.)
  • Select the volume or volume snapshot from which to boot. Enter a device name. (Enter vda for KVM images or xvda for Xen images.)

Step 1. Log in to the dashboard, choose a project, and click Images. The dashboard shows the images that have been uploaded to OpenStack Image Service and are available for this project.

Step 2. Select an image and click Launch.

Step 3. In the Launch Instance dialog box, specify the following values:

Details tab

  • Availability Zone – By default, this value is set to the availability zone given by the cloud provider (for example, us-west or asia-south). For some cases, it could simply be nova.
  • Instance Name – Assign a name to the virtual machine. The name you assign here becomes the initial host name of the server. After the server is built, if you change the server name in the API or change the host name directly, the names are not updated in the dashboard. Server names are not guaranteed to be unique when created, so you could have two instances with the same host name. It is best to assign a unique name to your instance at creation (e.g. “Test1” is better than just “test”).
  • Flavor – Specify the size of the instance to launch. The flavor is selected based on the size of the image selected for launching an instance. For example, while creating an image, if you have entered the value in the Minimum RAM (MB) field as 2048, then on selecting the image, the default flavor is m1.small.
  • Instance Count – To launch multiple instances, enter a value greater than 1. The default is 1.
  • Instance Boot Source – Since you are launching an instance from an image, Boot from image is chosen by default. However, your options are:
    • Boot from image – If you choose this option, a new field for Image Name displays. You can select the image from the list.
    • Boot from snapshot – If you choose this option, a new field for Instance Snapshot displays. You can select the snapshot from the list.
    • Boot from volume – If you choose this option, a new field for Volume displays. You can select the volume from the list.
    • Boot from image (creates a new volume) – With this option, you can boot from an image and create a volume by entering the Device Size and Device Name for your volume. Click the Delete on Terminate option to delete the volume on terminating the instance.
    • Boot from volume snapshot (creates a new volume) – Using this option, you can boot from a volume snapshot and create a new volume by choosing Volume Snapshot from a list and adding a Device Name for your volume. Click the Delete on Terminate option to delete the volume on terminating the instance.
  • Image Name – This field changes based on your previous selection. Since you have chosen to launch an instance using an image, the Image Name field displays. Select the image name from the dropdown list.

Access & Security tab

  • Keypair – Specify a key pair. If the image uses a static root password or a static key set (neither is recommended), you do not need to provide a key pair to launch the instance.
  • Security Groups – Activate the security groups that you want to assign to the instance. Security groups are a kind of cloud firewall that defines which incoming network traffic is forwarded to instances. For details, see the section called “Add a rule to the default security group”. If you have not created any security groups, you can assign only the default security group to the instance.

Networking tab

  • Selected Networks – To add a network to the instance, click the + in the Available Networks field.

Post-Creation tab

  • Customization Script – Specify a customization script that runs after your instance launches.

Advanced Options tab

  • Disk Partition – Select the type of disk partition from the dropdown list. Choose Automatic for this lab.
    • Automatic – Entire disk is a single partition that automatically resizes.
    • Manual – Faster build times but requires manual partitioning.

Step 4. Click Launch.

The instance starts on a compute node in the cloud. The Instances tab shows the instance’s name, its private and public IP addresses, size, status, task, and power state. If you did not provide a key pair, security groups, or rules, users can access the instance only from inside the cloud through VNC. Even pinging the instance is not possible without an ICMP rule configured. To access the instance through a VNC console, see the section called “Access an instance through a console”.

Activity 2 – Connect to your instance by using SSH: To use SSH to connect to your instance, you use the downloaded keypair file. The default user name is centos for the CentOS cloud images.

  1. Copy the IP address for your instance.
  2. Use the ssh command to make a secure connection to the instance. For example: $ ssh -i MyKey.pem centos@10.0.0.2. At the prompt, type yes.

Activity 3 – Track usage for instances: You can track usage for instances for each project. You can track costs per month by showing metrics like number of vCPUs, disks, RAM, and uptime for all your instances.

  1. Log in to the dashboard, choose a project, and click Overview.
  2. To query the instance usage for a month, select a month and click Submit.
  3. To download a summary, click Download CSV Summary.

Create an instance snapshot

  1. Log in to the dashboard, choose a project, and click Instances.
  2. Select the instance from which to create a snapshot.
  3. In the Actions column, click Create Snapshot.
  4. In the Create Snapshot dialog box, enter a name for the snapshot, and click Create Snapshot. The Images category shows the instance snapshot. To launch an instance from the snapshot, select the snapshot and click Launch. Proceed with the directions provided earlier to “Launch an instance”.

Activity 4 – Manage an instance

  1. Log in to the dashboard, choose a project, and click Instances.
  2. Select an instance.
  3. In the More list in the Actions column, select the state.

You can resize or rebuild an instance. You can also choose to view the instance console log, edit instance or the security groups. Depending on the current state of the instance, you can pause, resume, suspend, soft or hard reboot, or terminate it.

Create and manage volumes

Volumes are block storage devices that you attach to instances to enable persistent storage. You can attach a volume to a running instance or detach a volume and attach it to another instance at any time. You can also create a snapshot from or delete a volume. Only administrative users can create volume types.

Activity 1: Create a volume

  1. Log in to the dashboard, choose a project, and click Volumes.
  2. Click Create Volume. In the dialog box that opens, enter or select the following values.
  • Volume Name – Specify a name for the volume.
  • Description – Optionally, provide a brief description for the volume.
  • Type – Leave this field blank.
  • Size (GB) – The size of the volume in gigabytes.
  • Volume Source – Select one of the following options:
    • No source, empty volume – Creates an empty volume. An empty volume does not contain a file system or a partition table.
    • Snapshot – If you choose this option, a new field for Use snapshot as a source displays. You can select the snapshot from the list.
    • Image – If you choose this option, a new field for Use image as a source displays. You can select the image from the list. Select the Availability Zone from the list. By default, this value is set to the availability zone given by the cloud provider (for example, us-west or asia-south). For some cases, it could be nova.
    • Volume – If you choose this option, a new field for Use volume as a source displays. You can select the volume from the list. Options to use a snapshot or a volume as the source for a volume are displayed only if there are existing snapshots or volumes.
  3. Click Create Volume.

The dashboard shows the volume on the Volumes tab.

Activity 2 – Attach a volume to an instance

After you create one or more volumes, you can attach them to instances. You can attach a volume to one instance at a time.

  1. Log in to the dashboard, choose a project, and click Volumes.
  2. Select the volume to add to an instance and click Edit Attachments.
  3. In the Manage Volume Attachments dialog box, select an instance.
  4. Enter the name of the device from which the volume is accessible by the instance. The actual device name might differ from the volume name because of hypervisor settings.
  5. Click Attach Volume. The dashboard shows the instance to which the volume is now attached and the device name. You can view the status of a volume in the Volumes tab of the dashboard. The volume is either Available or In-Use. Now you can log in to the instance and mount, format, and use the disk.

Activity 3 – Detach a volume from an instance

  1. Log in to the dashboard, choose a project, and click Volumes.
  2. Select the volume and click Edit Attachments.
  3. Click Detach Volume and confirm your changes.

A message indicates whether the action was successful.

Activity 4 – Create a snapshot from a volume

  1. Log in to the dashboard, choose a project, and click Volumes.
  2. Select a volume from which to create a snapshot.
  3. From the More list, select Create Snapshot.
  4. In the dialog box that opens, enter a snapshot name and a brief description.
  5. Confirm your changes.

The dashboard shows the new volume snapshot in Volume Snapshots tab.

Activity 5 – Edit a volume

  1. Log in to the dashboard, choose a project, and click Volumes.
  2. On the Project tab, click Volumes.
  3. Select the volume that you want to edit.
  4. In the Actions column, click Edit Volume.
  5. In the Edit Volume dialog box, update the name and description of the volume.
  6. Click Edit Volume. You can extend a volume by using the Extend Volume option available in the “More” dropdown list and entering the new value for volume size.

Activity 6 – Delete a volume

When you delete an instance, the data in its attached volumes is not destroyed. When you delete a volume, the data is permanently destroyed.

  1. Log in to the dashboard, choose a project, and click Volumes.
  2. Select the check boxes for the volumes that you want to delete.
  3. Click Delete Volumes and confirm your choice.

A message indicates whether the action was successful.

Create and manage networks

The OpenStack Networking service provides a scalable system for managing the network connectivity within an OpenStack cloud deployment. It can easily and quickly react to changing network needs (for example, creating and assigning new IP addresses). Networking in OpenStack is complex. This lab section provides the basic instructions for creating a network and a router. For detailed information about managing networks, refer to the OpenStack Cloud Administrator Guide on OpenStack.org.

Activity 1 – Create a network

  1. Log in to the dashboard, choose a project, and click Networks.
  2. Click Create Network.
  3. In the Create Network dialog box, specify the following values.

Network tab

  • Network Name – Specify a name to identify the network.
  • Allocation Pools – Specify IP address pools.
  • DNS Name Servers – Specify a name for the DNS server.
  • Host Routes – Specify the IP address of host routes.

Subnet tab

  • Create Subnet – Select this check box to create a subnet. You do not have to specify a subnet when you create a network, but if you do not, any attached instance receives an Error status.
  • Subnet Name – Specify a name for the subnet.
  • Network Address – Specify the IP address for the subnet.
  • IP Version – Select IPv4 or IPv6.
  • Gateway IP – Specify an IP address for a specific gateway. This parameter is optional.
  • Disable Gateway – Select this check box to disable a gateway IP address.

Subnet Detail tab

  • Enable DHCP – Select this check box to enable DHCP.
  4. Click Create. The dashboard shows the network on the Networks tab.

Activity 2 – Create a router

  1. Log in to the dashboard, choose a project, and click Routers.
  2. Click Create Router.
  3. In the Create Router dialog box, specify a name for the router and click Create Router. The new router is now displayed in the Routers tab.
  4. Click the new router’s Set Gateway button.
  5. In the External Network field, specify the network to which the router will connect, and then click Set Gateway.
  6. To connect a private network to the newly created router, perform the following steps:
    1. On the Routers tab, click the name of the router.
    2. On the Router Details page, click Add Interface.
    3. In the Add Interface dialog box, specify the following information:
      • Subnet – Select a subnet.
      • IP Address (optional) – Enter the router interface IP address for the selected subnet. Note: If this value is not set, then by default, the first host IP address in the subnet is used by OpenStack Networking.

      The Router Name and Router ID fields are automatically updated.

    4. Click Add Interface.

You have successfully created the router. You can view the new topology from the Network Topology tab.

You’ve now successfully installed Horizon and explored most of the basic operations accessible through this GUI.

As a review – recall that there are actions that are accessible by CLI that are not through Horizon. Can you (or did you) find (or not find) some of these?

In the next two labs you will explore Heat and Ceilometer, two OpenStack projects that provide some ‘intelligence’ to your cloud and enable automation. First you will install Heat, the OpenStack Orchestration project. Consider how this ‘orchestration’ differs from the nova ‘orchestration’ functions.