Month: October 2018

Top 5 eCommerce Posts for October

SEO for holiday shoppers – Search Engine Land: Let's dive in with 5 things you can do right now to get started on making more money during this peak time of year.
4 Online Marketing Tips That May Sound Obvious But Should Not Be Overlooked – Marketing Insider Group: With more people fighting for a […]

WordPress Configuration Cheat Sheet

In our series about misconfigurations of PHP frameworks, we have investigated Symfony, a very versatile and modular framework. Due to its enormous distribution and the multitude of plugins, WordPress is also a very popular target for attackers. This cheat sheet focuses on the wp-config.php file and highlights important settings to check when configuring your secure WordPress installation.
Download Cheat Sheet as PDF
1. Disable Debugging

The debug functionality should not be active in a production environment, as it might provide useful information to potential attackers.
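The relevant wp-config.php constants for a production site look like the minimal sketch below (WP_DEBUG_LOG and WP_DEBUG_DISPLAY are the companion settings that control logging and on-screen output; adjust to your own hardening policy):

// wp-config.php – keep debugging off in production
define( 'WP_DEBUG', false );
define( 'WP_DEBUG_LOG', false );
define( 'WP_DEBUG_DISPLAY', false );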

Kali Linux 2018.4 Release

Welcome to our fourth and final release of 2018, Kali Linux 2018.4, which is available for immediate download. This release brings our kernel up to version 4.18.10, fixes numerous bugs, includes many updated packages, and a very experimental 64-bit Raspberry Pi 3 image.

New Tools and Tool Upgrades

We have only added one new tool to the distribution in this release cycle but it’s a great one. Wireguard is a powerful and easy to configure VPN solution that eliminates many of the headaches one typically encounters setting up VPNs. Check out our Wireguard post for more details on this great addition.

Kali Linux 2018.4 also includes updated packages for Burp Suite, Patator, Gobuster, Binwalk, Faraday, Fern-Wifi-Cracker, RSMangler, theHarvester, wpscan, and more. For the complete list of updates, fixes, and additions, please refer to the Kali Bug Tracker Changelog.

64-bit Raspberry Pi 3

We have created a very experimental Raspberry Pi 3 image that supports 64-bit mode. Please note that this is a beta image, so if you discover anything that isn’t working, please alert us on our bug tracker.

Download Kali Linux 2018.4

If you would like to check out this latest and greatest Kali release, you can find download links for ISOs and Torrents on the Kali Downloads page along with links to the Offensive Security virtual machine and ARM images, which have also been updated to 2018.4. If you already have a Kali installation you’re happy with, you can easily upgrade in place as follows.

root@kali:~# apt update && apt -y full-upgrade

Ensuring your Installation is Updated

To double check your version, first make sure your Kali package repositories are correct.

root@kali:~# cat /etc/apt/sources.list
deb http://http.kali.org/kali kali-rolling main non-free contrib

Then after running ‘apt -y full-upgrade’, you may require a ‘reboot’ before checking:

root@kali:~# grep VERSION /etc/os-release
VERSION="2018.4"
VERSION_ID="2018.4"
root@kali:~#
root@kali:~# uname -a
Linux kali 4.18.0-kali2-amd64 #1 SMP Debian 4.18.10-2kali1 (2018-10-09) x86_64 GNU/Linux

If you come across any bugs in Kali, please open a report on our bug tracker. We’ll never be able to fix what we don’t know about.

Free Exchange chapters – Manage Users, Groups & Public Folders

Last year I was invited to write an Exchange 2016 book with a few fellow MVPs. Unfortunately, due to unforeseen circumstances, the book was later canceled.

Rather than have my chapters go to waste (combined, these chapters total 40,000 words), I wanted to offer them to you for free.

Below you will find three chapters covering the management of mailboxes, groups, contacts, mail-enabled users, and public folders.

That said, I do want to warn you that these chapters only received a brief technical review. They did not go through any formal editorial process, so I apologize in advance for any errors. Please read them at your own risk.

While I offer these for free, I would like to state that my work is under copyright. By viewing or downloading these chapters, you agree to the copyright notice below.

Copyright © 2018 by Gareth Gudger

All rights reserved. No part of these chapters may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, please email info@supertekboy.com.

The post Free Exchange chapters – Manage Users, Groups & Public Folders appeared first on SuperTekBoy.

How to configure HAProxy in Openstack (High Availability)

This is the second part of my previous article, where I shared the steps to configure an OpenStack HA cluster using pacemaker and corosync. In this article I will share the steps to configure HAProxy in OpenStack and move our keystone endpoints to the load balancer using a Virtual IP (VIP).


Configure HAProxy in Openstack

In this lab deployment, we will use HAProxy to load-balance our control plane services. Some deployments may also implement Keepalived and run HAProxy in an Active/Active configuration. For this deployment, we will run HAProxy Active/Passive and manage it, along with our VIP, as a Pacemaker resource.

To start, install HAProxy on both nodes using the following command:

NOTE:

On RHEL systems you must have an active subscription to RHN, or you can configure a local offline repository from which the "yum" package manager can install the provided RPM and its dependencies.

[root@controller1 ~]# yum install -y haproxy
[root@controller2 ~]# yum install -y haproxy

Verify installation with the following command:


[root@controller1 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64

[root@controller2 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64

Next, we will create a configuration file for HAProxy which load-balances the API services installed on the two controllers. Use the following example as a template, replacing the IP addresses in the example with the IP addresses of the two controllers and the IP address of the VIP that you’ll be using to load-balance the API services.

NOTE:

The IP address you plan to use for the VIP must not already be in use.

Take a backup of the existing configuration file on both controller nodes:

[root@controller1 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp
[root@controller2 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp

The following example /etc/haproxy/haproxy.cfg will load-balance Horizon in our environment:

[root@controller1 haproxy]# cat haproxy.cfg
global
  daemon
  group  haproxy
  maxconn  40000
  pidfile  /var/run/haproxy.pid
  user  haproxy

defaults
  log  127.0.0.1 local2 warning
  mode  tcp
  option  tcplog
  option  redispatch
  retries  3
  timeout  connect 10s
  timeout  client 60s
  timeout  server 60s
  timeout  check 10s

listen horizon
  bind 192.168.122.30:80
  mode http
  cookie SERVERID insert indirect nocache
  option tcplog
  timeout client 180s
  server controller1 192.168.122.20:80 cookie controller1 check inter 1s
  server controller2 192.168.122.22:80 cookie controller2 check inter 1s

In this example, controller1 has an IP address of 192.168.122.20 and controller2 has an IP address of 192.168.122.22. The VIP that we’ve chosen to use is 192.168.122.30. Copy this file, replacing the IP addresses with the addresses in your lab, to /etc/haproxy/haproxy.cfg on each of the controllers.

 

Next, copy this haproxy.cfg file to the second controller:


[root@controller1 ~]# scp /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg

In order for Horizon to respond to requests on the VIP, we’ll need to add the VIP as a ServerAlias in the Apache virtual host configuration. This is found at /etc/httpd/conf.d/15-horizon_vhost.conf in our lab installation. Look for the following line on controller1:

ServerAlias 192.168.122.20

and the following line on controller2:

ServerAlias 192.168.122.22

Add an additional ServerAlias line with the VIP on both controllers:

ServerAlias 192.168.122.30
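On controller1, the relevant part of the Puppet-generated virtual host should then look roughly like the excerpt below (illustrative only; the exact ServerName and surrounding directives will differ in your file, and only the added ServerAlias line matters):

<VirtualHost *:80>
  ServerName controller1
  ServerAlias 192.168.122.20
  ServerAlias 192.168.122.30
  # ... remaining Puppet-managed directives unchanged ...
</VirtualHost>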

You’ll also need to tell Apache not to listen on the VIP so that HAProxy can bind to the address. To do this, modify /etc/httpd/conf/ports.conf and specify the IP address of the controller in addition to the port numbers. The following is an example:

[root@controller1 ~]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************

Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.20:35357
Listen 192.168.122.20:5000
Listen 192.168.122.20:80

Here 192.168.122.20 is the IP of controller1.

On controller2, repeat the same with that controller's IP:

[root@controller2 ~(keystone_admin)]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************

Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.22:35357
Listen 192.168.122.22:5000
Listen 192.168.122.22:80

Restart Apache to pick up the new alias:


[root@controller1 ~]# systemctl restart httpd
[root@controller2 ~]# systemctl restart httpd
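Before moving on, it is worth a quick sanity check that Apache is now bound only to the controller's own IP, leaving port 80 on the VIP free for HAProxy:

[root@controller1 ~]# ss -tnlp | grep ':80 '

httpd should show up bound to 192.168.122.20:80 rather than 0.0.0.0:80; once Pacemaker starts HAProxy, the VIP address 192.168.122.30:80 will be listed under the haproxy process instead.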

Next, add the VIP and the HAProxy service to the Pacemaker cluster as resources. These commands should only be run on the first controller node. This tells Pacemaker three things about the resource you want to add:

  • The first field (ocf in this case) is the standard to which the resource script conforms and where to find it.
  • The second field (heartbeat in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in.
  • The third field (IPaddr2 in this case) is the name of the resource script.

 

[root@controller1 ~]# pcs resource create VirtualIP IPaddr2 ip=192.168.122.30 cidr_netmask=24
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')

[root@controller1 ~]# pcs resource create HAProxy systemd:haproxy

Co-locate the HAProxy service with the VirtualIP to ensure that the two run together:

[root@controller1 ~]# pcs constraint colocation add VirtualIP with HAProxy score=INFINITY
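Optionally, you can also add an ordering constraint so that the VIP is always brought up before HAProxy tries to bind to it (an extra step on top of the colocation rule above):

[root@controller1 ~]# pcs constraint order VirtualIP then HAProxy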

Verify that the resources have been started (pcs status can be run from either controller):

[root@controller1 ~]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 12:44:27 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

At this point, you should be able to access Horizon using the VIP you specified. Traffic will flow from your client to HAProxy on the VIP to Apache on one of the two nodes.
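A quick way to test this from the command line is shown below (the /dashboard path assumes the packstack default; adjust it if your Horizon is served from a different URL):

[root@controller1 ~]# curl -I http://192.168.122.30/dashboard

An HTTP 200 or a redirect to the login page indicates that HAProxy is forwarding requests on the VIP to one of the Apache back ends.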

 

Additional API service configuration

Now that the basic HAProxy configuration in OpenStack is complete, the final configuration step is to move each of the OpenStack API endpoints behind the load balancer. There are three steps in this process, which are as follows:


  • Update the HAProxy configuration to include the service.
  • Move the endpoint in the Keystone service catalog to the VIP.
  • Reconfigure services to point to the VIP instead of the IP of the first controller.

 

In the following example, we will move the Keystone service behind the load balancer. This process can be followed for each of the API services.

First, add a section to the HAProxy configuration file for the public (authorization) and admin endpoints of Keystone. Add the template below to the existing haproxy.cfg file on both controllers:

[root@controller1 ~]# vim /etc/haproxy/haproxy.cfg
listen keystone-admin
  bind 192.168.122.30:35357
  mode tcp
  option tcplog
  server controller1 192.168.122.20:35357 check inter 1s
  server controller2 192.168.122.22:35357 check inter 1s

listen keystone-public
  bind 192.168.122.30:5000
  mode tcp
  option tcplog
  server controller1 192.168.122.20:5000 check inter 1s
  server controller2 192.168.122.22:5000 check inter 1s

Restart the haproxy service on the active node:

[root@controller1 ~]# systemctl restart haproxy.service

You can determine the active node with the output from pcs status. Check to make sure that HAProxy is now listening on ports 5000 and 35357 using the following commands on both the controllers:

[root@controller1 ~]# curl http://192.168.122.30:5000
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:5000/v2.0/", "rel": "self"}, {"href": "htt

[root@controller1 ~]# curl http://192.168.122.30:5000/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:35357/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

All the above commands should output some JSON describing the status of the Keystone service, confirming that the respective ports are in a listening state.
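On the active node, you can also verify the listeners directly with ss (an additional check, not strictly required):

[root@controller1 ~]# ss -tnlp | grep -E ':(5000|35357) '

The VIP entries on ports 5000 and 35357 should be owned by the haproxy process, while Apache keeps its own copies bound to the controller's IP address.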


 

Next, update the endpoint for the identity service in the Keystone service catalogue by creating new endpoints and deleting the old ones. Start by sourcing your existing keystonerc_admin file:

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Below is the content from my keystonerc_admin

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.20:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

As you can see, OS_AUTH_URL currently points to the existing endpoint on the controller. We will update this in a moment.

Get the list of current keystone endpoints on your active controller:

[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |
| b0f5b7887cd346b3aec747e5b9fafcd3 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.20:35357/v3                |
| c1380d643f734cc1b585048b2e7a7d47 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.20:5000/v3                 |

Since we want to move the keystone endpoints to the VIP, create new endpoints with the VIP URL for the public, admin, and internal interfaces:

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity public http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 08a26ace08884b85a0ff869ddb20bea3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity admin http://192.168.122.30:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ef210afef1da4558abdc00cc13b75185 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:35357/v3   |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity internal http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5205be865e2a4cb9b4ab2119b93c7461 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

Last, update the auth_uri, auth_url, and identity_uri parameters in each of the OpenStack services to point to the new IP address. The following configuration files will need to be edited (a sample edit follows the list):


/etc/ceilometer/ceilometer.conf
/etc/cinder/api-paste.ini
/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf
/etc/neutron/neutron.conf
/etc/neutron/api-paste.ini
/etc/nova/nova.conf
/etc/swift/proxy-server.conf
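As a sketch of one such edit, the keystone_authtoken values in nova.conf could be pointed at the VIP with openstack-config, which ships with the openstack-utils package installed in the next step (exact section and option names vary between services and releases, so treat this as illustrative):

[root@controller1 ~(keystone_admin)]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://192.168.122.30:35357
[root@controller1 ~(keystone_admin)]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://192.168.122.30:5000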

Next, install openstack-utils so that we can restart all of the OpenStack services at once rather than restarting every service manually:

[root@controller1 ~(keystone_admin)]# yum -y install openstack-utils

After editing each of the files, restart the OpenStack services on all of the nodes in the lab deployment using the following command:

[root@controller1 ~(keystone_admin)]# openstack-service restart

Next, update your keystonerc_admin file so that OS_AUTH_URL points to the VIP, i.e. http://192.168.122.30:5000/v3, as shown below:

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.30:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

Now re-source the updated keystonerc_admin file

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Validate that OS_AUTH_URL now points to the new VIP:

[root@controller1 ~(keystone_admin)]# echo $OS_AUTH_URL
http://192.168.122.30:5000/v3

Once the OpenStack services are restarted, delete the old endpoints for the keystone service:

[root@controller1 ~(keystone_admin)]# openstack endpoint delete b0f5b7887cd346b3aec747e5b9fafcd3
[root@controller1 ~(keystone_admin)]# openstack endpoint delete c1380d643f734cc1b585048b2e7a7d47

 

NOTE:

You may get the error below while attempting to delete the old endpoints. This is most likely because the keystone database has not yet refreshed, so perform another round of "openstack-service restart" and then re-attempt to delete the endpoint.

[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6
Failed to delete endpoint with ID '3ded2a2faffe4fd485f6c3c58b1990d6': More than one endpoint exists with the name '3ded2a2faffe4fd485f6c3c58b1990d6'.
1 of 1 endpoints failed to delete.

[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep 3ded2a2faffe4fd485f6c3c58b1990d6
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |

[root@controller1 ~(keystone_admin)]# openstack-service restart

[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6

Repeat the same set of steps on controller2.


 

After deleting the old endpoints and creating the new ones, below is the updated list of keystone endpoints as seen from controller2:

[root@controller2 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |

The OpenStack services will now be using the Keystone API endpoint provided by the VIP and the service will be highly available.

 

Perform a Cluster Failover

Since our ultimate goal is high availability, we should test failover of our new resource.

Before performing a failover, let us make sure our cluster is up and running properly:

[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 14:54:45 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

As we can see, both controllers are online, so let us stop the cluster on the second controller:

[root@controller2 ~(keystone_admin)]# pcs cluster stop controller2
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...

Now let us try to check the pacemaker status from controller2

[root@controller2 ~(keystone_admin)]# pcs status
Error: cluster is not currently running on this node

Since the cluster service is not running on controller2, we cannot check the status there. So let us get the status from controller1:


[root@controller1 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:21:32 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 ]
OFFLINE: [ controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

As expected, it shows controller2 as offline. Now let us check whether the keystone endpoints are still readable:

[root@controller2 ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                           |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| 06473a06f4a04edc94314a97b29d5395 | RegionOne | cinderv3     | volumev3     | True    | internal  | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 07ad2939b59b4f4892d6a470a25daaf9 | RegionOne | aodh         | alarming     | True    | public    | http://192.168.122.20:8042                    |
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 0856cd4b276f490ca48c772af2be49a3 | RegionOne | gnocchi      | metric       | True    | internal  | http://192.168.122.20:8041                    |
| 08ff114d526e4917b5849c0080cfa8f2 | RegionOne | aodh         | alarming     | True    | admin     | http://192.168.122.20:8042                    |
| 1e6cf514c885436fb14ffec0d55286c6 | RegionOne | aodh         | alarming     | True    | internal  | http://192.168.122.20:8042                    |
| 20178fdd0a064b5fa91b869ab492d2d1 | RegionOne | cinderv2     | volumev2     | True    | internal  | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3524908122a44d7f855fd09dd2859d4e | RegionOne | nova         | compute      | True    | public    | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| 3a896bde051f4ae4bfa3694a1eb05321 | RegionOne | cinderv2     | volumev2     | True    | admin     | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3ef1f30aab8646bc96c274a116120e66 | RegionOne | nova         | compute      | True    | admin     | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 42a690ef05aa42adbf9ac21056a9d4f3 | RegionOne | nova         | compute      | True    | internal  | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 45fea850b0b34f7ca2443da17e82ca13 | RegionOne | glance       | image        | True    | admin     | http://192.168.122.20:9292                    |
| 46cbd1e0a79545dfac83eeb429e24a6c | RegionOne | cinderv2     | volumev2     | True    | public    | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 49f82b77105e4614b7cf57fe1785bdc3 | RegionOne | cinder       | volume       | True    | internal  | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 4aced9a3c17741608b2491a8a8fb7503 | RegionOne | cinder       | volume       | True    | public    | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 63eeaa5246f54c289881ade0686dc9bb | RegionOne | ceilometer   | metering     | True    | admin     | http://192.168.122.20:8777                    |
| 6e2fd583487846e6aab7cac4c001064c | RegionOne | gnocchi      | metric       | True    | public    | http://192.168.122.20:8041                    |
| 79f2fcdff7d740549846a9328f8aa993 | RegionOne | cinderv3     | volumev3     | True    | public    | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 9730a44676b042e1a9f087137ea52d04 | RegionOne | glance       | image        | True    | public    | http://192.168.122.20:9292                    |
| a028329f053841dfb115e93c7740d65c | RegionOne | neutron      | network      | True    | internal  | http://192.168.122.20:9696                    |
| acc7ff6d8f1941318ab4f456cac5e316 | RegionOne | placement    | placement    | True    | public    | http://192.168.122.20:8778/placement          |
| afecd931e6dc42e8aa1abdba44fec622 | RegionOne | glance       | image        | True    | internal  | http://192.168.122.20:9292                    |
| c08c1cfb0f524944abba81c42e606678 | RegionOne | placement    | placement    | True    | admin     | http://192.168.122.20:8778/placement          |
| c0c0c4e8265e4592942bcfa409068721 | RegionOne | placement    | placement    | True    | internal  | http://192.168.122.20:8778/placement          |
| d9f34d36bd2541b98caa0d6ab74ba336 | RegionOne | cinder       | volume       | True    | admin     | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| e051cee0d06e45d48498b0af24eb08b5 | RegionOne | ceilometer   | metering     | True    | public    | http://192.168.122.20:8777                    |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |
| ea6f1493aa134b6f9822eca447dfd1df | RegionOne | neutron      | network      | True    | admin     | http://192.168.122.20:9696                    |
| ed97856952bb4a3f953ff467d61e9c6a | RegionOne | gnocchi      | metric       | True    | admin     | http://192.168.122.20:8041                    |
| f989d76263364f07becb638fdb5fea6c | RegionOne | neutron      | network      | True    | public    | http://192.168.122.20:9696                    |
| fe32d323287c4a0cb221faafb35141f8 | RegionOne | ceilometer   | metering     | True    | internal  | http://192.168.122.20:8777                    |
| fef852af4f0d4f0cacd4620e5d5245c2 | RegionOne | cinderv3     | volumev3     | True    | admin     | http://192.168.122.20:8776/v3/%(tenant_id)s   |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+

Yes, we are still able to read the endpoint list for keystone, so everything looks fine.
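To see which node is actually holding the VIP during the test, you can look for the address on the surviving controller (an extra check beyond the original steps):

[root@controller1 ~(keystone_admin)]# ip addr | grep 192.168.122.30

Since the resources were already running on controller1 and we stopped controller2, the VIP stays where it was; stopping controller1 instead would make Pacemaker move VirtualIP and HAProxy over to controller2.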

 

Let us start the cluster again on controller2:

[root@controller2 ~(keystone_admin)]# pcs cluster start
Starting Cluster...

And check the status

[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:23:17 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

So everything is back to green, and we have successfully configured HAProxy in OpenStack.


 

Lastly, I hope the steps in this article to configure HAProxy in OpenStack (high availability between controllers) were helpful. Let me know your suggestions and feedback using the comment section.

 


How to configure Openstack High Availability with corosync & pacemaker




This is a two-part article. Here I will share the steps to configure OpenStack High Availability (HA) between two controllers; in the second part I will share the steps to configure HAProxy and move the keystone service endpoints to the load balancer. If you bring up a controller and compute node using a TripleO configuration, then the controllers are configured in a pacemaker cluster by default. But if you are bringing up your OpenStack setup manually, using packstack or devstack or by manually creating all the databases and services, then you will have to configure the cluster between the controllers yourself to achieve OpenStack High Availability (HA).


 

Configure OpenStack High Availability (HA)

For the sake of this article, I brought up two controller nodes with packstack on two separate CentOS 7 virtual machines. After packstack completes successfully, you will find a keystonerc_admin file in the root user's home directory.

 

Installing the Pacemaker resource manager

Since we will configure OpenStack High Availability using pacemaker and corosync, we first need to install all the rpms required for the cluster setup. We will use Pacemaker to manage the VIPs that HAProxy uses to make the web services highly available.

So install the cluster packages (pcs pulls in pacemaker and corosync as dependencies) on both controller nodes:


[root@controller2 ~]# yum install -y pcs fence-agents-all
[root@controller1 ~]# yum install -y pcs fence-agents-all

Verify that the software installed correctly by running the following command:

[root@controller1 ~]# rpm -q pcs
pcs-0.9.162-5.el7.centos.2.x86_64

[root@controller2 ~]# rpm -q pcs
pcs-0.9.162-5.el7.centos.2.x86_64

Next, add rules to the firewall to allow cluster traffic:

[root@controller1 ~]# firewall-cmd --permanent --add-service=high-availability
success

[root@controller1 ~]# firewall-cmd --reload
success

[root@controller2 ~]# firewall-cmd --permanent --add-service=high-availability
success

[root@controller2 ~]# firewall-cmd --reload
success

NOTE:

If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports: TCP ports 2224, 3121, and 21064, and UDP port 5405.
If you run into any problems during testing, you might want to disable the firewall and SELinux entirely until you have everything working. This may create significant security issues and should not be performed on machines that will be exposed to the outside world, but may be appropriate during development and testing on a protected host.
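For an iptables-based setup, a minimal sketch of the equivalent rules would look like the following (adjust the chains and rule persistence to your own environment):

# Allow pcsd (2224), pacemaker_remote (3121), DLM (21064) and corosync (5405) traffic
iptables -A INPUT -p tcp -m multiport --dports 2224,3121,21064 -j ACCEPT
iptables -A INPUT -p udp --dport 5405 -j ACCEPT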

The installed packages will create a hacluster user with a disabled password. While this is fine for running pcs commands locally, the account needs a login password in order to perform such tasks as syncing the corosync configuration, or starting and stopping the cluster on other nodes.

Set the password for the Pacemaker cluster on each controller node using the following command:

[root@controller1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@controller2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Start the Pacemaker cluster manager on each node:


[root@controller1 ~]# systemctl start pcsd.service
[root@controller1 ~]# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

[root@controller2 ~]# systemctl start pcsd.service
[root@controller2 ~]# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

 

Configure Corosync

To configure OpenStack High Availability, we need to configure corosync on both nodes. Use pcs cluster auth to authenticate as the hacluster user:

[root@controller1 ~]# pcs cluster auth controller1 controller2
Username: hacluster
Password:
controller2: Authorized
controller1: Authorized

[root@controller2 ~]# pcs cluster auth controller1 controller2
Username: hacluster
Password:
controller2: Authorized
controller1: Authorized

NOTE:

If you face any issues at this step, check your firewalld/iptables or selinux policy

Finally, run the following commands on the first node to create the cluster and start it. Here our cluster name will be openstack

[root@controller1 ~]# pcs cluster setup --start --name openstack controller1 controller2
Destroying cluster on nodes: controller1, controller2...
controller1: Stopping Cluster (pacemaker)...
controller2: Stopping Cluster (pacemaker)...
controller1: Successfully destroyed cluster
controller2: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'controller1', 'controller2'
controller1: successful distribution of the file 'pacemaker_remote authkey'
controller2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
controller1: Succeeded
controller2: Succeeded

Starting cluster on nodes: controller1, controller2...
controller1: Starting Cluster...
controller2: Starting Cluster...

Synchronizing pcsd certificates on nodes controller1, controller2...
controller2: Success
controller1: Success
Restarting pcsd on the nodes in order to reload the certificates...
controller2: Success
controller1: Success

Enable the pacemaker and corosync services on both controllers so that they start automatically on boot:

[root@controller1 ~]# systemctl enable pacemaker
Created symlink from /etc/systemd/system/multi-user.target.wants/pacemaker.service to /usr/lib/systemd/system/pacemaker.service.

[root@controller1 ~]# systemctl enable corosync
Created symlink from /etc/systemd/system/multi-user.target.wants/corosync.service to /usr/lib/systemd/system/corosync.service.

[root@controller2 ~]# systemctl enable corosync
Created symlink from /etc/systemd/system/multi-user.target.wants/corosync.service to /usr/lib/systemd/system/corosync.service.

[root@controller2 ~]# systemctl enable pacemaker
Created symlink from /etc/systemd/system/multi-user.target.wants/pacemaker.service to /usr/lib/systemd/system/pacemaker.service.

 

Validate cluster using pacemaker

Verify that the cluster started successfully using the following command on both the nodes:

[root@controller1 ~]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 11:51:13 2018
Last change: Tue Oct 16 11:50:51 2018 by root via cibadmin on controller1

2 nodes configured
0 resources configured

Online: [ controller1 controller2 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

[root@controller2 ~]# pcs status
Cluster name: openstack
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Mon Oct 15 17:04:29 2018
Last change: Mon Oct 15 16:49:09 2018 by hacluster via crmd on controller2

2 nodes configured
0 resources configured

Online: [ controller1 controller2 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

How to start the Cluster

Now that corosync is configured, it is time to start the cluster. The command below will start corosync and pacemaker on both nodes in the cluster. If you are issuing the start command from a different node than the one you ran the pcs cluster auth command on earlier, you must authenticate on the current node you are logged into before you will be allowed to start the cluster.

[root@controller1 ~]# pcs cluster start --all

An alternative to using the pcs cluster start --all command is to issue either of the below command sequences on each node in the cluster separately:


[root@controller1 ~]# pcs cluster start
Starting Cluster...

or

[root@controller1 ~]# systemctl start corosync.service
[root@controller1 ~]# systemctl start pacemaker.service

 

Verify Corosync Installation

First, use corosync-cfgtool to check whether cluster communication is happy:

[root@controller2 ~]#  corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
        id      = 192.168.122.22
        status  = ring 0 active with no faults

So all looks normal with our fixed IP address (not a 127.0.0.x loopback address) listed as the id, and no faults for the status.
If you see something different, you might want to start by checking the node’s network, firewall and SELinux configurations.

Next, check the membership and quorum APIs:

[root@controller2 ~]# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.122.20)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.122.22)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined

Check the status of the corosync service:


[root@controller2 ~]# pcs status corosync
Membership information
----------------------
    Nodeid      Votes Name
         1          1 controller1
         2          1 controller2 (local)

You should see both nodes have joined the cluster.

Repeat the same steps on both controllers to validate the corosync services.

 

Verify the cluster configuration

Before we make any changes, it’s a good idea to check the validity of the configuration.

[root@controller1 ~]# crm_verify -L -V
   error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

As you can see, the tool has found some errors.

In order to guarantee the safety of your data, fencing (also called STONITH) is enabled by default. However, the cluster also knows when no STONITH configuration has been supplied and reports this as a problem (since the cluster will not be able to make progress if a situation requiring node fencing arises).


We will disable this feature for now and configure it later. To disable STONITH, set the stonith-enabled cluster option to false on both the controller nodes:

[root@controller1 ~]# pcs property set stonith-enabled=false
[root@controller1 ~]# crm_verify -L

[root@controller2 ~]# pcs property set stonith-enabled=false
[root@controller2 ~]# crm_verify -L

With the new cluster option set, the configuration is now valid.

WARNING:

The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will refuse to support clusters that have STONITH disabled.
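You can confirm the current value of the property at any time, and switch it back once proper fencing devices have been configured:

[root@controller1 ~]# pcs property show stonith-enabled
[root@controller1 ~]# pcs property set stonith-enabled=true   # only after STONITH devices are defined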

 

I will continue this article, i.e. configuring OpenStack High Availability, in a separate part. In the next part I will share the steps to configure HAProxy and manage it as a cluster resource, along with the detailed steps to move the OpenStack API endpoints behind the cluster load balancer.

 

How to Configure Tripleo Undercloud to deploy Overcloud in OpenStack

I will assume that your undercloud installation is complete, so here we will continue with the steps to configure the director to deploy the overcloud in OpenStack using Red Hat OpenStack Platform Director 10 and virt-manager.

 


In our last article we covered the areas below:

  • First of all, bring up a physical host
  • Install a new virtual machine for undercloud-director
  • Set hostname for the director
  • Configure repo or subscribe to RHN
  • Install python-tripleoclient
  • Configure undercloud.conf
  • Install Undercloud

 

Now in this article we will continue with the remaining steps to configure the undercloud director node to deploy the overcloud in OpenStack:

  • Obtain and upload images for overcloud introspection and deployment
  • Create virtual machines for overcloud nodes (compute and controller)
  • Configure Virtual Bare Metal Controller
  • Importing and registering the overcloud nodes
  • Introspecting the overcloud nodes
  • Tagging overcloud nodes to profiles
  • Lastly start deploying Overcloud Nodes

 

Deploy Overcloud in Openstack

The director requires several disk images for provisioning overcloud nodes. This includes:


  • An introspection kernel and ramdisk ⇒ Used for bare metal system introspection over PXE boot.
  • A deployment kernel and ramdisk ⇒ Used for system provisioning and deployment.
  • An overcloud kernel, ramdisk, and full image ⇒ A base overcloud system that is written to the node’s hard disk.

 

Obtaining Images for Overcloud

[stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa -y

[stack@director ~]$ cp /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar ~/images/

[stack@director ~]$ cd images/

Extract the archives to the images directory on the stack user’s home (/home/stack/images):

[stack@director images]$ tar -xf overcloud-full-latest-10.0.tar
[stack@director images]$ tar -xf ironic-python-agent-latest-10.0.tar
[stack@director images]$ ls -l
total 3848560
-rw-r--r--. 1 stack stack  425703356 Aug 22 02:15 ironic-python-agent.initramfs
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:15 ironic-python-agent.kernel
-rw-r--r--. 1 stack stack  432107520 Oct  8 10:14 ironic-python-agent-latest-10.0.tar
-rw-r--r--. 1 stack stack   61388282 Aug 22 02:29 overcloud-full.initrd
-rw-r--r--. 1 stack stack 1537239040 Oct  8 10:13 overcloud-full-latest-10.0.tar
-rw-r--r--. 1 stack stack 1471676416 Oct  8 10:18 overcloud-full.qcow2
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:29 overcloud-full.vmlinuz

 

Change root password for overcloud nodes

You need virt-customize to change the root password.

[stack@director images]$ sudo yum install -y libguestfs-tools

Execute the command below, replacing the second "password" (the value after the colon) with the password you wish to assign to "root":

[stack@director images]$ virt-customize -a overcloud-full.qcow2 --root-password password:password
[   0.0] Examining the guest ...
[  40.9] Setting a random seed
[  40.9] Setting the machine ID in /etc/machine-id
[  40.9] Setting passwords
[  63.0] Finishing off

Import these images into the director:

[stack@director images]$ openstack overcloud image upload --image-path ~/images/
Image "overcloud-full-vmlinuz" was uploaded.
+--------------------------------------+------------------------+-------------+---------+--------+
|                  ID                  |          Name          | Disk Format |   Size  | Status |
+--------------------------------------+------------------------+-------------+---------+--------+
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz |     aki     | 6398256 | active |
+--------------------------------------+------------------------+-------------+---------+--------+
Image "overcloud-full-initrd" was uploaded.
+--------------------------------------+-----------------------+-------------+----------+--------+
|                  ID                  |          Name         | Disk Format |   Size   | Status |
+--------------------------------------+-----------------------+-------------+----------+--------+
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd |     ari     | 61388282 | active |
+--------------------------------------+-----------------------+-------------+----------+--------+
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
|                  ID                  |      Name      | Disk Format |    Size    | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full |    qcow2    | 1471676416 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------+-------------+---------+--------+
|                  ID                  |       Name       | Disk Format |   Size  | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel |     aki     | 6398256 | active |
+--------------------------------------+------------------+-------------+---------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+-------------------+-------------+-----------+--------+
|                  ID                  |        Name       | Disk Format |    Size   | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk |     ari     | 425703356 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+

View the list of the images in the CLI:


[stack@director images]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk      | active |
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel       | active |
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full         | active |
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd  | active |
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+

This list will not show the introspection PXE images. The director copies these files to /httpboot.

[stack@director images]$ ls -l /httpboot/
total 421988
-rwxr-xr-x. 1 root             root               6398256 Oct  8 10:19 agent.kernel
-rw-r--r--. 1 root             root             425703356 Oct  8 10:19 agent.ramdisk
-rw-r--r--. 1 ironic           ironic                 759 Oct  8 10:41 boot.ipxe
-rw-r--r--. 1 ironic-inspector ironic-inspector       473 Oct  8 09:43 inspector.ipxe
drwxr-xr-x. 2 ironic           ironic                   6 Oct  8 10:51 pxelinux.cfg

 

Setting a nameserver on the undercloud’s neutron subnet

Overcloud nodes require a nameserver so that they can resolve hostnames through DNS. For a standard overcloud without network isolation, the nameserver is defined using the undercloud’s neutron subnet.

[stack@director images]$ neutron subnet-list
+--------------------------------------+------+------------------+--------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                       |
+--------------------------------------+------+------------------+--------------------------------------------------------+
| 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.150"} |
+--------------------------------------+------+------------------+--------------------------------------------------------+

Update the nameserver on your subnet:

[stack@director images]$ neutron subnet-update 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 --dns-nameserver 192.168.122.1
Updated subnet: 7b7f251d-edfc-46ea-8d56-f9f2397e01d1

Validate the changes

[stack@director images]$ neutron subnet-show 7b7f251d-edfc-46ea-8d56-f9f2397e01d1
+-------------------+-------------------------------------------------------------------+
| Field             | Value                                                             |
+-------------------+-------------------------------------------------------------------+
| allocation_pools  | {"start": "192.168.126.100", "end": "192.168.126.150"}            |
| cidr              | 192.168.126.0/24                                                  |
| created_at        | 2018-10-08T04:20:48Z                                              |
| description       |                                                                   |
| dns_nameservers   | 192.168.122.1                                                     |
| enable_dhcp       | True                                                              |
| gateway_ip        | 192.168.126.1                                                     |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "192.168.126.1"} |
| id                | 7b7f251d-edfc-46ea-8d56-f9f2397e01d1                              |
| ip_version        | 4                                                                 |
| ipv6_address_mode |                                                                   |
| ipv6_ra_mode      |                                                                   |
| name              |                                                                   |
| network_id        | 7047a1c6-86ac-4237-8fe5-b0bb26538752                              |
| project_id        | 681d63dc1f1d4c5892941c68e6d07c54                                  |
| revision_number   | 3                                                                 |
| service_types     |                                                                   |
| subnetpool_id     |                                                                   |
| tenant_id         | 681d63dc1f1d4c5892941c68e6d07c54                                  |
| updated_at        | 2018-10-08T04:50:09Z                                              |
+-------------------+-------------------------------------------------------------------+

 

Create virtual machines for overcloud

My controller node configuration:

OS: RHEL 7.4
VM Name: controller0
vCPUs: 2
Memory: 8192 MB
Disk: 60 GB
NIC 1 (Provisioning Network) MAC: 52:54:00:36:65:a6
NIC 2 (External Network) MAC: 52:54:00:c4:34:ca

 

My compute node configuration:


OS: RHEL 7.4
VM Name: compute1
vCPUs: 2
Memory: 8192 MB
Disk: 60 GB
NIC 1 (Provisioning Network) MAC: 52:54:00:13:b8:aa
NIC 2 (External Network) MAC: 52:54:00:d1:93:28

 

For the overcloud we need one controller and one compute node. Create one qcow2 disk each for the controller and the compute node on your physical host machine.

IMPORTANT NOTE:

You can also create virtual machines using virt-manager.

[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata controller0.qcow2 60G
Formatting 'controller0.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata compute1.qcow2 60G
Formatting 'compute1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 root root 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 root root 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2

Change the ownership of the qcow2 disks to "qemu:qemu":

[root@openstack images]# chown qemu:qemu *
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2

Next, install "virt-install" to be able to create virtual machines from the CLI:

[root@openstack images]# yum -y install virt-install

Here I am creating XML files for two virtual machines, namely controller0 and compute1:

[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/controller0.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name controller0 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/controller0.xml

[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/compute1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name compute1 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/compute1.xml

Validate the files we created above


[root@openstack images]# ls -l /tmp/*.xml
-rw-r--r--. 1 root root 1850 Oct  8 10:45 /tmp/compute1.xml
-rw-r--r--. 1 root root 1856 Oct  8 10:45 /tmp/controller0.xml
-rw-r--r--. 1 root root  207 Oct  7 15:52 /tmp/external.xml
-rw-r--r--. 1 root root  117 Oct  6 19:45 /tmp/provisioning.xml

Now it is time to define those virtual machines:

[root@openstack images]# virsh define --file /tmp/controller0.xml
Domain controller0 defined from /tmp/controller0.xml

[root@openstack images]# virsh define --file /tmp/compute1.xml
Domain compute1 defined from /tmp/compute1.xml

Validate the virtual machines currently defined on your host machine. Our undercloud director is already running as the director-new VM.

[root@openstack images]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 6     director-new                   running
 -     compute1                       shut off
 -     controller0                    shut off

 

Configure Virtual Bare Metal Controller (VBMC)

The director can use virtual machines as nodes on a KVM host and controls their power management through emulated IPMI devices. Since our lab runs on KVM, we will use Virtual BMC (VBMC) to register and power-manage the nodes.

Since the virtual machines in our setup do not have an iLO or similar utility for power management, VBMC provides the IPMI endpoint instead. You can get the package from the OpenStack git repository.

[root@openstack ~]# wget https://git.openstack.org/openstack/virtualbmc

Next install the VBMC package


[root@openstack ~]# yum install -y python-virtualbmc

Start adding your virtual machines to the vbmc domain list

NOTE:

Use a different port for each virtual machine. Port numbers lower than 1025 require root privileges in the system.
[root@openstack images]# vbmc add controller0 --port 6320 --username admin --password redhat
[root@openstack images]# vbmc add compute1 --port 6321 --username admin --password redhat

To list the available domains

[root@openstack images]# vbmc list
+-------------+--------+---------+------+
| Domain name | Status | Address | Port |
+-------------+--------+---------+------+
|   compute1  |  down  |    ::   | 6321 |
| controller0 |  down  |    ::   | 6320 |
+-------------+--------+---------+------+

Next start all the virtual BMCs:

[root@openstack images]# vbmc start compute1
[root@openstack images]# vbmc start controller0

Check the status again

[root@openstack images]# vbmc list
+-------------+---------+---------+------+
| Domain name |  Status | Address | Port |
+-------------+---------+---------+------+
|   compute1  | running |    ::   | 6321 |
| controller0 | running |    ::   | 6320 |
+-------------+---------+---------+------+

Now all our domains are in running state.

NOTE:

With VBMC we will use pxe_ipmitool as the driver for executing all the IPMI commands, so make sure this driver is loaded and available on your undercloud.
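You can confirm the driver is available on the undercloud with the commands below (a quick hedged check, assuming the stackrc credentials created during the undercloud installation):

[stack@director ~]$ source ~/stackrc
[stack@director ~]$ openstack baremetal driver list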

Use the ipmitool command-line utility to test the emulated IPMI power management, with this syntax:


[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6320 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off

[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6321 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off

 

Registering nodes for the overcloud

The director requires a node definition template, which you create manually. This file (instack-twonodes.json) uses JSON format and contains the hardware and power management details for your nodes.

[stack@director ~]$ cat instack-twonodes.json
{
    "nodes":[
        {
            "mac":[
                "52:54:00:36:65:a6"
            ],
            "name":"controller0",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6320"
        },
        {
            "mac":[
                "52:54:00:13:b8:aa"
            ],
            "name":"compute1",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6321"
        }
    ]
}
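Before importing the file, it is worth confirming that the JSON is syntactically valid (a quick hedged check):

[stack@director ~]$ python -m json.tool instack-twonodes.json > /dev/null && echo "JSON OK"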

The next step towards deploying the overcloud is to register the overcloud nodes, which for us are a single controller and a single compute node. The Workflow service (Mistral) manages this task set, including the ability to schedule and monitor multiple tasks and actions.

[stack@director ~]$ openstack baremetal import --json instack-twonodes.json
Started Mistral Workflow. Execution ID: 6ad7c642-275e-4293-988a-b84c28fd99c1
Successfully registered node UUID 633f53f7-7b3c-454a-8d39-bd9c4371d248
Successfully registered node UUID f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
Started Mistral Workflow. Execution ID: 5989359f-3cad-43cb-9ea3-e86ebee87964
Successfully set all nodes to available.

Check the available ironic node list after the import

[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+

The following command assigns the bm_deploy_kernel and bm_deploy_ramdisk images to each node:

[stack@director ~]$ openstack baremetal configure boot
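If you want to verify that the deploy images referenced above actually exist in Glance on the undercloud, you can list them (a hedged check; with the standard director image upload they are typically named bm-deploy-kernel and bm-deploy-ramdisk):

[stack@director ~]$ openstack image list | grep deploy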

Set the provisioning state to manageable using this command

[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack baremetal node manage $node ; done

The nodes are now registered and configured in the director. View a list of these nodes in the CLI:

[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | manageable         | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | manageable         | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+

In the following output, verify that deploy_kernel and deploy_ramdisk are assigned to the new nodes.


[stack@director ~]$ for i in controller0 compute1 ; do ironic node-show $i| grep -1 deploy; done
| driver                 | pxe_ipmitool                                                          |
| driver_info            | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel': |
|                        | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':             |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                 |
|                        | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                 |
| driver                 | pxe_ipmitool                                                          |
| driver_info            | {u'ipmi_port': u'6321', u'ipmi_username': u'admin', u'deploy_kernel': |
|                        | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':             |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                 |
|                        | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                 |

 

Inspecting the hardware of nodes

The director can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. The director then stores this introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses hardware information for various purposes such as profile tagging, benchmarking, and manual root disk assignment.

IMPORTANT NOTE:

Since we are using VirtualBMC, we cannot use the openstack overcloud node introspect --all-manageable --provide command, because power control for our virtual machines is addressed by port rather than by IP address, so bulk introspection is not possible here. Instead, introspect the nodes one at a time:
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack overcloud node introspect $node --provide; done
Started Mistral Workflow. Execution ID: 123c4290-82ba-4766-8fdc-65878eac03ac
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: 5b6009a1-855a-492b-9196-9c0291913d2f
Successfully set all nodes to available.
Started Mistral Workflow. Execution ID: 7f9a5d65-c94a-496d-afe2-e649a85d5912
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: ffb4a0c5-3090-4d88-b407-2a8e06035485
Successfully set all nodes to available.

Monitor the progress of the introspection using the following command in a separate terminal window:

[stack@director ~]$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f

Check the introspection status

[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do echo -e "\n"$node;openstack baremetal introspection status $node; done

633f53f7-7b3c-454a-8d39-bd9c4371d248
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+

f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+

 

Collect the introspection data for controller

You can check the introspection data that was collected for individual nodes. In this example I will show the steps to retrieve this information for the controller node.

[stack@director ~]$ openstack baremetal node show controller0
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                                                                     |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| clean_step             | {}                                                                                                                                        |
| console_enabled        | False                                                                                                                                     |
| created_at             | 2018-10-08T04:55:22+00:00                                                                                                                 |
| driver                 | pxe_ipmitool                                                                                                                              |
| driver_info            | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel': u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':           |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                                |
| driver_internal_info   | {}                                                                                                                                        |
| extra                  | {u'hardware_swift_object': u'extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248'}                                                        |
| inspection_finished_at | None                                                                                                                                      |
| inspection_started_at  | None                                                                                                                                      |
| instance_info          | {}                                                                                                                                        |
| instance_uuid          | None                                                                                                                                      |
| last_error             | None                                                                                                                                      |
| maintenance            | False                                                                                                                                     |
| maintenance_reason     | None                                                                                                                                      |
| name                   | controller0                                                                                                                               |
| ports                  | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports', u'rel': u'self'}, {u'href':                 |
|                        | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports', u'rel': u'bookmark'}]                                     |
| power_state            | power off                                                                                                                                 |
| properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'59', u'cpus': u'2', u'capabilities':                                       |
|                        | u'cpu_vt:true,cpu_aes:true,cpu_hugepages:true,boot_option:local'}                                                                         |
| provision_state        | available                                                                                                                                 |
| provision_updated_at   | 2018-10-08T05:00:44+00:00                                                                                                                 |
| raid_config            | {}                                                                                                                                        |
| reservation            | None                                                                                                                                      |
| states                 | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states', u'rel': u'self'}, {u'href':                |
|                        | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states', u'rel': u'bookmark'}]                                    |
| target_power_state     | None                                                                                                                                      |
| target_provision_state | None                                                                                                                                      |
| target_raid_config     | {}                                                                                                                                        |
| updated_at             | 2018-10-08T05:00:51+00:00                                                                                                                 |
| uuid                   | 633f53f7-7b3c-454a-8d39-bd9c4371d248                                                                                                      |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+

Retrieve the ironic user password from the undercloud-passwords.conf file:

[stack@director ~]$ grep ironic undercloud-passwords.conf
undercloud_ironic_password=f670269d38916530ac00e5f1af6bf8e39619a9f5

Here, use the ironic password as OS_PASSWORD and, as the object name, use the hardware_swift_object value (extra_hardware-<node UUID>) from the extra field shown above.

[stack@director ~]$ OS_TENANT_NAME=service OS_USERNAME=ironic OS_PASSWORD=f670269d38916530ac00e5f1af6bf8e39619a9f5 openstack object save ironic-inspector extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248

Check that the object has been downloaded:


[stack@director ~]$ ls -l
total 36
-rw-rw-r--. 1 stack stack  9013 Oct  8 10:34 extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
drwxrwxr-x. 2 stack stack   245 Oct  8 10:14 images
-rw-rw-r--. 1 stack stack   836 Oct  8 10:25 instack-twonodes.json
-rw-------. 1 stack stack   725 Oct  8 09:51 stackrc
-rw-r--r--. 1 stack stack 11150 Oct  8 09:05 undercloud.conf
-rw-rw-r--. 1 stack stack  1650 Oct  8 09:33 undercloud-passwords.conf

Now you can read the introspection data using the command below:

[stack@director ~]$ jq . < extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
[
  [
    "disk",
    "logical",
    "count",
    "1"
  ],
  [
    "disk",
    "vda",
    "size",
    "64"
  ],
  [
    "disk",
    "vda",
    "vendor",
    "0x1af4"
  ],

  *** output trimmed ***
  [
    "system",
    "kernel",
    "version",
    "3.10.0-862.11.6.el7.x86_64"
  ],
  [
    "system",
    "kernel",
    "arch",
    "x86_64"
  ],
  [
    "system",
    "kernel",
    "cmdline",
    "ipa-inspection-callback-url=http://192.168.126.1:5050/v1/continue ipa-inspection-collectors=default,extra-hardware,numa-topology,logs systemd.journald.forward_to_console=yes BOOTIF=52:54:00:36:65:a6 ipa-debug=1 ipa-inspection-dhcp-all-interfaces=1 ipa-collect-lldp=1 initrd=agent.ramdisk"
  ]
]
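Because the data is a flat array of [category, item, attribute, value] entries, you can also filter it with jq, for example to show only the system facts (a hedged example; the categories available depend on the collectors that ran during introspection):

[stack@director ~]$ jq '.[] | select(.[0] == "system")' < extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248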

 

Tagging nodes to profiles

After registering and inspecting the hardware of each node, you will tag the nodes into specific profiles. These profile tags match your nodes to flavors, and the flavors in turn are assigned to deployment roles. The following output shows the existing flavors and the registered nodes:

[stack@director ~]$ openstack flavor list
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| ID                                   | Name          |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| 06ab97b9-6d7e-4d4d-8d6e-c2ba1e781657 | baremetal     | 4096 |   40 |         0 |     1 | True      |
| 17eec9b0-811d-4ff0-a028-29e7ff748654 | block-storage | 4096 |   40 |         0 |     1 | True      |
| 38cbb6df-4852-49d0-bbed-0bddee5173c8 | compute       | 4096 |   40 |         0 |     1 | True      |
| 88345a7e-f617-4514-9aac-0d794a32ee80 | ceph-storage  | 4096 |   40 |         0 |     1 | True      |
| dce1c321-32bb-4abf-bfd5-08f952529550 | swift-storage | 4096 |   40 |         0 |     1 | True      |
| febf52e2-5707-43b3-8f3a-069a957828fb | control       | 4096 |   40 |         0 |     1 | True      |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+

Adding the profile:control and profile:compute options tags each of the two nodes into its respective profile. These commands also set the boot_option:local parameter, which defines the boot mode for each node.

[stack@director ~]$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 633f53f7-7b3c-454a-8d39-bd9c4371d248
[stack@director ~]$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5

After completing node tagging, check the assigned profiles or possible profiles:

[stack@director ~]$ openstack overcloud profiles list
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name   | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | available       | control         |                   |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | available       | compute         |                   |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+

You can also check the flavors to confirm that their capabilities match the profiles we assigned to the ironic nodes:


[stack@director ~]$ openstack flavor show control -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='control' |
+------------+------------------------------------------------------------------+

[stack@director ~]$ openstack flavor show compute -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='compute' |
+------------+------------------------------------------------------------------+

 

Deploying the Overcloud

Now the final stage, deploying the overcloud in the OpenStack environment, is done by running the openstack overcloud deploy command:

[stack@director ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 5dd005ed-67c8-4cef-8d16-c196fc852051
Plan updated
Deploying templates in the directory /tmp/tripleoclient-LDQ2md/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 23e8f1b0-6e4c-444b-9890-d48fef1a96a6
2018-10-08 17:11:42Z [overcloud]: CREATE_IN_PROGRESS  Stack CREATE started
2018-10-08 17:11:42Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:11:43Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:11:43Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS  Stack CREATE started
2018-10-08 17:11:43Z [overcloud.ServiceNetMap.ServiceNetMapValue]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:11:43Z [overcloud.Networks]: CREATE_IN_PROGRESS  state changed


*** Output Trimmed ***

2018-10-08 17:53:25Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE  state changed
2018-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE  Stack CREATE completed successfully
2018-10-08 17:54:23Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE  state changed
2018-10-08 17:54:23Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  Stack CREATE completed successfully
2018-10-08 17:54:24Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2018-10-08 17:54:24Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

 Stack overcloud CREATE_COMPLETE

Overcloud Endpoint: http://192.168.126.107:5000/v2.0
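While a stack create like the one above is running, progress can also be tracked from a second terminal by listing the Heat resources (a hedged example; overcloud is the default plan and stack name):

[stack@director ~]$ openstack stack resource list overcloud -n 5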

Our overcloud deployment is complete at this stage. Check the stack status:

[stack@director ~]$ openstack stack list
+--------------------------------------+------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+-----------------+----------------------+--------------+
| 952eeb74-0c29-4cdc-913c-5d834c8ad6c5 | overcloud  | CREATE_COMPLETE | 2018-10-08T17:11:41Z | None         |
+--------------------------------------+------------+-----------------+----------------------+--------------+

To get the list of overcloud nodes

[stack@director ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| 9a8307e3-7e53-44f8-a77b-7e0115ac75aa | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.112 |
| 3667b67f-802f-4c13-ba86-150576cd2b16 | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.113 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+

You can get your Horizon dashboard credentials from the overcloudrc file available in the home directory of the stack user (/home/stack):

[stack@director ~]$ cat overcloudrc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export NOVA_VERSION=1.1
export OS_PROJECT_NAME=admin
export OS_PASSWORD=tZQDQsbGG96t4KcXYfAM22BzN
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export no_proxy=,192.168.126.107,192.168.126.107
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.126.107:5000/v2.0
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
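To work with the overcloud from the CLI, source this file and run any OpenStack command against the overcloud endpoint (a minimal hedged check using the admin credentials above):

[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ openstack service list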

 

You can now log in to your Horizon dashboard at 192.168.126.107 using the OS_USERNAME and OS_PASSWORD values from the overcloudrc file.


 

Lastly, I hope the steps in this article to configure the TripleO undercloud and deploy the overcloud in OpenStack were helpful. Let me know your suggestions and feedback in the comment section.

 

How to Install TripleO Undercloud (Openstack) on RHEL 7





I have written another article on installing OpenStack using Packstack; here I will give a step-by-step guide to install the TripleO undercloud using Red Hat OpenStack Platform 10 on virtual machines created with virt-manager (RHEL).


 

The Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for “OpenStack-On-OpenStack”.

So the Red Hat OpenStack Platform director uses two main concepts:

  • Undercloud
  • Overcloud

 

Before we start with the steps to install the TripleO undercloud, let us understand some basic terminology.


 

Undercloud

The undercloud is the main director node. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud).

The primary objectives of the undercloud are:

  • Discover the bare-metal servers on which the OpenStack Platform will be deployed
  • Serve as the deployment manager for the software to be deployed on these nodes
  • Define complex network topology and configuration for the deployment
  • Rollout of software updates and configurations to the deployed nodes
  • Reconfigure an existing undercloud deployed environment
  • Enable high availability support for the openstack nodes

 

Overcloud

  • The overcloud is the resulting Red Hat OpenStack Platform environment created using the undercloud.
  • This includes different nodes roles which you define based on the OpenStack Platform environment you aim to create.

 

So this was a brief overview of OpenStack-On-OpenStack; let us start with the steps to install the TripleO undercloud and deploy the overcloud in OpenStack.

 

My Environment:

I plan to bring up a single controller and compute node as part of my overcloud deployment.

  • Physical host machine for hosting undercloud and overcloud nodes
  • Red Hat OpenStack Platform Director (VM)
  • One Red Hat OpenStack Platform Compute node (VM)
  • One Red Hat OpenStack Platform Controller node (VM)

 

Physical Host Machine Requirements (Minimal)

Below are the minimum requirements recommended by Red Hat for this prototype setup:


  • Dual Core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions
  • A minimum of 16 GB of RAM.
  • At least 40 GB of available disk space on the root disk.
  • A minimum of 2 x 1 Gbps Network Interface Cards
  • Red Hat Enterprise Linux 7.X / CentOS 7.X installed as the host operating system.
  • SELinux is enabled on the host.

 

My Setup Details

Below is my physical host configuration:

OS: CentOS 7.4
Hostname: openstack.example
Bridge IP (nm-bridge1): 10.43.138.12
External Network (virbr0): 192.168.122.0/24 (GW: 192.168.122.1)
Provisioning Network (virbr1): 192.168.126.0/24 (GW: 192.168.126.254)
RAM: 128 GB
Disk: 900 GB
CPU: Dual Core

 

IMPORTANT NOTE:

While installing your physical host, make sure you install GNOME Desktop with all the virtualization-related rpms, or else install them manually later using:

$ yum install libvirt-client libvirt-daemon qemu-kvm libvirt-daemon-driver-qemu libvirt-daemon-kvm virt-install bridge-utils rsync
NOTE:

I would recommend using CentOS as the physical host because you will need VirtualBMC for the power-related activities. On RHEL you would additionally need an active subscription to rhel-7-server-openstack-11-rpms, while we are using OpenStack 10; on CentOS you can download VirtualBMC from the RDO project.

 

Networking Requirements

The undercloud host requires at least two networks:

  • Provisioning network – Provides DHCP and PXE boot functions to help discover bare metal systems for use in the overcloud. Typically, this network must use a native VLAN on a trunked interface so that the director serves PXE boot and DHCP requests.
  • External Network – A separate network for remote connectivity to all nodes. The interface connecting to this network requires a routable IP address, either defined statically, or dynamically through an external DHCP service.

 

Design Flow (Steps for this article)

In a nutshell below is the flow to “Install TripleO Undercloud and deploy Overcloud in Openstack”

  • First of all bring up a physical host
  • Install a new virtual machine for undercloud-director
  • Set hostname for the director
  • Configure repo or subscribe to RHN
  • Install python-tripleoclient
  • Configure undercloud.conf
  • Install Undercloud
  • Obtain and upload images for overcloud introspection and deployment
  • Create virtual machines for overcloud nodes (compute and controller)
  • Configure Virtual Bare Metal Controller
  • Importing and registering the overcloud nodes
  • Introspecting the overcloud nodes
  • Tagging overcloud nodes to profiles
  • Lastly start deploying Overcloud Nodes

 

Install TripleO Undercloud Openstack

On my physical host (openstack) we already have a default network

[root@openstack ~]# virsh net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

We will destroy this network and create the external and provisioning networks.


[root@openstack ~]# virsh net-destroy default

[root@openstack ~]# virsh net-undefine default

[root@openstack ~]# virsh net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------

Next, use a template like the one below to create the external network. Here I am using 192.168.122.1 as the gateway, which is assigned to the physical host.

[root@openstack ~]# cat /tmp/external.xml
<network>
   <name>external</name>
   <forward mode='nat'/>
   <ip address='192.168.122.1' netmask='255.255.255.0'/>
</network>

Now define this network and make it start automatically on boot

[root@openstack ~]# virsh net-define /tmp/external.xml
[root@openstack ~]# virsh net-autostart external
[root@openstack ~]# virsh net-start external

So let us validate the new network

[root@openstack ~]# virsh net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 external             active     yes           yes

 

Similarly, create the provisioning network with 192.168.126.254 as the gateway, using a template like the one below (an isolated network with no forwarding, so the director can serve DHCP and PXE on it):

[root@openstack ~]# cat /tmp/provisioning.xml
<network>
   <name>provisioning</name>
   <ip address='192.168.126.254' netmask='255.255.255.0'/>
</network>

Now define this network and make it start automatically on boot


[root@openstack ~]# virsh net-define /tmp/provisioning.xml
[root@openstack ~]# virsh net-autostart provisioning
[root@openstack ~]# virsh net-start provisioning

Finally validate your new list of network for virtual machines.

[root@openstack ~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 external             active     yes           yes
 provisioning         active     yes           yes
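You can also dump a network definition to confirm the gateway address and bridge that libvirt applied (a quick hedged check):

[root@openstack ~]# virsh net-dumpxml provisioning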

 

Check your network configuration. As you can see, we now have two bridges, virbr0 and virbr1, for the networks we created above.

[root@openstack ~]# ifconfig
eno51: flags=4163  mtu 1500
        ether 9c:dc:71:77:ef:51  txqueuelen 1000  (Ethernet)
        RX packets 100888  bytes 5670187 (5.4 MiB)
        RX errors 0  dropped 208  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno52: flags=4163  mtu 1500
        ether 9c:dc:71:77:ef:59  txqueuelen 1000  (Ethernet)
        RX packets 54461086  bytes 81543828070 (75.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2985822  bytes 438043585 (417.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1  (Local Loopback)
        RX packets 152875  bytes 9356602 (8.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 152875  bytes 9356602 (8.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

nm-bridge1: flags=4163  mtu 1500
        inet 10.43.138.12  netmask 255.255.255.224  broadcast 10.43.138.31
        inet6 fe80::9edc:71ff:fe77:ef59  prefixlen 64  scopeid 0x20
        ether 9c:dc:71:77:ef:59  txqueuelen 1000  (Ethernet)
        RX packets 8015838  bytes 77945540204 (72.5 GiB)
        RX errors 0  dropped 240  overruns 0  frame 0
        TX packets 2725594  bytes 416996466 (397.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4163  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:4e:e8:2c  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 160 (160.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr1: flags=4163  mtu 1500
        inet 192.168.126.254  netmask 255.255.255.0  broadcast 192.168.126.255
        ether 52:54:00:c9:37:63  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163  mtu 1500
        inet6 fe80::fc54:ff:fea1:8128  prefixlen 64  scopeid 0x20
        ether fe:54:00:a1:81:28  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 74  bytes 4788 (4.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163  mtu 1500
        inet6 fe80::fc54:ff:fe33:e8b4  prefixlen 64  scopeid 0x20
        ether fe:54:00:33:e8:b4  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33  bytes 1948 (1.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Similarly, check the network connectivity to your gateways:

[root@openstack ~]# ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.040 ms
^C
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms

[root@openstack ~]# ping 192.168.126.254
PING 192.168.126.254 (192.168.126.254) 56(84) bytes of data.
64 bytes from 192.168.126.254: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 192.168.126.254: icmp_seq=2 ttl=64 time=0.069 ms
^C
--- 192.168.126.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.058/0.063/0.069/0.009 ms

 

Configure OpenStack with KVM-based Nested Virtualization

When using virtualization technologies like KVM, one can take advantage of Nested VMX (i.e. the ability to run KVM on KVM) so that the VMs in cloud (Nova guests) can run relatively faster than with plain QEMU emulation.

Check if the nested KVM Kernel parameter is enabled


[root@openstack ~]# cat /sys/module/kvm_intel/parameters/nested
N

Add the below content in kvm.conf

[root@openstack ~]# vim /etc/modprobe.d/kvm.conf
options kvm_intel nested=Y

Reboot the node and check the nested KVM kernel parameter again.

[root@openstack ~]# cat /sys/module/kvm_intel/parameters/nested
Y
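Alternatively, if rebooting the host is not convenient and no virtual machines are currently running, the kvm_intel module can simply be reloaded (a hedged alternative; the unload fails if the module is in use):

[root@openstack ~]# modprobe -r kvm_intel
[root@openstack ~]# modprobe kvm_intel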

Update the /etc/hosts content on your physical host (openstack). I plan to use 192.168.122.90 for my director node, so I have added that entry here:

[root@openstack ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.90   director.example   director

Disable firewalld on the host openstack machine

[root@openstack ~]# systemctl stop firewalld
[root@openstack ~]# systemctl disable firewalld

 

Create Director Virtual Machine

Here you can manually create a virtual machine for the director node. Below are my specs and node details

OS: RHEL 7.4
Hostname: director.example
vCPUs: 4
Memory: 20480 MB
Disk: 60 GB (format: qcow2)
Public Network (ens3): MAC 52:54:00:a1:81:28, IP 10.43.138.27
Provisioning Network (ens4): MAC 52:54:00:33:e8:b4, IP 192.168.126.1
External Network (ens9): MAC 52:54:00:86:83:c0, IP 192.168.122.90

 

Setting your hostname for the undercloud

The director requires a fully qualified domain name for its installation and configuration process.
This means you may need to set the hostname of your director’s host.


# hostnamectl set-hostname director.example
# hostnamectl set-hostname --transient director.example

The director also requires an entry for the system’s hostname and base name in /etc/hosts.

[stack@director ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.90 director.example director

Below is the network configuration for my director node

[root@director network-scripts]# ifconfig
ens3: flags=4163  mtu 1500
        inet 10.43.138.27  netmask 255.255.255.0  broadcast 10.43.138.255
        inet6 fe80::5054:ff:fea1:8128  prefixlen 64  scopeid 0x20
        ether 52:54:00:a1:81:28  txqueuelen 1000  (Ethernet)
        RX packets 1393  bytes 75417 (73.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 78  bytes 7833 (7.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens4: flags=4163  mtu 1500
        inet 192.168.126.1  netmask 255.255.255.0  broadcast 192.168.126.255
        inet6 fe80::5054:ff:fe33:e8b4  prefixlen 64  scopeid 0x20
        ether 52:54:00:33:e8:b4  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 130 (130.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 77  bytes 4226 (4.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens9: flags=4163  mtu 1500
        inet 192.168.122.90  netmask 255.255.255.0  broadcast 192.168.122.255
        inet6 fe80::5054:ff:fe86:83c0  prefixlen 64  scopeid 0x20
        ether 52:54:00:86:83:c0  txqueuelen 1000  (Ethernet)
        RX packets 1238  bytes 87817 (85.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 805  bytes 220059 (214.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1  (Local Loopback)
        RX packets 251  bytes 20716 (20.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 251  bytes 20716 (20.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Similarly below is my network config file for the public network which I use for direct connectivity from my laptop

[root@director network-scripts]# cat ifcfg-ens3
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.43.138.27
PREFIX=24
GATEWAY=10.43.138.30
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens3
UUID=e7dab5ae-06c6-4855-bf1e-487919fe13a2
DEVICE=ens3
ONBOOT=yes

Similarly below is my network config file for the provisioning network

[root@director network-scripts]# cat ifcfg-ens4
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.126.1
PREFIX=24
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens4
UUID=8f6b534e-2ee1-4bc8-9159-27be0214d507
DEVICE=ens4
ONBOOT=yes

Lastly, below is my network config file for the external network:

[root@director network-scripts]# cat ifcfg-ens9
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens9
UUID=7ab31c05-3da6-4609-a55f-c63c078e8f19
DEVICE=ens9
ONBOOT=yes
IPADDR=192.168.122.90
PREFIX=24

Below are my route files

[root@director network-scripts]# cat route-ens4
ADDRESS0=192.168.126.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.126.254
METRIC0=0

[root@director network-scripts]# cat route-ens9
ADDRESS0=192.168.122.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.122.1
METRIC0=0

Likewise below are my route details


[root@director network-scripts]# ip route show
default via 10.43.138.30 dev ens3 proto static metric 100
10.43.138.0/24 dev ens3 proto kernel scope link src 10.43.138.27 metric 100
192.168.122.0/24 via 192.168.122.1 dev ens9 proto static
192.168.122.0/24 dev ens9 proto kernel scope link src 192.168.122.90 metric 102
192.168.126.0/24 via 192.168.126.254 dev ens4 proto static
192.168.126.0/24 dev ens4 proto kernel scope link src 192.168.126.1 metric 101

Lastly make sure you are able to ping to all your gateways

[root@director network-scripts]# ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.269 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.315 ms
64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.335 ms
^C
--- 192.168.122.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.269/0.306/0.335/0.031 ms

[root@director network-scripts]# ping 192.168.126.254
PING 192.168.126.254 (192.168.126.254) 56(84) bytes of data.
64 bytes from 192.168.126.254: icmp_seq=1 ttl=64 time=0.410 ms
64 bytes from 192.168.126.254: icmp_seq=2 ttl=64 time=0.337 ms
64 bytes from 192.168.126.254: icmp_seq=3 ttl=64 time=0.365 ms
^C
--- 192.168.126.254 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.337/0.370/0.410/0.037 ms

 

Configure Repository

Since I do not have direct Internet access from this director node, I have synced the required online repositories to my openstack host machine and am serving them over HTTP as an offline repo.

[root@director network-scripts]# cat /etc/yum.repos.d/rhel.repo
[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-extras-rpms/
gpgcheck=0
enabled=1

[rhel-7-server-rh-common-rpms]
name=rhel-7-server-rh-common-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-rh-common-rpms/
gpgcheck=0
enabled=1

[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-rpms/
gpgcheck=0
enabled=1

[rhel-7-server-openstack-10-devtools-rpms]
name=rhel-7-server-openstack-10-devtools-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-openstack-10-devtools-rpms/
gpgcheck=0
enabled=1

[rhel-7-server-openstack-10-rpms]
name=rhel-7-server-openstack-10-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-openstack-10-rpms/
gpgcheck=0
enabled=1

[rhel-7-server-satellite-tools-6.2-rpms]
name=rhel-7-server-satellite-tools-6.2-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-satellite-tools-6.2-rpms/
gpgcheck=0
enabled=1

[rhel-ha-for-rhel-7-server-rpms]
name=rhel-ha-for-rhel-7-server-rpms
baseurl=http://192.168.122.1/repo/rhel-ha-for-rhel-7-server-rpms/
gpgcheck=0
enabled=1
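For reference, an offline mirror like this can be prepared on the physical host roughly as follows (a hedged sketch, assuming yum-utils, createrepo and httpd are installed on a system with access to the required channels; adjust repo IDs and paths to your environment):

[root@openstack ~]# reposync -g -l --repoid=rhel-7-server-openstack-10-rpms --download_path=/var/www/html/repo/
[root@openstack ~]# createrepo /var/www/html/repo/rhel-7-server-openstack-10-rpms/
[root@openstack ~]# systemctl enable httpd
[root@openstack ~]# systemctl start httpd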

Disable firewalld on director node

[root@director ~]# systemctl stop firewalld
[root@director ~]# systemctl disable firewalld

 

Installing the Director Packages

So use the following command to install the required command line tools for director installation and configuration:

[root@director ~]# yum install -y python-tripleoclient

 

Creating user for undercloud deployment

The undercloud and overcloud deployment must be done as a normal user and not the root user so we will create a stack user for this purpose.

[root@director ~]# useradd stack

[root@director network-scripts]# echo redhat | passwd --stdin stack
Changing password for user stack.
passwd: all authentication tokens updated successfully.

[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
stack ALL=(root) NOPASSWD:ALL

[root@director ~]# chmod 0440 /etc/sudoers.d/stack

[root@director ~]# su - stack
Last login: Mon Oct 8 08:54:44 IST 2018 on pts/0

 

Configure undercloud deployment parameters

Copy the sample undercloud.conf file to the home directory of stack user as shown below

[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

Now update or add the variables below in your undercloud.conf. These variables will be used to set up your undercloud node.


[stack@director ~]$ vim undercloud.conf
[DEFAULT]
local_ip = 192.168.126.1/24
undercloud_public_vip = 192.168.126.2
undercloud_admin_vip = 192.168.126.3
local_interface = ens4
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.150
network_cidr = 192.168.126.0/24
network_gateway = 192.168.126.1
inspection_iprange = 192.168.126.160,192.168.126.199
generate_service_certificate = true
certificate_generation_ca = local

You can follow the official Red Hat documentation to understand the individual parameters we have used here.

 

Install TripleO Undercloud

Undercloud deployment is completely automated and uses the Puppet manifests provided by TripleO. The following command launches the director's configuration script. The director installs additional packages and configures its services to suit the settings in undercloud.conf.

NOTE:

The configuration will take some time to complete.
[stack@director ~]$ openstack undercloud install
** output trimmed **
#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.

There is also a stackrc file at /home/stack/stackrc.

These files are needed to interact with the OpenStack services, and should be
secured.

#############################################################################
NOTE:

If you face any issues while configuring the undercloud node, check the /home/stack/.instack/install-undercloud.log file for the installation-related logs.

The configuration is performed using the Python script /usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py.

 

The configuration script generates two files when complete:

  • undercloud-passwords.conf – A list of all passwords for the director’s services.
  • stackrc – A set of initialisation variables to help you access the director’s command line tools.
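To start using the director's command line tools, source the stackrc file and verify that the undercloud services respond (a minimal hedged check):

[stack@director ~]$ source ~/stackrc
[stack@director ~]$ openstack catalog list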

 

View the undercloud’s configured network interfaces. The br-ctlplane bridge is the 192.168.126.1 provisioning network; the ens9 interface is the 192.168.122.90 external network and ens3 with 10.43.138.27 is the public network.


[root@director ~]# ip a | grep -E 'br-ctlplane|ens9|ens3'
2: ens3:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.43.138.27/24 brd 10.43.138.255 scope global noprefixroute ens3
4: ens9:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.122.90/24 brd 192.168.122.255 scope global noprefixroute ens9
6: br-ctlplane:  mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 192.168.126.1/24 brd 192.168.126.255 scope global br-ctlplane
    inet 192.168.126.3/32 scope global br-ctlplane
    inet 192.168.126.2/32 scope global br-ctlplane

 

In my next article I will continue with “Install TripleO Undercloud and deploy Overcloud in Openstack”, where I will share the steps to deploy the overcloud with a single controller and a single compute node.

 

What is PHP Object Injection

PHP Serialization Recap

PHP provides a mechanism for storing and loading data with PHP types across multiple HTTP requests. This mechanism boils down to two functions: serialize() and unserialize(). This may sound complicated, but let’s look at the following easy example:
A PHP object being ‘serialized’:

$object->data = "Some data!";
$cached = serialize($object);
The above example creates a new object and then produces the following serialized string representation of this object: