Month: October 2018

Top 5 eCommerce Posts for October

SEO for holiday shoppers – Search Engine Land: Let's dive in with 5 things you can do right now to get started on making more money during this peak time of year. 4 Online Marketing Tips That May Sound Obvious But Should Not Be Overlooked – Marketing Insider Group: With more people fighting for a […]

WordPress Configuration Cheat Sheet

In our series about misconfigurations of PHP frameworks, we have investigated Symfony, a very versatile and modular framework. Due to the enormous distribution and the multitude of plugins, WordPress is also a very popular target for attackers. This cheat sheet focuses on the wp-config.php file and highlights important settings to check when configuring your secure WordPress installation.
Download Cheat Sheet as PDF
1. Disable Debugging
The debug functionality should not be active in a production environment, as it might provide useful information to potential attackers.

Why Data Center Location Matters

Data center location should always be a top priority for customers. Choosing a data center in an optimal location not only provides immediate advantages, it also prevents major headaches down the road. 123Net carefully handpicked an ideal environment for each of its four Michigan data centers.

Network Speed

While people may think that bandwidth is the only factor determining speed, that’s not the case. The physical distance from a data center to an application can significantly affect network latency. For this reason, many companies, such as those in healthcare, are seeking data centers local to their office space. These companies gain a clear edge over the competition thanks to the speed advantage of lower network latency. While it could be less expensive to colocate farther away, it is often not worth sacrificing speed.

Natural Disasters

Businesses that colocate in areas prone to natural disasters are playing with fire. Events such as earthquakes, hurricanes, and tornadoes can cause critical power outages, leading to crippling downtime. If the best data center option happens to be in a disaster-prone area, consider selecting a backup data center in a safer location, preferably on a different power grid.

Accessibility

Data center space should be in an area that is comfortable to reach. Personnel may need to travel to the data center to make upgrades and service equipment. If a data center is out of driving distance, consider the logistics of transporting personnel and equipment through the air.

Connections

Businesses can take advantage of data center connectivity by creating multiple channels that will keep data moving freely despite outages. This makes data centers in well-connected areas safer and more reliable. It also gives businesses more room for growth, as they can easily make plenty of connections within the data center itself.

123Net’s three Southfield data center facilities have a superior location. They have access to more than 20 carriers, are in low-risk southeastern Michigan, and are close to thousands of Metro Detroit businesses. 123Net also has secure, easily accessible data center space in Grand Rapids that makes for premier primary or backup colocation space. To learn more about the strategic advantages you can gain from data center location, or to schedule a tour of our data centers, visit https://www.123.net/data-center/


Kali Linux 2018.4 Release

Welcome to our fourth and final release of 2018, Kali Linux 2018.4, which is available for immediate download. This release brings our kernel up to version 4.18.10, fixes numerous bugs, includes many updated packages, and adds a very experimental 64-bit Raspberry Pi 3 image.

New Tools and Tool Upgrades

We have only added one new tool to the distribution in this release cycle but it’s a great one. Wireguard is a powerful and easy to configure VPN solution that eliminates many of the headaches one typically encounters setting up VPNs. Check out our Wireguard post for more details on this great addition.

Kali Linux 2018.4 also includes updated packages for Burp Suite, Patator, Gobuster, Binwalk, Faraday, Fern-Wifi-Cracker, RSMangler, theHarvester, wpscan, and more. For the complete list of updates, fixes, and additions, please refer to the Kali Bug Tracker Changelog.

64-bit Raspberry Pi 3

We have created a very experimental Raspberry Pi 3 image that supports 64-bit mode. Please note that this is a beta image, so if you discover anything that isn’t working, please alert us on our bug tracker.

Download Kali Linux 2018.4

If you would like to check out this latest and greatest Kali release, you can find download links for ISOs and Torrents on the Kali Downloads page along with links to the Offensive Security virtual machine and ARM images, which have also been updated to 2018.4. If you already have a Kali installation you’re happy with, you can easily upgrade in place as follows.

root@kali:~# apt update && apt -y full-upgrade

Ensuring your Installation is Updated

To double check your version, first make sure your Kali package repositories are correct.

root@kali:~# cat /etc/apt/sources.list
deb http://http.kali.org/kali kali-rolling main non-free contrib

Then, after running 'apt -y full-upgrade', you may require a 'reboot' before checking:

root@kali:~# grep VERSION /etc/os-release
VERSION="2018.4"
VERSION_ID="2018.4"
root@kali:~#
root@kali:~# uname -a
Linux kali 4.18.0-kali2-amd64 #1 SMP Debian 4.18.10-2kali1 (2018-10-09) x86_64 GNU/Linux

If you come across any bugs in Kali, please open a report on our bug tracker. We’ll never be able to fix what we don’t know about.

Raising Crypto for the Greater Good

Open Library is raising 50 Ethereum (ETH) to get books our readers love! Chip in and help us democratize our bookshelves for all.


If you donate now, WeTrust Spring will match your individual ETH donation 100% (until they’ve hit $100k), through Giving Tuesday, Nov. 27!


In 2006, Aaron Swartz founded Open Library with the vision of creating “one web page for every book ever published”. Over the last twelve years, a lot has changed. Open Library has matured not only into a book catalog spanning 25M editions and 16M unique works, but into a library initiative recognized by the state of California, under the auspices of the Internet Archive. Today, Open Library makes over 3M of Internet Archive’s digital books (2.3M public access, 800k modern borrowable) readable directly from your browser. Last year, over 1.3M books were lent to readers from openlibrary.org.

And we’re just getting started. The dream of an Open Library doesn’t end at cataloging the world’s books. Together, we have the opportunity to create a new type of library which works for its readers. To be a library of the people, by the people, and for the people. A library which democratizes the books on its shelves and empowers its readers to pursue knowledge and fuel their imaginations. But how do we get there?

As a first step, we needed some way for patrons to let us know what books they wanted in their library. In January of this year, Open Library announced a new Reading Log feature which allows readers to keep track of which books they’re reading and which books they wish we had available. Over the last 8 months, a quarter million users have been anonymously helping us identify over 400k books most desired by our community. Next comes the hard part: how can we get all these books for our readers? An answer came to us directly from one of Aaron’s early presentations on Open Library: crowdfunding and direct democracy.

What if our patrons could help us purchase a collection of books for their library and make them available to the world through our lending library? What if, for starters, we crowdfunded just a single pallet of some of our most requested books, to be purchased and shipped in bulk, and then made lendable to an international audience on openlibrary.org? Something like a global, digital book-drive. And what better way for Open Library to accept donations than with cryptocurrency — decentralized digital currency?

Thanks to the help of a partner, we now have this chance. Starting in November, Open Library is fortunate to be one of a select group of nonprofits to be listed on WeTrust Spring, a platform whose motto is, “Raising Crypto for the Greater Good” and which helps nonprofits accept donations for their causes in cryptocurrency. Through this initiative, Open Library aims to raise 50 ETH (~$10,000 USD) which it can use to unlock a combination of books from Internet Archive’s wishlist and Open Library’s most requested works. We plan to release a blog post about our progress each month in 2019.

Book lovers, help us democratize Open Library for all:

Donate ETH now* or Learn more

*Have your individual ETH donation doubled by WeTrust Spring (until they’ve hit $100k), through Giving Tuesday, Nov. 27!

Don’t have Ethereum? You can also donate using a credit card.

Free Exchange chapters – Manage Users, Groups & Public Folders

Last year I was invited to write an Exchange 2016 book with a few fellow MVPs. Unfortunately, due to unforeseen circumstances, the book was later canceled.

Rather than have my chapters go to waste (combined, these chapters total 40,000 words), I wanted to offer them to you for free.

Below you will find three chapters covering the management of mailboxes, groups, contacts, mail-enabled users, and public folders.

That said, I do want to warn you that these chapters only received a brief technical review. They did not go through any formal editorial process, so I apologize in advance for any errors. Please read these at your own risk.

While I offer these for free, I would like to state that my work is under copyright. By viewing or downloading these chapters you agree to the copyright notice below.

Copyright © 2018 by Gareth Gudger

All rights reserved. No part of these chapters may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, please email info@supertekboy.com.


How to configure HAProxy in Openstack (High Availability)

This is the second part of my previous article, where I shared the steps to configure an OpenStack HA cluster using Pacemaker and Corosync. In this article I will share the steps to configure HAProxy in OpenStack and move our Keystone endpoints to the load balancer using a virtual IP (VIP).


Configure HAProxy in Openstack

To configure HAProxy in OpenStack, we will be using HAProxy to load-balance our control plane services in this lab deployment. Some deployments may also implement Keepalived and run HAProxy in an Active/Active configuration. For this deployment, we will run HAProxy Active/Passive and manage it as a resource along with our VIP in Pacemaker.

To start, install HAProxy on both nodes using the following command:

NOTE:

On RHEL systems you must have an active subscription to RHN, or you can configure a local offline repository from which the yum package manager can install the required RPM and its dependencies.
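If you go the local-repository route, here is a minimal sketch of the repo definition (the /opt/rpms path is an assumption; the directory must already contain the RPMs plus repodata generated with createrepo):

# Define a local yum repository pointing at a directory of downloaded RPMs
cat > /etc/yum.repos.d/local.repo <<'EOF'
[local-rpms]
name=Local offline repository
baseurl=file:///opt/rpms
enabled=1
gpgcheck=0
EOF
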
[root@controller1 ~]# yum install -y haproxy
[root@controller2 ~]# yum install -y haproxy

Verify installation with the following command:


[root@controller1 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64

[root@controller2 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64

Next, we will create a configuration file for HAProxy which load-balances the API services installed on the two controllers. Use the following example as a template, replacing the IP addresses in the example with the IP addresses of the two controllers and the IP address of the VIP that you’ll be using to load-balance the API services.

NOTE:

The IP address you plan to use for the VIP must not already be in use on your network.
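A quick sanity check before assigning it (a sketch only; a reply means the address is already taken, while no reply is a good sign but not an absolute guarantee):

[root@controller1 ~]# ping -c 2 -W 1 192.168.122.30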

Take a backup of the existing config file on both the controller nodes

[root@controller1 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp
[root@controller2 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp

The following example /etc/haproxy/haproxy.cfg will load-balance Horizon in our environment:

[root@controller1 haproxy]# cat haproxy.cfg
global
  daemon
  group  haproxy
  maxconn  40000
  pidfile  /var/run/haproxy.pid
  user  haproxy

defaults
  log  127.0.0.1 local2 warning
  mode  tcp
  option  tcplog
  option  redispatch
  retries  3
  timeout  connect 10s
  timeout  client 60s
  timeout  server 60s
  timeout  check 10s

listen horizon
  bind 192.168.122.30:80
  mode http
  cookie SERVERID insert indirect nocache
  option tcplog
  timeout client 180s
  server controller1 192.168.122.20:80 cookie controller1 check inter 1s
  server controller2 192.168.122.22:80 cookie controller2 check inter 1s

In this example, controller1 has an IP address of 192.168.122.20 and controller2 has an IP address of 192.168.122.22. The VIP that we’ve chosen to use is 192.168.122.30. Copy this file, replacing the IP addresses with the addresses in your lab, to /etc/haproxy/haproxy.cfg on each of the controllers.

 

Copy this haproxy.cfg file to the second controller:


[root@controller1 ~]# scp /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg

In order for Horizon to respond to requests on the VIP, we’ll need to add the VIP as a ServerAlias in the Apache virtual host configuration. This is found at /etc/httpd/conf.d/15-horizon_vhost.conf in our lab installation. Look for the following line on controller1:

ServerAlias 192.168.122.20

and the following line on controller2:

ServerAlias 192.168.122.22

Add an additional ServerAlias line with the VIP on both controllers:

ServerAlias 192.168.122.30
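To confirm the alias is in place, a quick check on each controller (assuming the vhost file path shown above; your installation may name the file slightly differently):

[root@controller1 ~]# grep ServerAlias /etc/httpd/conf.d/15-horizon_vhost.conf
[root@controller2 ~]# grep ServerAlias /etc/httpd/conf.d/15-horizon_vhost.conf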

You’ll also need to tell Apache not to listen on the VIP so that HAProxy can bind to the address. To do this, modify /etc/httpd/conf/ports.conf and specify the IP address of the controller in addition to the port numbers. The following is an example:

[root@controller1 ~]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************

Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.20:35357
Listen 192.168.122.20:5000
Listen 192.168.122.20:80

Here 192.168.122.20 is the IP of controller1.

On controller2, repeat the same change using that controller's IP:

[root@controller2 ~(keystone_admin)]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************

Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.22:35357
Listen 192.168.122.22:5000
Listen 192.168.122.22:80

Restart Apache to pick up the new alias:


[root@controller1 ~]# systemctl restart httpd
[root@controller2 ~]# systemctl restart httpd

Next, add the VIP and the HAProxy service to the Pacemaker cluster as resources. These commands should only be run on the first controller node. The full resource agent name tells Pacemaker three things about the resource you want to add (the query commands after this list show what is available on your own nodes):

  • The first field (ocf in this case) is the standard to which the resource script conforms and where to find it.
  • The second field (heartbeat in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in.
  • The third field (IPaddr2 in this case) is the name of the resource script.
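If you want to confirm what is available on your nodes before creating the resources, pcs can list the supported standards, OCF providers, and resource agents (standard pcs query commands, shown here on controller1):

[root@controller1 ~]# pcs resource standards
[root@controller1 ~]# pcs resource providers
[root@controller1 ~]# pcs resource agents ocf:heartbeat | grep IPaddr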

 

[root@controller1 ~]# pcs resource create VirtualIP IPaddr2 ip=192.168.122.30 cidr_netmask=24
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')

[root@controller1 ~]# pcs resource create HAProxy systemd:haproxy

Co-locate the HAProxy service with the VirtualIP to ensure that the two run together:

[root@controller1 ~]# pcs constraint colocation add VirtualIP with HAProxy score=INFINITY

Verify that the resources have started; you can check the status from either controller:

[root@controller1 ~]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 12:44:27 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

At this point, you should be able to access Horizon using the VIP you specified. Traffic will flow from your client to HAProxy on the VIP to Apache on one of the two nodes.
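If you want a quick command-line check as well, the following prints the final HTTP status after following redirects (a sketch; on a default Packstack install Horizon is served under /dashboard, so adjust the path if your deployment differs):

# A 200 here means Horizon answered through HAProxy on the VIP
[root@controller1 ~]# curl -s -o /dev/null -w '%{http_code}\n' -L http://192.168.122.30/dashboard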

 

Additional API service configuration

Now that the base HAProxy configuration in OpenStack is complete, the final configuration step is to move each of the OpenStack API endpoints behind the load balancer. There are three steps in this process, which are as follows:


  • Update the HAProxy configuration to include the service.
  • Move the endpoint in the Keystone service catalog to the VIP.
  • Reconfigure services to point to the VIP instead of the IP of the first controller.

 

In the following example, we will move the Keystone service behind the load balancer. This process can be followed for each of the API services.

First, add sections to the HAProxy configuration file for the public (authorization) and admin endpoints of Keystone. Add the template below to the existing haproxy.cfg file on both controllers:

[root@controller1 ~]# vim /etc/haproxy/haproxy.cfg
listen keystone-admin
  bind 192.168.122.30:35357
  mode tcp
  option tcplog
  server controller1 192.168.122.20:35357 check inter 1s
  server controller2 192.168.122.22:35357 check inter 1s

listen keystone-public
  bind 192.168.122.30:5000
  mode tcp
  option tcplog
  server controller1 192.168.122.20:5000 check inter 1s
  server controller2 192.168.122.22:5000 check inter 1s

Restart the haproxy service on the active node:

[root@controller1 ~]# systemctl restart haproxy.service

You can determine the active node with the output from pcs status. Check to make sure that HAProxy is now listening on ports 5000 and 35357 using the following commands on both the controllers:

[root@controller1 ~]# curl http://192.168.122.30:5000
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:5000/v2.0/", "rel": "self"}, {"href": "htt

[root@controller1 ~]# curl http://192.168.122.30:5000/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357/v3
{"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}}

[root@controller1 ~]# curl http://192.168.122.30:35357
{"versions": {"values": [{"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:35357/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

All of the above commands should output JSON describing the Keystone service versions, confirming that the respective ports are in a listening state.
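You can also confirm the listening sockets directly on the node currently running HAProxy (a quick sketch using ss, which ships with iproute on CentOS/RHEL 7):

# HAProxy should show listeners on 192.168.122.30:80, :5000 and :35357
[root@controller1 ~]# ss -tlnp | grep haproxy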


 

Next, update the endpoint for the identity service in the Keystone service catalogue by creating new endpoints and deleting the old ones. Start by sourcing your existing keystonerc_admin file:

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Below is the content from my keystonerc_admin

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.20:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

As you can see, OS_AUTH_URL currently points to the existing endpoint on the controller. We will update this in a while.

Get the list of current Keystone endpoints on your active controller:

[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |
| b0f5b7887cd346b3aec747e5b9fafcd3 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.20:35357/v3                |
| c1380d643f734cc1b585048b2e7a7d47 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.20:5000/v3                 |

Since we want to move the Keystone service endpoints to the VIP, we will create new admin, public, and internal endpoints with the VIP URL, as shown below:

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity public http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 08a26ace08884b85a0ff869ddb20bea3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity admin http://192.168.122.30:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ef210afef1da4558abdc00cc13b75185 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:35357/v3   |
+--------------+----------------------------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity internal http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5205be865e2a4cb9b4ab2119b93c7461 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

Last, update the auth_uri, auth_url, and identity_uri parameters in each of the OpenStack services to point to the new VIP. The following configuration files will need to be edited (one way to script the change is sketched after the list):


/etc/ceilometer/ceilometer.conf
/etc/cinder/api-paste.ini
/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf
/etc/neutron/neutron.conf
/etc/neutron/api-paste.ini
/etc/nova/nova.conf
/etc/swift/proxy-server.conf
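
Rather than editing each file by hand, the substitution can be scripted. This is only a sketch: it assumes the old controller IP (192.168.122.20) appears in these files solely inside the Keystone auth URLs, so back each file up and review the changes before restarting anything:

# Repoint auth_uri/auth_url/identity_uri style URLs from the controller IP to the VIP
for f in /etc/ceilometer/ceilometer.conf /etc/cinder/api-paste.ini \
         /etc/glance/glance-api.conf /etc/glance/glance-registry.conf \
         /etc/neutron/neutron.conf /etc/neutron/api-paste.ini \
         /etc/nova/nova.conf /etc/swift/proxy-server.conf; do
  cp -p "$f" "$f.bkp"
  sed -i 's/192\.168\.122\.20:5000/192.168.122.30:5000/g;s/192\.168\.122\.20:35357/192.168.122.30:35357/g' "$f"
done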

Next, install openstack-utils, which provides tools that can restart all of the OpenStack services at once rather than restarting each service manually:

[root@controller1 ~(keystone_admin)]# yum -y install openstack-utils

After editing each of the files, restart the OpenStack services on all of the nodes in the lab deployment using the following command:

[root@controller1 ~(keystone_admin)]# openstack-service restart

Next, update your keystonerc_admin file so that OS_AUTH_URL points to the VIP, i.e. http://192.168.122.30:5000/v3, as shown below:

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.30:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

Now re-source the updated keystonerc_admin file

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Validate that OS_AUTH_URL is now pointing to the new VIP:

[root@controller1 ~(keystone_admin)]# echo $OS_AUTH_URL
http://192.168.122.30:5000/v3

Once the OpenStack services are restarted, delete the old endpoints for the Keystone service:

[root@controller1 ~(keystone_admin)]# openstack endpoint delete b0f5b7887cd346b3aec747e5b9fafcd3
[root@controller1 ~(keystone_admin)]# openstack endpoint delete c1380d643f734cc1b585048b2e7a7d47

 

NOTE:

You may get the error below while attempting to delete the old endpoints. This is most likely because the Keystone database has not yet refreshed properly, so perform another round of "openstack-service restart" and then re-attempt the deletion:
[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6
Failed to delete endpoint with ID '3ded2a2faffe4fd485f6c3c58b1990d6': More than one endpoint exists with the name '3ded2a2faffe4fd485f6c3c58b1990d6'.
1 of 1 endpoints failed to delete.

[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep 3ded2a2faffe4fd485f6c3c58b1990d6
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |

[root@controller1 ~(keystone_admin)]# openstack-service restart

[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6

Repeat the same set of steps on controller2.


 

After deleting the old endpoints and creating the new ones, below is the updated list of Keystone endpoints as seen from controller2:

[root@controller2 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |

The OpenStack services will now be using the Keystone API endpoint provided by the VIP and the service will be highly available.

 

Perform a Cluster Failover

Since our ultimate goal is high availability, we should test failover of our new resource.

Before performing a failover, let us make sure our cluster is up and running properly:

[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 14:54:45 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

As we can see, both controllers are online, so let us stop the cluster services on the second controller:

[root@controller2 ~(keystone_admin)]# pcs cluster stop controller2
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...

Now let us try to check the Pacemaker status from controller2:

[root@controller2 ~(keystone_admin)]# pcs status
Error: cluster is not currently running on this node

Since the cluster service is not running on controller2, we cannot check the status there, so let us get the status from controller1:


[root@controller1 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:21:32 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 ]
OFFLINE: [ controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

As expected, controller2 is shown as offline. Now let us check whether the Keystone endpoints are still readable through the VIP:

[root@controller2 ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                           |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| 06473a06f4a04edc94314a97b29d5395 | RegionOne | cinderv3     | volumev3     | True    | internal  | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 07ad2939b59b4f4892d6a470a25daaf9 | RegionOne | aodh         | alarming     | True    | public    | http://192.168.122.20:8042                    |
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 0856cd4b276f490ca48c772af2be49a3 | RegionOne | gnocchi      | metric       | True    | internal  | http://192.168.122.20:8041                    |
| 08ff114d526e4917b5849c0080cfa8f2 | RegionOne | aodh         | alarming     | True    | admin     | http://192.168.122.20:8042                    |
| 1e6cf514c885436fb14ffec0d55286c6 | RegionOne | aodh         | alarming     | True    | internal  | http://192.168.122.20:8042                    |
| 20178fdd0a064b5fa91b869ab492d2d1 | RegionOne | cinderv2     | volumev2     | True    | internal  | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3524908122a44d7f855fd09dd2859d4e | RegionOne | nova         | compute      | True    | public    | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| 3a896bde051f4ae4bfa3694a1eb05321 | RegionOne | cinderv2     | volumev2     | True    | admin     | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3ef1f30aab8646bc96c274a116120e66 | RegionOne | nova         | compute      | True    | admin     | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 42a690ef05aa42adbf9ac21056a9d4f3 | RegionOne | nova         | compute      | True    | internal  | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 45fea850b0b34f7ca2443da17e82ca13 | RegionOne | glance       | image        | True    | admin     | http://192.168.122.20:9292                    |
| 46cbd1e0a79545dfac83eeb429e24a6c | RegionOne | cinderv2     | volumev2     | True    | public    | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 49f82b77105e4614b7cf57fe1785bdc3 | RegionOne | cinder       | volume       | True    | internal  | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 4aced9a3c17741608b2491a8a8fb7503 | RegionOne | cinder       | volume       | True    | public    | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 63eeaa5246f54c289881ade0686dc9bb | RegionOne | ceilometer   | metering     | True    | admin     | http://192.168.122.20:8777                    |
| 6e2fd583487846e6aab7cac4c001064c | RegionOne | gnocchi      | metric       | True    | public    | http://192.168.122.20:8041                    |
| 79f2fcdff7d740549846a9328f8aa993 | RegionOne | cinderv3     | volumev3     | True    | public    | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 9730a44676b042e1a9f087137ea52d04 | RegionOne | glance       | image        | True    | public    | http://192.168.122.20:9292                    |
| a028329f053841dfb115e93c7740d65c | RegionOne | neutron      | network      | True    | internal  | http://192.168.122.20:9696                    |
| acc7ff6d8f1941318ab4f456cac5e316 | RegionOne | placement    | placement    | True    | public    | http://192.168.122.20:8778/placement          |
| afecd931e6dc42e8aa1abdba44fec622 | RegionOne | glance       | image        | True    | internal  | http://192.168.122.20:9292                    |
| c08c1cfb0f524944abba81c42e606678 | RegionOne | placement    | placement    | True    | admin     | http://192.168.122.20:8778/placement          |
| c0c0c4e8265e4592942bcfa409068721 | RegionOne | placement    | placement    | True    | internal  | http://192.168.122.20:8778/placement          |
| d9f34d36bd2541b98caa0d6ab74ba336 | RegionOne | cinder       | volume       | True    | admin     | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| e051cee0d06e45d48498b0af24eb08b5 | RegionOne | ceilometer   | metering     | True    | public    | http://192.168.122.20:8777                    |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |
| ea6f1493aa134b6f9822eca447dfd1df | RegionOne | neutron      | network      | True    | admin     | http://192.168.122.20:9696                    |
| ed97856952bb4a3f953ff467d61e9c6a | RegionOne | gnocchi      | metric       | True    | admin     | http://192.168.122.20:8041                    |
| f989d76263364f07becb638fdb5fea6c | RegionOne | neutron      | network      | True    | public    | http://192.168.122.20:9696                    |
| fe32d323287c4a0cb221faafb35141f8 | RegionOne | ceilometer   | metering     | True    | internal  | http://192.168.122.20:8777                    |
| fef852af4f0d4f0cacd4620e5d5245c2 | RegionOne | cinderv3     | volumev3     | True    | admin     | http://192.168.122.20:8776/v3/%(tenant_id)s   |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+

Yes, we are still able to read the Keystone endpoint list, so everything looks fine.

 

Let us start the cluster on controller2 again:

[root@controller2 ~(keystone_admin)]# pcs cluster start
Starting Cluster...

And check the status

[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:23:17 2018
Last change: Tue Oct 16 12:44:23 2018 by root via cibadmin on controller1

2 nodes configured
2 resources configured

Online: [ controller1 controller2 ]

Full list of resources:

 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Everything is back to green, and we have successfully configured HAProxy in OpenStack.


 

Lastly, I hope the steps in this article to configure HAProxy in OpenStack (high availability between controllers) were helpful. Let me know your suggestions and feedback in the comment section.

 


How to configure Openstack High Availability with corosync & pacemaker




This is a two-part article; here I will share the steps to configure OpenStack High Availability (HA) between two controllers. In the second part I will share the steps to configure HAProxy and move the Keystone service endpoints to the load balancer. If you bring up a controller and compute node using a TripleO configuration, the controllers will be configured in a Pacemaker cluster by default. But if you bring up your OpenStack setup manually using Packstack or DevStack, or by manually creating all the databases and services, then you will have to manually configure a cluster between the controllers to achieve OpenStack High Availability (HA).


 

Configure OpenStack High Availability (HA)

For the sake of this article, I brought up two controller nodes using Packstack on two different CentOS 7 virtual machines. After Packstack completes successfully, you will find a keystonerc_admin file in the root user's home directory.

 

Installing the Pacemaker resource manager

Since we will configure OpenStack High Availability using Pacemaker and Corosync, we first need to install all the RPMs required for the cluster setup. We will use Pacemaker to manage the VIP that we will use with HAProxy to make the web services highly available.

Install Pacemaker on both controller nodes:


[root@controller2 ~]# yum install -y pcs fence-agents-all
[root@controller1 ~]# yum install -y pcs fence-agents-all

Verify that the software installed correctly by running the following command:

[root@controller1 ~]# rpm -q pcs
pcs-0.9.162-5.el7.centos.2.x86_64

[root@controller2 ~]# rpm -q pcs
pcs-0.9.162-5.el7.centos.2.x86_64

Next, add rules to the firewall to allow cluster traffic:

[root@controller1 ~]# firewall-cmd --permanent --add-service=high-availability
success

[root@controller1 ~]# firewall-cmd --reload
success

[root@controller2 ~]# firewall-cmd --permanent --add-service=high-availability
success

[root@controller2 ~]# firewall-cmd --reload
success
NOTE:

If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports: TCP ports 2224, 3121, and 21064, and UDP port 5405.
If you run into any problems during testing, you might want to disable the firewall and SELinux entirely until you have everything working. This may create significant security issues and should not be performed on machines that will be exposed to the outside world, but may be appropriate during development and testing on a protected host.
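If you are on plain iptables rather than firewalld, here is a minimal sketch of the equivalent rules for the ports listed above (adapt the chain and rule position to your existing policy, persist the rules with whatever mechanism you normally use, and run the same rules on both controllers):

# Allow Pacemaker/Corosync cluster traffic
[root@controller1 ~]# iptables -A INPUT -p tcp -m multiport --dports 2224,3121,21064 -j ACCEPT
[root@controller1 ~]# iptables -A INPUT -p udp --dport 5405 -j ACCEPT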

The installed packages will create a hacluster user with a disabled password. While this is fine for running pcs commands locally, the account needs a login password in order to perform such tasks as syncing the corosync configuration, or starting and stopping the cluster on other nodes.

Set the password for the Pacemaker cluster on each controller node using the following command:

[root@controller1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@controller2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
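If you would rather set the password non-interactively (handy when scripting both nodes), passwd on RHEL/CentOS accepts it on stdin; replace the placeholder password below with your own:

[root@controller1 ~]# echo 'YourStrongPassword' | passwd --stdin hacluster
[root@controller2 ~]# echo 'YourStrongPassword' | passwd --stdin hacluster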

Start the Pacemaker cluster manager on each node:


[root@controller1 ~]# systemctl start pcsd.service
[root@controller1 ~]# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

[root@controller2 ~]# systemctl start pcsd.service
[root@controller2 ~]# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

 

Configure Corosync

To configure OpenStack High Availability, we need to configure Corosync on both nodes. Use pcs cluster auth to authenticate as the hacluster user:

[root@controller1 ~]# pcs cluster auth controller1 controller2
Username: hacluster
Password:
controller2: Authorized
controller1: Authorized

[root@controller2 ~]# pcs cluster auth controller1 controller2
Username: hacluster
Password:
controller2: Authorized
controller1: Authorized
NOTE:

If you face any issues at this step, check your firewalld/iptables rules and SELinux policy.

Finally, run the following commands on the first node to create the cluster and start it. Here our cluster name will be openstack

[root@controller1 ~]# pcs cluster setup --start --name openstack controller1 controller2
Destroying cluster on nodes: controller1, controller2...
controller1: Stopping Cluster (pacemaker)...
controller2: Stopping Cluster (pacemaker)...
controller1: Successfully destroyed cluster
controller2: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'controller1', 'controller2'
controller1: successful distribution of the file 'pacemaker_remote authkey'
controller2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
controller1: Succeeded
controller2: Succeeded

Starting cluster on nodes: controller1, controller2...
controller1: Starting Cluster...
controller2: Starting Cluster...

Synchronizing pcsd certificates on nodes controller1, controller2...
controller2: Success
controller1: Success
Restarting pcsd on the nodes in order to reload the certificates...
controller2: Success
controller1: Success

Enable the pacemaker and corosync services on both controllers so they start automatically on boot:

[root@controller1 ~]# systemctl enable pacemaker
Created symlink from /etc/systemd/system/multi-user.target.wants/pacemaker.service to /usr/lib/systemd/system/pacemaker.service.

[root@controller1 ~]# systemctl enable corosync
Created symlink from /etc/systemd/system/multi-user.target.wants/corosync.service to /usr/lib/systemd/system/corosync.service.

[root@controller2 ~]# systemctl enable corosync
Created symlink from /etc/systemd/system/multi-user.target.wants/corosync.service to /usr/lib/systemd/system/corosync.service.

[root@controller2 ~]# systemctl enable pacemaker
Created symlink from /etc/systemd/system/multi-user.target.wants/pacemaker.service to /usr/lib/systemd/system/pacemaker.service.

 

Validate cluster using pacemaker

Verify that the cluster started successfully using the following command on both the nodes:

[root@controller1 ~]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 11:51:13 2018
Last change: Tue Oct 16 11:50:51 2018 by root via cibadmin on controller1

2 nodes configured
0 resources configured

Online: [ controller1 controller2 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

[root@controller2 ~]# pcs status
Cluster name: openstack
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Mon Oct 15 17:04:29 2018
Last change: Mon Oct 15 16:49:09 2018 by hacluster via crmd on controller2

2 nodes configured
0 resources configured

Online: [ controller1 controller2 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

How to start the Cluster

Now that corosync is configured, it is time to start the cluster. The command below will start corosync and pacemaker on both nodes in the cluster. If you are issuing the start command from a different node than the one you ran the pcs cluster auth command on earlier, you must authenticate on the current node you are logged into before you will be allowed to start the cluster.

[root@controller1 ~]# pcs cluster start --all

An alternative to using the pcs cluster start --all command is to issue either of the below command sequences on each node in the cluster separately:


[root@controller1 ~]# pcs cluster start
Starting Cluster...

or

[root@controller1 ~]# systemctl start corosync.service
[root@controller1 ~]# systemctl start pacemaker.service

 

Verify Corosync Installation

First, use corosync-cfgtool to check whether cluster communication is happy:

[root@controller2 ~]#  corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
        id      = 192.168.122.22
        status  = ring 0 active with no faults

So all looks normal with our fixed IP address (not a 127.0.0.x loopback address) listed as the id, and no faults for the status.
If you see something different, you might want to start by checking the node’s network, firewall and SELinux configurations.

Next, check the membership and quorum APIs:

[root@controller2 ~]# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.122.20)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.122.22)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined

Check the status of the corosync service:


[root@controller2 ~]# pcs status corosync
Membership information
----------------------
    Nodeid      Votes Name
         1          1 controller1
         2          1 controller2 (local)

You should see both nodes have joined the cluster.

Repeat the same checks on both controllers to validate the Corosync services.

 

Verify the cluster configuration

Before we make any changes, it’s a good idea to check the validity of the configuration.

[root@controller1 ~]# crm_verify -L -V
   error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

As you can see, the tool has found some errors.

In order to guarantee the safety of your data, fencing (also called STONITH) is enabled by default. However, the cluster also knows when no STONITH configuration has been supplied and reports this as a problem (since the cluster will not be able to make progress if a situation requiring node fencing arises).


We will disable this feature for now and configure it later. To disable STONITH, set the stonith-enabled cluster option to false on both the controller nodes:

[root@controller1 ~]# pcs property set stonith-enabled=false
[root@controller1 ~]# crm_verify -L

[root@controller2 ~]# pcs property set stonith-enabled=false
[root@controller2 ~]# crm_verify -L

With the new cluster option set, the configuration is now valid.

WARNING:

The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will refuse to support clusters that have STONITH disabled.

 

I will continue configuring OpenStack High Availability in a separate part. In the next part I will share the steps to configure HAProxy and manage it as a cluster resource, along with the detailed steps to move the OpenStack API endpoints behind the cluster load balancer.

 

How to Configure Tripleo Undercloud to deploy Overcloud in OpenStack

I will assume that your undercloud installation is complete, so here we will continue with the steps to configure the director to deploy the overcloud in OpenStack using Red Hat OpenStack Platform Director 10 and virt-manager.

 


In our last article we covered the areas below:

  • Bring up a physical host
  • Install a new virtual machine for undercloud-director
  • Set hostname for the director
  • Configure repo or subscribe to RHN
  • Install python-tripleoclient
  • Configure undercloud.conf
  • Install Undercloud

 

Now in this article we will continue with the remaining steps to configure the undercloud director node to deploy the overcloud in OpenStack:

  • Obtain and upload images for overcloud introspection and deployment
  • Create virtual machines for overcloud nodes (compute and controller)
  • Configure Virtual Bare Metal Controller
  • Importing and registering the overcloud nodes
  • Introspecting the overcloud nodes
  • Tagging overcloud nodes to profiles
  • Lastly start deploying Overcloud Nodes

 

Deploy Overcloud in Openstack

The director requires several disk images for provisioning overcloud nodes. These include:


  • An introspection kernel and ramdisk ⇒ Used for bare metal system introspection over PXE boot.
  • A deployment kernel and ramdisk ⇒ Used for system provisioning and deployment.
  • An overcloud kernel, ramdisk, and full image ⇒ A base overcloud system that is written to the node’s hard disk.

 

Obtaining Images for Overcloud

[stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa -y

[stack@director ~]$ cp /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar ~/images/

[stack@director ~]$ cd images/

Extract the archives to the images directory on the stack user’s home (/home/stack/images):

[stack@director images]$ tar -xf overcloud-full-latest-10.0.tar
[stack@director images]$ tar -xf ironic-python-agent-latest-10.0.tar
[stack@director images]$ ls -l
total 3848560
-rw-r--r--. 1 stack stack  425703356 Aug 22 02:15 ironic-python-agent.initramfs
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:15 ironic-python-agent.kernel
-rw-r--r--. 1 stack stack  432107520 Oct  8 10:14 ironic-python-agent-latest-10.0.tar
-rw-r--r--. 1 stack stack   61388282 Aug 22 02:29 overcloud-full.initrd
-rw-r--r--. 1 stack stack 1537239040 Oct  8 10:13 overcloud-full-latest-10.0.tar
-rw-r--r--. 1 stack stack 1471676416 Oct  8 10:18 overcloud-full.qcow2
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:29 overcloud-full.vmlinuz

 

Change root password for overcloud nodes

You need virt-customize to change the root password.

[stack@director images]$ sudo yum install -y libguestfs-tools

Execute the command below, replacing the second "password" (the value after the colon) with the password you wish to assign to root:

[stack@director images]$ virt-customize -a overcloud-full.qcow2 --root-password password:password
[   0.0] Examining the guest ...
[  40.9] Setting a random seed
[  40.9] Setting the machine ID in /etc/machine-id
[  40.9] Setting passwords
[  63.0] Finishing off

Import these images into the director:

[stack@director images]$ openstack overcloud image upload --image-path ~/images/
Image "overcloud-full-vmlinuz" was uploaded.
+--------------------------------------+------------------------+-------------+---------+--------+
|                  ID                  |          Name          | Disk Format |   Size  | Status |
+--------------------------------------+------------------------+-------------+---------+--------+
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz |     aki     | 6398256 | active |
+--------------------------------------+------------------------+-------------+---------+--------+
Image "overcloud-full-initrd" was uploaded.
+--------------------------------------+-----------------------+-------------+----------+--------+
|                  ID                  |          Name         | Disk Format |   Size   | Status |
+--------------------------------------+-----------------------+-------------+----------+--------+
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd |     ari     | 61388282 | active |
+--------------------------------------+-----------------------+-------------+----------+--------+
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
|                  ID                  |      Name      | Disk Format |    Size    | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full |    qcow2    | 1471676416 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------+-------------+---------+--------+
|                  ID                  |       Name       | Disk Format |   Size  | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel |     aki     | 6398256 | active |
+--------------------------------------+------------------+-------------+---------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+-------------------+-------------+-----------+--------+
|                  ID                  |        Name       | Disk Format |    Size   | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk |     ari     | 425703356 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+

View the list of the images in the CLI:


[stack@director images]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk      | active |
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel       | active |
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full         | active |
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd  | active |
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+

This list will not show the introspection PXE images. The director copies these files to /httpboot.

[stack@director images]$ ls -l /httpboot/
total 421988
-rwxr-xr-x. 1 root             root               6398256 Oct  8 10:19 agent.kernel
-rw-r--r--. 1 root             root             425703356 Oct  8 10:19 agent.ramdisk
-rw-r--r--. 1 ironic           ironic                 759 Oct  8 10:41 boot.ipxe
-rw-r--r--. 1 ironic-inspector ironic-inspector       473 Oct  8 09:43 inspector.ipxe
drwxr-xr-x. 2 ironic           ironic                   6 Oct  8 10:51 pxelinux.cfg

 

Setting a nameserver on the undercloud’s neutron subnet

Overcloud nodes require a nameserver so that they can resolve hostnames through DNS. For a standard overcloud without network isolation, the nameserver is defined using the undercloud’s neutron subnet.

[stack@director images]$ neutron subnet-list
+--------------------------------------+------+------------------+--------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                       |
+--------------------------------------+------+------------------+--------------------------------------------------------+
| 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.150"} |
+--------------------------------------+------+------------------+--------------------------------------------------------+

Update the nameserver on the subnet:

[stack@director images]$ neutron subnet-update 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 --dns-nameserver 192.168.122.1
Updated subnet: 7b7f251d-edfc-46ea-8d56-f9f2397e01d1
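
On newer releases where the neutron CLI is deprecated, the same change can be made with the unified openstack client. A minimal equivalent, assuming the same subnet ID and nameserver as above:

[stack@director images]$ openstack subnet set --dns-nameserver 192.168.122.1 7b7f251d-edfc-46ea-8d56-f9f2397e01d1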

Validate the changes

[stack@director images]$ neutron subnet-show 7b7f251d-edfc-46ea-8d56-f9f2397e01d1
+-------------------+-------------------------------------------------------------------+
| Field             | Value                                                             |
+-------------------+-------------------------------------------------------------------+
| allocation_pools  | {"start": "192.168.126.100", "end": "192.168.126.150"}            |
| cidr              | 192.168.126.0/24                                                  |
| created_at        | 2018-10-08T04:20:48Z                                              |
| description       |                                                                   |
| dns_nameservers   | 192.168.122.1                                                     |
| enable_dhcp       | True                                                              |
| gateway_ip        | 192.168.126.1                                                     |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "192.168.126.1"} |
| id                | 7b7f251d-edfc-46ea-8d56-f9f2397e01d1                              |
| ip_version        | 4                                                                 |
| ipv6_address_mode |                                                                   |
| ipv6_ra_mode      |                                                                   |
| name              |                                                                   |
| network_id        | 7047a1c6-86ac-4237-8fe5-b0bb26538752                              |
| project_id        | 681d63dc1f1d4c5892941c68e6d07c54                                  |
| revision_number   | 3                                                                 |
| service_types     |                                                                   |
| subnetpool_id     |                                                                   |
| tenant_id         | 681d63dc1f1d4c5892941c68e6d07c54                                  |
| updated_at        | 2018-10-08T04:50:09Z                                              |
+-------------------+-------------------------------------------------------------------+

 

Create virtual machines for overcloud

My controller node configuration:

OS: RHEL 7.4
VM Name: controller0
vCPUs: 2
Memory: 8192 MB
Disk: 60 GB
NIC 1 (Provisioning Network) MAC: 52:54:00:36:65:a6
NIC 2 (External Network) MAC: 52:54:00:c4:34:ca

 

My compute node configuration:


OS: RHEL 7.4
VM Name: compute1
vCPUs: 2
Memory: 8192 MB
Disk: 60 GB
NIC 1 (Provisioning Network) MAC: 52:54:00:13:b8:aa
NIC 2 (External Network) MAC: 52:54:00:d1:93:28

 

For the overcloud we need one controller and one compute node. Create two qcow2 disks, one for the controller and one for the compute node, on your physical host machine.

IMPORTANT NOTE:

You can also create virtual machines using virt-manager.
[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata controller0.qcow2 60G
Formatting 'controller0.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata compute1.qcow2 60G
Formatting 'compute1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 root root 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 root root 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2

Change the ownership of the qcow2 disks to “qemu:qemu”:

[root@openstack images]# chown qemu:qemu *
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2

Next, install “virt-install” so that the virtual machines can be created from the CLI.

[root@openstack images]# yum -y install virt-install

Here we generate XML definitions for the two virtual machines, controller0 and compute1:

[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/controller0.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name controller0 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/controller0.xml

[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/compute1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name compute1 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/compute1.xml

Validate the files we created above


[root@openstack images]# ls -l /tmp/*.xml
-rw-r--r--. 1 root root 1850 Oct  8 10:45 /tmp/compute1.xml
-rw-r--r--. 1 root root 1856 Oct  8 10:45 /tmp/controller0.xml
-rw-r--r--. 1 root root  207 Oct  7 15:52 /tmp/external.xml
-rw-r--r--. 1 root root  117 Oct  6 19:45 /tmp/provisioning.xml

Now define the virtual machines from the XML files:

[root@openstack images]# virsh define --file /tmp/controller0.xml
Domain controller0 defined from /tmp/controller0.xml

[root@openstack images]# virsh define --file /tmp/compute1.xml
Domain compute1 defined from /tmp/compute1.xml

Verify the virtual machines defined on the host. Our undercloud director is running as the director-new domain:

[root@openstack images]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 6     director-new                   running
 -     compute1                       shut off
 -     controller0                    shut off

 

Configure Virtual Bare Metal Controller (VBMC)

The director can use virtual machines on a KVM host as overcloud nodes, controlling their power management through emulated IPMI devices. Since our lab runs on KVM, we will use Virtual BMC (VBMC) to provide that IPMI interface when registering the nodes.

Our virtual machines do not have an iLO or similar out-of-band utility for power management, so VBMC fills that role. You can get the source from the OpenStack git repository.

[root@openstack ~]# wget https://git.openstack.org/openstack/virtualbmc

Next install the VBMC package


[root@openstack ~]# yum install -y python-virtualbmc

Start adding your virtual machines to the vbmc domain list

NOTE:

Use a different port for each virtual machine. Port numbers lower than 1025 require root privileges in the system.
[root@openstack images]# vbmc add controller0 --port 6320 --username admin --password redhat
[root@openstack images]# vbmc add compute1 --port 6321 --username admin --password redhat

To list the available domains

[root@openstack images]# vbmc list
+-------------+--------+---------+------+
| Domain name | Status | Address | Port |
+-------------+--------+---------+------+
|   compute1  |  down  |    ::   | 6321 |
| controller0 |  down  |    ::   | 6320 |
+-------------+--------+---------+------+

Next start all the virtual BMCs:

[root@openstack images]# vbmc start compute1
[root@openstack images]# vbmc start controller0

Check the status again

[root@openstack images]# vbmc list
+-------------+---------+---------+------+
| Domain name |  Status | Address | Port |
+-------------+---------+---------+------+
|   compute1  | running |    ::   | 6321 |
| controller0 | running |    ::   | 6320 |
+-------------+---------+---------+------+

Now all our domains are in running state.
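
If you need to double-check an individual BMC's settings (listen address, port, credentials), vbmc can display them per domain. A quick check, assuming the vbmc show subcommand is available in your python-virtualbmc version:

[root@openstack images]# vbmc show controller0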

NOTE:

With VBMC we will use pxe_ipmitool as the driver for executing all IPMI commands, so make sure this driver is loaded and available on your undercloud.
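
One quick way to confirm the driver is registered with ironic on the undercloud (run as the stack user with stackrc sourced; a sketch, not required by the workflow):

[stack@director ~]$ openstack baremetal driver list
# pxe_ipmitool should appear among the supported drivers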

You can test the emulated IPMI power management from the undercloud with the ipmitool command-line utility, using this syntax:


[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6320 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off

[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6321 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off
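
The emulated BMCs also accept the usual chassis power commands, which can be useful for troubleshooting. A minimal sketch (during a normal deployment the director powers the nodes on and off itself):

[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -p 6320 -U admin -P redhat power on
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -p 6320 -U admin -P redhat power off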

 

Registering nodes for the overcloud

The director requires a node definition template, which you create manually. This file (instack-twonodes.json) uses the JSON format and contains the hardware and power management details for your nodes.

[stack@director ~]$ cat instack-twonodes.json
{
    "nodes":[
        {
            "mac":[
                "52:54:00:36:65:a6"
            ],
            "name":"controller0",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6320"
        },
        {
            "mac":[
                "52:54:00:13:b8:aa"
            ],
            "name":"compute1",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6321"
        }
    ]
}
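
Before importing, it can be worth sanity-checking the JSON syntax, for example with jq (the same tool we use later to read the introspection data):

[stack@director ~]$ jq . instack-twonodes.json > /dev/null && echo "JSON OK"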

The next step toward deploying the overcloud in OpenStack is to register the nodes that will form it, which in our case are a single controller and a single compute node. The Workflow service (mistral) manages this task set, including the ability to schedule and monitor multiple tasks and actions.

[stack@director ~]$ openstack baremetal import --json instack-twonodes.json
Started Mistral Workflow. Execution ID: 6ad7c642-275e-4293-988a-b84c28fd99c1
Successfully registered node UUID 633f53f7-7b3c-454a-8d39-bd9c4371d248
Successfully registered node UUID f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
Started Mistral Workflow. Execution ID: 5989359f-3cad-43cb-9ea3-e86ebee87964
Successfully set all nodes to available.

Check the available ironic node list after the import

[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+

Next, assign the bm-deploy-kernel and bm-deploy-ramdisk images to each node:

[stack@director ~]$ openstack baremetal configure boot

Set the provisioning state to manageable using this command

[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack baremetal node manage $node ; done

The nodes are now registered and configured in the director. View a list of these nodes in the CLI:

[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | manageable         | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | manageable         | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+

In the following output, verify that deploy_kernel and deploy_ramdisk are assigned to the new nodes.


[stack@director ~]$ for i in controller0 compute1 ; do ironic node-show $i| grep -1 deploy; done
| driver                 | pxe_ipmitool                                                          |
| driver_info            | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel': |
|                        | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':             |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                 |
|                        | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                 |
| driver                 | pxe_ipmitool                                                          |
| driver_info            | {u'ipmi_port': u'6321', u'ipmi_username': u'admin', u'deploy_kernel': |
|                        | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':             |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                 |
|                        | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                 |

 

Inspecting the hardware of nodes

The director can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. The director then stores this introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses hardware information for various purposes such as profile tagging, benchmarking, and manual root disk assignment.

IMPORTANT NOTE:

Since we are using VirtualBMC, we cannot use the openstack overcloud node introspect --all-manageable --provide command: power on and off for the virtual machines is addressed by port rather than by IP address, so bulk introspection is not possible here. Instead, introspect the nodes one at a time:
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack overcloud node introspect $node --provide; done
Started Mistral Workflow. Execution ID: 123c4290-82ba-4766-8fdc-65878eac03ac
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: 5b6009a1-855a-492b-9196-9c0291913d2f
Successfully set all nodes to available.
Started Mistral Workflow. Execution ID: 7f9a5d65-c94a-496d-afe2-e649a85d5912
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: ffb4a0c5-3090-4d88-b407-2a8e06035485
Successfully set all nodes to available.

Monitor the progress of the introspection using the following command in a separate terminal window:

[stack@director ~]$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f

Check the introspection status

[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do echo -e "\n"$node; openstack baremetal introspection status $node; done

633f53f7-7b3c-454a-8d39-bd9c4371d248
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+

f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+

 

Collect the introspection data for controller

You can examine the introspection data collected for each individual node. In this example I will show the steps to retrieve this information for the controller node.

[stack@director ~]$ openstack baremetal node show controller0
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                                                                     |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| clean_step             | {}                                                                                                                                        |
| console_enabled        | False                                                                                                                                     |
| created_at             | 2018-10-08T04:55:22+00:00                                                                                                                 |
| driver                 | pxe_ipmitool                                                                                                                              |
| driver_info            | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel': u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':           |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                                |
| driver_internal_info   | {}                                                                                                                                        |
| extra                  | {u'hardware_swift_object': u'extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248'}                                                        |
| inspection_finished_at | None                                                                                                                                      |
| inspection_started_at  | None                                                                                                                                      |
| instance_info          | {}                                                                                                                                        |
| instance_uuid          | None                                                                                                                                      |
| last_error             | None                                                                                                                                      |
| maintenance            | False                                                                                                                                     |
| maintenance_reason     | None                                                                                                                                      |
| name                   | controller0                                                                                                                               |
| ports                  | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports', u'rel': u'self'}, {u'href':                 |
|                        | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports', u'rel': u'bookmark'}]                                     |
| power_state            | power off                                                                                                                                 |
| properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'59', u'cpus': u'2', u'capabilities':                                       |
|                        | u'cpu_vt:true,cpu_aes:true,cpu_hugepages:true,boot_option:local'}                                                                         |
| provision_state        | available                                                                                                                                 |
| provision_updated_at   | 2018-10-08T05:00:44+00:00                                                                                                                 |
| raid_config            | {}                                                                                                                                        |
| reservation            | None                                                                                                                                      |
| states                 | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states', u'rel': u'self'}, {u'href':                |
|                        | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states', u'rel': u'bookmark'}]                                    |
| target_power_state     | None                                                                                                                                      |
| target_provision_state | None                                                                                                                                      |
| target_raid_config     | {}                                                                                                                                        |
| updated_at             | 2018-10-08T05:00:51+00:00                                                                                                                 |
| uuid                   | 633f53f7-7b3c-454a-8d39-bd9c4371d248                                                                                                      |
+------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+

Retrieve the ironic user password from the undercloud-passwords.conf file:

[stack@director ~]$ grep ironic undercloud-passwords.conf
undercloud_ironic_password=f670269d38916530ac00e5f1af6bf8e39619a9f5

Use the ironic password as OS_PASSWORD, and use the hardware_swift_object value from the node's extra field shown above as the object name:

[stack@director ~]$ OS_TENANT_NAME=service OS_USERNAME=ironic OS_PASSWORD=f670269d38916530ac00e5f1af6bf8e39619a9f5 openstack object save ironic-inspector extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248

Check that the object has been saved to the local directory:


[stack@director ~]$ ls -l
total 36
-rw-rw-r--. 1 stack stack  9013 Oct  8 10:34 extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
drwxrwxr-x. 2 stack stack   245 Oct  8 10:14 images
-rw-rw-r--. 1 stack stack   836 Oct  8 10:25 instack-twonodes.json
-rw-------. 1 stack stack   725 Oct  8 09:51 stackrc
-rw-r--r--. 1 stack stack 11150 Oct  8 09:05 undercloud.conf
-rw-rw-r--. 1 stack stack  1650 Oct  8 09:33 undercloud-passwords.conf

Now you can read the data using the command below:

[stack@director ~]$ jq . < extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
[
  [
    "disk",
    "logical",
    "count",
    "1"
  ],
  [
    "disk",
    "vda",
    "size",
    "64"
  ],
  [
    "disk",
    "vda",
    "vendor",
    "0x1af4"
  ],

  *** output trimmed ***
  [
    "system",
    "kernel",
    "version",
    "3.10.0-862.11.6.el7.x86_64"
  ],
  [
    "system",
    "kernel",
    "arch",
    "x86_64"
  ],
  [
    "system",
    "kernel",
    "cmdline",
    "ipa-inspection-callback-url=http://192.168.126.1:5050/v1/continue ipa-inspection-collectors=default,extra-hardware,numa-topology,logs systemd.journald.forward_to_console=yes BOOTIF=52:54:00:36:65:a6 ipa-debug=1 ipa-inspection-dhcp-all-interfaces=1 ipa-collect-lldp=1 initrd=agent.ramdisk"
  ]
]

 

Tagging nodes to profiles

After registering and inspecting the hardware of each node, tag the nodes into specific profiles. These profile tags match your nodes to flavors, and the flavors are in turn assigned to deployment roles. The listings below show the current flavors and the registered nodes:

[stack@director ~]$ openstack flavor list
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| ID                                   | Name          |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| 06ab97b9-6d7e-4d4d-8d6e-c2ba1e781657 | baremetal     | 4096 |   40 |         0 |     1 | True      |
| 17eec9b0-811d-4ff0-a028-29e7ff748654 | block-storage | 4096 |   40 |         0 |     1 | True      |
| 38cbb6df-4852-49d0-bbed-0bddee5173c8 | compute       | 4096 |   40 |         0 |     1 | True      |
| 88345a7e-f617-4514-9aac-0d794a32ee80 | ceph-storage  | 4096 |   40 |         0 |     1 | True      |
| dce1c321-32bb-4abf-bfd5-08f952529550 | swift-storage | 4096 |   40 |         0 |     1 | True      |
| febf52e2-5707-43b3-8f3a-069a957828fb | control       | 4096 |   40 |         0 |     1 | True      |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+

Adding the profile:control and profile:compute capabilities tags the two nodes into their respective profiles. These commands also set the boot_option:local parameter, which defines the boot mode for each node.

[stack@director ~]$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 633f53f7-7b3c-454a-8d39-bd9c4371d248
[stack@director ~]$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5

After completing node tagging, check the assigned profiles or possible profiles:

[stack@director ~]$ openstack overcloud profiles list
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name   | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | available       | control         |                   |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | available       | compute         |                   |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+

You can also check the flavors to confirm they carry the same profile capabilities that we just assigned to the ironic nodes:


[stack@director ~]$ openstack flavor show control -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='control' |
+------------+------------------------------------------------------------------+

[stack@director ~]$ openstack flavor show compute -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='compute' |
+------------+------------------------------------------------------------------+

 

Deploying the Overcloud

The final stage of deploying the overcloud in our OpenStack environment is to run the openstack overcloud deploy command.

[stack@director ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 5dd005ed-67c8-4cef-8d16-c196fc852051
Plan updated
Deploying templates in the directory /tmp/tripleoclient-LDQ2md/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 23e8f1b0-6e4c-444b-9890-d48fef1a96a6
2018-10-08 17:11:42Z [overcloud]: CREATE_IN_PROGRESS  Stack CREATE started
2018-10-08 17:11:42Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:11:43Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:11:43Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS  Stack CREATE started
2018-10-08 17:11:43Z [overcloud.ServiceNetMap.ServiceNetMapValue]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:11:43Z [overcloud.Networks]: CREATE_IN_PROGRESS  state changed


*** Output Trimmed ***

2018-10-08 17:53:25Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS  state changed
2018-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE  state changed
2018-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE  Stack CREATE completed successfully
2018-10-08 17:54:23Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE  state changed
2018-10-08 17:54:23Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  Stack CREATE completed successfully
2018-10-08 17:54:24Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2018-10-08 17:54:24Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

 Stack overcloud CREATE_COMPLETE

Overcloud Endpoint: http://192.168.126.107:5000/v2.0

Our overcloud deployment is now complete. Check the stack status:

[stack@director ~]$ openstack stack list
+--------------------------------------+------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+-----------------+----------------------+--------------+
| 952eeb74-0c29-4cdc-913c-5d834c8ad6c5 | overcloud  | CREATE_COMPLETE | 2018-10-08T17:11:41Z | None         |
+--------------------------------------+------------+-----------------+----------------------+--------------+

To get the list of overcloud nodes

[stack@director ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| 9a8307e3-7e53-44f8-a77b-7e0115ac75aa | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.112 |
| 3667b67f-802f-4c13-ba86-150576cd2b16 | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.113 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
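
TripleO injects the stack user's SSH key into the overcloud nodes, so you can reach them as the heat-admin user on their ctlplane addresses for troubleshooting, for example:

[stack@director ~]$ ssh heat-admin@192.168.126.113 hostname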

You can get your Horizon dashboard credentials from the overcloudrc file in the stack user's home directory (~/overcloudrc):

[stack@director ~]$ cat overcloudrc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export NOVA_VERSION=1.1
export OS_PROJECT_NAME=admin
export OS_PASSWORD=tZQDQsbGG96t4KcXYfAM22BzN
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export no_proxy=,192.168.126.107,192.168.126.107
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.126.107:5000/v2.0
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"

 

You can now log in to the Horizon dashboard at 192.168.126.107 using the OS_USERNAME and OS_PASSWORD values from the overcloudrc file.
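
You can also verify the overcloud from the CLI by sourcing overcloudrc and running any command against the overcloud endpoint, for example:

[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ openstack service list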


 

Lastly, I hope the steps in this article to configure a TripleO undercloud and deploy the overcloud in OpenStack were helpful. Let me know your suggestions and feedback in the comment section.