Top 5 eCommerce Posts for November

How to Connect Your Android Phone to Ubuntu Wirelessly
Learn how to connect your Android phone to Ubuntu using GSConnect to transfer files, see notifications, or use your phone touchscreen as a mouse.
This post, How to Connect Your Android Phone to Ubuntu Wirelessly, was written by Joey Sneddon and first appeared on OMG! Ubuntu!.
How to inspect the certificate of a mail server over a CLI
If you ever need to inspect the certificate of a remote SMTP server, you can use the openssl CLI tool.
If you need to check STARTTLS:
openssl s_client -connect mail.example.com:25 -starttls smtp
Or, for a standard secure SMTP port:
openssl s_client -connect mail.example.com:465
To save the certificate to a file, just redirect the output:
openssl s_client -connect mail.example.com:25 -starttls smtp > mail.example.com.crt
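Once the certificate is saved, you can decode its details (issuer, subject, validity dates) with the standard openssl x509 command; it will generally skip the extra s_client output surrounding the PEM block:
openssl x509 -in mail.example.com.crt -noout -text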
You can also check SMTP TLS using MX Toolbox or Check TLS.
The post How to inspect the certificate of a mail server over a CLI appeared first on seanthegeek.net.
PHP Security Advent Calendar 2018

Here For Life: Alexandria & Alomere Health
One of the best parts of my job is getting to talk about Alomere Health—and the entire Alexandria area—to talented physicians who are considering joining our team. People are consistently amazed by what we have going here. Medical professionals come from all over to see our world-class, nationally acclaimed hospital—and they leave feeling blown away by everything this community has to offer. Many end up moving here to stay. They are thrilled that, here, they can build a career they’re proud of, with the lifestyle they want, in a place they love.
The Best Of The Best
“How the heck are all these great things happening in Alexandria?” This is a question I hear all the time. And I know it’s a question our business community, our school district, and community nonprofits hear too. Hospitals around the state and country have asked us how we’ve built such a thriving, successful healthcare organization in our size market. How do we have full-time neurosurgery, a growing number of highly skilled providers, a nationally acclaimed surgical team and the latest 3D mammography equipment in the world, with full-time dermatology coming in 2019? My answer is simple: It’s our commitment. And our community.
This Home Is Our Home
As part of that, we have the privilege of supporting many important things happening around here, as individuals and as an organization. From coaching football and leading Girl Scout troops, to helping with the basketball boosters and the Annual Band Festival, to supporting the Jaycees and Kiwanis clubs, Habitat for Humanity, Someplace Safe, and so much more. One of our closest partnerships is with the Alexandria Area High School.
Every semester our physicians, nurses and advanced practice providers teach medical procedures like intubation, applying casts and suturing. It’s a rare opportunity for high school students to gain early experiences in healthcare.
We support this community because we live here, we love it and we want it to be great—just like you.
Sincerely,
Eddie Reif,
Alomere Health Director of Community Relations and Development
The post Here For Life: Alexandria & Alomere Health appeared first on Alomere Health News.

phpBB 3.2.3: Phar Deserialization to RCE

OpenStack Command Line Cheat Sheet (Beginner’s Guide)
In this article I will share a basic OpenStack command line cheat sheet that can be useful for beginners getting started with OpenStack.
Identity (Keystone)
List the existing users
openstack user list
Show the properties of a given user
openstack user show <username>
Create a user William with password redhat, email William@example.com, and part of project Production
openstack user create --project Production --email William@example.com --password redhat --enable William
Assign admin role to user William
openstack role add --user William --project Production admin
Check the roles assigned to user William
openstack role assignment list --user William --project Production
openstack role assignment list --user William --project Production --names
Enable or disable user William
openstack user set --disable William
openstack user set --enable William
Flavor
Create a flavor named m1.petite with 1 vCPU, 1 GB RAM, a 10 GB disk, and not publicly accessible
openstack flavor create --id auto --vcpus 1 --ram 1024 --disk 10 --private m1.petite
Assign the flavor m1.petite to the Engineering project
openstack flavor set --project Engineering m1.petite
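If you want to verify the flavor after these changes, showing it should list the vCPU, RAM, disk, and access properties:
openstack flavor show m1.petite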
Security Group
Create security group with name ssh
openstack security group create ssh
Add rules to allow SSH and ICMP traffic in the ssh security group
openstack security group rule create --ingress --protocol tcp --dst-port 22 ssh
openstack security group rule create --ingress --protocol icmp ssh
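To double-check what ended up in the group, you can list its rules and confirm the TCP/22 and ICMP entries are present:
openstack security group rule list ssh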
Keypair
Create a keypair named webkey and save the private key in your home folder
openstack keypair create webkey > ~/webkey.pem
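The redirected private key is created with your shell's default permissions, so before using it with ssh you will likely need to restrict it:
chmod 600 ~/webkey.pem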
Glance Image
Create a glance image webimage from the file osp-small.qcow2 available inside /tmp
openstack image create --disk-format qcow2 --file /tmp/osp-small.qcow2 webimage
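To confirm the image was uploaded and is active:
openstack image show webimage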
Neutron (Network)
Create a public and a private network under the Engineering project
openstack network create --external --provider-network-type flat --provider-physical-network datacentre --project Engineering --enable --no-share public
openstack network create --internal --project Engineering --enable --no-share private
Create a subnet named external on the public network with range 172.25.250.0/24, gateway 172.25.250.254, and an allocation pool of 172.25.250.100-150
openstack subnet create --network public --no-dhcp --project Engineering --subnet-range 172.25.250.0/24 --gateway 172.25.250.254 --allocation-pool start=172.25.250.100,end=172.25.250.150 external
Create a subnet named internal on the private network with range 192.168.1.0/24
openstack subnet create --network private --project Engineering --subnet-range 192.168.1.0/24 internal
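At this point you can verify the networks and subnets that were created:
openstack network list
openstack subnet list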
Create and configure a router named Router1
openstack router create Router1
openstack router add subnet Router1 internal
neutron router-gateway-set Router1 public
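To review the router and its external gateway configuration:
openstack router show Router1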
Server (Instance)
Create an instance/server using the flavor m1.petite, key webkey, security group ssh, and the private network
openstack server create --image webimage --flavor m1.petite --key-name webkey --security-group ssh --nic net-id=private webserver
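To follow the build and confirm the instance reaches ACTIVE status:
openstack server show webserver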
Create a floating IP for the instance
openstack ip floating create public
Assign the floating IP to the webserver instance
openstack ip floating add 172.25.250.100 webserver
Block Storage
Create a 2 GB block storage volume named storage
openstack volume create --size 2 --project Engineering storage
Attach the storage volume to the webserver instance as /dev/sdb
openstack server add volume --device /dev/sdb webserver storage
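A quick way to confirm the attachment is to list the volumes; the storage volume should show as in-use on webserver:
openstack volume list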
Snapshot
Before creating a snapshot, detach the volume from the webserver instance
openstack server remove volume webserver storage
Here strgsnap is the snapshot name and storage is the name of the volume that was attached
openstack snapshot create --name strgsnap storage
Attach the volume back to webserver after taking the snapshot
openstack server add volume --device /dev/sdb webserver storage
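To list the snapshots that were taken (on newer clients this may be openstack volume snapshot list instead):
openstack snapshot list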
Guestfish
Edit an image using guestfish. First install the libguestfs tools
yum install -y libguestfs-tools-c
Here osd-webserver.qcow2 is the image which we will edit
$ guestfish -i --network -a /root/osd-webserver.qcow2
> command "yum -y install httpd"
> command "systemctl enable httpd"
> command "systemctl is-enabled httpd"
enabled
> command "touch /var/www/html/index.html"
> edit /var/www/html/index.html
> command "ls -l /var/www/html/index.html"
-rw-r--r-- 1 root root 20 Oct 18 16:14 /var/www/html/index.html
> command "sudo useradd Sheila"
> command "sudo grep Sheila /etc/shadow"
> selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /
> exit
Above we are installing httpd, enabling the service, creating a passwordless user Sheila, creating a dummy index.html file, and relabeling the SELinux contexts, which is the most important part. Without this your image will not work.
Lastly, I hope this OpenStack command line cheat sheet was helpful. Let me know your suggestions and feedback in the comment section.

How to configure or build ceph storage cluster in Openstack ( CentOS 7 )
In my last article I shared the steps to configure a controller node in OpenStack manually; in this article I will share the steps to configure and build a Ceph storage cluster using CentOS 7. Ceph is an open source, scalable, software-defined object store system which provides object, block, and file system storage in a single platform. Ceph can self-heal, self-manage, and has no single point of failure. It is a perfect replacement for a traditional storage system and an efficient storage solution for the object and block storage of cloud environments.
Before we start with the steps to build the Ceph storage cluster, let us understand some basic terminology.
Monitors:
A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.
Managers:
A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host python-based plugins to manage and expose Ceph cluster information, including a web-based dashboard and REST API. At least two managers are normally required for high availability.
Ceph OSDs:
A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least 3 Ceph OSDs are normally required for redundancy and high availability.
My infrastructure details:
We will build the Ceph storage cluster with two storage nodes, one OSD per storage node, and one admin node where we will perform most of our tasks. In total we have three virtual machines running on Oracle VirtualBox on top of my Windows laptop.
Below is the configuration I have used for my admin and storage nodes
| | admin | storage1 | storage2 |
|---|---|---|---|
| OS | CentOS 7 | CentOS 7 | CentOS 7 |
| Disk 1 | 10 GB | 10 GB | 10 GB |
| Disk 2 | 4 GB | 4 GB | 4 GB |
| RAM | 4 GB | 4 GB | 4 GB |
| vCPU | 2 | 2 | 2 |
| Network | 10.0.2.10 | 10.0.2.13 | 10.0.2.14 |
| Hostname | controller | storage1 | storage2 |
About ceph-deploy tool
ceph-deploy is the official tool to deploy Ceph clusters. It works on the principle of having an admin node with SSH access (without password) to all machines in your Ceph cluster; it also holds a copy of the Ceph configuration file. Every time you carry out a deployment action, it uses SSH to connect to your Ceph nodes to carry out the necessary steps. Although the ceph-deploy tool is an entirely supported method, which will leave you with a perfectly functioning Ceph cluster, ongoing management of Ceph will not be as easy as desired. Larger-scale Ceph clusters will also incur a lot of management overhead if ceph-deploy is used. For this reason, it is recommended that ceph-deploy be limited to test or small-scale production clusters, although as you will see, an orchestration tool allows the rapid deployment of Ceph and is probably better suited for test environments where you might need to continually build new Ceph clusters.
Install pre-requisite rpms
To get the required rpms to build the Ceph storage cluster, we need to install the epel repo and enable the ceph repo.
Install the latest available epel repo on your admin node
# yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Add the Ceph repository to your yum configuration file at /etc/yum.repos.d/ceph.repo as shown below. Replace the release name in the baseurl (mimic here) with the stable Ceph release you want to install.
# cat /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
Next install ceph-deploy:
# yum -y install ceph-deploy
Configure NTP
To build the Ceph storage cluster it is very important that all the nodes in the cluster are in time sync with each other, so NTP is essential. Install the ntp package and configure /etc/ntp.conf with your nearest servers
# yum -y install ntp
My server pool list, which I have added to my ntp.conf, is as below
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
Next, start and enable the ntp daemon on both storage nodes
# systemctl start ntpd
# systemctl enable ntpd
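Once ntpd is running, you can confirm that each node is actually syncing against the pool servers:
# ntpq -p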
Create ceph user
The ceph-deploy utility must log in to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.
We will create a new user ceph on both the storage nodes
# useradd ceph
# echo redhat | passwd --stdin ceph
Give sudo permission to the “ceph” user
# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph # chmod 0440 /etc/sudoers.d/ceph
Enable Passwordless SSH
Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.
Generate ssh keys on the admin node
# ssh-keygen -t rsa
Next copy the public key to the target storage nodes
# ssh-copy-id ceph@storage1
# ssh-copy-id ceph@storage2
Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage.
# cat ~/.ssh/config
Host storage1
    Hostname storage1
    User ceph
Host storage2
    Hostname storage2
    User ceph
Validate connectivity
Now that passwordless SSH is set up, it is time to validate the connectivity
# ssh ceph@storage1
Last login: Sat Nov 17 12:48:38 2018 from 10.0.2.10
[ceph@storage1 ~]$ logout
Connection to storage1 closed.
# ssh ceph@storage2
Last login: Sat Nov 17 12:30:31 2018 from 10.0.2.13
[ceph@storage2 ~]$ logout
Connection to storage2 closed.
So all looks good.
Open firewall ports
During the POC stage you can also stop and disable the firewall to make sure the configuration works, but before going to production the firewall must be enabled again.
On monitors
# firewall-cmd --zone=public --add-service=ceph-mon --permanent
On OSDs
# firewall-cmd --zone=public --add-service=ceph --permanent
Once you have finished configuring firewalld with the --permanent
flag, you can make the changes live immediately without rebooting:
# firewall-cmd --reload
Handle Selinux
On CentOS and RHEL, SELinux is set to Enforcing by default. To streamline your installation, we recommend setting SELinux to Permissive or disabling it entirely and ensuring that your installation and cluster are working properly before hardening your configuration. To set SELinux to Permissive, execute the following:
# setenforce 0
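Note that setenforce 0 only changes the running mode; if you also want SELinux to stay permissive across reboots while building the cluster, you can optionally update the config file as well (an extra convenience step, not strictly required here):
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config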
Ensure that your package manager has priority/preferences packages installed and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to enable optional repositories.
# yum install yum-plugin-priorities
Create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster.
# mkdir my-cluster
and navigate inside this directory, as the ceph-deploy tool will create all the required configuration files in your current working directory
# cd my-cluster
Steps to build ceph storage cluster
The ceph-deploy tool operates out of a directory on an admin node. On your admin node, from the directory you created for holding your configuration details, perform the following steps using ceph-deploy.
Create the cluster:
# ceph-deploy new storage1 storage2
Check the output of ceph-deploy with ls and cat in the current directory. You should see a Ceph configuration file (ceph.conf), a monitor secret keyring (ceph.mon.keyring), and a log file for the new cluster.
If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file. See the Network Configuration Reference for details.
public network = 10.1.2.0/24
to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network
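For reference, here is a minimal sketch of how the [global] section of the generated ceph.conf could look in this lab, assuming the 10.0.2.0/24 addressing used for the storage nodes (the fsid and monitor entries are produced by ceph-deploy new and are shown only as placeholders):
[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = storage1, storage2
mon_host = 10.0.2.13,10.0.2.14
public network = 10.0.2.0/24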
Install Ceph packages from your admin node:
# ceph-deploy install storage1 storage2
<< Output trimmed >>
[storage2][DEBUG ]
[storage2][DEBUG ] Complete!
[storage2][INFO ] Running command: sudo ceph --version
[storage2][DEBUG ] ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
The ceph-deploy utility will install Ceph on each node. Check the ceph version from your storage nodes
[root@storage1 ~]# ceph --version
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
Deploy the initial monitor(s) and gather the keys:
# ceph-deploy mon create-initial
<< Output trimmed >>
[storage1][DEBUG ] fetch remote file
[storage1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.storage1.asok mon_status
[storage1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.admin
[storage1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-mds
[storage1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-mgr
[storage1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-osd
[storage1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpYahdve
Once you complete the process, your local directory should have the following keyrings:
ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
ceph.bootstrap-rbd.keyring
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
# ceph-deploy admin storage1 storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin storage1 storage2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['storage1', 'storage2']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to storage1
[storage1][DEBUG ] connection detected need for sudo
[storage1][DEBUG ] connected to host: storage1
[storage1][DEBUG ] detect platform information from remote host
[storage1][DEBUG ] detect machine type
[storage1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to storage2
[storage2][DEBUG ] connection detected need for sudo
[storage2][DEBUG ] connected to host: storage2
[storage2][DEBUG ] detect platform information from remote host
[storage2][DEBUG ] detect machine type
[storage2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
Deploy a manager daemon. (Required only for luminous+ builds):
# ceph-deploy mgr create storage1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create storage1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('storage1', 'storage1')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts storage1:storage1
[storage1][DEBUG ] connection detected need for sudo
[storage1][DEBUG ] connected to host: storage1
[storage1][DEBUG ] detect platform information from remote host
[storage1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.4.1708 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to storage1
[storage1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[storage1][WARNIN] mgr keyring does not exist yet, creating one
[storage1][DEBUG ] create a keyring file
[storage1][DEBUG ] create path recursively if it doesn't exist
[storage1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.storage1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-storage1/keyring
[storage1][INFO ] Running command: sudo systemctl enable ceph-mgr@storage1
[storage1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@storage1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[storage1][INFO ] Running command: sudo systemctl start ceph-mgr@storage1
[storage1][INFO ] Running command: sudo systemctl enable ceph.target
Add OSDs:
# ceph-deploy osd create --data /dev/sdc storage1
<< Output trimmed >>
[storage1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host storage1 is now ready for osd use.
# ceph-deploy osd create --data /dev/sdc storage2
<< Output trimmed >>
[storage2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host storage2 is now ready for osd use.
So we are all done here. We have now brought up a functioning Ceph cluster from scratch and incrementally scaled it without taking it down. Next we'll explore a number of operations we can perform to customise, populate, and manage Ceph clusters.
To check the cluster health
[root@storage2 ~]# ceph health
HEALTH_OK
Check the OSD and cluster status
[root@storage1 ~]# ceph -s
  cluster:
    id:     454f796c-ed9f-4242-89b4-e0d43f740ffd
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum storage1,storage2
    mgr: storage1(active)
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 6.0 GiB / 8.0 GiB avail
    pgs:
OSD dump
The ceph osd dump command shows a wealth of lower-level information about our clusters. This includes a list of pools with their attributes and a list of OSDs each including reweight adjustment, up/in status, and more. This command is mostly used in unusual troubleshooting situations.
[root@storage2 ~]# ceph osd dump
epoch 15
fsid 454f796c-ed9f-4242-89b4-e0d43f740ffd
created 2018-11-17 13:19:39.387450
modified 2018-11-17 16:14:46.813272
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release mimic
max_osd 3
osd.0 up in weight 1 up_from 11 up_thru 0 down_at 10 last_clean_interval [5,9) 10.0.2.13:6800/1236 10.0.2.13:6801/1236 10.
osd.1 up in weight 1 up_from 13 up_thru 0 down_at 12 last_clean_interval [0,0) 10.0.2.14:6800/1079 10.0.2.14:6801/1079 10.
CRUSH dump
This command presents much the same information as ceph osd tree, though in a different, JSON, format.
# ceph osd crush dump
OSD list
The ceph osd ls command simply returns a list of the OSD numbers currently deployed within the cluster.
[root@storage2 ~]# ceph osd ls
0
1
Get the quorum status
[root@storage2 ~]# ceph quorum_status --format json-pretty
{
"election_epoch": 8,
"quorum": [
0,
1
],
"quorum_names": [
"storage1",
"storage2"
],
"quorum_leader_name": "storage1",
"monmap": {
"epoch": 1,
"fsid": "454f796c-ed9f-4242-89b4-e0d43f740ffd",
"modified": "2018-11-17 13:17:57.395486",
"created": "2018-11-17 13:17:57.395486",
"features": {
"persistent": [
"kraken",
"luminous",
"mimic",
"osdmap-prune"
],
"optional": []
},
"mons": [
{
"rank": 0,
"name": "storage1",
"addr": "10.0.2.13:6789/0",
"public_addr": "10.0.2.13:6789/0"
},
{
"rank": 1,
"name": "storage2",
"addr": "10.0.2.14:6789/0",
"public_addr": "10.0.2.14:6789/0"
}
]
}
}
Lastly, I hope the steps in this article to build a Ceph storage cluster in OpenStack using CentOS 7 Linux were helpful. Let me know your suggestions and feedback in the comment section.