Month: November 2018

Top 5 eCommerce Posts for November

5 Content Marketing Ideas for December 2018 – Practical Ecommerce
Connect with potential customers in December 2018 with humor, a history of eggnog, and holiday happenings.

7 Essential Elements For Creating A High-Converting Landing Page – Bootstrap Business
Your landing page is one of the most important pages of your entire website. It’s where you […]

PHP Security Advent Calendar 2018

In our first calendar edition in 2016, we analyzed exceptional vulnerabilities in some of the most popular open source PHP applications. Last year, we released 24 PHP security challenges with a hidden security pitfall in each day’s code challenge. This year we would like to once again give something back to the great PHP and infosec community and release another advent calendar with 24 security surprises.

phpBB 3.2.3: Phar Deserialization to RCE

Impact

phpBB is one of the oldest and most popular bulletin board applications. An attacker who aims to take over a board running phpBB3 will usually attempt to gain access to the admin control panel by means of bruteforcing, phishing, or XSS vulnerabilities in plugins that the target site has installed. But plugins cannot be installed directly in the admin panel, and there is no other feature that can be abused by administrators to execute arbitrary PHP code.
OpenStack Command Line Cheat Sheet (Beginner’s Guide)

In this article I will share a basic OpenStack command line cheat sheet that can be useful for beginners getting started with OpenStack.

OpenStack command line cheat sheet

 

Identity (Keystone)

List the existing users

openstack user list

Show the properties of a given user

openstack user show <username>

Create a user William with password redhat, email William@example.com, and membership in the Production project


openstack user create --project Production --email William@example.com --password redhat --enable William

Assign admin role to user William

openstack role add --user William --project Production admin

Check the assigned role to user William

openstack role assignment list --user William --project Production

openstack role assignment list --user William --project Production --names

Enable or disable user William

openstack user set --disable William

openstack user set --enable William

 

Flavor

Create a flavor named m1.petite with 1 vCPU, 1 GB RAM, a 10 GB disk, which must not be publicly accessible

openstack flavor create --id auto --vcpus 1 --ram 1024 --disk 10 --private m1.petite

Assign flavor m1.petite to Engineering project


openstack flavor set --project Engineering m1.petite

 

Security Group

Create security group with name ssh

openstack security group create ssh

Add a rule to allow ssh and icmp in the ssh security group

openstack security group rule create --ingress --protocol tcp --dst-port 22 ssh

openstack security group rule create --ingress --protocol icmp ssh
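To double-check that both rules were added, you can list the rules in the group (a quick verification step, not part of the original recipe):

openstack security group rule list ssh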

 

Keypair

Create a keypair with name webkey in your home folder

openstack keypair create webkey > ~/webkey.pem
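If you later use this key with ssh, note that the private key file typically needs restrictive permissions before ssh will accept it:

chmod 600 ~/webkey.pem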

 

Glance Image

Create a glance image webimage using a file osp-small.qcow2 available inside /tmp

openstack image create --disk-format qcow2 --file /tmp/osp-small.qcow2 webimage
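You can verify that the image was registered with:

openstack image list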

 

Neutron (Network)

Create a public and a private network under the Engineering project

openstack network create --external --provider-network-type flat --provider-physical-network datacentre --project Engineering --enable --no-share public

openstack network create --internal --project Engineering --enable --no-share private

Create the external network with subnet 172.25.250.0/24, gateway 172.25.250.254, and an allocation pool of 172.25.250.100-150


openstack subnet create --network public --no-dhcp --project Engineering --subnet-range 172.25.250.0/24 --gateway 172.25.250.254 --allocation-pool start=172.25.250.100,end=172.25.250.150 external

Create internal network with subnet range 192.168.1.0/24

openstack subnet create --network private --project Engineering --subnet-range 192.168.1.0/24 internal

Create and configure a router with name Router1

openstack router create Router1

openstack router add subnet Router1 internal

neutron router-gateway-set Router1 public

 

Server (Instance)

Create an instance/server using the flavor m1.petite, keypair webkey, security group ssh, and the private network (routed through Router1)

openstack server create --image webimage --flavor m1.petite --key-name webkey --security-group ssh --nic net-id=private webserver
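Once created, you can confirm that the instance reaches ACTIVE status:

openstack server show webserver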

Create a floating IP for the instance

openstack ip floating create public

Assign floating IP to the webserver instance


openstack ip floating add 172.25.250.100 webserver
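Note that on newer python-openstackclient releases the ip floating commands were renamed; the equivalents there would be:

openstack floating ip create public

openstack server add floating ip webserver 172.25.250.100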

 

Block Storage

Create a 2GB block storage volume named storage

openstack volume create --size 2 --project Engineering storage

Attach the storage to the webserver instance as /dev/sdb

openstack server add volume --device /dev/sdb webserver storage

 

Snapshot

Before creating a snapshot, detach the volume from the webserver instance

openstack server remove volume webserver storage

Here strgsnap is the snapshot name and storage is the name of the volume attached

openstack snapshot create --name strgsnap storage

Attach volume back to webserver after taking the snapshot


openstack server add volume --device /dev/sdb webserver storage
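To confirm the snapshot was taken, list the snapshots (on newer clients the command is openstack volume snapshot list):

openstack snapshot list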

 

Guestfish

To edit an image using guestfish, first install the libguestfs tools

yum install -y libguestfs-tools-c

Here osd-webserver.qcow2 is the image which we will edit

$ guestfish -i --network -a /root/osd-webserver.qcow2

> command "yum -y install httpd"
> command "systemctl enable httpd"

> command "systemctl is-enabled httpd"
enabled

> command "touch /var/www/html/index.html"

> edit /var/www/html/index.html 
> command "ls -l /var/www/html/index.html"
-rw-r--r-- 1 root root 20 Oct 18 16:14 /var/www/html/index.html

> command "sudo useradd Sheila"
> command "sudo grep Sheila /etc/shadow"

> selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /
> exit

Above we are installing httpd, enabling the service, creating a passwordless user Sheila, creating a dummy index.html file, and relabeling the SELinux contexts, which is the most important part. Without the relabel your image will not work.
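After editing, the modified image can be uploaded to Glance like any other image; the name webserver-updated below is just an example:

openstack image create --disk-format qcow2 --file /root/osd-webserver.qcow2 webserver-updated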

 

Lastly, I hope this OpenStack command line cheat sheet was helpful. Let me know your suggestions and feedback in the comment section.

How to configure or build ceph storage cluster in Openstack ( CentOS 7 )

In my last article I shared the steps to configure the controller node in OpenStack manually; in this article I will share the steps to configure and build a Ceph storage cluster on CentOS 7. Ceph is an open source, scalable, software-defined object store system which provides object, block, and file system storage in a single platform. Ceph can self-heal and self-manage, and has no single point of failure. It is a perfect replacement for a traditional storage system and an efficient storage solution for the object and block storage of cloud environments.


Before we start with the steps to build the Ceph storage cluster, let us understand some basic terminology

Monitors:

A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.

Managers:

A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host python-based plugins to manage and expose Ceph cluster information, including a web-based dashboard and REST API. At least two managers are normally required for high availability.


Ceph OSDs:

A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least 3 Ceph OSDs are normally required for redundancy and high availability.

 

My infrastructure details

We will build the Ceph storage cluster with two storage nodes, one OSD per node, plus one admin node where we will perform most of our tasks. In total there are three virtual machines running on Oracle VirtualBox on top of my Windows laptop.

Below is the configuration I have used for the admin and storage nodes

           admin        storage1     storage2
OS         CentOS 7     CentOS 7     CentOS 7
Disk 1     10 GB        10 GB        10 GB
Disk 2     4 GB         4 GB         4 GB
RAM        4 GB         4 GB         4 GB
vCPU       2            2            2
Network    10.0.2.10    10.0.2.13    10.0.2.14
Hostname   controller   storage1     storage2
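ceph-deploy reaches the nodes by hostname, so each node must be able to resolve the others. A minimal /etc/hosts for this layout (an assumption based on the IP plan above) would be:

10.0.2.10  controller
10.0.2.13  storage1
10.0.2.14  storage2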

 

About ceph-deploy tool

ceph-deploy is the official tool to deploy Ceph clusters. It works on the principle of having an admin node with passwordless SSH access to all machines in your Ceph cluster; it also holds a copy of the Ceph configuration file. Every time you carry out a deployment action, it uses SSH to connect to your Ceph nodes to carry out the necessary steps. Although the ceph-deploy tool is an entirely supported method which will leave you with a perfectly functioning Ceph cluster, ongoing management of Ceph will not be as easy as desired, and larger-scale Ceph clusters will incur a lot of management overhead if ceph-deploy is used. For this reason, it is recommended that ceph-deploy be limited to test or small-scale production clusters, although as you will see, an orchestration tool allows the rapid deployment of Ceph and is probably better suited for test environments where you might need to continually build new Ceph clusters.

 

Install prerequisite rpms

To get the rpms required to build the Ceph storage cluster, we need to install the EPEL repo and enable the Ceph repo.
Install the latest available EPEL repo on your admin node:


# yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Add the Ceph repository to your yum configuration file at /etc/yum.repos.d/ceph.repo as shown below. This example uses the mimic release; to track a different stable Ceph release, replace mimic in the baseurl accordingly.

# cat /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

Next install ceph-deploy:

# yum -y install ceph-deploy

 

Configure NTP

To build a Ceph storage cluster it is very important that all the nodes in the cluster are time-synchronized, so NTP is essential. Install the ntp package and configure your ntp.conf with your nearest servers

# yum -y install ntp

The server pool list below is what I added to my ntp.conf

server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org

Next start and enable the ntp daemon on both storage nodes

# systemctl start ntpd

# systemctl enable ntpd
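Once ntpd is running, you can verify that it has found its peers with:

# ntpq -p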

 

Create ceph user

The ceph-deploy utility must login to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.


We will create a new user ceph on both storage nodes

# useradd ceph
# echo redhat | passwd --stdin ceph

Give sudo permission to the “ceph” user

# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
# chmod 0440 /etc/sudoers.d/ceph

 

Enable Passwordless SSH

Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.

Generate ssh-keys on the admin node

# ssh-keygen -t rsa

Next copy the public key to the target storage nodes


# ssh-copy-id ceph@storage1
# ssh-copy-id ceph@storage2

Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage.

# cat ~/.ssh/config
Host storage1
   Hostname storage1
   User ceph
Host storage2
   Hostname storage2
   User ceph

 

Validate connectivity

Now that passwordless SSH is set up, it is time to validate the connectivity

# ssh ceph@storage1
Last login: Sat Nov 17 12:48:38 2018 from 10.0.2.10
[ceph@storage1 ~]$ logout
Connection to storage1 closed.

# ssh ceph@storage2
Last login: Sat Nov 17 12:30:31 2018 from 10.0.2.13
[ceph@storage2 ~]$ logout
Connection to storage2 closed.

So all looks good.

 

Open firewall ports

During your POC stage you can also stop and disable the firewall to verify that the configuration works without it, but before going to production the firewall must be enabled again

On monitors


# firewall-cmd --zone=public --add-service=ceph-mon --permanent

On OSDs

# firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the --permanent flag, you can make the changes live immediately without rebooting:

# firewall-cmd --reload

 

Handle SELinux

On CentOS and RHEL, SELinux is set to Enforcing by default. To streamline your installation, we recommend setting SELinux to Permissive or disabling it entirely and ensuring that your installation and cluster are working properly before hardening your configuration. To set SELinux to Permissive, execute the following:

# setenforce 0

Ensure that your package manager has priority/preferences packages installed and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to enable optional repositories.

# yum install yum-plugin-priorities

Create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster.

# mkdir my-cluster

and navigate into this directory, as the ceph-deploy tool will create all the required configuration files in your current working directory


# cd my-cluster

 

Steps to build ceph storage cluster

The ceph-deploy tool operates out of a directory on an admin node.

On your admin node from the directory you created for holding your configuration details, perform the following steps using ceph-deploy.

Create the cluster:

# ceph-deploy new storage1 storage2

Check the output of ceph-deploy with ls and cat in the current directory. You should see a Ceph configuration file (ceph.conf), a monitor secret keyring (ceph.mon.keyring), and a log file for the new cluster.

If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file. See the Network Configuration Reference for details.

public network = 10.1.2.0/24

to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network
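In the environment described in this article the nodes sit on 10.0.2.0/24, so the matching entry (an assumption based on the IP plan in the table above) would be:

[global]
public network = 10.0.2.0/24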

 

Install Ceph packages from your admin node:

# ceph-deploy install storage1 storage2
<< Output trimmed >>
[storage2][DEBUG ]
[storage2][DEBUG ] Complete!
[storage2][INFO  ] Running command: sudo ceph --version
[storage2][DEBUG ] ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
NOTE:

The ceph-deploy utility will install Ceph on each node.

Check the ceph version from your storage nodes

[root@storage1 ~]# ceph --version
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)

Deploy the initial monitor(s) and gather the keys:


# ceph-deploy mon create-initial
<< Output trimmed >>

[storage1][DEBUG ] fetch remote file
[storage1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.storage1.asok mon_status
[storage1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.admin
[storage1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-mds
[storage1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-mgr
[storage1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-osd
[storage1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-storage1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpYahdve

Once you complete the process, your local directory should have the following keyrings:

ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
ceph.bootstrap-rbd.keyring

Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

# ceph-deploy admin storage1 storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin storage1 storage2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['storage1', 'storage2']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to storage1
[storage1][DEBUG ] connection detected need for sudo
[storage1][DEBUG ] connected to host: storage1
[storage1][DEBUG ] detect platform information from remote host
[storage1][DEBUG ] detect machine type
[storage1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to storage2
[storage2][DEBUG ] connection detected need for sudo
[storage2][DEBUG ] connected to host: storage2
[storage2][DEBUG ] detect platform information from remote host
[storage2][DEBUG ] detect machine type
[storage2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

 

Deploy a manager daemon. (Required only for luminous+ builds):

# ceph-deploy mgr create storage1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create storage1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('storage1', 'storage1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts storage1:storage1
[storage1][DEBUG ] connection detected need for sudo
[storage1][DEBUG ] connected to host: storage1
[storage1][DEBUG ] detect platform information from remote host
[storage1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.4.1708 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to storage1
[storage1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[storage1][WARNIN] mgr keyring does not exist yet, creating one
[storage1][DEBUG ] create a keyring file
[storage1][DEBUG ] create path recursively if it doesn't exist
[storage1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.storage1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-storage1/keyring
[storage1][INFO  ] Running command: sudo systemctl enable ceph-mgr@storage1
[storage1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@storage1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[storage1][INFO  ] Running command: sudo systemctl start ceph-mgr@storage1
[storage1][INFO  ] Running command: sudo systemctl enable ceph.target

 

Add OSDs:

NOTE:

For the purposes of these instructions, we assume you have an unused disk in each node called /dev/sdc. Be sure that the device is not currently in use and does not contain any important data.
# ceph-deploy osd create --data /dev/sdc storage1
<< Output trimmed >>

[storage1][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host storage1 is now ready for osd use.
# ceph-deploy osd create --data /dev/sdc storage2
<< Output trimmed >>

[storage2][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host storage2 is now ready for osd use.

So we are all done here.

 

We have now brought up a functioning Ceph cluster from scratch. Next we’ll explore a number of operations we can perform to customise, populate, and manage Ceph clusters.

 

To check the cluster health

[root@storage2 ~]# ceph health
HEALTH_OK

Check the OSD and cluster status


[root@storage1 ~]# ceph -s
  cluster:
    id:     454f796c-ed9f-4242-89b4-e0d43f740ffd
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum storage1,storage2
    mgr: storage1(active)
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   2.0 GiB used, 6.0 GiB / 8.0 GiB avail
    pgs:

 

OSD dump

The ceph osd dump command shows a wealth of lower-level information about our clusters. This includes a list of pools with their attributes and a list of OSDs each including reweight adjustment, up/in status, and more. This command is mostly used in unusual troubleshooting situations.

[root@storage2 ~]# ceph osd dump
epoch 15
fsid 454f796c-ed9f-4242-89b4-e0d43f740ffd
created 2018-11-17 13:19:39.387450
modified 2018-11-17 16:14:46.813272
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release mimic
max_osd 3
osd.0 up   in  weight 1 up_from 11 up_thru 0 down_at 10 last_clean_interval [5,9) 10.0.2.13:6800/1236 10.0.2.13:6801/1236 10.
osd.1 up   in  weight 1 up_from 13 up_thru 0 down_at 12 last_clean_interval [0,0) 10.0.2.14:6800/1079 10.0.2.14:6801/1079 10.

 

CRUSH dump

This command presents much the same information as ceph osd tree, though in a different (JSON) format.

# ceph osd crush dump
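For a quicker, human-readable view of the same hierarchy, you can also run:

# ceph osd tree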

 

OSD list

The ceph osd ls command simply returns a list of the OSD numbers currently deployed within the cluster.

[root@storage2 ~]# ceph osd ls
0
1

 

Get the quorum status

[root@storage2 ~]# ceph quorum_status --format json-pretty

{
    "election_epoch": 8,
    "quorum": [
        0,
        1
    ],
    "quorum_names": [
        "storage1",
        "storage2"
    ],
    "quorum_leader_name": "storage1",
    "monmap": {
        "epoch": 1,
        "fsid": "454f796c-ed9f-4242-89b4-e0d43f740ffd",
        "modified": "2018-11-17 13:17:57.395486",
        "created": "2018-11-17 13:17:57.395486",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "storage1",
                "addr": "10.0.2.13:6789/0",
                "public_addr": "10.0.2.13:6789/0"
            },
            {
                "rank": 1,
                "name": "storage2",
                "addr": "10.0.2.14:6789/0",
                "public_addr": "10.0.2.14:6789/0"
            }
        ]
    }
}

 

Reference:

Official Ceph Page


 

Lastly, I hope the steps in this article to build a Ceph storage cluster in OpenStack on CentOS 7 Linux were helpful. Let me know your suggestions and feedback in the comment section.

 

Pydio 8.2.1 Unauthenticated Remote Code Execution

Impact

The vulnerability, a PHP object injection, was fixed in the latest security release of Pydio. Affected are all installations below version 8.2.2 with default settings. The vulnerability allowed remote attackers to perform a full takeover of the file sharing system, leading to remote access to all internal files of the enterprise deploying Pydio. The vulnerability does not require any default settings to be changed or any user privileges. The vulnerability was found with RIPS Code Analysis in 200.

ICANN Performs DNS Root Zone KSK RollOver

The Internet Corporation for Assigned Names and Numbers (ICANN) is a nonprofit organization created by the U.S. government that coordinates and maintains several databases related to the namespaces of the Internet. ICANN is responsible for the technical maintenance of the central Internet address pools and the DNS root zone, and it coordinates the domain name system to ensure that all domains and addresses are unique. You should have some knowledge of DNSSEC, the keys used in DNSSEC, and trust anchors before reading about the KSK rollover. So let’s get started on ICANN Performs DNS Root Zone KSK RollOver.

What is DNSSEC?

When the DNS was first implemented, it was not secured, and vulnerabilities were discovered that allow an attacker to hijack a DNS query (the process of looking up the IP address corresponding to a domain name). This can lead the user to the hijacker’s own deceptive website for fraud purposes such as collecting passwords and account information.

These vulnerabilities led to the end-to-end deployment of a set of security protocols called the DNS Security Extensions (DNSSEC), an additional security layer for the DNS lookup and exchange process. DNSSEC was developed in the form of extensions that could be added to the existing DNS protocols. DNSSEC does not encrypt the data, but it attests to the validity of the address of the site you visit, so that a resolver can be assured that the answer provided to the DNS query is valid and authentic; this prevents attacks such as cache poisoning, pharming, and man-in-the-middle attacks.

We can ensure that the end user is connecting to the actual website corresponding to a domain name by the full deployment of DNSSEC (it must be deployed at each step in the lookup from root zone to final domain name) and thus we can eliminate the vulnerability from the internet.

How Does DNSSEC Work?

DNSSEC uses a system of digital signatures and public keys to verify the data. It simply adds new records, such as RRSIG and DNSKEY, to the DNS alongside the existing records. Each domain name is digitally signed using these new records via a method known as public key cryptography, and each signed zone has a public and private key pair. When a DNS query comes in for a domain name that uses DNSSEC, the name server sends information signed with its private key; the recipient then verifies it with the public key. If a third party tries to send malicious information, it won’t verify properly with the public key, so the recipient will know the information is bogus.
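You can observe these records in practice by querying a signed domain with dig and asking for the DNSSEC records to be included (this assumes the dig utility is installed); the answer section will then contain an RRSIG alongside the A record:

dig +dnssec icann.org A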

There are two types of keys that are used by DNSSEC at the root zone:

ZSK – “Zone Signing Key” is a private/public key pair. The ZSK private key is used to digitally sign the DNS records on the zone of a domain name and the digital signature is called RRSIG. The ZSK public key is stored on the DNS zone to authenticate an RRSIG.

KSK – “Key Signing Key” is a private/public key pair. The digital signature for ZSK is generated using KSK private key and the KSK public key is stored on the DNS to authenticate the ZSK.
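Both key types live in the root zone’s DNSKEY RRset, which you can fetch directly; as a quick sketch, a DNSKEY record with flags value 257 is a KSK and one with flags value 256 is a ZSK:

dig . DNSKEY +noall +answer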

Given sufficient time and data, cryptographic keys can eventually be compromised, whether through brute force or other methods, which would allow an attacker to defeat the protections afforded by DNSSEC. It is therefore necessary to roll over the ZSK and KSK keys periodically. DNSSEC counters these compromise attempts by rolling over the ZSK frequently (every 3 months), making it difficult for an attacker to “guess” the key, while the longer-lived KSK is changed over a much longer time period.

KSK RollOver

The KSK rollover plans were developed by the Root Zone Management Partners: ICANN in its role as the IANA Functions Operator, Verisign as the Root Zone Maintainer, and the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) as the Root Zone Administrator.

A Key Signing Key (KSK) rollover means generating a new cryptographic key pair (public and private) and distributing the new public component of the KSK (the trust anchor) to parties who operate validating resolvers (recursive DNS), including: Internet service providers; enterprise network administrators and other Domain Name System (DNS) resolver operators; DNS resolver software developers; system integrators; and hardware and software distributors who install or ship the root’s "trust anchor."

The DNS root zone was first signed with DNSSEC in 2010, and the corresponding key signing key (KSK) is known as KSK-2010. The Board of Directors of the Internet Corporation for Assigned Names and Numbers (ICANN) approved plans for the first-ever change of this cryptographic key on 18 September 2018, a change that helps protect the Domain Name System (DNS). As per the plan, ICANN performed the root zone DNSSEC key signing key (KSK) rollover, the first in history, on 11 October 2018 at 16:00 UTC. The new KSK is called KSK-2017. After the rollover, KSK-2010 no longer signs the root key set; instead, KSK-2017 signs it.

The changing of the DNS root key was originally scheduled to happen a year earlier (11 October 2017), but plans were postponed on 27 September 2017 after the ICANN organization found and began analyzing some new last-minute data. That data dealt with the potential readiness of network operators for the key roll. The good news is that, because the root KSK rollover was delayed a year, most DNS resolver software had been shipping with the new key for quite some time. Since then, ICANN has undertaken efforts to determine if and when the KSK rollover should proceed, and conducted research on the effect of the KSK rollover on end users.

The design of DNSSEC includes a mechanism, commonly referred to as RFC 5011, whereby DNSSEC validators can automatically update their trust anchors. Because this was the first operational root KSK rollover, RFC 5011 had never been tested in production. Starting in 2017 a few popular recursive DNS resolvers implemented a feature defined in RFC 8145 — “Signaling Trust Anchor Knowledge in DNSSEC.” If a DNSSEC validator supports RFC 8145 and the feature is enabled, it sends periodic reports of its trust anchor configuration to one of the root name servers.

Verisign, an operator of root name servers, receives some of this RFC 8145 data and regularly analyzes it to identify sources that appear to have an out-of-date trust anchor configuration. Earlier this year, they began contacting operators of recursive servers that reported only the old trust anchor. However, in many cases a responsible party could not be identified, due in large part to dynamic addressing of ISP subscribers. Also, late last year, ICANN began receiving trust anchor signaling data from more root server operators, as well as data from more recursive name servers, as the recursive name servers updated to software versions that provided these signals.

Based on this research, David Conrad, ICANN’s Chief Technology Officer, revealed before the KSK rollover that many thousands of network operators have enabled DNSSEC validation, and about a quarter of the Internet’s users rely on those operators. ICANN makes this data publicly available. Of these DNSSEC validating resolvers, 7% were signaling the 2010 trust anchor before the KSK rollover; these validating name servers were expected to experience DNS resolution failures after the rollover. As planned, the rollover completed on 11 October 2018. According to research by APNIC, more than 99% of users whose resolvers perform DNSSEC validation came through the rollover successfully.

Trust Anchors

Each validating resolver is configured with a set of trust anchors, which are copies of the keys or key identifiers that match the root KSK public key. Trust anchors are typically configured automatically by software vendors, by resolvers configured to automatically update their trust anchors using the process described in RFC 5011, or by the resolver operator manually adding a new KSK to the resolver’s trust anchor store. Before KSK-2017 existed, all validating resolvers had only KSK-2010 configured as a trust anchor. After KSK-2017 was created and published, most resolver operators either manually added KSK-2017 to their resolver’s trust anchor configuration, or the change was made for them by their software (such as through the RFC 5011 automated update process) or by their software vendor.

How to update the DNS Validating Resolver with the Latest Trust Anchor

There are currently a small number of Domain Name System Security Extensions (DNSSEC) validating recursive resolvers that are misconfigured, and some of the users relying on these resolvers will experience problems. As soon as operators discover that their resolver’s DNSSEC validation is failing, they should change their resolver configuration to temporarily disable DNSSEC validation; this should cause the problems to stop immediately. After that, the operator should install KSK-2017 as a trust anchor as soon as possible and turn DNSSEC validation back on.

If you are an administrator of a DNSSEC validating resolver, you need to check whether the validating resolver is configured with the latest trust anchor (KSK-2017). You can check which trust anchor your resolver is using by following the steps at https://www.icann.org/dns-resolvers-checking-current-trust-anchors
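As a rough sketch of where to look on common resolver software (paths and commands may vary by distribution and version): on Unbound, the managed trust anchor file is the one referenced by auto-trust-anchor-file in unbound.conf, and on BIND 9.11+, rndc managed-keys status reports the keys currently trusted. KSK-2017 has key tag 20326, while KSK-2010 had key tag 19036.

grep auto-trust-anchor-file /etc/unbound/unbound.conf

rndc managed-keys status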

You need to install the new trust anchor (KSK-2017) if the validating resolver is using the old trust anchor (KSK-2010). You can follow the steps at https://www.icann.org/dns-resolvers-updating-latest-trust-anchor to update the trust anchor on your DNSSEC validating resolver to the latest KSK-2017 trust anchor.

Effect of KSK Rollover on End Users

  • Users who rely on a resolver that does not perform DNSSEC validation will not see any effect from the rollover.
  • Users who rely on a resolver that has the new KSK will not see any effect from the rollover.
  • Users of resolvers that are prepared for the rollover will see no difference when the rollover happens. The responses they get to normal queries will be identical before and after the rollover.
  • Most Internet users have more than one DNS resolver configured. If any of the resolvers that a user has configured is prepared for the rollover, the user’s software should find that resolver after the rollover and continue to use it. This might slow down DNS resolution as their system keeps trying the resolver that is not prepared before switching to the resolver that is prepared, but the user will still get DNS resolution.
  • If none of a user’s resolvers have the new KSK-2017 key configured as a trust anchor and those resolvers perform DNSSEC validation, the user will likely experience the effects at some point in the 48 hours after the rollover, since the TTL for the KSK and ZSK records is 48 hours. If a resolver obtains the root key set and validates it just before the rollover, that resolver won’t know about the rollover for almost two days, because it will not fetch a new KSK until it gets the first query after the TTL of the root key set has expired.

Only a negligible number of cases were reported by Internet end users who were negatively impacted by the rollover of the cryptographic keys. The few issues that arose appear to have been quickly mitigated, and none suggested a systemic failure that would approach the threshold (as defined by the ICANN community) to initiate a reversal of the roll. In that context, it appears the rollover to the new key signing key, known as KSK-2017, is a success. At this point there are no indications that it is necessary to back out of the rollover, and ICANN will now proceed to the next step in the rollover process: revoking the old KSK, known as KSK-2010, during the next key ceremony on 11 January 2019.

Effect of KSK Rollover on Root DNS Servers

Since the rollover completed, root server operators may already have started to see significantly more queries from resolvers that are unprepared for the rollover. Those queries will most likely be for the DNSKEY of the root (./IN/DNSKEY), and will also likely include queries for the DS record of the .net zone (.net/IN/DS). Additionally, since the responses cannot be validated correctly, they will not be cached, which may lead to increased traffic overall from these validating resolvers. Similarly, operators of resolvers that allow other resolvers to forward through them have already been seeing increased counts of these requests.


Where is your ecommerce business required to collect sales tax?

This is a guest post by Lizzy Ploen of Avalara Close to 30 states now require certain out-of-state sellers to collect and remit sales tax. Is your business one of them? Economic nexus trumps physical presence Physical presence used to be the deciding factor for sales tax: States could tax sales by businesses with a […]