Month: May 2017

Uirouter For Angularjs Umd Bundles

This post describes how UMD bundles have changed with UI-Router 1.0 for AngularJS.

UI-Router Core

angular-ui-router has been the de facto standard for routing in AngularJS.
However, over the years UI-Router has undergone some significant transformations.

The core of the library has been refactored into its own library, @uirouter/core.
This core library has been used to create new routers for React, It’s Just Angular (2.x+), Polymer, and even Backbone/Marionette.

Plugins and UMD bundles

Previously, UI-Router was a single bundle: angular-ui-router.js.

During the UI-Router for AngularJS 1.0 release, we split the code into two bundles:

  • ui-router-core.js: the core of UI-Router (State machine, etc)
  • ui-router-angularjs.js: the AngularJS bits ($location, ui-sref directives, etc)

Users who formerly included only angular-ui-router.js should now include both bundles.

This change was necessary to properly support dependencies on @uirouter/core.
This enables (for example) router plugins which work with the framework-agnostic ui-router-core.js.

Note: we also renamed our NPM packages to scoped package names.

Migration

If you formerly used angular-ui-router.js, your page loaded a single script.
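
A minimal sketch of that script include (the path is illustrative; adjust it to wherever you serve your vendor files):

<!-- single legacy bundle; illustrative path -->
<script src="node_modules/angular-ui-router/release/angular-ui-router.js"></script>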

Instead, use both new bundles from the scoped packages.
Include ui-router-core before including ui-router-angularjs.
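
A sketch of the two includes, assuming you serve them straight from node_modules (the paths are illustrative and may differ between package versions):

<!-- illustrative paths; match them to your install -->
<script src="node_modules/@uirouter/core/_bundles/ui-router-core.js"></script>
<script src="node_modules/@uirouter/angularjs/release/ui-router-angularjs.js"></script>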

Ensure the version of @uirouter/core matches the dependency in @uirouter/angularjs.

Backward compatible mono-bundle

For backwards compatibility, we will continue to publish a monolithic bundle as angular-ui-router.js.
This bundle includes both the core and angularjs code.
Existing users who rely on the angular-ui-router.js bundle do not have to change anything.
However, this bundle is not compatible with UI-Router plugins which depend on @uirouter/core.

Webpack users

Users of webpack (or any bundler that uses node module resolution) should not be affected by these bundle changes.
Simply require or import from the scoped package @uirouter/angularjs instead of from angular-ui-router.
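
For example, a minimal sketch of the switch in an ES module setup (this assumes, as with the old package, that the default export is the AngularJS module name):

import angular from 'angular';
// before: import uiRouter from 'angular-ui-router';
// assumption: the default export is the module name string ('ui.router')
import uiRouter from '@uirouter/angularjs';

// register the router as a dependency of your app module
angular.module('app', [uiRouter]);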

Install ImageMagick and Imagick on a cPanel Server

The ImageMagick installation steps are simple on a cPanel server. ImageMagick is a software suite to create, edit, compose, or convert bitmap images, and on a cPanel server it is installed as an addon that enables image manipulation. If you are curious about it, you can check the details on the ImageMagick website.

ImageMagick Installation via command line

For command-line installation, you simply need to execute the corresponding cPanel script:


/scripts/installimagemagick

This will take a couple of minutes to complete. After installation you can check the version by executing the following command:


/usr/bin/convert --version

The output will be something similar to the one given below


root@server [~]# /usr/bin/convert --version
Version: ImageMagick 6.7.2-7 2015-07-29 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2011 ImageMagick Studio LLC
Features: OpenMP
root@server [~]#

In cPanel & WHM version 11.34 or earlier, you can run the /scripts/installimagemagick script as the root user to install ImageMagick; that package installs ImageMagick to the /usr/local/cpanel/3rdparty directory. From cPanel & WHM version 11.36 onwards, the installimagemagick script has been removed from the cPanel & WHM system.

cPanel is now at version 64, so the above method can no longer be used.

In this case, you can install ImageMagick from the YUM repositories. The basic packages required can be installed by running the following command:


yum -y install ImageMagick-devel ImageMagick-c++-devel

How to verify whether ImageMagick is installed

You can check for the existence of the “convert” or “mogrify” binaries to confirm whether ImageMagick is installed.
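
For example, a quick check from the shell (binary locations can vary depending on how ImageMagick was installed):

# both binaries should resolve if ImageMagick is installed
which convert mogrify
convert --version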

Uninstall ImageMagick

It’s simple: execute the following command to remove ImageMagick from your server.


/scripts/cleanimagemagick

Imagick

Imagick is a native PHP extension to create and modify images using the ImageMagick API. This extension requires ImageMagick version 6.5.3-10+ and PHP 5.4.0+.

Installation steps for Imagick via WHM control panel

Step 1 : Login to WHM control panel.

Step 2 : Follow these steps:

Go to WHM -> Software -> Module Installers -> PHP Pecl (manage).
In the box below “Install a PHP Pecl”, enter imagick and click the “Install Now” button.

Step 3 : Restart Apache.

[Screenshot: Module Installers]

Uninstallation steps for Imagick via WHM control panel:

Step 1 : Login to WHM control panel.

Step 2 : Follow these steps:
Imagick: WHM -> Software -> Module Installers -> PHP Pecl (manage).
Click the Uninstall button for Imagick.

Step 3 : Restart Apache.

In cPanel & WHM version 11.36, if you require PHP bindings with Apache, these bindings can be installed via the PECL utility:


/usr/local/bin/pecl install imagick
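
Once the PECL build finishes, a quick sanity check that PHP can see the extension (this assumes the system php binary is the one cPanel uses):

# lists loaded PHP modules; imagick should appear
php -m | grep -i imagick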

Installation of ImageMagick on CloudLinux installed server

On a CloudLinux server there is an additional step to make packages available to the users on that server. Installing a package on the server alone is not enough in this case, because it will not be available inside CageFS. Install ImageMagick into CageFS as follows to make its binaries available inside the jail:

To see the list of RPMs currently installed under CageFS:


cagefsctl --list-rpm

To add a new RPM:


cagefsctl --addrpm ImageMagick

To pick up the changes:


cagefsctl --force-update
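
To confirm the change took effect, one optional check is to list the RPMs again and run convert as a CageFS-enabled user (someuser below is a placeholder for a real account on the server):

# "someuser" is a placeholder for a real CageFS-enabled account
cagefsctl --list-rpm | grep -i imagemagick
su - someuser -c "convert --version"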

Enabling Imagick PHP Extension on CloudLinux server

You can enable the imagick PHP extension through cPanel >> Select PHP Version.

That’s it.

Get 24/7 expert server management


HTTPS with Advanced Hosting

You may have read that Register4Less.com has added Let’s Encrypt, a free open-source SSL certificate, to our paid hosting plans.  This certificate is now installed automatically when you order an advanced hosting plan, and it has been installed for all existing paid hosting plans.

Benefits of having your site visitors connect using the https encrypted protocol include:

  • Better search engine ranking
  • Enhanced user trust
  • Protection of your users’ sensitive information

Forcing an https:// Connection

When people visit your website, by default, if they type in your domain without specifying https://, they will connect over a standard unencrypted http:// connection.  Older links to your website may also not specify the secure protocol, so these would also produce an unencrypted connection.

You can, however, quite easily redirect an http connection to https by editing your .htaccess file.  Here’s how to do this.

  1. Log in for your domain, and go to Paid Hosting > Manage Advanced Hosting to open up the cPanel window
  2. In the Files section, click on the icon for File Manager.  This will open in a new window
  3. On the upper right click on Settings.  If the option for Show Hidden Files (dotfiles) is not checked, check it and save.
  4. On the left column, click on the public_html folder.
  5. Look for a file named .htaccess in your public_html folder.  If there isn’t one, go to File and create a new file named .htaccess in the /public_html folder.
  6. Select the file, and click Edit
  7. Paste the following lines into the file, and click the Save Changes button. (The RewriteEngine On line is only needed if your .htaccess does not already enable mod_rewrite; if it is already there, just add the other two lines.)
          RewriteEngine On
          RewriteCond %{HTTPS} off
          RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
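
Once saved, one quick way to confirm the redirect is working is with curl from any machine (example.com below is a placeholder for your own domain):

# expect an HTTP 301 response with a Location: https://... header
curl -I http://example.com/
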
About the FODS Architecture

Over the weekend starting on Friday, May 5, 2017, we deployed a significant upgrade to our architecture and we’d like to share some details.

In The Beginning

[Diagram: single-database architecture]

Above is a picture of our architecture before the weekend deployment.  We had four applications using the same database:

  1. FISbot — our Fetch, Import, SLOC bot
  2. Ohloh analysis — Project, account, organization, etc. analysis generation
  3. Ohloh UI — web and API application
  4. Ohloh Utility (cron jobs)

The database was a single PostgreSQL 9.6 database that was over 1.6 TB in size. With the delivery of the eFISbot features to support Fetch, Import and SLOC operations for the Knowledge Base team as well as our own Open Hub, we clearly saw that even a modest increase in eFISbot request processing impacted the database and resulted in poor performance for the web application. In brief, we couldn’t scale to support the anticipated load on eFISbot.

Current Architecture

In our plans for 2017, we committed to making the backend screamingly fast and talked about how we got approval for new servers to support this. Starting at 8 PM EDT on Friday, May 5, we took a major step towards delivering on that commitment. We called it the “FIS Ohloh Database Split” (FODS).

[Diagram: split-database architecture]

We moved the four largest tables that are critical to Fetch, Import, and SLOC operations to a new FIS database and set up PostgreSQL Foreign Data Wrapper (FDW) to send data back and forth between the two. This moved the bulk of the 1.6 TB of the database over to the new (and powerful) servers, leaving only 65 GB on the original database servers.

Not Yet Done

As is often the case in significant architectural upgrades, not everything worked smoothly out of the box. We are seeing two classes of problems. One is apparent when viewing Commit Summary pages for the largest projects.  We’re seeing queries taking a massive amount of time.  The other is the time it takes to execute project analysis jobs: analyze jobs that used to take a couple of minutes can run for more than half a day. Obviously, this is causing a massive backlog of projects that are not getting analyzed. Normally, we complete an AnalyzeJob in a few minutes and can process between 600 and 1000 jobs per hour.

On top of the long analyze job run durations, we are also seeing analyze jobs fail in their last step with a PostgreSQL Serialization error. This means that there are analyze jobs that have not been able to complete successfully. Right now (I just checked), we have over 131K AnalyzeJobs scheduled, with about 600 completed in the past few days and about 200 that have failed, 99% of them failing with the PostgreSQL Serialization error, presumably related to our use of the FDW.

Both of these seem to be traceable to the FDW. I’m not blaming the FDW for anything; we are reasonably certain that we are not using the FDW optimally. There were some obvious changes that were needed when adopting the FDW, and we made those during our development and testing cycle. Then there were things that we did not predict or that behaved differently in production than they did in staging, even though we did a lot of work to simulate our production environment in staging. But as is usually the case, some things are found only in production. The two cases described above fall into that category.

Even with these issues, we feel the FODS deployment was a tremendous success because the vast majority of pages display at least as fast, plus we have tremendous capacity to grow the eFISbot service.

Here’s what we’re doing about it

For the project analysis jobs, we examined the issue from a number of perspectives and identified a few tables that we could migrate from the OH DB to the FIS DB. Initial tests show that Analyze queries that took 12K ms to run are now running in 1.6K ms, almost 8 times faster.

For the Commit pages, we are working with the SecondBase gem to allow the Ohloh UI to directly access the FIS DB for the data stored there rather than push massive queries through the FDW. Initial tests show that this also results in multiples of better performance, although we’re still gathering the numbers.

While the use of a direct connection to the FIS DB will improve performance on the vast majority of Commit pages, the largest projects still represent a special problem. Right now we have just over 676K projects.  Only 3 of them have more than 1,000 enlistments — Fedora Packages, KDE, and Debian. All three of these are Linux distributions. We briefly mentioned distributions in our post about 2016 plans and now is the time to implement them. The plan is to create a new entity called a “Distribution,” which represents a collection of Open Source Software projects.  This is different from an Organization because the Distribution represents a packaged and related collection of OSS projects. By doing this, we can process each of the projects within the Distribution individually and then aggregate the analysis results for the Distribution.

The way this would work would be that, in the case of Fedora packages with 11,956 enlistments, we would create a project for each of the enlistments and then group all those new projects into the Distribution.  When looking at the Distribution, we can provide the list of projects, links to each of them, plus aggregate the data from those projects with a new “Distribution Analysis”. Most importantly, when displaying the Distribution, we won’t have to try to aggregate the commits from all 12K enlistments into a single view.

Next Steps

We are working quickly on testing and verifying behavior using the new distribution of tables and the second DB connection. We hope to have improvements deployed within a week.

Block access to directories or pages on a website with Deny URL in the WAF

Often we’ll receive a request from a customer for assistance blocking a specific directory or page on their site from public access. For example, suppose you run WordPress and you want to prevent someone from going to the /wp-admin/ directory. Or perhaps you want to restrict the site from loading if they simply enter the IP address in their browser vs. the proper hostname like www.example.com. The Total Uptime Web Application Firewall (WAF) makes this all very easy to do with the Deny URL feature. Here are the steps you’ll need to take to implement.

First you’ll want to go to the firewall tab in the networking section. If you don’t see this tab, you’ll need to subscribe to WAF first. At the very bottom of that page you’ll see the Web Application Firewall configuration table as shown below. Click the ADD button to create a new profile if you don’t have one already.

[Screenshot: WAF profile table]

Give the profile a name and save it. Now select it in the table and click EDIT to get to the actual configuration. You should see a dialog box like the one shown below. To complete this example where we want to block specific directories or the IP address, we’ll use the DENY URL feature. Enable it and click the “more” link as shown below.

WAF Deny URL Feature

When you’ve clicked the more feature, a separate window will slide into view showing you a table where you can enter security checks. Once you have them created, they will show here. Click the ADD button as shown below to create your first security check.

Add Deny URL WAF entry

This will now open a new dialog where you can enter a regular expression to filter out your URL. So, as shown in the below example, we want to block access to the /wp-admin/ directory and everything deeper than that. So the regular expression for that is: /wp-admin(/.*)?$

WAF URL entry

We pasted that regular expression into the Deny URL box, checked enabled and now we need to click the save button. Not the save button at the bottom of the window, but the one shown highlighted above. This will take you back to the main screen where you will see a summary of your rules, in this case, just the new one we created as shown below.

Summary of WAF rules

Now you can repeat the process and click the add button again to block additional URLs or even specific pages. In our example above, we wanted to also prevent someone from putting the IP address of the Cloud Load Balancer into their address bar and loading the site. Sure, you could block this with a host header configuration on your server too, but why not block it at the WAF? The regular expression gets quite a bit more complex, but what you need to add is this: \b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b(/.*)    This will block a dotted IP address and all the directories or pages below/deeper than that.
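
If you want a quick local sanity check of that pattern before adding it to the WAF, GNU grep’s PCRE mode works (the URL below is just an example using a documentation IP address):

# prints the line if the pattern matches
echo "http://203.0.113.10/wp-admin/index.php" | grep -P '\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b(/.*)'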

In my example below you’ll see I’ve added it just like that, and I’ve also added a comment for my reference so I’ll remember what it does, since I’m not a regex guru.

Completed Deny URL WAF rule

For what it’s worth, you might find this regex test tool quite useful: https://regex101.com/. Here you can build different regular expressions and test them against a real URL you might use. It immediately confirms whether a pattern will work or not. See my example below.

regex check tool

In this example I pasted the regex I wanted to test for detecting an IP in the URL. I then entered my test string/URL below using an IP address. It highlights in blue that it matches on the IP, and then in green afterwards, confirming that it would also catch anything below. The right side of the regex test tool page linked above shows more detail.

We’re not done yet!

Once you’ve added all of your security checks, you still have two steps left.

First, you should click the BACK button on the DENY URL dialog, as shown below, so you return to the main WAF profile settings screen.

Back button

Yes, you could have clicked save to close out the whole dialog, but if you want to add different rule types (block credit cards, SQL Injection etc.) you need to go back, so that’s why we’ve shown that step above. After clicking back, you’re at the main screen again as shown below.

Main WAF entry screen

From this point you can now click save to close this completely.

Last step!

Now that you’ve created a WAF profile, you need to attach it to your pack/config. Until you do that, it won’t have any effect. We have this extra step because the granularity it gives you is very handy: you can attach the same WAF profile to multiple packs/configs, or different profiles to different ones.

So back on the main page of your config, click on the Firewall icon as shown below:

[Screenshot: Firewall icon on the config page]

This will open up a dialog where you can now attach your newly created firewall profile to this specific config.

[Screenshot: Attach WAF profile dialog]

Click ADD on the tool bar shown in the above image, give the new attachment rule a name for easy reference, choose your newly created profile from the drop-down menu, check to Apply this profile to all traffic and click save at the bottom of the box.

[Screenshot: Add WAF profile attachment rule]

Within a few minutes your new rules will take effect!

PS: Someone recently asked whether the regex is case-insensitive and whether forward slashes (/) need to be escaped with a preceding backslash. The answer is that neither matters. You can create your Deny URL for /admin(/.*)?$ and it will block attempts to go to the page /ADMIN/ or /AdMiN/ etc. So just build your regex in lowercase and know that everything is covered.

Additionally, there is no need to escape the forward slashes, but you can if you like. If you put /admin(/.*)?$ it will work just the same as if you did not. So why not keep it clean and simple and skip the escapes : )


Part 3 : How To Choose a Good Support Company

So far we have been discussing the reasons to outsource support and the features to look for in an outsourcing provider.

Now the crucial question is: how do you choose the right support partner?

Obviously there is no direct answer; issues can happen even with the best support companies. Here I am trying to give you some indications which can be used to select an appropriate outsourcing partner for your needs.

Freelancer Vs Company

This is one of the most confusing questions for a startup company that plans to outsource its support. There are a lot of freelancers and support companies available on the Internet. Freelancers are often cheaper than companies, and that does not automatically mean the support is substandard. There are really good techs with years of experience under their belt who work as freelancers. You can check for their availability on common websites like Upwork, Fiverr, WHT, etc. and check their reviews before proceeding with hiring them.

On the other side, outsourcing to a company is more expensive, as the company has to cover infrastructure and operational costs. Its advantage over a freelancer is that a company takes more responsibility for the work, mainly out of fear of a bad reputation or negative review and its resonating effects throughout its existence.

In the case of a freelancer, if he has moved to another location or changed his contact details, it is practically impossible to trace him, and you will be totally clueless about the current status of the work assigned.

Now, how do you evaluate and judge the competency level of a freelancer or a company under consideration?

Technical Forum or Blogs

Knowledge contribution is one of the key indicators of the caliber of a person or company. Active participation in various technical forums such as cPanel or WHT is indicative of the person’s level of knowledge and exposure to the field. The same goes for technical articles or blogs: you can subscribe to such blogs and evaluate the content to get a fair idea of how well the company or person keeps up with the latest trends and happenings in the industry.

Always check their social media activity and look for reviews or references in various forums. I remember one incident when a web server vulnerability was being actively exploited, and a test conducted on the websites of companies that offered patching for that very vulnerability found them to be vulnerable themselves!

So always apply your logic and common sense to verify the veracity of the claims made by companies or individuals.

Active participation in Meet ups and Conferences

Trusting an online entity is always a daunting task. You don’t know anything about the person other than the web identity. Always try to attend technical meet ups and take the opportunity to meet the person or the representatives of the company in person.

In-person communication is much more effective in getting a fair picture of someone’s technical competency and work ethic. This will help you decide how to proceed.

A complete and professional website

A professional and complete website is a testimony to the level of quality the company or person follows. A well-maintained, professionally managed website with active sales and marketing strategies indicates the work ethic and working pattern of the company. An incomplete or broken website alone can turn clients away from proceeding further.

Reviews

The most important thing to do before proceeding with a purchase is to analyze the various reviews of the company. You should consider reviews from existing clients as well as from staff. Staff reviews are available on common platforms such as Glassdoor and indicate the level of workplace pressure and comfort for the employees.

An unhappy employee won’t generate happy customers.

However, absolute satisfaction for employees or clients is unachievable, and you should be realistic in your expectations.

How trustworthy are the reviews?

That is one of the areas where clients get puzzled. You might see plenty of positive reviews about a company along with a certain number of negative ones as well. So which should be trusted?

There is no point in downplaying the existence of paid positive reviews. Some companies follow the unethical practice of paying for reviews: they may provide incentives or price reductions for each positive review a client posts on a particular forum.

But the majority will be genuine reviews. You can consider negative reviews more dependable, as they realistically come only from customers: if a negative review came from a non-client, the company would defend itself and it would be difficult for the reviewer to prove the authenticity of the claims.

So always check the reviews and assess their gravity. For example, a complaint about a delayed response to an FTP account password reset is far less severe than a complaint about keeping a server down for an hour.

Pricing

Some might be wondering how pricing can be indicative of support quality.

I agree that pricing alone isn’t a rule of thumb for service quality. Investment and operational expenditure are higher in metro cities, while the same operation is relatively cheaper in technology parks thanks to the various incentives given to start-up ventures. So it is possible to get the same level of support without burning a hole in your pocket.

However, the difference in pricing should be reasonable, and it can even be lower during sales promotion periods. If the gap is much wider than that, be assured that something fishy is going on.

Let me get into the maths. Suppose a company offers an L2 admin for $600. The average salary expense for an admin with one year of experience would be approximately $300-350. On top of that, the company has to meet operational and training expenses and periodic appraisals as well. So the plan price can’t realistically go below $500.

If you have been offered an L2 for $400, it means the staff member is either working for other clients as well or is not at the promised technical level.

Some hosting companies give it a try with the notion that “something is better than nothing” and later become great fans of the saying “nothing is better than nonsense”.

Credibility

You should verify whether the details given by the company are genuine. If a company boasts about its decades of experience in the field, you can check the details available in the public domain, such as the company ownership date, domain registration date, etc. You can also confirm its existence and details such as phone numbers and infrastructure by physical verification.

The same goes for employee strength: you can get a fair idea of their headcount by checking the photos published on their site or on social media platforms such as Facebook and Twitter.

If you plan to hire a dedicated staff member, either from a company or as a freelancer, get a resume and a photo ID card. You may also request a copy of the passport. If possible, conduct a Skype or video chat and check the veracity of the details provided.

These are only some of the traits of good support companies. You needn’t be obsessed with these parameters alone; use your own convictions and gut feeling to proceed further.

Get 24/7 technical helpdesk support

Recommended Readings

In-house or Outsourced Support, Which one to choose?

Part 2: Things To Consider While Outsourcing


Researching Project Security Data

This started with a message from the outstanding Marc Laporte about the Project Security Data for the Tiki Wiki CMS Groupware project. Marc took what looks like a healthy amount of time to carefully document the discrepancies and areas of confusion around the security report. In kind, we’ve taken a deep dive into the data.

The Problem (in Brief)

Marc highlighted a few problems.  The first was that we were missing versions.  We were able to address that problem.

The more complicated one is that there are discrepancies in the versions reported affected by a vulnerability as well as inexplicable ordering of the versions.

The Explanation (Not Brief)

The author sat down with one of the senior members of the Black Duck Software (BDS) Knowledge Base (KB) team to look at the data being presented and to start unwinding the trail of data production back to its beginnings.

We looked at the data in our KB, the channels through which those data are obtained, and how we have gotten and are getting those data, as well as what we are doing with them.  The issue mostly boils down to “dates are hard.” Note that we’re not talking about engineers getting dates — that’s a different topic altogether — but how a non-trivial system discovers, identifies, and documents dates that are connected to important events such as version releases of software.

Our story starts some 15 years ago in the early days of Black Duck, when ad hoc Open Source Software (OSS) standards were few and the forges were fewer. BDS engineers were interested in getting information about OSS fundamentally for license compliance. Releases were important, but licenses were more so. Dates were captured when available, and typically from the date stamps of files after syncing files locally, but there wasn’t a focused interest on the dates of releases. It seemed like good metadata to have and we like metadata.

One obvious challenge to this model arises when a team uploads a body of work onto a forge. The files’ original date stamps can be lost and replaced with the timestamps of when the files were created on the new system. At this point, the KB sees a number of release tags all with the same time stamp.

We layer onto this the reality that projects were often duplicated on different forges or through different release channels; for example, the source forge and the project’s download page. Over the years, the KB Research team has worked tirelessly and relentlessly to hunt down and correct duplications and merge projects together. Please recall that the KB tracks significantly more projects than the 675,000 projects we track on the OH. All that said, we believe we have an opportunity to re-examine the merge logic and attempt to improve the dates selected for version releases, and we have opened a ticket to do that work.

However, one of the most complicating factors is that we don’t always know about all the releases in spite of these methods and learn about a release only when it’s mentioned in the CVE report. When this happens, we create a record for the release and, in lieu of any better information, record the date we learned about the release as the release date.

Add to this particular challenge that vulnerability feeds will often state that a vulnerability affects “this and all previous” versions. What exactly does that mean? Is version 6.15 before or after version 8.4? When one is confident in the dates we have for version releases, we can use those to determine what came before, but as we just examined, one cannot always be confident about such dates. What about applying the vulnerability to previous versions across branches?  For example, a vulnerability affects 3.6 “and all previous versions.” We would all likely agree that it impacts versions 3.5, 3.4, and 3.3 — all previous 3.x versions.  But what about version 2.10? Was that really affected as well? What about all the 2.x or 1.x releases? It just isn’t clear.  And what if the vulnerability was in a component the project used? That isn’t clear either from the available data feeds.

Oh, and we should mention that vulnerability feeds, such as the NVD Data Feeds, change over time.  For example, NVD version 1.2 provided this “affected versions” identifier, but it was dropped in version 2.0, although we expect that it will return in version 3.0.

The takeaway is that we think we can do something in the short term that might help clear up dates to make them more reasonable, but the real fix will come from improved efforts on identifying the actual versions that are affected by vulnerabilities so we can do away with blanket policies.

This is why Black Duck is making a concentrated effort to provide effective information on OSS vulnerabilities. We’ve assembled a dedicated research team that is focusing on this problem to ferret out in greater and more reliable detail the true relationships between vulnerabilities and the OSS projects and versions affected by them.

SSL Error: Could not create certificate-key pair

If you’re attempting to create a certificate key-pair and have received the message shown below, there is usually one common remedy:

SSL Certificate Error
Could not create certificate-key pair — This could be due to an invalid or corrupt key or certificate, or an undesired space control character in the key. Please correct and try again.

The remedy is to run your key through OpenSSL using the RSA key processing tool to change it to the traditional SSLeay compatible format. And yes, to immediately answer our critics: we do support the newer and more secure PKCS#8 key format, but every once in a while they do not pair with the certs, even if the cert/key pair is already successfully deployed somewhere. We haven’t quite determined why, yet. But we’re working on it. You can use the pkcs8 tool to try to remove a hidden space control character from the key first if you like, but this method works 100% of the time.

The OpenSSL command is:

openssl rsa -in old_key_name.key -out new_key_name.key

This generates a new file that you can then upload to the UI and (hopefully) successfully pair with your Certificate. This resolves the error almost all the time, so give it a try.
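
If you’d like to double-check the converted key before uploading it, an optional sanity check with the same OpenSSL tool is:

# verifies the RSA key structure is intact
openssl rsa -in new_key_name.key -check -noout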


MORE: For users with Linux systems, OpenSSL is often already installed, making the above command easy to run. But for users on Windows systems, OpenSSL is not there by default. To use this utility, you’ll need to install it.

To get OpenSSL, you will need to download it from this link: http://slproweb.com/products/Win32OpenSSL.html

When at that site, if you scroll down, you’ll see quite a number of options. You’ll probably want Win32 OpenSSL v1.1.0e if you’re on a 32-bit machine (XP, Vista, Windows 7), or Win64 OpenSSL v1.1.0e if you are on a 64-bit machine (Windows 7 x64, Windows 8).

Go ahead and download, run the installer and remember where you installed it. Default answers to the questions are just fine.

When you’re done, open up the directory where it was installed. In my case, I installed it here:

[Screenshot: OpenSSL installation directory]

Go one step further and open the bin directory. Now take your key file and put it there in the bin directory.

Now open a Command Prompt and change to that directory by typing cd C:\openssl-win64\bin (or whatever it is to get into that bin directory).

If you type “dir” and hit enter, you should see your key file in there, along with a bunch of other stuff.

Ok, now run the command to ingest your key and spit out the newly formatted one.

openssl rsa -in old_key_name.key -out new_key_name.key

Once complete, delete the prior key file (at least from the bin directory) so you’re just left with the new key. That’s the one that has been converted. Upload it to the UI and attempt to pair your cert and key again. Hopefully success!

Of course, if you don’t have success, contact us by creating a support case, we’re here to help!

 
