Month: March 2018

As Asian Market Growth Continues, West 7 Center Ready to Enable International Connectivity

Los Angeles – March 29, 2018 – Rising Realty Partners (Rising), a full-service investment platform specializing in creating world-class commercial and industrial properties, announces that its West 7 Center is strategically positioned to support carriers, OTTs and enterprises looking to expand into growing Asian markets. With over 16 global and domestic carriers on site and 172,000 square feet of space available, the Tier III facility offers prospective clients an attractive and reliable gateway to Asia.

Experts predict that, by 2019, the Asia-Pacific region will generate the most web traffic in the world – double the volume generated in North America. As a result, data centers like West 7 Center will serve a critical role as reliable colocation partners, providing mission-critical infrastructure and support for the flow of data from the US across the Pacific Ocean.

“As Los Angeles’ largest purpose-built data center, West 7 Center is perfectly situated as an Asian gateway,” says Tyson Strutzenberg, Chief Operating Officer of Rising Realty Partners. “With direct access to One Wilshire, a primary transit center for internet traffic from the US to Asia, we offer the redundancy our customers need for their data and mission-critical applications. Backed by two central plants with N+1 redundancy and 70,000 gallons of fuel, West 7 Center can provide ongoing uptime in case of emergency or power outages.”

“When we were looking to add to our presence in Los Angeles, we specifically chose West 7 Center because of their round-the-clock security and engineering services,” says Arman Khalili, CEO of Evocative, which recently signed a 42,000 square foot lease in the facility. “Our clients include small start-ups and Fortune 500 companies – each in need of a high level of flexibility, service and low latency connectivity. This is one of the best data centers from an infrastructure perspective and it rivals some of the major carrier hotels and disaster recovery sites on a global scale. We are confident that our customers will benefit from West 7 Center’s capabilities as a data storage and colocation facility with easy access to subsea cables that travel to Asia.”

To learn more about West 7 Center’s infrastructure and services, please visit


About West 7 Center

West 7 Center is a Tier III datacenter facility built with mission critical infrastructure, 24/7 on-site engineering and security support in the heart of Los Angeles. The facility has nine (9) floors of office space and 340,000 RSF of datacenter space on three (3) subterranean levels that are supported by the Building’s two (2) central plants with a total of 16.9 MW of generator backed power, 3,000 kW of Building UPS power and 9,000 tons of cooling capacity for telecom, mission critical, co-location and datacenter operations.

Currently, West 7 Center has approximately 13 MW of emergency power and 172,000 sq ft of space available. The building has undergone significant upgrades in order to keep up with the ever-changing technology environment. For more information, please visit


About Rising Realty Partners

Rising Realty Partners is a full-service investment and operating platform specializing in creating world-class commercial and industrial properties. With over 3M SF under management, Rising approaches real estate investing and operating by focusing on three fundamental areas of impact that have proven to create value: environmental, technological, and social. Rising’s team of entrepreneurial, innovative facilitators has a depth of understanding and an unsurpassed track record in identifying prime investment opportunities. Please visit for more information.


About Evocative

Evocative is a North American company and an owner and operator of secure, compliant, highly available data centers. We are the trusted guardians of our clients’ Internet infrastructure. To tour an Evocative data center or receive additional information on data center services, please visit


The post As Asian Market Growth Continues, West 7 Center Ready to Enable International Connectivity appeared first on Evocative Data Centers.

Ensure Application Security with Zend Server and RIPS

Zend Server is the ultimate and most secure software platform for deploying, monitoring, debugging, maintaining, and optimizing enterprise PHP applications. It also helps to keep the technology stack up-to-date and to avoid security risks that stem from outdated components.
However, most daily web attacks try to exploit security bugs in an application’s source code. Popular vulnerability types such as SQL injection and cross-site scripting can enable attackers to steal sensitive user data from the server.
Cross Compile a missing package (fping) for OpenWRT/LEDE Reboot 17.01.04

Sadly, in LEDE Reboot 17.01.4 (the latest OpenWRT release) the package fping is missing. It was included in previous releases, but it is missing in this stable release. It has already been re-added to the master branch for future releases, but if you need the fping binary now, it is not available via the opkg installer for 17.01.4, so we have to build it manually.

Download SDK

For each router target, OpenWRT provides, alongside the firmware downloads, a Software Development Kit (SDK) with an already prepared cross-compile toolchain. It is of course possible to create your own cross-compile toolchain, as explained in the Build System documentation, but since the SDK is already available, I’ll just use it.

You can find the SDK at the end of the Firmware Download Pages, precompiled and ready to use.

In my case I am currently using a TP-Link WDR4300 (N750), which has an Atheros AR9344 CPU @ 560 MHz with the MIPS 74Kc instruction set, 8 MB NAND flash, 128 MB RAM, a serial header, and 5 VLAN-capable GigE ports.

The firmware can be obtained from the firmware download page. The SDK download is at the end and named lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64.tar.xz.

Prerequisites and Dependencies

To use the SDK, the same tools must be available as when the cross-compile toolchain was created. Check the instructions in the Install Buildsystem documentation.

For Debian, the following installation procedure should be enough:

  • Debian 7 Wheezy:

    apt-get install libncurses5-dev zlib1g-dev gawk
  • Debian 8 Jessie:

    sudo apt-get install build-essential libncurses5-dev gawk git subversion libssl-dev gettext unzip zlib1g-dev file python
  • Debian 9.3 Stretch:

    sudo apt install build-essential libncurses5-dev gawk git subversion libssl-dev gettext zlib1g-dev


Locate FPing on Master Branch

The FPing packages is available in the master Branch here:

Cross Compile

Documentation on Using the SDK to cross compile packages for a specific target without compiling the whole system from scratch.

  • Extract the SDK on your system.
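Assuming the ar71xx/generic SDK named in the shell prompts of this post, extraction might look like this:

```shell
# Extract the downloaded SDK archive and change into it
# (filename taken from the 17.01.4 ar71xx/generic SDK; adjust for your target).
tar -xf lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64.tar.xz
cd lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64
```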

Package Feeds

After decompressing the SDK archive, edit the feeds.conf.default file to add your packages. By default it contains the LEDE feeds; you can add your own feeds, local or remote.

For example, you can add all packages you have in a local folder by adding this line

src-link custom /full/path/to/the/local/folder
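For example, the line can be appended from the shell (the folder path here is only a placeholder):

```shell
# From inside the extracted SDK directory: register a local feed named
# "custom" that points at a folder containing your package Makefiles.
# The path is an example placeholder.
echo 'src-link custom /home/user/my-packages' >> feeds.conf.default
```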

Load package lists

The ./scripts/feeds update -a command refreshes the package lists. It downloads the LEDE feeds from GitHub, and then also downloads from GitHub, or reads from your local folder, the packages you configured in the Package Feeds step above.

cave@laptop:~/openwrt/sdk/lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64$ ./scripts/feeds update -a
Updating feed 'base' from ';v17.01.4' ...
Cloning into './feeds/base'...
remote: Counting objects: 8382, done.
remote: Compressing objects: 100% (7378/7378), done.
remote: Total 8382 (delta 1034), reused 4265 (delta 360)
Receiving objects: 100% (8382/8382), 10.76 MiB | 3.45 MiB/s, done.
Resolving deltas: 100% (1034/1034), done.
Checking connectivity... done.
Note: checking out '444add156f2a6d92fc15005c5ade2208a978966c'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b new_branch_name

Create index file './feeds/base.index' 
Collecting package info: done
Collecting target info: done
Updating feed 'packages' from '^cd5c448758f30868770b9ebf8b656c1a4211a240' ...
Cloning into './feeds/packages'...
remote: Counting objects: 57751, done.
remote: Compressing objects: 100% (25080/25080), done.
remote: Total 57751 (delta 31196), reused 55736 (delta 29454)
Receiving objects: 100% (57751/57751), 13.89 MiB | 3.49 MiB/s, done.
Resolving deltas: 100% (31196/31196), done.
Checking connectivity... done.
Switched to a new branch 'cd5c448758f30868770b9ebf8b656c1a4211a240'
Create index file './feeds/packages.index' 
Collecting package info: done
Collecting target info: done
Updating feed 'luci' from '^d3f0685d63c1291359dc5dd089c82fa1e150e0c6' ...
Cloning into './feeds/luci'...
remote: Counting objects: 104191, done.
remote: Compressing objects: 100% (29395/29395), done.
remote: Total 104191 (delta 61227), reused 101632 (delta 59436)
Receiving objects: 100% (104191/104191), 25.29 MiB | 3.85 MiB/s, done.
Resolving deltas: 100% (61227/61227), done.
Checking connectivity... done.
Switched to a new branch 'd3f0685d63c1291359dc5dd089c82fa1e150e0c6'
Create index file './feeds/luci.index' 
Collecting package info: done
Collecting target info: done
Updating feed 'routing' from '^d11075cd40a88602bf4ba2b275f72100ddcb4767' ...
Cloning into './feeds/routing'...
remote: Counting objects: 6622, done.
remote: Compressing objects: 100% (4253/4253), done.
remote: Total 6622 (delta 2668), reused 5194 (delta 1977)
Receiving objects: 100% (6622/6622), 1.60 MiB | 2.59 MiB/s, done.
Resolving deltas: 100% (2668/2668), done.
Checking connectivity... done.
Switched to a new branch 'd11075cd40a88602bf4ba2b275f72100ddcb4767'
Create index file './feeds/routing.index' 
Collecting package info: done
Collecting target info: done
Updating feed 'telephony' from '^ac6415e61f147a6892fd2785337aec93ddc68fa9' ...
Cloning into './feeds/telephony'...
remote: Counting objects: 6939, done.
remote: Compressing objects: 100% (4836/4836), done.
remote: Total 6939 (delta 3808), reused 3734 (delta 1921)
Receiving objects: 100% (6939/6939), 1.31 MiB | 0 bytes/s, done.
Resolving deltas: 100% (3808/3808), done.
Checking connectivity... done.
Switched to a new branch 'ac6415e61f147a6892fd2785337aec93ddc68fa9'
Create index file './feeds/telephony.index' 
Collecting package info: done
Collecting target info: done

I have not used a custom src-link in the feeds.conf.default file; I just added the package from master to the downloaded feed.

I added the fping package files from the master branch to the path ./feeds/packages/net in the SDK directory.
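A sketch of that step, assuming fping still lives under net/ in the upstream packages feed (repository URL and paths shown here are illustrative):

```shell
# Fetch the master branch of the packages feed and copy the fping
# package directory into the SDK's downloaded feed.
git clone --depth 1 https://github.com/openwrt/packages.git /tmp/packages-master
cp -r /tmp/packages-master/net/fping ./feeds/packages/net/
```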

Select Packages

The ./scripts/feeds install command will load the package and its dependencies into the SDK.

Then open the SDK menu, find the package you want to build and select it by pressing “m”; this will also select all the dependencies, and you will see that they are all tagged in the menu.

cave@laptop:~/openwrt/sdk/lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64$ ./scripts/feeds install fping
Installing package 'fping' from packages

Compile Package

Before we compile, let’s check make menuconfig.

cave@laptop:~/openwrt/sdk/lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64$ make menuconfig

*** End of the configuration.
*** Execute 'make' to start the build or try 'make help'.

Now let’s build the package fping.

cave@laptop:~/openwrt/sdk/lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64$ make -j7
# configuration written to .config
 make[1] world
 make[2] package/compile
 make[3] -C package/toolchain compile
 make[3] -C package/linux compile
 make[3] -C feeds/packages/net/fping compile
 make[2] package/index

Package and Install

The compiled output should be in ./bin/packages/mips_24kc/packages in the SDK directory.

cave@laptop:~/openwrt/sdk/lede-sdk-17.01.4-ar71xx-generic_gcc-5.4.0_musl-1.1.16.Linux-x86_64$ ls -lh ./bin/packages/mips_24kc/packages 
total 28K
-rw-r--r-- 1 cave cave 15K Mar 25 20:47 fping_4.0-2_mips_24kc.ipk
-rw-r--r-- 1 cave cave 719 Mar 25 20:48 Packages
-rw-r--r-- 1 cave cave 473 Mar 25 20:48 Packages.gz
-rw-r--r-- 1 cave cave 822 Mar 25 20:48 Packages.manifest

scp the package to your OpenWRT device and install it with opkg install ./fping_4.0-2_mips_24kc.ipk
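The copy step might look like this (192.168.1.1 is the default OpenWRT LAN address; adjust host and paths for your setup):

```shell
# Copy the freshly built package to the router's root home directory.
scp ./bin/packages/mips_24kc/packages/fping_4.0-2_mips_24kc.ipk root@192.168.1.1:/root/
```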

root@openwrt:~# opkg install ./fping_4.0-2_mips_24kc.ipk 
Installing fping (4.0-2) to root...
Configuring fping.


Async Expressive? Try Swoole!

When we were finalizing features for Expressive 3,
we had a number of users testing using asynchronous PHP web servers. As a
result, we made a number of changes in the last few iterations to ensure that
Expressive will work well under these paradigms.

Specifically, we made changes to how response prototypes are injected into
services.
Response prototypes?

What’s the problem?

One advantage of an async system is that you can bootstrap the application
once, and then respond to requests until the server is shut down.

However, this can become problematic with services that compose a response
prototype in order to produce a response (e.g., authentication middleware that
may need to produce an “unauthenticated” response; middleware that will produce
a “not found” response; middleware that will produce a “method not allowed”
response; etc.). We have standardized on providing response prototypes via
dependency injection, using a service named after the interface they implement:

If a particular service accepts a response instance that’s injected during
initial service creation, that same instance will be used for any subsequent
requests that require it. And that’s where the issue comes in.

When running PHP under traditional conditions — php-fpm, the Apache SAPI,
etc. — all requests are isolated; the environment is both created and torn
down for each and every request. As such, passing an instance is perfectly safe;
there’s very little chance, if any, that any other service will be working with
the same instance.

With an async server, however, the same instance will be used on each and every
request. Generally, manipulations of PSR-7
message instances will create new instances, as the interfaces they implement
are specified as immutable. Unfortunately, due to technical limitations of the
PHP language, we were unable to make the body of response messages
immutable. This means that if one process writes to that body, then a
subsequent process — or even those executing in parallel! — will
receive the same changes. In the best-case scenario, this leads to duplicated
content; in the worst, it serves incorrect content or leaks information!

To combat these situations, we modified the Psr\Http\Message\ResponseInterface
service we register with the dependency injection container: it now returns not
an instance of the interface, but a factory capable of producing an instance.
Services should compose this factory, and then call on it each time they need to
produce a response. This fixes the async problem, as it ensures a new instance
is used each time, instead of the same instance.

(Additionally, this change helps us prepare for the upcoming PSR-17, which
describes factories for PSR-7 artifacts; this solution will be compatible with
that specification once complete.)

Why async?

If asynchronous systems operate so differently, why bother?

There are many reasons, but the one that generally gets the attention of
developers is performance.

We performed benchmarks of Expressive 2 and Expressive 3 under both Apache and
nginx, and found that version 3 showed around a 10% improvement.

We then tested using Swoole. Swoole is a PHP
extension that provides built-in async, multi-threaded input/output (I/O)
modules; it’s essentially the I/O aspects of node.js — which allow you to
create network servers and perform database and filesystem operations —
but for PHP.

A contributor, Westin Shafer, has written a module
for Expressive 3 that provides an application wrapper for Swoole,
exposed via a CLI command. We ran our same benchmarks against this, and
the results were astonishing: applications ran consistently 4 times faster
under this asynchronous framework, and used fewer resources!

While performance is a great reason to explore async, there are other reasons as
well. For instance, if you do not need the return value of an I/O call (e.g., a
database transaction or cache operation), you can fire it off asynchronously,
and finish out the response without waiting for it. This can lead to reduced
waiting times for clients, further improving your performance.

We have had fun testing Swoole, and think it has tremendous possibilities when
it comes to creating microservices in PHP. The combination of Expressive and
Swoole is remarkably simple to set up and run, making it a killer combination!

Notes on setting up Swoole

The wshafer/swoole-expressive package requires a version 2 release of the
Swoole extension.

However, there’s a slight bug in the PECL installer whereby it picks up the
most recent release as the “latest”, even if a version with greater stability
exists. As of the time of writing, version 1.10.2 of Swoole was released after
version 2.1.1, causing it to be installed instead of the 2.X version.

You can force installation of a version by appending the version you want when
invoking the pecl command:

$ pecl install swoole-2.1.1

The version must be fully qualified for it to install correctly; no partials
(such as swoole-2 or swoole-2.1) will work.

Expressive 3!

Yesterday, we tagged and released Expressive 3!

Expressive 3 provides a middleware microframework.

Create a new Expressive application using Composer:

$ composer create-project zendframework/zend-expressive-skeleton

The installer will prompt you for your choice of:

  • Initial application architecture (minimal, flat, modular)
  • Which dependency injection container you would like to use.
  • Which routing library you would like to use.
  • Which templating library you would like to use, if any.
  • Which error handling library you would like to use, if any.

From there, it creates a new project for you, and allows you to get started
developing immediately.

You can read more in our quick start,
and may want to check out our command line tooling
to see what we provide to make development even faster for you!

What are the features?

Expressive 3 embraces modern PHP, and requires PHP 7.1 or higher. Strong
type-hinting, including return type hints, make both our job and your job
easier and more predictable. The ability to use all modern PHP features helps us
deliver a solid base for your application.

Expressive 3 provides full support for the PSR-15 (Middleware and Request
Handlers) standard. We believe strongly in supporting standards, to the extent
that this release also drops direct support for the “double-pass” middleware
style we have supported since version 1.0.

Expressive 3 massively refactors its internals as well. In fact, the majority of
the code in the zend-expressive package was removed, moved to other existing
packages where it had a better semantic affiliation [1],
or extracted to new packages [2]. This base
package now mainly handles coordinating collaborators and providing a
user-friendly interface to creating your application pipeline and routes. [3]

Expressive 3 provides more command line tooling and tooling improvements in
order to make developing your application easier. We added a command for
creating factories for existing classes (factory:create). [4]
The middleware:create command now creates a factory for the
middleware generated. We added support for creating request handlers [5],
complete with factory generation and registration, as well as template support. [6]

Finally, we recognize that Expressive has changed massively between versions 1
and 3, while simultaneously keeping its primary API stable and unchanged.
However, to help users find the information they need for the version they run,
we have rolled out versioned documentation, with each version providing only
information specific to its release cycle:

The most recent version will always be present in the primary navigation, with
links to other versions present as well.

New components!

We have several new components that provide features for Expressive — or
any PSR-15 framework you may be using! These include:

We have a number of other packages in the works around authentication,
authorization, and data validation that we will be releasing in the coming weeks
and months; stay tuned for announcements!

What about upgrading?

We have prepared a migration document
that covers new features, removed features, and a list of all changes.

Additionally, we have provided migration tooling
to aid you in your migration from version 2 to version 3. The tool will not
necessarily give you a fully running application, but it will take care of
the majority of the changes necessary to bump your application to version 3,
including setting up appropriate dependencies, and updating your bootstrapping
files to conform to the new skeleton application structure.

If you need assistance, you can find community help:

What’s next?

We have been working on a number of API-related modules for Expressive (and any
PSR-15 applications) since last summer, with a number of components already
completed, and others close to completion. We plan to finalize these in the next
few months.

Thank You!

We extend a hearty thank you to everyone who tested the various pre-releases and
provided feedback. Additionally, we are singling out the following individuals
who provided significant contributions to the Expressive 3 project:

  • Enrico Zimuel provided a ton of feedback and
    critique during the design phase, and was a driving force behind many of the
    API usability decisions.

  • Rob Allen did a workshop at SunshinePHP, right as we
    dropped our initial alpha releases, and provided feedback and testing for much
    of our tooling additions.

  • Frank Brückner provided ongoing feedback
    and review of pull requests, primarily around documentation; he is also
    responsible for a forthcoming rewrite of our documentation theme to make it
    more responsive and mobile-friendly.

  • Daniel Gimenes provided feedback and ideas as
    we refactored zend-stratigility; he is the one behind package-level utility
    functions such as Zend\Stratigility\doublePassMiddleware(),
    Zend\Stratigility\path(), and more.

  • Witold Wasiczko provided the majority of the
    rewrite of zend-stratigility for version 3. He can be celebrated for removing
    over half the code from that repository!

In addition to these people, I want to extend a personal thank you to the
following people:

  • Geert Eltink has helped maintain Expressive v2, and
    particularly the various routers and template engines, making them ready for
    v3 and testing continually. As a maintainer, I was able to rely on him to take
    care of merges as we finalized the releases, and was pleasantly surprised to
    wake up to new releases several times when he fixed critical issues in our
    alpha and RC releases.

  • Michał Bundyra provided a constant stream of
    pull requests related to quality assurance (including ongoing work on our phpcs
    extension!), as well as critical review of incoming patches. He spearheaded
    important work in the refactoring process, including changes to how we handle
    response prototypes, and critical fixes in our routers to address issues with
    how we detect allowed methods for path route matches. We synced each and every
    single day, often arguing, but always coming to consensus and plowing on.

If you get a chance, reach out to these contributors and thank them for their work.


  • 0: The Expressive ecosystem makes
    use of many other standards as well, including
    PSR-7 HTTP Messages,
    PSR-11 Container, and
    PSR-13 HTTP Links.

  • 1: As an example, the routing,
    dispatch, and “implicit methods” middleware were all moved to the
    package, as they each work with the router and route results.

  • 2: Request generation, application
    dispatch, and response emission were all moved to a new package,

  • 3: These refactors led to a net
    removal of code across the board, vastly simplifying the internals. This
    will lead to ease of maintenance, greater stability, and, based on benchmarks
    we’ve been performing, 10% better performance and less system resource usage.

  • 4: factory:create uses PHP’s
    Reflection API in order to determine what dependencies are in place in order to
    generate a factory class; it also registers the class and factory with the
    container.
  • 5: In previous Expressive versions,
    we referred to “actions”, which were any middleware that returned a response
    instead of delegating to another layer of the application. PSR-15 calls such
    classes request handlers. Our tooling provides an action:create command,
    however, for those who prefer the “action” verbiage.

  • 6: The command creates a template
    named after the handler created; it uses the root namespace of the class to
    determine where to put it in the filesystem. Additionally, it alters the
    generated request handler to render the template into a zend-diactoros
    response.

Integrate Security Checks with RIPS CLI

Getting started

Installation

The installation of rips-cli is described in detail in our documentation. You can download the PHAR build of our CLI tool into your bin directory and make it executable with the following commands:

sudo wget -O /usr/bin/rips-cli
sudo chmod 755 /usr/bin/rips-cli

The only requirements to run rips-cli are the PHP command line interface and the Zip extension to start scans.

Let’s Build a Great Digital Library Together…Starting with a Wishlist

We are looking for partners to help us build a great physical collection of books to be preserved, digitized, and made available through our Open Libraries project. Working with more than 500 library partners, the Internet Archive has already helped make more than 3 million public domain books available online for free access. We have also brought more than 500,000 in-copyright books online to provide full access to those with print disabilities.

Our goal is to bring 4 million more books online, so that all digital learners have access to a great digital library on par with a major metropolitan public library system. We know we won’t be able to make this vision a reality alone, which is why we’re working with libraries, authors, and publishers to build a collaborative digital collection accessible to any library in the country.

Building a great library starts with great books. We have already gathered more than 1.5 million books in our physical archive. We aspire to have one copy of every book, but en route to that dream we have created a “wishlist” to help prioritize preservation and access. This wishlist was compiled using data and assistance from several great projects:

Download the wishlist here.

We are using these datasets to help define a collection of books that has wide appeal and impact for libraries across the US and the patrons they serve. This wishlist is a work-in-progress and will evolve as we incorporate more datasets and review our approach with community input. We’ve made 3 versions of our wishlist available to help facilitate use within the library and publishing communities, featuring ISBN-13, ISBN-10, & OCLC identifiers.

Here’s how you can help! We are looking for libraries, authors, publishers, and individual book lovers to help us build this collection. You can help in the following ways:

  • Donate books
    • You can donate books on our wishlist to our physical archive. If you are a library, a publisher, or have a private collection with more than 1,000 books to donate, please contact Chris Freeland, Director of Open Libraries. If you have a private collection or small number of volumes to donate, please use this form to begin the donation process. We will add these books to our digitization queue and they will become ebooks available through Open Libraries as funding becomes available.
    • If you already have digital versions of these books, we would love to add them to our print-disabled collection.
  • Scan books
    • If you have books on our wishlist but don’t want to donate them to our physical archive, we offer scanning services and can digitize your books in one of our regional scanning centers.
  • Identify books
    • If you are an author who would like to add your own books to the list, you can donate physical copies, and/or contact us to let us know you’d like us to ensure that your work will be preserved and available to future generations. If you’re a librarian, educator, or other book lover and would like to help us continue to curate the wishlist to ensure that it includes the most useful, important and culturally diverse books, please reach out to us.

And of course, if you don’t have any books to donate but would like to help offset digitization expenses, please donate today! All monetary donations made by April 30, 2018, will be matched by a Challenge Grant from the Pineapple Fund.

If you are interested in participating, or have questions about our program or plans, please contact Chris Freeland, Director of Open Libraries.

OpenVPN Routed Client Config for OpenWRT

In this case I want to access a remote network where an OpenWRT router is also in use, acting as the OpenVPN client. This is a post in a series of OpenVPN tutorials on this blog.

Network Topology

                    |      |
                    | IPv4 |
                +---+      +---+
                |   +------+   |
                |              |
+---------------+-+          +-+---------------+
| Router A        |          | Router B        |
|                 |          |                 |
| |          | |
|    |          |    |
+-+---------------+          +-^---------------+
  |                            |
  |                            |
+-v-------------+            +-+-------------+
| OpenVPN Tun   |            | OpenVPN Tun   |
| |            | |
| Server        |            | Client        |
|    |            |    |
+---------------+            +---------------+

I want to be able to reach the remote net from the local one, and the other direction.

Creation of Client Certificates

See the blog post on creation of Root CA certificates. (This blog post is still on the to-do list.)

OpenWRT Config Settings – Routed Client

From /etc/config/openvpn

config openvpn 'cyber'
        option enabled '1'
        option client '1'
        option remote 'vpn.domain.tld 1194'        
        option dev_type 'tun'
        option dev 'cyber_tun0'
        option proto 'udp'        
        option topology 'subnet'
        option resolv_retry 'infinite'        
        option nobind '1'        
        option float '1'
        option pull '1'
        option ca '/etc/ssl/certs/'
        option cert '/etc/ssl/certs/client1.vpn.cavebeat.lan.cert.pem'
        option key '/etc/ssl/private/client1.vpn.cavebeat.lan.key.pem'
        option tls_crypt '/etc/ssl/private/tls-auth.key'
        option cipher 'AES-256-CBC'
        option ncp_ciphers 'AES-256-GCM:AES-128-GCM:AES-256-CBC:AES-128-CBC'
        option auth 'SHA512'
        option tls_client '1'
        option tls_version_min '1.2'
        option remote_cert_tls 'server'
        option verify_x509_name 'vpn.domain.tld name'
        option log '/var/log/openvpn.log'
        option status '/var/log/openvpn-status.log'
        option mute '5'
        option verb '4'
        option compress 'lzo'
#Connection Reliability
        option persist_key '1'
        option persist_tun '1'
        option user 'nobody'
        option group 'nogroup'
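
After saving the config, the client can be restarted and checked from the router's shell. A quick sketch of the commands (the interface name and log path are taken from the options above; the exact log message depends on the verb level):

```
root@openwrt_client:~# /etc/init.d/openvpn restart
root@openwrt_client:~# ifconfig cyber_tun0
root@openwrt_client:~# grep "Initialization Sequence Completed" /var/log/openvpn.log
```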

Client Config

Most of the settings were already explained in the previous post, OpenVPN Server Hardening – OpenWRT TUN Device. I'll cover only the client-specific settings that are new. For example, the client config does not contain a Diffie-Hellman parameter setting.

--client 
  A helper directive designed to simplify the configuration of OpenVPN's client mode.  
--remote host [port] [proto] 
 Remote host name or IP address.  
 On the client, multiple --remote options may be specified for redundancy, each referring to a different OpenVPN server.  
 Specifying multiple --remote options for this purpose is a special case of the more general connection-profile feature.  
 See the <connection> documentation below. 
--resolv-retry n 
  If hostname resolve fails for --remote, retry resolve for n seconds before failing. 
  Set n to "infinite" to retry indefinitely.
--nobind 
  Do not bind to local address and port.  
  The IP stack will allocate a dynamic port for returning packets.  
  Since the value of the dynamic port could not be known in advance by a peer, this option is only suitable for peers which will be initiating connections by using the --remote option. 
--float 
  Allow remote peer to change its IP address and/or port number, such as due to DHCP (this is the default if --remote is not used). 
 --float when specified with --remote
 allows an OpenVPN session to initially connect to a peer at a known 
address, however if packets arrive from a new address and pass all 
authentication tests, the new address will take control of the session. 
 This is useful when you are connecting to a peer which holds a dynamic address such as a dial-in user or DHCP client. 
 Essentially, --float tells OpenVPN to accept authenticated packets from any address, not only the address which was specified in the --remote option. 
--pull 
  This option must be used on a client which is connecting to a multi-client server.  
  It indicates to OpenVPN that it should accept options pushed by the server, provided they are part of the legal set of pushable options (note that the --pull option is implied by --client ). 
  In particular, --pull allows the server to push routes to the client, so you should not use --pull or --client in situations where you don't trust the server to have control over the client's routing table. 
--tls-client 
  Enable TLS and assume client role during TLS handshake. 
--remote-cert-tls client|server 
  Require that peer certificate was signed with an explicit key usage and extended key usage based on RFC3280 TLS rules. 
  This is a useful security option for clients, to ensure that the host they connect to is a designated server.  
  Or the other way around; for a server to verify that only hosts with a client certificate can connect. 
--verify-x509-name name type 
  Accept connections only if a host's X.509 name is equal to name. 
  The remote host must also pass all other tests of verification. 
  Which X.509 name is compared to name depends on the setting of type. 
  type can be "subject" to match the complete subject DN (default), "name" to match a subject RDN or "name-prefix" to match a subject RDN prefix. 
  NOTE: Test against a name prefix only when you are using OpenVPN with a custom CA certificate that is under your control. 
  Never use this option with type "name-prefix" when your client certificates are signed by a third party, such as a commercial web CA.

CCD – Client Config Dir on the Server

--client-config-dir dir 
  Specify a directory dir for custom client config files. 
  After a connecting client has been authenticated, OpenVPN will look in this directory for a file having the same name as the client's X509 common name. 
  If a matching file exists, it will be opened and parsed for client-specific configuration options. 
  If no matching file is found, OpenVPN will instead try to open and parse a default file called "DEFAULT", which may be provided but is not required. 
  Note that the configuration files must be readable by the OpenVPN process after it has dropped its root privileges. 
  This file can specify a fixed IP address for a given client using --ifconfig-push, as well as fixed subnets owned by the client using --iroute.
  One of the useful properties of this option is that it allows client configuration files to be conveniently created, edited, or removed while the server is live, without needing to restart the server. 
  The following options are legal in a client-specific context: --push, --push-reset, --push-remove, --iroute, --ifconfig-push, and --config.
--ccd-exclusive 
  Require, as a condition of authentication, that a connecting client has a --client-config-dir file. 
On your server, check the option client_config_dir '/etc/openvpn/ccd/'. In the defined CCD directory, place a file for each client. The file must be named after the X509 common name of the client certificate.
root@openwrt_server:~# cd /etc/openvpn/ccd/
root@openwrt_server:/etc/openvpn/ccd# ls
root@openwrt_server:/etc/openvpn/ccd# cat client1.vpn.cavebeat.lan 

Client Config File

ifconfig-push tells the client its tunnel IP address and netmask. iroute, in combination with route on the server, routes packets from OpenVPN to the client.

--ifconfig-push local remote-netmask [alias]
  Push virtual IP endpoints for client tunnel, overriding the --ifconfig-pool dynamic allocation.
  The parameters local and remote-netmask are set according to the --ifconfig directive which you want to execute on the client machine to configure the remote end of the tunnel. 
  Note that the parameters local and remote-netmask are from the perspective of the client, not the server. 
  They may be DNS names rather than IP addresses, in which case they will be resolved on the server at the time of client connection.
--iroute network [netmask] 
  Generate an internal route to a specific client. 
  The netmask parameter, if omitted, defaults to 
  This directive can be used to route a fixed subnet from the server to a particular client, regardless of where the client is connecting from.
  Remember that you must also add the route to the system routing table as well (such as by using the --route directive).
  The reason why two routes are needed is that the --route directive routes the packet from the kernel to OpenVPN. 
  Once in OpenVPN, the --iroute directive routes to the specific client. 
  This option must be specified either in a client instance config file using --client-config-dir or dynamically generated using a --client-connect script. 
  The --iroute directive also has an important interaction with --push "route ...". 
  --iroute essentially defines a subnet which is owned by a particular client (we will call this client A). 
  If you would like other clients to be able to reach A's subnet, you can use --push "route ..." together with --client-to-client to effect this.
  In order for all clients to see A's subnet, OpenVPN must push this route to all clients EXCEPT for A, since the subnet is already owned by A.
  OpenVPN accomplishes this by not pushing a route to a client if it matches one of the client's iroutes. 

Route Settings on Server

On the server, two route settings must be set. The first tells the server router where to send packets destined for the client network. The push route tells the clients where to send packets destined for the server network.

option route ''
list push 'route'
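
Assuming, purely for illustration, that the client LAN is and the server LAN is, the elided pair above would read:

```
# /etc/config/openvpn on the server – subnets are illustrative only
option route ''
list push 'route'
```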


For more details on this part, also have a look at my other VPN client tutorial.

Create Unmanaged Interface

Your /etc/config/network should now contain:

root@openwrt:~# cat /etc/config/network 
config interface 'cyber_vpn'
 option proto 'none'
 option ifname 'cyber_tun0'
 option auto '1'
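
The same unmanaged interface can also be created with uci from the shell instead of editing the file by hand (a sketch; run on the router, then reload the network config):

```
root@openwrt:~# uci set network.cyber_vpn=interface
root@openwrt:~# uci set network.cyber_vpn.proto='none'
root@openwrt:~# uci set network.cyber_vpn.ifname='cyber_tun0'
root@openwrt:~# uci set'1'
root@openwrt:~# uci commit network
root@openwrt:~# /etc/init.d/network reload
```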

Firewall Zones


Your /etc/config/firewall should now contain the following parts:

cat /etc/config/firewall 
config zone
 option name 'cyber_vpn'
 option input 'ACCEPT'
 option output 'ACCEPT'
 option network 'cyber_vpn'
 option forward 'ACCEPT'

config forwarding
 option dest 'lan'
 option src 'cyber_vpn'

config forwarding
 option dest 'cyber_vpn'
 option src 'lan'

Routing Table and Ping Checks

These routes should show up on both client and server so that each side is reachable from the other.


root@openwrt_client:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
...             ...             ...             U     0      0        0 cyber_tun0
...             ...             ...             UG    0      0        0 cyber_tun0
...             ...             ...             U     0      0        0 br-lan

Ping the Server Router from the Client Router

root@openwrt_client:~# ping -c 2
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=45.757 ms
64 bytes from seq=1 ttl=64 time=37.271 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 37.271/41.514/45.757 ms

Ping the remote and local OpenVPN IP

root@openwrt_client:~# ping -c 2
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=49.015 ms
64 bytes from seq=1 ttl=64 time=55.041 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 49.015/52.028/55.041 ms

root@openwrt_client:~# ping -c 2
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.289 ms
64 bytes from seq=1 ttl=64 time=0.277 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.277/0.283/0.289 ms


root@openwrt_server:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
...             ...             ...             U     0      0        0 cyber_tun0
...             ...             ...             UG    0      0        0 cyber_tun0
...             ...             ...             U     0      0        0 br-lan

Ping the Client Router from the Server Router

root@openwrt_server:~# ping -c 2 
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=43.559 ms
64 bytes from seq=1 ttl=64 time=34.661 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 34.661/39.110/43.559 ms

Ping the local and remote OpenVPN IP

root@openwrt_server:~# ping -c 2
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.437 ms
64 bytes from seq=1 ttl=64 time=0.282 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.282/0.359/0.437 ms

root@openwrt_server:~# ping -c 2
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=42.780 ms
64 bytes from seq=1 ttl=64 time=33.573 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 33.573/38.176/42.780 ms


Evocative President and COO Derek Garnier to Speak at Phoenix Data Centers: Communication, Infrastructure & Innovation

Garnier and other panel members will discuss the state of the data center market.

WHO: Evocative President and COO Derek Garnier will speak at Phoenix Data Centers: Communication, Infrastructure & Innovation

Derek Garnier is the President & COO of Evocative and brings 29 years of service-provider experience in data centers, networking, and compute. Prior to joining Evocative, he served as CEO of Layer42 Networks, which was acquired by Wave Broadband in 2015, with Garnier assuming the position of SVP of Data Center Services for Wave.

He has held both management and engineering roles at many top internet infrastructure providers, including QTS Datacenters, United Layer, AboveNet Communications, SiteSmith, Global Crossing, Global Center, MFS Datanet, and Cabletron Systems. Garnier frequently moderates industry panels, speaks at industry events and on radio, and provides consulting to investors and companies during M&A processes.

WHAT: Leading the Curve: The State of the Data Center Market

Garnier will join other leading data center developers who are expanding in Phoenix. The panel will discuss how Phoenix is uniquely well-positioned to meet demand from sectors such as technology, banking, financial services, healthcare, retail and e-commerce in 2018.

WHERE: The Camby – 2401 East Camelback Road, Phoenix, AZ 85016

WHEN: March 28, 2018 from 7:30am – 11:00am

Register for Phoenix Data Centers: Communication, Infrastructure & Innovation

For more information on Evocative’s suite of data center services or to take a tour of one of the company’s data centers, please visit

About Evocative
Evocative is a North American company and an owner and operator of secure, compliant, highly available data centers. We are the trusted guardians of our clients’ Internet infrastructure. For additional information, please visit

The post Evocative President and COO Derek Garnier to Speak at Phoenix Data Centers: Communication, Infrastructure & Innovation appeared first on Evocative Data Centers.

Share what you’re reading


To bring in the new year, Open Library announced a new feature called the Reading Log which lets you keep track of the books you’re currently reading, have finished reading, or want to read. Over the last two months since we launched the feature, we’ve received promising feedback from our community. Our reading log stats page shows over 53,000 readers have logged more than 100,000 books! It’s even helped us learn which books our community cares most about. The biggest point of feedback we’ve received is that many readers wish there was a way to share their reading log with friends.

As a library, and as readers ourselves, we take reader privacy seriously. We believe everyone should have the right to feel safe and have their privacy respected when they search for and borrow books. So when we launched the Reading Log feature, we decided to make it private by default, so only you can see what books you’re tracking. We also gave readers full control to manage, add, and remove books from their reading lists. We still think this is the right choice and will continue making the Reading Log private-by-default for all new users.

But now, readers have a choice: Announcing the public Reading Log option!

Starting today, users can go to a new privacy page where they can manage their account settings and make their Reading Logs public to share with family and friends.

How do I make my Reading Log public?

After going to your privacy page, you can click the “Yes” option to make your Reading Log public.


You can then visit your Reading Log and use the Share button to generate a link which you can share with your friends!


We hope the public Reading Log feature will give your friends inspiration as to what they should read next!