Month: October 2017

Spammers: Check. Projects next. And Other Goodies.

Hail Open Hubbites! We’ve been working hard, and with focus, over the past month and would like to share some updates.

The first, and biggest, news is that we ended our offshore partnership at the end of September. There were a number of drivers for this, but the immediate impact is that we are currently a four-person team. This explains why it’s been even more difficult than usual to keep up with all the ways folks get in touch with us: forum, blog posts, email, and tweets. We’re sorry about that and are working hard to keep up with everyone’s questions. We are planning on adding more members to the team over time, so this pressure will ease a bit.

We also finished running our trained machine learning model over our dataset and permanently deleted some 60K spam accounts. We’re really grateful to our intern, Sourav Das, currently at MIT, for his amazing attitude and contribution. He led this work and produced great results. This effort caps the push we made to get spammers off our site, when we suspected that as many as 2/3 of our accounts were spam. We currently have some 232K accounts in good standing.  Having cleaned out some 500K accounts, we were pretty close in our estimate. The good news is that the rate of spam account creation is well within our ability to monitor.

There is still more to do in this regard. There are accounts that have created projects and made edits on the site, which makes them look like legitimate users, but the projects they created are really spammy advertisements. We plan on applying the machine learning experience we’ve gained to train an algorithm to detect spam projects and start cleaning them off the site.  Let’s take a quick look at what that means:

On our home page, we say that we are indexing 472K projects.  That’s the number of undeleted projects on the Open Hub. However, not every project has analyzable code. When we count projects that are not deleted and have had an analysis in the past — at any time — we see there are 292K such projects; about 62% of all projects. This means that the difference, some 180K projects that have never been analyzed, could really be spam projects.  Some of them are legitimate OSS projects that don’t have analyzable code — documentation projects and that ilk.  But there are only a tiny number of those. I’d wager that almost all of those 180K unanalyzed projects are junk. So that next ML project, to find them and get rid of them, is important so that we have real numbers about activity in the OSS community.

Let’s turn our attention back to the nearly 300K projects that have had analysis.  Of them, nearly 28K projects have a CodePlex repository. As you probably know, CodePlex has gone into read-only mode and will be shutting down entirely by the end of this year. Most of those projects have only CodePlex repositories. In accordance with our mandate to provide analytics on available and active Open Source Software projects, we will be deleting all those projects. (We’ll start a background effort to find any new locations for those projects.)  This will drop the number of undeleted projects that have been analyzed at some point in the past to some 263K projects.

Finally, when we consider projects that have jobs in a permanent state of failure, which blocks our ability to generate updated analytics, we have to remove an additional 40K projects, which drives down the number of active projects that can reliably be updated: we’re really looking at 223K projects.
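As a quick sanity check, the funnel described above works out (numbers in thousands of projects, rounded as in the text):

```shell
# 472K undeleted projects minus 292K ever-analyzed = the never-analyzed gap
echo $(( 472 - 292 ))   # prints 180
# 263K analyzed-and-not-CodePlex-only minus 40K permanently failing = reliably updatable
echo $(( 263 - 40 ))    # prints 223
```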

We have two major strategies to increase the number of available and active OSS projects on the Open Hub. The first is to lower the threshold for getting new projects into the Open Hub. We are working on an overhauled and streamlined workflow for adding projects from GitHub to the Open Hub. Right now, we support only a bulk upload of all repositories in a GitHub account into a single project.  We plan on letting users create new projects, assign repositories to those new projects, and assign repositories to existing projects.  This will let users quickly get their projects into the Open Hub.

If this works as well as hoped, we can expand it to other forges, such as GitLab.  Your requests for other forges are most welcome.

The other strategy is to continue the cleanup work to examine failed jobs and see what can be recovered. But our user community is the best strategy we have.

Even with “only” 223K projects, there is no way we can manually review even a majority of them. Therefore, we will be making it possible to use your GitHub account to create an account on, and sign into, the Open Hub. By lowering the barrier to making edits on the Open Hub and relying on the maturity of the GitHub environment, we hope that more users will be willing to push their Open Source Software projects to the Open Hub, and review existing projects to ensure we have the most up-to-date information.

We are also working on capping the number of enlistments that any one project can have.  This will make it easier to review and maintain projects. We will introduce a new feature, “Open Hub Collections,” to gather blocks of related projects, like Linux distributions, so that users can quickly see which projects make up a collection and quickly navigate to related projects.

Gosh, there is so much more. There are ongoing database architecture upgrades, a rewrite of the analytics engine, a future overhaul of Ohloh SCM, ongoing reviews of pull requests against Ohcount, and the daily task of making sure everything we have is working the way it’s supposed to. Oh, and we updated the CSS so that the Open Hub is in line with the standards defined by the Black Duck Marketing team. But these will have to wait for another post.

Thank you for being part of the Open Source Community and the Open Hub. (And please provide your feedback in our survey, which will be open for only a little bit longer.)

How to fix Shockwave Flash crashing with vSphere Web Client 6.X

What happened?

It came as a surprise to me that a few of my production vSphere and vCenter environments all started exhibiting issues with the vSphere web client.
I tried on every browser, both Windows and Mac, but kept getting this message:

Accompanied by a friendly reminder that the page couldn’t render:

As noted by VMware, this impacts all versions of the Web Client.
Additionally, there is no known resolution, but a few workarounds exist that are detailed below. 

  • Personally, I chose to patch the pepflashplayer.dll for Windows Chrome.
  • You can find the download at the bottom of this article.

It worked like a charm! I was back in the Web Client without even re-launching Chrome!


While logging in to the vSphere Web Client the flash plugin crashes with the following error:
Shockwave Flash has crashed.


This is a known issue with recent Adobe Shockwave Flash versions, and it affects all versions of the vSphere Web Client.
For more information see


Currently there is no resolution.

To work around this issue, use one of these options:

Alternatively, the HTML5 Client and vSphere Client (legacy) are available for most operations, dependent on vSphere version and feature availability.

Additional Information

Community forums have identified a workaround of replacing the pepflashplayer.dll file with an older version as well.
Caution: This has not been verified by Adobe. fp_27.0.0.159 is not the latest version; the latest Flash version is
All versions previous to it are impacted by the critical vulnerability CVE-2017-11292. See,
Windows Workaround for FireFox:
  1. Click Start > Run, type appwiz.cpl and click Run.
  2. Uninstall Adobe Flash Player 27 NPAPI Version
  3. Download
  4. Extract flashplayer27_0r0_159_win.msi from fp_27.0.0.159_archive.zip (in the 27_0_r0_159 folder).
  5. Close FireFox.
  6. Run the extracted flashplayer27_0r0_159_win.msi.
  7. Click Start > Run, type services.msc and click Run.
  8. Disable Adobe Flash Player Update Service.
  9. Open the vSphere Web Client in FireFox.
Windows Workaround for Chrome:
  1. Download the attached 2151945_pepflashplayer.7z. [Link at bottom of post]
  2. Extract the pepflashplayer.dll to the Desktop.
  3. Open C:\Users\%username%\AppData\Local\Google\Chrome\User Data\PepperFlash\27.0.0.170 in File Explorer.
  4. Rename pepflashplayer.dll to pepflashplayer.old.
  5. Copy the pepflashplayer.dll extracted earlier from the Desktop to C:\Users\%username%\AppData\Local\Google\Chrome\User Data\PepperFlash\27.0.0.170.
  6. Open the vSphere Web Client in Chrome.
Mac OS X workaround for Firefox:
  1. Uninstall the current Adobe Flash Player. For more information, see
  2. Download the installer for Adobe Flash Player, available at
  3. Extract flashplayer27_0r0_159_mac.dmg from fp_27.0.0.159_archive.zip (in the 27_0_r0_159 folder).
  4. Install Adobe Flash Player using flashplayer27_0r0_159_mac.dmg.
  5. Disable automatic updates for Adobe Flash Player.



How to use the VMware PVSCSI driver in a Windows Virtual Machine

VMware PVSCSI Driver in Windows Server (2016)

Have you ever seen this error when trying to install Windows to a VM?

It’s probably because the Paravirtual SCSI driver is not loaded. Windows doesn’t include this driver, so it’s necessary to insert and install it when booting the installer ISO. In a VM, we do this with a virtual floppy drive and a floppy disk image.

This is quite trivial, but new users may not think to add a floppy drive or know that there are floppy images with VMware Paravirtual drivers on their host.

I used VMware ESXi 6.0 and chose to use Windows Server 2016 as the VM guest.

I’ll start by creating a new VM

I’m naming my VM “Windows PVSCSI” because it’s quite appropriate.

As you’re customizing the VM settings, hit “Add other device” and add a new Floppy drive

Browse for an image for this. On your host there is a folder called “vmimages” which contains floppy images for various operating systems.

For the most recent operating systems (Windows 8, 8.1, Server 2012, Server 2012 R2, Server 2016) you should use the Windows 8 floppy.

  • For an OS such as Windows 7, you should use the 2008 floppy because it shares the same codebase.

So here’s what my VM looks like with the floppy and VMware Paravirtual SCSI controller.

Note that the floppy is there.. it’s important.. xD

To review my VM settings again, in case you wanted to be clear about something:

And then go ahead and create the VM.

Installing Windows with the PVSCSI driver

So I’ve booted my “Windows PVSCSI” VM but the hard drive can’t be found!

  • Oh yeah, that’s because we need to load the driver xD

Click “Load driver”

Browse for the floppy image you attached when creating the VM

Select the folder for your Guest OS architecture, most likely AMD64

And Windows recognizes the driver and will let me install it! So click Next:

And after a minute or two…

… The disk appears so Windows can be installed!


Thank you for reading. I hope this has helped you!


The 20th Century Time Machine


by Nancy Watzman & Katie Dahl

Jason Scott

With the turn of a dial, some flashing lights, and the requisite puff of fog, emcees Tracey Jaquith, TV Architect, and Jason Scott, Free Range Archivist, cranked up the Internet Archive 20th Century Time Machine on stage before a packed house at the Internet Archive’s annual party on October 11.

Eureka! The cardboard contraption worked! The year was 1912, and out stepped Alexis Rossi, director of Media and Access, her hat adorned with a 78rpm record.



D’Anna Alexander (center) with her mother (right) and grandmother (left).

“Close your eyes and listen,” Rossi asked the audience. And then, out of the speakers floated the scratchy sounds of Billy Murray singing “Low Bridge, Everybody Down” written by Thomas S. Allen. From 1898 to the 1950s, some three million recordings of about three minutes each were made on 78rpm discs. But these discs are now brittle, the music stored on them precious. The Internet Archive is working with partners on the Great 78 Project to store these recordings digitally, so that we and future generations can enjoy them and reflect on our music history. New collections include the Tina Argumedo and Lucrecia Hug 78rpm Collection of dance music collected in Argentina in the mid-1930s.


Next to emerge from the Time Machine was David Leonard, president of the Boston Public Library, which was the first free municipal library founded in the United States. The mission was and remains bold: make knowledge available to everyone. Knowledge shouldn’t be hidden behind paywalls or restricted to the wealthy, but rather should operate under the principle of open access as a public good, he explained. Leonard announced that the Boston Public Library would join the Internet Archive’s Great 78 Project by authorizing the transfer of 200,000 individual 78s and LPs to preserve and make accessible to the public, “a collection that otherwise would remain in storage unavailable to anyone.”


David Leonard and Brewster Kahle

Brewster Kahle, founder and Digital Librarian of the Internet Archive, then came through the time machine to present the Internet Archive Hero Award to Leonard. “I am inspired every time I go through the doors,” said Kahle of the library, noting that the Boston Public Library was the first to digitize not just a presidential library, of John Quincy Adams, but also modern books.  Leonard was presented with a tablet imprinted with the Boston Public Library homepage by Internet Archive 2017 Artist in Residence, Jeremiah Jenkins.


Kahle then set the Time Machine to 1942 to explain another new Internet Archive initiative: liberating books published between 1923 and 1941. Working with Elizabeth Townsend Gard, a copyright scholar at Tulane University, the Internet Archive is liberating these books under a little known, and perhaps never used, provision of US copyright law, Section 108h, which allows libraries to scan and make available materials published from 1923 to 1941 if they are not being actively sold. The name of the new collection: the Sonny Bono Memorial Collection, named for the now-deceased congressman who led the passage of the Copyright Term Extension Act of 1998, which included the 108h provision as a “gift” to libraries.

One of these books includes “Your Life,” a tome written by Kahle’s grandfather, Douglas E. Lurton, a “guide to a desirable living.” “I have one copy of this book and two sons. According to the law, I can’t make one copy and give it to the other son. But now it’s available,” Kahle explained.



Sab Masada

The Time Machine cranked to 1944, and out came Rick Prelinger, Internet Archive Board member, archivist, and filmmaker. Prelinger introduced a new addition to the Internet Archive’s film collection: long-forgotten footage of an Arkansas Japanese internment camp from 1944.  As the film played on the screen, Prelinger welcomed Sab Masada, 87, who lived at this very camp as a 12-year-old.

Masada talked about his experience at the camp and why it is important for people today to remember it. “Since the election I’ve heard echoes of what I heard in 1942,” Masada said. “Using fear of terrorism to target the Muslims and people south of the border.”


Next to speak was Wendy Hanamura, the director of partnerships. Hanamura explained how as a sixth grader she discovered a book at the library, Executive Order 9066, published in 1972, which chronicled photos of Japanese internment camps during World War II.

“Before I was an internet archivist, I was a daughter and granddaughter of American citizens who were locked up behind barbed wire in the same kind of camps that incarcerated Sab,” said Hanamura. That one book – now out of print – helped her understand what had happened to her family.

Inspired by making it to the semi-final round of the MacArthur 100&Change initiative with a proposal that provides libraries and learners with free digital access to four million books, the Internet Archive is forging ahead with plans, despite not winning the $100 million grant. Among the books the Internet Archive is making available: Executive Order 9066.


The year display turned to 1985, and Jason Scott reappeared on stage, explaining his role as a software curator. New this year to the Internet Archive are collections of early Apple software, he explained, with browser emulation allowing the user to experience just what it was like to fire up a Macintosh computer back in its heyday. This includes a collection of the then wildly popular “HyperCards,” a programmatic tool that enabled users to create programs that linked materials in creative ways, before the rise of the world wide web.


After this tour through the 20th century, the Time Machine was set to 1997. Mark Graham, Director of the Wayback Machine, and Vinay Goel, Senior Data Engineer, stepped on stage. Back in 1997, when the Wayback Machine began archiving websites on the still-new World Wide Web, the entire thing amounted to 2.2 terabytes of data. Now the Wayback Machine contains 20 petabytes. Graham explained how the Wayback Machine is preserving tweets, government websites, and other materials that could otherwise vanish. One example: this report from The Rachel Maddow Show, which aired on December 16, 2016, about Michael Flynn, then slated to become National Security Advisor. Flynn deleted a tweet he had made linking to a falsified story about Hillary Clinton, but the Internet Archive saved it through the Wayback Machine.

Goel took the microphone to announce new improvements to Wayback Machine Search 2.0. Now it’s possible to search for keywords, such as “climate change,” and find not just web pages from a particular time period mentioning these words, but also different format types — such as images, PDFs, or yes, even an old Internet Archive favorite, animated GIFs from the now-defunct GeoCities — including snow globes!

Thanks to all who came out to celebrate with the Internet Archive staff and volunteers, or watched online. Please join our efforts to provide Universal Access to All Knowledge, whatever century it is from.

Editor’s Note, 10/16/17: Watch the full event  


Syncing Catalogs with thousands of Libraries in 120 Countries through OCLC



We are pleased to announce that the Internet Archive and OCLC have agreed to synchronize the metadata describing our digital books with OCLC’s WorldCat. WorldCat is a union catalog that itemizes the collections of thousands of libraries in more than 120 countries that participate in the OCLC global cooperative.

What does this mean for readers?
When the synchronization work is complete, library patrons will be able to discover the Internet Archive’s collection of 2.5 million digitized monographs through the libraries around the world that use OCLC’s bibliographic services. Readers searching for a particular volume will know that a digital version of the book exists in our collection. With just one click, readers will be taken to examine and possibly borrow the digital version of that book. In turn, readers who find a digital book will be able, with one click, to discover the nearest library where they can borrow the hard copy.

There are additional benefits: in the process of the synchronization, OCLC databases will be enriched with records describing books that may not yet be represented in WorldCat.

“This work strengthens the Archive’s connection to the library community around the world. It advances our goal of universal access by making our collections much more widely discoverable. It will benefit library users around the globe by giving them the opportunity to borrow digital books that might not otherwise be available to them,” said Brewster Kahle, Founder and Digital Librarian of the Internet Archive. “We’re glad to partner with OCLC to make this possible and look forward to other opportunities this synchronization will present.”

“OCLC is always looking for opportunities to work with partners who share goals and objectives that can benefit libraries and library users,” said Chip Nilges, OCLC Vice President, Business Development. “We’re excited to be working with Internet Archive, and to make this valuable content discoverable through WorldCat. This partnership will add value to WorldCat, expand the collections of member libraries, and extend the reach of Internet Archive content to library users everywhere.”

We believe this partnership will be a win-win-win for libraries and for learners around the globe.

Better discovery, richer metadata, more books borrowed and read.

Read the OCLC press release.

How to enable SSH access to your ESXi server

Here’s to all the articles in which I’ve implicitly stated “enable SSH access” or “SSH should be enabled” but offered no further assistance for those who wanted to know how to do that. Now I can link to this post! xD

Enabling SSH access to your ESXi server can be done in the vSphere application or the WebUI for your host. I’ll show you how to do both, since VMware wants to transition us to the WebUI and deprecate the vSphere client.

vSphere Client:

After logging in, proceed to the “Configuration” tab.

Scroll down and click “Security Profile” on the left-hand side, under the “Software” header.

In the upper right corner of the Security Profile page, click on “Properties…”

Find SSH in the list and click on “Options…”

Here you have a few options.

You can change the startup policy if you wish, or otherwise just hit the “Start” button. The latter will enable SSH for the duration, until you disable it or the host reboots.

As you can see, the “Summary” page of my vSphere client is notifying me that SSH is enabled.


WebUI:

After logging in, proceed to the “Manage” link on the left-hand side.

Click on the “Services” tab

Select TSM-SSH in the list and proceed to the “Actions” menu

Similarly to the vSphere client, you can opt to start the SSH service for the duration, or alternatively select a startup policy.


What if for some reason you’ve lost network access to your host but still need to enable SSH, or otherwise find it simpler to log into the console than any sort of interface? You can also do this in the console, under the Troubleshooting Options menu.
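And if you already have shell access to the host, the same toggle can be done from the command line. Here is a sketch, assuming ESXi 5.x or later where these vim-cmd services exist:

```shell
# Start SSH for this boot only (equivalent to clicking “Start”)
vim-cmd hostsvc/start_ssh

# Or enable SSH so it also comes up on future boots
vim-cmd hostsvc/enable_ssh

# To turn it back off later
vim-cmd hostsvc/stop_ssh
vim-cmd hostsvc/disable_ssh
```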

Thank you for reading. I hope this helps you!

How to install Synology NFS VAAI VIB on vSphere ESXi

Does your Synology NFS datastore look like this? Is Hardware Acceleration “not supported”?

Well, you’re lucky because this is a quick and vendor-supported fix.

By enabling the VAAI plugin, you will see much better performance and also gain access to the full set of VMware features:

 Synology supports VAAI primitives including Hardware Assisted Locking (ATS), Block Zero, Full Copy, and Thin Provisioning. The complete VAAI support offloads tasks from ESXi to the storage array, realizing significant gains in performance and efficiency in storage utilization. By incorporating Synology storage solutions for virtualization, users can expand their capacity with minimal cost and complexity.

How do I install Synology NFS VAAI Plug-in on an ESXi host?

To install via the command line:

    1. Download the Synology NFS Plug-in online bundle and upload this .vib file to a datastore on your ESXi host. Alternatively, you can directly run wget on your ESXi host to download the .vib file (online bundle).
    2. Connect to your ESXi host via SSH, and execute the following command. Please make sure to replace the datastore name in the command with the name of your own.
     esxcli software vib install -v /vmfs/volumes/Datastore/esx-nfsplugin.vib

Reboot the ESXi host.

The enhanced features with hardware acceleration will take effect when it boots back up.
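After the reboot, you can sanity-check the install from the same SSH session. A sketch; the exact VIB name may differ by bundle version:

```shell
# Confirm the Synology NFS plug-in VIB is present
esxcli software vib list | grep -i nfs

# NFS datastores should now report Hardware Acceleration as “Supported”
esxcli storage nfs list
```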

Books from 1923 to 1941 Now Liberated!


[press: boingboing]

The Internet Archive is now leveraging a little known, and perhaps never used, provision of US copyright law, Section 108h, which allows libraries to scan and make available materials published from 1923 to 1941 if they are not being actively sold. Elizabeth Townsend Gard, a copyright scholar at Tulane University, calls this “Library Public Domain.”  She and her students helped make the first scanned books of this era available online in a collection named for the author of the bill making this necessary: The Sonny Bono Memorial Collection. Thousands more books will be added in the near future as we automate. We hope this will encourage libraries that have been reticent to scan beyond 1923 to start mass scanning their books and other works, at least up to 1942.


While good news, it is too bad it is necessary to use this provision.


Trend of Maximum U.S. General Copyright Term by Tom W Bell

If the Founding Fathers had their way, almost all works from the 20th century would be public domain by now (14-year copyright term, renewable once if you took extra actions).

Some corporations saw adding works to the public domain to be a problem, and when Sonny Bono got elected to the House of Representatives, representing Riverside County, near Los Angeles, he helped push through a law extending copyright’s duration another 20 years to keep things locked-up back to 1923.  This has been called the Mickey Mouse Protection Act due to one of the motivators behind the law, but it was also a result of Europe extending copyright terms an additional twenty years first. If not for this law, works from 1923 and beyond would have been in the public domain decades ago.

Lawrence Lessig

Creative Commons founder Larry Lessig fought the new law in court as unreasonable, unneeded, and ridiculous.  In support of Lessig’s fight, the Internet Archive made an Internet bookmobile to celebrate what could be done with the public domain. We drove the bookmobile across the country to the Supreme Court to make books during the hearing of the case. Alas, we lost.


Internet Archive Bookmobile in front of
Carnegie Library in Pittsburgh: “Free to the People”

There is, however, an exemption from this extension of copyright, but only for libraries and only for works that are not actively for sale: we can scan them and make them available. Professor Townsend Gard had two legal interns work with the Internet Archive last summer to find out how we can automate finding appropriate scanned books that could be liberated, and they hand-vetted the first books for the collection. Professor Townsend Gard has just released an in-depth paper giving libraries guidance on how to implement Section 108(h), based on her work with the Archive and other libraries. Together, we have called them “Last Twenty” Collections, as libraries and archives can copy and distribute to the general public qualified works in the last twenty years of their copyright.

Today we announce the “Sonny Bono Memorial Collection” containing the first books to be liberated. Anyone can download, read, and enjoy these works that have been long out of print. We will add another 10,000 books and other works in the near future. “Working with the Internet Archive has allowed us to do the work to make this part of the law usable,” reflected Professor Townsend Gard. “Hopefully, this will be the first of many “Last Twenty” Collections around the country.”

Now is the chance for libraries and citizens who have been reticent to scan works beyond 1923 to push forward to 1941, and the Internet Archive will host them. “I’ve always said that the silver lining of the unfortunate Eldred v. Ashcroft decision was the response from people to do something, to actively begin to limit the power of the copyright monopoly through action that promoted open access and CC licensing,” says Carrie Russell, Director of ALA’s Program of Public Access to Information. “As a result, the academy and the general public have rediscovered the value of the public domain. The Last Twenty project joins the Internet Archive, the HathiTrust copyright review project, and the Creative Commons in amassing our public domain to further new scholarship, creativity, and learning.”

We thank and congratulate Team Durationator and Professor Townsend Gard for all the hard work that went into making this new collection possible. Professor Townsend Gard, along with her husband, Dr. Ron Gard, have started a company, Limited Times, to assist libraries, archives, and museums implementing Section 108(h), “Last Twenty” collections, and other aspects of the copyright law.


Prof. Elizabeth
Townsend Gard


Tomi Aina
Law Student


Stan Sater
Law Student







Hundreds of thousands of books can now be liberated. Let’s bring the 20th century to 21st-century citizens. Everyone, rev your cameras!

The Best Way to Convert VMware ESXi 6.5 VM to Hyper-V Virtual Machine (2017)

Usually I find that I’ll need to convert a Hyper-V VM to VMware ESXi, but in this rare scenario, I had to convert ESXi 6.5 virtual machines into Hyper-V (2012 R2) virtual machines.

This is trivial if you have a supported environment: you simply use Microsoft Virtual Machine Converter [MVMC 3.1] if you have ESXi 6.0 or below.

  • Lucky me, I am running the latest 6.5 in my environments!

As for ESXi 6.5 conversions to Hyper-V, based on my research there is no good way to do this for free or without a headache…

…or so I thought.
I came across a piece of software by 5nine called 5nine V2V Easy Converter.

And it’s free!

Overview of ESXi to Hyper-V Conversion Process

  1. The VM is shut down and then its configuration settings are remapped from VMware (.vmx) to Hyper-V (.xml), including the name, memory, virtual networks, virtual disks, etc. set in the wizard.
  2. The VM’s hard disk is copied to a temporary location from the VMware (.vmdk) to the Hyper-V (.vhd/x) format. This includes the OS and data disks.
  3. A new VM is created on Hyper-V by combining the configuration file and disk.
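For reference, the disk-format translation in step 2 can also be done by hand with qemu-img. This is a hypothetical alternative, not part of the 5nine tool, and the file names here are placeholders:

```shell
# Translate a VMware disk into a dynamically expanding Hyper-V VHDX
qemu-img convert -p -f vmdk -O vhdx -o subformat=dynamic source.vmdk destination.vhdx
```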


You’ll have to register through their website and obtain a download link via e-mail.


Inside the ZIP there is more than what you need for this. Simply take the folder “5nine EasyConverter” and extract that to your desktop or some working directory.

Run Converter

Open up the folder you just extracted, then find and run EsxToHyperVConverter.exe.

Input ESXi Host Info

Enter the information for your ESXi host or vCenter cluster.

  • For me, inputting my vCenter address did not return any VMs, but inputting a specific host worked just fine.

Select a VM to convert

Here you can choose any VMs you want to convert. The conversion will not work while the machine is powered on; the conversion software will, however, power off the virtual machine as part of its process.

  • Here I am converting 22100-MAPS to a Hyper-V VM. It’s already converting, but I went back through the wizard because I missed a few screen shots.

Configure destination VM

Here is where you configure the basic details of the VM, but I just left it at the defaults.

Select Disks to convert

It was already checked for me by default, but you might want to check that all of your disks will be converted.

Select destination host

Enter the IP or hostname of your Hyper-V server or Failover Cluster

It showed me that there were 0MB available but that could be ignored because I knew I had space.

Define destination storage location

For my environment, I chose to store it in a ClusterStorage volume and also made sure to check VM Traffic (yours may be named differently) to put the new VM on the proper network.

Select temporary conversion path

I don’t have a screenshot of this part, but basically you will define a location on your machine that has enough space to stage this conversion.

Begin conversion

Here I’m converting the 22100-MAPS server and it’s downloaded ~50GB of the 80GB thin disk.

When the download is complete, the software will begin converting the disk(s) to the proper format.

When this is complete, you’ll receive a friendly notice that the virtual machine has been converted.

Let’s boot it!

It works!

Is it thin provisioned, as my source VM was?

I wonder, since it tried to copy all 80GB upon conversion, will the resulting disks be 80GB or thin-provisioned?

Bingo! It’s thin provisioned!

I hope this helps you as much as it has helped me.


How to create a Bootable ISO image of macOS 10.13 High Sierra installer

Normally you can’t obtain bootable media of macOS. OS X was a different story, but you also had to pay for those versions. As an owner of a MacBook Pro, it’s slightly unsettling that I wouldn’t necessarily be able to plug in a bootable USB or insert a DVD with the macOS installer image in the event that I needed to re-install my OS because my SSD bit the dust, or something.

This guide will also be useful for those who run virtual machines of macOS in environments like VirtualBox.

To abide by Apple’s terms of use, you must go through official channels to obtain the macOS installer. This means you actually need a Mac or a MacBook to create this bootable ISO.

Overview of how to create a bootable macOS 10.13 High Sierra ISO image:

  1. Download macOS from the App Store
  2. Open Terminal
  3. Run commands
  4. Rename to .ISO



Click this link to open the macOS High Sierra download in the App Store

Especially if you’ve already upgraded to High Sierra and deleted the installer data (with CleanMyMac, etc.), you will need to download it again before proceeding with this article.

Run commands in Terminal

Run these commands one at a time. (Update 2/6/2018: I changed the 5130m to 5200m based on feedback from the comments.)

hdiutil create -o /tmp/HighSierra.cdr -size 5200m -layout SPUD -fs HFS+J
hdiutil attach /tmp/HighSierra.cdr.dmg -noverify -mountpoint /Volumes/install_build
sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/install_build
mv /tmp/HighSierra.cdr.dmg ~/Desktop/InstallSystem.dmg
hdiutil detach /Volumes/Install\ macOS\ High\ Sierra
hdiutil convert ~/Desktop/InstallSystem.dmg -format UDTO -o ~/Desktop/HighSierra.iso

Here are some of my outputs for you to review (after the first three commands)

The resulting file on my desktop is almost ready to use.

Rename file

Rename the file, removing .cdr from the end. Confirm by clicking “Use .iso”
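If you prefer Terminal over Finder for this rename, a one-liner does it (assuming the convert step left HighSierra.iso.cdr on your Desktop):

```shell
# The UDTO conversion appends .cdr; strip it to get a plain .iso
mv ~/Desktop/HighSierra.iso.cdr ~/Desktop/HighSierra.iso
```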


The resulting ISO can be used to create bootable USBs or DVDs, to install VMs, or simply to archive for your backups, “just in case.”