Category: Linuxaria

What You Don’t Know About Linux Open Source Could Be Costing You More Than You Think


Guest post by Marc Fisher

If you would like to test out Linux before completely switching to it as your everyday driver, there are a number of ways you can do it. Linux was not intended to run on Windows, and Windows was not meant to host Linux. To begin with, and perhaps most importantly, Linux is open source software. In any event, Linux tends to outperform Windows on the same hardware.

If you’ve always wished to try out Linux but were never certain where to begin, have a look at our getting-started guide for Linux. Linux is not that different from Windows or Mac OS: it is basically an operating system, the main difference being that it is free for everyone. Using Linux today isn’t any more challenging than switching from one sort of smartphone platform to another.

You’re most likely already using Linux, whether you are aware of it or not. Linux has a lot of distinct versions to suit nearly any sort of user, and choosing it today is close to a no-brainer. Linux plays an essential part in keeping our world going.

 

Even then, it depends on the build of Linux that you’re using. Linux runs a lot of the underbelly of cloud operations. Linux is also different in that, even though the core pieces of the operating system are usually common, there are lots of distributions of Linux, each with different software choices. While Linux might seem intimidatingly intricate and technical to the ordinary user, contemporary Linux distros are in reality very user-friendly, and it’s no longer the case that you have to have advanced skills to get started using them. Linux was the very first major Internet-centred open-source undertaking. Linux is beginning to increase the range of patches it pushes automatically, but several of the security patches continue to be opt-in only.

You are able to remove Linux later in case you need to. Linux also supplies a huge library of functionality which can be leveraged to accelerate development.

Open source projects such as Linux are incredibly capable because of the contributions that so many individuals have added over time.

Life After Linux Open Source

The development edition of the manual typically has more documentation, but it might also document new features that aren’t in the released version. Fortunately, it’s so lightweight you can just jump to some other version in case you don’t like it. It’s extremely hard to modify the compiled version of the majority of applications, and nearly impossible to see exactly how the developer created different sections of the program.

On the challenges of bottom-up go-to-market: it’s really hard to grasp the difference between your organic product (the product your developers use and love) and your company product, which ought to be, effectively, a different product. As stated by the report, it’s going to be hard for developers to switch. Developers are now incredibly important and influential in the purchasing process. Some OpenWrt developers will attend the event and will be ready to reply to your questions!

Once a program is installed, it has to be configured. Suppose you discover that the software you bought does not actually do what you want it to do; with closed source software that is generally much harder to deal with. Open source software is far more common than you might believe, and it is an amazing philosophy to live by. Using open source software gives you an inexpensive way to bootstrap a business. So, as far as applications go, you’re all set if you are prepared to learn an alternative program or to find a way to make your current one run on Linux. Possibly the most famous copyleft software is Linux.

Facts, Fiction and Linux Open Source

Important if you’re new to Linux: installing it isn’t challenging. Most Linux distributions ship with a user-friendly installer, and while Linux might appear intimidatingly complicated and technical to the ordinary user, contemporary distros no longer require advanced skills to begin using them.

There are a number of other forms of Linux available at Pendrivelinux.com. Maintaining Linux is extremely easy, as every distribution has its own software repository. Using Linux today isn’t any more challenging than switching from one kind of smartphone platform to another, and you can find it running most of today’s online websites and online games, such as Vegas Palms online slots.


Useful Resources for Those Who Want to Know More About Linux


Guest post by Lucy Benton

Linux is one of the most popular and versatile operating systems available. It can be used on a smartphone, computer and even a car. Linux has been around since the 1990s and is still one of the most widespread operating systems.

Linux is actually used to run most of the Internet as it is considered to be rather stable compared to other operating systems. This is one of the reasons why people choose Linux over Windows. Besides, Linux provides its users with privacy and doesn’t collect their data at all, while Windows 10 and its Cortana voice control system always require updating your personal information.

Linux has many advantages. However, people do not hear much about it, as it has been squeezed out from the market by Windows and Mac. And many people get confused when they start using Linux, as it’s a bit different from popular operating systems.

So to help you out we’ve collected 5 useful resources for those who want to know more about Linux.



1. Linux for Absolute Beginners

If you want to learn as much about Linux as you can, you should consider taking a full course for beginners, provided by Eduonix. This course will introduce you to all features of Linux and provide you with all necessary materials to help you find out more about the peculiarities of how Linux works.

You should definitely choose this course if:

  • you want to learn the details about the Linux operating system;
  • you want to find out how to install it;
  • you want to understand how Linux cooperates with your hardware;
  • you want to learn how to operate the Linux command line.

2. PC World: A Linux Beginner’s Guide

A free resource for those who want to learn everything about Linux in one place. PC World specializes in various aspects of working with computer operating systems, and it provides its subscribers with the most accurate and up-to-date information. Here you can also learn more about the benefits of Linux and the latest news about this operating system.

This resource provides you with information on:

  • how to install Linux;
  • how to use the command line;
  • how to install additional software;
  • how to operate the Linux desktop environment.

3. Linux Training

A lot of people who work with computers are required to learn how to operate Linux in case the Windows operating system suddenly crashes. And what can be better than using an official resource to start your Linux training?

This resource provides online enrollment in Linux training, where you can get the most up-to-date information from an authentic source. “A year ago our IT department offered us a Linux training on the official website”, says Martin Gibson, a developer at Assignmenthelper.com.au. “We took this course because we needed to learn how to back up all our files to another system to provide our customers with maximum security, and this resource really taught us everything.”

So you should definitely use this resource if:

  • you want to receive firsthand information about the operating system;
  • you want to learn the peculiarities of how to run Linux on your computer;
  • you want to connect with other Linux users and share your experience with them.

4. The Linux Foundation: Training Videos

If you easily get bored from reading a lot of resources, this website is definitely for you. The Linux Foundation provides training videos, lectures and webinars, held by IT specialists, software developers and technical consultants.

All the training videos are subdivided into categories for:

  • Developers: working with Linux Kernel, handling Linux Device Drivers, Linux virtualization etc.;
  • System Administrators: developing virtual hosts on Linux, building a Firewall, analyzing Linux performance etc.;
  • Users: getting started using Linux, introduction to embedded Linux and so on.

5. LinuxInsider

Did you know that Microsoft was so impressed by the efficiency of Linux that it now lets users run Linux on its cloud computing platform? If you want to learn more about this operating system, LinuxInsider provides its subscribers with the latest news on Linux operating systems and gives information about the latest updates and Linux features.

On this resource, you will have the opportunity to:

  • participate in the Linux community;
  • learn about how to run Linux on various devices;
  • check out reviews;
  • participate in blog discussions and read the tech blog.

Wrapping up…

Linux offers a lot of benefits, including complete privacy, stable operation and even malware protection. It’s definitely worth trying, and learning how to use it will help you better understand how your computer works and what it needs to operate smoothly.

Lucy Benton is a digital marketing specialist and business consultant who helps people turn their dreams into a profitable business. She is currently writing for marketing and business resources. Lucy also has her own blog, Prowritingpartner.com, where you can check out her latest publications.



Spectre and Meltdown explained


I found this great article by Anton Gostev about Spectre and Meltdown, so I’m reposting it here:

By now, most of you have probably already heard of the biggest disaster in the history of IT – the Meltdown and Spectre security vulnerabilities, which affect all modern CPUs, from those in desktops and servers to the ones found in smartphones. Unfortunately, there’s much confusion about the level of threat we’re dealing with here, because some of the impacted vendors need reasons to explain the still-missing security patches. But even those who did release a patch avoid mentioning that it only partially addresses the threat. And there’s no good explanation of these vulnerabilities at the right level (not aimed at developers) – something that just about anyone working in IT could understand and use to draw their own conclusions. So, I decided to give it a shot and deliver just that.



First, some essential background. Both vulnerabilities leverage the “speculative execution” feature, which is central to the modern CPU architecture. Without it, processors would idle most of the time, just waiting to receive I/O results from various peripheral devices, which are all at least 10x slower than processors. For example, RAM – kind of the fastest thing out there in our mind – runs at frequencies comparable to the CPU, but all overclocking enthusiasts know that RAM I/O involves multiple stages, each taking multiple CPU cycles. And hard disks are at least a hundred times slower than RAM. So, instead of waiting for the real result of some IF clause to be calculated, the processor assumes the most probable result and continues execution according to that assumption. Then, many cycles later, when the actual result of said IF is known, if it was “guessed” right – then we’re already way ahead in the program code execution path, and didn’t just waste all those cycles waiting for the I/O operation to complete. However, if it appears that the assumption was incorrect – then the execution state of that “parallel universe” is simply discarded, and program execution is restarted from said IF clause (as if speculative execution did not exist). But, since those prediction algorithms are pretty smart and polished, more often than not the guesses are right, which adds a significant boost to execution performance for some software. Speculative execution is a feature that processors have had for two decades now, which is also why any CPU that is still able to run these days is affected.

Now, while the two vulnerabilities are distinctly different, they share one thing in common – they exploit the cornerstone of computer security, specifically process isolation. Basically, the security of all operating systems and software is completely dependent on the native ability of CPUs to ensure complete process isolation, in the sense that processes cannot access each other’s memory. How exactly is such isolation achieved? Instead of having direct physical RAM access, all processes operate in virtual address spaces, which are mapped to physical RAM in such a way that they do not overlap. These memory allocations are performed and controlled in hardware, in the so-called Memory Management Unit (MMU) of the CPU.

At this point, you already know enough to understand Meltdown. This vulnerability is basically a bug in the MMU logic, and is caused by skipping address checks during speculative execution (rumor has it there’s a source code comment saying this was done “not to break optimizations”). So, how can this vulnerability be exploited? Pretty easily, in fact. First, the malicious code should trick the processor into the speculative execution path, and from there perform an unrestricted read of another process’ memory. Simple as that. Now, you may rightfully wonder, wouldn’t the results obtained from such speculative execution be discarded completely as soon as the CPU finds out it “took a wrong turn”? You’re absolutely correct, they are in fact discarded… with one exception – they will remain in the CPU cache, which is a completely dumb thing that just caches everything the CPU accesses. And, while no process can read the content of the CPU cache directly, there’s a technique to “read” it implicitly, by doing legitimate RAM reads within your process and measuring the response times (anything stored in the CPU cache will obviously be served much faster). You may have already heard that browser vendors are currently busy releasing patches that make JavaScript timers more “coarse” – now you know why (but more on this later).

As far as the impact goes, Meltdown is limited to Intel and ARM processors only, with AMD CPUs unaffected. But for Intel, Meltdown is extremely nasty, because it is so easy to exploit – one of our enthusiasts compiled the exploit literally over a morning coffee, and confirmed it works on every single computer he had access to (in his case, most are Linux-based). And the possibilities Meltdown opens are truly terrifying – for example, how about obtaining an admin password as it is being typed in another process running on the same OS? Or accessing your precious bitcoin wallet? Of course, you’ll say that the exploit must first be delivered to the attacked computer and executed there – which is fair, but here’s the catch: JavaScript from some web site running in your browser will do just fine too, so the delivery part is the easiest for now. By the way, keep in mind that those 3rd party ads displayed on legitimate web sites often include JavaScript too – so it’s really a good idea to install an ad blocker now, if you haven’t already! And for those using Chrome, enabling the Site Isolation feature is also a good idea.

OK, so let’s switch to Spectre next. This vulnerability is known to affect all modern CPUs, albeit to a different extent. It is not based on a bug per se, but rather on a design peculiarity of the execution path prediction logic, which is implemented by the so-called Branch Prediction Unit (BPU). Essentially, what the BPU does is accumulate statistics to estimate the probability of IF clause results. For example, if a certain IF clause that compares some variable to zero returned FALSE 100 times in a row, you can predict with high probability that the clause will return FALSE when called for the 101st time, and speculatively move along the corresponding code execution branch even without having to load the actual variable. Makes perfect sense, right? However, the problem here is that while collecting these statistics, the BPU does NOT distinguish between different processes, for added “learning” effectiveness – which makes sense too, because computer programs share much in common (common algorithms, construct implementation best practices and so on). And this is exactly what the exploit is based on: this peculiarity allows the malicious code to basically “train” the BPU by running a construct that is identical to one in the attacked process hundreds of times, effectively enabling it to control speculative execution of the attacked process once it hits its own respective construct, making it dump the “good stuff” into the CPU cache. Pretty awesome find, right?

But here comes the major difference between Meltdown and Spectre, which significantly complicates the implementation of Spectre-based exploits. While Meltdown can “scan” the CPU cache directly (since the sought-after value was put there from within the scope of the process running the Meltdown exploit), in the case of Spectre it is the victim process itself that puts this value into the CPU cache. Thus, only the victim process itself is able to perform that timing-based CPU cache “scan”. Luckily for hackers, we live in an API-first world, where every decent app has an API you can call to make it do the things you need, again measuring how long the execution of each API call took. Getting the actual value requires deep analysis of the specific application, though, so this approach is only worth pursuing with open-source apps. But the “beauty” of Spectre is that, apparently, there are many ways to make the victim process leak its data to the CPU cache through speculative execution in a way that allows the attacking process to “pick it up”. Google engineers found and documented a few, but unfortunately many more are expected to exist. Who will find them first?

Of course, all of that only sounds easy at a conceptual level – implementations against real-world apps are extremely complex, and when I say “extremely” I really mean that. For example, Google engineers created a Spectre exploit POC that, running inside a KVM guest, can read host kernel memory at a rate of over 1500 bytes/second. However, before the attack can be performed, the exploit requires initialization that takes 30 minutes! So clearly, there’s a lot of math involved there. But if Google engineers could do that, hackers will be able to as well – because looking at how advanced some of the ransomware we saw last year was, one might wonder if it was written by folks to whom Google could not offer the salary or the position they wanted. It’s also worth mentioning here that a JavaScript-based POC already exists, making the browser a viable attack vector for Spectre.

Now, the most important part – what do we do about these vulnerabilities? Well, it would appear that Intel and Google disclosed the vulnerability to all major vendors in advance, so by now most have already released patches. By the way, we really owe a big “thank you” to all those dev and QC folks who were working hard on patches while we were celebrating – just imagine the amount of work and testing required here, when changes are made to the holy grail of the operating system. Anyway, after reading the above, I hope you agree that vulnerabilities do not get more critical than these two, so be sure to install those patches ASAP. And, aside from the most obvious stuff like your operating systems and hypervisors, be sure not to overlook any storage, network and other appliances – as they all run on some OS that also needs to be patched against these vulnerabilities. And don’t forget your smartphones! By the way, here’s one good community tracker for all security bulletins (Microsoft is not listed there, but they did push the corresponding emergency update to Windows Update back on January 3rd).
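If you want to check what your own (patched) Linux kernel reports about its exposure, a minimal check is sketched below. It assumes a kernel new enough to ship the sysfs reporting interface (4.15+, or a distribution kernel with the fixes backported); on older kernels the directory simply does not exist.

# Each file reports "Not affected", "Vulnerable" or a "Mitigation: ..." line.
grep . /sys/devices/system/cpu/vulnerabilities/*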

Having said that, there are a couple of important things you should keep in mind about those patches. First, they do come with a performance impact. Again, some folks will want you to think that the impact is negligible, but that’s only true for applications with low I/O activity; many enterprise apps will definitely take a big hit – at least, big enough to account for. For example, installing the patch resulted in an almost 20% performance drop in a PostgreSQL benchmark. And then there is the major cloud service that saw CPU usage double after installing the patch on one of its servers. This impact is caused by the patch adding significant overhead to so-called syscalls, which is what computer programs must use for any interactions with the outside world.

Last but not least, do know that while those patches fully address Meltdown, they only address a few currently known attack vectors that Spectre enables. Most security specialists agree that the Spectre vulnerability opens a whole slew of “opportunities” for hackers, and that a solid fix can only be delivered in CPU hardware. Which in turn probably means at least two years until the first such processor appears – and then a few more years until you replace the last impacted CPU. But until that happens, it sounds like we should all be looking forward to many fun years of jumping on yet another critical patch against some newly discovered Spectre-based attack. Happy New Year! The Chinese horoscope says 2018 will be the year of the Earth Dog – but my horoscope tells me it will be the year of the Air Gapped Backup.” © Veeam



Install MariaDB Galera Cluster on Ubuntu 16.04

Today I’m glad to publish a guest post about Galera, a piece of software that I’ve used in the past to manage a multi-master MySQL cluster:

MariaDB Galera Cluster is a multi-master cluster for MariaDB. MariaDB Galera Cluster currently supports only the InnoDB/XtraDB storage engine. It is mostly used for database redundancy as it consists of multiple master servers that are synchronized and connected to each other. In today’s tutorial we are going to learn how to install MariaDB Galera Cluster on a clean Ubuntu 16.04 VPS, on optimized MariaDB hosting.

Requirements:

Before we start there are a couple of requirements we need to fulfill in order to satisfy the minimum for MariaDB Galera Cluster:

  • Three nodes with Ubuntu 16.04 server installed and running
  • The nodes must have static IP addresses assigned in the same subnet
  • Sudo privileges

1. Upgrade the system

As usual, before we begin installing software we make sure our system is up to date; run the following commands to do that:

# sudo apt-get -y update

# sudo apt-get -y upgrade

2. Install MariaDB Galera Cluster

Now we are going to install MariaDB Galera Cluster. To do that we first install MariaDB 10.1, as MariaDB 10.1 is "Galera ready" and contains the MariaDB Galera Server package along with the wsrep patch. MariaDB 10.1 is not available in the Ubuntu 16.04 repositories by default, so we have to add the repository manually. Do this step for all three nodes.

Execute the following command to add the MariaDB repository key:

# sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

Then install the appropriate packages to run the add-apt-repository command:

# sudo apt-get -y install software-properties-common python-software-properties

Add the repository and then update the packages:

# sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://mirror.jaleco.com/mariadb/repo/10.1/ubuntu xenial main'

# sudo apt-get -y update

Install MariaDB and rsync:

# sudo apt-get -y install mariadb-server rsync

After the installation is complete, in order to improve the security of your MariaDB server run the mysql_secure_installation script:

# sudo mysql_secure_installation

Answer the questions like in the example below:

Enter current password for root (enter for none): press enter

Change the root password? [Y/n] n

Remove anonymous users? [Y/n] y

Disallow root login remotely? [Y/n] y

Remove test database and access to it? [Y/n] y

Reload privilege tables now? [Y/n] y

If you receive this error “ERROR 1524 (HY000): Plugin ‘unix_socket’ is not loaded” when pressing enter on the first question, do the following steps:

1. Hit CTRL+C to exit the script, then stop MariaDB and start it with skip-grant-tables:

# sudo systemctl stop mariadb
# sudo mysqld_safe --skip-grant-tables &

# sudo mysql -u root

2. Now that you’ve logged in to the MariaDB shell, execute the following commands:

MariaDB [(none)]> use mysql;

MariaDB [(none)]> update user set plugin="mysql_native_password";

MariaDB [(none)]> quit;

3. Kill existing MariaDB processes and start MariaDB:

# sudo kill -9 $(pgrep mysql)

# sudo systemctl start mariadb

4. Run the mysql_secure_installation script again:

# sudo mysql_secure_installation

3. Setup MariaDB Galera Cluster on the nodes

To set up MariaDB Galera Cluster we are going to use the nodes that we configured on the subnet 10.20.30.0/24; you should use your own subnet and have the nodes configured with static IP addresses. Do this step for all three nodes.

Open the following file with nano:

# sudo nano /etc/mysql/conf.d/galera-cluster.cnf

Then paste the following contents inside the file:

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://10.20.30.1,10.20.30.2,10.20.30.3"

# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="10.20.30.1"
wsrep_node_name="Galera_Node1"

Save the file when you are done making changes.

Note: The node IP addresses in the wsrep_cluster_address variable need to be changed to your own node IP addresses, listed in order from first to last. Change the IP address in wsrep_node_address to match the IP address of the node you are configuring, and change the wsrep_node_name accordingly.
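For example, on the second node (assuming it has the address 10.20.30.2 from the sample subnet used above) only the node-specific lines at the bottom of the file change, while the cluster-wide settings stay identical on all three nodes:

# Galera Node Configuration (node 2 -- the rest of the file is unchanged)
wsrep_node_address="10.20.30.2"
wsrep_node_name="Galera_Node2"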

4. Configure the firewall

In order to keep the nodes secure we are going to enable the Uncomplicated Firewall (UFW) on the nodes; do that by typing in the following command:

# sudo ufw enable

Then enable the ports we need in order for the nodes to be able to communicate with each other:

# sudo ufw allow 3306/tcp
# sudo ufw allow 4444/tcp
# sudo ufw allow 4567/tcp
# sudo ufw allow 4568/tcp
# sudo ufw allow 4567/udp

Check the status of the firewall:

# sudo ufw status

If the firewall is configured on all three nodes, you can proceed with the step below.

5. Start the MariaDB Galera Cluster

Now, we are finally ready to start the MariaDB Galera Cluster we configured in the previous steps. First stop MariaDB on all nodes:

# sudo systemctl stop mariadb

Then start the MariaDB Galera Cluster on the first node:

# sudo galera_new_cluster

Check whether the cluster is running by executing the following command:

# sudo mysql -u root -e "show status like 'wsrep_cluster_size'"

You should see something like the following output if the cluster is running:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+

Start the MariaDB service on the second node:

# sudo systemctl start mariadb

Your second node should automatically connect to the cluster, check if it’s connected to the cluster by running the following command:

# sudo mysql -u root -e "show status like 'wsrep_cluster_size'"

You should see something like the following output if it’s connected to the cluster:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+

Start the MariaDB service on the third node:

# sudo systemctl start mariadb

Then check if it’s connected to the cluster with the following command:

# sudo mysql -u root -e "show status like 'wsrep_cluster_size'"

You should see something like the following output if it’s connected to the cluster:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

6. Testing database replication on the nodes

In the final step we will test whether a database is successfully replicated across the cluster. Run the following command to create a test database and display all databases, so we can see if the test database was successfully created:

# sudo mysql -u root -e "create database test; show databases;"

The output should look like:

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+

Then on your second and third node run the following command to check if the test database was successfully replicated:

# sudo mysql -u root -e "show databases;"

The output should look like:

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+

If you can see the test database on all three nodes in the output then you have successfully set up a MariaDB Galera Cluster on three nodes.

Filesystem-Eating Bug Discovered in Linux 4.14


Recent news has reported that an urgent data corruption issue has destroyed file systems in Linux 4.14, and if you are using bcache to speed up your Linux 4.14 system you are being urged to stop immediately if you want your data to survive.


Linux Compatibility

Linux is an open-source technology, and hundreds of programmers have been involved in adding software that is completely open source and accessible to all. Anyone can download the source code and make changes to it; one of the main advantages of this is the wide range of options users have, along with the increased security that comes with it. There are several distributions available, including Debian, Ubuntu, and Mint. Security is exceptional, and many hackers have contributed to hardening Linux.

Linux is becoming more and more popular and with a little knowledge and a computer you can do so much. You can use Siri to control your home, you can set up a home surveillance system, control your air conditioning, and so much more. In fact, you can probably do pretty much anything using Linux if you give it a little thought. Linux is an operating system and is essentially a collection of basic instructions written in code that tell the different electronic parts on your computer what to do and how to work. As the software is open source, it means that anyone can use it to see how it works and if they want they can make changes to it.

GNU was initially intended to grow into a complete operating system of its own, but that never quite happened. The great thing about Linux is that it is also used without GNU in embedded systems, mobile phones, appliances, and more. There is so much that can be done with this open source tool and that is why it is so popular. Users want the freedom to explore, to change, to make things better, and by using Linux they get the chance to do this and become part of a massive community of coders and developers. Bugs are found very quickly and dealt with very quickly too.

The Linux operating system is a great alternative to Windows and is being recognized in so many different circles, including online casinos. There was a time when Linux users were excluded from online gaming, but they can now access a lot of the best online gambling and gaming sites from the big name brands, where they can play the best games. You can get more info on these games here. Ubuntu is the most widely used of the Linux operating systems, mainly due to its ease of use and because it is free. Any Ubuntu browser that supports flash graphics and animations will be able to access online games, and the only thing you need to install is Adobe Flash Player.

Filesystem-eating Bug Fix for Linux 4.14

The news that a filesystem-eating bug has been found in Linux 4.14 has spread like wildfire. Updates and discussions are available on all the major developer and coding sites around the world. The filesystem-eating bug was first reported by Pavel Goran, a developer who informed the community that a problem had struck bcache. He said that he noticed the problem after an upgrade of Gentoo from version 4.13 to 4.14. He explained that what he noticed was that reads from the bcache device were producing different data between two iterations.

He spent some time going over and analyzing the problem before going live with his findings and said, “this looks like a very serious bug that can corrupt or completely destroy the data on bcache devices.”

If you don’t know, bcache allows an SSD to be used as a read/write cache in front of another, slower drive, so that frequently accessed data is served from the faster media.
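If you are not sure whether a machine uses bcache at all, a quick check along these lines should tell you (the device names below are just examples):

# bcache block devices show up as /dev/bcache0, /dev/bcache1, ...
ls /dev/bcache* 2>/dev/null

# lsblk lists them too, together with their backing and caching devices
lsblk | grep -i bcache

# this sysfs directory only exists when the bcache module is loaded
ls /sys/fs/bcache 2>/dev/null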

After reading Goran’s report, others checked out the issue and confirmed the problem. The bug was then added to the Gentoo Bugs list so all users would be made aware of it. The cause of the problem was investigated and was later identified as being:

A new field was introduced in 74d46992e0d9, bi_partno, instead of using bdev->bd_contains and encoding the partition information in the bi_bdev field. __bio_clone_fast was changed to copy the disk information, but not the partition information. At minimum, this regressed bcache and caused data corruption.

A fix for the problem has been found and implemented, and it will appear in Linux 4.14.1. Kernel 4.14.2 stable is also out and it fixes the bcache corruption bug, although the fix has not yet reached every distribution repository. It will also be included in Linux 4.15, which is due to be released in the next month or two. Many users don’t know how to use bcache, and if they don’t use it they won’t have any problems with Linux 4.14.

Reddit has a very active thread where users are reporting on their experiences and how the fixes are working.


PaaS and continuous integration


Today I want to repost a great article first posted on the Sysadvent blog.

I think it’s a great post that shows how to integrate different pieces of software to achieve a modern continuous integration pipeline.

Original article by:
Written by: Paul Czarkowski (@pczarkowski)
Edited by: Dan Phrawzty (@phrawzty)

Docker and the ecosystem around it have done some great things for developers, but from an operational standpoint, it’s mostly just the same old issues with a fresh coat of paint. Real change happens when we change our perspective from Infrastructure (as a Service) to Platform (as a Service), and when the ultimate deployment artifact is a running application instead of a virtual machine.

Even Kubernetes still feels a lot like IaaS – just with containers instead of virtual machines. To be fair, there are already some platforms out there that shift the user experience towards the application (Cloud Foundry and Heroku come to mind), but many of them have a large operations burden, or are provided in a SaaS model only.

In the Docker ecosystem we are starting to see more of these types of platforms, the first of which was Dokku, which started as a single-machine Heroku replacement written in about 100 lines of Bash. Building on top of that work, other, richer systems like Deis and Flynn have emerged, as well as custom solutions built in-house, like Yelp’s PaaSta.




Actions speak louder than words, so I decided to document (and demonstrate) a platform built from the ground up (using Open Source projects) and then deploy an application to it via a Continuous Integration/Deployment (CI/CD) pipeline.

You could (and probably would) use a public cloud provider for some (or all) of this stack; however, I wanted to demonstrate that a system like this can be built and run internally, as not everybody is able to use the public cloud.

As I wrote this I discovered that while figuring out the right combination of tools to run was a fun process, the really interesting stuff was building the actual CI/CD pipeline to deploy and run the application itself. This means that while I’ll briefly describe the underlying infrastructure, I will not be providing a detailed installation guide.

INFRASTRUCTURE

While an IaaS is not strictly necessary here (I could run Deis straight on bare metal), it makes sense to use something like OpenStack as it provides the ability to request services via API and use tooling like Terraform. I installed OpenStack across a set of physical machines using Blue Box’s Ursula.

Next the PaaS itself. I have familiarity with Deis already and I really like its (Heroku-esque) user experience. I deployed a three node Deis cluster on OpenStack using the Terraform instructions here.

I also deployed an additional three CoreOS nodes using Terraform on which I ran Jenkins using the standard Jenkins Docker image.

Finally, there is a three-node Percona database cluster running on the CoreOS nodes, itself fronted by a load balancer, both of which use etcd for auto-discovery. Docker images are available for both the cluster and the load balancer.

GHOST

The application I chose to demo is the Ghost blogging platform. I chose it because it’s a fairly simple app with well-known backing service (MySQL). The source, including my Dockerfile and customizations, can be found in the paulczar/ci-demo GitHub repository.

The hostname and database credentials of the MySQL load balancer are passed into Ghost via environment variables (injected by Deis) to provide a suitable database backing service.

For development, I wanted to follow the GitHub Flow methodology as much as possible. My merge/deploy steps are a bit different, but the basic flow is the same. This allows me to use GitHub’s notification system to trigger Jenkins jobs when Pull Requests are created or merged.

I used the Deis CLI to create two applications: ghost from the code in the master branch, and stage-ghost from the code in the development branch. These are my production and staging environments, respectively.

Both the development and master branches are protected with GitHub settings that restrict changes from being pushed directly to the branch. Furthermore, any Pull Requests need to pass tests before they can be merged.

DEIS

Deploying applications with Deis is quite easy and very similar to deploying applications to Heroku. As long as your git repo has a Dockerfile (or supports being discovered by the cedar tooling), Deis will figure out what needs to be done to run your application.

Deploying an application with Deis is incredibly simple:

  1. First you use deis create to create an application (on success the Deis CLI will add a remote git endpoint).
  2. Then you run git push deis master which pushes your code and triggers Deis to build and deploy your application.
$ git clone https://github.com/deis/example-go.git
$ cd example-go
$ deis login http://deis.xxxxx.com
$ deis create helloworld 
Creating Application... ...
done, created helloworld
Git remote deis added
$ git push deis master

Counting objects: 39, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (38/38), done.
Writing objects: 100% (39/39), 5.17 KiB | 0 bytes/s, done.
Total 39 (delta 12), reused 0 (delta 0)

-----> Building Docker image
remote: Sending build context to Docker daemon 5.632 kB
<<<<<<<   SNIP   >>>>>>>
-----> Launching... 
       done, helloworld:v2 deployed to Deis
       http://helloworld.ci-demo.paulcz.net

JENKINS

After running the Jenkins Docker container I had to do a few things to prepare it:

  1. Run docker exec -ti jenkins bash to enter the container, install the Deis CLI tool and run deis login, which saves a session file so that I don’t have to log in on every job.
  2. Add the GitHub Pull Request Builder (GHPRB) plugin.
  3. Secure it with a password.
  4. Run docker commit to commit the state of the Jenkins container.

I also had to create the jobs to perform the actual work. The GHPRB plugin made this fairly simple and most of the actual jobs were variations of the same script:

#!/bin/bash

APP="ghost"
git checkout master

git remote add deis ssh://git@deis.ci-demo.paulcz.net:2222/${APP}.git
git push deis master | tee deis_deploy.txt

CONTINUOUS INTEGRATION / DEPLOYMENT

Local Development

Docker’s docker-compose is a great tool for quickly building development environments (combined with Docker Machine it can deploy locally, or to the cloud of your choice). I have placed a docker-compose.yml file in the git repo to launch a mysql container for the database, and a ghost container:

ghost:
  build: .
  ports:
    - 5000:5000
  volumes:
    - .:/ghost
  environment:
    URL: http://localhost:5000
    DB_USER: root
    DB_PASS: ghost
  links:
    - mysql
mysql:
  image: percona
  ports:
   - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: ghost
    MYSQL_DATABASE: ghost

I also included an aliases file with some useful aliases for common tasks:

alias dc="docker-compose"
alias npm="docker-compose run --rm --no-deps ghost npm install --production"
alias up="docker-compose up -d mysql && sleep 5 && docker-compose up -d --force-recreate ghost"
alias test="docker run -ti --entrypoint='sh' --rm test /app/test"
alias build="docker-compose build"

Running the development environment locally is as simple as cloning the repo and calling a few commands from the aliases file. The following examples show how I added s3 support for storing images:

$ git clone https://github.com/paulczar/ci-demo.git
$ cd ci-demo
$ . ./aliases
$ npm
> sqlite3@3.0.8 install /ghost/node_modules/sqlite3
> node-pre-gyp install --fallback-to-build
...
...
$ docker-compose run --rm --no-deps ghost npm install --save ghost-s3-storage
ghost-s3-storage@0.2.2 node_modules/ghost-s3-storage
├── when@3.7.4
└── aws-sdk@2.2.17 (xmlbuilder@0.4.2, xml2js@0.2.8, sax@0.5.3)
$ up

Docker Compose v1.5 allows variable substitution so I can pull AWS credentials from environment variables which means they don’t need to be saved to git and each dev can use their own bucket etc. This is done by simply adding these lines to the docker-compose.yml file in the environment section:

ghost:
  environment:
    S3_ACCESS_KEY_ID: ${S3_ACCESS_KEY_ID}
    S3_ACCESS_KEY: ${S3_ACCESS_KEY}

I then added the appropriate environment variables to my shell and ran up to spin up a local development environment of the application. Once it was running I was able to confirm that the plugin was working by uploading the following image to the s3 bucket via the Ghost image upload mechanism:

(screenshot: the test image uploaded through Ghost and served from the S3 bucket)
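For reference, "adding the appropriate environment variables" above amounts to nothing more than exporting the two values before calling the up alias (the values below are placeholders, not real credentials):

export S3_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export S3_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
up    # the alias defined earlier: starts mysql and (re)creates the ghost container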

Pull Request

All new work is done in feature branches. Pull Requests are made to the development branch of the git repo, which Jenkins watches using the GitHub Pull Request Builder (GHPRB) plugin. The development process looks a little something like this:

$ git checkout -b s3_for_images
Switched to a new branch 's3_for_images'

Here I added the s3 module and edited the appropriate sections of the Ghost code. Following the GitHub flow I then created a Pull Request for this new feature.

$ git add .
$ git commit -m 'use s3 to store images'
[s3_for_images 55e1b3d] use s3 to store images
 8 files changed, 170 insertions(+), 2 deletions(-)
 create mode 100644 content/storage/ghost-s3/index.js
$ git push origin s3_for_images 
Counting objects: 14, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (14/14), 46.57 KiB | 0 bytes/s, done.
Total 14 (delta 5), reused 0 (delta 0)
To git@github.com:paulczar/ci-demo.git
 * [new branch]      s3_for_images -> s3_for_images

Jenkins will be notified when a developer opens a new Pull Request against the development branch and will kick off tests. Jenkins will then create and deploy an ephemeral application in Deis named for the Pull Request ID (PR-11-ghost).


The ephemeral environment can be viewed at http://pr-xx-ghost.ci-demo.paulczar.net by anyone wishing to review the Pull Request. Subsequent updates to the PR will update the deployed application.

We can run some manual tests specific to the feature being developed (such as uploading photos) once the URL to the ephemeral application is live.
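The job that deploys the ephemeral application is just another variation of the deploy script shown earlier. A rough sketch is below; the app name is derived from the pull request ID exposed by the GHPRB plugin, and the ghprbPullId/ghprbActualCommit variable names as well as the error handling are assumptions here rather than the exact job used in the demo:

#!/bin/bash
# per-PR job sketch: create (or reuse) an app named after the pull request
# and push the PR commit to it
APP="pr-${ghprbPullId}-ghost"

deis create ${APP} || true     # ignore the error if the app already exists
git remote add deis ssh://git@deis.ci-demo.paulcz.net:2222/${APP}.git || true
git push -f deis ${ghprbActualCommit}:refs/heads/master | tee deis_deploy.txt

The matching clean-up job that runs when the pull request is merged or closed simply destroys the same app with the Deis CLI.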

Staging

Jenkins will see that a Pull Request is merged into the development branch and will perform two jobs:

  1. Delete the ephemeral environment for the Pull Request, as it is no longer needed.
  2. Create and deploy a new release of the contents of the development branch to the staging environment in Deis (http://stage-ghost.ci-demo.paulczar.net).



Originally when I started building this demo I had assumed that being able to perform actions on PR merges/closes would be simple, but I quickly discovered that none of the CI tools I could find supported performing actions on PR close. Thankfully I was able to find a useful blog post that described how to set up a custom job with a webhook that could process the GitHub payload.

Production

Promoting the build from staging to production is a two step process:

  1. The user who wishes to promote it creates a pull request from the development branch to the master branch. Jenkins will see this and kick off some final tests.


  2. Another user then has to merge that pull request which will fire off a Jenkins job to push the code to Deis which cuts a new release and deploys it to the production environment (http://ghost.ci-demo.paulczar.net).


CONCLUSION

Coming from an operations background, I thought that figuring out how to build and run a PaaS from the metal up would be a really interesting learning exercise. It was! What I didn’t expect to discover, however, was that actually running an application on that PaaS would be so compelling. Figuring out the development workflow and CI/CD pipeline was an eye-opener as well.

That said, the most interesting outcome of this exercise was increased empathy: the process of building and using this platform placed me directly in the shoes of the very developers I support. It further demonstrated that by changing the focus of the user experience to that person’s core competency (the operator running the platform, and the developer using the platform) we allow the developer to “own” their application in production without them needing to worry about VMs, firewall rules, config management code, etc.

I also (re-)learned that while many of us default to cloud services such as AWS, Heroku, and Travis CI, there are solid alternatives that can be run in-house. I was also somewhat surprised at how powerful (and simple) Jenkins can be (even if it is still painful to automate).

I am grateful that Sysadvent gave me a reason to perform this little experiment. I learned a lot, and I hope that this article passes on some of that knowledge and experience to others.


Why Smartphone Apps Will Help Home Automation Thrive


Guest post by Emma.

Home automation technology has struggled to gain serious traction because, for all its promised convenience, the current tangle of cords and communication standards still has consumers in knots. New "smart" systems have made huge strides recently, but not everyone has chosen to invest – yet. Totally "smart" homes still await the mainstream. To unlock this new market, technology companies are looking to a tool we’ve taken to heart already: the smart mobile phone.



Not everyone has purchased a full fleet of smart home devices, but the number of smartphone users in the world has reached nearly two billion. Emerging smartphone apps which both control and augment smart home devices and systems are becoming more popular, indicating a broad future for automation in our everyday lives. As everything from entertainment to heating and appliances becomes more interconnected, our smartphones act increasingly as portals to the wider world.

With the smartphone as the central control point, several apps lead the way in pushing the concept of “what’s next” for home automation technology.

SmartThings

The free SmartThings app from Samsung ties all your home automation devices together into one platform. You’ll be able to control your lights, air conditioner, coffeemaker and other appliances right from the palm of your hand. You can even set up certain functions to take place without your manual intervention. You’ll need a $300 SmartThings Hub to make it all work, but the system as a whole is compatible with hardware from many other manufacturers. You can download SmartThings for iOS, watchOS, Android or Windows Phone.

Nest

Nest’s app interfaces with the firm’s cameras, alarms and thermostats to give you a fine level of control over them. You can manage two separate homes from a single instance of the app, and it supports dozens of pieces of hardware in each home. You can set up as many as 10 user accounts to allow your family members to use the app. It’s available for Android devices, iPhones, iPads and Apple Watches, and you can also access it through any modern web browser.

Vivint Sky

As one of the most popular providers of integrated smarthome solutions, Vivint is clearly interested in making its systems as easy to use as possible. Vivint’s Sky App accomplishes this goal by allowing users to adjust the settings of their equipment with their cell phones. Homeowners can even view footage from any cameras that are installed, which helps set their minds at ease if they’re at work or traveling. There’s a downloadable Sky app for iOS and Android, but those with other operating systems can use the Vivint Sky web page instead.

Insteon

Insteon’s versatile hub costs just around $80, and you’ll be able to direct it using a free app for iOS, watchOS, Android and Windows Phone. What sets this app apart is the fact that it offers almost unparalleled power to configure and manipulate individual devices in your smarthome. Rotate your cameras around, set up lighting scenes for later use and customize any Insteon Wall Keypads that you have in your house to enhance their functionality.

Apple Home

The Apple Home app is a long-awaited improvement to the HomeKit interface that actually makes it convenient for end users. You can group your devices by room, set up scenes that consist of preset actions or settings and access each compatible product from within a single program. Apple Home’s biggest selling point is the fact that it works with products from more than a dozen vendors. Sorry Android fans – Apple Home is only for iOS and watchOS.

Wink Relay

The Wink Relay attaches to your wall in place of a light switch and acts as a control center for all your smart devices. Using the included app for iOS, Android and watchOS platforms, you can manage your automated systems and even program them to perform certain actions with IFTTT rules. The Wink is compatible with merchandise from some of the top brands in the industry, like Philips, Honeywell, Lutron and GE.

Connected home devices capable of delivering the versatility and responsiveness that’s expected of them are finally making their way towards the horizon. While most efforts have focused on Android and iOS, there are a few systems that employ Linux too. By harnessing the processing horsepower of sophisticated smartphone technology, these machines expand their own capabilities and allow for seamless interaction with each other and with the user.


Storing large binary files in git repositories


This is a small update (1 year later) of a great article by Ilmari Kontulainen, first posted on blog.deveo.com.

I’ll post the original article in blockquote and my notes in green.

Storing large binary files in Git repositories seems to be a bottleneck for many Git users. Because of the decentralized nature of Git, which means every developer has the full change history on his or her computer, changes in large binary files cause Git repositories to grow by the size of the file in question every time the file is changed and the change is committed. The growth directly affects the amount of data end users need to retrieve when they need to clone the repository. Storing a snapshot of a virtual machine image, changing its state and storing the new state to a Git repository would grow the repository size approximately with the size of the respective snapshots. If this is day-to-day operation in your team, it might be that you are already feeling the pain from overly swollen Git repositories.
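If you suspect this is already happening to one of your repositories, a quick way to get a feel for the problem is to look at the size of the object database itself, since that is what every fresh clone has to download:

# size of the object database, in human readable units
git count-objects -vH

# total size of the .git directory on disk
du -sh .git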

Luckily there are multiple 3rd party implementations that will try to solve the problem, many of them using a similar paradigm as a solution. In this blog post I will go through seven alternative approaches for handling large binary files in Git repositories, with their respective pros and cons. I will conclude the post with some personal thoughts on choosing an appropriate solution.



git-annex

git-annex allows managing files with git, without checking the file contents into git. While that may seem paradoxical, it is useful when dealing with files larger than git can currently easily handle, whether due to limitations in memory, time, or disk space.

Git-annex works by storing the contents of files being tracked by it in a separate location. What is stored in the repository is a symlink to the key under that separate location. In order to share the large binary files within a team, for example, the tracked files need to be stored to a different backend. At the time of writing (23rd of July 2015) the following backends were available: S3 (Amazon S3, and other compatible services), Amazon Glacier, bup, ddar, gcrypt, directory, rsync, webdav, tahoe, web, bittorrent and xmpp. The ability to store contents in a remote of your own devising via hooks is also supported.


Now (July 2016) there are also some new (and in my opinion interesting) backends: Box.com, Google Drive, Google Cloud Storage, Mega.co.nz, SkyDrive, OwnCloud, Flickr, IMAP, chef-vault, pCloud, ipfs, Ceph, Backblaze’s B2, Dropbox, OpenStack Swift / Rackspace Cloud Files / Memset Memstore, Microsoft OneDrive and Yandex Disk.

You can see the complete list, with links to all the backends, on this page.

Git-annex uses separate commands for checking out and committing files, which makes its learning curve a bit steeper than that of other alternatives that rely on filters. Git-annex has been written in Haskell, and the majority of it is licensed under the GPL, version 3 or higher. Because git-annex uses symlinks, Windows users are forced to use a special direct mode that makes usage more unintuitive.

The latest version of git-annex at the time of writing is 5.20150710, released on the 10th of July 2015, and the earliest article I found on their website was dated 2010. Both facts suggest that the project is quite mature.


Now the latest version is 6.20160613, released on 2016/06/13. From what I can see on their news page there are a lot of releases, around one every month or two, with bug fixes and new features, so the project seems to be well maintained.
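To give an idea of the workflow, here is a minimal sketch of day-to-day git-annex usage (the file name and the remote name are just examples):

git annex init "my laptop"                  # turn an existing git repo into an annex
git annex add bigfile.iso                   # content goes to the annex, a symlink is committed
git commit -m "add bigfile.iso"
git annex copy bigfile.iso --to myremote    # upload the content to a configured special remote
git annex get bigfile.iso                   # fetch the content in another clone
git annex drop bigfile.iso                  # free local space, the content stays on the remote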

Pros:

Cons:

  • Windows support in beta. See here.
  • Users need to learn separate commands for day-to-day work

Project home page: https://git-annex.branchable.com/

Git Large File Storage (Git LFS)

Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git.


The core Git LFS idea is that instead of writing large blobs to a Git repository, only a pointer file is written. The blobs are written to a separate server using the Git LFS HTTP API. The API endpoint can be configured per remote, which allows multiple Git LFS servers to be used. Git LFS requires a specific server implementation to communicate with. An open source reference server implementation, as well as at least one other server implementation, is available. The storage can be offloaded by the Git LFS server to cloud services such as S3, or to pretty much anything else if you implement the server yourself.

Git LFS uses a filter-based approach, meaning that you only need to specify the tracked files with one command and it handles the rest invisibly. The good side of this approach is its ease of use; however, there is currently a performance penalty because of how Git works internally. Git LFS is licensed under the MIT license, written in Go, and binaries are available for Mac, FreeBSD, Linux and Windows. The version of Git LFS is 0.5.2 at the time of writing, which suggests it is still at quite an early stage; however, at the time of writing there were 36 contributors to the project. As the version number is still below 1, changes to the API, for example, can be expected.
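
A minimal sketch of the filter-based workflow looks like this; the file pattern, file name and branch are only examples:

  # install the Git LFS filters once per machine
  $ git lfs install

  # tell Git LFS which files to track; this writes a filter rule to .gitattributes
  $ git lfs track "*.psd"
  $ git add .gitattributes

  # from here on, plain Git commands are enough
  $ git add design.psd
  $ git commit -m "Add design file"
  $ git push origin master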

Now the version is 1.2.1, so we can see that this project is well supported (mainly by GitHub) and that it is now more “production ready” than one year ago.

Pros:

  • GitHub behind it.
  • Supported by GitLab.
  • Ready binaries available for multiple operating systems.
  • Easy to use.
  • Transparent usage.

Cons:

  • Requires a dedicated Git LFS server to store the large files.
  • Performance penalty due to the filter-based approach.

Project home page: https://git-lfs.github.com/

git-bigfiles – Git for big files

The goals of git-bigfiles are pretty noble: making life bearable for people using Git on projects hosting very large files, and merging back as many changes as possible into upstream Git once they are of acceptable quality. Git-bigfiles is a fork of Git; however, the project seems to have been dead for some time. Git-bigfiles is developed using the same technology stack as Git and is licensed under the GNU General Public License version 2 (some parts of it are under different licenses, compatible with the GPLv2).

The project is still dead, so I'd suggest staying away from it… unless you want to become its new maintainer 🙂

Pros:

  • If the changes were backported, they would be supported by native Git operations.

Cons:

  • Project is dead.
  • Fork of Git, which might make it incompatible with upstream.
  • Only allows configuring which files are treated as large via a file-size threshold.

Project home page: http://caca.zoy.org/wiki/git-bigfiles

git-fat

git-fat works in a similar manner to Git LFS. Large files can be tracked using filters in the .gitattributes file, and the large files are stored on any remote that can be reached through rsync. Git-fat is licensed under the BSD 2-Clause license. It is developed in Python, which means more dependencies for Windows users to install; however, the installation itself is straightforward with pip. At the time of writing git-fat has 13 contributors and the latest commit was made on the 25th of March 2015.

The latest commit on GitHub for this project is still the one from the 25th of March 2015, and this is not a good sign at all.
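
For reference, here is a minimal sketch of a typical git-fat setup; the host, path and file pattern are hypothetical:

  # .gitfat in the repository root points to an rsync-reachable store
  $ cat .gitfat
  [rsync]
  remote = backup.example.com:/srv/git-fat-store

  # mark which files git-fat should handle
  $ echo '*.iso filter=fat -crlf' >> .gitattributes

  # initialise the filters, then push/pull the binary contents separately
  $ git fat init
  $ git add .gitattributes big-image.iso
  $ git commit -m "Track ISO images with git-fat"
  $ git fat push
  $ git fat pull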

Pros:

  • Transparent usage.

Cons:

  • Only remotes reachable through rsync are supported.
  • Development appears to have stalled (no commits since March 2015).

Project home page: https://github.com/jedbrown/git-fat

git-media

Licensed under the MIT license and supporting a similar workflow to the above-mentioned alternatives Git LFS and git-fat, git-media is probably the oldest of the solutions available. Git-media uses a similar filter approach, and it supports Amazon S3, a local filesystem path, SCP, Atmos and WebDAV as backends for storing large files. Git-media is written in Ruby, which makes installation on Windows not so straightforward. The project has 9 contributors on GitHub, but the latest activity was nearly a year ago at the time of writing.

There still seems to be only low activity on this project; the latest commit is now from Sep 27, 2015.

Pros:

  • Supports multiple backends
  • Transparent usage

Cons:

  • No longer actively developed; only low activity on the project.
  • Ambiguous commands (e.g. git update-index --really-refresh).
  • Not fully Windows compatible.

Project home page: https://github.com/alebedev/git-media

git-bigstore

Git-bigstore was initially implemented as an alternative to git-media. It works similarly to the others above, by storing a filter property in .gitattributes for certain types of files. It supports Amazon S3, Google Cloud Storage, or a Rackspace Cloud account as backends for storing binary files. Git-bigstore claims to improve stability when collaborating between multiple people. It is licensed under the Apache 2.0 license. As git-bigstore does not use symlinks, it should be more compatible with Windows. Git-bigstore is written in Python and requires Python 2.7+, which means Windows users might need an extra step during installation. The latest commit to the project's GitHub repository at the time of writing was made on April 20th, 2015, and there is one contributor in the project.

Last commit in June 2016, and there are now 3 contributors.

Pros:

  • Requires only Python 2.7+
  • Transparent

Cons:

  • Only cloud-based storage backends are supported at the moment.

Project home page: https://github.com/lionheart/git-bigstore

git-sym

Git-sym is the newest player in the field, offering an alternative to how large files are stored and linked in git-lfs, git-annex, git-fat and git-media. Instead of calculating checksums of the tracked large files, git-sym relies on URIs. As opposed to its rivals, which also store the checksum, git-sym only stores the symlinks in the Git repository. The benefits of git-sym are thus performance as well as the ability to symlink whole directories. Because of its nature, its main drawback is that it does not guarantee data integrity. Git-sym is driven by its own separate commands and requires Ruby, which makes it more tedious to install on Windows. The project has one contributor according to its project home page.

Last commit on Jun 20, 2015; the project seems dead now.

Pros:

  • Performance compared to solutions based on filters.
  • Support for multiple backends.

Cons:

  • No guarantee of data integrity.
  • Separate commands and a Ruby dependency.
  • The project appears to be dead, with a single contributor.

Project home page: https://github.com/cdunn2001/git-sym


Conclusion

There are multiple ways to handle large files in Git repositories, and many of them use nearly similar workflows and ways of handling the files. Some of the solutions listed are no longer actively developed, so if I were to choose a solution today, I would go with git-annex, as it has the largest community and support for various backends. If Windows support or transparency is a must-have requirement and you are okay with living with the performance penalty, I would go with Git LFS, as it will likely have long-term support because of its connection with GitHub.

How have you solved the problem of storing large files in git repositories? Which of the aforementioned solutions have you used in production?

One year later, I totally agree with the choice of Git LFS: it's well supported and backed by GitHub, and in one year it has received many improvements and features.

Useful links

If You’re Not Using Git LFS, You’re Already Behind!
Git Annex vs. Git LFS
Git LFS 1.2: Clone Faster


Are You Prepared for a Catastrophic Data Loss?


If you walked into the office this morning to find that your customer information had been compromised, or a disgruntled employee had wiped a database clean, would you be prepared? Have you set preventative measures in place to safeguard you against total loss? Do you have security features in place to help you retrieve lost data? Are you able to continue with business as usual or would a security breach such as this bring you to a standstill?

It’s a lot to think about, but according to USA Today, approximately 43% of businesses encountered a data breach at some level in the year 2014. With percentages like this, the likelihood of it happening to your business is high. So again, are you prepared? Below are a few signs to determine whether your data loss prevention plan is intact or if your company’s data is vulnerable:

1.  Do you have the proper software?

Most small businesses will assume that having the basic virus protection on their computers is enough to ward off impending threats. However, the truth is that experienced hackers and even internal employees can steal, delete, or damage sensitive data despite the basic virus protection.

If all you have is a simple software package to protect against viruses, you may want to think again. Investing in more comprehensive software such as data loss prevention (DLP) software can safeguard you against internal and external threats. Such software keeps track of suspicious behavior, blocks access when necessary, and reports it to key personnel so that the issue can be resolved before it becomes serious.

In the open source world you could test OpenDLP:

OpenDLP is a free and open source, agent- and agentless-based, centrally-managed, massively distributable data loss prevention tool released under the GPL. Given appropriate Windows, UNIX, MySQL, or MSSQL credentials, OpenDLP can simultaneously identify sensitive data at rest on hundreds or thousands of Microsoft Windows systems, UNIX systems, MySQL databases, or MSSQL databases from a centralized web application. OpenDLP has two components:

  • A web application to manage Windows agents and Windows/UNIX/database agentless scanners
  • A Microsoft Windows agent used to perform accelerated scans of up to thousands of systems simultaneously

2.  Is Your Sensitive Data Encrypted?

When you have sensitive data, such as company financials, consumer information and so on, it becomes important to add several layers of protection. If you’re simply saving documents to files that are stored on company computers, this isn’t enough protection. It leaves information susceptible to being stolen and used at a later date.

Encryption is necessary when working with sensitive data. Encrypted documents require a password to gain access. Without this password or encryption key, the document is effectively useless, since its contents cannot be deciphered.

Open source offers a lot of different solutions in this field; as a first step I'd suggest checking out the GnuPG project:

GnuPG is a complete and free implementation of the OpenPGP standard as defined by RFC4880 (also known as PGP). GnuPG allows you to encrypt and sign your data and communications; it features a versatile key management system as well as access modules for all kinds of public key directories. GnuPG, also known as GPG, is a command line tool with features for easy integration with other applications.

There are a lot of frontend applications that you can use on your PC; a long list is available on this page.
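
As a minimal sketch, encrypting a sensitive file from the command line could look like this; the recipient address and file names are only examples:

  # generate a key pair once (interactive prompts follow)
  $ gpg --gen-key

  # encrypt a spreadsheet so that only the given recipient can read it
  $ gpg --encrypt --recipient finance@example.com accounts-2016.ods

  # decrypt it again (the private key's passphrase is requested)
  $ gpg --decrypt accounts-2016.ods.gpg > accounts-2016.ods

  # or use symmetric encryption protected by a passphrase only
  $ gpg --symmetric accounts-2016.ods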

3.  Is All Information Backed Up?

When files are created and software is installed on your company server, is the information being backed up? All too often, businesses make the mistake of assuming that a saved document will always be there. The truth is, if the system were wiped out or a file accidentally deleted, there would be no getting it back.

All companies should back up their data. This way, if there is a security breach, you won't have to waste time and money trying to recreate the pertinent information. There are several ways a company can ensure its data is backed up. These include saving everything to a physical device (e.g. a USB flash drive), using the backup features built into Microsoft Windows, or storing all information in the cloud.

As a backup server for small to medium enterprises I'd suggest checking out Bacula:

Bacula is a set of Open Source, computer programs that permit you (or the system administrator) to manage backup, recovery, and verification of computer data across a network of computers of different kinds. Bacula is relatively easy to use and very efficient, while offering many advanced storage management features that make it easy to find and recover lost or damaged files. In technical terms, it is an Open Source, network based backup program.
According to Source Forge statistics (rank and downloads), Bacula is by far the most popular Open Source backup program.
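
To give an idea of what Bacula configuration looks like, here is a minimal sketch of a FileSet resource from the Director configuration; the names and paths are hypothetical:

  # fragment of bacula-dir.conf
  FileSet {
    Name = "Office Documents"
    Include {
      Options {
        signature = MD5
        compression = GZIP
      }
      File = /home
      File = /etc
    }
    Exclude {
      File = /tmp
      File = /proc
    }
  }

A FileSet like this is then referenced from a Job resource that defines when the backup runs and where it is stored.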

Or, for your personal desktop, you could check out these software tools.

4.  Do You (and Staff) Change Passwords Often?

Passwords are a great layer of protection for companies that utilize software and databases on a regular basis. Creating strong, hard-to-guess passwords is one way to ensure they're not compromised, but changing them from time to time is also advised. If your company passwords are simple to figure out and have been the same for the past five years, you're leaving company data vulnerable to a breach.

Instruct your staff to change their passwords at least once or twice a year. You should also remember to change the passwords and usernames of accounts belonging to former employees, or disable those accounts, to ensure that they cannot access the information and use it to their advantage.
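
On a Linux server this kind of policy can also be enforced from the shell; a minimal sketch, with hypothetical usernames:

  # make alice's password expire every 180 days
  $ sudo chage --maxdays 180 alice

  # force a password change at her next login
  $ sudo chage --lastday 0 alice

  # lock and expire the account of a former employee
  $ sudo usermod --lock bob
  $ sudo usermod --expiredate 1970-01-02 bob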

5.  Have Your Employees Been Properly Educated?

Do you have rules and regulations in place pertaining to data protection and security? If it's been a while since you've had a staff meeting or training on data protection and security, you could be at risk of a breach.

Your employees have access to important information that could easily be compromised (intentionally or unintentionally). In order to ensure that they're aware of potential breaches, know how to handle information and passwords, and know what to do if they notice suspicious behavior, you'll need to train them on a continual basis. Training annually, keeping policies and procedures in a common, accessible place, and having staff sign off on them is a surefire way to keep everyone on the same page.

It's a digital world we live in. While technologies and software make it easier for us to do business, they also open the doors to potential threats. If you answered 'no' to any of the above questions, you're not prepared for a possible security breach. Taking preventative measures is necessary, whether you own a brick-and-mortar shop or an e-commerce site. If you're not sure where your vulnerabilities lie, consult with an IT professional for a security audit to bring these risks to light.

 


Useful steps to Install Linux Mint on your PC


Guest Post by Oliver.

Linux Mint is one of the most powerful and elegant operating systems. The purpose of launching Linux Mint was to provide a modern and comfortable OS that is easy to use for every user. The best thing about Linux Mint is that it is both free and open source. It also ranks third in popularity, after Microsoft Windows and Apple Mac OS. Thanks to its simplicity and clean design, it is well worth trying. Many users prefer it because it works well on low-end machines or old hardware. The latest version of Linux Mint is 17.3, and one can easily find the download through a search engine. The process of installing Linux Mint on your Windows PC is quite simple. Let's see how we can install Linux Mint on our system.

Go ahead and get started.



Step 1: The first step we have to take is preparing the installation DVD.

  1. Take the Backup:

For this, the first thing you need to do is to take a backup of all the important data stored on your system. This is because, while installing Linux Mint alongside the existing OS, the probability of data loss is quite high; to avoid such loss, take a backup of your meaningful data. After taking the backup, download the Linux Mint ISO, a file which can be burned to a DVD. The latest version of Linux Mint is 17.3 “Rosa”, which is available on the Linux Mint website. Once you visit the page, you will see various options displayed on the screen, but the most widely used edition is “Cinnamon” (this is the basic and default desktop style for Linux Mint; there are also some other options on the page which are suitable for advanced users). You will have to decide which version you want to use, 32-bit or 64-bit, and which one is correct for your system. Refer to this guide, which explains how to determine what kind of hardware you have in Windows, and to the guide for Ubuntu users; a quick command-line check is also sketched after the Key Points below.

As the download is quite big, the speed of your Internet connection should be reasonably high.

  2. Copy the ISO File to DVD:

After selecting the version, the next step to be taken is to copy the ISO File to DVD:

For copying the ISO file to a DVD, an image burning program is required. One of the most famous programs is ImgBurn, easily available at download 1, download 2, etc. Once you have it installed, you can burn the ISO to the disc.

Key Points: Linux Mint 17.3 “Rosa” is the latest version available. The most important thing while downloading is to make sure your internet connection is working well. The most widely used edition, “Cinnamon”, comes in two versions, 32-bit and 64-bit; choose the one that suits your system.
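
If you already have a Linux system at hand, a quick way to check whether the hardware is 64-bit capable is the following sketch (the output shown is what you would typically see on a 64-bit machine):

  # architecture the current kernel is running on
  $ uname -m
  x86_64

  # CPU capabilities, including the supported op-modes
  $ lscpu | grep -i 'op-mode'
  CPU op-mode(s):        32-bit, 64-bit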

Step 2: The second step is to install Linux Mint.

To begin, check the basic needs for the proper installation. The basic needs to be checked are:

  1. Check the free space: First of all, you need to check whether you have sufficient space for the installation. The minimum requirement is at least 3.5 GB of free disk space.

  2. Check the Internet connection: make sure you have an active internet connection.

  3. Check the power socket: if you are using a laptop, ensure that it is connected to the power cable for the duration of the installation.

After checking the basic needs, we have to set the boot priority, i.e., to run the Linux Mint disc you have to boot from the DVD drive instead of the hard drive. Once the system has booted from the disc, we can begin the installation process: install Linux and adjust the settings as required. You can also select the installation type; here you get the two options mentioned below:

  1. Erase the disk and install Linux Mint: this option deletes all the data of your existing OS before installing Linux Mint. If you want only Linux Mint as the OS on your system, opt for this option; it will delete all the data along with the existing OS.

  2. Other option: this option will create free space for installing the Linux Mint partition, allowing Linux Mint to live alongside the existing OS. You can also choose the size of the Linux partition after selecting this option.

After selecting one of the options mentioned above, you have to select the location on the hard drive where you want to install it.

Remember!

  • The minimum requirement for the Linux Mint partition is 6 GB of space, and the swap partition should be at least 1.5 times the amount of installed RAM (for example, a machine with 4 GB of RAM should get a swap partition of at least 6 GB); a quick way to check your RAM and free disk space is shown right after this list.

  • If you opt for the first option, the complete selected disk will be deleted during the installation process.
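
To size the partitions, you can check the installed RAM and the available disk space from a live session or any existing Linux install; a minimal sketch:

  # installed RAM in human-readable form (used for the 1.5x swap rule)
  $ free -h

  # free space on each mounted filesystem
  $ df -h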

During the installation of Linux Mint you have to create a user, filling in all the fields asked for. These details become your admin credentials, which may be requested whenever you make changes to your system. The username and machine name are suggested automatically as you type in your name.

Note: The “Machine Name” is the name which your computer will display to the other computers on the network.

Now, after all these steps, you have to wait for a while while Linux copies all the files. Once this completes, the rest of the installation runs and the hardware is configured. This will take some time, especially on old systems.

Now, press the “Restart” Button.

Key Pointers:

  1. Check all the basic requirements before installing Linux Mint.

  2. Check whether you have a good internet connection, as the installation takes time. You can also select the type of installation you require. Also, remember the basic criteria that have to be followed for a proper installation.

Step 3: The last and final step is to configure Linux Mint.


The most important thing when configuring Linux Mint is booting into it. Depending on the installation type you chose, you will either be presented with a list of installed operating systems to choose from, or your system will boot directly into Mint. You can then log in to your account to reach the desktop.

After that, you can customize your Linux OS by changing the wallpaper and installing additional programs, most of which are free, though a few are paid.

Booting into Linux Mint is the most important task. After configuration, you can personalize your screen by applying beautiful wallpapers and enjoy the applications supported by Linux Mint. Game lovers can easily play games on Mint as well.

Enjoy using the latest version of Linux Mint!

Author Bio:

Data recovery expert at Stellar Data Recovery, recovering data since 1993. Playing around with hard drives and Linux OS issues and suggesting fixes are among the activities I love most. I share my knowledge and expertise over different media channels from time to time, or as soon as I find a new one.

