
Automating Amazon Elastic Container (ECR) container builds using Bitbucket Pipelines


This post was written by Bitbucket user Ayush Sharma. 


Bitbucket Pipelines has fascinated me for several weeks now. I've already explored using it for Serverless deployments, and I recently spent some time exploring it for container deployments.

In this post, we continue our exploration further. The plan is to build a container and push it to a container registry, all from within Bitbucket Pipelines.

The method works for any container registry that understands docker commands, but today we use AWS Elastic Container Registry as our target.

What is AWS Elastic Container Registry?

Amazon Elastic Container Registry product use-case.

AWS Elastic Container Registry, or ECR, is a fully-managed container registry service provided by AWS. Think Docker Hub on the AWS platform. It integrates well with existing AWS services, such as ECS (Elastic Container Service) and IAM (Identity and Access Management), to provide a secure and straightforward way to manage and deploy container images in your AWS environment.

A quick overview of ECR's features:

  1. Container images are stored in S3, encrypted at rest, and transferred to and from ECR over HTTPS.
  2. Supports Docker Image Manifest V2 and OCI images.
  3. Existing docker command-line tools work with ECR.
  4. It supports expiring unused images via lifecycle policies (an example policy is sketched after this list).
  5. Supports resource tags, making governance and cost analysis easier.
  6. Repository tags can be mutable (tags are overwritable) or immutable (tags are not overwritable).
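
As an illustration of item 4, here is a small lifecycle policy sketch that expires untagged images after 14 days. The JSON structure follows ECR's lifecycle policy format; the 14-day threshold is just an arbitrary example:

  {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images older than 14 days",
            "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 14
            },
            "action": {
                "type": "expire"
            }
        }
    ]
}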

Goal: Build a Docker image and push it to ECR using Bitbucket Pipelines

To build and push our Docker image to ECR, we're going to need the following:

  1. A Dockerfile for building the image.
  2. An ECR repository for our Docker images.
  3. An IAM user with a policy to push our image to ECR.
  4. A Bitbucket Pipeline to run all the above steps.

So let's get started.

Step 1: Creating a Docker image

For this exercise, we're going to be deploying a simple Apache web server container.

Create a Dockerfile and add the following contents:

  FROM ubuntu:18.04

# Install dependencies
RUN apt-get update && \
    apt-get -y install apache2

# Write the hello world message
RUN echo 'Hello World!' > /var/www/html/index.html

# Configure apache
RUN echo '#!/bin/bash' > /root/run_apache.sh
RUN echo '. /etc/apache2/envvars' >> /root/run_apache.sh && \
    echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \
    echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \
    echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \
    echo 'localhost' > /etc/hostname && \
    chmod 755 /root/run_apache.sh

EXPOSE 80
ENTRYPOINT ["/root/run_apache.sh"]

Let's build the above Dockerfile using:

  docker build -t my-apache-image:my-tag . --network host

A successful build log looks like this (some lines removed):

  ...
Step 3/7 : RUN echo 'Hello World!' > /var/www/html/index.html
 ---> Running in 908ad0bee81a
Removing intermediate container 908ad0bee81a
 ---> 30b2e3dcd394
Step 4/7 : RUN echo '#!/bin/bash' > /root/run_apache.sh
 ---> Running in aec34d2fe7a4
Removing intermediate container aec34d2fe7a4
 ---> ddf05a9b474f
Step 5/7 : RUN echo '. /etc/apache2/envvars' >> /root/run_apache.sh && echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && echo 'localhost' > /etc/hostname && chmod 755 /root/run_apache.sh
 ---> Running in b6a7069cee6d
Removing intermediate container b6a7069cee6d
 ---> 13eaea68825d
Step 6/7 : EXPOSE 80
 ---> Running in 1f5ebfc89616
Removing intermediate container 1f5ebfc89616
 ---> 541cb3a1728f
Step 7/7 : ENTRYPOINT ["/root/run_apache.sh"]
 ---> Running in b437bf63d423
Removing intermediate container b437bf63d423
 ---> c80bea22e854
Successfully built c80bea22e854
Successfully tagged my-apache-image:my-tag

To test the image, run the above container using:

   docker run -p 80:80 my-apache-image:my-tag

Let's check whether Apache is working:

  curl http://localhost
Hello World!

Step 2: Creating an ECR repository

With our Dockerfile ready and tested, we're ready to create our ECR repository.

Head over to AWS ECR and create a new repo. The process is pretty simple: pick a repo name and select the tag immutability preference. I'm going to name my repo ayush-sharma-testing.

Creating a new Amazon Elastic Container Registry named ayush-sharma-testing.

For tag immutability, we have two options: mutable or immutable. Mutable tags are overwritable by future builds, but immutable tags are not. For example, a mutable repo allows re-deploying tags like release-v1.0.0 or latest, but an immutable repo will throw an error when doing so. For this exercise, I'll go with mutable tags.
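
If you prefer the command line, the same repository can be created with the AWS CLI; the repo name and region below are simply the ones used in this example:

  aws ecr create-repository \
    --repository-name ayush-sharma-testing \
    --image-tag-mutability MUTABLE \
    --region ap-southeast-1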

Step 3: Creating an IAM user with an ECR policy

Our Bitbucket repo needs AWS IAM user credentials to push the images to ECR.

For IAM permissions, we're going to pick the AmazonEC2ContainerRegistryPowerUser managed policy. This will give our Pipeline the basic access it needs to push images to the repository.

The policy document is as follows:

  {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:DescribeImages",
                "ecr:BatchGetImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

You can refer to the steps we used in the Bitbucket Pipes tutorial to create an IAM user.

For the IAM user created above, generate an access-ID/secret-key pair for the user. Add the credentials to the Bitbucket repository variables. Use the names AWS_KEY for the key and AWS_SECRET for the secret. Remember to obscure the values when you save them!

It's essential to follow the least-privilege principle while creating this IAM user. Keep the policy as tight as possible, since we want to re-use it in many deployments. Consider limiting the region of the ECR in the Resource section.
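
For example, the repository-level actions can be scoped to a single repository ARN instead of "*" (the account ID below is a placeholder). Note that ecr:GetAuthorizationToken does not support resource-level permissions, so it still needs "Resource": "*" in its own statement:

  {
    "Effect": "Allow",
    "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
    ],
    "Resource": "arn:aws:ecr:ap-southeast-1:123456789012:repository/ayush-sharma-testing"
}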

Step 4: Create our Pipelines file

Now create a bitbucket-pipelines.yml file and add the following:

  image: python:3.7.4-alpine3.10

pipelines:
  tags:
    ecr-release-*:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - pip3 install awscli
            - IMAGE="/ayush-sharma-testing"
            - TAG=${BITBUCKET_BRANCH:-$BITBUCKET_TAG}
            - aws configure set aws_access_key_id "${AWS_KEY}"
            - aws configure set aws_secret_access_key "${AWS_SECRET}"
            - eval $(aws ecr get-login --no-include-email --region ap-southeast-1 | sed 's;https://;;g')
            - docker build -t $IMAGE:$TAG .
            - docker push $IMAGE:$TAG

There are a few things going on in the pipelines file above:

  1. We?re using the python:3.7.4-alpine3.10 Docker image in our pipeline. This Alpine-based image loads quickly and has pip3 already installed.
  2. ecr-release-* is our tag regex, so when we create a Bitbucket tag with this pattern, our Pipeline executes for that tag.
  3. services: docker enables Docker commands within Pipelines.
  4. caches: pip step caches all the pip dependencies for later use.
  5. IMAGE is the URI of our ECR repo on AWS. Replace <your-ecr-registry-uri> with your repo URI, which you can get from the ECR console.
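
A note on the login step: the pipeline installs the AWS CLI with pip, which gives version 1, where aws ecr get-login still works. In AWS CLI version 2 that subcommand has been removed in favour of get-login-password, so the equivalent script line would look roughly like this, with the registry URI as a placeholder:

  aws ecr get-login-password --region ap-southeast-1 | docker login --username AWS --password-stdin <your-ecr-registry-uri>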

Step 5: Executing our deployment

With everything set, we?re now ready to test our deployment. To do this, commit and push the Dockerfile and bitbucket-pipelines.yml files we created above, and create a tag in the format ecr-release-*.

In my example Bitbucket repo, when I create a tag ecr-release-0.1.0, my Pipelines log looks like this:

A successful Bitbucket Pipelines deployment pushing a Docker image to AWS ECR.

Heading over to my ECR repo, I can see my new image tagged with my Bitbucket release tag:

A Docker image successfully pushed to AWS ECR from Bitbucket Pipelines.

ECR considerations for production use

ECR treats image tags in two different ways: mutable and immutable. Mutable tags are overwritable, which allows creating the latest tag repeatedly, pointing it to the newest image. However, this also means that older tags, such as ecr-release-0.1.0, can be overwritten by re-running those older Pipelines. Immutable ECR repos ensure that tags, once created, cannot be modified, but this means that the technique of tagging the newest image with the latest tag no longer works. This trade-off is essential to consider and plan for before deploying ECR in production.

Additionally, like other AWS services, ECR is available in multiple AWS regions. Since the purpose of a repository is to be a single source of truth for all images, having ECR repositories in multiple regions should be carefully considered. AWS currently does not support automatic inter-region repo mirroring for disaster recovery. So have a plan for recovering ECR images during outages.

Wrapping it up

With the above Pipeline ready and deployed, we can use other Bitbucket features to improve it. Features like merge checks, branch permissions, and deployment targets can make deployments smoother. We can also tighten the IAM permissions to ensure it has access to only the resources it needs.

A custom Bitbucket Pipe can also abstract away much of the boilerplate code. Using Pipes, we can apply standards and best practices across all our ECR deployments.

I hope you enjoyed this tutorial. Thanks, and happy coding 🙂

Author bio: Ayush Sharma is a software engineer specializing in infrastructure and automation. In his free time he enjoys hot tea and a good book.


This post was originally posted in Ayush Sharma's Notes.

Love sharing your technical expertise? Learn more about the Bitbucket writing program.

The post Automating Amazon Elastic Container (ECR) container builds using Bitbucket Pipelines appeared first on Bitbucket.

General practices in protecting the privacy of your data

The ability of an individual or group to seclude themselves, or information about themselves, and thereby express themselves selectively, is called privacy. With the advancement of the digital age, personal information has become more vulnerable and data security has become a growing concern, bringing with it the need to limit what information is made public. Information privacy is considered an important aspect of information sharing and can be applied in numerous ways, including encryption, authentication, and data masking, each attempting to ensure that information is available only to those with authorized access. These protective measures are geared toward preventing data mining and the unauthorized use of personal information, which are illegal in many parts of the world.

Here, we will discuss some measures and guidelines you can follow while using Linux to keep information restricted or private.

Command Line Methods:

Using shred command

Normally, when you delete a file, that portion of the disk is marked as ready for another file to be written to it, but the data is still there. If a third party were to gain physical access to your disk, they could use advanced techniques to access the data you thought you had deleted. shred is a program that overwrites your files in a way that makes them very difficult for a third party to recover. It accomplishes this by repeatedly overwriting the data you want to destroy, as many times as you specify, with random data.

If you just want to overwrite a file, use

$ shred <filename>

By default, shred overwrites a file 3 times. However, you can change this number (say 10) using the -n option.

$ shred -n 10 <filename>

If you want to delete the file as well, use the -u option

$ shred -u <filename>

If you only need to overwrite a set number of bytes (say 10) of the file, you can use the -s option.

$ shred -s 10 <filename>
$ cat a
012345678901234567890123456789
$ shred -s 10 a
$ cat a
�ಽ�V#n�C01234567890123456789

To show verbose information about the shredding progress, use the -v option.

$ shred -v a
shred: a: pass 1/3 (random)...
shred: a: pass 2/3 (random)...
shred: a: pass 3/3 (random)...

To add a final overwrite with zeros to hide the shredding, use the -z option. cat will then show an apparently empty file, but the file will still take up space on disk.

$ cat a
012345678901234567890123456789
$ shred -z a
$ cat a
$ ll a
-rw-rw-r-- 1 user user 1048576 Aug 30 21:33 a

Linux Bash Shell History

We can hide a command executed in the terminal from the bash history by adding a space before the command. However, such commands may still be captured unless the HISTCONTROL variable is configured. HISTCONTROL controls how bash stores command history. There are two possible flags: ignorespace and ignoredups. The ignorespace flag tells bash to ignore commands that start with spaces. The other flag, ignoredups, tells bash to ignore duplicates. You can concatenate and separate the values with a colon, ignorespace:ignoredups, if you wish to specify both values, or you can just specify ignoreboth. In the example below, the touch command is prefixed with a space, so it never appears in the history.

$ history
1 whoami
2 pwd
3 history

$ HISTCONTROL=ignorespace
$  touch test.txt

$ history
1 whoami
2 pwd
3 history
4 HISTCONTROL=ignorespace
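
To make this behaviour persist across sessions, append the setting to your shell startup file (assuming bash and ~/.bashrc):

  $ echo 'export HISTCONTROL=ignoreboth' >> ~/.bashrc
$ source ~/.bashrc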

To clear the history, you can use the -c option with history command.

$ history -c

This will clear all the commands in the current session. We can clear an individual line using the -d option, and wipe both the session history and the history file using -cw.

  $ history -d <line_number>
$ history -cw

Encrypting File in CLI

A file can be encrypted and decrypted from the command line using openssl.

The syntax for encrypting a file using an algorithm is as below. You specify the input filename with the -in option and the required output filename with the -out option. Furthermore, you can securely remove the original file using the shred command as we discussed above. To see the available algorithms, you can press the Tab key twice after typing openssl.

$ openssl aes-128-cbc -in testin -out testout
enter aes-128-cbc encryption password:
Verifying - enter aes-128-cbc encryption password:

Do remember the password; you will need it to decrypt the file. Use the -d option to decrypt.

$ openssl aes-128-cbc -d -in testout -out testresult
enter aes-128-cbc decryption password:
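
If your OpenSSL build is 1.1.1 or later, it is worth strengthening the key derivation as well. The following variant is a sketch using the -salt, -pbkdf2, and -iter options available in those releases:

  $ openssl aes-128-cbc -salt -pbkdf2 -iter 100000 -in testin -out testout
$ openssl aes-128-cbc -d -salt -pbkdf2 -iter 100000 -in testout -out testresult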

Encrypting in VI Editor

We have dealt with encrypting a file from the command line. We can also encrypt a file using the Vi/Vim editor. For this, type :X in command-line mode and enter a key when prompted. Then save and quit using :wq.

To see the contents of the file, you will need to open it using the vi editor and enter the secret key.

By default, Vim uses the zip algorithm to encrypt. You can change it using one of the options below.

:setlocal cm=zip

:setlocal cm=blowfish

:setlocal cm=blowfish2

To show the encryption method for the current file, use

:setlocal cm?

To decrypt the file permanently, set an empty key and then save the file:

:set key=
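
Putting the pieces together, a typical session might look like this (secret.txt is just an example filename; the prompts are Vim's own):

  $ vim secret.txt
:setlocal cm=blowfish2
:X
Enter encryption key: ******
Enter same key again: ******
:wq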

Browser-related Methods:

Choosing a browser

Chrome is the most used web browser. Chrome is owned by Google and acts as a gateway between us and the Internet. A large amount of our data is collected by Google through Chrome, and this is used for ads. Its entire business model is based on using that data to generate revenue through ads. Privacy is probably Chrome's biggest weakness as a browser. Although it's possible to delete the data Google has on you, it's difficult to trust how effective this is. That's because the company has been involved in various privacy scandals, such as cooperating with the NSA's PRISM program and continuing to collect location data despite users turning off location services.

There are plenty of Google services enabled by default in Chrome that collect tons of data on users, such as URL prediction and search suggestions. The privacy policy is also needlessly complicated and hard to read, making it difficult for most users to track exactly what information Chrome collects about them.

Mozilla Firefox is open source and is not dominated by a single company. Firefox is great with privacy. The company is a nonprofit and doesn't derive any revenue from ads, which means it's much easier to trust what Firefox says about protecting your data. Mozilla's privacy policy clearly lays out what information it collects and what it is or isn't used for, as well as the fact that it never sells or gives your data to third parties for any reason. The browser also has excellent tracking protection controls, which are very flexible. You can individually block trackers, cookies, cryptominers, and fingerprinters, letting you pick and choose exactly what you want to allow.

The best browser used for anonymity is the Tor browser. Tor directs Internet traffic through a free, worldwide, volunteer overlay network consisting of more than seven thousand relays to conceal a user’s location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult to trace Internet activity to the user. This includes visits to websites, online posts, instant messages, and other communication forms. Tor’s intended use is to protect the personal privacy of its users, as well as their freedom and ability to conduct confidential communication by keeping their Internet activities from being monitored.

The advantages include these:

  • Individual data packets bounce through multiple nodes, making surveillance impractical. IP addresses are also impossible to track or trace.
  • Because it’s open-source software, it’s difficult to hide malware in Tor. There’s protection against malicious code, but few criminals bother targeting this niche browser.
  • Downloading Tor costs nothing, and it works on PCs, Macs and Linux computers. There’s no intrusive advertising or cookies, either.
  • Any site with a .onion suffix (most of the deep web) is visible through Tor, but not through other mainstream web browsers.
  • The absurdly-named DuckDuckGo is to search engines what Tor is to web browsers – the most secure and anonymous choice.

Tor has some disadvantages also:

  • Tor takes ages to load its homepage. That will shock regular users of the streamlined Chrome browser.
  • Because of its layer-like data distribution, Tor runs extremely slowly. You’d struggle to watch streaming media content, even across fiber broadband.
  • Because it’s effectively anonymous, Tor doesn’t bother encrypting data. A separate VPN is required for encryption, further slowing average transfer times.
  • With its grey banners and retro fonts, Tor resembles the defunct 1990s Netscape Navigator browser rather than modern-day rivals like Safari or Firefox.
  • Information is delivered anonymously, but the browser software contains vulnerabilities, especially when viewing HTTP sites rather than encrypted HTTPS ones.

Apart from these, there are a few more browsers that have improved privacy protection, including Epic, Brave, and Safari. Most browsers let you choose what data is collected from their preferences, but opting out may not completely stop the collection.

Choosing a Search Engine

Using a secure search engine is becoming more common and popular as privacy concerns and public awareness of these problems grow. Some of the best ones with improved privacy features are listed below.

Search Encrypt uses local encryption to secure your searches. It combines AES-256 encryption with SSL encryption. Search Encrypt then retrieves your search results from its network of search partners. After you're done searching, your search terms expire, so they remain private even if someone else has access to your computer. Features include privacy-friendly news, videos, and maps that can be viewed right on the search interface; search terms and history that expire when the session is done; and the elimination of pre-roll ads when viewing videos.

DuckDuckGo is probably the most well-known alternative search engine. Searches are sourced mostly from Yahoo and brought to users via a secure search interface. Users can directly search other sites, like Amazon, Wikipedia, or YouTube, by starting their query with an exclamation mark, all without DuckDuckGo collecting cookies and other user data from you.

StartPage uses results from Google, which is a good thing if you prefer Google's results without the tracking. Ixquick, an independent search engine that uses its own results, developed StartPage to include results from Google. Its features include a proxy service, URL generator, and HTTPS support. The URL generator is a unique feature that eliminates the need for cookies: it remembers your settings in a privacy-friendly way. StartPage conducts searches via a proxy server and doesn't record IP addresses, location, or search terms.

Gibiru sources its search results from a modified Google algorithm. It provides reliable search results without all the tracking that Google does today. It can be used with a VPN and employs HTTPS 256-bit encryption. There are no cookies or IP address tracking, and it follows a strict no-log policy.

Other secure search engines include Privatelee, Swisscows, Disconnect Search, WolframAlpha, Yippy, and Qwant.


The post General practices in protecting the privacy of your data appeared first on SupportSages – Your IT Infrastructure partner !!.

Keep your dependencies up-to-date with Snyk auto upgrade for Bitbucket Cloud


This article was written by Sarah Conway from Snyk, a company that helps organizations find and fix vulnerabilities in open source dependencies and container images.


Keeping your dependencies up to date has a lot of value: it solves bugs, supports new features, and fixes security vulnerabilities. Ideally, updating libraries should be an easy and automated process, one that ensures no code breaks or new vulnerabilities are introduced and, most importantly, one that happens natively as an integral part of the development process.

Improve project health and eliminate potential vulnerabilities

This is exactly what Snyk’s Auto Upgrades allows you to do, directly from Bitbucket Cloud. With this new functionality, you are able to automatically upgrade your dependencies, improve overall project health and avoid new vulnerabilities or code breaks.

Snyk automatically creates pull requests to update your outdated dependencies. Currently, npm and Maven Central packages are supported, with other languages to follow. Every PR lists any vulnerabilities remediated as part of the upgrade, and will not introduce new vulnerabilities. See Snyk's blog post for more information.


Find and fix vulnerabilities using Snyk for Bitbucket

This capability is part of Snyk’s native solution for Bitbucket Cloud, which automates scanning and fixing of open source libraries. Using Snyk for Bitbucket Cloud allows you to scan every new PR and prevent a merge when needed, open an automated fix PR for vulnerabilities, monitor the repository and much more.

We’re always excited to see integrations that work seamlessly with Bitbucket evolve to help teams develop better software faster.

Get started today, for free! 

If your team is using Bitbucket Cloud, enable integration between Bitbucket and Snyk to start managing your vulnerabilities. Check out Snyk’s official documentation. Need help? Reach us or find answers to many common questions here.

Want to try Snyk for free? Sign up here for a limited number of monthly tests, including this functionality, to see what vulnerabilities exist in your application.

The post Keep your dependencies up-to-date with Snyk auto upgrade for Bitbucket Cloud appeared first on Bitbucket.

How to update build status for commits on Bitbucket Server


This guest post is by Vidhyadharan Deivamani, a Senior Engineering Specialist at Software AG.


Introduction

Working on a successfully committed revision always saves a great deal of time and effort. In this article, we will see how to update the build status of a specific commit in Bitbucket Server to let other users pick the correct revision. We will also see how to view the build status and pick the right build.

Overview 

A successful build badge

Developers can easily check the build status using a build badge. However, the build badge displays the status of a particular branch, and it's not always feasible to check the status of each branch or all branches at once.

Now imagine an enterprise development workstream where multiple developers work on multiple branches. In such an environment, there is a high chance that the build will break when working on hot features.

Thus, to save time and effort, developers must ensure that they work on commits that have built successfully. This situation can be sorted out using the Bitbucket build status API, which displays the status of builds. By using the build status API, you can always ensure that a developer picks a commit with a GREEN status, which indicates a passing build.
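
For reference, the build status API is a plain REST endpoint on Bitbucket Server. A raw call made outside Jenkins might look roughly like the sketch below; the host, credentials, commit hash, and build details are all placeholders:

  curl -u username:password \
    -H "Content-Type: application/json" \
    -X POST https://bitbucket.example.com/rest/build-status/1.0/commits/<commit-hash> \
    -d '{"state": "SUCCESSFUL", "key": "MY-BUILD", "name": "MY-BUILD-42", "url": "https://jenkins.example.com/job/my-build/42/", "description": "Build finished"}'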

In this article we will discuss:

  • What is a continuous build pipeline with Bitbucket Server Notifier?
  • How to configure the Jenkins plugin: Bitbucket Server Notifier
  • How to generate the notifyBitbucket pipeline script
  • How to add this new pipeline function
  • How to view commit build status

What is a continuous build pipeline with Bitbucket Server Notifier?

In Jenkins, a pipeline is a group of events or jobs which are interconnected with one another in a sequence.

In a Jenkins pipeline, every job or event depends in some way on at least one other event. The picture below represents a build or compilation flow and continuous delivery pipeline in Jenkins.

In general, when a developer commits to a repository, a webhook is triggered by Bitbucket to start the Jenkins pipeline job. The pipeline passes through sequential stages such as checkout, marking the commit as IN-PROGRESS, and compile/build/pack. Finally, the commit's build is marked as FAILED or SUCCESS.

These stages are interlinked. Every stage has its events, which work in a sequence called a continuous delivery pipeline.

In the repository, developers can see the status of the continuous build, whether it is success, failure, or in-progress.

The picture seen below displays three different branches, each with its own build status.

You can click each of the build statuses to view the corresponding Jenkins job. 

How to configure the Jenkins plugin: Bitbucket Server Notifier

To add the build status in the Jenkins pipeline, you must configure the Bitbucket Server Notifier Jenkins plugin.

In Jenkins, go to plugin manager and install the Bitbucket Server Notifier plugin. The picture shown below displays the Installed Bitbucket Server Notifier plugin in Jenkins. 

How to generate the notifyBitbucket pipeline script?

After installing the Bitbucket Server Notifier plugin in Jenkins, generate the pipeline script using the Pipeline Syntax generator. The steps below explain how to generate the notifier pipeline script:

  1. Go to http://jenkins:8080/pipeline-syntax/.
  2. From the Sample step drop-down list, select "notifyBitbucket: Notify Stash instance".
  3. Click Advanced.
  4. Fill in the following details:
    1. Stash URL: your on-prem Bitbucket URL.
    2. Credentials: click Add and select Create app password from the drop-down list. This password is your Bitbucket Server app password or your account password.
    3. Commit SHA-1: the commit ID, which will be replaced dynamically.
    4. Select the following checkboxes:
  • Ignore unverified SSL certificates
  • Keep repeated builds in Stash
  • Consider UNSTABLE builds as SUCCESS notification
  5. Click Generate Pipeline Script. The pipeline script is generated as shown in the screenshot above.
  6. Copy the generated notifyBitbucket script.

How to add this new pipeline function

The generated notifyBitbucket script is abstract. You must create a parameterized function so that notifyBitbucket can be called to update the build status in the Jenkins pipeline during each stage.

  1. Create a new function in your Jenkinsfile:

 def notifyBitbucket(String state) {

    if ('SUCCESS' == state || 'FAILED' == state) {
        // Set result of currentBuild !Important!
        currentBuild.result = state
    }

    notifyBitbucket commitSha1: this.COMMIT_HASH, considerUnstableAsSuccess: true, credentialsId: 'BitbucketAppPassword', disableInprogressNotification: false, ignoreUnverifiedSSLPeer: true, includeBuildNumberInKey: false, prependParentProjectKey: false, projectKey: '', stashServerBaseUrl: 'http://repository.url/'

}

In the above notifyBitbucket function, commitSha1 is assigned from the Git SCM variable.

Note: commitSha1 is assigned via a custom global variable at the checkout stage.

After adding the function, the Jenkinsfile will look like the one below.

COMMIT_HASH will be loaded from the Git SCM variable GIT_COMMIT.

  #!/usr/bin/env groovy
def COMMIT_HASH

node {
    stage('checkout') {
        // Before starting the build, set the status as INPROGRESS
        scmVars = checkout scm
        this.COMMIT_HASH = scmVars.GIT_COMMIT
        this.notifyBitbucket('INPROGRESS')
    }

    stage('install dependency') {
        // Skipped for this DEMO
    }

    stage('packaging') {
        try {
            sh "npm run build"
            currentBuild.result = "SUCCESS"
        } catch (e) {
            // If there was an exception thrown, the build failed
            currentBuild.result = "FAILED"
            throw e
        } finally {
            // Success or failure, always send notifications
            this.notifyBitbucket(currentBuild.result)
        }
    }
}

def notifyBitbucket(String state) {

    if ('SUCCESS' == state || 'FAILED' == state) {
        // Set result of currentBuild !Important!
        currentBuild.result = state
    }

    notifyBitbucket commitSha1: this.COMMIT_HASH, considerUnstableAsSuccess: true, credentialsId: 'BitbucketAppPassword', disableInprogressNotification: false, ignoreUnverifiedSSLPeer: true, includeBuildNumberInKey: false, prependParentProjectKey: false, projectKey: '', stashServerBaseUrl: 'http://repository.url/'

}

The example above displays the three stages of the Jenkinsfile, namely checkout, install dependency, and packaging. The currentBuild.result variable is set in each stage to indicate its status.

Call notifyBitbucket in the initial stage (in our case, the checkout stage) and again at the end of the packaging stage.

How to view commit build status

Follow the below steps to view commit status:

  1. In Bitbucket Server, log on to your repository and navigate to the All Branches Graph.

You will see all commits with their respective build status.

From the All Branches Graph, copy your successful commit hash and execute the command below to work on that particular commit.

  $ git checkout <commit-hash>

where <commit-hash> is 192ccbf60ee, the commit with a successful build.

In this article, we have seen how to update the build status in a Jenkins pipeline by using the Bitbucket Server Notifier plugin. With that, we have found a way to ensure that we always work on a successfully built revision.


Author bio: Vidhyadharan Deivamani is an engineer with over 10 years of development experience in both Java and front-end environments, working with complex architectures.


Love sharing your technical expertise? Learn more about the Bitbucket writing program.

The post How to update build status for commits on Bitbucket Server appeared first on Bitbucket.

An introduction to Bitbucket Pipelines

CI/CD tools are an integral part of a software team’s development cycle. Whether you’re using it to automate tests, a release process, or deployments to customers, all teams can benefit by incorporating CI/CD into their workflow.

Want to know the difference between continuous integration, continuous delivery, and continuous deployment? Check out our CI/CD microsite for articles, guides, and more.

Bitbucket Pipelines is CI/CD for Bitbucket Cloud that's integrated in the UI and sits alongside your repositories, making it easy for teams to get up and running building, testing, and deploying their code. It serves teams new to CI/CD all the way through to those with sophisticated delivery and deployment pipelines.

Easy setup

Teams new to CI/CD or familiar with setting up their own CI servers will appreciate how easy it is to get started with Pipelines. It's a 2-step process to configure a pipeline, and there are a number of language templates available to get started. And because Pipelines is a cloud-native CI/CD tool you never have to worry about provisioning or managing physical infrastructure, meaning more time focusing on other priorities.
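
As a rough sketch (not one of the official templates), a minimal bitbucket-pipelines.yml for a Node.js project could look like this:

  image: node:10.15.3

pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - npm install
          - npm test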


Integrations

We know every team has a different way of working and this extends to the tools they use in their workflow. With Pipes it's easy to connect your CI/CD pipeline in Bitbucket with any of the tools you use to test, scan, and deploy in a plug and play fashion. They're supported by the vendor, which means you don't need to manage or configure them and, best of all, it's easy to write your own pipes that connect your preferred tools to your workflow.


There are currently over 60 pipes offered by leading vendors such as AWS, Microsoft, Slack, and more. Learn more about our integrations and get started.

Deployments

For those looking to implement a more mature CI/CD workflow that involves a release process or deployment to environments, Pipelines offers a robust set of deployment functionality that provides teams the confidence and flexibility to track your code from development through code review, build, test, and deployment all the way to production. For more sophisticated workflows you can create up to 10 environments to deploy to, and see what code is being deployed where via the deployment dashboard.

Learn how to set up Bitbucket Deployments.

Increased visibility and collaboration

Visibility into what's going on and what's been deployed to customers is vital to all teams. Pipelines has integrations with tools like Jira, Slack, and Microsoft Teams that provide context on your builds and deployments right where your team plans and collaborates. For collaboration tools like Slack it's easy to see what's happening with your CI/CD tool and act on it too.


When integrated with Jira Software, Pipelines provides visibility for everyone who works inside Jira, from backlog all the way through to deployment, surfacing deployment and build status in Jira issues as well as which environments a Jira issue has been deployed to.


Pricing

Pipelines pricing is based on a simple, consumption-based model of build minutes used, and every Bitbucket plan includes build minutes. Unlike other cloud vendors we don't charge for concurrency, meaning you don't pay extra to follow CI/CD best practice and run your pipeline steps as fast as you can.

Plan type    Build minutes per month
Free         50 minutes
Standard     2500 minutes
Premium      3500 minutes

If you’re wondering where your team might stand when it comes to build minutes usage, we typically see small teams with fast builds using about 200-600 minutes. Head here to learn more about build minutes and how they work.

Get started with CI/CD today

Every team should have a CI/CD tool as part of their development toolchain, whether you’re simply interested in automated testing or looking to create sophisticated deployment workflows.

Whatever your requirements may be, a tool like Pipelines is perfect for your needs and it’s free to get started!

For a step-by-step tutorial of how to set up Pipelines for your team, head on over here.

The post An introduction to Bitbucket Pipelines appeared first on Bitbucket.

Preview your pull requests and more with Atlassian for VS Code 2.0

Earlier this year we released the Atlassian for Visual Studio Code extension, bringing Bitbucket and Jira closer to where developers work every day. The extension allowed you to view and create issues, pull requests, and pipelines without ever leaving the IDE. The extension speeds up your workflow with time-saving features like the "Start work" button (to create a branch and transition an issue in one step) and the ability to create an issue from a //TODO code comment directly from your editor.

Today we’re excited to announce the release of Atlassian for Visual Studio Code 2.0 and 2.1! You can now preview pull request diffs before you create them so you can catch any last minute fixes. These releases also add support for Server & Data Center deployments of Bitbucket and Jira and a wider variety of Jira issue configurations, helping more developers save time. We’ve also got a single view for all of your issues and fine-tuned notifications so you can stay up to date on your work.

Pull request diff previews

While the extension allowed you to create pull requests directly from VSCode, you had to go back to the Bitbucket website in order to preview the diff you were about to ask teammates to review. In 2.1 you can now preview your pending PR’s diff before you create it, helping you catch any last minute fixes.


Support for Server and Data Center editions of Jira and Bitbucket

With version 2.0 you can connect your IDE with on-premise deployments of Jira and Bitbucket to view issues, create pull requests, and more. You can also connect both to on-premise and cloud versions simultaneously.


Support for more Jira issue fields and configurations

Jira supports a wide range of fields and workflows adaptable for every team’s needs. This release adds support for more fields such as time tracking, adding issues to sprints, and removed default fields. Now you can view and update even more issues without having to leave your IDE.


View all your issues across multiple sites and instances

Many large organizations split their work across several on-premise instances, cloud sites, or a combination of both. Contract developers find themselves working across many sites as they work with their customers. With this update, you can see a single list of all of your issues across all the sites and instances you’re working in. This gives you a complete view of your upcoming work so you can prioritize what to work on next.


Fine-tuned notifications

The existing in-app notifications on new issue creation help reduce context switching. In this version you can now choose which specific Jira filters you want to be notified about. For example, you might want to receive a notification when a new issue is assigned to you but not for every new issue in a project.


We’d love for you to try out the latest version of Atlassian for Visual Studio Code. Got feedback? Let us know via the Send Feedback button on the Atlassian Settings screen. Happy coding!

The post Preview your pull requests and more with Atlassian for VS Code 2.0 appeared first on Bitbucket.

Community Contest: Software Development Horror Stories

To celebrate Halloween season, the Bitbucket team put out a call to the Atlassian Community to share their spookiest software development stories. And Community delivered, sharing their tales of zombie issues, lurking semi-colons that snuck in right before a code freeze, and bugs that just wouldn’t die.


Below is the winning story from Jimmy, a Bitbucket user and Developer for Igloo Software in Canada. 


When we first started using Git and Bitbucket Server (which was Stash at the time), I accidentally reverted a merge commit in our master branch the day before we were supposed to deploy to production.

In my inexperience at the time, my first plan to fix this was to reset the head commit to an earlier point in history. Instead, I ended up wiping out about two months of development changes.

Let's give some context. This all happened 5 years ago when our company did a shift from SVN to Git and no one was experienced with Git's commands or structure. We were getting ready for our current release, and I was asked if we could make three versions of our "Release" branch that contained different sets of features.

We were just finishing testing a couple of features, and the project manager wasn't sure if we were ready to release feature "A", feature "B", or both. In hindsight, the feature release should never have been dictated by the project manager. If testing wasn't finished, we shouldn't have been trying to ship either of them!

Still, I went to work to setup these branches. I managed to merge things out of order and ended up getting the features mashed together in the wrong branches.  My first response was to revert the merge commit that matched the feature I didn’t want to merge. Unfortunately, this also reverted a number of bug fixes required for our release.  So, my next action was to simply reset to a point in history before I made any of the merges.  

No big deal right?

I didn't realize what I was doing at the time, but the specific options I was passing in actually deleted all the history to that point. So now I'm in a state where my local and the remote have a bunch of history deleted, which anyone who uses Git can explain is no good. I decide that I now need to raise some red flags to the development managers and explain what I did. They were very understanding, since no one was very experienced with Git and thankfully, I wasn't blamed for doing the wrong thing.

At this point, we talked with the rest of the development team to see if anyone had not pulled from the remote since I pushed those "bad" changes. Most people had already pulled the bad changes, but we managed to find a few people who had not. Still, their versions were missing some of the other changes we needed.

We were able to combine their changes with some cherry-picking from one of my branches into a single branch and we forced that branch on top of the “bad” release branch on the remote.

We made sure that everyone fetched this copy of the branch so that everyone would have this new consistent history. From then on, we started looking closer at understanding the Git commands and their options. And the features in Bitbucket that we can use to help prevent these types of mistakes in the future, such as preventing rewriting history, became key to our workflow. 

While that incident was a long and stressful night as we tried to recover in time for our production deployment, I did learn three big lessons from the experience that have stuck with me since:

  1. Always double check everything locally before pushing to the remote.
  2. Creating backup branches locally "just in case" never hurts.
  3. Using serious branch permissions to prevent deleting branches and rewriting history can save you from stupid mistakes.

The post Community Contest: Software Development Horror Stories appeared first on Bitbucket.
