Using Atlassian Stash pull requests for mandatory code reviews

During my early years of software development I used to think of code reviews as a necessary bureaucratic monster: a process designed to stop me from delivering value and to focus on pointing out my mistakes.

My outlook has since changed. There are many benefits of code reviews. The ones that matter most to me:

  • they increase code quality and therefore ease maintenance
  • they facilitate sharing information and knowledge with fellow developers
  • they improve my coding skills thanks to the feedback I get

At RBS we use Subversion and Git as our version control tools, and Stash to manage our Git repositories. Stash has a very useful feature that can make code review a mandatory step before code is merged into the main branch. In this post I would like to show you how to set it up and how to use it.

Use case for code reviews

The use case for mandatory code review is taken from a real request raised at my work by one of the teams. The team was typical: a technical lead, senior devs and junior devs. They wanted to leverage the code review goodness for learning.

What the users wanted to do is:

  • allow only specific users to modify code on the Master branch of a Git repository
  • allow everyone else on the team to create local branches and push those branches to the remote repository
  • have the ability to raise a code review of changes made on a user branch before merging the changes into Master
  • have the ability to comment on and decline the changes
  • once the changes are accepted, allow anyone with sufficient permissions to merge the code

You might notice that this process is similar to one that is quite common in the Open Source community and was championed by GitHub: the pull request (on a side note, EpicPullRequests is a great site).

Preparing repository for code reviews (or for Pull Requests)

The first thing to do is to make sure that all the people on your team are Contributors to the project. I have a group of users in Stash called Superheroes, and I need to set them as Contributors on my project.

My user group Superheroes set up in Stash
Project level permission settings

What I’ve done above means that every superhero in the group can contribute to the project. The next step restricts changes on the Master branch to a specific user (in our case, Superman).

Adding branch permissions for Superman on Master

The above action will result in only Superman being able to make any changes on Master.

Batman tries to push into the repository and fails

What would Batman do?

For Batman (a user who is restricted on Master but allowed at the project level) to be able to work, he needs to work on a branch, push that branch to Stash and create a merge request (a pull request).
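The command-line flow for that could look as follows (a minimal sketch; the branch name batmobile is just an example):

git checkout -b batmobile            # create and switch to the feature branch
git commit -am "Batmobile feature"   # commit the work locally
git push origin batmobile            # publish the branch to Stash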

Batman working on the Batmobile feature on its own branch, pushing to the remote repository after the work is done

Creating the Pull Request

When Batman has finished working on the feature, he would like the Batmobile to become mainstream and be adopted by all superheroes. What he needs to do is merge his feature into the Master branch. We already know that he cannot do it himself, as someone needs to review his changes. In our case that someone is Superman.

Batman creates a Pull Request.

Batman creates a pull request for his changes to be merged into Master. The selected reviewer is Superman

Once Superman logs into Stash he can review the pull request: approve it, decline it, comment on it, etc.

The screen Superman sees when he reviews Batman’s pull request

Once the request is approved, Superman or anyone else with the permissions to modify Master can merge it.

It is worth mentioning that anyone can review the changes, as Batman can request anyone to be the reviewer; however, only users with sufficient privileges will be able to merge them.

Superheroes conclusion

The above setup leverages the Branch Permissions feature in Stash. Anyone who would like their changes merged into the Master branch needs to go through code review.

Wishing you many happy reviews and much more learning.

How to use Gradle Wrapper to build project in TeamCity inside enterprise network

Gradle is a great tool for building projects. I’m using it to build Java and Groovy modules. TeamCity is a Continuous Integration server that many teams at RBS are using.

We have a rather large farm of build agents. Some of them are built specifically to suit particular build requirements (for example, OS or browser version). However, the majority of the agents are generic and can be used by any build and project.

By default we don’t have a Gradle distribution installed on those TeamCity agents, and TeamCity doesn’t come with a bundled version of Gradle either. We could install versions of Gradle on the agents, but that is impractical due to the number of agents and the fact that many different distributions of Gradle could be required.

The solution to that problem is the Gradle Wrapper: a few files that you include as part of your project.

In this article I will introduce the Gradle Wrapper, show how to use it and how to set it up in TeamCity so that it works behind a firewall/proxy in an enterprise network.

IntelliJ project view with the Gradle Wrapper files

The main role of the wrapper is to download a distribution of Gradle and execute the build independently of the platform.

The interesting bit is that you can use Gradle to generate those files.

Creating Gradle Wrapper files

The Gradle Wrapper files can be copied from another project or generated using the Gradle Wrapper task.

task prepareWrapper(type: Wrapper) {
   gradleVersion = '1.4'
}

The above lines show how to create the wrapper task in your project’s build.gradle file. There are a number of properties that you can set; I will discuss them further below. Documentation for those properties can be found at the Gradle documentation page: http://www.gradle.org/docs/current/dsl/org.gradle.api.tasks.wrapper.Wrapper.html

Results of executing the prepareWrapper task

The task will generate the folders and files that can be seen in the picture at the top of this post.

Using Gradle Wrapper 

You should be able to use Gradle Wrapper in the same way you use Gradle from your command line.

Using the Gradle wrapper script to list the tasks.

When you execute the wrapper for the first time it will download the distribution (just as you can see in the picture above).
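For example, to list the available tasks you run the wrapper script instead of the gradle command:

./gradlew tasks      (on Unix-like systems)
gradlew.bat tasks    (on Windows)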

You could face your first problem here if you are inside a corporate network, behind a firewall and a proxy.

Setting Wrapper to work behind Proxy

There are two ways you can address the issue:

  1. Set up proxy details in your Gradle Wrapper scripts
  2. Provide a wrapper distribution URL that is reachable within your corporate network

To set up proxy details you can modify the gradlew and gradlew.bat files. The top of both files contains a DEFAULT_JVM_OPTS variable that you can set. For example:

#!/usr/bin/env bash

##############################################################################
##
##  Gradle start up script for UN*X
##
##############################################################################

# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS="-Dhttp.proxyHost=proxy.host.net -Dhttp.proxyPort=8080 -Dhttp.proxyUser=proxy.user -Dhttp.proxyPassword=awesome-password"
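The Windows counterpart in gradlew.bat looks along these lines (a sketch; the proxy host and credentials are placeholders):

@rem Add default JVM options here, including the proxy settings.
set DEFAULT_JVM_OPTS=-Dhttp.proxyHost=proxy.host.net -Dhttp.proxyPort=8080 -Dhttp.proxyUser=proxy.user -Dhttp.proxyPassword=awesome-password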

To provide an alternative Gradle distribution URL you can set it in the wrapper task of your build.gradle file before you generate the Gradle Wrapper files.

task prepareWrapper(type: Wrapper) {
   gradleVersion = '1.4'
   distributionUrl = 'alternative.location'
}

Or alternatively you can modify the gradle/wrapper/gradle-wrapper.properties file.

#Wed Feb 27 11:54:01 GMT 2013
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=http://some.location.net/gradle-distributions/gradle-1.4-bin.zip

The benefit of the first approach is that you don’t need to host the Gradle distribution anywhere within your network. The minus is that when the proxy requires authentication, you need to put credentials in the file.

The benefit of the second approach is that you don’t need to modify the gradlew script files or provide proxy user credentials. Also, the distribution is hosted internally, which can mean faster downloads. The downside is that you need to host it somewhere internally, accessible over HTTP.

Setting up TeamCity build

The task is relatively simple, with only one hurdle to overcome: once the Gradle Wrapper downloads the Gradle distribution, where does it actually put it, and where will the dependencies downloaded during the build phase go?

TeamCity build setup page with Gradle Wrapper enabled

Note that I have not declared where those downloads (Gradle distributions and build dependencies) will go. We can set this up in two places:

  1. gradle-wrapper.properties file
  2. TeamCity system property for the build

Setting the Wrapper Properties file

The gradle/wrapper/gradle-wrapper.properties file can be modified directly or set up during the prepareWrapper phase.

The prepareWrapper task:

task prepareWrapper(type: Wrapper) {
    gradleVersion = '1.4'
    distributionUrl = 'alternative.location'
    distributionBase='/some/location/on/agent/gradle'
    zipStoreBase='/some/location/on/agent/gradle'
}

The properties file:

distributionBase=/some/location/on/agent/gradle
distributionPath=wrapper/dists
zipStoreBase=/some/location/on/agent/gradle
zipStorePath=wrapper/dists
distributionUrl=http://some.location.net/gradle-distributions/gradle-1.4-bin.zip

The benefit of this approach is that you don’t have to configure anything specific in TeamCity. The downside is that your script needs to know the details of the agent’s file system.

Setting the TeamCity system property

The property to set is the one referenced by gradle-wrapper.properties, which is GRADLE_USER_HOME.

Example of TeamCity system property setup

At this point it is important to mention one thing: if the same GRADLE_USER_HOME is used across different builds, it can save time on downloading the Gradle distribution and build dependencies.
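In the build configuration’s parameters that boils down to an entry along these lines (a sketch; the agent path is hypothetical, and whether you define it as an environment variable or a system property depends on how your build reads it):

env.GRADLE_USER_HOME = /opt/buildagent/gradle-home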

I wish you many happy builds.

Modelling Deployment Pipeline with JetBrains TeamCity

The Deployment Pipeline is a concept from Continuous Delivery. Explaining it in the simplest way possible, I would say that it is an automated way of getting software from version control into the hands of the end user.

The process of getting software from version control (such as Subversion or Git) to the end user typically involves travelling through checkpoints. Those checkpoints (or steps) could be: building the software -> unit testing -> automated acceptance testing -> deployment into a QA/UAT/Staging environment -> manual QA -> release into production.

Example of Software Delivery

Example process diagram for changes moving through the Deployment Pipeline.

For a more detailed description of the Deployment Pipeline I recommend this article from the Continuous Delivery gurus (Jez Humble, David Farley): http://www.informit.com/articles/article.aspx?p=1621865

No matter what the process is, there is a need for tools to automate and support the Deployment Pipeline. This automation is typically handled by Continuous Integration servers.

I’m going to take a closer look at the way TeamCity supports modelling the Deployment Pipeline.

How to set up TeamCity

Step dependencies

When designing the Deployment Pipeline it is very important to create a number of steps, reflecting the delivery process, where each step depends on the previous steps (checkpoints) of the process. Among the many features that TeamCity has, there is one for adding a Snapshot Dependency between different builds.

The picture below shows an example of two TeamCity builds representing two steps in my project’s delivery.

Build dependencies

The build called Acceptance tests on Staging has a dependency on Deploy to Staging.

What this means in practice is that TeamCity will execute the Acceptance tests on Staging build only if there has been a successful deployment to the Staging environment.

Triggering subsequent builds

A subsequent build can be triggered automatically when the previous build in the Deployment Pipeline was successful. For example, in our configuration the automated acceptance tests on the Staging environment are triggered automatically as soon as the deployment finishes.

This is achieved by setting up TeamCity build to be triggered on Finished build, just like in the next screen cap example.

Trigger

There are also steps in the Deployment Pipeline that should be triggered manually. In our case the deployment to the Staging environment is triggered manually by a user, because we only want selected versions in that environment and want to control when they are released into it. Removing all triggers from the Build Triggering configuration is enough to make a build manually triggered.

Viewing the build pipeline

The latest version of TeamCity (I’m using TC 7.1.3) has a nice feature in the project view called Build Chains. It is nothing else but a unified, single view of dependent builds (the Deployment Pipeline).

Example pipeline from my project:

Deployment pipeline

In our Deployment Pipeline, every commit of code triggers the first build, Clean and Compile the project. The pipeline stops at the deployment into the smoke environment. Once confirmed, the next station is the deployment into Staging. The final stop is the release into production.

In our case the view of a build chain is the exact representation of our automated Deployment Pipeline.

Setting up Grails directories to NOT use default user home folder

I have my user home folder at work mapped to a network drive. It is incredibly slow. Unfortunately many applications use the home folder as the default dumping ground, and the same applies to Grails. It puts its cached files, Ivy dependencies and who knows what else in there.

It is not straightforward to change that location and, to be honest, it is not very well documented.

Anyway, here is the best solution I managed to find on my Windows box (a consolidated example follows the list):

  • in the folder %USER_HOME_FOLDER%/.grails create a file called settings.groovy
  • in the settings.groovy file add the line grails.dependency.cache.dir = "path/you/want" – this is for the Ivy downloaded cache dependencies
  • in the settings.groovy file add the line grails.work.dir = "another/path"
  • create a system variable called GRAILS_AGENT_CACHE_DIR and point it to another location you would like
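Putting the settings.groovy part together (a sketch; the paths are just examples):

// %USER_HOME_FOLDER%/.grails/settings.groovy
grails.dependency.cache.dir = "D:/grails/ivy-cache"   // Ivy dependency cache
grails.work.dir = "D:/grails/work"                    // Grails working directory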

These settings made Grails stay away from my mapped network home folder.

Greg

TestNG NG, Next Generation of a runner for TestNG

I’ve been working with TestNG for two years now. The team that I work on decided to switch to TestNG thanks to a couple of very important features: @BeforeSuite and @BeforeClass. We are using Gradle as our build tool, and Gradle supports TestNG test execution.

For all the great features TestNG comes with, it also has features that can obfuscate test code readability. We also observed that there is no guarantee that the tests in the same class file will be executed together as a set.

With the 25 minutes of my train ride to/from work and the determination to build something sweet and simple, I decided to create a Gradle plugin that runs TestNG tests in a deterministic order, supporting only a small set of TestNG features.

I’ve made the code available here: https://bitbucket.org/gigu/testngng. The code is still work in progress, but there is already a functional Gradle plugin that can produce the same style of XML reports as TestNG itself. There is also an option of an HTML report with a simple and pretty style. I’ve written it in Groovy, as I like Groovy. It’s open and available to anyone.

Features supported by TestNGNG

  1. TestNGNG will recursively scan the class folder passed in as a parameter, in search of possible test files. It treats this top-level folder as a suite.
  2. TestNGNG ENSURES all the tests in a class file are executed as a set of tests.
  3. TestNGNG will build a tree of tests and dependencies at the beginning of a run, before it executes a single test. The test tree is built in the form: one test suite -> many test classes -> many tests.
  4. TestNGNG supports the original TestNG annotations; it doesn’t have its own annotations, as it is only a runner.
  5. TestNGNG supports the @Test annotation on a method or a class.
  6. It supports dependencies between test methods with:
    @Test(dependsOnMethods = "foo")
  7. It supports @BeforeSuite/Class/Test/Method and @AfterSuite/Class/Test/Method. However, @BeforeTest and @BeforeMethod (and likewise @AfterTest and @AfterMethod) are treated in the same way and executed before/after each test. Just to avoid (or add to) confusion.
  8. It supports disabling a test via the enabled attribute of the annotation:
    @Test(enabled = false)

    I’m not proud of this feature though and am tempted to remove it.

  9. It supports exception expectation via the expectedExceptions attribute:
    @Test(expectedExceptions = WhatevaException.class)
  10. TestNGNG supports data providers, however they have to be declared within the same test class file. Data providers can be named or anonymous. For example:
    @Test(dataProvider = "makeMeSomeData")
    public void testWithProvider(String v1, String v2) {
        System.out.println(String.format("%s - %s ", v1, v2));
    }

    @DataProvider
    public Object[][] makeMeSomeData() {
        return new Object[][]{
            {"some1", "Some2"},
            {"some3", "some4"}
        };
    }
    
  11. TestNGNG supports setup methods inherited from base classes. For example:
    public abstract class BaseForTestWithTestSetupMethods {
        public String baseValue = "";
        @BeforeMethod
        public void executeBeforeSuite() {
            baseValue = "BeforeMethod";
        }
    }
    
    public class TestClassExtendingFromBaseClass extends BaseForTestWithTestSetupMethods {
        @Test
        public void shouldPass() {
            assertThat(baseValue, is("BeforeMethod"));
        }
    }
    

Gradle plugin

The Gradle plugin is very simple to use. It only requires the plugin jar on the classpath and TestNG as a testCompile dependency. The plugin adds a testngng task to the project. Sample use of the plugin:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath fileTree(dir: '../gradle-plugin/build/libs/', include: '*.jar')
        classpath fileTree(dir: '../testngng/build/libs/', include: '*.jar')
        classpath group: 'org.codehaus.groovy', name: 'groovy-all', version: '1.8.0'
    }
}
apply plugin: 'java'
apply plugin: 'testngng'
repositories {
    mavenCentral()
}
dependencies {
    testCompile('org.testng:testng:6.5.1')
}

Most of the other TestNG features are not covered, as they are useless and very often stand in the way of test readability and simplicity.

Don’t ask what your test framework can do for you; ask what you can do for your test framework.

How to build the plugin and use it

You need to check out the project from BitBucket and build it with Gradle by typing on the command line:

gradle jar

The built jar is used by the test runner Gradle plugin inside the testngng_gradle_sample project. You can run the gradle command from the sample project folder:

gradle testngng

I would love to see someone having a go and providing me with some feedback. There is still a whole lot of stuff on my list that needs development in the near future, like test-run feedback during the build, more plugin configuration options and multiple suites per project/module.

Cheers, Greg

Post-Redirect-Get pattern with Grails

In this post I would like to share with you a very common pattern used in web application development and how I implemented it in my Grails application. Let’s start with a simple explanation of the pattern and what it is useful for.

Post-Redirect-Get

Post-Redirect-Get refers to a flow that web applications follow. When the user submits a form in the browser, the information is typically sent to the web server with the HTTP POST method. The application processes the information and sends back a response in the form of a redirect to another view. The browser then loads the new state from the web server using the GET method.

Post-Redirect-Get pattern
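In HTTP terms the exchange looks roughly like this (a sketch; the paths and the id are made up):

POST /foo/save HTTP/1.1          (browser submits the form)
HTTP/1.1 302 Found
Location: /foo/show/42           (server redirects instead of rendering)
GET /foo/show/42 HTTP/1.1        (browser follows the redirect)
HTTP/1.1 200 OK                  (result page is rendered)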

Some of the benefits of using this pattern are:

  • Pressing the refresh button in the browser will not cause a duplicate form submission. The most annoying dialog box asking if you want to resubmit your form will be gone.
  • Bookmarking the result page becomes possible.
  • Pages with form submissions will be gone from your browser history.
  • Nice and clean separation of the HTTP methods that change the state of an object (POST, PUT, DELETE) from the non-destructive, read-only method (GET).
  • Easier testing of the response, as all you need to check is the redirection. Typically a successfully submitted form redirects to a different page than a failed submission, so it is enough to test the redirection.

With the pattern and its benefits in mind, let us look at a concrete FooBar example.

Grails example

I assume you know what Grails is. I’ve created a project and a domain class called Foo. The only property it has is a String, bar, which cannot be blank.

package post.redirect.pattern
class Foo {
  String bar
  static constraints = {
    bar blank: false
  }
}

The Foo controller has two methods responsible for the creation of a new object:

class FooController {
…
  def create() {
    def instance = new Foo(params)
    if (flash.model) {
      instance = flash.model
    }
    [fooInstance: instance]
  }

  def save() {
    def fooInstance = new Foo(params)
    if (!fooInstance.save(flush: true)) {
      flash.model = fooInstance
      redirect(action: "create")
      return
    }
    flash.message = "Hooray, you did it!"
    redirect(action: "show", id: fooInstance.id)
  }
…
}

When the user submits the form and the object is invalid, the browser is redirected back to the form instead of the form being rendered (with error messages) directly from the save action.

There is an extra check to see if the user arrived on this page after an unsuccessful form submission.

I used flash scope to store the invalid object between requests, so the user gets feedback on what is wrong with the provided values.
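On the view side, the create page can then render the validation feedback from that stored object using the standard Grails error tags (a sketch, assuming the usual create.gsp with a fooInstance model variable):

<g:hasErrors bean="${fooInstance}">
    <g:renderErrors bean="${fooInstance}" as="list"/>
</g:hasErrors>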

If you want to try another example, the easiest way is to create a domain class and generate the scaffolding for it. You can observe the browser behaviour with the generated code. Try refreshing the page after a form submission (valid and invalid), try to navigate between pages using the Back and Forward buttons of your browser, try to bookmark the submitted page.

Later, go and modify the code, replace the renders with redirects and observe the browser behaviour after that.

Summary

That is all there is to it. It’s a rather simple and easy-to-follow pattern with a number of benefits, and Grails makes it a doddle to implement. Have fun redirecting.

Greg

Unit testing Grails controllers with duplicate form submission check functionality

I’ve been doing some Grails 2.0.1 development recently. I like the maturity of the framework and the ease of doing things.

One of the things Grails comes with is a simple way of avoiding duplicate form submissions. Have a look at the code below:

def myControllerMethod() {
  withForm {
    render "theGoodStuff"
  }.invalidToken {
    render "theBadStuff"
  }
}

That’s the controller bit. In your view you need to enable the feature by passing the useToken parameter to the form tag:

<g:form action="myControllerMethod" useToken="true"></g:form>

It looks very simple and elegant. However, when we want to test the controller we need to make sure we provide a matching token when calling the method.

The documentation on testing Grails applications covers this particular functionality, but I found the described way not to work with Grails 2.0.1. Not much was blogged about it, so I looked through the mailing lists and found one thread and a bug report for this issue.

Anyway, to make it work, the piece of documentation from Grails version 1.4.x explains how to do it, and that way works.

In the controller test method we need to place this code:

…
def token = SynchronizerTokensHolder.store(session)
params[SynchronizerTokensHolder.TOKEN_URI] = '/myController/myControllerMethod'
params[SynchronizerTokensHolder.TOKEN_KEY] = token.generateToken(params[SynchronizerTokensHolder.TOKEN_URI])

controller.myControllerMethod()
…
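Put together, a complete test method could look roughly like this (a sketch, assuming the Grails 2 unit-test mixin and a hypothetical MyController with the method from above):

import grails.test.mixin.TestFor
import org.codehaus.groovy.grails.web.servlet.mvc.SynchronizerTokensHolder

@TestFor(MyController)
class MyControllerTests {

    void testRendersGoodStuffForValidToken() {
        // store a token in the mocked session, the same way withForm expects it
        def token = SynchronizerTokensHolder.store(session)
        params[SynchronizerTokensHolder.TOKEN_URI] = '/myController/myControllerMethod'
        params[SynchronizerTokensHolder.TOKEN_KEY] = token.generateToken(params[SynchronizerTokensHolder.TOKEN_URI])

        controller.myControllerMethod()

        assert response.text == 'theGoodStuff'
    }
}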

Happy testing. Greg

Useful links:

http://grails.1312388.n4.nabble.com/grails-2-0-testing-controller-with-withForm-invalid-notation-td4316150.html
http://jira.grails.org/browse/GRAILS-8504
http://grails.org/doc/1.4.x/guide/9.%20Testing.html
http://grails.org

Automated software release in complex environments

Automating a software release might seem like an impossible task in a complex environment. I would like to give you, my dear reader, handy tips and tricks for achieving it.


Software journey

Whether you know it or not, the piece of software that enables you to read this article went on a long journey. It was invented, thought through, designed, developed, tested, fixed and then delivered to you. Software continues to evolve. New functionality is invented, designed, developed, tested, fixed and delivered to you again. This cycle happens over and over again. If it doesn’t, the software is dead, and I wouldn’t use it if I were you :)

Note the last step in the software journey: delivery. I like to call this process the Software Release, or Unleashing the Software. The release depends on many factors, one of them being the software’s purpose. For a desktop application it is enough to make it available (via a web site download, for example) so the user can get it and install it on a desktop. Web applications are delivered to a web server.

Release process

The Software Release process can be simple or very complicated. Let’s examine three examples:

  1. A desktop application release could involve copying (perhaps extracting) a number of files into a known location and perhaps making some changes to saved files (for example, user files with configuration and settings in the user directory).
  2. A web application release could involve pushing a new version of the application to web servers, restarting them and migrating the data stored in the database.
  3. A multi-tiered enterprise application release could involve stopping a number of services, waiting for a suitable services state (for example, an empty EMS service), migrating data schemas (for example, database migration), restarting multiple applications, restoring application state, etc.

As you can see, the complexity of the process can be great. Some steps in the process depend on others. Other steps can take a great amount of time. Sometimes it is impossible to stop part of the application or a service for the required release.

Whatever the problematic, time-consuming and fragile steps of the release process are, you can be sure that sooner or later someone will make a mistake during the release, causing downtime or even damage to the system.

That is why it is important to have an automated release.

Automated release

My favorite form of automated release is One-Click Deploy: a tool (most likely a script) that performs all the hard work with a single user interaction.

Some of the benefits of release automation like this are:

  • Reduced mistakes when performing manual steps
  • No time wasted by a poor soul who has to go through the manual process
  • Reusability and the simplicity of using the same tool to release into different environments
  • A tool that verifies the release process itself (if something went wrong, it probably means the process needs updating)

Complex environments

Automating in a complex environment is even more important than anywhere else. This is because it is much easier to make a mistake while releasing and cause problems. It also removes the great amount of time consumed by manual steps.

It might be hard to automate a release in a complex system. Consider the system on my current project.

A large distributed cache makes it difficult to stop the application, as data will be lost. EMS topics are read by a database persistence application and a fail-safe environment, so we need to wait for the topics to be read entirely. Once everything is stopped we need to roll the new code base out to 20-40 physical hosts and apply all the necessary configuration changes. The persistence layer needs to be migrated to represent the domain correctly. When we restart the applications we need to reload the previous, valid state (load 300 GB worth of data into the distributed cache). And that is only the tip of the iceberg of how difficult it can be to automate the release.

There are things that you can do or adopt on a project that make the automation much easier.

Tips and tricks

Tools and deployment environments

Use tools and deployment environments that have an accessible management API.

The environment or the containers that you deploy to should have an open management API. If you are deploying to a web server, you want to be able to programmatically stop the server, deploy new code, perform configuration changes, start it again and validate it.

Most cloud services provide this kind of API, and typical Java web containers are manageable via a public API.

If there is no public API but there is a web console for management, it is possible to automate via a typical web testing tool (e.g. Selenium or Geb).
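To illustrate, a rough Geb-style sketch (the console URL, form selector and field names are entirely made up; treat this as an idea, not a recipe):

import geb.Browser

Browser.drive {
    go "http://webconsole.internal/manage"   // hypothetical management console
    $("form#deploy").version = "1.2.3"       // fill in the version field of the deploy form
    $("input", name: "deploy").click()       // press the deploy button
}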

If you are starting a project and still have a chance to select an appropriate environment, you are lucky: you can make your life easier by selecting an environment with a public management API. If you are deep into the project and it is too late, there are always ways to hack the system. Give it a try.

Use a tool that is flexible and won’t enforce a specific way of working.

A number of times I was on a project where the tool used for the release enforced a specific way of working. Instead of being productive in automating the release process, I was forced to fight it and hack around it.

There are better things to automate your release with than XML.

If the tool doesn’t do quite what you need, wrap it, enhance it or dump it in favor of another.

Many tools that are typically used for deployment are quite flexible and have built-in mechanisms for extension. I found that writing a plugin, or simply wrapping the tool in another process that controls its startup, makes it simpler to use and produces a repeatable pattern.

If you are having more trouble with the tool itself, or simply can’t get the result you require, don’t be afraid to dump it. There are always other tools. In the worst case you can write something simple and tailored to your specific needs.

Application design

Build your application in a way that will support state persistence and restoration.

It is not only the process and tools you are creating or going to use that should support the automation; it is the application itself.

In the complex environment example I gave, I mentioned the distributed cache. If the cache is full of data, releasing a new version of the application could cause that data to be lost, so you need to think of a way to get it back to the previous state. The release process should accommodate restoring the state from some kind of storage (e.g. disk, database, replicated cluster, etc.).

Process

Keep your configuration with your source code.

Having environment configurations collocated with the source code in the version control system has a number of benefits:

  • It can be unit tested.
  • It provides a history of configuration changes. It is always possible to revert to a previous configuration that was working.
  • Configuration lives close to the developers, who usually know the most about the required configuration changes.
  • Configuration ships with the deployable.
  • Every change carries the name of the person who made it, so whenever configuration changes are unclear you can always ask that person.

Automate every step.

Don’t leave a chance for errors to creep into your process by allowing manual steps. It might take some extra effort to automate those simple little steps, but the reward is saved time and reliably released software.

Make sure everything the application needs to live is contained within the release.

Don’t leave anything out: extra libraries, additionally installed software, new versions, etc. Ship it all as part of the release. It doesn’t have to be collocated with the source code (although that is sometimes quite beneficial). Make sure it is accessible from every point you are deploying your software to.

Keep logs, a summary and a history of releases.

Having a release log helps to track the progress of a release, identify issues and even test the release itself. It is also handy when a release takes a long time and nursing it needs to be handed over to another person.

A summary page helps to quickly identify the version of the software released into an environment.

Famous last words

I know that sometimes automation might seem impossible, but I also believe that impossible doesn’t exist; there is only easy and less easy. Automate your release and make your and your comrades’ lives more enjoyable.

Many happy automated releases.

Greg

Summary of 2011

Past

Time for a little retrospective on what I learned during 2011.

Technologies I learned and improved skills in:

  • Gradle – build and release tool
  • Groovy – dynamic programming language that runs on the JVM
  • Coherence – distributed cache
  • Grails – web framework that runs on the JVM; Groovy and Spring paired nicely together
  • Gaelyk – another web framework
  • GAE (Google App Engine) – cloud platform from Google
  • Objective-C and iOS development
  • Scala and Clojure – programming languages that run on the JVM

I did some research into creativity and motivation that resulted in a few posts.

Future

This year I will focus more on functional languages and a functional style of programming.

My main target this year is the work on GigReflex, a service that I am building with my friend Mike. The service will be deployed on one of the available PaaS clouds running the Grails web application framework.

Bye, bye 2011.

2012, here I come.

Greg

When done is DONE (or not)

Not too long ago I had a conversation with one of the senior members of the management team on the project I’m working on. I had some ideas on how we could do things faster by improving our testing (not developer testing, but end-to-end, QA and regression testing). After a few minutes of conversation I was asked a very basic question: “What is the definition of DONE on our project?” I was just about to open my mouth and jump out with an answer like “Well, it usually takes us 3 days to develop a piece of functionality”, but I stopped. I actually wasn’t sure. We spent a few more minutes discussing some other issues, but when I left I felt the question still at the back of my mind, desperately trying to find an answer.

After a number of attempts I decided to rephrase the question. What is the goal of the project? I couldn’t find a simple answer. So I asked an even more general question: what is the goal of the company? I recalled the books by Eliyahu M. Goldratt, “The Goal” and “The Race”. “The goal of the company is to make money in the present as well as in the future.” The goal is to win the race for customers.

My team creates and maintains software that produces data for other systems within the company. Those systems are used to deal with clients: to provide them with reports, to sell them information, to protect client interests. This means that our project indirectly contributes towards the company’s goal.

All the other teams and projects that receive data from us are our customers. We should make every effort to deliver the necessary features to those consumers in a timely manner, as the features will be used to generate revenue.

That’s it. This is my understanding of DONE.

In other words:

  • It’s NOT DONE when the BA finalizes the requirements and forms them into stories that get accepted by all stakeholders
  • It’s NOT DONE when the developer finishes coding the solution and fixes all the bugs
  • It’s NOT DONE when the QAs and BAs finish testing and approve the deliverables
  • It’s NOT DONE when downstream systems receive the data and confirm its quality
  • It’s DONE when the client receives the service that he or she requested thanks to the piece of software my team delivered. That’s when it’s DONE!

I think there is an important aspect to touch on: the effort of the entire team before DONE can be announced. No one should silo themselves into a specific role and take responsibility for that area only. Developers should help with delivering tools for release and testing automation, BAs should help with testing, QAs should help to form requirements, etc.

So, next time you think you’re DONE, think again. Perhaps you are not really there yet, but you could help someone else to make it happen.

Wish you all many happy DONEs in the future. Greg