First, a big thanks to all of you for visiting our booth and for the great discussions we’ve had.
The line-up of this conference was impressive, and thanks to Angie Jones’s “Test automation at Twitter” I attended my best session in a while. We also had a great time with the product updates and demos. I’d like to highlight the discussions we’ve had with Perfecto mobile. This solution is a great fit with Hiptest if you want to execute your BDD scenarios in the cloud for your mobile or web app.
Test automation at Twitter
At Hiptest we (as a DevOps team) rely heavily on test automation and are eager to find new ways to improve our process. Here are 3 great practices Angie shared with us:
1. Which tests should be automated?
When working on a new feature, we may have a couple of acceptance criteria defined. To validate the feature and make sure there will be no regression in the future, we should automate them. Should we? That’s a tough question, because these feature tests are usually complex to automate and quite long to execute, so they tend to slow down the CI/CD process and the feedback loop.
A good way to identify the acceptance criteria to automate is to have the discussion with the business. Which use cases are critical? Those are the ones we should automate; the other tests will be executed manually. So, getting back to the business needs and priorities is definitely a great tip.
2. Include test automation as part of the definition of done
Before starting the implementation of a feature, Angie recommends a one-hour discussion between developers and test automation engineers. During this discussion, they agree on how the feature and its GUI should look. This sounds familiar to teams practicing Behavior Driven Development. But she also recommends defining object IDs during this meeting. This way, test automation engineers and developers can work in parallel: they have a shared understanding of the GUI objects to create and to use for automation. Awesome advice!
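To make this concrete, here is a minimal sketch in Python of what “agreeing on object IDs upfront” can look like. The module name, IDs, and functions are all hypothetical illustrations, not something from Angie’s talk: the idea is simply that both sides import the same constants, so the developer’s templates and the automation engineer’s locators can be written in parallel and stay in sync.

```python
# Hypothetical shared module of agreed GUI object ids.
# Both the developer and the automation engineer depend on it,
# so neither has to wait for the other's code.

CART_IDS = {
    "delete_button": "cart-item-delete",
    "item_count": "cart-item-count",
}


def render_cart_item_html(article_name: str) -> str:
    """Developer side: the template uses the agreed ids."""
    return (
        f"<li>{article_name}"
        f'<button id="{CART_IDS["delete_button"]}">Remove</button></li>'
    )


def delete_button_locator() -> tuple:
    """Test side: a (by, value) locator, Selenium-style,
    written before the feature is even implemented."""
    return ("id", CART_IDS["delete_button"])


# If the id ever changes, it changes in one place for both sides.
assert CART_IDS["delete_button"] in render_cart_item_html("ACME anvil")
```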
3. Use multiple levels of the pyramid
The last piece of advice I found really striking is using different automation layers within the same feature test. To be close to the real user experience, these tests should be automated at the GUI level. But not necessarily all the steps. Let’s take an example:
Feature: as a user of the ACME website, I want to be able to remove articles from my cart

Given the cart has 1 article
When I delete the article
Then the cart should have 0 articles
The “When” step should be automated at the GUI level, as we want to make sure the delete button works. This is the main action. But the setup doesn’t need to be automated at the GUI level: there is no need to reuse “I search for an article” and “I add an article to my cart” at the GUI level, since these actions are already covered by other feature tests. So let’s use the service level directly to set up this context. That speeds up the execution and avoids having multiple tests fail if the action “I add an article to my cart” breaks.
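This mixing of layers can be sketched in a few lines of Python. The class and method names here are invented for illustration; in a real suite `CartPage.click_delete` would drive a browser (e.g. through Selenium), while `CartService` would call the application’s API directly.

```python
# Minimal sketch of mixing pyramid levels in one scenario:
# setup goes through the service layer, only the action under
# test goes through the GUI layer.

class CartService:
    """Service layer: fast, and already covered by its own tests."""
    def __init__(self):
        self.articles = []

    def add_article(self, name):
        self.articles.append(name)


class CartPage:
    """GUI layer stand-in: in a real suite this would drive the browser."""
    def __init__(self, service):
        self.service = service

    def click_delete(self, name):
        # The only step exercised through the GUI: the delete button.
        self.service.articles.remove(name)


# Given the cart has 1 article  -> set up through the service, not the GUI
service = CartService()
service.add_article("ACME anvil")

# When I delete the article     -> exercised through the GUI
page = CartPage(service)
page.click_delete("ACME anvil")

# Then the cart should have 0 articles
assert len(service.articles) == 0
```

If “I add an article to my cart” breaks, only the tests that exercise it through the GUI fail; scenarios that use the service layer for setup keep giving a useful signal about their own step.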
What about you? Do you have any other tips or advice to share?