At a previous company we used Robot Framework, extended with Python scripts, to automate our testing. Back then I created a ‘recommended practices’ guideline for my colleagues, which I’d like to share here for future reference. The guidelines were tailored to address specific concerns with our automation suite, so the list below is not intended as a one-stop list of best practices.
The goals for the automation suite were that it be:
- easy to maintain
- fast to run against every commit
- reliable – both to prove the system works & to identify bugs
Challenges specific to the website under test
- business requirements change
- refactoring causes code to change
- some features are shared across multiple systems in our product suite
- dynamic web content
The following are recommended practices for test case design & style.
Automation test distribution
Develop an automated test strategy that distributes tests according to Mike Cohn’s test automation pyramid: many unit tests at the base, fewer service-level tests in the middle, and only a few UI-driven tests at the top. Further details on the approach are available in ‘The Forgotten Layer of the Test Automation Pyramid’ by Mike Cohn.
The strategy is also advocated and described in further detail by ThoughtWorks’ Alistair Scott on his WatirMelon blog.
Robot Framework is a keyword-driven test tool. By making use of its keyword & template features we can reduce code duplication & make tests easier to read at all layers of abstraction.
Robot Framework Libraries
An advantage of Robot Framework is that many functions are already available as keywords via the BuiltIn library or external libraries, and new libraries can also be created. Make use of existing libraries where possible.
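As a sketch of how a new library might look (the class and keyword names here are illustrative, not from the original guideline): Robot Framework exposes the public methods of a plain Python class as keywords.

```python
# Minimal custom keyword library sketch. Public methods become
# Robot Framework keywords (e.g. "Calculate Order Total").

class OrderKeywords:
    """Example library; names are hypothetical."""

    # One library instance is shared across the suite.
    ROBOT_LIBRARY_SCOPE = "TEST SUITE"

    def calculate_order_total(self, prices, discount_percent=0):
        """Sum a list of prices and apply an optional percentage discount."""
        total = sum(float(p) for p in prices)
        return round(total * (1 - float(discount_percent) / 100), 2)
```

A test could then import the library and call `Calculate Order Total    ${prices}    10` without any arithmetic appearing in the test data itself.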
Layers of abstraction
Abstract code into multiple layers using the page object model (implemented via Robot Framework resource files) and/or UI Maps. If the business logic changes, often changes are only required in the highest layer of abstraction; if the markup changes, often changes are only required in the page object / UI Map layer. A test project can be abstracted into layers such as: test cases → reusable keywords → page objects / UI Map → libraries.
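A minimal page-object sketch, with illustrative class names and locators (the `input_text` / `click_element` calls mirror SeleniumLibrary keyword names). Tests call the high-level method, so when the page’s markup changes only this layer needs updating:

```python
# Page object sketch: locators and page interaction live in one place.
class LoginPage:
    USERNAME_FIELD = "id=username"    # locator strings change here,
    PASSWORD_FIELD = "id=password"    # not in every test that logs in
    SUBMIT_BUTTON = "id=login-submit"

    def __init__(self, browser):
        # 'browser' would be e.g. a SeleniumLibrary instance
        self.browser = browser

    def log_in(self, username, password):
        self.browser.input_text(self.USERNAME_FIELD, username)
        self.browser.input_text(self.PASSWORD_FIELD, password)
        self.browser.click_element(self.SUBMIT_BUTTON)
```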
Move complex logic to resources / libraries
To keep test scripts readable, move complex logic into resource files or libraries that are invoked via keywords – e.g. parsing data from an email, or looping through data in a table.
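For the email-parsing example, the complex logic could live in a Python helper like the sketch below (the function name and the ‘Confirmation code’ field are assumptions for illustration), so the test just calls a single keyword:

```python
from email import message_from_string

def get_confirmation_code_from_email(raw_message):
    """Extract a 'Confirmation code: XYZ' value from a raw email.

    Keeps the parsing out of the test data; the test calls
    'Get Confirmation Code From Email' (hypothetical keyword name).
    """
    msg = message_from_string(raw_message)
    for line in msg.get_payload().splitlines():
        if line.lower().startswith("confirmation code:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no confirmation code found in email body")
```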
Polling vs Sleep
Wherever possible, use Selenium’s ‘wait for’ commands rather than sleeps, so that each test progresses to the next step in the fastest possible time.
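The principle behind those explicit waits can be sketched in plain Python (all names below are illustrative): poll a condition and return the moment it holds, instead of sleeping for a fixed worst-case duration.

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.2):
    """Return as soon as condition() is truthy; raise on timeout.

    Unlike a fixed sleep, this proceeds the moment the condition
    holds, and fails with a clear error if it never does.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll_interval)
```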
Test independence
Each test should be able to be executed independently and in parallel; avoid dependencies between tests so that one failure does not break the others. Some techniques:
- Start each test by hitting the URL of the page under test directly
- Write each test with consideration for running it independently
- Use keywords for reusable test steps
- Make effective use of preconditions such as Background and Given commands
- Run on commands
Test set up / tear down
Ensure that set up / tear down steps allow Robot Framework to progress to the next test even when a test fails.
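Robot Framework runs a test’s teardown even when the test body fails; the same guarantee can be sketched in plain Python with try/finally (function names are illustrative):

```python
def run_with_teardown(test_body, teardown):
    """Run a test body, always executing teardown afterwards.

    The teardown runs even if the body raises, so the next test
    starts from a clean state; the original failure still propagates.
    """
    try:
        test_body()
    finally:
        teardown()
```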
Avoid testing design implementation details
E2E and acceptance test scripts are best focussed on workflow and functionality rather than implementation details of the design such as look & feel. E.g. write a step to ‘select cash payment option’ rather than a step to ‘select the radio button for cash payment’. This focusses the testing on meeting the business requirements rather than on the detailed UI implementation, which may change.
Cross browser testing
E2E and Acceptance test scripts are best executed against a single browser. The effort required to configure scripts to run against multiple browsers is often better invested elsewhere.
For UI-design-focussed testing, investigate the ability to create a Robot Framework keyword to compare images (expected vs actual); screenshots captured using Robot Framework’s Screenshot library are a starting point. Otherwise, defer to tools such as Galen that are designed for this very purpose: finding platform/browser-specific bugs that manifest in the way a site is rendered.
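A deliberately naive sketch of such an image-comparison keyword (the function name is an assumption): it treats two screenshots as matching only if they are byte-identical, which is a crude starting point before reaching for a dedicated tool like Galen.

```python
import filecmp

def images_should_match(expected_path, actual_path):
    """Fail if the actual screenshot differs from the expected baseline.

    Byte-for-byte comparison only; real-world use would need a
    tolerance for rendering differences across platforms.
    """
    if not filecmp.cmp(expected_path, actual_path, shallow=False):
        raise AssertionError(
            f"{actual_path} does not match expected image {expected_path}")
```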