Thursday, 4 June 2015

GUI Test Automation Principles - Cause & Effect Check

When your automation code does something that should cause an effect in the AUT, you should check that the expected effect actually took place before the test continues.
One aspect of this is GUI object synchronisation. When you take an action in your AUT that, say, takes you to a new page or screen, you should check that the AUT really is displaying that new page or screen before the test continues.

It's good practice to use a dynamic check for GUI elements, checking for the expected status every x seconds up to a maximum of y seconds, e.g. checking every second with a 30-second maximum timeout. This makes the test resilient to variable response times in the AUT whilst avoiding unnecessary waiting, and it is preferable to a hard-coded fixed wait.
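As a minimal sketch, a dynamic wait in QTP's VBScript might look something like the function below. The object reference, the one-second poll interval and the 30-second budget are all illustrative; QTP's built-in Exist and WaitProperty methods can often do the same job directly.

' Minimal sketch of a dynamic (polling) wait - intervals and objects are illustrative.
' Returns True as soon as the object exists, False if the timeout is exhausted.
Public Function WaitForObject(ByRef testObject, ByVal maxSeconds)
    Dim elapsed
    elapsed = 0
    WaitForObject = False
    Do While elapsed < maxSeconds
        If testObject.Exist(0) Then   ' Exist(0) checks immediately, no implicit wait
            WaitForObject = True
            Exit Do
        End If
        Wait 1                        ' poll once per second
        elapsed = elapsed + 1
    Loop
End Function

' Usage (object names are made up):
' If Not WaitForObject(Browser("CRM").Page("Home"), 30) Then
'     Reporter.ReportEvent micFail, "Sync", "Home page not displayed within 30 seconds"
' End If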

This approach supports the Fail Fast principle.




AUT = Application Under Test

Thursday, 7 May 2009

Java Client Embedded Browser QTP Headaches

The thick Java client that we test using QTP has an embedded browser hosting the Kana Knowledge Management System (KMS). You can configure QTP to work with the embedded browser. It works, but it's a bit awkward: you can only use the Object Repository and Object Spy if you disable the QTP Java add-in, and even then QTP only recognises a subset of the objects in the embedded browser.

The subset of objects that QTP was willing to recognise in the embedded browser was enough to verify the test points we were interested in, but a recent update of the KMS has changed things: QTP now recognises fewer of the objects, and this is causing us a bit of a headache.

My suspicion is that there's yet another object embedded within the embedded browser, but I've not been able to get any more information on that yet.

Tuesday, 5 May 2009

Updated Hudson version

I updated our Hudson version last night. This has fixed a minor issue that was making life tedious for us. We have around 15 views (tabs) in Hudson. This is too many to fit on the screen, and Hudson has no way of wrapping or stacking them, so we get a horizontal scroll bar.

We use Hudson to run GUI integration tests (using QTP) and it's very typical for us to manually kick off a single test. The problem with the many views and the horizontal scroll bar is that the Hudson 'run build' button is off the right-hand side of the screen and you have to scroll across to it. This becomes tedious when you're doing it a lot.

This new version of Hudson, 1.3.0.3, allows us to add, remove and relocate the Hudson columns, so now the 'run build' button is in the first column rather than the last. A simple problem but a sweet fix.

Monday, 13 April 2009

Results Signatures and Auto Outcome Adjust

One of the areas I work in is automated regression testing of our CRM solution. We've been improving our test execution solution since we brought the responsibility for execution and reporting back in-house in Jan 09.

In our world we have test environments that may be unreliable or defective, bad test data causing test failures and, God forbid, maybe even faults in the automated test solution... ;) So there's a fair bit of investigation required before we decide that a test failure really is a fault with the product under test.

This blog entry discusses our Auto Adjust solution.

A Fail Signature is a unique collection of failed test point IDs.

We've been working with a Fail Signature Auto Adjust solution for a wee while and it's saving us from tedious, repetitive and wasteful root cause investigation.

Every test point within our automated tests (pass or fail) has a unique ID. So if we run a single test case, as well as a big log file telling us what happened during the test run, we'll also have a list of all the failed test points... the Fail Signature.

Starting from the premise that ALL automated test failures need to be investigated before a REAL fail can be reported to the customer of our testing, we use our Auto Adjust to handle predictable, repeatable fail patterns and save ourselves the time and effort of looking into every fail.

Example: if our Java client fails to load during the test, the test will fail. The fail record will contain a unique ID for this test point, and the Fail Signature is the collection of all the failed test points. If the Fail Signature contains ONLY the 'client failed to load' fail point, then the test result will be automatically adjusted to an 'EXEC FAILURE' with a comment that the client failed to load. We don't need to drill into the test logs to reach this conclusion.
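As a rough sketch of the idea (not our actual implementation, which keeps the signatures in a database populated by testers), the lookup amounts to matching the collection of failed test point IDs against the known signatures. The test point IDs, outcomes and comments below are made up, and it's written as standalone VBScript just to show the shape of it:

' Illustrative sketch only: Fail Signature lookup against a table of known signatures.
' Real signatures live in a database and are added by testers after investigation;
' the test point IDs and outcomes here are invented.
Dim knownSignatures, failedIds, signatureKey, adjusted
Set knownSignatures = CreateObject("Scripting.Dictionary")

' Known Fail Signature -> adjusted outcome and comment (pipe-delimited for brevity)
knownSignatures.Add "TP0042", "EXEC FAILURE|Client failed to load"

' Failed test point IDs collected from a test run (hypothetical values)
failedIds = Array("TP0042")
signatureKey = Join(failedIds, ",")              ' the Fail Signature key

If knownSignatures.Exists(signatureKey) Then
    adjusted = Split(knownSignatures(signatureKey), "|")
    WScript.Echo "Auto Adjust: " & adjusted(0) & " - " & adjusted(1)
Else
    WScript.Echo "Unknown Fail Signature - manual root cause investigation required"
End If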

When a tester investigates the root cause of a test fail and decides that the behaviour is likely to recur, he adds the Fail Signature to our Auto Adjust database so that the same root cause analysis work is not needed next time. The tester decides on the outcome and comment that the Auto Adjust solution should add to the test report.

This automatic analysis of Fail Signatures has been a real help in reducing the effort required for initial root cause investigation of test fails. It has enabled us to significantly reduce the time taken to deliver the results of a test run to our customers.

Thursday, 20 March 2008

Subversion Commit Monitor


Commit Monitor is a tool from Stefan's Tools (a small collection of tools and utilities). It sits in your systray watching your Subversion repositories.

When files are committed to repositories or projects you're interested in, it alerts you and allows you to review the commit comments and even view a unified diff.


We used to send emails to the team whenever we committed changes that had significance for others. It's much easier to use Commit Monitor, as people can see the changes being made without relying on someone remembering (or bothering) to send an email.

This tool has made it easy for us to keep up to date with code commits, with minimal effort for all team members.


Thanks to Stefan for this superb tool.

Cool Tools

Well...... I've not managed to do much with this blog. I've decided I might blog a bit about the tools I use for test automation. I'm always discovering little tools to solve little problems so I'll write a little about them I think.

Glenn

Thursday, 8 March 2007

Automated Code Documentation for QTP Actions

Goal:
My goal was to be able to automatically produce code documentation for my QTP Test Actions to make it easier for others to re-use the actions and avoid having to manually create and maintain the documentation.
Background:
I had already selected NaturalDocs (http://www.naturaldocs.org/) for documenting my VBScript support functions. I wanted to be able to produce similar documentation for my re-usable QTP Test Actions.
Developing the solution:
Note: I often use the term Function to refer to a QTP Action.
On inspecting the QTP test script directory I found that each Action has a directory beneath the QTP Test directory and each Action directory contains a script file: Script.mts.
NaturalDocs recursively searches the specified directory looking for code files, and it expects a file header and function headers in each code file. I added a file header and a function header to the first Action of my QTP Test, and a function header to each of the remaining Actions. When I ran NaturalDocs it found each of the QTP Action script files and produced documentation from the file and function headers. However, because each Action has a separate script file, only the first Action has the file header, so the other Actions do not get associated with the QTP Test name.

To fix this problem I wrote a small perl script that concatenates all the individual Action scripts into a single file named after the QTP Test. These concatenated script files are placed in a dedicated directory, and that directory is the one processed by NaturalDocs. This approach ensures all the QTP Actions are documented as part of their QTP Test. I added a call to the perl script to the batch file that runs NaturalDocs, so the concatenated files are regenerated each time I produce the documentation.
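For illustration, the concatenation step amounts to something like the sketch below. I've expressed it in VBScript rather than perl, and the directory paths, test name and output extension are made up:

' Sketch of the concatenation step (the real script is perl; paths and names are invented).
Option Explicit
Dim fso, testDir, actionDir, outFile, scriptFile, scriptPath
Set fso = CreateObject("Scripting.FileSystemObject")

Set testDir = fso.GetFolder("C:\QTP\Tests\MyTest")
Set outFile = fso.CreateTextFile("C:\QTP\DocSource\MyTest.vbs", True)

' Each Action lives in its own sub-directory containing a Script.mts file
For Each actionDir In testDir.SubFolders
    scriptPath = fso.BuildPath(actionDir.Path, "Script.mts")
    If fso.FileExists(scriptPath) Then
        Set scriptFile = fso.OpenTextFile(scriptPath, 1)   ' 1 = ForReading
        If Not scriptFile.AtEndOfStream Then outFile.WriteLine scriptFile.ReadAll
        scriptFile.Close
    End If
Next

outFile.Close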
I extended the NaturalDocs configuration a little to recognise the keyword Action and document it in the same way as a Function. I had already added the keyword Parameter for documenting function parameters in my VBScript support libraries.
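To give a feel for it, an Action header might look something like the comment block below. The Action name, description and parameters are invented, and the exact layout depends on how the NaturalDocs keywords are configured:

' Action: Login
' Logs a named user into the client and verifies that the home screen is displayed.
'
' Parameters:
'    userName - the account to log in with
'    password - the password for that account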


Problems
  • Keyword View: the large number of comment lines in the Action header are all shown in the Keyword View. This doesn't look very good and just wastes screen space. Perhaps there's a way to configure the Keyword View not to show the comment lines?
Example of documented QTP Test Actions: Screen Shot


Setup QTP automated code documentation