Thursday, 7 May 2009

Java Client Embedded Browser QTP Headaches

The thick Java client that we test using QTP has an embedded browser hosting the Kana Knowledge Management System (KMS). You can configure QTP to work with the embedded browser, but it's a bit awkward: you can only use the Object Repository and Spy if you disable the QTP Java add-in, and even then QTP only recognises a subset of the objects in the embedded browser.

The subset of objects that QTP was willing to recognise in the embedded browser was enough to verify the test points we were interested in, but a recent update of the KMS has changed things: QTP now recognises fewer of the objects, and this is causing us a bit of a headache.

My suspicion is that there's yet another object embedded in the embedded browser, but I've not been able to get any more info on that yet.

Tuesday, 5 May 2009

Updated Hudson version

I updated our Hudson version last night. This has fixed a minor issue that was making life tedious for us. We have around 15 views (tabs) in Hudson. That's too many to fit on the screen, and Hudson has no way of wrapping or stacking them, so we get a horizontal scroll bar.

We use Hudson to run GUI integration tests (using QTP), and it's very typical for us to manually kick off a single test. The problem with the many views and the horizontal scroll bar is that the Hudson 'run build' button is off the right-hand side of the screen and you have to scroll across to it. This becomes tedious when you're doing it a lot.

This new version of Hudson allows us to add, remove and relocate the Hudson columns, so now the run build button is in the first column rather than the last. A simple problem but a sweet fix.

Monday, 13 April 2009

Results Signatures and Auto Outcome Adjust

One of the areas I work in is automated regression testing of our CRM solution. We've been improving our test execution solution since we brought the responsibility for execution and reporting back in house in Jan 09.

In our world we have test environments that may be unreliable or defective, bad test data causing test failures and, God forbid, maybe even faults in the automated test solution... ;) So there's a fair bit of investigation required before we decide that a test failure really is a fault with the product under test.

This blog entry discusses our Automatic Adjust Solution.

A Fail Signature is a unique collection of failed test point IDs.

We've been working with a Fail Signature Auto Adjust solution for a wee while and it's saving us from tedious, repetitive and wasteful root cause investigation.

Every test point within our automated tests (pass or fail) has a unique ID. So if we run a single test case, as well as a big log file telling us what happened during the run, we'll also have a list of all the failed test points... the Fail Signature.

Starting from the premise that ALL automated test failures need to be investigated before a REAL fail can be reported to the customer of our testing, we use our Auto Adjust to handle predictable, repeatable fail patterns, saving us the time and effort of looking into every fail.

Example: If our Java client fails to load during the test, the test will fail. The fail record will contain a unique ID for this test point. The Fail Signature is the collection of all the fail points. If the Fail Signature contains ONLY the 'client failed to load' fail point then the test results will be automatically adjusted to an 'EXEC FAILURE' with a comment that the client failed to load. We don't need to drill into the test logs to derive this conclusion.
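At its heart the adjustment is just a lookup keyed on the set of failed test point IDs. Here's a minimal Python sketch of the idea; the IDs, table and function names are illustrative, not our real implementation:

```python
# Minimal sketch of Fail Signature auto-adjust.
# The "database" maps a Fail Signature (a frozenset of failed test
# point IDs) to the outcome and comment a tester has chosen for it.
AUTO_ADJUST_DB = {
    frozenset({"TP-CLIENT-LOAD"}): ("EXEC FAILURE", "Client failed to load"),
}

def auto_adjust(failed_test_points):
    """Return (outcome, comment) if the Fail Signature is known,
    otherwise None, meaning the run needs manual investigation."""
    signature = frozenset(failed_test_points)
    return AUTO_ADJUST_DB.get(signature)

# A run whose ONLY failure is the client-load test point gets adjusted
# automatically, while any other combination of failures is left for a
# tester to investigate.
print(auto_adjust(["TP-CLIENT-LOAD"]))
print(auto_adjust(["TP-CLIENT-LOAD", "TP-LOGIN"]))
```

Because the signature is the whole set of fail points, a run that contains the 'client failed to load' fail plus anything else won't match and still gets a human look.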

When a tester investigates the root cause of a test fail and decides that the behaviour is likely to recur, he adds the Fail Signature to our Auto Adjust database so that the same root cause analysis work is not needed next time. The tester decides on the outcome and comment that the Auto Adjust solution should add to the test report.

This automatic analysis of Fail Signature has been a real help in reducing the effort required for initial test fail root cause investigation. It has enabled us to significantly reduce the time taken to deliver the results of a test run to our customers.