There is a common belief in the digital industry that test automation is the solution for improving defect detection as part of regression testing. In fact, if well implemented, automated testing can form the basis of a sound Quality Assurance process. This belief drives companies to invest money in resources and tools that aid test automation. These tools tend to be "user friendly", and their learning curve is much gentler than learning a programming language from scratch or learning how to program within a test framework.
However, even with the improvements promised by the companies that build these tools, most QA teams soon realise that many of those claims fade into a sea of issues in day-to-day usage: from problems with the software itself, to the time spent customising reports, to waiting for support teams to answer niggling questions on how best to implement or use the software. Test cases can fail due to brittle code behind the scripts, and an unproductive amount of time can be spent on script maintenance. Such issues often result in frustration and negative opinions about the chosen software, and QA teams can end up ditching the automation tool altogether immediately after their first project. Others keep moving from one tool to the next, seeking the "Holy Grail" of test automation that best suits their needs.
Whilst these test automation tools offer some great advantages, we also have to consider their flaws and learn how to overcome them. One way to do this is for QA teams to write the test code themselves, based on a framework such as Selenium WebDriver, Cucumber, or Microsoft Coded UI. This places the QA team in control of their destiny from the outset and allows much more flexibility, leaving behind the standard "record and playback" functionality that most tools are limited to. Your QA team's imagination is the only limit here, and of course, their expertise in programming!
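To make that concrete, here is a minimal sketch of the Page Object pattern, one common way teams structure hand-written UI tests so they are less brittle than recorded scripts. All the names here (StubDriver, LoginPage, the element ids) are illustrative, and the driver is a stand-in stub rather than a real Selenium WebDriver, just to show the shape of the idea:

```python
class StubDriver:
    """Stand-in for a real browser driver; records interactions per element id."""

    def __init__(self):
        self.fields = {}
        self.submitted = False

    def type_into(self, element_id, text):
        # A real driver would locate the element and send keystrokes.
        self.fields[element_id] = text

    def click(self, element_id):
        if element_id == "login-button":
            self.submitted = True


class LoginPage:
    """Page object: every selector lives here, so a UI change means one fix,
    not a hunt through dozens of recorded scripts."""

    USERNAME = "username"
    PASSWORD = "password"
    SUBMIT = "login-button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


# A test then reads as intent, not as a list of clicks:
driver = StubDriver()
LoginPage(driver).log_in("alice", "s3cret")
print(driver.submitted)  # True: the page object drove the whole login flow
```

The payoff is maintenance: when the login screen changes, only the page object is updated, and every test that uses it keeps working unchanged.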
At ClearPeople, our testing team went through some of the learning and experiences outlined above during the last year. We have standardised on and implemented Telerik Test Studio, and have been on that steep learning curve with some of our larger projects over recent months. Moving into 2015, we hope to leverage even more from this tool and continue to integrate our own code to build even more robust tests, ensuring quality deliverables to our clients.