I opened with the following questions:
- What should be automated?
- Can manual testing be eliminated by implementing automated testing?
- Can QA Automation improve productivity?
The following diagram depicts the areas and focus of testing that can and should be automated.
Can manual testing be eliminated?
We are an Agile/Scrum development shop, and our experience has proven that automating within a sprint is not advantageous. From an efficiency and effectiveness standpoint, features need to be complete before they are automated. Manual testing can and should focus on validating the new features and developing good test plans that can be automated later.
Can QA Automation improve productivity?
There are a few objectives that QA Automation can achieve.
The first is reducing the “QA bottleneck” that is invariably present at the end of software projects, which is also where the squeeze to make deadlines frequently occurs.
My other observation is that automated tests take less effort to run than manual tests, and as a result they are likely to be run often. This alone can increase your confidence in the software.
When I look at QA automation, my philosophy matches my view of manual QA: you don’t want to be in the business of “testing your way to quality.” If a large number of defects are surfacing in the system test phase, look for other areas of your software development process to improve. Perhaps you need to improve your requirements definition, or strengthen design and code reviews, preventing defects from finding their way into the code in the first place.
QA and Development Role in QA Automation
We have experimented with using traditional QA and Software Developers paired together. This has proven successful, and I think that pairing leverages strengths that both professions bring to the table. The first area that I explored was the subject of unit tests. There are those who state that the developer is the best qualified resource to write unit tests, because the developer knows the code. I don't share this opinion!
Yes, the developer knows the code, and therein lies the danger. Most developers I know aren’t good at testing. One reason for this is that if you know the code, you are likely to think in terms of the “happy path” (how the code should function) and not in terms of the fringe cases that a good QA tester will consider. Pairing can produce better tests.
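To make the happy-path point concrete, here is a minimal sketch. The `apply_discount` function is made up purely for illustration; the contrast is between the single test a developer who knows the code tends to write and the boundary and error cases a QA tester tends to add.

```python
# A hypothetical discount calculator, invented only to illustrate the point.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 - percent / 100), 2)

# The "happy path" test the developer is likely to write first:
assert apply_discount(100.0, 20.0) == 80.0

# The fringe cases a good QA tester will also consider:
assert apply_discount(100.0, 0.0) == 100.0   # boundary: no discount
assert apply_discount(100.0, 100.0) == 0.0   # boundary: full discount
assert apply_discount(0.0, 50.0) == 0.0      # boundary: free item
try:
    apply_discount(100.0, 150.0)             # invalid input
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for percent > 100")
```

The first assertion alone would pass even if the range checks were missing entirely; the extra cases are where pairing pays off.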
I asked about experiences that those in attendance had with Test-Driven Development (TDD), where you start with unit tests that fail and then write code to make the test pass, but no one in the audience had any real-world experience to share.
My take is that TDD is useful, particularly since the book The Art of Agile Development notes that it forces developers to think in terms of interfaces first and implementation details second, yielding a more comprehensible design. I would advocate that QA help strengthen the tests here as well, for the same reasons noted above.
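For readers without real-world TDD experience, the cycle can be sketched in a few lines. The `slugify` function here is a made-up example, not anything from our codebase; the point is the order of work: the failing test comes first and pins down the interface, the implementation comes second.

```python
import unittest

# Step 1 ("red"): write the test first, against a function that does not
# exist yet. Running it at this stage fails, which is the point -- the
# test pins down the desired interface (one string in, one string out)
# and behavior before any implementation detail is decided.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  QA Automation  "), "qa-automation")

# Step 2 ("green"): write the simplest implementation that makes the
# tests pass. Refactoring comes afterward, with the tests as a safety net.
def slugify(text: str) -> str:
    return "-".join(text.strip().lower().split())
```

Saved as a module, this runs with `python -m unittest`; a QA pair would likely push for more cases here (empty strings, punctuation, non-ASCII input), which is exactly the strengthening noted above.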
Conversely, we’ve found that Development can help QA. They can help QA with things like code organization and source control. Developers naturally have a lot of experience with the design and development of common functions, and can bring practices like design and code reviews to the QA Automation table.
A group discussion yielded some more perspective on this. The main takeaway was the realization that it is possible to get into QA Automation without considering code design and management, and, just like any software, you could end up with spaghetti code that is difficult or impossible to maintain, neutralizing any gains from your investment in QA Automation.
The bottom line for me: QA Automation is an investment, and you need to consider the cost of the tools, training and mentoring, and script (or code) organization and management. Target those areas where there will be a need and a return for the automation dollar. Ultimately, you need to answer the following questions related to QA in order to determine if your software is truly “done”:
- How much of the product is being tested through regularly-executed, automated testing?
- Does manual testing adequately cover the remaining bases, or are there gaps?
- Is your failure rate known and defensible?