I find the big problem with automation is that the word "automation" is so often used as an all-encompassing silver bullet. Cold feet? Try automation! Lost your sense of smell? Try automation! The very word seems to imply some sort of panacea for all your testing aches and pains.
I've found that automation works best when it solves a specific problem, with a narrow scope and focus. A good process to automate is one that builds your program, copies it to a server, runs some sanity tests against it, and then emails you to say it's done. A bad candidate is a system that tries to run all unit, functional, and system tests on your build, across multiple operating systems and programming languages, while remaining extensible enough to withstand future changes in requirements. In my opinion, that type of program is too large in scope to be reliable. I'm sure there are people (or companies, rather) who have attempted and succeeded at such a project, but an automation project of that size needs its own separate QA team.
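A narrow-scope pipeline like the good example above fits in a few dozen lines. The commands, host, and addresses below are placeholder assumptions for illustration, not anything prescribed here:

```python
import smtplib
import subprocess
from email.message import EmailMessage

# Placeholder commands and addresses -- adjust for your environment.
BUILD_CMD = ["make", "build"]
COPY_CMD = ["scp", "build/app", "deploy@staging:/opt/app"]
SANITY_CMD = ["ssh", "deploy@staging", "/opt/app --self-test"]
NOTIFY_TO = "you@example.com"

def run(cmd):
    """Run one step; raise if it fails so the pipeline stops early."""
    subprocess.run(cmd, check=True)

def notify(subject):
    """Send a bare notification email via a local mail relay."""
    msg = EmailMessage()
    msg["To"] = NOTIFY_TO
    msg["From"] = "build-bot@example.com"
    msg["Subject"] = subject
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def main():
    # Each step is independent and dead simple; the pipeline is just a list.
    for step in (BUILD_CMD, COPY_CMD, SANITY_CMD):
        run(step)
    notify("Build copied and sanity-tested OK")
```

Each step here is a separate, simple command that can be run and debugged on its own, which is exactly the kind of narrow scope that keeps an automation project reliable.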
What I'm trying to get at is that when testing complex systems, don't use even more complex systems. Complexity by its nature introduces errors, so keeping your test systems simple reduces the number of errors within them. Obviously there is some loss of functionality, which is why I'm a fan of stringing together multiple simple systems to do more complex automation when necessary. Modularity in automation is the key to success.
Here are a few good blog posts on the subject that cover different ways to view the same problem:
Wednesday, December 20, 2006
1 comment:
There is also an important concept with respect to automation that most test engineers overlook: some things are done more effectively manually, and others are done more effectively through automation.
For example, one can write a small test program that generates inputs for the UUT (unit under test) at a volume that would be infeasible to produce manually. By using automation, we have increased our reach, or coverage.
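As a hypothetical sketch of that idea (the `parse_quantity` function below is a stand-in for a real unit under test), a short script can throw thousands of generated inputs at a routine in well under a second:

```python
import random
import string

def parse_quantity(text):
    """Stand-in unit under test: parse a non-negative integer quantity."""
    value = int(text)
    if value < 0:
        raise ValueError("negative quantity")
    return value

def random_inputs(n, seed=0):
    """Yield n strings: valid numbers mixed with junk no one would type by hand."""
    rng = random.Random(seed)
    for _ in range(n):
        if rng.random() < 0.5:
            yield str(rng.randint(0, 10**9))
        else:
            yield "".join(rng.choice(string.printable)
                          for _ in range(rng.randint(0, 12)))

def fuzz(n=10000):
    """Feed generated inputs to the UUT; collect anything that blows up unexpectedly."""
    crashes = []
    for s in random_inputs(n):
        try:
            parse_quantity(s)
        except (ValueError, OverflowError):
            pass  # expected rejection of bad input
        except Exception as exc:  # anything else is a bug worth a look
            crashes.append((s, exc))
    return crashes
```

Ten thousand inputs, half of them garbage, is a few lines of code here but a week of mind-numbing work by hand.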
On the other hand, there are things that humans do very well (think CAPTCHA) that computers cannot handle easily. Humans are good at detecting things that look off. Automation often fails here, as it only does what we told it to do, which usually does not cover all sorts of interesting side effects.
So although there is a lot of talk about the intersection (the tests we run manually that we could reasonably automate), we often miss the big picture and fail to focus our manual and automated test efforts in the right places.