Sunday, December 31, 2006

Test Automation Architectures

I recently received an email that held gold. Well, gold in the way of interesting reading material, but gold nonetheless. It was a link to a PDF on "Test Automation Architectures" by Bret Pettichord, linked off of the famous CSTER website.

The entire concept of testing and automation patterns is one that interests me, and I think it is ripe for research and further investigation. Patterns in software development are common now, thanks to the Gang of Four's famous book, Design Patterns. But the development of niche patterns, in areas like testing and automation, is very interesting indeed. I am aware that work in this field is already being distributed and collected, and I think that work is fantastic.

Sunday, December 24, 2006

The Absolute Need to Document

During my time in the field of QA and testing, I have learned to appreciate the need for documentation. As a student, it always seemed that commenting and documenting your code was a frivolous waste of time and energy. To be truthful, as a student, documentation was a waste of time and energy. Nobody will ever see your code again after it's turned in. Nobody will inherit your code later on and have to struggle to figure out what the hell you were doing with it. Also, most of my projects simply weren't large enough to warrant thorough documentation.

However, this lack of documenting creates bad habits that persist after one eventually graduates and enters the work force. The size and complexity of the programs one works on become much grander in scale, and yet the skill and desire for proper documentation are not present. This creates more problems than you can know, and has caused severe headaches and a lot of wasted time in my case.

If an alpha-geek super programmer makes you an amazing program that does exactly what you want, is completely localized, and has very few defects, I'm happy for you. If he or she didn't document it, you're completely screwed. When the program in question needs to be updated, added to, patched, or modified in any way (which is almost a certainty), you are dependent on the original programmer to be able to remember how they implemented the program the first time (assuming that the original programmer still works for you), or you must have another programmer waste considerable amounts of time understanding how the program works. If, however, that original programmer effectively documented the program in a way that made knowledge transfer possible, you can have another programmer pick up the slack with much less ramp-up time.

In this world where knowledge is king, there must be a way to transfer knowledge effectively, or it is essentially lost. Enforcing documentation is simply a smart way to protect your company's assets. You cannot assume that the developers who know how something works will work for you eternally, and you must provide a way to communicate the internal workings of the program to others. Otherwise, those who need to know how something works (QA, managers, other programmers) will waste the programmer's time over and over again while they explain how the program works.

In addition, I've found that documenting helps me to focus on what I'm working on, and flesh out implementation details that I'd put off in the back of my mind. By having it down on paper, it's like I have a map of exactly what I need to create and implement. Documenting is about looking ahead, thinking ahead, and spending time now to save time in the future. So the next time a programmer tells you he doesn't need to document his program, tell him to suck it up and do it anyway, because documenting isn't about him or her - it's about everyone else. Spending a couple of hours now to document a program will save you and others hours and hours later on.

Friday, December 22, 2006

takes on Testing

I couldn't help but laugh at this picture. All comedy has its roots in tragedy.

The Importance of Test Design

"More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded - indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest."
-Boris Beizer, Software Testing Techniques

This is so true it's almost painful. I've seen projects where thousands of tests are run, but they aren't the right tests, and hence defects are missed. Creating massive automation infrastructures and purchasing expensive commercial testing programs (or "solutions", as they like to be called) is a worthless exercise in time and money if you're not designing intelligent tests.

For example, let's say you have a function that takes in a string and returns the number of characters in that string. All the fancy test programs in the world aren't going to help you if your QA engineer doesn't test it with unicode characters, null strings, or strings with no null terminator.
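As a sketch of what that test design looks like in practice, here is a hypothetical `char_count` function with the edge cases a thoughtful tester would cover (the function name and cases are mine; in a language with explicit string lengths, like Python, the missing-null-terminator case doesn't apply, but unicode and empty strings certainly do):

```python
def char_count(s):
    """The function under test: return the number of characters in s."""
    return len(s)

# A fancy test tool can run these, but a human has to think of them:
assert char_count("hello") == 5   # the obvious happy path
assert char_count("") == 0        # empty (null) string
assert char_count("héllo") == 5   # accented unicode characters
assert char_count("日本語") == 3   # multibyte characters still count as 3
```

The happy path would pass with any half-working implementation; it's the last three cases that actually earn their keep.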

Test design is absolutely one of the more interesting and creative aspects of QA. The simple process of coming up with possible ways that one can break a component is a creative process on its own, but also one that requires technical expertise. How can you know the many ways to break a product on a Windows platform if you don't have in-depth technical knowledge of Windows? It's almost a philosophical question - how can you know the things you don't know? There is only one answer: keep learning and expand your knowledge boundaries.

Wednesday, December 20, 2006

Defining Automation

I find the big problem with automation is that the word "automation" is generally used as some sort of all-encompassing silver bullet keyword. Cold feet? Try automation! Lost your sense of smell? Try automation! The very word seems to imply some sort of panacea for all your testing aches and pains.

I've found that automation works best when it is solving specific problems, with a narrow scope and focus. A good process to automate would be one where you build your program, copy it to a server, run some sanity tests against it, and then email yourself to say it's done. A bad problem to attempt to automate would be running all unit, functional, and system tests on your build, across multiple operating systems and programming languages, in a framework extensible enough to withstand future changes in requirements. In my opinion, this type of program is too large in scope to be reliable. I'm sure there are people (or companies, rather) that have attempted and succeeded at such a project, but an automation project of that size needs its own separate QA team.
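The good case above can be sketched as a short, single-purpose script. This is a hypothetical sketch only: the commands, hostname, and email address are made up, and each step is one narrow job.

```python
import subprocess

def sh(cmd):
    """Run a shell command, raising an error if it exits non-zero."""
    subprocess.run(cmd, shell=True, check=True)

def nightly(run=sh):
    """Build, deploy, sanity-test, notify - four narrow steps."""
    run("make build")                          # 1. build the program
    run("scp app.bin qa-host:/deploy/")        # 2. copy it to the server
    run("ssh qa-host /deploy/sanity.sh")       # 3. run sanity tests against it
    run('mail -s "sanity run done" qa@example.com < /dev/null')  # 4. email
```

Because the command runner is injected, the script itself stays testable: substitute a stub for `sh` and check the sequence of commands it would have issued.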

What I'm trying to get at is: when testing complex systems, don't use even more complex systems. Complexity will by nature introduce errors into the system. Keeping your testing systems simple will reduce the number of errors within your test system. Obviously there is some loss of functionality, which is why I'm a fan of stringing together multiple simple systems to do more complex automation when necessary. Modularity in automation is the key to success.

Here are a few good blog posts on the subject that cover different ways to view the same problem:

Sunday, December 17, 2006

Open Source Chocolate Factory

I had a fantastic visit to the Scharffen Berger chocolate factory in Berkeley today. They were quite generous with the samples, and even more interesting, they showed us the exact process they use to make their chocolate, recipes and all. They even let us take pictures of their equipment inside the factory. As the tour guide (Mandy) told us, "We're the only chocolate factory in the US or Europe that not only allows you inside the factory, but we let you take pictures". It's like an open source chocolate factory.

Ok, not exactly. I can't start modifying their chocolate-making process with my own beans and spices. But the transparency of their process and ingredients reminded me of open source software. The transparency builds trust with their clients, who are now more confident about exactly what they're eating. They are obviously confident that nobody will steal their processes, and for good reason. I may now know how they make chocolate, but I'm not willing to invest the millions of dollars it would take to copy them. As you can probably tell by now, I'm making parallels to software here.

Friday, December 15, 2006

PHP Static Analysis Tools

I did some searching today for PHP static analysis tools, and came across some interesting ones.

PHP-Front: Not even at v0.1 yet, but I await the release eagerly. You can download unstable releases if you want to test it out.

PHP-SAT: Made by the same people (person?) as PHP-Front, and also not at v0.1 yet. You can download unstable releases if you want to test it out.

Pixy: This looks like an academic project, but at least they have something working! The analysis tool deals mainly with detecting XSS vulnerabilities.

Searching for PHP dynamic analysis tools did not prove as fruitful.

Thursday, December 14, 2006

Mutation Testing

When thinking of the phrase, "who polices the police?", I immediately think of testing. Indeed, who tests your tests? How does one even delve into the process of meta-testing, which seems to create an infinite loop of testing? Why would one think of such a thing?

The fact is that some companies are very keen on simply having test cases pass, and don't consider whether or not those test cases were even worth executing. A coworker recently told me that on his last team, a test suite run on a computer that didn't even have the software installed passed at an impressive rate. Why? Because they were being measured on how many test cases passed, not on whether they were good test cases. As stated by Dr. Adam Kolawa,

"In general, there is no easy way to tell if the test suite thoroughly tests the program or not. If the program passes the test suite, one may only say that program works correctly on all the cases that are included in the test suite. The more cases a test suite contains, the higher the probability that the program will work correctly in the real world."

Outside of fixing the review process in such a company, which is absolutely a solid way to help alleviate this problem, there is a method of testing known as Mutation Testing that I'd like to discuss. Let's assume that you have a perfect test suite and the perfect program: every test case that can be run passes. If you change the code of the program (creating a mutant) and run the test suite against it, the suite should detect some kind of error. If the suite fails to detect the change in the code, you have another test case to create :).

Obviously there have to be some bounds on this. First, selecting what code to change is exactly the kind of intuitive and creative task that good test engineers excel at. Globally changing a variable name is not an interesting mutation; changing the database calls is. Second, you must limit how much mutation testing you do. There is no end to the number of different mutants you can create, and with a smart enough engine, I'm sure you could semi-automate this task. However, mutation testing is merely one of many types of testing one should employ, and should not be the main focus.
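A minimal sketch of the idea (the function and its tiny suite here are hypothetical): mutate one operator and check that the suite fails against the mutant.

```python
def discount(price, rate):
    """The program under test: apply a percentage discount."""
    return price - price * rate

def suite_passes(fn):
    """A tiny test suite; returns True if fn passes every case."""
    try:
        assert fn(100, 0.1) == 90
        assert fn(0, 0.5) == 0
        return True
    except AssertionError:
        return False

# The original program passes the suite...
assert suite_passes(discount)

# ...so flip one operator to create a mutant.
def discount_mutant(price, rate):
    return price + price * rate   # '-' mutated to '+'

# A good suite "kills" the mutant: it must fail against the change.
assert not suite_passes(discount_mutant)
```

If a mutant had survived (the suite still passed), that survival points directly at the test case you were missing.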

You can read Dr. Adam Kolawa's paper on Mutation Testing on StickyMinds.

Wednesday, December 13, 2006

The New Outsourcing of Testing

Most programmers I know who get excited about a new idea are really only interested in creating prototypes. The creation of the initial 60-70% of the program is exciting and fun, especially if there are new languages and technologies to learn, but finishing or even *gasp* testing the program is not within their world-view of fun and exciting. This in itself isn't a problem - I am also guilty of this when I have a new idea in my head.

However, that doesn't excuse the person or company for releasing that prototype to the public as a "Beta". Yes, the new outsourcing of testing I'm referring to is the shift of testing duties from in-house engineers to external users. By labelling the product or site a "Beta", the company can reduce its own testing effort and push the burden onto its users. When users do have problems, the company can say with confidence, "Well, no wonder - it's a beta!".

Unfortunately, the psyche of the Beta user has decided to go along with this plan. When using a Beta, the user feels like an early adopter, and is less likely to feel angered when the system crashes and causes their hard drive to catch fire. I mean, it's a beta, right?

Let's face it: creating quality products is a difficult task, but that's no reason to get lazy. Beta used to mean, "it's basically done, but we want feedback on the feature sets and usability". To be fair, some companies still operate this way. Most of the startups I know in the bay area do not. They are in line with the "release early, release often" mantra of such greats as Paul Graham and Tim O'Reilly; however, for them, release early means release crap and see what happens.

With the many high-profile security problems plaguing businesses across the world over the past 5-10 years, security has become a big deal in the computing industry. I'd like to see the same push with Quality.