Tuesday, July 24, 2007
There was a great blog post on the Laws of Software Development, and a few of the laws struck me as related to testing, so I thought I'd share them.
Brooks' Law: Adding manpower to a late software project makes it later. Or testers, in this case.
Conway's Law: Any piece of software reflects the organizational structure that produced it. Good to keep in mind when testing.
Heisenbug Uncertainty Principle: Most production software bugs are soft: they go away when you look at them.
Hoare's Law of Large Programs: Inside every large problem is a small problem struggling to get out. Those small problems are worth searching for.
Lister's Law: People under time pressure don’t think faster. Or test faster.
Nathan's First Law: Software is a gas; it expands to fill its container.
Sunday, July 15, 2007
Scalable Test Selection
A few weeks ago I gave a presentation at the Google Scalability Conference on the idea of Scalable Test Selection (the video can be found here). Now that the video is up, I thought I'd share the idea, which was developed within my team. I must preface this by saying that it was not my idea -- the credit belongs to Amit and Fiona.
As Cem Kaner has said before me, there is never enough time to do all the testing. Time, in essence, is a scarce resource that must be allocated intelligently. There are many test strategies to choose from when testing a product, but one question must be raised: how do you know you're testing the right things, given the amount of time you have? That question is impossible to answer in general, but there are situations where it becomes pressing. For instance: when a change is introduced to your product at the last minute before release and you have to decide what to test, how do you choose? You can make some intelligent guesses based on what has changed, but how can you know that the change hasn't broken a distant dependency?
This is the situation where we believe Scalable Test Selection can help. Given a source code change, what test cases are associated with that source code? Essentially, how can we link test cases to source code using test artifacts?
We have identified (and presented on) three ways to associate test cases with source code:
- Requirements: If source code is checked in to satisfy requirements, and test cases are checked in to satisfy the same requirements, then a connection can be made.
- Defects: A test case fails, and that failure is associated with a new defect. When source code is checked in to fix the defect, an association is made between the code and the test case.
- Build Correlation: For a single build, you can associate the set of source code changes with the set of test case failures. Iterate that over successive builds, and you have a large set of source code and test case associations (a rough sketch of this idea follows the list).
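To make the build-correlation idea a bit more concrete, here is a minimal sketch of how such associations could be mined from build history. Everything in it (the file names, the shape of the build records, the helper names) is my own illustrative assumption, not the implementation we presented.

```python
# A toy sketch of mining (source file, test case) associations from build
# history via build correlation. Data layout and names are hypothetical.
from collections import defaultdict

# Each build record pairs the source files changed in that build with the
# test cases that failed on it.
build_history = [
    {"changed_files": ["parser.py", "lexer.py"], "failed_tests": ["test_parse_empty"]},
    {"changed_files": ["parser.py"], "failed_tests": ["test_parse_empty", "test_parse_nested"]},
    {"changed_files": ["renderer.py"], "failed_tests": ["test_render_table"]},
]

def build_associations(history):
    """Count how often each (source file, test case) pair co-occurs in a
    build: the file was changed and the test failed."""
    counts = defaultdict(int)
    for build in history:
        for changed_file in build["changed_files"]:
            for test_case in build["failed_tests"]:
                counts[(changed_file, test_case)] += 1
    return counts

associations = build_associations(build_history)
# e.g. associations[("parser.py", "test_parse_empty")] == 2
```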
What we're doing is applying simple data-mining techniques to the test data. There is much more to the idea that I'm not covering here (prioritization of test cases, implementation ideas, etc.), but I hope you get the gist. I fully recommend watching the video if this topic interests you, and feel free to email me if you want the slides :).
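Prioritization is one of the pieces this post only gestures at, but as a hedged guess at what a simple version could look like: given a last-minute change, rank test cases by how strongly they co-occur with the changed files. Again, the scoring and the association counts below are hypothetical, not the approach from the talk.

```python
# A simple prioritization step on top of mined associations.
from collections import defaultdict

# (source file, test case) -> number of builds where the file changed and
# the test failed, as could be produced by the previous sketch (made up here).
associations = {
    ("parser.py", "test_parse_empty"): 2,
    ("parser.py", "test_parse_nested"): 1,
    ("renderer.py", "test_render_table"): 1,
}

def prioritize_tests(changed_files, associations, limit=10):
    """Rank test cases by how strongly they are associated with the files
    touched by a change, strongest first."""
    scores = defaultdict(int)
    for (source_file, test_case), count in associations.items():
        if source_file in changed_files:
            scores[test_case] += count
    return sorted(scores, key=scores.get, reverse=True)[:limit]

# A last-minute change to parser.py suggests running the parser tests first.
print(prioritize_tests({"parser.py"}, associations))
# -> ['test_parse_empty', 'test_parse_nested']
```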
Monday, July 9, 2007
CAST Conference
I just finished Day 1 of the AST Conference (CAST), and boy did I have a good time. The talks were good, but the people were better. There was definitely a general feeling of community there, which is something I haven't seen much at conferences I've been to lately. The crowd was relatively small (~180), but all good people.
Highlights for me:
* Hearing Lee Copeland talk about the role of QA as a services group (this makes almost too much sense to me)
* Talking to Jon Bach about the future of AST, CAST, and the other conferences that AST is hosting
* Watching a small group gather after my talk on Meta-Frameworks to share their experience with Meta-Frameworks
Anyway, I'm leaving tomorrow, and I wish I were staying another day.
Saturday, July 7, 2007
The Time Trade-Off
I started prepping for the CAST conference next week by reading up on some test patterns that the AST group has produced in the past. I was reading some great stuff by Cem Kaner on Scenario Testing this weekend when I came across a fantastic quote:
"The fundamental challenge of all software testing is the time tradeoff. There is never enough time to do all of the testing, test planning, test documentation, test result reporting, and other test-related work that you rationally want to do. Any minute you spend on one task is a minute that cannot be spent on the other tasks. Once a program has become reasonably stable, you have the potential to put it through complex, challenging tests. It can take a lot of time to learn enough about the customers, the environment, the risks, the subject matter of the program, etc. in order to write truly challenging and informative tests."
It's so true it's painful. Deciding what to test is becoming increasingly important in my own work as the amount of work stacks up and the amount of time to test it decreases. It's an interesting balancing act.