Thursday, May 29, 2008

A Different Kind of QA: Calling all Engineers

This is a repost of a blog post I made on the Xobni blog. You can read the original post here.


It’s common for people to ask why a good engineer like me would want to work in QA, especially when you have to fight the stigmas of QA, namely:

1) You are in QA because you are not good enough for development

2) You are in QA as a stepping stone for development

3) You are in QA because you don’t like coding

My response to those statements: pish-posh. While these statements may apply to some people in the field, they certainly don’t apply to the people serious about QA. A good QA Engineer solves quality problems with an algorithmic intensity that rivals that of traditional programmers. They are true hackers in the older sense of the word - they are here to find and exploit the problems in the system in any way possible.

Every problem has its boundaries. For most developers, the boundaries for implementing solutions are usually confined to one language, stack, or technology. The boundaries for problem solving in QA are generally much wider, simply because our solutions don’t have to be productized, exposed to the public, and aren’t necessarily even in the same language or stack.

This allows a much wider range of creative freedom when solving problems. Learning new languages and technologies becomes essential for your work. Having a large arsenal of tools to attack a problem becomes a necessary part of the job. This provides you with even more of a reason to learn about the latest and greatest in tech, which is something that appeals to all engineers alike.

At Xobni we approach QA differently than most. The people we look for are not here because they are not good enough for development. They are not here because they don’t like coding. The QA people here are expected to be at the top of their game. They are expected to build and create software that can topple the Jenga-like building blocks of our product. They are expected to be creative people who like to learn, explore, and exploit software.

That being said, Xobni is looking for a QA engineer! Check out the job post, and send resumes to ryan dot gerard at xobni.com if you think you can rock our world.

Tuesday, March 11, 2008

Productivity and Flow

I've been thinking a lot recently about that state of mind where you lose track of time, focus intensely, and generally are very productive. This state of mind is not only associated with working; it can also be found in exercising, playing games, and other focus-intensive activities. Still, I generally associate it with work, mostly because my best work is done in this state of mind. I find that if I can lose myself for a while and disassociate from reality while working, I generally come out of that state with an amazing amount of work done.

I was thinking recently that if I could induce this state of mind more frequently, I could become a more productive person. After a short amount of googling, I discovered that there is an entire area of research in psychology devoted to this subject; it's known as "flow". One of the main researchers in this field is a Hungarian psychology professor named Mihaly Csikszentmihalyi. He's published a few books on the subject, and the one I picked up is called "Flow: The Psychology of Optimal Experience".

The book itself was a little too self-helpy for my tastes (if you look carefully, you'll see that the subtitle of the book above is "Steps toward enhancing the quality of life"); however, I found tidbits inside that could have been taken out of any of the software management books I've read.

After skimming through the bits on how and why you desire happiness, I found the core of the book: the elements of flow experiences.


1. Engaging in a Challenging Activity

He explains that the activity you're engaged in has to be at the edge between skill and anxiety. Even if your activity is complex, if you're too familiar with it, it won't be considered "challenging" to your psyche. You have to find something that is within your reach to learn or finish, but isn't easy.

2. Merging of Action and Awareness

Your attention is completely devoted to the activity, such that you have no awareness of the outside world. It is intense concentration that seems effortless when you're deeply involved.

3. Clear Goals and Feedback

This is fairly self-explanatory, and it is also where I started seeing parallels in software development and management. On the development side, having small coding goals that are constantly achieved and iterated on is how I think many productive people program. On the management side, providing clear feedback and goals to your employees is a staple of good management.

4. Concentration on the task at hand

This is probably obvious, however what I found interesting was that he believes only a select range of information can be allowed into your awareness when in this state. Irrelevant information in your mental activity can break your concentration, and hence your flow.

Relating this back to software environments, he goes on to state that quiet environments are essential to keeping your concentration. Much has been written already about how loud environments are productivity killers, and this just provides more evidence for that.

5. The Paradox of Control


He says that the flow experience is strongly associated with a sense of control. This resonates strongly with programming in my experience. One of the psychological benefits of programming (in my non-expert opinion) is the sense of mastery and control you gain over the system you're programming against. "Hacking" (in the Paul Graham sense, not the Kevin Mitnick sense) is merely another way of asserting your control and power over the system, by finding a non-obvious or faster solution to a specific problem. It's a very primal feeling that I think many, if not most of us, desire.

Mihaly then writes that the "paradox" of control "...is that it is not possible to feel a sense of control unless one is willing to give up the safety of protective routines". In essence, your sense of control comes by putting yourself into situations where you actually have less control, since the unknowns are much greater than in situations that you've experienced before. As he writes, "Only when a doubtful outcome is at stake...can a person really know whether they are in control".

6. The Loss of Self-Consciousness

Losing your sense of self-consciousness is a phenomenon typically talked about in association with meditation or zen-like activities. This loss is typically accompanied by "a feeling of union with the environment". Projected onto programmers, the environment you feel a union with is typically whatever framework, system, or specific program you're working in.

Mihaly explains that what is temporarily lost is not the sense of self, but the concept of the self. High-performing violinists are very aware of their fingers, just as runners are aware of their breathing. They haven't lost their sense of self, but the boundaries for how they define the self have temporarily vanished. This can be a very liberating experience, accompanied by "...the feeling that the boundaries of our own being have been pushed forward".

7. The Transformation of Time

It is normal to emerge from a flow experience and see that hours have passed without your awareness. What you're measuring when in this activity is not time, but states or milestones. When programming intensely, it's not uncommon to think of your progress not in terms of minutes and hours, but in terms of functions written, functionality working, and pieces integrated. Your world turns into a state-driven world, and not a time-driven one.


For the skeptical types (like me), I want to say that these elements are conclusions drawn from many studies of people experiencing flow in many different types of activities. While this doesn't mean that the conclusions are true, it does have more credibility than just some quack spouting off what he thinks brings about flow experiences.

Hopefully this has provided you with some thought-food to chew on regarding your own productivity. I think the main take-aways from this for me are that to really engage deeply in an activity, one needs:
  • A challenging task
  • A quiet environment
  • Clear Feedback (usually in the form of finished functions and functionality in what I'm writing)
  • A clear mind
  • Enough time set aside to engage deeply with the activity
I said earlier that I found parallels between this book and other software books I've read, and these take-aways prove it. These bullet points could be taken directly out of "Peopleware" or "Managing Humans", or any other book that deals with the topic of software productivity. It's always interesting to find parallels between different disciplines, and I find the psychology of programming particularly interesting.

Wednesday, January 23, 2008

The Great TODO List

Organizing and planning your work is tough. It's not hard to list everything you need to do, but prioritizing what you need to do can be an artform. For instance, do you focus on exploring a problem with the server (which is high-priority to your co-worker), editing a design document (which your boss wants done soon), or finding the root of that newest bug (which is really what you're getting paid for)? There are definitely subtle tones of politics in these decisions, but I try to keep those matters out of my head when prioritizing.

I was discussing day-to-day planning tactics with some friends yesterday, and we found we all had different methods. I thought I'd share mine with you. I call it the Great TODO List. It's quite simple really. Anyone can start using this method immediately. The low-techness of it is astounding.

Step 1: Open Notepad
Step 2: Write down everything you need to do
Step 3: Put everything you need or want to do today at the top of the list

Amazing, isn't it?

Every day or so I go through the list, roughly prioritize it, and move anything important that I may have forgotten about up to the top. As I go through the day, when I start to feel like I should be working on something else, I just consult the list and pop from the top. Yes, the list is a stack.

The one downside to this method is that the list continually grows. I have stuff on my list from a few weeks back that I should still do at some point, but the likelihood of me doing that stuff is getting smaller and smaller by the day. The list needs love and pruning.
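The whole method is simple enough to sketch in a few lines of code. This is purely my own illustrative sketch (the real thing is literally a Notepad file), with hypothetical task names; the class and method names are mine, not part of any tool:

```python
# A minimal sketch of the Great TODO List: a stack where today's work
# sits at the top, and "pop" hands you the next thing to work on.

class TodoList:
    def __init__(self):
        self.items = []  # index 0 is the top of the stack

    def add(self, task, today=False):
        # Step 3: things for today go to the top; everything else sinks.
        if today:
            self.items.insert(0, task)
        else:
            self.items.append(task)

    def pop(self):
        # Consult the list, pop from the top.
        return self.items.pop(0) if self.items else None

    def prune(self, keep):
        # The list needs love and pruning: drop tasks that went stale.
        self.items = [t for t in self.items if keep(t)]

todo = TodoList()
todo.add("edit design document")
todo.add("explore server problem")
todo.add("find root of newest bug", today=True)
next_task = todo.pop()  # the top of the stack: today's work
```

The `prune` method is where the "love and pruning" happens: pass it a predicate that decides which weeks-old tasks still deserve a spot on the list.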

Tuesday, November 13, 2007

Quality and Perception

Well it's a new start for me. I have a new job with Xobni as their Quality Jedi, and it's a nice change. The startup life has been something I've been talking about for a while, and the time for action presented itself in the form of a ripe opportunity.

I've been thinking a lot more about product quality lately, in the general sense. This is probably because product quality will be much more on my shoulders than before, as I'm working with a relatively new product that has already been launched (in the form of a beta). I've been thinking about how one can attribute the status of "high quality" to any product, and I've realized that the only person who can bestow that status is the consumer.

Ultimately, no matter how much a product is tested and sent through the wringers of QA, the consumer is the one who decides whether your product is of high quality. It is the perception of quality that makes something high quality. Everyone I know who has owned a BMW has moaned a little about how often it has to be taken into the shop for repairs. And yet BMW retains the reputation that its cars are of high quality, due to the fact that they are "German-engineered".

The perception of quality is powerful, and can directly contribute to product success. The first product put out by Xobni is called "Insight", and is a plug-in for Outlook that gives you a "people-centric" view of email. Email is data that is important to the consumer (vitally important to some), and anything that builds on top of that data must provide value while not corrupting or interfering with the basic tasks (emailing) in any way.

That is the quality challenge with this product, as I see it from a high level. More important than verifying that the product functionality works as expected is making sure that the consumer's current tasks and environments aren't disturbed by the product.

That being said, the question now is: how can we shape the user's perception of quality? This is a question I'll have to ponder more. The value we're building into Outlook will allow users to accomplish their tasks faster and more efficiently. I think this increase in efficiency is the key to the quality perception for our product: users are getting more done by using our product, without having their current email environment disturbed. This may sound quite basic and obvious, but I think it's good to reinforce these base points about why we're building what we're building.

Interesting times are ahead.

Thursday, August 23, 2007

GTAC!

Sorry for the lack of posts -- I've been out of town recently.

I'm currently at the Google Test Automation Conference, at Google's New York office. Thus far, it's been a great experience. There are two things I've learned about (this morning) that I wanted to blog about: how to design a conference, and Google's testing philosophy.

Allen Hutchinson, an engineering manager at Google, gave some good insight into how they designed the conference, and why they made the decisions they did. I pulled lots of good insights from this talk.
*Keep the conference to under 150 people. Sociological research has shown this is an upper bound (and a magic number) that allows a group to keep a sense of community
*Provide outlets for people to discuss the conference. Their high-profile testing blog is open for unmoderated comments on this conference, and a Google group was created as well.
*Make the talks available online after the conference
*Try to decrease the amount of repeat speakers from past conferences. They want fresh blood and new ideas introduced in their conference
*Ask old speakers to help select new speakers - or just have people outside your organization help select the speakers

Allen also mentioned that they kept the conference to a single track so that you can see all the talks, but to be honest I rather like the multiple track system. It allows more speakers, and allows you to skip talks that do not interest you.

Pat Copeland, a Director at Google, then spoke about the testing philosophy that Google maintains, which I found quite interesting.

*Measure everything you can. More data allows for more analysis.
*Continuous builds and faster builds allow more time for testing
*Focus on improvement of the system, and not the bits and pieces
*Testing Goal: fully test a product in one day

Pat also mentioned some challenges that they face as an organization, which I think applies to pretty much everyone:
1. Simulating real-world load, scale, and chaos is difficult
2. Deciding what to measure is difficult
3. Complex failure scenarios are expensive to test

The talks (and more importantly, the people) have been quite interesting thus far. I'll hopefully have more to update after tomorrow.

Tuesday, July 24, 2007

My Favorite Laws of Software Development

There was a great blog on the Laws of Software Development, and there were a few I saw as related to testing that I thought I should share.

Brooks' Law: Adding manpower to a late software project makes it later. Or testers, in this case.

Conway's Law: Any piece of software reflects the organizational structure that produced it. Good to keep in mind when testing.

Heisenbug Uncertainty Principle: Most production software bugs are soft: they go away when you look at them.

Hoare's Law of Large Programs: Inside every large problem is a small problem struggling to get out. Those small problems are worth searching for.

Lister's Law: People under time pressure don’t think faster. Or test faster.

Nathan's First Law: Software is a gas; it expands to fill its container.

Sunday, July 15, 2007

Scalable Test Selection

A few weeks ago I gave a presentation at the Google Scalability Conference on the idea of Scalable Test Selection (video can be found here). Now that the video is up, I thought I'd share with you all this idea developed within my team. I must preface this by saying that this was not my idea -- the credit belongs to Amit and Fiona.

As Cem Kaner has said before me, there is never enough time to do all the testing. Time, in essence, is your scarce resource and must be allocated intelligently. There are many test strategies one can choose from when testing a product; however, the question must be raised: how do you know whether you're testing the right things, given the amount of time you have to test? This is an impossible question to answer, but there are situations where it is especially pertinent. For instance: when a change is introduced to your product at the last minute before release, and you have to decide what to test, how do you choose? There are probably some intelligent guesses you can make on what to test based on what has changed, but how can you know that this change hasn't broken a distant dependency?

This is the situation where we believe Scalable Test Selection can help. Given a source code change, what test cases are associated with that source code? Essentially, how can we link test cases to source code using test artifacts?

We have identified (and presented on) three ways to associate test cases with source code:
  • Requirements: If source code is checked in to satisfy requirements, and test cases are checked in to satisfy requirements, then a connection can be made.
  • Defects: A test case fails, which then is associated with a new defect. When source code is checked in to fix that defect, an association is made between the code and the test case
  • Build Correlation: For a single build, you can associate a set of source code changes with a set of test case failures. Now iterate that over successive builds, and you have a large set of source code and test case associations
With all this data that you can use to associate source code to test cases, when future source code changes are checked in, a tool can be written that can find all the test cases that are associated with that source code. In the case that you're in a time-crunched situation, you can have another source that suggests what test cases you should run in your limited amount of time.
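To make the build-correlation idea concrete, here is a small sketch of how such a tool might work. This is my own simplified illustration, not the implementation my team built; the file and test names are hypothetical:

```python
# Sketch of build correlation: for each build, associate the source
# files that changed with the test cases that failed, then query the
# accumulated associations when a new change comes in.

from collections import defaultdict

def build_associations(builds):
    """builds: iterable of (changed_files, failed_tests) pairs,
    one pair per build."""
    assoc = defaultdict(set)
    for changed_files, failed_tests in builds:
        for src in changed_files:
            # Every test that failed in this build becomes a candidate
            # association for every file that changed in this build.
            assoc[src].update(failed_tests)
    return assoc

def suggest_tests(assoc, changed_files):
    """Given a last-minute source change, suggest the test cases
    historically associated with the changed files."""
    suggested = set()
    for src in changed_files:
        suggested.update(assoc.get(src, set()))
    return suggested

# Hypothetical history: (files changed, tests that failed) per build.
history = [
    ({"parser.c"}, {"test_parse_empty"}),
    ({"parser.c", "net.c"}, {"test_parse_unicode", "test_timeout"}),
    ({"net.c"}, {"test_timeout"}),
]
assoc = build_associations(history)
tests = suggest_tests(assoc, {"parser.c"})
```

Note that co-occurrence in a build is a noisy signal (a test may have failed for unrelated reasons), which is exactly why iterating over many successive builds matters: real associations recur, while coincidences wash out.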

What we're doing is applying simple data-mining techniques to the test data. There is much more to the idea that I'm not talking about (prioritization of test cases, implementation ideas, etc.), but I hope you get the gist. I fully recommend you watch the video if this topic interests you, and feel free to email me if you want the slides :).