```java
// locate an index of key k in sorted array a[],
// or if k is not present, return -1
int binarySearch( int[] a, int k ){
    int i = 0;
    int j = a.length - 1;
    while( i <= j ){
        int m = (i + j) / 2;
        if( k < a[m] ){          // left half
            j = m - 1;
        } else if( a[m] < k ){   // right half
            i = m + 1;
        } else {                 // found
            return m;
        }
    }
    return -1; // not present
}
```

The culprit is sly: on a large enough array, the sum i + j overflows, so computing m = (i+j)/2 wraps around to a negative index, e.g.

```java
int M = 0x7fffffff; // 2^31 - 1
int[] a = new int[ M ];
a[ M - 1 ] = 1;

// the call...
int r = binarySearch( a, 1 );
// ...throws an ArrayIndexOutOfBoundsException !
```

Fortunately the midpoint can be computed safely via m = i + (j-i)/2. There are already many blog posts on this blemish, but none concerning software quality. What might a software engineer learn about testing from this long-hidden oversight?
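As a sketch, here is the same routine with only the midpoint line changed; the difference j - i always fits in an `int`, so the sum can no longer wrap. (The unsigned-shift form `(i + j) >>> 1` is another well-known Java repair.)

```java
// Corrected binary search: identical to the version above except for
// the midpoint computation, which cannot overflow because j - i is
// always a non-negative int within range.
int binarySearch( int[] a, int k ){
    int i = 0;
    int j = a.length - 1;
    while( i <= j ){
        int m = i + (j - i) / 2;  // safe midpoint
        if( k < a[m] ){           // left half
            j = m - 1;
        } else if( a[m] < k ){    // right half
            i = m + 1;
        } else {                  // found
            return m;
        }
    }
    return -1; // not present
}
```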

1. A lesson from numerical analysis, that time-honored formulas aren't necessarily robust when copied to finite arithmetic.

2. Vary your test data. Surely the Sun test battery for binarySearch passed, again and again, on the same sets of arrays and keys. Fine for 9 years -- but static lists of tests become stale. Which arrays should we select, or search for? Read on...

3. Specialized conditions are required to pinpoint rare defects. This flaw cannot be triggered for array lengths of 2^30 (about a billion) or less; no wonder nobody noticed. The truth is that the endpoints of anything almost always provide interesting insights. Here, searching far to the left always passes, and only searching far enough to the right can throw the Exception. Aha. This leads to a favorite maxim...

4. *Design tests specific to the implementation*. The greatest of the Java `int` primitives is M -- nothing to do with binary search -- instead, a boundary in the language itself! Therefore, an array of that length makes a worthwhile test case *in Java* (cf. Mathematica, Python). Something to remember when writing your own test suites.
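A small arithmetic check of that 2^30 boundary (my own illustration, not from the original post): with n elements the largest reachable sum is i + j = 2n - 2, which first exceeds Integer.MAX_VALUE once n passes 2^30.

```java
// Largest index in an array of length 2^30: the sum still fits.
int j = (1 << 30) - 1;
int m1 = (j + j) / 2;    // 2^31 - 2 is still a valid int, so m1 == j

// Largest index once the length exceeds 2^30: the sum wraps.
int j2 = 1 << 30;
int m2 = (j2 + j2) / 2;  // sum 2^31 wraps to Integer.MIN_VALUE; m2 is negative
```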

Professor Kahan at Berkeley once summed up his lessons about testing (from the infamous Pentium divide glitch) into a fitting quote:

> Seek singularities! There lie all the errors.

Wisely put, as are Josh's remarks about programming defensively, vigilance, code reviews, static analysis, solid proofs, and formal verification. (I would disagree with his view that there will always be bugs, but opposing the formidable Dr. Bloch isn't really something one ought to do.)

The paradox is that although testing can't exterminate our errors, it still needs to be done to provide corroborative evidence of correctness, even for well-proved algorithms! Later, I'll augment this post with test strategies which might have exposed the flaw. With test methods in mind, we might be able to expose ever more errors, and add to our QA repertoire.

An exercise: Using 32-bit two's complement integer arithmetic, compute the average of i and j, correctly rounding to even. Test it with i and j at the extremes. My solution is not short.
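For the curious, one possible approach (a sketch of mine, not necessarily the intended solution): compute the overflow-free floor average via the bit identity i + j = ((i & j) << 1) + (i ^ j), then, when the true average ends in .5, bump an odd floor up to its even neighbor.

```java
// Average of two ints, rounding halves to even, with no overflow.
int averageRoundEven( int i, int j ){
    int floor = (i & j) + ((i ^ j) >> 1); // floor((i + j) / 2), overflow-free
    boolean half = ((i ^ j) & 1) != 0;    // true average has a .5 fraction
    if( half && (floor & 1) != 0 ){
        floor += 1;                       // odd floor + .5: round up to even
    }
    return floor;
}
```

The arithmetic shift preserves the sign of i ^ j, so the floor is correct for negative operands too, and the increment cannot overflow because a fractional average implies floor < Integer.MAX_VALUE.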