The fact is that some companies are very keen on simply having test cases pass, and don't consider whether those test cases were even worth executing. A coworker recently told me that on his last team, a test suite run on a computer that didn't even have the software installed passed at an impressive rate. Why? Because they were being measured on how many test cases passed, not on whether they were good test cases. As stated by Dr. Adam Kolawa,
"In general, there is no easy way to tell if the test suite thoroughly tests the program or not. If the program passes the test suite, one may only say that program works correctly on all the cases that are included in the test suite. The more cases a test suite contains, the higher the probability that the program will work correctly in the real world."
Outside of fixing the review process in such a company, which is absolutely a solid way to help alleviate this problem, there is a method of testing known as Mutation Testing that I'd like to discuss. Let's assume you have a perfect test suite and a perfect program: every test case that can be run passes. If you change the code of the program (creating a mutant) and run the test suite against it, the suite should detect some kind of error. If the suite fails to detect the change in the code, you have another test case to create :).
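To make the idea concrete, here is a minimal sketch in Python. The function, the mutant, and the test cases are all hypothetical examples of mine, not from any real codebase: a correct implementation, a mutant with one deliberate operator flip, and a tiny suite run against both. A good suite "kills" the mutant by failing on it.

```python
def apply_discount(price, rate):
    """Correct implementation: reduce price by the given rate."""
    return price * (1 - rate)

def apply_discount_mutant(price, rate):
    """Mutant: the subtraction was flipped to addition."""
    return price * (1 + rate)

def run_suite(fn):
    """Return True if every test case passes for the given implementation."""
    cases = [
        ((100, 0.1), 90.0),
        ((50, 0.5), 25.0),
    ]
    return all(abs(fn(*args) - expected) < 1e-9 for args, expected in cases)

print(run_suite(apply_discount))         # True: the real code passes
print(run_suite(apply_discount_mutant))  # False: the suite kills the mutant
```

If the second call had printed True, the mutant would have survived, and that surviving mutant is exactly the signal that your suite is missing a test case.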
Obviously there have to be some bounds on this. First, selecting which code to change is the kind of intuitive, creative task that engineers involved in testing excel at. Changing a variable name is not an inherently interesting mutation if the change is applied globally; changing the database calls is. Second, you must limit how much mutation testing you do. There is no end to the number of different mutants you can create, and with a smart enough engine, I'm sure you could semi-automate this task. However, mutation testing is merely one of many types of testing, and should not be the main focus.
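That "smart enough engine" isn't far-fetched. As a rough sketch of what semi-automation could look like (the mutation operator and target function here are my own toy examples), Python's `ast` module can rewrite a program's syntax tree to produce mutants mechanically:

```python
import ast

class FlipAddSub(ast.NodeTransformer):
    """Toy mutation operator: swap + and - in binary expressions."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        elif isinstance(node.op, ast.Sub):
            node.op = ast.Add()
        return node

source = "def total(a, b):\n    return a + b\n"
mutant_tree = FlipAddSub().visit(ast.parse(source))
ast.fix_missing_locations(mutant_tree)  # new nodes need line/col info

# Execute the mutant, then check whether the suite's expectation catches it.
namespace = {}
exec(compile(mutant_tree, "<mutant>", "exec"), namespace)
killed = namespace["total"](2, 3) != 5  # the original suite expects 5
print(killed)  # True: this mutant was detected
```

A real engine would apply many such operators across a whole codebase and report the survival rate, which is why the number of mutants explodes so quickly and why you have to cap the effort.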