I find that the big problem with automation is that the word "automation" is generally used as some sort of all-encompassing silver-bullet keyword. Cold feet? Try automation! Lost your sense of smell? Try automation! The very word seems to imply some sort of panacea for all your testing aches and pains.
I've found that automation works best when it is solving specific problems, with a narrow scope and focus. A good problem or process to automate would be one where you want to build your program, copy it to a server, run some sanity tests against it, and then get an email telling you it's done. A bad problem to attempt to automate would be running all unit, functional, and system tests on your build, across multiple operating systems and programming languages, in a framework extensible enough to withstand future changes in requirements. In my opinion, this type of program is too large in scope to be reliable. I'm sure there are people (or companies, rather) that have attempted and succeeded at such a project, but an automation project of that size needs its own separate QA team.
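A narrow-scope job like the good example above can be little more than a fail-fast sequence of commands. Here's a minimal Python sketch; the build, copy, sanity-test, and email commands are hypothetical placeholders, not a real project's tooling:

```python
import subprocess

def run_step(name, cmd):
    """Run one external command; report and return whether it succeeded."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    print(f"{name}: {'ok' if ok else 'FAILED'}")
    return ok

def nightly_build():
    # Hypothetical commands -- substitute your own build/copy/test/notify tools.
    steps = [
        ("build",  ["make", "all"]),
        ("deploy", ["scp", "app.tar.gz", "staging:/srv/app"]),
        ("sanity", ["curl", "--fail", "http://staging/healthz"]),
        ("notify", ["mail", "-s", "build done", "me@example.com"]),
    ]
    for name, cmd in steps:
        if not run_step(name, cmd):
            return False  # stop at the first failure
    return True
```

The whole job is a few dozen lines, does exactly one thing, and is trivial to read when it breaks at 3 a.m.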
What I'm trying to get at is that when testing complex systems, you shouldn't use even more complex systems. Complexity will by its nature introduce errors into the system, so keeping your testing systems simple will reduce the number of errors within your test system. Obviously there is some loss of functionality, which is why I'm a fan of stringing together multiple simple systems in order to do more complex automation, when necessary. Modularity in automation is the key to success.
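One way to picture that stringing-together is a tiny composition helper: each step stays small and independently testable, and different jobs are just different orderings of steps. A minimal sketch, with toy steps standing in for real ones:

```python
def pipeline(*steps):
    """Chain small, single-purpose steps into one job.

    Each step takes a context dict and returns an updated one, so every
    step can be tested in isolation and recombined into other jobs.
    """
    def job(ctx):
        for step in steps:
            ctx = step(ctx)
        return ctx
    return job

# Toy steps standing in for real ones (compile, smoke-test, report, ...).
def stamp_version(ctx):
    return {**ctx, "version": "1.2.3"}

def record_result(ctx):
    return {**ctx, "status": "passed"}

nightly = pipeline(stamp_version, record_result)
print(nightly({}))  # → {'version': '1.2.3', 'status': 'passed'}
```

The complexity lives in how you wire the pieces together, not inside any single piece, so each piece stays simple enough to trust.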
Here are a few good blog posts on the subject that cover different ways to view the same problem: