I have heard rumblings of controversy around testing with mock objects. For example, Jeff Perrin had an interesting link in his blog to an article about why not to use mock objects. When I'm trying to illustrate an idea, I find it hard to come up with simple examples off the top of my head: they tend to be either very specific to a project I'm working on or very contrived. That's why I figured I would take the opportunity to describe something simple yet concrete that I encountered the other day at work.

On my current project, I was trying to fix a bug. To do so, I added a condition to a function that retrieved a list of wells. This function retrieved all of the wells connected to a given battery (effectively a storage tank for oil produced at a well), and I modified it to return only wells that had not been "shut-in." While running all of our tests, I ran across a few that were breaking as a result of my change. Those tests had to do with contracts, an area of the system I am not very familiar with. I asked one of our on-site business users about this, and it turned out that in certain circumstances it made sense to include shut-in wells when processing a contract.
I thought this was interesting because the tests we had did not use mock objects. Because of that, my change produced failures in tests with real business meaning, which explained to me why my change was not a good idea. With mock objects, I imagine the tests would have been decoupled: there would be tests for the function that returned the wells at a battery, and then separate tests to make sure contracts were being processed correctly. These latter tests would mock or stub out the function that searched for wells at a battery. My change would therefore likely have broken only one test, one verifying that that particular function included shut-in wells, outside of any other context. On one hand that seems like a good thing, but the problem is that without the context of the business problems in which shut-in wells should be included, I could imagine concluding that the failing test itself was probably wrong.
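To make the contrast concrete, here is a minimal sketch in Python. Every name in it is invented for illustration (the real project was nothing this simple): a well lookup that now filters out shut-in wells, and contract processing that, per the business rule, should still count them. The un-mocked check exercises the real lookup, so the change surfaces there; the stubbed check never sees it.

```python
# Hypothetical sketch (all names invented): how un-mocked vs. mocked
# tests react to the shut-in filtering change described above.

WELLS = {"battery-1": [{"name": "A", "shut_in": False},
                       {"name": "B", "shut_in": True}]}

def wells_at_battery(battery_id):
    # The change in question: return only wells that are not shut-in.
    return [w for w in WELLS[battery_id] if not w["shut_in"]]

def contract_volume(battery_id):
    # Contract processing; the business rule says shut-in wells count.
    return len(wells_at_battery(battery_id))

# Un-mocked test: goes through the real lookup, so the filtering change
# breaks it with real business meaning (the contract expected 2 wells,
# but now gets 1).
assert contract_volume("battery-1") == 1

# Mocked test: the lookup is stubbed out (a mock library would do this
# more cleanly), so the same change is invisible and the contract test
# keeps passing no matter what wells_at_battery really does.
original_lookup = wells_at_battery
wells_at_battery = lambda battery_id: [{"name": "A"}, {"name": "B"}]
assert contract_volume("battery-1") == 2
wells_at_battery = original_lookup
```

The stubbed version isolates the contract logic nicely, but it also severs the very connection that made the real failure informative.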
One could argue, first, that this is what the on-site customer is for in XP, and second, that one should also have customer-facing integration tests (FIT tests) that would probably fail as a result of this type of change. Still, in the first case the customers were busy and may not have thought of the problem the failing contract test had identified, at least not right on the spot. As for the integration tests, these would by definition probably run as an overnight batch job, so I wouldn't find out about the problem I had checked into source control until quite a bit later.
It's this sort of thinking that leads me to believe that over-mocking is a bad idea. A test is a good candidate for stubs or mocks when the code it triggers a) accesses external resources that are not available at the time the test is run, b) takes too long to run, or c) does not yield consistent results over time (as in the case of a stock ticker or the system clock). Otherwise I think it's best to avoid mocking and stubbing as a matter of principle.
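As a sketch of case (c), here is one common way to stub only the inconsistent part, assuming a hypothetical `is_business_hours` check: the clock is injectable, so a test can pin it to a fixed time while everything else in the code path stays real.

```python
from datetime import datetime

def is_business_hours(now=None):
    # Hypothetical check (name invented for illustration). The 'now'
    # parameter lets tests stub the system clock -- case (c) above --
    # without mocking anything else; production code passes nothing
    # and gets the real clock.
    now = now or datetime.now()
    return 9 <= now.hour < 17

# Stubbed clock: deterministic regardless of when the test runs.
assert is_business_hours(datetime(2024, 1, 15, 10, 0)) is True
assert is_business_hours(datetime(2024, 1, 15, 20, 0)) is False
```

The stub is confined to the one input that varies over time, which keeps the rest of the test exercising real code.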