Sunday, June 11, 2006

Just-In-Time Performance Tuning

I've been thinking a bit about performance tuning business applications lately, after reading some of Ted Ogrady's recent writings on the subject (see Empirical Performance, Elaboration, and Response to my concerns about risk). Ted pointed out that some authorities in the agile realm recommend avoiding premature optimization (see Fowler and Beck). It's true that optimizing code too early can lead to problems, and in particular that one should not optimize code without profiling it first. However, I do think that letting performance degrade enough to upset users is a bad thing, and it seems out of character for agile development. For example, Kent Beck writes:

Later, a developer pulled Kent aside and said, "You got me in trouble."

"About what?"

"You said I could tune performance after I got the design right."

"Yes, I did. What happened?"

"We spent all that time getting the design right. Then the users wanted sub-second response time. The system was taking as long as fifteen seconds to respond to a query. It took me all weekend to change the design around to fix the performance problem."

"How fast is it now?"

"Well, now it comes back in a third of a second."

"So, what’s the problem?"

"I was really nervous that I wouldn’t be able to fix the performance problem."


Kent's suggestion is to do some back-of-the-envelope calculations to assess performance early in the project, but that's just a written-down artifact, the kind of thing that agile practices generally discount.
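To be fair, such an estimate can be done in a minute or two. With some hypothetical numbers of my own (not Kent's): if rendering a screen issues 50 queries at roughly 100 ms each, the envelope says about 5 seconds of response time, which flags a design problem long before any user ever sees the screen.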

As I see it, the history of the performance tuning debate goes something like this: Early on, performance tuning had to be done all the time because hardware was so slow and memory was so expensive. Later on, as overall computer performance improved, there was a backlash against this kind of optimize-always behaviour. If you aren't desperately worried about memory, you don't need to use bit fields - that sort of thing. However, as a result of leaving performance issues for late in the project, I've seen a number of projects now where the performance becomes really terrible. Users get upset and management's overall perception is not positive. The developers in this case say "Don't worry, we'll solve the performance problems later." However, these performance problems can start to affect the project. Users who are testing the app spend too much time navigating from one screen to another, and even the time to run the automated tests written by the development team suffers.

Basically, I think there is a better way: One of the best ideas in software development I've come across in the last while is the notion of a FIT test. A FIT test is a customer-facing test. My suggestion is to devote time to developing a relatively small number of performance-oriented FIT tests during each iteration. These tests execute an area of code where performance is important, under conditions that are as realistic as possible. Just as with normal FIT tests, performance FIT tests can be written before the actual code exists. Initially, there is no processing done and the test passes trivially. Each iteration, someone is responsible for maintaining the FIT test - adding setup data and making sure it runs without errors. If the test meets the established performance criteria, the bar is green; otherwise it's red. That's when we jump in with the profiler to get the performance back to acceptable levels. The code should remain as clean as possible, and only the minimum amount of tweaking required to make the test pass should be done. That way the users won't run across unacceptable levels of performance as the app is being developed, thus reducing risk and stress for everyone. The basic point I am trying to make is not that performance cannot be improved late in a project, but that maybe it doesn't have to be that way.
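To make that a little more concrete, here is a rough sketch of what such a performance-oriented fixture might look like in Java, using FIT's ColumnFixture. The CustomerSearchService class and its search method are hypothetical stand-ins for whatever application code is under test; the time budget comes from the customer, not the developer.

    import fit.ColumnFixture;

    // Sketch of a performance-oriented FIT fixture. Each row of the FIT
    // table supplies a realistic search term and a time budget in
    // milliseconds; the fixture reports whether the search finished
    // within that budget. CustomerSearchService is a hypothetical
    // application class standing in for the code under test.
    public class CustomerSearchPerformanceFixture extends ColumnFixture {
        public String searchTerm;   // input column: realistic search criteria
        public long maxMillis;      // input column: agreed performance budget

        public boolean withinBudget() {
            long start = System.currentTimeMillis();
            new CustomerSearchService().search(searchTerm);
            long elapsed = System.currentTimeMillis() - start;
            return elapsed <= maxMillis;
        }
    }

The corresponding FIT table would have columns for searchTerm, maxMillis, and withinBudget(), with one row per realistic scenario. Early on, the search does almost nothing and every row is green; as real data and real processing accumulate, a red row is the cue to reach for the profiler.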
