Tuesday, December 27, 2005

The Business Major Approach

We as technical people often get too caught up in the technology and precision of our industry and forget that sometimes "good enough" is the right answer. I call this "the business major approach," a term coined by my good friend and colleague Neil Jensen. Many times there is no economic justification for processing the extra 20% (aka the 80-20 rule).

There was a question recently on the Oracle-L list about the appsdba.com script, cpu_test.sql, and whether its variation of testing LIOs was a valid approach. A couple of people responded, and two main objections were raised: one, since not all LIOs are the same, one should test across a broader spectrum of LIO types; and two, since CPU speed is just a small part of a "system," one should really take into account all the other factors of system capacity and scalability.

Of course this is all true, and even echoes the disclaimer in the actual script, but back to my main point. The script takes the business major approach of giving some quantifiable means of providing a first pass at judging relative CPU speed. The beauty of the script is its simplicity. Yes, it only measures the simplest of logical I/Os, and yes, it does it in a PL/SQL loop. That's not the point. The point is it does it consistently, uses CPU exclusively, and has proven over the years to be a pretty reliable predictor of the relative CPU speed between platforms or even database releases. Would you base your hardware acquisition decisions on it? Of course not, but it does give an indication of the relative differences between systems and it's very easy to use.
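To make the "consistent, CPU-only work" idea concrete, here is a minimal Python analogue. This is hypothetical and is not the cpu_test.sql script itself (which runs a PL/SQL loop of logical I/Os inside the database); it just illustrates the same business-major principle: time a fixed loop of trivial work and report a throughput number that is only meaningful relative to the same test run elsewhere.

```python
import time

def relative_cpu_score(iterations=5_000_000):
    """Time a fixed, trivial, CPU-only loop; a higher score means a faster CPU.

    Analogous in spirit to cpu_test.sql: the work is simple but constant,
    so the elapsed time is a rough relative indicator of CPU speed between
    two machines -- not a capacity-planning tool.
    """
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i  # trivial, consistent, CPU-only work
    elapsed = time.perf_counter() - start
    return iterations / elapsed  # iterations per second

if __name__ == "__main__":
    print(f"{relative_cpu_score():,.0f} iterations/sec")
```

As with the original script, the absolute number means nothing; only the ratio between two systems running the identical loop does.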

So, when do we use the business major approach and when do we take the scientific approach? There is a place for both, and ultimately I think the decision is based on cost and risk. In fact, that probably means the business major always wins.

Saturday, December 17, 2005

New SQL Trace Data?

I found a new extended SQL trace line or class of lines this week while working on an Oracle Applications system. Many may have already run across this, but I hadn't and I found it very interesting. Apparently when Oracle Forms makes RPC calls now, they get logged in the extended SQL trace file. Here's the format of the lines:

RPC CALL:APPS.FND_TRACE.GET_TRACE_FILENAME;

RPC EXEC:c=70000,e=68146

So, I updated the AppsDBA Interval Resource Profiler to account for these new lines. Instead of adding the CPU time directly to CPU service, though, I decided to create a new "event" called RPC EXEC and put the CPU time there. We'll see how this works, but for now I thought this was a pretty cool discovery.
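The bookkeeping change can be sketched as follows. This is a hypothetical Python illustration, not the profiler's actual source: the function name `tally_rpc` and the profile dictionary layout are mine, and I'm assuming the `c=` and `e=` values are in microseconds, matching the other statistics in a 9i-style extended SQL trace file.

```python
import re

# Matches the new trace lines, e.g.:
#   RPC CALL:APPS.FND_TRACE.GET_TRACE_FILENAME;
#   RPC EXEC:c=70000,e=68146
RPC_EXEC = re.compile(r"^RPC EXEC:c=(?P<c>\d+),e=(?P<e>\d+)")

def tally_rpc(lines):
    """Accumulate RPC CPU and elapsed time under one 'RPC EXEC' event,
    rather than folding the CPU time into the CPU service category."""
    profile = {"RPC EXEC": {"cpu": 0, "elapsed": 0, "calls": 0}}
    for line in lines:
        m = RPC_EXEC.match(line)
        if m:
            profile["RPC EXEC"]["cpu"] += int(m.group("c"))
            profile["RPC EXEC"]["elapsed"] += int(m.group("e"))
            profile["RPC EXEC"]["calls"] += 1
    return profile

trace = [
    "RPC CALL:APPS.FND_TRACE.GET_TRACE_FILENAME;",
    "RPC EXEC:c=70000,e=68146",
]
print(tally_rpc(trace))
```

Keeping the RPC time as its own event line in the resource profile makes it visible on its own, instead of silently inflating the CPU service total.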

System Level Data

I was scanning Tom Kyte's blog and found a comment referring to Mogens Norgaard's blog We Do Not Use Blogs. I took a look, and his latest discussions have concerned system-wide performance and whether Statspack is useful. There is a general consensus among the enlightened that Statspack, and system-wide data in general, is not useful for performance tuning and actually hurts the effort to fix poorly performing jobs. I don't disagree, but I think we're missing the point. As I've pointed out for a long time now, Statspack, or the data it's based on, is not performance tuning data; it's capacity data. In other words, use system-level data for capacity planning and workload measurement. It's not useless, it's just misused! If you're really interested in this, see appsdba.com. By the way, I wholeheartedly agree with using extended SQL trace data for performance tuning, or response time optimization.