You Asked
Can you please help me interpret this output?
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse   160878     26.95      25.80          4         18         30          0
Execute 160878    130.97     126.94        355      31756     163997      17532
Fetch   139738     68.34      66.24          9     269118      23464     119624
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total   461494    226.26     218.98        368     300892     187491     137156

Misses in library cache during parse: 40
Misses in library cache during execute: 33

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                  692413        0.00          1.20
  SQL*Net message from client                692412        0.75       1183.87
  db file sequential read                       361        0.02          0.65
  log file sync                                5920        0.62          9.09
Does a high "SQL*Net message from client" wait mean network latency?
Thanks in advance.
and we said...
In general, sqlnet message from client is the "idle wait event experienced by the server while waiting for the client to tell it to do something".
http://asktom.oracle.com/pls/ask ... 4234872764#1639863671441
for example - say the client
o at 12:00:00.0000 submits "select * from dual"
o spends 30 seconds pondering the results
o at 12:00:30.0000 submits "select sysdate from dual"
you will see some 30 seconds of "sql net message from client" wait.
If you have an application that should CONSTANTLY be doing work in the database (e.g. a batch process) and you see a high "SQL*Net message from client" wait - that is time spent in the client, outside the database - and it could be a problem in that case (the client is spending a lot of time doing something OUTSIDE of the database).
So, if a developer comes to you and says "the database is slow, my program takes 20-25 minutes to run, the database is very slow" and you generate a tkprof report that looks like the above, you can say to them:
Look - the database spent 1,183 seconds (about 20 minutes) waiting to be told to do something by you. When you did tell us to do stuff, we finished it in about 218.98 seconds (the time spent doing SQL). So, we spent under 4 minutes doing work in the database - and you spent 20 minutes doing "something". Now, even if we made the database infinitely fast - you would still take 20 minutes to run. Since we cannot make the database infinitely fast - I suggest you figure out where you are spending your 20 minutes, and I'll look in the tkprof to see if I can make our 4 minutes any smaller.
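The time accounting behind that argument can be checked directly from the two numbers in the report (client wait total and total elapsed SQL time):

```python
# Back-of-the-envelope accounting from the tkprof report above.
client_wait = 1183.87   # "SQL*Net message from client" Total Waited, seconds
db_elapsed = 218.98     # total elapsed for all non-recursive statements, seconds

print(f"client time:   {client_wait / 60:.1f} minutes")
print(f"database time: {db_elapsed / 60:.1f} minutes")
print(f"ratio:         {client_wait / db_elapsed:.1f}x more time in the client")
```

So even an infinitely fast database removes only the ~3.6 database minutes; the rest of the runtime belongs to the client.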
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse   160878     26.95      25.80          4         18         30          0
Execute 160878    130.97     126.94        355      31756     163997      17532
Fetch   139738     68.34      66.24          9     269118      23464     119624
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total   461494    226.26     218.98        368     300892     187491     137156

Misses in library cache during parse: 40
Misses in library cache during execute: 33

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                  692413        0.00          1.20
  SQL*Net message from client                692412        0.75       1183.87
  db file sequential read                       361        0.02          0.65
  log file sync                                5920        0.62          9.09
so, what can we say about that:
a) your parse count = your execute count. Since I doubt you have 160,878 UNIQUE sql statements, your developers need to learn that you do not need to parse every time you execute - they should parse ONCE and execute many times. Over 10% of your runtime was spent parsing sql - not executing it. That is a huge scalability inhibitor and work that need not be done.
A sql statement needs to be parsed at LEAST once by a program in order to execute it. A sql statement needs to be parsed at MOST once by a program in order to execute it as many times as it likes.
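The difference can be sketched with the Python standard library's sqlite3 driver standing in for an Oracle client (the table and data here are made up for the illustration). The anti-pattern sends a fresh literal statement for every row, forcing a parse per execute; binding variables and using `executemany` lets the driver parse the statement once and execute it many times:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

# Anti-pattern: a new literal statement each iteration -> one parse per execute.
for i in range(100):
    conn.execute(f"INSERT INTO t (id, val) VALUES ({i}, 'row{i}')")

# Pattern: one statement with bind variables, parsed once, executed many times.
conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(100)])

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])
```

The same idea applies in any Oracle client API: keep the statement handle (cursor) open, bind new values, and re-execute rather than re-parsing the text.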
b) Most of your parses are soft parses ("Misses in library cache during parse: 40" - only 40 hard parses). That reinforces the comments in (a): you have few unique statements - but you parse them over and over.
c) your shared pool might be small - maybe. Something to investigate. 33 times during your execution, a statement that had been in the shared pool was not there anymore, and we did an implicit hard parse. That could also be caused by other things (DDL, gathering statistics, another program flooding the shared pool with tons of literal sql...).
d) you run tons of tiny sql - based on the number of IOs (query+current) divided by the number of executions, each execution does only about 3 buffer gets. Perhaps you have the classic slow-by-slow program that would benefit greatly from set-based operations.
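A rough sketch of both halves of that observation - first the gets-per-execute arithmetic from the report totals, then row-at-a-time processing next to the equivalent set-based statement, using the stdlib sqlite3 module with invented table and column names:

```python
import sqlite3

# Gets per execute, from the report totals above.
buffer_gets = 300892 + 187491          # query + current
print(f"{buffer_gets / 160878:.1f} buffer gets per execute")

# Slow-by-slow vs. set-based, sketched with sqlite3
# (the emp table and sal column are made up for the example).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(i, 100) for i in range(5)])

# Slow by slow: fetch each row, then issue a separate UPDATE for it.
for (emp_id,) in conn.execute("SELECT id FROM emp").fetchall():
    conn.execute("UPDATE emp SET sal = sal * 2 WHERE id = ?", (emp_id,))

# Set based: one statement does the same amount of work in one call.
conn.execute("UPDATE emp SET sal = sal / 2")

print(conn.execute("SELECT SUM(sal) FROM emp").fetchone()[0])
```

One set-based statement replaces N round trips and N tiny executions, which is exactly what the parse/execute counts in this report suggest is missing.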
e) you have no significant waits. You waited 5,920 times for a commit to finish (the log file sync waits), so each commit caused you to wait about 0.0015 seconds - not bad.
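That per-commit figure comes straight from the two numbers on the log file sync line:

```python
# Average wait per commit implied by the report above.
log_file_sync_total = 9.09   # Total Waited, seconds
commit_waits = 5920          # Times Waited

print(f"{log_file_sync_total / commit_waits:.4f} seconds per commit")
```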
f) most of your time was spent in the client - you spent 1,183 seconds waiting, while connected to the database, to be told to do something. If the client is an interactive application - that might well be OK (see the link above to the sqlplus example). That would be end user think time, and end users are slow. If this is a batch process - you probably have a problem with the client - it is using more than 5 units of time for every 1 unit of database time.