\chapter{The Python Profiler}
\label{profile}

Copyright \copyright{} 1994, by InfoSeek Corporation, all rights reserved.
\index{InfoSeek Corporation}

Written by James Roskind\index{Roskind, James}.%
\footnote{
Updated and converted to \LaTeX\ by Guido van Rossum.  The references to
the old profiler are left in the text, although it no longer exists.
}

Permission to use, copy, modify, and distribute this Python software
and its associated documentation for any purpose (subject to the
restriction in the following sentence) without fee is hereby granted,
provided that the above copyright notice appears in all copies, and
that both that copyright notice and this permission notice appear in
supporting documentation, and that the name of InfoSeek not be used in
advertising or publicity pertaining to distribution of the software
without specific, written prior permission.  This permission is
explicitly restricted to the copying and modification of the software
to remain in Python, compiled Python, or other languages (such as C)
wherein the modified or derived code is exclusively imported into a
Python module.

INFOSEEK CORPORATION DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS. IN NO EVENT SHALL INFOSEEK CORPORATION BE LIABLE FOR ANY
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.


The profiler was written after only programming in Python for 3 weeks.
As a result, it is probably clumsy code, but I don't know for sure yet
'cause I'm a beginner :-).  I did work hard to make the code run fast,
so that profiling would be a reasonable thing to do.  I tried not to
repeat code fragments, but I'm sure I did some stuff in really awkward
ways at times.  Please send suggestions for improvements to:
\email{jar@netscape.com}.  I won't promise \emph{any} support.  ...but
I'd appreciate the feedback.


\section{Introduction to the profiler}
\nodename{Profiler Introduction}

A \dfn{profiler} is a program that describes the run time performance
of a program, providing a variety of statistics.  This documentation
describes the profiler functionality provided in the modules
\module{profile} and \module{pstats}.  This profiler provides
\dfn{deterministic profiling} of any Python programs.  It also
provides a series of report generation tools to allow users to rapidly
examine the results of a profile operation.
\index{deterministic profiling}
\index{profiling, deterministic}


\section{How Is This Profiler Different From The Old Profiler?}
\nodename{Profiler Changes}

(This section is of historical importance only; the old profiler
discussed here was last seen in Python 1.1.)

The big changes from the old profiling module are that you get more
information, and you pay less CPU time.  It's not a trade-off, it's a
trade-up.

To be specific:

\begin{description}

\item[Bugs removed:]
The local stack frame is no longer molested; execution time is now
charged to the correct functions.

\item[Accuracy increased:]
Profiler execution time is no longer charged to the user's code,
calibration for a given platform is supported, and file reads are not
done \emph{by} the profiler \emph{during} profiling (and charged to the
user's code!).

\item[Speed increased:]
The overhead CPU cost was reduced by more than a factor of two (perhaps
a factor of five), the lightweight profiler module is all that must be
loaded, and the report generating module (\module{pstats}) is not
needed during profiling.

\item[Recursive functions support:]
Cumulative times in recursive functions are correctly calculated;
recursive entries are counted.

\item[Large growth in report generating UI:]
Distinct profile runs can be added together to form a comprehensive
report; functions that import statistics take arbitrary lists of
files; sorting criteria are now based on keywords (instead of 4 integer
options); reports show what functions were profiled as well as what
profile file was referenced; and the output format has been improved.

\end{description}


\section{Instant Users Manual}

This section is provided for users that ``don't want to read the
manual.'' It provides a very brief overview, and allows a user to
rapidly perform profiling on an existing application.

To profile an application with a main entry point of \samp{foo()}, you
would add the following to your module:

\begin{verbatim}
import profile
profile.run("foo()")
\end{verbatim}
%
The above action would cause \samp{foo()} to be run, and a series of
informative lines (the profile) to be printed.  The above approach is
most useful when working with the interpreter.  If you would like to
save the results of a profile into a file for later examination, you
can supply a file name as the second argument to the \function{run()}
function:

\begin{verbatim}
import profile
profile.run("foo()", 'fooprof')
\end{verbatim}
%
The file \file{profile.py} can also be invoked as
a script to profile another script.  For example:

\begin{verbatim}
python /usr/local/lib/python1.5/profile.py myscript.py
\end{verbatim}

When you wish to review the profile, you should use the methods in the
\module{pstats} module.  Typically you would load the statistics data as
follows:

\begin{verbatim}
import pstats
p = pstats.Stats('fooprof')
\end{verbatim}
%
The class \class{Stats} (the above code just created an instance of
this class) has a variety of methods for manipulating and printing the
data that was just read into \samp{p}.  When you ran
\function{profile.run()} above, what was printed was the result of three
method calls:

\begin{verbatim}
p.strip_dirs().sort_stats(-1).print_stats()
\end{verbatim}
%
The first method removed the extraneous path from all the module
names. The second method sorted all the entries according to the
standard module/line/name string that is printed (this is to comply
with the semantics of the old profiler).  The third method printed out
all the statistics.  You might try the following sort calls:

\begin{verbatim}
p.sort_stats('name')
p.print_stats()
\end{verbatim}
%
The first call will actually sort the list by function name, and the
second call will print out the statistics.  The following are some
interesting calls to experiment with:

\begin{verbatim}
p.sort_stats('cumulative').print_stats(10)
\end{verbatim}
%
This sorts the profile by cumulative time in a function, and then only
prints the ten most significant lines.  If you want to understand what
algorithms are taking time, the above line is what you would use.

If you were looking to see what functions were looping a lot, and
taking a lot of time, you would do:

\begin{verbatim}
p.sort_stats('time').print_stats(10)
\end{verbatim}
%
to sort according to time spent within each function, and then print
the statistics for the top ten functions.

You might also try:

\begin{verbatim}
p.sort_stats('file').print_stats('__init__')
\end{verbatim}
%
This will sort all the statistics by file name, and then print out
statistics for only the class init methods ('cause they are spelled
with \samp{__init__} in them).  As one final example, you could try:

\begin{verbatim}
p.sort_stats('time', 'cum').print_stats(.5, 'init')
\end{verbatim}
%
This line sorts statistics with a primary key of time, and a secondary
key of cumulative time, and then prints out some of the statistics.
To be specific, the list is first culled down to 50\% (re: \samp{.5})
of its original size, then only lines containing \code{init} are
maintained, and that sub-sub-list is printed.

If you wondered what functions called the above functions, you could
now (\samp{p} is still sorted according to the last criteria) do:

\begin{verbatim}
p.print_callers(.5, 'init')
\end{verbatim}

and you would get a list of callers for each of the listed functions. 

If you want more functionality, you're going to have to read the
manual, or guess what the following functions do:

\begin{verbatim}
p.print_callees()
p.add('fooprof')
\end{verbatim}
%
\section{What Is Deterministic Profiling?}
\nodename{Deterministic Profiling}

\dfn{Deterministic profiling} is meant to reflect the fact that all
\dfn{function call}, \dfn{function return}, and \dfn{exception} events
are monitored, and precise timings are made for the intervals between
these events (during which time the user's code is executing).  In
contrast, \dfn{statistical profiling} (which is not done by this
module) randomly samples the effective instruction pointer, and
deduces where time is being spent.  The latter technique traditionally
involves less overhead (as the code does not need to be instrumented),
but provides only relative indications of where time is being spent.

In Python, since there is an interpreter active during execution, the
presence of instrumented code is not required to do deterministic
profiling.  Python automatically provides a \dfn{hook} (optional
callback) for each event.  In addition, the interpreted nature of
Python tends to add so much overhead to execution that deterministic
profiling adds only a small processing overhead in typical
applications.  The result is that deterministic profiling is not that
expensive, yet provides extensive run time statistics about the
execution of a Python program.
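The hook mentioned above can be observed directly with
\function{sys.setprofile()}, the interpreter entry point that the
profiler itself builds on.  The following sketch (the callback name
\code{count_events} and the function \code{work()} are invented for
the example) records call and return events:

\begin{verbatim}
import sys

events = []

def count_events(frame, event, arg):
    # The interpreter invokes this hook for 'call', 'return',
    # 'exception', and C function events; keep only the first two.
    if event in ('call', 'return'):
        events.append((event, frame.f_code.co_name))

def work():
    return sum(range(10))

sys.setprofile(count_events)   # install the hook
work()
sys.setprofile(None)           # remove the hook
\end{verbatim}

After this runs, \code{events} contains the pair \code{('call',
'work')} and \code{('return', 'work')}; events for \C{} functions
(such as the call to \function{sum()}) arrive under distinct event
names and are filtered out here.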

Call count statistics can be used to identify bugs in code (surprising
counts), and to identify possible inline-expansion points (high call
counts).  Internal time statistics can be used to identify ``hot
loops'' that should be carefully optimized.  Cumulative time
statistics should be used to identify high level errors in the
selection of algorithms.  Note that the unusual handling of cumulative
times in this profiler allows statistics for recursive implementations
of algorithms to be directly compared to iterative implementations.


\section{Reference Manual}
\stmodindex{profile}
\label{module-profile}


The primary entry point for the profiler is the global function
\function{profile.run()}.  It is typically used to create any profile
information.  The reports are formatted and printed using methods of
the class \class{pstats.Stats}.  The following is a description of all
of these standard entry points and functions.  For a more in-depth
view of some of the code, consider reading the later section on
Profiler Extensions, which includes discussion of how to derive
``better'' profilers from the classes presented, or reading the source
code for these modules.

\begin{funcdesc}{run}{string\optional{, filename\optional{, ...}}}

This function takes a single argument that can be passed to the
\keyword{exec} statement, and an optional file name.  In all cases this
routine attempts to \keyword{exec} its first argument, and gather profiling
statistics from the execution. If no file name is present, then this
function automatically prints a simple profiling report, sorted by the
standard name string (file/line/function-name) that is presented in
each line.  The following is a typical output from such a call:

\begin{verbatim}
      main()
      2706 function calls (2004 primitive calls) in 4.504 CPU seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     2    0.006    0.003    0.953    0.477 pobject.py:75(save_objects)
  43/3    0.533    0.012    0.749    0.250 pobject.py:99(evaluate)
 ...
\end{verbatim}

The first line indicates that this profile was generated by the call:\\
\code{profile.run('main()')}, and hence the exec'ed string is
\code{'main()'}.  The second line indicates that 2706 calls were
monitored.  Of those calls, 2004 were \dfn{primitive}.  We define
\dfn{primitive} to mean that the call was not induced via recursion.
The next line: \code{Ordered by:\ standard name}, indicates that
the text string in the far right column was used to sort the output.
The column headings include:

\begin{description}

\item[ncalls ]
for the number of calls, 

\item[tottime ]
for the total time spent in the given function (and excluding time
made in calls to sub-functions),

\item[percall ]
is the quotient of \code{tottime} divided by \code{ncalls}

\item[cumtime ]
is the total time spent in this and all subfunctions (i.e., from
invocation till exit). This figure is accurate \emph{even} for recursive
functions.

\item[percall ]
is the quotient of \code{cumtime} divided by primitive calls

\item[filename:lineno(function) ]
provides the respective data of each function

\end{description}

When there are two numbers in the first column (e.g.: \samp{43/3}),
then the latter is the number of primitive calls, and the former is
the actual number of calls.  Note that when the function does not
recurse, these two values are the same, and only the single figure is
printed.
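
The recursion figure is easy to reproduce with a small experiment.
The sketch below assumes a version of \module{pstats} whose
\class{Stats} constructor accepts a profiler instance and a stream
argument; the recursive \code{fib()} function is invented for the
example:

\begin{verbatim}
import io
import profile
import pstats

def fib(n):
    # Deliberately recursive: only the outermost call is primitive.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

pr = profile.Profile()
pr.runcall(fib, 10)

buf = io.StringIO()
pstats.Stats(pr, stream=buf).print_stats('fib')
report = buf.getvalue()
\end{verbatim}

The line for \code{fib} in the report shows an \code{ncalls} entry of
\samp{177/1}: 177 calls in all, only one of them primitive.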

\end{funcdesc}

Analysis of the profiler data is done using this class from the
\module{pstats} module:

% now switch modules....
\stmodindex{pstats}

\begin{classdesc}{Stats}{filename\optional{, ...}}
This class constructor creates an instance of a ``statistics object''
from a \var{filename} (or set of filenames).  \class{Stats} objects are
manipulated by methods, in order to print useful reports.

The file selected by the above constructor must have been created by
the corresponding version of \module{profile}.  To be specific, there is
\emph{no} file compatibility guaranteed with future versions of this
profiler, and there is no compatibility with files produced by other
profilers (e.g., the old system profiler).

If several files are provided, all the statistics for identical
functions will be coalesced, so that an overall view of several
processes can be considered in a single report.  If additional files
need to be combined with data in an existing \class{Stats} object, the
\method{add()} method can be used.
\end{classdesc}
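
The coalescing behavior can be seen in a small sketch.  The
\code{work()} function, the temporary file names, and the use of
\method{dump_stats()} to write the files are all just one way to
produce the input data:

\begin{verbatim}
import os
import profile
import pstats
import tempfile

def work():
    return [i * i for i in range(100)]

# Write two independent profile files (the names are arbitrary).
tmpdir = tempfile.mkdtemp()
run1 = os.path.join(tmpdir, 'prof1')
run2 = os.path.join(tmpdir, 'prof2')
for fname in (run1, run2):
    pr = profile.Profile()
    pr.runcall(work)
    pr.dump_stats(fname)

single = pstats.Stats(run1)
combined = pstats.Stats(run1, run2)   # statistics are coalesced
\end{verbatim}

Since both runs execute identical code, the combined object reports
exactly twice the call count of either run alone;
\code{pstats.Stats(run1).add(run2)} builds an equivalent object.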


\subsection{The \module{Stats} Class}

\setindexsubitem{(Stats method)}

\begin{methoddesc}{strip_dirs}{}
This method for the \class{Stats} class removes all leading path
information from file names.  It is very useful in reducing the size
of the printout to fit within (close to) 80 columns.  This method
modifies the object, and the stripped information is lost.  After
performing a strip operation, the object is considered to have its
entries in a ``random'' order, as it was just after object
initialization and loading.  If \method{strip_dirs()} causes two
function names to be indistinguishable (i.e., they are on the same
line of the same filename, and have the same function name), then the
statistics for these two entries are accumulated into a single entry.
\end{methoddesc}


\begin{methoddesc}{add}{filename\optional{, ...}}
This method of the \class{Stats} class accumulates additional
profiling information into the current profiling object.  Its
arguments should refer to filenames created by the corresponding
version of \function{profile.run()}.  Statistics for identically named
(re: file, line, name) functions are automatically accumulated into
single function statistics.
\end{methoddesc}

\begin{methoddesc}{sort_stats}{key\optional{, ...}}
This method modifies the \class{Stats} object by sorting it according
to the supplied criteria.  The argument is typically a string
identifying the basis of a sort (example: \code{"time"} or
\code{"name"}).

When more than one key is provided, then additional keys are used as
secondary criteria when there is equality in all keys selected
before them.  For example, \samp{sort_stats('name', 'file')} will sort
all the entries according to their function name, and resolve all ties
(identical function names) by sorting by file name.

Abbreviations can be used for any key names, as long as the
abbreviation is unambiguous.  The following are the keys currently
defined: 

\begin{tableii}{|l|l|}{code}{Valid Arg}{Meaning}
  \lineii{'calls'}{call count}
  \lineii{'cumulative'}{cumulative time}
  \lineii{'file'}{file name}
  \lineii{'module'}{file name}
  \lineii{'pcalls'}{primitive call count}
  \lineii{'line'}{line number}
  \lineii{'name'}{function name}
  \lineii{'nfl'}{name/file/line}
  \lineii{'stdname'}{standard name}
  \lineii{'time'}{internal time}
\end{tableii}

Note that all sorts on statistics are in descending order (placing
most time consuming items first), whereas name, file, and line number
searches are in ascending order (i.e., alphabetical). The subtle
distinction between \code{"nfl"} and \code{"stdname"} is that the
standard name is a sort of the name as printed, which means that the
embedded line numbers get compared in an odd way.  For example, lines
3, 20, and 40 would (if the file names were the same) appear in the
string order 20, 3 and 40.  In contrast, \code{"nfl"} does a numeric
compare of the line numbers.  In fact, \code{sort_stats("nfl")} is the
same as \code{sort_stats("name", "file", "line")}.

For compatibility with the old profiler, the numeric arguments
\samp{-1}, \samp{0}, \samp{1}, and \samp{2} are permitted.  They are
interpreted as \code{"stdname"}, \code{"calls"}, \code{"time"}, and
\code{"cumulative"} respectively.  If this old style format (numeric)
is used, only one sort key (the numeric key) will be used, and
additional arguments will be silently ignored.
\end{methoddesc}
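
These rules can be checked with a small sketch.  The functions
\code{alpha()} and \code{beta()} are invented, and \code{fcn_list} is
an internal attribute of \class{Stats} that holds the current
ordering; it is peeked at here only to make the ordering visible:

\begin{verbatim}
import profile
import pstats

def alpha():
    return sum(range(1000))

def beta():
    return alpha()

pr = profile.Profile()
pr.runcall(beta)
st = pstats.Stats(pr)

# 'name' sorts ascending (alphabetically), so alpha precedes beta.
st.sort_stats('name')
names = [name for (filename, lineno, name) in st.fcn_list]

# 'nfl' yields the same ordering as the three explicit keys.
st.sort_stats('nfl')
by_nfl = list(st.fcn_list)
st.sort_stats('name', 'file', 'line')
by_keys = list(st.fcn_list)
\end{verbatim}

Here \code{names.index('alpha') < names.index('beta')}, and
\code{by_nfl == by_keys}.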


\begin{methoddesc}{reverse_order}{}
This method for the \class{Stats} class reverses the ordering of the basic
list within the object.  This method is provided primarily for
compatibility with the old profiler.  Its utility is questionable
now that ascending vs descending order is properly selected based on
the sort key of choice.
\end{methoddesc}

\begin{methoddesc}{print_stats}{restriction\optional{, ...}}
This method for the \class{Stats} class prints out a report as described
in the \function{profile.run()} definition.

The order of the printing is based on the last \method{sort_stats()}
operation done on the object (subject to caveats in \method{add()} and
\method{strip_dirs()}).

The arguments provided (if any) can be used to limit the list down to
the significant entries.  Initially, the list is taken to be the
complete set of profiled functions.  Each restriction is either an
integer (to select a count of lines), or a decimal fraction between
0.0 and 1.0 inclusive (to select a percentage of lines), or a regular
expression (to pattern match the standard name that is printed; as of
Python 1.5b1, this uses the Perl-style regular expression syntax
defined by the \module{re} module).  If several restrictions are
provided, then they are applied sequentially.  For example:

\begin{verbatim}
print_stats(.1, "foo:")
\end{verbatim}

would first limit the printing to the first 10\% of the list, and then
only print functions that were part of filename \samp{.*foo:}.  In
contrast, the command:

\begin{verbatim}
print_stats("foo:", .1)
\end{verbatim}

would limit the list to all functions having file names \samp{.*foo:},
and then proceed to only print the first 10\% of them.
\end{methoddesc}
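
The sequential application of restrictions can be verified directly.
In the sketch below the functions \code{outer()} and \code{inner()}
are invented, and the parentheses in the pattern are escaped because
the restriction is a regular expression rather than a literal string:

\begin{verbatim}
import io
import profile
import pstats

def inner():
    return sum(range(100))

def outer():
    return inner()

pr = profile.Profile()
pr.runcall(outer)

buf = io.StringIO()
# Keep only entries whose standard name matches the pattern.
pstats.Stats(pr, stream=buf).print_stats(r'\(outer\)')
report = buf.getvalue()
\end{verbatim}

The report then lists \code{outer} but not \code{inner}.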


\begin{methoddesc}{print_callers}{restrictions\optional{, ...}}
This method for the \class{Stats} class prints a list of all functions
that called each function in the profiled database.  The ordering is
identical to that provided by \method{print_stats()}, and the definition
of the restricting argument is also identical.  For convenience, a
number is shown in parentheses after each caller to show how many
times this specific call was made.  A second non-parenthesized number
is the cumulative time spent in the function at the right.
\end{methoddesc}

\begin{methoddesc}{print_callees}{restrictions\optional{, ...}}
This method for the \class{Stats} class prints a list of all functions
that were called by the indicated function.  Aside from this reversal
of direction of calls (re: called vs was called by), the arguments and
ordering are identical to the \method{print_callers()} method.
\end{methoddesc}

\begin{methoddesc}{ignore}{}
This method of the \class{Stats} class is used to dispose of the value
returned by earlier methods.  All standard methods in this class
return the instance that is being processed, so that the commands can
be strung together.  For example:

\begin{verbatim}
pstats.Stats('foofile').strip_dirs().sort_stats('cum') \
                       .print_stats().ignore()
\end{verbatim}

would perform all the indicated functions, but it would not return
the final reference to the \class{Stats} instance.%
\footnote{
This was once necessary, when Python would print any unused expression
result that was not \code{None}.  The method is still defined for
backward compatibility.
}
\end{methoddesc}


\section{Limitations}

There are two fundamental limitations on this profiler.  The first is
that it relies on the Python interpreter to dispatch \dfn{call},
\dfn{return}, and \dfn{exception} events.  Compiled \C{} code does not
get interpreted, and hence is ``invisible'' to the profiler.  All time
spent in \C{} code (including built-in functions) will be charged to the
Python function that invoked the \C{} code.  If the \C{} code calls out
to some native Python code, then those calls will be profiled
properly.

The second limitation has to do with accuracy of timing information.
There is a fundamental problem with deterministic profilers involving
accuracy.  The most obvious restriction is that the underlying ``clock''
is only ticking at a rate (typically) of about .001 seconds.  Hence no
measurements will be more accurate than that underlying clock.  If
enough measurements are taken, then the ``error'' will tend to average
out. Unfortunately, removing this first error induces a second source
of error...

The second problem is that it ``takes a while'' from when an event is
dispatched until the profiler's call to get the time actually
\emph{gets} the state of the clock.  Similarly, there is a certain lag
when exiting the profiler event handler from the time that the clock's
value was obtained (and then squirreled away), until the user's code
is once again executing.  As a result, functions that are called many
times, or call many functions, will typically accumulate this error.
The error that accumulates in this fashion is typically less than the
accuracy of the clock (i.e., less than one clock tick), but it
\emph{can} accumulate and become very significant.  This profiler
provides a means of calibrating itself for a given platform so that
this error can be probabilistically (i.e., on the average) removed.
After the profiler is calibrated, it will be more accurate (in a
least-squares sense), but it will sometimes produce negative numbers
(when call counts are exceptionally low, and the gods of probability
work against you :-)).  Do \emph{NOT} be alarmed by negative numbers in
the profile.  They should \emph{only} appear if you have calibrated
your profiler, and the results are actually better than without
calibration.


\section{Calibration}

The profiler class has a hard coded constant that is added to each
event handling time to compensate for the overhead of calling the time
function, and socking away the results.  The following procedure can
be used to obtain this constant for a given platform (see discussion
in section Limitations above).

\begin{verbatim}
import profile
pr = profile.Profile()
print pr.calibrate(100)
print pr.calibrate(100)
print pr.calibrate(100)
\end{verbatim}

The argument to \method{calibrate()} is the number of times to try to
do the sample calls to get the CPU times.  If your computer is
\emph{very} fast, you might have to do:

\begin{verbatim}
pr.calibrate(1000)
\end{verbatim}

or even:

\begin{verbatim}
pr.calibrate(10000)
\end{verbatim}

The object of this exercise is to get a fairly consistent result.
When you have a consistent answer, you are ready to use that number in
the source code.  For a Sun Sparcstation 1000 running Solaris 2.3, the
magical number is about .00053.  If you have a choice, you are better
off with a smaller constant, and your results will ``less often'' show
up as negative in profile statistics.

The following shows how the \method{trace_dispatch()} method in the
\class{Profile} class should be modified to install the calibration
constant on a Sun Sparcstation 1000:

\begin{verbatim}
def trace_dispatch(self, frame, event, arg):
    t = self.timer()
    t = t[0] + t[1] - self.t - .00053 # Calibration constant

    if self.dispatch[event](frame,t):
        t = self.timer()
        self.t = t[0] + t[1]
    else:
        r = self.timer()
        self.t = r[0] + r[1] - t # put back unrecorded delta
    return
\end{verbatim}

Note that if there is no calibration constant, then the line
containing the calibration constant should simply say:

\begin{verbatim}
t = t[0] + t[1] - self.t  # no calibration constant
\end{verbatim}

You can also achieve the same results using a derived class (and the
profiler will actually run equally fast!!), but the above method is
the simplest to use.  I could have made the profiler ``self
calibrating'', but it would have made the initialization of the
profiler class slower, and would have required some \emph{very} fancy
coding, or else the use of a variable where the constant \samp{.00053}
was placed in the code shown.  This is a \strong{VERY} critical
performance section, and there is no reason to use a variable lookup
at this point, when a constant can be used.


\section{Extensions --- Deriving Better Profilers}
\nodename{Profiler Extensions}

The \class{Profile} class of module \module{profile} was written so that
derived classes could be developed to extend the profiler.  Rather
than describing all the details of such an effort, I'll just present
the following two examples of derived classes that can be used to do
profiling.  If the reader is an avid Python programmer, then it should
be possible to use these as a model and create similar (and perchance
better) profile classes.

If all you want to do is change how the timer is called, or which
timer function is used, then the basic class has an option for that in
the constructor for the class.  Consider passing the name of a
function to call into the constructor:

\begin{verbatim}
pr = profile.Profile(your_time_func)
\end{verbatim}

The resulting profiler will call \code{your_time_func()} instead of
\function{os.times()}.  The function should return either a single number
or a list of numbers (like what \function{os.times()} returns).  If the
function returns a single time number, or the list of returned numbers
has length 2, then you will get an especially fast version of the
dispatch routine.

Be warned that you \emph{should} calibrate the profiler class for the
timer function that you choose.  For most machines, a timer that
returns a lone integer value will provide the best results in terms of
low overhead during profiling.  (\function{os.times()} is
\emph{pretty} bad, 'cause it returns a tuple of floating point values,
so all arithmetic is floating point in the profiler!).  If you want to
substitute a better timer in the cleanest fashion, you should derive a
class, and simply put in the replacement dispatch method that better
handles your timer call, along with the appropriate calibration
constant :-).
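
As a sketch of the custom timer option: newer versions of the standard
library provide \function{time.process_time()}, which returns a single
floating point value and therefore takes the fast dispatch path
described above (the \code{busy()} function is invented for the
example):

\begin{verbatim}
import profile
import time

def busy():
    total = 0
    for i in range(10000):
        total = total + i
    return total

# Any zero-argument function returning a number can serve as the
# timer; time.process_time() returns one float, so the fast path
# of the dispatch routine is used.
pr = profile.Profile(time.process_time)
result = pr.runcall(busy)
pr.create_stats()
\end{verbatim}

The profiled call still returns its normal value, and after
\method{create_stats()} the \code{pr.stats} dictionary contains an
entry for \code{busy()}.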


\subsection{OldProfile Class}

The following derived profiler simulates the old style profiler,
providing errant results on recursive functions.  This profiler is
useful because it runs faster (i.e., with less overhead) than the old
profiler.  It still creates all the caller stats, and is quite useful
when there is \emph{no} recursion in the user's code.  It is also a lot
more accurate than the old profiler, as it does not charge all its
overhead time to the user's code.

\begin{verbatim}
class OldProfile(Profile):

    def trace_dispatch_exception(self, frame, t):
        rt, rtt, rct, rfn, rframe, rcur = self.cur
        if rcur and not rframe is frame:
            return self.trace_dispatch_return(rframe, t)
        return 0

    def trace_dispatch_call(self, frame, t):
        fn = `frame.f_code`
        
        self.cur = (t, 0, 0, fn, frame, self.cur)
        if self.timings.has_key(fn):
            tt, ct, callers = self.timings[fn]
            self.timings[fn] = tt, ct, callers
        else:
            self.timings[fn] = 0, 0, {}
        return 1

    def trace_dispatch_return(self, frame, t):
        rt, rtt, rct, rfn, frame, rcur = self.cur
        rtt = rtt + t
        sft = rtt + rct

        pt, ptt, pct, pfn, pframe, pcur = rcur
        self.cur = pt, ptt+rt, pct+sft, pfn, pframe, pcur

        tt, ct, callers = self.timings[rfn]
        if callers.has_key(pfn):
            callers[pfn] = callers[pfn] + 1
        else:
            callers[pfn] = 1
        self.timings[rfn] = tt+rtt, ct + sft, callers

        return 1


    def snapshot_stats(self):
        self.stats = {}
        for func in self.timings.keys():
            tt, ct, callers = self.timings[func]
            nor_func = self.func_normalize(func)
            nor_callers = {}
            nc = 0
            for func_caller in callers.keys():
                nor_callers[self.func_normalize(func_caller)] = \
                    callers[func_caller]
                nc = nc + callers[func_caller]
            self.stats[nor_func] = nc, nc, tt, ct, nor_callers
\end{verbatim}

\subsection{HotProfile Class}

This profiler is the fastest derived profile example.  It does not
calculate caller-callee relationships, and does not calculate
cumulative time under a function.  It only calculates time spent in a
function, so it runs very quickly (re: very low overhead).  In truth,
the basic profiler is so fast that it is probably not worth the savings
to give up the data, but this class still provides a nice example.

\begin{verbatim}
class HotProfile(Profile):

    def trace_dispatch_exception(self, frame, t):
        rt, rtt, rfn, rframe, rcur = self.cur
        if rcur and not rframe is frame:
            return self.trace_dispatch_return(rframe, t)
        return 0

    def trace_dispatch_call(self, frame, t):
        self.cur = (t, 0, frame, self.cur)
        return 1

    def trace_dispatch_return(self, frame, t):
        rt, rtt, frame, rcur = self.cur

        rfn = `frame.f_code`

        pt, ptt, pframe, pcur = rcur
        self.cur = pt, ptt+rt, pframe, pcur

        if self.timings.has_key(rfn):
            nc, tt = self.timings[rfn]
            self.timings[rfn] = nc + 1, rt + rtt + tt
        else:
            self.timings[rfn] =      1, rt + rtt

        return 1


    def snapshot_stats(self):
        self.stats = {}
        for func in self.timings.keys():
            nc, tt = self.timings[func]
            nor_func = self.func_normalize(func)
            self.stats[nor_func] = nc, nc, tt, 0, {}
\end{verbatim}