path: root/Doc/lib/libprofile.tex
author    Fred Drake <fdrake@acm.org>    1998-02-13 06:58:54 (GMT)
committer Fred Drake <fdrake@acm.org>    1998-02-13 06:58:54 (GMT)
commit    1947991c2f85db781fb3fcdc9e3bcfe2905e58e2 (patch)
tree      260789493c7151408f009eaa84a7815ce4d28246 /Doc/lib/libprofile.tex
parent    dc8af0acc1fbeec89e43f1ea43bf1a4d016f4fc6 (diff)
Remove all \bcode / \ecode cruft; this is no longer needed. See previous checkin of myformat.sty.
Change "\renewcommand{\indexsubitem}{(...)}" to "\setindexsubitem{(...)}" everywhere.
Some other minor nits that I happened to come across.
Diffstat (limited to 'Doc/lib/libprofile.tex')
-rw-r--r--  Doc/lib/libprofile.tex | 96
1 file changed, 48 insertions(+), 48 deletions(-)
diff --git a/Doc/lib/libprofile.tex b/Doc/lib/libprofile.tex
index a333744..20da0b6 100644
--- a/Doc/lib/libprofile.tex
+++ b/Doc/lib/libprofile.tex
@@ -106,10 +106,10 @@ rapidly perform profiling on an existing application.
To profile an application with a main entry point of \samp{foo()}, you
would add the following to your module:
-\bcode\begin{verbatim}
+\begin{verbatim}
import profile
profile.run("foo()")
-\end{verbatim}\ecode
+\end{verbatim}
%
The above action would cause \samp{foo()} to be run, and a series of
informative lines (the profile) to be printed. The above approach is
@@ -118,10 +118,10 @@ save the results of a profile into a file for later examination, you
can supply a file name as the second argument to the \code{run()}
function:
-\bcode\begin{verbatim}
+\begin{verbatim}
import profile
profile.run("foo()", 'fooprof')
-\end{verbatim}\ecode
+\end{verbatim}
%
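A self-contained sketch of this pattern, with a hypothetical \samp{busy()} standing in for \samp{foo()}:
\begin{verbatim}
import profile

def busy():
    # Hypothetical workload standing in for foo().
    total = 0
    for i in range(100000):
        total = total + i
    return total

# Run busy() under the profiler and save the raw data in 'fooprof'.
# profile.run() evaluates the string in the __main__ namespace, so
# this sketch is assumed to live in the script actually being run.
profile.run("busy()", 'fooprof')
\end{verbatim}
%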
\code{profile.py} can also be invoked as
a script to profile another script. For example:
@@ -131,10 +131,10 @@ When you wish to review the profile, you should use the methods in the
\code{pstats} module. Typically you would load the statistics data as
follows:
-\bcode\begin{verbatim}
+\begin{verbatim}
import pstats
p = pstats.Stats('fooprof')
-\end{verbatim}\ecode
+\end{verbatim}
%
The class \code{Stats} (the above code just created an instance of
this class) has a variety of methods for manipulating and printing the
@@ -142,9 +142,9 @@ data that was just read into \samp{p}. When you ran
\code{profile.run()} above, what was printed was the result of three
method calls:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.strip_dirs().sort_stats(-1).print_stats()
-\end{verbatim}\ecode
+\end{verbatim}
%
The first method removed the extraneous path from all the module
names. The second method sorted all the entries according to the
@@ -152,18 +152,18 @@ standard module/line/name string that is printed (this is to comply
with the semantics of the old profiler). The third method printed out
all the statistics. You might try the following sort calls:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.sort_stats('name')
p.print_stats()
-\end{verbatim}\ecode
+\end{verbatim}
%
The first call will actually sort the list by function name, and the
second call will print out the statistics. The following are some
interesting calls to experiment with:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.sort_stats('cumulative').print_stats(10)
-\end{verbatim}\ecode
+\end{verbatim}
%
This sorts the profile by cumulative time in a function, and then only
prints the ten most significant lines. If you want to understand what
@@ -172,26 +172,26 @@ algorithms are taking time, the above line is what you would use.
If you were looking to see what functions were looping a lot, and
taking a lot of time, you would do:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.sort_stats('time').print_stats(10)
-\end{verbatim}\ecode
+\end{verbatim}
%
to sort according to time spent within each function, and then print
the statistics for the top ten functions.
You might also try:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.sort_stats('file').print_stats('__init__')
-\end{verbatim}\ecode
+\end{verbatim}
%
This will sort all the statistics by file name, and then print out
statistics for only the class init methods ('cause they are spelled
with \code{__init__} in them). As one final example, you could try:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.sort_stats('time', 'cum').print_stats(.5, 'init')
-\end{verbatim}\ecode
+\end{verbatim}
%
This line sorts statistics with a primary key of time, and a secondary
key of cumulative time, and then prints out some of the statistics.
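Pulling these calls together, one plausible browsing session over the hypothetical \samp{fooprof} data might look like:
\begin{verbatim}
import pstats

p = pstats.Stats('fooprof')
p.strip_dirs()

# Ten entries with the largest cumulative time.
p.sort_stats('cumulative').print_stats(10)

# Ten entries with the most time spent in the function body itself.
p.sort_stats('time').print_stats(10)

# Sort by time, then cumulative time; keep half the list, then only
# the entries whose standard name matches 'init'.
p.sort_stats('time', 'cum').print_stats(.5, 'init')
\end{verbatim}
%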
@@ -202,19 +202,19 @@ maintained, and that sub-sub-list is printed.
If you wondered what functions called the above functions, you could
now (\samp{p} is still sorted according to the last criteria) do:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.print_callers(.5, 'init')
-\end{verbatim}\ecode
+\end{verbatim}
%
and you would get a list of callers for each of the listed functions.
If you want more functionality, you're going to have to read the
manual, or guess what the following functions do:
-\bcode\begin{verbatim}
+\begin{verbatim}
p.print_callees()
p.add('fooprof')
-\end{verbatim}\ecode
+\end{verbatim}
%
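As a rough sketch of how those last calls relate (still using the same \samp{p} and the hypothetical \samp{fooprof} file):
\begin{verbatim}
# Who calls each function in the restricted list ...
p.print_callers(.5, 'init')

# ... and which functions each of them calls in turn.
p.print_callees(.5, 'init')

# Fold a second dump into the statistics already held by p.
p.add('fooprof')
\end{verbatim}
%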
\section{What Is Deterministic Profiling?}
\nodename{Deterministic Profiling}
@@ -251,7 +251,7 @@ of algorithms to be directly compared to iterative implementations.
\section{Reference Manual}
-\renewcommand{\indexsubitem}{(profiler function)}
+\setindexsubitem{(profiler function)}
The primary entry point for the profiler is the global function
\code{profile.run()}. It is typically used to create any profile
@@ -273,7 +273,7 @@ function automatically prints a simple profiling report, sorted by the
standard name string (file/line/function-name) that is presented in
each line. The following is a typical output from such a call:
-\bcode\begin{verbatim}
+\begin{verbatim}
main()
2706 function calls (2004 primitive calls) in 4.504 CPU seconds
@@ -283,7 +283,7 @@ ncalls tottime percall cumtime percall filename:lineno(function)
2 0.006 0.003 0.953 0.477 pobject.py:75(save_objects)
43/3 0.533 0.012 0.749 0.250 pobject.py:99(evaluate)
...
-\end{verbatim}\ecode
+\end{verbatim}
The first line indicates that this profile was generated by the call:\\
\code{profile.run('main()')}, and hence the exec'ed string is
@@ -348,7 +348,7 @@ need to be combined with data in an existing \code{Stats} object, the
\subsection{The \sectcode{Stats} Class}
-\renewcommand{\indexsubitem}{(Stats method)}
+\setindexsubitem{(Stats method)}
\begin{funcdesc}{strip_dirs}{}
This method for the \code{Stats} class removes all leading path information
@@ -447,17 +447,17 @@ Python 1.5b1, this uses the Perl-style regular expression syntax
defined by the \code{re} module). If several restrictions are
provided, then they are applied sequentially. For example:
-\bcode\begin{verbatim}
+\begin{verbatim}
print_stats(.1, "foo:")
-\end{verbatim}\ecode
+\end{verbatim}
%
would first limit the printing to first 10\% of list, and then only
print functions that were part of filename \samp{.*foo:}. In
contrast, the command:
-\bcode\begin{verbatim}
+\begin{verbatim}
print_stats("foo:", .1)
-\end{verbatim}\ecode
+\end{verbatim}
%
would limit the list to all functions having file names \samp{.*foo:},
and then proceed to only print the first 10\% of them.
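A sketch making the ordering concrete, assuming a \code{Stats} instance \samp{s} built from the earlier hypothetical \samp{fooprof} data:
\begin{verbatim}
s = pstats.Stats('fooprof').strip_dirs().sort_stats('cumulative')

# Keep the first 10% of the sorted list, then print only the
# entries whose standard name matches the pattern "foo:".
s.print_stats(.1, "foo:")

# Match "foo:" first, then print the first 10% of that
# (usually much shorter) list.
s.print_stats("foo:", .1)
\end{verbatim}
%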
@@ -487,10 +487,10 @@ returned by earlier methods. All standard methods in this class
return the instance that is being processed, so that the commands can
be strung together. For example:
-\bcode\begin{verbatim}
+\begin{verbatim}
pstats.Stats('foofile').strip_dirs().sort_stats('cum') \
.print_stats().ignore()
-\end{verbatim}\ecode
+\end{verbatim}
%
would perform all the indicated functions, but it would not return
the final reference to the \code{Stats} instance.%
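If the final reference is actually wanted, the chained value can simply be assigned; a small sketch, again using the hypothetical \samp{foofile} dump:
\begin{verbatim}
p = pstats.Stats('foofile').strip_dirs().sort_stats('cum')
p.print_stats()        # prints the report ...
p.sort_stats('time')   # ... while p stays around for further sorting
\end{verbatim}
%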
@@ -551,27 +551,27 @@ function, and socking away the results. The following procedure can
be used to obtain this constant for a given platform (see discussion
in section Limitations above).
-\bcode\begin{verbatim}
+\begin{verbatim}
import profile
pr = profile.Profile()
pr.calibrate(100)
pr.calibrate(100)
pr.calibrate(100)
-\end{verbatim}\ecode
+\end{verbatim}
%
The argument to calibrate() is the number of times to try to do the
sample calls to get the CPU times. If your computer is \emph{very}
fast, you might have to do:
-\bcode\begin{verbatim}
+\begin{verbatim}
pr.calibrate(1000)
-\end{verbatim}\ecode
+\end{verbatim}
%
or even:
-\bcode\begin{verbatim}
+\begin{verbatim}
pr.calibrate(10000)
-\end{verbatim}\ecode
+\end{verbatim}
%
The object of this exercise is to get a fairly consistent result.
When you have a consistent answer, you are ready to use that number in
@@ -584,7 +584,7 @@ The following shows how the trace_dispatch() method in the Profile
class should be modified to install the calibration constant on a Sun
Sparcstation 1000:
-\bcode\begin{verbatim}
+\begin{verbatim}
def trace_dispatch(self, frame, event, arg):
t = self.timer()
t = t[0] + t[1] - self.t - .00053 # Calibration constant
@@ -596,14 +596,14 @@ def trace_dispatch(self, frame, event, arg):
r = self.timer()
self.t = r[0] + r[1] - t # put back unrecorded delta
return
-\end{verbatim}\ecode
+\end{verbatim}
%
Note that if there is no calibration constant, then the line
containing the callibration constant should simply say:
-\bcode\begin{verbatim}
+\begin{verbatim}
t = t[0] + t[1] - self.t # no calibration constant
-\end{verbatim}\ecode
+\end{verbatim}
%
You can also achieve the same results using a derived class (and the
profiler will actually run equally fast!!), but the above method is
@@ -632,9 +632,9 @@ timer function is used, then the basic class has an option for that in
the constructor for the class. Consider passing the name of a
function to call into the constructor:
-\bcode\begin{verbatim}
+\begin{verbatim}
pr = profile.Profile(your_time_func)
-\end{verbatim}\ecode
+\end{verbatim}
%
The resulting profiler will call \code{your_time_func()} instead of
\code{os.times()}. The function should return either a single number
@@ -664,7 +664,7 @@ stats, and is quite useful when there is \emph{no} recursion in the
user's code. It is also a lot more accurate than the old profiler, as
it does not charge all its overhead time to the user's code.
-\bcode\begin{verbatim}
+\begin{verbatim}
class OldProfile(Profile):
def trace_dispatch_exception(self, frame, t):
@@ -714,7 +714,7 @@ class OldProfile(Profile):
callers[func_caller]
nc = nc + callers[func_caller]
self.stats[nor_func] = nc, nc, tt, ct, nor_callers
-\end{verbatim}\ecode
+\end{verbatim}
%
\subsection{HotProfile Class}
@@ -725,7 +725,7 @@ function, so it runs very quickly (re: very low overhead). In truth,
the basic profiler is so fast, that is probably not worth the savings
to give up the data, but this class still provides a nice example.
-\bcode\begin{verbatim}
+\begin{verbatim}
class HotProfile(Profile):
def trace_dispatch_exception(self, frame, t):
@@ -761,4 +761,4 @@ class HotProfile(Profile):
nc, tt = self.timings[func]
nor_func = self.func_normalize(func)
self.stats[nor_func] = nc, nc, tt, 0, {}
-\end{verbatim}\ecode
+\end{verbatim}