Commit messages
* __oct__, __hex__ don't return a string. (Klocwork 308)
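The fix this message implies is a result-type check. A minimal sketch, assuming the Python 2.x C API; the function name and the error wording are illustrative, not the actual patch:

```c
#include <Python.h>

/* Hedged sketch (Python 2.x C API): call an object's nb_oct slot
   (i.e. __oct__) and verify that the result really is a string
   instead of assuming it.  Names and error text are illustrative. */
static PyObject *
call_oct_checked(PyObject *v)
{
    PyNumberMethods *nb = v->ob_type->tp_as_number;
    PyObject *res;

    if (nb == NULL || nb->nb_oct == NULL) {
        PyErr_SetString(PyExc_TypeError, "object has no __oct__ method");
        return NULL;
    }
    res = (*nb->nb_oct)(v);
    if (res != NULL && !PyString_Check(res)) {
        PyErr_SetString(PyExc_TypeError,
                        "__oct__ did not return a string");
        Py_DECREF(res);
        return NULL;
    }
    return res;
}
```

The same shape applies to nb_hex/__hex__.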
* I modified this patch a bit by fixing style, adding some error
  checking, and adding XXX comments. This patch requires review and some
  changes are to be expected. I'm checking in now to get the widest
  possible review and establish a baseline for moving forward. I don't
  want this to hold up the release, if possible.
* Pass the char* and size around rather than PyObject*s.
* Factor out some common code at the bottom.
* Also improve error message on overflow.
* … first argument.
* … about extra semicolons. It may have been the HP C compiler.
  This file will trigger a bunch of those warnings now.
* … and use it for string copy operations. This gives a 20% speedup on
  some string benchmarks.
* … where appropriate.
* … patterns in a string, only the number needed by the max limit.
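The point is that a limited replace() never needs the full occurrence count. A self-contained sketch of the idea in plain C; the function and its signature are illustrative, not the stringlib source:

```c
#include <stddef.h>
#include <string.h>

/* Hedged sketch: count non-overlapping occurrences of sub in s, but
   stop as soon as maxcount matches have been seen, since a replace()
   bounded by a max limit needs no more than that. */
static size_t
count_limited(const char *s, size_t n, const char *sub, size_t m,
              size_t maxcount)
{
    size_t count = 0, i = 0;

    if (m == 0)                 /* empty pattern matches n+1 times */
        return (n + 1 < maxcount) ? n + 1 : maxcount;
    while (count < maxcount && i + m <= n) {
        if (s[i] == sub[0] && memcmp(s + i, sub, m) == 0) {
            count++;
            i += m;             /* skip the match: non-overlapping */
        }
        else
            i++;
    }
    return count;
}
```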
* … find helpers; updated unicodeobject to use stringlib_count.
* (If compiled without FAST search support, changed the pre-memcmp test
  to check the last character as well as the first. This gave a 25%
  speedup for my test case.)
  Rewrote the split algorithms so they stop when maxsplit gets to 0.
  Previously they did a string match first and then checked whether
  maxsplit had been reached; the new way avoids a needless string search.
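The pre-memcmp test reads roughly like this; a sketch of the non-FAST fallback path with illustrative names, not the actual source:

```c
#include <string.h>

/* Hedged sketch: reject a candidate offset unless both the first and
   the last characters of the window match, before paying for a full
   memcmp of the middle. */
static long
fallback_find(const char *s, long n, const char *p, long m)
{
    long i;

    if (m == 0)
        return 0;
    for (i = 0; i + m <= n; i++) {
        if (s[i] == p[0] &&
            s[i + m - 1] == p[m - 1] &&
            memcmp(s + i, p, m) == 0)
            return i;           /* match at offset i */
    }
    return -1;
}
```

The split change is the same economy applied to control flow: test maxsplit before searching, so no search happens once the limit is reached.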
* … broken, someone would have noticed by now ;-)
* … the algorithm.
* … results list.
  Originally it allocated 0 items and used list growth during append.
  Now it preallocates 12 items so the first few appends don't need list
  reallocs.
  ("Here are some words ."*2).split(None, 1) is 7% faster.
  ("Here are some words ."*2).split() is 15% faster.
  (Your mileage may vary; see dealership for details.)
  File parsing like this:
      for line in f:
          count += len(line.split())
  is also about 15% faster. There is a slowdown of about 3% for large
  strings because of the additional overhead of checking whether the
  append is to a preallocated region of the list or not. This will be
  the rare case. It could be improved with special-case code, but we
  decided it was not useful enough.
  There is a cost of 12*sizeof(PyObject *) bytes per list. For the
  normal case of file parsing this is not a problem, because the lists
  have a short lifetime. We have not come up with cases where this is a
  problem in real life.
  I chose 12 because human text averages about 11 words per line in
  books, one of my data sets averages 6.2 words with a final peak at 11
  words per line, and I work with a tab-delimited data set with 8 tabs
  per line (or 9 words per line). 12 encompasses all of these.
  Also changed the last rstrip code to append then reverse, rather than
  doing insert(0). The strip() and rstrip() times are now comparable.
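A sketch of the preallocation mechanism the message describes, assuming the Python 2.x C API; PREALLOC, prealloc_append, and prealloc_trim are illustrative names, not the actual macros:

```c
#include <Python.h>

#define PREALLOC 12   /* the size argued for in the message above */

/* Hedged sketch: the result list starts with room for PREALLOC items
   (PyList_New leaves the slots NULL), the first appends fill those
   slots via PyList_SET_ITEM (no realloc), and later appends fall back
   to PyList_Append.  `item` is a new reference and is consumed. */
static int
prealloc_append(PyObject *list, Py_ssize_t *count, PyObject *item)
{
    if (*count < PREALLOC) {
        PyList_SET_ITEM(list, *count, item);   /* steals the reference */
    }
    else {
        int err = PyList_Append(list, item);   /* does not steal */
        Py_DECREF(item);
        if (err < 0)
            return -1;
    }
    (*count)++;
    return 0;
}

/* Before returning the list, drop any unused (still NULL) slots. */
static int
prealloc_trim(PyObject *list, Py_ssize_t count)
{
    if (count < PREALLOC)
        return PyList_SetSlice(list, count, PREALLOC, NULL);
    return 0;
}
```

The 3% large-string slowdown mentioned above is the `*count < PREALLOC` test that every append now pays.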
* … length (thanks, neal!). And yes, I've verified that this doesn't
  slow things down ;-)
* … ~15% faster for the current tests (which is noticeably faster than a
  corresponding find call). Thanks to neal-who-never-sleeps for the tip.
* … feel free to improve the documentation and the docstrings.
* … this is on par with a corresponding find, and nearly twice as fast
  as split(sep, 1). Full tests, a unicode version, and documentation
  will follow tomorrow.
* … related tests are now about 10x faster.
* … new string is over max Py_ssize_t. I have no way to test it on my
  box or any box I have access to. At least it doesn't break anything.
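An illustrative guard of the kind such a fix needs; a sketch, with variable names and the error message assumed rather than taken from the patch:

```c
#include <Python.h>

/* Hedged sketch: when replace() grows a string by `delta` bytes per
   match, verify that the final size cannot pass PY_SSIZE_T_MAX before
   doing the multiply.  The rearranged comparison keeps the check
   itself from overflowing. */
static Py_ssize_t
checked_new_len(Py_ssize_t self_len, Py_ssize_t delta, Py_ssize_t count)
{
    if (delta > 0 && count > (PY_SSIZE_T_MAX - self_len) / delta) {
        PyErr_SetString(PyExc_OverflowError,
                        "replace string is too long");
        return -1;
    }
    return self_len + count * delta;
}
```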
* … for the related stringbench tests.
* … made a copy of the string using PyString_FromStringAndSize(s, n) and
  modified the copied string in-place. However, 1- (and 0-) character
  strings are shared from a cache, so this caused "A".replace("A", "a")
  to change the cached version of "A" -- used by everyone.
  Now make the copy with NULL as the string and do the memcpy manually.
  I've added regression tests to check whether this happens again in the
  future. Perhaps there should be a PyString_Copy for this case?
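In code, the described fix is the NULL-then-memcpy pattern; the wrapper function here is assumed, but the pattern itself is exactly what the message says:

```c
#include <Python.h>
#include <string.h>

/* Hedged sketch: passing NULL makes PyString_FromStringAndSize
   allocate a fresh, uninitialized buffer -- it can never hand back
   the shared, cached 0/1-character string -- and the bytes are then
   copied in manually, so mutating the result is safe. */
static PyObject *
copy_string_for_mutation(const char *s, Py_ssize_t n)
{
    PyObject *result = PyString_FromStringAndSize(NULL, n);

    if (result == NULL)
        return NULL;
    memcpy(PyString_AS_STRING(result), s, n);
    return result;
}
```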
* … now about 3x faster on my machine, for the replace tests from
  stringbench.
* … versions if they're not defined.
* … strings too... (thanks to georg brandl for spotting the exact
  problem faster than anyone else)
* (The unicode versions of these are still 2x faster on Windows,
  though...)
  Based on work by Andrew Dalke, with tweaks by yours truly.
* … total number of occurrences.
about "%u", "%lu" and "%zu" formats.
Since PyString_FromFormat and PyErr_Format have exactly the same rules
(both inherited from PyString_FromFormatV), it would be good if someone
with more LaTeX Fu changed one of them to just point to the other.
Their docs were way out of synch before this patch, and I just did a
mass copy+paste to repair that.
Not a backport candidate (this is a new feature).
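A hedged usage example of the formats being documented; the demo function is mine, but %u, %lu, and %zu are the codes the message refers to:

```c
#include <Python.h>

/* The three length-modified unsigned formats, as accepted by
   PyString_FromFormat (PyErr_Format follows the same rules). */
static PyObject *
format_demo(void)
{
    unsigned int u = 42;
    unsigned long ul = 1234567890UL;
    size_t z = sizeof(PyObject);

    return PyString_FromFormat("u=%u, lu=%lu, zu=%zu", u, ul, z);
}
```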
* … string, because of a cast to int.