path: root/Objects/stringobject.c
Commit log (most recent first). Each entry: commit message  [author, date, files changed, -deleted/+added]
* Whitespace normalization  [Neal Norwitz, 2006-07-30, 1 file, -35/+32]
* Bug #1515471: string.replace() accepts character buffers again.  [Neal Norwitz, 2006-07-30, 1 file, -71/+51]
  Pass the char* and size around rather than PyObject*s.
* Update doc to make it agree with code.  [Neal Norwitz, 2006-06-11, 1 file, -10/+4]
  Also factor out some common code.
* Apply perky's fix for #1503157: "/".join([u"", u""]) raising OverflowError.  [Georg Brandl, 2006-06-10, 1 file, -1/+1]
  Also improve error message on overflow.
* RFE #1491485: str/unicode.endswith()/startswith() now accept a tuple as first argument.  [Georg Brandl, 2006-06-09, 1 file, -60/+90]
* Remove ; at end of macro.  [Neal Norwitz, 2006-06-01, 1 file, -1/+1]
  There was a compiler recently that warned about extra semicolons; it may have been the HP C compiler. Without this change, this file would trigger a bunch of those warnings.
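  A minimal C sketch of the issue (the macro and struct names below are made up, not taken from stringobject.c): a macro whose body already ends in a semicolon produces an extra empty statement at every call site that is also terminated with ';', which is what pedantic compilers complain about.

      /* Hypothetical macros, for illustration only. */
      #define BUMP_BAD(p)  ((p)->count++);    /* trailing ';' inside the macro */
      #define BUMP_GOOD(p) ((p)->count++)     /* no trailing ';' */

      struct counter { int count; };

      void touch(struct counter *c)
      {
          BUMP_GOOD(c);   /* expands to a single statement */
          BUMP_BAD(c);    /* expands to "((c)->count++);;" -- the second ';'
                             is an empty statement some compilers warn about */
      }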
* needforspeed: added Py_MEMCPY macro (currently tuned for Visual C only), and use it for string copy operations.  [Fredrik Lundh, 2006-05-28, 1 file, -37/+37]
  This gives a 20% speedup on some string benchmarks.
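  The commit does not reproduce the macro here, but the general idea can be sketched as follows (an illustration only, not the actual Py_MEMCPY definition; the 16-byte cutoff is an assumption): for very short copies, a plain byte loop can avoid the call overhead of memcpy() on some compilers, which is where the Visual C tuning comes in.

      #include <string.h>

      /* Sketch of a memcpy wrapper that special-cases short copies. */
      #define SKETCH_MEMCPY(target, source, length) do {          \
              size_t _n = (length);                               \
              if (_n > 16) {                                      \
                  memcpy((target), (source), _n);                 \
              }                                                   \
              else {                                              \
                  char *_t = (char *)(target);                    \
                  const char *_s = (const char *)(source);        \
                  while (_n-- > 0)                                \
                      *_t++ = *_s++;                              \
              }                                                   \
          } while (0)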
* needforspeed: stringlib refactoring: use find_slice for stringobject  [Fredrik Lundh, 2006-05-27, 1 file, -12/+15]
* needforspeed: replace improvements, changed to Py_LOCAL_INLINE where appropriate  [Fredrik Lundh, 2006-05-27, 1 file, -16/+16]
* cleanup - removed trailing whitespace  [Andrew Dalke, 2006-05-27, 1 file, -1/+1]
* needforspeed: more stringlib refactoring  [Fredrik Lundh, 2006-05-27, 1 file, -55/+39]
* Added description of why splitlines doesn't use the prealloc strategy  [Andrew Dalke, 2006-05-26, 1 file, -0/+8]
* Added limits to the replace code so it does not count all of the matching patterns in a string, only the number needed by the maximum count limit.  [Andrew Dalke, 2006-05-26, 1 file, -22/+19]
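  A small sketch of the capped counting described above (the function name and the single-character case are mine; the real code also handles multi-character patterns): scanning stops as soon as maxcount matches have been seen, because replace() never performs more than maxcount replacements anyway.

      #include <Python.h>

      /* Count occurrences of a single character, but never past maxcount. */
      static Py_ssize_t
      count_char_capped(const char *s, Py_ssize_t len, char ch, Py_ssize_t maxcount)
      {
          Py_ssize_t count = 0;
          const char *end = s + len;

          while (s < end && count < maxcount) {   /* stop once the cap is hit */
              if (*s++ == ch)
                  count++;
          }
          return count;
      }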
* needforspeed: stringlib refactoring: use stringlib/find for string find  [Fredrik Lundh, 2006-05-26, 1 file, -19/+6]
* needforspeed: stringlib refactoring, continued. Added count and find helpers; updated unicodeobject to use stringlib_count.  [Fredrik Lundh, 2006-05-26, 1 file, -2/+2]
* substring split now uses /F's fast string matching algorithm.  [Andrew Dalke, 2006-05-26, 1 file, -40/+57]
  (If compiled without FAST search support, the pre-memcmp test was changed to check the last character as well as the first; this gave a 25% speedup for my test case.)
  Rewrote the split algorithms so they stop when maxsplit gets to 0. Previously they did a string match first and then checked whether maxsplit had been reached; the new order prevents a needless string search (see the sketch below).
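  A standalone C sketch of that loop ordering (plain strstr instead of the fastsearch code, and the helper name is made up): the remaining-split budget is checked before each search, so once it reaches zero no further scan of the string is performed.

      #include <stdio.h>
      #include <string.h>

      /* Count how many pieces a split would produce; a negative maxsplit
         means "no limit", mirroring str.split().  Illustrative only. */
      static int
      count_pieces(const char *s, const char *sep, int maxsplit)
      {
          int pieces = 1;
          size_t seplen = strlen(sep);

          while (maxsplit != 0) {
              const char *hit = strstr(s, sep);   /* searched only while the
                                                     split budget remains */
              if (hit == NULL)
                  break;
              pieces++;
              s = hit + seplen;
              if (maxsplit > 0)
                  maxsplit--;
          }
          return pieces;
      }

      int main(void)
      {
          /* "a,b,c,d".split(",", 2) -> ["a", "b", "c,d"], i.e. 3 pieces */
          printf("%d\n", count_pieces("a,b,c,d", ",", 2));
          return 0;
      }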
* needforspeed: added rpartition implementation  [Fredrik Lundh, 2006-05-26, 1 file, -0/+36]
* needforspeed: remove remaining USE_FAST macros; if fastsearch was broken, someone would have noticed by now ;-)  [Fredrik Lundh, 2006-05-26, 1 file, -67/+2]
* needforspeed: cleanup  [Fredrik Lundh, 2006-05-26, 1 file, -4/+8]
* needforspeed: stringlib refactoring (in progress)  [Fredrik Lundh, 2006-05-26, 1 file, -34/+5]
* needforspeed: stringlib refactoring (in progress)  [Fredrik Lundh, 2006-05-26, 1 file, -92/+7]
* needforspeed: use Py_LOCAL on a few more locals in stringobject.c  [Fredrik Lundh, 2006-05-26, 1 file, -26/+27]
* Eked out another 3% or so performance in split-on-whitespace by cleaning up the algorithm.  [Andrew Dalke, 2006-05-26, 1 file, -35/+38]
* Changes to string.split/rsplit on whitespace to preallocate space in the results list.  [Andrew Dalke, 2006-05-26, 1 file, -56/+75]
  Originally it allocated 0 items and used the list growth during append. Now it preallocates 12 items so the first few appends don't need list reallocs.
      ("Here are some words ."*2).split(None, 1) is 7% faster
      ("Here are some words ."*2).split() is 15% faster
  (Your mileage may vary, see dealership for details.)
  File parsing like
      for line in f:
          count += len(line.split())
  is also about 15% faster.
  There is a slowdown of about 3% for large strings because of the additional overhead of checking whether the append is to a preallocated region of the list or not. This will be the rare case. It could be improved with special-case code, but we decided it was not useful enough.
  There is a cost of 12*sizeof(PyObject *) bytes per list. For the normal case of file parsing this is not a problem because the lists have a short lifetime. We have not come up with cases where this is a problem in real life.
  I chose 12 because human text averages about 11 words per line in books, one of my data sets averages 6.2 words with a final peak at 11 words per line, and I work with a tab-delimited data set with 8 tabs per line (or 9 words per line); 12 encompasses all of these.
  Also changed the last rstrip code to append then reverse, rather than doing insert(0). The strip() and rstrip() times are now comparable.
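  A sketch of the preallocation pattern described above (the helper name and shape are mine; stringobject.c does this with a macro): the result list is created with PREALLOC slots, the first PREALLOC items go straight into those slots, later items fall back to PyList_Append, and any unused slots are trimmed afterwards with PyList_SetSlice.

      #include <Python.h>

      #define PREALLOC 12   /* same count as the commit; the helper itself is
                               illustrative, not the code used in stringobject.c */

      /* Add one item to a result list created with PyList_New(PREALLOC).
         Steals the reference to item on success; consumes it on failure too. */
      static int
      split_add(PyObject *list, Py_ssize_t *count, PyObject *item)
      {
          if (item == NULL)
              return -1;
          if (*count < PREALLOC) {
              PyList_SET_ITEM(list, *count, item);   /* fill a preallocated slot */
          }
          else {
              int err = PyList_Append(list, item);   /* rare: more than 12 words */
              Py_DECREF(item);                       /* Append took its own ref */
              if (err < 0)
                  return -1;
          }
          (*count)++;
          return 0;
      }

      /* After the loop, unused preallocated slots (still NULL) must be cut off:
         if (count < PREALLOC)
             PyList_SetSlice(list, count, PREALLOC, NULL);
      */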
* use Py_LOCAL also for string and unicode objects  [Fredrik Lundh, 2006-05-26, 1 file, -13/+1]
* needforspeed: use Py_ssize_t for the fastsearch counter and skip length (thanks, neal!). and yes, I've verified that this doesn't slow things down ;-)  [Fredrik Lundh, 2006-05-26, 1 file, -1/+1]
* needforspeed: use METH_O for argument handling, which made partition some ~15% faster for the current tests (which is noticeably faster than a corresponding find call). thanks to neal-who-never-sleeps for the tip.  [Fredrik Lundh, 2006-05-26, 1 file, -6/+2]
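  A short sketch of what switching to METH_O looks like in a method table (my_partition and my_methods are placeholder names): the single argument is passed straight to the C function, so there is no argument tuple to build and unpack, which is where the ~15% comes from.

      #include <Python.h>

      /* With METH_O the separator arrives directly as 'sep'; with METH_VARARGS
         it would arrive wrapped in a 1-tuple that PyArg_ParseTuple must unpack. */
      static PyObject *
      my_partition(PyObject *self, PyObject *sep)
      {
          /* ... the real implementation would search for sep here ... */
          Py_INCREF(Py_None);
          return Py_None;
      }

      static PyMethodDef my_methods[] = {
          {"partition", (PyCFunction)my_partition, METH_O,
           "partition(sep) -> (head, sep, tail)"},
          {NULL, NULL, 0, NULL}
      };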
* needforspeed: partition implementation, part two. feel free to improve the documentation and the docstrings.  [Fredrik Lundh, 2006-05-26, 1 file, -15/+15]
* needforspeed: partition for 8-bit strings.  [Fredrik Lundh, 2006-05-25, 1 file, -5/+66]
  For some simple tests, this is on par with a corresponding find, and nearly twice as fast as split(sep, 1). Full tests, a unicode version, and documentation will follow tomorrow.
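  For reference, a small C-API sketch of what the new method computes (demo_partition is a made-up name and assumes an initialised interpreter): partition splits at the first occurrence of the separator and always returns a 3-tuple.

      #include <Python.h>

      static void
      demo_partition(void)
      {
          PyObject *s = PyString_FromString("key=value");
          PyObject *parts;

          if (s == NULL)
              return;
          /* Equivalent to the Python call "key=value".partition("=") */
          parts = PyObject_CallMethod(s, "partition", "(s)", "=");
          /* parts is now ("key", "=", "value"); if the separator is missing,
             the result is (s, "", "") instead. */
          Py_XDECREF(parts);
          Py_DECREF(s);
      }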
* squelch gcc4 darwin/x86 compiler warnings  [Bob Ippolito, 2006-05-25, 1 file, -1/+1]
* needforspeed: use insert+reverse instead of append  [Fredrik Lundh, 2006-05-25, 1 file, -16/+8]
* eliminate warning by reverting tmp_s type to 'const char*'  [Jack Diederich, 2006-05-25, 1 file, -1/+1]
* needforspeed: use fastsearch also for find/index and contains. The related tests are now about 10x faster.  [Fredrik Lundh, 2006-05-25, 1 file, -1/+25]
* Added overflow test for adding two (very) large strings where the new string is over max Py_ssize_t.  [Andrew Dalke, 2006-05-25, 1 file, -2/+7]
  I have no way to test it on my box or any box I have access to. At least it doesn't break anything.
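  A sketch of the kind of guard being tested (the function name is mine): because the lengths are Py_ssize_t values, their sum can overflow, so the check compares against PY_SSIZE_T_MAX before adding.

      #include <Python.h>

      /* Return 1 if two strings of length a and b can be concatenated without
         overflowing Py_ssize_t, else set OverflowError and return 0. */
      static int
      concat_size_ok(Py_ssize_t a, Py_ssize_t b)
      {
          if (a > PY_SSIZE_T_MAX - b) {
              PyErr_SetString(PyExc_OverflowError,
                              "strings are too large to concat");
              return 0;
          }
          return 1;
      }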
* Comment typo  [Andrew M. Kuchling, 2006-05-25, 1 file, -1/+1]
* needforspeed: use "fastsearch" for count. This results in a 3x speedup for the related stringbench tests.  [Fredrik Lundh, 2006-05-25, 1 file, -1/+122]
* Fixed problem identified by Georg.  [Andrew Dalke, 2006-05-25, 1 file, -2/+5]
  The special-case in-place code for replace made a copy of the string using PyString_FromStringAndSize(s, n) and modified the copied string in place. However, 1- (and 0-) character strings are shared from a cache, so "A".replace("A", "a") changed the cached version of "A" -- the one used by everyone. Now the copy is made with NULL as the string and the memcpy is done manually. I've added regression tests to check if this happens again in the future. Perhaps there should be a PyString_Copy for this case?
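  A sketch of the fix described above (the helper name is mine): PyString_FromStringAndSize(s, n) may return a shared, cached object for n <= 1, which must never be mutated; passing NULL first always yields a fresh buffer that is safe to modify in place.

      #include <Python.h>
      #include <string.h>

      /* Make a private, writable copy of an n-byte buffer as a new string object. */
      static PyObject *
      private_copy(const char *s, Py_ssize_t n)
      {
          PyObject *result = PyString_FromStringAndSize(NULL, n);  /* fresh buffer */
          if (result == NULL)
              return NULL;
          memcpy(PyString_AS_STRING(result), s, n);
          return result;
      }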
* needforspeed: new replace implementation by Andrew Dalke. Replace is now about 3x faster on my machine, for the replace tests from stringbench.  [Fredrik Lundh, 2006-05-25, 1 file, -182/+605]
* needforspeed: check for overflow in replace (from Andrew Dalke)  [Fredrik Lundh, 2006-05-25, 1 file, -2/+21]
* needforspeed: _toupper/_tolower is a SUSv2 thing; fall back on ISO C versions if they're not defined.  [Fredrik Lundh, 2006-05-25, 1 file, -0/+9]
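  One plausible shape for that fallback (a sketch, not necessarily the exact preprocessor block the commit added): if the platform headers do not provide the SUSv2 _toupper/_tolower macros, map them to the standard ISO C functions.

      #include <ctype.h>

      /* _toupper/_tolower come from SUSv2 and are not guaranteed by ISO C;
         fall back to the standard functions when they are missing. */
      #ifndef _toupper
      #define _toupper toupper
      #endif
      #ifndef _tolower
      #define _tolower tolower
      #endif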
* needforspeed: make new upper/lower work properly for single-character strings too... (thanks to georg brandl for spotting the exact problem faster than anyone else)  [Fredrik Lundh, 2006-05-25, 1 file, -4/+8]
* needforspeed: speed up upper and lower for 8-bit string objects. (The unicode versions of these are still 2x faster on Windows, though...) Based on work by Andrew Dalke, with tweaks by yours truly.  [Fredrik Lundh, 2006-05-25, 1 file, -22/+20]
* docstring tweaks: count counts non-overlapping substrings, not the total number of occurrences (e.g. "banana".count("ana") is 1, not 2)  [Fredrik Lundh, 2006-05-22, 1 file, -3/+3]
* Teach PyString_FromFormat, PyErr_Format, and PyString_FromFormatV about "%u", "%lu" and "%zu" formats.  [Tim Peters, 2006-05-13, 1 file, -13/+22]
  Since PyString_FromFormat and PyErr_Format have exactly the same rules (both inherited from PyString_FromFormatV), it would be good if someone with more LaTeX fu changed one of them to just point to the other. Their docs were way out of sync before this patch, and I just did a mass copy+paste to repair that.
  Not a backport candidate (this is a new feature).
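  A small usage sketch of the newly supported conversions (format_report is a made-up helper): %u, %lu and %zu let unsigned and size_t values be formatted directly, without casting through a signed type.

      #include <Python.h>

      /* Build a status string from unsigned quantities using the new formats. */
      static PyObject *
      format_report(unsigned int nerrors, unsigned long nlines, size_t nbytes)
      {
          return PyString_FromFormat("%u errors in %lu lines (%zu bytes)",
                                     nerrors, nlines, nbytes);
      }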
* Revert 43315: Printing of %zd must be signed.  [Martin v. Löwis, 2006-05-13, 1 file, -2/+2]
* Py_ssize_t issue; repr()'ing a very large string would result in a teensy string, because of a cast to int.  [Thomas Wouters, 2006-04-21, 1 file, -1/+1]
* Make s.replace() work with explicit counts exceeding 2Gb.  [Thomas Wouters, 2006-04-19, 1 file, -2/+2]
* Use Py_ssize_t to hold the 'width' argument to the ljust, rjust, center and zfill string methods, so they can create strings larger than 2Gb on 64-bit systems (even win64). The unicode versions of these methods already did this right.  [Thomas Wouters, 2006-04-19, 1 file, -8/+8]
* C++ compiler cleanup: bunch-o-casts, plus use of unsigned loop index var in a couple places  [Skip Montanaro, 2006-04-18, 1 file, -1/+1]
* No need to cast a Py_ssize_t, use %z in PyErr_Format  [Neal Norwitz, 2006-04-17, 1 file, -2/+2]