author     Stefan Krah <skrah@bytereef.org>   2012-02-25 11:24:21 (GMT)
committer  Stefan Krah <skrah@bytereef.org>   2012-02-25 11:24:21 (GMT)
commit     9a2d99e28a5c2989b2db4023acae4f550885f2ef (patch)
tree       29bb99fc008de30ecc1e765d6d14ee35cd5bdfe5
parent     5a3d04623b0dc8219326989bc3619d5f56737a94 (diff)
- Issue #10181: New memoryview implementation fixes multiple ownership
and lifetime issues of dynamically allocated Py_buffer members (#9990)
as well as crashes (#8305, #7433). Many new features have been added
(see whatsnew/3.3), and the documentation has been updated extensively.
The ndarray test object from _testbuffer.c implements all aspects of
PEP-3118, so further development towards the complete implementation
of the PEP can proceed in a test-driven manner.
Thanks to Nick Coghlan, Antoine Pitrou and Pauli Virtanen for review
and many ideas.
- Issue #12834: Fix incorrect results of memoryview.tobytes() for
non-contiguous arrays (see the second sketch below).
- Issue #5231: Introduce the memoryview.cast() method, which allows changing
the format and shape without making a copy of the underlying memory (see the
third sketch below).
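
The three sketches below are illustrative only and are not part of the patch;
they assume the Python 3.3 memoryview semantics described in the updated
stdtypes documentation. The first shows the lifetime rules that the rewrite
pins down: a live export blocks resizing of the exporter, and a released view
refuses further access::

    >>> b = bytearray(b'abc')
    >>> m = memoryview(b)
    >>> b.extend(b'def')              # resizing is blocked while a view is exported
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    BufferError: Existing exports of data: object cannot be re-sized
    >>> m.release()                   # drop the export deterministically
    >>> b.extend(b'def')              # allowed again
    >>> m[0]                          # a released view refuses further access
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: operation forbidden on released memoryview object
    >>> with memoryview(b) as m2:     # the context-manager form releases automatically
    ...     m2[0]
    ...
    97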
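
A minimal sketch of the #12834 fix (the element values are arbitrary):
tobytes() on a non-contiguous view returns the flattened logical contents of
the view rather than a slice of the underlying memory block::

    >>> import array
    >>> a = array.array('b', [0, 1, 2, 3, 4, 5])
    >>> v = memoryview(a)[::2]          # non-contiguous view, strides (2,)
    >>> v.tolist()
    [0, 2, 4]
    >>> v.tobytes()                     # flattened element by element
    b'\x00\x02\x04'
    >>> len(v.tobytes()) == v.nbytes
    True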
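
A minimal sketch of the new cast() method from #5231 (the 2x2x3 shape and the
packed ints are arbitrary example values): the same memory is reinterpreted
under a new shape or format, without copying::

    >>> import array, struct
    >>> m = memoryview(array.array('b', range(12)))
    >>> c = m.cast('b', shape=[2, 2, 3])          # same 12 bytes, new shape
    >>> c.tolist()
    [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]]
    >>> c.nbytes == m.nbytes                      # no copy was made
    True
    >>> buf = struct.pack('@iii', 10, 20, 30)     # three native C ints
    >>> memoryview(buf).cast('i').tolist()        # same bytes, new format
    [10, 20, 30]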
-rw-r--r--   Doc/c-api/buffer.rst             |  485
-rw-r--r--   Doc/c-api/memoryview.rst         |   29
-rw-r--r--   Doc/c-api/typeobj.rst            |   88
-rw-r--r--   Doc/library/stdtypes.rst         |  298
-rw-r--r--   Doc/whatsnew/3.3.rst             |   56
-rw-r--r--   Include/abstract.h               |    2
-rw-r--r--   Include/memoryobject.h           |   84
-rw-r--r--   Include/object.h                 |    7
-rw-r--r--   Lib/ctypes/test/test_pep3118.py  |   76
-rw-r--r--   Lib/test/test_buffer.py          | 3437
-rw-r--r--   Lib/test/test_memoryview.py      |   50
-rw-r--r--   Lib/test/test_sys.py             |    4
-rw-r--r--   Misc/ACKS                        |    1
-rw-r--r--   Misc/NEWS                        |   17
-rw-r--r--   Misc/valgrind-python.supp        |   11
-rw-r--r--   Modules/_testbuffer.c            | 2683
-rw-r--r--   Modules/_testcapimodule.c        |   91
-rw-r--r--   Objects/abstract.c               |   14
-rw-r--r--   Objects/memoryobject.c           | 2865
-rw-r--r--   Objects/object.c                 |    3
-rw-r--r--   PCbuild/_testbuffer.vcproj       |  521
-rw-r--r--   PCbuild/pcbuild.sln              |   21
-rw-r--r--   PCbuild/readme.txt               |    3
-rw-r--r--   setup.py                         |    2
24 files changed, 9844 insertions, 1004 deletions
diff --git a/Doc/c-api/buffer.rst b/Doc/c-api/buffer.rst index d98ece3..2d19992 100644 --- a/Doc/c-api/buffer.rst +++ b/Doc/c-api/buffer.rst @@ -7,6 +7,7 @@ Buffer Protocol .. sectionauthor:: Greg Stein <gstein@lyra.org> .. sectionauthor:: Benjamin Peterson +.. sectionauthor:: Stefan Krah .. index:: @@ -20,7 +21,7 @@ as image processing or numeric analysis. While each of these types have their own semantics, they share the common characteristic of being backed by a possibly large memory buffer. It is -then desireable, in some situations, to access that buffer directly and +then desirable, in some situations, to access that buffer directly and without intermediate copying. Python provides such a facility at the C level in the form of the *buffer @@ -60,8 +61,10 @@ isn't needed anymore. Failure to do so could lead to various issues such as resource leaks. -The buffer structure -==================== +.. _buffer-structure: + +Buffer structure +================ Buffer structures (or simply "buffers") are useful as a way to expose the binary data from another object to the Python programmer. They can also be @@ -81,246 +84,400 @@ can be created. .. c:type:: Py_buffer - .. c:member:: void *buf + .. c:member:: void \*obj + + A new reference to the exporting object or *NULL*. The reference is owned + by the consumer and automatically decremented and set to *NULL* by + :c:func:`PyBuffer_Release`. + + For temporary buffers that are wrapped by :c:func:`PyMemoryView_FromBuffer` + this field must be *NULL*. - A pointer to the start of the memory for the object. + .. c:member:: void \*buf + + A pointer to the start of the logical structure described by the buffer + fields. This can be any location within the underlying physical memory + block of the exporter. For example, with negative :c:member:`~Py_buffer.strides` + the value may point to the end of the memory block. + + For contiguous arrays, the value points to the beginning of the memory + block. .. c:member:: Py_ssize_t len - :noindex: - The total length of the memory in bytes. + ``product(shape) * itemsize``. For contiguous arrays, this is the length + of the underlying memory block. For non-contiguous arrays, it is the length + that the logical structure would have if it were copied to a contiguous + representation. + + Accessing ``((char *)buf)[0] up to ((char *)buf)[len-1]`` is only valid + if the buffer has been obtained by a request that guarantees contiguity. In + most cases such a request will be :c:macro:`PyBUF_SIMPLE` or :c:macro:`PyBUF_WRITABLE`. .. c:member:: int readonly - An indicator of whether the buffer is read only. + An indicator of whether the buffer is read-only. This field is controlled + by the :c:macro:`PyBUF_WRITABLE` flag. + + .. c:member:: Py_ssize_t itemsize + + Item size in bytes of a single element. Same as the value of :func:`struct.calcsize` + called on non-NULL :c:member:`~Py_buffer.format` values. + + Important exception: If a consumer requests a buffer without the + :c:macro:`PyBUF_FORMAT` flag, :c:member:`~Py_Buffer.format` will + be set to *NULL*, but :c:member:`~Py_buffer.itemsize` still has + the value for the original format. + + If :c:member:`~Py_Buffer.shape` is present, the equality + ``product(shape) * itemsize == len`` still holds and the consumer + can use :c:member:`~Py_buffer.itemsize` to navigate the buffer. 
+ + If :c:member:`~Py_Buffer.shape` is *NULL* as a result of a :c:macro:`PyBUF_SIMPLE` + or a :c:macro:`PyBUF_WRITABLE` request, the consumer must disregard + :c:member:`~Py_buffer.itemsize` and assume ``itemsize == 1``. - .. c:member:: const char *format - :noindex: + .. c:member:: const char \*format - A *NULL* terminated string in :mod:`struct` module style syntax giving - the contents of the elements available through the buffer. If this is - *NULL*, ``"B"`` (unsigned bytes) is assumed. + A *NUL* terminated string in :mod:`struct` module style syntax describing + the contents of a single item. If this is *NULL*, ``"B"`` (unsigned bytes) + is assumed. + + This field is controlled by the :c:macro:`PyBUF_FORMAT` flag. .. c:member:: int ndim - The number of dimensions the memory represents as a multi-dimensional - array. If it is 0, :c:data:`strides` and :c:data:`suboffsets` must be - *NULL*. - - .. c:member:: Py_ssize_t *shape - - An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim` giving the - shape of the memory as a multi-dimensional array. Note that - ``((*shape)[0] * ... * (*shape)[ndims-1])*itemsize`` should be equal to - :c:data:`len`. - - .. c:member:: Py_ssize_t *strides - - An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim` giving the - number of bytes to skip to get to a new element in each dimension. - - .. c:member:: Py_ssize_t *suboffsets - - An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim`. If these - suboffset numbers are greater than or equal to 0, then the value stored - along the indicated dimension is a pointer and the suboffset value - dictates how many bytes to add to the pointer after de-referencing. A - suboffset value that it negative indicates that no de-referencing should - occur (striding in a contiguous memory block). - - Here is a function that returns a pointer to the element in an N-D array - pointed to by an N-dimensional index when there are both non-NULL strides - and suboffsets:: - - void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides, - Py_ssize_t *suboffsets, Py_ssize_t *indices) { - char *pointer = (char*)buf; - int i; - for (i = 0; i < ndim; i++) { - pointer += strides[i] * indices[i]; - if (suboffsets[i] >=0 ) { - pointer = *((char**)pointer) + suboffsets[i]; - } - } - return (void*)pointer; - } + The number of dimensions the memory represents as an n-dimensional array. + If it is 0, :c:member:`~Py_Buffer.buf` points to a single item representing + a scalar. In this case, :c:member:`~Py_buffer.shape`, :c:member:`~Py_buffer.strides` + and :c:member:`~Py_buffer.suboffsets` MUST be *NULL*. + The macro :c:macro:`PyBUF_MAX_NDIM` limits the maximum number of dimensions + to 64. Exporters MUST respect this limit, consumers of multi-dimensional + buffers SHOULD be able to handle up to :c:macro:`PyBUF_MAX_NDIM` dimensions. - .. c:member:: Py_ssize_t itemsize + .. c:member:: Py_ssize_t \*shape + + An array of :c:type:`Py_ssize_t` of length :c:member:`~Py_buffer.ndim` + indicating the shape of the memory as an n-dimensional array. Note that + ``shape[0] * ... * shape[ndim-1] * itemsize`` MUST be equal to + :c:member:`~Py_buffer.len`. + + Shape values are restricted to ``shape[n] >= 0``. The case + ``shape[n] == 0`` requires special attention. See `complex arrays`_ + for further information. + + The shape array is read-only for the consumer. + + .. 
c:member:: Py_ssize_t \*strides + + An array of :c:type:`Py_ssize_t` of length :c:member:`~Py_buffer.ndim` + giving the number of bytes to skip to get to a new element in each + dimension. + + Stride values can be any integer. For regular arrays, strides are + usually positive, but a consumer MUST be able to handle the case + ``strides[n] <= 0``. See `complex arrays`_ for further information. + + The strides array is read-only for the consumer. + + .. c:member:: Py_ssize_t \*suboffsets + + An array of :c:type:`Py_ssize_t` of length :c:member:`~Py_buffer.ndim`. + If ``suboffsets[n] >= 0``, the values stored along the nth dimension are + pointers and the suboffset value dictates how many bytes to add to each + pointer after de-referencing. A suboffset value that is negative + indicates that no de-referencing should occur (striding in a contiguous + memory block). - This is a storage for the itemsize (in bytes) of each element of the - shared memory. It is technically un-necessary as it can be obtained - using :c:func:`PyBuffer_SizeFromFormat`, however an exporter may know - this information without parsing the format string and it is necessary - to know the itemsize for proper interpretation of striding. Therefore, - storing it is more convenient and faster. + This type of array representation is used by the Python Imaging Library + (PIL). See `complex arrays`_ for further information how to access elements + of such an array. - .. c:member:: void *internal + The suboffsets array is read-only for the consumer. + + .. c:member:: void \*internal This is for use internally by the exporting object. For example, this might be re-cast as an integer by the exporter and used to store flags about whether or not the shape, strides, and suboffsets arrays must be - freed when the buffer is released. The consumer should never alter this + freed when the buffer is released. The consumer MUST NOT alter this value. +.. _buffer-request-types: -Buffer-related functions -======================== +Buffer request types +==================== +Buffers are usually obtained by sending a buffer request to an exporting +object via :c:func:`PyObject_GetBuffer`. Since the complexity of the logical +structure of the memory can vary drastically, the consumer uses the *flags* +argument to specify the exact buffer type it can handle. -.. c:function:: int PyObject_CheckBuffer(PyObject *obj) +All :c:data:`Py_buffer` fields are unambiguously defined by the request +type. + +request-independent fields +~~~~~~~~~~~~~~~~~~~~~~~~~~ +The following fields are not influenced by *flags* and must always be filled in +with the correct values: :c:member:`~Py_buffer.obj`, :c:member:`~Py_buffer.buf`, +:c:member:`~Py_buffer.len`, :c:member:`~Py_buffer.itemsize`, :c:member:`~Py_buffer.ndim`. - Return 1 if *obj* supports the buffer interface otherwise 0. When 1 is - returned, it doesn't guarantee that :c:func:`PyObject_GetBuffer` will - succeed. +readonly, format +~~~~~~~~~~~~~~~~ -.. c:function:: int PyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags) + .. c:macro:: PyBUF_WRITABLE - Export a view over some internal data from the target object *obj*. - *obj* must not be NULL, and *view* must point to an existing - :c:type:`Py_buffer` structure allocated by the caller (most uses of - this function will simply declare a local variable of type - :c:type:`Py_buffer`). The *flags* argument is a bit field indicating - what kind of buffer is requested. 
The buffer interface allows - for complicated memory layout possibilities; however, some callers - won't want to handle all the complexity and instead request a simple - view of the target object (using :c:macro:`PyBUF_SIMPLE` for a read-only - view and :c:macro:`PyBUF_WRITABLE` for a read-write view). + Controls the :c:member:`~Py_buffer.readonly` field. If set, the exporter + MUST provide a writable buffer or else report failure. Otherwise, the + exporter MAY provide either a read-only or writable buffer, but the choice + MUST be consistent for all consumers. - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a :exc:`BufferError` unless - there is another error that is actually causing the problem. The - exporter can use flags information to simplify how much of the - :c:data:`Py_buffer` structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. + .. c:macro:: PyBUF_FORMAT - On success, 0 is returned and the *view* structure is filled with useful - values. On error, -1 is returned and an exception is raised; the *view* - is left in an undefined state. + Controls the :c:member:`~Py_buffer.format` field. If set, this field MUST + be filled in correctly. Otherwise, this field MUST be *NULL*. - The following are the possible values to the *flags* arguments. - .. c:macro:: PyBUF_SIMPLE +:c:macro:`PyBUF_WRITABLE` can be \|'d to any of the flags in the next section. +Since :c:macro:`PyBUF_SIMPLE` is defined as 0, :c:macro:`PyBUF_WRITABLE` +can be used as a stand-alone flag to request a simple writable buffer. - This is the default flag. The returned buffer exposes a read-only - memory area. The format of data is assumed to be raw unsigned bytes, - without any particular structure. This is a "stand-alone" flag - constant. It never needs to be '|'d to the others. The exporter will - raise an error if it cannot provide such a contiguous buffer of bytes. +:c:macro:`PyBUF_FORMAT` can be \|'d to any of the flags except :c:macro:`PyBUF_SIMPLE`. +The latter already implies format ``B`` (unsigned bytes). - .. c:macro:: PyBUF_WRITABLE - Like :c:macro:`PyBUF_SIMPLE`, but the returned buffer is writable. If - the exporter doesn't support writable buffers, an error is raised. +shape, strides, suboffsets +~~~~~~~~~~~~~~~~~~~~~~~~~~ - .. c:macro:: PyBUF_STRIDES +The flags that control the logical structure of the memory are listed +in decreasing order of complexity. Note that each flag contains all bits +of the flags below it. - This implies :c:macro:`PyBUF_ND`. The returned buffer must provide - strides information (i.e. the strides cannot be NULL). This would be - used when the consumer can handle strided, discontiguous arrays. - Handling strides automatically assumes you can handle shape. The - exporter can raise an error if a strided representation of the data is - not possible (i.e. without the suboffsets). - .. c:macro:: PyBUF_ND ++-----------------------------+-------+---------+------------+ +| Request | shape | strides | suboffsets | ++=============================+=======+=========+============+ +| .. c:macro:: PyBUF_INDIRECT | yes | yes | if needed | ++-----------------------------+-------+---------+------------+ +| .. c:macro:: PyBUF_STRIDES | yes | yes | NULL | ++-----------------------------+-------+---------+------------+ +| .. 
c:macro:: PyBUF_ND | yes | NULL | NULL | ++-----------------------------+-------+---------+------------+ +| .. c:macro:: PyBUF_SIMPLE | NULL | NULL | NULL | ++-----------------------------+-------+---------+------------+ - The returned buffer must provide shape information. The memory will be - assumed C-style contiguous (last dimension varies the fastest). The - exporter may raise an error if it cannot provide this kind of - contiguous buffer. If this is not given then shape will be *NULL*. - .. c:macro:: PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS +contiguity requests +~~~~~~~~~~~~~~~~~~~ - These flags indicate that the contiguity returned buffer must be - respectively, C-contiguous (last dimension varies the fastest), Fortran - contiguous (first dimension varies the fastest) or either one. All of - these flags imply :c:macro:`PyBUF_STRIDES` and guarantee that the - strides buffer info structure will be filled in correctly. +C or Fortran contiguity can be explicitly requested, with and without stride +information. Without stride information, the buffer must be C-contiguous. - .. c:macro:: PyBUF_INDIRECT ++-----------------------------------+-------+---------+------------+--------+ +| Request | shape | strides | suboffsets | contig | ++===================================+=======+=========+============+========+ +| .. c:macro:: PyBUF_C_CONTIGUOUS | yes | yes | NULL | C | ++-----------------------------------+-------+---------+------------+--------+ +| .. c:macro:: PyBUF_F_CONTIGUOUS | yes | yes | NULL | F | ++-----------------------------------+-------+---------+------------+--------+ +| .. c:macro:: PyBUF_ANY_CONTIGUOUS | yes | yes | NULL | C or F | ++-----------------------------------+-------+---------+------------+--------+ +| .. c:macro:: PyBUF_ND | yes | NULL | NULL | C | ++-----------------------------------+-------+---------+------------+--------+ - This flag indicates the returned buffer must have suboffsets - information (which can be NULL if no suboffsets are needed). This can - be used when the consumer can handle indirect array referencing implied - by these suboffsets. This implies :c:macro:`PyBUF_STRIDES`. - .. c:macro:: PyBUF_FORMAT +compound requests +~~~~~~~~~~~~~~~~~ - The returned buffer must have true format information if this flag is - provided. This would be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An exporter should always - be able to provide this information if requested. If format is not - explicitly requested then the format must be returned as *NULL* (which - means ``'B'``, or unsigned bytes). +All possible requests are fully defined by some combination of the flags in +the previous section. For convenience, the buffer protocol provides frequently +used combinations as single flags. - .. c:macro:: PyBUF_STRIDED +In the following table *U* stands for undefined contiguity. The consumer would +have to call :c:func:`PyBuffer_IsContiguous` to determine contiguity. - This is equivalent to ``(PyBUF_STRIDES | PyBUF_WRITABLE)``. - .. c:macro:: PyBUF_STRIDED_RO - This is equivalent to ``(PyBUF_STRIDES)``. ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| Request | shape | strides | suboffsets | contig | readonly | format | ++===============================+=======+=========+============+========+==========+========+ +| .. 
c:macro:: PyBUF_FULL | yes | yes | if needed | U | 0 | yes | ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| .. c:macro:: PyBUF_FULL_RO | yes | yes | if needed | U | 1 or 0 | yes | ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| .. c:macro:: PyBUF_RECORDS | yes | yes | NULL | U | 0 | yes | ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| .. c:macro:: PyBUF_RECORDS_RO | yes | yes | NULL | U | 1 or 0 | yes | ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| .. c:macro:: PyBUF_STRIDED | yes | yes | NULL | U | 0 | NULL | ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| .. c:macro:: PyBUF_STRIDED_RO | yes | yes | NULL | U | 1 or 0 | NULL | ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| .. c:macro:: PyBUF_CONTIG | yes | NULL | NULL | C | 0 | NULL | ++-------------------------------+-------+---------+------------+--------+----------+--------+ +| .. c:macro:: PyBUF_CONTIG_RO | yes | NULL | NULL | C | 1 or 0 | NULL | ++-------------------------------+-------+---------+------------+--------+----------+--------+ - .. c:macro:: PyBUF_RECORDS - This is equivalent to ``(PyBUF_STRIDES | PyBUF_FORMAT | - PyBUF_WRITABLE)``. +Complex arrays +============== - .. c:macro:: PyBUF_RECORDS_RO +NumPy-style: shape and strides +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The logical structure of NumPy-style arrays is defined by :c:member:`~Py_buffer.itemsize`, +:c:member:`~Py_buffer.ndim`, :c:member:`~Py_buffer.shape` and :c:member:`~Py_buffer.strides`. + +If ``ndim == 0``, the memory location pointed to by :c:member:`~Py_buffer.buf` is +interpreted as a scalar of size :c:member:`~Py_buffer.itemsize`. In that case, +both :c:member:`~Py_buffer.shape` and :c:member:`~Py_buffer.strides` are *NULL*. + +If :c:member:`~Py_buffer.strides` is *NULL*, the array is interpreted as +a standard n-dimensional C-array. Otherwise, the consumer must access an +n-dimensional array as follows: + + ``ptr = (char *)buf + indices[0] * strides[0] + ... + indices[n-1] * strides[n-1]`` + ``item = *((typeof(item) *)ptr);`` + + +As noted above, :c:member:`~Py_buffer.buf` can point to any location within +the actual memory block. An exporter can check the validity of a buffer with +this function: + +.. code-block:: python + + def verify_structure(memlen, itemsize, ndim, shape, strides, offset): + """Verify that the parameters represent a valid array within + the bounds of the allocated memory: + char *mem: start of the physical memory block + memlen: length of the physical memory block + offset: (char *)buf - mem + """ + if offset % itemsize: + return False + if offset < 0 or offset+itemsize > memlen: + return False + if any(v % itemsize for v in strides): + return False + + if ndim <= 0: + return ndim == 0 and not shape and not strides + if 0 in shape: + return True + + imin = sum(strides[j]*(shape[j]-1) for j in range(ndim) + if strides[j] <= 0) + imax = sum(strides[j]*(shape[j]-1) for j in range(ndim) + if strides[j] > 0) + + return 0 <= offset+imin and offset+imax+itemsize <= memlen + + +PIL-style: shape, strides and suboffsets +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In addition to the regular items, PIL-style arrays can contain pointers +that must be followed in order to get to the next element in a dimension. 
+For example, the regular three-dimensional C-array ``char v[2][2][3]`` can +also be viewed as an array of 2 pointers to 2 two-dimensional arrays: +``char (*v[2])[2][3]``. In suboffsets representation, those two pointers +can be embedded at the start of :c:member:`~Py_buffer.buf`, pointing +to two ``char x[2][3]`` arrays that can be located anywhere in memory. + + +Here is a function that returns a pointer to the element in an N-D array +pointed to by an N-dimensional index when there are both non-NULL strides +and suboffsets:: + + void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides, + Py_ssize_t *suboffsets, Py_ssize_t *indices) { + char *pointer = (char*)buf; + int i; + for (i = 0; i < ndim; i++) { + pointer += strides[i] * indices[i]; + if (suboffsets[i] >=0 ) { + pointer = *((char**)pointer) + suboffsets[i]; + } + } + return (void*)pointer; + } - This is equivalent to ``(PyBUF_STRIDES | PyBUF_FORMAT)``. - .. c:macro:: PyBUF_FULL +Buffer-related functions +======================== - This is equivalent to ``(PyBUF_INDIRECT | PyBUF_FORMAT | - PyBUF_WRITABLE)``. +.. c:function:: int PyObject_CheckBuffer(PyObject *obj) - .. c:macro:: PyBUF_FULL_RO + Return 1 if *obj* supports the buffer interface otherwise 0. When 1 is + returned, it doesn't guarantee that :c:func:`PyObject_GetBuffer` will + succeed. - This is equivalent to ``(PyBUF_INDIRECT | PyBUF_FORMAT)``. - .. c:macro:: PyBUF_CONTIG +.. c:function:: int PyObject_GetBuffer(PyObject *exporter, Py_buffer *view, int flags) - This is equivalent to ``(PyBUF_ND | PyBUF_WRITABLE)``. + Send a request to *exporter* to fill in *view* as specified by *flags*. + If the exporter cannot provide a buffer of the exact type, it MUST raise + :c:data:`PyExc_BufferError`, set :c:member:`view->obj` to *NULL* and + return -1. - .. c:macro:: PyBUF_CONTIG_RO + On success, fill in *view*, set :c:member:`view->obj` to a new reference + to *exporter* and return 0. - This is equivalent to ``(PyBUF_ND)``. + Successful calls to :c:func:`PyObject_GetBuffer` must be paired with calls + to :c:func:`PyBuffer_Release`, similar to :c:func:`malloc` and :c:func:`free`. + Thus, after the consumer is done with the buffer, :c:func:`PyBuffer_Release` + must be called exactly once. .. c:function:: void PyBuffer_Release(Py_buffer *view) - Release the buffer *view*. This should be called when the buffer is no - longer being used as it may free memory from it. + Release the buffer *view* and decrement the reference count for + :c:member:`view->obj`. This function MUST be called when the buffer + is no longer being used, otherwise reference leaks may occur. + + It is an error to call this function on a buffer that was not obtained via + :c:func:`PyObject_GetBuffer`. .. c:function:: Py_ssize_t PyBuffer_SizeFromFormat(const char *) - Return the implied :c:data:`~Py_buffer.itemsize` from the struct-stype - :c:data:`~Py_buffer.format`. + Return the implied :c:data:`~Py_buffer.itemsize` from :c:data:`~Py_buffer.format`. + This function is not yet implemented. -.. c:function:: int PyBuffer_IsContiguous(Py_buffer *view, char fortran) +.. c:function:: int PyBuffer_IsContiguous(Py_buffer *view, char order) - Return 1 if the memory defined by the *view* is C-style (*fortran* is - ``'C'``) or Fortran-style (*fortran* is ``'F'``) contiguous or either one - (*fortran* is ``'A'``). Return 0 otherwise. + Return 1 if the memory defined by the *view* is C-style (*order* is + ``'C'``) or Fortran-style (*order* is ``'F'``) contiguous or either one + (*order* is ``'A'``). 
Return 0 otherwise. -.. c:function:: void PyBuffer_FillContiguousStrides(int ndim, Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t itemsize, char fortran) +.. c:function:: void PyBuffer_FillContiguousStrides(int ndim, Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t itemsize, char order) Fill the *strides* array with byte-strides of a contiguous (C-style if - *fortran* is ``'C'`` or Fortran-style if *fortran* is ``'F'``) array of the + *order* is ``'C'`` or Fortran-style if *order* is ``'F'``) array of the given shape with the given number of bytes per element. -.. c:function:: int PyBuffer_FillInfo(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len, int readonly, int infoflags) +.. c:function:: int PyBuffer_FillInfo(Py_buffer *view, PyObject *exporter, void *buf, Py_ssize_t len, int readonly, int flags) + + Handle buffer requests for an exporter that wants to expose *buf* of size *len* + with writability set according to *readonly*. *buf* is interpreted as a sequence + of unsigned bytes. + + The *flags* argument indicates the request type. This function always fills in + *view* as specified by flags, unless *buf* has been designated as read-only + and :c:macro:`PyBUF_WRITABLE` is set in *flags*. + + On success, set :c:member:`view->obj` to a new reference to *exporter* and + return 0. Otherwise, raise :c:data:`PyExc_BufferError`, set + :c:member:`view->obj` to *NULL* and return -1; + + If this function is used as part of a :ref:`getbufferproc <buffer-structs>`, + *exporter* MUST be set to the exporting object. Otherwise, *exporter* MUST + be NULL. + - Fill in a buffer-info structure, *view*, correctly for an exporter that can - only share a contiguous chunk of memory of "unsigned bytes" of the given - length. Return 0 on success and -1 (with raising an error) on error. diff --git a/Doc/c-api/memoryview.rst b/Doc/c-api/memoryview.rst index 6b49cdf..ef03975 100644 --- a/Doc/c-api/memoryview.rst +++ b/Doc/c-api/memoryview.rst @@ -17,16 +17,19 @@ any other object. Create a memoryview object from an object that provides the buffer interface. If *obj* supports writable buffer exports, the memoryview object will be - readable and writable, otherwise it will be read-only. + read/write, otherwise it may be either read-only or read/write at the + discretion of the exporter. +.. c:function:: PyObject *PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags) + + Create a memoryview object using *mem* as the underlying buffer. + *flags* can be one of :c:macro:`PyBUF_READ` or :c:macro:`PyBUF_WRITE`. .. c:function:: PyObject *PyMemoryView_FromBuffer(Py_buffer *view) Create a memoryview object wrapping the given buffer structure *view*. - The memoryview object then owns the buffer represented by *view*, which - means you shouldn't try to call :c:func:`PyBuffer_Release` yourself: it - will be done on deallocation of the memoryview object. - + For simple byte buffers, :c:func:`PyMemoryView_FromMemory` is the preferred + function. .. c:function:: PyObject *PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order) @@ -43,10 +46,16 @@ any other object. currently allowed to create subclasses of :class:`memoryview`. -.. c:function:: Py_buffer *PyMemoryView_GET_BUFFER(PyObject *obj) +.. c:function:: Py_buffer *PyMemoryView_GET_BUFFER(PyObject *mview) + + Return a pointer to the memoryview's private copy of the exporter's buffer. + *mview* **must** be a memoryview instance; this macro doesn't check its type, + you must do it yourself or you will risk crashes. + +.. 
c:function:: Py_buffer *PyMemoryView_GET_BASE(PyObject *mview) - Return a pointer to the buffer structure wrapped by the given - memoryview object. The object **must** be a memoryview instance; - this macro doesn't check its type, you must do it yourself or you - will risk crashes. + Return either a pointer to the exporting object that the memoryview is based + on or *NULL* if the memoryview has been created by one of the functions + :c:func:`PyMemoryView_FromMemory` or :c:func:`PyMemoryView_FromBuffer`. + *mview* **must** be a memoryview instance. diff --git a/Doc/c-api/typeobj.rst b/Doc/c-api/typeobj.rst index 68ca9ad..b15d927 100644 --- a/Doc/c-api/typeobj.rst +++ b/Doc/c-api/typeobj.rst @@ -1198,46 +1198,74 @@ Buffer Object Structures .. sectionauthor:: Greg J. Stein <greg@lyra.org> .. sectionauthor:: Benjamin Peterson +.. sectionauthor:: Stefan Krah +.. c:type:: PyBufferProcs -The :ref:`buffer interface <bufferobjects>` exports a model where an object can expose its internal -data. + This structure holds pointers to the functions required by the + :ref:`Buffer protocol <bufferobjects>`. The protocol defines how + an exporter object can expose its internal data to consumer objects. -If an object does not export the buffer interface, then its :attr:`tp_as_buffer` -member in the :c:type:`PyTypeObject` structure should be *NULL*. Otherwise, the -:attr:`tp_as_buffer` will point to a :c:type:`PyBufferProcs` structure. +.. c:member:: getbufferproc PyBufferProcs.bf_getbuffer + The signature of this function is:: -.. c:type:: PyBufferProcs + int (PyObject *exporter, Py_buffer *view, int flags); + + Handle a request to *exporter* to fill in *view* as specified by *flags*. + A standard implementation of this function will take these steps: + + - Check if the request can be met. If not, raise :c:data:`PyExc_BufferError`, + set :c:data:`view->obj` to *NULL* and return -1. + + - Fill in the requested fields. + + - Increment an internal counter for the number of exports. + + - Set :c:data:`view->obj` to *exporter* and increment :c:data:`view->obj`. + + - Return 0. + + The individual fields of *view* are described in section + :ref:`Buffer structure <buffer-structure>`, the rules how an exporter + must react to specific requests are in section + :ref:`Buffer request types <buffer-request-types>`. + + All memory pointed to in the :c:type:`Py_buffer` structure belongs to + the exporter and must remain valid until there are no consumers left. + :c:member:`~Py_buffer.shape`, :c:member:`~Py_buffer.strides`, + :c:member:`~Py_buffer.suboffsets` and :c:member:`~Py_buffer.internal` + are read-only for the consumer. + + :c:func:`PyBuffer_FillInfo` provides an easy way of exposing a simple + bytes buffer while dealing correctly with all request types. + + :c:func:`PyObject_GetBuffer` is the interface for the consumer that + wraps this function. + +.. c:member:: releasebufferproc PyBufferProcs.bf_releasebuffer + + The signature of this function is:: + + void (PyObject *exporter, Py_buffer *view); - Structure used to hold the function pointers which define an implementation of - the buffer protocol. + Handle a request to release the resources of the buffer. If no resources + need to be released, this field may be *NULL*. A standard implementation + of this function will take these steps: - .. c:member:: getbufferproc bf_getbuffer + - Decrement an internal counter for the number of exports. - This should fill a :c:type:`Py_buffer` with the necessary data for - exporting the type. 
The signature of :data:`getbufferproc` is ``int - (PyObject *obj, Py_buffer *view, int flags)``. *obj* is the object to - export, *view* is the :c:type:`Py_buffer` struct to fill, and *flags* gives - the conditions the caller wants the memory under. (See - :c:func:`PyObject_GetBuffer` for all flags.) :c:member:`bf_getbuffer` is - responsible for filling *view* with the appropriate information. - (:c:func:`PyBuffer_FillView` can be used in simple cases.) See - :c:type:`Py_buffer`\s docs for what needs to be filled in. + - If the counter is 0, free all memory associated with *view*. + The exporter MUST use the :c:member:`~Py_buffer.internal` field to keep + track of buffer-specific resources (if present). This field is guaranteed + to remain constant, while a consumer MAY pass a copy of the original buffer + as the *view* argument. - .. c:member:: releasebufferproc bf_releasebuffer - This should release the resources of the buffer. The signature of - :c:data:`releasebufferproc` is ``void (PyObject *obj, Py_buffer *view)``. - If the :c:data:`bf_releasebuffer` function is not provided (i.e. it is - *NULL*), then it does not ever need to be called. + This function MUST NOT decrement :c:data:`view->obj`, since that is + done automatically in :c:func:`PyBuffer_Release`. - The exporter of the buffer interface must make sure that any memory - pointed to in the :c:type:`Py_buffer` structure remains valid until - releasebuffer is called. Exporters will need to define a - :c:data:`bf_releasebuffer` function if they can re-allocate their memory, - strides, shape, suboffsets, or format variables which they might share - through the struct bufferinfo. - See :c:func:`PyBuffer_Release`. + :c:func:`PyBuffer_Release` is the interface for the consumer that + wraps this function. diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index a07be4f..183b2f7 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -2377,7 +2377,7 @@ memoryview type :class:`memoryview` objects allow Python code to access the internal data of an object that supports the :ref:`buffer protocol <bufferobjects>` without -copying. Memory is generally interpreted as simple bytes. +copying. .. class:: memoryview(obj) @@ -2391,52 +2391,88 @@ copying. Memory is generally interpreted as simple bytes. is a single byte, but other types such as :class:`array.array` may have bigger elements. - ``len(view)`` returns the total number of elements in the memoryview, - *view*. The :class:`~memoryview.itemsize` attribute will give you the + ``len(view)`` is equal to the length of :class:`~memoryview.tolist`. + If ``view.ndim = 0``, the length is 1. If ``view.ndim = 1``, the length + is equal to the number of elements in the view. For higher dimensions, + the length is equal to the length of the nested list representation of + the view. The :class:`~memoryview.itemsize` attribute will give you the number of bytes in a single element. - A :class:`memoryview` supports slicing to expose its data. Taking a single - index will return a single element as a :class:`bytes` object. Full - slicing will result in a subview:: + A :class:`memoryview` supports slicing to expose its data. If + :class:`~memoryview.format` is one of the native format specifiers + from the :mod:`struct` module, indexing will return a single element + with the correct type. 
Full slicing will result in a subview:: + + >>> v = memoryview(b'abcefg') + >>> v[1] + 98 + >>> v[-1] + 103 + >>> v[1:4] + <memory at 0x7f3ddc9f4350> + >>> bytes(v[1:4]) + b'bce' + + Other native formats:: + + >>> import array + >>> a = array.array('l', [-11111111, 22222222, -33333333, 44444444]) + >>> a[0] + -11111111 + >>> a[-1] + 44444444 + >>> a[2:3].tolist() + [-33333333] + >>> a[::2].tolist() + [-11111111, -33333333] + >>> a[::-1].tolist() + [44444444, -33333333, 22222222, -11111111] - >>> v = memoryview(b'abcefg') - >>> v[1] - b'b' - >>> v[-1] - b'g' - >>> v[1:4] - <memory at 0x77ab28> - >>> bytes(v[1:4]) - b'bce' - - If the object the memoryview is over supports changing its data, the - memoryview supports slice assignment:: + .. versionadded:: 3.3 + + If the underlying object is writable, the memoryview supports slice + assignment. Resizing is not allowed:: >>> data = bytearray(b'abcefg') >>> v = memoryview(data) >>> v.readonly False - >>> v[0] = b'z' + >>> v[0] = ord(b'z') >>> data bytearray(b'zbcefg') >>> v[1:4] = b'123' >>> data bytearray(b'z123fg') - >>> v[2] = b'spam' + >>> v[2:3] = b'spam' Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: cannot modify size of memoryview object - - Notice how the size of the memoryview object cannot be changed. + File "<stdin>", line 1, in <module> + ValueError: memoryview assignment: lvalue and rvalue have different structures + >>> v[2:6] = b'spam' + >>> data + bytearray(b'z1spam') - Memoryviews of hashable (read-only) types are also hashable and their - hash value matches the corresponding bytes object:: + Memoryviews of hashable (read-only) types are also hashable. The hash + is defined as ``hash(m) == hash(m.tobytes())``:: >>> v = memoryview(b'abcefg') >>> hash(v) == hash(b'abcefg') True >>> hash(v[2:4]) == hash(b'ce') True + >>> hash(v[::-2]) == hash(b'abcefg'[::-2]) + True + + Hashing of multi-dimensional objects is supported:: + + >>> buf = bytes(list(range(12))) + >>> x = memoryview(buf) + >>> y = x.cast('B', shape=[2,2,3]) + >>> x.tolist() + [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] + >>> y.tolist() + [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]] + >>> hash(x) == hash(y) == hash(y.tobytes()) + True .. versionchanged:: 3.3 Memoryview objects are now hashable. @@ -2455,12 +2491,20 @@ copying. Memory is generally interpreted as simple bytes. >>> bytes(m) b'abc' + For non-contiguous arrays the result is equal to the flattened list + representation with all elements converted to bytes. + .. method:: tolist() - Return the data in the buffer as a list of integers. :: + Return the data in the buffer as a list of elements. :: >>> memoryview(b'abc').tolist() [97, 98, 99] + >>> import array + >>> a = array.array('d', [1.1, 2.2, 3.3]) + >>> m = memoryview(a) + >>> m.tolist() + [1.1, 2.2, 3.3] .. method:: release() @@ -2487,7 +2531,7 @@ copying. Memory is generally interpreted as simple bytes. >>> with memoryview(b'abc') as m: ... m[0] ... - b'a' + 97 >>> m[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> @@ -2495,45 +2539,219 @@ copying. Memory is generally interpreted as simple bytes. .. versionadded:: 3.2 + .. method:: cast(format[, shape]) + + Cast a memoryview to a new format or shape. *shape* defaults to + ``[byte_length//new_itemsize]``, which means that the result view + will be one-dimensional. The return value is a new memoryview, but + the buffer itself is not copied. Supported casts are 1D -> C-contiguous + and C-contiguous -> 1D. 
One of the formats must be a byte format + ('B', 'b' or 'c'). The byte length of the result must be the same + as the original length. + + Cast 1D/long to 1D/unsigned bytes:: + + >>> import array + >>> a = array.array('l', [1,2,3]) + >>> x = memoryview(a) + >>> x.format + 'l' + >>> x.itemsize + 8 + >>> len(x) + 3 + >>> x.nbytes + 24 + >>> y = x.cast('B') + >>> y.format + 'B' + >>> y.itemsize + 1 + >>> len(y) + 24 + >>> y.nbytes + 24 + + Cast 1D/unsigned bytes to 1D/char:: + + >>> b = bytearray(b'zyz') + >>> x = memoryview(b) + >>> x[0] = b'a' + Traceback (most recent call last): + File "<stdin>", line 1, in <module> + ValueError: memoryview: invalid value for format "B" + >>> y = x.cast('c') + >>> y[0] = b'a' + >>> b + bytearray(b'ayz') + + Cast 1D/bytes to 3D/ints to 1D/signed char:: + + >>> import struct + >>> buf = struct.pack("i"*12, *list(range(12))) + >>> x = memoryview(buf) + >>> y = x.cast('i', shape=[2,2,3]) + >>> y.tolist() + [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]] + >>> y.format + 'i' + >>> y.itemsize + 4 + >>> len(y) + 2 + >>> y.nbytes + 48 + >>> z = y.cast('b') + >>> z.format + 'b' + >>> z.itemsize + 1 + >>> len(z) + 48 + >>> z.nbytes + 48 + + Cast 1D/unsigned char to to 2D/unsigned long:: + + >>> buf = struct.pack("L"*6, *list(range(6))) + >>> x = memoryview(buf) + >>> y = x.cast('L', shape=[2,3]) + >>> len(y) + 2 + >>> y.nbytes + 48 + >>> y.tolist() + [[0, 1, 2], [3, 4, 5]] + + .. versionadded:: 3.3 + There are also several readonly attributes available: + .. attribute:: obj + + The underlying object of the memoryview:: + + >>> b = bytearray(b'xyz') + >>> m = memoryview(b) + >>> m.obj is b + True + + .. versionadded:: 3.3 + + .. attribute:: nbytes + + ``nbytes == product(shape) * itemsize == len(m.tobytes())``. This is + the amount of space in bytes that the array would use in a contiguous + representation. It is not necessarily equal to len(m):: + + >>> import array + >>> a = array.array('i', [1,2,3,4,5]) + >>> m = memoryview(a) + >>> len(m) + 5 + >>> m.nbytes + 20 + >>> y = m[::2] + >>> len(y) + 3 + >>> y.nbytes + 12 + >>> len(y.tobytes()) + 12 + + Multi-dimensional arrays:: + + >>> import struct + >>> buf = struct.pack("d"*12, *[1.5*x for x in range(12)]) + >>> x = memoryview(buf) + >>> y = x.cast('d', shape=[3,4]) + >>> y.tolist() + [[0.0, 1.5, 3.0, 4.5], [6.0, 7.5, 9.0, 10.5], [12.0, 13.5, 15.0, 16.5]] + >>> len(y) + 3 + >>> y.nbytes + 96 + + .. versionadded:: 3.3 + + .. attribute:: readonly + + A bool indicating whether the memory is read only. + .. attribute:: format A string containing the format (in :mod:`struct` module style) for each - element in the view. This defaults to ``'B'``, a simple bytestring. + element in the view. A memoryview can be created from exporters with + arbitrary format strings, but some methods (e.g. :meth:`tolist`) are + restricted to native single element formats. Special care must be taken + when comparing memoryviews. Since comparisons are required to return a + value for ``==`` and ``!=``, two memoryviews referencing the same + exporter can compare as not-equal if the exporter's format is not + understood:: + + >>> from ctypes import BigEndianStructure, c_long + >>> class BEPoint(BigEndianStructure): + ... _fields_ = [("x", c_long), ("y", c_long)] + ... + >>> point = BEPoint(100, 200) + >>> a = memoryview(point) + >>> b = memoryview(point) + >>> a == b + False + >>> a.tolist() + Traceback (most recent call last): + File "<stdin>", line 1, in <module> + NotImplementedError: memoryview: unsupported format T{>l:x:>l:y:} .. 
attribute:: itemsize The size in bytes of each element of the memoryview:: - >>> m = memoryview(array.array('H', [1,2,3])) + >>> import array, struct + >>> m = memoryview(array.array('H', [32000, 32001, 32002])) >>> m.itemsize 2 >>> m[0] - b'\x01\x00' - >>> len(m[0]) == m.itemsize + 32000 + >>> struct.calcsize('H') == m.itemsize True - .. attribute:: shape - - A tuple of integers the length of :attr:`ndim` giving the shape of the - memory as a N-dimensional array. - .. attribute:: ndim An integer indicating how many dimensions of a multi-dimensional array the memory represents. + .. attribute:: shape + + A tuple of integers the length of :attr:`ndim` giving the shape of the + memory as a N-dimensional array. + .. attribute:: strides A tuple of integers the length of :attr:`ndim` giving the size in bytes to access each element for each dimension of the array. - .. attribute:: readonly + .. attribute:: suboffsets - A bool indicating whether the memory is read only. + Used internally for PIL-style arrays. The value is informational only. + + .. attribute:: c_contiguous + + A bool indicating whether the memory is C-contiguous. + + .. versionadded:: 3.3 + + .. attribute:: f_contiguous + + A bool indicating whether the memory is Fortran contiguous. + + .. versionadded:: 3.3 + + .. attribute:: contiguous + + A bool indicating whether the memory is contiguous. - .. memoryview.suboffsets isn't documented because it only seems useful for C + .. versionadded:: 3.3 .. _typecontextmanager: diff --git a/Doc/whatsnew/3.3.rst b/Doc/whatsnew/3.3.rst index 20e2914..560331f 100644 --- a/Doc/whatsnew/3.3.rst +++ b/Doc/whatsnew/3.3.rst @@ -49,6 +49,62 @@ This article explains the new features in Python 3.3, compared to 3.2. +.. _pep-3118: + +PEP 3118: New memoryview implementation and buffer protocol documentation +========================================================================= + +:issue:`10181` - memoryview bug fixes and features. + Written by Stefan Krah. + +The new memoryview implementation comprehensively fixes all ownership and +lifetime issues of dynamically allocated fields in the Py_buffer struct +that led to multiple crash reports. Additionally, several functions that +crashed or returned incorrect results for non-contiguous or multi-dimensional +input have been fixed. + +The memoryview object now has a PEP-3118 compliant getbufferproc() +that checks the consumer's request type. Many new features have been +added, most of them work in full generality for non-contiguous arrays +and arrays with suboffsets. + +The documentation has been updated, clearly spelling out responsibilities +for both exporters and consumers. Buffer request flags are grouped into +basic and compound flags. The memory layout of non-contiguous and +multi-dimensional NumPy-style arrays is explained. + +Features +-------- + +* All native single character format specifiers in struct module syntax + (optionally prefixed with '@') are now supported. + +* With some restrictions, the cast() method allows changing of format and + shape of C-contiguous arrays. + +* Multi-dimensional list representations are supported for any array type. + +* Multi-dimensional comparisons are supported for any array type. + +* All array types are hashable if the exporting object is hashable + and the view is read-only. + +* Arbitrary slicing of any 1-D arrays type is supported. For example, it + is now possible to reverse a memoryview in O(1) by using a negative step. 
+ +API changes +----------- + +* The maximum number of dimensions is officially limited to 64. + +* The representation of empty shape, strides and suboffsets is now + an empty tuple instead of None. + +* Accessing a memoryview element with format 'B' (unsigned bytes) + now returns an integer (in accordance with the struct module syntax). + For returning a bytes object the view must be cast to 'c' first. + + .. _pep-393: PEP 393: Flexible String Representation diff --git a/Include/abstract.h b/Include/abstract.h index 3946ec5..abb996f 100644 --- a/Include/abstract.h +++ b/Include/abstract.h @@ -559,7 +559,7 @@ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx*/ /* Copy the data from the src buffer to the buffer of destination */ - PyAPI_FUNC(int) PyBuffer_IsContiguous(Py_buffer *view, char fort); + PyAPI_FUNC(int) PyBuffer_IsContiguous(const Py_buffer *view, char fort); PyAPI_FUNC(void) PyBuffer_FillContiguousStrides(int ndims, diff --git a/Include/memoryobject.h b/Include/memoryobject.h index aff5d99..4ac6f65 100644 --- a/Include/memoryobject.h +++ b/Include/memoryobject.h @@ -6,70 +6,64 @@ extern "C" { #endif +#ifndef Py_LIMITED_API +PyAPI_DATA(PyTypeObject) _PyManagedBuffer_Type; +#endif PyAPI_DATA(PyTypeObject) PyMemoryView_Type; #define PyMemoryView_Check(op) (Py_TYPE(op) == &PyMemoryView_Type) #ifndef Py_LIMITED_API -/* Get a pointer to the underlying Py_buffer of a memoryview object. */ +/* Get a pointer to the memoryview's private copy of the exporter's buffer. */ #define PyMemoryView_GET_BUFFER(op) (&((PyMemoryViewObject *)(op))->view) -/* Get a pointer to the PyObject from which originates a memoryview object. */ +/* Get a pointer to the exporting object (this may be NULL!). */ #define PyMemoryView_GET_BASE(op) (((PyMemoryViewObject *)(op))->view.obj) #endif - -PyAPI_FUNC(PyObject *) PyMemoryView_GetContiguous(PyObject *base, - int buffertype, - char fort); - - /* Return a contiguous chunk of memory representing the buffer - from an object in a memory view object. If a copy is made then the - base object for the memory view will be a *new* bytes object. - - Otherwise, the base-object will be the object itself and no - data-copying will be done. - - The buffertype argument can be PyBUF_READ, PyBUF_WRITE, - PyBUF_SHADOW to determine whether the returned buffer - should be READONLY, WRITABLE, or set to update the - original buffer if a copy must be made. If buffertype is - PyBUF_WRITE and the buffer is not contiguous an error will - be raised. In this circumstance, the user can use - PyBUF_SHADOW to ensure that a a writable temporary - contiguous buffer is returned. The contents of this - contiguous buffer will be copied back into the original - object after the memoryview object is deleted as long as - the original object is writable and allows setting an - exclusive write lock. If this is not allowed by the - original object, then a BufferError is raised. - - If the object is multi-dimensional and if fortran is 'F', - the first dimension of the underlying array will vary the - fastest in the buffer. If fortran is 'C', then the last - dimension will vary the fastest (C-style contiguous). If - fortran is 'A', then it does not matter and you will get - whatever the object decides is more efficient. - - A new reference is returned that must be DECREF'd when finished. 
- */ - PyAPI_FUNC(PyObject *) PyMemoryView_FromObject(PyObject *base); - +PyAPI_FUNC(PyObject *) PyMemoryView_FromMemory(char *mem, Py_ssize_t size, + int flags); #ifndef Py_LIMITED_API PyAPI_FUNC(PyObject *) PyMemoryView_FromBuffer(Py_buffer *info); - /* create new if bufptr is NULL - will be a new bytesobject in base */ #endif +PyAPI_FUNC(PyObject *) PyMemoryView_GetContiguous(PyObject *base, + int buffertype, + char order); -/* The struct is declared here so that macros can work, but it shouldn't - be considered public. Don't access those fields directly, use the macros +/* The structs are declared here so that macros can work, but they shouldn't + be considered public. Don't access their fields directly, use the macros and functions instead! */ #ifndef Py_LIMITED_API +#define _Py_MANAGED_BUFFER_RELEASED 0x001 /* access to exporter blocked */ +#define _Py_MANAGED_BUFFER_FREE_FORMAT 0x002 /* free format */ typedef struct { PyObject_HEAD - Py_buffer view; - Py_hash_t hash; + int flags; /* state flags */ + Py_ssize_t exports; /* number of direct memoryview exports */ + Py_buffer master; /* snapshot buffer obtained from the original exporter */ +} _PyManagedBufferObject; + + +/* static storage used for casting between formats */ +#define _Py_MEMORYVIEW_MAX_FORMAT 3 /* must be >= 3 */ + +/* memoryview state flags */ +#define _Py_MEMORYVIEW_RELEASED 0x001 /* access to master buffer blocked */ +#define _Py_MEMORYVIEW_C 0x002 /* C-contiguous layout */ +#define _Py_MEMORYVIEW_FORTRAN 0x004 /* Fortran contiguous layout */ +#define _Py_MEMORYVIEW_SCALAR 0x008 /* scalar: ndim = 0 */ +#define _Py_MEMORYVIEW_PIL 0x010 /* PIL-style layout */ + +typedef struct { + PyObject_VAR_HEAD + _PyManagedBufferObject *mbuf; /* managed buffer */ + Py_hash_t hash; /* hash value for read-only views */ + int flags; /* state flags */ + Py_ssize_t exports; /* number of buffer re-exports */ + Py_buffer view; /* private copy of the exporter's view */ + char format[_Py_MEMORYVIEW_MAX_FORMAT]; /* used for casting */ + Py_ssize_t ob_array[1]; /* shape, strides, suboffsets */ } PyMemoryViewObject; #endif diff --git a/Include/object.h b/Include/object.h index 71d9dc8..9b3055d 100644 --- a/Include/object.h +++ b/Include/object.h @@ -186,15 +186,16 @@ typedef struct bufferinfo { Py_ssize_t *shape; Py_ssize_t *strides; Py_ssize_t *suboffsets; - Py_ssize_t smalltable[2]; /* static store for shape and strides of - mono-dimensional buffers. */ void *internal; } Py_buffer; typedef int (*getbufferproc)(PyObject *, Py_buffer *, int); typedef void (*releasebufferproc)(PyObject *, Py_buffer *); - /* Flags for getting buffers */ +/* Maximum number of dimensions */ +#define PyBUF_MAX_NDIM 64 + +/* Flags for getting buffers */ #define PyBUF_SIMPLE 0 #define PyBUF_WRITABLE 0x0001 /* we used to include an E, backwards compatible alias */ diff --git a/Lib/ctypes/test/test_pep3118.py b/Lib/ctypes/test/test_pep3118.py index fa6461f..ad13b01 100644 --- a/Lib/ctypes/test/test_pep3118.py +++ b/Lib/ctypes/test/test_pep3118.py @@ -25,14 +25,17 @@ class Test(unittest.TestCase): v = memoryview(ob) try: self.assertEqual(normalize(v.format), normalize(fmt)) - if shape is not None: + if shape: self.assertEqual(len(v), shape[0]) else: self.assertEqual(len(v) * sizeof(itemtp), sizeof(ob)) self.assertEqual(v.itemsize, sizeof(itemtp)) self.assertEqual(v.shape, shape) - # ctypes object always have a non-strided memory block - self.assertEqual(v.strides, None) + # XXX Issue #12851: PyCData_NewGetBuffer() must provide strides + # if requested. 
memoryview currently reconstructs missing + # stride information, so this assert will fail. + # self.assertEqual(v.strides, ()) + # they are always read/write self.assertFalse(v.readonly) @@ -52,14 +55,15 @@ class Test(unittest.TestCase): v = memoryview(ob) try: self.assertEqual(v.format, fmt) - if shape is not None: + if shape: self.assertEqual(len(v), shape[0]) else: self.assertEqual(len(v) * sizeof(itemtp), sizeof(ob)) self.assertEqual(v.itemsize, sizeof(itemtp)) self.assertEqual(v.shape, shape) - # ctypes object always have a non-strided memory block - self.assertEqual(v.strides, None) + # XXX Issue #12851 + # self.assertEqual(v.strides, ()) + # they are always read/write self.assertFalse(v.readonly) @@ -110,34 +114,34 @@ native_types = [ ## simple types - (c_char, "<c", None, c_char), - (c_byte, "<b", None, c_byte), - (c_ubyte, "<B", None, c_ubyte), - (c_short, "<h", None, c_short), - (c_ushort, "<H", None, c_ushort), + (c_char, "<c", (), c_char), + (c_byte, "<b", (), c_byte), + (c_ubyte, "<B", (), c_ubyte), + (c_short, "<h", (), c_short), + (c_ushort, "<H", (), c_ushort), # c_int and c_uint may be aliases to c_long - #(c_int, "<i", None, c_int), - #(c_uint, "<I", None, c_uint), + #(c_int, "<i", (), c_int), + #(c_uint, "<I", (), c_uint), - (c_long, "<l", None, c_long), - (c_ulong, "<L", None, c_ulong), + (c_long, "<l", (), c_long), + (c_ulong, "<L", (), c_ulong), # c_longlong and c_ulonglong are aliases on 64-bit platforms #(c_longlong, "<q", None, c_longlong), #(c_ulonglong, "<Q", None, c_ulonglong), - (c_float, "<f", None, c_float), - (c_double, "<d", None, c_double), + (c_float, "<f", (), c_float), + (c_double, "<d", (), c_double), # c_longdouble may be an alias to c_double - (c_bool, "<?", None, c_bool), - (py_object, "<O", None, py_object), + (c_bool, "<?", (), c_bool), + (py_object, "<O", (), py_object), ## pointers - (POINTER(c_byte), "&<b", None, POINTER(c_byte)), - (POINTER(POINTER(c_long)), "&&<l", None, POINTER(POINTER(c_long))), + (POINTER(c_byte), "&<b", (), POINTER(c_byte)), + (POINTER(POINTER(c_long)), "&&<l", (), POINTER(POINTER(c_long))), ## arrays and pointers @@ -145,32 +149,32 @@ native_types = [ (c_float * 4 * 3 * 2, "(2,3,4)<f", (2,3,4), c_float), (POINTER(c_short) * 2, "(2)&<h", (2,), POINTER(c_short)), (POINTER(c_short) * 2 * 3, "(3,2)&<h", (3,2,), POINTER(c_short)), - (POINTER(c_short * 2), "&(2)<h", None, POINTER(c_short)), + (POINTER(c_short * 2), "&(2)<h", (), POINTER(c_short)), ## structures and unions - (Point, "T{<l:x:<l:y:}", None, Point), + (Point, "T{<l:x:<l:y:}", (), Point), # packed structures do not implement the pep - (PackedPoint, "B", None, PackedPoint), - (Point2, "T{<l:x:<l:y:}", None, Point2), - (EmptyStruct, "T{}", None, EmptyStruct), + (PackedPoint, "B", (), PackedPoint), + (Point2, "T{<l:x:<l:y:}", (), Point2), + (EmptyStruct, "T{}", (), EmptyStruct), # the pep does't support unions - (aUnion, "B", None, aUnion), + (aUnion, "B", (), aUnion), ## pointer to incomplete structure - (Incomplete, "B", None, Incomplete), - (POINTER(Incomplete), "&B", None, POINTER(Incomplete)), + (Incomplete, "B", (), Incomplete), + (POINTER(Incomplete), "&B", (), POINTER(Incomplete)), # 'Complete' is a structure that starts incomplete, but is completed after the # pointer type to it has been created. - (Complete, "T{<l:a:}", None, Complete), + (Complete, "T{<l:a:}", (), Complete), # Unfortunately the pointer format string is not fixed... 
- (POINTER(Complete), "&B", None, POINTER(Complete)), + (POINTER(Complete), "&B", (), POINTER(Complete)), ## other # function signatures are not implemented - (CFUNCTYPE(None), "X{}", None, CFUNCTYPE(None)), + (CFUNCTYPE(None), "X{}", (), CFUNCTYPE(None)), ] @@ -186,10 +190,10 @@ class LEPoint(LittleEndianStructure): # and little endian machines. # endian_types = [ - (BEPoint, "T{>l:x:>l:y:}", None, BEPoint), - (LEPoint, "T{<l:x:<l:y:}", None, LEPoint), - (POINTER(BEPoint), "&T{>l:x:>l:y:}", None, POINTER(BEPoint)), - (POINTER(LEPoint), "&T{<l:x:<l:y:}", None, POINTER(LEPoint)), + (BEPoint, "T{>l:x:>l:y:}", (), BEPoint), + (LEPoint, "T{<l:x:<l:y:}", (), LEPoint), + (POINTER(BEPoint), "&T{>l:x:>l:y:}", (), POINTER(BEPoint)), + (POINTER(LEPoint), "&T{<l:x:<l:y:}", (), POINTER(LEPoint)), ] if __name__ == "__main__": diff --git a/Lib/test/test_buffer.py b/Lib/test/test_buffer.py new file mode 100644 index 0000000..25324ef --- /dev/null +++ b/Lib/test/test_buffer.py @@ -0,0 +1,3437 @@ +# +# The ndarray object from _testbuffer.c is a complete implementation of +# a PEP-3118 buffer provider. It is independent from NumPy's ndarray +# and the tests don't require NumPy. +# +# If NumPy is present, some tests check both ndarray implementations +# against each other. +# +# Most ndarray tests also check that memoryview(ndarray) behaves in +# the same way as the original. Thus, a substantial part of the +# memoryview tests is now in this module. +# + +import unittest +from test import support +from itertools import permutations, product +from random import randrange, sample, choice +from sysconfig import get_config_var +from platform import architecture +import warnings +import sys, array, io +from decimal import Decimal +from fractions import Fraction + +try: + from _testbuffer import * +except ImportError: + ndarray = None + +try: + import struct +except ImportError: + struct = None + +try: + with warnings.catch_warnings(): + from numpy import ndarray as numpy_array +except ImportError: + numpy_array = None + + +SHORT_TEST = True + + +# ====================================================================== +# Random lists by format specifier +# ====================================================================== + +# Native format chars and their ranges. +NATIVE = { + '?':0, 'c':0, 'b':0, 'B':0, + 'h':0, 'H':0, 'i':0, 'I':0, + 'l':0, 'L':0, 'n':0, 'N':0, + 'f':0, 'd':0, 'P':0 +} + +if struct: + try: + # Add "qQ" if present in native mode. + struct.pack('Q', 2**64-1) + NATIVE['q'] = 0 + NATIVE['Q'] = 0 + except struct.error: + pass + +# Standard format chars and their ranges. 
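+# Each range below is a (low, high) pair handed to randrange(), so the
+# upper bound is exclusive; e.g. 'b' produces values in [-128, 127].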
+STANDARD = { + '?':(0, 2), 'c':(0, 1<<8), + 'b':(-(1<<7), 1<<7), 'B':(0, 1<<8), + 'h':(-(1<<15), 1<<15), 'H':(0, 1<<16), + 'i':(-(1<<31), 1<<31), 'I':(0, 1<<32), + 'l':(-(1<<31), 1<<31), 'L':(0, 1<<32), + 'q':(-(1<<63), 1<<63), 'Q':(0, 1<<64), + 'f':(-(1<<63), 1<<63), 'd':(-(1<<1023), 1<<1023) +} + +def native_type_range(fmt): + """Return range of a native type.""" + if fmt == 'c': + lh = (0, 256) + elif fmt == '?': + lh = (0, 2) + elif fmt == 'f': + lh = (-(1<<63), 1<<63) + elif fmt == 'd': + lh = (-(1<<1023), 1<<1023) + else: + for exp in (128, 127, 64, 63, 32, 31, 16, 15, 8, 7): + try: + struct.pack(fmt, (1<<exp)-1) + break + except struct.error: + pass + lh = (-(1<<exp), 1<<exp) if exp & 1 else (0, 1<<exp) + return lh + +fmtdict = { + '':NATIVE, + '@':NATIVE, + '<':STANDARD, + '>':STANDARD, + '=':STANDARD, + '!':STANDARD +} + +if struct: + for fmt in fmtdict['@']: + fmtdict['@'][fmt] = native_type_range(fmt) + +MEMORYVIEW = NATIVE.copy() +ARRAY = NATIVE.copy() +for k in NATIVE: + if not k in "bBhHiIlLfd": + del ARRAY[k] + +BYTEFMT = NATIVE.copy() +for k in NATIVE: + if not k in "Bbc": + del BYTEFMT[k] + +fmtdict['m'] = MEMORYVIEW +fmtdict['@m'] = MEMORYVIEW +fmtdict['a'] = ARRAY +fmtdict['b'] = BYTEFMT +fmtdict['@b'] = BYTEFMT + +# Capabilities of the test objects: +MODE = 0 +MULT = 1 +cap = { # format chars # multiplier + 'ndarray': (['', '@', '<', '>', '=', '!'], ['', '1', '2', '3']), + 'array': (['a'], ['']), + 'numpy': ([''], ['']), + 'memoryview': (['@m', 'm'], ['']), + 'bytefmt': (['@b', 'b'], ['']), +} + +def randrange_fmt(mode, char, obj): + """Return random item for a type specified by a mode and a single + format character.""" + x = randrange(*fmtdict[mode][char]) + if char == 'c': + x = bytes(chr(x), 'latin1') + if char == '?': + x = bool(x) + if char == 'f' or char == 'd': + x = struct.pack(char, x) + x = struct.unpack(char, x)[0] + if obj == 'numpy' and x == b'\x00': + # http://projects.scipy.org/numpy/ticket/1925 + x = b'\x01' + return x + +def gen_item(fmt, obj): + """Return single random item.""" + mode, chars = fmt.split('#') + x = [] + for c in chars: + x.append(randrange_fmt(mode, c, obj)) + return x[0] if len(x) == 1 else tuple(x) + +def gen_items(n, fmt, obj): + """Return a list of random items (or a scalar).""" + if n == 0: + return gen_item(fmt, obj) + lst = [0] * n + for i in range(n): + lst[i] = gen_item(fmt, obj) + return lst + +def struct_items(n, obj): + mode = choice(cap[obj][MODE]) + xfmt = mode + '#' + fmt = mode.strip('amb') + nmemb = randrange(2, 10) # number of struct members + for _ in range(nmemb): + char = choice(tuple(fmtdict[mode])) + multiplier = choice(cap[obj][MULT]) + xfmt += (char * int(multiplier if multiplier else 1)) + fmt += (multiplier + char) + items = gen_items(n, xfmt, obj) + item = gen_item(xfmt, obj) + return fmt, items, item + +def randitems(n, obj='ndarray', mode=None, char=None): + """Return random format, items, item.""" + if mode is None: + mode = choice(cap[obj][MODE]) + if char is None: + char = choice(tuple(fmtdict[mode])) + multiplier = choice(cap[obj][MULT]) + fmt = mode + '#' + char * int(multiplier if multiplier else 1) + items = gen_items(n, fmt, obj) + item = gen_item(fmt, obj) + fmt = mode.strip('amb') + multiplier + char + return fmt, items, item + +def iter_mode(n, obj='ndarray'): + """Iterate through supported mode/char combinations.""" + for mode in cap[obj][MODE]: + for char in fmtdict[mode]: + yield randitems(n, obj, mode, char) + +def iter_format(nitems, testobj='ndarray'): + """Yield (format, items, item) for 
all possible modes and format + characters plus one random compound format string.""" + for t in iter_mode(nitems, testobj): + yield t + if testobj != 'ndarray': + raise StopIteration + yield struct_items(nitems, testobj) + + +def is_byte_format(fmt): + return 'c' in fmt or 'b' in fmt or 'B' in fmt + +def is_memoryview_format(fmt): + """format suitable for memoryview""" + x = len(fmt) + return ((x == 1 or (x == 2 and fmt[0] == '@')) and + fmt[x-1] in MEMORYVIEW) + +NON_BYTE_FORMAT = [c for c in fmtdict['@'] if not is_byte_format(c)] + + +# ====================================================================== +# Multi-dimensional tolist(), slicing and slice assignments +# ====================================================================== + +def atomp(lst): + """Tuple items (representing structs) are regarded as atoms.""" + return not isinstance(lst, list) + +def listp(lst): + return isinstance(lst, list) + +def prod(lst): + """Product of list elements.""" + if len(lst) == 0: + return 0 + x = lst[0] + for v in lst[1:]: + x *= v + return x + +def strides_from_shape(ndim, shape, itemsize, layout): + """Calculate strides of a contiguous array. Layout is 'C' or + 'F' (Fortran).""" + if ndim == 0: + return () + if layout == 'C': + strides = list(shape[1:]) + [itemsize] + for i in range(ndim-2, -1, -1): + strides[i] *= strides[i+1] + else: + strides = [itemsize] + list(shape[:-1]) + for i in range(1, ndim): + strides[i] *= strides[i-1] + return strides + +def _ca(items, s): + """Convert flat item list to the nested list representation of a + multidimensional C array with shape 's'.""" + if atomp(items): + return items + if len(s) == 0: + return items[0] + lst = [0] * s[0] + stride = len(items) // s[0] if s[0] else 0 + for i in range(s[0]): + start = i*stride + lst[i] = _ca(items[start:start+stride], s[1:]) + return lst + +def _fa(items, s): + """Convert flat item list to the nested list representation of a + multidimensional Fortran array with shape 's'.""" + if atomp(items): + return items + if len(s) == 0: + return items[0] + lst = [0] * s[0] + stride = s[0] + for i in range(s[0]): + lst[i] = _fa(items[i::stride], s[1:]) + return lst + +def carray(items, shape): + if listp(items) and not 0 in shape and prod(shape) != len(items): + raise ValueError("prod(shape) != len(items)") + return _ca(items, shape) + +def farray(items, shape): + if listp(items) and not 0 in shape and prod(shape) != len(items): + raise ValueError("prod(shape) != len(items)") + return _fa(items, shape) + +def indices(shape): + """Generate all possible tuples of indices.""" + iterables = [range(v) for v in shape] + return product(*iterables) + +def getindex(ndim, ind, strides): + """Convert multi-dimensional index to the position in the flat list.""" + ret = 0 + for i in range(ndim): + ret += strides[i] * ind[i] + return ret + +def transpose(src, shape): + """Transpose flat item list that is regarded as a multi-dimensional + matrix defined by shape: dest...[k][j][i] = src[i][j][k]... 
""" + if not shape: + return src + ndim = len(shape) + sstrides = strides_from_shape(ndim, shape, 1, 'C') + dstrides = strides_from_shape(ndim, shape[::-1], 1, 'C') + dest = [0] * len(src) + for ind in indices(shape): + fr = getindex(ndim, ind, sstrides) + to = getindex(ndim, ind[::-1], dstrides) + dest[to] = src[fr] + return dest + +def _flatten(lst): + """flatten list""" + if lst == []: + return lst + if atomp(lst): + return [lst] + return _flatten(lst[0]) + _flatten(lst[1:]) + +def flatten(lst): + """flatten list or return scalar""" + if atomp(lst): # scalar + return lst + return _flatten(lst) + +def slice_shape(lst, slices): + """Get the shape of lst after slicing: slices is a list of slice + objects.""" + if atomp(lst): + return [] + return [len(lst[slices[0]])] + slice_shape(lst[0], slices[1:]) + +def multislice(lst, slices): + """Multi-dimensional slicing: slices is a list of slice objects.""" + if atomp(lst): + return lst + return [multislice(sublst, slices[1:]) for sublst in lst[slices[0]]] + +def m_assign(llst, rlst, lslices, rslices): + """Multi-dimensional slice assignment: llst and rlst are the operands, + lslices and rslices are lists of slice objects. llst and rlst must + have the same structure. + + For a two-dimensional example, this is not implemented in Python: + + llst[0:3:2, 0:3:2] = rlst[1:3:1, 1:3:1] + + Instead we write: + + lslices = [slice(0,3,2), slice(0,3,2)] + rslices = [slice(1,3,1), slice(1,3,1)] + multislice_assign(llst, rlst, lslices, rslices) + """ + if atomp(rlst): + return rlst + rlst = [m_assign(l, r, lslices[1:], rslices[1:]) + for l, r in zip(llst[lslices[0]], rlst[rslices[0]])] + llst[lslices[0]] = rlst + return llst + +def cmp_structure(llst, rlst, lslices, rslices): + """Compare the structure of llst[lslices] and rlst[rslices].""" + lshape = slice_shape(llst, lslices) + rshape = slice_shape(rlst, rslices) + if (len(lshape) != len(rshape)): + return -1 + for i in range(len(lshape)): + if lshape[i] != rshape[i]: + return -1 + if lshape[i] == 0: + return 0 + return 0 + +def multislice_assign(llst, rlst, lslices, rslices): + """Return llst after assigning: llst[lslices] = rlst[rslices]""" + if cmp_structure(llst, rlst, lslices, rslices) < 0: + raise ValueError("lvalue and rvalue have different structures") + return m_assign(llst, rlst, lslices, rslices) + + +# ====================================================================== +# Random structures +# ====================================================================== + +# +# PEP-3118 is very permissive with respect to the contents of a +# Py_buffer. In particular: +# +# - shape can be zero +# - strides can be any integer, including zero +# - offset can point to any location in the underlying +# memory block, provided that it is a multiple of +# itemsize. +# +# The functions in this section test and verify random structures +# in full generality. A structure is valid iff it fits in the +# underlying memory block. 
+# +# The structure 't' (short for 'tuple') is fully defined by: +# +# t = (memlen, itemsize, ndim, shape, strides, offset) +# + +def verify_structure(memlen, itemsize, ndim, shape, strides, offset): + """Verify that the parameters represent a valid array within + the bounds of the allocated memory: + char *mem: start of the physical memory block + memlen: length of the physical memory block + offset: (char *)buf - mem + """ + if offset % itemsize: + return False + if offset < 0 or offset+itemsize > memlen: + return False + if any(v % itemsize for v in strides): + return False + + if ndim <= 0: + return ndim == 0 and not shape and not strides + if 0 in shape: + return True + + imin = sum(strides[j]*(shape[j]-1) for j in range(ndim) + if strides[j] <= 0) + imax = sum(strides[j]*(shape[j]-1) for j in range(ndim) + if strides[j] > 0) + + return 0 <= offset+imin and offset+imax+itemsize <= memlen + +def get_item(lst, indices): + for i in indices: + lst = lst[i] + return lst + +def memory_index(indices, t): + """Location of an item in the underlying memory.""" + memlen, itemsize, ndim, shape, strides, offset = t + p = offset + for i in range(ndim): + p += strides[i]*indices[i] + return p + +def is_overlapping(t): + """The structure 't' is overlapping if at least one memory location + is visited twice while iterating through all possible tuples of + indices.""" + memlen, itemsize, ndim, shape, strides, offset = t + visited = 1<<memlen + for ind in indices(shape): + i = memory_index(ind, t) + bit = 1<<i + if visited & bit: + return True + visited |= bit + return False + +def rand_structure(itemsize, valid, maxdim=5, maxshape=16, shape=()): + """Return random structure: + (memlen, itemsize, ndim, shape, strides, offset) + If 'valid' is true, the returned structure is valid, otherwise invalid. + If 'shape' is given, use that instead of creating a random shape. 
+ """ + if not shape: + ndim = randrange(maxdim+1) + if (ndim == 0): + if valid: + return itemsize, itemsize, ndim, (), (), 0 + else: + nitems = randrange(1, 16+1) + memlen = nitems * itemsize + offset = -itemsize if randrange(2) == 0 else memlen + return memlen, itemsize, ndim, (), (), offset + + minshape = 2 + n = randrange(100) + if n >= 95 and valid: + minshape = 0 + elif n >= 90: + minshape = 1 + shape = [0] * ndim + + for i in range(ndim): + shape[i] = randrange(minshape, maxshape+1) + else: + ndim = len(shape) + + maxstride = 5 + n = randrange(100) + zero_stride = True if n >= 95 and n & 1 else False + + strides = [0] * ndim + strides[ndim-1] = itemsize * randrange(-maxstride, maxstride+1) + if not zero_stride and strides[ndim-1] == 0: + strides[ndim-1] = itemsize + + for i in range(ndim-2, -1, -1): + maxstride *= shape[i+1] if shape[i+1] else 1 + if zero_stride: + strides[i] = itemsize * randrange(-maxstride, maxstride+1) + else: + strides[i] = ((1,-1)[randrange(2)] * + itemsize * randrange(1, maxstride+1)) + + imin = imax = 0 + if not 0 in shape: + imin = sum(strides[j]*(shape[j]-1) for j in range(ndim) + if strides[j] <= 0) + imax = sum(strides[j]*(shape[j]-1) for j in range(ndim) + if strides[j] > 0) + + nitems = imax - imin + if valid: + offset = -imin * itemsize + memlen = offset + (imax+1) * itemsize + else: + memlen = (-imin + imax) * itemsize + offset = -imin-itemsize if randrange(2) == 0 else memlen + return memlen, itemsize, ndim, shape, strides, offset + +def randslice_from_slicelen(slicelen, listlen): + """Create a random slice of len slicelen that fits into listlen.""" + maxstart = listlen - slicelen + start = randrange(maxstart+1) + maxstep = (listlen - start) // slicelen if slicelen else 1 + step = randrange(1, maxstep+1) + stop = start + slicelen * step + s = slice(start, stop, step) + _, _, _, control = slice_indices(s, listlen) + if control != slicelen: + raise RuntimeError + return s + +def randslice_from_shape(ndim, shape): + """Create two sets of slices for an array x with shape 'shape' + such that shapeof(x[lslices]) == shapeof(x[rslices]).""" + lslices = [0] * ndim + rslices = [0] * ndim + for n in range(ndim): + l = shape[n] + slicelen = randrange(1, l+1) if l > 0 else 0 + lslices[n] = randslice_from_slicelen(slicelen, l) + rslices[n] = randslice_from_slicelen(slicelen, l) + return tuple(lslices), tuple(rslices) + +def rand_aligned_slices(maxdim=5, maxshape=16): + """Create (lshape, rshape, tuple(lslices), tuple(rslices)) such that + shapeof(x[lslices]) == shapeof(y[rslices]), where x is an array + with shape 'lshape' and y is an array with shape 'rshape'.""" + ndim = randrange(1, maxdim+1) + minshape = 2 + n = randrange(100) + if n >= 95: + minshape = 0 + elif n >= 90: + minshape = 1 + all_random = True if randrange(100) >= 80 else False + lshape = [0]*ndim; rshape = [0]*ndim + lslices = [0]*ndim; rslices = [0]*ndim + + for n in range(ndim): + small = randrange(minshape, maxshape+1) + big = randrange(minshape, maxshape+1) + if big < small: + big, small = small, big + + # Create a slice that fits the smaller value. + if all_random: + start = randrange(-small, small+1) + stop = randrange(-small, small+1) + step = (1,-1)[randrange(2)] * randrange(1, small+2) + s_small = slice(start, stop, step) + _, _, _, slicelen = slice_indices(s_small, small) + else: + slicelen = randrange(1, small+1) if small > 0 else 0 + s_small = randslice_from_slicelen(slicelen, small) + + # Create a slice of the same length for the bigger value. 
+ s_big = randslice_from_slicelen(slicelen, big) + if randrange(2) == 0: + rshape[n], lshape[n] = big, small + rslices[n], lslices[n] = s_big, s_small + else: + rshape[n], lshape[n] = small, big + rslices[n], lslices[n] = s_small, s_big + + return lshape, rshape, tuple(lslices), tuple(rslices) + +def randitems_from_structure(fmt, t): + """Return a list of random items for structure 't' with format + 'fmtchar'.""" + memlen, itemsize, _, _, _, _ = t + return gen_items(memlen//itemsize, '#'+fmt, 'numpy') + +def ndarray_from_structure(items, fmt, t, flags=0): + """Return ndarray from the tuple returned by rand_structure()""" + memlen, itemsize, ndim, shape, strides, offset = t + return ndarray(items, shape=shape, strides=strides, format=fmt, + offset=offset, flags=ND_WRITABLE|flags) + +def numpy_array_from_structure(items, fmt, t): + """Return numpy_array from the tuple returned by rand_structure()""" + memlen, itemsize, ndim, shape, strides, offset = t + buf = bytearray(memlen) + for j, v in enumerate(items): + struct.pack_into(fmt, buf, j*itemsize, v) + return numpy_array(buffer=buf, shape=shape, strides=strides, + dtype=fmt, offset=offset) + + +# ====================================================================== +# memoryview casts +# ====================================================================== + +def cast_items(exporter, fmt, itemsize, shape=None): + """Interpret the raw memory of 'exporter' as a list of items with + size 'itemsize'. If shape=None, the new structure is assumed to + be 1-D with n * itemsize = bytelen. If shape is given, the usual + constraint for contiguous arrays prod(shape) * itemsize = bytelen + applies. On success, return (items, shape). If the constraints + cannot be met, return (None, None). If a chunk of bytes is interpreted + as NaN as a result of float conversion, return ('nan', None).""" + bytelen = exporter.nbytes + if shape: + if prod(shape) * itemsize != bytelen: + return None, shape + elif shape == []: + if exporter.ndim == 0 or itemsize != bytelen: + return None, shape + else: + n, r = divmod(bytelen, itemsize) + shape = [n] + if r != 0: + return None, shape + + mem = exporter.tobytes() + byteitems = [mem[i:i+itemsize] for i in range(0, len(mem), itemsize)] + + items = [] + for v in byteitems: + item = struct.unpack(fmt, v)[0] + if item != item: + return 'nan', shape + items.append(item) + + return (items, shape) if shape != [] else (items[0], shape) + +def gencastshapes(): + """Generate shapes to test casting.""" + for n in range(32): + yield [n] + ndim = randrange(4, 6) + minshape = 1 if randrange(100) > 80 else 2 + yield [randrange(minshape, 5) for _ in range(ndim)] + ndim = randrange(2, 4) + minshape = 1 if randrange(100) > 80 else 2 + yield [randrange(minshape, 5) for _ in range(ndim)] + + +# ====================================================================== +# Actual tests +# ====================================================================== + +def genslices(n): + """Generate all possible slices for a single dimension.""" + return product(range(-n, n+1), range(-n, n+1), range(-n, n+1)) + +def genslices_ndim(ndim, shape): + """Generate all possible slice tuples for 'shape'.""" + iterables = [genslices(shape[n]) for n in range(ndim)] + return product(*iterables) + +def rslice(n, allow_empty=False): + """Generate random slice for a single dimension of length n. 
+ If zero=True, the slices may be empty, otherwise they will + be non-empty.""" + minlen = 0 if allow_empty or n == 0 else 1 + slicelen = randrange(minlen, n+1) + return randslice_from_slicelen(slicelen, n) + +def rslices(n, allow_empty=False): + """Generate random slices for a single dimension.""" + for _ in range(5): + yield rslice(n, allow_empty) + +def rslices_ndim(ndim, shape, iterations=5): + """Generate random slice tuples for 'shape'.""" + # non-empty slices + for _ in range(iterations): + yield tuple(rslice(shape[n]) for n in range(ndim)) + # possibly empty slices + for _ in range(iterations): + yield tuple(rslice(shape[n], allow_empty=True) for n in range(ndim)) + # invalid slices + yield tuple(slice(0,1,0) for _ in range(ndim)) + +def rpermutation(iterable, r=None): + pool = tuple(iterable) + r = len(pool) if r is None else r + yield tuple(sample(pool, r)) + +def ndarray_print(nd): + """Print ndarray for debugging.""" + try: + x = nd.tolist() + except (TypeError, NotImplementedError): + x = nd.tobytes() + if isinstance(nd, ndarray): + offset = nd.offset + flags = nd.flags + else: + offset = 'unknown' + flags = 'unknown' + print("ndarray(%s, shape=%s, strides=%s, suboffsets=%s, offset=%s, " + "format='%s', itemsize=%s, flags=%s)" % + (x, nd.shape, nd.strides, nd.suboffsets, offset, + nd.format, nd.itemsize, flags)) + sys.stdout.flush() + + +ITERATIONS = 100 +MAXDIM = 5 +MAXSHAPE = 10 + +if SHORT_TEST: + ITERATIONS = 10 + MAXDIM = 3 + MAXSHAPE = 4 + genslices = rslices + genslices_ndim = rslices_ndim + permutations = rpermutation + + +@unittest.skipUnless(struct, 'struct module required for this test.') +@unittest.skipUnless(ndarray, 'ndarray object required for this test') +class TestBufferProtocol(unittest.TestCase): + + def setUp(self): + self.sizeof_void_p = get_config_var('SIZEOF_VOID_P') + if not self.sizeof_void_p: + self.sizeof_void_p = 8 if architecture()[0] == '64bit' else 4 + + def verify(self, result, obj=-1, + itemsize={1}, fmt=-1, readonly={1}, + ndim={1}, shape=-1, strides=-1, + lst=-1, sliced=False, cast=False): + # Verify buffer contents against expected values. Default values + # are deliberately initialized to invalid types. + if shape: + expected_len = prod(shape)*itemsize + else: + if not fmt: # array has been implicitly cast to unsigned bytes + expected_len = len(lst) + else: # ndim = 0 + expected_len = itemsize + + # Reconstruct suboffsets from strides. Support for slicing + # could be added, but is currently only needed for test_getbuf(). + suboffsets = () + if result.suboffsets: + self.assertGreater(ndim, 0) + + suboffset0 = 0 + for n in range(1, ndim): + if shape[n] == 0: + break + if strides[n] <= 0: + suboffset0 += -strides[n] * (shape[n]-1) + + suboffsets = [suboffset0] + [-1 for v in range(ndim-1)] + + # Not correct if slicing has occurred in the first dimension. 
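+ # For a PIL-style (suboffsets) array the first dimension is an array
+ # of pointers to the sub-arrays, so sizeof(void *) is used as its
+ # stride here.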
+ stride0 = self.sizeof_void_p + if strides[0] < 0: + stride0 = -stride0 + strides = [stride0] + list(strides[1:]) + + self.assertIs(result.obj, obj) + self.assertEqual(result.nbytes, expected_len) + self.assertEqual(result.itemsize, itemsize) + self.assertEqual(result.format, fmt) + self.assertEqual(result.readonly, readonly) + self.assertEqual(result.ndim, ndim) + self.assertEqual(result.shape, tuple(shape)) + if not (sliced and suboffsets): + self.assertEqual(result.strides, tuple(strides)) + self.assertEqual(result.suboffsets, tuple(suboffsets)) + + if isinstance(result, ndarray) or is_memoryview_format(fmt): + rep = result.tolist() if fmt else result.tobytes() + self.assertEqual(rep, lst) + + if not fmt: # array has been cast to unsigned bytes, + return # the remaining tests won't work. + + # PyBuffer_GetPointer() is the definition how to access an item. + # If PyBuffer_GetPointer(indices) is correct for all possible + # combinations of indices, the buffer is correct. + # + # Also test tobytes() against the flattened 'lst', with all items + # packed to bytes. + if not cast: # casts chop up 'lst' in different ways + b = bytearray() + buf_err = None + for ind in indices(shape): + try: + item1 = get_pointer(result, ind) + item2 = get_item(lst, ind) + if isinstance(item2, tuple): + x = struct.pack(fmt, *item2) + else: + x = struct.pack(fmt, item2) + b.extend(x) + except BufferError: + buf_err = True # re-exporter does not provide full buffer + break + self.assertEqual(item1, item2) + + if not buf_err: + # test tobytes() + self.assertEqual(result.tobytes(), b) + + if not buf_err and is_memoryview_format(fmt): + + # lst := expected multi-dimensional logical representation + # flatten(lst) := elements in C-order + ff = fmt if fmt else 'B' + flattened = flatten(lst) + + # Rules for 'A': if the array is already contiguous, return + # the array unaltered. Otherwise, return a contiguous 'C' + # representation. + for order in ['C', 'F', 'A']: + expected = result + if order == 'F': + if not is_contiguous(result, 'A') or \ + is_contiguous(result, 'C'): + # For constructing the ndarray, convert the + # flattened logical representation to Fortran order. + trans = transpose(flattened, shape) + expected = ndarray(trans, shape=shape, format=ff, + flags=ND_FORTRAN) + else: # 'C', 'A' + if not is_contiguous(result, 'A') or \ + is_contiguous(result, 'F') and order == 'C': + # The flattened list is already in C-order. 
+ expected = ndarray(flattened, shape=shape, format=ff) + contig = get_contiguous(result, PyBUF_READ, order) + contig = get_contiguous(result, PyBUF_READ, order) + self.assertEqual(contig.tobytes(), b) + self.assertTrue(cmp_contig(contig, expected)) + + if is_memoryview_format(fmt): + try: + m = memoryview(result) + except BufferError: # re-exporter does not provide full information + return + ex = result.obj if isinstance(result, memoryview) else result + self.assertIs(m.obj, ex) + self.assertEqual(m.nbytes, expected_len) + self.assertEqual(m.itemsize, itemsize) + self.assertEqual(m.format, fmt) + self.assertEqual(m.readonly, readonly) + self.assertEqual(m.ndim, ndim) + self.assertEqual(m.shape, tuple(shape)) + if not (sliced and suboffsets): + self.assertEqual(m.strides, tuple(strides)) + self.assertEqual(m.suboffsets, tuple(suboffsets)) + + n = 1 if ndim == 0 else len(lst) + self.assertEqual(len(m), n) + + rep = result.tolist() if fmt else result.tobytes() + self.assertEqual(rep, lst) + self.assertEqual(m, result) + + def verify_getbuf(self, orig_ex, ex, req, sliced=False): + def simple_fmt(ex): + return ex.format == '' or ex.format == 'B' + def match(req, flag): + return ((req&flag) == flag) + + if (# writable request to read-only exporter + (ex.readonly and match(req, PyBUF_WRITABLE)) or + # cannot match explicit contiguity request + (match(req, PyBUF_C_CONTIGUOUS) and not ex.c_contiguous) or + (match(req, PyBUF_F_CONTIGUOUS) and not ex.f_contiguous) or + (match(req, PyBUF_ANY_CONTIGUOUS) and not ex.contiguous) or + # buffer needs suboffsets + (not match(req, PyBUF_INDIRECT) and ex.suboffsets) or + # buffer without strides must be C-contiguous + (not match(req, PyBUF_STRIDES) and not ex.c_contiguous) or + # PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT + (not match(req, PyBUF_ND) and match(req, PyBUF_FORMAT))): + + self.assertRaises(BufferError, ndarray, ex, getbuf=req) + return + + if isinstance(ex, ndarray) or is_memoryview_format(ex.format): + lst = ex.tolist() + else: + nd = ndarray(ex, getbuf=PyBUF_FULL_RO) + lst = nd.tolist() + + # The consumer may have requested default values or a NULL format. + ro = 0 if match(req, PyBUF_WRITABLE) else ex.readonly + fmt = ex.format + itemsize = ex.itemsize + ndim = ex.ndim + if not match(req, PyBUF_FORMAT): + # itemsize refers to the original itemsize before the cast. + # The equality product(shape) * itemsize = len still holds. + # The equality calcsize(format) = itemsize does _not_ hold. 
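+ # E.g. an exporter with format 'L' requested without PyBUF_FORMAT
+ # reports format '' (treated as unsigned bytes), while itemsize keeps
+ # the value struct.calcsize('L').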
+ fmt = '' + lst = orig_ex.tobytes() # Issue 12834 + if not match(req, PyBUF_ND): + ndim = 1 + shape = orig_ex.shape if match(req, PyBUF_ND) else () + strides = orig_ex.strides if match(req, PyBUF_STRIDES) else () + + nd = ndarray(ex, getbuf=req) + self.verify(nd, obj=ex, + itemsize=itemsize, fmt=fmt, readonly=ro, + ndim=ndim, shape=shape, strides=strides, + lst=lst, sliced=sliced) + + def test_ndarray_getbuf(self): + requests = ( + # distinct flags + PyBUF_INDIRECT, PyBUF_STRIDES, PyBUF_ND, PyBUF_SIMPLE, + PyBUF_C_CONTIGUOUS, PyBUF_F_CONTIGUOUS, PyBUF_ANY_CONTIGUOUS, + # compound requests + PyBUF_FULL, PyBUF_FULL_RO, + PyBUF_RECORDS, PyBUF_RECORDS_RO, + PyBUF_STRIDED, PyBUF_STRIDED_RO, + PyBUF_CONTIG, PyBUF_CONTIG_RO, + ) + # items and format + items_fmt = ( + ([True if x % 2 else False for x in range(12)], '?'), + ([1,2,3,4,5,6,7,8,9,10,11,12], 'b'), + ([1,2,3,4,5,6,7,8,9,10,11,12], 'B'), + ([(2**31-x) if x % 2 else (-2**31+x) for x in range(12)], 'l') + ) + # shape, strides, offset + structure = ( + ([], [], 0), + ([12], [], 0), + ([12], [-1], 11), + ([6], [2], 0), + ([6], [-2], 11), + ([3, 4], [], 0), + ([3, 4], [-4, -1], 11), + ([2, 2], [4, 1], 4), + ([2, 2], [-4, -1], 8) + ) + # ndarray creation flags + ndflags = ( + 0, ND_WRITABLE, ND_FORTRAN, ND_FORTRAN|ND_WRITABLE, + ND_PIL, ND_PIL|ND_WRITABLE + ) + # flags that can actually be used as flags + real_flags = (0, PyBUF_WRITABLE, PyBUF_FORMAT, + PyBUF_WRITABLE|PyBUF_FORMAT) + + for items, fmt in items_fmt: + itemsize = struct.calcsize(fmt) + for shape, strides, offset in structure: + strides = [v * itemsize for v in strides] + offset *= itemsize + for flags in ndflags: + + if strides and (flags&ND_FORTRAN): + continue + if not shape and (flags&ND_PIL): + continue + + _items = items if shape else items[0] + ex1 = ndarray(_items, format=fmt, flags=flags, + shape=shape, strides=strides, offset=offset) + ex2 = ex1[::-2] if shape else None + + m1 = memoryview(ex1) + if ex2: + m2 = memoryview(ex2) + if ex1.ndim == 0 or (ex1.ndim == 1 and shape and strides): + self.assertEqual(m1, ex1) + if ex2 and ex2.ndim == 1 and shape and strides: + self.assertEqual(m2, ex2) + + for req in requests: + for bits in real_flags: + self.verify_getbuf(ex1, ex1, req|bits) + self.verify_getbuf(ex1, m1, req|bits) + if ex2: + self.verify_getbuf(ex2, ex2, req|bits, + sliced=True) + self.verify_getbuf(ex2, m2, req|bits, + sliced=True) + + items = [1,2,3,4,5,6,7,8,9,10,11,12] + + # ND_GETBUF_FAIL + ex = ndarray(items, shape=[12], flags=ND_GETBUF_FAIL) + self.assertRaises(BufferError, ndarray, ex) + + # Request complex structure from a simple exporter. In this + # particular case the test object is not PEP-3118 compliant. + base = ndarray([9], [1]) + ex = ndarray(base, getbuf=PyBUF_SIMPLE) + self.assertRaises(BufferError, ndarray, ex, getbuf=PyBUF_WRITABLE) + self.assertRaises(BufferError, ndarray, ex, getbuf=PyBUF_ND) + self.assertRaises(BufferError, ndarray, ex, getbuf=PyBUF_STRIDES) + self.assertRaises(BufferError, ndarray, ex, getbuf=PyBUF_C_CONTIGUOUS) + self.assertRaises(BufferError, ndarray, ex, getbuf=PyBUF_F_CONTIGUOUS) + self.assertRaises(BufferError, ndarray, ex, getbuf=PyBUF_ANY_CONTIGUOUS) + nd = ndarray(ex, getbuf=PyBUF_SIMPLE) + + def test_ndarray_exceptions(self): + nd = ndarray([9], [1]) + ndm = ndarray([9], [1], flags=ND_VAREXPORT) + + # Initialization of a new ndarray or mutation of an existing array. + for c in (ndarray, nd.push, ndm.push): + # Invalid types. 
+ self.assertRaises(TypeError, c, {1,2,3}) + self.assertRaises(TypeError, c, [1,2,'3']) + self.assertRaises(TypeError, c, [1,2,(3,4)]) + self.assertRaises(TypeError, c, [1,2,3], shape={3}) + self.assertRaises(TypeError, c, [1,2,3], shape=[3], strides={1}) + self.assertRaises(TypeError, c, [1,2,3], shape=[3], offset=[]) + self.assertRaises(TypeError, c, [1], shape=[1], format={}) + self.assertRaises(TypeError, c, [1], shape=[1], flags={}) + self.assertRaises(TypeError, c, [1], shape=[1], getbuf={}) + + # ND_FORTRAN flag is only valid without strides. + self.assertRaises(TypeError, c, [1], shape=[1], strides=[1], + flags=ND_FORTRAN) + + # ND_PIL flag is only valid with ndim > 0. + self.assertRaises(TypeError, c, [1], shape=[], flags=ND_PIL) + + # Invalid items. + self.assertRaises(ValueError, c, [], shape=[1]) + self.assertRaises(ValueError, c, ['XXX'], shape=[1], format="L") + # Invalid combination of items and format. + self.assertRaises(struct.error, c, [1000], shape=[1], format="B") + self.assertRaises(ValueError, c, [1,(2,3)], shape=[2], format="B") + self.assertRaises(ValueError, c, [1,2,3], shape=[3], format="QL") + + # Invalid ndim. + n = ND_MAX_NDIM+1 + self.assertRaises(ValueError, c, [1]*n, shape=[1]*n) + + # Invalid shape. + self.assertRaises(ValueError, c, [1], shape=[-1]) + self.assertRaises(ValueError, c, [1,2,3], shape=['3']) + self.assertRaises(OverflowError, c, [1], shape=[2**128]) + # prod(shape) * itemsize != len(items) + self.assertRaises(ValueError, c, [1,2,3,4,5], shape=[2,2], offset=3) + + # Invalid strides. + self.assertRaises(ValueError, c, [1,2,3], shape=[3], strides=['1']) + self.assertRaises(OverflowError, c, [1], shape=[1], + strides=[2**128]) + + # Invalid combination of strides and shape. + self.assertRaises(ValueError, c, [1,2], shape=[2,1], strides=[1]) + # Invalid combination of strides and format. + self.assertRaises(ValueError, c, [1,2,3,4], shape=[2], strides=[3], + format="L") + + # Invalid offset. + self.assertRaises(ValueError, c, [1,2,3], shape=[3], offset=4) + self.assertRaises(ValueError, c, [1,2,3], shape=[1], offset=3, + format="L") + + # Invalid format. + self.assertRaises(ValueError, c, [1,2,3], shape=[3], format="") + self.assertRaises(struct.error, c, [(1,2,3)], shape=[1], + format="@#$") + + # Striding out of the memory bounds. + items = [1,2,3,4,5,6,7,8,9,10] + self.assertRaises(ValueError, c, items, shape=[2,3], + strides=[-3, -2], offset=5) + + # Constructing consumer: format argument invalid. + self.assertRaises(TypeError, c, bytearray(), format="Q") + + # Constructing original base object: getbuf argument invalid. + self.assertRaises(TypeError, c, [1], shape=[1], getbuf=PyBUF_FULL) + + # Shape argument is mandatory for original base objects. + self.assertRaises(TypeError, c, [1]) + + + # PyBUF_WRITABLE request to read-only provider. + self.assertRaises(BufferError, ndarray, b'123', getbuf=PyBUF_WRITABLE) + + # ND_VAREXPORT can only be specified during construction. 
+ nd = ndarray([9], [1], flags=ND_VAREXPORT) + self.assertRaises(ValueError, nd.push, [1], [1], flags=ND_VAREXPORT) + + # Invalid operation for consumers: push/pop + nd = ndarray(b'123') + self.assertRaises(BufferError, nd.push, [1], [1]) + self.assertRaises(BufferError, nd.pop) + + # ND_VAREXPORT not set: push/pop fail with exported buffers + nd = ndarray([9], [1]) + nd.push([1], [1]) + m = memoryview(nd) + self.assertRaises(BufferError, nd.push, [1], [1]) + self.assertRaises(BufferError, nd.pop) + m.release() + nd.pop() + + # Single remaining buffer: pop fails + self.assertRaises(BufferError, nd.pop) + del nd + + # get_pointer() + self.assertRaises(TypeError, get_pointer, {}, [1,2,3]) + self.assertRaises(TypeError, get_pointer, b'123', {}) + + nd = ndarray(list(range(100)), shape=[1]*100) + self.assertRaises(ValueError, get_pointer, nd, [5]) + + nd = ndarray(list(range(12)), shape=[3,4]) + self.assertRaises(ValueError, get_pointer, nd, [2,3,4]) + self.assertRaises(ValueError, get_pointer, nd, [3,3]) + self.assertRaises(ValueError, get_pointer, nd, [-3,3]) + self.assertRaises(OverflowError, get_pointer, nd, [1<<64,3]) + + # tolist() needs format + ex = ndarray([1,2,3], shape=[3], format='L') + nd = ndarray(ex, getbuf=PyBUF_SIMPLE) + self.assertRaises(ValueError, nd.tolist) + + # memoryview_from_buffer() + ex1 = ndarray([1,2,3], shape=[3], format='L') + ex2 = ndarray(ex1) + nd = ndarray(ex2) + self.assertRaises(TypeError, nd.memoryview_from_buffer) + + nd = ndarray([(1,)*200], shape=[1], format='L'*200) + self.assertRaises(TypeError, nd.memoryview_from_buffer) + + n = ND_MAX_NDIM + nd = ndarray(list(range(n)), shape=[1]*n) + self.assertRaises(ValueError, nd.memoryview_from_buffer) + + # get_contiguous() + nd = ndarray([1], shape=[1]) + self.assertRaises(TypeError, get_contiguous, 1, 2, 3, 4, 5) + self.assertRaises(TypeError, get_contiguous, nd, "xyz", 'C') + self.assertRaises(OverflowError, get_contiguous, nd, 2**64, 'C') + self.assertRaises(TypeError, get_contiguous, nd, PyBUF_READ, 961) + self.assertRaises(UnicodeEncodeError, get_contiguous, nd, PyBUF_READ, + '\u2007') + + # cmp_contig() + nd = ndarray([1], shape=[1]) + self.assertRaises(TypeError, cmp_contig, 1, 2, 3, 4, 5) + self.assertRaises(TypeError, cmp_contig, {}, nd) + self.assertRaises(TypeError, cmp_contig, nd, {}) + + # is_contiguous() + nd = ndarray([1], shape=[1]) + self.assertRaises(TypeError, is_contiguous, 1, 2, 3, 4, 5) + self.assertRaises(TypeError, is_contiguous, {}, 'A') + self.assertRaises(TypeError, is_contiguous, nd, 201) + + def test_ndarray_linked_list(self): + for perm in permutations(range(5)): + m = [0]*5 + nd = ndarray([1,2,3], shape=[3], flags=ND_VAREXPORT) + m[0] = memoryview(nd) + + for i in range(1, 5): + nd.push([1,2,3], shape=[3]) + m[i] = memoryview(nd) + + for i in range(5): + m[perm[i]].release() + + self.assertRaises(BufferError, nd.pop) + del nd + + def test_ndarray_format_scalar(self): + # ndim = 0: scalar + for fmt, scalar, _ in iter_format(0): + itemsize = struct.calcsize(fmt) + nd = ndarray(scalar, shape=(), format=fmt) + self.verify(nd, obj=None, + itemsize=itemsize, fmt=fmt, readonly=1, + ndim=0, shape=(), strides=(), + lst=scalar) + + def test_ndarray_format_shape(self): + # ndim = 1, shape = [n] + nitems = randrange(1, 10) + for fmt, items, _ in iter_format(nitems): + itemsize = struct.calcsize(fmt) + for flags in (0, ND_PIL): + nd = ndarray(items, shape=[nitems], format=fmt, flags=flags) + self.verify(nd, obj=None, + itemsize=itemsize, fmt=fmt, readonly=1, + ndim=1, shape=(nitems,), 
strides=(itemsize,), + lst=items) + + def test_ndarray_format_strides(self): + # ndim = 1, strides + nitems = randrange(1, 30) + for fmt, items, _ in iter_format(nitems): + itemsize = struct.calcsize(fmt) + for step in range(-5, 5): + if step == 0: + continue + + shape = [len(items[::step])] + strides = [step*itemsize] + offset = itemsize*(nitems-1) if step < 0 else 0 + + for flags in (0, ND_PIL): + nd = ndarray(items, shape=shape, strides=strides, + format=fmt, offset=offset, flags=flags) + self.verify(nd, obj=None, + itemsize=itemsize, fmt=fmt, readonly=1, + ndim=1, shape=shape, strides=strides, + lst=items[::step]) + + def test_ndarray_fortran(self): + items = [1,2,3,4,5,6,7,8,9,10,11,12] + ex = ndarray(items, shape=(3, 4), strides=(1, 3)) + nd = ndarray(ex, getbuf=PyBUF_F_CONTIGUOUS|PyBUF_FORMAT) + self.assertEqual(nd.tolist(), farray(items, (3, 4))) + + def test_ndarray_multidim(self): + for ndim in range(5): + shape_t = [randrange(2, 10) for _ in range(ndim)] + nitems = prod(shape_t) + for shape in permutations(shape_t): + + fmt, items, _ = randitems(nitems) + itemsize = struct.calcsize(fmt) + + for flags in (0, ND_PIL): + if ndim == 0 and flags == ND_PIL: + continue + + # C array + nd = ndarray(items, shape=shape, format=fmt, flags=flags) + + strides = strides_from_shape(ndim, shape, itemsize, 'C') + lst = carray(items, shape) + self.verify(nd, obj=None, + itemsize=itemsize, fmt=fmt, readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + if is_memoryview_format(fmt): + # memoryview: reconstruct strides + ex = ndarray(items, shape=shape, format=fmt) + nd = ndarray(ex, getbuf=PyBUF_CONTIG_RO|PyBUF_FORMAT) + self.assertTrue(nd.strides == ()) + mv = nd.memoryview_from_buffer() + self.verify(mv, obj=None, + itemsize=itemsize, fmt=fmt, readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + # Fortran array + nd = ndarray(items, shape=shape, format=fmt, + flags=flags|ND_FORTRAN) + + strides = strides_from_shape(ndim, shape, itemsize, 'F') + lst = farray(items, shape) + self.verify(nd, obj=None, + itemsize=itemsize, fmt=fmt, readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + def test_ndarray_index_invalid(self): + # not writable + nd = ndarray([1], shape=[1]) + self.assertRaises(TypeError, nd.__setitem__, 1, 8) + mv = memoryview(nd) + self.assertEqual(mv, nd) + self.assertRaises(TypeError, mv.__setitem__, 1, 8) + + # cannot be deleted + nd = ndarray([1], shape=[1], flags=ND_WRITABLE) + self.assertRaises(TypeError, nd.__delitem__, 1) + mv = memoryview(nd) + self.assertEqual(mv, nd) + self.assertRaises(TypeError, mv.__delitem__, 1) + + # overflow + nd = ndarray([1], shape=[1], flags=ND_WRITABLE) + self.assertRaises(OverflowError, nd.__getitem__, 1<<64) + self.assertRaises(OverflowError, nd.__setitem__, 1<<64, 8) + mv = memoryview(nd) + self.assertEqual(mv, nd) + self.assertRaises(IndexError, mv.__getitem__, 1<<64) + self.assertRaises(IndexError, mv.__setitem__, 1<<64, 8) + + # format + items = [1,2,3,4,5,6,7,8] + nd = ndarray(items, shape=[len(items)], format="B", flags=ND_WRITABLE) + self.assertRaises(struct.error, nd.__setitem__, 2, 300) + self.assertRaises(ValueError, nd.__setitem__, 1, (100, 200)) + mv = memoryview(nd) + self.assertEqual(mv, nd) + self.assertRaises(ValueError, mv.__setitem__, 2, 300) + self.assertRaises(TypeError, mv.__setitem__, 1, (100, 200)) + + items = [(1,2), (3,4), (5,6)] + nd = ndarray(items, shape=[len(items)], format="LQ", flags=ND_WRITABLE) + self.assertRaises(ValueError, nd.__setitem__, 2, 300) + 
self.assertRaises(struct.error, nd.__setitem__, 1, (b'\x001', 200)) + + def test_ndarray_index_scalar(self): + # scalar + nd = ndarray(1, shape=(), flags=ND_WRITABLE) + mv = memoryview(nd) + self.assertEqual(mv, nd) + + x = nd[()]; self.assertEqual(x, 1) + x = nd[...]; self.assertEqual(x.tolist(), nd.tolist()) + + x = mv[()]; self.assertEqual(x, 1) + x = mv[...]; self.assertEqual(x.tolist(), nd.tolist()) + + self.assertRaises(TypeError, nd.__getitem__, 0) + self.assertRaises(TypeError, mv.__getitem__, 0) + self.assertRaises(TypeError, nd.__setitem__, 0, 8) + self.assertRaises(TypeError, mv.__setitem__, 0, 8) + + self.assertEqual(nd.tolist(), 1) + self.assertEqual(mv.tolist(), 1) + + nd[()] = 9; self.assertEqual(nd.tolist(), 9) + mv[()] = 9; self.assertEqual(mv.tolist(), 9) + + nd[...] = 5; self.assertEqual(nd.tolist(), 5) + mv[...] = 5; self.assertEqual(mv.tolist(), 5) + + def test_ndarray_index_null_strides(self): + ex = ndarray(list(range(2*4)), shape=[2, 4], flags=ND_WRITABLE) + nd = ndarray(ex, getbuf=PyBUF_CONTIG) + + # Sub-views are only possible for full exporters. + self.assertRaises(BufferError, nd.__getitem__, 1) + # Same for slices. + self.assertRaises(BufferError, nd.__getitem__, slice(3,5,1)) + + def test_ndarray_index_getitem_single(self): + # getitem + for fmt, items, _ in iter_format(5): + nd = ndarray(items, shape=[5], format=fmt) + for i in range(-5, 5): + self.assertEqual(nd[i], items[i]) + + self.assertRaises(IndexError, nd.__getitem__, -6) + self.assertRaises(IndexError, nd.__getitem__, 5) + + if is_memoryview_format(fmt): + mv = memoryview(nd) + self.assertEqual(mv, nd) + for i in range(-5, 5): + self.assertEqual(mv[i], items[i]) + + self.assertRaises(IndexError, mv.__getitem__, -6) + self.assertRaises(IndexError, mv.__getitem__, 5) + + # getitem with null strides + for fmt, items, _ in iter_format(5): + ex = ndarray(items, shape=[5], flags=ND_WRITABLE, format=fmt) + nd = ndarray(ex, getbuf=PyBUF_CONTIG|PyBUF_FORMAT) + + for i in range(-5, 5): + self.assertEqual(nd[i], items[i]) + + if is_memoryview_format(fmt): + mv = nd.memoryview_from_buffer() + self.assertIs(mv.__eq__(nd), NotImplemented) + for i in range(-5, 5): + self.assertEqual(mv[i], items[i]) + + # getitem with null format + items = [1,2,3,4,5] + ex = ndarray(items, shape=[5]) + nd = ndarray(ex, getbuf=PyBUF_CONTIG_RO) + for i in range(-5, 5): + self.assertEqual(nd[i], items[i]) + + # getitem with null shape/strides/format + items = [1,2,3,4,5] + ex = ndarray(items, shape=[5]) + nd = ndarray(ex, getbuf=PyBUF_SIMPLE) + + for i in range(-5, 5): + self.assertEqual(nd[i], items[i]) + + def test_ndarray_index_setitem_single(self): + # assign single value + for fmt, items, single_item in iter_format(5): + nd = ndarray(items, shape=[5], format=fmt, flags=ND_WRITABLE) + for i in range(5): + items[i] = single_item + nd[i] = single_item + self.assertEqual(nd.tolist(), items) + + self.assertRaises(IndexError, nd.__setitem__, -6, single_item) + self.assertRaises(IndexError, nd.__setitem__, 5, single_item) + + if not is_memoryview_format(fmt): + continue + + nd = ndarray(items, shape=[5], format=fmt, flags=ND_WRITABLE) + mv = memoryview(nd) + self.assertEqual(mv, nd) + for i in range(5): + items[i] = single_item + mv[i] = single_item + self.assertEqual(mv.tolist(), items) + + self.assertRaises(IndexError, mv.__setitem__, -6, single_item) + self.assertRaises(IndexError, mv.__setitem__, 5, single_item) + + + # assign single value: lobject = robject + for fmt, items, single_item in iter_format(5): + nd = ndarray(items, 
shape=[5], format=fmt, flags=ND_WRITABLE) + for i in range(-5, 4): + items[i] = items[i+1] + nd[i] = nd[i+1] + self.assertEqual(nd.tolist(), items) + + if not is_memoryview_format(fmt): + continue + + nd = ndarray(items, shape=[5], format=fmt, flags=ND_WRITABLE) + mv = memoryview(nd) + self.assertEqual(mv, nd) + for i in range(-5, 4): + items[i] = items[i+1] + mv[i] = mv[i+1] + self.assertEqual(mv.tolist(), items) + + def test_ndarray_index_getitem_multidim(self): + shape_t = (2, 3, 5) + nitems = prod(shape_t) + for shape in permutations(shape_t): + + fmt, items, _ = randitems(nitems) + + for flags in (0, ND_PIL): + # C array + nd = ndarray(items, shape=shape, format=fmt, flags=flags) + lst = carray(items, shape) + + for i in range(-shape[0], shape[0]): + self.assertEqual(lst[i], nd[i].tolist()) + for j in range(-shape[1], shape[1]): + self.assertEqual(lst[i][j], nd[i][j].tolist()) + for k in range(-shape[2], shape[2]): + self.assertEqual(lst[i][j][k], nd[i][j][k]) + + # Fortran array + nd = ndarray(items, shape=shape, format=fmt, + flags=flags|ND_FORTRAN) + lst = farray(items, shape) + + for i in range(-shape[0], shape[0]): + self.assertEqual(lst[i], nd[i].tolist()) + for j in range(-shape[1], shape[1]): + self.assertEqual(lst[i][j], nd[i][j].tolist()) + for k in range(shape[2], shape[2]): + self.assertEqual(lst[i][j][k], nd[i][j][k]) + + def test_ndarray_sequence(self): + nd = ndarray(1, shape=()) + self.assertRaises(TypeError, eval, "1 in nd", locals()) + mv = memoryview(nd) + self.assertEqual(mv, nd) + self.assertRaises(TypeError, eval, "1 in mv", locals()) + + for fmt, items, _ in iter_format(5): + nd = ndarray(items, shape=[5], format=fmt) + for i, v in enumerate(nd): + self.assertEqual(v, items[i]) + self.assertTrue(v in nd) + + if is_memoryview_format(fmt): + mv = memoryview(nd) + for i, v in enumerate(mv): + self.assertEqual(v, items[i]) + self.assertTrue(v in mv) + + def test_ndarray_slice_invalid(self): + items = [1,2,3,4,5,6,7,8] + + # rvalue is not an exporter + xl = ndarray(items, shape=[8], flags=ND_WRITABLE) + ml = memoryview(xl) + self.assertRaises(TypeError, xl.__setitem__, slice(0,8,1), items) + self.assertRaises(TypeError, ml.__setitem__, slice(0,8,1), items) + + # rvalue is not a full exporter + xl = ndarray(items, shape=[8], flags=ND_WRITABLE) + ex = ndarray(items, shape=[8], flags=ND_WRITABLE) + xr = ndarray(ex, getbuf=PyBUF_ND) + self.assertRaises(BufferError, xl.__setitem__, slice(0,8,1), xr) + + # zero step + nd = ndarray(items, shape=[8], format="L", flags=ND_WRITABLE) + mv = memoryview(nd) + self.assertRaises(ValueError, nd.__getitem__, slice(0,1,0)) + self.assertRaises(ValueError, mv.__getitem__, slice(0,1,0)) + + nd = ndarray(items, shape=[2,4], format="L", flags=ND_WRITABLE) + mv = memoryview(nd) + + self.assertRaises(ValueError, nd.__getitem__, + (slice(0,1,1), slice(0,1,0))) + self.assertRaises(ValueError, nd.__getitem__, + (slice(0,1,0), slice(0,1,1))) + self.assertRaises(TypeError, nd.__getitem__, "@%$") + self.assertRaises(TypeError, nd.__getitem__, ("@%$", slice(0,1,1))) + self.assertRaises(TypeError, nd.__getitem__, (slice(0,1,1), {})) + + # memoryview: not implemented + self.assertRaises(NotImplementedError, mv.__getitem__, + (slice(0,1,1), slice(0,1,0))) + self.assertRaises(TypeError, mv.__getitem__, "@%$") + + # differing format + xl = ndarray(items, shape=[8], format="B", flags=ND_WRITABLE) + xr = ndarray(items, shape=[8], format="b") + ml = memoryview(xl) + mr = memoryview(xr) + self.assertRaises(ValueError, xl.__setitem__, slice(0,1,1), xr[7:8]) 
+ self.assertEqual(xl.tolist(), items) + self.assertRaises(ValueError, ml.__setitem__, slice(0,1,1), mr[7:8]) + self.assertEqual(ml.tolist(), items) + + # differing itemsize + xl = ndarray(items, shape=[8], format="B", flags=ND_WRITABLE) + yr = ndarray(items, shape=[8], format="L") + ml = memoryview(xl) + mr = memoryview(xr) + self.assertRaises(ValueError, xl.__setitem__, slice(0,1,1), xr[7:8]) + self.assertEqual(xl.tolist(), items) + self.assertRaises(ValueError, ml.__setitem__, slice(0,1,1), mr[7:8]) + self.assertEqual(ml.tolist(), items) + + # differing ndim + xl = ndarray(items, shape=[2, 4], format="b", flags=ND_WRITABLE) + xr = ndarray(items, shape=[8], format="b") + ml = memoryview(xl) + mr = memoryview(xr) + self.assertRaises(ValueError, xl.__setitem__, slice(0,1,1), xr[7:8]) + self.assertEqual(xl.tolist(), [[1,2,3,4], [5,6,7,8]]) + self.assertRaises(NotImplementedError, ml.__setitem__, slice(0,1,1), + mr[7:8]) + + # differing shape + xl = ndarray(items, shape=[8], format="b", flags=ND_WRITABLE) + xr = ndarray(items, shape=[8], format="b") + ml = memoryview(xl) + mr = memoryview(xr) + self.assertRaises(ValueError, xl.__setitem__, slice(0,2,1), xr[7:8]) + self.assertEqual(xl.tolist(), items) + self.assertRaises(ValueError, ml.__setitem__, slice(0,2,1), mr[7:8]) + self.assertEqual(ml.tolist(), items) + + # _testbuffer.c module functions + self.assertRaises(TypeError, slice_indices, slice(0,1,2), {}) + self.assertRaises(TypeError, slice_indices, "###########", 1) + self.assertRaises(ValueError, slice_indices, slice(0,1,0), 4) + + x = ndarray(items, shape=[8], format="b", flags=ND_PIL) + self.assertRaises(TypeError, x.add_suboffsets) + + ex = ndarray(items, shape=[8], format="B") + x = ndarray(ex, getbuf=PyBUF_SIMPLE) + self.assertRaises(TypeError, x.add_suboffsets) + + def test_ndarray_slice_zero_shape(self): + items = [1,2,3,4,5,6,7,8,9,10,11,12] + + x = ndarray(items, shape=[12], format="L", flags=ND_WRITABLE) + y = ndarray(items, shape=[12], format="L") + x[4:4] = y[9:9] + self.assertEqual(x.tolist(), items) + + ml = memoryview(x) + mr = memoryview(y) + self.assertEqual(ml, x) + self.assertEqual(ml, y) + ml[4:4] = mr[9:9] + self.assertEqual(ml.tolist(), items) + + x = ndarray(items, shape=[3, 4], format="L", flags=ND_WRITABLE) + y = ndarray(items, shape=[4, 3], format="L") + x[1:2, 2:2] = y[1:2, 3:3] + self.assertEqual(x.tolist(), carray(items, [3, 4])) + + def test_ndarray_slice_multidim(self): + shape_t = (2, 3, 5) + ndim = len(shape_t) + nitems = prod(shape_t) + for shape in permutations(shape_t): + + fmt, items, _ = randitems(nitems) + itemsize = struct.calcsize(fmt) + + for flags in (0, ND_PIL): + nd = ndarray(items, shape=shape, format=fmt, flags=flags) + lst = carray(items, shape) + + for slices in rslices_ndim(ndim, shape): + + listerr = None + try: + sliced = multislice(lst, slices) + except Exception as e: + listerr = e.__class__ + + nderr = None + try: + ndsliced = nd[slices] + except Exception as e: + nderr = e.__class__ + + if nderr or listerr: + self.assertIs(nderr, listerr) + else: + self.assertEqual(ndsliced.tolist(), sliced) + + def test_ndarray_slice_redundant_suboffsets(self): + shape_t = (2, 3, 5, 2) + ndim = len(shape_t) + nitems = prod(shape_t) + for shape in permutations(shape_t): + + fmt, items, _ = randitems(nitems) + itemsize = struct.calcsize(fmt) + + nd = ndarray(items, shape=shape, format=fmt) + nd.add_suboffsets() + ex = ndarray(items, shape=shape, format=fmt) + ex.add_suboffsets() + mv = memoryview(ex) + lst = carray(items, shape) + + for slices in 
rslices_ndim(ndim, shape): + + listerr = None + try: + sliced = multislice(lst, slices) + except Exception as e: + listerr = e.__class__ + + nderr = None + try: + ndsliced = nd[slices] + except Exception as e: + nderr = e.__class__ + + if nderr or listerr: + self.assertIs(nderr, listerr) + else: + self.assertEqual(ndsliced.tolist(), sliced) + + def test_ndarray_slice_assign_single(self): + for fmt, items, _ in iter_format(5): + for lslice in genslices(5): + for rslice in genslices(5): + for flags in (0, ND_PIL): + + f = flags|ND_WRITABLE + nd = ndarray(items, shape=[5], format=fmt, flags=f) + ex = ndarray(items, shape=[5], format=fmt, flags=f) + mv = memoryview(ex) + + lsterr = None + diff_structure = None + lst = items[:] + try: + lval = lst[lslice] + rval = lst[rslice] + lst[lslice] = lst[rslice] + diff_structure = len(lval) != len(rval) + except Exception as e: + lsterr = e.__class__ + + nderr = None + try: + nd[lslice] = nd[rslice] + except Exception as e: + nderr = e.__class__ + + if diff_structure: # ndarray cannot change shape + self.assertIs(nderr, ValueError) + else: + self.assertEqual(nd.tolist(), lst) + self.assertIs(nderr, lsterr) + + if not is_memoryview_format(fmt): + continue + + mverr = None + try: + mv[lslice] = mv[rslice] + except Exception as e: + mverr = e.__class__ + + if diff_structure: # memoryview cannot change shape + self.assertIs(mverr, ValueError) + else: + self.assertEqual(mv.tolist(), lst) + self.assertEqual(mv, nd) + self.assertIs(mverr, lsterr) + self.verify(mv, obj=ex, + itemsize=nd.itemsize, fmt=fmt, readonly=0, + ndim=nd.ndim, shape=nd.shape, strides=nd.strides, + lst=nd.tolist()) + + def test_ndarray_slice_assign_multidim(self): + shape_t = (2, 3, 5) + ndim = len(shape_t) + nitems = prod(shape_t) + for shape in permutations(shape_t): + + fmt, items, _ = randitems(nitems) + + for flags in (0, ND_PIL): + for _ in range(ITERATIONS): + lslices, rslices = randslice_from_shape(ndim, shape) + + nd = ndarray(items, shape=shape, format=fmt, + flags=flags|ND_WRITABLE) + lst = carray(items, shape) + + listerr = None + try: + result = multislice_assign(lst, lst, lslices, rslices) + except Exception as e: + listerr = e.__class__ + + nderr = None + try: + nd[lslices] = nd[rslices] + except Exception as e: + nderr = e.__class__ + + if nderr or listerr: + self.assertIs(nderr, listerr) + else: + self.assertEqual(nd.tolist(), result) + + def test_ndarray_random(self): + # construction of valid arrays + for _ in range(ITERATIONS): + for fmt in fmtdict['@']: + itemsize = struct.calcsize(fmt) + + t = rand_structure(itemsize, True, maxdim=MAXDIM, + maxshape=MAXSHAPE) + self.assertTrue(verify_structure(*t)) + items = randitems_from_structure(fmt, t) + + x = ndarray_from_structure(items, fmt, t) + xlist = x.tolist() + + mv = memoryview(x) + if is_memoryview_format(fmt): + mvlist = mv.tolist() + self.assertEqual(mvlist, xlist) + + if t[2] > 0: + # ndim > 0: test against suboffsets representation. 
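+ # ND_PIL stores the same logical array behind pointer/suboffset
+ # indirection, so its tolist() must match the strided version.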
+ y = ndarray_from_structure(items, fmt, t, flags=ND_PIL) + ylist = y.tolist() + self.assertEqual(xlist, ylist) + + mv = memoryview(y) + if is_memoryview_format(fmt): + self.assertEqual(mv, y) + mvlist = mv.tolist() + self.assertEqual(mvlist, ylist) + + if numpy_array: + shape = t[3] + if 0 in shape: + continue # http://projects.scipy.org/numpy/ticket/1910 + z = numpy_array_from_structure(items, fmt, t) + self.verify(x, obj=None, + itemsize=z.itemsize, fmt=fmt, readonly=0, + ndim=z.ndim, shape=z.shape, strides=z.strides, + lst=z.tolist()) + + def test_ndarray_random_invalid(self): + # exceptions during construction of invalid arrays + for _ in range(ITERATIONS): + for fmt in fmtdict['@']: + itemsize = struct.calcsize(fmt) + + t = rand_structure(itemsize, False, maxdim=MAXDIM, + maxshape=MAXSHAPE) + self.assertFalse(verify_structure(*t)) + items = randitems_from_structure(fmt, t) + + nderr = False + try: + x = ndarray_from_structure(items, fmt, t) + except Exception as e: + nderr = e.__class__ + self.assertTrue(nderr) + + if numpy_array: + numpy_err = False + try: + y = numpy_array_from_structure(items, fmt, t) + except Exception as e: + numpy_err = e.__class__ + + if 0: # http://projects.scipy.org/numpy/ticket/1910 + self.assertTrue(numpy_err) + + def test_ndarray_random_slice_assign(self): + # valid slice assignments + for _ in range(ITERATIONS): + for fmt in fmtdict['@']: + itemsize = struct.calcsize(fmt) + + lshape, rshape, lslices, rslices = \ + rand_aligned_slices(maxdim=MAXDIM, maxshape=MAXSHAPE) + tl = rand_structure(itemsize, True, shape=lshape) + tr = rand_structure(itemsize, True, shape=rshape) + self.assertTrue(verify_structure(*tl)) + self.assertTrue(verify_structure(*tr)) + litems = randitems_from_structure(fmt, tl) + ritems = randitems_from_structure(fmt, tr) + + xl = ndarray_from_structure(litems, fmt, tl) + xr = ndarray_from_structure(ritems, fmt, tr) + xl[lslices] = xr[rslices] + xllist = xl.tolist() + xrlist = xr.tolist() + + ml = memoryview(xl) + mr = memoryview(xr) + self.assertEqual(ml.tolist(), xllist) + self.assertEqual(mr.tolist(), xrlist) + + if tl[2] > 0 and tr[2] > 0: + # ndim > 0: test against suboffsets representation. + yl = ndarray_from_structure(litems, fmt, tl, flags=ND_PIL) + yr = ndarray_from_structure(ritems, fmt, tr, flags=ND_PIL) + yl[lslices] = yr[rslices] + yllist = yl.tolist() + yrlist = yr.tolist() + self.assertEqual(xllist, yllist) + self.assertEqual(xrlist, yrlist) + + ml = memoryview(yl) + mr = memoryview(yr) + self.assertEqual(ml.tolist(), yllist) + self.assertEqual(mr.tolist(), yrlist) + + if numpy_array: + if 0 in lshape or 0 in rshape: + continue # http://projects.scipy.org/numpy/ticket/1910 + + zl = numpy_array_from_structure(litems, fmt, tl) + zr = numpy_array_from_structure(ritems, fmt, tr) + zl[lslices] = zr[rslices] + + if not is_overlapping(tl) and not is_overlapping(tr): + # Slice assignment of overlapping structures + # is undefined in NumPy. 
+ self.verify(xl, obj=None, + itemsize=zl.itemsize, fmt=fmt, readonly=0, + ndim=zl.ndim, shape=zl.shape, + strides=zl.strides, lst=zl.tolist()) + + self.verify(xr, obj=None, + itemsize=zr.itemsize, fmt=fmt, readonly=0, + ndim=zr.ndim, shape=zr.shape, + strides=zr.strides, lst=zr.tolist()) + + def test_ndarray_re_export(self): + items = [1,2,3,4,5,6,7,8,9,10,11,12] + + nd = ndarray(items, shape=[3,4], flags=ND_PIL) + ex = ndarray(nd) + + self.assertTrue(ex.flags & ND_PIL) + self.assertIs(ex.obj, nd) + self.assertEqual(ex.suboffsets, (0, -1)) + self.assertFalse(ex.c_contiguous) + self.assertFalse(ex.f_contiguous) + self.assertFalse(ex.contiguous) + + def test_ndarray_zero_shape(self): + # zeros in shape + for flags in (0, ND_PIL): + nd = ndarray([1,2,3], shape=[0], flags=flags) + mv = memoryview(nd) + self.assertEqual(mv, nd) + self.assertEqual(nd.tolist(), []) + self.assertEqual(mv.tolist(), []) + + nd = ndarray([1,2,3], shape=[0,3,3], flags=flags) + self.assertEqual(nd.tolist(), []) + + nd = ndarray([1,2,3], shape=[3,0,3], flags=flags) + self.assertEqual(nd.tolist(), [[], [], []]) + + nd = ndarray([1,2,3], shape=[3,3,0], flags=flags) + self.assertEqual(nd.tolist(), + [[[], [], []], [[], [], []], [[], [], []]]) + + def test_ndarray_zero_strides(self): + # zero strides + for flags in (0, ND_PIL): + nd = ndarray([1], shape=[5], strides=[0], flags=flags) + mv = memoryview(nd) + self.assertEqual(mv, nd) + self.assertEqual(nd.tolist(), [1, 1, 1, 1, 1]) + self.assertEqual(mv.tolist(), [1, 1, 1, 1, 1]) + + def test_ndarray_offset(self): + nd = ndarray(list(range(20)), shape=[3], offset=7) + self.assertEqual(nd.offset, 7) + self.assertEqual(nd.tolist(), [7,8,9]) + + def test_ndarray_memoryview_from_buffer(self): + for flags in (0, ND_PIL): + nd = ndarray(list(range(3)), shape=[3], flags=flags) + m = nd.memoryview_from_buffer() + self.assertEqual(m, nd) + + def test_ndarray_get_pointer(self): + for flags in (0, ND_PIL): + nd = ndarray(list(range(3)), shape=[3], flags=flags) + for i in range(3): + self.assertEqual(nd[i], get_pointer(nd, [i])) + + def test_ndarray_tolist_null_strides(self): + ex = ndarray(list(range(20)), shape=[2,2,5]) + + nd = ndarray(ex, getbuf=PyBUF_ND|PyBUF_FORMAT) + self.assertEqual(nd.tolist(), ex.tolist()) + + m = memoryview(ex) + self.assertEqual(m.tolist(), ex.tolist()) + + def test_ndarray_cmp_contig(self): + + self.assertFalse(cmp_contig(b"123", b"456")) + + x = ndarray(list(range(12)), shape=[3,4]) + y = ndarray(list(range(12)), shape=[4,3]) + self.assertFalse(cmp_contig(x, y)) + + x = ndarray([1], shape=[1], format="B") + self.assertTrue(cmp_contig(x, b'\x01')) + self.assertTrue(cmp_contig(b'\x01', x)) + + def test_ndarray_hash(self): + + a = array.array('L', [1,2,3]) + nd = ndarray(a) + self.assertRaises(ValueError, hash, nd) + + # one-dimensional + b = bytes(list(range(12))) + + nd = ndarray(list(range(12)), shape=[12]) + self.assertEqual(hash(nd), hash(b)) + + # C-contiguous + nd = ndarray(list(range(12)), shape=[3,4]) + self.assertEqual(hash(nd), hash(b)) + + nd = ndarray(list(range(12)), shape=[3,2,2]) + self.assertEqual(hash(nd), hash(b)) + + # Fortran contiguous + b = bytes(transpose(list(range(12)), shape=[4,3])) + nd = ndarray(list(range(12)), shape=[3,4], flags=ND_FORTRAN) + self.assertEqual(hash(nd), hash(b)) + + b = bytes(transpose(list(range(12)), shape=[2,3,2])) + nd = ndarray(list(range(12)), shape=[2,3,2], flags=ND_FORTRAN) + self.assertEqual(hash(nd), hash(b)) + + # suboffsets + b = bytes(list(range(12))) + nd = ndarray(list(range(12)), shape=[2,2,3], 
flags=ND_PIL) + self.assertEqual(hash(nd), hash(b)) + + # non-byte formats + nd = ndarray(list(range(12)), shape=[2,2,3], format='L') + self.assertEqual(hash(nd), hash(nd.tobytes())) + + def test_memoryview_construction(self): + + items_shape = [(9, []), ([1,2,3], [3]), (list(range(2*3*5)), [2,3,5])] + + # NumPy style, C-contiguous: + for items, shape in items_shape: + + # From PEP-3118 compliant exporter: + ex = ndarray(items, shape=shape) + m = memoryview(ex) + self.assertTrue(m.c_contiguous) + self.assertTrue(m.contiguous) + + ndim = len(shape) + strides = strides_from_shape(ndim, shape, 1, 'C') + lst = carray(items, shape) + + self.verify(m, obj=ex, + itemsize=1, fmt='B', readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + # From memoryview: + m2 = memoryview(m) + self.verify(m2, obj=ex, + itemsize=1, fmt='B', readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + # PyMemoryView_FromBuffer(): no strides + nd = ndarray(ex, getbuf=PyBUF_CONTIG_RO|PyBUF_FORMAT) + self.assertEqual(nd.strides, ()) + m = nd.memoryview_from_buffer() + self.verify(m, obj=None, + itemsize=1, fmt='B', readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + # PyMemoryView_FromBuffer(): no format, shape, strides + nd = ndarray(ex, getbuf=PyBUF_SIMPLE) + self.assertEqual(nd.format, '') + self.assertEqual(nd.shape, ()) + self.assertEqual(nd.strides, ()) + m = nd.memoryview_from_buffer() + + lst = [items] if ndim == 0 else items + self.verify(m, obj=None, + itemsize=1, fmt='B', readonly=1, + ndim=1, shape=[ex.nbytes], strides=(1,), + lst=lst) + + # NumPy style, Fortran contiguous: + for items, shape in items_shape: + + # From PEP-3118 compliant exporter: + ex = ndarray(items, shape=shape, flags=ND_FORTRAN) + m = memoryview(ex) + self.assertTrue(m.f_contiguous) + self.assertTrue(m.contiguous) + + ndim = len(shape) + strides = strides_from_shape(ndim, shape, 1, 'F') + lst = farray(items, shape) + + self.verify(m, obj=ex, + itemsize=1, fmt='B', readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + # From memoryview: + m2 = memoryview(m) + self.verify(m2, obj=ex, + itemsize=1, fmt='B', readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst) + + # PIL style: + for items, shape in items_shape[1:]: + + # From PEP-3118 compliant exporter: + ex = ndarray(items, shape=shape, flags=ND_PIL) + m = memoryview(ex) + + ndim = len(shape) + lst = carray(items, shape) + + self.verify(m, obj=ex, + itemsize=1, fmt='B', readonly=1, + ndim=ndim, shape=shape, strides=ex.strides, + lst=lst) + + # From memoryview: + m2 = memoryview(m) + self.verify(m2, obj=ex, + itemsize=1, fmt='B', readonly=1, + ndim=ndim, shape=shape, strides=ex.strides, + lst=lst) + + # Invalid number of arguments: + self.assertRaises(TypeError, memoryview, b'9', 'x') + # Not a buffer provider: + self.assertRaises(TypeError, memoryview, {}) + # Non-compliant buffer provider: + ex = ndarray([1,2,3], shape=[3]) + nd = ndarray(ex, getbuf=PyBUF_SIMPLE) + self.assertRaises(BufferError, memoryview, nd) + nd = ndarray(ex, getbuf=PyBUF_CONTIG_RO|PyBUF_FORMAT) + self.assertRaises(BufferError, memoryview, nd) + + # ndim > 64 + nd = ndarray([1]*128, shape=[1]*128, format='L') + self.assertRaises(ValueError, memoryview, nd) + self.assertRaises(ValueError, nd.memoryview_from_buffer) + self.assertRaises(ValueError, get_contiguous, nd, PyBUF_READ, 'C') + self.assertRaises(ValueError, get_contiguous, nd, PyBUF_READ, 'F') + self.assertRaises(ValueError, get_contiguous, nd[::-1], PyBUF_READ, 'C') + + def 
test_memoryview_cast_zero_shape(self): + # Casts are undefined if shape contains zeros. These arrays are + # regarded as C-contiguous by Numpy and PyBuffer_GetContiguous(), + # so they are not caught by the test for C-contiguity in memory_cast(). + items = [1,2,3] + for shape in ([0,3,3], [3,0,3], [0,3,3]): + ex = ndarray(items, shape=shape) + self.assertTrue(ex.c_contiguous) + msrc = memoryview(ex) + self.assertRaises(TypeError, msrc.cast, 'c') + + def test_memoryview_struct_module(self): + + class INT(object): + def __init__(self, val): + self.val = val + def __int__(self): + return self.val + + class IDX(object): + def __init__(self, val): + self.val = val + def __index__(self): + return self.val + + def f(): return 7 + + values = [INT(9), IDX(9), + 2.2+3j, Decimal("-21.1"), 12.2, Fraction(5, 2), + [1,2,3], {4,5,6}, {7:8}, (), (9,), + True, False, None, NotImplemented, + b'a', b'abc', bytearray(b'a'), bytearray(b'abc'), + 'a', 'abc', r'a', r'abc', + f, lambda x: x] + + for fmt, items, item in iter_format(10, 'memoryview'): + ex = ndarray(items, shape=[10], format=fmt, flags=ND_WRITABLE) + nd = ndarray(items, shape=[10], format=fmt, flags=ND_WRITABLE) + m = memoryview(ex) + + struct.pack_into(fmt, nd, 0, item) + m[0] = item + self.assertEqual(m[0], nd[0]) + + itemsize = struct.calcsize(fmt) + if 'P' in fmt: + continue + + for v in values: + struct_err = None + try: + struct.pack_into(fmt, nd, itemsize, v) + except struct.error: + struct_err = struct.error + + mv_err = None + try: + m[1] = v + except (TypeError, ValueError) as e: + mv_err = e.__class__ + + if struct_err or mv_err: + self.assertIsNot(struct_err, None) + self.assertIsNot(mv_err, None) + else: + self.assertEqual(m[1], nd[1]) + + def test_memoryview_cast_zero_strides(self): + # Casts are undefined if strides contains zeros. These arrays are + # (sometimes!) regarded as C-contiguous by Numpy, but not by + # PyBuffer_GetContiguous(). 
+ ex = ndarray([1,2,3], shape=[3], strides=[0]) + self.assertFalse(ex.c_contiguous) + msrc = memoryview(ex) + self.assertRaises(TypeError, msrc.cast, 'c') + + def test_memoryview_cast_invalid(self): + # invalid format + for sfmt in NON_BYTE_FORMAT: + sformat = '@' + sfmt if randrange(2) else sfmt + ssize = struct.calcsize(sformat) + for dfmt in NON_BYTE_FORMAT: + dformat = '@' + dfmt if randrange(2) else dfmt + dsize = struct.calcsize(dformat) + ex = ndarray(list(range(32)), shape=[32//ssize], format=sformat) + msrc = memoryview(ex) + self.assertRaises(TypeError, msrc.cast, dfmt, [32//dsize]) + + for sfmt, sitems, _ in iter_format(1): + ex = ndarray(sitems, shape=[1], format=sfmt) + msrc = memoryview(ex) + for dfmt, _, _ in iter_format(1): + if (not is_memoryview_format(sfmt) or + not is_memoryview_format(dfmt)): + self.assertRaises(ValueError, msrc.cast, dfmt, + [32//dsize]) + else: + if not is_byte_format(sfmt) and not is_byte_format(dfmt): + self.assertRaises(TypeError, msrc.cast, dfmt, + [32//dsize]) + + # invalid shape + size_h = struct.calcsize('h') + size_d = struct.calcsize('d') + ex = ndarray(list(range(2*2*size_d)), shape=[2,2,size_d], format='h') + msrc = memoryview(ex) + self.assertRaises(TypeError, msrc.cast, shape=[2,2,size_h], format='d') + + ex = ndarray(list(range(120)), shape=[1,2,3,4,5]) + m = memoryview(ex) + + # incorrect number of args + self.assertRaises(TypeError, m.cast) + self.assertRaises(TypeError, m.cast, 1, 2, 3) + + # incorrect dest format type + self.assertRaises(TypeError, m.cast, {}) + + # incorrect dest format + self.assertRaises(ValueError, m.cast, "X") + self.assertRaises(ValueError, m.cast, "@X") + self.assertRaises(ValueError, m.cast, "@XY") + + # dest format not implemented + self.assertRaises(ValueError, m.cast, "=B") + self.assertRaises(ValueError, m.cast, "!L") + self.assertRaises(ValueError, m.cast, "<P") + self.assertRaises(ValueError, m.cast, ">l") + self.assertRaises(ValueError, m.cast, "BI") + self.assertRaises(ValueError, m.cast, "xBI") + + # src format not implemented + ex = ndarray([(1,2), (3,4)], shape=[2], format="II") + m = memoryview(ex) + self.assertRaises(NotImplementedError, m.__getitem__, 0) + self.assertRaises(NotImplementedError, m.__setitem__, 0, 8) + self.assertRaises(NotImplementedError, m.tolist) + + # incorrect shape type + ex = ndarray(list(range(120)), shape=[1,2,3,4,5]) + m = memoryview(ex) + self.assertRaises(TypeError, m.cast, "B", shape={}) + + # incorrect shape elements + ex = ndarray(list(range(120)), shape=[2*3*4*5]) + m = memoryview(ex) + self.assertRaises(OverflowError, m.cast, "B", shape=[2**64]) + self.assertRaises(ValueError, m.cast, "B", shape=[-1]) + self.assertRaises(ValueError, m.cast, "B", shape=[2,3,4,5,6,7,-1]) + self.assertRaises(ValueError, m.cast, "B", shape=[2,3,4,5,6,7,0]) + self.assertRaises(TypeError, m.cast, "B", shape=[2,3,4,5,6,7,'x']) + + # N-D -> N-D cast + ex = ndarray(list([9 for _ in range(3*5*7*11)]), shape=[3,5,7,11]) + m = memoryview(ex) + self.assertRaises(TypeError, m.cast, "I", shape=[2,3,4,5]) + + # cast with ndim > 64 + nd = ndarray(list(range(128)), shape=[128], format='I') + m = memoryview(nd) + self.assertRaises(ValueError, m.cast, 'I', [1]*128) + + # view->len not a multiple of itemsize + ex = ndarray(list([9 for _ in range(3*5*7*11)]), shape=[3*5*7*11]) + m = memoryview(ex) + self.assertRaises(TypeError, m.cast, "I", shape=[2,3,4,5]) + + # product(shape) * itemsize != buffer size + ex = ndarray(list([9 for _ in range(3*5*7*11)]), shape=[3*5*7*11]) + m = memoryview(ex) + 
self.assertRaises(TypeError, m.cast, "B", shape=[2,3,4,5]) + + # product(shape) * itemsize overflow + nd = ndarray(list(range(128)), shape=[128], format='I') + m1 = memoryview(nd) + nd = ndarray(list(range(128)), shape=[128], format='B') + m2 = memoryview(nd) + if sys.maxsize == 2**63-1: + self.assertRaises(TypeError, m1.cast, 'B', + [7, 7, 73, 127, 337, 92737, 649657]) + self.assertRaises(ValueError, m1.cast, 'B', + [2**20, 2**20, 2**10, 2**10, 2**3]) + self.assertRaises(ValueError, m2.cast, 'I', + [2**20, 2**20, 2**10, 2**10, 2**1]) + else: + self.assertRaises(TypeError, m1.cast, 'B', + [1, 2147483647]) + self.assertRaises(ValueError, m1.cast, 'B', + [2**10, 2**10, 2**5, 2**5, 2**1]) + self.assertRaises(ValueError, m2.cast, 'I', + [2**10, 2**10, 2**5, 2**3, 2**1]) + + def test_memoryview_cast(self): + bytespec = ( + ('B', lambda ex: list(ex.tobytes())), + ('b', lambda ex: [x-256 if x > 127 else x for x in list(ex.tobytes())]), + ('c', lambda ex: [bytes(chr(x), 'latin-1') for x in list(ex.tobytes())]), + ) + + def iter_roundtrip(ex, m, items, fmt): + srcsize = struct.calcsize(fmt) + for bytefmt, to_bytelist in bytespec: + + m2 = m.cast(bytefmt) + lst = to_bytelist(ex) + self.verify(m2, obj=ex, + itemsize=1, fmt=bytefmt, readonly=0, + ndim=1, shape=[31*srcsize], strides=(1,), + lst=lst, cast=True) + + m3 = m2.cast(fmt) + self.assertEqual(m3, ex) + lst = ex.tolist() + self.verify(m3, obj=ex, + itemsize=srcsize, fmt=fmt, readonly=0, + ndim=1, shape=[31], strides=(srcsize,), + lst=lst, cast=True) + + # cast from ndim = 0 to ndim = 1 + srcsize = struct.calcsize('I') + ex = ndarray(9, shape=[], format='I') + destitems, destshape = cast_items(ex, 'B', 1) + m = memoryview(ex) + m2 = m.cast('B') + self.verify(m2, obj=ex, + itemsize=1, fmt='B', readonly=1, + ndim=1, shape=destshape, strides=(1,), + lst=destitems, cast=True) + + # cast from ndim = 1 to ndim = 0 + destsize = struct.calcsize('I') + ex = ndarray([9]*destsize, shape=[destsize], format='B') + destitems, destshape = cast_items(ex, 'I', destsize, shape=[]) + m = memoryview(ex) + m2 = m.cast('I', shape=[]) + self.verify(m2, obj=ex, + itemsize=destsize, fmt='I', readonly=1, + ndim=0, shape=(), strides=(), + lst=destitems, cast=True) + + # array.array: roundtrip to/from bytes + for fmt, items, _ in iter_format(31, 'array'): + ex = array.array(fmt, items) + m = memoryview(ex) + iter_roundtrip(ex, m, items, fmt) + + # ndarray: roundtrip to/from bytes + for fmt, items, _ in iter_format(31, 'memoryview'): + ex = ndarray(items, shape=[31], format=fmt, flags=ND_WRITABLE) + m = memoryview(ex) + iter_roundtrip(ex, m, items, fmt) + + def test_memoryview_cast_1D_ND(self): + # Cast between C-contiguous buffers. At least one buffer must + # be 1D, at least one format must be 'c', 'b' or 'B'. + for _tshape in gencastshapes(): + for char in fmtdict['@']: + tfmt = ('', '@')[randrange(2)] + char + tsize = struct.calcsize(tfmt) + n = prod(_tshape) * tsize + obj = 'memoryview' if is_byte_format(tfmt) else 'bytefmt' + for fmt, items, _ in iter_format(n, obj): + size = struct.calcsize(fmt) + shape = [n] if n > 0 else [] + tshape = _tshape + [size] + + ex = ndarray(items, shape=shape, format=fmt) + m = memoryview(ex) + + titems, tshape = cast_items(ex, tfmt, tsize, shape=tshape) + + if titems is None: + self.assertRaises(TypeError, m.cast, tfmt, tshape) + continue + if titems == 'nan': + continue # NaNs in lists are a recipe for trouble. 
+ + # 1D -> ND + nd = ndarray(titems, shape=tshape, format=tfmt) + + m2 = m.cast(tfmt, shape=tshape) + ndim = len(tshape) + strides = nd.strides + lst = nd.tolist() + self.verify(m2, obj=ex, + itemsize=tsize, fmt=tfmt, readonly=1, + ndim=ndim, shape=tshape, strides=strides, + lst=lst, cast=True) + + # ND -> 1D + m3 = m2.cast(fmt) + m4 = m2.cast(fmt, shape=shape) + ndim = len(shape) + strides = ex.strides + lst = ex.tolist() + + self.verify(m3, obj=ex, + itemsize=size, fmt=fmt, readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst, cast=True) + + self.verify(m4, obj=ex, + itemsize=size, fmt=fmt, readonly=1, + ndim=ndim, shape=shape, strides=strides, + lst=lst, cast=True) + + def test_memoryview_tolist(self): + + # Most tolist() tests are in self.verify() etc. + + a = array.array('h', list(range(-6, 6))) + m = memoryview(a) + self.assertEqual(m, a) + self.assertEqual(m.tolist(), a.tolist()) + + a = a[2::3] + m = m[2::3] + self.assertEqual(m, a) + self.assertEqual(m.tolist(), a.tolist()) + + ex = ndarray(list(range(2*3*5*7*11)), shape=[11,2,7,3,5], format='L') + m = memoryview(ex) + self.assertEqual(m.tolist(), ex.tolist()) + + ex = ndarray([(2, 5), (7, 11)], shape=[2], format='lh') + m = memoryview(ex) + self.assertRaises(NotImplementedError, m.tolist) + + ex = ndarray([b'12345'], shape=[1], format="s") + m = memoryview(ex) + self.assertRaises(NotImplementedError, m.tolist) + + ex = ndarray([b"a",b"b",b"c",b"d",b"e",b"f"], shape=[2,3], format='s') + m = memoryview(ex) + self.assertRaises(NotImplementedError, m.tolist) + + def test_memoryview_repr(self): + m = memoryview(bytearray(9)) + r = m.__repr__() + self.assertTrue(r.startswith("<memory")) + + m.release() + r = m.__repr__() + self.assertTrue(r.startswith("<released")) + + def test_memoryview_sequence(self): + + for fmt in ('d', 'f'): + inf = float(3e400) + ex = array.array(fmt, [1.0, inf, 3.0]) + m = memoryview(ex) + self.assertIn(1.0, m) + self.assertIn(5e700, m) + self.assertIn(3.0, m) + + ex = ndarray(9.0, [], format='f') + m = memoryview(ex) + self.assertRaises(TypeError, eval, "9.0 in m", locals()) + + def test_memoryview_index(self): + + # ndim = 0 + ex = ndarray(12.5, shape=[], format='d') + m = memoryview(ex) + self.assertEqual(m[()], 12.5) + self.assertEqual(m[...], m) + self.assertEqual(m[...], ex) + self.assertRaises(TypeError, m.__getitem__, 0) + + ex = ndarray((1,2,3), shape=[], format='iii') + m = memoryview(ex) + self.assertRaises(NotImplementedError, m.__getitem__, ()) + + # range + ex = ndarray(list(range(7)), shape=[7], flags=ND_WRITABLE) + m = memoryview(ex) + + self.assertRaises(IndexError, m.__getitem__, 2**64) + self.assertRaises(TypeError, m.__getitem__, 2.0) + self.assertRaises(TypeError, m.__getitem__, 0.0) + + # out of bounds + self.assertRaises(IndexError, m.__getitem__, -8) + self.assertRaises(IndexError, m.__getitem__, 8) + + # Not implemented: multidimensional sub-views + ex = ndarray(list(range(12)), shape=[3,4], flags=ND_WRITABLE) + m = memoryview(ex) + + self.assertRaises(NotImplementedError, m.__getitem__, 0) + self.assertRaises(NotImplementedError, m.__setitem__, 0, 9) + self.assertRaises(NotImplementedError, m.__getitem__, 0) + + def test_memoryview_assign(self): + + # ndim = 0 + ex = ndarray(12.5, shape=[], format='f', flags=ND_WRITABLE) + m = memoryview(ex) + m[()] = 22.5 + self.assertEqual(m[()], 22.5) + m[...] 
= 23.5 + self.assertEqual(m[()], 23.5) + self.assertRaises(TypeError, m.__setitem__, 0, 24.7) + + # read-only + ex = ndarray(list(range(7)), shape=[7]) + m = memoryview(ex) + self.assertRaises(TypeError, m.__setitem__, 2, 10) + + # range + ex = ndarray(list(range(7)), shape=[7], flags=ND_WRITABLE) + m = memoryview(ex) + + self.assertRaises(IndexError, m.__setitem__, 2**64, 9) + self.assertRaises(TypeError, m.__setitem__, 2.0, 10) + self.assertRaises(TypeError, m.__setitem__, 0.0, 11) + + # out of bounds + self.assertRaises(IndexError, m.__setitem__, -8, 20) + self.assertRaises(IndexError, m.__setitem__, 8, 25) + + # pack_single() success: + for fmt in fmtdict['@']: + if fmt == 'c' or fmt == '?': + continue + ex = ndarray([1,2,3], shape=[3], format=fmt, flags=ND_WRITABLE) + m = memoryview(ex) + i = randrange(-3, 3) + m[i] = 8 + self.assertEqual(m[i], 8) + self.assertEqual(m[i], ex[i]) + + ex = ndarray([b'1', b'2', b'3'], shape=[3], format='c', + flags=ND_WRITABLE) + m = memoryview(ex) + m[2] = b'9' + self.assertEqual(m[2], b'9') + + ex = ndarray([True, False, True], shape=[3], format='?', + flags=ND_WRITABLE) + m = memoryview(ex) + m[1] = True + self.assertEqual(m[1], True) + + # pack_single() exceptions: + nd = ndarray([b'x'], shape=[1], format='c', flags=ND_WRITABLE) + m = memoryview(nd) + self.assertRaises(TypeError, m.__setitem__, 0, 100) + + ex = ndarray(list(range(120)), shape=[1,2,3,4,5], flags=ND_WRITABLE) + m1 = memoryview(ex) + + for fmt, _range in fmtdict['@'].items(): + if (fmt == '?'): # PyObject_IsTrue() accepts anything + continue + if fmt == 'c': # special case tested above + continue + m2 = m1.cast(fmt) + lo, hi = _range + if fmt == 'd' or fmt == 'f': + lo, hi = -2**1024, 2**1024 + if fmt != 'P': # PyLong_AsVoidPtr() accepts negative numbers + self.assertRaises(ValueError, m2.__setitem__, 0, lo-1) + self.assertRaises(TypeError, m2.__setitem__, 0, "xyz") + self.assertRaises(ValueError, m2.__setitem__, 0, hi) + + # invalid item + m2 = m1.cast('c') + self.assertRaises(ValueError, m2.__setitem__, 0, b'\xff\xff') + + # format not implemented + ex = ndarray(list(range(1)), shape=[1], format="xL", flags=ND_WRITABLE) + m = memoryview(ex) + self.assertRaises(NotImplementedError, m.__setitem__, 0, 1) + + ex = ndarray([b'12345'], shape=[1], format="s", flags=ND_WRITABLE) + m = memoryview(ex) + self.assertRaises(NotImplementedError, m.__setitem__, 0, 1) + + # Not implemented: multidimensional sub-views + ex = ndarray(list(range(12)), shape=[3,4], flags=ND_WRITABLE) + m = memoryview(ex) + + self.assertRaises(NotImplementedError, m.__setitem__, 0, [2, 3]) + + def test_memoryview_slice(self): + + ex = ndarray(list(range(12)), shape=[12], flags=ND_WRITABLE) + m = memoryview(ex) + + # zero step + self.assertRaises(ValueError, m.__getitem__, slice(0,2,0)) + self.assertRaises(ValueError, m.__setitem__, slice(0,2,0), + bytearray([1,2])) + + # invalid slice key + self.assertRaises(TypeError, m.__getitem__, ()) + + # multidimensional slices + ex = ndarray(list(range(12)), shape=[12], flags=ND_WRITABLE) + m = memoryview(ex) + + self.assertRaises(NotImplementedError, m.__getitem__, + (slice(0,2,1), slice(0,2,1))) + self.assertRaises(NotImplementedError, m.__setitem__, + (slice(0,2,1), slice(0,2,1)), bytearray([1,2])) + + # invalid slice tuple + self.assertRaises(TypeError, m.__getitem__, (slice(0,2,1), {})) + self.assertRaises(TypeError, m.__setitem__, (slice(0,2,1), {}), + bytearray([1,2])) + + # rvalue is not an exporter + self.assertRaises(TypeError, m.__setitem__, slice(0,1,1), [1]) + + # 
non-contiguous slice assignment + for flags in (0, ND_PIL): + ex1 = ndarray(list(range(12)), shape=[12], strides=[-1], offset=11, + flags=ND_WRITABLE|flags) + ex2 = ndarray(list(range(24)), shape=[12], strides=[2], flags=flags) + m1 = memoryview(ex1) + m2 = memoryview(ex2) + + ex1[2:5] = ex1[2:5] + m1[2:5] = m2[2:5] + + self.assertEqual(m1, ex1) + self.assertEqual(m2, ex2) + + ex1[1:3][::-1] = ex2[0:2][::1] + m1[1:3][::-1] = m2[0:2][::1] + + self.assertEqual(m1, ex1) + self.assertEqual(m2, ex2) + + ex1[4:1:-2][::-1] = ex1[1:4:2][::1] + m1[4:1:-2][::-1] = m1[1:4:2][::1] + + self.assertEqual(m1, ex1) + self.assertEqual(m2, ex2) + + def test_memoryview_array(self): + + def cmptest(testcase, a, b, m, singleitem): + for i, _ in enumerate(a): + ai = a[i] + mi = m[i] + testcase.assertEqual(ai, mi) + a[i] = singleitem + if singleitem != ai: + testcase.assertNotEqual(a, m) + testcase.assertNotEqual(a, b) + else: + testcase.assertEqual(a, m) + testcase.assertEqual(a, b) + m[i] = singleitem + testcase.assertEqual(a, m) + testcase.assertEqual(b, m) + a[i] = ai + m[i] = mi + + for n in range(1, 5): + for fmt, items, singleitem in iter_format(n, 'array'): + for lslice in genslices(n): + for rslice in genslices(n): + + a = array.array(fmt, items) + b = array.array(fmt, items) + m = memoryview(b) + + self.assertEqual(m, a) + self.assertEqual(m.tolist(), a.tolist()) + self.assertEqual(m.tobytes(), a.tobytes()) + self.assertEqual(len(m), len(a)) + + cmptest(self, a, b, m, singleitem) + + array_err = None + have_resize = None + try: + al = a[lslice] + ar = a[rslice] + a[lslice] = a[rslice] + have_resize = len(al) != len(ar) + except Exception as e: + array_err = e.__class__ + + m_err = None + try: + m[lslice] = m[rslice] + except Exception as e: + m_err = e.__class__ + + if have_resize: # memoryview cannot change shape + self.assertIs(m_err, ValueError) + elif m_err or array_err: + self.assertIs(m_err, array_err) + else: + self.assertEqual(m, a) + self.assertEqual(m.tolist(), a.tolist()) + self.assertEqual(m.tobytes(), a.tobytes()) + cmptest(self, a, b, m, singleitem) + + def test_memoryview_compare(self): + + a = array.array('L', [1, 2, 3]) + b = array.array('L', [1, 2, 7]) + + # Ordering comparisons raise: + v = memoryview(a) + w = memoryview(b) + for attr in ('__lt__', '__le__', '__gt__', '__ge__'): + self.assertIs(getattr(v, attr)(w), NotImplemented) + self.assertIs(getattr(a, attr)(v), NotImplemented) + + # Released views compare equal to themselves: + v = memoryview(a) + v.release() + self.assertEqual(v, v) + self.assertNotEqual(v, a) + self.assertNotEqual(a, v) + + v = memoryview(a) + w = memoryview(a) + w.release() + self.assertNotEqual(v, w) + self.assertNotEqual(w, v) + + # Operand does not implement the buffer protocol: + v = memoryview(a) + self.assertNotEqual(v, [1, 2, 3]) + + # Different formats: + c = array.array('l', [1, 2, 3]) + v = memoryview(a) + self.assertNotEqual(v, c) + self.assertNotEqual(c, v) + + # Not implemented formats. Ugly, but inevitable. This is the same as + # issue #2531: equality is also used for membership testing and must + # return a result. 
+ a = ndarray([(1, 1.5), (2, 2.7)], shape=[2], format='ld') + v = memoryview(a) + self.assertNotEqual(v, a) + self.assertNotEqual(a, v) + + a = ndarray([b'12345'], shape=[1], format="s") + v = memoryview(a) + self.assertNotEqual(v, a) + self.assertNotEqual(a, v) + + nd = ndarray([(1,1,1), (2,2,2), (3,3,3)], shape=[3], format='iii') + v = memoryview(nd) + self.assertNotEqual(v, nd) + self.assertNotEqual(nd, v) + + # '@' prefix can be dropped: + nd1 = ndarray([1,2,3], shape=[3], format='@i') + nd2 = ndarray([1,2,3], shape=[3], format='i') + v = memoryview(nd1) + w = memoryview(nd2) + self.assertEqual(v, w) + self.assertEqual(w, v) + self.assertEqual(v, nd2) + self.assertEqual(nd2, v) + self.assertEqual(w, nd1) + self.assertEqual(nd1, w) + + # ndim = 0 + nd1 = ndarray(1729, shape=[], format='@L') + nd2 = ndarray(1729, shape=[], format='L', flags=ND_WRITABLE) + v = memoryview(nd1) + w = memoryview(nd2) + self.assertEqual(v, w) + self.assertEqual(w, v) + self.assertEqual(v, nd2) + self.assertEqual(nd2, v) + self.assertEqual(w, nd1) + self.assertEqual(nd1, w) + + self.assertFalse(v.__ne__(w)) + self.assertFalse(w.__ne__(v)) + + w[()] = 1728 + self.assertNotEqual(v, w) + self.assertNotEqual(w, v) + self.assertNotEqual(v, nd2) + self.assertNotEqual(nd2, v) + self.assertNotEqual(w, nd1) + self.assertNotEqual(nd1, w) + + self.assertFalse(v.__eq__(w)) + self.assertFalse(w.__eq__(v)) + + nd = ndarray(list(range(12)), shape=[12], flags=ND_WRITABLE|ND_PIL) + ex = ndarray(list(range(12)), shape=[12], flags=ND_WRITABLE|ND_PIL) + m = memoryview(ex) + + self.assertEqual(m, nd) + m[9] = 100 + self.assertNotEqual(m, nd) + + # ndim = 1: contiguous + nd1 = ndarray([-529, 576, -625, 676, -729], shape=[5], format='@h') + nd2 = ndarray([-529, 576, -625, 676, 729], shape=[5], format='@h') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # ndim = 1: non-contiguous + nd1 = ndarray([-529, -625, -729], shape=[3], format='@h') + nd2 = ndarray([-529, 576, -625, 676, -729], shape=[5], format='@h') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd2[::2]) + self.assertEqual(w[::2], nd1) + self.assertEqual(v, w[::2]) + self.assertEqual(v[::-1], w[::-2]) + + # ndim = 1: non-contiguous, suboffsets + nd1 = ndarray([-529, -625, -729], shape=[3], format='@h') + nd2 = ndarray([-529, 576, -625, 676, -729], shape=[5], format='@h', + flags=ND_PIL) + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd2[::2]) + self.assertEqual(w[::2], nd1) + self.assertEqual(v, w[::2]) + self.assertEqual(v[::-1], w[::-2]) + + # ndim = 1: zeros in shape + nd1 = ndarray([900, 961], shape=[0], format='@h') + nd2 = ndarray([-900, -961], shape=[0], format='@h') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertEqual(v, nd2) + self.assertEqual(w, nd1) + self.assertEqual(v, w) + + # ndim = 1: zero strides + nd1 = ndarray([900, 900, 900, 900], shape=[4], format='@L') + nd2 = ndarray([900], shape=[4], strides=[0], format='L') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertEqual(v, nd2) + self.assertEqual(w, nd1) + self.assertEqual(v, w) + + n = 10 + for char in fmtdict['@m']: + fmt, items, singleitem = randitems(n, 'memoryview', '@', char) + for flags in (0, ND_PIL): + nd = ndarray(items, shape=[n], format=fmt, flags=flags) + m = memoryview(nd) + 
self.assertEqual(m, nd) + + nd = nd[::-3] + m = memoryview(nd) + self.assertEqual(m, nd) + + ##### ndim > 1: C-contiguous + # different values + nd1 = ndarray(list(range(-15, 15)), shape=[3, 2, 5], format='@h') + nd2 = ndarray(list(range(0, 30)), shape=[3, 2, 5], format='@h') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different shape + nd1 = ndarray(list(range(30)), shape=[2, 3, 5], format='L') + nd2 = ndarray(list(range(30)), shape=[3, 2, 5], format='L') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different format + nd1 = ndarray(list(range(30)), shape=[2, 3, 5], format='L') + nd2 = ndarray(list(range(30)), shape=[2, 3, 5], format='l') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + ##### ndim > 1: Fortran contiguous + # different values + nd1 = ndarray(list(range(-15, 15)), shape=[5, 2, 3], format='@h', + flags=ND_FORTRAN) + nd2 = ndarray(list(range(0, 30)), shape=[5, 2, 3], format='@h', + flags=ND_FORTRAN) + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different shape + nd1 = ndarray(list(range(-15, 15)), shape=[2, 3, 5], format='l', + flags=ND_FORTRAN) + nd2 = ndarray(list(range(-15, 15)), shape=[3, 2, 5], format='l', + flags=ND_FORTRAN) + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different format + nd1 = ndarray(list(range(30)), shape=[5, 2, 3], format='@h', + flags=ND_FORTRAN) + nd2 = ndarray(list(range(30)), shape=[5, 2, 3], format='@b', + flags=ND_FORTRAN) + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + ##### ndim > 1: mixed C/Fortran contiguous + lst1 = list(range(-15, 15)) + lst2 = transpose(lst1, [3, 2, 5]) + nd1 = ndarray(lst1, shape=[3, 2, 5], format='@l') + nd2 = ndarray(lst2, shape=[3, 2, 5], format='l', flags=ND_FORTRAN) + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertEqual(v, w) + + ##### ndim > 1: non-contiguous + # different values + ex1 = ndarray(list(range(40)), shape=[5, 8], format='@I') + nd1 = ex1[3:1:-1, ::-2] + ex2 = ndarray(list(range(40)), shape=[5, 8], format='I') + nd2 = ex2[1:3:1, ::-2] + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different shape + ex1 = ndarray(list(range(30)), shape=[2, 3, 5], format='b') + nd1 = ex1[1:3:, ::-2] + nd2 = ndarray(list(range(30)), shape=[3, 2, 5], format='b') + nd2 = ex2[1:3:, ::-2] + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different format + ex1 = ndarray(list(range(30)), shape=[5, 3, 2], 
format='i') + nd1 = ex1[1:3:, ::-2] + nd2 = ndarray(list(range(30)), shape=[5, 3, 2], format='@I') + nd2 = ex2[1:3:, ::-2] + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + ##### ndim > 1: zeros in shape + nd1 = ndarray(list(range(30)), shape=[0, 3, 2], format='i') + nd2 = ndarray(list(range(30)), shape=[5, 0, 2], format='@i') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # ndim > 1: zero strides + nd1 = ndarray([900]*80, shape=[4, 5, 4], format='@L') + nd2 = ndarray([900], shape=[4, 5, 4], strides=[0, 0, 0], format='L') + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertEqual(v, nd2) + self.assertEqual(w, nd1) + self.assertEqual(v, w) + self.assertEqual(v.tolist(), w.tolist()) + + ##### ndim > 1: suboffsets + ex1 = ndarray(list(range(40)), shape=[5, 8], format='@I') + nd1 = ex1[3:1:-1, ::-2] + ex2 = ndarray(list(range(40)), shape=[5, 8], format='I', flags=ND_PIL) + nd2 = ex2[1:3:1, ::-2] + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different shape + ex1 = ndarray(list(range(30)), shape=[2, 3, 5], format='b', flags=ND_PIL) + nd1 = ex1[1:3:, ::-2] + nd2 = ndarray(list(range(30)), shape=[3, 2, 5], format='b') + nd2 = ex2[1:3:, ::-2] + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # different format + ex1 = ndarray(list(range(30)), shape=[5, 3, 2], format='i', flags=ND_PIL) + nd1 = ex1[1:3:, ::-2] + nd2 = ndarray(list(range(30)), shape=[5, 3, 2], format='@I', flags=ND_PIL) + nd2 = ex2[1:3:, ::-2] + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertNotEqual(v, nd2) + self.assertNotEqual(w, nd1) + self.assertNotEqual(v, w) + + # initialize mixed C/Fortran + suboffsets + lst1 = list(range(-15, 15)) + lst2 = transpose(lst1, [3, 2, 5]) + nd1 = ndarray(lst1, shape=[3, 2, 5], format='@l', flags=ND_PIL) + nd2 = ndarray(lst2, shape=[3, 2, 5], format='l', flags=ND_FORTRAN|ND_PIL) + v = memoryview(nd1) + w = memoryview(nd2) + + self.assertEqual(v, nd1) + self.assertEqual(w, nd2) + self.assertEqual(v, w) + + def test_memoryview_check_released(self): + + a = array.array('d', [1.1, 2.2, 3.3]) + + m = memoryview(a) + m.release() + + # PyMemoryView_FromObject() + self.assertRaises(ValueError, memoryview, m) + # memoryview.cast() + self.assertRaises(ValueError, m.cast, 'c') + # getbuffer() + self.assertRaises(ValueError, ndarray, m) + # memoryview.tolist() + self.assertRaises(ValueError, m.tolist) + # memoryview.tobytes() + self.assertRaises(ValueError, m.tobytes) + # sequence + self.assertRaises(ValueError, eval, "1.0 in m", locals()) + # subscript + self.assertRaises(ValueError, m.__getitem__, 0) + # assignment + self.assertRaises(ValueError, m.__setitem__, 0, 1) + + for attr in ('obj', 'nbytes', 'readonly', 'itemsize', 'format', 'ndim', + 'shape', 'strides', 'suboffsets', 'c_contiguous', + 'f_contiguous', 'contiguous'): + self.assertRaises(ValueError, m.__getattribute__, attr) + + # richcompare + b = 
array.array('d', [1.1, 2.2, 3.3]) + m1 = memoryview(a) + m2 = memoryview(b) + + self.assertEqual(m1, m2) + m1.release() + self.assertNotEqual(m1, m2) + self.assertNotEqual(m1, a) + self.assertEqual(m1, m1) + + def test_memoryview_tobytes(self): + # Many implicit tests are already in self.verify(). + + nd = ndarray([-529, 576, -625, 676, -729], shape=[5], format='@h') + + m = memoryview(nd) + self.assertEqual(m.tobytes(), nd.tobytes()) + + def test_memoryview_get_contiguous(self): + # Many implicit tests are already in self.verify(). + + # no buffer interface + self.assertRaises(TypeError, get_contiguous, {}, PyBUF_READ, 'F') + + # writable request to read-only object + self.assertRaises(BufferError, get_contiguous, b'x', PyBUF_WRITE, 'C') + + # writable request to non-contiguous object + nd = ndarray([1, 2, 3], shape=[2], strides=[2]) + self.assertRaises(BufferError, get_contiguous, nd, PyBUF_WRITE, 'A') + + # scalar, read-only request from read-only exporter + nd = ndarray(9, shape=(), format="L") + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_READ, order) + self.assertEqual(m, nd) + self.assertEqual(m[()], 9) + + # scalar, read-only request from writable exporter + nd = ndarray(9, shape=(), format="L", flags=ND_WRITABLE) + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_READ, order) + self.assertEqual(m, nd) + self.assertEqual(m[()], 9) + + # scalar, writable request + for order in ['C', 'F', 'A']: + nd[()] = 9 + m = get_contiguous(nd, PyBUF_WRITE, order) + self.assertEqual(m, nd) + self.assertEqual(m[()], 9) + + m[()] = 10 + self.assertEqual(m[()], 10) + self.assertEqual(nd[()], 10) + + # zeros in shape + nd = ndarray([1], shape=[0], format="L", flags=ND_WRITABLE) + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_READ, order) + self.assertRaises(IndexError, m.__getitem__, 0) + self.assertEqual(m, nd) + self.assertEqual(m.tolist(), []) + + nd = ndarray(list(range(8)), shape=[2, 0, 7], format="L", + flags=ND_WRITABLE) + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_READ, order) + self.assertEqual(ndarray(m).tolist(), [[], []]) + + # one-dimensional + nd = ndarray([1], shape=[1], format="h", flags=ND_WRITABLE) + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_WRITE, order) + self.assertEqual(m, nd) + self.assertEqual(m.tolist(), nd.tolist()) + + nd = ndarray([1, 2, 3], shape=[3], format="b", flags=ND_WRITABLE) + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_WRITE, order) + self.assertEqual(m, nd) + self.assertEqual(m.tolist(), nd.tolist()) + + # one-dimensional, non-contiguous + nd = ndarray([1, 2, 3], shape=[2], strides=[2], flags=ND_WRITABLE) + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_READ, order) + self.assertEqual(m, nd) + self.assertEqual(m.tolist(), nd.tolist()) + self.assertRaises(TypeError, m.__setitem__, 1, 20) + self.assertEqual(m[1], 3) + self.assertEqual(nd[1], 3) + + nd = nd[::-1] + for order in ['C', 'F', 'A']: + m = get_contiguous(nd, PyBUF_READ, order) + self.assertEqual(m, nd) + self.assertEqual(m.tolist(), nd.tolist()) + self.assertRaises(TypeError, m.__setitem__, 1, 20) + self.assertEqual(m[1], 1) + self.assertEqual(nd[1], 1) + + # multi-dimensional, contiguous input + nd = ndarray(list(range(12)), shape=[3, 4], flags=ND_WRITABLE) + for order in ['C', 'A']: + m = get_contiguous(nd, PyBUF_WRITE, order) + self.assertEqual(ndarray(m).tolist(), nd.tolist()) + + self.assertRaises(BufferError, get_contiguous, nd, PyBUF_WRITE, 'F') + m = get_contiguous(nd, PyBUF_READ, order) 
+ self.assertEqual(ndarray(m).tolist(), nd.tolist()) + + nd = ndarray(list(range(12)), shape=[3, 4], + flags=ND_WRITABLE|ND_FORTRAN) + for order in ['F', 'A']: + m = get_contiguous(nd, PyBUF_WRITE, order) + self.assertEqual(ndarray(m).tolist(), nd.tolist()) + + self.assertRaises(BufferError, get_contiguous, nd, PyBUF_WRITE, 'C') + m = get_contiguous(nd, PyBUF_READ, order) + self.assertEqual(ndarray(m).tolist(), nd.tolist()) + + # multi-dimensional, non-contiguous input + nd = ndarray(list(range(12)), shape=[3, 4], flags=ND_WRITABLE|ND_PIL) + for order in ['C', 'F', 'A']: + self.assertRaises(BufferError, get_contiguous, nd, PyBUF_WRITE, + order) + m = get_contiguous(nd, PyBUF_READ, order) + self.assertEqual(ndarray(m).tolist(), nd.tolist()) + + # flags + nd = ndarray([1,2,3,4,5], shape=[3], strides=[2]) + m = get_contiguous(nd, PyBUF_READ, 'C') + self.assertTrue(m.c_contiguous) + + def test_memoryview_serializing(self): + + # C-contiguous + size = struct.calcsize('i') + a = array.array('i', [1,2,3,4,5]) + m = memoryview(a) + buf = io.BytesIO(m) + b = bytearray(5*size) + buf.readinto(b) + self.assertEqual(m.tobytes(), b) + + # C-contiguous, multi-dimensional + size = struct.calcsize('L') + nd = ndarray(list(range(12)), shape=[2,3,2], format="L") + m = memoryview(nd) + buf = io.BytesIO(m) + b = bytearray(2*3*2*size) + buf.readinto(b) + self.assertEqual(m.tobytes(), b) + + # Fortran contiguous, multi-dimensional + #size = struct.calcsize('L') + #nd = ndarray(list(range(12)), shape=[2,3,2], format="L", + # flags=ND_FORTRAN) + #m = memoryview(nd) + #buf = io.BytesIO(m) + #b = bytearray(2*3*2*size) + #buf.readinto(b) + #self.assertEqual(m.tobytes(), b) + + def test_memoryview_hash(self): + + # bytes exporter + b = bytes(list(range(12))) + m = memoryview(b) + self.assertEqual(hash(b), hash(m)) + + # C-contiguous + mc = m.cast('c', shape=[3,4]) + self.assertEqual(hash(mc), hash(b)) + + # non-contiguous + mx = m[::-2] + b = bytes(list(range(12))[::-2]) + self.assertEqual(hash(mx), hash(b)) + + # Fortran contiguous + nd = ndarray(list(range(30)), shape=[3,2,5], flags=ND_FORTRAN) + m = memoryview(nd) + self.assertEqual(hash(m), hash(nd)) + + # multi-dimensional slice + nd = ndarray(list(range(30)), shape=[3,2,5]) + x = nd[::2, ::, ::-1] + m = memoryview(x) + self.assertEqual(hash(m), hash(x)) + + # multi-dimensional slice with suboffsets + nd = ndarray(list(range(30)), shape=[2,5,3], flags=ND_PIL) + x = nd[::2, ::, ::-1] + m = memoryview(x) + self.assertEqual(hash(m), hash(x)) + + # non-byte formats + nd = ndarray(list(range(12)), shape=[2,2,3], format='L') + m = memoryview(nd) + self.assertEqual(hash(m), hash(nd.tobytes())) + + nd = ndarray(list(range(-6, 6)), shape=[2,2,3], format='h') + m = memoryview(nd) + self.assertEqual(hash(m), hash(nd.tobytes())) + + def test_memoryview_release(self): + + # Create re-exporter from getbuffer(memoryview), then release the view. + a = bytearray([1,2,3]) + m = memoryview(a) + nd = ndarray(m) # re-exporter + self.assertRaises(BufferError, m.release) + del nd + m.release() + + # chained views + a = bytearray([1,2,3]) + m1 = memoryview(a) + m2 = memoryview(m1) + nd = ndarray(m2) # re-exporter + m1.release() + self.assertRaises(BufferError, m2.release) + del nd + m2.release() + + # Allow changing layout while buffers are exported. 
+ nd = ndarray([1,2,3], shape=[3], flags=ND_VAREXPORT) + m1 = memoryview(nd) + + nd.push([4,5,6,7,8], shape=[5]) # mutate nd + m2 = memoryview(nd) + + x = memoryview(m1) + self.assertEqual(x.tolist(), m1.tolist()) + + y = memoryview(m2) + self.assertEqual(y.tolist(), m2.tolist()) + self.assertEqual(y.tolist(), nd.tolist()) + m2.release() + y.release() + + nd.pop() # pop the current view + self.assertEqual(x.tolist(), nd.tolist()) + + del nd + m1.release() + x.release() + + # If multiple memoryviews share the same managed buffer, implicit + # release() in the context manager's __exit__() method should still + # work. + def catch22(b): + with memoryview(b) as m2: + pass + + x = bytearray(b'123') + with memoryview(x) as m1: + catch22(m1) + self.assertEqual(m1[0], ord(b'1')) + + # XXX If m1 has exports, raise BufferError. + # x = bytearray(b'123') + # with memoryview(x) as m1: + # ex = ndarray(m1) + # m1[0] == ord(b'1') + + def test_issue_7385(self): + x = ndarray([1,2,3], shape=[3], flags=ND_GETBUF_FAIL) + self.assertRaises(BufferError, memoryview, x) + + +def test_main(): + support.run_unittest(TestBufferProtocol) + + +if __name__ == "__main__": + test_main() diff --git a/Lib/test/test_memoryview.py b/Lib/test/test_memoryview.py index a5a0ca1..8809930 100644 --- a/Lib/test/test_memoryview.py +++ b/Lib/test/test_memoryview.py @@ -24,15 +24,14 @@ class AbstractMemoryTests: return filter(None, [self.ro_type, self.rw_type]) def check_getitem_with_type(self, tp): - item = self.getitem_type b = tp(self._source) oldrefcount = sys.getrefcount(b) m = self._view(b) - self.assertEqual(m[0], item(b"a")) - self.assertIsInstance(m[0], bytes) - self.assertEqual(m[5], item(b"f")) - self.assertEqual(m[-1], item(b"f")) - self.assertEqual(m[-6], item(b"a")) + self.assertEqual(m[0], ord(b"a")) + self.assertIsInstance(m[0], int) + self.assertEqual(m[5], ord(b"f")) + self.assertEqual(m[-1], ord(b"f")) + self.assertEqual(m[-6], ord(b"a")) # Bounds checking self.assertRaises(IndexError, lambda: m[6]) self.assertRaises(IndexError, lambda: m[-7]) @@ -76,7 +75,9 @@ class AbstractMemoryTests: b = self.rw_type(self._source) oldrefcount = sys.getrefcount(b) m = self._view(b) - m[0] = tp(b"0") + m[0] = ord(b'1') + self._check_contents(tp, b, b"1bcdef") + m[0:1] = tp(b"0") self._check_contents(tp, b, b"0bcdef") m[1:3] = tp(b"12") self._check_contents(tp, b, b"012def") @@ -102,10 +103,17 @@ class AbstractMemoryTests: # Wrong index/slice types self.assertRaises(TypeError, setitem, 0.0, b"a") self.assertRaises(TypeError, setitem, (0,), b"a") + self.assertRaises(TypeError, setitem, (slice(0,1,1), 0), b"a") + self.assertRaises(TypeError, setitem, (0, slice(0,1,1)), b"a") + self.assertRaises(TypeError, setitem, (0,), b"a") self.assertRaises(TypeError, setitem, "a", b"a") + # Not implemented: multidimensional slices + slices = (slice(0,1,1), slice(0,1,2)) + self.assertRaises(NotImplementedError, setitem, slices, b"a") # Trying to resize the memory object - self.assertRaises(ValueError, setitem, 0, b"") - self.assertRaises(ValueError, setitem, 0, b"ab") + exc = ValueError if m.format == 'c' else TypeError + self.assertRaises(exc, setitem, 0, b"") + self.assertRaises(exc, setitem, 0, b"ab") self.assertRaises(ValueError, setitem, slice(1,1), b"a") self.assertRaises(ValueError, setitem, slice(0,2), b"a") @@ -175,7 +183,7 @@ class AbstractMemoryTests: self.assertEqual(m.shape, (6,)) self.assertEqual(len(m), 6) self.assertEqual(m.strides, (self.itemsize,)) - self.assertEqual(m.suboffsets, None) + self.assertEqual(m.suboffsets, ()) 
return m def test_attributes_readonly(self): @@ -209,12 +217,16 @@ class AbstractMemoryTests: # If tp is a factory rather than a plain type, skip continue + class MyView(): + def __init__(self, base): + self.m = memoryview(base) class MySource(tp): pass class MyObject: pass - # Create a reference cycle through a memoryview object + # Create a reference cycle through a memoryview object. + # This exercises mbuf_clear(). b = MySource(tp(b'abc')) m = self._view(b) o = MyObject() @@ -226,6 +238,17 @@ class AbstractMemoryTests: gc.collect() self.assertTrue(wr() is None, wr()) + # This exercises memory_clear(). + m = MyView(tp(b'abc')) + o = MyObject() + m.x = m + m.o = o + wr = weakref.ref(o) + m = o = None + # The cycle must be broken + gc.collect() + self.assertTrue(wr() is None, wr()) + def _check_released(self, m, tp): check = self.assertRaisesRegex(ValueError, "released") with check: bytes(m) @@ -283,9 +306,12 @@ class AbstractMemoryTests: i = io.BytesIO(b'ZZZZ') self.assertRaises(TypeError, i.readinto, m) + def test_getbuf_fail(self): + self.assertRaises(TypeError, self._view, {}) + def test_hash(self): # Memoryviews of readonly (hashable) types are hashable, and they - # hash as the corresponding object. + # hash as hash(obj.tobytes()). tp = self.ro_type if tp is None: self.skipTest("no read-only type to test") diff --git a/Lib/test/test_sys.py b/Lib/test/test_sys.py index bf22df2..551c3a5 100644 --- a/Lib/test/test_sys.py +++ b/Lib/test/test_sys.py @@ -773,8 +773,8 @@ class SizeofTest(unittest.TestCase): check(int(PyLong_BASE), size(vh) + 2*self.longdigit) check(int(PyLong_BASE**2-1), size(vh) + 2*self.longdigit) check(int(PyLong_BASE**2), size(vh) + 3*self.longdigit) - # memory (Py_buffer + hash value) - check(memoryview(b''), size(h + 'PP2P2i7P' + 'P')) + # memoryview + check(memoryview(b''), size(h + 'PPiP4P2i5P3cP')) # module check(unittest, size(h + '3P')) # None @@ -1041,6 +1041,7 @@ John Viega Kannan Vijayan Kurt Vile Norman Vine +Pauli Virtanen Frank Visser Johannes Vogel Sjoerd de Vries @@ -10,6 +10,23 @@ What's New in Python 3.3 Alpha 1? Core and Builtins ----------------- +- Issue #10181: New memoryview implementation fixes multiple ownership + and lifetime issues of dynamically allocated Py_buffer members (#9990) + as well as crashes (#8305, #7433). Many new features have been added + (See whatsnew/3.3), and the documentation has been updated extensively. + The ndarray test object from _testbuffer.c implements all aspects of + PEP-3118, so further development towards the complete implementation + of the PEP can proceed in a test-driven manner. + + Thanks to Nick Coghlan, Antoine Pitrou and Pauli Virtanen for review + and many ideas. + +- Issue #12834: Fix incorrect results of memoryview.tobytes() for + non-contiguous arrays. + +- Issue #5231: Introduce memoryview.cast() method that allows changing + format and shape without making a copy of the underlying memory. + - Issue #14084: Fix a file descriptor leak when importing a module with a bad encoding. 
diff --git a/Misc/valgrind-python.supp b/Misc/valgrind-python.supp index 194ecbf..20dbf1e 100644 --- a/Misc/valgrind-python.supp +++ b/Misc/valgrind-python.supp @@ -412,4 +412,15 @@ fun:SHA1_Update } +{ + test_buffer_non_debug + Memcheck:Addr4 + fun:PyUnicodeUCS2_FSConverter +} + +{ + test_buffer_non_debug + Memcheck:Addr4 + fun:PyUnicode_FSConverter +} diff --git a/Modules/_testbuffer.c b/Modules/_testbuffer.c new file mode 100644 index 0000000..39a7bcc --- /dev/null +++ b/Modules/_testbuffer.c @@ -0,0 +1,2683 @@ +/* C Extension module to test all aspects of PEP-3118. + Written by Stefan Krah. */ + + +#define PY_SSIZE_T_CLEAN + +#include "Python.h" + + +/* struct module */ +PyObject *structmodule = NULL; +PyObject *Struct = NULL; +PyObject *calcsize = NULL; + +/* cache simple format string */ +static const char *simple_fmt = "B"; +PyObject *simple_format = NULL; +#define SIMPLE_FORMAT(fmt) (fmt == NULL || strcmp(fmt, "B") == 0) + + +/**************************************************************************/ +/* NDArray Object */ +/**************************************************************************/ + +static PyTypeObject NDArray_Type; +#define NDArray_Check(v) (Py_TYPE(v) == &NDArray_Type) + +#define CHECK_LIST_OR_TUPLE(v) \ + if (!PyList_Check(v) && !PyTuple_Check(v)) { \ + PyErr_SetString(PyExc_TypeError, \ + #v " must be a list or a tuple"); \ + return NULL; \ + } \ + +#define PyMem_XFree(v) \ + do { if (v) PyMem_Free(v); } while (0) + +/* Maximum number of dimensions. */ +#define ND_MAX_NDIM (2 * PyBUF_MAX_NDIM) + +/* Check for the presence of suboffsets in the first dimension. */ +#define HAVE_PTR(suboffsets) (suboffsets && suboffsets[0] >= 0) +/* Adjust ptr if suboffsets are present. */ +#define ADJUST_PTR(ptr, suboffsets) \ + (HAVE_PTR(suboffsets) ? 
*((char**)ptr) + suboffsets[0] : ptr) + +/* User configurable flags for the ndarray */ +#define ND_VAREXPORT 0x001 /* change layout while buffers are exported */ + +/* User configurable flags for each base buffer */ +#define ND_WRITABLE 0x002 /* mark base buffer as writable */ +#define ND_FORTRAN 0x004 /* Fortran contiguous layout */ +#define ND_SCALAR 0x008 /* scalar: ndim = 0 */ +#define ND_PIL 0x010 /* convert to PIL-style array (suboffsets) */ +#define ND_GETBUF_FAIL 0x020 /* test issue 7385 */ + +/* Default: NumPy style (strides), read-only, no var-export, C-style layout */ +#define ND_DEFAULT 0x0 + +/* Internal flags for the base buffer */ +#define ND_C 0x040 /* C contiguous layout (default) */ +#define ND_OWN_ARRAYS 0x080 /* consumer owns arrays */ +#define ND_UNUSED 0x100 /* initializer */ + +/* ndarray properties */ +#define ND_IS_CONSUMER(nd) \ + (((NDArrayObject *)nd)->head == &((NDArrayObject *)nd)->staticbuf) + +/* ndbuf->flags properties */ +#define ND_C_CONTIGUOUS(flags) (!!(flags&(ND_SCALAR|ND_C))) +#define ND_FORTRAN_CONTIGUOUS(flags) (!!(flags&(ND_SCALAR|ND_FORTRAN))) +#define ND_ANY_CONTIGUOUS(flags) (!!(flags&(ND_SCALAR|ND_C|ND_FORTRAN))) + +/* getbuffer() requests */ +#define REQ_INDIRECT(flags) ((flags&PyBUF_INDIRECT) == PyBUF_INDIRECT) +#define REQ_C_CONTIGUOUS(flags) ((flags&PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) +#define REQ_F_CONTIGUOUS(flags) ((flags&PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) +#define REQ_ANY_CONTIGUOUS(flags) ((flags&PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS) +#define REQ_STRIDES(flags) ((flags&PyBUF_STRIDES) == PyBUF_STRIDES) +#define REQ_SHAPE(flags) ((flags&PyBUF_ND) == PyBUF_ND) +#define REQ_WRITABLE(flags) (flags&PyBUF_WRITABLE) +#define REQ_FORMAT(flags) (flags&PyBUF_FORMAT) + + +/* Single node of a list of base buffers. The list is needed to implement + changes in memory layout while exported buffers are active. 
*/ +static PyTypeObject NDArray_Type; + +struct ndbuf; +typedef struct ndbuf { + struct ndbuf *next; + struct ndbuf *prev; + Py_ssize_t len; /* length of data */ + Py_ssize_t offset; /* start of the array relative to data */ + char *data; /* raw data */ + int flags; /* capabilities of the base buffer */ + Py_ssize_t exports; /* number of exports */ + Py_buffer base; /* base buffer */ +} ndbuf_t; + +typedef struct { + PyObject_HEAD + int flags; /* ndarray flags */ + ndbuf_t staticbuf; /* static buffer for re-exporting mode */ + ndbuf_t *head; /* currently active base buffer */ +} NDArrayObject; + + +static ndbuf_t * +ndbuf_new(Py_ssize_t nitems, Py_ssize_t itemsize, Py_ssize_t offset, int flags) +{ + ndbuf_t *ndbuf; + Py_buffer *base; + Py_ssize_t len; + + len = nitems * itemsize; + if (offset % itemsize) { + PyErr_SetString(PyExc_ValueError, + "offset must be a multiple of itemsize"); + return NULL; + } + if (offset < 0 || offset+itemsize > len) { + PyErr_SetString(PyExc_ValueError, "offset out of bounds"); + return NULL; + } + + ndbuf = PyMem_Malloc(sizeof *ndbuf); + if (ndbuf == NULL) { + PyErr_NoMemory(); + return NULL; + } + + ndbuf->next = NULL; + ndbuf->prev = NULL; + ndbuf->len = len; + ndbuf->offset= offset; + + ndbuf->data = PyMem_Malloc(len); + if (ndbuf->data == NULL) { + PyErr_NoMemory(); + PyMem_Free(ndbuf); + return NULL; + } + + ndbuf->flags = flags; + ndbuf->exports = 0; + + base = &ndbuf->base; + base->obj = NULL; + base->buf = ndbuf->data; + base->len = len; + base->itemsize = 1; + base->readonly = 0; + base->format = NULL; + base->ndim = 1; + base->shape = NULL; + base->strides = NULL; + base->suboffsets = NULL; + base->internal = ndbuf; + + return ndbuf; +} + +static void +ndbuf_free(ndbuf_t *ndbuf) +{ + Py_buffer *base = &ndbuf->base; + + PyMem_XFree(ndbuf->data); + PyMem_XFree(base->format); + PyMem_XFree(base->shape); + PyMem_XFree(base->strides); + PyMem_XFree(base->suboffsets); + + PyMem_Free(ndbuf); +} + +static void +ndbuf_push(NDArrayObject *nd, ndbuf_t *elt) +{ + elt->next = nd->head; + if (nd->head) nd->head->prev = elt; + nd->head = elt; + elt->prev = NULL; +} + +static void +ndbuf_delete(NDArrayObject *nd, ndbuf_t *elt) +{ + if (elt->prev) + elt->prev->next = elt->next; + else + nd->head = elt->next; + + if (elt->next) + elt->next->prev = elt->prev; + + ndbuf_free(elt); +} + +static void +ndbuf_pop(NDArrayObject *nd) +{ + ndbuf_delete(nd, nd->head); +} + + +static PyObject * +ndarray_new(PyTypeObject *type, PyObject *args, PyObject *kwds) +{ + NDArrayObject *nd; + + nd = PyObject_New(NDArrayObject, &NDArray_Type); + if (nd == NULL) + return NULL; + + nd->flags = 0; + nd->head = NULL; + return (PyObject *)nd; +} + +static void +ndarray_dealloc(NDArrayObject *self) +{ + if (self->head) { + if (ND_IS_CONSUMER(self)) { + Py_buffer *base = &self->head->base; + if (self->head->flags & ND_OWN_ARRAYS) { + PyMem_XFree(base->shape); + PyMem_XFree(base->strides); + PyMem_XFree(base->suboffsets); + } + PyBuffer_Release(base); + } + else { + while (self->head) + ndbuf_pop(self); + } + } + PyObject_Del(self); +} + +static int +ndarray_init_staticbuf(PyObject *exporter, NDArrayObject *nd, int flags) +{ + Py_buffer *base = &nd->staticbuf.base; + + if (PyObject_GetBuffer(exporter, base, flags) < 0) + return -1; + + nd->head = &nd->staticbuf; + + nd->head->next = NULL; + nd->head->prev = NULL; + nd->head->len = -1; + nd->head->offset = -1; + nd->head->data = NULL; + + nd->head->flags = base->readonly ? 
0 : ND_WRITABLE; + nd->head->exports = 0; + + return 0; +} + +static void +init_flags(ndbuf_t *ndbuf) +{ + if (ndbuf->base.ndim == 0) + ndbuf->flags |= ND_SCALAR; + if (ndbuf->base.suboffsets) + ndbuf->flags |= ND_PIL; + if (PyBuffer_IsContiguous(&ndbuf->base, 'C')) + ndbuf->flags |= ND_C; + if (PyBuffer_IsContiguous(&ndbuf->base, 'F')) + ndbuf->flags |= ND_FORTRAN; +} + + +/****************************************************************************/ +/* Buffer/List conversions */ +/****************************************************************************/ + +static Py_ssize_t *strides_from_shape(const ndbuf_t *, int flags); + +/* Get number of members in a struct: see issue #12740 */ +typedef struct { + PyObject_HEAD + Py_ssize_t s_size; + Py_ssize_t s_len; +} PyPartialStructObject; + +static Py_ssize_t +get_nmemb(PyObject *s) +{ + return ((PyPartialStructObject *)s)->s_len; +} + +/* Pack all items into the buffer of 'obj'. The 'format' parameter must be + in struct module syntax. For standard C types, a single item is an integer. + For compound types, a single item is a tuple of integers. */ +static int +pack_from_list(PyObject *obj, PyObject *items, PyObject *format, + Py_ssize_t itemsize) +{ + PyObject *structobj, *pack_into; + PyObject *args, *offset; + PyObject *item, *tmp; + Py_ssize_t nitems; /* number of items */ + Py_ssize_t nmemb; /* number of members in a single item */ + Py_ssize_t i, j; + int ret = 0; + + assert(PyObject_CheckBuffer(obj)); + assert(PyList_Check(items) || PyTuple_Check(items)); + + structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL); + if (structobj == NULL) + return -1; + + nitems = PySequence_Fast_GET_SIZE(items); + nmemb = get_nmemb(structobj); + assert(nmemb >= 1); + + pack_into = PyObject_GetAttrString(structobj, "pack_into"); + if (pack_into == NULL) { + Py_DECREF(structobj); + return -1; + } + + /* nmemb >= 1 */ + args = PyTuple_New(2 + nmemb); + if (args == NULL) { + Py_DECREF(pack_into); + Py_DECREF(structobj); + return -1; + } + + offset = NULL; + for (i = 0; i < nitems; i++) { + /* Loop invariant: args[j] are borrowed references or NULL. 
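+           PyTuple_SET_ITEM() steals references, so the borrowed items are
+           re-incref'd below before the final Py_DECREF(args).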
*/ + PyTuple_SET_ITEM(args, 0, obj); + for (j = 1; j < 2+nmemb; j++) + PyTuple_SET_ITEM(args, j, NULL); + + Py_XDECREF(offset); + offset = PyLong_FromSsize_t(i*itemsize); + if (offset == NULL) { + ret = -1; + break; + } + PyTuple_SET_ITEM(args, 1, offset); + + item = PySequence_Fast_GET_ITEM(items, i); + if ((PyBytes_Check(item) || PyLong_Check(item) || + PyFloat_Check(item)) && nmemb == 1) { + PyTuple_SET_ITEM(args, 2, item); + } + else if ((PyList_Check(item) || PyTuple_Check(item)) && + PySequence_Length(item) == nmemb) { + for (j = 0; j < nmemb; j++) { + tmp = PySequence_Fast_GET_ITEM(item, j); + PyTuple_SET_ITEM(args, 2+j, tmp); + } + } + else { + PyErr_SetString(PyExc_ValueError, + "mismatch between initializer element and format string"); + ret = -1; + break; + } + + tmp = PyObject_CallObject(pack_into, args); + if (tmp == NULL) { + ret = -1; + break; + } + Py_DECREF(tmp); + } + + Py_INCREF(obj); /* args[0] */ + /* args[1]: offset is either NULL or should be dealloc'd */ + for (i = 2; i < 2+nmemb; i++) { + tmp = PyTuple_GET_ITEM(args, i); + Py_XINCREF(tmp); + } + Py_DECREF(args); + + Py_DECREF(pack_into); + Py_DECREF(structobj); + return ret; + +} + +/* Pack single element */ +static int +pack_single(char *ptr, PyObject *item, const char *fmt, Py_ssize_t itemsize) +{ + PyObject *structobj = NULL, *pack_into = NULL, *args = NULL; + PyObject *format = NULL, *mview = NULL, *zero = NULL; + Py_ssize_t i, nmemb; + int ret = -1; + PyObject *x; + + if (fmt == NULL) fmt = "B"; + + format = PyUnicode_FromString(fmt); + if (format == NULL) + goto out; + + structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL); + if (structobj == NULL) + goto out; + + nmemb = get_nmemb(structobj); + assert(nmemb >= 1); + + mview = PyMemoryView_FromMemory(ptr, itemsize, PyBUF_WRITE); + if (mview == NULL) + goto out; + + zero = PyLong_FromLong(0); + if (zero == NULL) + goto out; + + pack_into = PyObject_GetAttrString(structobj, "pack_into"); + if (pack_into == NULL) + goto out; + + args = PyTuple_New(2+nmemb); + if (args == NULL) + goto out; + + PyTuple_SET_ITEM(args, 0, mview); + PyTuple_SET_ITEM(args, 1, zero); + + if ((PyBytes_Check(item) || PyLong_Check(item) || + PyFloat_Check(item)) && nmemb == 1) { + PyTuple_SET_ITEM(args, 2, item); + } + else if ((PyList_Check(item) || PyTuple_Check(item)) && + PySequence_Length(item) == nmemb) { + for (i = 0; i < nmemb; i++) { + x = PySequence_Fast_GET_ITEM(item, i); + PyTuple_SET_ITEM(args, 2+i, x); + } + } + else { + PyErr_SetString(PyExc_ValueError, + "mismatch between initializer element and format string"); + goto args_out; + } + + x = PyObject_CallObject(pack_into, args); + if (x != NULL) { + Py_DECREF(x); + ret = 0; + } + + +args_out: + for (i = 0; i < 2+nmemb; i++) + Py_XINCREF(PyTuple_GET_ITEM(args, i)); + Py_XDECREF(args); +out: + Py_XDECREF(pack_into); + Py_XDECREF(zero); + Py_XDECREF(mview); + Py_XDECREF(structobj); + Py_XDECREF(format); + return ret; +} + +static void +copy_rec(const Py_ssize_t *shape, Py_ssize_t ndim, Py_ssize_t itemsize, + char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets, + char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets, + char *mem) +{ + Py_ssize_t i; + + assert(ndim >= 1); + + if (ndim == 1) { + if (!HAVE_PTR(dsuboffsets) && !HAVE_PTR(ssuboffsets) && + dstrides[0] == itemsize && sstrides[0] == itemsize) { + memmove(dptr, sptr, shape[0] * itemsize); + } + else { + char *p; + assert(mem != NULL); + for (i=0, p=mem; i<shape[0]; p+=itemsize, sptr+=sstrides[0], i++) { + char *xsptr = 
ADJUST_PTR(sptr, ssuboffsets); + memcpy(p, xsptr, itemsize); + } + for (i=0, p=mem; i<shape[0]; p+=itemsize, dptr+=dstrides[0], i++) { + char *xdptr = ADJUST_PTR(dptr, dsuboffsets); + memcpy(xdptr, p, itemsize); + } + } + return; + } + + for (i = 0; i < shape[0]; dptr+=dstrides[0], sptr+=sstrides[0], i++) { + char *xdptr = ADJUST_PTR(dptr, dsuboffsets); + char *xsptr = ADJUST_PTR(sptr, ssuboffsets); + + copy_rec(shape+1, ndim-1, itemsize, + xdptr, dstrides+1, dsuboffsets ? dsuboffsets+1 : NULL, + xsptr, sstrides+1, ssuboffsets ? ssuboffsets+1 : NULL, + mem); + } +} + +static int +cmp_structure(Py_buffer *dest, Py_buffer *src) +{ + Py_ssize_t i; + int same_fmt = ((dest->format == NULL && src->format == NULL) || \ + (strcmp(dest->format, src->format) == 0)); + + if (!same_fmt || + dest->itemsize != src->itemsize || + dest->ndim != src->ndim) + return -1; + + for (i = 0; i < dest->ndim; i++) { + if (dest->shape[i] != src->shape[i]) + return -1; + if (dest->shape[i] == 0) + break; + } + + return 0; +} + +/* Copy src to dest. Both buffers must have the same format, itemsize, + ndim and shape. Copying is atomic, the function never fails with + a partial copy. */ +static int +copy_buffer(Py_buffer *dest, Py_buffer *src) +{ + char *mem = NULL; + + assert(dest->ndim > 0); + + if (cmp_structure(dest, src) < 0) { + PyErr_SetString(PyExc_ValueError, + "ndarray assignment: lvalue and rvalue have different structures"); + return -1; + } + + if ((dest->suboffsets && dest->suboffsets[dest->ndim-1] >= 0) || + (src->suboffsets && src->suboffsets[src->ndim-1] >= 0) || + dest->strides[dest->ndim-1] != dest->itemsize || + src->strides[src->ndim-1] != src->itemsize) { + mem = PyMem_Malloc(dest->shape[dest->ndim-1] * dest->itemsize); + if (mem == NULL) { + PyErr_NoMemory(); + return -1; + } + } + + copy_rec(dest->shape, dest->ndim, dest->itemsize, + dest->buf, dest->strides, dest->suboffsets, + src->buf, src->strides, src->suboffsets, + mem); + + PyMem_XFree(mem); + return 0; +} + + +/* Unpack single element */ +static PyObject * +unpack_single(char *ptr, const char *fmt, Py_ssize_t itemsize) +{ + PyObject *x, *unpack_from, *mview; + + if (fmt == NULL) { + fmt = "B"; + itemsize = 1; + } + + unpack_from = PyObject_GetAttrString(structmodule, "unpack_from"); + if (unpack_from == NULL) + return NULL; + + mview = PyMemoryView_FromMemory(ptr, itemsize, PyBUF_READ); + if (mview == NULL) { + Py_DECREF(unpack_from); + return NULL; + } + + x = PyObject_CallFunction(unpack_from, "sO", fmt, mview); + Py_DECREF(unpack_from); + Py_DECREF(mview); + if (x == NULL) + return NULL; + + if (PyTuple_GET_SIZE(x) == 1) { + PyObject *tmp = PyTuple_GET_ITEM(x, 0); + Py_INCREF(tmp); + Py_DECREF(x); + return tmp; + } + + return x; +} + +/* Unpack a multi-dimensional matrix into a nested list. Return a scalar + for ndim = 0. 
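+   'mview' is a memoryview wrapping the scratch buffer 'item': each element
+   is first memcpy'd into 'item' so that struct.unpack_from() can read it
+   through the view, independent of the source strides and suboffsets.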
*/ +static PyObject * +unpack_rec(PyObject *unpack_from, char *ptr, PyObject *mview, char *item, + const Py_ssize_t *shape, const Py_ssize_t *strides, + const Py_ssize_t *suboffsets, Py_ssize_t ndim, Py_ssize_t itemsize) +{ + PyObject *lst, *x; + Py_ssize_t i; + + assert(ndim >= 0); + assert(shape != NULL); + assert(strides != NULL); + + if (ndim == 0) { + memcpy(item, ptr, itemsize); + x = PyObject_CallFunctionObjArgs(unpack_from, mview, NULL); + if (x == NULL) + return NULL; + if (PyTuple_GET_SIZE(x) == 1) { + PyObject *tmp = PyTuple_GET_ITEM(x, 0); + Py_INCREF(tmp); + Py_DECREF(x); + return tmp; + } + return x; + } + + lst = PyList_New(shape[0]); + if (lst == NULL) + return NULL; + + for (i = 0; i < shape[0]; ptr+=strides[0], i++) { + char *nextptr = ADJUST_PTR(ptr, suboffsets); + + x = unpack_rec(unpack_from, nextptr, mview, item, + shape+1, strides+1, suboffsets ? suboffsets+1 : NULL, + ndim-1, itemsize); + if (x == NULL) { + Py_DECREF(lst); + return NULL; + } + + PyList_SET_ITEM(lst, i, x); + } + + return lst; +} + + +static PyObject * +ndarray_as_list(NDArrayObject *nd) +{ + PyObject *structobj = NULL, *unpack_from = NULL; + PyObject *lst = NULL, *mview = NULL; + Py_buffer *base = &nd->head->base; + Py_ssize_t *shape = base->shape; + Py_ssize_t *strides = base->strides; + Py_ssize_t simple_shape[1]; + Py_ssize_t simple_strides[1]; + char *item = NULL; + PyObject *format; + char *fmt = base->format; + + base = &nd->head->base; + + if (fmt == NULL) { + PyErr_SetString(PyExc_ValueError, + "ndarray: tolist() does not support format=NULL, use " + "tobytes()"); + return NULL; + } + if (shape == NULL) { + assert(ND_C_CONTIGUOUS(nd->head->flags)); + assert(base->strides == NULL); + assert(base->ndim <= 1); + shape = simple_shape; + shape[0] = base->len; + strides = simple_strides; + strides[0] = base->itemsize; + } + else if (strides == NULL) { + assert(ND_C_CONTIGUOUS(nd->head->flags)); + strides = strides_from_shape(nd->head, 0); + if (strides == NULL) + return NULL; + } + + format = PyUnicode_FromString(fmt); + if (format == NULL) + goto out; + + structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL); + Py_DECREF(format); + if (structobj == NULL) + goto out; + + unpack_from = PyObject_GetAttrString(structobj, "unpack_from"); + if (unpack_from == NULL) + goto out; + + item = PyMem_Malloc(base->itemsize); + if (item == NULL) { + PyErr_NoMemory(); + goto out; + } + + mview = PyMemoryView_FromMemory(item, base->itemsize, PyBUF_WRITE); + if (mview == NULL) + goto out; + + lst = unpack_rec(unpack_from, base->buf, mview, item, + shape, strides, base->suboffsets, + base->ndim, base->itemsize); + +out: + Py_XDECREF(mview); + PyMem_XFree(item); + Py_XDECREF(unpack_from); + Py_XDECREF(structobj); + if (strides != base->strides && strides != simple_strides) + PyMem_XFree(strides); + + return lst; +} + + +/****************************************************************************/ +/* Initialize ndbuf */ +/****************************************************************************/ + +/* + State of a new ndbuf during initialization. 'OK' means that initialization + is complete. 'PTR' means that a pointer has been initialized, but the + state of the memory is still undefined and ndbuf->offset is disregarded. 
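+   'user' means that the field still holds the flags supplied by the
+   caller; the contiguity flags are only added in init_structure().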
+ + +-----------------+-----------+-------------+----------------+ + | | ndbuf_new | init_simple | init_structure | + +-----------------+-----------+-------------+----------------+ + | next | OK (NULL) | OK | OK | + +-----------------+-----------+-------------+----------------+ + | prev | OK (NULL) | OK | OK | + +-----------------+-----------+-------------+----------------+ + | len | OK | OK | OK | + +-----------------+-----------+-------------+----------------+ + | offset | OK | OK | OK | + +-----------------+-----------+-------------+----------------+ + | data | PTR | OK | OK | + +-----------------+-----------+-------------+----------------+ + | flags | user | user | OK | + +-----------------+-----------+-------------+----------------+ + | exports | OK (0) | OK | OK | + +-----------------+-----------+-------------+----------------+ + | base.obj | OK (NULL) | OK | OK | + +-----------------+-----------+-------------+----------------+ + | base.buf | PTR | PTR | OK | + +-----------------+-----------+-------------+----------------+ + | base.len | len(data) | len(data) | OK | + +-----------------+-----------+-------------+----------------+ + | base.itemsize | 1 | OK | OK | + +-----------------+-----------+-------------+----------------+ + | base.readonly | 0 | OK | OK | + +-----------------+-----------+-------------+----------------+ + | base.format | NULL | OK | OK | + +-----------------+-----------+-------------+----------------+ + | base.ndim | 1 | 1 | OK | + +-----------------+-----------+-------------+----------------+ + | base.shape | NULL | NULL | OK | + +-----------------+-----------+-------------+----------------+ + | base.strides | NULL | NULL | OK | + +-----------------+-----------+-------------+----------------+ + | base.suboffsets | NULL | NULL | OK | + +-----------------+-----------+-------------+----------------+ + | base.internal | OK | OK | OK | + +-----------------+-----------+-------------+----------------+ + +*/ + +static Py_ssize_t +get_itemsize(PyObject *format) +{ + PyObject *tmp; + Py_ssize_t itemsize; + + tmp = PyObject_CallFunctionObjArgs(calcsize, format, NULL); + if (tmp == NULL) + return -1; + itemsize = PyLong_AsSsize_t(tmp); + Py_DECREF(tmp); + + return itemsize; +} + +static char * +get_format(PyObject *format) +{ + PyObject *tmp; + char *fmt; + + tmp = PyUnicode_AsASCIIString(format); + if (tmp == NULL) + return NULL; + fmt = PyMem_Malloc(PyBytes_GET_SIZE(tmp)+1); + if (fmt == NULL) { + PyErr_NoMemory(); + Py_DECREF(tmp); + return NULL; + } + strcpy(fmt, PyBytes_AS_STRING(tmp)); + Py_DECREF(tmp); + + return fmt; +} + +static int +init_simple(ndbuf_t *ndbuf, PyObject *items, PyObject *format, + Py_ssize_t itemsize) +{ + PyObject *mview; + Py_buffer *base = &ndbuf->base; + int ret; + + mview = PyMemoryView_FromBuffer(base); + if (mview == NULL) + return -1; + + ret = pack_from_list(mview, items, format, itemsize); + Py_DECREF(mview); + if (ret < 0) + return -1; + + base->readonly = !(ndbuf->flags & ND_WRITABLE); + base->itemsize = itemsize; + base->format = get_format(format); + if (base->format == NULL) + return -1; + + return 0; +} + +static Py_ssize_t * +seq_as_ssize_array(PyObject *seq, Py_ssize_t len, int is_shape) +{ + Py_ssize_t *dest; + Py_ssize_t x, i; + + dest = PyMem_Malloc(len * (sizeof *dest)); + if (dest == NULL) { + PyErr_NoMemory(); + return NULL; + } + + for (i = 0; i < len; i++) { + PyObject *tmp = PySequence_Fast_GET_ITEM(seq, i); + if (!PyLong_Check(tmp)) { + PyErr_Format(PyExc_ValueError, + "elements of %s must be integers", + is_shape ? 
"shape" : "strides"); + PyMem_Free(dest); + return NULL; + } + x = PyLong_AsSsize_t(tmp); + if (PyErr_Occurred()) { + PyMem_Free(dest); + return NULL; + } + if (is_shape && x < 0) { + PyErr_Format(PyExc_ValueError, + "elements of shape must be integers >= 0"); + PyMem_Free(dest); + return NULL; + } + dest[i] = x; + } + + return dest; +} + +static Py_ssize_t * +strides_from_shape(const ndbuf_t *ndbuf, int flags) +{ + const Py_buffer *base = &ndbuf->base; + Py_ssize_t *s, i; + + s = PyMem_Malloc(base->ndim * (sizeof *s)); + if (s == NULL) { + PyErr_NoMemory(); + return NULL; + } + + if (flags & ND_FORTRAN) { + s[0] = base->itemsize; + for (i = 1; i < base->ndim; i++) + s[i] = s[i-1] * base->shape[i-1]; + } + else { + s[base->ndim-1] = base->itemsize; + for (i = base->ndim-2; i >= 0; i--) + s[i] = s[i+1] * base->shape[i+1]; + } + + return s; +} + +/* Bounds check: + + len := complete length of allocated memory + offset := start of the array + + A single array element is indexed by: + + i = indices[0] * strides[0] + indices[1] * strides[1] + ... + + imin is reached when all indices[n] combined with positive strides are 0 + and all indices combined with negative strides are shape[n]-1, which is + the maximum index for the nth dimension. + + imax is reached when all indices[n] combined with negative strides are 0 + and all indices combined with positive strides are shape[n]-1. +*/ +static int +verify_structure(Py_ssize_t len, Py_ssize_t itemsize, Py_ssize_t offset, + const Py_ssize_t *shape, const Py_ssize_t *strides, + Py_ssize_t ndim) +{ + Py_ssize_t imin, imax; + Py_ssize_t n; + + assert(ndim >= 0); + + if (ndim == 0 && (offset < 0 || offset+itemsize > len)) + goto invalid_combination; + + for (n = 0; n < ndim; n++) + if (strides[n] % itemsize) { + PyErr_SetString(PyExc_ValueError, + "strides must be a multiple of itemsize"); + return -1; + } + + for (n = 0; n < ndim; n++) + if (shape[n] == 0) + return 0; + + imin = imax = 0; + for (n = 0; n < ndim; n++) + if (strides[n] <= 0) + imin += (shape[n]-1) * strides[n]; + else + imax += (shape[n]-1) * strides[n]; + + if (imin + offset < 0 || imax + offset + itemsize > len) + goto invalid_combination; + + return 0; + + +invalid_combination: + PyErr_SetString(PyExc_ValueError, + "invalid combination of buffer, shape and strides"); + return -1; +} + +/* + Convert a NumPy-style array to an array using suboffsets to stride in + the first dimension. Requirements: ndim > 0. + + Contiguous example + ================== + + Input: + ------ + shape = {2, 2, 3}; + strides = {6, 3, 1}; + suboffsets = NULL; + data = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}; + buf = &data[0] + + Output: + ------- + shape = {2, 2, 3}; + strides = {sizeof(char *), 3, 1}; + suboffsets = {0, -1, -1}; + data = {p1, p2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}; + | | ^ ^ + `---'---' | + | | + `---------------------' + buf = &data[0] + + So, in the example the input resembles the three-dimensional array + char v[2][2][3], while the output resembles an array of two pointers + to two-dimensional arrays: char (*v[2])[2][3]. 
+ + + Non-contiguous example: + ======================= + + Input (with offset and negative strides): + ----------------------------------------- + shape = {2, 2, 3}; + strides = {-6, 3, -1}; + offset = 8 + suboffsets = NULL; + data = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}; + + Output: + ------- + shape = {2, 2, 3}; + strides = {-sizeof(char *), 3, -1}; + suboffsets = {2, -1, -1}; + newdata = {p1, p2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}; + | | ^ ^ ^ ^ + `---'---' | | `- p2+suboffsets[0] + | `-----------|--- p1+suboffsets[0] + `---------------------' + buf = &newdata[1] # striding backwards over the pointers. + + suboffsets[0] is the same as the offset that one would specify if + the two {2, 3} subarrays were created directly, hence the name. +*/ +static int +init_suboffsets(ndbuf_t *ndbuf) +{ + Py_buffer *base = &ndbuf->base; + Py_ssize_t start, step; + Py_ssize_t imin, suboffset0; + Py_ssize_t addsize; + Py_ssize_t n; + char *data; + + assert(base->ndim > 0); + assert(base->suboffsets == NULL); + + /* Allocate new data with additional space for shape[0] pointers. */ + addsize = base->shape[0] * (sizeof (char *)); + + /* Align array start to a multiple of 8. */ + addsize = 8 * ((addsize + 7) / 8); + + data = PyMem_Malloc(ndbuf->len + addsize); + if (data == NULL) { + PyErr_NoMemory(); + return -1; + } + + memcpy(data + addsize, ndbuf->data, ndbuf->len); + + PyMem_Free(ndbuf->data); + ndbuf->data = data; + ndbuf->len += addsize; + base->buf = ndbuf->data; + + /* imin: minimum index of the input array relative to ndbuf->offset. + suboffset0: offset for each sub-array of the output. This is the + same as calculating -imin' for a sub-array of ndim-1. */ + imin = suboffset0 = 0; + for (n = 0; n < base->ndim; n++) { + if (base->shape[n] == 0) + break; + if (base->strides[n] <= 0) { + Py_ssize_t x = (base->shape[n]-1) * base->strides[n]; + imin += x; + suboffset0 += (n >= 1) ? -x : 0; + } + } + + /* Initialize the array of pointers to the sub-arrays. */ + start = addsize + ndbuf->offset + imin; + step = base->strides[0] < 0 ? -base->strides[0] : base->strides[0]; + + for (n = 0; n < base->shape[0]; n++) + ((char **)base->buf)[n] = (char *)base->buf + start + n*step; + + /* Initialize suboffsets. */ + base->suboffsets = PyMem_Malloc(base->ndim * (sizeof *base->suboffsets)); + if (base->suboffsets == NULL) { + PyErr_NoMemory(); + return -1; + } + base->suboffsets[0] = suboffset0; + for (n = 1; n < base->ndim; n++) + base->suboffsets[n] = -1; + + /* Adjust strides for the first (zeroth) dimension. */ + if (base->strides[0] >= 0) { + base->strides[0] = sizeof(char *); + } + else { + /* Striding backwards. 
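+           buf is moved to the last pointer slot so that the negative
+           stride walks backwards over the pointer array.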
*/ + base->strides[0] = -(Py_ssize_t)sizeof(char *); + if (base->shape[0] > 0) + base->buf = (char *)base->buf + (base->shape[0]-1) * sizeof(char *); + } + + ndbuf->flags &= ~(ND_C|ND_FORTRAN); + ndbuf->offset = 0; + return 0; +} + +static void +init_len(Py_buffer *base) +{ + Py_ssize_t i; + + base->len = 1; + for (i = 0; i < base->ndim; i++) + base->len *= base->shape[i]; + base->len *= base->itemsize; +} + +static int +init_structure(ndbuf_t *ndbuf, PyObject *shape, PyObject *strides, + Py_ssize_t ndim) +{ + Py_buffer *base = &ndbuf->base; + + base->ndim = (int)ndim; + if (ndim == 0) { + if (ndbuf->flags & ND_PIL) { + PyErr_SetString(PyExc_TypeError, + "ndim = 0 cannot be used in conjunction with ND_PIL"); + return -1; + } + ndbuf->flags |= (ND_SCALAR|ND_C|ND_FORTRAN); + return 0; + } + + /* shape */ + base->shape = seq_as_ssize_array(shape, ndim, 1); + if (base->shape == NULL) + return -1; + + /* strides */ + if (strides) { + base->strides = seq_as_ssize_array(strides, ndim, 0); + } + else { + base->strides = strides_from_shape(ndbuf, ndbuf->flags); + } + if (base->strides == NULL) + return -1; + if (verify_structure(base->len, base->itemsize, ndbuf->offset, + base->shape, base->strides, ndim) < 0) + return -1; + + /* buf */ + base->buf = ndbuf->data + ndbuf->offset; + + /* len */ + init_len(base); + + /* ndbuf->flags */ + if (PyBuffer_IsContiguous(base, 'C')) + ndbuf->flags |= ND_C; + if (PyBuffer_IsContiguous(base, 'F')) + ndbuf->flags |= ND_FORTRAN; + + + /* convert numpy array to suboffset representation */ + if (ndbuf->flags & ND_PIL) { + /* modifies base->buf, base->strides and base->suboffsets **/ + return init_suboffsets(ndbuf); + } + + return 0; +} + +static ndbuf_t * +init_ndbuf(PyObject *items, PyObject *shape, PyObject *strides, + Py_ssize_t offset, PyObject *format, int flags) +{ + ndbuf_t *ndbuf; + Py_ssize_t ndim; + Py_ssize_t nitems; + Py_ssize_t itemsize; + + /* ndim = len(shape) */ + CHECK_LIST_OR_TUPLE(shape) + ndim = PySequence_Fast_GET_SIZE(shape); + if (ndim > ND_MAX_NDIM) { + PyErr_Format(PyExc_ValueError, + "ndim must not exceed %d", ND_MAX_NDIM); + return NULL; + } + + /* len(strides) = len(shape) */ + if (strides) { + CHECK_LIST_OR_TUPLE(strides) + if (PySequence_Fast_GET_SIZE(strides) == 0) + strides = NULL; + else if (flags & ND_FORTRAN) { + PyErr_SetString(PyExc_TypeError, + "ND_FORTRAN cannot be used together with strides"); + return NULL; + } + else if (PySequence_Fast_GET_SIZE(strides) != ndim) { + PyErr_SetString(PyExc_ValueError, + "len(shape) != len(strides)"); + return NULL; + } + } + + /* itemsize */ + itemsize = get_itemsize(format); + if (itemsize <= 0) { + if (itemsize == 0) { + PyErr_SetString(PyExc_ValueError, + "itemsize must not be zero"); + } + return NULL; + } + + /* convert scalar to list */ + if (ndim == 0) { + items = Py_BuildValue("(O)", items); + if (items == NULL) + return NULL; + } + else { + CHECK_LIST_OR_TUPLE(items) + Py_INCREF(items); + } + + /* number of items */ + nitems = PySequence_Fast_GET_SIZE(items); + if (nitems == 0) { + PyErr_SetString(PyExc_ValueError, + "initializer list or tuple must not be empty"); + Py_DECREF(items); + return NULL; + } + + ndbuf = ndbuf_new(nitems, itemsize, offset, flags); + if (ndbuf == NULL) { + Py_DECREF(items); + return NULL; + } + + + if (init_simple(ndbuf, items, format, itemsize) < 0) + goto error; + if (init_structure(ndbuf, shape, strides, ndim) < 0) + goto error; + + Py_DECREF(items); + return ndbuf; + +error: + Py_DECREF(items); + ndbuf_free(ndbuf); + return NULL; +} + +/* initialize and 
push a new base onto the linked list */ +static int +ndarray_push_base(NDArrayObject *nd, PyObject *items, + PyObject *shape, PyObject *strides, + Py_ssize_t offset, PyObject *format, int flags) +{ + ndbuf_t *ndbuf; + + ndbuf = init_ndbuf(items, shape, strides, offset, format, flags); + if (ndbuf == NULL) + return -1; + + ndbuf_push(nd, ndbuf); + return 0; +} + +#define PyBUF_UNUSED 0x10000 +static int +ndarray_init(PyObject *self, PyObject *args, PyObject *kwds) +{ + NDArrayObject *nd = (NDArrayObject *)self; + static char *kwlist[] = { + "obj", "shape", "strides", "offset", "format", "flags", "getbuf", NULL + }; + PyObject *v = NULL; /* initializer: scalar, list, tuple or base object */ + PyObject *shape = NULL; /* size of each dimension */ + PyObject *strides = NULL; /* number of bytes to the next elt in each dim */ + Py_ssize_t offset = 0; /* buffer offset */ + PyObject *format = simple_format; /* struct module specifier: "B" */ + int flags = ND_UNUSED; /* base buffer and ndarray flags */ + + int getbuf = PyBUF_UNUSED; /* re-exporter: getbuffer request flags */ + + + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|OOnOii", kwlist, + &v, &shape, &strides, &offset, &format, &flags, &getbuf)) + return -1; + + /* NDArrayObject is re-exporter */ + if (PyObject_CheckBuffer(v) && shape == NULL) { + if (strides || offset || format != simple_format || + flags != ND_UNUSED) { + PyErr_SetString(PyExc_TypeError, + "construction from exporter object only takes a single " + "additional getbuf argument"); + return -1; + } + + getbuf = (getbuf == PyBUF_UNUSED) ? PyBUF_FULL_RO : getbuf; + + if (ndarray_init_staticbuf(v, nd, getbuf) < 0) + return -1; + + init_flags(nd->head); + + return 0; + } + + /* NDArrayObject is the original base object. */ + if (getbuf != PyBUF_UNUSED) { + PyErr_SetString(PyExc_TypeError, + "getbuf argument only valid for construction from exporter " + "object"); + return -1; + } + if (shape == NULL) { + PyErr_SetString(PyExc_TypeError, + "shape is a required argument when constructing from " + "list, tuple or scalar"); + return -1; + } + + if (flags == ND_UNUSED) + flags = ND_DEFAULT; + if (flags & ND_VAREXPORT) { + nd->flags |= ND_VAREXPORT; + flags &= ~ND_VAREXPORT; + } + + /* Initialize and push the first base buffer onto the linked list. */ + return ndarray_push_base(nd, v, shape, strides, offset, format, flags); +} + +/* Push an additional base onto the linked list. 
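+   This is only permitted for the original exporter and, unless the
+   ndarray was created with ND_VAREXPORT, only while the active base has
+   no exported buffers.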
*/ +static PyObject * +ndarray_push(PyObject *self, PyObject *args, PyObject *kwds) +{ + NDArrayObject *nd = (NDArrayObject *)self; + static char *kwlist[] = { + "items", "shape", "strides", "offset", "format", "flags", NULL + }; + PyObject *items = NULL; /* initializer: scalar, list or tuple */ + PyObject *shape = NULL; /* size of each dimension */ + PyObject *strides = NULL; /* number of bytes to the next elt in each dim */ + PyObject *format = simple_format; /* struct module specifier: "B" */ + Py_ssize_t offset = 0; /* buffer offset */ + int flags = ND_UNUSED; /* base buffer flags */ + + if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|OnOi", kwlist, + &items, &shape, &strides, &offset, &format, &flags)) + return NULL; + + if (flags & ND_VAREXPORT) { + PyErr_SetString(PyExc_ValueError, + "ND_VAREXPORT flag can only be used during object creation"); + return NULL; + } + if (ND_IS_CONSUMER(nd)) { + PyErr_SetString(PyExc_BufferError, + "structure of re-exporting object is immutable"); + return NULL; + } + if (!(nd->flags&ND_VAREXPORT) && nd->head->exports > 0) { + PyErr_Format(PyExc_BufferError, + "cannot change structure: %zd exported buffer%s", + nd->head->exports, nd->head->exports==1 ? "" : "s"); + return NULL; + } + + if (ndarray_push_base(nd, items, shape, strides, + offset, format, flags) < 0) + return NULL; + Py_RETURN_NONE; +} + +/* Pop a base from the linked list (if possible). */ +static PyObject * +ndarray_pop(PyObject *self, PyObject *dummy) +{ + NDArrayObject *nd = (NDArrayObject *)self; + if (ND_IS_CONSUMER(nd)) { + PyErr_SetString(PyExc_BufferError, + "structure of re-exporting object is immutable"); + return NULL; + } + if (nd->head->exports > 0) { + PyErr_Format(PyExc_BufferError, + "cannot change structure: %zd exported buffer%s", + nd->head->exports, nd->head->exports==1 ? "" : "s"); + return NULL; + } + if (nd->head->next == NULL) { + PyErr_SetString(PyExc_BufferError, + "list only has a single base"); + return NULL; + } + + ndbuf_pop(nd); + Py_RETURN_NONE; +} + +/**************************************************************************/ +/* getbuffer */ +/**************************************************************************/ + +static int +ndarray_getbuf(NDArrayObject *self, Py_buffer *view, int flags) +{ + ndbuf_t *ndbuf = self->head; + Py_buffer *base = &ndbuf->base; + int baseflags = ndbuf->flags; + + /* start with complete information */ + *view = *base; + view->obj = NULL; + + /* reconstruct format */ + if (view->format == NULL) + view->format = "B"; + + if (base->ndim != 0 && + ((REQ_SHAPE(flags) && base->shape == NULL) || + (REQ_STRIDES(flags) && base->strides == NULL))) { + /* The ndarray is a re-exporter that has been created without full + information for testing purposes. In this particular case the + ndarray is not a PEP-3118 compliant buffer provider. */ + PyErr_SetString(PyExc_BufferError, + "re-exporter does not provide format, shape or strides"); + return -1; + } + + if (baseflags & ND_GETBUF_FAIL) { + PyErr_SetString(PyExc_BufferError, + "ND_GETBUF_FAIL: forced test exception"); + return -1; + } + + if (REQ_WRITABLE(flags) && base->readonly) { + PyErr_SetString(PyExc_BufferError, + "ndarray is not writable"); + return -1; + } + if (!REQ_FORMAT(flags)) { + /* NULL indicates that the buffer's data type has been cast to 'B'. + view->itemsize is the _previous_ itemsize. If shape is present, + the equality product(shape) * itemsize = len still holds at this + point. The equality calcsize(format) = itemsize does _not_ hold + from here on! 
*/ + view->format = NULL; + } + + if (REQ_C_CONTIGUOUS(flags) && !ND_C_CONTIGUOUS(baseflags)) { + PyErr_SetString(PyExc_BufferError, + "ndarray is not C-contiguous"); + return -1; + } + if (REQ_F_CONTIGUOUS(flags) && !ND_FORTRAN_CONTIGUOUS(baseflags)) { + PyErr_SetString(PyExc_BufferError, + "ndarray is not Fortran contiguous"); + return -1; + } + if (REQ_ANY_CONTIGUOUS(flags) && !ND_ANY_CONTIGUOUS(baseflags)) { + PyErr_SetString(PyExc_BufferError, + "ndarray is not contiguous"); + return -1; + } + if (!REQ_INDIRECT(flags) && (baseflags & ND_PIL)) { + PyErr_SetString(PyExc_BufferError, + "ndarray cannot be represented without suboffsets"); + return -1; + } + if (!REQ_STRIDES(flags)) { + if (!ND_C_CONTIGUOUS(baseflags)) { + PyErr_SetString(PyExc_BufferError, + "ndarray is not C-contiguous"); + return -1; + } + view->strides = NULL; + } + if (!REQ_SHAPE(flags)) { + /* PyBUF_SIMPLE or PyBUF_WRITABLE: at this point buf is C-contiguous, + so base->buf = ndbuf->data. */ + if (view->format != NULL) { + /* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do + not make sense. */ + PyErr_Format(PyExc_BufferError, + "ndarray: cannot cast to unsigned bytes if the format flag " + "is present"); + return -1; + } + /* product(shape) * itemsize = len and calcsize(format) = itemsize + do _not_ hold from here on! */ + view->ndim = 1; + view->shape = NULL; + } + + view->obj = (PyObject *)self; + Py_INCREF(view->obj); + self->head->exports++; + + return 0; +} + +static int +ndarray_releasebuf(NDArrayObject *self, Py_buffer *view) +{ + if (!ND_IS_CONSUMER(self)) { + ndbuf_t *ndbuf = view->internal; + if (--ndbuf->exports == 0 && ndbuf != self->head) + ndbuf_delete(self, ndbuf); + } + + return 0; +} + +static PyBufferProcs ndarray_as_buffer = { + (getbufferproc)ndarray_getbuf, /* bf_getbuffer */ + (releasebufferproc)ndarray_releasebuf /* bf_releasebuffer */ +}; + + +/**************************************************************************/ +/* indexing/slicing */ +/**************************************************************************/ + +static char * +ptr_from_index(Py_buffer *base, Py_ssize_t index) +{ + char *ptr; + Py_ssize_t nitems; /* items in the first dimension */ + + if (base->shape) + nitems = base->shape[0]; + else { + assert(base->ndim == 1 && SIMPLE_FORMAT(base->format)); + nitems = base->len; + } + + if (index < 0) { + index += nitems; + } + if (index < 0 || index >= nitems) { + PyErr_SetString(PyExc_IndexError, "index out of bounds"); + return NULL; + } + + ptr = (char *)base->buf; + + if (base->strides == NULL) + ptr += base->itemsize * index; + else + ptr += base->strides[0] * index; + + ptr = ADJUST_PTR(ptr, base->suboffsets); + + return ptr; +} + +static PyObject * +ndarray_item(NDArrayObject *self, Py_ssize_t index) +{ + ndbuf_t *ndbuf = self->head; + Py_buffer *base = &ndbuf->base; + char *ptr; + + if (base->ndim == 0) { + PyErr_SetString(PyExc_TypeError, "invalid indexing of scalar"); + return NULL; + } + + ptr = ptr_from_index(base, index); + if (ptr == NULL) + return NULL; + + if (base->ndim == 1) { + return unpack_single(ptr, base->format, base->itemsize); + } + else { + NDArrayObject *nd; + Py_buffer *subview; + + nd = (NDArrayObject *)ndarray_new(&NDArray_Type, NULL, NULL); + if (nd == NULL) + return NULL; + + if (ndarray_init_staticbuf((PyObject *)self, nd, PyBUF_FULL_RO) < 0) { + Py_DECREF(nd); + return NULL; + } + + subview = &nd->staticbuf.base; + + subview->buf = ptr; + subview->len /= subview->shape[0]; + + subview->ndim--; + subview->shape++; + if 
(subview->strides) subview->strides++; + if (subview->suboffsets) subview->suboffsets++; + + init_flags(&nd->staticbuf); + + return (PyObject *)nd; + } +} + +/* + For each dimension, we get valid (start, stop, step, slicelength) quadruples + from PySlice_GetIndicesEx(). + + Slicing NumPy arrays + ==================== + + A pointer to an element in a NumPy array is defined by: + + ptr = (char *)buf + indices[0] * strides[0] + + ... + + indices[ndim-1] * strides[ndim-1] + + Adjust buf: + ----------- + Adding start[n] for each dimension effectively adds the constant: + + c = start[0] * strides[0] + ... + start[ndim-1] * strides[ndim-1] + + Therefore init_slice() adds all start[n] directly to buf. + + Adjust shape: + ------------- + Obviously shape[n] = slicelength[n] + + Adjust strides: + --------------- + In the original array, the next element in a dimension is reached + by adding strides[n] to the pointer. In the sliced array, elements + may be skipped, so the next element is reached by adding: + + strides[n] * step[n] + + Slicing PIL arrays + ================== + + Layout: + ------- + In the first (zeroth) dimension, PIL arrays have an array of pointers + to sub-arrays of ndim-1. Striding in the first dimension is done by + getting the index of the nth pointer, dereference it and then add a + suboffset to it. The arrays pointed to can best be seen a regular + NumPy arrays. + + Adjust buf: + ----------- + In the original array, buf points to a location (usually the start) + in the array of pointers. For the sliced array, start[0] can be + added to buf in the same manner as for NumPy arrays. + + Adjust suboffsets: + ------------------ + Due to the dereferencing step in the addressing scheme, it is not + possible to adjust buf for higher dimensions. Recall that the + sub-arrays pointed to are regular NumPy arrays, so for each of + those arrays adding start[n] effectively adds the constant: + + c = start[1] * strides[1] + ... + start[ndim-1] * strides[ndim-1] + + This constant is added to suboffsets[0]. suboffsets[0] in turn is + added to each pointer right after dereferencing. + + Adjust shape and strides: + ------------------------- + Shape and strides are not influenced by the dereferencing step, so + they are adjusted in the same manner as for NumPy arrays. + + Multiple levels of suboffsets + ============================= + + For a construct like an array of pointers to array of pointers to + sub-arrays of ndim-2: + + suboffsets[0] = start[1] * strides[1] + suboffsets[1] = start[2] * strides[2] + ... 
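+   Note that this module itself only creates a non-negative suboffset in
+   the first dimension (see init_suboffsets() above); the deeper levels
+   are described for completeness.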
+*/ +static int +init_slice(Py_buffer *base, PyObject *key, int dim) +{ + Py_ssize_t start, stop, step, slicelength; + + if (PySlice_GetIndicesEx(key, base->shape[dim], + &start, &stop, &step, &slicelength) < 0) { + return -1; + } + + + if (base->suboffsets == NULL || dim == 0) { + adjust_buf: + base->buf = (char *)base->buf + base->strides[dim] * start; + } + else { + Py_ssize_t n = dim-1; + while (n >= 0 && base->suboffsets[n] < 0) + n--; + if (n < 0) + goto adjust_buf; /* all suboffsets are negative */ + base->suboffsets[n] = base->suboffsets[n] + base->strides[dim] * start; + } + base->shape[dim] = slicelength; + base->strides[dim] = base->strides[dim] * step; + + return 0; +} + +static int +copy_structure(Py_buffer *base) +{ + Py_ssize_t *shape = NULL, *strides = NULL, *suboffsets = NULL; + Py_ssize_t i; + + shape = PyMem_Malloc(base->ndim * (sizeof *shape)); + strides = PyMem_Malloc(base->ndim * (sizeof *strides)); + if (shape == NULL || strides == NULL) + goto err_nomem; + + suboffsets = NULL; + if (base->suboffsets) { + suboffsets = PyMem_Malloc(base->ndim * (sizeof *suboffsets)); + if (suboffsets == NULL) + goto err_nomem; + } + + for (i = 0; i < base->ndim; i++) { + shape[i] = base->shape[i]; + strides[i] = base->strides[i]; + if (suboffsets) + suboffsets[i] = base->suboffsets[i]; + } + + base->shape = shape; + base->strides = strides; + base->suboffsets = suboffsets; + + return 0; + +err_nomem: + PyErr_NoMemory(); + PyMem_XFree(shape); + PyMem_XFree(strides); + PyMem_XFree(suboffsets); + return -1; +} + +static PyObject * +ndarray_subscript(NDArrayObject *self, PyObject *key) +{ + NDArrayObject *nd; + ndbuf_t *ndbuf; + Py_buffer *base = &self->head->base; + + if (base->ndim == 0) { + if (PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0) { + return unpack_single(base->buf, base->format, base->itemsize); + } + else if (key == Py_Ellipsis) { + Py_INCREF(self); + return (PyObject *)self; + } + else { + PyErr_SetString(PyExc_TypeError, "invalid indexing of scalar"); + return NULL; + } + } + if (PyIndex_Check(key)) { + Py_ssize_t index = PyLong_AsSsize_t(key); + if (index == -1 && PyErr_Occurred()) + return NULL; + return ndarray_item(self, index); + } + + nd = (NDArrayObject *)ndarray_new(&NDArray_Type, NULL, NULL); + if (nd == NULL) + return NULL; + + /* new ndarray is a consumer */ + if (ndarray_init_staticbuf((PyObject *)self, nd, PyBUF_FULL_RO) < 0) { + Py_DECREF(nd); + return NULL; + } + + /* copy shape, strides and suboffsets */ + ndbuf = nd->head; + base = &ndbuf->base; + if (copy_structure(base) < 0) { + Py_DECREF(nd); + return NULL; + } + ndbuf->flags |= ND_OWN_ARRAYS; + + if (PySlice_Check(key)) { + /* one-dimensional slice */ + if (init_slice(base, key, 0) < 0) + goto err_occurred; + } + else if PyTuple_Check(key) { + /* multi-dimensional slice */ + PyObject *tuple = key; + Py_ssize_t i, n; + + n = PyTuple_GET_SIZE(tuple); + for (i = 0; i < n; i++) { + key = PyTuple_GET_ITEM(tuple, i); + if (!PySlice_Check(key)) + goto type_error; + if (init_slice(base, key, (int)i) < 0) + goto err_occurred; + } + } + else { + goto type_error; + } + + init_len(base); + init_flags(ndbuf); + + return (PyObject *)nd; + + +type_error: + PyErr_Format(PyExc_TypeError, + "cannot index memory using \"%.200s\"", + key->ob_type->tp_name); +err_occurred: + Py_DECREF(nd); + return NULL; +} + + +static int +ndarray_ass_subscript(NDArrayObject *self, PyObject *key, PyObject *value) +{ + NDArrayObject *nd; + Py_buffer *dest = &self->head->base; + Py_buffer src; + char *ptr; + Py_ssize_t index; + int ret 
= -1; + + if (dest->readonly) { + PyErr_SetString(PyExc_TypeError, "ndarray is not writable"); + return -1; + } + if (value == NULL) { + PyErr_SetString(PyExc_TypeError, "ndarray data cannot be deleted"); + return -1; + } + if (dest->ndim == 0) { + if (key == Py_Ellipsis || + (PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0)) { + ptr = (char *)dest->buf; + return pack_single(ptr, value, dest->format, dest->itemsize); + } + else { + PyErr_SetString(PyExc_TypeError, "invalid indexing of scalar"); + return -1; + } + } + if (dest->ndim == 1 && PyIndex_Check(key)) { + /* rvalue must be a single item */ + index = PyLong_AsSsize_t(key); + if (index == -1 && PyErr_Occurred()) + return -1; + else { + ptr = ptr_from_index(dest, index); + if (ptr == NULL) + return -1; + } + return pack_single(ptr, value, dest->format, dest->itemsize); + } + + /* rvalue must be an exporter */ + if (PyObject_GetBuffer(value, &src, PyBUF_FULL_RO) == -1) + return -1; + + nd = (NDArrayObject *)ndarray_subscript(self, key); + if (nd != NULL) { + dest = &nd->head->base; + ret = copy_buffer(dest, &src); + Py_DECREF(nd); + } + + PyBuffer_Release(&src); + return ret; +} + +static PyObject * +slice_indices(PyObject *self, PyObject *args) +{ + PyObject *ret, *key, *tmp; + Py_ssize_t s[4]; /* start, stop, step, slicelength */ + Py_ssize_t i, len; + + if (!PyArg_ParseTuple(args, "On", &key, &len)) { + return NULL; + } + if (!PySlice_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "first argument must be a slice object"); + return NULL; + } + if (PySlice_GetIndicesEx(key, len, &s[0], &s[1], &s[2], &s[3]) < 0) { + return NULL; + } + + ret = PyTuple_New(4); + if (ret == NULL) + return NULL; + + for (i = 0; i < 4; i++) { + tmp = PyLong_FromSsize_t(s[i]); + if (tmp == NULL) + goto error; + PyTuple_SET_ITEM(ret, i, tmp); + } + + return ret; + +error: + Py_DECREF(ret); + return NULL; +} + + +static PyMappingMethods ndarray_as_mapping = { + NULL, /* mp_length */ + (binaryfunc)ndarray_subscript, /* mp_subscript */ + (objobjargproc)ndarray_ass_subscript /* mp_ass_subscript */ +}; + +static PySequenceMethods ndarray_as_sequence = { + 0, /* sq_length */ + 0, /* sq_concat */ + 0, /* sq_repeat */ + (ssizeargfunc)ndarray_item, /* sq_item */ +}; + + +/**************************************************************************/ +/* getters */ +/**************************************************************************/ + +static PyObject * +ssize_array_as_tuple(Py_ssize_t *array, Py_ssize_t len) +{ + PyObject *tuple, *x; + Py_ssize_t i; + + if (array == NULL) + return PyTuple_New(0); + + tuple = PyTuple_New(len); + if (tuple == NULL) + return NULL; + + for (i = 0; i < len; i++) { + x = PyLong_FromSsize_t(array[i]); + if (x == NULL) { + Py_DECREF(tuple); + return NULL; + } + PyTuple_SET_ITEM(tuple, i, x); + } + + return tuple; +} + +static PyObject * +ndarray_get_flags(NDArrayObject *self, void *closure) +{ + return PyLong_FromLong(self->head->flags); +} + +static PyObject * +ndarray_get_offset(NDArrayObject *self, void *closure) +{ + ndbuf_t *ndbuf = self->head; + return PyLong_FromSsize_t(ndbuf->offset); +} + +static PyObject * +ndarray_get_obj(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + + if (base->obj == NULL) { + Py_RETURN_NONE; + } + Py_INCREF(base->obj); + return base->obj; +} + +static PyObject * +ndarray_get_nbytes(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + return PyLong_FromSsize_t(base->len); +} + +static PyObject * +ndarray_get_readonly(NDArrayObject *self, void 
*closure) +{ + Py_buffer *base = &self->head->base; + return PyLong_FromLong(base->readonly); +} + +static PyObject * +ndarray_get_itemsize(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + return PyLong_FromSsize_t(base->itemsize); +} + +static PyObject * +ndarray_get_format(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + char *fmt = base->format ? base->format : ""; + return PyUnicode_FromString(fmt); +} + +static PyObject * +ndarray_get_ndim(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + return PyLong_FromSsize_t(base->ndim); +} + +static PyObject * +ndarray_get_shape(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + return ssize_array_as_tuple(base->shape, base->ndim); +} + +static PyObject * +ndarray_get_strides(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + return ssize_array_as_tuple(base->strides, base->ndim); +} + +static PyObject * +ndarray_get_suboffsets(NDArrayObject *self, void *closure) +{ + Py_buffer *base = &self->head->base; + return ssize_array_as_tuple(base->suboffsets, base->ndim); +} + +static PyObject * +ndarray_c_contig(PyObject *self, PyObject *dummy) +{ + NDArrayObject *nd = (NDArrayObject *)self; + int ret = PyBuffer_IsContiguous(&nd->head->base, 'C'); + + if (ret != ND_C_CONTIGUOUS(nd->head->flags)) { + PyErr_SetString(PyExc_RuntimeError, + "results from PyBuffer_IsContiguous() and flags differ"); + return NULL; + } + return PyBool_FromLong(ret); +} + +static PyObject * +ndarray_fortran_contig(PyObject *self, PyObject *dummy) +{ + NDArrayObject *nd = (NDArrayObject *)self; + int ret = PyBuffer_IsContiguous(&nd->head->base, 'F'); + + if (ret != ND_FORTRAN_CONTIGUOUS(nd->head->flags)) { + PyErr_SetString(PyExc_RuntimeError, + "results from PyBuffer_IsContiguous() and flags differ"); + return NULL; + } + return PyBool_FromLong(ret); +} + +static PyObject * +ndarray_contig(PyObject *self, PyObject *dummy) +{ + NDArrayObject *nd = (NDArrayObject *)self; + int ret = PyBuffer_IsContiguous(&nd->head->base, 'A'); + + if (ret != ND_ANY_CONTIGUOUS(nd->head->flags)) { + PyErr_SetString(PyExc_RuntimeError, + "results from PyBuffer_IsContiguous() and flags differ"); + return NULL; + } + return PyBool_FromLong(ret); +} + + +static PyGetSetDef ndarray_getset [] = +{ + /* ndbuf */ + { "flags", (getter)ndarray_get_flags, NULL, NULL, NULL}, + { "offset", (getter)ndarray_get_offset, NULL, NULL, NULL}, + /* ndbuf.base */ + { "obj", (getter)ndarray_get_obj, NULL, NULL, NULL}, + { "nbytes", (getter)ndarray_get_nbytes, NULL, NULL, NULL}, + { "readonly", (getter)ndarray_get_readonly, NULL, NULL, NULL}, + { "itemsize", (getter)ndarray_get_itemsize, NULL, NULL, NULL}, + { "format", (getter)ndarray_get_format, NULL, NULL, NULL}, + { "ndim", (getter)ndarray_get_ndim, NULL, NULL, NULL}, + { "shape", (getter)ndarray_get_shape, NULL, NULL, NULL}, + { "strides", (getter)ndarray_get_strides, NULL, NULL, NULL}, + { "suboffsets", (getter)ndarray_get_suboffsets, NULL, NULL, NULL}, + { "c_contiguous", (getter)ndarray_c_contig, NULL, NULL, NULL}, + { "f_contiguous", (getter)ndarray_fortran_contig, NULL, NULL, NULL}, + { "contiguous", (getter)ndarray_contig, NULL, NULL, NULL}, + {NULL} +}; + +static PyObject * +ndarray_tolist(PyObject *self, PyObject *dummy) +{ + return ndarray_as_list((NDArrayObject *)self); +} + +static PyObject * +ndarray_tobytes(PyObject *self, PyObject *dummy) +{ + ndbuf_t *ndbuf = ((NDArrayObject *)self)->head; + Py_buffer 
*src = &ndbuf->base; + Py_buffer dest; + PyObject *ret = NULL; + char *mem; + + if (ND_C_CONTIGUOUS(ndbuf->flags)) + return PyBytes_FromStringAndSize(src->buf, src->len); + + assert(src->shape != NULL); + assert(src->strides != NULL); + assert(src->ndim > 0); + + mem = PyMem_Malloc(src->len); + if (mem == NULL) { + PyErr_NoMemory(); + return NULL; + } + + dest = *src; + dest.buf = mem; + dest.suboffsets = NULL; + dest.strides = strides_from_shape(ndbuf, 0); + if (dest.strides == NULL) + goto out; + if (copy_buffer(&dest, src) < 0) + goto out; + + ret = PyBytes_FromStringAndSize(mem, src->len); + +out: + PyMem_XFree(dest.strides); + PyMem_Free(mem); + return ret; +} + +/* add redundant (negative) suboffsets for testing */ +static PyObject * +ndarray_add_suboffsets(PyObject *self, PyObject *dummy) +{ + NDArrayObject *nd = (NDArrayObject *)self; + Py_buffer *base = &nd->head->base; + Py_ssize_t i; + + if (base->suboffsets != NULL) { + PyErr_SetString(PyExc_TypeError, + "cannot add suboffsets to PIL-style array"); + return NULL; + } + if (base->strides == NULL) { + PyErr_SetString(PyExc_TypeError, + "cannot add suboffsets to array without strides"); + return NULL; + } + + base->suboffsets = PyMem_Malloc(base->ndim * (sizeof *base->suboffsets)); + if (base->suboffsets == NULL) { + PyErr_NoMemory(); + return NULL; + } + + for (i = 0; i < base->ndim; i++) + base->suboffsets[i] = -1; + + Py_RETURN_NONE; +} + +/* Test PyMemoryView_FromBuffer(): return a memoryview from a static buffer. + Obviously this is fragile and only one such view may be active at any + time. Never use anything like this in real code! */ +static char *infobuf = NULL; +static PyObject * +ndarray_memoryview_from_buffer(PyObject *self, PyObject *dummy) +{ + const NDArrayObject *nd = (NDArrayObject *)self; + const Py_buffer *view = &nd->head->base; + const ndbuf_t *ndbuf; + static char format[ND_MAX_NDIM+1]; + static Py_ssize_t shape[ND_MAX_NDIM]; + static Py_ssize_t strides[ND_MAX_NDIM]; + static Py_ssize_t suboffsets[ND_MAX_NDIM]; + static Py_buffer info; + char *p; + + if (!ND_IS_CONSUMER(nd)) + ndbuf = nd->head; /* self is ndarray/original exporter */ + else if (NDArray_Check(view->obj) && !ND_IS_CONSUMER(view->obj)) + /* self is ndarray and consumer from ndarray/original exporter */ + ndbuf = ((NDArrayObject *)view->obj)->head; + else { + PyErr_SetString(PyExc_TypeError, + "memoryview_from_buffer(): ndarray must be original exporter or " + "consumer from ndarray/original exporter"); + return NULL; + } + + info = *view; + p = PyMem_Realloc(infobuf, ndbuf->len); + if (p == NULL) { + PyMem_Free(infobuf); + PyErr_NoMemory(); + infobuf = NULL; + return NULL; + } + else { + infobuf = p; + } + /* copy the complete raw data */ + memcpy(infobuf, ndbuf->data, ndbuf->len); + info.buf = infobuf + ((char *)view->buf - ndbuf->data); + + if (view->format) { + if (strlen(view->format) > ND_MAX_NDIM) { + PyErr_Format(PyExc_TypeError, + "memoryview_from_buffer: format is limited to %d characters", + ND_MAX_NDIM); + return NULL; + } + strcpy(format, view->format); + info.format = format; + } + if (view->ndim > ND_MAX_NDIM) { + PyErr_Format(PyExc_TypeError, + "memoryview_from_buffer: ndim is limited to %d", ND_MAX_NDIM); + return NULL; + } + if (view->shape) { + memcpy(shape, view->shape, view->ndim * sizeof(Py_ssize_t)); + info.shape = shape; + } + if (view->strides) { + memcpy(strides, view->strides, view->ndim * sizeof(Py_ssize_t)); + info.strides = strides; + } + if (view->suboffsets) { + memcpy(suboffsets, view->suboffsets, view->ndim * 
sizeof(Py_ssize_t)); + info.suboffsets = suboffsets; + } + + return PyMemoryView_FromBuffer(&info); +} + +/* Get a single item from bufobj at the location specified by seq. + seq is a list or tuple of indices. The purpose of this function + is to check other functions against PyBuffer_GetPointer(). */ +static PyObject * +get_pointer(PyObject *self, PyObject *args) +{ + PyObject *ret = NULL, *bufobj, *seq; + Py_buffer view; + Py_ssize_t indices[ND_MAX_NDIM]; + Py_ssize_t i; + void *ptr; + + if (!PyArg_ParseTuple(args, "OO", &bufobj, &seq)) { + return NULL; + } + + CHECK_LIST_OR_TUPLE(seq); + if (PyObject_GetBuffer(bufobj, &view, PyBUF_FULL_RO) < 0) + return NULL; + + if (view.ndim > ND_MAX_NDIM) { + PyErr_Format(PyExc_ValueError, + "get_pointer(): ndim > %d", ND_MAX_NDIM); + goto out; + } + if (PySequence_Fast_GET_SIZE(seq) != view.ndim) { + PyErr_SetString(PyExc_ValueError, + "get_pointer(): len(indices) != ndim"); + goto out; + } + + for (i = 0; i < view.ndim; i++) { + PyObject *x = PySequence_Fast_GET_ITEM(seq, i); + indices[i] = PyLong_AsSsize_t(x); + if (PyErr_Occurred()) + goto out; + if (indices[i] < 0 || indices[i] >= view.shape[i]) { + PyErr_Format(PyExc_ValueError, + "get_pointer(): invalid index %zd at position %zd", + indices[i], i); + goto out; + } + } + + ptr = PyBuffer_GetPointer(&view, indices); + ret = unpack_single(ptr, view.format, view.itemsize); + +out: + PyBuffer_Release(&view); + return ret; +} + +static char +get_ascii_order(PyObject *order) +{ + PyObject *ascii_order; + char ord; + + if (!PyUnicode_Check(order)) { + PyErr_SetString(PyExc_TypeError, + "order must be a string"); + return CHAR_MAX; + } + + ascii_order = PyUnicode_AsASCIIString(order); + if (ascii_order == NULL) { + return CHAR_MAX; + } + + ord = PyBytes_AS_STRING(ascii_order)[0]; + Py_DECREF(ascii_order); + return ord; +} + +/* Get a contiguous memoryview. */ +static PyObject * +get_contiguous(PyObject *self, PyObject *args) +{ + PyObject *obj; + PyObject *buffertype; + PyObject *order; + long type; + char ord; + + if (!PyArg_ParseTuple(args, "OOO", &obj, &buffertype, &order)) { + return NULL; + } + + if (!PyLong_Check(buffertype)) { + PyErr_SetString(PyExc_TypeError, + "buffertype must be PyBUF_READ or PyBUF_WRITE"); + return NULL; + } + type = PyLong_AsLong(buffertype); + if (type == -1 && PyErr_Occurred()) { + return NULL; + } + + ord = get_ascii_order(order); + if (ord == CHAR_MAX) { + return NULL; + } + + return PyMemoryView_GetContiguous(obj, (int)type, ord); +} + +static int +fmtcmp(const char *fmt1, const char *fmt2) +{ + if (fmt1 == NULL) { + return fmt2 == NULL || strcmp(fmt2, "B") == 0; + } + if (fmt2 == NULL) { + return fmt1 == NULL || strcmp(fmt1, "B") == 0; + } + return strcmp(fmt1, fmt2) == 0; +} + +static int +arraycmp(const Py_ssize_t *a1, const Py_ssize_t *a2, const Py_ssize_t *shape, + Py_ssize_t ndim) +{ + Py_ssize_t i; + + if (ndim == 1 && shape && shape[0] == 1) { + /* This is for comparing strides: For example, the array + [175], shape=[1], strides=[-5] is considered contiguous. */ + return 1; + } + + for (i = 0; i < ndim; i++) { + if (a1[i] != a2[i]) { + return 0; + } + } + + return 1; +} + +/* Compare two contiguous buffers for physical equality. 
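+   Two buffers compare equal if both are C-contiguous or both are Fortran
+   contiguous, have matching len, itemsize, ndim and format, matching
+   shape, strides and suboffsets arrays (where present), and identical
+   raw bytes.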
*/ +static PyObject * +cmp_contig(PyObject *self, PyObject *args) +{ + PyObject *b1, *b2; /* buffer objects */ + Py_buffer v1, v2; + PyObject *ret; + int equal = 0; + + if (!PyArg_ParseTuple(args, "OO", &b1, &b2)) { + return NULL; + } + + if (PyObject_GetBuffer(b1, &v1, PyBUF_FULL_RO) < 0) { + PyErr_SetString(PyExc_TypeError, + "cmp_contig: first argument does not implement the buffer " + "protocol"); + return NULL; + } + if (PyObject_GetBuffer(b2, &v2, PyBUF_FULL_RO) < 0) { + PyErr_SetString(PyExc_TypeError, + "cmp_contig: second argument does not implement the buffer " + "protocol"); + PyBuffer_Release(&v1); + return NULL; + } + + if (!(PyBuffer_IsContiguous(&v1, 'C')&&PyBuffer_IsContiguous(&v2, 'C')) && + !(PyBuffer_IsContiguous(&v1, 'F')&&PyBuffer_IsContiguous(&v2, 'F'))) { + goto result; + } + + /* readonly may differ if created from non-contiguous */ + if (v1.len != v2.len || + v1.itemsize != v2.itemsize || + v1.ndim != v2.ndim || + !fmtcmp(v1.format, v2.format) || + !!v1.shape != !!v2.shape || + !!v1.strides != !!v2.strides || + !!v1.suboffsets != !!v2.suboffsets) { + goto result; + } + + if ((v1.shape && !arraycmp(v1.shape, v2.shape, NULL, v1.ndim)) || + (v1.strides && !arraycmp(v1.strides, v2.strides, v1.shape, v1.ndim)) || + (v1.suboffsets && !arraycmp(v1.suboffsets, v2.suboffsets, NULL, + v1.ndim))) { + goto result; + } + + if (memcmp((char *)v1.buf, (char *)v2.buf, v1.len) != 0) { + goto result; + } + + equal = 1; + +result: + PyBuffer_Release(&v1); + PyBuffer_Release(&v2); + + ret = equal ? Py_True : Py_False; + Py_INCREF(ret); + return ret; +} + +static PyObject * +is_contiguous(PyObject *self, PyObject *args) +{ + PyObject *obj; + PyObject *order; + PyObject *ret = NULL; + Py_buffer view; + char ord; + + if (!PyArg_ParseTuple(args, "OO", &obj, &order)) { + return NULL; + } + + if (PyObject_GetBuffer(obj, &view, PyBUF_FULL_RO) < 0) { + PyErr_SetString(PyExc_TypeError, + "is_contiguous: object does not implement the buffer " + "protocol"); + return NULL; + } + + ord = get_ascii_order(order); + if (ord == CHAR_MAX) { + goto release; + } + + ret = PyBuffer_IsContiguous(&view, ord) ? 
Py_True : Py_False; + Py_INCREF(ret); + +release: + PyBuffer_Release(&view); + return ret; +} + +static Py_hash_t +ndarray_hash(PyObject *self) +{ + const NDArrayObject *nd = (NDArrayObject *)self; + const Py_buffer *view = &nd->head->base; + PyObject *bytes; + Py_hash_t hash; + + if (!view->readonly) { + PyErr_SetString(PyExc_ValueError, + "cannot hash writable ndarray object"); + return -1; + } + if (view->obj != NULL && PyObject_Hash(view->obj) == -1) { + return -1; + } + + bytes = ndarray_tobytes(self, NULL); + if (bytes == NULL) { + return -1; + } + + hash = PyObject_Hash(bytes); + Py_DECREF(bytes); + return hash; +} + + +static PyMethodDef ndarray_methods [] = +{ + { "tolist", ndarray_tolist, METH_NOARGS, NULL }, + { "tobytes", ndarray_tobytes, METH_NOARGS, NULL }, + { "push", (PyCFunction)ndarray_push, METH_VARARGS|METH_KEYWORDS, NULL }, + { "pop", ndarray_pop, METH_NOARGS, NULL }, + { "add_suboffsets", ndarray_add_suboffsets, METH_NOARGS, NULL }, + { "memoryview_from_buffer", ndarray_memoryview_from_buffer, METH_NOARGS, NULL }, + {NULL} +}; + +static PyTypeObject NDArray_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "ndarray", /* Name of this type */ + sizeof(NDArrayObject), /* Basic object size */ + 0, /* Item size for varobject */ + (destructor)ndarray_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + &ndarray_as_sequence, /* tp_as_sequence */ + &ndarray_as_mapping, /* tp_as_mapping */ + (hashfunc)ndarray_hash, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &ndarray_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + ndarray_methods, /* tp_methods */ + 0, /* tp_members */ + ndarray_getset, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + ndarray_init, /* tp_init */ + 0, /* tp_alloc */ + ndarray_new, /* tp_new */ +}; + + +static struct PyMethodDef _testbuffer_functions[] = { + {"slice_indices", slice_indices, METH_VARARGS, NULL}, + {"get_pointer", get_pointer, METH_VARARGS, NULL}, + {"get_contiguous", get_contiguous, METH_VARARGS, NULL}, + {"is_contiguous", is_contiguous, METH_VARARGS, NULL}, + {"cmp_contig", cmp_contig, METH_VARARGS, NULL}, + {NULL, NULL} +}; + +static struct PyModuleDef _testbuffermodule = { + PyModuleDef_HEAD_INIT, + "_testbuffer", + NULL, + -1, + _testbuffer_functions, + NULL, + NULL, + NULL, + NULL +}; + + +PyMODINIT_FUNC +PyInit__testbuffer(void) +{ + PyObject *m; + + m = PyModule_Create(&_testbuffermodule); + if (m == NULL) + return NULL; + + Py_TYPE(&NDArray_Type)=&PyType_Type; + Py_INCREF(&NDArray_Type); + PyModule_AddObject(m, "ndarray", (PyObject *)&NDArray_Type); + + structmodule = PyImport_ImportModule("struct"); + if (structmodule == NULL) + return NULL; + + Struct = PyObject_GetAttrString(structmodule, "Struct"); + calcsize = PyObject_GetAttrString(structmodule, "calcsize"); + if (Struct == NULL || calcsize == NULL) + return NULL; + + simple_format = PyUnicode_FromString(simple_fmt); + if (simple_format == NULL) + return NULL; + + PyModule_AddIntConstant(m, "ND_MAX_NDIM", ND_MAX_NDIM); + PyModule_AddIntConstant(m, "ND_VAREXPORT", ND_VAREXPORT); + PyModule_AddIntConstant(m, "ND_WRITABLE", ND_WRITABLE); + 
PyModule_AddIntConstant(m, "ND_FORTRAN", ND_FORTRAN); + PyModule_AddIntConstant(m, "ND_SCALAR", ND_SCALAR); + PyModule_AddIntConstant(m, "ND_PIL", ND_PIL); + PyModule_AddIntConstant(m, "ND_GETBUF_FAIL", ND_GETBUF_FAIL); + + PyModule_AddIntConstant(m, "PyBUF_SIMPLE", PyBUF_SIMPLE); + PyModule_AddIntConstant(m, "PyBUF_WRITABLE", PyBUF_WRITABLE); + PyModule_AddIntConstant(m, "PyBUF_FORMAT", PyBUF_FORMAT); + PyModule_AddIntConstant(m, "PyBUF_ND", PyBUF_ND); + PyModule_AddIntConstant(m, "PyBUF_STRIDES", PyBUF_STRIDES); + PyModule_AddIntConstant(m, "PyBUF_INDIRECT", PyBUF_INDIRECT); + PyModule_AddIntConstant(m, "PyBUF_C_CONTIGUOUS", PyBUF_C_CONTIGUOUS); + PyModule_AddIntConstant(m, "PyBUF_F_CONTIGUOUS", PyBUF_F_CONTIGUOUS); + PyModule_AddIntConstant(m, "PyBUF_ANY_CONTIGUOUS", PyBUF_ANY_CONTIGUOUS); + PyModule_AddIntConstant(m, "PyBUF_FULL", PyBUF_FULL); + PyModule_AddIntConstant(m, "PyBUF_FULL_RO", PyBUF_FULL_RO); + PyModule_AddIntConstant(m, "PyBUF_RECORDS", PyBUF_RECORDS); + PyModule_AddIntConstant(m, "PyBUF_RECORDS_RO", PyBUF_RECORDS_RO); + PyModule_AddIntConstant(m, "PyBUF_STRIDED", PyBUF_STRIDED); + PyModule_AddIntConstant(m, "PyBUF_STRIDED_RO", PyBUF_STRIDED_RO); + PyModule_AddIntConstant(m, "PyBUF_CONTIG", PyBUF_CONTIG); + PyModule_AddIntConstant(m, "PyBUF_CONTIG_RO", PyBUF_CONTIG_RO); + + PyModule_AddIntConstant(m, "PyBUF_READ", PyBUF_READ); + PyModule_AddIntConstant(m, "PyBUF_WRITE", PyBUF_WRITE); + + return m; +} + + + diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index bcb3a0f..23a4d5ac 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -275,95 +275,6 @@ test_lazy_hash_inheritance(PyObject* self) } -/* Issue #7385: Check that memoryview() does not crash - * when bf_getbuffer returns an error - */ - -static int -broken_buffer_getbuffer(PyObject *self, Py_buffer *view, int flags) -{ - PyErr_SetString( - TestError, - "test_broken_memoryview: expected error in bf_getbuffer"); - return -1; -} - -static PyBufferProcs memoryviewtester_as_buffer = { - (getbufferproc)broken_buffer_getbuffer, /* bf_getbuffer */ - 0, /* bf_releasebuffer */ -}; - -static PyTypeObject _MemoryViewTester_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "memoryviewtester", /* Name of this type */ - sizeof(PyObject), /* Basic object size */ - 0, /* Item size for varobject */ - (destructor)PyObject_Del, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &memoryviewtester_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - PyType_GenericNew, /* tp_new */ -}; - -static PyObject* -test_broken_memoryview(PyObject* self) -{ - PyObject *obj = PyObject_New(PyObject, &_MemoryViewTester_Type); - PyObject *res; - - if (obj == NULL) { - PyErr_Clear(); - PyErr_SetString( - TestError, - "test_broken_memoryview: failed to create object"); - return NULL; - } - - res = PyMemoryView_FromObject(obj); - if (res || !PyErr_Occurred()){ - 
PyErr_SetString( - TestError, - "test_broken_memoryview: memoryview() didn't raise an Exception"); - Py_XDECREF(res); - Py_DECREF(obj); - return NULL; - } - - PyErr_Clear(); - Py_DECREF(obj); - Py_RETURN_NONE; -} - - /* Tests of PyLong_{As, From}{Unsigned,}Long(), and (#ifdef HAVE_LONG_LONG) PyLong_{As, From}{Unsigned,}LongLong(). @@ -2421,7 +2332,6 @@ static PyMethodDef TestMethods[] = { {"test_list_api", (PyCFunction)test_list_api, METH_NOARGS}, {"test_dict_iteration", (PyCFunction)test_dict_iteration,METH_NOARGS}, {"test_lazy_hash_inheritance", (PyCFunction)test_lazy_hash_inheritance,METH_NOARGS}, - {"test_broken_memoryview", (PyCFunction)test_broken_memoryview,METH_NOARGS}, {"test_long_api", (PyCFunction)test_long_api, METH_NOARGS}, {"test_long_and_overflow", (PyCFunction)test_long_and_overflow, METH_NOARGS}, @@ -2684,7 +2594,6 @@ PyInit__testcapi(void) return NULL; Py_TYPE(&_HashInheritanceTester_Type)=&PyType_Type; - Py_TYPE(&_MemoryViewTester_Type)=&PyType_Type; Py_TYPE(&test_structmembersType)=&PyType_Type; Py_INCREF(&test_structmembersType); diff --git a/Objects/abstract.c b/Objects/abstract.c index 47010d6..62fccdc 100644 --- a/Objects/abstract.c +++ b/Objects/abstract.c @@ -340,7 +340,7 @@ PyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags) } static int -_IsFortranContiguous(Py_buffer *view) +_IsFortranContiguous(const Py_buffer *view) { Py_ssize_t sd, dim; int i; @@ -361,7 +361,7 @@ _IsFortranContiguous(Py_buffer *view) } static int -_IsCContiguous(Py_buffer *view) +_IsCContiguous(const Py_buffer *view) { Py_ssize_t sd, dim; int i; @@ -382,16 +382,16 @@ _IsCContiguous(Py_buffer *view) } int -PyBuffer_IsContiguous(Py_buffer *view, char fort) +PyBuffer_IsContiguous(const Py_buffer *view, char order) { if (view->suboffsets != NULL) return 0; - if (fort == 'C') + if (order == 'C') return _IsCContiguous(view); - else if (fort == 'F') + else if (order == 'F') return _IsFortranContiguous(view); - else if (fort == 'A') + else if (order == 'A') return (_IsCContiguous(view) || _IsFortranContiguous(view)); return 0; } @@ -651,7 +651,7 @@ int PyBuffer_FillInfo(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len, int readonly, int flags) { - if (view == NULL) return 0; + if (view == NULL) return 0; /* XXX why not -1? */ if (((flags & PyBUF_WRITABLE) == PyBUF_WRITABLE) && (readonly == 1)) { PyErr_SetString(PyExc_BufferError, diff --git a/Objects/memoryobject.c b/Objects/memoryobject.c index 295a742..e87abf5 100644 --- a/Objects/memoryobject.c +++ b/Objects/memoryobject.c @@ -1,127 +1,918 @@ - /* Memoryview object implementation */ #include "Python.h" +#include <stddef.h> + + +/****************************************************************************/ +/* ManagedBuffer Object */ +/****************************************************************************/ + +/* + ManagedBuffer Object: + --------------------- + + The purpose of this object is to facilitate the handling of chained + memoryviews that have the same underlying exporting object. PEP-3118 + allows the underlying object to change while a view is exported. This + could lead to unexpected results when constructing a new memoryview + from an existing memoryview. + + Rather than repeatedly redirecting buffer requests to the original base + object, all chained memoryviews use a single buffer snapshot. This + snapshot is generated by the constructor _PyManagedBuffer_FromObject(). + + Ownership rules: + ---------------- + + The master buffer inside a managed buffer is filled in by the original + base object. 
shape, strides, suboffsets and format are read-only for + all consumers. + + A memoryview's buffer is a private copy of the exporter's buffer. shape, + strides and suboffsets belong to the memoryview and are thus writable. + + If a memoryview itself exports several buffers via memory_getbuf(), all + buffer copies share shape, strides and suboffsets. In this case, the + arrays are NOT writable. + + Reference count assumptions: + ---------------------------- + + The 'obj' member of a Py_buffer must either be NULL or refer to the + exporting base object. In the Python codebase, all getbufferprocs + return a new reference to view.obj (example: bytes_buffer_getbuffer()). + + PyBuffer_Release() decrements view.obj (if non-NULL), so the + releasebufferprocs must NOT decrement view.obj. +*/ + -#define IS_RELEASED(memobj) \ - (((PyMemoryViewObject *) memobj)->view.buf == NULL) +#define XSTRINGIZE(v) #v +#define STRINGIZE(v) XSTRINGIZE(v) -#define CHECK_RELEASED(memobj) \ - if (IS_RELEASED(memobj)) { \ - PyErr_SetString(PyExc_ValueError, \ - "operation forbidden on released memoryview object"); \ - return NULL; \ +#define CHECK_MBUF_RELEASED(mbuf) \ + if (((_PyManagedBufferObject *)mbuf)->flags&_Py_MANAGED_BUFFER_RELEASED) { \ + PyErr_SetString(PyExc_ValueError, \ + "operation forbidden on released memoryview object"); \ + return NULL; \ } -#define CHECK_RELEASED_INT(memobj) \ - if (IS_RELEASED(memobj)) { \ - PyErr_SetString(PyExc_ValueError, \ - "operation forbidden on released memoryview object"); \ - return -1; \ + +Py_LOCAL_INLINE(_PyManagedBufferObject *) +mbuf_alloc(void) +{ + _PyManagedBufferObject *mbuf; + + mbuf = (_PyManagedBufferObject *) + PyObject_GC_New(_PyManagedBufferObject, &_PyManagedBuffer_Type); + if (mbuf == NULL) + return NULL; + mbuf->flags = 0; + mbuf->exports = 0; + mbuf->master.obj = NULL; + _PyObject_GC_TRACK(mbuf); + + return mbuf; +} + +static PyObject * +_PyManagedBuffer_FromObject(PyObject *base) +{ + _PyManagedBufferObject *mbuf; + + mbuf = mbuf_alloc(); + if (mbuf == NULL) + return NULL; + + if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) { + /* mbuf->master.obj must be NULL. */ + Py_DECREF(mbuf); + return NULL; } -static Py_ssize_t -get_shape0(Py_buffer *buf) -{ - if (buf->shape != NULL) - return buf->shape[0]; - if (buf->ndim == 0) - return 1; - PyErr_SetString(PyExc_TypeError, - "exported buffer does not have any shape information associated " - "to it"); - return -1; + /* Assume that master.obj is a new reference to base. */ + assert(mbuf->master.obj == base); + + return (PyObject *)mbuf; } static void -dup_buffer(Py_buffer *dest, Py_buffer *src) +mbuf_release(_PyManagedBufferObject *self) { - *dest = *src; - if (src->ndim == 1 && src->shape != NULL) { - dest->shape = &(dest->smalltable[0]); - dest->shape[0] = get_shape0(src); - } - if (src->ndim == 1 && src->strides != NULL) { - dest->strides = &(dest->smalltable[1]); - dest->strides[0] = src->strides[0]; - } + if (self->flags&_Py_MANAGED_BUFFER_RELEASED) + return; + + /* NOTE: at this point self->exports can still be > 0 if this function + is called from mbuf_clear() to break up a reference cycle. */ + self->flags |= _Py_MANAGED_BUFFER_RELEASED; + + /* PyBuffer_Release() decrements master->obj and sets it to NULL. 
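Editorial illustration, not part of this patch: the consumer-side pattern that the reference count rules above describe. PyObject_GetBuffer() stores a new reference to the exporter in view.obj, and PyBuffer_Release() is the only call that drops it; the helper name and the PyBUF_SIMPLE request below are assumptions made for the sketch.

    static int
    read_first_byte(PyObject *exporter, unsigned char *out)
    {
        Py_buffer view;

        if (PyObject_GetBuffer(exporter, &view, PyBUF_SIMPLE) < 0)
            return -1;                    // view.obj has not been set
        if (view.len < 1) {
            PyErr_SetString(PyExc_ValueError, "buffer is empty");
            PyBuffer_Release(&view);      // decrements view.obj, sets it to NULL
            return -1;
        }
        *out = ((unsigned char *)view.buf)[0];
        PyBuffer_Release(&view);
        return 0;
    }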
*/ + _PyObject_GC_UNTRACK(self); + PyBuffer_Release(&self->master); +} + +static void +mbuf_dealloc(_PyManagedBufferObject *self) +{ + assert(self->exports == 0); + mbuf_release(self); + if (self->flags&_Py_MANAGED_BUFFER_FREE_FORMAT) + PyMem_Free(self->master.format); + PyObject_GC_Del(self); } static int -memory_getbuf(PyMemoryViewObject *self, Py_buffer *view, int flags) +mbuf_traverse(_PyManagedBufferObject *self, visitproc visit, void *arg) { - int res = 0; - CHECK_RELEASED_INT(self); - if (self->view.obj != NULL) - res = PyObject_GetBuffer(self->view.obj, view, flags); - if (view) - dup_buffer(view, &self->view); - return res; + Py_VISIT(self->master.obj); + return 0; } -static void -memory_releasebuf(PyMemoryViewObject *self, Py_buffer *view) +static int +mbuf_clear(_PyManagedBufferObject *self) { - PyBuffer_Release(view); + assert(self->exports >= 0); + mbuf_release(self); + return 0; } +PyTypeObject _PyManagedBuffer_Type = { + PyVarObject_HEAD_INIT(&PyType_Type, 0) + "managedbuffer", + sizeof(_PyManagedBufferObject), + 0, + (destructor)mbuf_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */ + 0, /* tp_doc */ + (traverseproc)mbuf_traverse, /* tp_traverse */ + (inquiry)mbuf_clear /* tp_clear */ +}; + + +/****************************************************************************/ +/* MemoryView Object */ +/****************************************************************************/ + +/* In the process of breaking reference cycles mbuf_release() can be + called before memory_release(). */ +#define BASE_INACCESSIBLE(mv) \ + (((PyMemoryViewObject *)mv)->flags&_Py_MEMORYVIEW_RELEASED || \ + ((PyMemoryViewObject *)mv)->mbuf->flags&_Py_MANAGED_BUFFER_RELEASED) + +#define CHECK_RELEASED(mv) \ + if (BASE_INACCESSIBLE(mv)) { \ + PyErr_SetString(PyExc_ValueError, \ + "operation forbidden on released memoryview object"); \ + return NULL; \ + } + +#define CHECK_RELEASED_INT(mv) \ + if (BASE_INACCESSIBLE(mv)) { \ + PyErr_SetString(PyExc_ValueError, \ + "operation forbidden on released memoryview object"); \ + return -1; \ + } + +#define CHECK_LIST_OR_TUPLE(v) \ + if (!PyList_Check(v) && !PyTuple_Check(v)) { \ + PyErr_SetString(PyExc_TypeError, \ + #v " must be a list or a tuple"); \ + return NULL; \ + } + +#define VIEW_ADDR(mv) (&((PyMemoryViewObject *)mv)->view) + +/* Check for the presence of suboffsets in the first dimension. */ +#define HAVE_PTR(suboffsets) (suboffsets && suboffsets[0] >= 0) +/* Adjust ptr if suboffsets are present. */ +#define ADJUST_PTR(ptr, suboffsets) \ + (HAVE_PTR(suboffsets) ? *((char**)ptr) + suboffsets[0] : ptr) + +/* Memoryview buffer properties */ +#define MV_C_CONTIGUOUS(flags) (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C)) +#define MV_F_CONTIGUOUS(flags) \ + (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_FORTRAN)) +#define MV_ANY_CONTIGUOUS(flags) \ + (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN)) + +/* Fast contiguity test. Caller must ensure suboffsets==NULL and ndim==1. 
*/ +#define MV_CONTIGUOUS_NDIM1(view) \ + ((view)->shape[0] == 1 || (view)->strides[0] == (view)->itemsize) + +/* getbuffer() requests */ +#define REQ_INDIRECT(flags) ((flags&PyBUF_INDIRECT) == PyBUF_INDIRECT) +#define REQ_C_CONTIGUOUS(flags) ((flags&PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) +#define REQ_F_CONTIGUOUS(flags) ((flags&PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) +#define REQ_ANY_CONTIGUOUS(flags) ((flags&PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS) +#define REQ_STRIDES(flags) ((flags&PyBUF_STRIDES) == PyBUF_STRIDES) +#define REQ_SHAPE(flags) ((flags&PyBUF_ND) == PyBUF_ND) +#define REQ_WRITABLE(flags) (flags&PyBUF_WRITABLE) +#define REQ_FORMAT(flags) (flags&PyBUF_FORMAT) + + PyDoc_STRVAR(memory_doc, "memoryview(object)\n\ \n\ Create a new memoryview object which references the given object."); + +/**************************************************************************/ +/* Copy memoryview buffers */ +/**************************************************************************/ + +/* The functions in this section take a source and a destination buffer + with the same logical structure: format, itemsize, ndim and shape + are identical, with ndim > 0. + + NOTE: All buffers are assumed to have PyBUF_FULL information, which + is the case for memoryviews! */ + + +/* Assumptions: ndim >= 1. The macro tests for a corner case that should + perhaps be explicitly forbidden in the PEP. */ +#define HAVE_SUBOFFSETS_IN_LAST_DIM(view) \ + (view->suboffsets && view->suboffsets[dest->ndim-1] >= 0) + +Py_LOCAL_INLINE(int) +last_dim_is_contiguous(Py_buffer *dest, Py_buffer *src) +{ + assert(dest->ndim > 0 && src->ndim > 0); + return (!HAVE_SUBOFFSETS_IN_LAST_DIM(dest) && + !HAVE_SUBOFFSETS_IN_LAST_DIM(src) && + dest->strides[dest->ndim-1] == dest->itemsize && + src->strides[src->ndim-1] == src->itemsize); +} + +/* Check that the logical structure of the destination and source buffers + is identical. */ +static int +cmp_structure(Py_buffer *dest, Py_buffer *src) +{ + const char *dfmt, *sfmt; + int i; + + assert(dest->format && src->format); + dfmt = dest->format[0] == '@' ? dest->format+1 : dest->format; + sfmt = src->format[0] == '@' ? src->format+1 : src->format; + + if (strcmp(dfmt, sfmt) != 0 || + dest->itemsize != src->itemsize || + dest->ndim != src->ndim) { + goto value_error; + } + + for (i = 0; i < dest->ndim; i++) { + if (dest->shape[i] != src->shape[i]) + goto value_error; + if (dest->shape[i] == 0) + break; + } + + return 0; + +value_error: + PyErr_SetString(PyExc_ValueError, + "ndarray assignment: lvalue and rvalue have different structures"); + return -1; +} + +/* Base case for recursive multi-dimensional copying. Contiguous arrays are + copied with very little overhead. Assumptions: ndim == 1, mem == NULL or + sizeof(mem) == shape[0] * itemsize. 
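Editorial note (not patch text) on how the helpers above and below fit together: ADJUST_PTR() implements the PIL-style indirection from PEP 3118. When suboffsets[0] >= 0, the location reached via the stride holds a pointer, so for one element the macro effectively computes

    char *item = *(char **)ptr + suboffsets[0];   // hypothetical expansion

When the last dimension of either buffer is not contiguous, the callers copy_single() and copy_buffer() allocate a scratch buffer mem of shape[ndim-1] * itemsize bytes; copy_base() then gathers all source items into mem before scattering them to the destination, so the copy stays correct even when the source and destination ranges overlap.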
*/ +static void +copy_base(const Py_ssize_t *shape, Py_ssize_t itemsize, + char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets, + char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets, + char *mem) +{ + if (mem == NULL) { /* contiguous */ + Py_ssize_t size = shape[0] * itemsize; + if (dptr + size < sptr || sptr + size < dptr) + memcpy(dptr, sptr, size); /* no overlapping */ + else + memmove(dptr, sptr, size); + } + else { + char *p; + Py_ssize_t i; + for (i=0, p=mem; i < shape[0]; p+=itemsize, sptr+=sstrides[0], i++) { + char *xsptr = ADJUST_PTR(sptr, ssuboffsets); + memcpy(p, xsptr, itemsize); + } + for (i=0, p=mem; i < shape[0]; p+=itemsize, dptr+=dstrides[0], i++) { + char *xdptr = ADJUST_PTR(dptr, dsuboffsets); + memcpy(xdptr, p, itemsize); + } + } + +} + +/* Recursively copy a source buffer to a destination buffer. The two buffers + have the same ndim, shape and itemsize. */ +static void +copy_rec(const Py_ssize_t *shape, Py_ssize_t ndim, Py_ssize_t itemsize, + char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets, + char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets, + char *mem) +{ + Py_ssize_t i; + + assert(ndim >= 1); + + if (ndim == 1) { + copy_base(shape, itemsize, + dptr, dstrides, dsuboffsets, + sptr, sstrides, ssuboffsets, + mem); + return; + } + + for (i = 0; i < shape[0]; dptr+=dstrides[0], sptr+=sstrides[0], i++) { + char *xdptr = ADJUST_PTR(dptr, dsuboffsets); + char *xsptr = ADJUST_PTR(sptr, ssuboffsets); + + copy_rec(shape+1, ndim-1, itemsize, + xdptr, dstrides+1, dsuboffsets ? dsuboffsets+1 : NULL, + xsptr, sstrides+1, ssuboffsets ? ssuboffsets+1 : NULL, + mem); + } +} + +/* Faster copying of one-dimensional arrays. */ +static int +copy_single(Py_buffer *dest, Py_buffer *src) +{ + char *mem = NULL; + + assert(dest->ndim == 1); + + if (cmp_structure(dest, src) < 0) + return -1; + + if (!last_dim_is_contiguous(dest, src)) { + mem = PyMem_Malloc(dest->shape[0] * dest->itemsize); + if (mem == NULL) { + PyErr_NoMemory(); + return -1; + } + } + + copy_base(dest->shape, dest->itemsize, + dest->buf, dest->strides, dest->suboffsets, + src->buf, src->strides, src->suboffsets, + mem); + + if (mem) + PyMem_Free(mem); + + return 0; +} + +/* Recursively copy src to dest. Both buffers must have the same basic + structure. Copying is atomic, the function never fails with a partial + copy. */ +static int +copy_buffer(Py_buffer *dest, Py_buffer *src) +{ + char *mem = NULL; + + assert(dest->ndim > 0); + + if (cmp_structure(dest, src) < 0) + return -1; + + if (!last_dim_is_contiguous(dest, src)) { + mem = PyMem_Malloc(dest->shape[dest->ndim-1] * dest->itemsize); + if (mem == NULL) { + PyErr_NoMemory(); + return -1; + } + } + + copy_rec(dest->shape, dest->ndim, dest->itemsize, + dest->buf, dest->strides, dest->suboffsets, + src->buf, src->strides, src->suboffsets, + mem); + + if (mem) + PyMem_Free(mem); + + return 0; +} + +/* Initialize strides for a C-contiguous array. */ +Py_LOCAL_INLINE(void) +init_strides_from_shape(Py_buffer *view) +{ + Py_ssize_t i; + + assert(view->ndim > 0); + + view->strides[view->ndim-1] = view->itemsize; + for (i = view->ndim-2; i >= 0; i--) + view->strides[i] = view->strides[i+1] * view->shape[i+1]; +} + +/* Initialize strides for a Fortran-contiguous array. 
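A worked example with assumed values (editorial, not from the patch): for a two-dimensional array with shape == {3, 4} and itemsize == 8, init_strides_from_shape() above yields the row-major strides {32, 8}, while the Fortran variant below yields the column-major strides {8, 24}; in both layouts product(shape) * itemsize == 96, which is the byte length the buffer must have.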
*/ +Py_LOCAL_INLINE(void) +init_fortran_strides_from_shape(Py_buffer *view) +{ + Py_ssize_t i; + + assert(view->ndim > 0); + + view->strides[0] = view->itemsize; + for (i = 1; i < view->ndim; i++) + view->strides[i] = view->strides[i-1] * view->shape[i-1]; +} + +/* Copy src to a C-contiguous representation. Assumptions: + len(mem) == src->len. */ +static int +buffer_to_c_contiguous(char *mem, Py_buffer *src) +{ + Py_buffer dest; + Py_ssize_t *strides; + int ret; + + assert(src->shape != NULL); + assert(src->strides != NULL); + + strides = PyMem_Malloc(src->ndim * (sizeof *src->strides)); + if (strides == NULL) { + PyErr_NoMemory(); + return -1; + } + + /* initialize dest as a C-contiguous buffer */ + dest = *src; + dest.buf = mem; + /* shape is constant and shared */ + dest.strides = strides; + init_strides_from_shape(&dest); + dest.suboffsets = NULL; + + ret = copy_buffer(&dest, src); + + PyMem_Free(strides); + return ret; +} + + +/****************************************************************************/ +/* Constructors */ +/****************************************************************************/ + +/* Initialize values that are shared with the managed buffer. */ +Py_LOCAL_INLINE(void) +init_shared_values(Py_buffer *dest, const Py_buffer *src) +{ + dest->obj = src->obj; + dest->buf = src->buf; + dest->len = src->len; + dest->itemsize = src->itemsize; + dest->readonly = src->readonly; + dest->format = src->format ? src->format : "B"; + dest->internal = src->internal; +} + +/* Copy shape and strides. Reconstruct missing values. */ +static void +init_shape_strides(Py_buffer *dest, const Py_buffer *src) +{ + Py_ssize_t i; + + if (src->ndim == 0) { + dest->shape = NULL; + dest->strides = NULL; + return; + } + if (src->ndim == 1) { + dest->shape[0] = src->shape ? src->shape[0] : src->len / src->itemsize; + dest->strides[0] = src->strides ? src->strides[0] : src->itemsize; + return; + } + + for (i = 0; i < src->ndim; i++) + dest->shape[i] = src->shape[i]; + if (src->strides) { + for (i = 0; i < src->ndim; i++) + dest->strides[i] = src->strides[i]; + } + else { + init_strides_from_shape(dest); + } +} + +Py_LOCAL_INLINE(void) +init_suboffsets(Py_buffer *dest, const Py_buffer *src) +{ + Py_ssize_t i; + + if (src->suboffsets == NULL) { + dest->suboffsets = NULL; + return; + } + for (i = 0; i < src->ndim; i++) + dest->suboffsets[i] = src->suboffsets[i]; +} + +/* len = product(shape) * itemsize */ +Py_LOCAL_INLINE(void) +init_len(Py_buffer *view) +{ + Py_ssize_t i, len; + + len = 1; + for (i = 0; i < view->ndim; i++) + len *= view->shape[i]; + len *= view->itemsize; + + view->len = len; +} + +/* Initialize memoryview buffer properties. */ +static void +init_flags(PyMemoryViewObject *mv) +{ + const Py_buffer *view = &mv->view; + int flags = 0; + + switch (view->ndim) { + case 0: + flags |= (_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C| + _Py_MEMORYVIEW_FORTRAN); + break; + case 1: + if (MV_CONTIGUOUS_NDIM1(view)) + flags |= (_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN); + break; + default: + if (PyBuffer_IsContiguous(view, 'C')) + flags |= _Py_MEMORYVIEW_C; + if (PyBuffer_IsContiguous(view, 'F')) + flags |= _Py_MEMORYVIEW_FORTRAN; + break; + } + + if (view->suboffsets) { + flags |= _Py_MEMORYVIEW_PIL; + flags &= ~(_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN); + } + + mv->flags = flags; +} + +/* Allocate a new memoryview and perform basic initialization. New memoryviews + are exclusively created through the mbuf_add functions. 
*/ +Py_LOCAL_INLINE(PyMemoryViewObject *) +memory_alloc(int ndim) +{ + PyMemoryViewObject *mv; + + mv = (PyMemoryViewObject *) + PyObject_GC_NewVar(PyMemoryViewObject, &PyMemoryView_Type, 3*ndim); + if (mv == NULL) + return NULL; + + mv->mbuf = NULL; + mv->hash = -1; + mv->flags = 0; + mv->exports = 0; + mv->view.ndim = ndim; + mv->view.shape = mv->ob_array; + mv->view.strides = mv->ob_array + ndim; + mv->view.suboffsets = mv->ob_array + 2 * ndim; + + _PyObject_GC_TRACK(mv); + return mv; +} + +/* + Return a new memoryview that is registered with mbuf. If src is NULL, + use mbuf->master as the underlying buffer. Otherwise, use src. + + The new memoryview has full buffer information: shape and strides + are always present, suboffsets as needed. Arrays are copied to + the memoryview's ob_array field. + */ +static PyObject * +mbuf_add_view(_PyManagedBufferObject *mbuf, const Py_buffer *src) +{ + PyMemoryViewObject *mv; + Py_buffer *dest; + + if (src == NULL) + src = &mbuf->master; + + if (src->ndim > PyBUF_MAX_NDIM) { + PyErr_SetString(PyExc_ValueError, + "memoryview: number of dimensions must not exceed " + STRINGIZE(PyBUF_MAX_NDIM)); + return NULL; + } + + mv = memory_alloc(src->ndim); + if (mv == NULL) + return NULL; + + dest = &mv->view; + init_shared_values(dest, src); + init_shape_strides(dest, src); + init_suboffsets(dest, src); + init_flags(mv); + + mv->mbuf = mbuf; + Py_INCREF(mbuf); + mbuf->exports++; + + return (PyObject *)mv; +} + +/* Register an incomplete view: shape, strides, suboffsets and flags still + need to be initialized. Use 'ndim' instead of src->ndim to determine the + size of the memoryview's ob_array. + + Assumption: ndim <= PyBUF_MAX_NDIM. */ +static PyObject * +mbuf_add_incomplete_view(_PyManagedBufferObject *mbuf, const Py_buffer *src, + int ndim) +{ + PyMemoryViewObject *mv; + Py_buffer *dest; + + if (src == NULL) + src = &mbuf->master; + + assert(ndim <= PyBUF_MAX_NDIM); + + mv = memory_alloc(ndim); + if (mv == NULL) + return NULL; + + dest = &mv->view; + init_shared_values(dest, src); + + mv->mbuf = mbuf; + Py_INCREF(mbuf); + mbuf->exports++; + + return (PyObject *)mv; +} + +/* Expose a raw memory area as a view of contiguous bytes. flags can be + PyBUF_READ or PyBUF_WRITE. view->format is set to "B" (unsigned bytes). + The memoryview has complete buffer information. */ +PyObject * +PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags) +{ + _PyManagedBufferObject *mbuf; + PyObject *mv; + int readonly; + + assert(mem != NULL); + assert(flags == PyBUF_READ || flags == PyBUF_WRITE); + + mbuf = mbuf_alloc(); + if (mbuf == NULL) + return NULL; + + readonly = (flags == PyBUF_WRITE) ? 0 : 1; + (void)PyBuffer_FillInfo(&mbuf->master, NULL, mem, size, readonly, + PyBUF_FULL_RO); + + mv = mbuf_add_view(mbuf, NULL); + Py_DECREF(mbuf); + + return mv; +} + +/* Create a memoryview from a given Py_buffer. For simple byte views, + PyMemoryView_FromMemory() should be used instead. + This function is the only entry point that can create a master buffer + without full information. Because of this fact init_shape_strides() + must be able to reconstruct missing values. 
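Editorial sketch (not part of the patch) of a typical call to the PyMemoryView_FromMemory() entry point defined above; the static buffer and the function name are assumptions made for the example:

    static char blob[16];                 // must outlive the exported view

    static PyObject *
    make_blob_view(void)
    {
        // Read-only view of 16 unsigned bytes; the caller owns the reference.
        return PyMemoryView_FromMemory(blob, sizeof(blob), PyBUF_READ);
    }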
*/ PyObject * PyMemoryView_FromBuffer(Py_buffer *info) { - PyMemoryViewObject *mview; + _PyManagedBufferObject *mbuf; + PyObject *mv; if (info->buf == NULL) { PyErr_SetString(PyExc_ValueError, - "cannot make memory view from a buffer with a NULL data pointer"); + "PyMemoryView_FromBuffer(): info->buf must not be NULL"); return NULL; } - mview = (PyMemoryViewObject *) - PyObject_GC_New(PyMemoryViewObject, &PyMemoryView_Type); - if (mview == NULL) + + mbuf = mbuf_alloc(); + if (mbuf == NULL) return NULL; - mview->hash = -1; - dup_buffer(&mview->view, info); - /* NOTE: mview->view.obj should already have been incref'ed as - part of PyBuffer_FillInfo(). */ - _PyObject_GC_TRACK(mview); - return (PyObject *)mview; + + /* info->obj is either NULL or a borrowed reference. This reference + should not be decremented in PyBuffer_Release(). */ + mbuf->master = *info; + mbuf->master.obj = NULL; + + mv = mbuf_add_view(mbuf, NULL); + Py_DECREF(mbuf); + + return mv; } +/* Create a memoryview from an object that implements the buffer protocol. + If the object is a memoryview, the new memoryview must be registered + with the same managed buffer. Otherwise, a new managed buffer is created. */ PyObject * -PyMemoryView_FromObject(PyObject *base) +PyMemoryView_FromObject(PyObject *v) { - PyMemoryViewObject *mview; - Py_buffer view; + _PyManagedBufferObject *mbuf; - if (!PyObject_CheckBuffer(base)) { - PyErr_SetString(PyExc_TypeError, - "cannot make memory view because object does " - "not have the buffer interface"); + if (PyMemoryView_Check(v)) { + PyMemoryViewObject *mv = (PyMemoryViewObject *)v; + CHECK_RELEASED(mv); + return mbuf_add_view(mv->mbuf, &mv->view); + } + else if (PyObject_CheckBuffer(v)) { + PyObject *ret; + mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(v); + if (mbuf == NULL) + return NULL; + ret = mbuf_add_view(mbuf, NULL); + Py_DECREF(mbuf); + return ret; + } + + PyErr_Format(PyExc_TypeError, + "memoryview: %.200s object does not have the buffer interface", + Py_TYPE(v)->tp_name); + return NULL; +} + +/* Copy the format string from a base object that might vanish. */ +static int +mbuf_copy_format(_PyManagedBufferObject *mbuf, const char *fmt) +{ + if (fmt != NULL) { + char *cp = PyMem_Malloc(strlen(fmt)+1); + if (cp == NULL) { + PyErr_NoMemory(); + return -1; + } + mbuf->master.format = strcpy(cp, fmt); + mbuf->flags |= _Py_MANAGED_BUFFER_FREE_FORMAT; + } + + return 0; +} + +/* + Return a memoryview that is based on a contiguous copy of src. + Assumptions: src has PyBUF_FULL_RO information, src->ndim > 0. + + Ownership rules: + 1) As usual, the returned memoryview has a private copy + of src->shape, src->strides and src->suboffsets. + 2) src->format is copied to the master buffer and released + in mbuf_dealloc(). The releasebufferproc of the bytes + object is NULL, so it does not matter that mbuf_release() + passes the altered format pointer to PyBuffer_Release(). 
+*/ +static PyObject * +memory_from_contiguous_copy(Py_buffer *src, char order) +{ + _PyManagedBufferObject *mbuf; + PyMemoryViewObject *mv; + PyObject *bytes; + Py_buffer *dest; + int i; + + assert(src->ndim > 0); + assert(src->shape != NULL); + + bytes = PyBytes_FromStringAndSize(NULL, src->len); + if (bytes == NULL) + return NULL; + + mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(bytes); + Py_DECREF(bytes); + if (mbuf == NULL) + return NULL; + + if (mbuf_copy_format(mbuf, src->format) < 0) { + Py_DECREF(mbuf); + return NULL; + } + + mv = (PyMemoryViewObject *)mbuf_add_incomplete_view(mbuf, NULL, src->ndim); + Py_DECREF(mbuf); + if (mv == NULL) return NULL; + + dest = &mv->view; + + /* shared values are initialized correctly except for itemsize */ + dest->itemsize = src->itemsize; + + /* shape and strides */ + for (i = 0; i < src->ndim; i++) { + dest->shape[i] = src->shape[i]; + } + if (order == 'C' || order == 'A') { + init_strides_from_shape(dest); } + else { + init_fortran_strides_from_shape(dest); + } + /* suboffsets */ + dest->suboffsets = NULL; + + /* flags */ + init_flags(mv); + + if (copy_buffer(dest, src) < 0) { + Py_DECREF(mv); + return NULL; + } + + return (PyObject *)mv; +} + +/* + Return a new memoryview object based on a contiguous exporter with + buffertype={PyBUF_READ, PyBUF_WRITE} and order={'C', 'F'ortran, or 'A'ny}. + The logical structure of the input and output buffers is the same + (i.e. tolist(input) == tolist(output)), but the physical layout in + memory can be explicitly chosen. + + As usual, if buffertype=PyBUF_WRITE, the exporter's buffer must be writable, + otherwise it may be writable or read-only. + + If the exporter is already contiguous with the desired target order, + the memoryview will be directly based on the exporter. + + Otherwise, if the buffertype is PyBUF_READ, the memoryview will be + based on a new bytes object. If order={'C', 'A'ny}, use 'C' order, + 'F'ortran order otherwise. 
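Editorial sketch (not patch text) of a call under the rules described above, with obj standing for an arbitrary exporter:

    // Request a read-only, C-contiguous view.  If obj is already C-contiguous
    // the view aliases its memory; otherwise the data is copied into a new
    // bytes object that backs the returned memoryview.
    PyObject *contig = PyMemoryView_GetContiguous(obj, PyBUF_READ, 'C');
    if (contig == NULL)
        return NULL;
    // ... use the contiguous view, then drop it ...
    Py_DECREF(contig);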
+*/ +PyObject * +PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order) +{ + PyMemoryViewObject *mv; + PyObject *ret; + Py_buffer *view; + + assert(buffertype == PyBUF_READ || buffertype == PyBUF_WRITE); + assert(order == 'C' || order == 'F' || order == 'A'); - if (PyObject_GetBuffer(base, &view, PyBUF_FULL_RO) < 0) + mv = (PyMemoryViewObject *)PyMemoryView_FromObject(obj); + if (mv == NULL) return NULL; - mview = (PyMemoryViewObject *)PyMemoryView_FromBuffer(&view); - if (mview == NULL) { - PyBuffer_Release(&view); + view = &mv->view; + if (buffertype == PyBUF_WRITE && view->readonly) { + PyErr_SetString(PyExc_BufferError, + "underlying buffer is not writable"); + Py_DECREF(mv); + return NULL; + } + + if (PyBuffer_IsContiguous(view, order)) + return (PyObject *)mv; + + if (buffertype == PyBUF_WRITE) { + PyErr_SetString(PyExc_BufferError, + "writable contiguous buffer requested " + "for a non-contiguous object."); + Py_DECREF(mv); return NULL; } - return (PyObject *)mview; + ret = memory_from_contiguous_copy(view, order); + Py_DECREF(mv); + return ret; } + static PyObject * memory_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) { PyObject *obj; - static char *kwlist[] = {"object", 0}; + static char *kwlist[] = {"object", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:memoryview", kwlist, &obj)) { @@ -132,478 +923,1106 @@ memory_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) } +/****************************************************************************/ +/* Release/GC management */ +/****************************************************************************/ + +/* Inform the managed buffer that this particular memoryview will not access + the underlying buffer again. If no other memoryviews are registered with + the managed buffer, the underlying buffer is released instantly and + marked as inaccessible for both the memoryview and the managed buffer. + + This function fails if the memoryview itself has exported buffers. */ +static int +_memory_release(PyMemoryViewObject *self) +{ + if (self->flags & _Py_MEMORYVIEW_RELEASED) + return 0; + + if (self->exports == 0) { + self->flags |= _Py_MEMORYVIEW_RELEASED; + assert(self->mbuf->exports > 0); + if (--self->mbuf->exports == 0) + mbuf_release(self->mbuf); + return 0; + } + if (self->exports > 0) { + PyErr_Format(PyExc_BufferError, + "memoryview has %zd exported buffer%s", self->exports, + self->exports==1 ? 
"" : "s"); + return -1; + } + + Py_FatalError("_memory_release(): negative export count"); + return -1; +} + +static PyObject * +memory_release(PyMemoryViewObject *self) +{ + if (_memory_release(self) < 0) + return NULL; + Py_RETURN_NONE; +} + static void -_strided_copy_nd(char *dest, char *src, int nd, Py_ssize_t *shape, - Py_ssize_t *strides, Py_ssize_t itemsize, char fort) +memory_dealloc(PyMemoryViewObject *self) { - int k; - Py_ssize_t outstride; + assert(self->exports == 0); + _PyObject_GC_UNTRACK(self); + (void)_memory_release(self); + Py_CLEAR(self->mbuf); + PyObject_GC_Del(self); +} - if (nd==0) { - memcpy(dest, src, itemsize); - } - else if (nd == 1) { - for (k = 0; k<shape[0]; k++) { - memcpy(dest, src, itemsize); - dest += itemsize; - src += strides[0]; - } +static int +memory_traverse(PyMemoryViewObject *self, visitproc visit, void *arg) +{ + Py_VISIT(self->mbuf); + return 0; +} + +static int +memory_clear(PyMemoryViewObject *self) +{ + (void)_memory_release(self); + Py_CLEAR(self->mbuf); + return 0; +} + +static PyObject * +memory_enter(PyObject *self, PyObject *args) +{ + CHECK_RELEASED(self); + Py_INCREF(self); + return self; +} + +static PyObject * +memory_exit(PyObject *self, PyObject *args) +{ + return memory_release((PyMemoryViewObject *)self); +} + + +/****************************************************************************/ +/* Casting format and shape */ +/****************************************************************************/ + +#define IS_BYTE_FORMAT(f) (f == 'b' || f == 'B' || f == 'c') + +Py_LOCAL_INLINE(Py_ssize_t) +get_native_fmtchar(char *result, const char *fmt) +{ + Py_ssize_t size = -1; + + if (fmt[0] == '@') fmt++; + + switch (fmt[0]) { + case 'c': case 'b': case 'B': size = sizeof(char); break; + case 'h': case 'H': size = sizeof(short); break; + case 'i': case 'I': size = sizeof(int); break; + case 'l': case 'L': size = sizeof(long); break; + #ifdef HAVE_LONG_LONG + case 'q': case 'Q': size = sizeof(PY_LONG_LONG); break; + #endif + case 'n': case 'N': size = sizeof(Py_ssize_t); break; + case 'f': size = sizeof(float); break; + case 'd': size = sizeof(double); break; + #ifdef HAVE_C99_BOOL + case '?': size = sizeof(_Bool); break; + #else + case '?': size = sizeof(char); break; + #endif + case 'P': size = sizeof(void *); break; } - else { - if (fort == 'F') { - /* Copy first dimension first, - second dimension second, etc... - Set up the recursive loop backwards so that final - dimension is actually copied last. - */ - outstride = itemsize; - for (k=1; k<nd-1;k++) { - outstride *= shape[k]; - } - for (k=0; k<shape[nd-1]; k++) { - _strided_copy_nd(dest, src, nd-1, shape, - strides, itemsize, fort); - dest += outstride; - src += strides[nd-1]; - } - } - else { - /* Copy last dimension first, - second-to-last dimension second, etc. - Set up the recursion so that the - first dimension is copied last - */ - outstride = itemsize; - for (k=1; k < nd; k++) { - outstride *= shape[k]; - } - for (k=0; k<shape[0]; k++) { - _strided_copy_nd(dest, src, nd-1, shape+1, - strides+1, itemsize, - fort); - dest += outstride; - src += strides[0]; - } - } + if (size > 0 && fmt[1] == '\0') { + *result = fmt[0]; + return size; } - return; + + return -1; } +/* Cast a memoryview's data type to 'format'. The input array must be + C-contiguous. At least one of input-format, output-format must have + byte size. The output array is 1-D, with the same byte length as the + input array. Thus, view->len must be a multiple of the new itemsize. 
*/ static int -_indirect_copy_nd(char *dest, Py_buffer *view, char fort) +cast_to_1D(PyMemoryViewObject *mv, PyObject *format) { - Py_ssize_t *indices; - int k; - Py_ssize_t elements; - char *ptr; - void (*func)(int, Py_ssize_t *, const Py_ssize_t *); + Py_buffer *view = &mv->view; + PyObject *asciifmt; + char srcchar, destchar; + Py_ssize_t itemsize; + int ret = -1; + + assert(view->ndim >= 1); + assert(Py_SIZE(mv) == 3*view->ndim); + assert(view->shape == mv->ob_array); + assert(view->strides == mv->ob_array + view->ndim); + assert(view->suboffsets == mv->ob_array + 2*view->ndim); + + if (get_native_fmtchar(&srcchar, view->format) < 0) { + PyErr_SetString(PyExc_ValueError, + "memoryview: source format must be a native single character " + "format prefixed with an optional '@'"); + return ret; + } - if (view->ndim > PY_SSIZE_T_MAX / sizeof(Py_ssize_t)) { - PyErr_NoMemory(); - return -1; + asciifmt = PyUnicode_AsASCIIString(format); + if (asciifmt == NULL) + return ret; + + itemsize = get_native_fmtchar(&destchar, PyBytes_AS_STRING(asciifmt)); + if (itemsize < 0) { + PyErr_SetString(PyExc_ValueError, + "memoryview: destination format must be a native single " + "character format prefixed with an optional '@'"); + goto out; } - indices = (Py_ssize_t *)PyMem_Malloc(sizeof(Py_ssize_t)*view->ndim); - if (indices == NULL) { - PyErr_NoMemory(); - return -1; + if (!IS_BYTE_FORMAT(srcchar) && !IS_BYTE_FORMAT(destchar)) { + PyErr_SetString(PyExc_TypeError, + "memoryview: cannot cast between two non-byte formats"); + goto out; } - for (k=0; k<view->ndim;k++) { - indices[k] = 0; + if (view->len % itemsize) { + PyErr_SetString(PyExc_TypeError, + "memoryview: length is not a multiple of itemsize"); + goto out; } - elements = 1; - for (k=0; k<view->ndim; k++) { - elements *= view->shape[k]; + strncpy(mv->format, PyBytes_AS_STRING(asciifmt), + _Py_MEMORYVIEW_MAX_FORMAT); + mv->format[_Py_MEMORYVIEW_MAX_FORMAT-1] = '\0'; + view->format = mv->format; + view->itemsize = itemsize; + + view->ndim = 1; + view->shape[0] = view->len / view->itemsize; + view->strides[0] = view->itemsize; + view->suboffsets = NULL; + + init_flags(mv); + + ret = 0; + +out: + Py_DECREF(asciifmt); + return ret; +} + +/* The memoryview must have space for 3*len(seq) elements. */ +static Py_ssize_t +copy_shape(Py_ssize_t *shape, const PyObject *seq, Py_ssize_t ndim, + Py_ssize_t itemsize) +{ + Py_ssize_t x, i; + Py_ssize_t len = itemsize; + + for (i = 0; i < ndim; i++) { + PyObject *tmp = PySequence_Fast_GET_ITEM(seq, i); + if (!PyLong_Check(tmp)) { + PyErr_SetString(PyExc_TypeError, + "memoryview.cast(): elements of shape must be integers"); + return -1; + } + x = PyLong_AsSsize_t(tmp); + if (x == -1 && PyErr_Occurred()) { + return -1; + } + if (x <= 0) { + /* In general elements of shape may be 0, but not for casting. */ + PyErr_Format(PyExc_ValueError, + "memoryview.cast(): elements of shape must be integers > 0"); + return -1; + } + if (x > PY_SSIZE_T_MAX / len) { + PyErr_Format(PyExc_ValueError, + "memoryview.cast(): product(shape) > SSIZE_MAX"); + return -1; + } + len *= x; + shape[i] = x; } - if (fort == 'F') { - func = _Py_add_one_to_index_F; + + return len; +} + +/* Cast a 1-D array to a new shape. The result array will be C-contiguous. + If the result array does not have exactly the same byte length as the + input array, raise ValueError. 
*/ +static int +cast_to_ND(PyMemoryViewObject *mv, const PyObject *shape, int ndim) +{ + Py_buffer *view = &mv->view; + Py_ssize_t len; + + assert(view->ndim == 1); /* ndim from cast_to_1D() */ + assert(Py_SIZE(mv) == 3*(ndim==0?1:ndim)); /* ndim of result array */ + assert(view->shape == mv->ob_array); + assert(view->strides == mv->ob_array + (ndim==0?1:ndim)); + assert(view->suboffsets == NULL); + + view->ndim = ndim; + if (view->ndim == 0) { + view->shape = NULL; + view->strides = NULL; + len = view->itemsize; } else { - func = _Py_add_one_to_index_C; + len = copy_shape(view->shape, shape, ndim, view->itemsize); + if (len < 0) + return -1; + init_strides_from_shape(view); } - while (elements--) { - func(view->ndim, indices, view->shape); - ptr = PyBuffer_GetPointer(view, indices); - memcpy(dest, ptr, view->itemsize); - dest += view->itemsize; + + if (view->len != len) { + PyErr_SetString(PyExc_TypeError, + "memoryview: product(shape) * itemsize != buffer size"); + return -1; } - PyMem_Free(indices); + init_flags(mv); + + return 0; +} + +static int +zero_in_shape(PyMemoryViewObject *mv) +{ + Py_buffer *view = &mv->view; + Py_ssize_t i; + + for (i = 0; i < view->ndim; i++) + if (view->shape[i] == 0) + return 1; + return 0; } /* - Get a the data from an object as a contiguous chunk of memory (in - either 'C' or 'F'ortran order) even if it means copying it into a - separate memory area. - - Returns a new reference to a Memory view object. If no copy is needed, - the memory view object points to the original memory and holds a - lock on the original. If a copy is needed, then the memory view object - points to a brand-new Bytes object (and holds a memory lock on it). - - buffertype - - PyBUF_READ buffer only needs to be read-only - PyBUF_WRITE buffer needs to be writable (give error if not contiguous) - PyBUF_SHADOW buffer needs to be writable so shadow it with - a contiguous buffer if it is not. The view will point to - the shadow buffer which can be written to and then - will be copied back into the other buffer when the memory - view is de-allocated. While the shadow buffer is - being used, it will have an exclusive write lock on - the original buffer. - */ + Cast a copy of 'self' to a different view. The input view must + be C-contiguous. The function always casts the input view to a + 1-D output according to 'format'. At least one of input-format, + output-format must have byte size. -PyObject * -PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char fort) + If 'shape' is given, the 1-D view from the previous step will + be cast to a C-contiguous view with new shape and strides. + + All casts must result in views that will have the exact byte + size of the original input. Otherwise, an error is raised. 
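Continuing the editorial example (assumed sizes, not patch text): a one-dimensional, C-contiguous view of 24 bytes can be cast with format 'i' and shape [2, 3], since 2 * 3 * 4 == 24 and the result is a C-contiguous two-dimensional view; requesting shape [5, 2] instead fails in cast_to_ND() because 5 * 2 * 4 == 40 != 24 (product(shape) * itemsize != buffer size).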
+*/ +static PyObject * +memory_cast(PyMemoryViewObject *self, PyObject *args, PyObject *kwds) { - PyMemoryViewObject *mem; - PyObject *bytes; - Py_buffer *view; - int flags; - char *dest; + static char *kwlist[] = {"format", "shape", NULL}; + PyMemoryViewObject *mv = NULL; + PyObject *shape = NULL; + PyObject *format; + Py_ssize_t ndim = 1; - if (!PyObject_CheckBuffer(obj)) { + CHECK_RELEASED(self); + + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O", kwlist, + &format, &shape)) { + return NULL; + } + if (!PyUnicode_Check(format)) { PyErr_SetString(PyExc_TypeError, - "object does not support the buffer interface"); + "memoryview: format argument must be a string"); return NULL; } - - mem = PyObject_GC_New(PyMemoryViewObject, &PyMemoryView_Type); - if (mem == NULL) + if (!MV_C_CONTIGUOUS(self->flags)) { + PyErr_SetString(PyExc_TypeError, + "memoryview: casts are restricted to C-contiguous views"); return NULL; - - view = &mem->view; - flags = PyBUF_FULL_RO; - switch(buffertype) { - case PyBUF_WRITE: - flags = PyBUF_FULL; - break; + } + if (zero_in_shape(self)) { + PyErr_SetString(PyExc_TypeError, + "memoryview: cannot cast view with zeros in shape or strides"); + return NULL; + } + if (shape) { + CHECK_LIST_OR_TUPLE(shape) + ndim = PySequence_Fast_GET_SIZE(shape); + if (ndim > PyBUF_MAX_NDIM) { + PyErr_SetString(PyExc_ValueError, + "memoryview: number of dimensions must not exceed " + STRINGIZE(PyBUF_MAX_NDIM)); + return NULL; + } + if (self->view.ndim != 1 && ndim != 1) { + PyErr_SetString(PyExc_TypeError, + "memoryview: cast must be 1D -> ND or ND -> 1D"); + return NULL; + } } - if (PyObject_GetBuffer(obj, view, flags) != 0) { - Py_DECREF(mem); + mv = (PyMemoryViewObject *) + mbuf_add_incomplete_view(self->mbuf, &self->view, ndim==0 ? 1 : (int)ndim); + if (mv == NULL) return NULL; + + if (cast_to_1D(mv, format) < 0) + goto error; + if (shape && cast_to_ND(mv, shape, (int)ndim) < 0) + goto error; + + return (PyObject *)mv; + +error: + Py_DECREF(mv); + return NULL; +} + + +/**************************************************************************/ +/* getbuffer */ +/**************************************************************************/ + +static int +memory_getbuf(PyMemoryViewObject *self, Py_buffer *view, int flags) +{ + Py_buffer *base = &self->view; + int baseflags = self->flags; + + CHECK_RELEASED_INT(self); + + /* start with complete information */ + *view = *base; + view->obj = NULL; + + if (REQ_WRITABLE(flags) && base->readonly) { + PyErr_SetString(PyExc_BufferError, + "memoryview: underlying buffer is not writable"); + return -1; + } + if (!REQ_FORMAT(flags)) { + /* NULL indicates that the buffer's data type has been cast to 'B'. + view->itemsize is the _previous_ itemsize. If shape is present, + the equality product(shape) * itemsize = len still holds at this + point. The equality calcsize(format) = itemsize does _not_ hold + from here on! 
*/ + view->format = NULL; } - if (PyBuffer_IsContiguous(view, fort)) { - /* no copy needed */ - _PyObject_GC_TRACK(mem); - return (PyObject *)mem; + if (REQ_C_CONTIGUOUS(flags) && !MV_C_CONTIGUOUS(baseflags)) { + PyErr_SetString(PyExc_BufferError, + "memoryview: underlying buffer is not C-contiguous"); + return -1; } - /* otherwise a copy is needed */ - if (buffertype == PyBUF_WRITE) { - Py_DECREF(mem); + if (REQ_F_CONTIGUOUS(flags) && !MV_F_CONTIGUOUS(baseflags)) { PyErr_SetString(PyExc_BufferError, - "writable contiguous buffer requested " - "for a non-contiguousobject."); - return NULL; + "memoryview: underlying buffer is not Fortran contiguous"); + return -1; } - bytes = PyBytes_FromStringAndSize(NULL, view->len); - if (bytes == NULL) { - Py_DECREF(mem); - return NULL; + if (REQ_ANY_CONTIGUOUS(flags) && !MV_ANY_CONTIGUOUS(baseflags)) { + PyErr_SetString(PyExc_BufferError, + "memoryview: underlying buffer is not contiguous"); + return -1; } - dest = PyBytes_AS_STRING(bytes); - /* different copying strategy depending on whether - or not any pointer de-referencing is needed - */ - /* strided or in-direct copy */ - if (view->suboffsets==NULL) { - _strided_copy_nd(dest, view->buf, view->ndim, view->shape, - view->strides, view->itemsize, fort); + if (!REQ_INDIRECT(flags) && (baseflags & _Py_MEMORYVIEW_PIL)) { + PyErr_SetString(PyExc_BufferError, + "memoryview: underlying buffer requires suboffsets"); + return -1; } - else { - if (_indirect_copy_nd(dest, view, fort) < 0) { - Py_DECREF(bytes); - Py_DECREF(mem); - return NULL; + if (!REQ_STRIDES(flags)) { + if (!MV_C_CONTIGUOUS(baseflags)) { + PyErr_SetString(PyExc_BufferError, + "memoryview: underlying buffer is not C-contiguous"); + return -1; } - PyBuffer_Release(view); /* XXX ? */ + view->strides = NULL; + } + if (!REQ_SHAPE(flags)) { + /* PyBUF_SIMPLE or PyBUF_WRITABLE: at this point buf is C-contiguous, + so base->buf = ndbuf->data. */ + if (view->format != NULL) { + /* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do + not make sense. */ + PyErr_Format(PyExc_BufferError, + "ndarray: cannot cast to unsigned bytes if the format flag " + "is present"); + return -1; + } + /* product(shape) * itemsize = len and calcsize(format) = itemsize + do _not_ hold from here on! */ + view->ndim = 1; + view->shape = NULL; } - _PyObject_GC_TRACK(mem); - return (PyObject *)mem; -} -static PyObject * -memory_format_get(PyMemoryViewObject *self) + view->obj = (PyObject *)self; + Py_INCREF(view->obj); + self->exports++; + + return 0; +} + +static void +memory_releasebuf(PyMemoryViewObject *self, Py_buffer *view) { - CHECK_RELEASED(self); - return PyUnicode_FromString(self->view.format); + self->exports--; + return; + /* PyBuffer_Release() decrements view->obj after this function returns. */ } -static PyObject * -memory_itemsize_get(PyMemoryViewObject *self) +/* Buffer methods */ +static PyBufferProcs memory_as_buffer = { + (getbufferproc)memory_getbuf, /* bf_getbuffer */ + (releasebufferproc)memory_releasebuf, /* bf_releasebuffer */ +}; + + +/****************************************************************************/ +/* Optimized pack/unpack for all native format specifiers */ +/****************************************************************************/ + +/* + Fix exceptions: + 1) Include format string in the error message. + 2) OverflowError -> ValueError. + 3) The error message from PyNumber_Index() is not ideal. 
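For example (editorial illustration): packing the out-of-range value 300 with format 'b', e.g. through item assignment on a writable view, takes the err_range path in pack_single() below and raises ValueError: memoryview: invalid value for format 'b'; packing an object that is not an integer is reported as TypeError: memoryview: invalid type for format 'b' rather than the generic message produced by PyNumber_Index().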
+*/ +static int +type_error_int(const char *fmt) { - CHECK_RELEASED(self); - return PyLong_FromSsize_t(self->view.itemsize); + PyErr_Format(PyExc_TypeError, + "memoryview: invalid type for format '%s'", fmt); + return -1; } -static PyObject * -_IntTupleFromSsizet(int len, Py_ssize_t *vals) +static int +value_error_int(const char *fmt) { - int i; - PyObject *o; - PyObject *intTuple; + PyErr_Format(PyExc_ValueError, + "memoryview: invalid value for format '%s'", fmt); + return -1; +} - if (vals == NULL) { - Py_INCREF(Py_None); - return Py_None; +static int +fix_error_int(const char *fmt) +{ + assert(PyErr_Occurred()); + if (PyErr_ExceptionMatches(PyExc_TypeError)) { + PyErr_Clear(); + return type_error_int(fmt); } - intTuple = PyTuple_New(len); - if (!intTuple) - return NULL; - for (i=0; i<len; i++) { - o = PyLong_FromSsize_t(vals[i]); - if (!o) { - Py_DECREF(intTuple); - return NULL; - } - PyTuple_SET_ITEM(intTuple, i, o); + else if (PyErr_ExceptionMatches(PyExc_OverflowError) || + PyErr_ExceptionMatches(PyExc_ValueError)) { + PyErr_Clear(); + return value_error_int(fmt); } - return intTuple; + + return -1; } -static PyObject * -memory_shape_get(PyMemoryViewObject *self) +/* Accept integer objects or objects with an __index__() method. */ +static long +pylong_as_ld(PyObject *item) { - CHECK_RELEASED(self); - return _IntTupleFromSsizet(self->view.ndim, self->view.shape); + PyObject *tmp; + long ld; + + tmp = PyNumber_Index(item); + if (tmp == NULL) + return -1; + + ld = PyLong_AsLong(tmp); + Py_DECREF(tmp); + return ld; } -static PyObject * -memory_strides_get(PyMemoryViewObject *self) +static unsigned long +pylong_as_lu(PyObject *item) { - CHECK_RELEASED(self); - return _IntTupleFromSsizet(self->view.ndim, self->view.strides); + PyObject *tmp; + unsigned long lu; + + tmp = PyNumber_Index(item); + if (tmp == NULL) + return (unsigned long)-1; + + lu = PyLong_AsUnsignedLong(tmp); + Py_DECREF(tmp); + return lu; } -static PyObject * -memory_suboffsets_get(PyMemoryViewObject *self) +#ifdef HAVE_LONG_LONG +static PY_LONG_LONG +pylong_as_lld(PyObject *item) { - CHECK_RELEASED(self); - return _IntTupleFromSsizet(self->view.ndim, self->view.suboffsets); + PyObject *tmp; + PY_LONG_LONG lld; + + tmp = PyNumber_Index(item); + if (tmp == NULL) + return -1; + + lld = PyLong_AsLongLong(tmp); + Py_DECREF(tmp); + return lld; } -static PyObject * -memory_readonly_get(PyMemoryViewObject *self) +static unsigned PY_LONG_LONG +pylong_as_llu(PyObject *item) { - CHECK_RELEASED(self); - return PyBool_FromLong(self->view.readonly); + PyObject *tmp; + unsigned PY_LONG_LONG llu; + + tmp = PyNumber_Index(item); + if (tmp == NULL) + return (unsigned PY_LONG_LONG)-1; + + llu = PyLong_AsUnsignedLongLong(tmp); + Py_DECREF(tmp); + return llu; } +#endif -static PyObject * -memory_ndim_get(PyMemoryViewObject *self) +static Py_ssize_t +pylong_as_zd(PyObject *item) { - CHECK_RELEASED(self); - return PyLong_FromLong(self->view.ndim); + PyObject *tmp; + Py_ssize_t zd; + + tmp = PyNumber_Index(item); + if (tmp == NULL) + return -1; + + zd = PyLong_AsSsize_t(tmp); + Py_DECREF(tmp); + return zd; } -static PyGetSetDef memory_getsetlist[] ={ - {"format", (getter)memory_format_get, NULL, NULL}, - {"itemsize", (getter)memory_itemsize_get, NULL, NULL}, - {"shape", (getter)memory_shape_get, NULL, NULL}, - {"strides", (getter)memory_strides_get, NULL, NULL}, - {"suboffsets", (getter)memory_suboffsets_get, NULL, NULL}, - {"readonly", (getter)memory_readonly_get, NULL, NULL}, - {"ndim", (getter)memory_ndim_get, NULL, NULL}, - {NULL, NULL, 
NULL, NULL}, -}; +static size_t +pylong_as_zu(PyObject *item) +{ + PyObject *tmp; + size_t zu; + tmp = PyNumber_Index(item); + if (tmp == NULL) + return (size_t)-1; -static PyObject * -memory_tobytes(PyMemoryViewObject *mem, PyObject *noargs) + zu = PyLong_AsSize_t(tmp); + Py_DECREF(tmp); + return zu; +} + +/* Timings with the ndarray from _testbuffer.c indicate that using the + struct module is around 15x slower than the two functions below. */ + +#define UNPACK_SINGLE(dest, ptr, type) \ + do { \ + type x; \ + memcpy((char *)&x, ptr, sizeof x); \ + dest = x; \ + } while (0) + +/* Unpack a single item. 'fmt' can be any native format character in struct + module syntax. This function is very sensitive to small changes. With this + layout gcc automatically generates a fast jump table. */ +Py_LOCAL_INLINE(PyObject *) +unpack_single(const char *ptr, const char *fmt) { - CHECK_RELEASED(mem); - return PyObject_CallFunctionObjArgs( - (PyObject *) &PyBytes_Type, mem, NULL); + unsigned PY_LONG_LONG llu; + unsigned long lu; + size_t zu; + PY_LONG_LONG lld; + long ld; + Py_ssize_t zd; + double d; + unsigned char uc; + void *p; + + switch (fmt[0]) { + + /* signed integers and fast path for 'B' */ + case 'B': uc = *((unsigned char *)ptr); goto convert_uc; + case 'b': ld = *((signed char *)ptr); goto convert_ld; + case 'h': UNPACK_SINGLE(ld, ptr, short); goto convert_ld; + case 'i': UNPACK_SINGLE(ld, ptr, int); goto convert_ld; + case 'l': UNPACK_SINGLE(ld, ptr, long); goto convert_ld; + + /* boolean */ + #ifdef HAVE_C99_BOOL + case '?': UNPACK_SINGLE(ld, ptr, _Bool); goto convert_bool; + #else + case '?': UNPACK_SINGLE(ld, ptr, char); goto convert_bool; + #endif + + /* unsigned integers */ + case 'H': UNPACK_SINGLE(lu, ptr, unsigned short); goto convert_lu; + case 'I': UNPACK_SINGLE(lu, ptr, unsigned int); goto convert_lu; + case 'L': UNPACK_SINGLE(lu, ptr, unsigned long); goto convert_lu; + + /* native 64-bit */ + #ifdef HAVE_LONG_LONG + case 'q': UNPACK_SINGLE(lld, ptr, PY_LONG_LONG); goto convert_lld; + case 'Q': UNPACK_SINGLE(llu, ptr, unsigned PY_LONG_LONG); goto convert_llu; + #endif + + /* ssize_t and size_t */ + case 'n': UNPACK_SINGLE(zd, ptr, Py_ssize_t); goto convert_zd; + case 'N': UNPACK_SINGLE(zu, ptr, size_t); goto convert_zu; + + /* floats */ + case 'f': UNPACK_SINGLE(d, ptr, float); goto convert_double; + case 'd': UNPACK_SINGLE(d, ptr, double); goto convert_double; + + /* bytes object */ + case 'c': goto convert_bytes; + + /* pointer */ + case 'P': UNPACK_SINGLE(p, ptr, void *); goto convert_pointer; + + /* default */ + default: goto err_format; + } + +convert_uc: + /* PyLong_FromUnsignedLong() is slower */ + return PyLong_FromLong(uc); +convert_ld: + return PyLong_FromLong(ld); +convert_lu: + return PyLong_FromUnsignedLong(lu); +convert_lld: + return PyLong_FromLongLong(lld); +convert_llu: + return PyLong_FromUnsignedLongLong(llu); +convert_zd: + return PyLong_FromSsize_t(zd); +convert_zu: + return PyLong_FromSize_t(zu); +convert_double: + return PyFloat_FromDouble(d); +convert_bool: + return PyBool_FromLong(ld); +convert_bytes: + return PyBytes_FromStringAndSize(ptr, 1); +convert_pointer: + return PyLong_FromVoidPtr(p); +err_format: + PyErr_Format(PyExc_NotImplementedError, + "memoryview: format %s not supported", fmt); + return NULL; } -/* TODO: rewrite this function using the struct module to unpack - each buffer item */ +#define PACK_SINGLE(ptr, src, type) \ + do { \ + type x; \ + x = (type)src; \ + memcpy(ptr, (char *)&x, sizeof x); \ + } while (0) + +/* Pack a single item. 
'fmt' can be any native format character in + struct module syntax. */ +static int +pack_single(char *ptr, PyObject *item, const char *fmt) +{ + unsigned PY_LONG_LONG llu; + unsigned long lu; + size_t zu; + PY_LONG_LONG lld; + long ld; + Py_ssize_t zd; + double d; + void *p; + + switch (fmt[0]) { + /* signed integers */ + case 'b': case 'h': case 'i': case 'l': + ld = pylong_as_ld(item); + if (ld == -1 && PyErr_Occurred()) + goto err_occurred; + switch (fmt[0]) { + case 'b': + if (ld < SCHAR_MIN || ld > SCHAR_MAX) goto err_range; + *((signed char *)ptr) = (signed char)ld; break; + case 'h': + if (ld < SHRT_MIN || ld > SHRT_MAX) goto err_range; + PACK_SINGLE(ptr, ld, short); break; + case 'i': + if (ld < INT_MIN || ld > INT_MAX) goto err_range; + PACK_SINGLE(ptr, ld, int); break; + default: /* 'l' */ + PACK_SINGLE(ptr, ld, long); break; + } + break; + + /* unsigned integers */ + case 'B': case 'H': case 'I': case 'L': + lu = pylong_as_lu(item); + if (lu == (unsigned long)-1 && PyErr_Occurred()) + goto err_occurred; + switch (fmt[0]) { + case 'B': + if (lu > UCHAR_MAX) goto err_range; + *((unsigned char *)ptr) = (unsigned char)lu; break; + case 'H': + if (lu > USHRT_MAX) goto err_range; + PACK_SINGLE(ptr, lu, unsigned short); break; + case 'I': + if (lu > UINT_MAX) goto err_range; + PACK_SINGLE(ptr, lu, unsigned int); break; + default: /* 'L' */ + PACK_SINGLE(ptr, lu, unsigned long); break; + } + break; + + /* native 64-bit */ + #ifdef HAVE_LONG_LONG + case 'q': + lld = pylong_as_lld(item); + if (lld == -1 && PyErr_Occurred()) + goto err_occurred; + PACK_SINGLE(ptr, lld, PY_LONG_LONG); + break; + case 'Q': + llu = pylong_as_llu(item); + if (llu == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred()) + goto err_occurred; + PACK_SINGLE(ptr, llu, unsigned PY_LONG_LONG); + break; + #endif + + /* ssize_t and size_t */ + case 'n': + zd = pylong_as_zd(item); + if (zd == -1 && PyErr_Occurred()) + goto err_occurred; + PACK_SINGLE(ptr, zd, Py_ssize_t); + break; + case 'N': + zu = pylong_as_zu(item); + if (zu == (size_t)-1 && PyErr_Occurred()) + goto err_occurred; + PACK_SINGLE(ptr, zu, size_t); + break; + + /* floats */ + case 'f': case 'd': + d = PyFloat_AsDouble(item); + if (d == -1.0 && PyErr_Occurred()) + goto err_occurred; + if (fmt[0] == 'f') { + PACK_SINGLE(ptr, d, float); + } + else { + PACK_SINGLE(ptr, d, double); + } + break; + + /* bool */ + case '?': + ld = PyObject_IsTrue(item); + if (ld < 0) + return -1; /* preserve original error */ + #ifdef HAVE_C99_BOOL + PACK_SINGLE(ptr, ld, _Bool); + #else + PACK_SINGLE(ptr, ld, char); + #endif + break; + + /* bytes object */ + case 'c': + if (!PyBytes_Check(item)) + return type_error_int(fmt); + if (PyBytes_GET_SIZE(item) != 1) + return value_error_int(fmt); + *ptr = PyBytes_AS_STRING(item)[0]; + break; + /* pointer */ + case 'P': + p = PyLong_AsVoidPtr(item); + if (p == NULL && PyErr_Occurred()) + goto err_occurred; + PACK_SINGLE(ptr, p, void *); + break; + + /* default */ + default: goto err_format; + } + + return 0; + +err_occurred: + return fix_error_int(fmt); +err_range: + return value_error_int(fmt); +err_format: + PyErr_Format(PyExc_NotImplementedError, + "memoryview: format %s not supported", fmt); + return -1; +} + + +/****************************************************************************/ +/* Representations */ +/****************************************************************************/ + +/* allow explicit form of native format */ +Py_LOCAL_INLINE(const char *) +adjust_fmt(const Py_buffer *view) +{ + const char *fmt; + + fmt = 
(view->format[0] == '@') ? view->format+1 : view->format; + if (fmt[0] && fmt[1] == '\0') + return fmt; + + PyErr_Format(PyExc_NotImplementedError, + "memoryview: unsupported format %s", view->format); + return NULL; +} + +/* Base case for multi-dimensional unpacking. Assumption: ndim == 1. */ static PyObject * -memory_tolist(PyMemoryViewObject *mem, PyObject *noargs) +tolist_base(const char *ptr, const Py_ssize_t *shape, + const Py_ssize_t *strides, const Py_ssize_t *suboffsets, + const char *fmt) { - Py_buffer *view = &(mem->view); + PyObject *lst, *item; Py_ssize_t i; - PyObject *res, *item; - char *buf; - CHECK_RELEASED(mem); - if (strcmp(view->format, "B") || view->itemsize != 1) { - PyErr_SetString(PyExc_NotImplementedError, - "tolist() only supports byte views"); - return NULL; - } - if (view->ndim != 1) { - PyErr_SetString(PyExc_NotImplementedError, - "tolist() only supports one-dimensional objects"); - return NULL; - } - res = PyList_New(view->len); - if (res == NULL) + lst = PyList_New(shape[0]); + if (lst == NULL) return NULL; - buf = view->buf; - for (i = 0; i < view->len; i++) { - item = PyLong_FromUnsignedLong((unsigned char) *buf); + + for (i = 0; i < shape[0]; ptr+=strides[0], i++) { + const char *xptr = ADJUST_PTR(ptr, suboffsets); + item = unpack_single(xptr, fmt); if (item == NULL) { - Py_DECREF(res); + Py_DECREF(lst); return NULL; } - PyList_SET_ITEM(res, i, item); - buf++; + PyList_SET_ITEM(lst, i, item); } - return res; + + return lst; } -static void -do_release(PyMemoryViewObject *self) +/* Unpack a multi-dimensional array into a nested list. + Assumption: ndim >= 1. */ +static PyObject * +tolist_rec(const char *ptr, Py_ssize_t ndim, const Py_ssize_t *shape, + const Py_ssize_t *strides, const Py_ssize_t *suboffsets, + const char *fmt) { - if (self->view.obj != NULL) { - PyBuffer_Release(&(self->view)); + PyObject *lst, *item; + Py_ssize_t i; + + assert(ndim >= 1); + assert(shape != NULL); + assert(strides != NULL); + + if (ndim == 1) + return tolist_base(ptr, shape, strides, suboffsets, fmt); + + lst = PyList_New(shape[0]); + if (lst == NULL) + return NULL; + + for (i = 0; i < shape[0]; ptr+=strides[0], i++) { + const char *xptr = ADJUST_PTR(ptr, suboffsets); + item = tolist_rec(xptr, ndim-1, shape+1, + strides+1, suboffsets ? suboffsets+1 : NULL, + fmt); + if (item == NULL) { + Py_DECREF(lst); + return NULL; + } + PyList_SET_ITEM(lst, i, item); } - self->view.obj = NULL; - self->view.buf = NULL; + + return lst; } +/* Return a list representation of the memoryview. Currently only buffers + with native format strings are supported. 
*/ static PyObject * -memory_enter(PyObject *self, PyObject *args) +memory_tolist(PyMemoryViewObject *mv, PyObject *noargs) { - CHECK_RELEASED(self); - Py_INCREF(self); - return self; + const Py_buffer *view = &(mv->view); + const char *fmt; + + CHECK_RELEASED(mv); + + fmt = adjust_fmt(view); + if (fmt == NULL) + return NULL; + if (view->ndim == 0) { + return unpack_single(view->buf, fmt); + } + else if (view->ndim == 1) { + return tolist_base(view->buf, view->shape, + view->strides, view->suboffsets, + fmt); + } + else { + return tolist_rec(view->buf, view->ndim, view->shape, + view->strides, view->suboffsets, + fmt); + } } static PyObject * -memory_exit(PyObject *self, PyObject *args) +memory_tobytes(PyMemoryViewObject *self, PyObject *dummy) { - do_release((PyMemoryViewObject *) self); - Py_RETURN_NONE; -} + Py_buffer *src = VIEW_ADDR(self); + PyObject *bytes = NULL; -static PyMethodDef memory_methods[] = { - {"release", memory_exit, METH_NOARGS}, - {"tobytes", (PyCFunction)memory_tobytes, METH_NOARGS, NULL}, - {"tolist", (PyCFunction)memory_tolist, METH_NOARGS, NULL}, - {"__enter__", memory_enter, METH_NOARGS}, - {"__exit__", memory_exit, METH_VARARGS}, - {NULL, NULL} /* sentinel */ -}; + CHECK_RELEASED(self); + if (MV_C_CONTIGUOUS(self->flags)) { + return PyBytes_FromStringAndSize(src->buf, src->len); + } -static void -memory_dealloc(PyMemoryViewObject *self) -{ - _PyObject_GC_UNTRACK(self); - do_release(self); - PyObject_GC_Del(self); + bytes = PyBytes_FromStringAndSize(NULL, src->len); + if (bytes == NULL) + return NULL; + + if (buffer_to_c_contiguous(PyBytes_AS_STRING(bytes), src) < 0) { + Py_DECREF(bytes); + return NULL; + } + + return bytes; } static PyObject * memory_repr(PyMemoryViewObject *self) { - if (IS_RELEASED(self)) + if (self->flags & _Py_MEMORYVIEW_RELEASED) return PyUnicode_FromFormat("<released memory at %p>", self); else return PyUnicode_FromFormat("<memory at %p>", self); } -static Py_hash_t -memory_hash(PyMemoryViewObject *self) + +/**************************************************************************/ +/* Indexing and slicing */ +/**************************************************************************/ + +/* Get the pointer to the item at index. 
*/ +static char * +ptr_from_index(Py_buffer *view, Py_ssize_t index) { - if (self->hash == -1) { - Py_buffer *view = &self->view; - CHECK_RELEASED_INT(self); - if (view->ndim > 1) { - PyErr_SetString(PyExc_NotImplementedError, - "can't hash multi-dimensional memoryview object"); - return -1; - } - if (view->strides && view->strides[0] != view->itemsize) { - PyErr_SetString(PyExc_NotImplementedError, - "can't hash strided memoryview object"); - return -1; - } - if (!view->readonly) { - PyErr_SetString(PyExc_ValueError, - "can't hash writable memoryview object"); - return -1; - } - if (view->obj != NULL && PyObject_Hash(view->obj) == -1) { - /* Keep the original error message */ - return -1; - } - /* Can't fail */ - self->hash = _Py_HashBytes((unsigned char *) view->buf, view->len); + char *ptr; + Py_ssize_t nitems; /* items in the first dimension */ + + assert(view->shape); + assert(view->strides); + + nitems = view->shape[0]; + if (index < 0) { + index += nitems; + } + if (index < 0 || index >= nitems) { + PyErr_SetString(PyExc_IndexError, "index out of bounds"); + return NULL; } - return self->hash; -} -/* Sequence methods */ -static Py_ssize_t -memory_length(PyMemoryViewObject *self) -{ - CHECK_RELEASED_INT(self); - return get_shape0(&self->view); + ptr = (char *)view->buf; + ptr += view->strides[0] * index; + + ptr = ADJUST_PTR(ptr, view->suboffsets); + + return ptr; } -/* Alternate version of memory_subcript that only accepts indices. - Used by PySeqIter_New(). -*/ +/* Return the item at index. In a one-dimensional view, this is an object + with the type specified by view->format. Otherwise, the item is a sub-view. + The function is used in memory_subscript() and memory_as_sequence. */ static PyObject * -memory_item(PyMemoryViewObject *self, Py_ssize_t result) +memory_item(PyMemoryViewObject *self, Py_ssize_t index) { Py_buffer *view = &(self->view); + const char *fmt; CHECK_RELEASED(self); + + fmt = adjust_fmt(view); + if (fmt == NULL) + return NULL; + if (view->ndim == 0) { - PyErr_SetString(PyExc_IndexError, - "invalid indexing of 0-dim memory"); + PyErr_SetString(PyExc_TypeError, "invalid indexing of 0-dim memory"); return NULL; } if (view->ndim == 1) { - /* Return a bytes object */ - char *ptr; - ptr = (char *)view->buf; - if (result < 0) { - result += get_shape0(view); - } - if ((result < 0) || (result >= get_shape0(view))) { - PyErr_SetString(PyExc_IndexError, - "index out of bounds"); + char *ptr = ptr_from_index(view, index); + if (ptr == NULL) return NULL; - } - if (view->strides == NULL) - ptr += view->itemsize * result; - else - ptr += view->strides[0] * result; - if (view->suboffsets != NULL && - view->suboffsets[0] >= 0) { - ptr = *((char **)ptr) + view->suboffsets[0]; - } - return PyBytes_FromStringAndSize(ptr, view->itemsize); - } else { - /* Return a new memory-view object */ - Py_buffer newview; - memset(&newview, 0, sizeof(newview)); - /* XXX: This needs to be fixed so it actually returns a sub-view */ - return PyMemoryView_FromBuffer(&newview); + return unpack_single(ptr, fmt); } + + PyErr_SetString(PyExc_NotImplementedError, + "multi-dimensional sub-views are not implemented"); + return NULL; } -/* - mem[obj] returns a bytes object holding the data for one element if - obj fully indexes the memory view or another memory-view object - if it does not. +Py_LOCAL_INLINE(int) +init_slice(Py_buffer *base, PyObject *key, int dim) +{ + Py_ssize_t start, stop, step, slicelength; - 0-d memory-view objects can be referenced using ... or () but - not with anything else. 
- */ + if (PySlice_GetIndicesEx(key, base->shape[dim], + &start, &stop, &step, &slicelength) < 0) { + return -1; + } + + + if (base->suboffsets == NULL || dim == 0) { + adjust_buf: + base->buf = (char *)base->buf + base->strides[dim] * start; + } + else { + Py_ssize_t n = dim-1; + while (n >= 0 && base->suboffsets[n] < 0) + n--; + if (n < 0) + goto adjust_buf; /* all suboffsets are negative */ + base->suboffsets[n] = base->suboffsets[n] + base->strides[dim] * start; + } + base->shape[dim] = slicelength; + base->strides[dim] = base->strides[dim] * step; + + return 0; +} + +static int +is_multislice(PyObject *key) +{ + Py_ssize_t size, i; + + if (!PyTuple_Check(key)) + return 0; + size = PyTuple_GET_SIZE(key); + if (size == 0) + return 0; + + for (i = 0; i < size; i++) { + PyObject *x = PyTuple_GET_ITEM(key, i); + if (!PySlice_Check(x)) + return 0; + } + return 1; +} + +/* mv[obj] returns an object holding the data for one element if obj + fully indexes the memoryview or another memoryview object if it + does not. + + 0-d memoryview objects can be referenced using mv[...] or mv[()] + but not with anything else. */ static PyObject * memory_subscript(PyMemoryViewObject *self, PyObject *key) { @@ -611,247 +2030,567 @@ memory_subscript(PyMemoryViewObject *self, PyObject *key) view = &(self->view); CHECK_RELEASED(self); + if (view->ndim == 0) { - if (key == Py_Ellipsis || - (PyTuple_Check(key) && PyTuple_GET_SIZE(key)==0)) { + if (PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0) { + const char *fmt = adjust_fmt(view); + if (fmt == NULL) + return NULL; + return unpack_single(view->buf, fmt); + } + else if (key == Py_Ellipsis) { Py_INCREF(self); return (PyObject *)self; } else { - PyErr_SetString(PyExc_IndexError, - "invalid indexing of 0-dim memory"); + PyErr_SetString(PyExc_TypeError, + "invalid indexing of 0-dim memory"); return NULL; } } + if (PyIndex_Check(key)) { - Py_ssize_t result; - result = PyNumber_AsSsize_t(key, NULL); - if (result == -1 && PyErr_Occurred()) - return NULL; - return memory_item(self, result); + Py_ssize_t index; + index = PyNumber_AsSsize_t(key, PyExc_IndexError); + if (index == -1 && PyErr_Occurred()) + return NULL; + return memory_item(self, index); } else if (PySlice_Check(key)) { - Py_ssize_t start, stop, step, slicelength; + PyMemoryViewObject *sliced; - if (PySlice_GetIndicesEx(key, get_shape0(view), - &start, &stop, &step, &slicelength) < 0) { + sliced = (PyMemoryViewObject *)mbuf_add_view(self->mbuf, view); + if (sliced == NULL) + return NULL; + + if (init_slice(&sliced->view, key, 0) < 0) { + Py_DECREF(sliced); return NULL; } - - if (step == 1 && view->ndim == 1) { - Py_buffer newview; - void *newbuf = (char *) view->buf - + start * view->itemsize; - int newflags = view->readonly - ? 
PyBUF_CONTIG_RO : PyBUF_CONTIG; - - /* XXX There should be an API to create a subbuffer */ - if (view->obj != NULL) { - if (PyObject_GetBuffer(view->obj, &newview, newflags) == -1) - return NULL; - } - else { - newview = *view; - } - newview.buf = newbuf; - newview.len = slicelength * newview.itemsize; - newview.format = view->format; - newview.shape = &(newview.smalltable[0]); - newview.shape[0] = slicelength; - newview.strides = &(newview.itemsize); - return PyMemoryView_FromBuffer(&newview); - } - PyErr_SetNone(PyExc_NotImplementedError); + init_len(&sliced->view); + init_flags(sliced); + + return (PyObject *)sliced; + } + else if (is_multislice(key)) { + PyErr_SetString(PyExc_NotImplementedError, + "multi-dimensional slicing is not implemented"); return NULL; } - PyErr_Format(PyExc_TypeError, - "cannot index memory using \"%.200s\"", - key->ob_type->tp_name); + + PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key"); return NULL; } - -/* Need to support assigning memory if we can */ static int memory_ass_sub(PyMemoryViewObject *self, PyObject *key, PyObject *value) { - Py_ssize_t start, len, bytelen; - Py_buffer srcview; Py_buffer *view = &(self->view); - char *srcbuf, *destbuf; + Py_buffer src; + const char *fmt; + char *ptr; CHECK_RELEASED_INT(self); + + fmt = adjust_fmt(view); + if (fmt == NULL) + return -1; + if (view->readonly) { - PyErr_SetString(PyExc_TypeError, - "cannot modify read-only memory"); + PyErr_SetString(PyExc_TypeError, "cannot modify read-only memory"); return -1; } if (value == NULL) { - PyErr_SetString(PyExc_TypeError, - "cannot delete memory"); + PyErr_SetString(PyExc_TypeError, "cannot delete memory"); return -1; } - if (view->ndim != 1) { - PyErr_SetNone(PyExc_NotImplementedError); - return -1; - } - if (PyIndex_Check(key)) { - start = PyNumber_AsSsize_t(key, NULL); - if (start == -1 && PyErr_Occurred()) - return -1; - if (start < 0) { - start += get_shape0(view); + if (view->ndim == 0) { + if (key == Py_Ellipsis || + (PyTuple_Check(key) && PyTuple_GET_SIZE(key)==0)) { + ptr = (char *)view->buf; + return pack_single(ptr, value, fmt); } - if ((start < 0) || (start >= get_shape0(view))) { - PyErr_SetString(PyExc_IndexError, - "index out of bounds"); + else { + PyErr_SetString(PyExc_TypeError, + "invalid indexing of 0-dim memory"); return -1; } - len = 1; } - else if (PySlice_Check(key)) { - Py_ssize_t stop, step; + if (view->ndim != 1) { + PyErr_SetString(PyExc_NotImplementedError, + "memoryview assignments are currently restricted to ndim = 1"); + return -1; + } - if (PySlice_GetIndicesEx(key, get_shape0(view), - &start, &stop, &step, &len) < 0) { + if (PyIndex_Check(key)) { + Py_ssize_t index = PyNumber_AsSsize_t(key, PyExc_IndexError); + if (index == -1 && PyErr_Occurred()) return -1; - } - if (step != 1) { - PyErr_SetNone(PyExc_NotImplementedError); + ptr = ptr_from_index(view, index); + if (ptr == NULL) return -1; + return pack_single(ptr, value, fmt); + } + /* one-dimensional: fast path */ + if (PySlice_Check(key) && view->ndim == 1) { + Py_buffer dest; /* sliced view */ + Py_ssize_t arrays[3]; + int ret = -1; + + /* rvalue must be an exporter */ + if (PyObject_GetBuffer(value, &src, PyBUF_FULL_RO) < 0) + return ret; + + dest = *view; + dest.shape = &arrays[0]; dest.shape[0] = view->shape[0]; + dest.strides = &arrays[1]; dest.strides[0] = view->strides[0]; + if (view->suboffsets) { + dest.suboffsets = &arrays[2]; dest.suboffsets[0] = view->suboffsets[0]; } + + if (init_slice(&dest, key, 0) < 0) + goto end_block; + dest.len = dest.shape[0] * 
dest.itemsize; + + ret = copy_single(&dest, &src); + + end_block: + PyBuffer_Release(&src); + return ret; } - else { - PyErr_Format(PyExc_TypeError, - "cannot index memory using \"%.200s\"", - key->ob_type->tp_name); + else if (PySlice_Check(key) || is_multislice(key)) { + /* Call memory_subscript() to produce a sliced lvalue, then copy + rvalue into lvalue. This is already implemented in _testbuffer.c. */ + PyErr_SetString(PyExc_NotImplementedError, + "memoryview slice assignments are currently restricted " + "to ndim = 1"); return -1; } - if (PyObject_GetBuffer(value, &srcview, PyBUF_CONTIG_RO) == -1) { - return -1; + + PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key"); + return -1; +} + +static Py_ssize_t +memory_length(PyMemoryViewObject *self) +{ + CHECK_RELEASED_INT(self); + return self->view.ndim == 0 ? 1 : self->view.shape[0]; +} + +/* As mapping */ +static PyMappingMethods memory_as_mapping = { + (lenfunc)memory_length, /* mp_length */ + (binaryfunc)memory_subscript, /* mp_subscript */ + (objobjargproc)memory_ass_sub, /* mp_ass_subscript */ +}; + +/* As sequence */ +static PySequenceMethods memory_as_sequence = { + 0, /* sq_length */ + 0, /* sq_concat */ + 0, /* sq_repeat */ + (ssizeargfunc)memory_item, /* sq_item */ +}; + + +/**************************************************************************/ +/* Comparisons */ +/**************************************************************************/ + +#define CMP_SINGLE(p, q, type) \ + do { \ + type x; \ + type y; \ + memcpy((char *)&x, p, sizeof x); \ + memcpy((char *)&y, q, sizeof y); \ + equal = (x == y); \ + } while (0) + +Py_LOCAL_INLINE(int) +unpack_cmp(const char *p, const char *q, const char *fmt) +{ + int equal; + + switch (fmt[0]) { + + /* signed integers and fast path for 'B' */ + case 'B': return *((unsigned char *)p) == *((unsigned char *)q); + case 'b': return *((signed char *)p) == *((signed char *)q); + case 'h': CMP_SINGLE(p, q, short); return equal; + case 'i': CMP_SINGLE(p, q, int); return equal; + case 'l': CMP_SINGLE(p, q, long); return equal; + + /* boolean */ + #ifdef HAVE_C99_BOOL + case '?': CMP_SINGLE(p, q, _Bool); return equal; + #else + case '?': CMP_SINGLE(p, q, char); return equal; + #endif + + /* unsigned integers */ + case 'H': CMP_SINGLE(p, q, unsigned short); return equal; + case 'I': CMP_SINGLE(p, q, unsigned int); return equal; + case 'L': CMP_SINGLE(p, q, unsigned long); return equal; + + /* native 64-bit */ + #ifdef HAVE_LONG_LONG + case 'q': CMP_SINGLE(p, q, PY_LONG_LONG); return equal; + case 'Q': CMP_SINGLE(p, q, unsigned PY_LONG_LONG); return equal; + #endif + + /* ssize_t and size_t */ + case 'n': CMP_SINGLE(p, q, Py_ssize_t); return equal; + case 'N': CMP_SINGLE(p, q, size_t); return equal; + + /* floats */ + /* XXX DBL_EPSILON? */ + case 'f': CMP_SINGLE(p, q, float); return equal; + case 'd': CMP_SINGLE(p, q, double); return equal; + + /* bytes object */ + case 'c': return *p == *q; + + /* pointer */ + case 'P': CMP_SINGLE(p, q, void *); return equal; + + /* Py_NotImplemented */ + default: return -1; } - /* XXX should we allow assignment of different item sizes - as long as the byte length is the same? - (e.g. 
assign 2 shorts to a 4-byte slice) */ - if (srcview.itemsize != view->itemsize) { - PyErr_Format(PyExc_TypeError, - "mismatching item sizes for \"%.200s\" and \"%.200s\"", - view->obj->ob_type->tp_name, srcview.obj->ob_type->tp_name); - goto _error; - } - bytelen = len * view->itemsize; - if (bytelen != srcview.len) { - PyErr_SetString(PyExc_ValueError, - "cannot modify size of memoryview object"); - goto _error; - } - /* Do the actual copy */ - destbuf = (char *) view->buf + start * view->itemsize; - srcbuf = (char *) srcview.buf; - if (destbuf + bytelen < srcbuf || srcbuf + bytelen < destbuf) - /* No overlapping */ - memcpy(destbuf, srcbuf, bytelen); - else - memmove(destbuf, srcbuf, bytelen); +} - PyBuffer_Release(&srcview); - return 0; +/* Base case for recursive array comparisons. Assumption: ndim == 1. */ +static int +cmp_base(const char *p, const char *q, const Py_ssize_t *shape, + const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets, + const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets, + const char *fmt) +{ + Py_ssize_t i; + int equal; + + for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) { + const char *xp = ADJUST_PTR(p, psuboffsets); + const char *xq = ADJUST_PTR(q, qsuboffsets); + equal = unpack_cmp(xp, xq, fmt); + if (equal <= 0) + return equal; + } -_error: - PyBuffer_Release(&srcview); - return -1; + return 1; +} + +/* Recursively compare two multi-dimensional arrays that have the same + logical structure. Assumption: ndim >= 1. */ +static int +cmp_rec(const char *p, const char *q, + Py_ssize_t ndim, const Py_ssize_t *shape, + const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets, + const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets, + const char *fmt) +{ + Py_ssize_t i; + int equal; + + assert(ndim >= 1); + assert(shape != NULL); + assert(pstrides != NULL); + assert(qstrides != NULL); + + if (ndim == 1) { + return cmp_base(p, q, shape, + pstrides, psuboffsets, + qstrides, qsuboffsets, + fmt); + } + + for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) { + const char *xp = ADJUST_PTR(p, psuboffsets); + const char *xq = ADJUST_PTR(q, qsuboffsets); + equal = cmp_rec(xp, xq, ndim-1, shape+1, + pstrides+1, psuboffsets ? psuboffsets+1 : NULL, + qstrides+1, qsuboffsets ? 
qsuboffsets+1 : NULL, + fmt); + if (equal <= 0) + return equal; + } + + return 1; } static PyObject * memory_richcompare(PyObject *v, PyObject *w, int op) { - Py_buffer vv, ww; - int equal = 0; PyObject *res; + Py_buffer wbuf, *vv, *ww = NULL; + const char *vfmt, *wfmt; + int equal = -1; /* Py_NotImplemented */ - vv.obj = NULL; - ww.obj = NULL; if (op != Py_EQ && op != Py_NE) - goto _notimpl; - if ((PyMemoryView_Check(v) && IS_RELEASED(v)) || - (PyMemoryView_Check(w) && IS_RELEASED(w))) { + goto result; /* Py_NotImplemented */ + + assert(PyMemoryView_Check(v)); + if (BASE_INACCESSIBLE(v)) { equal = (v == w); - goto _end; + goto result; } - if (PyObject_GetBuffer(v, &vv, PyBUF_CONTIG_RO) == -1) { - PyErr_Clear(); - goto _notimpl; + vv = VIEW_ADDR(v); + + if (PyMemoryView_Check(w)) { + if (BASE_INACCESSIBLE(w)) { + equal = (v == w); + goto result; + } + ww = VIEW_ADDR(w); + } + else { + if (PyObject_GetBuffer(w, &wbuf, PyBUF_FULL_RO) < 0) { + PyErr_Clear(); + goto result; /* Py_NotImplemented */ + } + ww = &wbuf; } - if (PyObject_GetBuffer(w, &ww, PyBUF_CONTIG_RO) == -1) { + + vfmt = adjust_fmt(vv); + wfmt = adjust_fmt(ww); + if (vfmt == NULL || wfmt == NULL) { PyErr_Clear(); - goto _notimpl; + goto result; /* Py_NotImplemented */ } - if (vv.itemsize != ww.itemsize || vv.len != ww.len) - goto _end; + if (cmp_structure(vv, ww) < 0) { + PyErr_Clear(); + equal = 0; + goto result; + } - equal = !memcmp(vv.buf, ww.buf, vv.len); + if (vv->ndim == 0) { + equal = unpack_cmp(vv->buf, ww->buf, vfmt); + } + else if (vv->ndim == 1) { + equal = cmp_base(vv->buf, ww->buf, vv->shape, + vv->strides, vv->suboffsets, + ww->strides, ww->suboffsets, + vfmt); + } + else { + equal = cmp_rec(vv->buf, ww->buf, vv->ndim, vv->shape, + vv->strides, vv->suboffsets, + ww->strides, ww->suboffsets, + vfmt); + } -_end: - PyBuffer_Release(&vv); - PyBuffer_Release(&ww); - if ((equal && op == Py_EQ) || (!equal && op == Py_NE)) +result: + if (equal < 0) + res = Py_NotImplemented; + else if ((equal && op == Py_EQ) || (!equal && op == Py_NE)) res = Py_True; else res = Py_False; + + if (ww == &wbuf) + PyBuffer_Release(ww); Py_INCREF(res); return res; +} + +/**************************************************************************/ +/* Hash */ +/**************************************************************************/ + +static Py_hash_t +memory_hash(PyMemoryViewObject *self) +{ + if (self->hash == -1) { + Py_buffer *view = &self->view; + char *mem = view->buf; + + CHECK_RELEASED_INT(self); -_notimpl: - PyBuffer_Release(&vv); - PyBuffer_Release(&ww); - Py_RETURN_NOTIMPLEMENTED; + if (!view->readonly) { + PyErr_SetString(PyExc_ValueError, + "cannot hash writable memoryview object"); + return -1; + } + if (view->obj != NULL && PyObject_Hash(view->obj) == -1) { + /* Keep the original error message */ + return -1; + } + + if (!MV_C_CONTIGUOUS(self->flags)) { + mem = PyMem_Malloc(view->len); + if (mem == NULL) { + PyErr_NoMemory(); + return -1; + } + if (buffer_to_c_contiguous(mem, view) < 0) { + PyMem_Free(mem); + return -1; + } + } + + /* Can't fail */ + self->hash = _Py_HashBytes((unsigned char *)mem, view->len); + + if (mem != view->buf) + PyMem_Free(mem); + } + + return self->hash; } -static int -memory_traverse(PyMemoryViewObject *self, visitproc visit, void *arg) +/**************************************************************************/ +/* getters */ +/**************************************************************************/ + +static PyObject * +_IntTupleFromSsizet(int len, Py_ssize_t *vals) { - if (self->view.obj != 
NULL) - Py_VISIT(self->view.obj); - return 0; + int i; + PyObject *o; + PyObject *intTuple; + + if (vals == NULL) + return PyTuple_New(0); + + intTuple = PyTuple_New(len); + if (!intTuple) + return NULL; + for (i=0; i<len; i++) { + o = PyLong_FromSsize_t(vals[i]); + if (!o) { + Py_DECREF(intTuple); + return NULL; + } + PyTuple_SET_ITEM(intTuple, i, o); + } + return intTuple; } -static int -memory_clear(PyMemoryViewObject *self) +static PyObject * +memory_obj_get(PyMemoryViewObject *self) { - PyBuffer_Release(&self->view); - return 0; + Py_buffer *view = &self->view; + + CHECK_RELEASED(self); + if (view->obj == NULL) { + Py_RETURN_NONE; + } + Py_INCREF(view->obj); + return view->obj; } +static PyObject * +memory_nbytes_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return PyLong_FromSsize_t(self->view.len); +} -/* As mapping */ -static PyMappingMethods memory_as_mapping = { - (lenfunc)memory_length, /* mp_length */ - (binaryfunc)memory_subscript, /* mp_subscript */ - (objobjargproc)memory_ass_sub, /* mp_ass_subscript */ -}; +static PyObject * +memory_format_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return PyUnicode_FromString(self->view.format); +} -static PySequenceMethods memory_as_sequence = { - 0, /* sq_length */ - 0, /* sq_concat */ - 0, /* sq_repeat */ - (ssizeargfunc)memory_item, /* sq_item */ +static PyObject * +memory_itemsize_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return PyLong_FromSsize_t(self->view.itemsize); +} + +static PyObject * +memory_shape_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return _IntTupleFromSsizet(self->view.ndim, self->view.shape); +} + +static PyObject * +memory_strides_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return _IntTupleFromSsizet(self->view.ndim, self->view.strides); +} + +static PyObject * +memory_suboffsets_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return _IntTupleFromSsizet(self->view.ndim, self->view.suboffsets); +} + +static PyObject * +memory_readonly_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return PyBool_FromLong(self->view.readonly); +} + +static PyObject * +memory_ndim_get(PyMemoryViewObject *self) +{ + CHECK_RELEASED(self); + return PyLong_FromLong(self->view.ndim); +} + +static PyObject * +memory_c_contiguous(PyMemoryViewObject *self, PyObject *dummy) +{ + CHECK_RELEASED(self); + return PyBool_FromLong(MV_C_CONTIGUOUS(self->flags)); +} + +static PyObject * +memory_f_contiguous(PyMemoryViewObject *self, PyObject *dummy) +{ + CHECK_RELEASED(self); + return PyBool_FromLong(MV_F_CONTIGUOUS(self->flags)); +} + +static PyObject * +memory_contiguous(PyMemoryViewObject *self, PyObject *dummy) +{ + CHECK_RELEASED(self); + return PyBool_FromLong(MV_ANY_CONTIGUOUS(self->flags)); +} + +static PyGetSetDef memory_getsetlist[] = { + {"obj", (getter)memory_obj_get, NULL, NULL}, + {"nbytes", (getter)memory_nbytes_get, NULL, NULL}, + {"readonly", (getter)memory_readonly_get, NULL, NULL}, + {"itemsize", (getter)memory_itemsize_get, NULL, NULL}, + {"format", (getter)memory_format_get, NULL, NULL}, + {"ndim", (getter)memory_ndim_get, NULL, NULL}, + {"shape", (getter)memory_shape_get, NULL, NULL}, + {"strides", (getter)memory_strides_get, NULL, NULL}, + {"suboffsets", (getter)memory_suboffsets_get, NULL, NULL}, + {"c_contiguous", (getter)memory_c_contiguous, NULL, NULL}, + {"f_contiguous", (getter)memory_f_contiguous, NULL, NULL}, + {"contiguous", (getter)memory_contiguous, NULL, NULL}, + {NULL, NULL, NULL, NULL}, }; -/* Buffer methods */ -static 
PyBufferProcs memory_as_buffer = { - (getbufferproc)memory_getbuf, /* bf_getbuffer */ - (releasebufferproc)memory_releasebuf, /* bf_releasebuffer */ +static PyMethodDef memory_methods[] = { + {"release", (PyCFunction)memory_release, METH_NOARGS}, + {"tobytes", (PyCFunction)memory_tobytes, METH_NOARGS, NULL}, + {"tolist", (PyCFunction)memory_tolist, METH_NOARGS, NULL}, + {"cast", (PyCFunction)memory_cast, METH_VARARGS|METH_KEYWORDS, NULL}, + {"__enter__", memory_enter, METH_NOARGS}, + {"__exit__", memory_exit, METH_VARARGS}, + {NULL, NULL} }; PyTypeObject PyMemoryView_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) - "memoryview", - sizeof(PyMemoryViewObject), - 0, + "memoryview", /* tp_name */ + offsetof(PyMemoryViewObject, ob_array), /* tp_basicsize */ + sizeof(Py_ssize_t), /* tp_itemsize */ (destructor)memory_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ diff --git a/Objects/object.c b/Objects/object.c index 2665d21..70d320c 100644 --- a/Objects/object.c +++ b/Objects/object.c @@ -1650,6 +1650,9 @@ _Py_ReadyTypes(void) if (PyType_Ready(&PyProperty_Type) < 0) Py_FatalError("Can't initialize property type"); + if (PyType_Ready(&_PyManagedBuffer_Type) < 0) + Py_FatalError("Can't initialize managed buffer type"); + if (PyType_Ready(&PyMemoryView_Type) < 0) Py_FatalError("Can't initialize memoryview type"); diff --git a/PCbuild/_testbuffer.vcproj b/PCbuild/_testbuffer.vcproj new file mode 100644 index 0000000..795ea27 --- /dev/null +++ b/PCbuild/_testbuffer.vcproj @@ -0,0 +1,521 @@ +<?xml version="1.0" encoding="Windows-1252"?>
+<VisualStudioProject
+ ProjectType="Visual C++"
+ Version="9.00"
+ Name="_testbuffer"
+ ProjectGUID="{A2697BD3-28C1-4AEC-9106-8B748639FD16}"
+ RootNamespace="_testbuffer"
+ Keyword="Win32Proj"
+ TargetFrameworkVersion="196613"
+ >
+ <Platforms>
+ <Platform
+ Name="Win32"
+ />
+ <Platform
+ Name="x64"
+ />
+ </Platforms>
+ <ToolFiles>
+ </ToolFiles>
+ <Configurations>
+ <Configuration
+ Name="Debug|Win32"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd_d.vsprops"
+ CharacterSet="0"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ <Configuration
+ Name="Debug|x64"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd_d.vsprops;.\x64.vsprops"
+ CharacterSet="0"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ TargetEnvironment="3"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ <Configuration
+ Name="Release|Win32"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd.vsprops"
+ CharacterSet="0"
+ WholeProgramOptimization="1"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ <Configuration
+ Name="Release|x64"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd.vsprops;.\x64.vsprops"
+ CharacterSet="0"
+ WholeProgramOptimization="1"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ TargetEnvironment="3"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ <Configuration
+ Name="PGInstrument|Win32"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd.vsprops;.\pginstrument.vsprops"
+ CharacterSet="0"
+ WholeProgramOptimization="1"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ <Configuration
+ Name="PGInstrument|x64"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd.vsprops;.\x64.vsprops;.\pginstrument.vsprops"
+ CharacterSet="0"
+ WholeProgramOptimization="1"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ TargetEnvironment="3"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ TargetMachine="17"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ <Configuration
+ Name="PGUpdate|Win32"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd.vsprops;.\pgupdate.vsprops"
+ CharacterSet="0"
+ WholeProgramOptimization="1"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ <Configuration
+ Name="PGUpdate|x64"
+ ConfigurationType="2"
+ InheritedPropertySheets=".\pyd.vsprops;.\x64.vsprops;.\pgupdate.vsprops"
+ CharacterSet="0"
+ WholeProgramOptimization="1"
+ >
+ <Tool
+ Name="VCPreBuildEventTool"
+ />
+ <Tool
+ Name="VCCustomBuildTool"
+ />
+ <Tool
+ Name="VCXMLDataGeneratorTool"
+ />
+ <Tool
+ Name="VCWebServiceProxyGeneratorTool"
+ />
+ <Tool
+ Name="VCMIDLTool"
+ TargetEnvironment="3"
+ />
+ <Tool
+ Name="VCCLCompilerTool"
+ />
+ <Tool
+ Name="VCManagedResourceCompilerTool"
+ />
+ <Tool
+ Name="VCResourceCompilerTool"
+ />
+ <Tool
+ Name="VCPreLinkEventTool"
+ />
+ <Tool
+ Name="VCLinkerTool"
+ BaseAddress="0x1e1F0000"
+ TargetMachine="17"
+ />
+ <Tool
+ Name="VCALinkTool"
+ />
+ <Tool
+ Name="VCManifestTool"
+ />
+ <Tool
+ Name="VCXDCMakeTool"
+ />
+ <Tool
+ Name="VCBscMakeTool"
+ />
+ <Tool
+ Name="VCFxCopTool"
+ />
+ <Tool
+ Name="VCAppVerifierTool"
+ />
+ <Tool
+ Name="VCPostBuildEventTool"
+ />
+ </Configuration>
+ </Configurations>
+ <References>
+ </References>
+ <Files>
+ <Filter
+ Name="Source Files"
+ >
+ <File
+ RelativePath="..\Modules\_testbuffer.c"
+ >
+ </File>
+ </Filter>
+ </Files>
+ <Globals>
+ </Globals>
+</VisualStudioProject>
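For orientation, the Python-level behaviour implemented by the new code in Objects/memoryobject.c above can be sketched with stdlib objects only. This is an illustrative sketch, not part of the patch; it assumes a build where the C int is 4 bytes, and the commented results are what the new unpack/cast/getter paths are expected to produce:

    import struct

    buf = struct.pack('4i', 1, 2, 3, 4)          # 16 read-only bytes
    m = memoryview(buf).cast('i')                # reinterpret as native ints, no copy
    m[0], m[3]                                   # -> (1, 4): unpack_single() for format 'i'
    m.tolist()                                   # -> [1, 2, 3, 4]

    m2 = memoryview(buf).cast('i', shape=[2, 2])
    m2.tolist()                                  # -> [[1, 2], [3, 4]] via tolist_rec()

    # introspection served by the new getters (memory_*_get)
    m.nbytes, m.itemsize, m.format               # -> (16, 4, 'i')
    m2.ndim, m2.shape, m2.c_contiguous           # -> (2, (2, 2), True)

    # read-only views are hashable: hash(m) == hash(m.tobytes())
    hash(memoryview(buf)) == hash(buf)           # -> True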
diff --git a/PCbuild/pcbuild.sln b/PCbuild/pcbuild.sln
index 9efb6d9..992e66a 100644
--- a/PCbuild/pcbuild.sln
+++ b/PCbuild/pcbuild.sln
@@ -142,6 +142,11 @@ Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "python3dll", "python3dll.vc
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "xxlimited", "xxlimited.vcproj", "{F749B822-B489-4CA5-A3AD-CE078F5F338A}"
EndProject
+Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "_testbuffer", "_testbuffer.vcproj", "{A2697BD3-28C1-4AEC-9106-8B748639FD16}"
+ ProjectSection(ProjectDependencies) = postProject
+ {CF7AC3D1-E2DF-41D2-BEA6-1E2556CDEA26} = {CF7AC3D1-E2DF-41D2-BEA6-1E2556CDEA26}
+ EndProjectSection
+EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Win32 = Debug|Win32
@@ -609,6 +614,22 @@ Global
{F749B822-B489-4CA5-A3AD-CE078F5F338A}.Release|Win32.Build.0 = Release|Win32
{F749B822-B489-4CA5-A3AD-CE078F5F338A}.Release|x64.ActiveCfg = Release|x64
{F749B822-B489-4CA5-A3AD-CE078F5F338A}.Release|x64.Build.0 = Release|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|Win32.ActiveCfg = Debug|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|Win32.Build.0 = Debug|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|x64.ActiveCfg = Debug|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Debug|x64.Build.0 = Debug|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|Win32.ActiveCfg = PGInstrument|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|Win32.Build.0 = PGInstrument|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|x64.ActiveCfg = PGInstrument|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGInstrument|x64.Build.0 = PGInstrument|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|Win32.ActiveCfg = PGUpdate|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|Win32.Build.0 = PGUpdate|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|x64.ActiveCfg = PGUpdate|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.PGUpdate|x64.Build.0 = PGUpdate|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|Win32.ActiveCfg = Release|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|Win32.Build.0 = Release|Win32
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|x64.ActiveCfg = Release|x64
+ {A2697BD3-28C1-4AEC-9106-8B748639FD16}.Release|x64.Build.0 = Release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt
index f47d555..2146221 100644
--- a/PCbuild/readme.txt
+++ b/PCbuild/readme.txt
@@ -92,6 +92,9 @@ _socket
 _testcapi
     tests of the Python C API, run via Lib/test/test_capi.py, and
     implemented by module Modules/_testcapimodule.c
+_testbuffer
+    buffer protocol tests, run via Lib/test/test_buffer.py, and
+    implemented by module Modules/_testbuffer.c
 pyexpat
     Python wrapper for accelerated XML parsing, which incorporates stable
     code from the Expat project: http://sourceforge.net/projects/expat/
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -530,6 +530,8 @@ class PyBuildExt(build_ext):
         # Python C API test module
         exts.append( Extension('_testcapi', ['_testcapimodule.c'],
                                depends=['testcapi_long.h']) )
+        # Python PEP-3118 (buffer protocol) test module
+        exts.append( Extension('_testbuffer', ['_testbuffer.c']) )
         # profiler (_lsprof is for cProfile.py)
         exts.append( Extension('_lsprof', ['_lsprof.c', 'rotatingtree.c']) )
         # static Unicode character database
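The _testbuffer module registered here (and in the Windows project files above) drives Lib/test/test_buffer.py; the assignment and comparison paths it exercises look roughly as follows at the Python level. Again an illustrative sketch rather than part of the patch, with expected results in the comments:

    import array

    a = array.array('i', [0, 0, 0, 0])
    m = memoryview(a)                            # writable, format 'i', ndim 1
    m[1] = 42                                    # pack_single() converts and range-checks
    m[2:4] = array.array('i', [7, 9])            # one-dimensional slice assignment
    a.tolist()                                   # -> [0, 42, 7, 9]

    # equality is by value for views with equal structure (unpack_cmp/cmp_base)
    m == array.array('i', [0, 42, 7, 9])         # -> True

    # writable views are rejected by memory_hash():
    # hash(m) raises ValueError("cannot hash writable memoryview object")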