| Commit message | Author | Age | Files | Lines |
ssh://bitbucket.hdfgroup.org:7999/~brtnfld/hdf5_msb into develop
Implemented a process-0 read and then broadcast for collective read of full datasets (H5S_ALL) by all the processes in the file communicator.
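The commit above describes a read-and-broadcast optimization: when every rank reads the same full dataset, only rank 0 touches storage and the result is broadcast over the file communicator. Below is a minimal, MPI-free sketch of that pattern; `fake_read`, `read_and_bcast`, and the rank count are illustrative stand-ins (the real logic lives inside HDF5's parallel I/O layer, and the replication loop stands in for `MPI_Bcast`).

```c
#include <assert.h>
#include <string.h>

/* Toy simulation of "process-0 read, then broadcast": only rank 0 performs
 * the (expensive) read of the full dataset; the other ranks receive a copy.
 * All names here are illustrative, not actual HDF5 API. */
#define NRANKS 4
#define NELEMS 8

static int g_read_count = 0;            /* counts how many ranks hit storage */

/* Stand-in for the underlying dataset read */
static void fake_read(int *buf, int n)
{
    g_read_count++;
    for (int i = 0; i < n; i++)
        buf[i] = i * i;
}

/* Fill every rank's buffer; only rank 0 reads, the rest get a copy
 * (the memcpy loop stands in for MPI_Bcast on the file communicator). */
static void read_and_bcast(int bufs[NRANKS][NELEMS])
{
    fake_read(bufs[0], NELEMS);                    /* rank 0 reads */
    for (int r = 1; r < NRANKS; r++)               /* "broadcast" */
        memcpy(bufs[r], bufs[0], sizeof(bufs[0]));
}
```

The payoff is that storage sees one read instead of `NRANKS` identical reads, while every rank ends up with the same data.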
* commit '6f52793adcd5a14aa63731e3c33c9737b5a04d16':
Minor tweak to address JIRA HDFFV-10611 (which was already fixed).
~SONGYULU/hdf5_ray:bugfix/HDFFV-10635-hdf5-library-segmentation-fault to develop
* commit '3e8599591504c95d8a97100b9546174f6132dc97':
HDFFV-10635: Some minor changes to the test case and the comments in the library.
HDFFV-10635: add a test case.
HDFFV-10635: Allowing to write the same variable-length element more than once.
a comment from VOL plugin -> connector.
variables during testing, including connecting native, pass-through, and
dynamically loaded VOL connectors. Also bring native and pass-through
VOL connectors into alignment, removing the "H5VLnative_private.h" header.
comparison to return comparison value as parameter, so they can return error
values; "cancelled" -> "canceled"; switched order of 'wrap_object' and
'free_wrap_ctx' management callbacks.
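The fragment above describes changing comparison callbacks so the comparison result is returned through an out-parameter, freeing the return value to carry an error code. A minimal sketch of that signature change, with hypothetical names (`token_cmp` and `herr_t_sketch` are illustrative, not the actual HDF5 VOL API):

```c
#include <string.h>

/* Sketch of the callback pattern: the comparison result comes back through
 * the cmp_value out-parameter, and the return value is reserved for
 * success (0) / error (-1) status. Names are illustrative only. */
typedef int herr_t_sketch;

static herr_t_sketch
token_cmp(const void *t1, const void *t2, size_t len, int *cmp_value)
{
    if (t1 == NULL || t2 == NULL || cmp_value == NULL)
        return -1;                      /* error signaled via return value */
    *cmp_value = memcmp(t1, t2, len);   /* comparison via out-parameter   */
    return 0;
}
```

The design point is that a plain `int` comparison return (negative/zero/positive) leaves no value free to indicate failure; splitting the two channels makes error handling unambiguous.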
~SONGYULU/hdf5_ray:bugfix/HDFFV-10607-patches-for-warnings-in-the-core to develop
* commit '14de476c8cb1b797ad43bea3c71dfb32bcd2131c':
HDFFV-10607 Fixing two compiler warnings in the library.
|
| | |/ |
|
| | | |
|
| |/
| |
| |
| |
| |
| | |
where available. OpenMPI 4.0 removed the deprecated MPI-1
MPI_Type_extent() call by default, so this avoids needing
a special OpenMPI build.
Add more sample batch scripts, specifically for sbatch, not for knl
cross compile.
Don't run parallel tests when no parallel test script is configured in
HDF5options.cmake.
deserializing a connector's info object.
~SONGYULU/hdf5_ray:bugfix/HDFFV-10571-cve-2018-17237-divided-by-zero to develop
* commit 'c923cdad6e515c842f3795a5b6d754ad94021e09':
HDFFV-10571: Minor format changes.
HDFFV-10571: Minor change - reformatting the error check.
HDFFV-10571: Minor change - adding the error check right after decoding of chunk dimension for safeguard.
HDFFV-10571: Minor change - revised the comment to be clearer.
HDFFV-10571 Divided by Zero vulnerability. Minor fix: I added an error check to make sure the chunk size is not zero.
~SONGYULU/hdf5_ray:bugfix/HDFFV-10601-issues-with-chunk-cache-hash to develop
* commit 'cd13d24e5140578a880aebe4e2d8b899179d0870':
HDFFV-10601: I added error checking to the HDF5 functions.
HDFFV10601: Adding performance test to verify the improvement.
HDFFV-10601: I changed to a better way to calculate the number of chunks in a dataset.
HDFFV-10601 Issues with chunk cache hash value calculation:
https://bitbucket.hdfgroup.org/scm/~songyulu/hdf5_ray into bugfix/HDFFV-10601-issues-with-chunk-cache-hash
1. H5D__chunk_hash_val: When the number of chunks in the fastest changing dimension is larger than the number of slots in the hash table, H5D__chunk_hash_val abandons the normal hash value calculation algorithm and simply uses the scaled dimension. This causes chunks in a selection that cuts across chunks in dimensions other than the fastest changing to all have the same hash value, so they always evict each other from the cache, with an obvious, major performance impact. The fix eliminates the check for the number of slots in this function and always uses the full algorithm.
2. H5D__chunk_init: When the scaled dimensions (number of chunks in each dimension) are calculated in H5D__chunk_init, a simple divide ("/") operator is used with the dataset size in elements and the chunk size in elements. While this is fine when the dataset size is an exact multiple of the chunk size, in other cases, since "/" rounds down, it yields a scaled dimension one less than it should be (it ignores the partial edge chunk). This has trickle-down effects on hash value calculation that can cause excess hash value collisions and therefore performance issues. The fix changes the calculation to (((dataset_size - 1) / chunk_size) + 1).
Tested the build with Autotools and CMake.
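Point 2 above replaces floor division with ceiling division when counting chunks per dimension. A small sketch of the before/after arithmetic (function names are illustrative; the real change is inside H5D__chunk_init):

```c
#include <stdint.h>

/* Old behavior: integer "/" rounds down, so a partial edge chunk is not
 * counted whenever dataset_size is not an exact multiple of chunk_size. */
static uint64_t
n_chunks_floor(uint64_t dataset_size, uint64_t chunk_size)
{
    return dataset_size / chunk_size;
}

/* Fixed behavior, as given in the commit message:
 * (((dataset_size - 1) / chunk_size) + 1) rounds up, counting the partial
 * edge chunk. Assumes dataset_size > 0 and chunk_size > 0. */
static uint64_t
n_chunks_ceil(uint64_t dataset_size, uint64_t chunk_size)
{
    return ((dataset_size - 1) / chunk_size) + 1;
}
```

For example, a 100-element dimension with 30-element chunks really holds four chunks (three full plus one partial edge chunk), but floor division reports three; the undercount then skews every downstream hash value.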
H5Fget_access_plist(). Also, other misc. cleanups, etc.