author    Quincey Koziol <koziol@hdfgroup.org>  1998-07-08 14:54:54 (GMT)
committer Quincey Koziol <koziol@hdfgroup.org>  1998-07-08 14:54:54 (GMT)
commit    bd1e676c521d881b3143829f493a28b5ced1294b (patch)
tree      69c50f9fe21ce87f293d8617a6bd51b4cc1e0244 /doc/html/study.html
parent    73345095897d9698bb1f2f7df830bf80a56dc65a (diff)
[svn-r467] Restructuring documentation.
Diffstat (limited to 'doc/html/study.html')
-rw-r--r--  doc/html/study.html  172
1 file changed, 172 insertions(+), 0 deletions(-)
diff --git a/doc/html/study.html b/doc/html/study.html
new file mode 100644
index 0000000..f9e192d
--- /dev/null
+++ b/doc/html/study.html
@@ -0,0 +1,172 @@
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
+<html>
+ <head>
+ <title>Testing the chunked layout of HDF5</title>
+ </head>
+
+ <body>
+ <h1>Testing the chunked layout of HDF5</h1>
+
+    <p>These are the results of studying the chunked layout policy in
+      HDF5.  A 1000 by 1000 array of integers was written repeatedly
+      to a file dataset, extending the dataset with each write, to
+      create, in the end, a 5000 by 5000 array of 4-byte integers for
+      a total data storage size of 100 million bytes.
+
+ <p>
+ <center>
+ <img alt="Order that data was written" src="study_p1.gif">
+ <br><b>Fig 1: Write-order of Output Blocks</b>
+ </center>
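+
+    <p>The benchmark program itself is not reproduced here, but the
+      write pattern of Fig 1 can be sketched with the public HDF5 C
+      API as it exists today (<code>H5Dcreate2</code>,
+      <code>H5Dset_extent</code>), not the original test program.
+      The file name <code>study.h5</code> and dataset name
+      <code>array</code> are invented for the example, and error
+      checking is omitted.
+
+    <pre>
+/* Minimal sketch: write a 5000x5000 dataset of ints as 25 blocks of
+ * 1000x1000, extending a chunked, unlimited dataset before each write. */
+#include "hdf5.h"
+
+#define FULL  5000                 /* final extent in each dimension */
+#define BLOCK 1000                 /* size of each output block      */
+#define CHUNK  500                 /* chunk size under test          */
+
+int main(void)
+{
+    hsize_t cur[2]   = {BLOCK, BLOCK};                 /* current extent */
+    hsize_t maxd[2]  = {H5S_UNLIMITED, H5S_UNLIMITED};
+    hsize_t chunk[2] = {CHUNK, CHUNK};
+    hsize_t blk[2]   = {BLOCK, BLOCK};
+    static int buf[BLOCK][BLOCK];  /* contents are irrelevant to the I/O pattern */
+
+    hid_t file  = H5Fcreate("study.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+    hid_t space = H5Screate_simple(2, cur, maxd);
+    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
+    H5Pset_chunk(dcpl, 2, chunk);
+    hid_t dset  = H5Dcreate2(file, "array", H5T_NATIVE_INT, space,
+                             H5P_DEFAULT, dcpl, H5P_DEFAULT);
+
+    /* 25 output blocks of 1000x1000 integers, written in row-major
+     * order (Fig 1), extending the dataset as needed before each write. */
+    for (hsize_t i = 0; i < FULL; i += BLOCK) {
+        for (hsize_t j = 0; j < FULL; j += BLOCK) {
+            hsize_t start[2] = {i, j};
+            if (i + BLOCK > cur[0]) cur[0] = i + BLOCK;
+            if (j + BLOCK > cur[1]) cur[1] = j + BLOCK;
+            H5Dset_extent(dset, cur);
+
+            hid_t fspace = H5Dget_space(dset);
+            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, blk, NULL);
+            hid_t mspace = H5Screate_simple(2, blk, NULL);
+            H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);
+            H5Sclose(mspace);
+            H5Sclose(fspace);
+        }
+    }
+
+    H5Dclose(dset);
+    H5Pclose(dcpl);
+    H5Sclose(space);
+    H5Fclose(file);
+    return 0;
+}
+    </pre>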
+
+    <p>After the array was written, it was read back in blocks of 500
+      by 500 elements in row-major order (that is, the top-left
+      quadrant of output block one, then the top-right quadrant of
+      output block one, then the top-left quadrant of output block
+      two, etc.).
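+
+    <p>Continuing the sketch above (again only as an illustration, not
+      the original test code), the read-back phase selects 500 by 500
+      element hyperslabs of the file dataspace in row-major order:
+
+    <pre>
+/* Read the array back in 500x500-element blocks, row-major order.
+ * `dset` is the dataset handle from the write sketch above. */
+hsize_t rblk[2] = {500, 500};
+static int rbuf[500][500];
+hid_t mspace = H5Screate_simple(2, rblk, NULL);
+hid_t fspace = H5Dget_space(dset);
+
+for (hsize_t i = 0; i < 5000; i += 500) {           /* rows of read blocks    */
+    for (hsize_t j = 0; j < 5000; j += 500) {       /* columns of read blocks */
+        hsize_t start[2] = {i, j};
+        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, rblk, NULL);
+        H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, rbuf);
+    }
+}
+H5Sclose(mspace);
+H5Sclose(fspace);
+    </pre>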
+
+ <p>I tried to answer two questions:
+ <ul>
+ <li>How does the storage overhead change as the chunk size
+ changes?
+ <li>What does the disk seek pattern look like as the chunk size
+ changes?
+ </ul>
+
+    <p>I started with chunk sizes that were multiples of the read
+      block size, i.e., k*(500, 500).
+
+ <p>
+ <center>
+ <table border>
+ <caption align=bottom>
+ <b>Table 1: Total File Overhead</b>
+ </caption>
+ <tr>
+ <th>Chunk Size (elements)</th>
+ <th>Meta Data Overhead (ppm)</th>
+ <th>Raw Data Overhead (ppm)</th>
+ </tr>
+
+ <tr align=center>
+ <td>500 by 500</td>
+ <td>85.84</td>
+ <td>0.00</td>
+ </tr>
+ <tr align=center>
+ <td>1000 by 1000</td>
+ <td>23.08</td>
+ <td>0.00</td>
+ </tr>
+ <tr align=center>
+ <td>5000 by 1000</td>
+ <td>23.08</td>
+ <td>0.00</td>
+ </tr>
+ <tr align=center>
+ <td>250 by 250</td>
+ <td>253.30</td>
+ <td>0.00</td>
+ </tr>
+ <tr align=center>
+ <td>499 by 499</td>
+ <td>85.84</td>
+ <td>205164.84</td>
+ </tr>
+ </table>
+ </center>
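+
+    <p>As a rough illustration (not how Table 1 was produced), an
+      overall overhead figure in parts per million can be estimated by
+      comparing the size of the file on disk with the size of the raw
+      data it holds; this lumps meta-data and raw-data overhead
+      together.
+
+    <pre>
+/* Back-of-envelope overhead, in parts per million, of a file that is
+ * supposed to hold raw_bytes of array data. */
+#include <sys/stat.h>
+
+static double overhead_ppm(const char *filename, double raw_bytes)
+{
+    struct stat sb;
+    if (stat(filename, &sb) < 0)
+        return -1.0;                         /* file could not be examined */
+    return ((double)sb.st_size - raw_bytes) / raw_bytes * 1.0e6;
+}
+
+/* e.g. overhead_ppm("study.h5", 5000.0 * 5000.0 * 4.0) */
+    </pre>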
+
+ <hr>
+ <p>
+ <center>
+ <img alt="500x500" src="study_500x500.gif">
+ <br><b>Fig 2: Chunk size is 500x500</b>
+ </center>
+
+    <p>The first half of Figure 2 shows output to the file while the
+      second half shows input.  Each dot represents a file-level I/O
+      request, and the lines that connect the dots are only for visual
+      clarity; the size of a request is not indicated in the graph.
+      The output block size is four times the chunk size, which
+      results in four file-level write requests per block for a total
+      of 100 requests.  Since file space for the chunks was allocated
+      in output order, and the input block size is 1/4 the output
+      block size, the input shows a staircase effect.  Each input
+      request results in exactly one file-level read request.  The
+      downward spike at about the 60-millionth byte is probably the
+      result of a cache miss for the B-tree, and the downward spike at
+      the end is probably a cache flush or file boot block update.
+
+ <hr>
+ <p>
+ <center>
+ <img alt="1000x1000" src="study_1000x1000.gif">
+        <br><b>Fig 3: Chunk size is 1000x1000</b>
+ </center>
+
+    <p>In this test I increased the chunk size to match the output
+      block size and one can see from the first half of the graph that
+      25 file-level write requests were issued, one for each output
+      block.  The read half of the test shows that four times as much
+      data was read as was written.  This is because HDF5 must read an
+      entire chunk to satisfy any request that falls within that
+      chunk, which is done because (1) if the data is compressed, the
+      entire chunk must be decompressed, and (2) the library assumes
+      that the chunk size was chosen to optimize disk performance.
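+
+    <p>As a quick check of that factor of four (an illustration of the
+      arithmetic only):
+
+    <pre>
+/* Each of the 100 500x500-element read requests falls inside one
+ * 1000x1000 chunk, and the whole chunk must be read to satisfy it. */
+static long read_amplification(void)
+{
+    long chunk_bytes = 1000L * 1000L * 4L;              /*   4,000,000 bytes */
+    long read_reqs   = (5000L / 500L) * (5000L / 500L); /* 100 read requests */
+    long bytes_read  = read_reqs * chunk_bytes;         /* 400,000,000 bytes */
+    long bytes_raw   = 5000L * 5000L * 4L;              /* 100,000,000 bytes */
+    return bytes_read / bytes_raw;                      /* == 4              */
+}
+    </pre>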
+
+ <hr>
+ <p>
+ <center>
+ <img alt="5000x1000" src="study_5000x1000.gif">
+        <br><b>Fig 4: Chunk size is 5000x1000</b>
+ </center>
+
+    <p>Increasing the chunk size further results in even worse
+      performance, since both the read and write halves of the test
+      re-read and re-write vast amounts of data.  This demonstrates
+      that chunk sizes should not be made much larger than the typical
+      partial I/O request.
+
+ <hr>
+ <p>
+ <center>
+ <img alt="250x250" src="study_250x250.gif">
+        <br><b>Fig 5: Chunk size is 250x250</b>
+ </center>
+
+    <p>If the chunk size is decreased then the amount of data
+      transferred between the disk and the library is optimal even
+      without caching, but the amount of meta data required to
+      describe the chunk locations increases to about 250 parts per
+      million.  One can also see that the final downward spike
+      contains more file-level write requests as the meta data is
+      flushed to disk just before the file is closed.
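+
+    <p>Assuming the ppm figures are taken relative to the 100 million
+      bytes of raw data (which may not be exactly how Table 1 was
+      computed), the chunk-index cost works out to a few tens of bytes
+      per chunk:
+
+    <pre>
+/* Rough bookkeeping for the 250x250 case: (5000/250)^2 = 400 chunks,
+ * charged about 253 ppm of the raw data size as meta data. */
+static long meta_bytes_per_chunk(void)
+{
+    long n_chunks   = (5000L / 250L) * (5000L / 250L);   /* 400 chunks    */
+    long meta_bytes = (long)(253.30e-6 * 100000000.0);   /* ~25,330 bytes */
+    return meta_bytes / n_chunks;                        /* ~63 bytes     */
+}
+    </pre>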
+
+ <hr>
+ <p>
+ <center>
+ <img alt="499x499" src="study_499x499.gif">
+        <br><b>Fig 6: Chunk size is 499x499</b>
+ </center>
+
+    <p>This test shows the result of choosing a chunk size which is
+      close to the I/O block size.  Because the total size of the
+      array isn't a multiple of the chunk size, the library allocates
+      an extra zone of chunks around the top and right edges of the
+      array which are only partially filled.  This results in
+      20,516,484 extra bytes of storage, about a 20% increase in the
+      total raw data storage size, while the amount of meta data
+      overhead is the same as for the 500 by 500 test.  In addition,
+      the mismatch causes entire chunks to be read in order to update
+      a few elements along the edge of a chunk, which results in a
+      3.6-fold increase in the amount of data transferred.
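+
+    <p>The extra raw data storage can be verified with a little
+      arithmetic:
+
+    <pre>
+/* 5000 is not a multiple of 499, so 11 chunks are allocated per
+ * dimension and the outermost row and column of chunks are only
+ * partially filled. */
+static long partial_chunk_waste(void)
+{
+    long per_dim   = (5000L + 499L - 1L) / 499L;           /* ceil(5000/499) = 11 */
+    long allocated = per_dim * per_dim * 499L * 499L * 4L; /* 120,516,484 bytes   */
+    long raw       = 5000L * 5000L * 4L;                   /* 100,000,000 bytes   */
+    return allocated - raw;                                /*  20,516,484 bytes   */
+}
+    </pre>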
+
+ <hr>
+ <address><a href="mailto:matzke@llnl.gov">Robb Matzke</a></address>
+<!-- Created: Fri Jan 30 21:04:49 EST 1998 -->
+<!-- hhmts start -->
+Last modified: Fri Jan 30 23:51:31 EST 1998
+<!-- hhmts end -->
+ </body>
+</html>