hdf5, h5py, chunking

How to set a proper chunk size in HDF5


According to this answer, a proper chunk size is important for optimizing I/O performance.

I have 3000 jpg images, whose sizes vary from 180 kB to 220 kB. I am going to save them as bytes.

I know two methods to do it. One is to concatenate all the jpg bytes into a single dataset; the other is to save each jpg in its own dataset.

How to decide the best chunk size for each method?


import os
import time

import h5py
import numpy as np


def save_images_separate(input_folder, hdf5_path):
    start_time = time.time()

    image_files = [f for f in os.listdir(input_folder) if f.endswith('.jpg')]

    with h5py.File(hdf5_path, 'w') as hdf5_file:
        for i, image_file in enumerate(image_files):
            image_path = os.path.join(input_folder, image_file)
            with open(image_path, 'rb') as img_file:
                image_data = img_file.read()
            # One dataset per image: store the raw jpg bytes as a 1-D uint8 array.
            hdf5_file.create_dataset(f'images/{i}', data=np.frombuffer(image_data, dtype=np.uint8))
    end_time = time.time()
    return end_time - start_time


def save_images_concatenated(input_folder, hdf5_path):
    start_time = time.time()

    image_files = [f for f in os.listdir(input_folder) if f.endswith('.jpg')]
    all_images_data = bytearray()
    image_lengths = []

    for image_file in image_files:
        image_path = os.path.join(input_folder, image_file)
        with open(image_path, 'rb') as img_file:
            image_data = img_file.read()
            all_images_data.extend(image_data)
            image_lengths.append(len(image_data))

    with h5py.File(hdf5_path, 'w') as hdf5_file:
        # One big 1-D byte dataset with all jpg data, plus the per-image lengths
        # needed to slice individual images back out later.
        hdf5_file.create_dataset('images/all_images', data=all_images_data)
        hdf5_file.create_dataset('images/image_lengths', data=image_lengths)

    end_time = time.time()
    return end_time - start_time
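
For context, here is a minimal sketch of how one image could be read back under each layout (it assumes the dataset names used above); this single-image read pattern is what the chunk size ultimately has to serve:

import h5py
import numpy as np


def read_image_separate(hdf5_path, index):
    # Layout 1: one dataset per image -- read the whole (small) dataset as raw jpg bytes.
    with h5py.File(hdf5_path, 'r') as f:
        return f[f'images/{index}'][()].tobytes()


def read_image_concatenated(hdf5_path, index):
    # Layout 2: one big byte dataset -- use the stored lengths to slice out one image.
    with h5py.File(hdf5_path, 'r') as f:
        lengths = f['images/image_lengths'][()]
        offsets = np.concatenate(([0], np.cumsum(lengths)))
        start, end = offsets[index], offsets[index + 1]
        return f['images/all_images'][start:end].tobytes()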

Solution

  • Background on HDF5 dataset storage

    By default, HDF5 dataset storage is contiguous. Chunked storage is an option to improve I/O performance for large datasets. Here's the difference: a contiguous dataset is stored in the file as one monolithic block, while a chunked dataset is split into equal-sized chunks that are stored and read individually, so a partial read only has to touch the chunks that intersect the requested slice.

    So, the first decision is whether to use default (contiguous) or chunked storage. In general, chunked storage is not necessary with small datasets -- e.g., those that can be easily read into memory. It's most helpful when you have large datasets, especially when you only need to read small slices of data at a time (e.g., reading 1 image from a dataset with 1000s of images). [Note: If you create resizable datasets OR use compression, chunked storage is automatically enabled and h5py will set a default chunks value.]

    If you are new to HDF5, realize that manually setting an inappropriate chunk size can have negative consequences. If you decide to use chunked storage, start by letting h5py estimate the chunk shape by setting chunks=True, as in the sketch below.
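
    A minimal sketch of this behavior (the file and dataset names are made up for illustration): with default settings a dataset is contiguous and its .chunks attribute is None, chunks=True asks h5py to pick a chunk shape, and enabling compression switches on chunking implicitly.

    import h5py
    import numpy as np

    data = np.zeros(1_000_000, dtype=np.uint8)

    with h5py.File('chunk_demo.h5', 'w') as f:   # hypothetical file name
        # Default: contiguous storage, no chunks.
        d0 = f.create_dataset('contiguous', data=data)
        print(d0.chunks)                         # None

        # chunks=True lets h5py estimate a reasonable chunk shape.
        d1 = f.create_dataset('auto_chunked', data=data, chunks=True)
        print(d1.chunks)                         # the auto-chosen chunk shape

        # Compression (or a resizable maxshape) enables chunked storage automatically.
        d2 = f.create_dataset('compressed', data=data, compression='gzip')
        print(d2.chunks)                         # chunked even though chunks= was not set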

    Recommendation for posted question:

    If you save each image in a unique dataset, you don't need chunked storage (since your images are only 180-220 kB). It's harder to make a recommendation for the concatenated storage method because of the variable image sizes and the 1-D dataset shape: image boundaries won't line up with chunk boundaries, so most single-image reads will touch 2 chunks, and occasionally just 1. Start by setting chunks to the maximum image size (220 kB) and test performance; a sketch follows below.
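
    For example, a hypothetical variant of the question's save_images_concatenated could write the concatenated bytes with an explicit 1-D chunk equal to the largest image (variable names mirror the question's code):

    import h5py
    import numpy as np

    CHUNK_BYTES = 220 * 1024   # ~ the maximum image size

    def write_concatenated(hdf5_path, all_images_data, image_lengths):
        with h5py.File(hdf5_path, 'w') as hdf5_file:
            hdf5_file.create_dataset(
                'images/all_images',
                data=np.frombuffer(all_images_data, dtype=np.uint8),
                chunks=(CHUNK_BYTES,))           # each single-image read touches 1-2 chunks
            hdf5_file.create_dataset('images/image_lengths', data=image_lengths)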

    More general recommendations:

    If you decide to set chunks manually, careful analysis is required (especially for multi-dimensional datasets). It affects several performance characteristics: file size, creation time, sequential read time, and random read time.

    The recommended total chunk size should be between 10 KiB and 1 MiB (larger for larger datasets). Normally it's a good idea to set your chunk shape to match the dimensions you will use to read the dataset. Be careful not to define a chunk that is too small (lots of tiny I/O operations and chunk-index overhead) or too large (every read has to pull in an entire chunk, even for a small slice). The sketch below shows a chunk shape matched to a one-image-at-a-time read pattern.
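
    As an illustration only (a hypothetical layout, not the jpg-byte layout from the question): if the images were stored decoded as one multi-dimensional array and read back one at a time, chunking by whole images keeps every read inside a single chunk.

    import h5py
    import numpy as np

    # Hypothetical layout: 3000 decoded 256x256 RGB images in one 4-D dataset.
    # A chunk of one whole image is 256*256*3 bytes (~192 KiB), which sits
    # inside the suggested 10 KiB - 1 MiB window.
    with h5py.File('decoded_images.h5', 'w') as f:
        dset = f.create_dataset('images', shape=(3000, 256, 256, 3),
                                dtype=np.uint8, chunks=(1, 256, 256, 3))
        dset[0] = np.zeros((256, 256, 3), dtype=np.uint8)   # write one image

    with h5py.File('decoded_images.h5', 'r') as f:
        img = f['images'][0]   # this read touches exactly one chunk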

    The PyTables documentation has an excellent analysis and discussion for expert users. Here is the link: Fine-tuning the Chunksize