TIFF to GPU memory via cog3pio backend entrypoint #81
Conversation
Read TIFF data into xarray via cog3pio's experimental CudaCogReader struct that uses nvTIFF as its backend. Use cupy.from_dlpack to read the DLPack tensor, and reshape the 1-D array into a 3-D array (CHW form), setting the coordinates as appropriate. Added some API docs and basic unit tests. Cherry-picked from weiji14/cog3pio#71
```python
>>> dataarray: xr.DataArray = xr.open_dataarray(
...     filename_or_obj="https://github.com/OSGeo/gdal/raw/v3.11.0/autotest/gcore/data/byte_zstd.tif",
...     engine="cog3pio",
...     device_id=0,  # cuda:0
... )
```
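The description above mentions reading the DLPack tensor with `cupy.from_dlpack` and reshaping the 1-D array into CHW form. A rough CPU-side sketch of that step, using NumPy in place of CuPy (the band/height/width values are assumptions for illustration, not read from the file):

```python
import numpy as np

# Stand-in for the 1-D tensor that CudaCogReader exposes via DLPack;
# with a GPU present this would be cp.from_dlpack(...) instead.
bands, height, width = 1, 20, 20  # assumed dimensions for illustration
flat = np.arange(bands * height * width, dtype=np.uint8)

# Zero-copy import through the DLPack protocol, then reshape to CHW.
chw = np.from_dlpack(flat).reshape(bands, height, width)
print(chw.shape)  # (1, 20, 20)
```

The reshape is metadata-only, so no device-side copy happens; the same pattern applies on GPU with `cupy.from_dlpack`.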
How would this be handled on a multi-GPU system? You may want to load many TIFF files into a dask-cupy-xarray object where different chunks are on different GPUs. This API feels a little inflexible for that use case.
Exactly the feedback I needed! Short answer is: I'm probably gonna change the signature of this parameter to `device_id: int | None = None`, where the default of `None` means to get the 'current device' from `cp.cuda.runtime.getDevice()`.
Longer answer is: I'm currently using `nvtiffDecoderCreateSimple()`, which uses the default memory allocator. The multi-GPU case would probably mean I need to use `nvtiffDecoderCreate` instead, which allows a custom device allocator, which I presume dask will have some way of handling. I see dask's scope as more to do with parallel compute than I/O from a file format, so would appreciate any advice here (the xarray <-> dask integration piece has always felt very CPU-centric to me 🙂)
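A minimal sketch of the proposed default-resolution logic (the function name is hypothetical, and it falls back to device 0 when CuPy isn't importable or no CUDA runtime is around):

```python
def resolve_device_id(device_id=None):
    # Proposed semantics: None means 'use the current CUDA device'.
    if device_id is not None:
        return device_id
    try:
        import cupy as cp
        return cp.cuda.runtime.getDevice()
    except Exception:  # CuPy missing or no CUDA runtime available
        return 0

print(resolve_device_id(1))  # 1 (an explicit device wins over the default)
```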
Note
Alternatively, I also considered having the parameter as just `device`, taking a `cupy.cuda.Device` object. I didn't go with this option (yet) because I'd prefer something more cross-framework (e.g. allowing `torch.cuda.device` or `tf.device`) to get the device_id, something touched on in data-apis/array-api#972, which proposes a `__dlpack_device__()` protocol.
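A sketch of what leaning on `__dlpack_device__()` could look like — any framework object implementing the protocol would work. The class and helper below are hypothetical stand-ins, not cog3pio API:

```python
KDLCUDA = 2  # DLDeviceType enum value for CUDA in the DLPack spec

class FakeCudaArray:
    """Stand-in for a cupy/torch/tf object implementing the protocol."""
    def __dlpack_device__(self):
        return (KDLCUDA, 0)  # (device_type, device_id)

def device_id_from(obj):
    # Hypothetical helper: extract a device index in a framework-agnostic way.
    dev_type, dev_id = obj.__dlpack_device__()
    if dev_type != KDLCUDA:
        raise ValueError("expected a CUDA device")
    return dev_id

print(device_id_from(FakeCudaArray()))  # 0
```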
> the default of None means to get the 'current device'
This would probably be fine for a multi-GPU setup. Generally the `NVIDIA_VISIBLE_DEVICES` env var is set to a unique index for each worker (in Dask this is something `dask_cuda.LocalCUDACluster` and `dask_cuda.CUDAWorker` handle), so when a worker uses the "current device" it would be different for each worker.
> I see dask's scope as more to do with parallel compute, not I/O from a file format
It's just a task scheduler with some high-level collections. It doesn't matter if the task is compute, IO or anything else (is there anything else? 😅). But overall you need to think about how the high-level collection object filters down to the lower level Dask calls.
If I have a VM with four GPUs and I call something along the lines of `xr.open_mfdataset(filename_or_obj="mytiffs/*.tiff", engine="cog3pio")`, you want to avoid being explicit with the device, otherwise everything will end up on one device, wasting the other three.
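One way to picture the failure mode: a hypothetical round-robin assignment (nothing like this exists in cog3pio or dask-cuda under this name) spreads files across devices, whereas a hard-coded `device_id=0` pins every chunk to GPU 0:

```python
def assign_devices(paths, n_devices):
    # Hypothetical scheduler-side logic: one device per file, round-robin.
    return {path: i % n_devices for i, path in enumerate(paths)}

files = [f"mytiffs/{i}.tiff" for i in range(8)]
print(assign_devices(files, 4))
# each of the 4 devices gets 2 files, instead of all 8 landing on cuda:0
```

In practice dask-cuda achieves the same spread implicitly, by giving each worker its own visible device and letting the task use the "current device".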
> the xarray <-> dask integration piece has always felt very CPU-centric to me 🙂
It's true that dask-cuda is a separate package that adds GPU logic to Dask. But GPUs are well supported in Dask today. There may just be work to be done wiring things up to collections like xarray.
Read TIFF data into GPU memory inside an xarray data structure via cog3pio's experimental `CudaCogReader` struct that uses nvTIFF as its backend. Based on a proof-of-concept I got working at weiji14/cog3pio#71. Would like it to live in `cupy-xarray` instead 😃

Need to install `cog3pio` with the 'cuda' feature flag enabled.

Notes:

TODO:

- `cog3pio` side requiring cupy-cuda13x

References: