Issues for iris char data
=========================

Older notes
-----------
EXISTING behaviour
ASIDE: Python "standard encodings" : https://docs.python.org/3/library/codecs.html#standard-encodings
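As a quick illustration of that standard-encodings list: codec names go through an alias table, so spelling variants resolve to one canonical codec (the particular aliases below are just examples).

```python
import codecs

# Codec names are looked up via aliases, so spelling variants
# resolve to a single canonical codec name.
print(codecs.lookup("UTF8").name)     # "utf-8"
print(codecs.lookup("latin-1").name)  # "iso8859-1"
```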
Old discussion in netcdf4-python, referenced by the xarray docs.
The reference is hard to find in the Unidata docs.
Outstanding issues
There seems to be a problem with netcdf4-python byte encodings: Unidata/netcdf4-python#1440. For now, here, decoding has simply been turned off, so everything now reads as character arrays. I now don't think that people need or want to see cubes or coords with string dimensions: we will convert all to Uxx arrays internally. Note: the existing code names dims according to their (byte) lengths. This seems a neat idea, since it means they automatically share where convenient.
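A minimal numpy sketch of that internal char-to-Uxx conversion, assuming a character array as read with decoding turned off (the sample values here are invented):

```python
import numpy as np

# A (2, 3) netCDF-style character array (dtype "S1"), as read with
# byte decoding turned off -- the sample values are invented.
chars = np.array([[b"a", b"b", b"c"],
                  [b"x", b"y", b""]], dtype="S1")

# Collapse the trailing character dimension into fixed-width byte
# strings, then convert to a numpy unicode ("Uxx") array internally.
nchars = chars.shape[-1]
bytestrings = chars.view(f"S{nchars}").reshape(chars.shape[:-1])
strings = bytestrings.astype(f"U{nchars}")
# strings -> array(['abc', 'xy'], dtype='<U3')
```

Note the dtype width ("U3" here) comes straight from the length of the trailing character dimension, which fits the naming-dims-by-byte-length idea above.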
Get 'create_cf_data_variable' to call 'create_generic_cf_array_var': mostly working?
```python
common_dims = [
    dim for dim in cf_coord_var.dimensions if dim in engine.cf_var.dimensions
]
coord_dims = cf_coord_var.dimensions
```
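Roughly what that filter does, with the CF variable objects mocked out (the dimension names here are invented; the point is that a string-length dim the data variable doesn't have gets dropped):

```python
from types import SimpleNamespace

# Mock stand-ins for the real cf_coord_var / engine.cf_var objects.
cf_coord_var = SimpleNamespace(dimensions=("time", "string27"))
engine = SimpleNamespace(
    cf_var=SimpleNamespace(dimensions=("time", "lat", "lon"))
)

# Keep only the coord dims the data variable also has: the trailing
# string-length dimension is not a real data dimension, so it drops out.
common_dims = [
    dim for dim in cf_coord_var.dimensions if dim in engine.cf_var.dimensions
]
# common_dims -> ["time"]
```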
NOTE: this possibly needs to be implemented for ancillary variables too
- which might also be strings
- which is awkward because of a DRY failure in the rules code
```python
# if encoding == "ascii":
#     print("\n\n*** FIX !!")
#     string = bytes.decode("utf-8")
# else:
```
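One way that commented-out fix might look as a standalone sketch (the helper name and fallback policy are assumptions, not the final design). The idea: UTF-8 is a strict superset of ASCII, so decoding nominally-ASCII bytes as UTF-8 leaves valid ASCII unchanged while tolerating stray high bytes that a plain ASCII decode would reject.

```python
def decode_bytes(raw: bytes, encoding: str = "ascii") -> str:
    """Hypothetical helper: decode raw netCDF bytes to str."""
    if encoding == "ascii":
        # Decode as UTF-8 instead: a strict superset of ASCII, so
        # valid ASCII is unchanged and stray high bytes still decode.
        return raw.decode("utf-8")
    return raw.decode(encoding)

print(decode_bytes(b"caf\xc3\xa9"))  # "café"
```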
Status update 2025-11-11
Status update 2026-01-05
Intend to replace this with (roughly) #6850 "plus" #6851.
Outstanding errors:
Replaced by #6898.
Closes #6309
So far, just some ideas brewing