- Fix issue with byte shuffle filter when data length is not a multiple of element length. jamesmudd#318
- Improve testing of byte shuffle and deflate filters
- Add validation running on Java 17
- Improve support for NIO Path. Allows jHDF to open files on non-default file systems such as zip files or remote storage systems. Thanks to @tbrunsch for this contribution jamesmudd#304
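
With NIO `Path` support, an HDF5 file can be opened from any Java `FileSystem`, including one backed by a zip archive. A minimal sketch, assuming the `HdfFile(Path)` constructor described above; the file names `archive.zip` and `data.hdf5` are hypothetical:

```java
import io.jhdf.HdfFile;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ZipFileSystemExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical zip archive containing an HDF5 file
        Path zip = Paths.get("archive.zip");
        // Mount the zip as a java.nio FileSystem (built into the JDK)
        try (FileSystem zipFs = FileSystems.newFileSystem(zip, (ClassLoader) null)) {
            Path hdf5InZip = zipFs.getPath("data.hdf5"); // path inside the archive
            try (HdfFile hdfFile = new HdfFile(hdf5InZip)) {
                // List the top-level children of the file
                hdfFile.getChildren().keySet().forEach(System.out::println);
            }
        }
    }
}
```

The same pattern applies to any other `FileSystem` implementation, such as one for remote storage.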
- Fix issue where accessing a missing fill value could cause an exception jamesmudd#307
- Dependency updates
- CI and release process improvements
- Breaking API change: `Dataset#getMaxSize` now returns `long[]`, allowing files with max sizes larger than `int` max to be opened. jamesmudd#283
- Add support for opaque datatype jamesmudd#264
- Improve chunked dataset read performance with default logging jamesmudd#267
- Dependency updates
- Add GitHub Actions CI
- Switch away from Bintray jamesmudd#250
- Add support for committed datatypes jamesmudd#255
- Add support for attributes with shared datatype
- Switch dependencies repository to Maven Central jamesmudd#250
- Code cleanup
- Adds support for reading in-memory files from `byte[]` or `ByteBuffer`s jamesmudd#245
- Breaking API change: To support in-memory files, `HdfFile#getHdfChannel` is replaced by `HdfFile#getHdfBackingStorage`, which now returns a `HdfBackingStorage`. Internally the new interface replaces the use of `HdfFileChannel`
- Fix jamesmudd#247 reading empty arrays in variable length datasets
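
The in-memory support above can be used when the file contents are already held in a byte array. A minimal sketch, assuming the `HdfFile.fromBytes` static factory added by this change; `example.hdf5` is a hypothetical file name:

```java
import io.jhdf.HdfFile;
import java.nio.file.Files;
import java.nio.file.Paths;

public class InMemoryExample {
    public static void main(String[] args) throws Exception {
        // In real use the bytes might come from a network response or database;
        // here they are read from a hypothetical local file for illustration
        byte[] bytes = Files.readAllBytes(Paths.get("example.hdf5"));
        try (HdfFile hdfFile = HdfFile.fromBytes(bytes)) {
            System.out.println(hdfFile.getChildren().keySet());
        }
    }
}
```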
- Dependency updates
- Update Gradle
- Add LZF compression support allowing LZF datasets to be read. jamesmudd#239
- Test dependency updates
- Add checksum validation, with "Jenkins Lookup 3 Hash". Will help to detect file corruption.
- Add support for opening an HDF5 file from an `InputStream`. Many Java APIs provide `InputStream`s, so this improves integration possibilities.
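
A minimal sketch of the `InputStream` support, assuming an `HdfFile.fromInputStream` static factory matching the entry above; the file name is hypothetical:

```java
import io.jhdf.HdfFile;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class InputStreamExample {
    public static void main(String[] args) throws Exception {
        // Any InputStream works; a file-backed one is used here for illustration
        try (InputStream in = Files.newInputStream(Paths.get("example.hdf5"));
             HdfFile hdfFile = HdfFile.fromInputStream(in)) {
            System.out.println(hdfFile.getName());
        }
    }
}
```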
- Test and coverage improvements
- Test and build dependency updates
- Add support for v1 and v2 Data Layout Messages. Fix jamesmudd#216
- Add support for Old Object Modification Time Message - Improves compatibility with older files
- Fix issue if compact datasets are read multiple times
- Improve handling of empty contiguous datasets
- Test and coverage improvements
- Test dependency updates
- Breaking API change: `Dataset#getDiskSize` is renamed `Dataset#getSizeInBytes`, and `Attribute#getDiskSize` is renamed `Attribute#getSizeInBytes`
- New API method `Dataset#getStorageInBytes`, which returns the total storage size of the dataset. Comparing it with `Dataset#getSizeInBytes` allows the compression ratio to be obtained
- Fixes an issue when reading empty datasets with no allocated storage jamesmudd#162
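
The compression ratio calculation described above can be sketched as follows. The byte values are hypothetical; in real use they would come from `dataset.getSizeInBytes()` and `dataset.getStorageInBytes()`:

```java
public class CompressionRatio {
    // Ratio of uncompressed data size to the storage actually used on disk;
    // values greater than 1.0 indicate the data is compressed
    static double compressionRatio(long sizeInBytes, long storageInBytes) {
        return (double) sizeInBytes / storageInBytes;
    }

    public static void main(String[] args) {
        long sizeInBytes = 8_000_000L;    // hypothetical uncompressed size
        long storageInBytes = 2_000_000L; // hypothetical on-disk storage
        System.out.println(compressionRatio(sizeInBytes, storageInBytes)); // prints 4.0
    }
}
```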
- Code quality improvements and cleanup
- Dependency updates
- CI and build improvements
- Fix jamesmudd#177 Reading null or padded strings of zero length
- Fix jamesmudd#182 Typo in `Dataset.isVariableLength`. This is a breaking API change: replace calls to `isVariableLentgh()` with `isVariableLength()`
- Add initial support for reading large attributes jamesmudd#183
- Dependency updates
- CI and build improvements
- Add support for reading half precision (16 bit) floats
- Add support for getting the ByteBuffer backing contiguous datasets and attributes
- Memory usage and performance improvements
- Test coverage improvements
- CI and build improvements
- Add support for bitfield datasets jamesmudd#84
- Fix jamesmudd#157 support nested compound datasets
- Fix jamesmudd#159 reading null terminated strings filling their buffer
- Add support for raw chunk access. See https://github.com/jamesmudd/jhdf/blob/master/jhdf/src/main/java/io/jhdf/examples/RawChunkAccess.java
- Fix issues running on systems where default charset is not ASCII/UTF8
- Upgrade to Gradle 6.1.1
- Some CI improvements
- Add support for variable length datasets jamesmudd#123
- Add support for Compound datatype v3 messages allowing more compound datasets to be read
- Fix jamesmudd#139 bug accessing chunked v4 string datasets
- Fix jamesmudd#143 bug traversing links
- Code cleanup
- Upgrade to Gradle 6.1
- Update dependencies
- Add support for chunked v4 datasets with b-tree chunk indexing
- Improve exceptions for unsupported b-tree records
- Improve test coverage
- Upgrade to Gradle 6.0.1
- Fix jamesmudd#124 String padding not handled correctly.
- Fix jamesmudd#132 Multi dimensional fixed length string datasets read incorrectly.
- Fix bug in chunked v4 datasets (added in v0.5.0) where incorrect data was returned if fixed array or extensible array indexing was used and the dataset dimensions were not a multiple of the chunk dimensions.
- Adds support for enum datasets (which are returned in string form) jamesmudd#121
- Adds `HdfFile` convenience constructors for `URI` and `Path`
- Fix jamesmudd#125
- Update dependencies
- Refactors test files to separate HDF5 files from scripts.
- Improvements to test coverage.
- Adds support for some types (the most common) of chunked v4 datasets:
- Single chunk
- Fixed array
- Extensible array
- Fix jamesmudd#113 fixed length UTF8 datasets can now be read correctly.
- Fix jamesmudd#112 multiple accesses to a global heap object now behave correctly.
- Lots of code cleanup and minor improvements
- Updates dependencies
- Add support for reference data type. Thanks to Gisa Meier and JCzogalla jamesmudd#106 jamesmudd#91
- Creation order tracking is skipped allowing these files to be read
- `FileChannel` can now be accessed, allowing more low-level access to datasets
- Add version logging when the library is used
- Fix jamesmudd#101
- Add additional testing of attributes
- Add attribute example
- Adds support for compound datasets
- Adds support for array data type
- Adds support for reading chunked datasets with Fletcher32 checksums. Note: the checksum is not verified.
- Improved performance of `Dataset.isEmpty` method
- Dependency updates
- Fix jamesmudd#49 - Big (>10x) performance improvement for chunked dataset reads. Chunks are now decompressed in parallel and the resulting data copies are as large as possible.
- Update Gradle to 5.5
- Update test dependencies
- Fix jamesmudd#88 error when running on Java 8
- Improvements to IDE support
- Improvements to exceptions in currently unsupported cases
- Initial work for #49 slow chunked dataset reads
- Lots of typos cleaned up
- Add additional build data to MANIFEST.MF
- Add support for byte shuffle filter
- Many filter management improvements including support for dynamically loaded filters
- Add support for reading dataset fill values jamesmudd#74
- Checkstyle added to improve code consistency - not full code formatting yet...
- Update Gradle to 5.4
- Update `commons-lang3` to 3.9 (Java 8)
- Update `mockito-core` to 2.27.+
- Add support for broken links
- Add support for attribute and link creation order tracking jamesmudd#70
- Allow superblock v1 files to be loaded
- Improve exceptions thrown when lazy loading fails
- Fix bug to allow non-cached groups to be loaded
- Improvement to documentation
- Update Gradle
- Update test dependencies
- Code base cleanup
- Improvements to CI builds and PR validation
- Add support for accessing attributes (see Attribute.java)
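
A minimal sketch of attribute access, assuming the `Attribute` API referenced above; the file name, dataset path, and attribute name are hypothetical:

```java
import io.jhdf.HdfFile;
import io.jhdf.api.Attribute;
import io.jhdf.api.Dataset;
import java.nio.file.Paths;

public class AttributeExample {
    public static void main(String[] args) {
        try (HdfFile hdfFile = new HdfFile(Paths.get("example.hdf5"))) { // hypothetical file
            Dataset dataset = hdfFile.getDatasetByPath("/myDataset");    // hypothetical path
            Attribute attribute = dataset.getAttribute("units");         // hypothetical name
            // getData returns the attribute value as a Java object
            System.out.println(attribute.getData());
        }
    }
}
```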
- Add support for scalar datasets
- Add support for empty datasets
- Add support for files with user blocks
- Fix bug where "old" style groups containing soft links could not be opened
- Fix bug reading unsigned numbers from "awkward" buffer sizes
- Lots of minor code cleanup and refactoring
- Improvements to tests and coverage
- Fix bug when fixed size string datasets contain strings of exactly that size.
- Fix bug where >1D fixed size datasets could not be read
- Add more JavaDoc
- Minor refactoring
- Add support for String datasets
- Remove `Dataset.getDataBuffer` - Not all datasets can reasonably support accessing the backing buffer
- `Dataset.getMaxSize` now always returns a result. Previously it returned `Optional` if no max size was in the file; now it returns the dataset size if no max size is present.
- Remove dependency on `org.slf4j.slf4j-simple`, now just depends on `slf4j-api`
- Update SLF4J to 1.8.0-beta4
- Update to Gradle 5.2.1 and Gradle plugins
- First release to support reading chunked datasets. (note: v1.8 files only)
- Initial support for compressed datasets, GZIP only at the moment.
Lots of initial development towards being able to read HDF5 files in pure Java. See Git history if you're interested.