
Is streaming writes an important feature? #266

Open
eslavich opened this issue Jun 25, 2020 · 2 comments

Comments

@eslavich
Contributor

eslavich commented Jun 25, 2020

The ASDF standard seems optimized for the convenience of file writers in that:

  • The block index occurs at the end of the file, and is optional and possibly unreliable
  • The last block in a file may omit its length from its header, relying instead on the end of the file to delimit the block

The first feature is intended to facilitate streaming writes: since the size of a compressed block may not be known up front, writing the block index at the end of the file allows the index to be streamed out once all the block sizes are known. This is convenient for writers, which would otherwise have to go back and overwrite an earlier part of the file if the block index were located elsewhere. The downside is that readers need to consume the file backwards if they want to read the block index early, or skip along the blocks, header to header, to get to the one they want.
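
To illustrate the reader-side cost, here is a minimal sketch of that "skip along" access pattern. It assumes a hypothetical simplified header (a single big-endian uint64 length per block), not the real ASDF block header, but the access pattern is the same: without an up-front index, reaching block N requires touching every preceding header.

```python
import io
import struct

def skip_to_block(fp, index, header_fmt=">Q"):
    """Walk blocks header-to-header until reaching block `index`.

    Hypothetical simplified layout: each block starts with one
    big-endian uint64 giving the length of the data that follows.
    Returns the file offset of the target block's header.
    """
    header_size = struct.calcsize(header_fmt)
    for _ in range(index):
        raw = fp.read(header_size)
        if len(raw) < header_size:
            raise EOFError("ran out of blocks before reaching the target")
        (length,) = struct.unpack(header_fmt, raw)
        fp.seek(length, io.SEEK_CUR)  # skip over this block's data
    return fp.tell()
```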

The second feature allows writers to stream output when the length of the binary block is not known ahead of time. This seems downright dangerous for readers, who won't be able to detect accidental file truncation.

Is this the right balance of compromises? For an archival format, it may not be desirable to prioritize the convenience of writers over the convenience and safety (data-integrity wise) of readers. I think we should consider the following changes:

  • Move the block index to the front of the block area, immediately following the YAML document
  • Make the block index required
  • Change the byte offsets in the block index to be relative to the start of the block index itself, so that manual edits of the YAML do not invalidate the index
  • Change the block index from YAML to a binary format, to discourage manual editing of the index
  • Require that all blocks include their lengths in the block header, which would eliminate the feature that allows writing a block of unknown length onto the end of the file.
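
As a sketch of what a front-located binary index with self-relative offsets might look like (a hypothetical layout for illustration, not a concrete proposal for the wire format): offsets are measured from the start of the index itself, so edits to the preceding YAML shift the whole index without invalidating any entry.

```python
import struct

def pack_block_index(offsets):
    """Pack a hypothetical binary block index: a uint64 entry count
    followed by uint64 offsets relative to the start of the index."""
    return struct.pack(">Q", len(offsets)) + b"".join(
        struct.pack(">Q", off) for off in offsets
    )

def unpack_block_index(raw):
    """Inverse of pack_block_index."""
    (count,) = struct.unpack_from(">Q", raw, 0)
    return [struct.unpack_from(">Q", raw, 8 + 8 * i)[0] for i in range(count)]
```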

The negative consequence of these changes is that ASDF file writers that do not know the lengths of their blocks ahead of time (due to compression or other reasons) would have to rewind and overwrite the block index and block header length field after the block data was written. This would necessitate writing to a storage medium that supports seeking backwards, e.g., would prevent streaming the file to a cloud storage service without first writing it temporarily to memory or to disk.

The benefit is that file readers would be able to consume the YAML document and block index and then know exactly which byte offset to seek to in order to read a given binary block. There wouldn't be any question as to whether the final block had been truncated, because it would always include its own length in its header.

@perrygreenfield

@eslavich eslavich changed the title Is streaming writes an important capability Is streaming writes an important feature? Jun 25, 2020
@perrygreenfield
Contributor

One of the use cases for streaming writes is a process that is generating data (a time series, for example) whose length is not known ahead of time, where it is not possible to go back and insert that information at the beginning of the file (the output may be going over a network pipe), and where the end may be determined by outside events (the user hits the stop button, the battery dies, etc.)

It's true that this could be confused with an accidental termination, though I suppose if we expect a terminating index and it isn't there, that could be used to indicate an error condition (the data up to that point is presumed good, but incomplete).

In supporting archival uses, perhaps we could require that software on the archive end update the file to make it more robust?

@lgarrison

Just a thought: what about adopting @eslavich's changes, but also writing the block length at the end of the block for the special case of streaming blocks? A special value in the block header (like -1 in the length field) could flag that the block stores its length at the end.

The benefit of this is that one could write a streaming block of unknown size over, e.g., the network, while still giving readers the ability to detect truncations in the future (which would probably show up as a garbage trailing size).

One drawback is the complexity of introducing a potential new location for metadata. Another is that raw concatenation of new binary data onto the end of the ASDF file would no longer be supported, but at least this would allow some form of streaming writes.
