feat: add segment-based streaming ZIP writer, ZIP64 modes; replace legacy writer #2723
Conversation
Summary of Changes
Hello @strantalis, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly refactors the TDF archive writing mechanism to support modern streaming and out-of-order data processing paradigms. By introducing a segment-based writer and advanced ZIP64 capabilities, it enhances the SDK's ability to handle large files efficiently and reliably. The changes improve performance, reduce memory footprint during TDF creation, and ensure compliance with ZIP standards for diverse use cases.
Highlights
- **Segment-based Streaming ZIP Writer**: Introduced a new `SegmentWriter` for deterministic, out-of-order segment writing, enabling true streaming output for TDF files. This allows segments to be written and processed as they become available, rather than requiring all data upfront.
- **CRC32 Combine Functionality**: Added a `CRC32CombineIEEE` utility to efficiently compute the total CRC32 checksum across multiple segments without needing to buffer the entire payload, which is crucial for streaming operations.
- **Enhanced ZIP64 Mode Control**: Implemented robust control over ZIP64 modes (`Auto`, `Always`, `Never`) and support for signed data descriptors, ensuring proper handling of large files and compliance with ZIP specifications.
- **Legacy Writer Replacement**: The previous `tdf3` writer implementation has been entirely replaced by the new `SegmentWriter`, simplifying the archive writing logic and improving flexibility.
- **Updated SDK Integration and Testing**: The SDK's TDF creation process now integrates the new `SegmentWriter`. Corresponding reader tests have been updated, and new benchmarks and tests for ZIP64 modes and various segment writing patterns have been added to validate performance and correctness.
Code Review
This pull request introduces a significant and valuable improvement by replacing the legacy ZIP writer with a new segment-based streaming writer. The new implementation supports out-of-order segment writing, true streaming, and configurable ZIP64 modes, which is a great enhancement for handling large files and improving flexibility. The code is well-structured with new primitives, interfaces, and comprehensive tests, including benchmarks.
I've identified a few areas for improvement: a potential bug in segment cleanup logic, some non-deterministic test behavior, and opportunities to refine test and benchmark code. Additionally, there's a minor typo in a test filename (`sdk/internal/archive/segment_writier_test.go` should be `sdk/internal/archive/segment_writer_test.go`).
Overall, this is a solid contribution. Addressing the identified issues will further strengthen the robustness and correctness of the new writer.
…eplace legacy writer
- Introduce SegmentWriter for deterministic, out-of-order segment writing and true streaming output.
- Add CRC32 combine (crc32combine) to compute payload CRC without buffering the entire payload.
- Implement zip primitives (headers, central directory, EOCD) with ZIP64 mode control (Auto/Always/Never) and signed data descriptors.
- Replace legacy tdf3 writer; remove old writer tests and reader/writer harness.
- Update reader tests and add ZIP64 mode tests and benchmarks.
- Integrate new writer into SDK TDF creation; track total size from streamed segments + finalize bytes.
- Adjust TDF tests' expected sizes and tolerances to match the new ZIP layout.

Notes: Small archives may differ by ~40–60 bytes vs. legacy due to conditional ZIP64 extras and data descriptor signature; output remains ZIP-spec compliant.
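The Auto/Always/Never control described in the commit message can be sketched as follows. The type and constant names here are illustrative, not the SDK's actual API; the 0xFFFFFFFF threshold is the sentinel value the ZIP format reserves in its 32-bit size fields.

```go
package main

import "fmt"

// Zip64Mode mirrors the Auto/Always/Never control over ZIP64 record emission.
type Zip64Mode int

const (
	Zip64Auto   Zip64Mode = iota // emit ZIP64 records only when a field overflows
	Zip64Always                  // always emit ZIP64 records
	Zip64Never                   // never emit ZIP64 records (fails for huge entries)
)

// zip64Threshold is the largest value a classic 32-bit ZIP size field can
// hold; at or beyond it, the field is set to 0xFFFFFFFF and the real value
// lives in a ZIP64 extra field.
const zip64Threshold = 0xFFFFFFFF

// needsZip64 decides whether an entry of the given size requires ZIP64 extras.
func needsZip64(mode Zip64Mode, size uint64) bool {
	switch mode {
	case Zip64Always:
		return true
	case Zip64Never:
		return false
	default: // Zip64Auto
		return size >= zip64Threshold
	}
}

func main() {
	fmt.Println(needsZip64(Zip64Auto, 1<<20)) // small entry: prints false
	fmt.Println(needsZip64(Zip64Auto, 1<<33)) // 8 GiB entry: prints true
	fmt.Println(needsZip64(Zip64Always, 0))   // forced: prints true
}
```

In Auto mode the decision is per-field in a real writer (sizes, offsets, and entry counts can each overflow independently); this sketch collapses that to a single size check for brevity.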
Force-pushed from 2badaa9 to 7bfa948.
/gemini review
Code Review
This pull request introduces a significant and well-executed refactoring of the archive writing logic. The new `SegmentWriter` provides a robust solution for deterministic, out-of-order segment writing and true streaming output, which is a great improvement over the legacy writer. The implementation correctly uses `crc32combine` to avoid buffering payloads, and the new ZIP primitives and configuration options make the system more flexible and maintainable. The accompanying tests and benchmarks are comprehensive. I have a few minor suggestions regarding code comments, use of deprecated APIs, and a filename typo, which are detailed in the review comments. Overall, this is an excellent contribution.
```go
		right++
	}
}
case "random", "mixed":
```
Suggestion: Instead of
What's the suggestion?
Overall looks good. Some of the zip logic could benefit from fuzz testing, but I didn't see anything concerning.
```go
sw.finalized = true

// Return the final bytes
result := make([]byte, buffer.Len())
```
Since `buffer` is constructed in the function (and as far as I can see not retained), why not just return `buffer.Bytes()` directly?
Proposed Changes
Notes:
Small archives may differ by ~40–60 bytes vs. legacy due to conditional ZIP64 extras and data descriptor signature; output remains ZIP-spec compliant.