Best way to retrieve Entry Detail record with huge file size #1281
practiceforever started this conversation in Ideas

Hi,
I need to retrieve all transactions from a NACHA file, which can contain a million transactions, and files can arrive concurrently. Essentially, I want to get every record type 6 (Entry Detail) from an ACH NACHA file. What is the best way to do this? Is there support for pagination in the APIs? I am considering converting the NACHA file to JSON and then traversing the object, but a file with that many transactions may cause an OOM. Any suggestions appreciated.

Replies: 1 comment
You will definitely be limited by memory if a single file contains a million entries. Converting to JSON consumes the entire file and produces a full array of its contents in memory. I would recommend filtering the file before processing it with the Go SDK: collect each line that starts with a "6" and parse it on its own.

```go
ed := ach.NewEntryDetail()
ed.Parse(lines[i]) // each line beginning with a "6" is an Entry Detail record
if err := ed.Validate(); err != nil {
	// handle malformed lines
}
```

I'm suggesting this because with that many records a crash or shutdown mid-processing is likely. Parsing and handling each EntryDetail record individually keeps the memory pressure low.
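Expanding the snippet above into a runnable sketch: this streams the file line by line with `bufio.Scanner` instead of building a `lines` slice, so only one `EntryDetail` is held in memory at a time. The file name `large.ach` and the fields printed at the end are illustrative assumptions, not part of the original answer.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"

	"github.com/moov-io/ach"
)

func main() {
	f, err := os.Open("large.ach") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		// Record type "6" marks an Entry Detail record in a NACHA file.
		if len(line) == 0 || line[0] != '6' {
			continue
		}
		ed := ach.NewEntryDetail()
		ed.Parse(line)
		if err := ed.Validate(); err != nil {
			// handle malformed lines
			log.Printf("skipping malformed entry: %v", err)
			continue
		}
		// Process one EntryDetail at a time; nothing else is retained.
		fmt.Println(ed.TraceNumber, ed.Amount)
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```

Because each record is discarded after processing, memory stays flat regardless of file size, and the same loop can be run per file in separate goroutines when files arrive concurrently.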