I'm looking for the best-practice way to read a large-ish data set from an API using Bridge.
Specifically, I need to read 9000 rows from a Resource Management report and post that data to a Smartsheet sheet.
But I also have this same need coming up with other systems, such as GitLab and Workday.
When I use a parent-child workflow in Bridge (the parent gets the data, the child processes each object), Bridge stops responding and errors out after roughly 3,000 objects. It appears to process the set serially, triggering the child workflow for one object at a time, so those first ~3,000 rows take hours to process and the run eventually times out partway through the data.
Is there a better way?
I'm guessing the answer is... Bridge isn't set up to process large sets. That's fine if that's the answer, but I'd love to know if anyone has solved this.
(I'm aware of Data Shuttle, but that only helps if the source system can automate exports, which Resource Management cannot.)
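In case it helps frame the question: if Bridge can't handle this natively, the fallback I'm imagining is a standalone script that pages through the Resource Management API and bulk-adds rows to the sheet with the Smartsheet Python SDK, instead of triggering a workflow per row. This is only a rough sketch; the RM base URL, endpoint, auth header, field names, and column IDs below are assumptions I'd still need to verify against the Resource Management API docs, not working code.

```python
# Sketch only -- RM endpoint, auth header, field names, and column IDs are assumptions.
# Requires: pip install requests smartsheet-python-sdk
import requests
import smartsheet

RM_TOKEN = "..."       # Resource Management API token
SS_TOKEN = "..."       # Smartsheet API token
SHEET_ID = 1234567890  # hypothetical target sheet ID
RM_BASE = "https://api.rm.smartsheet.com/api/v1"  # assumed RM API base URL


def fetch_rm_records():
    """Page through an RM endpoint; endpoint path and paging field are assumptions."""
    url = f"{RM_BASE}/time_entries?per_page=200"  # hypothetical report endpoint
    while url:
        resp = requests.get(url, headers={"auth": RM_TOKEN})
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("data", [])
        next_path = (body.get("paging") or {}).get("next")
        url = f"{RM_BASE}{next_path}" if next_path else None


def push_to_smartsheet(records, column_map, batch_size=500):
    """Bulk-add rows in batches instead of one row per child-workflow run."""
    ss = smartsheet.Smartsheet(SS_TOKEN)
    batch = []
    for rec in records:
        row = smartsheet.models.Row()
        row.to_bottom = True
        for rm_field, col_id in column_map.items():
            row.cells.append({"column_id": col_id, "value": rec.get(rm_field, "")})
        batch.append(row)
        if len(batch) >= batch_size:
            ss.Sheets.add_rows(SHEET_ID, batch)  # one API call adds many rows
            batch = []
    if batch:
        ss.Sheets.add_rows(SHEET_ID, batch)


if __name__ == "__main__":
    # column_map: RM field name -> Smartsheet column ID (both hypothetical here)
    push_to_smartsheet(fetch_rm_records(), {"date": 111, "hours": 222, "user_id": 333})
```

The point of the batching is that 9,000 rows become a few dozen API calls rather than 9,000 workflow invocations, but I'd much rather keep this inside Bridge if there's a supported pattern for it.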