I have a flow that extracts data from an XML file and sends it to an application via an API.
With no concurrency, the flow takes about 1.5 hours and inserts 3,300 records. The application has a duplicate checker, and it confirms none of those entries are duplicates.
To speed it up, I enabled concurrency. I had to cancel the flow after about 3 minutes because it had already inserted over 7,500 records, including more than 4,000 duplicates, and was still running.
(You can ignore that it says the action failed; a couple of the 3,300 items failed, but all 3,300 were processed, so that's a red herring.)
That implies that, rather than processing multiple items concurrently (but still only once each overall), it is actually processing every item multiple times.
Is this a bug? Because that doesn't seem remotely useful. Is there a setting or something I need to check?
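For what it's worth, here is a minimal sketch (in Python, not the flow itself) of what I'd expect "concurrent but still once each" to mean: every item is dispatched to exactly one worker, so concurrency only changes ordering and speed, not the number of API calls. The URL, element tag, and `post_record` helper are placeholders, not anything from my actual setup.

```python
# Sketch only: hypothetical API_URL, "record" tag, and post_record helper.
from concurrent.futures import ThreadPoolExecutor, as_completed
import xml.etree.ElementTree as ET
import requests  # assumes the target application exposes a plain HTTP endpoint

API_URL = "https://example.com/api/records"  # placeholder URL

def post_record(item: ET.Element) -> int:
    """Send one XML item to the API; called exactly once per item."""
    payload = {child.tag: child.text for child in item}
    resp = requests.post(API_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.status_code

def run(xml_path: str, workers: int = 20) -> None:
    items = ET.parse(xml_path).getroot().findall("record")  # placeholder tag
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # submit() hands each item to a single worker, so the total number
        # of API calls equals len(items) no matter how many workers run.
        futures = {pool.submit(post_record, item): item for item in items}
        for fut in as_completed(futures):
            fut.result()  # re-raise any per-item failure here

if __name__ == "__main__":
    run("export.xml")
```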
Did you ever figure this out? I am having the same issue