Hey folks,
in my project I'm running a complex flow that detects new files in a SharePoint folder and analyzes them with AI Builder to store the extracted data in Dataverse.
The results are also written to a Dataverse helper table so I can export a list of all the files that were analyzed.
The flow does not run in one big Apply to each loop; the files are analyzed one by one.
I set the Degree of Parallelism on the SharePoint trigger "When a file is created or modified (properties only)" to 20 to limit how many files are processed concurrently.
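For reference, this is roughly how the concurrency limit appears in the trigger's JSON when you use "Peek code" (a sketch from memory, not my exact flow definition):

```json
"runtimeConfiguration": {
  "concurrency": {
    "runs": 20
  }
}
```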
But when I analyze 500 files, I get 37 duplicates among them (and I have run this several times with that number of files).
When I analyze 200 files, I get 16 duplicates.
And with 30 files, I get 8 duplicates.
So not only do I have a corrupted dataset from which I have to remove the duplicate records, but I have (presumably) also paid for the analysis of 61 additional files.
How do you deal with this case? Is there official documentation from Microsoft on this problem and how to work around it?