
Hi all.
We're moving a solution from a SQL Server database to Dataverse. It's a small database, the largest table only holds around 5,000 records, and everything is working fine on the Power Apps (canvas app) side of things.
However, we need to export data to a CSV file for a partner to read. This is a daily routine (currently 4× per day, though once would probably be enough). We have an Azure Function that handles the export. In the old SQL version we have a view in the database that joins 8 tables, three of which have sub-queries over an intermediate table (a many-to-many relation where the intermediate table holds date and type values).
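To give a feel for the shape of the query (all table and column names below are made up, not our real schema), the view is roughly:

```sql
-- Rough shape of the existing export view; names are hypothetical.
CREATE VIEW dbo.ExportView AS
SELECT m.Id,
       m.Name,
       p.PartnerCode,
       -- sub-query over the intermediate many-to-many table,
       -- e.g. picking the latest date for one relation type
       (SELECT MAX(l.ValidFrom)
          FROM dbo.MainItemLink l
         WHERE l.MainItemId = m.Id
           AND l.LinkType   = 1) AS LatestLinkDate
  FROM dbo.MainItem m
  JOIN dbo.Partner  p ON p.Id = m.PartnerId;
-- ...plus five more joined tables and two more sub-queries like the one above
```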
The SQL Server version generates the export file with a single query, in about 0.5 seconds (the file has about 1,000 rows).
Now, I've built a similar function for Dataverse using the .NET SDK. I use RetrieveMultiple to get the main data, and then for every item I now have to make separate requests for the intermediate tables and for the table on the other side of the relation. This results in a runtime of around 5 minutes and roughly 4,000 API requests to Dataverse.
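A condensed sketch of the current per-item pattern, so you can see where the request count comes from (entity and attribute names are hypothetical, and this needs a connected IOrganizationService):

```csharp
// Sketch of the current N+1 pattern against Microsoft.Xrm.Sdk.
// One query for the main rows...
var main = service.RetrieveMultiple(new QueryExpression("new_mainitem")
{
    ColumnSet = new ColumnSet("new_name", "new_partnerid")
});

foreach (var item in main.Entities)
{
    // ...then one extra round trip per row for the intermediate table
    var links = service.RetrieveMultiple(new QueryExpression("new_mainitemlink")
    {
        ColumnSet = new ColumnSet("new_linktype", "new_validfrom"),
        Criteria =
        {
            Conditions =
            {
                new ConditionExpression("new_mainitemid",
                                        ConditionOperator.Equal, item.Id)
            }
        }
    });
    // ...and further per-row requests for the tables on the
    // other side of the relation, hence ~4,000 requests total
}
```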
This seems... stupid. It's roughly 600 times slower, and if we run this multiple times a day I'm worried we'll run into the API call limits, especially if (when) the amount of data grows. Generating an export file can hardly be a rare requirement.
So my question here is: is there any way to push more of the logic to the "engine" side, i.e. run the JOINs in the database itself instead of in external code? So far I haven't been able to figure out how to express complex joins over multiple tables with conditions in Dataverse.
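For context, the closest server-side mechanism I'm aware of is FetchXML's link-entity, which does express joins inside Dataverse; a minimal sketch of the kind of join I mean (entity and attribute names are hypothetical) would be:

```xml
<!-- Minimal FetchXML sketch; entity/attribute names are hypothetical -->
<fetch>
  <entity name="new_mainitem">
    <attribute name="new_name" />
    <!-- join to the intermediate (many-to-many) table, with a condition -->
    <link-entity name="new_mainitemlink" from="new_mainitemid"
                 to="new_mainitemid" link-type="outer">
      <attribute name="new_validfrom" />
      <filter>
        <condition attribute="new_linktype" operator="eq" value="1" />
      </filter>
      <!-- and on to the table on the other side of the relation -->
      <link-entity name="new_tag" from="new_tagid" to="new_tagid"
                   link-type="outer">
        <attribute name="new_tagname" />
      </link-entity>
    </link-entity>
  </entity>
</fetch>
```

But I haven't managed to scale this up to the full 8-table view with the sub-query conditions.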
Or is there an alternative? For example, I know there is an XrmToolkit, but I have no idea how it works or what it can do; would that be of any use?
Or is this just the shiny new world we have to live with? Our solution works (for now), so this isn't a critical issue just yet, but I'd appreciate any ideas.