Hi,
In database design we spend ages normalising data to reduce duplication and improve data quality.
However, it's easy to get carried away sometimes and end up over-normalising data to the point where it takes longer to put it back together in order to make any sense of it. (I was certainly guilty of this in the past when starting out with SQL.)
In Azure SQL we have a handy tool called a 'Stored Procedure' that can manage all the inner joins, lookups, aggregations, etc. to produce a simple flat table that Power BI can call and then simply finish off the maths.
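For context, here's a minimal sketch of the kind of procedure I mean (table and column names are purely illustrative):

CREATE PROCEDURE dbo.GetStockFlat
AS
BEGIN
    SET NOCOUNT ON;

    -- Join the normalised tables back into one flat result
    -- that Power BI can consume directly.
    SELECT s.StockId,
           p.ProductName,
           c.ColourName,
           s.Quantity
    FROM dbo.Stock AS s
    INNER JOIN dbo.Product AS p ON p.ProductId = s.ProductId
    INNER JOIN dbo.Colour  AS c ON c.ColourId  = s.ColourId;
END;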
Is there an equivalent method to Stored Procedures in Dataverse?
Here's the problem I'm faced with.
Step one.
Multiple 'normalised' Dataverse tables that need to be joined up into a flat table.
This is a simple diagram of what I'm trying to achieve (table and column names are for illustrative purposes) - essentially the reverse of what we normally do.
Step two.
If that can work with the standard Dataverse tables, the next stage is to repeat the process, but this time using Virtual Tables to pull in data from various endpoints, including SharePoint lists, Oracle (cloud and on-premises), Salesforce, and even the odd Excel spreadsheet (if possible).
In one combination (in the same database) I currently have to use 6 linked tables just to match and extract the two values needed - nuts. In another combination I have to pull 4 tables from 3 separate endpoints in order to get the two values required.
Is even part of this possible in Dataverse?
If not, the alternative suggestion we have is to use multiple Flows to pull and aggregate the data and then write the results to a combined table. Not a great solution, as the flows would need to run almost permanently because a good proportion of the data is constantly changing.
I expect others have been faced with a similar issue, but I couldn't find any specific posts on how to resolve it... maybe it's not possible.
Not sure what your end goal is here, but for getting a flat table into Power BI, I use the Azure Synapse Link for Dataverse (simple to set up), and then you can stick a serverless SQL pool on top of that and produce SQL views etc. for Power BI to look at. I use this when my SQL skills outstrip my Power BI skills...
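To give you an idea, one of those views might look something like this (just a sketch; the logical names are made up and yours will differ):

CREATE VIEW dbo.StockFlat AS
SELECT s.new_stockid,
       p.new_name     AS ProductName,
       c.new_name     AS ColourName,
       s.new_quantity AS Quantity
FROM dbo.new_stock   AS s
INNER JOIN dbo.new_product AS p ON p.new_productid = s.new_productid
INNER JOIN dbo.new_colour  AS c ON c.new_colourid  = s.new_colourid;

Power BI then just points at the view and sees a single flat table.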
Or just go the whole way and build a data lake for the Dataverse data and the external tables.
Hi @GuidoPreite
Thanks for your reply; you've confirmed the conclusion we were coming to.
It's a shame that Dataverse is unable to look at any column in any table and use those as lookups to create a combined table of data.
Maybe one for the suggestion box 🙂
Hi @Gary_Eden ,
As you're already aware, normalising data inside Dataverse can't reach the same level as in a standard SQL Server database.
In your example, "Stock List" would not be a table but a view on the "Stock" table. As "Stock" has lookups to both the "Product" and "Colour" tables, Dataverse already adds the primary column of those two lookup tables to the "Stock" view (careful here: it surfaces only the primary column, not the other columns of the related table).
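If the goal is only reporting, note that Dataverse also exposes a read-only TDS (SQL) endpoint, so Power BI (or SSMS) can run ordinary joins to reach the non-primary columns of the related tables. A rough sketch, assuming hypothetical logical names for your Stock/Product/Colour tables:

-- Read-only query against the Dataverse TDS endpoint;
-- all logical names here are hypothetical.
SELECT s.new_stockid,
       p.new_name     AS product,
       c.new_name     AS colour,
       s.new_quantity AS quantity
FROM new_stock AS s
INNER JOIN new_product AS p ON p.new_productid = s.new_productid
INNER JOIN new_colour  AS c ON c.new_colourid  = s.new_colourid;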
I hope you understand it's difficult to give useful advice without knowing the real scenario you're facing, especially with Virtual Tables, as they are difficult to combine. If you really have multiple sources as you described, my suggestion (without knowing the details) would probably be not to use Virtual Tables but to sync the data between the external sources and Dataverse tables using a synchronisation tool (Power Automate can be used for very small amounts of data; for higher volumes there are on-premises and cloud tools that work better).
Hope it helps