Dear all,
Scenario:
I am using a Power Automate flow to export data from a Power BI semantic model to CSV every 2 hours. The flow uses the action "Run a query against a dataset" to execute a DAX query against the semantic model. The query result has 15 columns and roughly 10k rows. It is passed to the "Create CSV table" action, whose output goes into the "Create file" action, which stores the file in SharePoint. Some of the columns are sparsely filled, with values in less than 1% of the rows.
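For illustration, the DAX query has roughly the following shape (the table name 'Sales' and all column names are simplified placeholders, not my actual model; the real query returns 15 columns):

// Simplified sketch of the export query
EVALUATE
SELECTCOLUMNS (
    'Sales',
    "OrderID", 'Sales'[OrderID],
    "OrderDate", 'Sales'[OrderDate],
    "Comment", 'Sales'[Comment]   // example of a sparsely filled column (<1% of rows)
)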
Issue/Observation:
The number of columns in the created CSV files varies: sometimes the sparse columns are missing and the export contains only 13 or 14 columns instead of 15.
Assumption/Testing:
My assumption is that there is an implicit optimization step somewhere in the process that skips "empty" columns, apparently identified through sampling (first ~500 rows empty ? skip column : include column). The DAX query itself does not skip any columns when I run it in Power BI. Adapting the DAX query to sort the sparse columns in descending order fixes the issue, because the populated rows then appear at the top (see the simplified sketch below).
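The workaround I currently use just adds an ORDER BY on the sparse column so that non-blank rows come first (again with placeholder names):

// Workaround sketch: sort the sparse column descending so populated rows appear first
EVALUATE
SELECTCOLUMNS (
    'Sales',
    "OrderID", 'Sales'[OrderID],
    "OrderDate", 'Sales'[OrderDate],
    "Comment", 'Sales'[Comment]
)
ORDER BY [Comment] DESC   // blanks sort last, so the first rows contain values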
Question:
Can you confirm that there is such an optimization step in the process chain (sequence of actions) outlined above? Can it be deactivated? What other options are there, besides sorting the sparse columns, to make sure that all columns, even completely empty ones, are always included in the file stored in SharePoint?
Kind regards
Tom