
Hello, I'm a citizen developer at my job. I've been tasked with setting up a database that our team can query routinely when they need certain info. The database will store price and contract info that we receive weekly from a third party via multiple text files. The data is well structured and doesn't require much cleaning or normalizing.
My original plan was to drop these files into a SharePoint folder and have an automated Dataflow pick them up and load their contents into Dataverse tables. We would then build a few Power BI dashboards that connect to those Dataverse tables, and those dashboards would be how end users interact with the data.
The issue I've run into (which I've seen mentioned here before) is that one of the text files is 700 MB with over 2M records in it. When the Dataflow runs, it fails because it exceeds the default buffer limit.
A key constraint to bear in mind is that I'm not in IT, so getting enhanced privileges isn't really an option. My question is twofold.
1. Is there a way to overcome this limit? I saw a suggestion to enable the 'allow chunking' option, but I don't see that option anywhere. People have mentioned it only works on certain native formats, so I'm guessing text files aren't one of them. Any other ideas on how to solve this? (For example, would pre-splitting the file before it ever reaches SharePoint, as in the first sketch below, be a reasonable route?)
2. Or is there an alternative design pattern I should use instead (i.e. maybe not a Power Platform solution at all)? The second sketch below is roughly the kind of thing I have in mind.
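
To make question 1 concrete, here's a rough sketch of what I mean by pre-splitting the weekly file into smaller pieces before dropping them into SharePoint, so each piece stays under the Dataflow's limits. The file name, delimiter, and chunk size are just placeholders, not our real layout.

```python
# Sketch only: split one large delimited text file into smaller pieces,
# repeating the header row in each piece. File name, delimiter, and
# chunk size are placeholders.
import csv

SOURCE_FILE = "weekly_prices.txt"   # stand-in for the 700 MB file
ROWS_PER_CHUNK = 200_000            # tune so each piece loads cleanly

def split_file(source_path: str, rows_per_chunk: int) -> None:
    with open(source_path, newline="", encoding="utf-8") as src:
        reader = csv.reader(src, delimiter="|")   # assuming pipe-delimited
        header = next(reader)
        out, writer, part = None, None, 0
        for i, row in enumerate(reader):
            if i % rows_per_chunk == 0:           # start a new piece
                if out:
                    out.close()
                part += 1
                out = open(f"weekly_prices_part{part:03}.txt", "w",
                           newline="", encoding="utf-8")
                writer = csv.writer(out, delimiter="|")
                writer.writerow(header)
            writer.writerow(row)
        if out:
            out.close()

if __name__ == "__main__":
    split_file(SOURCE_FILE, ROWS_PER_CHUNK)
```

And to make question 2 concrete, this is roughly the kind of alternative I could build and schedule myself if Power Platform turns out to be the wrong fit: a small scheduled job that loads each file into a SQL database the team (or Power BI) queries directly. SQLite, the table name, and the column names here are made up purely for illustration.

```python
# Sketch only: load one delimited text file into a small SQL database.
# SQLite, the table, and the column names are illustrative, not our schema.
import csv
import sqlite3

DB_PATH = "contracts.db"
SOURCE_FILE = "weekly_prices_part001.txt"   # one piece from the split above

def load_file(db_path: str, source_path: str) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS prices (
               contract_id    TEXT,
               price          REAL,
               effective_date TEXT
           )"""
    )
    with open(source_path, newline="", encoding="utf-8") as src:
        reader = csv.DictReader(src, delimiter="|")   # assuming pipe-delimited
        rows = [(r["contract_id"], r["price"], r["effective_date"])
                for r in reader]
    # One batched insert per file; a few hundred thousand rows per piece is
    # well within what SQLite handles comfortably on a weekly schedule.
    conn.executemany("INSERT INTO prices VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load_file(DB_PATH, SOURCE_FILE)
```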