Dear community,
at my current customer we are running several independent CRM instances on the same tenant. As there are data structures recurring across the different systems, a colleague and I decided to set up a "base" system in which we develop components to be reused in all our systems.
As an example, there is "table A" with a specified structure. This table is part of a solution we distribute via GitHub Actions as a managed solution to all instances.
Besides the structure, the data in this table could also be provided from a central place. Ideally, I would be able to define the records for this table in dev and include them in my release, to ensure all systems have the distributed data available. If records can be distributed including their GUIDs, even better.
Any suggestions on how to set this up as part of my pipeline?
Thanks in advance
Good to know I'm not the only one with this impression. But sure, many things are just a matter of what we are used to. Still, I doubt there will ever be a time I prefer GitHub over DevOps. Same for Jira vs. DevOps - both have their advantages, but I like the DevOps logic for managing work more.
But to make this comparison complete (based on my current customer's project): Confluence is much nicer than DevOps Wiki - but well, most wiki engines are 🤣
I'm totally with you on that, @SteRe. I also come from an Azure DevOps background, and only really got into GitHub because my current client made the strategic, enterprise-wide decision to use GitHub as their tool of choice for all CI/CD needs, rather than Azure DevOps. And having worked with GitHub for over 18 months now, I still prefer Azure DevOps.
At the company where I run the project, GitHub Enterprise Server is in place and there was a decision to have Linux runners only. To be honest, all this GitHub stuff was new to me - I'm more the Azure DevOps guy. Most things are supported by both environments, but if I could choose, I would opt to introduce Azure DevOps for all this CI/CD work. To me it feels more comfortable.
In case I really do go for the PS option, I'll check your additional hint, @parvezghumra .
No worries @SteRe . Happy to help. Shame it can't work for you, but it kind of makes sense if you're restricted to Linux-based runners. Can I check how you run GitHub Actions locally?
Incidentally, if you're going to go down the route of writing PowerShell scripts, you might want to consider using this PowerShell module to help you move Dataverse data between environments.
https://github.com/rnwood/Rnwood.Dataverse.Data.PowerShell
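To give a rough idea, here's a sketch of a workflow step using the module to copy one table's records (GUIDs included) from dev to a target instance. This is untested and from memory of the module's README, so treat the cmdlet and parameter names, the table name, and the environment variables as assumptions to verify - including whether the module runs on your Linux runners:

```yaml
# Hypothetical step - the pwsh body would work the same way as a local script.
# Cmdlet/parameter names are assumptions; check the module's README for exact syntax.
- name: Copy "table A" records between environments
  shell: pwsh
  run: |
    Install-Module Rnwood.Dataverse.Data.PowerShell -Force -Scope CurrentUser

    # Connect to source (dev) and target environments (placeholder URLs and secrets)
    $src = Get-DataverseConnection -Url $env:DEV_URL -ClientId $env:PP_APP_ID -ClientSecret $env:PP_CLIENT_SECRET
    $dst = Get-DataverseConnection -Url $env:TARGET_URL -ClientId $env:PP_APP_ID -ClientSecret $env:PP_CLIENT_SECRET

    # Read all rows of the table and upsert them into the target,
    # keeping the record GUIDs so lookups stay stable across systems
    Get-DataverseRecord -Connection $src -TableName new_tablea |
      Set-DataverseRecord -Connection $dst
```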
Hi again,
I gave it a try today. First of all, I ran it locally, and it worked perfectly for the data I'd like to manage this way.
But unfortunately it did not work in my action - the pipeline quit the step, reporting that only Windows runners are supported. In the environment I can use there are Linux runners only 😞
I'll prepare some workaround, and maybe a local PS script to semi-automate it anyway.
So thanks for the hint, @parvezghumra
Best Regards
Hi @parvezghumra ,
thanks for this guidance. I read the mentioned articles and a few follow-ups. Sounds exactly like what I am looking for. I'll give it a try this week.
Best Regards
SteRe
@SteRe You need to create the necessary data in your dev environment manually. Then use the Export Data and Import Data GitHub actions in your workflow definitions to export the data from dev and load it into the target environments.
These actions are essentially based on the principles of the Configuration Migration Tool (so you will need to define your schema, etc.)
Some links that should help, followed by a rough workflow sketch:
https://github.com/microsoft/powerplatform-actions/blob/main/export-data/action.yml
https://github.com/microsoft/powerplatform-actions/blob/main/import-data/action.yml
https://learn.microsoft.com/en-us/power-platform/alm/configure-and-deploy-tools
https://learn.microsoft.com/en-us/power-platform/admin/manage-configuration-data
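To make that concrete, here is a minimal workflow sketch. The action inputs are taken from the linked action.yml files, but the secret names, environment URLs, and file paths are placeholders to adapt - and since the actions wrap the Configuration Migration Tool, the job needs a Windows runner:

```yaml
# Illustrative only - adapt environment URLs, secrets, and paths to your setup.
name: distribute-reference-data

on: workflow_dispatch

jobs:
  move-data:
    # The export/import data actions require a Windows runner
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4

      # Export the records defined by the CMT schema file from dev
      - uses: microsoft/powerplatform-actions/export-data@v1
        with:
          environment-url: ${{ vars.DEV_ENVIRONMENT_URL }}
          app-id: ${{ secrets.PP_APP_ID }}
          client-secret: ${{ secrets.PP_CLIENT_SECRET }}
          tenant-id: ${{ secrets.PP_TENANT_ID }}
          schema-file: config/data-schema.xml
          data-file: out/data.zip

      # Import the exported records (GUIDs included) into a target instance
      - uses: microsoft/powerplatform-actions/import-data@v1
        with:
          environment-url: ${{ vars.TARGET_ENVIRONMENT_URL }}
          app-id: ${{ secrets.PP_APP_ID }}
          client-secret: ${{ secrets.PP_CLIENT_SECRET }}
          tenant-id: ${{ secrets.PP_TENANT_ID }}
          data-file: out/data.zip
```

The schema file is the part you author once against dev (with the Configuration Migration Tool) and keep in the repo next to the workflow; the import preserves the record GUIDs, which should cover your requirement.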