Solution Deployments - deploying single topic large Solution vs many small change Solutions

Posted by M-M
Confused about the Microsoft docs' recommendations on Solution Deployment and ALM.
 
Imagine we create a custom model-driven App "ABC Tax Plan". It has lots of things: custom entities, some additional fields and Views on the OOTB Account and Contact entities, the Model-driven App itself, site navigation, Web Resources, Plugins, Cloud Flows.
 
We built it in 1 big new Unmanaged Solution in Dev. Then exported it from Dev as Managed, and took it up to UAT + Prod as Managed. All works, lands well.
 
OK ... 3 days go by...
 
Client wants 2 custom Views changed. Changing columns, sorting, filtering. Nothing else.
 
Do you create a new Solution in Dev (e.g. "Nov 2024 changes 1") just for these changes? Or do you go back to the original 1 big Solution and export everything again, etc.?
 
And imagine again... another 4 days go by... Client wants a field moved on a Form... nothing else. Same question now?
 
The MS Docs seem a bit vague or contradictory on this... segmented Solutions vs "planning your Solutions" ahead of time: https://learn.microsoft.com/en-us/power-platform/alm/organize-solutions
 
I lean toward always making new small Solutions for each Deployment. The docs say to only include what you changed in the Solution. Well, how can you do that other than by making a new Solution for each deployment that includes only what you want to deploy? That seems to completely contradict the idea of "planning ahead" and having specific Solutions for long-running purposes.
 
Thanks for your time!
  • M-M
    A potential downside to this is ending up with MANY different Solution Layers on components.
     
    E.g. your Contact Form, if you deploy 1 change per month, will have 12 Solution Layers within 1 year.
    But does that actually matter as a downside? 🤷
  • Suggested answer by Parvez Ghumra (Moderator)
     
    I would not recommend a new solution for every change you want to make. Such an approach necessitates a change to your deployment logic/release pipeline definition for every change, introduces additional risk of failed deployments due to dependency issues and also introduces unnecessary solution layering complexity, which can give rise to unexpected behaviour post-deployment.

    My recommendation is to stick with a single, baselined container solution containing all your changes, which you add to/remove from incrementally. If you use the recently optimised 'Stage and Upgrade' deployment pattern, you will benefit from the smart-diff deployment logic, and assuming the delta is small, the deployment should be very quick (as long as you don't set the 'overwrite unmanaged customisations' or 'convert to managed' flags). Even though you're deploying the entire container solution, in the example you gave, only the changes you included in the latest version will have an impact from a deployment perspective. Even the 'Update' deployment pattern will be relatively quick, but note that it is additive only and will not delete removed components.
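    As an illustration only, here is a minimal sketch of what that flag combination looks like when importing a managed solution through the Dataverse Web API's ImportSolution action. The environment URL, solution file name and token are placeholders, and the 'Stage and Upgrade' option itself may be exposed differently by your tooling (maker portal, pipelines, CLI), so treat this as a sketch of the flag values rather than the exact call any particular tool makes.

```python
# Minimal sketch: importing a managed solution via the Dataverse Web API
# ImportSolution action with the flags discussed above switched off.
# Environment URL, file name and token are placeholders.
import base64
import uuid

import requests

ENV_URL = "https://yourorg.crm.dynamics.com"       # hypothetical environment
TOKEN = "<bearer token acquired via MSAL/az cli>"  # assumed to exist already

with open("ABCTaxPlan_1_0_0_2_managed.zip", "rb") as f:
    solution_b64 = base64.b64encode(f.read()).decode("ascii")

import_job_id = str(uuid.uuid4())  # lets you track progress in the importjob table

body = {
    "CustomizationFile": solution_b64,
    "ImportJobId": import_job_id,
    "PublishWorkflows": True,
    "OverwriteUnmanagedCustomizations": False,  # flag to leave off, per the advice above
    "ConvertToManaged": False,                  # flag to leave off, per the advice above
    "HoldingSolution": False,                   # not importing as a holding solution
}

resp = requests.post(
    f"{ENV_URL}/api/data/v9.2/ImportSolution",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json=body,
    timeout=3600,
)
resp.raise_for_status()
print(f"Import submitted; import job id = {import_job_id}")
```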



    Hope this helps
  • M-M
    Hi @ParvezGhumra that's interesting... in practice I've NEVER seen large Solutions import in a timely manner. So I'm very skeptical of this idea that even for large solutions, the platform is smart about diff-checking...
     
    In fact, on projects where we use 1 or 2 large Solutions, the Solution import can take over 40 minutes... which is wild and often causes huge delays and timeout errors where we have to restart. So no, I think large solutions do indeed take a very long time to import.
     
    The other obvious downside you'll encounter in real-world projects... is if you're deploying 1 Solution containing everything... you can never deploy progress as you go. Example:
     
    Joe is working on Account and Bob is working on Contact. 2 completely separate tasks functionally and technically... Account is done and the business needs it ASAP... but Contact is not... so now we have to wait for Contact to be done even though there is 0 technical/functional dependency. It's purely a dependency created by the Solution. This is a huge downside to using 1 large Solution. You're constantly waiting on unrelated changes before a deployment - even when they have 0 business or technical overlap - which is a slow and cumbersome approach.
     
    You mentioned these downsides but I actually see these as trivial: 
     
    ""Such an approach necessitates a change to your deployment logic/release pipeline definition for every change, introduces additional risk of failed deployments due to dependency issues and also introduces unnecessary solution layering complexity""
     
    1. Pipeline update. This only applies if you're doing a code-based or Azure DevOps deployment process. I can see that being annoying. But I actually think that going forward, Azure DevOps will be less common than Power Platform Pipelines, which won't have this issue because you pick which Solution to deploy at deployment time. And manual .zip deployment is really not a big deal... and is sometimes necessary for Connection owners etc.
     
    2. Dependency issues. This is only ever a problem for newbs. Understanding how components depend on each other, and keeping them organized in a Solution -- these are common activities that are not a problem for experienced CRM people.
     
    3. Unnecessary solution layering complexity. If each new deployment is a new layer, then it won't have any conflicts with other layers. The latest will always "win" and you will see your changes. That actually seems simpler to me. Unless you're saying there's some downside simply from having too many layers? I haven't seen that in the docs.
  • Suggested answer by Parvez Ghumra (Moderator)
    @M-M

    I, and many others I've spoken to in the community, have been able to shrink managed solution deployment times from hours down to minutes by using the following options in the solution import operation:
    - Stage & Upgrade: TRUE (or FALSE if you wish to use the Update pattern)
    - Import as Holding: FALSE
    - Overwrite unmanaged Customisations: FALSE
    - Convert to Managed: FALSE
     
    I would recommend you review your selected deployment options in line with the above suggested configurations if your solutions are taking longer than expected to deploy. Of course, it also depends on the delta in the solution being deployed versus any pre-existing versions already installed.
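    If your imports are taking longer than expected, a hedged sketch of how you might measure where the time goes: poll the importjob record for the ImportJobId you supplied at import time. The field names below come from the standard importjob table; the environment URL, token and job id are placeholders.

```python
# Minimal sketch: checking how long a solution import actually takes by
# polling the importjob record for the ImportJobId supplied at import time.
import time

import requests

ENV_URL = "https://yourorg.crm.dynamics.com"   # hypothetical environment
TOKEN = "<bearer token>"
IMPORT_JOB_ID = "<the ImportJobId you passed to ImportSolution>"

headers = {"Authorization": f"Bearer {TOKEN}"}

while True:
    r = requests.get(
        f"{ENV_URL}/api/data/v9.2/importjobs({IMPORT_JOB_ID})"
        "?$select=solutionname,progress,startedon,completedon",
        headers=headers,
        timeout=60,
    )
    r.raise_for_status()
    job = r.json()
    print(f"{job.get('solutionname')}: {job.get('progress', 0):.1f}% complete")
    if job.get("completedon"):
        print(f"Started {job['startedon']}, completed {job['completedon']}")
        break
    time.sleep(30)
```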

    In addition, if you ensure that you don't have any managed objects in your solutions that have not been customised (i.e. do not have unmanaged layers) in the development environment, you will find that the solution deploys faster.
     
    Historically, people have chosen to segment their solutions for one or both of two reasons:
    1. To improve deployment times
    2. Because they have a genuine need for more granular control over deployments - i.e. they intentionally have target environments to which they do not wish to deploy all of their solutions, just a subset
     
    With the deployment optimisations now available with the new 'Stage & Upgrade' pattern (previously similar optimisations were only available using the Update pattern), the first of these reasons is eliminated.
     
    As far as maker/developer isolation of changes is concerned, having spoken to various people at Microsoft, I've been advised that solution segmentation was never intended to be used for this purpose. Because multiple team members work in the same development environment, solution segmentation will not achieve true developer/maker isolation, due to the inherent risk of cross-solution dependencies. Someone only needs to create a relationship between two tables in different solutions for the platform to automatically bring a table into one of the solutions where you were not expecting to include it.
     
    To properly take dependencies, you need to implement proper Solution Layering and take those dependencies on managed objects. This implies that you need a separate development environment for each unmanaged solution being built, which introduces additional complexity when you wish to delete a managed object that you depend on in a second-layer development environment: you will need to go back to the source of that managed solution, delete the object there, generate a new version, and then upgrade to that new version in the second-layer development environment.
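    To illustrate the "delete a managed object you depend on" problem, a hedged sketch of checking what still depends on a component before attempting the delete, using the Dataverse Web API's RetrieveDependenciesForDelete function. The object id and the component type code (1 = entity/table) below are illustrative placeholders, not values from this thread.

```python
# Minimal sketch: before deleting a managed component (e.g. a custom table)
# as part of a solution upgrade, list what still depends on it using the
# unbound RetrieveDependenciesForDelete function.
import json

import requests

ENV_URL = "https://yourorg.crm.dynamics.com"        # hypothetical environment
TOKEN = "<bearer token>"
OBJECT_ID = "00000000-0000-0000-0000-000000000000"  # metadata id of the component
COMPONENT_TYPE = 1                                   # 1 = entity/table

r = requests.get(
    f"{ENV_URL}/api/data/v9.2/RetrieveDependenciesForDelete("
    f"ObjectId=@oid,ComponentType=@ct)?@oid={OBJECT_ID}&@ct={COMPONENT_TYPE}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=60,
)
r.raise_for_status()

# Each returned dependency record identifies a dependent component that would
# block the delete; if none come back, the delete/upgrade can proceed.
print(json.dumps(r.json(), indent=2))
```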
     
    In their current state, Power Platform Pipelines do not provide an enterprise-ready ALM approach for scenarios where you want source control to be the single source of truth, you want to deploy artifacts built from source control, you want to deploy multiple solutions, or you want to deploy artifacts other than Dataverse solutions, etc. If you have multiple solutions, you need to account for and carefully consider:
    1. Which order they will be deployed in
    2. What happens in the case of cross-solution dependencies, and the impact if you need to delete a managed object as part of a managed solution upgrade. For example, say Version 1 of Solution A introduced component A, Version 1 of Solution B introduced component B, and Solution B has a hard dependency on component A, so you installed Solution A first in your target environment. Then in Version 2 of Solution A, component A has been deleted. Because Solution B has a dependency on component A, in order to resolve that dependency and allow Solution A to be gracefully upgraded (deleting component A), you need to upgrade Solution B first. So now the order/upgrade pattern of your solutions has to change.
     
    It's not trivial managing the above using manual deployment, or Power Platform Pipelines which currently only support the deployment of a single solution at a time. Having a single solution totally removes these issues.
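    To make that ordering overhead concrete, here is a hypothetical sketch of what a multi-solution release script ends up encoding. The solution names, versions and the import_solution wrapper are illustrative only; the point is that the order itself becomes configuration you have to maintain and reshuffle whenever cross-solution dependencies change.

```python
# Minimal sketch of the ordering overhead described above: with multiple
# solutions, the release script has to encode an explicit deployment order,
# and that order must be revisited whenever cross-solution dependencies change
# (e.g. the Solution A / Solution B upgrade example). With a single container
# solution this list has exactly one entry and no ordering decisions.

def import_solution(zip_path: str, holding: bool) -> None:
    """Placeholder for the ImportSolution call shown in the earlier sketch."""
    print(f"(would import {zip_path} with HoldingSolution={holding})")


# Hypothetical release config. Note Solution B must now go before Solution A,
# so that A can then be upgraded to delete the component B used to depend on.
DEPLOYMENT_ORDER = [
    ("SolutionB_2_0_0_0_managed.zip", False),
    ("SolutionA_2_0_0_0_managed.zip", True),   # upgrade that removes the deleted component
]

for zip_path, as_holding in DEPLOYMENT_ORDER:
    print(f"Importing {zip_path} ...")
    import_solution(zip_path, holding=as_holding)
```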
     
    Finally, regarding solution layering - it's not just about having too many layers, and unfortunately it's not always a case of "top one wins". There are some additional nuances to be aware of, whereby the customisations to certain component types do not always merge as gracefully as you expect them to across multiple solution layers. So to avoid this complexity, it's better to minimise the total number of solution layers you have, or at least not to include the same component in multiple solutions if you do end up having multiple solutions.
