In two of my recent posts, I tried approaching the CI/CD problem for Dynamics, but, in the end, I was defeated by the complexity of both approaches. Essentially, they were both artificial, since each assumed that we can’t use source control merge.
If you are wondering what those two attempts were about, have a look at these posts:
I really don’t think either of those models is viable, which is unfortunate, of course.
This is not over yet – I’m still kicking here, so there goes round #3.
The way I see it, we do need source control merge. That might not be easy, considering that we are talking about merging XML files, but I don’t think there is any other way. If we can’t merge (automatically, manually, or semi-automatically), the whole CI/CD model starts breaking apart.
Of course, the problem with XML merge (whether it’s done manually or through source control) is that whoever is doing it needs to understand what they are doing. Which really means they need to understand the structure of that XML.
And then, of course, there is that long-standing concept of “manual editing of customizations.xml is not supported”.
By the way, I’m assuming you are familiar with the solution packager:
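If not, the two operations we’ll be relying on look roughly like this (the solution name and folder paths below are just placeholders):

```shell
REM Unpack a solution zip into a folder structure suitable for source control
SolutionPackager.exe /action:Extract /zipfile:ContactManagement.zip /folder:ContactManagement /packagetype:Both

REM Re-package the unpacked files into a solution zip that can be imported
SolutionPackager.exe /action:Pack /zipfile:ContactManagement_repacked.zip /folder:ContactManagement /packagetype:Unmanaged
```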
So, to start with, I am going to cheat here.
Manual editing might not be quite supported, but how would anyone know that I’ve edited a file manually if that file has been imported into the Dynamics/PowerPlatform instance?
In other words, imagine that we’ve gone through the automated or manual merge, packaged the solution, and imported that solution into the CDS instance:
What do we have as a result?
We have a solution file that can be imported into the CDS instance, so, from the PowerPlatform standpoint, it’s a perfectly valid file.
How do we know the merge went well from the functional standpoint? The only way to prove it would be to look at our customizations in the CDS instance and see whether the functionality has changed. Why would it change? Well, we are talking about XML merge, so who knows. Maybe the order of the form tabs has changed, maybe a section has become invisible, maybe we’ve just managed to remove a field from the form somehow…
Therefore, when I wrote that I am going to cheat, here is what I meant:
- I am going to use XML merge and assume it’s “supported” as long as the solution import works
- To cover the regression question, I am going to test the result of the merge with a UI testing framework. Since we are talking about PowerPlatform/Dynamics, I’ll be using EasyRepro
Finally, I am going to assume that the statements above are included in my “definition of done” (in Scrum terms). In other words, as long as the solution import works fine and the result passes our regression tests, we can safely release that solution to QA.
With that in mind, let’s see if we can build out this process!
The scenario I want to cover is:
To complicate things, let’s say there are two developers on the team. The first one will be creating a new entity, and the other one will be adding a field and updating a script.
At the moment, the unpacked version of our ContactManagement solution is stored in source control. A few environments have been created for this exercise, and for each of them there is a corresponding service connection in DevOps:
The first developer will be using DevFeature1 environment for Development, and TestFeature1 environment for automated testing.
The second developer will be using DevFeature2 environment for development and TestFeature2 for automated testing.
The master branch will be tested in the TestMaster environment. Once the changes are ready for QA, they will be deployed to the QA environment.
Of course, all the above will be happening in Azure DevOps, so there will be a Git repository, too.
To make the developers’ lives easier, there will be 3 pipelines in DevOps:
- Export and Unpack – this one will export the solution from the instance, unpack it, and store it in source control
- Build and Test – this one will package the solution back into a zip file, import it into the test environment as a managed solution, and run the EasyRepro tests. It will run automatically whenever a commit happens on the branch
- Prepare Dev – similar to “Build and Test”, except that it will import the unmanaged solution into the dev environment and won’t run the tests
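To give an idea of what “Build and Test” could look like, here is a YAML sketch. The task names come from the Power Platform Build Tools extension; the inputs, file names, and the service connection naming are my assumptions for this sketch:

```yaml
# Sketch of a "Build and Test" pipeline (connection/file names are placeholders)
trigger:
  branches:
    include:
      - '*'          # run on every commit, on any branch

steps:
  - task: PowerPlatformToolInstaller@0

  - task: PowerPlatformPackSolution@0
    inputs:
      SolutionSourceFolder: '$(Build.SourcesDirectory)\ContactManagement'
      SolutionOutputFile: '$(Build.ArtifactStagingDirectory)\ContactManagement_managed.zip'
      SolutionType: 'Managed'

  - task: PowerPlatformImportSolution@0
    inputs:
      authenticationType: 'PowerPlatformSPN'
      PowerPlatformSPN: 'Test$(Build.SourceBranchName)'   # e.g. TestFeature1
      SolutionInputFile: '$(Build.ArtifactStagingDirectory)\ContactManagement_managed.zip'

  - task: VSTest@2
    inputs:
      testAssemblyVer2: '**\*Tests*.dll'
      runSettingsFile: 'test.runsettings'
```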
Whenever a task within either of those pipelines needs a CDS connection, it will use the branch name to identify the connection. For example, the task below will use the DevFeature1 connection whenever the pipeline is running on the Feature1 branch:
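The idea boils down to a tiny naming convention. As a sketch (the Dev/Test prefixes are my assumption), a script step could derive the connection name from the branch name, which Azure DevOps exposes to scripts as the predefined BUILD_SOURCEBRANCHNAME variable:

```shell
#!/bin/sh
# Derive a service connection name from the branch name.
# In a pipeline script step the branch would come from $BUILD_SOURCEBRANCHNAME;
# here it is passed as an argument so the function is easy to test.
branch_to_connection() {
  prefix="$1"   # "Dev" or "Test"
  branch="$2"   # e.g. "Feature1"
  printf '%s%s\n' "$prefix" "$branch"
}

branch_to_connection Dev Feature1    # prints DevFeature1
branch_to_connection Test Feature1   # prints TestFeature1
```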
There is something to keep in mind. Since we need an unmanaged solution in the development environment, and since there is no task that can reset an environment to a clean state yet, each developer will need to reset the corresponding dev environment manually. That will likely involve 3 steps:
- Delete the environment
- Create a new one and update / create a connection in DevOps
- Use “Prepare Dev” pipeline on the correct branch to prepare the environment
So, let’s say both developers have created new dev / test environments, all the connections are ready, and the extracted/unpacked solution is in source control. Everyone is ready to go, but Developer #1 (who will be adding a new form) goes first. Actually, let’s call him John, and let’s call the other one Debbie.
Assuming the repository has already been cloned locally, let’s pull master and create a new branch:
- $ git pull origin master
- $ git checkout -b Feature1
At this point John has a full copy of master in the Feature1 branch. However, this solution includes EasyRepro tests, and EasyRepro, in turn, requires a connection string for the CDS instance. Since every branch will have its own test environment, John needs to update the connection string for the Feature1 branch. So he opens the test.runsettings file and updates the connection parameters:
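For illustration, a test.runsettings file with VSTest run parameters might look like the fragment below. The parameter names are made up for this sketch — the real ones depend on how the EasyRepro tests read their configuration:

```xml
<!-- test.runsettings: parameter names are hypothetical -->
<RunSettings>
  <TestRunParameters>
    <Parameter name="CrmUrl" value="https://testfeature1.crm.dynamics.com/" />
    <Parameter name="CrmUsername" value="john@contoso.onmicrosoft.com" />
    <!-- keep real secrets out of source control, e.g. in a pipeline variable -->
    <Parameter name="CrmPassword" value="$(TestUserPassword)" />
  </TestRunParameters>
</RunSettings>
```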
Now it’s time to push all these changes back to the remote repository so John can use a pipeline to prepare his own dev instance.
- $ git add .
- $ git commit -m "Feature1 branch init"
- $ git push origin Feature1
There is now a new branch in the repo:
Remember that, so far, John has not imported the ContactManagement solution into the dev environment, so he only has a few default sample solutions in that instance:
So, John goes to the list of pipelines and triggers “Prepare Dev” on the Feature1 branch:
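As a side note — assuming the Azure DevOps CLI extension is installed, the same pipeline can also be queued from the command line (the pipeline name here is a placeholder):

```shell
# Queue the "Prepare Dev" pipeline on the Feature1 branch
# (requires the azure-devops extension: az extension add --name azure-devops)
az pipelines run --name "Prepare Dev" --branch Feature1
```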
As the job starts running, it checks out the Feature1 branch on the build agent, which is important since that’s exactly the branch John wants to work on:
It takes a little while, since the pipeline has to repackage the ContactManagement solution from source control and import it into the DevFeature1 instance. In a few minutes, the pipeline completes:
All the tasks completed successfully, so John opens the DevFeature1 instance in the browser to verify that the ContactManagement solution has been deployed, and, yes, it’s there:
And it’s unmanaged, which is exactly what we needed.
But what about Debbie? Just as John started working on his sprint task, Debbie needs to do exactly the same, but she’ll be doing it on the Feature2 branch.
- $ git checkout master
- $ git pull origin master
- $ git checkout -b Feature2
- Prepare dev and test instances for Feature2
- Update test.runsettings with the TestFeature2 connection settings
- $ git add .
- $ git commit -m "Feature2 branch init"
- $ git push origin Feature2
At this point the sources are ready, but she does not have the ContactManagement solution in the DevFeature2 instance yet:
She starts the same pipeline, but on the Feature2 branch this time:
A couple of minutes later, she has the ContactManagement solution deployed into her Dev instance:
Just to recap, what has happened so far?
John and Debbie both have completed the following 3 steps:
They can now start making changes in their dev instances, which will be the topic of the next blog post.