
CI/CD for PowerPlatform: Developing Feature2

 

Almost a month has passed since the previous post on the DevOps topic, so the imaginary "Debbie" developer has left the project, and, it seems, I have to finish development of that second feature myself… Oh well, let's do it then!

(Tip: if you have no idea what I am talking about above, have a look at the previous post first)

1. Run Prepare Dev to prepare the dev environment

clip_image002

2. Review the environment to make sure unmanaged solution is there

clip_image004

3. Add new field to the Tag entity form

clip_image005

4. Run Export and Unpack pipeline on the Feature2 branch

This is to get those changes above pushed to the Feature2 branch

5. Make sure I am on Feature2 branch in the local repository

git checkout Feature2

Since I got some conflicts, I’ve deleted my out-of-sync Feature2 first:

git checkout master
git branch -D Feature2
git checkout Feature2
git pull origin Feature2

6. Update the script

At the moment of writing, it seems PowerApps Build Tools do not support solution packager map files, so, for the JS files and plugins (which can be built separately and need to be mapped), it's done a little differently. There is a PowerShell script that copies those files from their original location to where they should be in the unpacked solution.
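To illustrate, such a copy script can be as simple as a few Copy-Item calls. The snippet below is only a rough sketch – the paths are made up and do not necessarily match what the real script in this project (replacefiles.ps1) does:

# Copy built artifacts over the placeholder files in the unpacked solution
# (illustrative paths – adjust to the actual repository layout)
Copy-Item -Path ".\Code\Scripts\tagform.js" -Destination ".\Solution\WebResources\ita_tagform.js" -Force
Copy-Item -Path ".\Plugins\bin\Release\ItAintBoring.Plugins.dll" -Destination ".\Solution\PluginAssemblies\ItAintBoring.Plugins.dll" -Force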

As for the script I need to modify, it is in the Code folder:

clip_image006

clip_image007

The way that script gets added to the solution as a web resource is through another script that runs in the build pipelines:

clip_image009

So, if I had to add another web resource, I would do this:

  • Open solution in PowerApps
  • Add a web resource
  • Run Export and Unpack pipeline on the branch
  • Pull changes to the local repo
  • Figure out where the source of my new web resource would be (could be added to the same Code subfolder above)
  • Update replacefiles.ps1 script to have one more “Copy-Item” line for this new web resource

 

Since I am not adding a new script now but updating one that is already there, I'll just update the existing tagform.js:

clip_image010

7. Commit and push the change to Feature2

git add .
git commit -m "Updated tagform script"
git push origin Feature2

8. Run Prepare Dev build pipeline on Feature2 branch to deploy updated script

This is similar to step #1

Note: the previous two steps could be done differently. I could even go to the solution in PowerApps and update the script there if I did not need/want to maintain the mappings, for example.

9. Now that the script is there, I can attach the event handler

clip_image011

10. Publish and test

clip_image013

11. Run Export and Unpack pipeline on the Feature2 branch to get updated solution files in the repository

12. Pull changes to the local Feature2 branch

git checkout Feature2
git pull origin Feature2

13. Merge changes from Master

git checkout master
git pull origin master
git checkout Feature2
git merge -X theirs master
git push origin Feature2

14. Retest everything

First, run the Prepare Dev pipeline on the Feature2 branch and review the Feature2 dev environment manually.

At this point, you should actually see New Entity from Feature1 in the Feature2 dev environment:

clip_image014

Then, run Build and Test pipeline on the Feature2 branch and ensure all existing tests have passed.

15. Finally, merge into Master and push the changes

git checkout master
git merge -X theirs Feature2
git push origin master

16. Build and Test pipeline will be triggered automatically on the master branch – review the results

Ensure automated tests have passed

Go to the TestMaster environment and do whatever manual testing is needed

 


Filtered N:N lookup

If you have ever tried using out-of-the-box N:N relationships, you may have noticed that we cannot filter the lookups when adding existing items to the relationship subgrids.

In other words, imagine you have 3 entities:

  • Main entity
  • Complaint entity
  • Finding entity

The main entity is the parent for the other two. However, every complaint may also be linked to multiple findings and vice versa… although that linkage should only happen within the main entity – if there are two main records, it should only be possible to link complaints and findings related to the same main record.

Which is not how it works out of the box. I have two main records below; the first one has two complaints and two findings, and the second one has one complaint and one finding:

image

image

image

There is an N:N between Findings and Complaints, so what if I wanted to link Complaint #1 on the first main record to both of the findings for the first main record?

That’s easy – open the complaint, open related findings, click “add existing” and…

image

Wait a second, why are there 3 findings?

Let’s try it the other way around – let’s open Finding #1 (first), and try adding complaints:

image

Only two records this time, and both are related to the correct main record – why the difference?

The trick is that there is a custom script to filter complaints. In essence, that script has been around for a while:

https://www.magnetismsolutions.com/blog/paulnieuwelaar/2018/05/17/filter-n-n-add-existing-lookup-dynamics-365-v9-supported-code

It just did not seem to work “as is” in the UCI, so there is an updated version here:

https://github.com/ashlega/ItAintBoring.FilteredNtoN/blob/master/FilteredNtoN.js

All the registration steps are, mostly, the same. There are a couple of adjustments, though:

You can use the same script for all N:N relationships, but, every time you introduce a new relationship, you need to update the function below to define the filters:

image

For every N:N relationship you want to filter, you will need to add one or two conditions there, since, in my example above, you may be adding findings to complaints or complaints to findings. It's the same relationship, but the primary entity can be either one, and, depending on which primary entity it is, a different filter applies.

When configuring the command in the Ribbon Workbench (have a look at the original post above), there is one additional parameter to fill in – the list of relationships for which you want the lookup to be filtered:

image

In the example above, it's just one relationship, but it could be a comma-separated list of relationships if I wanted the complaint entity to be filtered for different N:N relationships.

That's about it… There is also a demo solution with those 3 entities (plus the script) which you can import to try it all out:

https://github.com/ashlega/ItAintBoring.FilteredNtoN/blob/master/DemoFilteredSelector_1_0_0_0.zip

MFA, PowerApps, XrmTooling and XrmToolbox

 

If you are working in an online environment where authentication requirements have started to shift towards MFA, you might have noticed that tools like XrmToolBox (or even the SDK itself) are not always that MFA-friendly.

To begin with, MFA is always interactive – the whole purpose of multi-factor authentication is to ensure that you are who you say you are, not just somebody who managed to steal your username and password. Hence, there are additional verifications involved – be that an SMS message, an authenticator app on the phone, or, if you are that unlucky, a custom RSA token validation.

There are different ways to bypass the MFA.

If your organization is willing to relax security restrictions, you might get legacy authentication enabled, so you would be able to get away with authenticating the old way – by providing a login/password within the connection string. Having had some experience with this, I think this solution is not quite viable. Security groups within organizations will be cracking down on this approach, and, sooner or later, you may need something else.
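For reference, the "old way" is the username/password style of XrmTooling connection string (all values below are placeholders):

AuthType=Office365;Username=jsmith@contoso.onmicrosoft.com;Password=passcode;Url=https://contoso.crm.dynamics.com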

Besides, MFA is not always Azure-based. In hybrid environments where authentication is done through on-premises ADFS, there could be other solutions deployed. To be fair, having to figure out how to connect XrmToolBox to the online org in this kind of environment is exactly why I ended up writing this blog post.

But the final explanation/solution is applicable to the other scenarios, too.

To be more specific, here is the scenario that confused XrmToolBox to the point of no return:

image

It was all working well when I was connecting to CDS in the browser, but, as far as XrmToolBox was concerned, somehow it just did not want to work with this pattern.

The remaining part of this post may include some inaccuracies – I am not a big specialist in OAuth etc, so some of this might be my interpretation. Anyway, how do we make everything work in the scenario above?

This is where we need to look at the concept of OAuth applications. Basically, the idea is that we can register an application in Azure AD and give that app permissions to use the Dynamics APIs:

https://docs.microsoft.com/en-us/powerapps/developer/common-data-service/walkthrough-register-app-azure-active-directory

This would be great, but, if we wanted to bypass all the 2FA above, we would have to, somehow, stop using our user account for authentication.

Which is why we might register a secret for our new Azure App. However, application secrets are not supported in the XrmTooling connection strings:

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/xrm-tooling/use-connection-strings-xrm-tooling-connect

So, what was the point of registering an app, you may ask?

There is another option where we can use a certificate instead, and you may want to have a look at the following page at some point:

https://docs.microsoft.com/en-us/powerapps/developer/common-data-service/authenticate-oauth

If you look at the samples there, here is how it all goes:

image

It's a special AuthType ("Certificate"), and the whole setup process involves a few steps:

  • Registering an application in Azure AD
  • Uploading a certificate (I used one of those I had in the certificate store on my Windows laptop. It does not even have to be your personal certificate)
  • Creating an application user in CDS
  • Creating a connection string for XrmToolBox

 

To register an app, you can follow one of the links above. Once the app is registered, you can upload the certificate – what you'll see is a thumbprint which you will need to use in the connection string. Your XrmTooling client, when connecting, will try to find that certificate on the local machine by the thumbprint, so it's not as if you would be able to use the thumbprint (as a password) without the certificate.
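If you are not sure which thumbprints you have available locally, a quick way to list them is the certificate provider built into PowerShell (this is generic PowerShell, not something specific to XrmTooling):

Get-ChildItem Cert:\CurrentUser\My | Select-Object Subject, Thumbprint
# And, if you have no suitable certificate at all, New-SelfSignedCertificate can presumably create one:
# New-SelfSignedCertificate -Subject "CN=XrmToolBoxConnection" -CertStoreLocation Cert:\CurrentUser\My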

While trying to make this work, I've uploaded a few certificates to my app, so here is how it looks:

image

What about the application user in CDS? I think I had heard about it before, I just never realized what the purpose was. However:

  • Application users are linked to the Azure applications
  • They do not require a license

 

How do you create one? In the CDS instance, go to Settings->Security->Users and make sure to choose “Application Users” view:

image

Surprisingly, you will actually be able to add a user from that view, and the system won't be suggesting that you need to do it through the Office admin center instead. Adding such a user is a pretty straightforward process; you just need to make sure you are using the right form (Application User):

image

For the email and user name, use whatever you want. For the application ID, make sure to use the actual application ID from the Azure AD.

Don't forget to assign permissions to that user (in my case, I figured I'd make that user a System Admin).

Once you have reached this point, the rest is simple.

Go to XrmToolBox and start creating a new connection. Make sure to choose the "Connection String" option:

image

Set up the connection string like this (use your certificate thumbprint and your application’s appid):

image
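In plain text, the connection string would look something like this (per the XrmTooling documentation linked above; the thumbprint and ClientId below are placeholders):

AuthType=Certificate;Url=https://contoso.crm.dynamics.com;Thumbprint=DC6C689022C905EA5F812B51F1574ED10F256FF7;ClientId=545cd32f-ab2a-4160-b288-f62d02de05f4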

Click next, give that connection a name, and voila… You should be able to connect without MFA under that special application user account now.

Using Admin powershell cmdlets with PowerPlatform

There is a bunch of useful admin cmdlets we can use with the PowerPlatform, but, as it turned out, they can be a little tricky.

As part of the CI/CD adventure, I wanted to start using those admin scripts to create/destroy environments on the fly, so here is what you may want to keep in mind.

Do make sure to keep the libraries up to date by installing updated modules

Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Force
Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber -Force

EnvironmentName parameter means GUID, not the actual display name

For example, in order to remove an environment you might need to run a command like this:

Remove-AdminPowerAppEnvironment -EnvironmentName 69c2da9a-736b-4f09-9b5c-3163842f539b
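If all you know is the display name, you can look the GUID up first – something along these lines ("DevFeature1" below is just a placeholder display name, and I'm assuming it's unique):

$envToRemove = Get-AdminPowerAppEnvironment | Where-Object { $_.DisplayName -eq "DevFeature1" }
Remove-AdminPowerAppEnvironment -EnvironmentName $envToRemove.EnvironmentName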

You may not be able to change the environment display name if there is a CDS database created for the environment

image

I believe this is because the additional "ID" you see in the name of such environments identifies the environment URL:

image

image
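For what it's worth, the rename itself can also be attempted from PowerShell – I believe the cmdlet below is the one involved (that's an assumption on my side), though, as noted above, it may not help once a CDS database exists:

# $envGuid holds the GUID of the environment to rename
Set-AdminPowerAppEnvironmentDisplayName -EnvironmentName $envGuid -NewDisplayName "New display name"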

Sometimes it helps to see what your environment looks like from the PowerShell standpoint

You can run these two commands to get those details:

$env = Get-AdminPowerAppEnvironment "*prod"

$env

image
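To see every property of that object in one go, the usual PowerShell trick applies:

$env | Format-List *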

Finally, if you are receiving an error, adding the -Verbose switch to the command may help

image

CI/CD for PowerPlatform: Making changes and merging

 

Now that John and Debbie have their own dev/test instances, and they also have their own development branches in Git (Feature1 for John, Feature2 for Debbie), it's time for them to start making changes.

John was supposed to add a new entity, so let's just assume he knows how to do that in the solution designer. Here is how the solution looked when the Feature1 branch was created:

image

And below is how the solution looks in the DevFeature1 instance once John has finished adding that entity:

image

John has added that entity to the “Contact Management” application, too, so we can see it on the screenshot below:

image

Technically, John might have stopped here and just pushed all those changes to the master branch. However, what if John is not the only one who was working on new features all this time? Maybe the master branch has already been updated. Besides, Debbie will be in this situation just a few pages later, since she will have to apply her changes on top of what John has done so far.

Therefore, it's time to tackle the merge issue, and, as I mentioned before, here is how I'm going to approach it:

  • I am going to use XML merge and assume it’s “supported” if solution import works
  • In order to cover that regression question, I am going to test the result of the merge by using a UI testing framework. Since we are talking about PowerPlatform/Dynamics, I’ll be using EasyRepro

 

In other words, what John needs to do at this point is:

  • He needs to add a UI test to cover the feature he just implemented. When it’s time for Debbie to add her changes, she will be able to test merge results against John’s test to ensure she does not break anything
  • John also needs to test his changes against all the tests that have been created so far

 

To do that, John will need to ensure that he is on the Feature1 branch first:

image

If not, the following git command will do it:

$ git checkout Feature1

There is a test project in the repository which John needs to open now:

image

To keep it simple, John will add a test to verify that a new record of the "New Entity" type can be created in the application.

The easiest way to do it would be to create a copy of the existing "Create Tag" test – that can be done in Visual Studio through the usual copy-paste. Then, there would be a few changes in the code (to update the C# class name and to change the entity name that the code will be using):

image

Once the test is ready, John should run all ContactManagement tests against his dev instance right from Visual Studio. For that, he will need to use a different instance URL, so he can use a local test.runsettings file instead of the default one. He can switch it in Visual Studio under the Test->Test Settings menu:

image

Turns out there is no problem – both the existing and the new test pass, so John's changes are good from the regression perspective, and they should also help to ensure that whoever is making changes next will be able to confirm the feature John just implemented is still working as expected:

image

Now that there is a test, John needs to export the solution from DevFeature1 and unpack it on the Feature1 branch.

  • To export and unpack the solution, John can use the "Export and Unpack" pipeline:

image

Once the job completes, John can have a quick look at the repository to double-check that the changes have been added there:

image

ita_newentity is there on the Feature1 branch, and it's not there on master, which is how it should be at this point:

image

So now John needs to do a few things:

    • Bring over remote Feature1 changes into his local Feature1
    • Merge master changes into Feature1
    • Commit changes on the Feature1 branch and re-test
    • Commit changes to master and re-test on master

 

  • $ git add .
  • $ git commit -m "New Entity Test"
  • $ git pull origin Feature1

 

Once John issues the last command, solution changes will be brought over to the local Feature1:

image

Time to merge with the master then.

  • $ git checkout master
  • $ git pull origin master
  • $ git checkout Feature1
  • $ git merge master

 

In other words: check out master and bring over the remote changes to the local copy, then check out Feature1 and merge master in.

John can now push Feature1 to the remote:

$ git push origin Feature1

Finally, John can go to DevOps and run the "Build and Test" pipeline on the Feature1 branch to see how the automated regression tests work out on the merged managed solution:

image

Once the job completes, John should definitely check whether the tests passed. They did this time:

image

image

And, just to give himself a bit of extra peace of mind, he can also go to the TestFeature1 instance to see that the managed solution has been installed and that NewEntity is there:

image

image

What’s left? Ah, yes… John still needs to push his changes to the master branch.

So:

  • $ git checkout master
  • $ git pull origin master
  • $ git merge Feature1
  • $ git push origin master

 

John's "New Entity" is on the master branch now, and the Build and Test pipeline has kicked in automatically since there were changes committed to the master branch:

image

That pipeline is now installing managed solution (from the master branch) to the TestMaster environment.

That takes a little while, but, after a few minutes, John can confirm (just like he did previously with TestFeature1) that New Entity is in the TestMaster now:

image

And the tests have passed:

image

Actually, as a result of this last “Build and Test” run, since it ran on the master branch, two solution files were created and published as artifacts:

image

They can now be used for the QA/UAT/Prod.

John can now move on to his next assignment, but I wanted to summarize what has happened so far:

image

As a takeaway so far (before we get to what Debbie has to do now), I need to emphasize a few things:

  • John certainly had to be familiar with Git. It would be difficult for him to go through the steps above without knowing what git can do, how it can do it, what the branches are, etc
  • He was also familiar with EasyRepro, and that's why he could actually create that additional test for the feature he was working on

 

Still, as a result of all the above, John was actually able to essentially bring his changes to the TestMaster instance using git merge, DevOps pipelines, and automated testing. Which means his CI/CD process is much more mature than what I, personally, used to have on most of my projects.

Let's see how it works out for Debbie (she is on the Feature2 branch, and she still needs to add a new field to the Tag entity and make a change in the related web resource).


 

 

CI/CD for PowerPlatform, round #3

 

In two of my recent posts, I tried approaching the CI/CD problem for Dynamics, but, in the end, I was obviously defeated by the complexity of both approaches. Essentially, they were both artificial, since both assumed that we can't use source control merge.

If you are wondering what those two attempts were about, have a look at these posts:

https://www.itaintboring.com/dynamics-crm/team-development-for-powerapps/

https://www.itaintboring.com/dynamics/power-apps-alm-with-git-theory/

Honestly, I don't think either of those models is viable, which is unfortunate, of course.

This is not over yet – I'm still kicking here, so here goes round #3.

image

The way I see it, we do need source control merge. Which might not be easy to do considering that we are talking about merging XML files, but I don't think there is any other way. If we can't merge (automatically, manually, or semi-automatically), the whole CI/CD model starts breaking apart.

Of course the problem with XML merge (whether it’s manual or whether it’s done through the source control) is that whoever is doing it will need to understand what they are doing. Which really means they need to understand the structure of that XML.

And then, of course, there is that long-standing concept of “manual editing of customizations.xml is not supported”.

By the way, I’m assuming you are familiar with the solution packager:

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/compress-extract-solution-file-solutionpackager
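Just as a refresher, the two operations we care about look roughly like this from the command line (the paths and the /packagetype value are illustrative):

# Unpack a solution zip into individual component files
SolutionPackager.exe /action:Extract /zipfile:ContactManagement.zip /folder:.\ContactManagement /packagetype:Both
# And package those files back into a solution zip
SolutionPackager.exe /action:Pack /zipfile:ContactManagement.zip /folder:.\ContactManagement /packagetype:Managed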

So, to start with, I am going to cheat here.

Manual editing might not be quite supported, but how would anyone know that I’ve edited a file manually if that file has been imported into the Dynamics/PowerPlatform instance?

In other words, imagine that we’ve gone through the automated or manual merge, packaged the solution, and imported that solution into the CDS instance:

image

What do we have as a result?

We have a solution file that can be imported into the CDS instance, and, so, from the PowerPlatform standpoint it’s an absolutely valid file.

How do we know that the merge went well from the functional standpoint? The only way to prove it would be to look at our customizations in the CDS instance and see if the functionality has not changed. Why would it change? Well, we are talking about XML merge, so who knows. Maybe the order of the form tabs has changed, maybe a section has become invisible, maybe we’ve just managed to remove a field from the form somehow…

Therefore, when I wrote that I am going to cheat, here is what I meant:

  • I am going to use XML merge and assume it’s “supported” if solution import works
  • In order to cover that regression question, I am going to test the result of the merge by using a UI testing framework. Since we are talking about PowerPlatform/Dynamics, I’ll be using EasyRepro

 

Finally, I am going to assume that the statements above are included in my "definition of done" (in Scrum terms). In other words, as long as the solution import works fine and the result passes our regression tests, we can safely release that solution to QA.

With that in mind, let’s see if we can build out this process!

The scenario I want to cover is:

There is a request to implement a few changes in the solution. Specifically, we need to create a new entity, and we also need to add a field to an existing entity (and to a specific form). Once the field is added, we need to update the existing JavaScript web resource so that an alert is displayed if "test" is entered into that new field.

To complicate things, let's say there are two developers on the team. The first one will be creating a new entity, and the other one will be adding a field and updating a script.

At the moment, the unpacked version of our ContactManagement solution is stored in source control. There are a few environments created for this exercise – for each of those environments there is a corresponding service connection in DevOps:

image

The first developer will be using DevFeature1 environment for Development, and TestFeature1 environment for automated testing.

The second developer will be using DevFeature2 environment for development and TestFeature2 for automated testing.

Master branch will be tested in the TestMaster environment. Once the changes are ready for QA, they will be deployed in the QA environment.

Of course, all the above will be happening in Azure DevOps, so there will be a Git repository, too.

To make the developers' lives easier, there will be 3 pipelines in DevOps:

  • Export and Unpack – this one will export solution from the instance, unpack it, and store it in the source control
  • Build and Test – this one will package solution back into a zip file, import it into the test environment as a managed solution, and run EasyRepro tests. It will run automatically whenever a commit happens on the branch
  • Prepare Dev – similar to “Build and Test” except that it will import unmanaged solution to the dev environment and won’t run the test

 

Whenever a task within any of those pipelines needs a CDS connection, it will use the branch name to identify the connection. For example, the task below will use the DevFeature1 connection whenever the pipeline is running on the Feature1 branch:

image

There is something to keep in mind. Since we will need an unmanaged solution in the development environment, and since there is no task that can reset an environment to a clean state yet, each developer will need to manually reset the corresponding dev environment. That will likely involve 3 steps (a rough PowerShell sketch for the first two follows the list):

  • Delete the environment
  • Create a new one and update / create a connection in devops
  • Use “Prepare Dev” pipeline on the correct branch to prepare the environment
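Here is that sketch – the display name, location, currency, and language below are placeholders and would need to match your tenant:

# Delete the old dev environment (its GUID is assumed to be in $oldEnvironmentGuid)
Remove-AdminPowerAppEnvironment -EnvironmentName $oldEnvironmentGuid
# Create a fresh one and add a CDS database to it
$newEnv = New-AdminPowerAppEnvironment -DisplayName "DevFeature1" -Location canada -EnvironmentSku Production
New-AdminPowerAppCdsDatabase -EnvironmentName $newEnv.EnvironmentName -CurrencyName CAD -LanguageName 1033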

 

So, let's say both developers have created new dev/test environments, all the connections are ready, and the extracted/unpacked solution is in source control. Everyone is ready to go, but Developer #1 (who will be adding the new entity) goes first. Actually, let's call him John, and the other one will be Debbie.

Assuming the repository has already been cloned locally, let's pull master and create a new branch:

  • $ git pull origin master
  • $ git checkout -b Feature1

At this point John has a full copy of the master branch in his Feature1 branch. However, the solution includes EasyRepro tests, and EasyRepro, in turn, requires a connection string for the CDS instance. Since every branch will have its own test environment, John needs to update the connection string for the Feature1 branch. So he opens the test.runsettings file and updates the connection parameters:

image

Now it's time to push all these changes back to the remote repository so John can use a pipeline to prepare his own dev instance.

  • $ git add .
  • $ git commit -m "Feature1 branch init"
  • $ git push origin Feature1

There is now a new branch in the repo:

image

Remember that, so far, John has not imported the ContactManagement solution into the dev environment, so he only has a few default sample solutions in that instance:

image

So, John goes to the list of pipelines and triggers “Prepare Dev” on the Feature1 branch:

image

As the job starts running, it checks out a local version of the Feature1 branch on the build agent, which is important since that's exactly the branch John wants to work with:

image

It takes a little while, since the pipeline has to repackage the ContactManagement solution from source control and import it into the DevFeature1 instance. In a few minutes, the pipeline completes:

image

All the tasks completed successfully, so John opens the DevFeature1 instance in the browser to verify that the ContactManagement solution has been deployed, and, yes, it's there:

image

And it’s unmanaged, which is exactly what we needed.

But what about Debbie? Just as John did when he started working on his sprint task, Debbie needs to do exactly the same, but she'll be doing it on the Feature2 branch.

  • $ git checkout master
  • $ git pull origin master
  • $ git checkout -b Feature2
  • Prepare dev and test instances for Feature2
  • Update test.runsettings with the TestFeature2 connection settings
  • $ git add .
  • $ git commit -m "Feature2 branch init"
  • $ git push origin Feature2

 

At this point the sources are ready, but she does not have the ContactManagement solution in the DevFeature2 instance yet:

image

She starts the same pipeline, but on the Feature2 branch this time:

image

A couple of minutes later, she has the ContactManagement solution deployed into her Dev instance:

image

 

Just to recap, what has happened so far?

John and Debbie both have completed the following 3 steps:

image

They can now start making changes in their dev instances, which will be a topic of the next blog post.


 

Power Apps ALM with Git (theory)

 

I've been definitely struggling to figure out any kind of sane "merge" process for the configuration changes, so I figured I'd just try to approach ALM differently using the good old "master configuration" idea (http://gonzaloruizcrm.blogspot.com/2012/01/setting-up-your-development-environment.html)

Here is what I came up with so far:

image

 

  • There are two repositories in Git: one for the code, and another one for the unpacked solution. Why two repos? We can use merge in the code repository, but we can't, really, use merge in the solution repository. Instead, it'll have to be "push --force" to the master branch in that repo so the files are always updated (not merged) with whatever comes from the Dev instance. Am I overthinking it?
  • Whenever there is a new feature to develop, we should apply configuration changes in the main DEV instance directly. The caveat is that they might be propagated to the QA/UAT/PROD before the feature is 100% ready, so we should try to isolate those changes through new views/forms/applications. Which we can, eventually, delete (And, since we are using managed solutions in the QA/UAT/PROD, “delete” will propagate to those environments through the managed solution)
  • At some point, once we are satisfied with the configuration, we can push (force) it to the solution repo. Then we can use a devops pipeline to create a feature Dev instance from Git. We will also need to create a code branch
  • In that feature Dev instance, we’ll only be developing code (on the feature code branch)
  • Once the code is ready, we will merge it with the master branch, will refresh Feature Dev instance from the main Dev Instance, will register required SDK steps and event handlers in the main DEV instance, and we will update solution repo. At this point the feature might be fully ready, or we may have to repeat the process again (maybe a few times)

 

We might utilize a few devops pipelines there:

  • One pipeline to create an instance, deploy a solution, and populate sample data in the Feature Dev instance (to use when we are starting to work on the code for the feature)
  • Another pipeline to push (force) the unpacked managed/unmanaged DEV instance solution to Git. This one might be triggered automatically whenever a "publishall" event happens. Might try using a plugin to kick off the build
  • Another pipeline to do smoke tests with EasyRepro in the specified environment (might run smoke tests in Feature Dev, but might also run them in the main Dev)
  • And yet another pipeline to deploy managed solution to the specified environment (this one might be a gated release pipeline if I understand those correctly)

Team development for PowerApps

 

Team development for Dynamics has always been a somewhat vague topic.

To start with, it’s usually recommended to use SolutionPackager – presumably, that helps with the source control since you can unpack solution files, then pack them, then observe how individual components have changed from one commit to another. But what does it really give you? Even Microsoft itself admits that there is this simple limitation:

image

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/use-source-control-solution-files

In that sense you might, of course, use git to merge various versions of the solution component files, but that would not be different from manual editing which, as per the screenshot above, is only partially supported.

The only real merge solution we have (at least as of now) is deploying our changes to the target environment through a solution file or, possibly, re-applying them manually in that environment using a solution designer.

That might be less of a problem if all Dynamics/PowerApps source artefacts were stored in CDS. But, of course, they are not. Plugin source code and TypeScript sources for various JavaScript web resources are supposed to be stored in source control. And, even more, the solution itself had better be stored in source control, just so we don't lose everything when somebody accidentally deletes the PowerApps environment.

So what do we do? And why do we need to do anything?

Developers are used to good development practices, so it's no wonder they want to utilize the same familiar Git workflows with Dynamics/PowerApps.

I am not sure I can really suggest anything magical here, but, it seems, we still need a way to somehow incorporate solutions into the Git workflow which looks like this:

image

https://www.quora.com/What-is-the-difference-between-master-and-develop-branch-in-Git (although, I guess, the original source is not Quora)

Come to think of it, the only idea I really have when looking at this diagram is:

  • Creating a branch in Git would be an equivalent of creating a copy environment in Dynamics/CDS
  • Merging in Git would be an equivalent of bringing a transport solution and/or re-applying configuration changes from the corresponding feature development environment to the higher “branch” environment

 

That introduces a bunch of manual steps along the way, of course. Besides, creating a new environment in PowerApps is not free – previously, we would have to pay for each new instance. If your subscription is storage-based these days, then, at the very least, you need to ensure you have enough additional storage in your subscription.

And there is yet another caveat – depending on what it is you need to develop on the “feature branch”, you may also need some third-party solutions in the corresponding CDS environment, and those solutions may require additional licenses, too.

At the very least, we need two environments:

  • Production (logically mapped to the master branch in Git)
  • Development (logically mapped to the development branch in Git)

 

When it comes to feature development, there might be two scenarios:

  • We may be able to create a separate CDS environment for feature development, in which case we should also create a source code branch
  • We may not be able to create a separate CDS environment for feature development, in which case we should not be creating a source code branch

 

Altogether, the whole workflow might look like this:

image

We might create a few more branches for QA and UAT – in that case QA, for example, would be in place of Master on the diagram above. From QA to UAT to Master it would be the same force push followed by build and deploy.

Of course, there is one remaining step here, which is that I need to build out a working example, probably in DevOps…

PS. On the other hand, if somebody out there reading this post has figured out how to do “merge” of the unpacked solution components in the source control without entering the “unsupported area”, maybe you could share the steps/process. That would be awesome.

 

 

 

 

Public Preview of PowerApps Build Tools

 

Recently, there was an interesting announcement from the Power Apps Team:

image

https://powerapps.microsoft.com/en-us/blog/automate-your-application-lifecycle-management-alm-with-powerapps-build-tools-preview/

Before I continue, I wanted to quickly summarize the list of Azure DevOps tasks available in this release. Here it goes:

  • PowerApps Tools Installer
  • PowerApps Import Solution
  • PowerApps Export Solution
  • PowerApps Unpack Solution
  • PowerApps Pack Solution
  • PowerApps Set Solution Version
  • PowerApps Deploy Package
  • PowerApps Create Environment
  • PowerApps Delete Environment
  • PowerApps Copy Environment
  • PowerApps Publish Customizations

This looks interesting, yet I can't help but notice that Wael Hamze has had most of those tasks in his build tools for a while now:

https://marketplace.visualstudio.com/items?itemName=WaelHamze.xrm-ci-framework-build-tasks

Actually, I’ve seen a lot of different tools and scripts which were all meant to facilitate automation.

How about Scott Durow's SparkleXrm? (https://github.com/scottdurow/SparkleXrm)

Even I tried a few things along the way (https://www.itaintboring.com/tag/ezchange/, https://www.itaintboring.com/dynamics-crm/a-powershell-script-to-importexport-solutions-and-data/)

So, at first glance, those tasks released by the PowerApps team might not look that impressive.

But, if that’s what you are thinking, you might be missing the importance of this release.

Recently, the PowerApps team has taken a few steps which might all indicate that the team is getting serious about "healthy ALM":

  • Solution Lifecycle Management whitepaper was published in January
  • Solution history viewer was added to PowerApps/Dynamics
  • Managed solutions have become “highly recommended” for production (try exporting a solution from the PowerApps admin portal, and you’ll see what I’m talking about)

And there were a few other developments: Flows and Canvas Apps became solution-aware, the solution packager was updated to support the most recent technologies (Flows, Canvas Apps, PCF), etc.

The tooling, however, was missing. Of course, there has always been third-party tooling, but I can see how somebody on the PowerApps team decided that it's time to create a solid foundation for the ALM story they are going to build, and there can be no such foundation without suitable internal tooling.

As it is now, that tooling might not, really, be that superior to what the community has already developed in various forms by this time. But the important part is that the PowerApps team is demonstrating that they are taking this whole ALM thing seriously, and they've actually stated pretty much that in the release announcement:

“This initial release is the first step towards a more comprehensive, yet simplified story around ALM for PowerApps. A story we will continue to augment by adding features based on feedback, but equally important – by continuing to invest in more training and documentation with prescriptive guidance. In other words, our goal is to enable our customers and partners to focus more on innovation and building beautiful, innovative apps and less time on either figuring out how to automate or perform daunting manual tasks that are better done automated.”

So… I’m eager to see how it’s going to evolve – it’s definitely been long overdue, and I’m hoping we’ll see more ALM from the PowerApps team soon!

PS. There is a link buried in that announcement that you should definitely read through as well: https://pabuildtools.blob.core.windows.net/docs/PowerApps%20Build%20Tools.htm  Open that page, scroll down almost to the bottom. There will be a “Tutorial”, and, right at the start of the tutorial, you’ll see a link to the hands-on lab. Make sure to download it! There is a lot of interesting stuff there which will give you a pretty good idea of where ALM is going for PowerApps.

When the error message is lost in translations

Every now and then, I see this kind of error message in the UCI:

image

It may seem useful, but, when looking at the log file, all I can say is that, well, something has happened, since all I can see in the downloaded log file is a bunch of call stack lines similar to the one below:

at Microsoft.Crm.Extensibility.OrganizationSdkServiceInternal.Update(Entity entity, InvocationContext invocationContext, CallerOriginToken callerOriginToken, WebServiceType serviceType, Boolean checkAdminMode, Boolean checkForOptimisticConcurrency, Dictionary`2 optionalParameters)

One trick I learned about those errors in the past is that switching to the classic UI sometimes helps, since the error may look more useful there. Somehow, though, I was not able to reproduce the error above in the classic UI this time around, so… here is another trick if you run into this problem:

  • Open browser dev tools
  • Reproduce the error
  • Switch to the “Network” tab and look for the errors

There is a chance you’ll find a request that errored out, and, if you look at it, you might actually see the error message:

image

That said, I think it’s been getting better lately since there are errors that will show up correctly in the UCI. Still, sometimes the errors seem to be literally lost in translations between the server and the error dialog on the browser side, so the trick above might help you get to the source of the problem faster in such cases.