Monthly Archives: July 2019

Using Admin powershell cmdlets with PowerPlatform

There are a bunch of useful admin cmdlets we can use with the PowerPlatform, but, as it turned out, they can be a little tricky.

As part of the CI/CD adventure, I wanted to start using those admin scripts to create/destroy environments on the fly, so here is what you may want to keep in mind.

Do make sure to keep the libraries up to date by installing updated modules

Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Force
Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber -Force
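
To double-check what you've ended up with, something like this should do:

Get-Module -ListAvailable Microsoft.PowerApps.Administration.PowerShell, Microsoft.PowerApps.PowerShell | Select-Object Name, Version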

The EnvironmentName parameter expects the environment GUID, not the display name

For example, in order to remove an environment you might need to run a command like this:

Remove-AdminPowerAppEnvironment -EnvironmentName 69c2da9a-736b-4f09-9b5c-3163842f539b
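
If all you have is the display name, you can look the GUID up first. A minimal sketch (the display name below is just an example; the objects returned by Get-AdminPowerAppEnvironment include both DisplayName and EnvironmentName):

$environment = Get-AdminPowerAppEnvironment | Where-Object { $_.DisplayName -eq "My Dev Environment" }
Remove-AdminPowerAppEnvironment -EnvironmentName $environment.EnvironmentName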

You may not be able to change the environment display name if there is a CDS database created for the environment

image

I believe this is because the additional “ID” you see in the name of such environments identifies the environment URL:

image

image

Sometimes it helps to see what your environment looks like from the PowerShell standpoint

You can run these two commands to get those details:

$env = Get-AdminPowerAppEnvironment "*prod"

$env

image

Finally, if you are receiving an error, adding the -Verbose switch to the command may help. For example, re-using the same environment GUID from the earlier example:
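
Remove-AdminPowerAppEnvironment -EnvironmentName 69c2da9a-736b-4f09-9b5c-3163842f539b -Verbose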

image

CI/CD for PowerPlatform: Making changes and merging

 

Now that John and Debbie have their own dev/test instances, and they also have their own development branches in Git (Feature1 for John, Feature2 for Debbie), it's time for them to start making changes.

John was supposed to add a new entity, so let's just assume he knows how to do that in the solution designer. Here is how the solution looked when the Feature1 branch was created:

image

And below is how the solution looks in the DevFeature1 instance once John has finished adding that entity:

image

John has added that entity to the “Contact Management” application, too, so we can see it in the screenshot below:

image

Technically, John might have stopped here and just pushed all those changes to the master branch. However, what if John is not the only one who was working on new features all this time? Maybe the master branch has already been updated. Besides, Debbie will be in this situation just a few pages later since she will have to apply her changes on top of what John has done so far.

Therefore, it's time to tackle those merge issues, and, as I mentioned before, here is how I'm going to approach it:

  • I am going to use XML merge and assume it’s “supported” if solution import works
  • In order to cover that regression question, I am going to test the result of the merge by using a UI testing framework. Since we are talking about PowerPlatform/Dynamics, I’ll be using EasyRepro

 

In other words, what John needs to do at this point is:

  • He needs to add a UI test to cover the feature he just implemented. When it’s time for Debbie to add her changes, she will be able to test merge results against John’s test to ensure she does not break anything
  • John also needs to test his changes against all the tests that have been created so far

 

To do that, John will need to ensure that he is on the Feature1 branch first:

image

If not, the following git command will do it:

$ git checkout Feature1

There is a test project in the repository which John needs to open now:

image

To keep things simple, John will add a test to verify that a new record of the “New Entity” type can be created in the application.

The easiest way to do it would be to create a copy of the existing “Create Tag” test – that can be done in Visual Studio through the usual copy-paste. Then, there would be a few changes in the code (to update the C# class name and to change the entity name that the code will be using):

image

Once the test is ready, John should run all the ContactManagement tests against his dev instance right from Visual Studio. For that, he will need to use a different instance URL, so he can use a local test.runsettings file instead of the default one. He can do that in Visual Studio under the Test->Test Settings menu:

image
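
By the way, if the command line works better, the same local settings file can be passed to vstest directly. A quick sketch (the dll path and the file name below are just examples):

vstest.console.exe .\bin\Debug\ContactManagement.Tests.dll /Settings:local.runsettings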

Turns out there is no problem – both the existing and the new test pass, so John's changes are good from the regression perspective, and they should also help ensure that whoever is making changes next will be able to confirm that the feature John just implemented is still working as expected:

image

Now that there is a test, John needs to export the solution from DevFeature1 and unpack it on the Feature1 branch.

  • To export and unpack the solution, John can use the “Export and Unpack” pipeline:

image

Once the job completes, John can have a quick look at the repository to double-check that the changes have been added there:

image

ita_newentity is there on the Feature1 branch. And it's not there on the master, which is how it should be at this point:

image

So now John needs to do a few things:

    • Bring over remote Feature1 changes into his local Feature1
    • Merge master changes into Feature1
    • Commit changes on the Feature1 branch and re-test
    • Commit changes to master and re-test on the master branch

 

  • $ git add .
  • $ git commit -m "New Entity Test"
  • $ git pull origin Feature1

 

Once John issues the last command, solution changes will be brought over to the local Feature1:

image

Time to merge with the master then.

  • $ git checkout master
  • $ git pull origin master
  • $ git checkout Feature1
  • $ git merge master

 

In other words, checkout the master and bring over remote changes to the local. Checkout Feature1 and merge in the master.

John can now push Feature1 to the remote:

$ git push origin Feature1

Finally, John can go to DevOps and run the “Build and Test” pipeline on the Feature1 branch to see how the automated regression tests work out on the merged managed solution:

image

Once the job completes, John should definitely check whether the tests passed. They did this time:

image

image

And, just to give himself a bit of extra peace of mind, he can also go to the TestFeature1 instance to see that the managed solution has been installed, and, also, that NewEntity is there:

image

image

What’s left? Ah, yes… John still needs to push his changes to the master branch.

So:

  • $ git checkout master
  • $ git pull origin master
  • $ git merge Feature1
  • $ git push origin master

 

John's “New Entity” is on the master branch now, and the “Build and Test” pipeline has kicked in automatically since there were changes committed to the master branch:

image

That pipeline is now installing the managed solution (from the master branch) into the TestMaster environment.

That takes a little while, but, after a few minutes, John can confirm (just like he did previously with TestFeature1) that New Entity is in the TestMaster now:

image

And the tests have passed:

image

Actually, as a result of this last “Build and Test” run, since it ran on the master branch, two solution files were created and published as artifacts:

image

They can now be used for the QA/UAT/Prod.

John can now move on to his next assignment, but I wanted to summarize what has happened so far:

image

As a takeaway so far (before we get to what Debbie has to do now), I need to emphasize a few things:

  • John certainly had to be familiar with Git. It would be difficult for him to go through the steps above without knowing what git can do, how it can do it, what the branches are, etc.
  • He was also familiar with EasyRepro, which is why he could actually create that additional test for the feature he was working on

 

Still, as a result of all the above, John was actually able to essentially bring his changes to the TestMaster instance using git merge, DevOps pipelines, and automated testing. Which means his CI/CD process is much more mature than what I, personally, used to have on most of my projects.

Let's see how it works out for Debbie (she is on the Feature2 branch, and she still needs to add a new field to the Tag entity and to make a change in the related web resource).


CI/CD for PowerPlatform, round #3

 

In two of my recent posts, I tried approaching the CI/CD problem for Dynamics, but, in the end, I was obviously defeated by the complexity of both approaches. Essentially, they were both artificial since both assumed that we can't use source control merge.

If you are wondering what those two attempts were about, have a look at these posts:

https://www.itaintboring.com/dynamics-crm/team-development-for-powerapps/

https://www.itaintboring.com/dynamics/power-apps-alm-with-git-theory/

Still, I really think neither of those models is viable, which is unfortunate, of course.

This is not over yet – I'm still kicking here, so here goes round #3.

image

The way I see it, we do need source control merge. Which might not be easy to do considering that we are talking about merging XML files, but I don't think there is any other way. If we can't merge (automatically, manually, or semi-automatically), the whole CI/CD model starts breaking apart.

Of course the problem with XML merge (whether it’s manual or whether it’s done through the source control) is that whoever is doing it will need to understand what they are doing. Which really means they need to understand the structure of that XML.

And then, of course, there is that long-standing concept of “manual editing of customizations.xml is not supported”.

By the way, I’m assuming you are familiar with the solution packager:

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/compress-extract-solution-file-solutionpackager
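
Just as a quick refresher, the extract and pack operations look roughly like this (the paths and the /packagetype value below are just examples):

SolutionPackager.exe /action:Extract /zipfile:ContactManagement.zip /folder:.\ContactManagement /packagetype:Both
SolutionPackager.exe /action:Pack /zipfile:ContactManagement_repacked.zip /folder:.\ContactManagement /packagetype:Both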

So, to start with, I am going to cheat here.

Manual editing might not be quite supported, but how would anyone know that I’ve edited a file manually if that file has been imported into the Dynamics/PowerPlatform instance?

In other words, imagine that we’ve gone through the automated or manual merge, packaged the solution, and imported that solution into the CDS instance:

image

What do we have as a result?

We have a solution file that can be imported into the CDS instance, and, so, from the PowerPlatform standpoint it’s an absolutely valid file.

How do we know that the merge went well from the functional standpoint? The only way to prove it would be to look at our customizations in the CDS instance and see if the functionality has not changed. Why would it change? Well, we are talking about XML merge, so who knows. Maybe the order of the form tabs has changed, maybe a section has become invisible, maybe we’ve just managed to remove a field from the form somehow…

Therefore, when I wrote that I am going to cheat, here is what I meant:

  • I am going to use XML merge and assume it’s “supported” if solution import works
  • In order to cover that regression question, I am going to test the result of the merge by using a UI testing framework. Since we are talking about PowerPlatform/Dynamics, I’ll be using EasyRepro

 

Finally, I am going to assume that the statements above are included in my “definition of done” (in SCRUM terms). In other words, as long as the solution import works fine and the result passes our regression tests, we can safely release that solution to QA.

With that in mind, let’s see if we can build out this process!

The scenario I want to cover is:

There is a request to implement a few changes in the solution. Specifically, we need to create a new entity, and we also need to add a field to an existing entity (and to a specific form). Once the field is added, we need to update an existing JavaScript web resource so that an alert is displayed if “test” is entered into that new field.

To complicate thing, let’s say there are two developers on the team. The first one will be creating a new entity, and the other one will be adding a field and updating a script.

At the moment, the unpacked version of our ContactManagement solution is stored in the source control. There are a few environments created for this exercise – for each of those environments there is a corresponding service connection in DevOps:

image

The first developer will be using DevFeature1 environment for Development, and TestFeature1 environment for automated testing.

The second developer will be using DevFeature2 environment for development and TestFeature2 for automated testing.

Master branch will be tested in the TestMaster environment. Once the changes are ready for QA, they will be deployed in the QA environment.

Of course, all the above will be happening in Azure DevOps, so there will be a Git repository, too.

To make the developers' lives easier, there will be 3 pipelines in DevOps:

  • Export and Unpack – this one will export the solution from the instance, unpack it, and store it in the source control
  • Build and Test – this one will package the solution back into a zip file, import it into the test environment as a managed solution, and run the EasyRepro tests (there is a rough sketch of this one right after the list). It will run automatically whenever a commit happens on the branch
  • Prepare Dev – similar to “Build and Test”, except that it will import an unmanaged solution into the dev environment and won't run the tests

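By the way, if “Build and Test” were scripted by hand, it would boil down to something like the sketch below. I'm assuming the community Microsoft.Xrm.Data.PowerShell module for the import step, and all the names/urls are just examples:

SolutionPackager.exe /action:Pack /zipfile:ContactManagement_managed.zip /folder:.\ContactManagement /packagetype:Managed
$conn = Connect-CrmOnline -ServerUrl "https://testfeature1.crm.dynamics.com" -Credential (Get-Credential)
Import-CrmSolution -conn $conn -SolutionFilePath ".\ContactManagement_managed.zip"
vstest.console.exe .\Tests\bin\Debug\ContactManagement.Tests.dll /Settings:test.runsettings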
 

Whenever a task within one of those pipelines needs a CDS connection, it will use the branch name to identify the connection. For example, the task below will use the DevFeature1 connection whenever the pipeline is running on the Feature1 branch:

image
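
If you are curious how that mapping can be scripted, below is a minimal sketch of an inline PowerShell step (the variable and connection names are hypothetical):

# BUILD_SOURCEBRANCHNAME is set by the DevOps agent, e.g. "Feature1" or "master"
$connectionName = "Dev" + $env:BUILD_SOURCEBRANCHNAME
# Make the value available to the subsequent tasks in the pipeline
Write-Host "##vso[task.setvariable variable=CdsConnection]$connectionName"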

There is something to keep in mind. Since we will need an unmanaged solution in the development environment, and since there is no task that can reset an environment to a clean state yet, each developer will need to manually reset the corresponding dev environment. That will likely involve 3 steps (there is a rough sketch of the first two right after the list):

  • Delete the environment
  • Create a new one and update / create a connection in devops
  • Use “Prepare Dev” pipeline on the correct branch to prepare the environment
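
The first two steps could look roughly like this with the admin cmdlets (the display name, location, and SKU below are just examples, so double-check the parameters with Get-Help before running this):

$old = Get-AdminPowerAppEnvironment | Where-Object { $_.DisplayName -eq "DevFeature1" }
Remove-AdminPowerAppEnvironment -EnvironmentName $old.EnvironmentName
New-AdminPowerAppEnvironment -DisplayName "DevFeature1" -LocationName "unitedstates" -EnvironmentSku Trial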

 

So, let's say both developers have created new dev/test environments, all the connections are ready, and the extracted/unpacked solution is in the source control. Everyone is ready to go, but Developer #1 (who will be adding the new entity) goes first. Actually, let's call him John. Just so the other one is called Debbie.

Assuming the repository has already been cloned locally, let's pull the master and create a new branch:

  • $ git pull origin master
  • $ git checkout -b Feature1

At this point John has a full copy of the master repository in the Feature1 branch. However, this solution includes EasyRepro tests. EasyRepro, in turn, requires a connection string for the CDS instance. Since every branch will have its own test environment, John needs to update the connection string for the Feature1 branch. So he opens the test.runsettings file and updates the connection parameters:

image

Now it's time to push all these changes back to the remote repository so that John can use a pipeline to prepare his own dev instance.

  • $ git add .
  • $ git commit -m "Feature1 branch init"
  • $ git push origin Feature1

There is now a new branch in the repo:

image

Remember that, so far, John has not imported the ContactManagement solution into the dev environment, so he only has a few default sample solutions in that instance:

image

So, John goes to the list of pipelines and triggers “Prepare Dev” on the Feature1 branch:

image

As the job starts running, it checks out the local version of the Feature1 branch on the build agent, which is important since that's exactly the branch John wants to work on:

image

It takes a little while, since the pipeline has to repackage the ContactManagement solution from the source control and import it into the DevFeature1 instance. In a few minutes, the pipeline completes:

image

All the tasks completed successfully, so John opens the DevFeature1 instance in the browser to verify that the ContactManagement solution has been deployed, and, yes, it's there:

image

And it’s unmanaged, which is exactly what we needed.

But what about Debbie? Just as John started working on his sprint task, Debbie needs to do exactly the same, but she’ll be doing it on the Feature2 branch.

  • $ git checkout master
  • $ git pull origin master
  • $ git checkout -b Feature2
  • Prepare dev and test instances for Feature2
  • Update test.runsettings with the TestFeature2 connection settings
  • $ git add .
  • $ git commit -m "Feature2 branch init"
  • $ git push origin Feature2

 

At this point the sources are ready, but she does not have the ContactManagement solution in the DevFeature2 instance yet:

image

She starts the same pipeline, but on the Feature2 branch this time:

image

  • A couple of minutes later, she has the ContactManagement solution deployed into her Dev instance:

image

 

Just to recap, what has happened so far?

John and Debbie both have completed the following 3 steps:

image

They can now start making changes in their dev instances, which will be a topic of the next blog post.


Power Apps ALM with Git (theory)

 

I've definitely been struggling to figure out any kind of sane “merge” process for the configuration changes, so I figured I'd just try to approach ALM differently using the good old “master configuration” idea (http://gonzaloruizcrm.blogspot.com/2012/01/setting-up-your-development-environment.html)

Here is what I came up with so far:

image

 

  • There are two repositories in Git: one for the code, and another one for the unpacked solution. Why two repos? We can use merge in the code repository, but we can't, really, use merge in the solution repository. Instead, it'll have to be “push --force” to the master branch in that repo so the files are always updated (not merged) with whatever comes from the Dev instance (there is a quick sketch of that right after this list). Am I overthinking it?
  • Whenever there is a new feature to develop, we should apply configuration changes in the main DEV instance directly. The caveat is that they might be propagated to the QA/UAT/PROD before the feature is 100% ready, so we should try to isolate those changes through new views/forms/applications. Which we can, eventually, delete (And, since we are using managed solutions in the QA/UAT/PROD, “delete” will propagate to those environments through the managed solution)
  • At some point, once we are satisfied with the configuration, we can push (force) it to the solution repo. Then we can use a devops pipeline to create a feature Dev instance from Git. We will also need to create a code branch
  • In that feature Dev instance, we’ll only be developing code (on the feature code branch)
  • Once the code is ready, we will merge it into the master branch, refresh the Feature Dev instance from the main Dev instance, register the required SDK steps and event handlers in the main DEV instance, and update the solution repo. At this point the feature might be fully ready, or we may have to repeat the process again (maybe a few times)
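
As far as the solution repo goes, that “push (force)” mentioned above is really just this (assuming the unpacked files have already been refreshed from the main Dev instance):

  • $ git add .
  • $ git commit -m "Refresh unpacked solution from Dev"
  • $ git push --force origin master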

 

We might utilize a few devops pipelines there:

  • One pipeline to create an instance, deploy a solution, and populate sample data in the Feature Dev instance (to use when we are starting to work on the code for the feature)
  • Another pipeline to push (force) unpacked managed/unmanaged DEV instance solution to GIT. This one might be triggered automatically whenever “publishall” event happens. Might try using a plugin to kick off the build
  • Another pipeline to do smoke tests with EasyRepro in the specified environment (might run smoke tests in Feature Dev, but might also run them in the main Dev)
  • And yet another pipeline to deploy managed solution to the specified environment (this one might be a gated release pipeline if I understand those correctly)

Team development for PowerApps

 

Team development for Dynamics has always been a somewhat vague topic.

To start with, it’s usually recommended to use SolutionPackager – presumably, that helps with the source control since you can unpack solution files, then pack them, then observe how individual components have changed from one commit to another. But what does it really give you? Even Microsoft itself admits that there is this simple limitation:

image

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/use-source-control-solution-files

In that sense you might, of course, use git to merge various versions of the solution component files, but that would not be different from manual editing which, as per the screenshot above, is only partially supported.

The only real merge solution we have (at least as of now) is deploying our changes to the target environment through a solution file or, possibly, re-applying them manually in that environment using a solution designer.

That might be less of a problem if all Dynamics/PowerApps source artefacts were stored in CDS. But, of course, they are not. Plugin source code and TypeScript sources for various JavaScript web resources are supposed to be stored in the source control. What's more, the solution itself is better stored in the source control, too, just so we don't lose everything when somebody accidentally deletes the PowerApps environment.

So what do we do? And why do we need to do anything?

Apparently, developers are used to the best development practices, and, so, it's no wonder they want to utilize the same familiar Git workflows with Dynamics/PowerApps.

I am not sure I can really suggest anything magical here, but, it seems, we still need a way to somehow incorporate solutions into the Git workflow which looks like this:

image


https://www.quora.com/What-is-the-difference-between-master-and-develop-branch-in-Git (although, I guess the original source is not Quora)

Come to think of it, the only idea I really have when looking at this diagram is:

  • Creating a branch in Git would be an equivalent of creating a copy environment in Dynamics/CDS
  • Merging in Git would be an equivalent of bringing a transport solution and/or re-applying configuration changes from the corresponding feature development environment to the higher “branch” environment

 

That introduces a bunch of manual steps along the way, of course. Besides, creating a new environment in PowerApps is not free – previously, we would have to pay for each new instance. If your subscription is storage-based these days, then, at the very least, you need to ensure you have enough additional storage in your subscription.

And there is yet another caveat – depending on what it is you need to develop on the “feature branch”, you may also need some third-party solutions in the corresponding CDS environment, and those solutions may require additional licenses, too.

At the very least, we need two environments:

  • Production (logically mapped to the master branch in Git)
  • Development (logically mapped to the development branch in Git)

 

When it comes to feature development, there might be two scenarios:

  • We may be able to create a separate CDS environment for feature development, in which case we should also create a source code branch
  • We may not be able to create a separate CDS environment for feature development, in which case we should not be creating a source code branch

 

Altogether, the whole workflow might look like this:

image

We might create a few more branches for QA and UAT – in that case QA, for example, would be in place of Master on the diagram above. From QA to UAT to Master it would be the same force push followed by build and deploy.

Of course there is one remaining step here, which is that I need to build out a working example, probably in devops…

PS. On the other hand, if somebody out there reading this post has figured out how to do “merge” of the unpacked solution components in the source control without entering the “unsupported area”, maybe you could share the steps/process. That would be awesome.


Public Preview of PowerApps Build Tools

 

Recently, there was an interesting announcement from the Power Apps Team:

image

https://powerapps.microsoft.com/en-us/blog/automate-your-application-lifecycle-management-alm-with-powerapps-build-tools-preview/

Before I continue, I wanted to quickly summarize the list of Azure DevOps tasks available in this release. Here it goes:

  • PowerApps Tools Installer
  • PowerApps Import Solution
  • PowerApps Export Solution
  • PowerApps Unpack Solution
  • PowerApps Pack Solution
  • PowerApps Set Solution Version
  • PowerApps Deploy Package
  • PowerApps Create Environment
  • PowerApps Delete Environment
  • PowerApps Copy Environment
  • PowerApps Publish Customizations

This looks interesting, yet I can't help but notice that Wael Hamze has had most of those tasks in his Build Tools for a while now:

https://marketplace.visualstudio.com/items?itemName=WaelHamze.xrm-ci-framework-build-tasks

Actually, I’ve seen a lot of different tools and scripts which were all meant to facilitate automation.

How about Scott Durow's SparkleXrm (https://github.com/scottdurow/SparkleXrm)?

Even I tried a few things along the way (https://www.itaintboring.com/tag/ezchange/, https://www.itaintboring.com/dynamics-crm/a-powershell-script-to-importexport-solutions-and-data/)

So, at first glance, those tasks released by the PowerApps team might not look that impressive.

But, if that’s what you are thinking, you might be missing the importance of this release.

Recently, the PowerApps team has taken a few steps which might all be indicating that the team is getting serious about “healthy ALM”:

  • Solution Lifecycle Management whitepaper was published in January
  • Solution history viewer was added to PowerApps/Dynamics
  • Managed solutions have become “highly recommended” for production (try exporting a solution from the PowerApps admin portal, and you’ll see what I’m talking about)

And there were a few other developments: Flows and Canvas Apps became solution-aware, the solution packager was updated to support the most recent technologies (Flows, Canvas Apps, PCF), etc.

The tooling, however, was missing. Of course, there has always been third-party tooling, but I can see how somebody on the PowerApps team decided that it's time to create a solid foundation for the ALM story they are going to build, and there can be no such foundation without suitable internal tooling.

As it is now, that tooling might not, really, be that superior to what the community has already developed in various forms by this time. But the importance of it is that PowerApps team is demonstrating that they are taking this whole ALM thing seriously, and they’ve actually stated pretty much that in the release announcement:

“This initial release is the first step towards a more comprehensive, yet simplified story around ALM for PowerApps. A story we will continue to augment by adding features based on feedback, but equally important – by continuing to invest in more training and documentation with prescriptive guidance. In other words, our goal is to enable our customers and partners to focus more on innovation and building beautiful, innovative apps and less time on either figuring out how to automate or perform daunting manual tasks that are better done automated.”

So… I’m eager to see how it’s going to evolve – it’s definitely been long overdue, and I’m hoping we’ll see more ALM from the PowerApps team soon!

PS. There is a link buried in that announcement that you should definitely read through as well: https://pabuildtools.blob.core.windows.net/docs/PowerApps%20Build%20Tools.htm  Open that page, scroll down almost to the bottom. There will be a “Tutorial”, and, right at the start of the tutorial, you’ll see a link to the hands-on lab. Make sure to download it! There is a lot of interesting stuff there which will give you a pretty good idea of where ALM is going for PowerApps.

When the error message is lost in translations

Every now and then, I see this kind of error message in the UCI:

image

It may seem useful, but, when looking at the log file, all I can say is that, well, something has happened, since all I can see in the downloaded log file is a bunch of call stack lines similar to the one below:

at Microsoft.Crm.Extensibility.OrganizationSdkServiceInternal.Update(Entity entity, InvocationContext invocationContext, CallerOriginToken callerOriginToken, WebServiceType serviceType, Boolean checkAdminMode, Boolean checkForOptimisticConcurrency, Dictionary`2 optionalParameters)

One trick I learned about those errors in the past is that switching to the classic UI helps. Sometimes. The error may look more useful there. Somehow, though, I was not able to reproduce the error above in the classic UI this time around, so… here is another trick if you run into this problem:

  • Open browser dev tools
  • Reproduce the error
  • Switch to the “Network” tab and look for the errors

There is a chance you’ll find a request that errored out, and, if you look at it, you might actually see the error message:

image

That said, I think it’s been getting better lately since there are errors that will show up correctly in the UCI. Still, sometimes the errors seem to be literally lost in translations between the server and the error dialog on the browser side, so the trick above might help you get to the source of the problem faster in such cases.