Monthly Archives: August 2019

New licensing is just around the corner now – more details keep coming out

Microsoft has just published more details on the upcoming licensing changes – don’t miss this update:

I know there are different ways to look at it – some of us are working more on the model-driven applications side, yet others are using CanvasApps and/or Flows. What’s coming might affect us differently. However, looking at it from my point of view (which is, mostly, model-driven apps), I think it’s worth keeping in mind a few things:

  • We are getting a cheaper “introductory” plan: $10 per app plan. It can be used for both model-driven and canvas apps
  • Users licensed for Dynamics won’t be able to work in the Core CDS environments. I am not exactly sure why it matters, since they can still work in the Dynamics CDS environments, where PowerApps-licensed users can work as well. In other words, we just need to make sure those Power Apps (model-driven or canvas) are deployed in the Dynamics CDS environments to allow our Dynamics-licensed users to work with them. I have a feeling this is a bit of a license hack, so it might not work this way forever
  • It’s probably more important, from the Dynamics-licensed users’ perspective, that they will be losing general-purpose Flow use rights
  • Embedded Canvas Apps will not be counted towards the limit
  • Irrespective of the “app licensing”, there will be API limits per user. This is what actually bothers me: I am not sure there is an easy way to estimate usage at the moment, and, while this may or may not affect everyday usage, it will certainly have to be accounted for on data-migration projects
  • That API limit above will affect all types of accounts (including application user accounts and non-interactive accounts)
  • Building a portal for Power Apps is becoming an exercise in cost estimates. On the one hand, we can get a portal starting at $2 per external login. The way I understand it, API calls from the portal are not counted in this case. On the other hand, we can build a custom portal, but we will have to count those API calls now

All that said, I think this licensing change should have been expected – in the cloud, we always pay per use, so now that PowerApps licensing will be clearly accounting for utilization, the whole licensing model will probably start becoming more straightforward. Although, we might also have to rethink some common scenarios we got used to (for example: data integration, data migration, SharePoint permissions replication, etc.)

DevOps for Dynamics/PowerApps – Q & A


In the last several weeks, I wrote a few blog posts on the DevOps topic. However, I also meant to conclude that series with a dedicated Q & A post, so here it goes…

1. Why did I use PowerApps build tools?

Quite frankly, it was mostly out of curiosity. There are other community tools available out there, but I was curious to see how the “native” tools would measure up. It turned out they can be very useful; however, they are missing a bunch of parameters for solution import and/or for the solution packager:

  • we can’t “stage for upgrade” our managed solutions while importing
  • we also can’t choose a mapping file for the solutionpackager
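As a hypothetical workaround for that second point, SolutionPackager can still be invoked directly (it ships with the CoreTools NuGet package), with the /map argument supplied on the command line. The file names below are made up; the script only prints the command line, since in a real pipeline step you would execute it against your actual solution zip:

```shell
# Print the SolutionPackager command line we would run in a pipeline step.
# /map points at the mapping file the build tools currently don't let us pass.
cmd='SolutionPackager.exe /action:Extract /zipfile:MySolution.zip /folder:MySolution /map:mapping.xml'
echo "$cmd"
```

That way the mapped files (plugin assemblies, web resources) keep pointing at their build output locations when the solution is unpacked.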

2. What are the caveats of XML merge?

This is a painful topic. We could probably go on and on about how merging XML files can be confusing and challenging – the only way to mitigate this is to merge often, but even that is not always possible.

However, that issue aside, there is one very specific problem which you might encounter.

Imagine that you have a dev branch where you’ve been doing some work on the solution, and you need to merge the changes that occurred on the master branch into your dev branch. Imagine that, as part of those changes, one or more of the solution components have been removed. For example, another developer might have deleted a workflow that’s no longer required.

In this scenario, you will need to also delete the files which are no longer needed from the branch you’ll be merging into. So you might try to use the following git command to ensure that, when merging, git would prioritize the “master”:

git merge --squash -X theirs master

The problem with “theirs” is that git might not delete some of the files you have in your local branch. It would still merge the XML, so the XML wouldn’t be referencing the workflow file anymore, but the file itself would still be sitting there in the solution folders.

So, when you try to re-package your solution with the SolutionPackager, that operation will fail.
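Here is a minimal reproduction of that behavior in a scratch repository (file names are made up). The dev branch modifies a workflow file which master deletes; that is a modify/delete conflict, and “-X theirs” does not auto-resolve it, so the file survives the merge:

```shell
# Scratch repo: master deletes a workflow file that dev has modified.
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
echo '<xaml/>' > workflow.xaml
git add . && git commit -qm 'add workflow'
git checkout -q -b dev
echo '<xaml>changed</xaml>' > workflow.xaml
git commit -qam 'dev modifies the workflow'
git checkout -q "$base"
git rm -q workflow.xaml && git commit -qm 'delete the workflow'
git checkout -q dev
# modify/delete is a conflict that -X theirs does not resolve automatically:
git merge -X theirs "$base" || true
test -f workflow.xaml && echo 'workflow.xaml is still in the working tree'
```

The merge reports a CONFLICT (modify/delete), and the file stays behind, which is exactly what trips up the packaging step later.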

If this happens, you may need to actually delete the file/folder, and, then, checkout that folder from the master branch into your local branch:

git rm -r <folder>

git checkout master <folder>

For more details on this issue, have a look at the discussion below, for example:

3. How do we store passwords for EasyRepro?

EasyRepro is not supposed to work with connection strings – it’s a UI testing framework, so it needs to know the user name and password in order to “type them into” the login dialog. Which means those parameters may have to be stored somewhere as plain text. It’s not the end of the world, but you may want to limit the test user account’s permissions in your environment, since that password won’t be too secure anymore.
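One hedged option (the variable and parameter names below are my own, not something EasyRepro mandates): keep the credentials in pipeline secret variables and materialize them only at test run time, for example into a .runsettings file that never gets committed:

```shell
# In a real pipeline, EASYREPRO_PASSWORD would map to a secret variable,
# not to a literal value like here.
export EASYREPRO_PASSWORD='demo-only-password'
cat > easyrepro.runsettings <<EOF
<RunSettings>
  <TestRunParameters>
    <Parameter name="OnlineUsername" value="easyrepro.tester@contoso.example" />
    <Parameter name="OnlinePassword" value="$EASYREPRO_PASSWORD" />
  </TestRunParameters>
</RunSettings>
EOF
grep OnlinePassword easyrepro.runsettings
```

The password still ends up in clear text on the build agent for the duration of the run, which is why limiting the test account’s permissions matters either way.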

4. What about the unit tests for various plugins etc?

I have not used them in my demo project. If you want to explore the options, Fake Xrm Easy might be a good place to start.

5. What about the “configuration data”?

Again, this is something I did not do in the demo project; however, things are starting to get interesting in this area. Utilizing the Configuration Migration tool used to be a semi-manual process. However, there is a PowerShell module now:

Eventually, this is going to be a “go to” option.

For the time being, since it’s only the first release (and I still have to try it…. maybe it’s what I’ll do next), you may still find other options useful. For example, I’ve been using the approach described here on one of the client projects.


6. What about more complicated scenarios where patching is required?

There are situations where that “simple pipeline” approach does not work. After all, as the solution keeps growing, it may become close to impossible to keep moving the complete solution between different environments every time, and, instead, we may have to start splitting that solution into smaller ones and/or we may want to start using solution patches. In either case, there are a few problems with that, and I am not even sure what the solution would be here:

  • Even though the solution packager supports patches, I am not quite certain what we should be merging in source control, and how
  • The pipelines have to be re-designed to understand this patching process


Use AAD Groups to set up your PowerApps / Dynamics teams


Have you tried creating a team lately? If not, give it a try, and you may see a couple of team types that were not there before and that might actually get your AD admins excited:


Now imagine that there is a group in AAD which you want to use to set up a team in PowerApps. Previously, this would be a relatively complicated task that would require some kind of integration to be set up. Now you can do it out of the box. Set up the teams in PowerApps, assign security roles, and let AD/Office admins manage the membership outside of Dynamics.

Here is an AAD group:


Here is a team in PowerApps that’s using the ID of the AAD group above:



Here is something important that I did not realize at first: any group membership maintenance done in Azure AD will not be reflected on the team until the next time the team member logs in, or until the system refreshes the cache (after 8 hours of continuous login).

But, once I’ve logged out and logged in under each of the user accounts above, I can see them added to the team:


And, once that happens, the users will get their permissions derived from the team. Which means that, the next time you need to create a Sales Person user account in Dynamics, you can probably just add that user to the corresponding AD group.

PowerPlatform Environments: Order vs Chaos

It seems that, in the effort to promote PowerApps, Microsoft has created a situation where there had to be a way to bring this fast-spreading PowerApps movement under control – otherwise, everyone with the proper license can create environments. So, if there are 100 licensed users, we may end up with 100 trial environments showing up on the list, and with a lot of production environments adding up to the total storage consumption. This might be a nightmare from the management standpoint if you are not prepared.

Of course, if we found ourselves in this situation, that would mean our users are actively exploring the platform, so it might be just what we need. However, this is where it becomes an existential question of what it is you prefer to live with: do you want to have more order by introducing some restrictions, or are you ok with having more chaos by allowing your users to explore freely?

The choice is yours, since you can do it one way or the other, and here is how:


This setting, I believe, applies to the production environments, since they would be taking up the space. But, if you also want to disable trials, there is a PowerShell command for that:


PCF Control Manifest file setting that’s easy to ignore


A little while ago, I noticed that my treeview checkbox control had stopped working, so I was eager to see what was going on. I finally got to look into it today, and it turned out there is a setting in the control manifest file that I had overlooked before.

Normally, when creating a web resource, we would be using the Xrm client-side library. With the PCF controls, it seems the idea is that we should be using context.webAPI instead:

Mind you, not everything we may need is available there, so, while creating the control, I ended up using a mix of context.webAPI where I could and Xrm where I could not.

It was working fine until it broke, though I am not sure when exactly it happened. Either way, when looking at it in the Chrome dev tools earlier today, I noticed that webAPI was not properly initialized for some reason:

Fast forward: it turned out that, if we want to use webAPI, we need to enable the related feature in the control manifest, as per this page:

    <feature-usage>
        <uses-feature name="WebAPI" required="true" />
    </feature-usage>

And, of course, once I had added the WebAPI feature to the manifest and rebuilt the control, it all started to work again. I guess there was an update at some point, but this is what previews are for.

What other features are available, though? To see that, go to the page below:

At the moment of writing this post, here is the list of features that can be added to the manifest:

    <uses-feature name="Device.captureAudio" required="true" />
    <uses-feature name="Device.captureImage" required="true" />
    <uses-feature name="Device.captureVideo" required="true" />
    <uses-feature name="Device.getBarcodeValue" required="true" />
    <uses-feature name="Device.getCurrentPosition" required="true" />
    <uses-feature name="Device.pickFile" required="true" />
    <uses-feature name="Utility" required="true" />
    <uses-feature name="WebAPI" required="true" />

Flow and workflow permissions in CDS


Funny how you hope you know stuff, and, then, you discover something very basic that’s not working the way you’d think it would.

That’s my life, though.

I was having a hard time trying to figure out why a user with Sales Manager permissions could use a link to access a Flow I had created. And not only to access it, but, also, to modify it and to save those changes.

No, that flow would not show up under flows:



However, if that user knew the link to the flow, they would be able to open the flow and edit it:


A little weird, you’d think? Well…

My Sales Manager user account had only “Sales Manager” role assigned to it. So, I tried something else – I went to the environment to have a look at the workflows under that user account, and, to my surprise, I could actually activate and deactivate pretty much any of the classic workflows:


Turned out it’s all about how the role is set up:


The Sales Manager role allows “write” access to the process records (which are also “workflows”, and which are also “Flows”) in the user’s business unit.

In this environment, there is only one business unit, so, even though the workflows and flows are created by a system admin and/or deployed through solutions, a lot of non-admin users might end up having access to those flows just because of their out-of-the-box permissions.

How do you mitigate this?

There seem to be a few options:

  • Tweak your security roles so that BU-level “write” on the workflows is not allowed. For example, here is how the Salesperson role is dealing with this:
  • Although, maybe your users want to have access to each other’s workflows/flows, in which case you might create a child BU and move all non-admin/non-customizer users into that BU instead. Once they are there, they can still share workflows in their BU, but they won’t be able to update system workflows anymore


Either of those would work for both Flows and Workflows.

CI/CD for PowerPlatform: Developing Feature2


Almost a month has passed since the previous post on the DevOps topic, so, the imaginary “Debbie” developer has left the project, and, it seems, I have to finish development of that second feature myself… Oh, well. Let’s do it then!

(Tip: if you have no idea what I am talking about above, have a look at the previous post first)

1. Run Prepare Dev to prepare the dev environment


2. Review the environment to make sure unmanaged solution is there


3. Add new field to the Tag entity form


4. Run Export and Unpack pipeline on the Feature2 branch

This is to get those changes above pushed to the Feature2 branch

5. Make sure I am on Feature2 branch in the local repository

git checkout Feature2

I got some conflicts, so I deleted my out-of-sync local Feature2 branch first:

git checkout master
git branch -D Feature2
git checkout Feature2
git pull origin Feature2

6. Update the script

At the moment of writing, it seems PowerApps Build Tools do not support solution packager map files, so, for the JS files and plugins (which can be built separately and need to be mapped), it’s done a little differently. There is a PowerShell script that actually copies those files from their original location to where they should be in the unpacked solution.

As for the script I need to modify, it’s in the Code folder:



The way that script gets added to the solution as a webresource is through the other script that runs in the build pipelines:


So, if I had to add another web resource, I would do this:

  • Open solution in PowerApps
  • Add a web resource
  • Run Export and Unpack pipeline on the branch
  • Pull changes to the local repo
  • Figure out where the source of my new web resource would be (could be added to the same Code subfolder above)
  • Update replacefiles.ps1 script to have one more “Copy-Item” line for this new web resource
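That last step is the only unusual one. The actual project script is PowerShell, but the copy amounts to the following (the folder layout and the ita_tagform.js name are illustrative, not the real repo paths):

```shell
# Simulate the repo layout, then do what replacefiles.ps1 does for one file:
# copy the built web resource over its unpacked-solution counterpart.
mkdir -p Code Solution/WebResources
echo '// built output' > Code/tagform.js
cp Code/tagform.js Solution/WebResources/ita_tagform.js
ls Solution/WebResources
```

In the PowerShell original, each mapped file is just one more Copy-Item line, which is why adding a web resource means adding a line to the script.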


Since I am not adding a script now, but, instead, I need to update the script that’s there already, I’ll just update existing tagform.js:


7. Commit and push the change to Feature2

git add .
git commit -m "Updated tagform script"
git push origin Feature2

8. Run Prepare Dev build pipeline on Feature2 branch to deploy updated script

This is similar to step #1

Note: the previous two steps could be done differently. I could even go to the solution in PowerApps and update the script there if I did not need/want to maintain the mappings, for example.

9. Now that the script is there, I can attach the event handler


10. Publish and test


11. Run Export and Unpack pipeline on the Feature2 branch to get updated solution files in the repository

12. Pull changes to the local Feature2 branch

git checkout Feature2
git pull origin Feature2

13. Merge changes from Master

git checkout master
git pull origin master
git checkout Feature2
git merge -X theirs master
git push origin Feature2

14. Retest everything

First, run Prepare Dev pipeline on the Feature2 branch and review Feature 2 dev manually

At this point, you should actually see New Entity from Feature1 in the Feature 2 dev environment:


Then, run Build and Test pipeline on the Feature2 branch and ensure all existing tests have passed.

15. Finally, merge into Master and push the changes

git checkout master
git merge -X theirs Feature2
git push origin master

16. Build and Test pipeline will be triggered automatically on the master branch – review the results

Ensure automated tests have passed

Go to the TestMaster environment and do whatever manual testing is needed



Filtered N:N lookup


If you have ever tried using the out-of-the-box N:N relationships, you may have noticed that we cannot filter the lookups when adding existing items to the relationship subgrids.

In other words, imagine you have 3 entities:

  • Main entity
  • Complaint entity
  • Finding entity


Main entity is the parent entity for the other two. However, every complaint may also be linked to multiple findings, and vice versa… Although, that linkage should only be done within the main entity – if there are two main records, it should only be possible to link complaints and findings related to the same main record.

Which is not how it works out of the box. I have two main records below; the first one has two complaints and two findings, and the second one has one complaint and one finding:




There is an N:N between Findings and Complaints, so what if I wanted to link Complaint #1 on the first main record to both of the findings for the first main record?

That’s easy – open the complaint, open related findings, click “add existing” and…


Wait a second, why are there 3 findings?

Let’s try it the other way around – let’s open Finding #1 (first), and try adding complaints:


Only two records this time and both are related to the correct main record?

The trick is that there is a custom script to filter complaints. In essence, that script has been around for a while:

It just did not seem to work “as is” in the UCI, so there is an updated version here:

All the registration steps are, mostly, the same. There are a couple of adjustments, though:

You can use the same script for all N:N relationships, but, every time you introduce a new relationship, you need to update the function below to define the filters:


For every N:N relationship you want to start filtering, you will need to add one or two conditions there, since, in my example above, you may be adding findings to complaints or complaints to findings. It’s the same relationship, but the primary entity can be one or the other, and, depending on which primary entity it is, there will be different filters.
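Conceptually, each of those conditions boils down to a lookup-view fetch filter that restricts the candidate records to the same parent main record. The entity and attribute names below are made up for illustration:

```xml
<filter type="and">
  <!-- only offer findings that belong to the same parent main record -->
  <condition attribute="ita_mainid" operator="eq" value="{id of the parent main record}" />
</filter>
```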

When configuring the command in the Ribbon Workbench (have a look at the original post above), there is one additional parameter to fill in – the list of relationships for which you want the entity lookup to be filtered:


In the example above, it’s just one relationship. But it could be a comma-separated list of relationships if I wanted the complaint entity to be filtered for different N:N-s.

That’s about it… There is, also, a demo solution with those 3 entities (plus the script) which you can import to try it all out:

MFA, PowerApps, XrmTooling and XrmToolbox


If you are working in an online environment where authentication requirements have started to shift towards MFA, you might be noticing that tools like XrmToolBox (or even the SDK itself) are not always that MFA-friendly.

To begin with, MFA is always interactive – the whole purpose of multi-factor authentication is to ensure that you are who you are, not just somebody who managed to steal your username and password. Hence, there are additional verifications involved – be that an SMS message, an authenticator app on the phone, or, if you are that unlucky, a custom RSA token validation.

There are different ways to bypass the MFA.

If your organization is willing to relax security restrictions, you might get legacy authentication enabled, so you would be able to get away with authenticating the old way – by providing a login/password within the connection string. Having had some experience with this, I don’t think this solution is quite viable. Security groups within organizations will be cracking down on this approach, and, sooner or later, you may need something else.

Besides, MFA is not always Azure-based. In hybrid environments where authentication is done through on-premises ADFS, there could be other solutions deployed. To be fair, having to figure out how to connect XrmToolBox to an online org in this kind of environment is exactly why I ended up writing this blog post.

But the final explanation/solution is applicable to the other scenarios, too.

To be more specific, here is the scenario that confused XrmToolBox to the point of no return:


It was all working well when I was connecting to CDS in the browser, but, as far as XrmToolBox was concerned, somehow it just did not want to work with this pattern.

The remaining part of this post may include some inaccuracies – I am not a big specialist in OAuth etc, so some of this might be my interpretation. Anyway, how do we make everything work in the scenario above?

This is where we need to look at the concept of OAuth applications. Basically, the idea is that we can register an application in Azure AD and give that app permissions to use the Dynamics APIs:

This would be great, but, if we wanted to bypass all the 2FA above, we would have to, somehow, stop using our user account for authentication.

Which is why we might register a secret for our new Azure App. However, application secrets are not supported in the XrmTooling connection strings:

So, what was the point of registering an app, you may ask?

There is another option where we can use a certificate instead, and you may want to have a look at the following page at some point:

If you look at the samples there, here is how it all goes:


It’s a special AuthType (“Certificate”), and the whole set up process involves a few steps:

  • Registering an application in Azure AD
  • Uploading a certificate (I used one of those I had in the certificate store on my windows laptop. It does not even have to be your personal certificate)
  • Creating an application user in CDS
  • Creating a connection string for XrmToolBox


To register an app, you can follow one of the links above. Once the app is registered, you can upload the certificate – what you’ll see is a thumbprint, which you will need to use in the connection string. Your XrmTooling client, when connecting, will try to find that certificate on the local machine by the thumbprint, so it’s not as if you would be able to use the thumbprint (as a password) without the certificate.

While trying to make this work, I’ve uploaded a few certificates to my app, so here is what it looks like:


What’s that about the application user in CDS? I think I had heard about it before, I just never realized what its purpose was. However:

  • Application users are linked to the Azure applications
  • They do not require a license


How do you create one? In the CDS instance, go to Settings->Security->Users and make sure to choose “Application Users” view:


Surprisingly, you will actually be able to add a user from that view, and the system won’t be suggesting that you need to do it through the Office admin center instead. Adding such a user is a pretty straightforward process; you just need to make sure you are using the right form (Application User):


For the email and user name, use whatever you want. For the application ID, make sure to use the actual application ID from the Azure AD.

Don’t forget to assign permissions to that user (in my case, I figured I’d make that user a System Admin)

Once you have reached this point, the rest is simple.

Go to the XrmToolBox and start creating a new connection. Make sure to choose “Connection String” option:


Set up the connection string like this (use your certificate thumbprint and your application’s appid):
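For reference, a certificate-based XrmTooling connection string follows this general shape (replace the placeholders with your own values; the thumbprint must match a certificate installed on the machine running XrmToolBox):

```
AuthType=Certificate;url=https://yourorg.crm.dynamics.com;thumbprint=<certificate thumbprint>;ClientId=<application id from Azure AD>
```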


Click next, give that connection some name, and voila… You should be able to connect without the MFA under that special application user account now.