Author Archives: Alex Shlega

Have you noticed a change in the field mapping logic?

There seems to be new mapping behavior in my environment as of this morning – it could be a feature, or it could be a bug, but, either way, it’s something to be aware of.

Notice how, when creating a new record to populate a lookup, I’m getting one of the fields (“Feature Book”) populated automatically:


This is normally supposed to happen on the 1:N relationships (so, basically, in the subgrids) where we can define mappings. And I have that mapping defined in my test solution, but that’s a different relationship:


In the case of the lookup above (which is N:1), I could still define a mapping, but it would be from Author to Book:


So, apparently, since I have sort of a “circular” relationship between those two tables, the system is picking up on the “reverse” relationship mapping when I’m populating the lookup.

Now, that’s interesting, and it could be useful, but, it seems, I just can’t differentiate anymore between adding a record from the lookup and adding it from the subgrid (so I can’t easily say if the record being created is on the “N” side of the relationship or on the “1” side).
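When troubleshooting behavior like this, it can help to look at the mappings Dataverse actually has on record for the two relationships. Here is a minimal sketch of the Web API queries involved, assuming the standard entitymaps/attributemaps metadata entity sets; the org URL and the table names (new_author, new_book) are placeholders:

```javascript
// Build Dataverse Web API queries to inspect the attribute mappings defined
// between two tables. The entity set names (entitymaps, attributemaps) come
// from the standard metadata tables; everything else below is a placeholder.
function buildMappingQueries(orgUrl, sourceTable, targetTable) {
  const filter = `sourceentityname eq '${sourceTable}' and targetentityname eq '${targetTable}'`;
  return {
    // step 1: find the entity map between the two tables
    entityMaps: `${orgUrl}/api/data/v9.2/entitymaps?$filter=${encodeURIComponent(filter)}`,
    // step 2: list the column-level mappings for a given entity map id
    attributeMaps: (entityMapId) =>
      `${orgUrl}/api/data/v9.2/attributemaps?$filter=_entitymapid_value eq ${entityMapId}`
  };
}

const queries = buildMappingQueries("https://myorg.crm.dynamics.com", "new_author", "new_book");
console.log(queries.entityMaps);
```

Running both queries for the “forward” and the “reverse” relationships should show exactly which mapping the system is picking up.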

Performance insights for model-driven apps

With the performance insights feature now available in preview, I was wondering what I could possibly learn about one of my applications.

For the most part, what I found there is not necessarily actionable. There are a few informational messages, and there are a few warnings; however, there is one insight I got curious about:


So, cold page loads are slower than warm page loads, and they can be as much as 100% slower. The difference between cold and warm page loads is not necessarily that we would be opening our app for the first time on that day – it’s more about how we’d be opening it. As long as we are “starting” a new browser and/or using a new tab, the page load is considered “cold”. Otherwise, when we are using an existing browser window/tab, that’s a warm load:
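Just to put that figure in perspective, “100% slower” is a relative measure. A hypothetical helper to express a cold load as a percentage slowdown over a warm load (the millisecond values below are made up for illustration):

```javascript
// Express a cold page load time as a percentage slowdown relative to a warm
// load. Purely illustrative: the numbers are not real measurements.
function coldLoadPenaltyPct(coldMs, warmMs) {
  return Math.round(((coldMs - warmMs) / warmMs) * 100);
}

console.log(coldLoadPenaltyPct(4200, 2100)); // a 4.2s cold load is 100% slower than a 2.1s warm one
```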


That reminded me of the discussion I had with a colleague a couple of weeks ago. Both of us kept noticing slow load times in different applications, here and there, and those times might seem pretty significant sometimes. Unfortunately, I did not check performance insights at the time (and, besides, it might not even have been available back then), but now I am wondering if having client-side-only cold/warm page load details would be enough to troubleshoot, or if we might also need server-side “cold/warm” information there. In reality, it’s likely both of those pieces that we may need.

Either way, curious to see how far Microsoft can take this tool – looking forward to this feature going GA.

Command Checker for model-driven app ribbons

I keep saying there is just no way to know everything about Power Platform these days.

Below is year-old news, and I just ran into it all of a sudden while trying Power FX buttons (which I am still trying). Somehow, a new item showed up on the menu which I am pretty sure I had not seen before:



It turned out to be a very useful tool that was first released more than a year ago:

It is normally hidden from the ribbon, and you have to add ribbondebug=true to the URL for this option to even show up. I guess while I was working with all those preview features (pages, Power FX commanding, modern app designer), that parameter got added to the URL at some point, so that’s how I ended up enabling the command checker without even knowing about it.
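For what it’s worth, appending that parameter can be scripted – for instance, as a bookmarklet. A small sketch using the standard URL API (the app URL and appid below are placeholders):

```javascript
// Append ribbondebug=true to a model-driven app URL so that the Command
// Checker menu item shows up. The appid value here is just a placeholder.
function withRibbonDebug(appUrl) {
  const url = new URL(appUrl);
  url.searchParams.set("ribbondebug", "true");
  return url.toString();
}

console.log(withRibbonDebug("https://myorg.crm.dynamics.com/main.aspx?appid=12345"));
```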

Now that I know it’s there, though, I am just thinking of how useful it would be every time I add a new button to the ribbon, since I would otherwise have to spend a bit of time troubleshooting rules for almost every one of those buttons. What if I could just open the command checker and see what’s going on?


Oh, the button is there… but that specific rule has evaluated to false. No need to do the guesswork.

Although, it seems this tool is not, yet, completely hooked up to the Power FX command buttons, since here is what I see for one of them:


I’m guessing there is some kind of script behind it which is not recognized by the command checker, but which is created by the app designer for the button below:


Ok, let’s put it this way then:

  • Command checker can be very useful when troubleshooting ribbon issues
  • However, it is not, yet, fully integrated with the Power FX commanding

It will probably be updated, though, so everything will come together.

Power BI Paginated Reports ALM

I think Power FX Command designer is going to be the news of the month (or, possibly, of the whole summer):

However, that’s something I’ll need to look at a bit later. In the meantime, there was a more pressing need on the project I’ve been working on, and that’s all about setting up some sort of ALM for Power BI Paginated reports.

In hindsight, it seems “specialization” is starting to bite me every now and then. Microsoft keeps building functionality in different areas, and keeping up with all of that is not that simple.

However, first things first.

I was looking for a way to set up a Dev/Test/Production deployment process for Power BI Paginated Reports. Unlike SSRS reports, which can be added to Dataverse solutions, Paginated Reports have nothing to do with Dataverse – they treat it as yet another datasource, but, other than that, they are not that integrated. They do not show up in the user interface unless we add links in the app designer (or, possibly, through that new Power FX Command designer?), and they cannot be added to solutions.

My original response to that problem was to write a PowerShell script that would use the Power BI REST API to help with the ALM. It might be a topic for another blog post, but it took me almost two days to figure out how that might work, and, as I was still digging into it, here is what I found:

Deployment Pipelines for Power BI


Wow! To be honest, this whole ALM question for Power BI reports kept bothering me for the last few months, but I never had time to dig into it until recently. And just when I was about to give up on complete automation (since I was hitting some issues with PowerShell, it would have been limited anyway)… it turned out most of my questions had already been answered.

What exactly can we do with deployment pipelines?

  • We can deploy workspace components from dev workspace to test workspace, and, then, to prod workspace
  • We can deploy some components, or we can deploy all
  • We can configure rules, so, for instance, when deploying a report from Dev to Test, we can specify datasource rules to automatically connect our report to a datasource in the test environment. And, then, we can configure similar rules for production deployments


So, basically, with the rules above, my paginated report, once it’s deployed to the test environment, will have its connection string updated so it starts querying data from the test Dataverse environment (instead of the dev environment).
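The same promotion can also be triggered programmatically through the Power BI REST API (the “deploy all” operation on a deployment pipeline). A sketch of the request shape, where the pipeline id and token are placeholders, and the options flags are assumptions based on my reading of the API reference:

```javascript
// Build the "deploy all" request for a Power BI deployment pipeline.
// sourceStageOrder: 0 deploys Dev -> Test, 1 deploys Test -> Prod.
// Pipeline id, token, and options values are placeholders/assumptions.
function buildDeployAllRequest(pipelineId, sourceStageOrder, accessToken) {
  return {
    url: `https://api.powerbi.com/v1.0/myorg/pipelines/${pipelineId}/deployAll`,
    method: "POST",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      sourceStageOrder,
      options: { allowCreateArtifact: true, allowOverwriteArtifact: true }
    })
  };
}

const req = buildDeployAllRequest("00000000-0000-0000-0000-000000000000", 0, "<token>");
console.log(req.url);
```

You would pass the resulting pieces to any HTTP client; this is, essentially, what my original PowerShell script was going to do by hand.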

This, of course, is not the same as solution deployment in Dataverse, but, that said, paginated reports will most likely be developed, tested, and deployed separately from other solution components anyway. And, even though we don’t necessarily need this to be integrated with Azure DevOps, we can still do that, too, if this is what we would prefer:

There are only a couple of obvious issues:

  • This will only work for organizations that have premium capacity. That might not be such a big deal keeping in mind that paginated reports need premium capacity either way
  • If you wanted to create a pipeline, you’d have to be an admin in the new workspace experience (I don’t know enough about Power BI to identify all possible implications, but, for example, I will probably have to request those permissions on the project, since, right now, I can’t create workspaces there. All the screenshots above are from my own tenant)

Hope this helps! And I’m off to trying Power FX Command designer now…

Bulk deletion seems a bit fishy

Have you tried bulk deletion lately? You may have seen the error below when you tried:


Well, don’t you worry. Just wait a minute… I mean, literally. Although, to be honest, you may have to wait a few minutes, but the key is that you need to wait a bit and try again. It might just work on your next attempt:
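“Wait and try again” is also easy to script if you happen to be calling the bulk delete API yourself. A generic retry sketch – the operation, attempt count, and delay are all up to you:

```javascript
// Retry an async operation a few times, pausing between attempts. The
// operation could be, say, a BulkDelete request; here it's just a parameter.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function retryWithDelay(operation, maxAttempts, delayMs) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) await sleep(delayMs); // wait a bit, then try again
    }
  }
  throw lastError; // out of attempts
}
```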


How come? Who knows. Apparently, bulk deletion looks a bit fishy:

10 of the World's Most Dangerous Fish | Britannica


Earlier today, I was crying for help, and Nick Doelman offered the workaround above – thanks, Nick! (Although, come to think of it, a better one might be “take a coffee break and come back” :) )

Now that everyone can quickly build and share low-code apps – will everyone do?

“Now everyone can quickly build and share low-code apps with Microsoft Power Apps” – this is the slogan you see when you open the Power Apps site:


This is how Microsoft sees Power Apps, and, in the bigger scheme of things, this is what Power Platform is all about – democratizing development by providing tools everyone can use. With Power Automate, we can connect to almost everything almost everywhere, and, then, we can perform actions on those connections without writing a single line of code. With Power Apps, we can use the same connectors, but, this time, we can create UI experiences without writing any pro code.

Naturally, when looking at it that way, the next logical step would be to encourage business users to start writing applications for themselves, since, of course, they are the ones who know how their businesses operate. From that standpoint, they have an obvious advantage over developers, who might be able to use pro-code, but who would not be able to foresee all the peculiar scenarios their applications may have to cover.

Traditionally, application development would be organized around a project team, and that team would be expected to follow some kind of delivery methodology – it could be agile/scrum with all the user stories, it could be waterfall with all the business requirements, or, possibly, it could be something else. However, all of those would normally assume that business requirements would be captured, one way or another, before development starts. This would be to ensure that all those peculiarities of the business process were clearly explained to the developers, and, therefore, to eventually develop applications that provide the expected functionality.

With the introduction of low-code, which has brought application development within the reach of actual business people, where is this all going to go? Is it, really, that everybody in every organization will start building low-code apps, or is there more to it?

Personally, I believe there are at least a few things to keep in mind:

  • Business users are not going to become low-code developers just because they can
  • For any organization to stay manageable, there should be some consistency in the data it’s collecting from various sources and in the processes it’s following
  • There are certain rules an organization may need to follow, and those rules may have to be enforced (think of various data protection laws, for instance)

I will talk about this more below, but, just by looking at the list above, you can probably see what I’m going to arrive at, and, basically, it’s that an organization may have to meet certain conditions to really start benefiting from the democratized application development.

Why so?

First of all, if you think of the business folks, they have their immediate operational responsibilities, and, often, their performance might be measured by certain metrics. Those metrics have nothing to do with application development – this holds true for sales agents, for CEOs, and for business owners.

Of course, some of them might be interested in jumping into application development since this is something they always wanted to try, but there we are talking about people trying another hobby. For those turning to low-code tools to improve their personal efficiency, there will always be a very interesting dilemma (I find it surprising that “improving personal efficiency” is often touted as a benefit, since that’s sort of ignoring the obvious): you can become more efficient, but, once your secret is revealed to your peers, they will all reach the same level of efficiency, and you will all be equal again. What’s the point? It seems obvious, so why would anybody other than those who have been given some incentive push for personal efficiency? Of course, business owners would be naturally incentivized to improve the personal efficiency of their employees/contractors. People on commission might also see benefits in becoming more efficient, even if that ends up being only a temporary boost in payments.

However, for most business folks, unless, again, they were always dreaming of doing this, the idea of becoming a low-code developer might seem far from what they would really want to be doing in their spare time.

And there is no judgement there – after all, not that many pro-developers would want to become sales agents, right?

Although, there could be an alternative (or complementary) approach where organizations start encouraging employees to spend time on citizen development somehow. Possibly, they will, but, one way or another, for business users to start using low-code development tools, they must be willing to do so. In other words, everyone probably can start developing apps now, but not everyone will.

But that would still be only the first step. Imagine everybody jumping into it and starting to develop all sorts of low-code applications. For everyone involved, it might turn into a really interesting experience, but, in the end, if some of those applications start storing their data in Excel, others start using personal OneDrive, and yet others opt for an Azure SQL database, this will become a nightmare for the organization. After all, you need to ensure the data produced by all those apps is available and manageable somehow. Otherwise, what’s the point of having that data?

But, even when you have manageable data, you can’t really allow one sales person to start following a process that’s completely different from what another sales person would be doing. Mind you, there might be advantages in doing exactly that, since, after all, this way the organization might be able to invent and/or identify better processes.

And, then, the second pre-condition for “everyone starting to do low-code development” would be to either have standardized business processes in the organization (so that any improvement someone comes up with would be useful for others), or to implement some sort of “incubator” program where the organization would be able to identify the improvements made by individual citizen developers and adjust remaining business processes accordingly. Which, in the end, will lead to standardizing the processes across the organization.

Again, those scenarios are not, necessarily, mutually exclusive. There could be well-defined data and processes in the organization, but there could still be room for significant process adjustments (not just for minor process improvements/“automation”).

Let’s say there are people in the organization who can see how turning to low-code development might be useful to them, and let’s say there is some structure to the organizational data / processes those folks can rely on.

There can still be internal / external regulations and rules that the organization has to follow, and those may have to be accounted for while doing any sort of development (be it low-code or pro-code). Externally, those rules may come from various sources – various data protection laws would be one such example. Internally, there might be a need to ensure integration of certain data with specific systems. For example, when taking a payment through a low-code app, and assuming finance users rely on SAP to see all financial data, some kind of integration may have to be implemented to ensure all payment data ends up in SAP.

How would you enforce that at the organizational level? It is very unlikely every low-code developer would just start following those rules on their own. The only way I see that happening is by implementing an application development methodology that would be followed by everyone in the organization, and, then, you would need someone (possibly a team) to oversee proper implementation of that methodology on each “project”.

This could all be represented by a funnel like this:


In the end, low-code development has a lot of potential, and it’s true that “everybody can start building applications”. Whether they will, and whether they even should, depends on a number of things. Ultimately, it all comes down to whether the organization sees this as an opportunity to improve, whether it has created an environment that encourages application development by “everyone”, and whether it has the mechanisms to ensure that development is done consistently across the board.

This is not to say it’s not possible, and, where possible, it might be really beneficial, but, in many cases, getting there is not a small feat.

Testing a polymorphic lookup

There are polymorphic lookups now, but what’s the big deal? Well, I don’t have to go far to find an example of where they might be helpful. Here is a screenshot from an actual model-driven form on one of the projects I’ve been working on:


There are 3 different lookups on that screen, yet only one of them is supposed to be populated. Which means there are business rules involved, possibly validations, etc. I could simplify the screen above and get rid of those business rules right away by adding a polymorphic lookup – that would be a single lookup which could reference any of those three tables.

As of today, polymorphic lookups can only be created through the API, and the link I put right at the top of this post provides all the required details. Except that… there is an error in the JSON source:


Heh… they had to make a mistake somewhere, or I’d really have to start worshipping the product team for the goodies they are delivering :)

Anyway, to create a polymorphic lookup, we need something to send an HTTP POST request to Dataverse, so I used Restman for that:

Here is how the whole thing looks, and I’ll provide the JSON I used below:

And here is the json:

 {
   "OneToManyRelationships": [
     {
       "SchemaName": "new_test_contact",
       "ReferencedEntity": "contact",
       "ReferencingEntity": "new_test"
     },
     {
       "SchemaName": "new_test_account",
       "ReferencedEntity": "account",
       "ReferencingEntity": "new_test"
     },
     {
       "SchemaName": "new_test_systemuser",
       "ReferencedEntity": "systemuser",
       "ReferencingEntity": "new_test",
       "CascadeConfiguration": {
         "Assign": "NoCascade",
         "Delete": "RemoveLink",
         "Merge": "NoCascade",
         "Reparent": "NoCascade",
         "Share": "NoCascade",
         "Unshare": "NoCascade"
       }
     }
   ],
   "Lookup": {
     "AttributeType": "Lookup",
     "AttributeTypeName": {
       "Value": "LookupType"
     },
     "Description": {
       "@odata.type": "Microsoft.Dynamics.CRM.Label",
       "LocalizedLabels": [
         {
           "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
           "Label": "Test Client",
           "LanguageCode": 1033
         }
       ],
       "UserLocalizedLabel": {
         "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
         "Label": "Test Client",
         "LanguageCode": 1033
       }
     },
     "DisplayName": {
       "@odata.type": "Microsoft.Dynamics.CRM.Label",
       "LocalizedLabels": [
         {
           "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
           "Label": "TestClientLookup",
           "LanguageCode": 1033
         }
       ],
       "UserLocalizedLabel": {
         "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
         "Label": "TestClientLookup",
         "LanguageCode": 1033
       }
     },
     "SchemaName": "new_TestClientLookup",
     "@odata.type": "Microsoft.Dynamics.CRM.ComplexLookupAttributeMetadata"
   }
 }

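For what it’s worth, a dedicated REST client isn’t strictly required here; any HTTP client can send the same payload. A sketch of the request shape, assuming the CreatePolymorphicLookupAttribute action name from the docs (the org URL and token are placeholders, and the payload is the JSON above):

```javascript
// Build the HTTP request for creating a polymorphic lookup via the Dataverse
// Web API. Pass the resulting pieces to fetch (or any HTTP client). The
// action name comes from the docs; everything else is a placeholder.
function buildCreateLookupRequest(orgUrl, accessToken, payload) {
  return {
    url: `${orgUrl}/api/data/v9.1/CreatePolymorphicLookupAttribute`,
    method: "POST",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Content-Type": "application/json",
      "OData-MaxVersion": "4.0",
      "OData-Version": "4.0"
    },
    body: JSON.stringify(payload)
  };
}

const request = buildCreateLookupRequest("https://myorg.crm.dynamics.com", "<token>", { /* JSON payload from above */ });
console.log(request.url);
```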
With that, I now have my lookup field added to the new_test table:


Although, it does not look quite right there, since, it seems, I can only see one related table in the designer. But, well, that’s only a preview after all, and that seems to be just a minor inconvenience.

Once this new field is added to the form, I can use records from any of the 3 referenced tables to populate the lookup:


Now, here is what I was interested in beyond the fact that we can reference multiple tables: what happens on the TDS endpoint side? Since this is what I’d be using for Power BI paginated reports. And, actually, it’s all good there. Here is an example:


So, basically, the TDS endpoint will give us three columns for this kind of lookup. The first column provides the referenced record guid, the second one provides the referenced record type, and, finally, the third one provides the referenced record “name”. Which is more than enough to do proper reporting in Power BI/SSRS.

And what about the advanced find?

It’s interesting there, since, when specifying filtering conditions, I can see a combined list of possible operators. For example, on the screenshot below, I’ve selected “Equals Current User” (and this is a polymorphic lookup, so I could have selected “Equals” and picked an account instead):

Still, if I used the “Equals” condition, I could pick from any of the three referenced tables:

Well, that’s some cool stuff from the product team, and the only question for me is whether it’s something I should start using in production, or whether I should wait till this is out of preview. Holding off for now, but this is really testing my patience :)

Using Environment Variables to configure Word Template action in the cloud flows

Power Automate word templates are much more fun to work with than the classic D365 word templates, and we can happily use them in a model-driven app with a bit of JavaScript:

However, how are we supposed to deploy flows containing the “Populate a Microsoft Word Template” action to different environments so that each environment has its own version of the template? Assuming this is how we would ensure proper ALM for the template files (wow, that sounds crazy, but… we would not want developers to just update the production version of the templates, right?)

To start with, let’s look at how that action would, normally, be configured:


Although, those user-friendly names are not really what the connector is using behind the scenes. We need to look a bit deeper by using the “peek code” option:


There are a few “id” strings there, and that’s what the connector is using instead of the friendly names it’s showing us.

Therefore, if we wanted to configure the action to take in dynamic values for those parameters, we would need to know those values. One way to get them for each environment would be to simply point the action to another environment, use “peek code”, copy the values, then do it for the next environment, and so on.

Let’s assume we have correct values for each environment.

The next step would be to create 3 environment variables, configure default values, and use them to configure action parameters:


It’s worth noting that, unlike flow variables, environment variables will actually have values at “design time”, which means the flow will be able to parse the template. If you tried using flow variables, they would not be “set” yet, since that only happens at run time. Hence, the action above would not be able to find the template at design time, and you’d be getting an error.

But I digressed. We are using environment variables above.

Now, all you need from here is:

  • Export your managed solution that includes the flow and required environment variables
  • In the target environment, open Default Solution, locate those 3 variables, and configure custom values which would be specific to each environment
  • You may need to turn the flow on and off, since cloud flows tend to cache environment variable values
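The override behavior in that second step boils down to “use the environment-specific value if there is one, otherwise fall back to the default”. A simplified sketch of the resolution rule (the record shapes here are made up for illustration):

```javascript
// Resolve an environment variable the way Dataverse does: a current value
// record, if present, wins over the definition's default value.
function resolveEnvironmentVariable(definition, currentValues) {
  const override = currentValues.find((v) => v.schemaName === definition.schemaName);
  if (override && override.value != null) {
    return override.value; // environment-specific value
  }
  return definition.defaultValue; // fall back to the default
}

const def = { schemaName: "new_TemplateDocumentId", defaultValue: "dev-doc-id" };
console.log(resolveEnvironmentVariable(def, [])); // no override -> "dev-doc-id"
console.log(resolveEnvironmentVariable(def, [{ schemaName: "new_TemplateDocumentId", value: "test-doc-id" }]));
```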

For example, here is how one of those env variables is configured:

And voila… Every environment will now be loading the document template from the proper location, and those templates can even have minor differences, as you can see on the screenshot below:


And that’s it.

PS. By the way, I can’t help but mention that implementing this kind of “ALM” might be a pain in the neck with the classic word templates, since they are not solution-aware at all, and the only way to copy them from one environment to another would be to manually update template files to match proper entity IDs… or to use XrmToolBox for that, since, I believe, there is a plugin there.

Upcoming pricing changes for Power Apps

With the recent announcement, one thing is clear: Power Apps licensing has never been cheaper. With no licence minimums or other purchase requirements, pretty much any organization should be able to start using Power Apps now:


Although, as it usually is with licensing, there is a fly in the ointment. Those changes are coming into effect on October 1, 2021. Until then, even though there is a promotional offer of $12 ($3 on the per-app plan), that offer is only applicable to a minimum purchase of 5000 licenses.

Either way, cheaper licensing is always a good thing, and I’m happy this is happening.

But, then, I still remember the time when Dynamics CRM 2011 was somewhere in the range of $40-45 per user licence, and, from there, licensing fees only kept growing.

Funny enough, current Power Apps pricing is exactly the same, but, since first-party applications are not included in that price, it only seems fair that Power Apps pricing should have been lower, right? Since, after all, we are essentially paying for platform access, but there is no out-of-the-box business functionality included.

This seems to be an ongoing problem with Power Platform licensing. We all may have some idea of what is fair and what is not. Microsoft may have some idea, too, and, as this announcement shows, they might actually be quite a bit off… to such an extent that they can cut licensing fees by 50%… but none of that is going to mean anything until, somehow, Power Platform licensing fees get translated into the resource consumption fees so that we could see underlying resource usage and associated fees.

At least that way we could clearly see how $20 or $5 translates into CPU / memory / traffic / etc usage. Apparently, there would also be some licensing fee for the “platform access”, but, right now, this is all bundled together, and, so, I’d think those prices can go up or down depending on how it all balances out in the books year after year.

Hence, it’s great the prices are going down. I think it would have been even better if there were a clear explanation of how those prices are set (in relation to Azure resource consumption fees in general). Without that, will the price go up next year? Will it go further down? It’s kind of hard to say. Well, still, it does not change the fact that we just got much better pricing, so… have fun with Power Platform!

EasyRepro tips

It’s been great attending Ali Youssefi’s EasyRepro workshop this week (and it’s also been great to be invited into those workshops by the client in the first place). It’s a nice workshop which will get you through the basics of EasyRepro (and some of the related things you may need to keep in mind) – we are not supposed to share workshop materials, but, well, you might still want to see all of the EasyRepro videos Ali has on YouTube:

Either way, that’s been a great opportunity for me to dig a little deeper into how EasyRepro works, what’s doable and what’s not.

At a high level, I wrote about EasyRepro before:

All of that still stands, so, in this post, I am just going to dig into a few areas where I had problems with EasyRepro this week.

1. Invoking “New” lookup action

It seems easy to do, since you just need to call xrmApp.Lookup.New()

However, xrmApp has a few properties which are only applicable in context:

  • Lookup
  • QuickCreate
  • Grid
  • Entity

And there are, probably, a few more.

What this means is that those properties of xrmApp make (or do not make) sense depending on what’s currently happening in the model-driven application. For example, when a quick create form is visible, xrmApp.QuickCreate will allow access to that form. When a lookup is “selected”, we can access that lookup through xrmApp.Lookup, and, therefore, we can initiate new record creation in the user interface using the xrmApp.Lookup.New() method.

Which means that in order to use New, you have to somehow select the lookup first. So you could, for example, do this:

LookupItem liContact = new LookupItem { Name = "ita_contact" };
// select the lookup first, then call New on it
xrmApp.Entity.SelectLookup(liContact);
xrmApp.Lookup.New();


And, along the way, if your lookup is polymorphic (it’s a customer, for example), you may need to call SelectRelatedEntity before calling “New”:


2. You might want to close the “Unsaved changes” popup automatically in your tests


The function below will click the “Discard changes” button (although you might want to update it to use “Save and continue” instead). Normally, you would call this function in those places where you would expect the “Unsaved changes” dialog to pop up:

public bool DiscardChangesIfPresent()
{
    // FindElements (plural) returns an empty list instead of throwing
    // when the button is not on the page
    var cancelButtons = _client.Browser.Driver.FindElements(By.XPath("//*[@id=\"cancelButton\"]"));
    if (cancelButtons.Count > 0)
    {
        cancelButtons[0].Click();
        return true;
    }
    return false;
}

3. Updating two lookup fields in a sequence

It seems that, sometimes, you may have to “focus out” of the lookup before switching to another lookup. For example, I’ve been having problems with the code below:

LookupItem li = new LookupItem
{
    Name = "<column_name>",
    Value = "<column_value>",
    Index = 0
};
xrmApp.Entity.SetValue(li);

LookupItem liContact = new LookupItem { Name = "<another_column_name>" };
xrmApp.Entity.SelectLookup(liContact);
xrmApp.Lookup.New();

You can see how that code is setting the value of the first lookup, and, then, it immediately proceeds to create a “new” record for another lookup. Or, at least, that’s the intent. Instead of that, though, “New” somehow gets called on the first lookup.

It seems all we need to do is add an extra line to select another element on the form (sort of to “focus out”). There are, probably, other ways to do it, but here is what worked for me:


Once I added this line right where the placeholder is in the code above, everything started to work properly.

4. Killing chromedriver and related chrome.exe processes

If you tend to terminate test debugging in Visual Studio, and if you are using Chrome there, you will inevitably end up with chromedriver.exe and chrome.exe processes stuck in memory, so you will either have to keep terminating them from the task manager, or you could make your life a little easier with some kind of script/tool. Here is a PowerShell script that might do the trick:

function Kill-Tree {
    param([int]$ppid, [bool]$recursive)

    if ($recursive -eq $true) {
        # kill the chrome.exe children of this process first
        Get-CimInstance Win32_Process |
            Where-Object { $_.ParentProcessId -eq $ppid -and $_.Name -eq "chrome.exe" } |
            ForEach-Object { Kill-Tree $_.ProcessId $false }
    }
    Stop-Process -Id $ppid
}

Get-Process -Name chromedriver -ErrorAction SilentlyContinue | ForEach-Object -Process {
    Kill-Tree $_.Id $true
}
For Edge, Firefox, and other browsers, you might need to adjust the script above.