Author Archives: Alex Shlega

Bulk deletion seems a bit fishy

Have you tried bulk deletion lately? You may have seen the error below when you tried:

image001

Well, don’t you worry. Just wait a minute… I mean, literally. You may have to wait a few minutes, to be honest, but the key is to wait a bit and try again. It might just work on your next attempt:

image

How come? Who knows. Apparently, bulk deletion looks a bit fishy:

10 of the World's Most Dangerous Fish | Britannica

(The fish above came here from https://www.britannica.com/list/10-of-the-worlds-most-dangerous-fish)

Earlier today, I was crying for help and Nick Doelman offered the workaround above – thanks, Nick! (Although, come to think of it, a better one might be “take a coffee break and come back” :) )

Now that everyone can quickly build and share low-code apps – will everyone do?

“Now everyone can quickly build and share low-code apps with Microsoft Power Apps” – this is the slogan you see when you open https://powerapps.microsoft.com/en-us/

image

This is how Microsoft sees Power Apps, and, in the bigger scheme of things, this is what Power Platform is all about – democratizing development by providing tools everyone can use. With Power Automate, we can connect to almost everything almost everywhere, and, then, we can perform actions on those connections without writing a single line of code. With Power Apps, we can use the same connectors, but, this time, we can create UI experiences without writing any pro code.

Naturally, when looking at it that way, the next logical step would be to encourage business users to start writing applications for themselves since, of course, they are the ones who know how the business operates, and, from that standpoint, they have an obvious advantage over developers, who might be able to use pro-code but would not be able to foresee all the peculiar scenarios their applications may have to cover.

Traditionally, application development would be organized around a project team, and that team would be expected to follow some kind of delivery methodology – it could be agile/scrum with all the user stories, it could be waterfall with all the business requirements, or, possibly, it could be something else. However, all of those would, normally, assume that business requirements are captured, one way or another, before development starts. This is to ensure that all those peculiarities of the business process are clearly explained to the developers, and, therefore, that the resulting applications provide the expected functionality.

With the introduction of low-code, which has brought application development within the reach of the actual business people, where is this all going to go? Is it, really, that everybody in any organization will start building low-code apps, or is there more to it?

Personally, I believe there are at least a few things to keep in mind:

  • Business users are not going to become low-code developers just because they can
  • For any organization to stay manageable, there should be some consistency in the data it’s collecting from various sources and in the processes it’s following
  • There are certain rules an organization may need to follow, and those rules may have to be enforced (think of various data protection laws, for instance)

I will talk about this more below, but, just by looking at the list above, you can probably see what I’m going to arrive at, and, basically, it’s that an organization may have to meet certain conditions to really start benefiting from the democratized application development.

Why so?

First of all, if you think of the business folks, they have their immediate operational responsibilities, and, often, their performance might be measured by certain metrics. Those have nothing to do with application development – this holds true for sales agents, it holds true for CEOs, and it holds true for business owners.

Of course, some of them might be interested in jumping into application development since this would be something they always wanted to try, but then we are talking about people picking up another hobby. For those turning to low-code tools to improve their personal efficiency, there will always be a very interesting dilemma (I find it surprising that “improving personal efficiency” is often touted as a benefit, since it sort of ignores the obvious): you can become more efficient, but, once your secret is revealed to your peers, they will all reach the same level of efficiency, and you will all be equal again. What’s the point? It seems obvious, so why would anybody other than those who have been given some incentives push for personal efficiency? Of course, business owners would be naturally incentivized to improve the personal efficiency of their employees/contractors. People on commission might also see benefits in becoming more efficient, even if that ends up being only a temporary boost in payments.

However, for most of the business folks, unless, again, they have always been dreaming of doing this, the idea of becoming a low-code developer might seem far from what they would really want to be doing in their spare time.

And there is no judgement there – after all, not that many pro-developers would want to become sales agents, right?

Although, there could be an alternative (or complementary) approach where organizations start encouraging employees to spend time on citizen development somehow. Possibly, they will, but, one way or another, for the business users to start using low-code development tools, they must be willing to do so. In other words, everyone probably can start developing apps now, but not everyone will.

But that would still be only the first step. Imagine everybody jumping into it and starting to develop all sorts of low-code applications. For everyone involved, it might turn into a really interesting experience, but, in the end, if some of those applications start storing their data in Excel, others start using personal OneDrive, and yet others opt for an Azure SQL database, this will become a nightmare for the organization, since, after all, you need to ensure the data produced by all those apps is available and manageable somehow. Otherwise, what’s the point in having that data?

But, even when you have manageable data, you can’t really allow one sales person to start following a process that’s completely different from what another sales person would be doing. Mind you, there might be advantages in doing exactly that, since, after all, this way the organization might be able to invent and/or identify better processes.

And, then, the second pre-condition for “everyone starting to do low-code development” would be to either have standardized business processes in the organization (so any improvement someone comes up with would be useful for others), or to implement some sort of “incubator” program where the organization would be able to identify the improvements made by individual citizen developers and adjust the remaining business processes accordingly. Which, in the end, will lead to standardizing the processes across the organization.

Again, those scenarios are not, necessarily, mutually exclusive. There could be well-defined data and processes in the organization, but there could still be room for significant process adjustments (not just for minor process improvements/“automation”).

Let’s say there are people in the organization who can see how turning to low-code development might be useful to them, and let’s say there is some structure to the organizational data / processes those folks can rely on.

There can still be internal / external regulations and rules that the organization has to follow, and those may have to be accounted for while doing any sort of development (be it low-code or pro-code). Externally, those rules may come from various sources, and, of course, various data protection laws would be one such example. Internally, there might be a need to ensure integration of certain data with specific systems – for example, when taking a payment through a low-code app somehow, and assuming finance users are relying on SAP to see all financial data, some kind of integration may have to be implemented to ensure all payment data ends up in SAP.

How would you enforce that on the organizational level, given that it is very unlikely every low-code developer would just start following those rules? The only way I see that happening is by somehow implementing an application development methodology that would be followed by everyone in the organization, and, then, you would need someone (possibly a team) to oversee proper implementation of that methodology on each “project”.

This could all be represented by a funnel like this:

image

In the end, low-code development has a lot of potential, and it’s true that “everybody can start building applications”. Whether they will, and whether they even should, depends on a number of things, though; in the end, it all comes down to whether the organization sees this as an opportunity to improve, whether it has created the environment that encourages application development by “everyone”, and whether it has the mechanisms to ensure that development is done consistently across the board.

This is not to say it’s not possible, and, where possible, it might be really beneficial, but, in many cases, getting there is not a small feat.

Testing a polymorphic lookup

There are polymorphic lookups now, but what’s the big deal? Well, I don’t have to go far to find an example of where it might be helpful. Here is a screenshot from the actual model-driven form on one of the projects I’ve been working on:

image

There are 3 different lookups on that screen, yet only one of them is supposed to be populated. Which means there are business rules involved, possibly validations, etc. I could simplify the screen above and get rid of those business rules right away by adding a polymorphic lookup – a single lookup which could reference any of those three tables.

As of today, polymorphic lookups can only be created through the API, and the link I put right at the top of this post provides all the required details. Except that… there is an error in the json source:

image

Heh… they had to make a mistake somewhere, or I’d really have to start worshipping the product team for the goodies they are delivering :)

Anyway, to create a polymorphic lookup we need something to send an HTTP POST request to Dataverse, so I used Restman for that:

https://chrome.google.com/webstore/detail/restman/ihgpcfpkpmdcghlnaofdmjkoemnlijdi?hl=en

Here is how the whole thing looks, and I’ll provide the json I used below:

And here is the json:

{
  "OneToManyRelationships": [
    {
      "SchemaName": "new_test_contact",
      "ReferencedEntity": "contact",
      "ReferencingEntity": "new_test"
    },
    {
      "SchemaName": "new_test_account",
      "ReferencedEntity": "account",
      "ReferencingEntity": "new_test"
    },
    {
      "SchemaName": "new_test_systeuser",
      "ReferencedEntity": "systemuser",
      "ReferencingEntity": "new_test",
      "CascadeConfiguration": {
        "Assign": "NoCascade",
        "Delete": "RemoveLink",
        "Merge": "NoCascade",
        "Reparent": "NoCascade",
        "Share": "NoCascade",
        "Unshare": "NoCascade"
      }
    }
  ],
  "Lookup": {
    "AttributeType": "Lookup",
    "AttributeTypeName": {
      "Value": "LookupType"
    },
    "Description": {
      "@odata.type": "Microsoft.Dynamics.CRM.Label",
      "LocalizedLabels": [
        {
          "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
          "Label": "Test Client",
          "LanguageCode": 1033
        }
      ],
      "UserLocalizedLabel": {
        "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
        "Label": "Test Client",
        "LanguageCode": 1033
      }
    },
    "DisplayName": {
      "@odata.type": "Microsoft.Dynamics.CRM.Label",
      "LocalizedLabels": [
        {
          "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
          "Label": "TestClientLookup",
          "LanguageCode": 1033
        }
      ],
      "UserLocalizedLabel": {
        "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
        "Label": "TestClientLookup",
        "LanguageCode": 1033
      }
    },
    "SchemaName": "new_TestClientLookup",
    "@odata.type": "Microsoft.Dynamics.CRM.ComplexLookupAttributeMetadata"
  }
}
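
By the way, if you’d rather do the same thing from code than from a REST client, below is a minimal C# sketch of that POST. I’m assuming the CreatePolymorphicLookupAttribute action described in the multi-table lookup documentation, and the org URL / bearer token are placeholders – adjust both for your environment:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreatePolymorphicLookup
{
    static async Task Main()
    {
        // Placeholders – use your own org URL and whatever token acquisition (MSAL, etc.) you normally use
        var orgUrl = "https://yourorg.crm.dynamics.com";
        var accessToken = "<bearer token>";
        var json = System.IO.File.ReadAllText("polymorphic-lookup.json"); // the json above

        using var client = new HttpClient { BaseAddress = new Uri(orgUrl) };
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Add("OData-MaxVersion", "4.0");
        client.DefaultRequestHeaders.Add("OData-Version", "4.0");

        // Post the payload to the action that creates multi-table (polymorphic) lookups
        var response = await client.PostAsync(
            "/api/data/v9.1/CreatePolymorphicLookupAttribute",
            new StringContent(json, Encoding.UTF8, "application/json"));

        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}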

With that, I now have my lookup field added to the new_test table:

image

Although, it does not look quite right there, since, it seems, I can only see one related table in the designer. But, well, that’s only a preview after all, and that seems to be just a minor inconvenience.

Once this new field is added to the form, I can use records from any of the 3 referenced tables to populate the lookup:

image

Now, here is what I was interested in beyond the fact that we can reference multiple tables: what happens on the TDS endpoint side, since this is what I’d be using for Power BI paginated reports? And, actually, it’s all good there. Here is an example:

image

So, basically, the TDS endpoint will give us three columns for this kind of lookup. The first column provides the referenced record guid, the second one provides the referenced record type, and, finally, the third one provides the referenced record “name”. Which is more than enough to do proper reporting in Power BI/SSRS.
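
Just to illustrate what that might look like from code, here is a rough sketch of querying those columns through the TDS endpoint with Microsoft.Data.SqlClient. The three column names are my assumption based on the new_TestClientLookup column above, so double-check what the endpoint actually exposes in your environment before relying on them:

using System;
using Microsoft.Data.SqlClient;

class TdsLookupQuery
{
    static void Main()
    {
        // TDS endpoint: same org host, port 5558, any supported Azure AD authentication option
        var connectionString =
            "Server=yourorg.crm.dynamics.com,5558;" +
            "Authentication=Active Directory Interactive;";

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        // Column names below are assumptions – verify them against your own environment
        using var command = new SqlCommand(
            @"SELECT TOP 10
                     new_testclientlookup,       -- referenced record guid
                     new_testclientlookuptype,   -- referenced table (account / contact / systemuser)
                     new_testclientlookupname    -- referenced record 'name'
              FROM new_test", connection);

        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"{reader[0]} | {reader[1]} | {reader[2]}");
        }
    }
}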

And what about the advanced find?

It’s interesting there, since, when specifying filtering conditions, I can see a combined list of possible operators. For example, on the screenshot below, I’ve selected “Equals Current User” (and this is a polymorphic lookup, so I could have selected “Equals” and picked an account instead):

Still, if I used the “Equals” condition, I could pick from any of the three referenced tables:

Well, that’s some cool stuff from the product team, and the only question for me is whether it’s something I should start using in production, or whether I should wait till this is out of preview. Holding off for now, but this is really testing my patience :)

Using Environment Variables to configure Word Template action in the cloud flows

Power Automate word templates are much more fun to work with than the classic D365 word templates, and we can happily use them in model-driven apps with a bit of javascript:

https://www.itaintboring.com/dynamics-crm/power-automate-word-templates-in-model-driven-apps-forms-integration/

However, how are we supposed to deploy flows containing the “Populate a Microsoft Word Template” action to different environments so that each environment has its own version of the template? Assuming this is how we would ensure proper ALM for the template files (wow, that sounds crazy, but… we would not want developers to just update the production version of the templates, right?)

To start with, let’s look at how that action would, normally, be configured:

image

Although, those user-friendly names are not, really, what the connector is using behind the scenes. We need to look a bit deeper by using the “peek code” option:

image

There are a few “id” strings there, and that’s what the connector is using instead of the friendly names it’s showing us.

Therefore, if we wanted to configure the action to take in dynamic values for those parameters, we would need to know those values. One way to get them for each environment would be to simply point that action to another environment, use “peek code”, copy the values, then do it for the next environment, and so on.

Let’s assume we have correct values for each environment.

The next step would be to create 3 environment variables, configure default values, and use them to configure action parameters:

image

It’s worth noting that, unlike flow variables, environment variables will actually have values at design time, which means the flow will be able to parse the template. If you tried using flow variables, they would not be “set” yet, since that only happens at run time. Hence, the action above would not be able to find the template at design time, and you’d be getting errors.

But I digressed. We are using environment variables above.

Now, all you need from here is:

  • Export your managed solution that includes the flow and required environment variables
  • In the target environment, open the Default Solution, locate those 3 variables, and configure custom values specific to that environment (see the sketch right after this list for one way to verify which value will actually be picked up)
  • You may need to turn the flow off and on, since cloud flows tend to cache environment variable values
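
Speaking of verifying those values: behind the scenes, environment variables live in two Dataverse tables – environmentvariabledefinition (which holds the default value) and environmentvariablevalue (which holds the environment-specific “current” value). Here is a rough sketch of how you could query them through the Web API to double-check what a flow is going to pick up; the variable schema name below is made up, and you may want to verify the navigation property name against your own metadata:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CheckEnvironmentVariable
{
    static async Task Main()
    {
        // Placeholders – org URL, token, and the variable schema name are all made up
        var orgUrl = "https://yourorg.crm.dynamics.com";
        var accessToken = "<bearer token>";

        using var client = new HttpClient { BaseAddress = new Uri(orgUrl) };
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // Pull the default value and any environment-specific "current" value in one request
        var query =
            "/api/data/v9.1/environmentvariabledefinitions" +
            "?$select=schemaname,defaultvalue" +
            "&$filter=schemaname eq 'new_WordTemplateId'" +
            "&$expand=environmentvariabledefinition_environmentvariablevalue($select=value)";

        var response = await client.GetAsync(query);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
        // If the expanded collection comes back empty, the flow will fall back to the default value
    }
}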

And, getting back to the configuration, here is how one of those env variables is configured:

And voila… Every environment will now be loading the document template from the proper location, and those templates can even have minor differences, as you can see on the screenshot below:

image

And that’s it.

PS. Btw, can’t help but mention that implementing this kind of “ALM” might be a pain in the neck with the classic word templates, since they are not solution-aware at all, and the only way to copy them from one environment to another would be to manually update the template files to match proper entity ids… or to use XrmToolBox, since, I believe, there is a plugin for that.

Upcoming pricing changes for Power Apps

With the recent announcement, one thing is clear: Power Apps licensing has never been cheaper. With no licence minimums or other purchase requirements, pretty much any organization should be able to start using Power Apps now:

image

Although, as it usually is with licensing, there is a fly in the ointment. Those changes are coming into effect on October 1, 2021. Until then, even though there is a promotional offer of $12 ($3 on the per-app plan), that offer is only applicable to a minimum purchase of 5,000 licenses.

Either way, cheaper licensing is always a good thing, and I’m happy this is happening.

But, then, I still remember the time when Dynamics CRM 2011 was somewhere in the range of $40-45 per user licence, and, from there, licensing fees only kept growing.

Funny enough, current Power Apps pricing is exactly the same, but, since first-party applications are not included in that price, it only seems fair that Power Apps pricing should have been lower, right? Since, after all, we are, essentially, paying for platform access, but there is no out-of-the-box business functionality included there.

This seems to be an ongoing problem with Power Platform licensing. We all may have some idea of what is fair and what is not. Microsoft may have some idea, too, and, as this announcement shows, they might actually be quite a bit off… to such an extent that they can cut licensing fees by 50%… but none of that is going to mean anything until, somehow, Power Platform licensing fees get translated into the resource consumption fees so that we could see underlying resource usage and associated fees.

At least that way we could clearly see how $20 or $5 translates into CPU / memory / traffic / etc usage. Apparently, there would also be some licensing fee for the “platform access”, but, right now, this is all bundled together, and, so, I’d think those prices can go up or down depending on how it all balances out in the books year after year.

Hence, it’s great the prices are going down. I think it would have been even better if there was a clear explanation of how those prices are set (in relation to the Azure resource consumption fees in general). Without that, will the price go up next year? Will it go further down? It’s kind of hard to say. Well, still, it does not change the fact that we just got much better pricing, so… have fun with Power Platform!

EasyRepro tips

It’s been great attending Ali Youssefi’s EasyRepro workshop this week (and it’s also been great to be invited to those workshops by the client to start with). That’s a nice workshop which will get you through the basics of EasyRepro (and some of the related things you may need to keep in mind) – we are not supposed to share workshop materials, but, well, you might still want to see all of the EasyRepro videos Ali has on Youtube:

https://www.youtube.com/results?search_query=Ali+Youssefi+easyrepro

Either way, that’s been a great opportunity for me to dig a little deeper into how EasyRepro works, what’s doable and what’s not.

At a high level, I wrote about EasyRepro before:

https://www.itaintboring.com/dynamics-crm/easy-repro-what-is-it/

All of that still stands, so, in this post, I am just going to dig into a few areas where I had problems with EasyRepro this week.

1. Invoking “New” lookup action

It seems easy to do, since you just need to call xrmApp.Lookup.New()

However, xrmApp has a few properties which are only applicable in a specific context:

  • Lookup
  • QuickCreate
  • Grid
  • Entity

And there are, probably, a few more.

What this means is that those properties of xrmApp make (or do not make) sense depending on what’s currently happening in the model-driven application. For example, when a quick create form is visible, xrmApp.QuickCreate will allow access to that form. When a lookup is “selected”, we can access that lookup through xrmApp.Lookup, and, therefore, we can initiate new record creation in the user interface using the xrmApp.Lookup.New() method.

Which means that, in order to use New, you have to somehow select the lookup first. So you could, for example, do this:

// Select the lookup on the form first, then trigger "New" on it
LookupItem liContact = new LookupItem()
{
    Name = "ita_contact"
};

xrmApp.Entity.SelectLookup(liContact);

xrmApp.Lookup.New();

And, along the way, if your lookup is polymorphic (it’s a customer, for example), you may need to call SelectRelatedEntity before calling “New”:

xrmApp.Lookup.SelectRelatedEntity("Contacts");

2. You might want to close the “Unsaved changes” popup automatically in your tests

image

The function below will click the “Discard changes” button (although, you might want to update it so it uses “Save and continue” instead). Normally, you would call this function in those places where you would expect the “Unsaved changes” dialog to pop up:

public bool DiscardChangesIfPresent()
{
    try
    {
        // "Discard changes" is rendered as the "cancelButton" element of the dialog
        var cancelButton = _client.Browser.Driver.FindElement(By.XPath("//*[@id=\"cancelButton\"]"));
        if (cancelButton != null)
        {
            cancelButton.Click();
            _client.Browser.Driver.WaitForTransaction();
            return true;
        }
    }
    catch
    {
        // No "Unsaved changes" dialog on the screen – nothing to discard
        return false;
    }
    return false;
}
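
And here is a hedged usage sketch – the subarea names are made up, the point is simply to call the function right after an action that is likely to trigger the dialog:

// Example usage: navigate away from a (possibly) dirty form and
// dismiss the "Unsaved changes" dialog if it shows up
xrmApp.Navigation.OpenSubArea("Sales", "Contacts");
DiscardChangesIfPresent();
xrmApp.ThinkTime(1000);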

3. Updating two lookup fields in a sequence

It seems that, sometimes, you may have to “focus out” of the lookup before switching to another lookup. For example, I’ve been having problems with the code below:

// Set the first lookup
LookupItem li = new LookupItem()
{
    Name = "<column_name>",
    Value = "<column_value>",
    Index = 0
};
xrmApp.Entity.SetValue(li);
xrmApp.ThinkTime(2000);

// <SEE EXPLANATION BELOW FOR AN EXTRA LINE HERE>

// Select the second lookup and create a "new" record for it
LookupItem liContact = new LookupItem()
{
    Name = "<another_column_name>"
};

xrmApp.Entity.SelectLookup(liContact);

xrmApp.Lookup.New();

You can see how that code is setting the value of the first lookup, and, then, it immediately proceeds to create a “new” record for another lookup. Or, at least, that’s the intent. Instead of that, though, the “New” method gets called on the first lookup somehow.

It seems all we need to do is add an extra line to select another element on the form (sort of to “focus out”). There are, probably, other ways to do it, but here is what worked for me:

xrmApp.Entity.SelectTab("General");

Once I added this line right where the placeholder is in the code above, everything started to work properly.

4. Killing chromedriver and related chrome.exe processes

If you tend to terminate test debugging in Visual Studio, and if you are using Chrome there, you will inevitably end up with chromedriver.exe and chrome.exe processes stuck in memory, so you will either have to keep terminating them from Task Manager, or you could make your life a little easier using some kind of script/tool. Here is a PowerShell script that might do the trick:

function Kill-Tree {
    Param(
        [int]$ppid,
        [bool]$recursive
    )

    # First kill the chrome.exe children spawned by this process, then the process itself
    if ($recursive -eq $true) {
        Get-CimInstance Win32_Process |
            Where-Object { $_.ParentProcessId -eq $ppid -and $_.Name -eq "chrome.exe" } |
            ForEach-Object { Kill-Tree $_.ProcessId $false }
    }
    Stop-Process -Id $ppid
}

# Clean up every stuck chromedriver along with its child chrome.exe processes
Get-Process -Name chromedriver | ForEach-Object -Process {
    Kill-Tree $_.Id $true
}

For Edge, FireFox, and other browsers, you might need to adjust the script above.

You might be getting unwanted tables in your solutions – here is one reason why

Have you ever noticed a table in your solution that you had never added there intentionally? It is kind of a stowaway table – you did not want it there, you did not authorize it to be there, you did not even notice it there until, of course, it was all in production… and, yet, there it is. Creating a nice managed layer in your production environment.

It’s extremely easy to get such tables in your solutions: it happens whenever you add a lookup column referencing another table. For example, below is an empty solution:

image

I’ll add the “Opportunity” table, and I’ll do it so that no metadata/components from the opportunity entity are actually added:

image

Then I’ll add a lookup to the contact table:

image

And I’ll save my changes:

image

Back on the main solution screen, there are, now, two tables:

image

And, if you look at what’s been added to the solution for the contact, you’ll see everything there:

image

It’s easy to fix – just remove that extra table from the solution manually. It might have been better if it had not been added at all, so here is an idea you might want to upvote: https://powerusers.microsoft.com/t5/Power-Apps-Ideas/Do-not-add-referenced-table-to-the-solution-automatically-when/idi-p/953352#M33840

Why you should consider periodic / on demand refresh for the Canvas Apps / Flows

Have you ever noticed the following paragraph in the Canvas Apps coding guidelines?

Periodically republishing your apps

The PowerApps product team is continually optimizing the Power platform. Sometimes, for backwards compatibility, these optimizations will apply only to apps that are published by using a certain version or later. Therefore, we recommend that you periodically republish your apps to take advantage of these optimizations

image

Personally, I initially dismissed it as some kind of weird recommendation when I was reading the guidelines a while back. Since, after all, would you really want to go back to an app that’s working only to republish it? You’d also have to deploy the updated version to test/production from there, so this may require quite a bit of planning/coordination. Of course, if you have configured all the pipelines in devops, you might be able to automate most of the technical steps, but, still, you’d have to start putting some effort into those periodic updates.

And, yet, in the context of the issue I ran into earlier this week, it starts making quite a bit more sense.

For a little while now, environment variables have been available in the Power Automate flows:

image

Which is absolutely awesome, since now we can use this feature to configure flows for different environments.

For example, we are using this feature to build Power BI reports from within the flows, and we are using environment variables to identify the actual report (which is different for dev/UAT/prod, since each version of the report is using different connection settings).

It turned out environment variables have some limitations:

https://docs.microsoft.com/en-us/powerapps/maker/data-platform/environmentvariables#current-limitations

Most of them are almost cosmetic, but there is this one which is quite a bit more important:

  • When environment variable values are changed directly within an environment instead of through an ALM operation like solution import, flows will continue using the previous value until the flow is either saved or turned off and turned on again.

So, basically, if you import a solution that has a variable into the environment, then follow up by importing a solution that contains a flow using that variable, and forget to set the correct value for the environment variable along the way, you might end up with a flow that keeps using the “default” value even once you’ve added an updated “current” value.

In which case you may need to turn the flow off and on. Or you may have to re-import the flow.

Long story short, it turns out certain things can get “cached” in canvas apps / flows, and you might want to keep that in mind when working on your ALM strategy.

I wonder if, ideally, all/some of that would be done as part of a “nightly refresh” job in devops, though I’m not sure how doable that would be. Going to try – will see how it works out.
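
If you decide to experiment with that, one possible building block would be toggling the flows off and on from that job. Cloud flows live in the workflow table (category 5, “Modern Flow”), so, in theory, a nightly script could deactivate and re-activate them through the Web API to flush the cached values. Here is a rough sketch, with a placeholder org URL, token, and flow id – definitely not production code:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class FlowRefresh
{
    static async Task Main()
    {
        // Placeholders – org URL, token, and the flow's workflowid
        var orgUrl = "https://yourorg.crm.dynamics.com";
        var accessToken = "<bearer token>";
        var flowId = "00000000-0000-0000-0000-000000000000";

        using var client = new HttpClient { BaseAddress = new Uri(orgUrl) };
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        await SetState(client, flowId, stateCode: 0, statusCode: 1); // Draft (off)
        await SetState(client, flowId, stateCode: 1, statusCode: 2); // Activated (on)
    }

    static async Task SetState(HttpClient client, string flowId, int stateCode, int statusCode)
    {
        // Cloud flows are stored in the workflow table; state is changed with a regular PATCH
        var body = $"{{ \"statecode\": {stateCode}, \"statuscode\": {statusCode} }}";
        var request = new HttpRequestMessage(HttpMethod.Patch, $"/api/data/v9.1/workflows({flowId})")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}

Whether it’s a good idea to toggle every flow in the environment on a schedule is a different question, of course – connection ownership and in-flight runs would need some thought.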

Managed solutions – what are the benefits?

So, we all know how the managed vs unmanaged solutions discussions used to rage a few years ago.

Then Microsoft stepped in to make it clear that managed solutions are recommended for production environments and unmanaged ones are only supposed to be used in development.

This is where things currently stand, and this is what we are all recommending to the clients, but, come to think of it, I would probably want to see more explanations than “it’s better because we say so”.

Unfortunately, looking at the ALM demos, this is how it usually is:

  • Managed solutions should be used in production
  • Managed solutions can be deleted
  • Components in the managed solutions can be locked down
  • Managed solutions support layering

I see how the ability to “delete” managed solutions can be important for ISVs. Although, as soon as we create a dependency in our own solutions on those third-party managed solutions, that ability becomes pretty much non-existent.

I also see how it may be useful to lock down some components of the managed solutions, but, possibly, that’s an ISV thing, too? I’ve never used it for internally developed solutions, even for the “core” ones.

Layering, although it’s an interesting concept, is, arguably, one feature that can mess up quite a few things. There is a lot of functionality in Power Platform that had to be implemented around layering to support it better (the solution history screen, the component layers screen, merging, etc). Just to hypothesize, wouldn’t it be easier not to have layering at all? As in, make everything unmanaged and leave it to the admins/developers to make sure solutions are installed in the right sequence.

I guess the core problem is “merging”, and all that layering is meant to facilitate/resolve merging issues (or, at least, to put some rules around how it’s done). Given that the conflicts are still not getting resolved in the same way it’s done with pro-code, though, I wonder if it would be better/easier to just go with “most recent change wins” approach (which is, basically, “unmanaged”). After all, how often would you actually even try resolving a conflict directly in the XML file when merging in the source control? I would just overwrite in those cases.

So, I wonder, is there someone there who can clearly state the advantages of managed solutions, possibly with good examples, so that we could all say “yes, you nailed it”? Would be great to have this kind of closure to the old question of why should we be actually using managed solutions.

Is it a Guid or a Name in the PowerAutomate action input dropdown?

Just learned the other day that, while Power Automate actions can be showing “display” names in various dropdowns, it might still be some sort of ID/Guid that we should be using when entering a custom value there.

For example, in the flow below, when using a “custom value”, I have to use a GUID to identify my paginated report:

image

Although, if I knew in advance which report I’m going to use, I could just choose it by name from the list:

image

That said, if you try using “display name” for the custom value, you’ll get an error in the flow:

image

As you can see, the action inputs above look identical to those of a successful run where the report name was selected from the list:

image

So, you might not be able to easily see it from the error message, and, therefore, you just need to keep in mind that, when using a custom value, you may need to provide a unique id rather than a display name.