Monthly Archives: June 2021

EasyRepro tips

It’s been great attending Ali Youssefi’s EasyRepro workshop this week (and it’s also been great to be invited into those workshops by the client in the first place). It’s a nice workshop which will take you through the basics of EasyRepro (and some of the related things you may need to keep in mind). We are not supposed to share the workshop materials, but, well, you might still want to see all of the EasyRepro videos Ali has on Youtube:

https://www.youtube.com/results?search_query=Ali+Youssefi+easyrepro

Either way, that’s been a great opportunity for me to dig a little deeper into how EasyRepro works, what’s doable and what’s not.

At a high level, I wrote about EasyRepro before:

https://www.itaintboring.com/dynamics-crm/easy-repro-what-is-it/

All of that still stands, so, in this post, I am just going to dig into a few areas where I had problems with EasyRepro this week.

1. Invoking “New” lookup action

It seems easy to do, since you just need to call xrmApp.Lookup.New()

However, xrmApp has a few properties which are only applicable in a specific context:

  • Lookup
  • QuickCreate
  • Grid
  • Entity

And there are, probably, a few more.

What this means is that those properties of xrmApp make (or do not make) sense depending on what’s currently happening in the model-driven application. For example, when a quick create form is visible, xrmApp.QuickCreate will give you access to that form. When a lookup is “selected”, we can access that lookup through xrmApp.Lookup, and, therefore, we can initiate new record creation in the user interface using the xrmApp.Lookup.New() method.

Which means that in order to use New, you have to somehow select the lookup first. So you could, for example, do this:

LookupItem liContact = new LookupItem()
{
    Name = "ita_contact"
};

xrmApp.Entity.SelectLookup(liContact);

xrmApp.Lookup.New();

And, along the way, if your lookup is polymorphic (it’s a customer, for example), you may need to call SelectRelatedEntity before calling “New”:

xrmApp.Lookup.SelectRelatedEntity("Contacts");
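
And, since QuickCreate is another one of those contextual xrmApp properties, if the “New” action ends up opening a quick create form (that depends on whether quick create is enabled for the target table), something along these lines should let you populate and save it. This is just a sketch – the column name below is made up for illustration:

// Assuming a quick create form is now open in the app
// ("lastname" is just an example of a column's logical name)
xrmApp.QuickCreate.SetValue("lastname", "Smith");
xrmApp.QuickCreate.Save();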

2. You might want to close the “Unsaved changes” popup automatically in your tests

[Image: the “Unsaved changes” dialog]

The function below will click the “Discard changes” button (although you might want to update it so it uses “Save and continue” instead). Normally, you would call this function in those places where you would expect the “Unsaved changes” dialog to pop up:

public bool DiscardChangesIfPresent()
{
    try
    {
        // "Discard changes" is rendered as a button with the "cancelButton" id
        var cancelButton = _client.Browser.Driver.FindElement(By.XPath("//*[@id=\"cancelButton\"]"));
        if (cancelButton != null)
        {
            cancelButton.Click();
            _client.Browser.Driver.WaitForTransaction();
            return true;
        }
    }
    catch
    {
        // The dialog did not show up - there is nothing to discard
        return false;
    }
    return false;
}
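
For example, a hypothetical usage could look like this (the column name and the navigation target below are made up, and the assumption is that navigating away from a form with unsaved changes is what triggers the dialog):

// Leave an unsaved change on the form
xrmApp.Entity.SetValue("name", "Test value");

// Navigate away - this is where the "Unsaved changes" dialog may show up
xrmApp.Navigation.OpenSubArea("Sales", "Contacts");

// Close the dialog if it did show up, so the test can keep going
DiscardChangesIfPresent();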

3. Updating two lookup fields in a sequence

It seems that, sometimes, you may have to “focus out” of the lookup before switching to another lookup. For example, I’ve been having problems with the code below:

LookupItem li = new LookupItem()
{
    Name = "<column_name>",
    Value = "<column_value>",
    Index = 0
};
xrmApp.Entity.SetValue(li);
xrmApp.ThinkTime(2000);

<SEE EXPLANATION BELOW FOR AN EXTRA LINE HERE>

LookupItem liContact = new LookupItem()
{
    Name = "<another_column_name>"
};

xrmApp.Entity.SelectLookup(liContact);

xrmApp.Lookup.New();

You can see how that code is setting the value of the first lookup, and, then, it immediately proceeds to create a “new” record for another lookup. Or, at least, that’s the intent. Instead, though, the “New” method somehow gets called on the first lookup.

It seems all we need to do is add an extra line to select another element on the form (sort of to “focus out”). There are, probably, other ways to do it, but here is what worked for me:

xrmApp.Entity.SelectTab("General");

Once I added this line right where the placeholder is in the code above, everything started to work properly.
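
So, putting it together with the placeholder names from above, the whole sequence ends up looking like this:

LookupItem li = new LookupItem()
{
    Name = "<column_name>",
    Value = "<column_value>",
    Index = 0
};
xrmApp.Entity.SetValue(li);
xrmApp.ThinkTime(2000);

// "Focus out" of the first lookup before working with the second one
xrmApp.Entity.SelectTab("General");

LookupItem liContact = new LookupItem()
{
    Name = "<another_column_name>"
};
xrmApp.Entity.SelectLookup(liContact);
xrmApp.Lookup.New();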

4. Killing chromedriver and related chrome.exe processes

If you tend to terminate test debugging in Visual Studio, and if you are using Chrome there, you will inevitably end up with chromedriver.exe and chrome.exe processes stuck in memory, so you will either have to keep terminating them from Task Manager, or you could make your life a little easier with some kind of script/tool. Here is a PowerShell script that might do the trick:

function Kill-Tree {
    Param(
        [int]$ppid,
        [bool]$recursive
    )

    # Kill the child chrome.exe processes first (when called recursively)
    if ($recursive -eq $true) {
        Get-CimInstance Win32_Process |
            Where-Object { $_.ParentProcessId -eq $ppid -and $_.Name -eq "chrome.exe" } |
            ForEach-Object { Kill-Tree $_.ProcessId $false }
    }
    Stop-Process -Id $ppid
}

# Find every chromedriver process and kill it together with its chrome.exe children
Get-Process -Name chromedriver | ForEach-Object -Process {
    Kill-Tree $_.Id $true
}

For Edge, Firefox, and other browsers, you might need to adjust the script above.
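
For example, for Edge I would expect it to be enough to swap the process names – a rough sketch, assuming the usual msedgedriver.exe / msedge.exe process names:

# Same approach as above, just targeting Edge WebDriver and Edge browser processes
function Kill-EdgeTree {
    Param(
        [int]$ppid,
        [bool]$recursive
    )

    if ($recursive -eq $true) {
        Get-CimInstance Win32_Process |
            Where-Object { $_.ParentProcessId -eq $ppid -and $_.Name -eq "msedge.exe" } |
            ForEach-Object { Kill-EdgeTree $_.ProcessId $false }
    }
    Stop-Process -Id $ppid
}

Get-Process -Name msedgedriver -ErrorAction SilentlyContinue | ForEach-Object -Process {
    Kill-EdgeTree $_.Id $true
}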

You might be getting unwanted tables in your solutions – here is one reason why

Have you ever noticed a table in your solution that you had never added there intentionally? It is kind of a stowaway table – you did not want it there, you did not authorize it to be there, you did not even notice it there until, of course, it was all in production… and, yet, there it is. Creating a nice managed layer in your production environment.

It’s extremely easy to get such tables in your solutions whenever you add a lookup field to another table. For example, below is an empty solution:

[Image: an empty solution]

I’ll add the “Opportunity” table, and I’ll do it so that no metadata/components from the opportunity entity are actually added:

[Image: adding the Opportunity table without any of its components/metadata]

Then I’ll add a lookup to the contact table:

[Image: adding a lookup column that references the Contact table]

And I’ll save my changes:

[Image: saving the changes]

Back on the main solution screen, there are now two tables:

[Image: the solution now contains two tables]

And, if you look at what’s been added to the solution for the contact, you’ll see everything there:

[Image: what has been added to the solution for the Contact table]

It’s easy to fix – just remove that extra table from the solution manually. It might have been better if it had not been added at all, so here is an idea you might want to upvote: https://powerusers.microsoft.com/t5/Power-Apps-Ideas/Do-not-add-referenced-table-to-the-solution-automatically-when/idi-p/953352#M33840

Why you should consider periodic / on-demand refresh for your Canvas Apps / Flows

Have you ever noticed the following paragraph in the Canvas Apps coding guidelines?

Periodically republishing your apps

The PowerApps product team is continually optimizing the Power platform. Sometimes, for backwards compatibility, these optimizations will apply only to apps that are published by using a certain version or later. Therefore, we recommend that you periodically republish your apps to take advantage of these optimizations

[Image: the “Periodically republishing your apps” guidance]

Personally, I initially dismissed it as some kind of weird recommendation when I was reading the guidelines a while back. After all, would you really want to go back to an app that’s working only to republish it? You’d also have to deploy the updated version to test/production from there, so this may require quite a bit of planning/coordination. Of course, if you have configured all the pipelines in DevOps, you might be able to automate most of the technical steps, but, still, you’d have to start putting some effort into those periodic updates.

And, yet, in the context of the issue I ran into earlier this week, it starts making quite a bit more sense.

For a little while now, environment variables have been available in Power Automate flows:

[Image: an environment variable used in a Power Automate flow]

Which is absolutely awesome, since now we can use this feature to configure flows for different environments.

For example, we are using this feature to build Power BI reports from within the flows, and we are using environment variables to identify the actual report (which is different for dev/UAT/prod, since each version of the report is using different connection settings).

It turned out environment variables have some limitations:

https://docs.microsoft.com/en-us/powerapps/maker/data-platform/environmentvariables#current-limitations

Most of them are almost cosmetic, but there is this one which is quite a bit more important:

  • When environment variable values are changed directly within an environment instead of through an ALM operation like solution import, flows will continue using the previous value until the flow is either saved or turned off and turned on again.

So, basically, if you import a solution that has a variable into the environment, then follow up by importing a solution that contains a flow which uses that variable, and you forget to set the correct value for the environment variable along the way, you might end up with a flow that keeps using the “default” value even once you’ve added an updated “current” value.

In which case you may need to turn the flow off and on. Or you may have to re-import the flow.

Long story short, it turns out certain things can get “cached” in canvas apps / flows, and you might want to keep that in mind when working on your ALM strategy.

Ideally, I wonder if all/some of that could be done as part of a “nightly refresh” job in DevOps, though I’m not sure whether that would be doable. Going to try – will see how it works out.
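
For the “turn the flow off and on” part specifically, here is a rough sketch of what such a job might do through the Dataverse Web API, assuming the cloud flow is stored as a workflow row and that deactivating/reactivating it is enough to pick up the updated environment variable value. The environment URL, the flow id, and the token handling are placeholders here, not something from the post:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class FlowRefresher
{
    // Deactivate and then reactivate a cloud flow so that it picks up
    // the current environment variable values (statecode 0 = Draft/off, 1 = Activated/on)
    public static async Task ToggleFlowAsync(HttpClient client, Guid flowId)
    {
        await SetStateAsync(client, flowId, 0);
        await SetStateAsync(client, flowId, 1);
    }

    private static async Task SetStateAsync(HttpClient client, Guid flowId, int stateCode)
    {
        // The HttpClient is assumed to have BaseAddress set to the environment URL
        // (e.g. https://<yourorg>.crm.dynamics.com/) and a valid bearer token attached
        var request = new HttpRequestMessage(new HttpMethod("PATCH"), $"api/data/v9.2/workflows({flowId})")
        {
            Content = new StringContent($"{{ \"statecode\": {stateCode} }}", Encoding.UTF8, "application/json")
        };

        var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}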

Managed solutions – what are the benefits?

So, we all know how the “managed vs unmanaged solutions” discussions used to rage a few years ago.

Then Microsoft stepped in to make it clear that managed solutions are recommended for production environments and unmanaged ones are only supposed to be used in development.

This is where things currently stand, and this is what we are all recommending to clients, but, come to think of it, I would probably want to see more of an explanation than “it’s better because we say so”.

Unfortunately, looking at the ALM demos, this is how it usually is:

  • Managed solutions should be used in production
  • Managed solutions can be deleted
  • Components in the managed solutions can be locked down
  • Managed solutions support layering

I see how the ability to “delete” managed solutions can be important for ISVs. Although, as soon as we create a dependency in our own solutions on those third-party managed solutions, that ability becomes pretty much non-existent.

I also see how it may be useful to lock down some components of the managed solutions, but, possibly, that’s an ISV thing, too? I’ve never used it for internally developed solutions, even for the “core” ones.

Layering, although it’s an interesting concept, is, arguably, one feature that can mess up quite a few things. There is a lot of functionality in Power Platform that had to be implemented around layering to support it better (the solution history screen, the component layers screen, merging, etc.). Just to hypothesize, wouldn’t it be easier not to have layering at all? As in, make everything unmanaged and leave it to the admins/developers to make sure solutions are installed in the right sequence.

I guess the core problem is “merging”, and all that layering is meant to facilitate/resolve merging issues (or, at least, to put some rules around how it’s done). Given that the conflicts are still not getting resolved the way it’s done with pro-code, though, I wonder if it would be better/easier to just go with a “most recent change wins” approach (which is, basically, “unmanaged”). After all, how often would you actually try resolving a conflict directly in the XML file when merging in source control? I would just overwrite in those cases.

So, I wonder, is there someone out there who can clearly state the advantages of managed solutions, possibly with good examples, so that we could all say “yes, you nailed it”? It would be great to have this kind of closure on the old question of why we should actually be using managed solutions.

Is it a Guid or a Name in the Power Automate action input dropdown?

Just learned the other day that, even where Power Automate actions show “display” names in various dropdowns, it might still be some sort of ID/GUID that we should be using when entering a custom value there.

For example, in the flow below, when using a “custom value”, I have to use a GUID to identify my paginated report:

[Image: flow action with a GUID entered as the custom value]

Although, if I knew in advance which report I was going to use, I could just choose it by name from the list:

[Image: choosing the report by name from the list]

That said, if you try using the “display name” as the custom value, you’ll get an error in the flow:

[Image: flow run error when the display name is used as the custom value]

As you can see, the action inputs above look identical to those of a successful run where the report name was selected from the list:

[Image: action inputs of a successful run]

So, you might not be able to easily see it from the error message and, therefore, just need to keep in mind that, when using a custom value, you may need to provide a unique ID rather than a display name.