Author Archives: Alex Shlega

Canvas app error: the requested operation is invalid. Server Response: _scrubbedSensitiveData_

Here is an interesting error I ran into the other day:

[screenshot: canvas app error dialog – “the requested operation is invalid. Server Response: _scrubbedSensitiveData_”]

It does sound like there was something in the server response that was considered “sensitive data”, but what could have happened?

The screenshot above is, actually, from the application I created specifically to reproduce the error and see why it’s happening, so it’s very simple, and it only does this on click of the button:

[screenshot: the button’s OnSelect formula]
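It was just a simple Patch call, something along these lines (a sketch – the Accounts data source, the lookup condition, and the column are all assumptions):

Patch(
    Accounts,
    LookUp(Accounts, 'Account Name' = "Test Account"),
    { Description: "Anything that updates the account will do" }
)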

That seems pretty innocent, right? And it is – it turned out the problem is not, really, on the application side (well, sort of).

There is a plugin involved that kicks in on update of the account record. In that plugin, an exception is raised:

[screenshot: the exception raised in the plugin]
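For reference, the plugin code was doing something along these lines (a sketch – the only part that matters here is the exception message):

// Raised on update of the account record
throw new InvalidPluginExecutionException("Failed to validate the \"token\"");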

Could you guess what’s causing the _scrubbedSensitiveData_ error on the app side?

Here is another line where you can still try guessing before you continue reading.

Ok, so it’s, actually, not the quote characters.

It is the word “token”. Huh?

By the way, I also tried the word “password” and got exactly the same results.

In the actual app, I was getting the “token” keyword in the exception message since I was doing JSON deserialization there, and “token” can appear in the error messages raised by the JSON libraries when the JSON is malformed.
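Here is a sketch of how that can happen (InvoiceData and ParsePayload are made-up names):

public class InvoiceData { public string Number { get; set; } }

public void ParsePayload(string payload)
{
    try
    {
        var data = JsonConvert.DeserializeObject<InvoiceData>(payload);
    }
    catch (JsonException ex)
    {
        // Re-throwing with the library's message; as explained above, that
        // message may contain the word "token", which triggers the scrubbing
        throw new InvalidPluginExecutionException(ex.Message);
    }
}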

Export to File for Paginated Reports execution time

For the last few days, I’ve been trying to figure out one mysterious aspect of the “Export to File for Paginated Reports” action, which is that it was always taking just a little more than 30 seconds to complete.

Even for the simplest static paginated report, one that would not even connect to a data source, this would always result in a flow duration just over 30 seconds as well:

[screenshots: the export action and the overall flow run, both taking just over 30 seconds]

There was a support ticket, and I kept asking around; eventually, it was confirmed that this is just how it works because of some internal details, so we should expect 30+ seconds there.

Which is an interesting caveat, since one of the scenarios I was trying to implement is printing invoice documents from a model-driven app. That would include a flow triggered from a ribbon button using JavaScript, and, as a result, the generated invoice would be downloaded so the user could print it.

Would it be reasonable to ask the users to wait for about 40-50 seconds just to get the invoice generated when it’s part of a process where the end client is waiting for that invoice? It’s a good question, and I don’t know, yet, what the answer would be, so I may actually have to fall back to Word templates in this particular scenario.

There are other scenarios, though, where a 30-second wait would not make any difference (when it takes longer to generate the report anyways, or when the process is asynchronous by nature – when the report is supposed to be sent by email or stored in SharePoint, for example).

Anyways, at least now I know what’s happening, so I can stop thinking I’m losing my mind :)

PS. There is an ItAintBoring PowerPlatform chat session coming up on Oct 12 where we’ll talk about Power BI paginated reports in particular, but, also, we will, hopefully, have a broader discussion about document/report generation in Power Apps. If you have something to share, or if you just want to listen in, don’t forget to register for the event: https://www.linkedin.com/events/oct12itaintboringpowerplatformc6843586222267478016/

When “adding required components”, be cautious of the solution layers

You probably know how we can add required components to our solutions – there is an option for that in the maker portal, and it’s easy to use. For example, if I had the “contact” table in the solution already, I could add required components for that table:

[screenshot: the “Add required components” option for the contact table in the maker portal]

And the idea is that all the relationships, web resources, etc. would be brought into the solution when we use that option. In theory, this is very useful: if some of the required components were missing in the target environment, we just would not be able to import our solution there until all those components had been included in the solution.

However, depending on the environment, there could be different, and, possibly, unexpected consequences. In my slightly customized environment above, I got 2 more tables added to the solution when I used that option. One of them got added with all subcomponents, probably since it’s a custom table created in the same environment. I also got a few subcomponents from the Account table:

[screenshot: the components that were added to the solution]

In a different environment, in exactly the same situation, I got a lot more components added to the solution as a result, and that does not seem to have much to do with where those components were created:

[screenshot: a much longer list of added components in a different environment]

For example, the lookup column below is there now, and we don’t even touch that column in any of our solutions – it is there out of the box, and it’s part of Field Service:

[screenshot: an out-of-the-box Field Service lookup column now included in the solution]

And that is where one potential problem with “add required components” seems to show up. Once this kind of solution is imported into the downstream environment (as a managed solution), we’ll be getting an additional layer for the subcomponents that were included through “adding required components”, and that might not be what we meant to do. If those components are supposed to be updated through other solutions, too, this new layer might prevent changes applied through those other solutions from surfacing in the target environment, since the layers would now look like this:

  • New solution with required components that now includes Column A (this layer is now on top)
  • Original solution that introduced Column A

If we try deploying an updated version of the original solution to update Column A, those changes would still land in the same original layer where Column A was introduced, underneath the new layer, which means we’ll be somewhat stuck there.

And this is, actually, exactly what the docs are talking about:

https://docs.microsoft.com/en-us/power-platform/alm/organize-solutions

[screenshot: the relevant excerpt from the docs]

So, I guess, beware of the side effects.

DataSourceInfo – checking table permissions

You might have seen Lance Delano’s post about permissions support functions:

https://powerapps.microsoft.com/en-us/blog/permissions-support-in-datasourceinfo-and-new-recordinfo-functions-for-dataverse/

This can be very useful. It sort of feels like this might be part of the functionality required to use Power FX when configuring command bar buttons in model-driven apps (since we would need to disable/enable those buttons based on permissions quite often), but it’s also useful in general whenever we need to check permissions.

The syntax is very simple, and it’s described in the post above.
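For example (a sketch: Accounts is an assumed Dataverse data source, and Gallery1 is a hypothetical gallery):

// True if the current user can delete records in the table
DataSourceInfo(Accounts, DataSourceInfo.DeletePermission)

// True if the current user can edit this particular record
RecordInfo(Gallery1.Selected, RecordInfo.EditPermission)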

One thing I noticed while playing with these functions is that it may make sense to use the disabled display mode by default, to avoid the effect below where the button shows up as enabled and, then, flips to the disabled mode:

[animation: the button renders as enabled, then flips to disabled]

This happens since it takes some time for DataSourceInfo to check permissions, so, basically, the button shows up first, and, after a second or so (it probably depends on the load/connection/etc.), it flips to the desired state once the function comes back with the result of the permission check:

[screenshot]

It works, and it might not be that big of an issue, but, if you wanted to avoid that flipping so everything works more or less the way it does below:

[animation: the button renders as disabled until the permission check completes]

You might do the following, for example.

1. In the OnStart of the application, you could set an IsDeleteAllowed variable using the DataSourceInfo function:

[screenshot: the OnStart formula]
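Something along these lines (a sketch; Accounts is an assumed data source):

Set(IsDeleteAllowed, false);
Set(IsDeleteAllowed, DataSourceInfo(Accounts, DataSourceInfo.DeletePermission));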

This would set IsDeleteAllowed to “false” initially, and, then, the second line would set it to the correct value. In between, the button would show up as disabled (see below).

2. Then you might use the value of that variable to set the DisplayMode property of the button:

[screenshot: the button’s DisplayMode property]
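That formula could be as simple as this sketch:

If(IsDeleteAllowed, DisplayMode.Edit, DisplayMode.Disabled)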

Of course, this way the button will still flip, from “disabled” to “enabled”, for those who can use it, but I usually prefer interface elements to work that way, since there is no confusion caused by a button being enabled for a little while on load of the app (when a user might try clicking it) before becoming disabled.

When your Power Automate flow history run would not stop spinning

This is probably one of those things the product team is going to fix really soon, but, in the meantime, if you started to see a spinner recently when looking at the flow run history, you might want to try one of the following:

  • Resize the browser window (could just maximize it as well)
  • Click somewhere in the “white” area
  • Possibly open dev tools

As of today (Oct 4), resizing the browser seems to work for me. The third option (opening dev tools) is a workaround suggested by a colleague. And the second one (clicking somewhere) used to work in the preview, but it does not seem to work now.

Here is an example:

[animation: the flow run history page with the endless spinner]

Open in new window button – would you keep it or would you disable it?

Just noticed a new button showing up in my environments this morning, and, at first, I thought “wow”, but, then, I thought a bit more, and now I’m wondering.

It does allow opening the same model-driven screen in a separate window, but is it worth giving up some real estate in the command bar for the sake of having this button? It seems we can do the same just by dragging the current tab “out of the browser”?

[screenshot: the “Open in new window” button in the command bar]

What are your thoughts?

Child flows and retry policy

It is often helpful to split bigger Power Automate flows into a bunch of smaller ones, so there is a parent flow, and, possibly, a few child flows.

However, when the child flow fails, here is what you might see as a result in the flow run history:

[screenshot: run history showing the child flow retried 5 times for the same parent run]

In the example above, there was a parent flow, and there was a child flow. For the same parent flow run, the child flow tried to run 5 times. It failed each and every time, and, of course, it has done whatever it was doing 5 times for the same parent flow run. That might involve creating new data, updating existing data, etc.

Which might or might not be a problem, depending on what exactly is happening in your flows.

Either way, if you wanted to make sure that there are no retries in this situation, you’d need to remember to update the retry policy of the “Run a Child Flow” action. This can easily be done through the action’s Settings menu:

[screenshot: the retry policy in the action’s Settings menu]

By default, it’ll be configured to retry 4 times:

[screenshot: the default retry policy – 4 retries]

You can either choose “none” to specify no retries at all, or, possibly, you can choose other options if retries are, actually, expected:

[screenshot: available retry policy options]
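For what it’s worth, those settings end up in the underlying flow definition as a retry policy on the action, so, with “Peek code”, you would see something along these lines (a sketch – the exact shape may vary):

"retryPolicy": {
    "type": "none"
}

Or, for a fixed policy with 2 retries 20 seconds apart:

"retryPolicy": {
    "type": "fixed",
    "count": 2,
    "interval": "PT20S"
}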

Quick testing of JavaScript web resources

When working with model-driven applications, we always have to publish our changes to see them applied to the user interface. In some cases, it’s ok. We can keep adding columns to the form and publish all those changes once, for example. That’s a little frustrating, but it’s acceptable.

In other cases, this may become much more of a problem. Let’s say there is a JavaScript web resource which I need to test/fix. If I went about it the regular way, I’d have to do this:

  • Update the javascript
  • Update corresponding web resource in the application
  • Publish my changes
  • Refresh the form and see if the fix was successful
  • If not, repeat this process until I have the desired result

This always feels like a slow-motion movie, to be honest, since it may take a minute to update the script, but it takes a few more minutes to see whether that change was successful because of all the subsequent publishing/refreshing.

Well, there is a faster way to do this, though it all needs to be done slightly differently.

  • Event handlers should be attached programmatically in the OnLoad of the form
  • The only event handler that should be configured in the form designer is the form’s “OnLoad” handler

With that, I could easily use web browser dev tools to add / update my web resources any time I want without having to go through the regular “save->publish->refresh” route. And, once the script is working as expected, I could finally update the web resource.

For example, let’s see how I could use this to add a side pane (see my other post) on form load.

First of all, since onFormLoad will need an executionContext parameter, I need a mock context to call onFormLoad when I’m doing it outside of the regular execution pipeline:

[screenshot: the mock executionContext script – included in the full version below]

Also, since I am trying to avoid having to use “publish” every time, in my onFormLoad function I need to make sure that all event handlers are getting attached programmatically. And, also, if I end up calling it more than once, I need to make sure those handlers are not attached multiple times (so, in the example above, I am just removing and re-adding them).

Note: once the script has been tested, I’ll need to update the web resource so all other users could enjoy the benefits of having this script there. I’ll just need to remember not to add the mock context and the onFormLoad call to the web resource, since that part will be taken care of by the regular execution pipeline.

Then, of course, I need lookupTagClickHandler in my script, so here is the full version:

function lookupTagClickHandler(executionContext)
{
    var columnName = "parentcustomerid";
    var formContext = executionContext.getFormContext();
    // Prevent the default behavior (opening the lookup record in the main window).
    // Note: this relies on an internal property rather than on a documented API
    executionContext._eventArgs._preventDefault = true;
    var lookupValue = formContext.getAttribute(columnName).getValue()[0];
    var pane = Xrm.App.sidePanes.getPane("LookupDetails");
    if (typeof (pane) == "undefined" || pane == null) {
        // No side pane yet - create one, then show the lookup record in it
        Xrm.App.sidePanes.createPane({
            title: "Lookup details",
            paneId: "LookupDetails",
            canClose: true,
            width: 500
        }).then((pane) => {
            displayLookupInPane(pane, lookupValue.entityType, lookupValue.id);
        });
    }
    else {
        // The pane is already there - just navigate to the record
        displayLookupInPane(pane, lookupValue.entityType, lookupValue.id);
    }
}

function displayLookupInPane(pane, entityType, id)
{
    pane.navigate({
        pageType: "entityrecord",
        entityName: entityType,
        entityId: id
    });
}

function onFormLoad(executionContext)
{
    var columnName = "parentcustomerid";
    var formContext = executionContext.getFormContext();
    // Remove and re-add the handler so it never ends up attached more than once,
    // no matter how many times onFormLoad is called from the console
    formContext.getControl(columnName).removeOnLookupTagClick(lookupTagClickHandler);
    formContext.getControl(columnName).addOnLookupTagClick(lookupTagClickHandler);
}

// Mock context and manual call - paste these into the dev tools console
// while testing, but do not include them in the actual web resource
var executionContext = {
    getFormContext: function () {
        return Xrm.Page;
    }
};
onFormLoad(executionContext);

Now all I need to do to test it out (even if there is no web resource at all yet) is open the form in the app, open the browser dev tools, and paste my script into the console. Then, if I wanted to make changes to see how they work, I’d just need to update the script and paste it into the dev tools console again. Here, have a look – I’ll open the form, paste the script into the dev tools console, and my side pane event handler will start working:

That was fast, wasn’t it?

Now all I need to do is create a web resource and configure the on-load event handler for the form, but I already know that my script is good to go, so I am not anticipating any issues and will only need to go through saving/publishing once.

PS. Another practical application of this approach is that we can test changes locally without “bugging” other team members/users.

Skeletons in the closet: missing technical documentation

When working with Microsoft Dataverse, have you ever tried to figure out what exactly is happening behind the scenes, and why a specific column is getting a certain value which you would not expect it to be getting?

One might think it is easy to do, but consider how many ways there are to actually change that column value:

[diagram: the many ways a column value can be changed]

  • A Power App (canvas or model-driven) can make a change
  • A Power Automate Flow can make a change
  • A plugin can make a change
  • A classic workflow can make a change
  • And this is not to mention possible integrations/external processes

The first 4 on this list are not even exceptional in any way – they are just something we would use as/when we need them, so, on a relatively large project, we would often end up having all of those mixed and matched based on such factors as:

  • Available functionality (synchronous vs asynchronous is, probably, the most prominent example)
  • Microsoft recommendations (use Power Automate, avoid workflows)
  • “Center of excellence” recommendations (if a version of it exists in the organization)
  • Business preferences, as strange as it sounds (low-code as much as possible)
  • Project team preferences overall
  • Personal preferences of the team members

And this can turn into a nightmare when there is no obvious structure to it, and when there is no documentation.

A record is created, and there is a Flow/Workflow/Plugin that kicks in. However, exactly because there are different ways to respond, some logic might end up in the Flow, but other logic might end up in the plugin, and, yet, there could be a business rule or a JavaScript handler on top of that. Now, there could be valid reasons for mixing everything – for instance, it’s quite possible that we would only want synchronous logic to be in the plugin, and the rest might be in the Flow, since we would want functional consultants to work on those.

I always feel bad when another team member has to ask “where does this value get set – is it in the plugin? Is it in the Flow? Is it happening in JavaScript?”

And I always feel frustrated when it’s me who is asking the same question.

Because, in either case, I usually have to spend quite a bit of time figuring out how all those pieces work together and where exactly the change is happening.

Wouldn’t it be nice to have some kind of documentation, or, at least, some way of answering this kind of question faster? This is exactly why, in the pro-code world, adding comments directly to the code has been recognized as a best practice (and has been well documented). Here is one example:

https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/xmldoc/
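For instance, a plugin method documented that way might look like this (a made-up sketch):

/// <summary>
/// Recalculates the invoice total when an invoice line changes.
/// Note: the total can also be updated by a Power Automate flow,
/// so this is a good place to mention that, too.
/// </summary>
/// <param name="context">The plugin execution context.</param>
public void RecalculateInvoiceTotal(IPluginExecutionContext context)
{
    // ...
}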

However, there is no universal way of documenting Power Automate flows, workflows, plugins, scripts, etc. We could, probably, start creating diagrams/documents, but all those details are extremely difficult to document, and, even if someone were diligent enough to maintain that level of documentation, it might turn into a full-time job on anything but a relatively simple project.

This is why, sooner or later, almost every project becomes a bit of a “black box” for new team members. This is not to say the same problem does not exist on pro-code projects, but, because of the variety of tools, there just seems to be an extra layer of it in Power Platform.

Well, now that the skeleton has been uncovered… what do we do about it? I have no perfect answer, but here is what I’ve seen so far:

  • For the plugins, apparently, commenting in the code helps. Putting some thought into the structure of those plugins helps, too
  • With Classic Workflows, adding notes to the description may help to an extent
  • With Power Automate flows, adding “notes” to the actions may be helpful. As well as filling in Flow description
  • It also helps to keep high-level diagrams updated every now and then so that all those relationships between different parts of the system are a bit clearer

Although, as helpful as those can be, you will usually need a way to perform solution-wide search, and this is one benefit of using source control:

https://docs.microsoft.com/en-us/dynamics365/customerengagement/on-premises/developer/use-source-control-solution-files?view=op-9-1

Personally, I don’t believe a lot of folks would have the ability to do advanced changes directly in the extracted XML/JSON files in the source control without breaking something. However, it’s easy to perform a search over all those files to find a column name, and that can quickly help identify all low-code/pro-code components which are using that column.

That does require some discipline, though, and the clients need to understand the importance of this for future maintenance and support. Which, again, is not always the case – it usually comes as the clients, projects, and teams mature.

And, yes, I had my share of projects where I would not put enough effort into documenting implemented solutions. Although, to my (somewhat lame) excuse, in many cases there would be no one to provide this kind of documentation to. Which is just how those skeletons end up being stuck in the closet.

PS. Do you have a “skeletons story” to share for the month of October? Everyone’s invited:

[image: “skeletons in the closet” invitation]

Skeletons in the closet – are there things that should have been done differently on your Power Platform projects?

The month of October is upon us, and, with that, there are all the pumpkins, costumes, decorations, and all the other Halloween stuff.

So, then, why don’t we talk about the skeletons we have either found, or, possibly, left in the virtual closets of all those Dynamics/PowerPlatform projects we have worked on? While doing this, we might also talk about why we should have done certain things differently, and how that might have improved the outcomes.

Pretty sure that could turn into an interesting series, but, of course, each of us, alone, would only have so many skeletons – you would not be working on the projects just to create those, right?

Although, who am I to say… Just a few months ago I was chatting with someone who mentioned they were working on a solution I had been working on 6 or 7 years earlier. And that scared me – I could hardly stop myself from asking how many problems they had found in my implementation, since I knew there were some.

In either case, everyone is invited. Share your story in your blog, and I’ll put a link here. If you don’t have a blog, send me a message on LinkedIn (https://www.linkedin.com/in/alexandershlega/), and I’ll post it in my blog. Or do a YouTube video. Or reach out to the community some other way and let me know.

Those should be real-life stories, though!

Just like with every Halloween trick, there will be a treat. At the end of October, I’ll write a blog post featuring the best stories. I mean, it’s you who will be featured there! Unless you choose to stay anonymous, of course :)

Also, the best stories will be featured in the ItAintBoring PowerPlatform Chat session on October 26. You are welcome to co-present if you’d like, or I can just present them with your permission. If you want to hear the stories, or if you want to co-present, don’t forget to register:

https://www.linkedin.com/events/skeletonsinthecloset-whenthings6849130439559516160/