Monthly Archives: November 2020

Shared variables in Dataverse plugins. Surprise, surprise…

It’s a rare situation when 3 different Dataverse developers, all having quite a bit of experience with plugins, start talking about plugin functionality and can’t agree on how it actually works :)

Yet, this is exactly what happened when Daryl LaBar suggested that I should be using shared variables to handle the problem described in the previous post.

Actually, what led me to writing that post in the first place was that shared variables did not work. However, I did forget that I should have looked in the parentContext, too.

That was an obvious omission, so I figured I’d find my shared variable in the parentContext. I ran a quick test… and it was not there. Huh?

Anyway, if you are curious, here is the code I used – highlighted is the line where the plugin will throw an error if there is no shared variable. And that’s the error I saw:


This is where Martin Tölk mentioned we might be able to add shared variables from the organization service as per the link below:

Well, that’s where I had my “aha” moment! Because that should work, shouldn’t it?


Looks simple… so, here is my code this time:


Here comes the moment of truth… and it’s a bummer:


What just happened?

After digging around a little more, here is what I’ve found:


It turned out this method only allows a “tag” variable to be passed to the context.

Finally, my shared variable is there (I had to replace “test” with “tag” in the code sample above):


So, basically, I think the way shared variables work is:

  • When all plugins in the execution pipeline share the same context, we can use context.SharedVariables to pass variables between plugins.
  • If the context changes, though, we can only pass one variable using the method above, and it has to be called “tag”.
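
To illustrate that first scenario, here is a quick sketch (not the exact code from the screenshots above) of how a plugin might look up a shared variable, walking up through the parent contexts – which is the part I originally forgot:

```csharp
// Sketch only: look for the shared variable in the current context first,
// then keep walking up through the parent contexts until it's found
public static object GetSharedVariable(IPluginExecutionContext context, string name)
{
    var current = context;
    while (current != null)
    {
        if (current.SharedVariables.ContainsKey(name))
        {
            return current.SharedVariables[name];
        }
        current = current.ParentContext;
    }
    return null;
}
```

This assumes the usual plugin boilerplate around it (the context coming from the service provider, etc.).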


By the way, we can also add that “tag” variable through Web API:
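
For example, appending a tag parameter to the request URL makes it available to plugins as a shared variable named “tag” (replace the org URL and table with your own):

```
POST https://yourorg.crm.dynamics.com/api/data/v9.1/contacts?tag=StopPluginProcessing
```

In the plugin, that value then shows up in context.SharedVariables under the “tag” key.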


Well, that’s a viable option.

Although, given it’s only one shared variable we can use that way, I might stick to the “dummy field” method described yesterday, at least for the time being. Otherwise, there is always a chance the “tag” variable would mean something completely different if other plugins/developers start using it for their own purposes, too.

Besides, it has one advantage over shared variables, since it could be used from SSIS and other ETL tools if I wanted to stop plugins from running when modifying data in the Dataverse (data migration would be one common scenario).

Well, this was interesting – not sure I’m still missing something there. Would not be too surprised (this time) if I am :)

Here is a little trick to beat plugin’s depth

There is a common problem which happens to my plugins over time. As I keep adding more and more code, at some point all those relationships between the plugins become really confusing – there might be an update in one plugin which will kick off another plugin, and that plugin, in turn, might lead to the code in the first plugin being executed again.

This goes into an infinite loop, which, of course, gets intercepted by the engine, and, eventually, the whole operation that started this cycle gets terminated.

When this happens, the most practical approach is, sometimes, to just cut this Gordian knot:




The recipe is quite simple. Imagine there is this code in the plugin:



And imagine the plugin is registered to run on update of the “contact” and on create of the account.

With the code above, whenever I try updating a contact, the plugin will run. It will start creating a new account. That will trigger the same plugin to run again. In that second run the plugin will try updating an existing contact. Which, in turn, will trigger the same plugin again.

And, so, this will go into an infinite loop:


Which I could try preventing by comparing “context.Depth” to 1, for example, but what if, in my example, I only wanted this process to stop once the contact has been updated? No matter if the depth is 1 or more?

Especially since, in the example above, it’s all very simple. In reality, this kind of sequence may involve different plugins calling each other, and blindly verifying “depth” might not even be the right approach.

Well, unless you want to refactor the whole solution at that point, here is a trick.


You just need to add a dummy field to the table and use it to terminate the plugins chain above.

Set it whenever you want all subsequent plugins to ignore the action.

In those other plugins, just add a condition to verify if that attribute is set in the target, and, if it is, do nothing.

That way, you won’t have to rely on the “depth” property, since the value of that property may vary depending on what exactly has happened, how the plugins were executed, etc.

Instead, you’ll instruct your plugins not to do anything whenever no further processing is needed by passing that dummy field in the target. And, since you won’t be adding it to the forms, it’ll never come with the data that’s being submitted through the UI forms.
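
In code, that check might look like this – a sketch only, where “ita_donotrunplugins” is a made-up name for that dummy column, and context/service come from the usual plugin boilerplate:

```csharp
// Sketch: bail out early if the dummy column came in with the target
var target = (Entity)context.InputParameters["Target"];
if (target.Contains("ita_donotrunplugins")) return;

// ...otherwise, do the actual work, and, when triggering updates downstream,
// set the dummy column on those updates so subsequent plugins do nothing:
var contact = new Entity("contact", contactId);
contact["ita_donotrunplugins"] = true;
service.Update(contact);
```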


PS. There is a continuation to this post here

Power Platform Elephants

A few things happened recently which got me thinking about things we usually don’t talk about in the Power Platform world:

And, yet, somebody posted an issue to my PCF components github repo asking if it’s correct that I don’t have a managed solution there

This all got me thinking about… well, about the elephants:


Although, not just about any elephants, but about those which are, sometimes, in the room.

So, maybe, let’s reveal some of those.

When it comes to the PCF components, there are lots of great PCF components in the PCF Gallery, and I’m sure every one of us can find at least a few there which might be useful on the projects we are working on. Those components can improve user experience, they can take away some awkward out-of-the-box customizations, and they can add features which just would not be possible otherwise:


And, yet, what do we do if one of those components misbehaves halfway through the project, or, worse yet, once it’s all in production? With open-source components, there is no guarantee we’ll be able to get any support from the original developer, so the only option would be to hire a developer to figure it all out.

Which is my elephant #1: for all the good PCF components can bring to Power Platform projects, there is one inevitable evil that will come with them, too… which is the professional developer.

Is it really that bad? Surely not, it’s just something you can count on if you are starting to use PCF components, irrespective of the fact that those components might have been developed by a third party originally.

Now, how about low-code development with Flows and Power Apps? Originally touted as something that business users would be able to do, I think it’s now becoming a thing Citizen Developers would rather be doing. And, yet, as Olena somewhat jokingly noted, this is already starting to feel like real development. You have to know how to handle errors, how to use connectors, how to parse JSON, how to orchestrate canvas apps and flows so they work together, how to use FetchXML, what Web API is and how to build queries there… that list never ends. And, of course, you have to keep up with all the changes Microsoft keeps pushing.

Which is my elephant #2: so-called low-code application development tools are still development tools no matter what, and those who have more experience with them and are willing to toy with such tools will usually be able to develop better applications. Besides, once you start talking about git repositories, ALM, solution layering, etc… it’s very likely you need a person who does not mind talking that language. And, of course, that’s almost the definition of what a “developer” is. Now, whether it’s a “citizen” developer or a “professional” developer… I’m not sure it matters. You won’t hire a .NET developer to work on Power Automate flows/canvas apps development except, maybe, if he/she is willing to change their career.

This misconception might be one reason why a lot of clients might be marching towards that peak Steve mentioned.

But, I think, there are more elephants hiding there.

My #3 elephant would be licensing and limits.

Licensing has always been painful, it still is, and a lot has been written about it. What licenses do we need? Will we run into those limits? If we will, what do we do? How much will it cost us?

It seems to be almost impossible to get precise answers to those questions – instead, clients will often deal with this by taking a leap of faith approach.


And this all brings me to the elephant #4 for today: quality Power Platform implementations will rarely be fast, furious, and cheap.

I’m glad most of the projects I worked on so far were big enterprise projects. And it’s not that the projects were, always, big. But the enterprises were definitely big enough to deal with the unknown. This unknown comes from the overall depth and breadth of the platform, from the rate of change, and from the licensing uncertainties.

Out-of-the-box functionality, even though it’s quite powerful, will usually have to be extended. This will all but ensure that developers will be involved (professional or “citizen”… for what it’s worth, I usually see them being the same people). End users will want features. Business groups will not agree on the security. SSRS reports won’t work, so Power BI will be required, and not just any Power BI but a Premium one.

It won’t be long before you need somebody who can help you navigate those waters, so you’ll be asking for some help from the community, partners, consultants, Microsoft services, etc.


And, yet, with all that said, those are just elephants hiding around. They are nice animals, and they are easy to spot. Besides, they have thick skin, so it won’t hurt them if you name them, deal with them as best as you can, and, then, proceed to building awesome projects with the Power Platform!


Have fun!


PS. Just realized it’s Thursday, and this post came out as a bit of a rant… which means  “Thursday rant” is becoming a bit of a tradition. Not sure if it’s good or bad:)

How to: re-use historical user input/selections to populate form controls

What if you wanted to populate model-driven app form controls based on the previous selections made by the same user? To give you a specific example: let’s say there is a location field on the form, and let’s say our user can travel between multiple locations. In both locations, this user is going to work at the front desk, where there is a desktop computer.

How do we only ask the user to provide location selection once on each of those desktops?

Here is a simplified version – notice how “Location” is empty the first time I’m creating the invoice. Then it keeps using “Toronto” by default:


I’m sure there could be different ways to do it, but here is one option:

  • We need a javascript function that executes on load of the form and reads stored values from the local storage
  • We need another function that executes on save of the form and saves attribute values to the local storage

And we just need to configure onload/onsave events of that form properly

So, here is the script:

var attributeList = null;

function loadAttributeValues(executionContext, attributes) {
	var formContext = executionContext.getFormContext();
	if (attributes == null) return;
	attributeList = attributes.split(",");
	for (var i in attributeList) {
		let attrName = attributeList[i];
		// only populate attributes that are present on the form and still empty
		if (formContext.getAttribute(attrName) != null
		   && formContext.getAttribute(attrName).getValue() == null) {
			let storedValue = localStorage.getItem(attrName);
			if (storedValue != null) {
				formContext.getAttribute(attrName).setValue(JSON.parse(storedValue));
			}
		}
	}
}

function storeAttributeValues(executionContext) {
	var formContext = executionContext.getFormContext();
	if (attributeList == null) return;
	for (var i in attributeList) {
		let attrName = attributeList[i];
		if (formContext.getAttribute(attrName) != null) {
			// store the current value so it can be re-used on the next form load
			let value = formContext.getAttribute(attrName).getValue();
			if (value != null) localStorage.setItem(attrName, JSON.stringify(value));
		}
	}
}
Here is how the form is set up:

  • OnLoad Event

  • OnSave Event


Both functions will use the same list of attributes, so I only need to provide that list to the onload function (and, of course, depending on how you want it to work and what you want it to do, you may have to change that).

Have fun!

Power Automate “Scope” action – what does it have to do with the error handling?

There is a strange action in Power Automate which is really not there to do anything other than to help you organize the Flow a little better, it seems:


We can use scopes to put other actions in them, so we can then collapse and expand those scopes, thus improving the manageability of the Flow:


Is that all there is to it?

Actually, there is more, since there is also a function in Flows which can return inputs and outputs of all the actions which are inside a scoped action:


So, what’s the big deal?

See, it can really help with the error handling. How often do you see an error like this without knowing exactly what has happened there?


The reason is obvious in this example – I used incorrect syntax for the filter. However, what if the Flow were much longer and what if I did not know that there was an intentional error in it? With the error message above, I might not be able to say a lot about the cause of the error.

This is where “result” function comes in.

It’s an interesting function to start with, since it does not show up in the list of available functions (that’s what we call a “hidden gem” :) )

I can use it:


But there are no hints or “intellisense”:


That does not make it less useful, though, since I can set the Flow like this:


Where my “Set Variable” action would run only if “Scope 1” fails:
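
For reference, the value assigned in that “Set Variable” action is an expression along these lines (assuming the scope’s internal name is “Scope_1”):

```
result('Scope_1')
```

It returns an array with the inputs, outputs, and status of every action inside the scope, including the one that failed.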


With the actions configured as per the screenshots above, here is what will be stored in the error variable:


Looking at the output of that “Set Variable” action, I can now see a lot more detailed error message:


And this has all become possible because there is:

  • A scope action
  • A “result” function that can be used with the scope actions


Those scopes look much more useful now, it seems!

PS. To those who have been working with Power Automate for at least a few years, this may have been known for a while – there is a great post on this topic here:

Changing Flow action names, updating entity (schema) names, setting PCF properties… one script does it all

Every now and then, we need to change Flow action names. This usually happens to me once the Flow has already been somewhat developed and I have a bunch of actions which have very meaningful names:


It helps to know that there are 3 list record actions in the Flow above. But it does not tell me anything about what those actions are doing.

What if I wanted to rename some of them?

This is not always straightforward. There could be other actions utilizing those I’m trying to rename – for some of those, Power Automate flow designer will be able to update the dependencies. But, for others, it may become more involved. For example, let’s say I had a flow like this:


I could try renaming “List records” action, but I’d get an error if I tried saving the Flow after that:


Which is an actionable error – I can go ahead and make required changes in the other actions. If those dependent actions are spread across the Flow, though (various conditions, for example), it may become quite a bit more involved.

So, if there comes a point where you’d want to have some scripting help in this situation, here is a PowerShell script that might help:


This script will take a solution file, a regular expression, a replacement string (to replace matching substrings in all solution components), and an output solution file.

For example:

.\psreplace.ps1 -solutionPath "c:\work\temp\psreplace\sources\" -regex "List records" -replaceWith "Get records" -outputSolutionPath "c:\work\temp\psreplace\sources\"

You can output to the same file you used for the input. In the above example, you will also need to replace “List_records” with “Get_records”, so you’d probably have to run the same script twice, and, for the second run, you might use the following command:

.\psreplace.ps1 -solutionPath "c:\work\temp\psreplace\sources\" -regex "List_records" -replaceWith "Get_records" -outputSolutionPath "c:\work\temp\psreplace\sources\"


The first time this script starts, it will download nuget and use it to deploy SolutionPackager into the PSModules\Tools\CoreTools subfolder. After that, it’ll be using the same copy of the solution packager.

For instance, with the command lines above, here is how it’s going to work:

Once that updated solution is imported, the Flow above gets updated and the action with all its references is named differently now:


So what else can you use this script for?

  • Rename entities in the solution
  • Update view names
  • Update PCF control properties to match the environment
  • Etc


Essentially, it’s a text replacement script. It just knows how to do that replacement in all solution components.
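
Conceptually, the replacement step boils down to something like this (a rough sketch in bash on a scratch folder – the actual script uses PowerShell and SolutionPackager, of course):

```shell
# Demo on a scratch copy; in reality this would run over the
# SolutionPackager extraction output
mkdir -p ./sources
printf 'actions: List records, List_records' > ./sources/flow.json

# replace the display name first, then the underlying reference name
grep -rl 'List records' ./sources | xargs -r sed -i 's/List records/Get records/g'
grep -rl 'List_records' ./sources | xargs -r sed -i 's/List_records/Get_records/g'

cat ./sources/flow.json   # actions: Get records, Get_records
```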

Power Automate word templates in Model-Driven apps – forms integration

Now that we went through the process of creating a Power Automate flow that’s using Word Template action, why don’t we get this integrated into a model-driven application?



Let’s first create a very simple invoice form. You should be able to do it easily (or you could use any other entity – I’m just doing it so it’s consistent with the word template):


How do we add “Print” functionality in such a way that we’d be utilizing Power Automate flow created earlier?

There are a few options. We could use a web resource to add an HTML button somewhere on the form. Or we could use Ribbon Workbench to add a “Print” button to the command bar.

So, let’s deal with the Flow first.

1. We need the Flow to start accepting document id as a parameter

The easiest way to do it would be to pass it as a URL parameter to the HTTP Request trigger. If it were a POST trigger, I might want to specify a JSON schema. But, to simplify, I’ll just set it up as a “GET” trigger:


And, then, I’ll initialize a variable with the documentId parameter from the URL:
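
For those curious, the expression to read a query string parameter from an HTTP trigger looks like this (“documentId” being the parameter name):

```
triggerOutputs()['queries']['documentId']
```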



If you wanted to read more on what’s happening there, have a look at the docs:

From here on, I can simply add “&documentId=…” to the trigger url, and my Flow will have documentId value stored in the variable. Which means I can, then, use it to query invoice record from the CDS/Dataverse.

And, of course, I can copy Flow url from the Flow designer:



2. We need to add “Response” action to the Flow so it actually sends the document back right away


Basically, it’s just sending the output of “Populate a Microsoft Word Template” action through the response. But, also, there are a couple of headers.

The first one is used to specify content-type. For that matter, we could also generate a PDF document, which would have a different content-type.

And the second header specifies the file name.

Here are header values, to make it easier:

  • application/vnd.openxmlformats-officedocument.wordprocessingml.document
  • inline; filename="test.docx"


And you would probably want to replace “test.docx” with something more useful. Invoice # maybe?

Anyway, this is where we need to go back to the Invoice form and add a button there.

Before that, let’s think it through:

  • We will have a button on the form
  • That button will “call” a javascript function (that’ll be a javascript web resource)
  • And that javascript function will open a new window using the Flow trigger url (with an additional documentId parameter which will take the value of the invoice ID)


So, let’s add a web resource first.
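
A minimal sketch of such a web resource might look like this (the Flow url below is a placeholder – copy the real trigger url from the Flow designer):

```javascript
// Hypothetical web resource; FLOW_URL is a placeholder for the real trigger url
var FLOW_URL = "https://prod-00.westus.logic.azure.com/workflows/.../invoke?api-version=2016-06-01";

// Appends the documentId parameter to the trigger url; the FirstPrimaryItemId
// Crm Parameter comes in with curly braces, so those are stripped first
function buildPrintUrl(flowUrl, documentId) {
    return flowUrl + "&documentId=" + documentId.replace(/[{}]/g, "");
}

// This is the function the ribbon button command will call
function printInvoice(documentId) {
    window.open(buildPrintUrl(FLOW_URL, documentId));
}
```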


There is nothing extraordinary there. There is a documentId parameter. There is the Flow trigger url. There is concatenation. And there is window.open – as promised.

Now let’s fire up Ribbon Workbench in the XrmToolBox, and let’s customize Invoice command bar there:



For the command, here is how it’s defined (notice that Crm Parameter – FirstPrimaryItemId):


This is how my javascript function (printInvoice) will receive documentId parameter.

Now I just need to add “Enable Rule”:


And link it to the command:


That’s it, now it’s all about publishing the changes and watching Scott Durow advertising his PCF course:



Finally, there is my “Print” button:


Not that difficult, was it? Or was it? In either case, I think it’s a worthy replacement for the classic Word templates in D365.

And I could also suggest a few improvements (might or might not blog about them later):

  • That “print” button could actually be a fly-out button with a few menu items in the drop down menu. We could have different print forms. And which ones are available might be decided based on the invoice type, for example. How cool is that?
  • We might send that invoice to the client by email right away. It’s a Flow, so why not?
  • We might store that invoice in Sharepoint
  • We might convert it to PDF, and, again, send a PDF version by email (there is a corresponding OneDrive action in Power Automate)
  • One advantage of PDF files is that they will usually open automatically once downloaded. For the Word Documents, you can achieve the same result in Chrome, at least:


Pretty sure this list of improvements can get rather long, so… have fun with this improved version of Word Templates!

PS. Of course it would be nice if there was an easier way to create those “buttons” (or to integrate such Flows into the model-driven apps). Well, one can dream :)

Using Power Automate word templates with Model-Driven apps

I’ve never been a big fan of Word Templates in Dynamics 365 since they have quite a few limitations, and, yet, I had to use them every now and then since they are so easy to develop (till you run into the limitations, of course).

Besides, they are not so easy to deploy into different environments, so you’d almost inevitably need XrmToolBox (if you are ok with manual deployment) or a script like this one (if you wanted to automate the process).

But for the last few months I’ve been using Power Automate version of the Word Template, and I’m finding it much more useful:

To create a template, we need to do a few things:

1. Create a new word document

2. Add controls to the document

This is where you need to open “Developer” tab and start adding controls. Just keep in mind that every control you put there should have a title, and that title is exactly what you’ll see in the Power Automate flow when setting up Word Template action:



2.1. What if I wanted to add a repeater above to display invoice fees?

I’d start by adding a plain text content control to the table row:


And I’d give it a title:


I would, then, select the whole row:


And would add a repeater:


As with any other content control, I’d add a title to the repeater:


As a result, once I’m back in the Flow designer, I can see Fee repeater there (may need to reload the Flow in the browser):


With the individual controls, it’s pretty straightforward – just use output from the other actions (or use expressions) to populate those.

It’s a little more tricky with the repeater, but it’s actually not that complicated.

What you need is an array variable, and you’ll need to add elements to that array which are all objects, and each of those objects should have properties which correspond to the titles of the controls used within the repeater.

If you lost me here, here is an example:
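
Say the repeater contains two controls titled “Name” and “Amount” – then the array might look like this (the values are made up):

```
[
  { "Name": "Consulting fee", "Amount": "100.00" },
  { "Name": "Travel fee", "Amount": "50.00" }
]
```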


You might do it in the “apply to each” for a CDS (oh… sorry… Dataverse) entity (crap… table), or you might do it somehow else. You might add conditional logic when populating the array (so that’s one less limitation of D365 word templates). Basically, you have the power of Power Automate at your disposal when preparing values for the controls (repeating or not).

3. Finally, you need to configure “Word Template” actions and set template control values

For the sake of simplicity, I’ll use static values for the individual controls.

I will, however, switch to the “array” mode for the repeater:


And, for the repeater, I will use my “Fees” variable:



Time for a test? Let’s just add an action to send the result to me by email:


And, once I run the Flow (it’s an HTTP Request flow – I just run it by opening the trigger URL in the browser… it has to be configured to allow the “GET” method on the trigger) and the email comes in, here is what I see:


How about sorting? Well, just sort that array.

How about filtering? Same thing – just don’t add those values to the array.

And how about using this directly from the model-driven app?

That’s in the next post.

CDS is Microsoft Dataverse!

There is another renaming, so just be aware:


And I can’t help but notice

Well, I know it’s difficult to come up with good names these days since all of them have already been taken, but, quite frankly, what’s wrong with CDS, or, at least, how come Dataverse sounds better?

Either way, it seems Microsoft is quite insistent on having the old name (“CDS”) replaced, so I’m just hoping this new name will stick this time around.

Excel Online connector vs Google Sheets connector

Recently, I happened to look into how google sheets connector works, and, to be honest, I was a little bit disappointed.

First, I wanted to blame Google. But, it turned out, there is no hiding the fact that it’s been provided by Microsoft itself:


In general, this connector works. But there is one notable issue, which is that we can’t specify an “id” value when adding a new row:


You may think that the Id above is what it is, but no. It would have to be a “Row Id”, which is what the connector would be using in other actions, such as Get Row:


You will find more details on “row id” in my earlier post:

Long story short, if I wanted to add a row to the spreadsheet in such a way that I’d still be able to read that specific row later using “Get Row” action, I would not be able to. Since I would not be able to specify “Row Id” – instead, it would be generated for me (for example, imagine that “Ford”/”Dodge”/etc are all id values. Assuming you’d want to be able to load car make data from the spreadsheet, you could do it if you kept adding rows to the spreadsheet manually, but you won’t be able to keep adding new rows to the spreadsheet directly from the Flow, since you’d be getting random row-id values instead of car makes).

For this one, I just added an idea, though I’m guessing it won’t be too popular since it might not be such a common scenario to use Google sheets with Power Automate:

Which brings me to the Excel Online connector. And that one shines in comparison.

None of the Google Sheets connector issues seem to exist there. There is a similar “get row” action, but I can specify a key column – there is no need for some special id columns/values:


Which is why “Add a row”, even though it looks very similar to the same action in Google sheets connector, works perfectly fine:


It’s my own “id” column this time, I don’t need a special “Row Id”, so it’s all good.

This seems to be consistent throughout Excel connector actions, which brings me to a very simple conclusion:

As of Nov 2020, if you need to use spreadsheets with Power Automate, and if you have a choice between Google Spreadsheets and Excel Online, go with Excel.

This has nothing to do with Google vs Microsoft. This has everything to do with the connector capabilities.