Author Archives: Alex Shlega

Power Platform Elephants

A few things happened recently that got me thinking about things we usually don’t talk about in the Power Platform world.

For one, somebody posted an issue to my PCF components GitHub repo asking whether it’s correct that I don’t have a managed solution there.

This all got me thinking about… well, about the elephants:


Although, not just about any elephants, but about those which are, sometimes, in the room.

So, maybe, let’s reveal some of those.

When it comes to the PCF components, there are lots of great PCF components in the PCF Gallery, and I’m sure every one of us can find at least a few there which might be useful on the projects we are working on. Those components can improve user experience, they can take away some awkward out-of-the-box customizations, and they can add features which just would not be possible otherwise:

image

And, yet, what do we do if one of those components misbehaves halfway through the project, or, worse yet, once it’s all in production? With open-source components, there is no guarantee we’ll be able to get any support from the original developer, so the only option would be to hire a developer to figure it all out.

Which is my elephant #1: with all the good PCF components can bring to Power Platform projects, there is one inevitable evil that comes with them, too… which is the professional developer.

Is it really that bad? Surely not, it’s just something you can count on if you are starting to use PCF components, irrespective of the fact that those components might have been developed by a third party originally.

Now, how about low-code development with Flows and Power Apps? Originally touted as something that business users would be able to do, I think it’s now becoming a thing Citizen Developers would rather be doing. And, yet, as Olena somewhat jokingly noted, this is already starting to feel like real development. You have to know how to handle errors, how to use connectors, how to parse JSON, how to orchestrate canvas apps and flows so they work together, how to use FetchXML, what the Web API is and how to build queries there… that list never ends. And, of course, you have to keep up with all the changes Microsoft keeps pushing.

Which is my elephant #2: so-called low-code application development tools are still development tools no matter what, and those who have more experience with them and are willing to toy with such tools will usually be able to develop better applications. Besides, once you start talking about git repositories, ALM, solution layering, etc… it’s very likely you need a person who does not mind talking that language. And, of course, that’s almost the definition of what a “developer” is. Now, whether it’s a “citizen” developer or a “professional” developer… I’m not sure it matters. You won’t hire a .NET developer to work on Power Automate flows/canvas apps except, maybe, if they are willing to change their career.

This misconception might be one reason why a lot of clients might be marching towards that peak Steve mentioned.

But, I think, there are more elephants hiding there.

My #3 elephant would be licensing and limits.

Licensing has always been painful, it still is, and a lot has been written about it. What licenses do we need? Will we run into those limits? If we will, what do we do? How much will it cost us?

It seems to be almost impossible to get precise answers to those questions – instead, clients will often deal with this by taking a leap of faith approach.

 

And this all brings me to the elephant #4 for today: quality Power Platform implementations will rarely be fast, furious, and cheap.

I’m glad most of the projects I worked on so far were big enterprise projects. And it’s not that the projects were, always, big. But the enterprises were definitely big enough to deal with the unknown. This unknown comes from the overall depth and breadth of the platform, from the rate of change, and from the licensing uncertainties.

Out-of-the-box functionality, even though it’s quite powerful, will usually have to be extended. This will all but ensure that developers will be involved (professional or “citizen”… for what it’s worth, I usually see them being the same people). End users will want features. Business groups will not agree on the security. SSRS report won’t work, so Power BI will be required, and not just any Power BI but a Premium one.

It won’t be long before you need somebody who can help you navigate those waters, so you’ll be asking for some help from the community, partners, consultants, Microsoft services, etc.

 

And, yet, with all that said, those are just elephants hiding around. They are nice animals, and they are easy to spot. Besides, they have thick skin, so it won’t hurt them if you name them, deal with them as best as you can, and, then, proceed to building awesome projects with the Power Platform!

 

Have fun!

 

PS. Just realized it’s Thursday, and this post came out as a bit of a rant… which means “Thursday rant” is becoming a bit of a tradition. Not sure if it’s good or bad:)

How to: re-use historical user input/selections to populate form controls

What if you wanted to populate model-driven app form controls based on the previous selections made by the same user? To give you a specific example: let’s say there is a location field on the form, and let’s say our user can travel between multiple locations. In both locations, this user is going to work at the front desk, where there is a desktop computer.

How do we only ask the user to provide location selection once on each of those desktops?

Here is a simplified version – notice how “Location” is empty the first time I’m creating the invoice. Then it keeps using “Toronto” by default:

default

I’m sure there could be different ways to do it, but here is one option:

  • We need a javascript function that executes on load of the form and reads stored values from the local storage
  • We need another function that executes on save of the form and saves attribute values to the local storage

And we just need to configure onload/onsave events of that form properly

So, here is the script:

// Shared between the onload and onsave handlers
var attributeList = null;

// OnLoad handler: populates empty form attributes with the values
// previously saved to localStorage. "attributes" is a comma-separated
// list of attribute names passed from the handler configuration.
function loadAttributeValues(executionContext, attributes)
{
	var formContext = executionContext.getFormContext();
	if (attributes == null || attributes === "") return;
	attributeList = attributes.split(",");

	for (var i = 0; i < attributeList.length; i++)
	{
		var attr = formContext.getAttribute(attributeList[i]);
		// Only touch attributes that exist on the form and are still empty
		if (attr != null && attr.getValue() == null)
		{
			try
			{
				attr.setValue(JSON.parse(localStorage.getItem(attributeList[i])));
			}
			catch (e)
			{
				// Nothing stored yet, or the stored value could not be parsed
			}
		}
	}
}

// OnSave handler: saves current attribute values to localStorage so
// they can be re-used the next time the form is loaded on this device
function storeAttributeValues(executionContext)
{
	if (attributeList == null) return;
	var formContext = executionContext.getFormContext();
	for (var i = 0; i < attributeList.length; i++)
	{
		var attr = formContext.getAttribute(attributeList[i]);
		if (attr != null && attr.getValue() != null)
		{
			localStorage.setItem(attributeList[i], JSON.stringify(attr.getValue()));
		}
	}
}

Here is how the form is set up:

  • OnLoad Event

image

image

  • OnSave Event

image

image

Both functions will use the same list of attributes, so I only need to provide that list to the onload function (and, of course, depending on how you want it to work and what you want it to do, you may have to change that).
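The event configuration itself is only visible in the screenshots above, so here is, roughly, what it boils down to (the attribute names are hypothetical – use your own):

```text
OnLoad event:
  Function: loadAttributeValues
  Pass execution context as first parameter: checked
  Comma separated list of parameters: "ita_location"

OnSave event:
  Function: storeAttributeValues
  Pass execution context as first parameter: checked
```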

Have fun!

Power Automate “Scope” action – what does it have to do with the error handling?

There is a strange action in Power Automate which is really not there to do anything other than to help you organize the Flow a little better, it seems:

image

We can use scopes to put other actions in them, so we can then collapse and expand those scopes, thus improving the manageability of the Flow:

image

Is that all there is to it?

Actually, there is more, since there is also a function in Flows which can return inputs and outputs of all the actions inside a scope action:

https://docs.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference#result

image

So, what’s the big deal?

See, it can really help with error handling. How often do you see an error like this without being able to tell exactly what happened there?

image

The reason is obvious in this example – I used incorrect syntax for the filter. However, what if the Flow were much longer, and what if I did not know there was an intentional error in it? With the error message above, I might not be able to say much about the cause of the error.

This is where “result” function comes in.

It’s an interesting function to start with, since it does not show up in the list of available functions (that’s what we call a “hidden gem” :) )

I can use it:

image

But there are no hints or “intellisense”:

image

That does not make it less useful, though, since I can set the Flow like this:

image

Where my “Set Variable” action would run only if “Scope 1” fails:

image

With the actions configured as per the screenshots above, here is what will be stored in the error variable:

image

Looking at the output of that “Set Variable” action, I can now see a lot more detailed error message:

image
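To make it a bit more concrete, here is a small JavaScript sketch of what can be done with the array the “result” function returns (the action names and result shapes below are made up for illustration – in the Flow itself, one option is a “Filter array” action followed by “Set variable”):

```javascript
// result('Scope 1') comes back as an array with one entry per action
// inside the scope; each entry carries a status and, for failed
// actions, the error details.
const scopeResults = [
  { name: "List_records", status: "Failed", error: { message: "Invalid filter clause" } },
  { name: "Compose", status: "Succeeded" }
];

// Keep only the failed actions and grab the first detailed message
const failed = scopeResults.filter(r => r.status === "Failed");
const errorMessage = failed.length > 0 ? failed[0].error.message : "";
```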

And this has all become possible because there are:

  • A scope action
  • A “result” function that can be used with scope actions

 

Those scopes look much more useful now, it seems!

PS. To those who have been working with Power Automate for at least a few years, this may have been known for a while – there is a great post on this topic here:

https://sharepains.com/2018/02/07/power-automate-try-catch-finally-flow/

Changing Flow action names, updating entity (schema) names, setting PCF properties… one script does it all

Every now and then, we need to change Flow action names. This usually happens to me once the Flow has already been somewhat developed and I have a bunch of actions which have very meaningful names:

image

It helps to know that there are 3 “List records” actions in the Flow above. But that does not tell me anything about what those actions are doing.

What if I wanted to rename some of them?

This is not always straightforward. There could be other actions utilizing those I’m trying to rename – for some of those, Power Automate flow designer will be able to update the dependencies. But, for others, it may become more involved. For example, let’s say I had a flow like this:

image

I could try renaming “List records” action, but I’d get an error if I tried saving the Flow after that:

image

Which is an actionable error – I can go ahead and make required changes in the other actions. If those dependent actions are spread across the Flow, though (various conditions, for example), it may become quite a bit more involved.

So, if there comes a point where you’d want some scripting help in this situation, here is a PowerShell script that might help:

https://github.com/ashlega/ITAintBoring.PowerShell

image

This script will take a solution file, a regular expression, a replacement string (to replace matching substrings in all solution components), and an output solution file.

For example:

.\psreplace.ps1 -solutionPath "c:\work\temp\psreplace\sources\FlowActionNameChange_1_0_0_1.zip" -regex "List records" -replaceWith "Get records" -outputSolutionPath "c:\work\temp\psreplace\sources\UPDATEDFlowActionNameChange_1_0_0_1.zip"

You can output to the same file you used for the input. In the above example, you will also need to replace “List_records” with “Get_records”, so you’d probably have to run the same script twice, and, for the second run, you might use the following command:

.\psreplace.ps1 -solutionPath "c:\work\temp\psreplace\sources\UPDATEDFlowActionNameChange_1_0_0_1.zip" -regex "List_records" -replaceWith "Get_records" -outputSolutionPath "c:\work\temp\psreplace\sources\UPDATEDFlowActionNameChange_1_0_0_1.zip"

 

The first time this script starts, it will download nuget and use it to deploy SolutionPackager into the PSModules\Tools\CoreTools subfolder. After that, it’ll be using the same copy of the solution packager.

For instance, with the command lines above, here is how it’s going to work: once the updated solution is imported, the Flow above gets updated, and the action, with all its references, is named differently now:

image

So what else can you use this script for?

  • Rename entities in the solution
  • Update view names
  • Update PCF control properties to match the environment
  • Etc

 

Essentially, it’s a text replacement script. It just knows how to do that replacement in all solution components.

Power Automate word templates in Model-Driven apps – forms integration

Now that we went through the process of creating a Power Automate flow that’s using Word Template action, why don’t we get this integrated into a model-driven application?

documentgeneration

 

Let’s first create a very simple invoice form. You should be able to do it easily (or you could use any other entity – I’m just doing it so it’s consistent with the word template):

image

How do we add “Print” functionality in such a way that we’d be utilizing Power Automate flow created earlier?

There are a few options. We could use a web resource to add an HTML button somewhere on the form. Or we could use Ribbon Workbench to add a “Print” button to the command bar.

So, let’s deal with the Flow first.

1. We need the Flow to start accepting document id as a parameter

The easiest way to do it would be to pass it as a URL parameter to the HTTP Request trigger. If it were a POST trigger, I might want to specify a JSON schema. But, to simplify, I’ll just set it up as a “GET” trigger:

image

And, then, I’ll initialize a variable with the documentId parameter from the URL:

image

triggerOutputs()['queries']['documentId']

If you wanted to read more on what’s happening there, have a look at the docs:

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-http-endpoint#get-parameters

From here on, I can simply add “&documentId=…” to the trigger url, and my Flow will have documentId value stored in the variable. Which means I can, then, use it to query invoice record from the CDS/Dataverse.

And, of course, I can copy Flow url from the Flow designer:

image

 

2. We need to add “Response” action to the Flow so it actually sends the document back right away

image

Basically, it’s just sending the output of “Populate a Microsoft Word Template” action through the response. But, also, there are a couple of headers.

The first one is used to specify content-type. For that matter, we could also generate a PDF document, which would have a different content-type.

And the second header specifies the file name.

Here are header values, to make it easier:

  • Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
  • Content-Disposition: inline; filename="test.docx"

 

And you would probably want to replace “test.docx” with something more useful. Invoice # maybe?

Anyway, this is where we need to go back to the Invoice form and add a button there.

Before that, let’s think it through:

  • We will have a button on the form
  • That button will “call” a javascript function (that’ll be a javascript web resource)
  • And that javascript function will open a new window using Flow trigger url (with an additional documentId parameter which will take the value of the invoice ID)

 

So, let’s add a web resource first.

image

There is nothing extraordinary there. There is a documentId parameter. There is Flow trigger url. There is concatenation. And there is window.open. As promised.
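Since the web resource itself is only shown in the screenshot, here is a minimal sketch of what such a function might look like. The flow URL is a placeholder (copy the real trigger URL from your flow's designer), and stripping the braces is there because FirstPrimaryItemId arrives in the "{GUID}" format:

```javascript
// Builds the flow trigger URL with the documentId query parameter appended.
// FirstPrimaryItemId comes in as "{GUID}", so the braces are stripped first.
function buildPrintUrl(flowUrl, documentId) {
  return flowUrl + "&documentId=" + documentId.replace(/[{}]/g, "");
}

// The function wired to the ribbon button (flowUrl below is a placeholder)
function printInvoice(documentId) {
  var flowUrl = "https://prod-00.westus.logic.azure.com/workflows/.../invoke?sp=...&sig=...";
  window.open(buildPrintUrl(flowUrl, documentId));
}
```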

Now let’s fire up Ribbon Workbench in the XrmToolBox, and let’s customize Invoice command bar there:

image

image

For the command, here is how it’s defined (notice that Crm Parameter – FirstPrimaryItemId):

image

This is how my javascript function (printInvoice) will receive documentId parameter.

Now I just need to add “Enable Rule”:

image

And link it to the command:

image

That’s it, now it’s all about publishing the changes and watching Scott Durow advertising his PCF course:

image

 

Finally, there is my “Print” button:

image

Not that difficult, was it? Or was it? In either case, I think it’s a worthy replacement for the classic Word Templates in D365.

And I could also suggest a few improvements (might or might not blog about them later):

  • That “print” button could actually be a fly-out button with a few menu items in the drop down menu. We could have different print forms. And which ones are available might be decided based on the invoice type, for example. How cool is that?
  • We might send that invoice to the client by email right away. It’s a Flow, so why not?
  • We might store that invoice in Sharepoint
  • We might convert it to PDF, and, again, send a PDF version by email (there is a corresponding OneDrive action in Power Automate)
  • One advantage of PDF files is that they will usually open automatically once downloaded. For Word documents, you can achieve the same result in Chrome, at least.

 

Pretty sure this list of improvements can get rather long, so… have fun with this improved version of Word Templates!

PS. Of course it would be nice if there was an easier way to create those “buttons” (or to integrate such Flows into the model-driven apps). Well, one can dream :)

Using Power Automate word templates with Model-Driven apps

I’ve never been a big fan of Word Templates in Dynamics 365 since they have quite a few limitations, and, yet, I had to use them every now and then since they are so easy to develop (till you run into the limitations, of course).

Besides, they are not so easy to deploy into different environments, so you’d almost inevitably need XrmToolBox (if you are ok with manual deployment) or a script like this one (if you wanted to automate the process).

But for the last few months I’ve been using Power Automate version of the Word Template, and I’m finding it much more useful:

https://docs.microsoft.com/en-us/connectors/wordonlinebusiness/#populate-a-microsoft-word-template

To create a template, we need to do a few things:

1. Create a new word document

2. Add controls to the document

This is where you need to open “Developer” tab and start adding controls. Just keep in mind that every control you put there should have a title, and that title is exactly what you’ll see in the Power Automate flow when setting up Word Template action:

image

image

2.1. What if I wanted to add a repeater above to display invoice fees?

I’d start by adding a plain text content control to the table row:

image

And I’d give it a title:

image

Would, then, select the whole row:

image

And would add a repeater:

image

As with any other content control, I’d add a title to the repeater:

image

As a result, once I’m back in the Flow designer, I can see Fee repeater there (may need to reload the Flow in the browser):

image

With the individual controls, it’s pretty straightforward – just use output from the other actions (or use expressions) to populate those.

It’s a little more tricky with the repeater, but it’s actually not that complicated.

What you need is an array variable, and you’ll need to add elements to that array which are all objects, and each of those objects should have properties which correspond to the titles of the controls used within the repeater.

If you lost me here, here is an example:

image

You might do it in the “apply to each” for a CDS (oh… sorry… Dataverse) entity (crap… table), or you might do it somehow else. You might add conditional logic when populating the array (so that’s one less limitation of D365 word templates). Basically, you have the power of Power Automate at your disposal when preparing values for the controls (repeating or not).
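For instance, assuming the repeater contains two controls titled “FeeName” and “FeeAmount” (hypothetical titles – use whatever titles you gave your own controls), the array might be built like this:

```javascript
// Each element of the array becomes one row of the repeater; property
// names must match the titles of the content controls inside it.
const sourceRows = [
  { name: "Processing fee", amount: 100 },
  { name: "Delivery", amount: 25 }
];

const fees = [];
for (const row of sourceRows) {
  // In the Flow, this would be an "Append to array variable" action
  // inside "Apply to each"
  fees.push({ FeeName: row.name, FeeAmount: row.amount.toFixed(2) });
}
```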

3. Finally, you need to configure “Word Template” actions and set template control values

For the sake of simplicity, I’ll use static values for the individual controls.

I will, however, switch to the “array” mode for the repeater:

image

And, for the repeater, I will use my “Fees” variable:

image

 

Time for a test? Let’s just add an action to send the result to me by email:

image

And, once I run the Flow (it’s an HTTP Request flow – I just run it by opening the trigger URL in the browser; the trigger has to be configured to allow the “GET” method) and the email comes in, here is what I see:

image

How about sorting? Well, just sort that array.

How about filtering? Same thing – just don’t add those values to the array.

And how about using this directly from the model-driven app?

That’s in the next post.

CDS is Microsoft Dataverse!

There is another renaming, so just be aware:

image

https://powerapps.microsoft.com/en-us/blog/reshape-the-future-of-work-with-microsoft-dataverse-for-teams-now-generally-available/

And I can’t help but notice https://dataverse.org/

Well, I know it’s difficult to come up with good names these days since all of them have already been taken, but, quite frankly, what’s wrong with CDS, or, at least, how come Dataverse sounds better?

Either way, it seems Microsoft is quite insistent on having the old name (“CDS”) replaced, so I’m just hoping this new name will stick this time around.

Excel Online connector vs Google Sheets connector

Recently, I happened to look into how the Google Sheets connector works, and, to be honest, I was a little bit disappointed.

First, I wanted to blame Google. But, as it turned out, there is no hiding the fact that it’s been provided by Microsoft itself:

image

https://docs.microsoft.com/en-us/connectors/googlesheet/

In general, this connector works. But there is one notable issue, which is that we can’t specify the “id” value when adding a new row:

image

You may think that Id above is what it is, but no. It would have to be “Row Id”, which is what the connector uses in other actions, such as “Get Row”:

image

You will find more details on “row id” in my earlier post:

https://www.itaintboring.com/power-platform/google-sheets-connectors-row-id-in-power-automate/

Long story short, if I wanted to add a row to the spreadsheet in such a way that I’d still be able to read that specific row later using the “Get Row” action, I would not be able to, since I can’t specify the “Row Id” – it would be generated for me. For example, imagine that “Ford”/”Dodge”/etc. are all id values. If you wanted to be able to load car make data from the spreadsheet, you could do it as long as you kept adding rows manually, but not if you added new rows directly from the Flow, since you’d be getting random row-id values instead of car makes.

For this one, I just added an idea, though I’m guessing it won’t be too popular since it might not be such a common scenario to use Google sheets with Power Automate:

https://powerusers.microsoft.com/t5/Power-Automate-Ideas/Google-sheets-connector-quot-Insert-Row-quot-action-should-have/idi-p/745993#M23460

Which brings me to the Excel Online connector. And that one shines in comparison.

None of the Google Sheets connector issues seem to exist there. There is a similar “get row” action, but I can specify a key column – there is no need for some special id columns/values:

image

Which is why “Add a row”, even though it looks very similar to the same action in Google sheets connector, works perfectly fine:

image

It’s my own “id” column this time, I don’t need a special “Row Id”, so it’s all good.

This seems to be consistent throughout Excel connector actions, which brings me to a very simple conclusion:

As of Nov 2020, if you need to use spreadsheets with Power Automate, and if you have a choice between Google Spreadsheets and Excel Online, go with Excel.

This has nothing to do with Google vs Microsoft. This has everything to do with the connector capabilities.

Testing if a specific bit is set in Power Automate flows

Bitwise operators are, likely, not the most popular operators in the low-code world, so there is no surprise Power Automate does not have those.

Which is a bit unfortunate, since, when we do need those operators, it seems like we have to opt for a custom connector/azure function/etc.

But, come to think of it, if all you need is just to see whether a certain bit is set in the integer number, it’s totally doable with Power Automate only.

Imagine you have this number (in binary form):

image

The number itself is in black, and each bit in the binary representation is “numbered” in red.

If you use “div” function to divide that number by 2, it will shift to the right and will look like this:

1110011

You can use “div” again, and you’ll get this:

111001

You can continue like this until the bit you were looking for ends up in the first position, and then just use the “mod” function to get the remainder:

mod(x, 2)

If the remainder is 1, the bit you were looking for is set. If the remainder is 0, the bit is not set.
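Here is the same trick expressed in JavaScript, just to make the div/mod sequence explicit (bit numbers are 1-based, counting from the least significant bit):

```javascript
// Mirrors the Power Automate approach: div() shifts the number right,
// mod() exposes the lowest bit.
function isBitSet(value, bitNumber) {
  // Apply div(value, 2) until the bit we want lands in the first position
  for (let i = 1; i < bitNumber; i++) {
    value = Math.floor(value / 2);
  }
  // mod(value, 2): a remainder of 1 means the bit is set
  return value % 2 === 1;
}
```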

Here is an example of how we might use this in the Power Automate flow:

image

How to: verify principal object access directly from the Flow

If you ever tried using “List Records” action with the POA table (principalobjectaccess), you might have noticed it’s not showing up in the dropdown list:

image

However, it’s easy to solve. You just need to know the “set” name for that table (which is “principalobjectaccessset”), and, then, you can enter that name as a custom value:

image

How would you know it’s supposed to be principalobjectaccessset? One option would be to open XrmToolBox and use metadata browser to figure it out:

image

Or you might just read this post, of course :)

Anyway, once that’s done, you can create a Flow similar to this one to get all POA records for a random contact (you’d need to define filter conditions to work with a specific contact record; this is just an example, so I’m using Top Count = 1 instead):

image

From there, you can iterate through the POA records and see if there is one that grants “write” access:

image

Now there is a math trick there. There are no bitwise operators in Power Automate flows, and “access mask” is, essentially, an integer where every “bit” corresponds to a certain permission.

“Write” access is granted via the second bit, which means we can divide the access mask by 2 to move that second bit into the first position, and then use “mod” with 2 and look at the remainder.

If the remainder is 0, there is no “write” permission.

If the remainder is 1, there is “write” permission.

In other words, my sample Flow above is using the following expression in the condition step:

mod(div(outputs('AccessRightMask'), 2), 2)

Depending on the permission you wanted to check, you might have to divide by 2 a few more times before using mod and looking at the remainder (check this post, for example, for the meaning of each bit: https://blog.crmguru.co.uk/2015/11/10/figuring-out-shares-in-the-principalobjectaccess-poa-table-in-crm/)
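To generalize the expression above, here is a JavaScript sketch of checking an arbitrary access right against the mask using only division and remainder. The right values used in the comments (Read=1, Write=2, Append=4, AppendTo=16) are the commonly documented Dataverse AccessRights flags – verify them for your own scenario:

```javascript
// Checks whether the bit corresponding to rightValue (a power of two)
// is set in the access mask, using only div/mod, i.e. the same
// operations available in Power Automate expressions.
function hasRight(mask, rightValue) {
  // Shift both until the right's bit reaches the first position
  while (rightValue > 1) {
    mask = Math.floor(mask / 2);        // div(mask, 2)
    rightValue = Math.floor(rightValue / 2);
  }
  return mask % 2 === 1;                // mod(mask, 2)
}
```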

Have fun!