Monthly Archives: November 2020

Testing if a specific bit is set in Power Automate flows

Bitwise operators are, likely, not the most popular operators in the low-code world, so it is no surprise that Power Automate does not have them.

Which is a bit unfortunate, since, when we do need those operators, it seems like we have to opt for a custom connector, an Azure Function, etc.

But, come to think of it, if all you need is just to see whether a certain bit is set in the integer number, it’s totally doable with Power Automate only.

Imagine you have this number (in binary form):

image

The number itself is in black, and each bit in the binary representation is “numbered” in red.

If you use the “div” function to divide that number by 2, the number will shift one bit to the right and will look like this:

1110011

You can use “div” again, and you’ll get this:

111001

You can continue like this till the bit you were looking for ends up in the first “bit” of the result, and, then, just use the mod function to get the remainder:

mod(x, 2)

If the remainder is 1, the bit you were looking for is set. If the remainder is 0, the bit is not set.
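More generally, dividing by 2 n-1 times is the same as dividing by 2 to the power of n-1, so the whole check can be collapsed into a single expression. For example, counting from the rightmost bit as bit 1, a check for bit 4 might look like this (x is just a placeholder for your number):

mod(div(x, 8), 2)

If that returns 1, bit 4 is set; if it returns 0, it is not (8 here is simply 2*2*2, i.e. three “shifts” to the right).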

Here is an example of how we might use this in the Power Automate flow:

image

How to: verify principal object access directly from the Flow

If you ever tried using “List Records” action with the POA table (principalobjectaccess), you might have noticed it’s not showing up in the dropdown list:

image

However, it’s easy to solve. You just need to know the “set” name for that table (which is “principalobjectaccessset”), and, then, you can enter that name as a custom value:

image

How would you know it’s supposed to be principalobjectaccessset? One option would be to open XrmToolBox and use the Metadata Browser to figure it out:

image

Or you might just read this post, of course 🙂

Anyway, once that’s done, you can create a Flow similar to this one to get all POA records for a random contact (you’d need to define filter conditions to work with a specific contact record; this is just an example, so I’m using Top Count = 1 instead):

image
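By the way, if you did want to target a specific contact, the Filter Query in that “List records” action might look something like this (objectid is the POA column that refers to the shared record; the GUID below is just a placeholder):

objectid eq 00000000-0000-0000-0000-000000000001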

From there, you can iterate through the POA records and see if there is one that grants “write” access:

image

Now there is a math trick there. There are no bitwise operators in Power Automate flows, and “access mask” is, essentially, an integer where every “bit” corresponds to a certain permission.

“Write” access is granted in the second bit, which means we can divide the access mask by 2 to move that second bit into the first place, and then use mod to look at the remainder of dividing the result by 2.

If the remainder is 0, there is no “write” permission.

If the remainder is 1, there is “write” permission.

In other words, my sample Flow above is using the following expression in the condition step:

mod(div(outputs('AccessRightMask'), 2), 2)

Depending on the permission you wanted to check, you might have to divide by 2 a few more times before using mod and looking at the remainder (check this post, for example, for the meaning of each bit: https://blog.crmguru.co.uk/2015/11/10/figuring-out-shares-in-the-principalobjectaccess-poa-table-in-crm/)
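For instance, if it were “Delete” access you wanted to verify, that permission sits in the 65536 bit of the mask (per the AccessRights enumeration), so the expression might become:

mod(div(outputs('AccessRightMask'), 65536), 2)

Which is exactly the same as dividing by 2 sixteen times before taking the remainder.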

Have fun!

Entities are Tables now, so what?

You have probably heard that Entities are Tables now? If not, have a look here:

https://docs.microsoft.com/en-ca/powerapps/maker/common-data-service/data-platform-intro#terminology-updates

image

Well, am I thrilled about it? Am I concerned about it?

Quite frankly, we should all be used, by now, to all those changes in product names and/or in the terminology around Microsoft products. Sometimes, those changes are successful, and, sometimes, they are not. One thing is certain – they happened in the past, they keep happening, and they will keep happening in the future.

And I just think I reached the point where it does not matter to me what the name is, since:

  • I don’t know the reasons behind the renaming (other than vague references to user feedback, etc.)
  • If anyone tells me they don’t like the new names, I’m just going to say “it’s no better or worse than it used to be; as long as this is what Microsoft is using these days, I’m fine with that”

 

For example, in the case of entities and tables, we’ve all got used to “entities” over the years. But the concept is rather vague, to be honest: it’s not a table, it’s not a view… it’s some combination of metadata and business logic.

It is vague to the point where even the XRM SDK has it wrong. There is an “Entity” class in the SDK, but, realistically, it should have been called EntityInstance. Or, maybe, EntityRecord. Or even just “Instance”.

If it’s easier to call it Table when discussing these concepts with new clients/developers, so be it. Although, of course, in this new terminology we will likely always have to add “well, it’s not quite the same table you’d have in SQL. But it’s a good enough approximation”

In that sense, it seems I almost became immune to the renaming virus. I know it’s there, but I’m staying cool.

Although, on a more personal level, this change may affect me, and not in the best way.

See, half a year from now, when the new terminology settles in, everyone will be searching for “CDS tables…” in Google. But all my blog posts up until now used different terminology, so there will be two immediate consequences:

  • Those older posts might stop showing in the search results
  • Even if they do show up, blog readers (especially those new to Power Platform) might actually get even more confused when they start seeing the old terminology

 

Almost inevitably, there will be some period of adjustment, when old and new terminology will have to co-exist, and, yet, every Power Platform user/client/developer will have to be familiar with both sets of names to be able to understand older posts/articles/blogs/recordings.

From that perspective, it might be quite a conundrum, of course. Although everyone is going to be in this boat, so we might as well simply keep sailing – we just need to adopt the new terminology and start using it moving forward.

Polymorphic lookup delegation in Canvas Apps

Right on the heels of my previous post where I was talking about delegation in Canvas Apps, here is another one on the same topic.

We can’t help but break delegation when filtering polymorphic lookups, right? Since, of course, “AsType” cannot be delegated:

image

Well, if you are up to writing a little plugin, it’s, actually, quite doable:

image

The idea is very simple:

  • Let’s create a dummy field (“Dummy Account Name”)
  • Let’s create a plugin to kick in on RetrieveMultiple
  • And let’s update the query in the pre-operation so that the filter we specify for the “Dummy Account Name” is converted into a filter on the linked account entity

In other words, in the pre-operation, the plugin will receive this query:

image

The plugin will convert this query into another one:

image

And the rest of the execution pipeline will work as is.
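To illustrate the idea in FetchXML terms (this is just a sketch; the attribute name below is hypothetical, and the actual plugin in the repo works with the query object it receives in the pre-operation), the incoming query filters on the dummy column of the contact:

<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <filter>
      <condition attribute="ita_dummyaccountname" operator="eq" value="Acme" />
    </filter>
  </entity>
</fetch>

And the rewritten query filters on the name of the linked account instead:

<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <link-entity name="account" from="accountid" to="parentcustomerid" link-type="inner">
      <filter>
        <condition attribute="name" operator="eq" value="Acme" />
      </filter>
    </link-entity>
  </entity>
</fetch>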

So, to start with, we’ll need to add the “Dummy Account Name” field to the contact entity:

image

We’ll need a plugin:

image

And we’ll need to register that plugin:

image

There you go. Don’t you ever forget about plugins 🙂

PS. And you will find the source code here: https://github.com/ashlega/ITAintBoring.PolymorphicDelegation

Think twice when using functions to filter your data sets in the Canvas Apps

Here is a warning message which I keep running into:

image

(The “Filter” part of this formula might not work correctly on large data sets)

It’s not that I keep running into it every day. But I find myself looking at this warning every time I’m setting up data sources for a new application.

So, maybe, if I write it a few times here, I’ll remember to do it right from the start the next time I’m doing it.

Think twice when using functions in the filter conditions

Think twice when using functions in the filter conditions

Think twice when using functions in the filter conditions

Think twice when using functions in the filter conditions

Now, if you are new to this, and if this does not make a lot of sense so far, let me explain.

Canvas Apps are lazy – they know how to delegate work to the data sources. For example, if I wanted to find all accounts that are called “Big Corp”, ignoring the character case, I could use the Filter function like this:

image
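The formula itself might be as simple as this (a sketch, not the exact screenshot):

Filter(Accounts, 'Account Name' = "big corp")

Note that the value is in lowercase; since the comparison happens on the CDS side, where text comparisons are case-insensitive by default, it still matches.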

As a result, I’d get that “Big Corp” account. Yet, there would be no delegation warning.

This is because, for this kind of straightforward condition, the Canvas App would just delegate the filtering work to the data source – instead of loading all accounts and filtering them on the client, all that work happens on the “server” (in this particular case, it means the CDS Web API service would be doing the filtering).

Not everything can be delegated, though. Actually, there are only a few basic functions/operators which are delegatable:

image

https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/delegation-overview

By the way, I’m not going to speculate on why “Upper” would not be delegatable for the SQL data source either. There is a corresponding MS SQL “UPPER” function, so, it seems, this might have been delegatable… but it’s not.

There is one scenario where we can use “non-delegatable” functions in the conditions. It’s when those functions are applied not to the “columns”, but to constant values (to variables, for example):

image

In this case, the Canvas App knows that it can calculate Upper(“big corp”) beforehand and then delegate the rest of the filtering work to the data source.
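For example, a condition like this one would not trigger the warning (a sketch, not the actual screenshot):

Filter(Accounts, 'Account Name' = Upper("big corp"))

Upper("big corp") does not depend on any column, so it is evaluated locally, once, and only the resulting value is pushed down to the data source.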

Hope this makes sense so far?

How about this one, then:

image

Compare the last two screenshots. Can you tell why, in the last example, there is no data that matches the condition, whereas in the previous example “Big Corp” account showed up in the results?

PS. You are welcome to reply in the comments or on LinkedIn, if that’s how you landed here 🙂

Retrieving environment variable value in Javascript

The script below might help if you wanted to read a CDS environment variable value in your JavaScript web resource.

top.environmentVariables = [];

function getEnvironmentVariableInternal(varName) {
    "use strict";
    top.environmentVariables[varName] = null;
    // Query the environment variable definition and, through an outer join, its current value record
    Xrm.WebApi.retrieveMultipleRecords("environmentvariabledefinition", `?$top=1&fetchXml=<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='true'>
        <entity name='environmentvariabledefinition'>
            <attribute name='defaultvalue' />
            <filter type='and'>
                <condition attribute='schemaname' operator='eq' value='` + varName + `' />
            </filter>
            <link-entity name='environmentvariablevalue' from='environmentvariabledefinitionid' to='environmentvariabledefinitionid' link-type='outer' alias='varvalue'>
                <attribute name='value' />
            </link-entity>
        </entity>
    </fetch>`).then(
        function success(result) {
            for (var i = 0; i < result.entities.length; i++) {
                // Prefer the overridden value; fall back to the default value if there is none
                if (typeof (result.entities[i]["varvalue.value"]) != "undefined") {
                    top.environmentVariables[varName] = result.entities[i]["varvalue.value"];
                }
                else if (typeof (result.entities[i].defaultvalue) != "undefined") {
                    top.environmentVariables[varName] = result.entities[i].defaultvalue;
                }
                else {
                    top.environmentVariables[varName] = null;
                }
            }
        },
        function (error) {
            console.log(error.message);
        }
    );
}

function getEnvironmentVariable(executionContext) {
    "use strict";
    getEnvironmentVariableInternal("SCHEMA_NAME_OF_YOUR_VARIABLE");
}

function getEnvironmentVariable(executionContext)
{
  "use strict";
   getEnvironmentVariableInternal("SCHEMA_NAME_OF_YOUR_VARIABLE");	
}

Just a couple of notes:

1. I’m using WebAPI + FetchXML to get the values

I think this is just because I’m so used to FetchXml that it’s my first choice. As Diana Birkelbach just noted, it should actually be easier with Web API alone, so I will be updating this post soon to add a Web API-only version of the same function.

2. I’m storing the variable value (default or overridden) in the top.environmentVariables array

This way, I can access that array from the script associated with a ribbon button (which is a completely separate script).
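For example, a command function behind that ribbon button might read the cached value along these lines (a sketch; the function name and the way the value is used here are hypothetical):

function onRibbonButtonClick(primaryControl) {
    "use strict";
    // The value was cached by getEnvironmentVariableInternal on form load
    var flowUrl = top.environmentVariables["SCHEMA_NAME_OF_YOUR_VARIABLE"];
    if (flowUrl != null) {
        // Do something with the value, e.g. open it as a URL
        Xrm.Navigation.openUrl(flowUrl);
    }
}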

PS. As promised, here is an updated version that’s not using FetchXML:

top.environmentVariables = [];

function getEnvironmentVariableInternal(varName) {
    "use strict";
    top.environmentVariables[varName] = null;
    // Query the environment variable definition and expand the related value records
    Xrm.WebApi.retrieveMultipleRecords("environmentvariabledefinition", "?$select=defaultvalue,displayname&$expand=environmentvariabledefinition_environmentvariablevalue($select=value)&$filter=schemaname eq '" + varName + "'").then(
        function success(result) {
            for (var i = 0; i < result.entities.length; i++) {
                // Prefer the overridden value; fall back to the default value if there is none
                if (typeof (result.entities[i]["environmentvariabledefinition_environmentvariablevalue"]) != "undefined"
                    && result.entities[i]["environmentvariabledefinition_environmentvariablevalue"].length > 0) {
                    top.environmentVariables[varName] = result.entities[i]["environmentvariabledefinition_environmentvariablevalue"][0].value;
                }
                else if (typeof (result.entities[i].defaultvalue) != "undefined") {
                    top.environmentVariables[varName] = result.entities[i].defaultvalue;
                }
                else {
                    top.environmentVariables[varName] = null;
                }
            }
        },
        function (error) {
            console.log(error.message);
        }
    );
}

function getEnvironmentVariable(executionContext) {
    "use strict";
    getEnvironmentVariableInternal("coo_InvoicePrintFlowUrl");
}

 

Thursday rant – long-ignored issues with the Admin UI

With Wednesday being a holiday (Remembrance Day in Canada), Thursday feels a bit like Monday. Which means I am still in the “holiday mode”, I don’t necessarily want to do anything at all, and, hence, some of the minor issues with Power Platform which should have been fixed long ago seem somewhat inflated on this bleak morning.

And there could be no better time to rant about it!

Why is it that there is no sorting in the list of solution components?

image

Why is it that the list above has components which are simply not actionable in the UI?

image

I can try creating a new relationship attribute, but here is all I get:

image

 

Why is it that security roles, field security profiles, web resources, option sets, site maps, and a few others are all grouped under “Other” in the list of solution components?

image

How come we can’t use a wildcard in the search?

image

Why is it that the highlighted columns are not sortable (while the other ones are)?

image

Is there any reason why web resources are given this strange customization type?

image

And where the heck are my javascript web resources this morning?

There are a few in the classic UI:

They do show up under “All” in the new UI (though I have to order by type and scroll to the right place):

But only one of them shows up under “other” – so what’s that “other”, then?

Well, there you go. I have missed a few for sure, but the rant is over – going back to work now!

Output “fields” – now you see them… and now you don’t. What’s up with Flow mechanics?

You know how we can pick output fields from the Flow actions:

image

So, why is it that, for the flow below, the “Filter Array” output is not presented in the same way as it is for the “Get rows” action? I’m really just filtering the results of “Get rows”, so wouldn’t it be reasonable to assume both should have the same output? Yet, they don’t.

image

This might not be obvious, and, really, I had not thought about it till today, but this is where Flow mechanics kick in.

Every connector has a set of actions, and every action can tell the Flow what its output is going to be. However, in order for the “Filter Array” action above to expose the same output fields as “Get rows” (which is the input for “Filter Array”), that action would have to be quite a bit more intelligent… I’m not sure that’s even doable.

Otherwise, the outputs are just different, and there is no reason to expect that one action would have the same output as the other.

And what do we do about it? Well, we don’t panic and we start using expressions:

first(outputs('Filter_array')['body'])['Make']

Which would be the same as this:

outputs('Filter_array')['body'][0]['Make']

image

It keeps throwing me off when I have to resort to this kind of expression, but, come to think of it, it’s not that complicated.

If I wanted to get a bit of reassurance, I could have run the Flow once to look at the outputs, and, then, writing the expression above would have been straightforward:

image
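Roughly speaking, those outputs are an object with a “body” property holding the filtered array, something like this (the values and the “Model” column below are made up for illustration; only “Make” comes from the expression above):

{
  "body": [
    {
      "Make": "Tesla",
      "Model": "Model 3"
    }
  ]
}

So outputs('Filter_array')['body'] is the array itself, first(...) (or [0]) takes its first item, and ['Make'] reads the column from that item.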

Well, not sure it feels any better after this post, but I hope so. Have fun!

Google Sheets connector’s “row id” in Power Automate

If you ever wanted to use Google Sheets in Power Automate, there is a connector for that:

image

However, it turned out to be a little tricky, and this is what this post is about.

Imagine there is a very simple spreadsheet:

image

What if I wanted to read a specific row from that spreadsheet in the Power Automate flow?

image

I can pick the file, I can also pick the worksheet… but what is supposed to go into the Row id parameter?

Well, what if I put “1” into that field:

image

As soon as that happens, a new column gets added to my spreadsheet automatically:

image

Apparently, this column is where the Google Sheets connector will try to find a match on the row id.

The column title is “__PowerAppsId__”. If I already had that column in the spreadsheet, it would not be added again – the Google Sheets connector would just start using that existing column for the “Get row” action.

So, now, I just need to fill that column with the correct ids (and I can use a formula for that):
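For example, a formula along these lines in the __PowerAppsId__ column would give every data row a sequential id (just one option; any unique value per row would do):

=ROW()-1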

image

And voila:

image