Monthly Archives: February 2020

Canvas Apps: Sync processing vs Async processing

I used to think Canvas Apps were synchronous – after all, there are no callback functions, which means implementing a real async execution pattern would be a problem. As it turned out, there are at least a couple of situations where Canvas Apps start to behave in a very asynchronous way.

There is a “Select” function: https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-select

And there is a “Concurrent” function: https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-concurrent

Looking at the documentation for those two, you will get a hint of how Canvas Apps functions are, normally, evaluated:

[Screenshot: excerpt from the “Select” documentation]

[Screenshot: excerpt from the “Concurrent” documentation]

Those two excerpts, combined with the mere existence of the “chain” operator (“;”), tell me that, normally, all functions in a chain are executed sequentially.

Except, of course, for those two above. Actually, “Concurrent” is also executed sequentially – it’s just that the functions within “Concurrent” won’t wait for each other.
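To make this concrete, here is a sketch (the collection and datasource names are made up) of three sequential lookups vs the same lookups wrapped in “Concurrent”:

```powerfx
// Sequential: each ClearCollect waits for the previous one to finish
ClearCollect(colAccounts, Accounts);
ClearCollect(colContacts, Contacts);
ClearCollect(colCases, Cases)

// Concurrent: the three lookups run in parallel, but the formula chain
// still waits for Concurrent itself to complete before moving on
Concurrent(
    ClearCollect(colAccounts, Accounts),
    ClearCollect(colContacts, Contacts),
    ClearCollect(colCases, Cases)
)
```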

“Select” turned out to be quite a different story, though.

I was having a problem with my app. As often happens, I added a hidden button, so I could use that button to call the same “code” over and over again from different places. That seems to be a common workaround whenever we need to define our own “function” in Canvas Apps.

It usually works seamlessly – all I need to do is call Select(<ButtonName>) wherever I need to run that encapsulated code (while the user is on the same application screen).
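As a minimal sketch (btnReusableCode and the variable are made-up names):

```powerfx
// OnSelect of the hidden button - the encapsulated "code" lives here
Set(varCounter, varCounter + 1);
Notify("Shared logic executed")

// And wherever that logic is needed on the same screen:
Select(btnReusableCode)
```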

However, the fact that “Select” only queues the target “OnSelect” for processing and does not execute it immediately makes all the difference in some situations.

Why did it bite me this time?

In my application I have a number of checkboxes per screen. Different users can open the same screen at the same time and start updating those checkboxes. I wanted to make sure that any changes one user makes are presented to the other users at the earliest opportunity.

So, I figured, I’d do it this way:

  1. When the user changes something, I’d call “Select” for a hidden button
  • In the “OnSelect” of that button, I’d re-read all values from the input controls, and I’d use “Patch” to push changes to CDS
  3. Then I’d re-read all data from CDS to capture any changes made by other users
  4. And, then, updated data would be displayed in the interface

 

Why does it matter that Select calls are not executing “OnSelect” immediately? Because what I ended up with is this:

[Screenshot: the resulting OnChange/OnSelect setup]

I wanted to go with the least amount of effort, so I figured I would go easy on the “OnChange” events and just call “Select” there. In the “OnSelect”, I would read values from the UI controls, patch the datasource, reload the data, and, then, update the UI with the reloaded data.

The problem is, because of the queued nature of those “OnSelect” events, it may turn out that the data “OnSelect” uses to update the UI does not really reflect the most recent changes made by the user, so some of those changes might be lost in the end.

Well, if you run into this issue, the only workaround I could think of is to put an overlay control (I used a label) on top of all other elements on the screen, so that it would be hidden most of the time, but it would be displayed once “OnSelect” has started, to prevent any user input:

[Screenshot: overlay label covering the other controls on the screen]

We can easily manipulate the visibility of such a control using a variable:

[Screenshot: the overlay’s Visible property bound to a variable]

So, I just need to set that variable to true at the start of “OnSelect” and, then, reset it to false once there is no more processing in the “OnSelect”.

The transparency effect can be achieved by using the last (“alpha”) parameter of the RGBA color – it indicates the opacity (in the range from 0 to 1, where 0 stands for “transparent”):

[Screenshot: RGBA color with the alpha parameter]
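Putting it all together, here is a sketch of that overlay setup (all the names – lblOverlay, varProcessing, Checkboxes, etc. – are made up):

```powerfx
// Visible property of the overlay label (lblOverlay):
varProcessing

// Fill property of the overlay: the last ("alpha") parameter of RGBA
// controls the opacity, so 0.3 gives a barely visible gray
RGBA(128, 128, 128, 0.3)

// OnSelect of the hidden button: block the input, do the work, unblock
Set(varProcessing, true);
Patch(Checkboxes, LookUp(Checkboxes, Id = varId), {Done: chkDone.Value});
Refresh(Checkboxes);
Set(varProcessing, false)
```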

When that stubborn Default property does not work for a Canvas App input control, there is still a way

It was a strange day today – I kept finding new stuff (as in “new for me”) in Canvas Apps. Apparently it had something to do with the fact that a Canvas App I was working on is being used in production now, so it’s getting much more real-life testing than it used to.

A couple of months ago, I wrote a blog post about the “default property”: https://www.itaintboring.com/powerapps/default-property-in-the-canvas-apps-controls-there-is-more-to-it-than-the-name-assumes/

Just to reiterate: when using a variable for the “Default” property of an input control, we can normally expect that, once the variable has been updated, those changes will also be reflected in the input control through its “Default” property.
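For example, with made-up names, the usual pattern looks like this:

```powerfx
// Default property of the text input control (txtName):
varName

// Somewhere else - a button's OnSelect, for instance. Once the variable
// changes, the control picks up the new value through its Default property:
Set(varName, "New value")
```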

This works, but, as it turned out, there is one edge case when it does not.

In the following scenario, it seems my Canvas App stops recognizing changes in the underlying variable:

  • I set my variable to a new value
  • As expected, that value gets displayed in the text box input control
  • I type in a value into the text box
  • And, then, I use the “Set” function to update my variable once again, using the same value as before

 

It does not work – my text box control is still displaying the value I entered manually. Why? Because I actually have to change the variable’s value, and, of course, that’s not happening if I keep using the same value again and again.

Here is a quick demonstration – notice how I keep pushing “Set Blank” toward the end of the recording, and nothing is happening. This is exactly because my variable had already been set to Blank, so setting it to Blank again does not change anything. However, once I click “Set Value”, it all starts working again:

[Animation: default_property demo]

Why did it suddenly hit me today? That’s because my Canvas Application is, essentially, a multi-screen wizard application, and, as the user keeps going through the screens, they can go back to the start screen at any time. At that point I may need to reset all input controls, and, since I am using variables, I need to reset those variables.

Because of how this wizard-like application is working, some of the variables would not be updated till the very last screen has been reached. So, if the user decides to start over somewhere in the middle… Those variables will still be “Blank”, and, so, resetting them to “Blank” won’t do much because of what I wrote above.

Dead end? Have to redesign the whole app? That might well be the case, but it’s for when I get some spare time. As it turned out, there is a workaround – I’m just afraid I’ll have to add a note every time I write something like this:

[Screenshot: the workaround – setting the variable twice]

Yep… When using a variable for the Default property of your input control, you might want to change the value of that variable twice whenever you want to make a change. First, change it to a dummy value. Then, change it to the actual value. That all but guarantees that the change will get reflected in the input control.
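In other words, something like this (varName and the dummy value are, of course, made up):

```powerfx
// A single Set may be ignored by the control if the variable
// already holds that value:
Set(varName, Blank());

// So, change the variable twice - first to a dummy value, then to the
// actual one - and the control will always see a change:
Set(varName, "##dummy##");
Set(varName, Blank())
```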

User licensing in D365 instance

When thinking about user licensing in D365 instances, you may only be thinking of D365 applications. However, from the licensing standpoint, a D365 instance is nothing but an “advanced” CDS instance with a bunch of first-party apps deployed there, so it is still possible to use Power Apps “per app” and “per user” plans in those instances.

Which is exactly what the diagram below tells us, and that’s coming straight from the D365 licensing guide:

[Diagram: licensing options from the D365 licensing guide]

However, that diagram looks at the licensing in a sort of exclusive manner, and it does so mostly from the custom entities standpoint. Also, it does not mention Power Automate licensing in any way.

Still, it’s a great starting point, but I was wondering if there might be a case where a Team Member license would need to be combined with the Power App and/or Power Automate licenses. And/or whether it’s actually possible to replace a Team Member license with a Power App license.

This might be especially relevant now when team member license enforcement is almost there:

https://docs.microsoft.com/en-us/dynamics365-release-plan/2020wave1/dynamics365-sales/license-enforcement-users-new-team-member-licenses

Hence, here is my take on it.

In both cases, we just need to see which use rights are covered by each of those license types, and here is how it goes:

Team Members:

  • Read-only access to the restricted entities
  • Dedicated first-party applications only (no custom apps, though can still extend those first-party)
  • Power Automate use rights within D365 app context
  • 15 custom entities per covered app module
  • Access to the employee self-service portal (ability to create cases)

Power Apps plans:

  • Custom applications only (no first-party apps); a limited number of apps with the “per app” plan
  • Read-only access to the restricted entities
  • Unlimited custom entities
  • Power Automate use rights within app context
  • The “per app” plan is linked to the environment

Power Automate plans:

  • Power Automate use rights for the general-purpose Flows (per user or per flow)

For the most part, it’s clear how to mix and match license assignments for the same user account. Except for the two questions I mentioned above.

Would we ever need to combine Team Member license with a Power App license?

The only scenario I can see clearly is when there is a user who needs access to the employee self-service portal, yet that same user needs to go beyond the Team Member license limitations (15 custom entities per module, read-only access to accounts, etc.). The Team Member license will give access to the self-service portal, and everything else will come with the Power App license.

Can we replace a Team Member license with a Power App license?

This is really the same question, just asked differently. We might not be able to use first-party apps; however, a model-driven app is nothing but a set of application components and an associated site map. We can always create a custom app which will include required components, and we can customize a site map. That will still be within the use rights of the Power App license.

There is a caveat here, though. In terms of pricing, Team Member license is comparable with the “Power App Per App” plan. However, while a Team Member license can be used in multiple D365 instances, a “Power App Per App” plan is linked to a single environment. From that standpoint, the answer to the question above depends, of course.

Other than that, a Power App license seems to be more powerful than a Team Member license – after all, Power App users will be getting access to all those additional non-restricted entities, including the “account” entity. Also, Power App users will be able to utilize two custom apps, which may include a model-driven app and a canvas app (assuming the “per app” plan).

Finally, what about the Power Automate?

The most important thing to remember is that you can only use generic Flows with the dedicated Power Automate plans. Any use rights provided by Power App/Dynamics licenses will only cover your users for the app-specific scenarios. This is vague language, but, just to give you an example: a Team Member license would give you access to the dedicated Dynamics apps, and those apps have only one data source (CDS). If you wanted your Team Member users to start using Flows which connect to SQL, you’d need to throw Power Automate licenses into the mix.

PS. As usual with licensing, everything I wrote above is based on my interpretation of the licensing guides. You can use it as a starting point, but do your own research as well.

Working with the grid onLoad event

Sometimes, I get a feeling that, as far as Dynamics/Model-Driven JavaScript event handlers are concerned, everything has already been said and done. However, I was recently asked a question which, as it later turned out, did not really have the kind of simple answer I thought it would (meaning, “just google it” did not work).

How do you refresh a form once a grid on the form has been updated?

For example, imagine there is a subgrid on the form, and, every time a new record is added to the subgrid, there is a real-time process that updates the “counter” field. By default, unless there are further customizations, I will have to hit the “refresh” button to see the updated value of my counter field. Otherwise, I will keep seeing 0:

[Screenshot: the counter field showing 0]

Which is not correct, since, if I clicked “Refresh” there, I would see “2”:

[Screenshot: the counter field showing 2 after a refresh]

Apparently, some customization is in order, and, it seems, what we need is an event that will trigger on update of the subgrid. If there were such an event, I could just refresh the form to update the data.

This seems to be a no-brainer. For the form refresh, there is formContext.data.refresh method:

https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/clientapi/reference/formcontext-data/refresh

For the subgrid, there is the addOnLoad method for adding event handlers:

https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/clientapi/reference/grids/gridcontrol/addonload

So, it seems, I just need to use addOnLoad to add a listener function, and, from that function, I need to refresh the form.

Except that, as it turned out, there are a few caveats:

  • When you open a record in your app, the form’s onLoad will fire first. This is a good place to call addOnLoad for the grid
  • The grid’s onLoad event will follow shortly – but only if there is some data in the grid. It won’t happen for an empty grid
  • Every time a linked record is added to the grid or removed from it, the grid’s onLoad event will fire. Even once the last record has been removed from the grid and the grid is empty after that
  • Once formContext.data.refresh is called, form data, including all grids on the form, will be refreshed. The form’s onLoad won’t fire, but the onLoad event for the grid will fire at that time (although, see the note above about empty grids). This may lead to an infinite recursion if another formContext.data.refresh is called at that time

 

Strangely, I could not find a simple solution for that recursion problem. At some point, I figured I could just add a variable and use it as a switch. So, once in the grid’s “onload” event, I would check if it’s set to true, and, if yes, would reset it and do nothing. Otherwise, I would set it to true, and, then, would call formContext.data.refresh.

This was supposed to take care of the recursion, since I would be calling “refresh” every second time, and, therefore, the recursion wouldn’t be happening. And this was all working great until I realized that, when the form opens up initially, there is no way of telling if grid’s onload will happen or not (since that depends on whether there are any linked records – see the list above). Which means I can’t be sure which is the “first” time and which is the “second” when it comes to the grid events.

Eventually, I got a solution, but it now involves an API call to check the modifiedon date. Along the way, it turned out that the “modifiedon” date we can get from the attributes on the form does not include seconds. You can try it yourself – I was quite surprised.

On the other hand, if we use Xrm.WebApi.retrieveRecord, we can get modifiedon date with the seconds included there.

What I got in the end is the JavaScript code below.

  • gridName should be updated with the name of your grid control
  • onFormLoad should be added as an onLoad event handler for the form
  • onFormSave should be added as an onSave event handler for the form

 

Basically, this script will call refresh whenever modifiedon date changes after a grid control has been reloaded. Keeping in mind that I’d need to compare seconds as well, I am using Xrm.WebApi.retrieveRecord to initialize lastModifiedOn variable in the form onLoad.

And, then, I’m just using the same API call to verify if modifiedon has changed (and, then, to call “refresh”) in the grid onLoad event.

Finally, I need onFormSave to reset lastModifiedOn whenever some other data on the form is saved. Otherwise, once the form comes back after “save”, all grids will be reloaded, and, since modifiedon will be updated by then, an additional refresh will follow right away. Which is not ideal, of course.

 

var formContext = null;
var lastModifiedOn = null;
//Name of the subgrid control - update it to match your form
var gridName = "Details";

function onFormLoad(executionContext)
{
    formContext = executionContext.getFormContext();
    //Can't use
    //lastModifiedOn = formContext.getAttribute("modifiedon").getValue();
    //since that value does not include seconds
    //Also, this needs to be done for "updates" (form type 2) only
    if (formContext.ui.getFormType() == 2) {
        Xrm.WebApi.retrieveRecord(formContext.data.entity.getEntityName(), formContext.data.entity.getId(), "?$select=modifiedon").then(onRetrieveModifiedOn);
    }
}

function onFormSave()
{
    //Not to refresh on save
    lastModifiedOn = null;
}

function onSubgridLoad(executionContext)
{
    Xrm.WebApi.retrieveRecord(formContext.data.entity.getEntityName(), formContext.data.entity.getId(), "?$select=modifiedon").then(onRetrieveModifiedOn);
}

function onRetrieveModifiedOn(result)
{
    if (lastModifiedOn != result.modifiedon) {
        var doRefresh = false;
        if (lastModifiedOn == null) {
            //First call (from onFormLoad): attach the grid handler
            formContext.getControl(gridName).addOnLoad(onSubgridLoad);
        }
        else {
            //modifiedon has changed since the last check - refresh the form
            doRefresh = true;
        }
        lastModifiedOn = result.modifiedon;
        if (doRefresh) formContext.data.refresh();
    }
}

 

Have fun with the Power!

PCF Controls solution dependencies

I definitely managed to mess up my PCF controls solution a few weeks ago – I put some test entities into that solution, and, then, I forgot to include a few dependencies. My apologies to everyone who tried to deploy that solution while I was happily spending time on vacation; hopefully, this post will help.

First of all, there are two different solutions now. In the main solution, I have those PCF controls and a plugin to support N:N.

Then, in a separate solution, I have all the test entities and forms to set up a quick demo.

Those two solutions should be imported in exactly this order, since, of course, you won’t be able to install the “demo” solution without having installed the PCF solution first.

This is a good lesson learned, though. I guess it does make sense to always create a separate solution for the PCF controls?

If you are an ISV, and you are using those controls in your own solutions, you would probably want to be able to update the PCF controls without having to update anything else.

If you are developing PCF controls internally, it’s, essentially, the same idea, since you may want to reuse those controls in various environments internally.

Although… here is what looks a little unusual. I can’t recall any other solution component that would be so independent from everything else. In the past, we used to put workflows into separate solutions to ensure we could bring in required reference data first. We might use separate solutions for a set of base entities, since we’d be building other solutions on top of that core entity model. We might use dedicated solutions for the plugins, since plugins can make solution files really big.

Still, those were all specific reasons – sometimes applicable, sometimes not. As for PCF, when all the entity names, relationship names, and other configuration settings are passed through the component parameters, we have a solution component that will be completely independent from anything else most of the time. Instead, other components in the system will become dependent on the PCF components as time goes by, so it probably makes sense to always put PCF controls into a separate solution just because of that.

A CDS security model which supports data sharing, but which is not using business units or access teams

We all know about the owner teams, security roles, access teams, field security profiles, etc. But there is yet another not-so-obvious security feature which controls access to CDS records through the “reparent” behavior on the “parental”/”configurable cascading” 1:N relationship. Why does it matter?

See, on the diagram below, even though the security role is configured to give access to the user-owned records only, my User A can still access records on the far right side of the parental entity hierarchy:

How come? Well, carry on reading…

I’ve been trying to figure out how to set up CDS security for some very specific requirements, and it has proven to be a little more complicated than I originally envisioned (even though I never thought it would be simple).

To start with, there are, essentially, only two security mechanisms in CDS:

  • Security roles
  • Record sharing

Yes, there are, also, teams. However, teams are still relying on the security roles/record sharing – they are just adding an extra layer of ownership/sharing.

There is hierarchy security, too. But it’s only applicable when there is a hierarchy relationship between at least some users in the system, and it’s not the case in my scenario.

And there is Field Security, of course, but I am not at that level of granularity yet.

There is one additional aspect of CDS security which might be helpful, and that’s the ability of CDS to propagate some of the security-related operations through the relationships:

[Screenshot: cascading behaviors configured on a relationship]

However, if you try using that, you’ll quickly find out that there can be only one cascading/parental relationship per entity:

[Screenshot: the error message]

“The related entity has already been configured with a cascading or parental relationship”

Which makes sense – if an entity had two or more “parent” entities (which would have cascading/parental relationships with this entity), the system would not really know which parent record to use when cascading “share”/“assign” operations through such relationships. The first parent record could be assigned to one user, and the second one could be assigned to another user. There would be no way for the system to decide which user to assign the child record to.

Hence, there is that limitation above. Besides, “cascading” only happens when an operation occurs on the parent record. So, for instance, if a new child record is added, it won’t be automatically shared/assigned through such a relationship.

On the other hand, there is “Reparent” behavior which happens when a child record’s parent is set. In which case the owner of the parent record gets access to the child record.

With all that in mind, the scenario I have been trying to model (from the security standpoint) is this:

  • There is a single CDS environment
  • There are functional business teams – each team corresponds to a model-driven app in the environment
  • Within each application, there is a hierarchy of records (at the top, there is a case. Then there are direct and indirect child entities)
  • There are some shared entities
  • Functional team members are supposed to have full access to all corresponding application features/entities
  • The same user can be a member of more than one functional team

This “business security” model does not map well to the business units, since a user in CDS can be a member of only one business unit, and, as mentioned above, in my case I need the ability to have the same user added to multiple “functional teams”.

One way to do it would be to micromanage record access by sharing every record with the teams/users as required. That can become messy, though. I would need a plugin to automate sharing, I would need to define, somehow, which teams/users to share each record with, etc. Besides, “sharing” is supposed to be an exception rather than a norm because of the potential performance issues.

Either way, since I can’t use multiple business units, let’s assume there is a single BU. This means there are only two access levels I can work with in the security roles:

  • User
  • Business Unit

 

[Screenshot: access levels in the security role editor]

“Parent-Child” and “Organization” would not make any difference when there is only one business unit.

I can’t set up the security role with the “Business Unit” access, since every user in that BU will have access to all data. Which is not how it should be.

But, if I configure that security role to allow access to the “user-owned” records, then there is a problem: once a record is assigned to a user, other users won’t be able to see that record.

It’s almost like there is no solution to this model but, with one additional assumption, it may still work:

  • Let’s create an owner team per “functional team”
  • Let’s create a security role which gives access to the “user-owned” records and grant that role to each team
  • Let’s keep cases (which are at the top of the entity hierarchy) assigned to the teams
  • And let’s add users to the teams (the same user might be added to more than one team)

 

Would it work? Yes, but only if there is a relationship path from the case entity to every other entity through the relationships with cascaded “Reparent” behavior.

That’s how the “regarding” relationship is set up for notes and activities out of the box (those are all parental relationships), so I just need to ensure the same kind of relationship exists for everything else:

[Screenshot: relationship behavior configured with “Reparent”]

All users who are members of those teams will get access to the cases which are assigned to the teams. And, as such, they will also have access to the “child” records of those cases.

If a new child record is created under the case (or under an existing case’s child record), that record will still be accessible to the team members because of “reparent” behavior.

So, as long as cases are correctly routed to the right teams… this model should work, it seems?

What’s unusual about it is that not a single security role will have “business-unit” (or deeper) access level, there will be no access teams, and, yet, CDS data will still be secured.

And, finally, what if I wanted to assign cases to the individual users? That would break this whole “reparent” part (since team members won’t have access to the case anymore). However, what if there were a special entity which would be a case’s parent, and what if, for each case, the system would create a parent record and assign it to the team? Then, all of a sudden, “reparent” would kick in, and all team members would get access to the cases AND to the child records of those cases, even if those cases were assigned to the individual users. Of course, this would mean I’d have to reconfigure the existing parental relationship (which is between cases and “Customers”). But, in this scenario, it seems to be fine.

PS. As for the “reparent” behavior, you will find some additional details at the link below. However, even though that page talks about “read access” rights, it’s more than that (write access is also inherited, for example):

https://docs.microsoft.com/en-us/dynamics365/customerengagement/on-premises/developer/entity-relationship-behavior#BKMK_ReparentAction

Application Insights for Canvas Apps?

In the new blog post, Power Platform product team is featuring Application Insights integration for Canvas Apps:

https://powerapps.microsoft.com/en-us/blog/log-telemetry-for-your-apps-using-azure-application-insights/

It does look great, and it’s one of those eye-catching/cool features which won’t leave you indifferent for sure:

[Screenshot: Application Insights dashboard]

Although, I can’t get rid of the feeling that we are now observing how “Citizen Developers” world is getting on the collision course with the “Professional Developers” world.

See, every time I’m talking about Canvas Apps, I can’t help but mention that I don’t truly believe that real “citizen developers” are just lesser versions of the “professional developers”.

If that were the case, a professional developer would be able to just start coding with Canvas Apps right away. Which they can, to an extent, but there is always a learning curve for the professional developers there. On the other hand, “Citizen Developers”, unless they have a development background, may have to face an even steeper learning curve, and not just because they have to learn to write functions, understand events, etc. It’s also because a lot of traditional development concepts are starting to trickle into the Canvas Applications world.

ALM is, likely, one area where the two worlds are not that different. Since it’s all about application development, whether it’s low-code or not, the question of ALM comes up naturally, and, all of a sudden, Citizen Developers and Professional Developers have to start speaking the same language.

As is the case with the Application Insights integration. I don’t have to go far for an example:

[Screenshot: Application Insights setup instructions]

“Microsoft Azure Resource”, “SDK”, “telemetry”, “instrumentation key” – this is all written in a very pro-developer-friendly language, and, apparently, this is something “Citizen Developers” may need to learn as well.

Besides, using Application Insights to document user journeys seems to make sense only when we are talking about relatively complex canvas applications which will live through a number of versions/iterations, and that all but guarantees a “citizen developer” must have some advanced knowledge of the development concepts to maintain such applications.

Well… this has mostly been off-topic so far, to be fair.

Getting back to Application Insights: we just had a project go live with a couple of “supporting” canvas applications, and I am already thinking of adding Application Insights instrumentation to those apps, so I could show application usage patterns to the project stakeholders. That would certainly be a screen they might want to spend some time talking about.

So, yes, I’m pretty sure we just got a very useful feature. If anything is missing, it’s probably a similar feature for the model-driven apps.

TCS Tools v 1.0.23.0

It’s been a while since I last updated TCS Tools – there are a few reasons for that, of course. First of all, the most popular component in that solution has always been the “attribute setter”, which essentially made it possible to do advanced operations in the workflows using FetchXml:

  • Using FetchXml to query and update related child records (or to run a workflow on them)
  • Using FetchXml to query and update related parent record (or to run a workflow on that record)

 

With the Power Automate Flows taking over process automation from the classic workflows, most of that can now be done right in the Flows, though there are a couple of areas where TCS Tools might still be useful:

  • Real-time classic workflows (since there are no real-time Flows)
  • Dynamics On-premise

 

With the on-premise version, it’s getting really complicated these days. I know it exists in different flavors (8.2 is, likely, the most popular). Unfortunately, I have no way of supporting on-premise anymore.

This only leaves real-time classic workflows in the online version as a “target” for TCS Tools at the moment.

With all that said, I just released a minor update which fixes an issue with special characters not being encoded properly (for the details, have a look at the “invalid XML” comments here).

To download and deploy the update, just follow the same steps described in the original post:

https://www.itaintboring.com/tcs-tools/solution-summary/