Monthly Archives: March 2019

This or That #2: Canvas Apps or Model-Driven Apps?

There is nothing more revealing than trying to explain something you don’t quite understand! So, as I mentioned in a recent post, I had to get my head around the transformation the Dynamics CRM/XRM platform has undergone in recent years before I could start thinking about the real differences between Canvas Apps and Model-Driven Apps.

Interestingly, you may not even need to repeat the exercise if you started with Model-Driven Apps. But, if you are transitioning from Dynamics XRM, you may actually want to read that post before you proceed: From XRM to Power Platform

Anyhow, I was thinking about the best way to compare Canvas Apps to Model-Driven Apps, and I figured the diagram below should do it:

image

Is there anything that’s not clear there? I’m hoping this will explain it a bit more:

 

Essentially, it would be wrong to start comparing canvas apps to model-driven apps hoping to find a clear winner. Those are very different application styles, and they offer very different capabilities. So there might be no way of identifying a winner if we just put one against the other and try to decide.

They definitely do have something in common – they are both “applications”. But, to me, trying to compare those two application styles is like trying to compare a pickup truck to a sedan. Those are, definitely, both vehicles, but they are so different that you need to know what you need from a vehicle in order to decide. Once you know that, though, it might be easy to pick a winner. Do you need to tow a boat? It’s going to be the pickup. If you can sacrifice everything else and are mostly concerned about fuel efficiency, then you need the sedan, ideally an electric one.

And this is exactly the purpose of the diagram above. There are a few questions there which you may want to ask yourself in order to decide on the most appropriate application style. Even though, in the end, it may turn out to be a matter of preference/available resources.

 

Better support experience depends on you just as much as it depends on Microsoft

 

I had a really good experience with Microsoft support today. What made it kind of special is that I almost had a bet with a colleague who’d had a much less pleasant experience a couple of months back. Anyhow, in my case we got the issue resolved in less than one hour. In his case, it took a month, and it was, eventually, not even resolved. So, yes, I sort of won..

 


 

However, what if I said that your experience with Microsoft support (and, probably, with any other “support”) depends on you just as much as it depends on the individual support engineer and on the support process in general?

You might say that some tickets stay open for a long time, they don’t get resolved to your satisfaction, bugs are not getting fixed as fast as you’d like, and so on. So, basically, the experience is not as pleasant as you would expect.

That is exactly what I mean, though..

Let me explain.

To start with, even though I usually say that I have a .NET development background, there were also 3-4 years in my life when I was essentially working as a support team lead. Those who still remember AVIcode probably remember those times. In my case, I was in a very comfortable support environment since, to some extent, I was still a member of the dev team. Which means I had full access to the source code, I would, occasionally, manage delivery and deployment of hotfix builds, and I could actually go talk to the upper management any time I wanted to. That gives you a lot of flexibility and power to resolve client issues quickly.

Historically, there is a model where support is provided on different levels. The efficiency (and the cost) of support resources is lowest at Level 1. Resources get more expensive (but, also, more efficient) at Level 2. It goes even further at Level 3.

Apparently, the thinking is that every support request should go through a bit of a process. If Level 1 can’t handle it, Level 2 will try. If Level 2 can’t, Level 3 will get engaged. If Level 3 can’t, the product team may have to be involved.

Depending on the efficiency of the support process, there could be triggers in this process which will allow a ticket to bypass level 1, for example, and go directly to level 2. And so on.

Here is what I’m talking about:

image

Is it necessarily how all support is done? No, but it is safe to say that, especially for the more complex issues, you won’t, usually, get your ticket assigned directly to the “most-experienced know-it-all” team member who can solve your problem on the spot. This process may take some time.

Now, imagine a typical support engineer (without the superpowers of a dev team member). There are a few things you may have to keep in mind:

  • They don’t have the authority to make any changes in the product
  • They don’t have the authority to promise any fixes unless approved by the dev team
  • They don’t always have the time, ability, or even the opportunity to review product source code in the hope of understanding what’s going on behind the scenes
  • They have no idea what’s going on in your environment, so they have to understand not only what’s happening to the product, but, also, what it is that’s so special about your environment
  • And sometimes it can be a problem between the keyboard and the chair.. so it can be you or your users

 

There is a lot to consider, there are lots of different factors, so this can all get really complicated and time-consuming.

But there are a few things that can turn a potentially frustrating experience into something much more civilized. And it’s really about the expectations you have to set for yourself.

If you have not done any research, if you have not provided a concise and clear explanation of your problem (one that is clear not just to you, but, also, to somebody who has no clue about the context in which the problem is happening), and if you expect the problem to be resolved right away.. those are just the wrong expectations, so you need to either be ready for a lot of questions OR you need to start preparing your ticket descriptions better. Basically, it’s extra work in either case.

Once you’ve done the above, there is another piece of this puzzle that you may need to consider to come up with the right expectations. What are you really asking about? Is it about some error that’s happening in a certain situation which is easy to reproduce? Or is it a conceptual question which likely does not have an immediate answer and may involve some back and forth between the support engineer and the product team, possibly with the management, etc? It’s much more likely that you will have the problem resolved quickly in the first scenario.

And, finally, there are always product limitations, and there are always bugs. The trick is that those issues can’t be resolved strictly on the support side – you might feel extremely disappointed that there is a bug; however, while it’s expected that a support engineer will identify something as a bug, it’s not expected at all that they will have a fix (or even a workaround) available right away.

To that point, if you, like me, tend to ask Microsoft support to contact you by email rather than by phone.. you are actually complicating the process since you are slowing down communications.

Anyway, what is it I had a problem with today?

There was an import job stuck in the “submitted” state. In the on-prem environments I worked with for almost 8 years, this would normally be an indication of some issue with the Async Service. So, even though it was online this time, I opened a support ticket and mentioned that my import jobs were stuck in the “submitted” state, so, maybe, support could do something with the async service. And I identified it as a critical issue.

15 minutes later, our imports were up and running, even though we spent 15 more minutes in the screensharing session just double-checking things.

And I certainly learned something. Which is, if you are looking at the import file and you see “Cancelled” status reason under the System Jobs, it could be that your instance is still in the Admin mode:

image

Import jobs won’t be working in that mode.

What was that other question that took more than a month to answer? That was a conceptual one – whether a certain product follows a certain standard. The problem is, if there is no immediate and clear answer, it’s not up to the support engineers to answer such questions, so it goes to the product team, to marketing, to sales.. you name it.

So, a really long story short, when opening a support ticket, don’t just expect an immediate resolution all the time – think of what is actually possible, try to provide helpful details where you can, and, sometimes, maybe don’t even send those questions through the support channel if it’s really a conceptual question.

And, by the way, somebody else asked me after that what the best place to open support tickets for Dynamics/PowerApps is. Personally, I find it works best from the new admin center, which is at https://admin.powerplatform.microsoft.com/ :

image

Although, I think it’s still possible to open a support request from the Office admin center, but, even there, you might see a message like this:

image

Anyway, have fun, possibly don’t run into any issues that require support, but, if you do, be conscious of what to expect.

 

 

 

From XRM to Power Platform

 

I’ve been working on a presentation about Embedded Canvas Apps, and some things were not adding up. Mostly, I just could not pinpoint what exactly the difference between Canvas Apps and Model-Driven Apps is.

Yes, I know how to build a Canvas App. Yes, I know how to build a Model-Driven App. What sets them apart, though?

You can’t just say that you have no control over the app layout in a model-driven app. Sure you do.. some.

You can’t say that there is no coding in the Canvas Apps. There is certainly some. Just think of all those event handlers and functions.. if you still disagree, have a look at the PowerApps Canvas App Coding Standards and Guidelines – the name says it all.

So, at some point I found myself looking at this page:

image

And then it clicked.

The main problem for me used to be my “Dynamics legacy”. I kind of thought of model-driven apps as the whole Dynamics XRM experience.. probably because of how model-driven apps were introduced first:

image

https://powerapps.microsoft.com/en-us/blog/introducing-model-driven-apps/

Thing is, the concept of Model-Driven Apps is not really a re-branded version of “Dynamics XRM”.

It’s just that particular piece of the Dynamics solution that, at some point, used to be called “Apps”, and, then, became “Model-driven Apps”:

image

As a platform, Dynamics XRM did not just turn into Model-Driven Apps, though. It lent some of its capabilities to the CDS, others to the Model-Driven Apps, and yet others to the PowerPlatform in general. Those 3 have already built more capabilities on top of what XRM used to be, but, the way I see it, here is what has really happened:

 

 

Does it make sense? Let me know if it does not.

But I kind of deviated from the original question. What’s the difference between Canvas Apps and Model Driven Apps?

Given the diagram above, and keeping in mind that Model-Driven Apps do not include CDS (it’s more like they work on top of CDS), it’s probably like this:

Canvas Apps are, generally, more specific. You have to build the UI layout essentially from scratch, but you have more control there.

There is a benefit in using “frameworks” when having to build interfaces for lots of data, and that’s what Model-Driven Apps are all about – they provide that kind of framework/template. And you are paying for that by losing the ability to do some low-level customizations. Funny enough, there is one feature of Model-Driven Apps that I would probably expect to see on the Canvas Apps side, and it’s the ability to use JavaScript.

Either way, if you look at it that way, Embedded Canvas Apps make a lot of sense since they do provide that ability to create “custom dedicated UI” within the model-driven apps.

Anyway, that’s enough rationalizing for today:)

Have fun!

If you want to add a timeline to the form, make sure you’ve enabled notes for your custom entity

 

When creating a new entity, I often don’t enable notes, activities, or any other “optional” features by default. And it always bites me in the end.

Why? Because sooner or later comes that moment when I need to add activities to my entity. And I still don’t want to add notes:

image

That’s perfectly legal from the solution designer standpoint. Except that the timeline control just does not like it at all and refuses to show up on the “Insert” tab of the form designer:

image

Of course, once notes have been enabled, I can add the timeline to the form:

image

There are things in Dynamics that I just can’t get used to, it seems, and this is certainly one of them.

Especially since the reverse works fine – if I enable notes without enabling activities, I can still add the timeline control to the form:

image

image

Well, maybe now that I have written a blog post I’ll remember to enable notes before spending a few minutes looking for that missing timeline control ribbon button!

Onload events for form updates

 

Form notifications can be a little sensitive as I found out this morning when the notification I expected to show up was totally misbehaving. Of course it was Monday morning, so my first thought was that, maybe, that notification just had a really good weekend and simply was not willing to get back to work, but, as it turned out, it actually needed some help to get back on its feet again.

Essentially, my form notification was supposed to tell the user that there are some validation errors. Those errors would be stored in a text field which, in turn, would be populated from a plugin. So the whole process would look like this:

image

Here is the piece of plugin code just so you can see it:

image

And here is a piece of the related javascript:

image

So the plugin would put validation results into the field, and the script would look at that field to either display a message or to clear the notification (the script is registered on both form OnLoad and field OnChange).
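For the sake of completeness, here is a minimal sketch of what such a script could look like. The field name (itb_validationerrors) and the notification id are hypothetical – the real names are only visible in the screenshots:

```javascript
// Shows or clears a form notification based on a plugin-populated text field.
// Field name and notification id below are hypothetical placeholders.
function onValidationErrorsChange(executionContext) {
    var formContext = executionContext.getFormContext();
    var errors = formContext.getAttribute("itb_validationerrors").getValue();
    if (errors) {
        // The plugin put something into the field - surface it to the user
        formContext.ui.setFormNotification(errors, "ERROR", "validationMsg");
    } else {
        // No validation errors - make sure the notification is gone
        formContext.ui.clearFormNotification("validationMsg");
    }
}
```

The function would be registered on both the form OnLoad and the field OnChange events, with “Pass execution context as first parameter” checked.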

All good? Well, no.. That part with “clearFormNotification” worked fine in the form “OnLoad” but did not want to work with the field OnChange whenever there were no validation errors.

So I started looking around and found this tip:

https://crmtipoftheday.com/90/simulate-onload-event-for-form-updates/

Which basically confirmed the suspicion that had started to form by that moment.

This kind of workaround with OnChange works only if there is some value in the field. It does not work, though, if the field value can be set to null as a result of the update operation.

So, as per the tip above, I’ve introduced a new boolean field, updated the plugin to always set that field to true or false (never to null), and attached the OnChange event handler to that field instead of the original one.
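With that workaround in place, the client-side piece could look more or less like this – again, a sketch with hypothetical field names (itb_haserrors for the new boolean, itb_validationerrors for the original text field):

```javascript
// The plugin now always writes true or false (never null) into the boolean
// field, so its OnChange reliably fires after every server-side update.
// The message text itself still comes from the original text field.
function onHasErrorsChange(executionContext) {
    var formContext = executionContext.getFormContext();
    var hasErrors = formContext.getAttribute("itb_haserrors").getValue();
    var message = formContext.getAttribute("itb_validationerrors").getValue();
    if (hasErrors && message) {
        formContext.ui.setFormNotification(message, "ERROR", "validationMsg");
    } else {
        formContext.ui.clearFormNotification("validationMsg");
    }
}
```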

My notifications are feeling well and happy now!

image

This or That #1: PowerApps solution designer vs Classic solution designer

You probably know that we have two solution designers these days. Here is a related “this or that” question, then. There is a new designer, and there is an old designer.. Which one are you going to use moving forward, and why?

Btw, would you rather see and listen? Just scroll down to the bottom – there is a video there.

The old one offers classic solution designer experience:

image

The new one is aligned with the PowerApps interface:

image

Up until very recently, I was thinking of the new solution designer as an emerging tool that would sooner or later overtake the classic designer. However, I sort of thought this would happen gradually, by introducing new “convenience” features such as the WYSIWYG form designer, while, in general, the classic designer would stay relevant until it’s, finally, disabled (and, at that time, the new PowerApps designer would have to cover the functionality the classic designer used to cover).

What I apparently did not realize is that the product team may have decided to take a slightly different approach. There are features which have been missing even in the classic designer, so they would have to be implemented in both versions. Now, would it really make sense to keep adding those new features to a tool that’s probably going to disappear anyway? Or would it make sense to pivot at this point and say that those new features will be implemented in the new tool only?

Actually, I am not sure if this is how the PowerApps product team is looking at it, but, judging from what has recently been delivered, it might well be how it is:

image

Yes, we do have an out of the box interface to create Autonumber fields now without having to resort to the XrmToolBox or SDK calls. But.. We only have that new field type in the new interface. There is no such option in the classic UI:

image

But, then, is there anything that’s completely missing from the new designer? There might be other things, but I figured I’d just dig in the more advanced areas, so I looked for the Field Security settings, and, from what I can see, there is no way of enabling/disabling Field Security on a field in the new designer yet:

image

So, things are getting really interesting because there is no definitive answer. There is certain functionality that’s available in the new designer and that is not available in the classic one. But there is, also, some functionality that’s available in the classic designer and is not available in the new one.

Personally, I think it only makes sense to start using the new designer experience wherever I can, simply because it’s certainly the version that’s going to stay, and, also, it’s the version that seems to be getting new functionality first:

image

Just one note.. Why did I write “first” above, as if I were thinking that the classic designer might still be updated? See, everything is great with the new designer except that it’s not available on-premises. So it could be that, at some point, those new features will still be added to the classic designer as well. We’ll see.

If you were looking for a recording of this episode, here you go:

PS. Have a look at the other “This or That” episodes!

Good old validations, and why the plugins/workflows are still alive and kicking

 

It’s interesting how, with all the power Flows can offer these days (we can even customize SharePoint integration with Flows), there is still one scenario which just cannot be covered without the classic plugins/workflows.

Namely, anything that requires synchronous (or real-time) processing needs a plugin or a workflow.

For example, what if you wanted to intercept all create/update operations in such a way that nothing gets saved when the data does not validate?

The diagram below would not reveal anything new to the folks who have been working with Dynamics for a while, but, if you are just getting into the model-driven applications development, this may be something to keep in mind:

 

image

Although, what complicates this a little bit is licensing. Validations are great, but, if there are users utilizing your data in a Canvas Application on Plan 1, adding a plugin/real-time workflow to an entity exposed to such an app would require those users to go up to Plan 2.

Anyway, just to make this post “complete”.. how do you display server-side validation errors in the interface?

Here is how you can do it from a real-time workflow: https://survivingcrm.com/2013/11/using-real-time-workflows-to-show-error-messages/

And here is how you can do it from a synchronous plugin: https://docs.microsoft.com/en-us/powerapps/developer/common-data-service/best-practices/business-logic/use-invalidpluginexecutionexception-plugin-workflow-activities

Creating custom folder structure for Sharepoint integration using Flows

We’ve come across an interesting problem recently. Imagine that you have a few different business units, and you want SharePoint integration to work in such a way that all the documents within each business unit would be shared, but there would be no sharing between the business units.

This is not how it normally works in Dynamics, though, so the options are:

 

The permissions replicator is great, and it would do the job, but, just to explore that custom implementation option a little more, what can we actually do?

If it were on-premises, or if it were a few years ago, we’d have to think about a plugin for this.

But, since we have Flows, I figured it would be interesting to give it a try. And it worked.. Not without a few quirks, but I did not have to write a single line of code. Which is unusual to say the least.

Here is how it happened, step-by-step.

1. Creating a solution

First, we need to create a solution.. I’ll be creating a flow in the next step, so I figured flow.microsoft.com would be just the right place to create this solution:

image

2. Creating a flow

I will add more details about the flow down below, but, at this point, here is a screenshot of what the flow will look like once it’s built:

image

3. Preventing default document location logic

There is a bit of a problem with this whole approach. What if somebody created a case and jumped to the “related documents” right away? The flow above is asynchronous, so it won’t create the document location immediately. If a user navigates to the related documents too quickly, Dynamics will create a default document location.. so we need to prevent that from happening somehow.

That’s what this real-time workflow will be doing:

image

It will run “on create” of the document location records, check if that record is regarding a case AND if it does not have a keyword in the Name field (which will be added to all locations created from the Flow above), and, then, will stop the workflow as cancelled.

Honestly, the error message is going to look somewhat nasty:

image

After a minute or so that grid will get back to normal, but, of course, the users will have to refresh that screen:

image

Besides, most of the time Dynamics users would not create a case just to start uploading documents right away, so there is a good chance that error would not be showing up in 90% of the cases. Still, there seems to be no way around it except the notorious “user training” (as in “yes, if you navigate to the related documents too fast, you will see an error message”).

Actually, the screenshot below shows exactly what’s happening in Dynamics as a result of the flow execution.

And, of course, a folder gets created in SharePoint:

image

4. What about that flow, though?

Step one is a trigger. We need to create a document location for every new case.

Step 2 will get the case owner’s user record from Dynamics – that’s needed to get to the business unit.

Which is what step 3 will do – it will query the case owner’s business unit record from Dynamics.

image

In the next step, the flow will query document location record by name:

image

This is part of the setup, actually.

For every business unit, an administrator will have to do 2 things in advance:

  • Create a SharePoint library and set up permissions in such a way that only BU users will have access to that library
  • Create a document location record in Dynamics pointing to that SharePoint library (and having the same “name”)

 

Here is an example of the SharePoint library:

image

And here is a corresponding document location record:

image

So, basically, the flow will find that location by name (treecatsoftware is the business unit name) and, then, will use it as the parent location when creating document locations for the cases in that business unit.

Finally, we need a foreach in the flow. Technically, there is supposed to be only one record that matches the condition (name = business unit name). I just don’t know how to reference exactly the first one in the record set, so I figured I’d go with foreach:

image
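For what it’s worth, the Flow expression language does have a way to pick just the first item from a record set, so the foreach could, in theory, be avoided. This is only a hedged sketch – the action name (List_records) and the attribute name are placeholders for whatever the actual “list document locations” step and field are called:

```
first(body('List_records')?['value'])?['sharepointdocumentlocationid']
```

Still, the foreach works just as well here, since the result set is expected to contain a single record anyway.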

This first action above will create a folder in SharePoint for the new case. It’s an HTTP-request SharePoint connector action, and here is the URI:

_api/web/GetFolderByServerRelativePath(decodedurl='/sites/Dynamics/@{body('GetBU')?['name']}')/AddSubFolderUsingPath(decodedurl='@{triggerBody()?['incidentid']}')
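Just to make the structure of that URI a bit easier to see, here is the same thing sketched as a plain JavaScript function. The “/sites/Dynamics” site path comes from the post; buName and caseId stand for the two Flow expressions (the business unit name from the GetBU step and the incident id from the trigger):

```javascript
// Rebuilds the SharePoint REST URI used by the Flow action above.
// buName - the business unit name (from the GetBU step)
// caseId - the case (incident) id from the trigger body
function buildFolderUri(buName, caseId) {
  return "_api/web/GetFolderByServerRelativePath(decodedurl='/sites/Dynamics/" +
    buName +
    "')/AddSubFolderUsingPath(decodedurl='" + caseId + "')";
}
```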

And the second action will create a document location in Dynamics:

image

Here is a link to the exported solution (I did not try to import it anywhere yet): http://itaintboring.com/downloads/SharepointFolders_1_0_0_0.zip

Unexpected goodies just keep showing up!

I am not sure what’s been happening to the PowerApps product team recently, but, whatever it is, I think a “thank you” is in order.. it just seems they are really working on tidying up the UI these days.

Just today, I noticed a couple of new(?) things, and it feels like.. an unexpected gift. No, really.

I can scroll through all the records in the lookup control now. It will open up with some records loaded, and, then, it will load more data as I keep scrolling. This is it – dark days of the classic UI are over!

image

And there is one less reason to use optionsets in place of lookups now.

Actually, I don’t know when this change happened. Has it been there for a while and I just did not see it?

What I do know, though, is that only a week ago I made this post:

https://www.itaintboring.com/dynamics-crm/quick-tip-get-yourself-a-bit-more-space-when-enabling-dynamics-365-app-for-outlook/

And I’m pretty sure the post itself does not have anything to do with the fact that it’s not an issue anymore and the grid is not truncated:

image

Well, it’s been a good day, it seems :)

Using FetchXml in the Flows

 

Having established (for myself.. and not without help from other folks) that the CDS Connector is the way to go when working with Dynamics or CDS data, I quickly found that the ListRecords action does not support FetchXml. Or, at least, I don’t see a parameter for that:

image

That said, WebAPI endpoint does support fetch execution:

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/webapi/web-api-query-data-sample#bkmk_fetchxml

So this seems to be a current limitation of the CDS Connector (intentional or not).

Technically, FetchXml is more powerful than OData when it comes to building complex queries, traversing relationships, etc. I am not sure what the future holds for FetchXml, since it’s a proprietary “query language” developed and maintained by Microsoft, but, for now at least, it’s a legitimate way of querying data from Dynamics and/or from CDS.

So, what if I wanted to use FetchXml to validate some advanced conditions in my Flow?

Not to go too far into the complexities of FetchXml, let’s try adding a condition that verifies whether the primary contact on the account that’s just been created has any associated cases. The query would look more or less like this:

image
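Since the screenshot may be hard to read, here is roughly what such a query could look like. This is a hedged sketch against the standard CDS schema (incident, contact, account); ACCOUNT_ID stands for the id of the account that triggered the flow:

```xml
<fetch>
  <entity name="incident">
    <attribute name="incidentid" />
    <!-- the case customer must be the primary contact of the new account -->
    <link-entity name="contact" from="contactid" to="customerid">
      <link-entity name="account" from="primarycontactid" to="contactid">
        <filter>
          <condition attribute="accountid" operator="eq" value="ACCOUNT_ID" />
        </filter>
      </link-entity>
    </link-entity>
  </entity>
</fetch>
```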

And, if that query returns any cases at all for the account, I’d like my flow to add a note to the account description field.

So, to give you an idea of what my flow will look like, eventually, here is a screenshot:

image

The first step is straightforward – it’s a CDS Connector trigger which will kick in whenever an account record is created.

The second step is where I’ll actually run FetchXml.

The third and fourth steps are all about parsing the results and verifying the condition.

For the second step, even though it’s probably possible to do the same with a pure HTTP connector, I figured I’d use the HTTP with Azure AD (Preview) connector instead. It turned out it takes care of the authentication already, so I don’t need to worry about that (with a note, though, that I am not exactly sure what’s going to happen when/if I add this flow to a solution and export/import it to another Dynamics CE instance.. will try it later).

There seem to be two tricks about that connector, and it took me a little while to figure them out (I’d almost given up, to tell the truth). When you are adding it to the flow, you’ll be presented with this screen:

image

I used the “Invoke an HTTP request” action in my flow (and you’ll see below how that action was set up), so, let’s say you’ve selected that action.

Depending on something in your environment (and I am not sure what it is exactly), you will see one of these two screens after that:

Screen A:

image

Screen B:

image

If you see Screen A right away, that means your HTTP with Azure AD connector has picked up a connection, and it’s not necessarily the right connection. In my case, I quickly discovered (once I tried running the flow) that my action was permitted to access SharePoint but not Dynamics – that must have something to do with Azure AD OAuth, although I am not 100% sure of what’s happening behind the scenes yet.

So, if, somehow, you run into this issue, make sure to verify the connection your action has picked up, and, if it’s not the right one, create a new connection:

image

This will bring you back to Screen B from above. Once you are there, fill in the textboxes like this:

image

Do not put “.api.” in the URLs as you would normally do when accessing the WebAPI endpoint. Just use the root URL of your Dynamics instance.

After that, sign in, and you’ll be back to Screen A, with the right connection this time.

The rest should be straightforward..

Set up the action like this:

image

  • Choose the GET method
  • Make sure to use the root instance URL for the request (do not add “.api.” in the middle)
  • Add FetchXML to the URL (download it from the advanced find, update as required, etc.)
  • Don’t forget to update the filter condition in the FetchXML so that the request is using the correct account id
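Put together, the request URL could be composed like this – a sketch only, with a placeholder instance URL and a trivial fetch query; the entity set name and the v9.0 endpoint version are assumptions:

```javascript
// Builds a WebAPI fetchXml request URL. Note the root instance URL,
// without ".api." in it, as described above; the fetchXml query goes
// into the URL as an encoded query string parameter.
function buildFetchXmlUrl(instanceUrl, entitySet, fetchXml) {
  return instanceUrl + "/api/data/v9.0/" + entitySet +
    "?fetchXml=" + encodeURIComponent(fetchXml);
}

// Hypothetical instance URL and a minimal fetch query, just for illustration
var url = buildFetchXmlUrl(
  "https://yourorg.crm.dynamics.com",
  "incidents",
  "<fetch><entity name='incident'><attribute name='incidentid'/></entity></fetch>"
);
```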

 

Next, add Parse JSON action like this:

image

You can just run the same URL that you put into the HTTP action directly in the browser to generate sample data, and, then, you can feed that sample data to the “use sample payload to generate schema” tool of the “Parse JSON” action.

And the last step – just add a condition:

image

I used an expression to get the total number of records in the result set (the length function) – you can see it in the tooltip on the screenshot. But, in retrospect, FetchXml aggregation might work even better for this scenario.
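The expression from the tooltip is, presumably, something along these lines (assuming the parse step is named Parse_JSON – the actual action name may differ):

```
length(body('Parse_JSON')?['value'])
```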

Do something when the condition evaluates to true (or false), and the flow is ready:

image

Time for a test drive? Here we go:

image

image