Monthly Archives: February 2019

Office365 authentication for the SDK & XrmToolBox

 

One thing about Dynamics which I love and hate at the same time is that I’m constantly getting challenged. In the old days we had an on-premises system where the updates were rare, where we had access to the database, and where, to a certain extent, we were in control. At least it sometimes felt like this (and it’s such a nice memory, so let’s just pretend we never had to deal with the Generic SQL errors).

Things have changed, though, and Dynamics is now part of the huge online ecosystem which goes far beyond Dynamics or PowerPlatform. There are different concepts, principles, applications, and technologies involved there, and it’s almost impossible to know all of them.

However, and this is where that love-hate cliché comes into play, if you are working with Dynamics these days, you are almost inevitably getting exposed to the other parts of this ecosystem.

Anyway, if you think this post was too abstract so far, here is how my morning started today:

  • XrmToolBox was not connecting to the Dynamics instances
  • Deployment automation scripts were throwing errors

And, yet, Dynamics was working just fine in the browser or when we tried the Plugin Registration tool.

Besides, XrmToolBox and some console applications (which were using the SDK) were connecting just fine to the Dynamics instance hosted in another tenant.

Have I mentioned that it was all working yesterday?

Having done a few more tests, we concluded that it was probably Office365 authentication that was not working anymore.

https://docs.microsoft.com/en-us/powerapps/developer/common-data-service/xrm-tooling/use-connection-strings-xrm-tooling-connect

image
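For context, here is roughly what those scripts and console applications rely on when they use Office365 authentication – a minimal sketch with the Xrm tooling connector (the org URL and user name below are made up):

```csharp
using System;
using Microsoft.Xrm.Tooling.Connector;

// Office365 auth type in the connection string = the "legacy" user name/password flow
var connectionString =
    "AuthType=Office365;" +
    "Username=deploymentuser@contoso.onmicrosoft.com;" +
    "Password=<password>;" +
    "Url=https://contoso.crm.dynamics.com";

var client = new CrmServiceClient(connectionString);

if (!client.IsReady)
{
    // When legacy authentication is blocked by a conditional access policy,
    // this is where the failure shows up - Dynamics itself keeps working in the browser
    Console.WriteLine(client.LastCrmError);
}
```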

But why? Of course there were voices claiming Microsoft did a silent update..

This is where I should probably clarify that it is a relatively big environment, and not everyone working with Dynamics would have global administrator permissions. In other words, we don’t always know what’s happening in this tenant outside of Dynamics.

Turns out there are settings in Azure Active Directory we did not even know about. They were updated the night before by the tenant admins, and that affected our ability to use Office365 authentication.

Here is another link:

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/block-legacy-authentication

And here is a screenshot:

image

Yes, it turns out that it’s possible to disable Office365 authentication through the conditional access policies in the Azure Active Directory.

Once the required exclusions were added there, everything went back to normal, and XrmToolBox and the other tools are working again.

And the takeaway? Well, just keep in mind that Office365 authentication falls into the category of legacy authentication methods, and, so, it can be disabled through the conditional access policies. So if you want it to work for the time being, and if you have conditional access policies set up as per the link above, you’ll need to make sure those policies implement all required exclusions.

PS. Why did we have to use Office365 authentication in the first place? Normally, we have multi-factor authentication enabled for the users. So we were looking at how to bypass MFA for our tools (it would not be very convenient if we had to go through the MFA process every time we wanted to prepare a nightly build, for example), and it turned out Office365 authentication ignores MFA. Which is, probably, one reason it is considered “legacy” from the policy standpoint.

Alphabetical filtering: the same but different

When you are working with the grids in Dynamics, the bottom of the grid shows a list of letters from A to Z, so you can click one of the letters and that will quickly apply a filter to the grid.

What kind of filter, though?

After some digging around earlier today, it turned out the answer depends on whether you are using the Unified Interface or the Classic interface.

The Classic interface filters by the first sorted column

image

The Unified Interface filters by the first column (sorted or not).. or so I thought until Feb 15

image

What you see below is my original interpretation (things have changed – keep reading):

This means the behavior is slightly different between those two interfaces, although it’s probably not such a big issue. Unless you are used to the way alphabetical filtering worked before, you just need to know how it works in the Unified Interface, since that’s the one that’s here to stay.

Besides, it may actually be a more straightforward approach. When sorting is applied to multiple columns in the grid, alphabetical filtering becomes a bit confusing since it only applies to the first sorted column, so that’s just one more nuance for the users to keep in mind.

Added on Feb 15: but what about the screenshot below?

image

There is no First or Last Name column in the view, and, yet, some record does show up there once I’ve selected “F” for the filter. The only explanation is that the contact has this kind of full name:

image

It seems what’s really happening is that the Unified Interface applies filtering on the entity’s primary field, even if there is no column for that field in the grid.
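In query terms, my guess is that the resulting filter looks something like this (a sketch only – the actual query the platform issues may differ; I’m using the contact entity and its fullname primary field as the example):

```csharp
using Microsoft.Xrm.Sdk.Query;

// Rough equivalent of clicking "F" in the jump bar on a contact grid:
// the condition goes against the primary field (fullname), not against
// whichever column happens to be displayed or sorted
var query = new QueryExpression("contact")
{
    ColumnSet = new ColumnSet("fullname")
};
query.Criteria.AddCondition("fullname", ConditionOperator.BeginsWith, "F");
```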

Well.. Wondering if there is still more to it.

HTTP Connector Support for Data Loss Prevention Policies

With the recent update, HTTP connector support has been implemented, so we can now add the HTTP connector to the DLP policies. This is different from HTTP with Azure AD, btw, and this may affect you if you are using HTTP actions in the flows.

For all the finer details, have a look at the post below:

https://flow.microsoft.com/en-us/blog/introducing-http-and-custom-connector-support-for-data-loss-prevention-policies/

Here is just a summary of what it is about:

HTTP Connector Support

The HTTP actions and triggers up to this point have not been considered connectors. Due to customer feedback, we decided to go ahead and re-categorize those items so they could be subject to DLP to offer customers a greater level of flexibility and control over their environments.

We have added the option to support these triggers/actions when a policy is created or modified using the PowerShell cmdlets or given Flow Templates. More specifically, you can now manage:

  • HTTP (and HTTP + Swagger)

  • HTTP Webhook

  • HTTP Request

Activating or deactivating somebody else’s workflow when you are not a System Admin

 

We were investigating the option of using a non-interactive user account for automated deployment, and it was all going great until we ran into the error below while importing the solution:

“This process cannot be activated or deactivated by someone who is not its owner. If you contact support, please provide the technical details.”

image

Actually, I remember having seen this error before – a few years ago it would be normal to see it. Then it went away somehow, and, since I normally act as a System Administrator, I had not seen it for a while. Apparently, what we saw this time had something to do with the permissions granted to our non-interactive account – it was set up as a System Customizer.

So, just in case you run into it and to save you some time, here is the permission you’ll need to add to the System Customizer role in this scenario – took us a bit of testing to figure it out:

clip_image002

(I also noticed that it may take a minute to “apply” this “Act on Behalf of Another User” permission.. then it all starts to work)
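If it helps to picture what’s going on during the import: activating a process is, essentially, a SetState call on the workflow record, and, when that workflow belongs to somebody else, the importing account needs that “Act on Behalf of Another User” privilege to do it. Here is a minimal sketch (workflowId and the IOrganizationService instance are placeholders):

```csharp
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

// Activating a workflow record programmatically - roughly what the solution
// import has to do for every process included in the solution
var activate = new SetStateRequest
{
    EntityMoniker = new EntityReference("workflow", workflowId),
    State = new OptionSetValue(1),   // 1 = Activated
    Status = new OptionSetValue(2)   // 2 = Activated
};
service.Execute(activate);
```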

Optionsets mystery

 

There was a time when Dynamics did not want to assign optionset values in the workflows. If you are wondering how it was, the issue (and a possible workaround) is best described here:

https://crmtipoftheday.com/870/how-to-set-an-optionset-value-in-the-workflow/

So, earlier today, when somebody showed me that they still could not assign an optionset field, I was 100% certain it was exactly the same problem. It turned out it’s not necessarily the same, though, since somebody else came up and said “try a global optionset”. Surprisingly, it worked.

So, if you are experiencing the same strange behavior in the workflow designer (if you are trying to assign one optionset field from another and it does not work), you might try using the same global optionset for both fields – it works then.

image

image

Interestingly, it also works when you are using a lookup to the same entity. It’s almost as if there were a “type validation” of some kind. Apparently, it passes for the global optionset. It also passes if it’s the same attribute of the same entity (not necessarily the same record):

image

PS. As a couple of people pointed out (no less than George Doubinski and Gus Gonzalez.. I guess they took it to heart that somebody is still using local optionsets), the workflow designer has never had the ability to map local optionsets. And yes, the article below is talking about this behavior in CRM 2011:

https://www.c5insight.com/Resources/Blog/tabid/148/entryid/298/CRM-2011-Option-Set-Mapping-in-Workflows.aspx

I guess I simply managed to avoid this problem so far by using global optionsets most of the time (and by not using a lot of update steps to set optionset values in the workflows:) )
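Worth noting that this looks like a workflow designer limitation only – when the value is set in code, there is no such “type validation”. A minimal sketch (the field name and option value below are made up):

```csharp
using Microsoft.Xrm.Sdk;

// Setting a local or global optionset field through the SDK works either way -
// the underlying value is just an integer wrapped in OptionSetValue
var update = new Entity("contact") { Id = contactId };
update["new_priority"] = new OptionSetValue(100000001);  // hypothetical option value
service.Update(update);
```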

Working with Queues in Dynamics

I have always been a bit cautious of the queues in Dynamics since I could not fully understand when to use them. I don’t know, somehow my technical knowledge of what they are just did not materialize into a clear understanding of what to do with them.

That is, until, on one project, the business users just said “we will be using queues”. And, on another project, somebody asked if they should be using queues. So, if you are in the same boat, I’m hoping this post will help.

Basically, queues are called queues since you can add items to them. Yet it’s not just one specific entity type per queue – you can add different entities to the same queue (as long as queues have been enabled for the corresponding entities).. which makes queues very different from the regular entity views.

Now let’s say you have queues and there are some items in them – you can look at those queues from different perspectives:

You can look at the items you are working on:

image

Or you can look at all items:

image

And, as you can see above, you can either look at the items in all queues (which includes public queues and those private queues you are a member of), or you can look at the queues you are a member of.

There are 3 different entities involved:

  • There is an entity that can be added to the queue (case, for example)
  • There is a Queue
  • And there is a “Queue Item” entity which links an “item” to a “queue”

 

There can be only one Queue Item per record – you can’t put the same record (case in this example) in more than one queue.

This is where things get a bit confusing from the terminology standpoint. I think I’m going to use “Queue Item” for the queue item entities, and I’m going to use “item record” for the actual cases (or other records) added to the queue and referenced by the queue items.
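By the way, adding a record to a queue programmatically goes through the AddToQueueRequest message, which also illustrates the “one queue item per record” rule – if the record is already in another queue, it gets moved rather than duplicated. A minimal sketch (queueId and caseId are placeholders):

```csharp
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

// Add (or move) a case to a queue; the platform maintains a single
// queue item per record, so this never creates a second one
var addToQueue = new AddToQueueRequest
{
    DestinationQueueId = queueId,
    Target = new EntityReference("incident", caseId)
};
service.Execute(addToQueue);
```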

Also, that Queue Item entity has an interesting property which indicates whether a queue item is being worked on by somebody:

image

If an item is being worked on by you, you’ll see that item in the “Items I am working on” view.

If an item is not being worked on by anybody, you’ll see it in the “Items available to work on” view.

Once you’ve selected an item in the list, you can use the “PICK” button from the command bar. That button gives you a choice of either removing the queue item from the queue or keeping it there:

image

But, if you choose not to keep that queue item in the original queue on the screen above, which queue is it going to be in then? And is it going to be anywhere at all?

Hope you remember that every user and/or team would normally have a default queue:

image

All the records assigned to the user will automatically go to that queue (even if they used to be in a different queue). Actually, it’s also where the records will be placed in the “pick” scenario above if you choose not to keep queue items in the original queue when using the “pick” option.

But, it’s only if the entity is configured that way – cases are, by default, but they don’t have to be, so what if we re-configure that setting on the entity configuration screen (and publish etc)?

image

After some experimenting, I think here is how “PICK” really works:

  • It will assign the record (case) to you
  • Once the record (case) is assigned to you, and if you opted to remove the queue item from the queue, the queue item will either be deleted, or it will be re-linked to your private queue (depending on the entity configuration – see the screenshot above)
  • If you opted not to remove that queue item from the queue, the queue item will still be linked to the original queue
  • In either case, if there is still a queue item, that queue item will be updated so that “worked by” will be showing your user name
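For what it’s worth, in the SDK this seems to map to the PickFromQueueRequest message, where the “keep it in the queue or not” choice is just a boolean flag. A sketch (queueItemId and userId are placeholders):

```csharp
using Microsoft.Crm.Sdk.Messages;

// Roughly what the PICK button does: assigns the item to a worker;
// RemoveQueueItem = false corresponds to keeping it in the original queue
var pick = new PickFromQueueRequest
{
    QueueItemId = queueItemId,
    WorkerId = userId,
    RemoveQueueItem = false
};
service.Execute(pick);
```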

So, then, the basic scenario for working with queues is, probably, this (assuming your entities “automatically move .. to the owner’s default queue when a record is created or assigned”):

  1. Make sure your entities are configured so that records move to the owner’s default queue when a record is created or assigned
  2. Start looking at the queues screen periodically (maybe create a dashboard, too)
  3. If you want to see what you are working on, just pick “Items I am working on” view, and select all queues. That’s your current work
  4. If you need more work, look at the “Items available to work on” view, select one, and PICK it from the queue. Do not remove that item from the queue. The corresponding record will be assigned to you, it will stay in the current queue, and the queue item will be “worked by” you
  5. Once you are done working on the item, either remove it (no more work.. closing) or release it (somebody else will probably need to pick it). As an option, you can first add it to a different queue, and, then, release

 

There can be a few variations depending on the configuration settings and selections discussed above, but, basically, it’s all about working with those queue views. Those views become your “home page” since you don’t even need to look at the individual entity views to see your workload.

However, what remains is the question of control. As in, how do we ensure that an item has not been forgotten/left unattended for too long? What if nobody wants to pick an item from the queue? Or what if there is a record which, somehow, just has not been added to a queue at all?

It’s a 1:N relationship between entities and queue items in Dynamics.

So you cannot, really, set up a workflow on the case entity to watch for the queue item changes. You can create a workflow on the queue item entity, though, so, through the queue item, you may be able to update a field on the case.. or on another queue-enabled entity. Then you can use that field to run notification workflows, or to build views, etc. But you’ll have to do that separately for each queue-enabled entity, so this solution does not sound very promising.

And, actually, this is why the whole concept of SLAs was introduced, but, somehow, it’s never been discussed in the context of the queue items. Even more, we cannot enable SLAs for the queue items (unless we implement those SLAs manually using different workflows etc.)

Of course you can build a separate view to show you all queue items which have no “worked by” and which have not been modified for a few days.. But that’s not necessarily what you need either, because there could be different conditions for different types of work (for different entities).
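Just to illustrate what such a “stale queue items” query might look like (a sketch – the 3-day threshold is arbitrary):

```csharp
using Microsoft.Xrm.Sdk.Query;

// Queue items nobody is working on which have not been touched for 3 days
var query = new QueryExpression("queueitem")
{
    ColumnSet = new ColumnSet("title", "queueid", "objectid", "modifiedon")
};
query.Criteria.AddCondition("workerid", ConditionOperator.Null);
query.Criteria.AddCondition("modifiedon", ConditionOperator.OlderThanXDays, 3);
var staleItems = service.RetrieveMultiple(query);
```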

Maybe that’s the gap that has not been fully addressed yet? It seems for now that kind of “control” has to be implemented against the individual entities rather than against the queue items.

But, as usual, if you think there is another option.. let me know!

 

Testing solution layering

 

As I mentioned in the previous post, solution layering seems to be explained quite well in the solution lifecycle management whitepaper, so I figured I’d give something a try here.. and, yes, it got a bit confusing.

Here is what I did in the source instance:

  • Created a solution and added a new entity to that solution. “Name” attribute of that entity was set to 100 length. Then exported it as a managed solution
  • Created another solution, added the same entity to that solution (with all assets), set “name” property length to 150, and exported that as a managed solution, too
  • Increased version number of the original solution, updated “name” property length to be 120 this time, and added a new attribute to the entity. Then exported this version of the original solution as a managed solution again
  • And, then, imported those solutions into a new instance in exactly that order

 

And the results?

1. Step 1 (importing the original solution)

image

image

2. Step 2 (importing the second solution)

image

3. Step 3 (importing new version of the original solution)

image

image

So, as expected, the “Name” field kept its length in the target instance even though it had a different length in the updated version of the original solution. This is layering in action.. What about that new attribute, though? How did that attribute show up in the target instance if it was not included in solution B? In other words, why did the “name” attribute changes not show up, while that new attribute did, even though both changes were introduced in the updated version of the original solution?

image

The blurb above is talking about the unmanaged layer, but maybe there is more to it.. Let’s try this:

  • Update tst_newattribute length in the source (from 100 to 150)
  • Export updated version of solution B

  • Import it to the target instance

Here we go, it’s 150 in the target instance now:

image

  • Ok.. one last test. Let’s change it to 120 and prepare another updated version of the original solution (not solution B). Then import it to the target.

And of course, it’s still 150 in the target:

image
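Side note: when checking which value actually “surfaced” in the target, it may be quicker to pull the attribute metadata through the SDK than to dig through the customization screens. A sketch (the entity name below is a placeholder standing in for my test entity):

```csharp
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

// Retrieve the effective metadata of the attribute in the target instance
var request = new RetrieveAttributeRequest
{
    EntityLogicalName = "tst_testentity",    // placeholder entity name
    LogicalName = "tst_newattribute",
    RetrieveAsIfPublished = true
};
var response = (RetrieveAttributeResponse)service.Execute(request);
var maxLength = ((StringAttributeMetadata)response.AttributeMetadata).MaxLength;
```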

 

Interesting stuff, so I figured maybe there is a slightly different explanation of layering.

 

image

When looking at the timeline, it seems at least solutions are not installed in layers. Actually, maybe the whole concept of layering is a bit confusing since it’s not, really, all about layering.

It seems to be like this:

  • As the solutions are installed, they can introduce the same component multiple times
  • Each solution can have many different versions of the same component. Within each solution’s stack of versions, the most recent version of the component is what takes precedence
  • Where the same component exists in more than one solution, Dynamics decides which solution’s version of the component will “surface” based on which solution introduced that component most recently – that is, it compares when each solution first brought the component in, not when each solution was last updated

 

And, as for the unmanaged changes, they have the power to break that logic above by introducing a change “here and now”. Although, what happens when we choose to overwrite unmanaged customizations when importing the managed solution.. Does that just remove the unmanaged component version in the process?
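For reference, when the import is scripted, that choice is the OverwriteUnmanagedCustomizations flag on the import request – a sketch (the file path is a placeholder):

```csharp
using System.IO;
using Microsoft.Crm.Sdk.Messages;

// Scripted import with the "overwrite unmanaged customizations" option turned on -
// the same choice you get in the solution import wizard
var import = new ImportSolutionRequest
{
    CustomizationFile = File.ReadAllBytes(@"C:\builds\SolutionA_managed.zip"),
    OverwriteUnmanagedCustomizations = true,
    PublishWorkflows = true
};
service.Execute(import);
```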

 

Solution Lifecycle Management for Dynamics

A new version of the solution lifecycle management whitepaper for Dynamics has been published recently, so I was reading it the other night.. and I figured I’d share a few thoughts. But, actually, if you wanted to see the whitepaper first, you can download it from this page:

https://www.microsoft.com/en-us/download/details.aspx?id=57777

First, there is one thing this whitepaper is doing really well – it’s explaining solution layering to the point where all the questions seem to be answered.

1. It is worth looking at the behavior attribute if you like knowing how things work

image

There are a few examples of what that behavior attribute stands for, and I probably still need to digest some of those bits

2. There are a few awesome examples of upgrade scenarios

I’ll admit it – I could never fully understand the layering. Having read this version, I think I’m getting it now. Those diagrams are simply brilliant in that sense – don’t miss them if you are trying to make sense of the ALM for Dynamics:

 image

3. There is a lot of detailed information on patching, cloning, etc

Make sure to read through those details even if you are skeptical of this whole ALM thing. It’s definitely worth reading.

Once you’ve gone over all the technical details, you will be getting into the world of recommendations, though. The difference is that with the recommendations you have a choice – you can choose to follow all of them, some of them, or none of them at all.

There is no argument that solution layering in Dynamics is a rather advanced concept, and it’s also rather confusing.

So maybe it’s worth thinking of why it’s there in the first place. The whitepaper provides an idea of the answer:

image

But I think there is more to it. To start with, what’s so wrong with the compensating changes?

From my standpoint, nothing.

However, where I stand we are developing internal solutions where we know what we are doing, so we can create those compensating changes, and, possibly, script them through the SDK/API, etc.

What if there is a third party solution that needs to be deployed into the environment? Suddenly, that ability to delete managed solutions with all associated customizations starts looking much more attractive. We try it, we don’t like it, so we delete it. Easy.

Is it, though?

As far as “delete” strategies go, any organization that has even the weakest auditing requirements will likely revoke “hard delete” permissions from everyone, leaving only the “soft delete” (which is “deactivate”) option. And, yet, when deleting a managed solution, you are not deleting just the solution itself. If there are any attributes which are not included in the other solutions, you’ll be losing the attributes, the data in them, and the audit log data for them. And what if you delete the whole entity?

So, deleting a solution can easily turn into a project of its own if you still want to meet your auditing requirements once it’s all done and over, since, technically, you’d need to run a few tests, to analyze the data, to talk to the users, to check against the regulations.. It might be easier and cheaper to not even start this kind of project.

If you eventually decide to delete a component, managed solutions can help because of the reference counting. Dynamics will only delete a component (an attribute, for instance), once there are no references from the managed solutions. Which is an extra layer of protection for you.

Still, here is at least some of what you lose when you start deploying managed solutions:

  • You can’t restore your dev instance from a production backup, since you won’t be able to export your solutions from the restored instance (they will be managed there)
  • There are some interesting side-effects of the “instance copy” approach, btw. Imagine you have marketing enabled in production, but it’s not something you need or want to really enable in dev. Those licenses are rather expensive, after all. Still, you might want to update a form for marketing, so you would need that solution in dev. You could bring it to dev through the instance copy. Marketing in general wouldn’t work because of the missing license, but all the customizations would still be in dev that way, so you’d be able to work with those customizations
  • When looking at the application behavior in production, you have to keep layering in mind. Things might not be exactly what they look like since there can be layers over layers, and it may even depend on the order in which different solutions have been deployed
  • So.. managed? Or unmanaged? I don’t think it’s been settled once and for all yet. Although.. if the solution is for external distribution, I’d be arguing in favor of “managed” any time.

Then, there is the question of instances. To start with, they are not free anymore.


The idea of having an instance per developer works perfectly well in the on-prem environments, but, when it comes to the online instances, this is where (at least for now), we have to pay. Is it worth it? Is it not worth it? It’s definitely making it more difficult to try.

And then there is tooling

Merging is the most complicated problem in Dynamics. As soon as you start getting multiple instances, you have to start thinking of how to merge all the configurations. Of course you can try solution segmentation, but it’s not a holy grail. No matter how careful you are in making sure that everyone in your team is only working on their own changes and there is no interference, it will still happen. There will still be a need to merge something every now and then.

SolutionPackager is trying to address that need and, in my mind, does a wonderful job. It’s just that it’s not, really, enough. You can export your solution, you can identify where the changes are by looking at the updated XML files, and, then, if there are conflicting changes, you have to fire up a Dynamics instance and do the merge manually there. Technically, you are not supposed to manually merge those XML files (you can try.. but, if you wanted to, you’d have to have a very advanced understanding of those files in the first place). So it’s useful, but it’s not as if you were using source control to merge a couple of C# files.

Then, for the configuration data, you have to introduce a manual step of preparing the data. There were a few attempts to make this an automated process, including this tool of my own: http://www.itaintboring.com/tag/ezchange/ But, in the end, I tend to think that it might be easier to store that kind of configuration data in CSV format somewhere in the source control and use a utility to import that data to Dynamics as part of the deployment (and, possibly, do the same even for the dev environment). So your dev instance would not be the source of this data – it would be coming right from a CSV file.
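The core of such a utility can be surprisingly small. Here is a very rough sketch, assuming a two-column CSV (code;name), a hypothetical new_configitem entity, and an alternate key defined on the new_code field – all of those names are made up for illustration:

```csharp
using System.IO;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

// Read the configuration records from source control and upsert them into Dynamics
foreach (var line in File.ReadLines(@"config\new_configitem.csv").Skip(1))
{
    var values = line.Split(';');

    var record = new Entity("new_configitem");
    record.KeyAttributes["new_code"] = values[0];  // alternate key to match on
    record["new_name"] = values[1];

    // Upsert: creates the record if it's missing, updates it otherwise
    service.Execute(new UpsertRequest { Target = record });
}
```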

All that said, I think the whole reason for this complexity is not that Dynamics has not been trying to make things easier.

It’s just a very complicated task, given that Dynamics is a platform, and there can be many applications running on that platform in the same instance. They all have to live together, comply with some rules, follow some upgrade procedures, and not break each other in the process.

So, to conclude this post.. Of course it would be really great if some kind of API were introduced in Dynamics to do all the configuration changes programmatically, since you know how it goes with the customization.xml – manual changes are only supported to a certain extent.. Maybe it’ll happen at some point – I can’t even start imagining what tools the community would come up with then.

For now, though, make sure not to ignore this whitepaper. Whether you agree with everything there or not, there is a lot of technical knowledge there that will help you in setting up your ALM processes for Dynamics.