Monthly Archives: February 2019

Optionsets mystery


There was a time when Dynamics did not want to assign optionset values in the workflows. If you are wondering how it was, the issue (and a possible workaround) is best described here:

So, earlier today, when somebody showed me how they still can’t assign an optionset field, I was 100% certain it was exactly the same problem. It turned out it’s not necessarily the same, though, since somebody else came up and said “try a global optionset”. Surprisingly, it worked.

So, if you are experiencing the same strange behavior in the workflow designer (if you are trying to assign one optionset field from another and it does not work), you might try using the same global optionset for both fields – it works then.



Interestingly, it also works when you are using a lookup to the same entity. It’s almost as if there were a “type validation” of some kind. Apparently, it passes for the global optionset. It also passes if it’s the same attribute of the same entity (not necessarily the same record):


PS. As a couple of people pointed out (no less than George Doubinski and Gus Gonzalez.. I guess they took it to heart that somebody is still using local optionsets), the workflow designer has never had the ability to map local optionsets. And yes, the article below is talking about this behavior in CRM 2011:

I guess I have simply managed to avoid this problem so far by using global optionsets most of the time (and by not using a lot of update steps to set optionset values in the workflows:) )

Working with Queues in Dynamics

I have always been a bit cautious about queues in Dynamics since I could not fully understand when to use them. Somehow, my technical knowledge of what they are just did not materialize into a clear understanding of what to do with them.

That was until, on one project, the business users just said “we will be using queues”. And, on another project, somebody asked if they should be using queues. So, if you are in the same boat, I’m hoping this post will help.

Basically, queues are called queues since you can add items to them. Yet it’s not just one specific entity type per queue – you can add different entities to the same queue (as long as queues have been enabled for the corresponding entities).. which makes queues very different from the regular entity views.

Now let’s say you have queues and there are some items in them – you can look at those queues from different perspectives:

You can look at the items you are working on:


Or you can look at all items:


And, as you can see above, you can either look at the items in all queues (which includes public queues and those private queues you are a member of), or you can look at the queues you are a member of.

There are 3 different entities involved:

  • There is an entity that can be added to the queue (case, for example)
  • There is a Queue
  • And there is a “Queue Item” entity which links an “item” to a “queue”


There can be only one Queue Item per record – you can’t put the same record (case in this example) in more than one queue.

This is where things get a bit confusing from the terminology standpoint. I think I’m going to use “Queue Item” for the queue item entities, and I’m going to use “item record” for the actual cases (or other records) added to the queue and referenced by the queue items.
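The three entities above can be sketched as a minimal data model. This is just an illustration of the relationships (hypothetical Python classes, not the actual Dynamics schema), including the “one queue item per record” rule:

```python
# A Queue, an item record (e.g. a case), and the Queue Item that links them.
# A record can be in only one queue, so add_to_queue moves the existing
# queue item instead of creating a second one.

class Queue:
    def __init__(self, name):
        self.name = name

class QueueItem:
    def __init__(self, queue, record):
        self.queue = queue      # the Queue this item belongs to
        self.record = record    # the item record referenced by the queue item
        self.worked_by = None   # who is working on it, if anybody

def add_to_queue(queue, record, queue_items):
    """Add a record to a queue, re-linking its existing queue item if any."""
    for qi in queue_items:
        if qi.record is record:
            qi.queue = queue    # only one Queue Item per record
            return qi
    qi = QueueItem(queue, record)
    queue_items.append(qi)
    return qi
```

So adding a record that is already queued somewhere else does not create a second queue item – it just moves the existing one.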

Also, that Queue Item entity has an interesting property which indicates whether a queue item is being worked on by somebody:


If an item is being worked on by you, you’ll see that item in the “Items I am working on” view.

If an item is not being worked on by anybody, you’ll see it in the “Items available to work on” view.

Once you’ve selected an item in the list, you can use the “PICK” button from the command bar. That button gives you a choice of either removing the queue item from the queue or keeping it there:


But, if you choose not to keep that queue item in the original queue on the screen above, which queue is it going to be in then? And is it going to be anywhere at all?

Hopefully you remember that every user and/or team would normally have a default queue:


All the records assigned to the user will automatically go to that queue (even if they used to be in a different queue). Actually, that’s also where the records will be placed in the “pick” scenario above if you choose not to keep queue items in the original queue.

But that only happens if the entity is configured that way – cases are, by default, but they don’t have to be, so what if we re-configure that setting on the entity configuration screen (and publish, etc.)?


After some experimenting, I think here is how “PICK” really works:

  • It will assign the record (case) to you
  • Once the record (case) is assigned to you, and if you opted to remove the queue item from the queue, the queue item will either be deleted, or it will be re-linked to your private queue (depending on the entity configuration – see the screenshot above)
  • If you opted not to remove the queue item from the queue, the queue item will still be linked to the original queue
  • In either case, if there is still a queue item, that queue item will be updated so that “worked by” shows your user name
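That observed behavior can be sketched as follows. This is my own reading of “PICK”, not SDK code; `move_to_default_queue` stands in for the entity-level “automatically move records to the owner’s default queue” setting from the entity configuration screen:

```python
# Minimal sketch of the observed "PICK" behavior (illustrative, not SDK code).

class Record:
    def __init__(self):
        self.owner = None

class QueueItem:
    def __init__(self, queue, record):
        self.queue = queue
        self.record = record
        self.worked_by = None

def pick(queue_item, user, user_default_queue, keep_in_queue,
         move_to_default_queue, queue_items):
    """Assign the record to the user and update/move/delete the queue item."""
    queue_item.record.owner = user                 # the record (case) is assigned to you
    if not keep_in_queue:
        if move_to_default_queue:
            queue_item.queue = user_default_queue  # re-linked to your private queue
        else:
            queue_items.remove(queue_item)         # the queue item is deleted
            return None
    # if a queue item still exists, it is now "worked by" you
    queue_item.worked_by = user
    return queue_item
```

Note how the queue item only disappears in one branch – when you opted to remove it and the entity is not configured to move records to the owner’s default queue.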

So, then, the basic scenario for working with queues is, probably, this (assuming your entities “automatically move .. to the owner’s default queue when a record is created or assigned”):

  1. Make sure your entities are configured so that records move to the owner’s default queue when a record is created or assigned
  2. Start looking at the queues screen periodically (maybe create a dashboard, too)
  3. If you want to see what you are working on, just pick “Items I am working on” view, and select all queues. That’s your current work
  4. If you need more work, look at the “Items available to work on” view, select one, and PICK it from the queue. Do not remove that item from the queue. The corresponding record will be assigned to you, it will stay in the current queue, and the queue item will be “worked by” you
  5. Once you are done working on the item, either remove it (no more work.. closing) or release it (somebody else will probably need to pick it). As an option, you can first add it to a different queue and, then, release it


There can be a few variations depending on the configuration settings and selections discussed above, but, basically, it’s all about working with those queue views. Those views become your “home page” since you don’t even need to look at the individual entity views to see your workload.

However, what remains is the question of control. As in, how do we ensure that an item has not been forgotten/left unattended for too long? What if nobody wants to pick an item from the queue? Or what if there is a record which, somehow, just has not been added to a queue at all?

It’s a 1:N relationship between entities and queue items in Dynamics:

So you cannot, really, set up a workflow on the case entity to watch for queue item changes. You can create a workflow on the queue item entity, though, so, through the queue item, you may be able to update a field on the case.. or on another queue-enabled entity. Then you can use that field to run notification workflows, to build views, etc. But you’ll have to do that separately for each queue-enabled entity, so this solution does not sound very promising.

And, actually, this is why the whole concept of SLAs was introduced, but, somehow, it’s never been discussed in the context of the queue items. Even more, we cannot enable SLAs for the queue items (unless we implement those SLAs manually using different workflows, etc.)

Of course you can build a separate view to show you all queue items which have no “worked by” and which have not been modified for a few days.. But that’s not necessarily what you need either, because there could be different conditions for different types of work (for different entities).
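For illustration, such a “control” condition with a different threshold per entity type could look like the sketch below. The field names (`worked_by`, `modified_on`, `entity`) are illustrative, not the actual Dynamics attribute names:

```python
# Flag queue items that nobody is working on and that have not been
# modified for longer than the threshold configured for their entity type.

from datetime import datetime, timedelta

def stale_items(queue_items, now, thresholds_days, default_days=3):
    """Return queue items with no 'worked by' that are older than the
    per-entity threshold (in days)."""
    result = []
    for qi in queue_items:
        if qi["worked_by"] is not None:
            continue  # somebody is already working on it
        days = thresholds_days.get(qi["entity"], default_days)
        if now - qi["modified_on"] > timedelta(days=days):
            result.append(qi)
    return result
```

The catch, as mentioned above, is that in Dynamics you would have to express this per-entity logic through separate views or workflows rather than in one place.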

Maybe that’s the gap that has not been fully addressed yet? It seems for now that kind of “control” has to be implemented against the individual entities rather than against the queue items.

But, as usual, if you think there is another option.. let me know!


Testing solution layering


As I mentioned in the previous post, solution layering seems to be explained quite well in the solution lifecycle management whitepaper, so I figured I’d give something a try here.. and, yes, it got a bit confusing.

Here is what I did in the source instance:

  • Created a solution and added a new entity to that solution. The “Name” attribute of that entity was set to a length of 100. Then exported it as a managed solution
  • Created another solution, added the same entity to that solution (with all assets), set the “name” property length to 150, and exported that as a managed solution, too
  • Increased the version number of the original solution, updated the “name” property length to be 120 this time, and added a new attribute to the entity. Then exported this version of the original solution as a managed solution again
  • And, then, imported those solutions into a new instance in exactly that order


And the results?

1. Step 1 (importing the original solution)



2. Step 2 (importing the second solution)


3. Step 3 (importing new version of the original solution)



So, as expected, the “Name” field kept its length in the target instance even though it had a different length in the updated version of the original solution. This is layering in action.. What about that new attribute, though? How did that attribute show up in the target instance if it was not included into solution B? In other words, why did the “name” attribute changes not show up, but that new attribute did, even though both changes were introduced in the updated version of the original solution?


The blurb above is talking about the unmanaged layer, but maybe there is more to it.. Let’s try this:

  • Update tst_newattribute length in the source (from 100 to 150)
  • Export updated version of solution B

  • Import it to the target instance

Here we go, it’s 150 in the target instance now:


  • Ok.. one last test. Let’s change it to 120 and prepare another updated version of the original solution (not solution B). Then import it to the target.

And, of course, it’s still 150 in the target:



Interesting stuff, so I figured maybe there is a slightly different explanation of layering.


When looking at the timeline, it seems, at least, that solutions are not installed in layers. Actually, maybe the whole concept of layering is a bit confusing since it’s not, really, all about layering.

It seems to be like this:

  • As the solutions are installed, they can introduce the same component multiple times
  • Each solution can have many different versions of the same component. Within each solution’s stack of versions, the most recent version of the component is what takes precedence
  • Where the same component exists in more than one solution, Dynamics decides which version of the component will “surface” depending on which solution first introduced that component most recently
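To check that this model actually matches the experiment, here is a small sketch of it. This is my own reading of the observed behavior, not official documentation: each solution keeps a stack of versions per component, the most recent version wins within a stack, and across solutions the one that first introduced the component latest stays on top:

```python
# Replay an ordered list of (solution, component, value) import events and
# compute which value "surfaces" for each component.

def surface(events):
    first_seen = {}   # component -> solutions that introduced it, in order
    latest = {}       # (solution, component) -> most recent value in that stack
    for solution, component, value in events:
        order = first_seen.setdefault(component, [])
        if solution not in order:
            order.append(solution)        # first time this solution touches it
        latest[(solution, component)] = value  # newer version wins within the stack
    # the solution that introduced the component most recently is on top
    return {comp: latest[(order[-1], comp)] for comp, order in first_seen.items()}
```

Replaying the experiment from this post: solution A introduces “name” at 100, B sets it to 150, and A v2 changes it to 120 while also adding a new attribute – B’s 150 still surfaces for “name”, but A’s new attribute surfaces because B never introduced it.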


And, as for the unmanaged changes, they have the power to break the logic above by introducing a change “here and now”. Although, what happens when we choose to overwrite unmanaged customizations when importing the managed solution? Does that just remove the unmanaged component version in the process?


Solution Lifecycle Management for Dynamics

A new version of the solution lifecycle management whitepaper for Dynamics has been published recently, so I was reading it the other night.. and I figured I’d share a few thoughts. But, actually, if you wanted to see the whitepaper first, you can download it from this page:

First, there is one thing this whitepaper is doing really well – it’s explaining solution layering to the point where all the questions seem to be answered.

1. It is worth looking at the behavior attribute if you like knowing how things work


There are a few examples of what that behavior attribute stands for, and I probably still need to digest some of those bits

2. There are a few awesome examples of upgrade scenarios

I’ll admit it – I could never fully understand the layering. Having read this version, I think I’m getting it now. Those diagrams are simply brilliant in that sense – don’t miss them if you are trying to make sense of the ALM for Dynamics:


3. There is a lot of detailed information on patching, cloning, etc

Make sure to read through those details even if you are skeptical of this whole ALM thing. It’s definitely worth reading.

Once you’ve gone over all the technical details, you will be getting into the world of recommendations, though. The difference is that with the recommendations you have a choice – you can choose to follow all of them, some of them, or none of them at all.

There is no argument that solution layering in Dynamics is a rather advanced concept, and it’s also rather confusing.

So maybe it’s worth thinking of why it’s there in the first place. The whitepaper provides an idea of the answer:


But I think there is more to it. To start with, what’s so wrong with the compensating changes?

From my standpoint, nothing.

However, where I stand, we are developing internal solutions where we know what we are doing, so we can create those compensating changes and, possibly, script them through the SDK/API, etc.

What if there is a third party solution that needs to be deployed into the environment? Suddenly, that ability to delete managed solutions with all associated customizations starts looking much more attractive. We try it, we don’t like it, so we delete it. Easy.

Is it, though?

As far as “delete” strategies go, any organization that has even the weakest auditing requirements will likely revoke “hard delete” permissions from everyone, leaving the “soft delete” (which is “deactivate”) option only. And, yet, when deleting a managed solution, you are not deleting just the solution itself. If there are any attributes which are not included in the other solutions, you’ll be losing the attributes, the data in them, and the audit log data for them. And what if you delete the whole entity?

So, deleting a solution can easily turn into a project of its own if you still want to meet your auditing requirements once it’s all done and over, since, technically, you’d need to run a few tests, analyze the data, talk to the users, confirm with the regulations.. it might be easier and cheaper to not even start this kind of project.

If you eventually decide to delete a component, managed solutions can help because of the reference counting. Dynamics will only delete a component (an attribute, for instance) once there are no references to it from any managed solutions. Which is an extra layer of protection for you.
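The reference counting idea can be sketched like this (illustrative, not the actual implementation): a component is only physically deleted once no installed managed solution still references it:

```python
# Uninstall a managed solution and return the components that actually
# get deleted - i.e. the ones no remaining solution still references.

def uninstall(solution, installed, component_refs):
    """installed: list of installed solutions.
    component_refs: solution -> list of components it references."""
    installed.remove(solution)
    deleted = []
    for component in component_refs.get(solution, []):
        still_referenced = any(
            component in component_refs.get(other, [])
            for other in installed
        )
        if not still_referenced:
            deleted.append(component)  # last reference gone - component removed
    return deleted
```

So if two managed solutions both include the same attribute, removing one of them leaves the attribute (and its data) in place until the second one is removed as well.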

Still, here is at least some of what you lose when you start deploying managed solutions:

  • You can’t restore your dev instance from production backup, since you won’t be able to export from the restored instance
  • There are some interesting side effects of the “instance copy” approach, btw. Imagine you have marketing enabled in production, but it’s not something you need or want to really enable in dev. Those licenses are rather expensive, after all. Still, you might want to update a form for marketing, so you would need that solution in dev. You could bring it to dev through the instance copy. Marketing in general wouldn’t work because of the missing license, but all the customizations would still be in dev that way, so you’d be able to work with those customizations
  • When looking at the application behavior in production, you have to keep layering in mind. Things might not be exactly what they look like since there can be layers over layers, and it may even depend on the order in which different solutions have been deployed
  • So.. managed? Or unmanaged? I don’t think it’s been settled once and for all yet. Although.. if the solution is for external distribution, I’d be arguing in favor of “managed” any time.

Then, there is the question of instances. To start with, they are not free anymore.


The idea of having an instance per developer works perfectly well in the on-prem environments, but, when it comes to the online instances, this is where (at least for now), we have to pay. Is it worth it? Is it not worth it? It’s definitely making it more difficult to try.

And then there is tooling

Merging is the most complicated problem in Dynamics. As soon as you start getting multiple instances, you have to start thinking of how to merge all the configurations. Of course you can try segmentation, but it’s not a holy grail. No matter how careful you are in making sure that everyone in your team is only working on their own changes and there is no interference, it will still happen. There will still be a need to merge something every now and then.

SolutionPackager is trying to address that need and, in my mind, does a wonderful job. It’s just not, really, enough. You can export your solution, you can identify where the changes are by looking at the updated XML files, and, then, if there are conflicting changes, you have to fire up a Dynamics instance and do the merge manually there. Technically, you are not supposed to manually merge those XML files (you can try.. but, if you wanted to, you’d have to have a very advanced understanding of those files in the first place). So it’s useful, but it’s not as if you were using source control to merge a couple of C# files.

Then, for the configuration data, you have to introduce a manual step of preparing the data. There were a few attempts to make this an automated process, including this tool of my own: But, in the end, I tend to think that it might be easier to store that kind of configuration data in CSV format somewhere in the source control and use a utility to import that data into Dynamics as part of the deployment (and, possibly, do the same even for the dev environment). So your dev instance would not be the source of this data – it would be coming right from a CSV file
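The shape of such a utility could be as simple as the sketch below: configuration data lives in source control as CSV, and the deployment reads it and upserts each row by a key field. Here `upsert_record` is a placeholder for whatever API call you’d actually use (e.g. a Web API request), not a real SDK function:

```python
# Read configuration rows from CSV text and upsert each one into the
# target instance by its key field.

import csv
import io

def import_config(csv_text, entity, key_field, upsert_record):
    """Upsert every CSV row; returns the number of rows processed."""
    count = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        upsert_record(entity, row[key_field], row)  # create-or-update by key
        count += 1
    return count
```

The nice property of keying on a stable field is that the import is repeatable – you can run it against dev and against production and end up with the same configuration records in both.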

All that said, I think the whole reason for this complexity is not that Dynamics has not been trying to make things easier.

It’s just a very complicated task given that Dynamics is a platform, and there can be many applications running on that platform in the same instance. They all have to live together, comply with some rules, follow some upgrade procedures, and not break each other in the process.

So, to conclude this post.. Of course it would be really great if some kind of API were introduced in Dynamics to do all the configuration changes programmatically, since you know how it goes with the customization.xml – manual changes are only supported to a certain extent.. Maybe it’ll happen at some point – I can’t even start imagining what tools the community would come up with then.

For now, though, make sure not to ignore this whitepaper. Whether you agree with everything there or not, there is a lot of technical knowledge there that will help you in setting up your ALM processes for Dynamics.