
Skipping process initialization on create

We had a scenario where we did not want to initialize a BPF (Business Process Flow) when a record is created – basically, those were records which nobody would really need to start looking at for a while. Until that moment, they would be staying "dormant" in the system, and, then, once they had been marked as "ready", the process would really start.

So, if we did start the BPF right away, we would end up with incorrect process durations, since all those records could spend days or weeks in the "dormant" state and then be processed in just a few days once activated.

Anyway, somehow we had to prevent Dynamics from automatically initializing a BPF for those records.

Turned out there is an extremely simple solution that’s been mentioned here:

https://blogs.msdn.microsoft.com/crm/2017/10/03/handling-business-process-flows-bpfs-on-record-create/

We just had to set processid to Guid.Empty on all those records initially, and that’s just the right task for a simple plugin:

public void Execute(IServiceProvider serviceProvider)
{
    IPluginExecutionContext context = (IPluginExecutionContext)
        serviceProvider.GetService(typeof(IPluginExecutionContext));

    // On pre-create, Target holds the record that's about to be created
    Entity target = (Entity)context.InputParameters["Target"];

    // Guid.Empty tells Dynamics to skip BPF initialization for this record
    target["processid"] = Guid.Empty;
}

Register that plugin on pre-create of the entity, and that’s it.. Every new record will get Guid.Empty for the processid, so Dynamics will skip initializing the BPF.
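
By the way, when such a "dormant" record is finally marked as "ready", the BPF can be started programmatically with the SetProcessRequest message. A minimal sketch (the entity name, recordId, and bpfId are placeholders – bpfId would be the id of the BPF's workflow record):

using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

// Start the BPF for a record that was created without one
var request = new SetProcessRequest
{
    Target = new EntityReference("new_myentity", recordId),
    NewProcess = new EntityReference("workflow", bpfId)
};
service.Execute(request);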

Parental relationship does not need “assign” permission to propagate the assignment

This might be something to keep in mind.. If you have two entities with a parental relationship between them, your users may still be able to re-assign child records to other users even if they don't have "write/assign" permissions on the child entity.

In the example below, the Sales Person role does not give "write" and/or "assign" permissions on the Test SLA entity:

image

So a SalesPerson can’t do anything with Test SLA directly:

image

But they can still go to the parent record which is currently assigned to me:

image

And re-assign that record to themselves:

image

And here we go – that child “Test SLA” record is, now, re-assigned to the Sales Person user as well:

image

Dynamics: What are your Reporting Options

 

When looking at the reporting options in Dynamics, it sometimes feels that there are just way too many, and, even though there are lots of good ones, there is always something that seems to be missing.

Just so we could start somewhere, let’s see what Microsoft has to say:

https://technet.microsoft.com/en-us/library/dn531183.aspx

In other words, we are talking about SSRS, Dashboards, and Power BI. Realistically, though, dashboards are nothing but layout pages where we can add charts and views, so we should really be talking about charts and views there. Also, Power BI is somewhat limited in the on-prem environments, and SSRS is somewhat limited in the online environments.

But that’s not all. There are other options, too. After all, the purpose of reporting is to give additional insight into the data we have in the system, and, come to think of it, there are at least a few more options:

  • Advanced Find
  • Views
  • Excel export
  • Excel/Word templates
  • Power Query in Excel
  • Within the SSRS category, we have the Report Wizard and custom SSRS reports
  • It’s also possible to use Dynamics data with Cognos BI or other external tools
  • And, on top of that, there are calculated and rollup fields in Dynamics which we can use in conjunction with all the other options (probably more so with the views/charts)

 

So, how do we choose? I have compiled a table which, even if not very detailed, might get you started (and, also, might put this comparison in perspective):

image

Just a few clarifications:

  • Minimum Query Limitations – there are always some limitations, mostly because of FetchXML (but, also, because of how different tools treat fetch); SSRS and Power BI will have fewer of those. See the example right after this list
  • Clickable – this is all about having clickable links in the report to open records directly in Dynamics
  • Simple Drill-Down – it just comes with the dashboards. Looks like no other tool/approach can easily beat it
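
To give one example of those FetchXML limitations: aggregate queries are subject to the aggregate record limit (50,000 records by default in on-premise environments), so even a simple grouped sum like the one below will fail once there are too many records behind it:

<fetch aggregate="true">
  <entity name="opportunity">
    <attribute name="estimatedvalue" alias="total" aggregate="sum" />
    <attribute name="ownerid" alias="owner" groupby="true" />
  </entity>
</fetch>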

 

Now, if it looks like dashboards are winning this race – not necessarily. Dashboards are really good for a lot of things, but, in a way, it's a half-cooked tool. If you compare Power BI visualizations with the dashboards, you'll see that Power BI is more powerful. The same goes for SSRS. Actually, Power BI (and SSRS) will likely beat dashboards anywhere – dashboards will be offering some basic option, and Power BI/SSRS will be offering an enhanced alternative.. except when it comes to the "drill-down" feature.

How do you choose, then? Is it on a case-by-case basis, or would you just say "go with Power BI" (for example) these days?

Plugin development: don’t use Context.Depth to prevent recursions!

It's been said a number of times in different blogs that context.Depth may have side effects, but I am starting to think we should actually ban the practice of using context.Depth altogether.

It's almost impossible to predict plugin execution paths once your solution gains any level of complexity, but, even in the relatively simple scenario below, you may quickly start noticing how your data becomes inconsistent over time:

image

That validation plugin at the bottom will run when Entity 3 is being updated through the UI. And it will not run when Entity 3 is being updated from one of the other plugins (each of those may run in response to some other events).

So, basically, you may either have to repeat the validation logic in each and every plugin affecting Entity 3, or you may have to come up with some other way to avoid recursions. The easiest solution would be to carefully control which attributes are updated every time and which attributes are configured to trigger the plugin – see the sketch below.
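
For example, here is a minimal sketch of that idea (all the names are hypothetical): register the validation plugin with filtering attributes, and, in the plugins that update Entity 3, send only the attributes which have actually changed, so the validation step does not fire on unrelated updates:

using Microsoft.Xrm.Sdk;

// Build an update that contains only the attributes which actually changed,
// so steps registered with filtering attributes don't fire needlessly
static Entity BuildMinimalUpdate(Entity current, Entity desired)
{
    var update = new Entity(desired.LogicalName, desired.Id);
    foreach (var attr in desired.Attributes)
    {
        if (!current.Contains(attr.Key) ||
            !object.Equals(current[attr.Key], attr.Value))
        {
            update[attr.Key] = attr.Value;
        }
    }
    return update;
}

// Usage: service.Update(BuildMinimalUpdate(currentEntity3, desiredEntity3));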

Problem is, if you don't do that, fixing those issues in an existing solution can quickly turn into a nightmare. When developing new functionality, you will be assuming that the validations are working, so you might not even bother to test some of them until it's already too late and your data has become inconsistent. At which point you'll have to fix it somehow, explain the consequences to the users, etc.

So take your time to craft the validation criteria carefully, don’t use Depth, and you should be able to save yourself from those troubles.

Field Service: What does negative quantity mean?

From the inventory tracking perspective, finding out that you actually have a negative quantity at the warehouse might lead to some interesting experiences. I mean, it would be nice if we could sell things which are not there yet. And you may find yourself in that kind of situation when working with Field Service:

image

This is not necessarily a bad thing, and, in reality, what you have in the system will rarely be 100% in sync with the actual inventory, at the very least because of the recording delays. But it's interesting that the Field Service solution handles this a bit inconsistently – basically, how those negative quantities are handled depends on the situation.

For example, eventually we can get negatives:

image

If this happens, such products won’t be showing up on the transfer screen:

image

Even more – if I try creating an inventory transfer “manually” in this situation, I’ll get an error message like this:

image

So there are validations. And, still, I can go to an existing purchase order product receipt and update the quantity there:

image

For example, if I set it to 1 on the screenshot above, I’ll get the numbers updated right away:

image

(Looks like those calculations are not straightforward when it comes to negative numbers – I was expecting to see –10.95 after reducing the purchase order product quantity by 4)

Anyway, point being, in some scenarios you may get negative quantities, so don’t panic if that happens, have a look at all the data that might have affected the numbers, and, then, add some validations to the process if you need those.

Field Service: playing hide and seek with the products

 

I was setting up products for Field Service and made a bit of a rookie mistake.. Which resulted in the same product being available in some places but not in others.

See, on the following screenshot I have a few mugs in the Main warehouse:

image

However, when I tried creating a new purchase order, I could not, really, find any mugs:

image

Turned out that, even though those mugs were already showing up at the Main warehouse (so the system did allow me to create inventory adjustment records), I had not set up the product correctly.

It was still in the “draft” state, and that’s why it was not showing up in some views. Publishing the product

image

did take care of the issue:

image

And, on a related note, if you were wondering (like I was) what the purpose of the product type field is

image

Here is what the user guide would tell you:

image

I am not sure this definition explains all the details, but, since the field is not mandatory, you might want to keep in mind that some lookup controls in the Field Service solution use filtered views where the filtering happens on that field (and some do not use filtering at all). For example, you will see a product in the list of purchase order products only if that product is either an inventory or a non-inventory product:

image

Setting up the dev process: can we automate configuration change tracking?

 

It’s not unusual that we need to know what has changed since the last time we did anything in the Dynamics environment. As I mentioned in the previous post, we have some manual and half-manual options, but it would be nice to have a bit more automation there.

For example, there is a change log screenshot below – if we could get that kind of change log created automatically, that might be a good start:

The solution below is based on the original CRM Comparer tool:

https://docs.microsoft.com/en-us/previous-versions/dynamics-crm4/developer-articles/dd442453(v=crm.6)

Which had a reincarnation here:

https://archive.codeplex.com/?p=crmcomparer

And I just uploaded it (with some changes) to github:

https://github.com/ashlega/CRMComparer

It’s possible that the tool will not recognize some of the newer solution components (such as applications, for example), but we can work on that later.

If you follow setup instructions from the link above, you can, basically, do the following:

  • Configure the tool to connect to your instance of Dynamics
  • Create a solution in Dynamics and add all components you are interested in to that solution (make sure to keep maintaining the solution moving forward)
  • Create a scheduled task to run the following command periodically: ChangeTrackingTool.exe SolutionName
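
For instance, on a Windows machine, a daily task along these lines would do (the task name, path, solution name, and schedule are just an illustration):

schtasks /create /tn "Dynamics Change Tracking" /tr "C:\Tools\ChangeTrackingTool.exe TrackingTest" /sc daily /st 02:00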

 

Every time that scheduled task runs, the tool will go to Dynamics, download the solution, and compare the contents of that solution with what was downloaded previously.

For example.. If you look at the screenshot above, you’ll see that ita_firstentity was created that time. Now let’s go to the solution in Dynamics, let’s add an attribute, and let’s put it on the form:

Here is how the form looked before:

And here is how it looks after adding another attribute – notice New Attribute on the form:

So now let’s start the tool and see what happens:

It does take a little bit to do the comparison, but, in the end, here is what shows up in Dynamics:

1. There is a new change log for the “TrackingTest” solution

2. And, if I open that record, I can see the details

In other words:

  • A new attribute on the ita_firstentity was created (ita_newattribute)
  • Information Form for the ita_firstentity was updated
  • And, more specifically, a row was added to that form

 

And what kind of row was that? Let’s open that particular component record:

If you have ever seen customizations.xml, the XML on the screenshot should look somewhat familiar, since, basically, it is part of the customizations.xml.

So give it a try – let me know what you think!

Setting up the dev process: Change Tracking options

One of the problems with Dynamics solution development is that there is no change history. Anyone with sufficient permissions can go to Dynamics, add an attribute or update an optionset, and there will be no record of it anywhere.

There are a couple of ways we can deal with that, but let’s have a look at what’s available from the market place first.

There is a change tracking solution for Dynamics 365 provided by Microsoft:

https://blogs.msdn.microsoft.com/crminthefield/2017/10/23/new-from-microsoft-labs-change-tracking-solution-for-dynamics-365-released/

When it was first published, I thought: this is it, we've got it covered now. Realistically, though, it probably comes somewhat short of real change tracking. The way it works is that it registers a couple of SDK message processing steps on the Publish/PublishAll messages:

image

And, of course, there is a related plugin. All the changes are reported in the Change Tracking entity:

image

This is helpful but, unfortunately, it only gives us high-level details of what was published. For example, I just added a new field to the entity, but all I can see in the change tracking is that the entity was published:

image

And we can’t publish an attribute separately, so that’s just about as much information as we can get from the system in such cases.

There is, also, another limitation: the "Publish All" message does not produce any meaningful results:

image

Looking at the screenshot above, all I know is that a user initiated PublishAll, but exactly what was published as part of that request is not mentioned at all.

Those limitations are discussed in more detail in the thread below (more in relation to the Publish/PublishAll messages, but that's what is being used in this solution):

https://stackoverflow.com/questions/39265795/crm-plugin-for-publish-and-publish-all-messages

That said, it may still be useful to know that some components were published and, potentially, to get a bit more detail about exactly which components were updated.

An alternative option might be to maintain a change log manually. Maybe as a spreadsheet.. or we might create a solution with a couple of entities:

  • Change Log
  • Change Log Component

So we might use the first entity as a bucket for each set of changes, and, then, we might store the more specific details in the Change Log Component records.

It might look like this:

image

(You can download that solution from github: https://github.com/ashlega/CRMComparer/blob/master/sourceCode/Solution/ChangeTracking_1_0_0_0.zip)

Now those are all manual options but, while writing this post, I did recall the good old CRM Comparer tool, which has kept showing up on the horizon (and disappearing) since the CRM 4 times:

https://docs.microsoft.com/en-us/previous-versions/dynamics-crm4/developer-articles/dd442453(v=crm.6)

Even in its original form it might still be helpful – if it still worked. But what if we could integrate it with the solution above?

That’s coming in the next post..

Dynamics – setting up the dev process, Part 2

Frankly, I got a bit stuck on this. As far as Dynamics solution development process is concerned, I’ve seen quite a few different scenarios. Some of them were more mature than others, but none of them were without problems.

Merge problem aside, you may have multiple Dynamics organizations with one org per developer, or you may have just a single development org; you may or may not have a dedicated QA/UAT/Staging environment; you may or may not have source control; you may or may not have automated testing; you may be following Waterfall, or you may be following Agile..

And the choice is not, always, yours. For example, depending on the licensing situation, and, also, on the infrastructure and project limitations, you might or might not be able to spawn multiple Dynamics organizations and/or instances. This is not to mention that clients purchasing Dynamics are not necessarily expecting any development to be involved, so they might not even be thinking of provisioning source control at all.

Maybe there is just no one process that fits all, then, so it probably does not make sense to look for the Holy Grail here, but it may still make sense to look at what tools and approaches we have at our disposal and how to use them so the whole process works better, not worse (if and when we have those tools available).

For example, I've already mentioned the "merge" problem before, but, come to think of it, it's a bit of a made-up problem.

Imagine the environment with only one organization, which is your production organization. If every consultant were working in the same organization, there would be no merge problem at all. Sure that would lead to a whole set of other issues related to concurrent work, but there would be no need for merging.

We introduce the merging issue once we choose to have multiple Dynamics organizations in our environment, and we do that because we want to avoid different consultants stepping on each other's toes. Which seems to be a rather straightforward answer to the problem of concurrent work, but, apparently, it brings another problem.

Point being: depending on what we need more in each particular situation, we may have to resort to different tools and approaches. So what are those, what problems do they solve, and what problems do they introduce?

Here is my uncategorized list of those tools (and related notes) then.


1.    The methodology

Is it going to be Agile, or is it going to be Waterfall?

From the Dynamics perspective, a lot of that is about the amount of customizations and configuration we'll have to keep in the pre-production environments before we can release them to production. It's literally impossible to follow Waterfall when we only have one organization (production), so this assumes we'll have at least two organizations. Which is reasonable – even in the online model, we will usually have a Sandbox available for this.

But it does not end there. Imagine that we have features which will be delivered at different times. For example, there could be a module that is going to cost us a couple of months of development, but there will be a few smaller modules that we will be developing in parallel. Different consultants will be working on those.

If there is any level of intersection between those modules, and we choose the Waterfall approach where only a fully-functional product will ever be delivered, we may not be able to deliver those smaller modules to production, since we'll have to wait those two months. Yes, we may do the longer-term development in a separate organization, but we'll end up with that dev organization being completely out of sync with production, so that's just introducing other problems.

Still, if your users don’t want to deal with half-cooked functionality, maybe Waterfall is the way to go, and, then, you may have to plan your releases accordingly to avoid too much merging.

However, if the organization you are working in is open to a more agile approach, it might consider allowing those features which are still in development to be delivered to production before they are fully ready. That may require some extra work from the team to ensure that the end users don’t get confused, so there may be more emphasis on setting up security roles, delivering demos, and providing training. All of that is going to take time from your dev team and from the end users, though.

And what's my personal favourite? I think Dynamics is not meant for Waterfall – everything is just way too fluid and dynamic there, so it may be better to take a more agile approach from the beginning. Mind you, the client will have to make an effort, too. Really, it's all about getting all configurations and customizations at least to the integration environment as soon as possible, to make sure no one gets out of sync too much. Worst case scenario – anything up to and including the staging environment can be part of the agile process, and delivery to production may happen in a more waterfall-ish way, once certain milestones have been reached.


2.    One Dynamics organization vs multiple organizations

The main problem with having one organization only is that everyone working there has to be mindful of the work others are doing. The main problem with having multiple organizations is that there is no easy way to merge configuration changes from different development organizations in Dynamics.

In other words, it’s almost as if there were some amount of pain and all you could do is just distribute the pain between those two problems (concurrent work and merge).

Still, there are some ways to make it easier.

As far as concurrent work is concerned, it’s often possible to assign different pieces of work to different consultants. If there is not a lot of interference, concurrent work might not be an issue at all. If there is some interference, those consultants may just have to talk more often.

On the other hand, if there is a piece of functionality that has to be developed in isolation, the best way to do it might be to create a separate organization, do the development there, and merge the results into the integration environment after that.

Does it mean that all development should be done that way? Not really – merging is not always simple, and you might not want to mess with it too much.

Let’s consider a few examples.

a)    We need to add a new field to the account entity, put it on the form, and add it to the views

In this scenario, there is no need for a separate organization. A consultant might do this work directly in the integration environment, and, in all likelihood, other consultants won't be affected by this change.

b)    Compared to the previous scenario, what if we also needed to add a plugin / workflow for the business logic?

If that were a new plugin, we might be ok, but there is a catch. Sometimes we need to debug the plugins – we may need to show an error message, to attach the profiler, etc. Assuming somebody else is working with the entity we are adding the plugin to, these debugging activities can affect their work. Does it mean we need a separate organization?

Possibly, and the good thing is that plugins are relatively easy to merge (source control tools to the rescue), so it's the least painful merge problem we can get. On the other hand, we may also try to continue working in the integration environment and just add a condition to the plugin so that it only fires for our user account or for one particular record. Something like this might be useful:

if (context.UserId != new Guid("<your user id>")) return;

Or, if it were a workflow, we might just check if it’s running against a specific test record.

You would just need to remember to remove all those special conditions once everything is ready.

c)    Now what if there already were a plugin and we would want to update that plugin instead of creating a new one?

If that plugin were only touching the entities which nobody else would be working on for now, it might still be ok to do development directly in the integration environment.

Chances are, though, you might want to do this in a separate organization (if you can) whenever there is any level of development interference since, when updating an existing plugin, development and debugging very likely can't be isolated from the existing functionality if done directly in the integration environment.

Another way to work around this problem would be to plan carefully. Split the work between consultants in such a way that there is no interference, and your problem is solved. Can you split the work that way? Possibly not all the time, but, quite often, you should be able to.

d)    What if you wanted to delete an attribute?

That can be a tricky scenario. There might be data stored in that attribute. If you delete the attribute, you will lose the data, and you will lose the audit history, too. That might or might not be an issue. If it is, it might be better to keep the attribute and mark it somehow instead (add "x" to the display name, for example – that will move the attribute down the list of attributes in the advanced find, and, also, you will know that it's marked for deletion).

In all those scenarios, we can change the complexity of the development process by adjusting the number of dev organizations and, also, by re-arranging the work so that different consultants do not interfere with each other since the dependency between those is more or less like this:

  image

This puts more emphasis on the planning that project managers/team leads should be doing and, also, on the overall communication.. Even if the work has not been planned properly, different team members should, usually, be able to re-arrange it to avoid interference.

Now, what I wanted to illustrate above is that we may not have to decide, at the start of the project, whether we'll be using one organization per developer or a single dev environment. Instead, creating a dedicated dev organization might be considered just one of those tools we should be using in some situations. Even more, it might not be an organization per developer – it might make more sense to have an organization per new feature/set of related features.

3.    Solution components

When you do have multiple organizations, and when you need to bring over the changes from your dev organization to the integration environment (be it a dedicated integration environment or production), solutions and their components are what you can use to package those changes:

image

Add exactly what you need, then export your solution from dev and import to the target organization. Basically, this is the way of not having to do a merge when bringing over the changes. If it’s all been planned carefully, there is a good chance you will only have to bring over those components that nobody else has worked on while you were working on them, so there might be no need for merge at all.

4.    The source control

If you are a developer, you probably can't imagine your life without a source control solution. It's your source code repository, it's your history tracking, it's your teamwork, it's your merge tool, it's your automated builds and deployments..

Yet if you are a client looking for a CRM tool, source control might be one of the last things on your mind. And, if you are a product team behind Dynamics, I’m guessing you are focusing more on delivering what the clients expect and what they need. Hence, as useful as it is when it comes to development in general, source control integration is missing in Dynamics.

There are some things you can still put into the source control:

–    Custom code for the plugins/workflows/custom actions
–    Web resources

There are, also, tools that may help you automate the process of deploying those components to Dynamics directly from Visual Studio:
https://marketplace.visualstudio.com/items?itemName=DynamicsCRMPG.MicrosoftDynamicsCRMDeveloperToolkit
You don’t have to use the development toolkit, though.
Where the disconnect starts is when it turns out that not everything can go to the source control. How do you put an entity form in the source control, for example?

There is a difference between plugin assemblies and entity forms, though. Dynamics does not store plugin assembly’s source code, but it does store form definitions. So, in a way, Dynamics itself is the source control for some of the components. It’s just a much reduced implementation of the source control since there is no versioning and/or history.

Tracking the history for those changes may end up being a manual effort. Literally you may need to create an entity in Dynamics where every developer would describe the changes they are making, why they are making them, and any related change requests/bug numbers. You may use a shared spreadsheet on the Sharepoint site, for example, to do the same. This will create some overhead, so you will have to keep encouraging the practice, but it will all pay off eventually. Sooner or later, you will run into a bug or a feature, and you’ll need to trace the history of that bug/feature. The kind of log I just mentioned will be one of the few tools you’ll have at that moment.

5.    SolutionPackager tool

It is an interesting concept, and it’s a good tool:
https://msdn.microsoft.com/en-us/library/jj602987.aspx

The idea is that you can split Dynamics solutions into subcomponents, store those subcomponents in separate files, and, then, put those files in the source control.
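
For example, extracting a downloaded solution file into individual component files might look like this (the file and folder names are, of course, just placeholders):

SolutionPackager.exe /action:Extract /zipfile:TrackingTest.zip /folder:.\TrackingTest

And /action:Pack does the reverse – it rebuilds the solution zip from those extracted files.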

Does it help with the source control? Yes and no.

Yes, we can store everything in the source control now. But what do we do with those individual XML files? There is no editor which allows us to update the form XML directly in Visual Studio, for example. Which means we can only make those changes in Dynamics, then use the solution packager to create an XML.. then try merging that XML with the original file, automatically or manually.

That last part looks a bit fishy – to start with, we are not supposed to mess with the customizations.xml so much:
https://msdn.microsoft.com/en-us/library/gg328486.aspx

But, once we start updating those individual files manually, we will actually be updating customizations.xml (which is what all the individual files will be packaged into). Whether it will work, when it will break, how convenient it is, in general, to edit form XML (and other XML) in a text editor, whether you can safely use checked-in files to roll back the changes.. It's not to say that SolutionPackager should not be used on the projects – it's just questionable whether it's that helpful.


6.    My scenario

In either case, it seems this is what normally works best for me:
 

image

Basically, I don't mind making changes in the integration environment directly – why not.. Just think of it as the source control for all your components, and keep in mind that some of the stuff the source control would do for you automatically will have to be done manually in this case.

It may make sense to take a nightly backup of the solution file in this scenario, just so that you have something to fall back on when you need it.
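
That backup is easy to automate with the ExportSolutionRequest message. Here is a minimal sketch, assuming a small console app that already has an IOrganizationService connection in the service variable (the solution name and the file name pattern are placeholders):

using System;
using System.IO;
using Microsoft.Crm.Sdk.Messages;

// Export the unmanaged solution and save it with a date-stamped file name
var request = new ExportSolutionRequest
{
    SolutionName = "TrackingTest", // the solution's unique name
    Managed = false
};
var response = (ExportSolutionResponse)service.Execute(request);
File.WriteAllBytes(
    $"TrackingTest_{DateTime.Now:yyyyMMdd}.zip",
    response.ExportSolutionFile);

Schedule that to run nightly, and you will always have a solution file to fall back to.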

Then, once everything is ready in the integration environment, you can push it to the higher environments using a solution file.

7.    How do you “delete” an attribute?

So how do you handle "deletes"? A few years ago I would have said: don't delete any attributes until at least a few months (or years) have passed since you hid them from the forms/views. People change their minds all the time, and your system stakeholders are people, too. But it probably depends on the situation.

Anyway, when using unmanaged solutions, one approach to it would be:

–    Never delete from the lower environment till you have deleted the attributes from the higher ones (so, do it in production first)
–    To start with, mark the attributes being deleted – for example, use “x” prefix for the display name. Remove such attributes from the views, forms, workflows, plugins, scripts, business rules, etc. Check the dependencies, make sure none are left. Do it in the integration environment first.
–    Move those changes to production.
–    Once you get those x-s in production, decide when you want to delete them, and if you want to do it at all. Deletions are final, and you may need to keep all attributes for the possible data audits.
–    If, eventually, you decide to delete such attributes, do it in production first. You should be able to do it easily since such attributes would have no dependencies anymore. Once it's done in production, do the same in the lower environments. Don't forget to update your solution log file.

8.    And what about testing?

That's another painful area. It's almost impossible to manually test Dynamics solutions from start to end for every new "release", and, yet, it's extremely easy to break something by changing the business logic in the plugins, by modifying the security roles, or even by removing a value from an option set.

Unless you are ready to mix UAT and production so that your production environment is what you are, sort of, using for testing, and unless you have an unlimited supply of testers on the project, you may have to start thinking of automation.

There are different strategies – you might be using unit tests with a fake context, for example, but that's not an equivalent of user testing.

The only automated test solution I have seen so far that comes anywhere close to actual user testing is this one:

https://github.com/Microsoft/EasyRepro

It's not something just any tester can use.. It's, actually, more like a test framework for developers, since we have to code the test scenarios there. There can be quite a bit of maintenance involved since, most of the time, you'll be using XPath expressions to identify HTML elements, and those can change with every new version. Although, since it's an open-source solution, and the framework itself has been maintained by Microsoft so far, it might be safe enough to use.
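
To give you an idea, a test scenario there looks roughly like this. The snippet below is modelled on the samples in the EasyRepro repository (the URL, credentials, and navigation targets are placeholders, and the exact class/method names may differ between versions):

using System;
using Microsoft.Dynamics365.UIAutomation.Api;

// Open a browser, log in, and walk through the UI the way a tester would
using (var xrmBrowser = new XrmBrowser(TestSettings.Options))
{
    xrmBrowser.LoginPage.Login(new Uri("https://yourorg.crm.dynamics.com"),
        username, password); // SecureString credentials (placeholders)
    xrmBrowser.Navigation.OpenSubArea("Sales", "Accounts");
    xrmBrowser.Grid.SwitchView("Active Accounts");
    xrmBrowser.Grid.OpenRecord(0);
}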

On the other hand, you can easily come up with all sorts of test scenarios – you can start with UI testing/validation and, then, add server-side data validations (since it's coding anyway.. why not check how the data looks in Dynamics after you run a specific UI action?)

Most importantly, you can run such tests automatically every night if you wish, and that alone might be a great contribution to the stability of your solutions.

9.    References and must-reads

a)    Microsoft Dynamics CRM2011: CRM Solution Lifecycle Management
https://www.microsoft.com/en-ca/download/confirmation.aspx?id=39044

b)    A framework for automating xRM development processes (what I find really interesting about it is the "delete" feature which comes with the so-called extended solutions. Why is it all done in F#, though?)
https://github.com/delegateas/Daxif

c)    Solution Packager tool
https://msdn.microsoft.com/en-us/library/jj602987.aspx

d)    Developer toolkit for Dynamics

https://marketplace.visualstudio.com/items?itemName=DynamicsCRMPG.MicrosoftDynamicsCRMDeveloperToolkit

e)    EasyRepro – automated UI testing for Dynamics

https://github.com/Microsoft/EasyRepro

Dynamics – setting up the dev process

 

The project I’ve been working on slowed down a little bit in the last few days, so I suddenly got some spare cycles.. And what do you do with the spare cycles? You spend them on something, preferably on something useful. So.. I started to do this:

image

It’s actually nice to take a break and just spend some time thinking.

Especially when the topic you have to think about is Dynamics, and, when it’s all about setting up proper development process.. there are certainly things to consider.

So, to start with, I came up with the following diagram:

image

It’s arguable whether it’s, actually, a standard process, but it’s a reasonable approximation.

And the important part which breaks it for Dynamics is that red “Merge” rectangle.

See, there are so many things in Dynamics solutions that may have to be merged.. Just think about the entities, attributes, security roles, field security profiles, views, dashboards, applications, sitemaps, workflows, etc. We know that Dynamics supports merge, to an extent, for managed solutions. But, when it comes to unmanaged solutions, and that's what development teams have to deal with, there is little to no support. If Dynamics had some kind of out-of-the-box merge resolution process, that would be great but, since it does not, the only other option we have is to merge solution files as regular XML files.

And maybe we could do it, somehow, but this is where Microsoft warns us that we probably should not go too far, since editing the customizations.xml file is, for the most part, not supported:

image

https://msdn.microsoft.com/en-us/library/gg328486.aspx

From that standpoint, even the solution packager can't help that much. Here is another diagram I drew:

image

Yes, the solution packager can split solution files into individual components, and we can store them in the source control. But what do we do when we need to merge the updates applied by two different developers to the same individual component? Strictly speaking, we are stuck in that situation since, as I mentioned above, we are not supposed to edit the XML. We can still try doing that, but that's, actually, a much more complicated process compared to the merge we normally do for C# code, for example. You have to understand the structure of those files, and, even when you do, there is no guarantee you won't mess something up if you just keep merging the files.

This is why, in the end, we either need to assume there will be a "manual" merge, which is, really, all about re-applying the same configuration changes in the target environment manually, or we should try to avoid the very possibility of a merge by using some sort of master configuration environment (for example, have a look at this post: http://gonzaloruizcrm.blogspot.com/2012/01/setting-up-your-development-environment.html )

That creates another problem, though. Either of those approaches assumes that all the configuration changes will be implemented in Dynamics. Which is fine until such time when you need to find out what those changes were, when they happened, and, possibly, who actually made them. That's what source control solutions such as TFS or Git can do but, again, Dynamics does not support TFS/Git natively.

Some of that is doable with the solution packager, though, so would it be possible to combine either of those two merge approaches with the solution packager, so we could also track the history in the source control? In theory it should be possible, but I'll need to set up a test environment to try it.. That's for the next post – stay tuned!