Monthly Archives: November 2017

POA: What it looks like on the SQL side

The POA table is well known for its ability to cause performance issues, so, even though we can’t avoid record sharing, it’s always been recommended to keep the POA in check by reducing the number of records in it.

This is where Access Teams come into play. Sure, we could use Owner Teams as well, but there is an important difference between those two types of teams – Access Teams do not affect the security cache on the server, so they are ideally suited for record sharing (I’d say that’s exactly why they were introduced):

How does it work behind the scenes, though?

It’s really all about this part of any filtered view query:


This particular example is for the account entity, but you will find exactly the same query in the filtered views for other entities.

Here is what’s happening there:

  • SystemUserPrincipals is a table that lists all the objects through which a user can be granted access. Not sure if what I just wrote makes sense.. But imagine a user who is also a member of various teams. That user can be granted access individually or through the team membership, and SystemUserPrincipals is the table that, for each user, maintains all those additional “user principals” in a single place. For example:
  • So, if you look at both of these screenshots, it should probably make sense that, in order to determine if a user has been granted access to a specific Dynamics record through sharing, Filtered View is joining 3 tables:
    – Entity table (Account in this example)
    – SystemUserPrincipals table (to determine all the ids through which access could be granted)
    – PrincipalObjectAccess table (to actually check the access)
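
Since the query screenshot didn’t survive here, below is a rough sketch of what that sharing check looks like in a filtered view. This is simplified from memory – the actual SQL generated by Dynamics is more involved, and column names may differ slightly:

```sql
-- Simplified sketch of the sharing check in a filtered view (not the exact Dynamics SQL)
SELECT a.*
FROM AccountBase a
WHERE EXISTS (
    SELECT 1
    FROM PrincipalObjectAccess poa
        JOIN SystemUserPrincipals sup
            ON sup.PrincipalId = poa.PrincipalId
    WHERE poa.ObjectId = a.AccountId
      AND sup.SystemUserId = dbo.fn_FindUserGuid() -- the calling user
      AND poa.AccessRightsMask & 1 = 1             -- read access
)
```

The key point is the join: for every POA row, SQL has to consider every principal (user, default team, access teams, owner teams) associated with the calling user.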


What’s interesting about that is:

  • Imagine we have 10 users and we share a record with those users. That’s 10 records in the POA. Now what if we move those 10 users into a team and share the same record with that team instead? That’s only 1 record in the POA.
  • What if we create 10 teams instead of one and add every user to those teams? It seems SQL would have to look at about the same number of records to verify access (10 SystemUserPrincipals rows per user and 1 POA record this time)
  • What if we reduce the number of POA records by 50%? Let’s say there were 1 million records, and we bring it down to 500K. And, then, what if every user in the system gets added to a couple of new teams? The query in the Filtered View would now have to look at the 500K POA records for the user, that user’s default team, and those 2 additional teams. So, before any optimizations, it would be roughly the same number of records.


In other words, there are, actually, two tables whose size matters in that query, and the total number of records SQL may have to look at depends on the number of records not only in the POA, but, also, in SystemUserPrincipals. The number of records in that last table depends on team membership – as we keep creating teams and adding users to those teams, SystemUserPrincipals keeps growing.

Which means that, although it’s a good idea to reduce the POA, it’s not necessarily going to help if we go heavy on the teams/team membership, and that’s just something to keep in mind..

Happy 365-ing!

Dynamics portals–just thinking out loud..

I have to admit, I have not worked a lot with the portals. It’s always been occasional. But every time I turn to them, somehow it feels like the concept itself is almost foreign to Dynamics. Not because there is no need for the portals – there has always been, and there is still a need. More because it’s a completely different area, and, it seems, there are some natural limits to where the portals (as they are now) can reach.

As in.. Before you even start developing a portal, you need to get Dynamics. You need to learn how Dynamics works, you need to learn how it works with the portals, how to configure the portal in Dynamics. You can do no portal development without Dynamics.

When did Dynamics become a content management system, though? Are portals, even remotely, the reason for buying Dynamics in the first place?

Besides, the portals are so tightly coupled with Dynamics that there is simply no way to use them without it. And not only from the configuration perspective.. even at the data level, there is no separate storage, no independent database. The whole architecture of this solution assumes that Dynamics will serve as the back end, all the time. When did it become normal to create this kind of tightly coupled architecture? I can’t name a single CMS that has been successful this way – they are all built independently, and, once they have matured, they start providing connectors and integrations to other systems.

That’s not everything, though. What’s probably even more interesting is what has been happening to the portals in the last few years.

Not that long ago, Microsoft acquired ADX Studio, only to split that solution into 3 a year later and, essentially, give one of those versions back to Adoxio (which is, technically, the former consulting arm of ADX Studio). We may call it the community edition, but, with all due respect to the portal developers, I think there will only be so many contributors outside of Adoxio.

No, I understand that something had to be done to support on-premise clients, but, eventually, we ended up with 3 different versions of the portals, and this is at a time when even the main “branch” is still nothing but a nuisance in the CMS world.

Either way, here is what we have according to Adoxio itself:

Here is a question, though.. What does the future look like for those 2 non-Microsoft versions of the portals? Are they seriously going to compete with the version from Microsoft? I don’t think so. Are they just intermediate solutions which had to be delivered to support all those transitional customers? But there is no guarantee that an upgrade path will be provided from the community edition to the MS edition in a year or two, so this is just delaying the inevitable. Why bother?

I would understand it if the main scenario for the portals was to integrate them into the existing web sites, but it does not seem to be the case either. It actually takes an effort to create that kind of integration, so it does not look like the portals have been developed with that goal in mind.

In other words, in terms of the vision, and as far as the portals are concerned, it all seems to be a little blurry to say the least. And, yet, there they are, the portals, with quite a bit of activity happening around them. Is it just me feeling a bit confused about what the future holds for the portals?

A validation step that won’t validate

A plugin registered on a pre-validation step won’t intercept any of the changes that get added to the plugin Target in the pre-operation stage. It seems so obvious.. but it’s not something I’d normally be thinking about when developing plugins.

One of the plugin patterns most of us like using looks like this:

– We would create a pre-operation plugin

– In that plugin, we would populate some attributes of the plugin Target

That way, we can save time/resources which would, otherwise, be required if we did the same in the post-operation plugin (since we would have to call service.Update on the target entity in the post-operation):


This process works great most of the time. There is one exception.. if we want to validate something in pre-validation, that validation won’t cover whatever changes we add in pre-operation. There is no going back in the sequence above.

Imagine we had a plugin that would not allow any changes to the description field on the account entity:
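
The screenshot of the plugin didn’t survive here, but, just to illustrate, a pre-validation plugin of that kind could look more or less like this (a minimal sketch, not the exact code from the original post):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch only: registered as a pre-validation step on account "Update",
// it rejects any change to the description field.
public class BlockDescriptionChanges : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // If the incoming update touches "description", block it
        if (target.Contains("description"))
            throw new InvalidPluginExecutionException(
                "Changes to the description field are not allowed.");
    }
}
```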


Here, I have created two steps.





Everything looks right – I’m getting an error message when trying to update the description field on the account form:


But, if I don’t update the description field manually, and, instead, update some other field just so the pre-operation plugin kicks in.. that plugin has absolutely no problem updating the description field:



That’s exactly because those changes happen after the validation step. Actually, the same may happen with validations done in pre-operation if the execution order of the steps is configured incorrectly.

Comparing update performance: No plugins vs Plugin vs North52 #Dynamics365

When talking to the client about the business requirements, it’s always tempting to think that we can do just about anything in the plugins if we really have to.

And it’s true that plugins allow us to implement very powerful customizations, but, on the other hand, that power does not come for free – we may have to pay a certain price since all those plugins will be running on the Dynamics server, and, so, they’ll be slowing down the application.

So I figured I’d do a quick comparison to see how the plugins affect performance just to illustrate what may happen. Imagine an oversimplified scenario (which might not even require a plugin) where we would need to make sure that any time an account record is updated, it will have “TEST” in the description field.

I’ve run 3 different tests to measure average update request execution time in each case:

1. For the first test, I disabled all the plugins on account “update” to get a baseline. There were 1000 updates, executed in batches of 100 using ExecuteMultiple requests.

2. For the second test, I used North52 formula

In general, North52 is a great solution that simplifies all sorts of calculations in Dynamics.. but I don’t quite agree with their own assessment of the performance impact, which you can find here:

You can download a free edition of North52 to use it in your environment – it does have some limitations, of course, but it can still be very useful.

Anyway, here is what the formula looked like:


Just keep in mind that, when defining a formula there, you are, actually, registering a plugin step (more exactly, North52 is doing it for you):



3. For the third test, I used a separate plugin


That one is not doing much – it’s just setting the description field.
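
The code itself did not survive the screenshot, but a pre-operation plugin doing that would only be a few lines – something along these lines (a sketch rather than the exact code used in the test):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch only: a pre-operation step on account "Update"
// that stamps the description field.
public class SetDescriptionPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // In pre-operation this change is merged into the same write,
        // so no extra service.Update call is needed.
        target["description"] = "TEST";
    }
}
```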

And there are a few more things which are worth mentioning:

  • When running test #2, the plugin from test #3 was disabled
  • When running test #3, the plugin from North52 was disabled
  • Finally, I used a console utility for testing – you’ll see the code below:


class Program
{
    public static IOrganizationService Service = null;

    static void Main(string[] args)
    {
        var conn = new Microsoft.Xrm.Tooling.Connector.CrmServiceClient(
            System.Configuration.ConfigurationManager.ConnectionStrings["CodeNow"].ConnectionString);
        Service = conn.OrganizationWebProxyClient != null
            ? (IOrganizationService)conn.OrganizationWebProxyClient
            : (IOrganizationService)conn.OrganizationServiceProxy;

        int testCount = 10;
        int requestPerTest = 100;
        double totalMs = 0;

        for (int i = 0; i < testCount; i++)
            totalMs += UpdateAccountPerfTest(requestPerTest);

        double average = totalMs / (testCount * requestPerTest);
        Console.WriteLine("Average update time: " + average.ToString());
    }

    public static double UpdateAccountPerfTest(int requestCount)
    {
        Guid accountId = Guid.Parse("475B158C-541C-E511-80D3-3863BB347BA8");
        Entity updatedAccount = new Entity("account");
        updatedAccount["description"] = "1";
        updatedAccount.Id = accountId;

        UpdateRequest updateRequest = new UpdateRequest()
        {
            Target = updatedAccount
        };

        OrganizationRequestCollection requestCollection = new OrganizationRequestCollection();
        for (int i = 0; i < requestCount; i++)
            requestCollection.Add(updateRequest);

        DateTime dtStart = DateTime.Now;
        ExecuteMultipleRequest emr = new ExecuteMultipleRequest()
        {
            Requests = requestCollection,
            Settings = new ExecuteMultipleSettings()
            {
                ContinueOnError = true,
                ReturnResponses = true
            }
        };
        var response = Service.Execute(emr);
        return (DateTime.Now - dtStart).TotalMilliseconds;
    }
}

All that said, here is how the tests worked out (I ran a few test runs, actually; those numbers are the “best” I saw):

  • Test #1 (no plugins): 99 ms per update request
  • Test #2 (North52): 126 ms per update request
  • Test #3 (separate plugin): 114 ms per update request


So, realistically, it’s almost impossible to notice the difference on any single update request. Still, it’s about a 20-30% difference, and it seems to be a little worse with North52 than with a dedicated plugin. That could be expected, since North52 has to do some additional calculations. Then again, the difference between North52 and the dedicated plugin might just be noise – I saw those numbers going a bit up and down across different runs of both Test #2 and Test #3.

The problem, though, is not those individual calls – it’s when you start looking at really large numbers of updates that the 20-30% suddenly becomes something to keep in mind.

But that’s not all yet.. There are a couple of message processing steps on the account entity that come with the Field Service solution, so what if I run the tests again with the Field Service plugins disabled?


  • Test #1 (no plugins): 27-32 ms per update request
  • Test #2 (North52): 59-63 ms per update request
  • Test #3 (separate plugin): 59-63 ms per update request


In other words, when there were no plugins at all, average update request performance was about 3-4 times better. At the same time, North52 seemed to be about as efficient as an independent plugin, at least with that simple formula.. except in one scenario:

Normally, we won’t register plugins to trigger on just any attribute – we will configure them to run on the update of specific attributes only. For example, when running Test #2, I tried using only a single attribute (“name”) as the trigger, and I got exactly the same average execution time as if there were no plugins at all. With North52, we can do the same using the Plugin Registration utility, but, it seems, there is no way to do it directly in the Formula Editor. Something to keep in mind, then..
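
For what it’s worth, filtering attributes live on the step record itself, so, even when a UI does not expose them, they can still be set through the SDK – roughly like this (a sketch; stepId is assumed to be the id of the generated sdkmessageprocessingstep record):

```csharp
using Microsoft.Xrm.Sdk;

// Sketch: set filtering attributes on an existing plugin step record
// so it only triggers when "name" changes. stepId is assumed known.
var step = new Entity("sdkmessageprocessingstep") { Id = stepId };
step["filteringattributes"] = "name"; // comma-separated list of attributes
service.Update(step);
```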

So, be aware. You may not notice the difference on every single update/create operation, but, at some point, you may have to start optimizing plugin performance by adjusting the list of attributes they’ll be triggered on, by simplifying the calculations, even by moving those calculations into an external process, etc.

#CodeNow for #XrmToolBox–set fetchxml for a system view

There are other ways of doing this, but here is how you can use CodeNow plugin for XrmToolBox to update FetchXml for a system view (you will need, at the minimum, v1.0.0.18 of the CodeNow plugin)

Load CodeNow plugin (if it’s not installed yet, go to the plugin store in XrmToolBox and install the plugin first)


Make sure XrmToolBox is connected to Dynamics:


Click File->Open


And select “Online” storage from the list:


Then select “Set View Fetch” script


If you run the script right away, it will run for the account entity. You can change the entity name if you wish:


Then click “Run”


The script will ask you to choose a view


And, then, to provide updated fetchXml for that view:


Once you have provided the updated fetchXml and hit “OK”, the script will go on to update the fetchXml for that view and publish all customizations. At that point, you will be able to see the results of the update in Dynamics.
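
Under the hood, that part of the script boils down to a couple of SDK calls – roughly like this (an illustration, not the actual CodeNow script; viewId and newFetch are assumed to come from the earlier prompts):

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Crm.Sdk.Messages;

// Sketch: update the fetchxml of a system view and publish the change.
var view = new Entity("savedquery") { Id = viewId };
view["fetchxml"] = newFetch;
service.Update(view);
service.Execute(new PublishAllXmlRequest()); // publish all customizations
```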


By the way, this script also demonstrates a couple of new features that have been added to CodeNow:

1. CodeNow script can ask users to choose one of the options from a dropdown now:
string selectedView = Helpers.GetSelection("Select a view", "View Fetch Updater", viewOptions);

2. CodeNow scripts can ask users to provide text input now:

string newFetch = Helpers.GetTextInput("Enter FetchXml", "View Fetch Updater");

This allows for a bit more interaction and a bit less coding, so the same scripts can be re-used by non-developers even more easily.

Finally, I have PluginRegistration utility for V9!

I’ve finally downloaded the tools for Dynamics 365 V9. Not a big deal, you might say, since those tools used to be part of the SDK package all the time.. and I was waiting for this to be delivered the same way it has always been delivered.. only to find out it’s different now.

You see, there is “No more monolithic SDK download”:

This is good and bad, as usual. On the one hand, it’s extremely easy to use PowerShell to download the tools (including the Plugin Registration utility, Solution Packager, CrmSvcUtil, etc). The whole procedure is described here:

And you can always get the latest version by re-running the same script.

Actually, I figured it would make my life a little easier if I saved that script in a file, so all I need to do now is run that file:


On the other hand, I think I’d still prefer to have everything, including the samples, on my disk. It’s just easier to navigate/browse there (compared to doing that online).

As for the samples, it seems we can only browse them online now (sure we can download all those files, but not as a single package):

It’s very well structured there, so it might really be easier to use it that way.

And there is yet another set of files that I used to re-visit every now and then – the default ribbon definitions. Those are still available as a downloadable zip file, though:

Hope that helps – looks like everything we used to have in the SDK is still available for download, even if it’s delivered in a different way now.

Enjoy your 365-ing!

CodeNow for #XrmToolBox–more than just code now.. Let’s get your organization POA stats this time

I’ve been experimenting with the CodeNow plugin, and there is a new version, which you can now download from the plugin store. There are some user interface improvements (progress indicator, better error reporting, better logging), but, also, there is a new piece of functionality which you can see on the screenshot below:


This is a dashboard in Dynamics that presents some basic stats about the PrincipalObjectAccess table. In order to get that dashboard, you’ll need to do a few things:

1. Add CodeNow plugin to XrmToolBox


2. Deploy the dashboard and the custom entity used to populate that dashboard to your Dynamics organization

Download unmanaged solution from the url below, import it to Dynamics, and publish the customizations:

It will work with 8.0+ versions of Dynamics (V9 included)

3. In XrmToolBox, connect to the organization where you just deployed the solution. You may want to increase the connection timeout – POA queries can be time-consuming, so make it more than 2 minutes if there is a significant number of records in the POA:


4. In the XrmToolBox, open CodeNow plugin and select Solution Stats item from the drop down menu



The CodeNow plugin will load a script that you can run to calculate the number of records per entity type in the POA. Once the numbers have been calculated, they’ll be pushed to Dynamics, and you’ll be able to see the dashboard.
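
If you are curious what that calculation might involve: POA counts per entity type can be retrieved with an aggregate FetchXml query along these lines (a sketch, not the actual CodeNow script; note that aggregate queries are subject to the usual aggregate record limit):

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Sketch: count POA records grouped by object type code.
string fetch = @"<fetch aggregate='true'>
                   <entity name='principalobjectaccess'>
                     <attribute name='objectid' alias='total' aggregate='count' />
                     <attribute name='objecttypecode' alias='otc' groupby='true' />
                   </entity>
                 </fetch>";
var results = service.RetrieveMultiple(new FetchExpression(fetch));
foreach (var row in results.Entities)
{
    var total = (int)((AliasedValue)row["total"]).Value;
    var typeCode = ((AliasedValue)row["otc"]).Value;
    // push total/typeCode into the custom stats entity here
}
```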

5. Here is what it will look like



IT MAY TAKE TIME! Depending on the size of the POA table..

6. Now you just need to open Dynamics, choose “Dashboards”, and, then, choose “Dynamics Solution Stats” dashboard


Dynamics 365 (V9): No target version when exporting a solution

Edited on Nov 6: As Jonas Rapp mentioned in the comments below, the behavior described below is expected. It still makes things a bit different from what we may be used to, because there is no on-premise V9 as of now – so, if anyone is still using an on-prem dev environment, there is no way to sync back from online to on-prem until there is a V9 on-prem.


If you are still using different versions of Dynamics (maybe a mix of on-prem and online), there seems to be a bug in V9 that may affect this process. It seems that V9 does not allow us to choose the target version when exporting a solution. Here is how it looks in the on-prem 2016 (8.2) version:


That screen above is followed by the screen below:


Which is how it should be (have a look here for the details: )

V9, on the other hand, does not have that version selector screen at all (at least as of Nov 5):



Instead, it just exports the solution.

Problem is, that solution gets exported as a V9 solution, so, when trying to import it back into 8.2, for example, 8.2 displays an error message:


Not sure if that change was intentional or not, but, either way, may need to keep this in mind for now.

Also, I’m not sure if this will be an issue.. I think it might be, since it means we can’t get V9 solutions to on-prem anymore. If that’s something you’d want to get back, here is a link to the CRM idea portal where you can upvote the idea:

Dynamics 365: Tracing service and plugin performance

The tracing service can make it much easier for plugin developers to understand what’s happening in a plugin, especially when there is no other way to debug the code. I was wondering, though, if there would be an obvious performance impact from using the tracing service, so I did some testing tonight:


The results ended up being a bit unexpected.. Here is what happened:

1. I ran those tests in the Dynamics 365 (V9) trial environment

2. To test tracing service performance, I registered a plugin on the update of account entity which would make a few calls to the tracing service:

        public void Execute(IServiceProvider serviceProvider)
        {
            ITracingService tracingService =
                (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            IPluginExecutionContext context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

            tracingService.Trace("Message from tracing Plugin..");
            tracingService.Trace("Another message from tracing Plugin..");
            tracingService.Trace("Yet another message from tracing Plugin..");
        }
As you can see, there is nothing extraordinary in that plugin..

3. Then, I created a small console application that would update the same account record in a loop:

         for (int i = 0; i < 100; i++)
         {
             var updatedAccount = new Entity("account");
             updatedAccount.Id = account.Id;
             Service.Update(updatedAccount);
         }

Then I did the timing (in milliseconds, how much time it would take to run the for loop above), and here is what I got:


With only 1 call to the tracing service in the plugin AND tracing DISabled, it was taking about 20 seconds

With 3 calls to the tracing service in the plugin AND tracing DISabled, it was taking about 20 seconds

With 3 calls to the tracing service in the plugin AND tracing ENabled, it was taking the same 20 seconds

With the plugin execution step completely disabled, it was taking about 19 seconds

There was some difference between individual test runs, but, on average, those were the numbers. Essentially, there was no difference at all, other than that the runs with the plugin disabled were slightly faster.

And it does not, really, matter what the absolute numbers are – all I wanted to see is whether there would be any obvious performance impact with tracing enabled, and, it seems, there was not. At least not in these tests.

Which is not what I expected.. And that’s why it’s both concerning and encouraging at the same time. Common sense seems to be telling me that there should be some difference and, so, that we should not be overusing tracing in the plugins. But these tests are telling me that having tracing there is not nearly as bad as it might have been, so, it seems, it’s a relatively safe technique.

And, by the way, if you need more details about the tracing service, have a look at this page:

Happy 365-ing!