Author Archives: Alex Shlega

Dynamics portals–just thinking out loud..

I have to admit, I have not worked a lot with the portals. It’s always been occasional. But every time I turn to them, somehow it feels like the concept itself is almost foreign to Dynamics. Not because there is no need for the portals – there has always been, and there is still a need. More because it’s a completely different area, and, it seems, there are some natural limits to where the portals (as they are now) can reach.

As in.. Before you even start developing a portal, you need to get Dynamics. You need to learn how Dynamics works, how it works with the portals, how to configure the portal in Dynamics. There is no portal development without Dynamics.

When did Dynamics become a content management system, though? Are portals, even remotely, the reason for buying Dynamics in the first place?

Besides, the portals are so tightly coupled with Dynamics that there is simply no way to use them without it. Not only from the configuration perspective.. even on the data level, there is no separate storage, there is no independent database. The whole architecture of this solution assumes that Dynamics will serve as a back end, all the time. When did it become normal to create this kind of tightly coupled architecture? I can’t name a single CMS that became successful that way – they are all built independently, even though, once they have matured, they do start providing connectors and integrations to other systems.

That’s not everything, though. What’s probably even more interesting is what has been happening to the portals in the last few years.

Not that long ago Microsoft acquired ADX Studio only to split that solution into 3 a year later and, essentially, to give one of those versions back to Adoxio (which is, technically, a former consulting arm of ADX Studio). We may call it community edition, but, with all due respect to the portal developers, I think there will be only so many contributors outside of Adoxio.

No, I understand that something had to be done to support on-premise clients, but, eventually, we ended up with 3 different versions of the portals, and this is at the time when even the main “branch” is still nothing but a nuisance in the CMS world.

Either way, here is what we have according to Adoxio itself:

https://www.adoxio.com/xRM-Portals-Community-Edition/

Here is a question, though.. What does the future look like for those 2 non-Microsoft versions of the portals? Are they seriously going to compete with the version from Microsoft? I don’t think so. Are they just intermediate solutions which had to be delivered to support all those transitional customers? But there is no guarantee that an upgrade path will be provided from the community edition to the MS edition in a year or two, so this is just delaying the inevitable. Why bother?

I would understand it if the main scenario for the portals was to integrate them into the existing web sites, but it does not seem to be the case either. It actually takes an effort to create that kind of integration, so it does not look like the portals have been developed with that goal in mind.

In other words, in terms of the vision, and as far as the portals are concerned, it all seems to be a little blurry to say the least. And, yet, there they are, the portals, with quite a bit of activity happening around them. Is it just me feeling a bit confused about what the future holds for the portals?

A validation step that won’t validate

A plugin registered on the pre-validation step won’t intercept any of the changes that get added to the plugin Target in the pre-operation. It seems so obvious.. but it’s not something I’d normally be thinking about when developing the plugins.

One of the plugin patterns most of us like using looks like this:

– We would create a pre-operation plugin

– In that plugin, we would populate some attributes of the plugin Target

That way, we can save time/resources which would, otherwise, be required if we did the same in the post-operation plugin (since we would have to call service.Update on the target entity in the post-operation):

image

This pattern works great most of the time. There is one exception, though.. If we want to validate something in the pre-validation, that validation won’t apply to whatever changes we add in the pre-operation. There is no going back in that sequence above.

Imagine we had a plugin that would not allow any changes to the description field on the account entity:

image
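Just to illustrate the idea, the validation logic behind that step could look more or less like this (a minimal sketch, not the exact code from the screenshot; the class name is just an example):

    using System;
    using Microsoft.Xrm.Sdk;

    public class BlockDescriptionChanges : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity &&
                ((Entity)context.InputParameters["Target"]).Contains("description"))
            {
                // The incoming Target contains the description attribute,
                // so someone is trying to change it - stop the operation
                throw new InvalidPluginExecutionException("Changes to the description field are not allowed");
            }
        }
    }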

Here, I have created two steps.

Pre-Validation:

image

Pre-Operation:

image

Everything looks right – I’m getting an error message when trying to update the description field on the account form:

image

But, if I don’t update the description field manually, and, instead, update some other field just so the Pre-Operation plugin kicks in.. that plugin will have absolutely no problem updating the description field:

image

image

That’s exactly because those changes happen after the validation step. Actually, the same may happen if we do these validations in the pre-operation and configure an incorrect execution order on the steps.

Comparing update performance: No plugins vs Plugin vs North52 #Dynamics365

When talking to the client about the business requirements, it’s always tempting to think that we can do just about anything in the plugins if we really have to.

And it’s true that plugins allow us to implement very powerful customizations, but, on the other hand, that power does not come for free – we may have to pay a certain price since all those plugins will be running on the Dynamics server, and, so, they’ll be slowing down the application.

So I figured I’d do a quick comparison to see how the plugins affect performance just to illustrate what may happen. Imagine an oversimplified scenario (which might not even require a plugin) where we would need to make sure that any time an account record is updated, it will have “TEST” in the description field.

I’ve run 3 different tests to measure average update request execution time in each case:

1. For the first test, I have disabled all the plugins on account “update” to get the baseline. There were 1000 updates, and every 100 of them were submitted in a single ExecuteMultiple request.

2. For the second test, I used North52 formula

In general, North52 is a great solution that simplifies all sorts of calculations in Dynamics.. but I don’t quite agree with their own assessment of the performance impact, which you can find here:

http://support.north52.com/knowledgebase/articles/182567-training-video-n52-formula-manager-performance

You can download a free edition of North52 to use it in your environment – it does have some limitations, of course, but it can still be very useful.

Anyway, here is how the formula looked:

image

Just keep in mind that, when defining a formula there, you are, actually, registering a plugin step (more exactly, North52 is doing it for you):

image

 

3. For the third test, I used a separate plugin

image

That one is not doing much – it’s just setting the description field.
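Just for completeness, here is roughly what such a plugin could look like (a sketch, registered as a pre-operation step on update of the account entity; the class name is just an example):

    using System;
    using Microsoft.Xrm.Sdk;

    public class SetDescriptionPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity)
            {
                // Pre-operation: adding the attribute to the Target is enough,
                // no extra Update call is required
                ((Entity)context.InputParameters["Target"])["description"] = "TEST";
            }
        }
    }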

And there are a few more things which are worth mentioning:

  • When running test #2, the plugin from test #3 was disabled
  • When running test #3, the plugin from North52 was disabled
  • Finally, I used a console utility for testing – you’ll see the code below:

 

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Tooling.Connector;

class Program
    {
        public static IOrganizationService Service = null;

        static void Main(string[] args)
        {
            // Connect using the "CodeNow" connection string from app.config
            var conn = new CrmServiceClient(System.Configuration.ConfigurationManager.ConnectionStrings["CodeNow"].ConnectionString);
            Service = conn.OrganizationWebProxyClient != null
                ? (IOrganizationService)conn.OrganizationWebProxyClient
                : (IOrganizationService)conn.OrganizationServiceProxy;


            int testCount = 10;
            int requestPerTest = 100;
            double totalMs = 0;

            for (int i = 0; i < testCount; i++)
            {
                Console.WriteLine("Testing..");
                totalMs += UpdateAccountPerfTest(requestPerTest);
                
            }

            double average = totalMs / (testCount * requestPerTest);
            Console.WriteLine("Average update time: " + average.ToString());
            Console.ReadKey();
            //SolutionStats();
        }


        public static double UpdateAccountPerfTest(int requestCount)
        {
            Guid accountId = Guid.Parse("475B158C-541C-E511-80D3-3863BB347BA8");
            Entity updatedAccount = new Entity("account");
            updatedAccount["description"] = "1";
            updatedAccount.Id = accountId;

            UpdateRequest updateRequest = new UpdateRequest()
            {
                Target = updatedAccount
            };
            Service.Execute(updateRequest);


            OrganizationRequestCollection requestCollection = new OrganizationRequestCollection();
            for (int i = 0; i < requestCount; i++)
            {
                requestCollection.Add(updateRequest);
            }
            DateTime dtStart = DateTime.Now;
            ExecuteMultipleRequest emr = new ExecuteMultipleRequest()
            {
                Requests = requestCollection,

                Settings = new ExecuteMultipleSettings()
                {
                    ContinueOnError = true,
                    ReturnResponses = true
                }

            };
            var response = Service.Execute(emr);
            return (DateTime.Now - dtStart).TotalMilliseconds;
            
        }
  }

All that said, here is how the tests worked out (I ran a few tests, actually; those numbers are the “best” I saw):

  • Test#1 (No plugins): 99 ms per update request
  • Test#2 (North52): 126 ms per update request
  • Test#3 (Separate plugin): 114 ms per update request

 

So, realistically, it’s almost impossible to notice the difference on any single update request. However, it’s still about a 20-30% difference, and it seems to be a little worse with North52 when compared to a dedicated plugin, which could be expected since North52 has to do some additional calculations. Then again, that difference between North52 and the dedicated plugin might just be noise, since I saw those numbers going up and down a bit in different test runs for both Test#2 and Test#3.

The problem, though, is not those individual calls – it’s when you start looking at really large numbers of updates that the 20-30% suddenly becomes something to keep in mind.

But that’s not all yet.. There are a couple of message processing steps on the account entity that come with the Field Service solution, so what if I do the tests again with the Field Service plugins disabled?

image

  • Test#1 (No plugins): 27-32 ms per update request
  • Test#2 (North52): 59-63 ms per update request
  • Test#3 (Separate plugin): 59-63 ms per update request

 

In other words, when there were no plugins at all, average update request performance was about 3-4 times better. At the same time, North52 seemed to be about as efficient as independent plugins, at least with that simple formula.. Except in one scenario:

Normally, we won’t register plugins to run off just any attribute – we will configure the plugins to run on the update of some specific attributes only. For example, when running Test#2, I tried using only a single attribute (“name”) for the trigger, and I got exactly the same average execution time as if there were no plugins at all. With North52, we can do the same using the plugin registration utility, but, it seems, there is no way to do it directly in the Formula Editor. Something to keep in mind then..
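As a side note, the filtering attributes of an already registered step can also be changed programmatically, since a step is just an sdkmessageprocessingstep record. Something along these lines should do it (a sketch; stepId is a placeholder for the actual step id, and service is an IOrganizationService):

    // Restrict an existing plugin step to fire only on changes to the "name" attribute.
    // filteringattributes is a comma-separated list of attribute logical names;
    // an empty value means the step fires on any attribute change.
    var step = new Entity("sdkmessageprocessingstep")
    {
        Id = stepId // id of the step to update (placeholder)
    };
    step["filteringattributes"] = "name";
    service.Update(step);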

So, be aware. You may not notice the difference on every single update/create operation, but, at some point, you may have to start optimizing plugin performance by adjusting the list of attributes they’ll be triggered on, by simplifying the calculations, even by moving those calculations into an external process, etc.

#CodeNow for #XrmToolBox–set fetchxml for a system view

There are other ways of doing this, but here is how you can use CodeNow plugin for XrmToolBox to update FetchXml for a system view (you will need, at the minimum, v1.0.0.18 of the CodeNow plugin)

Load CodeNow plugin (if it’s not installed yet, go to the plugin store in XrmToolBox and install the plugin first)

image

Make sure XrmToolBox is connected to Dynamics:

image

Click File->Open

image

And select “Online” storage from the list:

image

Then select “Set View Fetch” script

image

If you run the script right away, it will run for the account entity. You can change the entity name if you wish:

image

Then click “Run”

image

The script will ask you to choose a view

image

And, then, to provide updated fetchXml for that view:

image

Once you have provided updated fetchXml and hit “ok”, the script will go on to update fetchXml for that view, and, also, to publish all customizations. At which point you will be able to see the results of that update in Dynamics.
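Under the hood, that update comes down to modifying the corresponding savedquery record and publishing. Here is a quick sketch of what such a script might do (the actual CodeNow script may differ; viewId and newFetch are placeholders, and service is an IOrganizationService):

    // Update the FetchXml of a system view (savedquery record) and publish
    // (PublishAllXmlRequest lives in Microsoft.Crm.Sdk.Messages)
    var view = new Entity("savedquery")
    {
        Id = viewId // id of the selected view (placeholder)
    };
    view["fetchxml"] = newFetch; // the FetchXml provided by the user
    service.Update(view);
    service.Execute(new PublishAllXmlRequest());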

 

By the way, this script also demonstrates a couple of new features that have been added to CodeNow:

1. CodeNow scripts can now ask users to choose one of the options from a dropdown:
string selectedView = Helpers.GetSelection("Select a view", "View Fetch Updater", viewOptions);

2. CodeNow scripts can now ask users to provide text input:

string newFetch = Helpers.GetTextInput("Enter FetchXml", "View Fetch Updater");

This allows for a bit more interaction and a bit less coding, so the same scripts can be re-used by non-developers even more easily.

Finally, I have PluginRegistration utility for V9!

I’ve finally downloaded the tools for Dynamics 365 V9. Not a big deal, you might say, since those tools used to be part of the SDK package all the time.. and I was waiting for this to be delivered in the same way it has always been delivered.. Only to find out it’s different now 🙂

You see, there is “No more monolithic SDK download”:

https://blogs.msdn.microsoft.com/crm/2017/11/01/whats-new-for-customer-engagement-developer-documentation-in-version-9-0/

This is good and bad, as usual. On the one hand, it’s extremely easy to use PowerShell to download the tools (including the Plugin Registration utility, Solution Packager, CrmSvcUtil, etc). The whole procedure is described here:

https://docs.microsoft.com/en-ca/dynamics365/customer-engagement/developer/download-tools-nuget

And you can always get the latest version by re-running the same script.

Actually, I figured it would make my life a little easier if I saved that script in a file, so all I need to do now is run that file:

image

On the other hand, I think I’d still prefer to have everything, including the samples, on my disk. It’s just easier to navigate/browse there (compared to doing that online).

As for the samples, it seems we can only browse them online now (sure we can download all those files, but not as a single package):

https://docs.microsoft.com/en-ca/dynamics365/customer-engagement/developer/sample-code-directory

It’s very well structured there, so it might really be easier to use it that way.

And there is yet another set of files that I used to re-visit every now and then – the default ribbon definitions. Those are still available as a downloadable zip file, though:

http://download.microsoft.com/download/C/2/A/C2A79C47-DD2D-4938-A595-092CAFF32D6B/ExportedRibbonXml.zip

Hope that helps – looks like everything we used to have in the SDK is still available for download, even if it’s delivered in a different way now.

Enjoy your 365-ing!

CodeNow for #XrmToolBox–more than just code now.. Let’s get your organization POA stats this time

I’ve been experimenting with the CodeNow plugin, and there is a new version (1.0.0.15) you can now download from the plugin store. There are some user interface improvements (progress indicator, better error reporting, better logging), but, also, there is a new piece of functionality which you can see on the screenshot below:

image

This is a dashboard in Dynamics that presents some basic stats about the PrincipalObjectAccess table. In order to get that dashboard, you’ll need to do a few things:

1. Add CodeNow plugin to XrmToolBox

image

2. Deploy the dashboard and the custom entity used to populate that dashboard to your Dynamics organization

Download unmanaged solution from the url below, import it to Dynamics, and publish the customizations:

https://github.com/ashlega/TreeCat.XrmToolBox/raw/master/TreeCat.XrmToolBox.CodeNow/Releases/1.0.0.15/Overview_1_0_0_0.zip

It will work with 8.0+ versions of Dynamics (V9 included)

3. In the XrmToolBox, connect to the organization where you just deployed the solution. You may want to increase the connection timeout – POA queries can be time-consuming, so make it more than 2 minutes if there is a significant number of records in the POA:

image

4. In the XrmToolBox, open CodeNow plugin and select Solution Stats item from the drop down menu

image

 

The CodeNow plugin will load a script that you can run to calculate the number of records per entity type in the POA. Once the numbers have been calculated, they’ll be pushed to Dynamics, and you’ll be able to see the dashboard.
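For reference, this is the kind of aggregate query such a script might use to get those numbers (just a sketch; the actual script may be doing it differently, and FetchXML aggregates are subject to the usual aggregate record limit):

    // Count PrincipalObjectAccess records grouped by the object type code
    string fetchXml = @"
        <fetch aggregate='true'>
          <entity name='principalobjectaccess'>
            <attribute name='objecttypecode' alias='objtype' groupby='true' />
            <attribute name='objectid' alias='recordcount' aggregate='count' />
          </entity>
        </fetch>";

    var results = service.RetrieveMultiple(new FetchExpression(fetchXml));
    foreach (var row in results.Entities)
    {
        var objType = (AliasedValue)row["objtype"];
        var count = (AliasedValue)row["recordcount"];
        Console.WriteLine(objType.Value + ": " + count.Value);
    }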

5. Here is what it will look like:

image

image

It may take time, depending on the size of the POA table..

6. Now you just need to open Dynamics, choose “Dashboards”, and, then, choose the “Dynamics Solution Stats” dashboard.

image

Dynamics 365 (V9): No target version when exporting a solution

Edited on Nov 6: As Jonas Rapp mentioned in the comments below, the behavior described below is expected. It still makes it a bit different from what we may be used to because there is no on-premise V9 as of now, so, if anyone is still using an on-prem dev environment, there is no way to sync back from online to on-prem until there is a V9 on-prem.

 

If you are still using different versions of Dynamics (maybe a mix of on-prem and online), there seems to be a bug in V9 that may affect this process. It seems that V9 does not allow us to choose the target version when exporting a solution. Here is how it looks in the on-prem 2016 (8.2) version:

image

That screen above is followed by the screen below:

image

Which is how it should be (have a look here for the details: https://msdn.microsoft.com/en-us/library/dn689055.aspx )
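For reference, the same “export for a specific version” capability is also exposed through the SDK via ExportSolutionRequest and its TargetVersion property, so a script could do something like this (a sketch, assuming service is an IOrganizationService connected to the source organization and “MySolution” is a placeholder name):

    // Export a solution for an earlier target version
    // (ExportSolutionRequest is in Microsoft.Crm.Sdk.Messages)
    var exportRequest = new ExportSolutionRequest
    {
        SolutionName = "MySolution", // placeholder solution name
        Managed = false,
        TargetVersion = "8.2"
    };
    var exportResponse = (ExportSolutionResponse)service.Execute(exportRequest);
    System.IO.File.WriteAllBytes("MySolution.zip", exportResponse.ExportSolutionFile);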

V9, on the other hand, does not have that version selector screen at all (at least as of Nov 5):

 

image

Instead, it just exports the solution.

The problem is that the solution gets exported as a V9 solution, so, when trying to import it back into 8.2, for example, 8.2 displays an error message:

image

Not sure if that change was intentional or not, but, either way, we may need to keep this in mind for now.

I’m also not sure how much of an issue this is.. I think it might be, since it means we can’t get V9 solutions to on-prem anymore. If that’s something you’d want to get back, here is a link to the CRM idea portal where you can upvote the idea:

https://ideas.dynamics.com/ideas/dynamics-crm/ID0003166

Dynamics 365: Tracing service and Plugin performance

The tracing service can make it much easier for plugin developers to understand what’s happening in the plugin, especially when there is no other way to debug the code. I was wondering, though, if there would be an obvious performance impact when using the tracing service, so I did some testing tonight:

image

The results ended up being a bit unexpected.. Here is what happened:

1. I ran those tests in the Dynamics 365 (V9) trial environment

2. To test tracing service performance, I registered a plugin on the update of account entity which would make a few calls to the tracing service:

        public void Execute(IServiceProvider serviceProvider)
        {
            ITracingService tracingService =
                (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            IPluginExecutionContext context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);


            tracingService.Trace("Message from tracing Plugin..");
            tracingService.Trace("Another message from tracing Plugin..");
            tracingService.Trace("Yet another message from tracing Plugin..");
            
        }

As you can see, there is nothing extraordinary in that plugin..

3. Then, I created a small console application that would update the same account record in a loop:

         for (int i = 0; i < 100; i++)
         {
            var updatedAccount = new Entity("account");
            updatedAccount.Id = account.Id;
            service.Update(updatedAccount);

         }

Then I did the timing (in milliseconds, how much time it would take to run the for loop above), and here is what I got:

 

  • With only 1 call to the tracing service in the plugin AND tracing DISabled, it was taking about 20 seconds
  • With 3 calls to the tracing service in the plugin AND tracing DISabled, it was taking about 20 seconds
  • With 3 calls to the tracing service in the plugin AND tracing ENabled, it was taking the same 20 seconds
  • With the plugin execution step completely disabled, it was taking about 19 seconds

There was some difference between the individual test runs, but, on average, those were the numbers. Essentially, there was no difference at all, other than the tests with the plugin completely disabled being slightly faster.

And the absolute numbers don’t really matter – all I wanted to see was whether there would be any obvious performance impact with tracing enabled, and, it seems, there was not. At least not in these tests.

Which is not what I expected.. And that’s why it’s both concerning and encouraging at the same time. Common sense seems to be telling me that there should be some difference, and, so, we should not be overusing tracing in the plugins. But these tests are telling me that having tracing there is not nearly as bad as it might have been, so, it seems, it’s a relatively safe technique.

And, by the way, if you need more details about the tracing service, have a look at this page:

https://msdn.microsoft.com/en-us/library/gg328574.aspx#loggingandtracing

Happy 365-ing!

Dynamics 365: How do you get a column indexed automatically?

This seems to be another one of those “I used to think..” cases. In this case, I used to think that, once a column is added to the quick find view “find columns”, that column will be indexed automatically. Well, it turned out to be a bit more complicated; although, in the end, it’s probably still correct.

First of all, it would be worth looking at these two posts:

https://blogs.msdn.microsoft.com/darrenliu/2014/04/02/crm-2013-maintenance-jobs/

https://blogs.msdn.microsoft.com/crminthefield/2012/04/26/avoid-performance-issues-by-rescheduling-crm-2011-maintenance-jobs/

There is a bunch of useful information there, but what I was sort of missing was some low-level details. Yes, we can download the Job Editor tool from CodePlex; however, what is it, actually, going to do? The tool only works in the on-premise environment, and, from what I understand, it goes directly to the MSCRM_CONFIG database to update the ScaleGroupOrganizationMaintenanceJobs table:

image

That table maintains the schedule of all those maintenance jobs per Operation Type per OrganizationId:

image

It seems that OperationType 15 corresponds to the Indexing Management job (I will explain why that’s important, just keep reading), even though there can be more than one job with that type (one per organization).

That said, here is how I got that index added to the table:

I created a new field on the entity

image

Then I added that field to the quick find columns

image

And, of course, published the changes.. on the database side, there was still no index:

image

So, I downloaded the tool and tried to reschedule the indexing job. That did not quite work out.. The job got rescheduled, but not for the date I wanted (I wanted to see if the index gets created, so I actually wanted that job to run almost immediately; instead, it kept moving 1 day forward every time I tried to reschedule it using the tool).

Eventually, I just went to the SQL Management Studio and ran this SQL query:

UPDATE ScaleGroupOrganizationMaintenanceJobs
SET NextRunTime = '2017-11-01 02:10:00'
WHERE id = '…'

(If you ever need to run the same query, make sure you change the id and, also, the date.. that date should be in UTC)

Then I restarted the Async Services (both, though it may be sufficient to restart the “maintenance” one), and voila.. There is an index now:

image

Sure, you can do this only in on-premise environments, and only if you have the required permissions on the SQL server. However, normally we would not need it. What we should probably keep in mind (as a result of this exercise) is:

  • Yes, indexes are created when a field is added to the “find columns”
  • No, they are not created right away – there is a scheduled job that is running daily
  • If you are working in the on-premise environment, you may be able to reschedule that job. If you are working in the online environment, you may just have to wait

PS. And, if I had full-text search enabled, this whole story would be different.. but that’s for another post.

Dynamics 365: Working with the virtual entities

It seems the basics of the Virtual Entities have been covered in the blog post below while this feature was still in the preview:

https://blogs.technet.microsoft.com/lystavlen/2017/09/08/virtual-entities/

Now that we do have it in V9, it seems that one major issue that will be limiting our ability to use Virtual Entities in real-life scenarios is security, because Virtual Entities are organization-owned, so they cannot really be controlled by Dynamics security.

Every CRM user will be able to see either everything or nothing – from the integration perspective, that’s almost never going to work.

I wanted to explore, then, if there is an option to limit access to the virtual entities with the help of the RetrieveMultiple plugins, and that’s what this post is going to be about.

To start with, let’s create a virtual entity as described in the post above.. while doing this, keep in mind that, once you have the data source and the virtual entity, you still need to map the fields. If you don’t do that, you’ll get the following error message:

image

Entity could not be retrieved from data source. Please try again or contact your system administrator

If you see that error message, make sure you did configure field mappings:

image

So, make sure to configure the external names, and you should be able to see this:

image

If you look at the security role configuration, the only 3 permissions you’ll be able to give on that new entity are Read/Append/AppendTo (all on the organization level):

image

So, yes, we are allowed to set up relationships between virtual entities and other entities.. Although, when setting up a 1:N between a native entity and a virtual entity, we have to provide an external name for the lookup field:

image

Which makes sense since there has to be a lookup on the “N” side of the relationship.

Either way, at this point there is a Virtual Entity but there is, basically, no security. By the way, we are, actually, allowed to build SSRS reports for the virtual entities:

image

So that turns the original question of whether we can use a plugin to introduce some security into a somewhat more complicated one.. Since RetrieveMultiple plugins do not work for SSRS reports, even if we can make that plugin work for the Virtual Entity, can we also make it work for the SSRS reports?

In theory, if we can use a plugin, then we can, probably, come up with a custom security model. But developing a complete solution is not the purpose of this post – really, all I wanted to do was see whether I can use a plugin to start with. Let’s see then..

Here is my plugin code:

using System;
using System.Activities;
using System.Collections;
using System.Xml.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using System.Collections.Generic;
using System.Data;
using System.Linq;
            
namespace VETest
{
    public class RetrieveMultiple: IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            ITracingService tracingService =
                (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            IPluginExecutionContext context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

            if (context.OutputParameters.Contains("BusinessEntityCollection"))
            {
                var results = ((EntityCollection)context.OutputParameters["BusinessEntityCollection"]).Entities.ToList();
                
                List<Entity> updatedResultList = new List<Entity>();
                foreach (Entity entity in results)
                {
                    if (entity.Contains("tcs_name") && (string)entity["tcs_name"] != "Early morning start, need coffee")
                    {
                        updatedResultList.Add(entity);
                    }
                }
                context.OutputParameters["BusinessEntityCollection"] = new EntityCollection(updatedResultList);
            }
        }
    }
}

The idea is that, in the post-operation RetrieveMultiple, the plugin will go over the list of records returned from the data source and will remove some of them from the results. In this particular case, a record with that particular value in the “name” field will be removed.

So, I got the plugin compiled, and here is how the step has been registered:

image

You can do all of the above using XrmToolBox, btw. Just use the CodeNow plugin to compile the plugin, and, then, use the Plugin Registration plugin in XrmToolBox to register the plugin (that’s a lot of plugins in one sentence..) I’m using the tcs_name attribute in that code, though, and that’s something you may need to change first.

Turns out, the plugin works just fine. There is only 1 record in the results now:

image

Surprisingly, there is only 1 record in the report as well:

image

That last one makes it an interesting precedent, btw. I am wondering if all other SSRS reports will, eventually, go through the standard retrieve pipeline when querying data from Dynamics.. that would remove one of the important limitations we have now when using RetrieveMultiple for custom security.

Either way, it seems that it is possible to customize Virtual Entity security with the help of RetrieveMultiple plugins. Of course what I did above was a very simple example.. Nothing but a proof of concept, really. But it worked, and, it turned out, it also worked with SSRS, so, at the very least, we have a workaround – such a workaround will, likely, involve quite a bit of development, but that’s a different story.