Monthly Archives: April 2019

A PowerShell script to import/export solutions and data

Sep 26: A newer version of this post is available here:

I have never been a big supporter of the configuration migration tool simply because it seems strange that we need to export data manually while preparing a package for the package deployer. I am also on the fence when it comes to the package deployer itself – on the one hand, those packages can certainly do the job. On the other hand, one needs Visual Studio to develop a package.

At some point, we’ve developed EZChange for one of the implementations – that’s an actual no-code approach utilizing mostly unmanaged solutions.

But when it comes to a solution that would work both on-premise and with Azure DevOps, PowerShell seems like a really good option. There are a few projects out there, such as Xrm CI Framework by Wael Hamze.

I just needed something relatively simple this time – with a twist, though, since I also needed a simple, mostly automated data export/import feature (and, ideally, manual control over that data).

If you are curious to see how it worked out, keep reading… By the end of this post you should be able to set everything up so that you can export a solution and data from one environment using one script and import them into another environment using a second script.

There is a GitHub project where you can get the scripts:

That project also includes demo solution files for Dynamics.



To set up your own project, do the following:

  • Either clone or download the whole project from GitHub
  • Browse to the PSModules subfolder and create a deploymentSettings.psm1 file as a copy of deploymentSettingsTemplate.psm1:


  • Open that file and configure connection strings for both the Source and Destination connections


Essentially, that’s it. Since that GitHub project already has a demo solution and some data, you can try importing both into your destination instance. To do that, run Import.ps1 script from the SampleProject folder:


Note: it’s an unmanaged solution that has one demo entity. I don’t think there is any danger in installing it, but, of course, try it at your own risk

Below is a quick demo of what’s supposed to happen. Notice how, at first, there is no DemoDeployment solution in Dynamics. I will run import.ps1, the solution will get imported, and some demo data will be added as well, as you will see in the Advanced Find.













So what does the import.ps1 script look like?

Here it is:


The work is, mostly, done in the last 4 lines:

  • The first two of them are all about setting up CDSDeployment object and initializing it (the class for that object is defined in the PSModules\deployment.psm1 file)
  • Once the object is ready, I can use it to import my solution
  • And, finally, I’m importing demo data in the last line

What about 10 or so lines at the beginning of the script? That’s reading configuration settings from either the environment variables (for Azure DevOps), or from the deploymentSettings.psm1 file created above.
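Just to illustrate that pattern (this is a sketch in JavaScript with made-up variable and setting names, not the actual PowerShell script), the “environment variable first, settings file second” fallback looks like this:

```javascript
// Sketch of the configuration fallback the script uses:
// on a build agent the environment variable is set and wins;
// on a developer machine it is empty, so the settings-file value is used.
// All names below are made up for illustration.
function resolveSetting(envValue, fileValue) {
    return (envValue !== undefined && envValue !== "") ? envValue : fileValue;
}

// Example: a connection string coming from either place
var sourceConnection = resolveSetting(
    process.env.SOURCE_CONNECTION_STRING, // set by the pipeline, if at all
    "AuthType=Office365;Url=https://contoso.crm.dynamics.com" // settings file
);
```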

What if I wanted to export the solution file and/or the data?

This is what export.ps1 script is for.

It’s very similar to the import.ps1, but it’s using the other two methods of the CDSDeployment object:


ExportSolution method will export a solution. The second parameter there stands for “managed” (so, if it’s “false”, then it’ll be an unmanaged solution)

ExportData will export, well, data.

Now keep in mind that there are two files in the Data folder:


You can have more than two, since both the ExportData and PushData methods accept file name parameters. However, at least for now, you will need to create the schema.txt file manually. That’s, basically, an entity metadata file in JSON format:


And, of course, data.txt is the actual data file:


You can either export data from the dev environment, or you can create that file manually (and, possibly, use it to populate all environments with the configuration data, including the dev environment).

Would it work with Azure Pipelines? Sure:


What about the solution packager, etc.? That was not, really, the purpose of this setup, but it can certainly be added.

PowerShell and Dynamics/PowerApps


I was working on a PowerShell script, and I just could not get past the error below:


“Element ‘’ contains data from a type that maps to the name ‘System.Management.Automation:PSObject’. The deserializer has no knowledge of any type that maps to this name”

Yet I was not trying to do anything extraordinary (other than writing a PowerShell script which is something I had not really done before).

In a nutshell, I would create a connection in my script, then I would create an entity, and, then, set an EntityReference attribute there (the code below is not working code – it’s a simplified sample):

$conn = Get-CrmConnection
$entity = New-Object Microsoft.Xrm.Sdk.Entity -ArgumentList $entityName
$value = New-Object -TypeName Microsoft.Xrm.Sdk.EntityReference

$entity[$name] = $value
$conn.Update($entity)

The error would be happening in the Update call above.

If you do a search for the error above, you’ll see a few references. It still took me more than a day to figure out a workaround!

It turned out there is something with boxing/unboxing of objects between PowerShell and .NET which I really don’t fully understand at the moment, but, once I realized that, I figured: what if I just bypassed boxing/unboxing altogether? It worked in my case – what I had to do is:

Define a helper class

$HelperSource = @"
public class Helper
{
    public static void SetAttribute(Microsoft.Xrm.Sdk.Entity entity, string name, object value)
    {
        entity[name] = value;
    }
}
"@

Define a helper function 

You might not need it, by the way. It depends on how the script is structured – I had to use it in another class, and, so, my Helper class would not be available to the compiler yet.. unlike the function below:

function Set-Attribute($entity, $name, $value)
{
    [Helper]::SetAttribute($entity, $name, $value)
}

Add that helper class to my PowerShell script

$assemblyDir = Get-Location
$refs = @("$assemblyDir\Microsoft.Xrm.Sdk.dll", "System.Runtime.Serialization.dll", "System.ServiceModel.dll")
Add-Type -TypeDefinition $script:HelperSource -ReferencedAssemblies $refs

Now, instead of using $entity[$name] = $value, I can use Set-Attribute $entity $name $value.

Seems to be working so far – I’m getting all my entity references updated correctly.



Creating an unsupported javascript


Earlier today, Shidin Haridas and I were discussing something on LinkedIn, and we realized that we were talking about an undocumented JavaScript method in the Client API framework.

You know you are not supposed to use undocumented methods, right?

So I thought what if… I mean, don’t get me wrong, if somebody suggested to use unsupported javascript methods on a Dynamics/PowerApps project, I’d be the first to say “NO WAY!”. And, yet, what if… just this one time.

Ah, whatever, let’s do it just for the sake of experiment.

We want to be smart, though. It’s one thing if the script stops working, but it’s a totally different thing if the rest of the application stops working because of that.

Therefore, let’s do it this way:

  • We’ll double check that the method is, indeed, undocumented
  • We will create a wrapper script
  • That wrapper script, if it fails, should handle the errors silently without making the rest of our app fail


All good? Let’s do it.

Here is what I want to achieve. When “Display Related” is selected, I want to display “Related Demos” link under the “related” dropdown:


When it’s not selected, I want the link to disappear:


It’s not, really, clear how to achieve this goal when looking at the related documentation page:

That page is talking about navigation items, but how do we get that specific item?


Turns out all the methods are there. Whether they are supported or not is a question.

In my example, what would work is this:

var related = formContext.ui.navigation.items.getByName("nav_ita_ita_relateddemo_ita_relateddemo_ParentDemo");

So what if it ends up being an unsupported method in the future? It’s OK if the navigation link shows up – the users will probably notice, but it’s not going to be the end of the world. It would be much worse if that script breaks and everything else breaks because of it.

Let’s create a wrapper then! We just need to make it a safe method by adding “try-catch” around the call that may start failing.

There you go:
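Here is a sketch of what that wrapper can look like (the navigation item name is the one from the example above; the console message is just illustrative):

```javascript
// Sketch of a wrapper around the undocumented getByName call.
// If the method disappears or starts failing in a future release,
// the catch block swallows the error, so the rest of the form
// scripts keep working.
function runUnsupportedScript(formContext, showRelated) {
    try {
        var related = formContext.ui.navigation.items.getByName(
            "nav_ita_ita_relateddemo_ita_relateddemo_ParentDemo");
        if (related) {
            related.setVisible(showRelated);
        }
    } catch (e) {
        // Do not re-throw - just log it for troubleshooting
        console.log("runUnsupportedScript failed: " + e.message);
    }
}
```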


Now, if we stick to using runUnsupportedScript in the various event handlers, there is no risk of running into an error, since the actual method that’ll be calling the undocumented function will handle the error and log a message to the console instead.

So, I’m not implying this is the best practice. I’m not even saying it’s the recommended practice. I would even say don’t do it on your projects! Well, unless you have to…

Managed Solutions vs Unmanaged Solutions


And why would I even start writing about this… Well, not because I’m hoping to offer a decisive argument, and certainly not because I am hoping to bring Jonas Rapp to the “unmanaged” camp or Gus Gonzalez to the “managed”. Those would all be futile attempts.

I just think that managed solutions are overly complicated. And I also think that unmanaged solutions are overly complicated. And (I think) those two statements can live in perfect harmony.

For the managed solutions, solution layering, even though it’s, in a way, an awesome concept, can easily make you lose sleep once you start thinking of how it works.

For the unmanaged, there is a lot of housekeeping that you may have to do in different environments (manually or using your own and/or third-party tools).. although, I am not sure it would have been all taken care of automatically if you just started to use managed.

But, in general, the whole concept of solutions in Dynamics/PowerApps has an inherent flaw which is not going to be solved just by making a solution managed or unmanaged. So, in my mind, arguing over managed vs unmanaged can surely produce some very interesting discussions but, eventually, is not going to produce any final conclusions.

Unless, of course, PowerApps product team decides to lock it all down somehow and just enforce managed solutions in production, but, that’s only going to lock down the last leg of the deployment process. We will still have to live with the mix of managed / unmanaged between “dev” and “integration” environments, so the whole argument is not going to be over.

I can’t figure out a single word for that flaw I mentioned above, so let me just try illustrating the problem by comparing PowerApps solution development to a regular .NET solution development.

What do we do when we are developing a .NET(or Java.. whatever, it does not matter) application?

  • We have a source code repository where our code is stored in its “native” form (well, “delta” or not, but we can look at the latest version of the code in its normal form)
  • We have merge tools
  • We can commit code changes and resolve merge conflicts
  • Finally, we can set up nightly builds to build and test the code we have in the source code repository

Then, we have PowerApps/Dynamics, and it all goes wrong from the beginning.

  • There is nothing “native” about the files (solution files or extracted component files). Those are XML (and, lately, JSON in some cases) interpretations of the actions we had taken in various UI designers – some of us can understand those files better, some may have trouble… but there are only a few (if any at all) people who can responsibly say that they know everything about those files
  • There are no merge tools. Combine this with the statement above, and you will see how this whole picture is starting to get dark
  • Without the merge tools, committing the changes and resolving merge conflicts is becoming an impossible task
  • The last step is, actually, somewhat achievable. Yes, we can automatically deploy solutions. As far as testing goes, that’s a big question… although, it’s mostly a question of whether we have decided to dedicate enough time to automating QA with tools like EasyRepro


Whether we are using managed or unmanaged solutions does not help with any of the steps above, so no matter which solution type we choose, we are not going to solve the actual problem, it seems.

Question is, then, whether one approach is, somehow, better than the other, since neither one is perfect.

Quite frankly, the way I’m looking at it is: if there is an irreversible action, I would not take it. Almost as in “no matter what”. I don’t like burning bridges – you never know when you’ll need one. Managed solutions, in my mind, are an example of such an action. Once one is deployed in the environment, that environment is locked down. And if it were really locked down in terms of development, I might still understand… but it’s not – a System Admin can still go to the default solution and customize a lot of things in the environment. But our solution is, now, locked – there is no way to export it once it’s managed. So we can’t get a copy of that environment to restore it in dev… which means we have to maintain a dev environment somewhere all the time.

So, yes, it seems I’m totally on the unmanaged side, but, like I said above, this is one of those arguments where there probably can be no winners.

Because, of course, managed solutions have some benefits. They are easy to uninstall. They do support attribute removal. Some customizations can be locked down. Although, that said, just don’t give system customizer/admin permissions to the people who are not supposed to have those permissions, and you would not need to worry about unwanted customizations in production.

Do unmanaged solutions have benefits? Sure – we can take a copy of production and turn it into dev in a matter of minutes. Which is, often, what we need to do anyway, since we won’t have some of the licensed production solutions in dev (due to licensing), but we will need the related solution entities to be able to reference them in dev. Are there disadvantages? Of course… You need to delete an attribute? You’ll have to do it manually, or you’ll need to create some kind of script to automate the process.
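Such a script could lean on the metadata Web API, which supports deleting attribute definitions with an HTTP DELETE request. Here is a sketch that only builds the request URL – the entity and attribute names are placeholders, and you should verify the endpoint shape against the current Web API documentation:

```javascript
// Sketch: building the metadata Web API URL for deleting an attribute
// definition. Entity and attribute logical names are placeholders;
// the actual request would be an HTTP DELETE against this URL.
function buildDeleteAttributeUrl(entityLogicalName, attributeLogicalName) {
    return "/api/data/v9.1/EntityDefinitions(LogicalName='" + entityLogicalName +
        "')/Attributes(LogicalName='" + attributeLogicalName + "')";
}
```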

Maybe what could get us closer to settling this argument is if the whole concept of solution development were changed. For instance, what if solutions files were written in some kind of scripting language? I think I’d be able to work with those (Something like: add_an_attribute(“test_attribute”); )

Although, this does not seem to be where it’s all going, not at all… but we should all have hope anyway, and, possibly, stop arguing 🙂 You have a few extra environments to use managed solutions? That’s fine. You don’t have those? Well, stick to unmanaged to be able to export from production (or from a copy of production). But, no matter which way you choose to go, you’ll certainly find at least a few problems along the way.

So, take it easy and have fun with the PowerPlatform!

SolutionPackager can now extract and re-package CanvasApps included into the solution


Not sure if you are aware, but the latest version of SolutionPackager can now extract and re-package CanvasApps (and Flows).

Depending on the version of SolutionPackager you’ve been using so far, it might or might not have been failing once a canvas app was included in the solution. However, the latest version is not just not failing – it’s actually extracting the CanvasApps correctly:


There is a caveat, though, and the credit for both running into the issue below and for hinting at the workaround goes to my colleague (without changing any names.. Denis, do you want a link here?)

Either way, when using the solution packager, you can specify the folder. If the folder you specify does not include the full path, you can extract solution components without any problems:

C:\Dev1\SDK\Tools\CoreTools\SolutionPackager.exe /action:Extract /  /folder:Extract

But, if you try re-packaging the solution:

C:\Dev1\SDK\Tools\CoreTools\SolutionPackager.exe /action:Pack /  /folder:Extract

You will get an error:


As it turned out, there is an awesome workaround! Instead of using the folder name only, use the full path to the folder:

C:\Dev1\SDK\Tools\CoreTools\SolutionPackager.exe  /action:Pack /  /folder:"C:\Work\Blog\CanvasPackager\Extract"

And it will all be fine:


Creating custom check list for a model-driven app

Looking at the check list demo below, I am actually wondering if it would be better to use an embedded Canvas app there. I will need to think about it, but I figured I’d share this small example of a classic model-driven application web resource that’s adding a bit of custom UI to the application:


How easy would it be to create this kind of web resource? Well.. When you need something like this, you may be able to find a good working example almost right away:

What’s required to turn it into a web resource is some knowledge of Web API, Javascript, and HTML.

For the Web API, if you are not familiar with it yet, have a look here:

For JavaScript and HTML.. I’ll just assume you are familiar with both of those to some extent.

And, then, you can download an unmanaged solution which has the web resource, required checklist entities, and a “demo” entity from the link below: 

This solution has 4 components:


Check List Type entity is what you can use to set up different types of checklists.

Check List entity contains individual check list items – here is an example:


Once you’ve imported the solution, start by setting up a few check list types and some check lists items for each type.

Then you’ll need to add the web resource.. Have a look at the Web Resource Demo entity to see how it’s done. If you are setting up your own entity, you’ll need to do 3 things:


  1. Create a field to identify check list type and put it on the form (The schema name should be ita_checklisttype. Make sure it’s all lowercase. Although, you could easily modify the web resource to look for a different attribute name)
  2. Add the ita_/html/checklist.html web resource to the form. Make sure the web resource is set up to receive parameters:

  3. Finally, add the ita_checklistcompletion attribute (string, 2000 characters) to your entity – this is where check list selections will be stored.

Publish all, create a record of that entity type, make sure to select check list type for that record, and switch to the “Check List” tab.

The web resource will start loading; it will query check list items through Web API (and that query will be filtered to only return the items which have selected check list type); and it will display them in a list. Once some items have been selected, those selections will be stored in the ita_checklistcompletion attribute, so the next time somebody loads the same record, all selections will be preserved.
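The kind of filtered Web API query described above can be sketched like this (the entity and attribute names follow the solution described in this post, but treat them as assumptions – check the actual schema names in your environment):

```javascript
// Sketch of the Web API query the web resource could run to load
// check list items for the selected check list type.
// "ita_checklists" and "_ita_checklisttype_value" are assumed names.
function buildCheckListQuery(checkListTypeId) {
    return "/api/data/v9.1/ita_checklists?$select=ita_name" +
        "&$filter=_ita_checklisttype_value eq " + checkListTypeId;
}
```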

Just keep in mind that the purpose of this checklist is not to provide the data you can search on or report on – it’s more to implement a very light-weight checklist functionality (for example, to ensure that the users have completed all required actions before they deactivate the record.. that additional validation might have to be implemented in a plugin, though)

New pricing model for Dynamics/PowerPlatform – can we do some basic estimates?

I hope you have heard about some very interesting changes in the Dynamics pricing model. They may not have affected you yet, but they will, especially if your subscription renewal date is coming soon.

As almost expected in such cases, Jukka Niiranen came up with a great overview of what’s happening to the PowerPlatform (not just on the licensing front):

But I figured I’d try to do a few basic estimates.. By the way, you should definitely read the following article – make sure to look at the “FAQ” section at the bottom:

Anyway, when I go to the admin center, I don’t see capacity reports yet:


Hoping they will show up soon enough, but, at the moment, there are a few things we can assume just by looking at the storage tab of the existing analytics report for the very basic Dynamics instance I am using for training/testing, and it really does not have a lot of data. What it does have is a lot of solutions (Field Service, Project Service, Portals, etc):





Just looking at the top tables by size, I really don’t see any of the custom tables there. To be fair, there are a few other instances I looked at, and, without going into the details, some of them do have custom tables at the top, especially those environments where we don’t have a lot of out-of-the-box solutions.

Either way, it seems that, even without a lot of data, we could expect a Dynamics instance to need at least 2-3 GB of CDS storage.

If you look at the article below:

You’ll see some numbers which might be handy if you are trying to estimate the impact of the pricing change (disclaimer: I am not 100% sure those numbers are correct, but let’s just assume they are):


So, let’s also assume we have a more realistic production instance (not an extreme case, though) that is using 10 GB of CDS storage, 5 GB of log storage, and 40 GB of file storage (if you think of the email integration, this may still be relatively low), and let’s say we don’t have any spare storage…

If we wanted to create an additional QA/UAT environment for that production instance as a full copy of production, it might cost us, roughly:

10*40 + 5*10 + 40*2 = 530 (USD)

Which is not, really, that cheap, yet those assumptions above can go both ways (up or down) depending on the specifics of our environments.
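The same estimate as a tiny function, using the per-GB prices implied by the formula above (database $40, log $10, file $2 – again, assuming those numbers are correct):

```javascript
// Rough monthly cost estimate for extra CDS storage (USD), using the
// assumed per-GB prices from the formula above: database $40, log $10, file $2.
function estimateStorageCost(dbGb, logGb, fileGb) {
    var dbPrice = 40, logPrice = 10, filePrice = 2; // assumed USD per GB
    return dbGb * dbPrice + logGb * logPrice + fileGb * filePrice;
}

// The 10 GB database / 5 GB log / 40 GB file example above:
// estimateStorageCost(10, 5, 40) -> 530
```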

On the one hand, this means it might be helpful to start getting used to the idea of having less-than-full-copy UAT environments.

On the other hand, there is an opportunity to get more QA/Dev environments if not for free, then, at least, for less than it used to be.

Sure we will see how this is all going to play out, and there will likely be more details coming out in the following months.. but, it seems, even the ALM strategy for Dynamics/PowerApps will now have to consider storage space cost calculations, since, depending on how it’s all going to be set up, it’ll cost us more.. or less..

This is going to be interesting, eh?

Implementing custom document location logic with a plugin

In one of the previous posts I used Microsoft Flow to create folders in SharePoint whenever a record is created in Dynamics/CDS. That was not extremely straightforward to me, but, at least, I did not have to fight with the authentication.

But, having done this, we figured that we should still try a plugin instead (after all, a plugin could do everything synchronously, so it might be less confusing for Dynamics users).

This turned out to be a much more monumental task, since we did not really have a lot of experience with SharePoint APIs, etc.

This post is a result of those experiments. It’s not meant to explain all the intricacies of how SharePoint works, how OAuth works, etc. Basically, it’s going to be a step-by-step walkthrough, and, in the end, there is going to be sample source code.

Just to give you an idea of how it’s going to work, here is a quick diagram:


The main problem there turned out to be getting that token. There is a solution that Scott Durow developed some time ago:

But, as it turned out, that one is using legacy Office 365 authentication, and it can now be disabled through the conditional access policy:

Of course that policy has been applied in our tenant.. Murphy’s law in action, I guess.

So we needed a different solution.

There are a few links which helped a great deal, so I’ll just provide them here for your reference:

There were a couple of key concepts I had to realize while reading through those:

  • SharePoint is not using Azure AD Application registrations for OAuth – there is a separate application registration process, and there is a separate token service
  • When registering an app in SharePoint, we are getting a completely new security principal, as the second link above explains: “After you’ve registered your add-in, it is a security principal and has an identity just as users and groups do” . You can also see it on the screenshot below if you look at the “Modified By” column:


Either way, with all that said, we need to go over a few steps:

  • Register an add-in
  • Create the code that gets the token and calls the SharePoint REST API
  • Write a plugin that uses the same code to create folders in SharePoint and document locations in Dynamics as needed

Step 1: Registering an add-in

I’ve registered the add-in using <site>/_layouts/15/AppRegNew.aspx page as described here:

Keep in mind that, later on, you’ll be giving permissions to this add-in, so, depending on where you have installed it (site collection / site), you might be able to limit those permissions to the specific site.


Make sure to copy the client secret and the client id – you’ll need those later.


Also, as strange as it is, there seems to be no easy way to browse through the add-ins registered this way, but you can use PowerShell as described here:

First of all, this link mentions something that you may want to keep in mind:

Client secrets for SharePoint Add-ins that are registered by using the AppRegNew.aspx page expire after one year

Not sure how exactly that is supposed to be managed, but let’s leave it for later (I have a feeling this is a common problem, so either there is a common solution somewhere, or this is a well-known pain, and a reminder has to be implemented and some manual steps have to be taken periodically)

Either way, to get Connect-MsoService working, also make sure to follow the instructions here:


Now that we have the add-in, it’s time for

Step 2: Setting up add-in permissions

Have a look at the article below:

For the add-in we are creating, we will need read/write permissions on the site, so here we go:

Permissions for the next screenshot:

<AppPermissionRequests AllowAppOnlyPolicy="true">

<AppPermissionRequest Scope="http://sharepoint/content/sitecollection" Right="FullControl" />

</AppPermissionRequests>


Why is it for the site collection? Not 100% sure… I would think the tenant scope should work, but it did not (I kept getting “access denied” errors down below when trying to run API queries)

Navigate to the <site_url>/_layouts/15/appinv.aspx

Paste the App Id (copied from Step 1) and look up the app, then paste the permissions from above, then click “Create”


Step 3: Creating a Plugin

For this and the following steps, you will need to find out your sharepoint tenant id. Follow the steps here:

In short, open this url:

http://<SharePointWebsite>/_layouts/15/AppPrincipals.aspx

You will see tenant id there:


By this point, you should have the following 4 parameters:

  • Client id
  • Client Key
  • Tenant Id
  • And you should definitely know your sharepoint url
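With those four values, the token request the plugin has to make can be sketched like this. This only builds the request – the endpoint and parameter format follow the legacy SharePoint add-in (ACS) flow, and 00000003-0000-0ff1-ce00-000000000000 is the well-known SharePoint principal id; treat the details as assumptions to verify against the SharePoint add-in documentation:

```javascript
// Sketch of the app-only token request against the legacy ACS endpoint.
// Builds the URL and the form-encoded body; the caller still has to POST it
// and read access_token from the JSON response.
function buildAcsTokenRequest(clientId, clientSecret, tenantId, siteHost) {
    var sharePointPrincipal = "00000003-0000-0ff1-ce00-000000000000";
    return {
        url: "https://accounts.accesscontrol.windows.net/" + tenantId +
             "/tokens/OAuth/2",
        body: "grant_type=client_credentials" +
              "&client_id=" + encodeURIComponent(clientId + "@" + tenantId) +
              "&client_secret=" + encodeURIComponent(clientSecret) +
              "&resource=" + encodeURIComponent(
                  sharePointPrincipal + "/" + siteHost + "@" + tenantId)
    };
}
```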


You will find the source code for the first version of the plugin on GitHub here:

It definitely deserves a separate post, and there are a few things to do there to improve the code/make it more flexible, but, for now, here is how it works:

  • Build the solution
  • Register the plugin on create of the Lead entity (could be any other document-enabled entity), post-operation, synchronous
  • Add secure configuration to the step



For the secure configuration, use the following XML:

<clientId>YOUR CLIENT ID</clientId>
<clientKey>YOUR KEY</clientKey>
<tenantId>YOUR TENANT ID</tenantId>
<siteRoot>YOUR SHAREPOINT SITE URL</siteRoot>

Now prepare SharePoint and Dynamics:

  • Create a document library in Sharepoint, call it “DynamicsDocs” (right in the root)
  • Assuming “Default Site” refers to the SharePoint root, create a document location in Dynamics like this:



With that done, if you create a lead in Dynamics, here is what will happen:

  • The plugin will create a new folder under DynamicsDocs (using the new lead’s ID for the folder name)
  • And it will create a document location in Dynamics to link that folder to the lead entity


Hope I’ll be able to write another post soon to explain the plugin in more detail, and, also, to add a few improvements…

Instantiating an email template with Web API

There is email template functionality in Dynamics/model-driven applications – if you are not familiar with it, have a look at the documentation first:

However, what if, for whatever reason, you wanted to instantiate an email template automatically instead of having to use the “Insert Template” button on the email screen?

For example, maybe there are a few frequently used templates, so you might want to add a flyout button to the command bar as a shortcut for those templates.

This is what you can use the InstantiateTemplate action for:

And below is a JavaScript code sample which will do just that – just make sure to:

  • Replace those parameters with the proper template id, object type, and object id
  • Instead of displaying the alerts, do something with the subject and description you will get back from the action call
function InstantiateEmailTemplate() {
    var parameters = {
        "TemplateId": "00000000-0000-0000-0000-000000000000", //GUID
        "ObjectType": "logicalname", //Entity logical name, lowercase
        "ObjectId": "00000000-0000-0000-0000-000000000000" //record id for the entity above
    };

    var requestUrl = "/api/data/v9.1/InstantiateTemplate";
    var context;
    if (typeof GetGlobalContext === "function") {
        context = GetGlobalContext();
    } else {
        context = Xrm.Page.context;
    }

    var req = new XMLHttpRequest();
    req.open("POST", context.getClientUrl() + requestUrl, true);
    req.setRequestHeader("OData-MaxVersion", "4.0");
    req.setRequestHeader("OData-Version", "4.0");
    req.setRequestHeader("Accept", "application/json");
    req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
    req.onreadystatechange = function () {
        if (this.readyState === 4) {
            req.onreadystatechange = null;
            if (this.status === 200) {
                var result = JSON.parse(this.response);
                //do something with the subject and description returned in result.value
            } else {
                var errorText = this.responseText;
                //handle/log the error
            }
        }
    };
    req.send(JSON.stringify(parameters));
}
PS. Here is the terrible part.. I’ve written the post above, and, then, a colleague of mine came up and said “Hey, I found this other post:  ”

Of course, those Inogic folks.. they even have the same blog theme! Well, maybe I have the same.. Anyway, I figured I’d count the number of lines in each version and you know what? If you remove empty lines and alerts in my version, it turns out to be a shorter one! Well, the old ways are not, always, worse, but they are still old 🙂 So, make sure to read that post by Inogic.

Canvas vs Model-Driven Apps – two ways to look at it

Ever since I’ve found myself working in the online environment, and having done all my previous work on-premise, it seems I got almost addicted to various kinds of diagrams. This seems to be the only way I can even start to understand what’s happening in the PowerPlatform world. Not sure if it’s for better or worse, but, possibly, you’ll find it useful, too.

So here is one of those:


See, what if you wanted to explain the difference between Canvas and Model-Driven apps to somebody not familiar with one or both types of those apps?

This is what the diagram above is all about. First of all, there can be no talking about model-driven apps without CDS. But, when the data is in CDS, you can think of it as a square where model-driven applications are better suited for complex data, and canvas apps are better suited for unique user interfaces.

Of course, if you have complex data and need a unique interface, there is a problem. Well, this is where the diagram below might also help, since it goes into some of the additional details, yet it also mentions the concept of Embedded Canvas Apps:


I talked about this diagram in the Episode #2 of This or That, just could not get rid of the feeling that a more high-level view was still missing. So, hopefully, it’s good enough now.

PS. And just if you wanted to see yet another diagram, there is one more here: