Monthly Archives: April 2019

Managed Solutions vs Unmanaged Solutions


And why would I even start writing about this.. Well, not because I’m hoping to offer a decisive argument, and certainly not because I am hoping to bring Jonas Rapp to the “unmanaged” camp or Gus Gonzalez to the “managed”. Those would all be futile attempts.

I just think that managed solutions are overly complicated. And I also think that unmanaged solutions are overly complicated. And (I think) those two statements can live in perfect harmony.

For the managed solutions, solution layering, even though it’s, in a way, an awesome concept, can easily make you lose sleep once you start thinking of how it works.

For the unmanaged, there is a lot of housekeeping that you may have to do in different environments (manually, or using your own and/or third-party tools).. although, I am not sure it would all have been taken care of automatically if you had just started using managed.

But, in general, the whole concept of solutions in Dynamics/PowerApps has an inherent flaw which is not going to be solved just by making a solution managed or unmanaged. So, in my mind, arguing over managed vs unmanaged can surely produce some very interesting discussions but, eventually, is not going to produce any final conclusions.

Unless, of course, the PowerApps product team decides to lock it all down somehow and just enforce managed solutions in production, but that’s only going to lock down the last leg of the deployment process. We will still have to live with the mix of managed / unmanaged between “dev” and “integration” environments, so the whole argument is not going to be over.

I can’t figure out a single word for that flaw I mentioned above, so let me just try illustrating the problem by comparing PowerApps solution development to a regular .NET solution development.

What do we do when we are developing a .NET (or Java.. whatever, it does not matter) application?

  • We have a source code repository where our code is stored in its “native” form (well, “delta” or not, but we can look at the latest version of the code in its normal form)
  • We have merge tools
  • We can commit code changes and resolve merge conflicts
  • Finally, we can set up nightly builds to build and test the code we have in the source code repository

Then, we have PowerApps/Dynamics, and it all goes wrong from the beginning.

  • There is nothing “native” about the files (solution files or extracted component files). Those are XML (and, lately, JSON in some cases) interpretations of the actions we had taken in various UI designers – some of us can understand those files better, some may have trouble.. but there are only a few (if any at all) people who can responsibly say that they know everything about those files
  • There are no merge tools. Combine this with the statement above, and you will see how this whole picture is starting to get dark
  • Without the merge tools, committing the changes and resolving merge conflicts is becoming an impossible task
  • The last step is, actually, somewhat achievable. Yes, we can automatically deploy solutions. As far as testing goes, that’s a big question.. although, it’s mostly a question of whether we have decided to dedicate enough time to automating the QA with tools like EasyRepro


Whether we are using managed or unmanaged solutions does not help with any of the steps above, so no matter which solution type we choose, we are not going to solve the actual problem, it seems.

Question is, then, whether one approach is, somehow, better than the other, since neither one is perfect.

Quite frankly, the way I’m looking at it is: if there is an irreversible action, I would not take it. Almost as in “no matter what”. I don’t like burning the bridges – you never know when you’ll need one. Managed solutions, in my mind, are an example of such an action. Once a managed solution is deployed in the environment, that environment is locked down. And if it were really locked down in terms of development, I might still understand.. but it’s not – a System Admin can still go to the default solution and customize a lot of things in the environment. But our solution is, now, locked – there is no way to export it once it’s managed. So we can’t get a copy of that environment to restore it in dev.. Which means we have to maintain a dev environment somewhere all the time.

So, yes, it seems I’m totally on the unmanaged side, but, like I said above, this is one of those arguments where there probably can be no winners.

Because, of course, managed solutions have some benefits. They are easy to uninstall. They do support attribute removal. Some customizations can be locked down. Although, that said, just don’t give system customizer/admin permissions to the people who are not supposed to have those permissions, and you won’t need to worry about unwanted customizations in production.

Do unmanaged solutions have benefits? Sure – we can take a copy of production and turn it into dev in a matter of minutes. Which is, often, what we need to do anyway since we won’t have some of the licensed production solutions in dev (due to licensing), but we will need the related solution entities to be able to reference them in dev. Are there disadvantages? Of course.. You need to delete an attribute? You’ll have to do it manually, or you’ll need to create some kind of script to automate the process.
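For what it’s worth, here is a minimal sketch of what such a cleanup script could be built around. It assumes the Web API metadata endpoints (EntityDefinitions / Attributes), and the entity and attribute names below are made up:

```javascript
// Hypothetical helper: builds the Web API request needed to delete an attribute.
// The entity/attribute names used below are made-up examples.
function buildDeleteAttributeRequest(clientUrl, entityLogicalName, attributeLogicalName) {
    return {
        method: "DELETE",
        url: clientUrl + "/api/data/v9.1/EntityDefinitions(LogicalName='" +
            entityLogicalName + "')/Attributes(LogicalName='" + attributeLogicalName + "')"
    };
}

var request = buildDeleteAttributeRequest(
    "https://yourorg.crm.dynamics.com", "ita_demoentity", "ita_oldattribute");
// Sending this DELETE (with proper authentication) removes the attribute,
// so a script could simply loop over a list of attributes to clean up.
```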

Maybe what could get us closer to settling this argument is if the whole concept of solution development were changed. For instance, what if solution files were written in some kind of scripting language? I think I’d be able to work with those (Something like: add_an_attribute(“test_attribute”); )
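Purely as a thought experiment – nothing like this exists today, and every name below is invented – such a “solution script” could be just a list of recorded operations:

```javascript
// Purely hypothetical "solution script" API - every name here is invented.
// Each call records an operation that a deployment tool could replay,
// and a plain list of operations like this would be trivially diff-able.
function SolutionScript() {
    this.operations = [];
}
SolutionScript.prototype.add_an_attribute = function (name) {
    this.operations.push({ action: "add_attribute", name: name });
    return this;
};
SolutionScript.prototype.remove_an_attribute = function (name) {
    this.operations.push({ action: "remove_attribute", name: name });
    return this;
};

var script = new SolutionScript();
script.add_an_attribute("test_attribute")
      .remove_an_attribute("obsolete_attribute");
```

A merge tool could then treat those operations the same way it treats regular source code lines, which is exactly what we are missing now.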

Although, this does not seem to be where it’s all going, not at all.. but we should all have hope anyway, and, possibly, stop arguing 🙂 You have a few extra environments to use managed solutions? That’s fine. You don’t have those? Well, stick to unmanaged to be able to export from production (or from a copy of production). But, no matter which way you choose to go, you’ll certainly find at least a few problems along the way.

So, take it easy and have fun with the PowerPlatform!

SolutionPackager can now extract and re-package CanvasApps included into the solution


Not sure if you are aware, but the latest version of SolutionPackager can now extract and re-package CanvasApps (and Flows).

Depending on the version of SolutionPackager you’ve been using so far, it might or might not have been failing once a canvas app was included in the solution. However, the latest version is not just not failing – it’s actually extracting the CanvasApps correctly:


There is a caveat, though, and the credit for both running into the issue below and for hinting at the workaround goes to my colleague (without changing any names.. Denis, do you want a link here?)

Either way, when using the solution packager, you can specify the folder. So, if the folder you specify does not include the full path, you can extract solution components without any problems:

C:\Dev1\SDK\Tools\CoreTools\SolutionPackager.exe /action:Extract /zipfile:<your solution zip> /folder:Extract

But, if you try re-packaging the solution:

C:\Dev1\SDK\Tools\CoreTools\SolutionPackager.exe /action:Pack /zipfile:<your solution zip> /folder:Extract

You will get an error:


As it turned out, there is an awesome workaround! Instead of using folder name only, use full path to the folder:

C:\Dev1\SDK\Tools\CoreTools\SolutionPackager.exe /action:Pack /zipfile:<your solution zip> /folder:"C:\Work\Blog\CanvasPackager\Extract"

And it will all be fine:


Creating custom check list for a model-driven app

Looking at the check list demo below, I am actually wondering if it would be better to use an embedded Canvas app there? Will need to think about it, but I figured I’d share this small example of a classic model-driven application web resource that’s adding a bit of custom UI to the application:


How easy would it be to create this kind of web resource? Well.. When you need something like this, you may be able to find a good working example almost right away:

What’s required to turn it into a web resource is some knowledge of Web API, Javascript, and HTML.

For the Web API, if you are not familiar with it yet, have a look here:

For JavaScript and HTML.. I’ll just assume you are familiar with both of those to some extent.

And, then, you can download an unmanaged solution which has the web resource, required checklist entities, and a “demo” entity from the link below: 

This solution has 4 components:


Check List Type entity is what you can use to set up different types of checklists.

Check List entity contains individual check list items – here is an example:


Once you’ve imported the solution, start by setting up a few check list types and some check list items for each type.

Then you’ll need to add the web resource.. Have a look at the Web Resource Demo entity to see how it’s done. If you are setting up your own entity, you’ll need to do 3 things:


  1. Create a field to identify the check list type and put it on the form (the schema name should be ita_checklisttype – make sure it’s all lowercase – although you could easily modify the web resource to look for a different attribute name)
  2. Add the ita_/html/checklist.html web resource to the form. Make sure the web resource is set up to receive parameters
  3. Finally, add the ita_checklistcompletion attribute (string, 2000 characters) to your entity – this is where check list selections will be stored.

Publish all, create a record of that entity type, make sure to select check list type for that record, and switch to the “Check List” tab.

The web resource will start loading; it will query check list items through Web API (and that query will be filtered to only return the items which have selected check list type); and it will display them in a list. Once some items have been selected, those selections will be stored in the ita_checklistcompletion attribute, so the next time somebody loads the same record, all selections will be preserved.
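Just to make that a bit more concrete, here is roughly the kind of query the web resource builds. The entity set and attribute names below are my assumptions based on the solution described above (ita_ prefix):

```javascript
// Builds the (assumed) Web API query for check list items of a given type.
function buildCheckListQuery(clientUrl, checkListTypeId) {
    return clientUrl + "/api/data/v9.1/ita_checklists" +
        "?$select=ita_name" +
        "&$filter=_ita_checklisttype_value eq " + checkListTypeId;
}

var query = buildCheckListQuery(
    "https://yourorg.crm.dynamics.com",
    "00000000-0000-0000-0000-000000000000");
// The web resource sends this with a GET request, renders the returned
// items as checkboxes, and saves the selections to ita_checklistcompletion.
```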

Just keep in mind that the purpose of this checklist is not to provide data you can search on or report on – it’s more to implement a very lightweight checklist functionality (for example, to ensure that the users have completed all required actions before they deactivate the record.. that additional validation might have to be implemented in a plugin, though)

New pricing model for Dynamics/PowerPlatform – can we do some basic estimates?

I hope you have heard about some very interesting changes in the Dynamics pricing model. They may not have affected you yet, but they will, especially if your subscription renewal date is coming soon.

As almost expected in such cases, Jukka Niiranen came up with a great overview of what’s happening to the PowerPlatform (not just on the licensing front):

But I figured I’d try to do a few basic estimates.. By the way, you should definitely read the following article – make sure to look at the “FAQ” section at the bottom:

Anyway, when I go to the admin center, I don’t see capacity reports yet:


Hoping they will show up soon enough, but, at the moment, there are a few things we can assume just by looking at the storage tab of the existing analytics report for the very basic Dynamics instance I am using for training/testing, and it really does not have a lot of data. What it does have is a lot of solutions (Field Service, Project Service, Portals, etc):





Just looking at the top tables by size, I really don’t see any of the custom tables there. To be fair, there are a few other instances I looked at, and, without going into the details, some of them do have custom tables at the top, especially those environments where we don’t have a lot of out-of-the-box solutions.

Either way, it seems that, even without a lot of data, we could expect a Dynamics instance to need at least 2-3 GB of CDS storage.

If you look at the article below:

You’ll see some numbers which might be handy if you are trying to estimate the impact of the pricing change (disclaimer: I am not 100% sure those numbers are correct, but let’s just assume they are):


So, let’s also assume we have a more realistic production instance (not an extreme case, though) that is using 10 GB of CDS storage, 5 GB of log storage, and 40 GB of file storage (if you think of the email integration, this may still be relatively low), and let’s say we don’t have any spare storage..

If we wanted to create an additional QA/UAT environment for that production instance as a full copy of production, it might cost us, roughly:

10*40 + 5*10 + 40*2 = 530 (USD)
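In other words, the same back-of-the-napkin math, with the assumed per-GB prices (database: $40, log: $10, file: $2 – see the disclaimer about those numbers above):

```javascript
// Rough monthly storage cost estimate using the assumed per-GB prices.
function estimateMonthlyStorageCost(dbGb, logGb, fileGb) {
    var DB_PRICE = 40, LOG_PRICE = 10, FILE_PRICE = 2; // USD per GB, assumed
    return dbGb * DB_PRICE + logGb * LOG_PRICE + fileGb * FILE_PRICE;
}

var cost = estimateMonthlyStorageCost(10, 5, 40);
// 10*40 + 5*10 + 40*2 = 530 (USD)
```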

Which is not, really, that cheap, yet those assumptions above can go both ways (up or down) depending on the specifics of our environments.

On the one hand, this means it might be helpful to start getting used to the idea of having less-than-full-copy UAT environments.

On the other hand, there is an opportunity to get more QA/Dev environments if not for free, then, at least, for less than it used to be.

Surely, we will see how this is all going to play out, and there will likely be more details coming out in the following months.. but, it seems, even the ALM strategy for Dynamics/PowerApps will now have to consider storage space cost calculations, since, depending on how it’s all going to be set up, it’ll cost us more.. or less..

This is going to be interesting, eh?

Implementing custom document location logic with a plugin

In one of the previous posts I used Microsoft Flow to create folders in SharePoint whenever a record is created in Dynamics/CDS. That was not extremely straightforward to me, but, at least, I did not have to fight with the authentication.

But, having done this, we figured we should still try a plugin instead (after all, a plugin could do everything synchronously, so it might be less confusing for Dynamics users).

This turned out to be a much more monumental task since we did not really have a lot of experience with SharePoint APIs etc.

This post is a result of those experiments. It’s not meant to explain all the intricacies of how SharePoint works, how OAuth works, etc. Basically, it’s going to be a step-by-step walkthrough, and, in the end, there is going to be sample source code.

Just to give you an idea of how it’s going to work, here is a quick diagram:


The main problem there turned out to be getting that token. There is a solution that Scott Durow developed some time ago:

But, as it turned out, that one is using legacy office 365 authentication, and it can now be disabled through the conditional access policy:

Of course that policy has been applied in our tenant.. Murphy’s law in action, I guess.

So we needed a different solution.

There are a few links which helped a great deal, so I’ll just provide them here for your reference:

There were a couple of key concepts I had to realize while reading through those:

  • SharePoint is not using Azure AD Application registrations for OAuth – there is a separate application registration process, and there is a separate token service
  • When registering an app in SharePoint, we are getting a completely new security principal, as the second link above explains: “After you’ve registered your add-in, it is a security principal and has an identity just as users and groups do”. You can also see it on the screenshot below if you look at the “Modified By” column:


Either way, with all that said, we need to go over a few steps:

  • Register an add-in
  • Create the code that gets the token and calls the SharePoint REST API
  • Write a plugin that uses the same code to create folders in SharePoint and document locations in Dynamics as needed
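To give you an idea of what the “get the token” part involves, here is a sketch of the app-only token request that goes to the ACS endpoint. The resource id 00000003-0000-0ff1-ce00-000000000000 is SharePoint’s well-known principal id; everything else below is a placeholder:

```javascript
// Builds the SharePoint app-only (ACS) token request.
// 00000003-0000-0ff1-ce00-000000000000 is SharePoint's well-known principal id.
function buildAcsTokenRequest(tenantId, clientId, clientSecret, siteHost) {
    var SHAREPOINT_PRINCIPAL = "00000003-0000-0ff1-ce00-000000000000";
    return {
        url: "https://accounts.accesscontrol.windows.net/" + tenantId +
            "/tokens/OAuth/2",
        body: {
            grant_type: "client_credentials",
            client_id: clientId + "@" + tenantId,
            client_secret: clientSecret,
            resource: SHAREPOINT_PRINCIPAL + "/" + siteHost + "@" + tenantId
        }
    };
}

var tokenRequest = buildAcsTokenRequest(
    "YOUR TENANT ID", "YOUR CLIENT ID", "YOUR CLIENT SECRET",
    "yourtenant.sharepoint.com");
// POSTing that form-encoded body to tokenRequest.url returns an access token,
// which is then sent as a Bearer token to the SharePoint REST API.
```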

Step 1: Registering an add-in

I’ve registered the add-in using <site>/_layouts/15/AppRegNew.aspx page as described here:

Keep in mind that, later on, you’ll be giving permissions to this add-in, so, depending on where you have installed it (site collection / site), you might be able to limit those permissions to the specific site.


Make sure to copy the client secret and the client id – you’ll need those later.


Also, as strange as it is, there seems to be no easy way to browse through the add-ins registered this way, but you can use PowerShell as described here:

First of all, this link mentions something that you may want to keep in mind:

Client secrets for SharePoint Add-ins that are registered by using the AppRegNew.aspx page expire after one year

Not sure how exactly that is supposed to be managed, but let’s leave it for later (I have a feeling this is a common problem, so either there is a common solution somewhere, or this is a well-known pain, and a reminder has to be implemented with some manual steps taken periodically)
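If nothing else, the reminder math itself is simple enough. A minimal sketch (the 30-day lead time is an arbitrary choice on my part):

```javascript
// Given the registration date, when does the secret expire,
// and when should somebody be reminded to rotate it?
function getSecretReminderDate(registeredOn, leadTimeDays) {
    var expiry = new Date(registeredOn.getTime());
    expiry.setFullYear(expiry.getFullYear() + 1); // secrets expire after one year
    var reminder = new Date(expiry.getTime());
    reminder.setDate(reminder.getDate() - leadTimeDays);
    return { expiry: expiry, reminder: reminder };
}

var dates = getSecretReminderDate(new Date(2019, 3, 1), 30);
// registered April 1, 2019 -> expires April 1, 2020; reminder on March 2, 2020
```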

Either way, to get Connect-MsoService working, also make sure to follow the instructions here:


Now that we have the add-in, it’s time for

Step 2: Setting up add-in permissions

Have a look at the article below:

For the add-in we are creating, we will need read/write permissions on the site, so here we go:

Permissions for the next screenshot:

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection" Right="FullControl" />
</AppPermissionRequests>

Why is it for the sitecollection? Not 100% sure.. I would think tenant should work, but it did not (I kept getting “access denied” errors down below when trying to run API queries)

Navigate to the <site_url>/_layouts/15/appinv.aspx

Paste the App Id (copied from Step 1) and look up the app, then paste the permissions from above, then click “Create”


Step 3: Creating a Plugin

For this and the following steps, you will need to find out your SharePoint tenant id. Follow the steps here:

In short, open this url:

http://<SharePointWebsite>/_layouts/15/AppPrincipals.aspx

You will see tenant id there:


By this point, you should have the following 4 parameters:

  • Client id
  • Client Key
  • Tenant Id
  • And you should definitely know your SharePoint URL


You will find the source code for the first version of the plugin on GitHub here:

It definitely deserves a separate post, and there are a few things to do there to improve the code/make it more flexible, but, for now, here is how it works:

  • Build the solution
  • Register the plugin on create of the Lead entity (could be any other document-enabled entity), post-operation, synchronous
  • Add secure configuration to the step



For the secure configuration, use the following XML:

<clientId>YOUR CLIENT ID</clientId>
<clientKey>YOUR KEY</clientKey>
<tenantId>YOUR TENANT ID</tenantId>
<siteRoot>YOUR SHAREPOINT SITE URL</siteRoot>

Now prepare SharePoint and Dynamics:

  • Create a document library in SharePoint, call it “DynamicsDocs” (right in the root)
  • Assuming “Default Site” refers to the SharePoint root, create a document location in Dynamics like this:



With that done, if you create a lead in Dynamics, here is what will happen:

  • The plugin will create a new folder under DynamicsDocs (using the new lead ID for the folder name)
  • And it will create a document location in Dynamics to link that folder to the lead entity
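For reference, here is a sketch of the two “create” payloads the plugin builds. The names (DynamicsDocs, lead) match the walkthrough above, but the exact payload shapes are my best understanding of the SharePoint REST API and the Dynamics Web API, so treat them as an approximation:

```javascript
// Request for creating a folder through the SharePoint REST API.
function buildFolderRequest(siteUrl, libraryName, folderName) {
    return {
        url: siteUrl + "/_api/web/folders",
        body: { ServerRelativeUrl: libraryName + "/" + folderName }
    };
}

// Payload for the document location record that links the folder to the lead.
function buildDocumentLocationPayload(folderName, parentLocationId, leadId) {
    return {
        name: folderName,
        relativeurl: folderName,
        "parentsiteorlocation_sharepointdocumentlocation@odata.bind":
            "/sharepointdocumentlocations(" + parentLocationId + ")",
        "regardingobjectid_lead@odata.bind": "/leads(" + leadId + ")"
    };
}

var folderRequest = buildFolderRequest(
    "https://yourtenant.sharepoint.com", "DynamicsDocs", "LEAD-GUID");
var locationPayload = buildDocumentLocationPayload(
    "LEAD-GUID", "PARENT-LOCATION-GUID", "LEAD-GUID");
```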


Hope I’ll be able to write another post soon to explain the plugin in more detail, and, also, to add a few improvements..

Instantiating an email template with Web API

There is email template functionality in Dynamics/model-driven applications – if you are not familiar with it, have a look at the documentation first:

However, what if, for whatever reason, you wanted to automatically instantiate an email template instead of having to use the “insert template” button on the email screen?

For example, maybe there are a few frequently used templates, so you might want to add a flyout button to the command bar as a shortcut for those templates.

This is what you can use the InstantiateTemplate action for:

And below is a JavaScript code sample which will do just that – just make sure to:

  • Replace those parameters with the proper TemplateId, ObjectType, and ObjectId
  • Instead of displaying the alerts, do something with the subject and description you will get back from the action call
function InstantiateEmailTemplate() {
    var parameters = {
        "TemplateId": "00000000-0000-0000-0000-000000000000",//GUID
        "ObjectType": "logicalname",//Entity logical name, lowercase
        "ObjectId": "00000000-0000-0000-0000-000000000000"//record id for the entity above
    };
    var requestUrl = "/api/data/v9.1/InstantiateTemplate";
    var context;
    if (typeof GetGlobalContext === "function") {
        context = GetGlobalContext();
    } else {
        context = Xrm.Page.context;
    }
    var req = new XMLHttpRequest();
    req.open("POST", context.getClientUrl() + requestUrl, true);
    req.setRequestHeader("OData-MaxVersion", "4.0");
    req.setRequestHeader("OData-Version", "4.0");
    req.setRequestHeader("Accept", "application/json");
    req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
    req.onreadystatechange = function () {
        if (this.readyState === 4) {
            req.onreadystatechange = null;
            if (this.status === 200) {
                var result = JSON.parse(this.response);
                //the instantiated template comes back in result.value
                alert(result.value[0].subject);
                alert(result.value[0].description);
            } else {
                var errorText = this.responseText;
                alert(errorText);
            }
        }
    };
    req.send(JSON.stringify(parameters));
}
PS. Here is the terrible part.. I’ve written the post above, and, then, a colleague of mine came up and said “Hey, I found this other post:  ”

Of course, those Inogic folks.. they even have the same blog theme! Well, maybe I have the same.. Anyway, I figured I’d count the number of lines in each version, and you know what? If you remove the empty lines and alerts in my version, it turns out to be the shorter one! Well, the old ways are not, always, worse, but they are still old 🙂 So, make sure to read that post by Inogic.

Canvas vs Model-Driven Apps – two ways to look at it

Ever since I found myself working in the online environment, having done all my previous work on-premise, it seems I’ve become almost addicted to various kinds of diagrams. This seems to be the only way I can even start to understand what’s happening in the PowerPlatform world. Not sure if it’s for better or worse, but, possibly, you’ll find it useful, too.

So here is one of those:


See, what if you wanted to explain the difference between Canvas and Model-Driven apps to somebody not familiar with one or both types of those apps?

This is what the diagram above is all about. First of all, there can be no talking about model-driven apps without CDS. But, when the data is in CDS, you can think of it as a square where model-driven applications are better suited for complex data, and canvas apps are better suited for unique user interfaces.

Of course, if you have complex data and need a unique interface, there is a problem. Well, this is where the diagram below might also help, since it goes into some of the additional details, yet it also mentions the concept of Embedded Canvas Apps:


I talked about this diagram in Episode #2 of This or That, but I just could not get rid of the feeling that a more high-level view was still missing. So, hopefully, it’s good enough now.

PS. And just if you wanted to see yet another diagram, there is one more here:

Creating a model-driven app – don’t forget to add ALL required entities and forms to the app


When creating a model-driven application, it’s easy to think that we only need to include in the app those entities which will be displayed in the sitemap. So, in the example below, it would be only 5 entities:


There is something to keep in mind, though. If you are expecting quick create forms to show up in your application at all, you need to add the corresponding entities and forms to the application. Although, you don’t have to add those entities to the sitemap.

This applies to the activity entities, too.

For example, under the quick create button at the top, I only have tasks on the screenshot below, since that’s the only activity entity added to my app:


Surprisingly, the timeline control is behaving a little differently – all activity entities are showing up there:


However, even the timeline control falls victim to the missing forms. If I try creating a task from the timeline, I will be able to use a quick create form:


If I try creating a phone call from the timeline, I’ll be looking at the regular main form:


Adding Phone Call entity (and quick create form for that entity) to the application takes care of this issue:


Here we go:



Editable sub-grids are playing April fools’ joke


It’s the morning of April 1st, I am looking at my editable sub-grid, and I start panicking. Because, apparently, it’s not working – I can’t edit anything at all.

I tried pretty much everything (re-enabling the grid, changing the view, updating control settings in different places, looking in the community forums).. Nothing worked.

And, then, I realized that the parent record (for which Dynamics was displaying that sub-grid) was read-only:


Well, sub-grids, you have certainly made fun of me, but I’ll tell you this: I can still edit those related records if I just open them separately (as in, if I double-click any of the sub-grid rows), so I’ll be the one who laughs last!