Monthly Archives: June 2020

Spooky scary licensing caught up with me again

Not without some help, but, it seems, I’ve just uncovered a skeleton in my closet. Which turned out to be this old blog post I wrote back in September 2019:


Now, I stand corrected TWICE in just one day (for the record, that’s a somewhat unusual feeling), but, well, the truth has to come out, so… What I wrote back then is incorrect. In my defense, it was correct based on what I knew at the time, but it has become incorrect at some point between then and now. Which is precisely why licensing can be just as spooky and scary as those skeletons.


You are probably aware of the various API limits in CDS – if not, have a look at the following documentation page:

If you are aware, but did not think twice about the batch requests, have a look there anyway.

There is a lot of stuff there, but the reason I’m writing this post is that there is this note which was added some time in December 2019:

 Entitlement limits … requests… include operations performed by… $batch (ExecuteMultiple)


To reiterate: for the entitlement limits, the individual requests within a batch will still be counted separately, so this may have implications for data migration, data integration, etc.
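To put some (made-up) numbers on it, here is a quick Python sketch – the point being that batching reduces the number of HTTP calls, not the entitlement consumption:

```python
# A $batch (ExecuteMultiple) call is one HTTP request, but every
# operation inside it still counts toward the entitlement limits.
def batch_consumption(total_records: int, batch_size: int):
    http_calls = -(-total_records // batch_size)  # ceiling division
    entitlement_requests = total_records          # each inner operation counts
    return http_calls, entitlement_requests

# Migrating 500,000 records in batches of 1,000: only 500 HTTP calls,
# but still 500,000 requests against the entitlement.
calls, requests = batch_consumption(500_000, 1_000)
```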

Just keep that in mind. And, with that said, let’s keep moving on.

How big can an IFrame be on the model-driven form, and how to make it even bigger?

Did you ever try to put a really big IFrame on the model-driven form? If you did, and totally irrespective of why you would do such a bizarre thing, you might have seen this message:


Those 40 rows, whatever they translate to in pixels, are the maximum you can get if you go with a regular web resource/IFrame.

However, if you create an IFrame PCF control, you can make that IFrame as big as you want. In the example below, I can easily get to 3000px height with the PCF control:



Whereas for a “native iframe” control all I can get is 1960px:


So, why would it matter? Well, there might be a long data entry form you wanted to display there. That seems to be my scenario. But, even if that’s not a very common scenario, it’s just an interesting fact that PCF controls are not limited by those 40 rows – be it an iframe, a long text area, a huge grid control, etc.

Have fun!


CDS: How to receive notifications when a user gets added to/removed from the AD group team

AD group teams are not managed exactly the same way “native” teams are managed in CDS, and one difference seems to be that associate/disassociate events will not be triggered for such teams.

However, this does not mean there is no way to respond to a change in the group membership, it just has to be done differently. Quite differently, actually.

With a security group in Azure, we can use the Graph API to subscribe to various notifications:


Why is there an Azure Function on this diagram? Because, when creating a subscription, we need to use a webhook that responds in a certain way to the initial validation request (as per the link above):

The client must provide a response with the following characteristics within 10 seconds:

  • A 200 (OK) status code.
  • The content type must be text/plain.
  • The body must include the validation token provided by Microsoft Graph.

I am not sure this would be doable with a Flow alone, so, instead, there is an Azure Function and a Flow.
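To illustrate what the webhook has to do, here is a rough Python sketch of the validation logic (the actual sample in this post is a C# Azure Function, and the function/parameter names below are mine):

```python
from urllib.parse import parse_qs, urlparse

def handle_graph_call(url: str, body: str):
    """Return (status, content_type, body) for an incoming Graph request."""
    token = parse_qs(urlparse(url).query).get("validationToken")
    if token:
        # Initial validation: echo the token back as plain text with a 200.
        return 200, "text/plain", token[0]
    # Otherwise it's an actual change notification - in this post,
    # the Azure Function forwards the payload to the Flow.
    return 202, "application/json", body

response = handle_graph_call(
    "https://example.com/api/notify?validationToken=abc123", "")
```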

To start with, you will need an HTTP-triggered Azure Function. You’ll find the one I used for this post on GitHub here:

Before you deploy it in Azure, make sure to update the FlowUrl variable – it should be pointing to your own Flow:


Which means you will need to create a Flow first.

Here is how my Flow looks:


There is no JSON parsing in the HTTP trigger, since I wanted to make sure I see what’s coming in even if the parsing fails. Instead, there is a separate Parse JSON action.

You can use the payload sample below to generate schema for that action:


The interesting part about this payload is that Graph notifications are cumulative. For the users who were removed from the AD group, you’ll see “@removed”:”deleted”. For the users added to the group, you won’t have that flag.
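In code terms, the sorting logic boils down to this (a Python sketch against a simplified version of the payload – the real notification carries more fields):

```python
# Simplified version of a Graph change notification for a group -
# members@delta lists both added and removed users in a single payload.
notification = {
    "value": [{
        "changeType": "updated",
        "resourceData": {
            "members@delta": [
                {"id": "user-1"},                        # added
                {"id": "user-2", "@removed": "deleted"}  # removed
            ]
        }
    }]
}

added, removed = [], []
for change in notification["value"]:
    for member in change["resourceData"]["members@delta"]:
        # Same check the Flow condition does: is "@removed" present?
        (removed if "@removed" in member else added).append(member["id"])
```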

For the loop, here is what I did:


That “Condition” in the middle is looking at whether “@removed” flag is present in the array element:

contains(items('Apply_to_each_4'), '@removed')

If it’s there, the user has been removed. If it’s not there, the user has been added.

How do we match AD user to the CDS user, though?

This is what “List records” action will be doing:


Keeping in mind the JSON payload sample above, it just needs to use the following expression for the “id” parameters:


That’s it for the Flow – once you’ve created it, you will have a URL you can then add to the Azure Function so it knows how to call the Flow.
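As for the AD-user-to-CDS-user matching above, here is a sketch of the idea – this assumes the CDS user record carries the AAD object id in the azureactivedirectoryobjectid field (an assumption on my part; adjust to your own setup):

```python
def user_lookup_filter(aad_object_id: str) -> str:
    # OData-style filter for a "List records" call against system users -
    # the id from the notification payload goes straight in.
    return f"azureactivedirectoryobjectid eq '{aad_object_id}'"

flt = user_lookup_filter("11111111-2222-3333-4444-555555555555")
```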

The function will just read request body (which is the original notification payload), and it will forward it to the Flow:


Now, once the function has been updated with the correct Flow URL, you can deploy it in Azure.

And what’s left is to actually set up the Graph notification subscription.

There is a Graph Explorer tool you can use to work with the Graph API:

In order to register the subscription, you’ll need to send this kind of request:


There are a few things to keep in mind there.

For the resource URL, you’ll need to use the Active Directory ID of the group you want to start getting notifications for.

For the notification URL, you will need to use the URL of your Azure Function.

The expiration date and time can’t be set too far in the future (I think it can only be as far as 48 hours ahead). One piece of the puzzle which I am not describing in this post is how to renew those subscriptions automatically. That’s another request to the Graph API, and it can be done with Flows, too, but you’ll need to set up an application in Azure, grant permissions, then use an HTTP request with OAuth, etc.
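For reference, the subscription request body generally looks like the sketch below (a POST to the Graph subscriptions endpoint – the group id and notification URL are placeholders you’d substitute with your own values):

```python
import json
from datetime import datetime, timedelta, timezone

# Keep the expiration within the allowed window (~48 hours, as above).
expires = datetime.now(timezone.utc) + timedelta(hours=47)

subscription = {
    "changeType": "updated",
    "notificationUrl": "https://<your function app>.azurewebsites.net/api/<function>",
    "resource": "groups/<AD group id>",
    "expirationDateTime": expires.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "clientState": "optional secret echoed back in every notification",
}
print(json.dumps(subscription, indent=2))
```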

Anyway, once the subscription is in place, you might try adding/removing users to/from the Azure AD group, and, as a result, you should see your Flow responding to the notifications. In the example below, I’ve removed one user from the group:


And I’ve also added another user to the same group. Both changes were packaged into a single notification, so there was one Flow call with two array elements in the JSON payload; here is the second of those two:


Hope this helps!

Can we call a PowerAutomate Flow synchronously on create/update of a record?

It seems like a no-brainer question with a simple “no” answer; however, something came up on LinkedIn while discussing my other post about using CDS pre/post images with a Flow:

What if, instead of using the service bus, we tried using “web hook” option in the Plugin Registration Tool? It does support synchronous execution, after all:


The url below will provide all the details on how to use that option, so I won’t be repeating the steps here:

Instead, I’ll jump right to the part where it did not work out the way I hoped it would with the Flows.

I did not have any problems registering a synchronous step in the pre/post operation:


And the Flow itself is relatively straightforward:


The Flow will get the request, find the record guid in the incoming JSON payload, and use that guid to update the account phone #.

However, as it turned out, even though the Flow will be called synchronously, the CDS “execution pipeline” won’t wait for the Flow to complete. So, even though there were a few occasions when I could see the updated phone # right away while testing, most of the time I had to use “refresh” to see the change:


This makes sense, after all, since it may take a while for the Flow to finish (not even talking about possible “delay” actions in the Flow), so, it seems, it’s only the initial trigger that is called synchronously. The rest of the Flow will still run asynchronously from the execution pipeline standpoint.

Long story short… this is definitely yet another option for passing pre/post images to a Flow, but it does not seem to help with synchronous execution, and it’s probably not as scalable as the service bus. On the other hand, unlike the service bus approach, there are fewer steps involved here, so this may get the job done a little faster.


Update (as of June 30): this post attracted the likes of George Doubinski (see comments below), so I stand corrected:) See this:

Or, well, I kind of stand corrected. By the way, also wanted to give credit to Shidin Harridas who was likely the first to suggest using the response action.

It seems the question posted in the subject can have different answers depending on what we need that synchronous flow to do.

  • We can call the Flow synchronously
  • If there is no response action, the web hook won’t wait for the Flow completion, so your CDS operation may complete before the Flow has completed
  • If there is a response action, and if your Flow tries to update the same record BEFORE getting to the response action, then the whole process will fail

The last scenario represents about 80% of my use cases for the synchronous plugins/real-time workflows, so the answer to the question might still be “no” with caveats.

So… why does the Flow fail in this scenario? It seems reasonable to assume there is some kind of deadlock, but it can’t be a database transaction deadlock, since the same Flow also fails when the webhook registration is updated to run in “Pre Validation”, which is supposed to run before the transaction starts (unless that’s not the case for web hooks?).

Canvas apps vs Model-Driven apps (a follow up)

Here is a really quick question before you continue reading. When looking at the image below, can you see what’s wrong there?


Of course the answer is – it’s a Canvas App trying to pretend it’s a model-driven app:


That’s a little experiment I wanted to try, and, quite honestly, as entertaining as it was, it ended up being quite educational, too.

I mostly work with model-driven apps, so I know that Canvas Apps can give us better control of the UI, and that they can also connect to different data sources. Although, the latter (usually) does not matter too much in my world since, again, I mostly work with model-driven apps, and that assumes CDS. However, just recently I wrote a post where I sort of entertained the idea that Canvas Apps might be taking over in the long term.

Of course that brought up the concept of app types convergence, which also makes sense. There are different ways this can play out eventually, but all of that led me to try the experiment above, which, in turn, led me to a couple of realizations.

First of all, I realized that I used to underestimate the fact that CDS is “just another data source” for Canvas Apps. I used to think Canvas Apps give much better control over the UI, and that’s correct. However, when writing the app above, I might as well have been using a SharePoint connector to store data in SharePoint while still presenting it in a replica of the UCI interface. In other words, Canvas Apps don’t care what the data source is.

I know it’s obvious. Where it matters, though, is when somebody starts comparing those two app types. Essentially, when Canvas and Model-Driven are compared exclusively in the CDS world, model-driven will likely win. But, as soon as another data source shows up on the horizon, there is nothing model-driven apps can really offer.

I have also realized that I used to overestimate Canvas Apps’ UI capabilities. Right now, at least, there are limitations all around (no code reuse, no ability to use nested components, no ability to have really granular control of the HTML events/styles, etc.). This is understandable, since implementing those features in a low-code framework might be challenging, and not only for technical reasons, but also because this might ruin the whole concept of low-code. I knew those limitations were there, but, I guess, it’s hard to realize how strong the wall is till you hit it.

And it’s probably the case of the devil being in the details – for example, I can have a button, and I can have an icon, and I can put that icon on top of the button. However, that suddenly stops the button from receiving “mouse over” event when the cursor is moving over the icon. And that messes up the colors (HoverFill vs Fill).

Still, does this change anything for the original post? Not really. It just proves that, at the moment, there is no winner yet, so we’ll just have to wait and see where this goes.

Free non-production regex connector for PowerAutomate Flows

In a few of my previous posts, I used a custom connector which had a “regex” action implemented in it. In case that’s something you might want to try in your flows as well, a non-production version (no up-time guarantee – no guarantee at all, actually; feel free to use it, but you can’t hold me responsible) is now available as a CDS solution.

You just need to download the solution file from github:

Once you’ve installed the connector, you should be able to use ItAintBoring Regex action in your flows:


So, when given the following phone # pattern:


And the following value:


It will produce this result:




The source code is available here:

Please note that this connector is using a PHP version of the API, and, basically, it’s utilizing the preg_match function:

It will either return null if there is no match, or it will return the value that corresponds to the first captured parenthesized subpattern (so, basically, whatever matches the part of the expression starting with the leftmost parenthesis).
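In other words, the action behaves roughly like this Python equivalent (preg_match itself is PHP, of course – this is just a sketch of the return behaviour):

```python
import re

def regex_action(pattern: str, value: str):
    """Mimic the connector: None when there is no match, otherwise
    the contents of the first captured (parenthesized) subpattern."""
    match = re.search(pattern, value)
    return match.group(1) if match else None

# Pull the local part out of a phone-number-like value.
local = regex_action(r"\d{3}-(\d{4})", "call 555-1234 today")  # "1234"
miss = regex_action(r"\d{3}-(\d{4})", "no phone number here")  # None
```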

CDS Post Images and CDS Pre Images in PowerAutomate Flows

Full disclaimer… this post was triggered by what a fellow MVP, Olena Grischenko, wrote earlier today: 

I am all for plugins – I actually write them every day:) But, then, shouldn’t there be some way to work with pre/post images in PowerAutomate? Without code?

And there is one, although “no code” solutions are not always the easiest:


Basically, we can set up CDS to send pre/post/target data to a queue in Azure Service Bus. It’s not a complicated process, you just need to follow the steps described here:

Just make sure to use json message format when setting up the integration:


Once that is done, you can start adding SDK steps. And, for those SDK steps, you can start adding pre/post images:


Let’s say that’s been done.

The next step would be creating a flow. PowerAutomate has a connector for Azure Service Bus:

So, set up the trigger:


The remaining part is all about parsing that JSON from the message queue. Unfortunately, there is no “native” regex capability in Flows, though there are a few third-party connectors out there. I did not try those – pretty sure they would work – but this is where “no code” is not the same as “no fees”:)

Instead, I just opted to tweak the regex custom connector which I blogged about some time ago:

Here is the code I used for the regex action this time:


Now, once there is a regex connector, I just need to add a couple of actions to extract the “name” field from the pre-image:


And, also, to extract it from the post image:


This is where you may have to look at the JSON and figure out how to write that regex expression (you might need to try it in a regex tester first).
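If you are wondering what that might look like, here is a rough Python illustration – the JSON below is a made-up, simplified image, and the real service bus message will be shaped differently, which is exactly why the regex has to be adjusted to your own payload:

```python
import re

# A simplified stand-in for the pre-image JSON from the queue message.
pre_image = '{"Attributes":[{"key":"name","value":"Old Account Name"}]}'

# Capture whatever follows the "name" key - crude, but it does the job.
match = re.search(r'"key":"name","value":"([^"]*)"', pre_image)
name = match.group(1) if match else None
```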

Once that is done, I can access both pre-image and post-image values of the “name” attribute. In my case, I simply sent an email to myself:


And, once the email came in, both values are there:


What are the caveats?

  • We need to create an Azure Service Bus queue
  • We can’t rely on the capabilities of the CDS connector – although the Plugin Registration Tool is a powerful tool, the real problem comes at the next step, since we do need to figure out how to parse that JSON ourselves

Either way, except for the regex connector, this is a no-code solution to the problem. It might actually be possible to use a bunch of “substring” and “indexof” function calls to do the same, but it’s probably easier to just set up that custom regex connector (or to subscribe to one of those already available in Power Automate).

In the battle of Canvas vs Model-Driven, is there a winner?

To start with, why do I even call it a battle? Isn’t it supposed to be a peaceful coexistence where each type of power apps can contribute where the other type is lacking in functionality?

In theory, yes. In practice, it seems, things may spin out of control in extreme cases. Imagine there is a relatively complex data model which would normally justify the model-driven approach, and, yet, there is a really strong push towards a custom UI, which would, in turn, normally justify utilizing a Canvas App. Would you choose a model-driven app, or would you choose a Canvas App?


I guess Canvas Apps are on the left side here…

Or, possibly, on the right…



Actually, I’m not going to even try to answer that question, since there can be no perfect answer, it seems. However, while thinking about it, I got a strange idea.

Would it be so impossible if the ultimate goal of the product team working on the Canvas App technology were to eventually replace model-driven apps with Canvas Apps?

Nonsense, you’d say? Maybe, but… Consider:

  • Canvas Apps have come a long way since they were first introduced a few years ago, whereas model-driven applications, at least the way I see it (forgive me, Microsoft), have had a lot of face-lifting effort put into them, but, other than that, they are still the same old Dynamics CRM
  • Component Framework is now available for both types of apps
  • Most of what is doable with web resources in model-driven apps can also be done on the canvas apps side
  • Canvas Apps support various data sources, whereas model-driven can only work with CDS
  • Canvas Apps support user interface customizations which are beyond the reach of model-driven apps
  • Canvas Apps, in the form of embedded apps, are already making their way into the model-driven world, but I am not sure the opposite is true


Is there anything a Canvas App can’t do (with the proper application of development effort) that a model-driven application can? Interestingly, I would not even have thought to ask that question a few years ago (“whaaat? Are those, really, apps?”)

Of course, and that seems to be the main factor that’s still keeping model-driven apps alive, it would take a lot of time to create a Canvas App from scratch that would provide the same level of functionality that a model-driven app can deliver in just a few minutes.

So, what’s going to happen if and when the Canvas Apps product team closes that gap and adds the still-missing components to Canvas Apps? What if there were a command bar control? A left-hand navigation control? A quick find control? They might not show up tomorrow, and they might not show up a few months from now… but, looking at where Canvas Apps are now, after just a few years of being around, would it really be that surprising if this happened in a few years?



TDS endpoint for custom development and data integrations?

TDS endpoint availability is, probably, one of the coolest technical capabilities introduced by Microsoft in the last few months. However, even though it’s great to be able to run SQL queries in SQL Server Management Studio as described in Microsoft docs:

There are other usage scenarios which seem to be very enticing.

Using it for data reporting would be one option:

There are at least a couple of others, though.

1. We can use TDS endpoint when developing custom applications that need access to CDS

For example, I have updated my Angular portal template to utilize the TDS endpoint when reading cases/notes from CDS. How difficult was it? Not difficult at all:


Sure, this change meant a bit of code refactoring; however, converting from the entity-based syntax to the SQL data reader syntax is very straightforward.

What are the benefits?

  • Well, SQL has always been easier for me to understand than FetchXML or QueryExpressions
  • In terms of specifying query conditions, SQL is definitely more advanced
  • Finally, and I’m not sure it will stay this way (but, on the other hand, I am not sure it’s technically possible to change this), I have a feeling the TDS connection will bypass API limits

Of course, the last item on the list above is only applicable to “read” requests, since the TDS endpoint only provides read access at this time. But, come to think of it, “read” operations are what the portal will be doing most of the time. The remaining ones (“Create”/“Update”/“Delete”) will still be done through the SDK.

If you are wondering about the connection string for the SqlConnection, here is how it looks:

Server=<orgname>,5558;Authentication=Active Directory Password;Database=<orgname>.com;User Id=<user name(email address)>;Password=<password>;
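Since every part of that string is predictable, it’s easy to put together programmatically – a small sketch that just mirrors the format above (keeping the same placeholders; 5558 being the TDS endpoint port):

```python
def tds_connection_string(server: str, database: str,
                          user: str, password: str) -> str:
    # Mirrors the connection string format shown above.
    return (f"Server={server},5558;"
            "Authentication=Active Directory Password;"
            f"Database={database};"
            f"User Id={user};Password={password};")

cs = tds_connection_string("<orgname>", "<orgname>.com",
                           "user@example.com", "<password>")
```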

2. Data migrations with SSIS

Do you want to use a tool, such as SSIS, to extract data from CDS? Of course, the KingswaySoft connector has become a de-facto standard for CDS integrations through SSIS, but TDS endpoint availability opens up some nice opportunities.


You just need to create an ADO connection:


And use an ADO.NET data source after that:

Just keep in mind that the TDS endpoint seems to have trouble providing the metadata that way, so I had to specify the database name manually and, also, had to use the SQL command option, since none of the table names were visible. Maybe I just missed a setting somewhere, but, even so, it’s already working, as you can see in the “preview” window on the screenshot above.


As far as metadata goes, and that’s just a note so we don’t get too carried away, there are definitely some limitations. For example, my attempt to generate a “create” script for the account table failed miserably:


Not that I need it that much – I just wanted to quickly create a table in my local database for the SSIS script above:)

Adding basic self-service to the custom portal

In the first version of the Angular-based portal template, there was no ability to create support tickets through the portal – something the template would have to have in order to be a little more useful right off the bat and to cover at least a basic self-service scenario.

So, there is some of that in the updated version now:

Self-Service Tickets

Basically, this version offers the ability to submit a ticket and to add ticket notes through the portal, while it also offers the ability to see ticket details and to provide a response (through the notes, too) right in the model-driven app.

Again, you will find the source code in this repo:

By the way, there is, also, a solution file this time (both managed and unmanaged).

You can also try it “live” here:

A few things might be worth mentioning.

1. This template is using a custom case entity. Why?

First of all, PowerApps licensing is different from what it used to be for Dynamics. In the past, we were not supposed to re-create case management functionality without obtaining a license that would cover the “case” entity, but, in the PowerPlatform world, this limitation is sort of gone. If we wanted to create a custom case management solution, we could. From the licensing perspective, even a user assigned a “per app” license would be able to work with such a custom application. So, then, why not do it that way in the template, especially since making it work with the out-of-the-box entity would just be a matter of replacing a few lines in the API code.

2. .NET Core vs .NET Framework

We can use Web API or we can use SDK assemblies to write code that connects to CDS. I would almost always use the latter, but there are, actually, two versions of the SDK right now.

There is a .NET Framework version:

And there is a .NET Core version:

The .NET Core version is relatively new, and it’s still in the “Alpha” stage, which means it is good for experimenting with and for getting ready for what’s coming, but it might not quite be suitable for real work yet.

This is why the template is using .NET Framework, which also means the API layer could not have been written with .NET Core Web API. Instead, I used the MVC Web API project template.

3. Security

There is some security there, but it’s definitely far less advanced than what you’ll find in PowerApps Portals. The template is using MSAL to implement client-side authentication, and the Web API layer is set up to require user authentication to access certain APIs (such as case lists, etc.); however, anything more advanced (such as role validation when accessing some parts of the portal) needs to be coded in the client app. There is always a way to do it, and, possibly, the template will get there eventually; however, this is where it’s worth keeping in mind why this template exists:

It’s good to have something simple and completely customizable through code, so you don’t need to set up a full-blown portal solution; but, at some point, the benefits you get that way might be outweighed by the complexities of custom development, and that’s when setting up a PowerApps Portal might end up being a better option.

Still, it’s not over for the portal template yet, but that’s all for today.