Monthly Archives: May 2019

Setting up nested security groups for CDS/Dynamics instances

This post is a proof of concept for the following scenario:


Wouldn’t it be nice if we could use more than one security group so that, for example, we would have admins in one group and all the other users in another? This way, we might add that admin group to SharePoint so admin users would become site owners/site collection admins there.

Unfortunately, nested security groups are not supported for CDS/Dynamics:

Do we have any options? Well, just continuing on the PowerShell exploration path that I started earlier this month, we could use this kind of script to emulate nested groups:

Install-Module AzureAD
Import-Module AzureAD

# Connect to Azure AD (will prompt for credentials)
Connect-AzureAD

$GroupStartWith = 'Root_CRM'

$ADGroups = Get-AzureADGroup -Filter "startswith(DisplayName, '$GroupStartWith')"
foreach ($ADGroup in $ADGroups) {
    $Members = Get-AzureADGroupMember -ObjectId $ADGroup.ObjectId
    # Delete all "user" members
    foreach ($Member in $Members) {
        if ($Member.ObjectType -eq "User") {
            Remove-AzureADGroupMember -ObjectId $ADGroup.ObjectId -MemberId $Member.ObjectId
        }
    }
    # Re-add all nested "user" members
    foreach ($Member in $Members) {
        if ($Member.ObjectType -eq "Group") {
            $NestedMembers = Get-AzureADGroupMember -ObjectId $Member.ObjectId
            foreach ($NestedMember in $NestedMembers) {
                if ($NestedMember.ObjectType -eq "User") {
                    Add-AzureADGroupMember -ObjectId $ADGroup.ObjectId -RefObjectId $NestedMember.ObjectId
                }
            }
        }
    }
}

What the script above would do is:

– It would connect to Azure AD

– It would find all groups named “Root_CRM*” – that’s just a quick naming convention I came up with for this post

– The script will then loop over all of those groups and do a couple of things for each:

  • It will remove all current members
  • It will look at the nested groups and add nested group members to the main group

As a result, I can have security groups configured exactly the way it’s shown on the diagram above since I’m not limited to having just one group anymore.

Of course it’s PowerShell, so the script has to be started somehow. It might be doable with Azure Runbooks – for example, this script could be scheduled to run a few times per day to do the automated sync. Although, it would have to be updated so it does not ask for credentials, which should be doable with Get-AutomationConnection.

Also, the script can likely be improved so it does not remove users that will be re-added anyway. That would avoid possible glitches where a user momentarily loses access to CDS/Dynamics while being re-added.

TypeScript from the c# developer standpoint


A lot of TypeScript tutorials are available online, so I was looking for one recently, and this one seems really good IF you are familiar with some other language already:

So, while working through it, I noticed a few things which might strike you as somewhat unusual if you are more used to languages like C# (or even to JavaScript itself). I figured I’d put together a short list to use as a refresher for myself later; although, if you are new to TypeScript, you might find some of the items useful, too.

1. To declare a variable, we need to use <var_name>:<type_name> syntax

var x:number = 1

2. There is no need to use semicolons as long as we have one instruction per line

var x:number = 1
x = 2

3. There are no separate numeric types

There is no int or double – all numbers are represented by the “number” type.
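Coming from C#, this takes a moment to get used to – here is a quick sketch (my own, not from the tutorial):

```typescript
// One "number" type covers what C# splits into int, double, decimal, etc.
function half(n: number): number {
    return n / 2;  // no integer division: 5 / 2 is 2.5, not 2
}

var wholeValue: number = 5;     // would be an int in C#
var fractional: number = 2.5;   // would be a double in C#

console.log(half(wholeValue));  // 2.5
console.log(half(fractional)); // 1.25
```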

4. There are arrays and there are tuples

For an array, here is an example:

TS: var names:string[] = ["Mary", "Tom", "Jane", "Andy"]
JS: var names = ["Mary", "Tom", "Jane", "Andy"]

And here is an example for a tuple:

TS: var arr = ["Mary", "Tom", 1, 2]
JS: var arr = ["Mary", "Tom", 1, 2]

The difference between those two on the TS side is that arrays are strongly typed, so there will be type validation at compile time. Tuples can hold values of different types (strictly speaking, a TypeScript tuple still types each position – it is the element types that can differ, not the type checking that goes away). Once compiled into JS, though, both arrays and tuples look the same.
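As a side note (my own sketch, not from the tutorial): TypeScript also lets you give a tuple an explicit per-position type, which brings the compile-time validation back:

```typescript
// Array: every element must be a string.
var names: string[] = ["Mary", "Tom", "Jane", "Andy"];

// Explicitly typed tuple: position 0 must be a string, position 1 a number.
var pair: [string, number] = ["Mary", 1];

// pair = [1, "Mary"];  // this would be a compile-time error

console.log(pair[0]); // Mary
console.log(pair[1]); // 1
```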

5. Unions

This is a somewhat crazy concept which basically says that a variable can store values of more than one type, as long as those types are listed in the declaration:

var val:string|number
val = 12
val = "This is a string"

Of course this means very little for the compiled javascript code, but it allows TypeScript to do compile-time type validations.

What’s unusual about this (besides the fact that I have not seen this concept in the .NET world so far) is that “union” has a somewhat different meaning in other areas. For example, unions allow you to combine results from different queries in SQL, but those queries have to match each other on the “metadata” level (same columns). With TypeScript, it’s the “metadata” itself that becomes different through the use of unions.
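To see what those compile-time validations buy you, here is a small sketch (mine): with a union-typed parameter, the compiler narrows the type inside a typeof check:

```typescript
// The parameter accepts either a string or a number - nothing else.
function describe(val: string | number): string {
    if (typeof val === "string") {
        // Inside this branch the compiler knows val is a string,
        // so string members like .length are available.
        return "string of length " + val.length;
    }
    // Here val can only be a number.
    return "number " + val;
}

console.log(describe("abc")); // string of length 3
console.log(describe(5));     // number 5
```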

6. Interfaces

In TypeScript, an interface is more than a contract to be implemented by a class later. An interface defines a type that you can use in variable declarations:

interface IPerson {
    firstName: string,
    lastName: string
}

var customer:IPerson = {
    firstName: "Tom",
    lastName: "Hanks"
}

In C#, you would first have to implement the interface in a class. In TS, you can declare a variable of the interface type and implement that interface right there.

What I mean is that, in the example above, you can’t just omit one of the properties:

var customer:IPerson = {
    firstName: "Tom"
}

Since you’ll get an error message from the compiler:

main.ts(6,5): error TS2322: Type '{ firstName: string; }' is not assignable to type 'IPerson'.
Property 'lastName' is missing in type '{ firstName: string; }'.

Also, there is a special construct for defining array interfaces – it seems I don’t fully understand the “index” part at the moment, so I will just leave it at this:

interface namelist {
    [index:number]:string
}
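As far as I can tell, that “index” part is an index signature: it says that indexing the object with a number yields a string. A sketch of how it might be used (my reading, so take it with a grain of salt):

```typescript
// An index signature: any numeric index on this type returns a string.
interface NameList {
    [index: number]: string;
}

// A plain string array satisfies the interface.
var list: NameList = ["John", "Bran"];

console.log(list[0]); // John
console.log(list[1]); // Bran
```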

7. Somewhat special syntax when defining classes

  • When defining class methods, we don’t use the “function” keyword in TS
  • When defining class properties, we don’t need to use the “var” keyword
  • To define a constructor, we need to use the “constructor” keyword
  • To reference a property in a method, we need to access that property through the “this” object


class Car {
    engine:string;

    constructor(engine:string) {
        this.engine = engine
    }

    disp():void {
        console.log("Engine is  :   "+this.engine)
    }
}

8. Type templates

There is a notion of type templates, which basically means that, since every object in TS has to have a type, that type will be inferred from the variable declaration if you initialize it this way:

var person = {
    firstname: "Tom",
    lastname: "Hanks"
}

With this, you can’t now dynamically add another property to person:

person.age = 1

Since you’ll get an error:
main.ts(7,8): error TS2339: Property 'age' does not exist on type '{ firstname: string; lastname: string; }'

Instead, you have to add that age property to the original declaration somehow:

var person = {
    firstname: "Tom",
    lastname: "Hanks",
    age: NaN
}

person.age = 1
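As an alternative to the NaN placeholder, an optional property might be cleaner – a sketch (my own, not from the tutorial):

```typescript
// "age?" marks the property as optional, so it can be omitted initially...
var person: { firstname: string; lastname: string; age?: number } = {
    firstname: "Tom",
    lastname: "Hanks"
};

// ...and still assigned later without a compile-time error.
person.age = 1;

console.log(person.age); // 1
```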

9. Prototypes vs objects

When you have a class, all class methods will be defined in the prototype:

TS:

class Car {
    disp():void {
        console.log("Generic car")
    }
}

Compiled JS:

var Car = /** @class */ (function () {
    function Car() {
    }
    Car.prototype.disp = function () {
        console.log("Generic car");
    };
    return Car;
}());

And, then, you can actually change what “disp” means for a specific object of the class:

var a:Car = new Car()
a.disp = () => console.log("Nissan")

Which compiles into the following JS:

var a = new Car();
a.disp = function () { return console.log("Nissan"); };

Well, that’s some strange stuff…
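What makes it slightly less strange is that the override only affects that one object – other instances of the class still pick up disp from the prototype. A quick sketch (a simplified Car class, returning a string so the difference is easy to see):

```typescript
class Car {
    disp(): string {
        return "Generic car";
    }
}

var a: Car = new Car();
var b: Car = new Car();

// Override disp on "a" only - this shadows the prototype method.
a.disp = () => "Nissan";

console.log(a.disp()); // Nissan
console.log(b.disp()); // Generic car - still the prototype version
```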

10. Namespaces

If you want to use a namespace from a different ts file, you have to reference the file:

/// <reference path = "SomeFileName.ts" />

There is a special “export” keyword which must be added to anything you want to be visible outside of the namespace.

Interestingly, you can actually define the same namespace more than once in the same file:

namespace Test{
    export interface Displayable {
        disp():void
    }
}

namespace Test{
    export class Car implements Displayable {
        disp():void {
            console.log("Car")
        }
    }
}

var a:Test.Car = new Test.Car()
a.disp()

But, unless you’ve added those “export” keywords to the interface and the class above, it’s not going to compile.

11. Modules

There is no equivalent concept in C#, it seems, so, for now, I’d rather just put a link to the page which seems to explain the concept… I’m still working through it.
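From what I understand so far, a module is simply a file with top-level export or import statements, and consumers use import rather than the reference directive. A minimal sketch (file names are mine), reusing the Calc example from the ambients section:

```typescript
// Calc.ts - having a top-level "export" makes this file a module.
export class Calc {
    doSum(limit: number): number {
        var sum = 0;
        for (var i = 1; i <= limit; i++) {
            sum += i;
        }
        return sum;
    }
}

// In another file you would write:
//   import { Calc } from "./Calc";
var calc = new Calc();
console.log(calc.doSum(3)); // 6
```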

12. Ambients

Essentially, it’s similar to adding a dll reference to your .NET project so that

  • Typescript compiler could do compile-time validations
  • You would not have to rewire existing javascript code in TS

Ambient declarations are done through “d.ts” files; those files are referenced from the TS files using the “reference” directive:

/// <reference path = "Calc.d.ts" />

However, d.ts files themselves do not contain any actual implementations – there are only type definitions there:

declare module TutorialPoint {
    export class Calc {
        doSum(limit:number) : number;
    }
}

When it comes to adding the actual implementation of the doSum function to your final HTML, you have to do it using the script tag:

<script src = "doSumSource.js"></script>


Beyond the PowerApps Component Framework

The more I’m looking into the PowerApps Component Framework, the more obvious it becomes that the era of “do-it-all”-ers in Dynamics/PowerApps might be coming to an end.

Although, it’s not just the PCF that’s making me say this.

I turned to Dynamics some time ago since I did not want to stick to pure development – I liked messing with the code every now and then, but I could hardly imagine myself spending whole days coding. That was back in 2011, when this kind of skillset worked great for a Dynamics Consultant, since whatever was missing out of the box I could always extend with JavaScript/HTML/plugins. And I did not have to be a full-stack developer familiar with the latest frameworks, since, really, all I needed were basic JavaScript skills, some C# development skills, probably SQL… but nothing too advanced.

We can still do it today in the model-driven PowerApps, but, it seems, the world is changing.

It’s been a while since the term Citizen Developer was introduced, and this is really what we call a developer who is, well, not a developer in the regular sense, but who is still doing “development” using higher-level tools:

For example, there can be Flow developers, and there can be Canvas Apps developers. Interestingly, those tools are not that straightforward, so somebody just starting to work with them may need to do quite a bit of learning.

On the other hand, PowerApps Component Framework hardly belongs to the realm of the Citizen Developer – instead, it’s meant to be utilized by the professional developers:


And it’s not just wording(although, of course, one might argue that plugins were always meant for the professional developers as well). If you look at the sample code of PCF components, you’ll see something like this:


So, unlike with a web resource, there are a few more things to consider:

  • Typescript
  • The framework itself
  • The fact that you need to create HTML elements from code


Now, to use TypeScript, unless you are already familiar with it, you’ll probably need to catch up first; you’ll also need to learn what NodeJs is, what NPM is, how to use them, etc.

Compare that to classic web resource development, where all you need to know is HTML, CSS, and JavaScript. Even though I think web resources are not going anywhere yet, since things like JavaScript-based validation/visibility rules do not belong to the PCF, the difference between PCF and web resources is that somebody working with the PCF is supposed to be a “professional developer” rather than just an opportunistic consultant utilizing web resources when and if it’s appropriate.

To start with, you may need to have all those tools configured on your laptop to use them (whereas, with the web resources, we could just paste javascript to the editor in Dynamics if we wanted to).

But that’s just one example of where the line between Citizen Developers and Professional Developers is becoming more clear. There are other examples, too:

  • Do you know what CI/CD is and how to set it up?
  • Are you familiar with Azure DevOps?
  • Do you know how to use Git?
  • Can you use solution packager to parse the solution so you could put solution components in the source control?
  • Are you familiar with PowerShell scripting?
  • Do you know how to write plugins?
  • Are you familiar with TypeScript? How about Angular? NodeJs?
  • Can you explain what solution layering means and how it works?


On any relatively complex implementation, those are the skills you may need to have as a model-driven PowerApps developer.

Although, as a Citizen Developer, you might not need to even bother to learn those things.

And, as a Functional Consultant, you might need to be aware of what can be done through either sort of development, but you have your own toys to play with – think of all the configuration options, security model, licensing, different first-party applications.

A few years ago Microsoft began to introduce “Code-Less” features such as Business Rules, Microsoft Flow, Canvas Apps… and it almost started to look like the days of real developers in Dynamics were numbered.

Then there was OData, WebAPI, Liquid for Dynamics Portals… Now there is PCF which is targeting professional developers from the start. Add plugins to this mix, and, suddenly, the outlook for developers starts looking much better.

However, what it all means is that every “group” is getting their own tools, and, so, they have to spend time learning those tools and frameworks. As that is happening, former Dynamics consultants “with developer skills” may have to finally choose sides and start specializing, since there is only so much one can learn to use efficiently in the everyday work. Personally, I’ll probably try to stick to the “do it all” approach for a little longer, but I’m curious how it’s all going to play out. We’ll see…


Error monitoring for the solution-aware Flows


Some flows are solution-aware, and some flows are not:

It turned out the difference between those two is not just that solution-aware flows are portable – somehow, it goes deeper.

Just a few days ago I wrote a post where I was trying to summarize error monitoring options for Flows:

It was not working out quite well, since the main problem was that, for the Flows I was looking at, I could not get the history of Flow runs unless the Flows were shared with me.

However, that’s only a problem for the non-solution aware Flows.

If a Flow is created in the solution, it’s all getting much better.

Through the solution, I can open Flow Analytics for any Flow that’s been added to the solution:


Magically, PowerShell starts showing Flow Runs history for those flows, too:


Of course one possible problem here is that we can’t add a non-solution-aware flow to a solution, but that’s probably going to be resolved one way or another at some point. Right now, though, if you are looking into Flow portability and/or some kind of error monitoring approach, don’t make the mistake I did – make sure you are working with solution-aware flows.

PS. In order for the analytics/PowerShell to work, we do need to have System Admin or System Customizer permissions in the CDS environment.

Microsoft Flow Monitoring


I often read/hear that Microsoft Flow is not suitable for the advanced integration scenarios where Logic Apps should be used instead. That statement is probably coming from the comparison below:


This is all great; however, unlike the Logic Apps, which live their own life somewhere in Azure, Microsoft Flow is a first-class citizen in the Power Platform world. So, even if Logic Apps might be more suitable for advanced integration scenarios, Flow might still be preferred in a number of situations.

There are at least a few reasons for that:

Flow is integrated with Power Apps – every user having appropriate permissions will be able to create and/or run Flows:


Unlike the Logic Apps, Flows are solution-aware and can be deployed through the PowerApps solutions. Potentially, that makes them very useful for the in-house solution creators and/or external ISV-s. This is similar to how we’ve always been using classic workflows in the solutions (and not the SSIS, for example, no matter how useful SSIS can be in the other scenarios):


Besides, every fully-licensed Dynamics user brings an extra 15K flow runs allowance per month to the tenant, which is not the case with the Logic Apps.

As such, and since Flows are generally viewed as a replacement for the classic Dynamics workflows (of course once they have reached parity), I think it’s only fair to assume that Flows will actually be utilized quite extensively even in the more advanced scenarios.

That brings me to the question I was asking myself the other day – what monitoring options do we have when it comes to Microsoft Flow? With the workflows, we used to have System Jobs, so a Dynamics administrator could go to the System Jobs view and review the errors.

Although, to be fair, I’ve probably never seen automated monitoring implemented for those jobs.

Still, now that we have Flows, how do we go about error monitoring?

Surprisingly, there are a few options, but none of them is as simple as the good old system jobs view.

Flow management connector

This is where Flows can manage flows:

Actually, I am mentioning it here only because this one was a bit confusing/misleading to me. This connector offers a lot of actions to manage flows, but it offers no trigger, and it does not seem to support querying flow errors:


In other words, from the monitoring perspective it does not seem to be helping.

Flow Admin Center

We can go to the flow admin center and have a look at all the flows in the environment, but that does not help with the error monitoring, it seems:


Error-handling steps

As explained in the post below, we can add error-handling steps to our flows. Of course we have to remember to add those steps. But, also, these kinds of notifications may have to be more intelligent since, if we ever end up distributing those flows as part of our solutions, we might have to somehow adjust recipient emails depending on the environment. It may still be doable, but it does not seem particularly convenient:

Also, there are some limitations there. We can’t configure “run after” for the actions immediately following the trigger (whether it’s a single action or a few parallel actions):


And, also, sometimes we can set up the trigger so that it “fails”, in which case there would be no Flow run recorded in the history. One example would be an “Http Request Received” trigger when json schema validation is enabled:


Whenever schema validation fails, an error won’t be reported for the Flow. That means these kinds of integration errors would have to be tracked on the other side of the communication channel, and that might not even be possible.

Out of the box error notifications

These could be useful. However, since they are not sent on each and every failure (and, realistically, they should not be sent on each and every failure), they are only so useful.

Per-flow analytics

There is some good per-flow analytics available:


This might be handy, but these analytics are per-flow. And they are only available for the flows owned by and/or shared with the current user.

Of course we can also go to the list of flows, but these kinds of charts are not available there.

Admin analytics

Admin analytics comes close:

But there is no detailed information about the errors:


Well, at least we have errors from different flows in one place… but we can’t see them right away – the data is cached (the same way it’s cached for any other Power BI-based report).


Get-FlowRun cmdlet from the “PowerShell Cmdlets for PowerApps and Flow creators and administrators” gives us almost what we need:


So, if we import required modules:

Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Force

Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber -Force

And utilize Get-FlowRun cmdlet:


We’ll get Flow runs for a specific flow or for all the flows… except that, just like with everything above, a user running this cmdlet won’t be able to get flow runs for any flows which the user does not own and which are not shared with that user.

After looking at all those options, there has to be some conclusion, and I’m thinking it’s like this:

From the centralized error monitoring standpoint for Flows, there seems to be no ideal option. One way to make it easier might be to make sure that all “system” Flows are shared with a dedicated support group:

This way, at least, any member of that group would be able to use PowerShell and/or Per-Flow/Admin analytics to see how those “System” flows have been doing. There will still be no alerts and notifications, but that’s neither better nor worse when compared to the classic Dynamics workflows – that’s pretty much the same.

From the automation perspective, PowerShell looks most promising; just make sure to use the Add-PowerAppsAccount cmdlet, or the script will be asking you for the user/password (which is not going to work for automation):

$pass = ConvertTo-SecureString "password" -AsPlainText -Force
Add-PowerAppsAccount -Username user@example.com -Password $pass  # user@example.com is a placeholder

Update (May 13): it turned out error monitoring works much better for the solution-aware flows. There is no need to share the flows there; one just needs to have appropriate CDS permissions:

UpdateRequest vs SetStateDynamicEntity


I had a nice validation plugin that used to work perfectly whenever a record was updated manually. When deactivating a record with a certain status reason, a user would see an error message if some of the conditions were not met.

For example, a transaction can’t be completed till the amount field has been filled in correctly.

Then I created a workflow which would apply the same status change automatically. The idea was that my plugin would still kick in, so all the validations would work. Surprisingly, that’s not exactly what happened.

Whenever I tried using a workflow to change the record status, my plugin just would not fire, and the status would change.

Here is a version of the validation plugin that’s been stripped of any validation functionality – it’s just expected to throw an error every time:


The plugin has been registered on the Update of the new_test entity:


When trying to deactivate a record manually, I’m getting the error as expected:


However, when using a workflow:


Which, by the way, is set up like this:


The plugin does not kick in and the record gets deactivated:


Registering an additional step for the same plugin on the SetStateDynamicEntity message does help, though:


I am now getting correct “validation” error when trying to run my workflow as well.

So, it seems, the SetStateDynamicEntity request (and, possibly, SetState) is still being used internally, even though I used to think it had been deprecated for a while now:

By the way… while trying to come up with this simplified repro, I noticed that this may have something to do with the stage in which the plugin is registered. Everything seems to be working correctly in PostOperation, but not in PreValidation. And there are some quirks there, too. However, if you are trying to test your validations and you are observing this kind of behavior, it might help to simply move your plugin from PreValidation to PostOperation.

Of course the problem is that PreValidation happens outside of the database transaction, so I can write some validation results back to Dynamics while in that stage; that’s not possible in Pre/PostOperation, since all such changes will be lost once an error is raised. So, eventually, SetStateDynamicEntity might still be a better option.

Things are certainly different when working in the cloud


Ever since I have started working on the current project, there probably was not a day when I would not discover something new (of course what’s new for me is not necessarily new for somebody who’s been working in the online environment for a while).

When working on-premise, you get to know what to expect, what’s doable, what’s going to cause problems.. and, at some point, you just settle into a certain rhythm, you learn to avoid some solutions in favor of those which are more likely to succeed in that particular on-premise environment, and that’s how you deliver the project.

Compared to that, working in the cloud environment sometimes feels like visiting some kind of a wonderland. There are wonders everywhere, they are good and bad, they never cease to amuse you, and you can’t help but keep wondering for a couple of reasons:

  • It’s literally impossible to know everything about everything since Microsoft cloud ecosystem is huge
  • Even what you knew yesterday might be absolutely irrelevant today since new and updated features get released all the time


Even just for Dynamics, there were 6 (six!) updates in April. However small they were, that still means something was fixed, possibly some changes were introduced, etc:


Is it good or bad?  Or, at least, is it better or worse than working on-premise? Hard to say – for all I know, it’s very different.

I am happy to see the latest and greatest features at my disposal. Even though this certainly comes with a greater probability of seeing some sneaky new bugs.

Even if some features are not that new, it’s great to try what the community has been talking about for a while (just to name a few: Canvas Apps, Flows, Forms, etc). Although, it does not take long to realize that there are limitations.

When it comes to the limitations, that’s probably the most challenging factor for me personally, since it’s difficult to figure them out until you’ve tried, and, back to what I wrote above, you can’t possibly know everything about everything. So there are likely more features that I have not tried than ones that I have. For the ones I have not tried, I may know the idea and what they are meant for, but how do they really perform when tested against specific requirements? A lot of what I’ve been doing lately can really be summarized as “research and development”, which was never really the case while working on-premise.

And, of course, there is so little control we have over the environment/API limits/logging/etc.. Plus there are licensing considerations almost everywhere (can we use Power Apps? Can we use Flows? Can we use this or that? What is currently covered by our licenses and what will have to be added? If we need to add more licenses, how do we justify this decision and how do we get it through the procurement?)

Still, there is something I heard earlier today that makes up for at least some of those hassles. It’s when a developer said “This is a great idea, and minimal effort”. You know what he said this about? Using Microsoft Flow with an http request trigger to accept json data from a web form and to send it to Dynamics.  It literally takes half an hour to prototype such a flow (and maybe another hour to adjust the json). Which is much less than a few hours/days he would have to spend otherwise figuring out the details of Azure App registration, OAuth, etc.

So, yes, it’s a wonderland. Of course you never know what kind of surprise is awaiting you, but that just makes it more interesting.