Monthly Archives: May 2019

Microsoft Flow Monitoring


I often read/hear that Microsoft Flow is not suitable for advanced integration scenarios, and that Logic Apps should be used instead. That statement probably comes from the comparison below:


This is all great; however, unlike Logic Apps, which live their own life somewhere in Azure, Microsoft Flow is a first-class citizen in the Power Platform world. So, even if Logic Apps might be more suitable for advanced integration scenarios, Flow might still be preferred in a number of situations.

There are at least a few reasons for that:

Flow is integrated with Power Apps – every user with appropriate permissions can create and/or run Flows:


Unlike Logic Apps, Flows are solution-aware and can be deployed through PowerApps solutions. Potentially, that makes them very useful for in-house solution creators and/or external ISVs. This is similar to how we’ve always been using classic workflows in solutions (and not SSIS, for example, no matter how useful SSIS can be in other scenarios):


Besides, every fully-licensed Dynamics user brings an extra 15K flow runs per month to the tenant’s allowance, which is not the case with Logic Apps.

As such, and since Flows are generally viewed as a replacement for the classic Dynamics workflows (once they have reached parity, of course), I think it’s only fair to assume that Flows will actually be utilized quite extensively even in the more advanced scenarios.

That brings me to the question I was asking myself the other day – what monitoring options do we have when it comes to Microsoft Flow? With the workflows, we used to have System Jobs, so a Dynamics administrator could go to the System Jobs view and review the errors.

Although, to be fair, I’ve probably never seen automated monitoring implemented for those jobs.

Still, now that we have Flows, how do we go about error monitoring?

Surprisingly, there are a few options, but none of them is as simple as the good old System Jobs view.

Flow management connector

This is where Flows can manage flows:

Actually, I am mentioning it here only because it was a bit confusing/misleading to me. This connector offers a lot of actions to manage flows, but it offers no triggers, and it does not seem to support querying flow errors:


In other words, from the monitoring perspective it does not seem to be helping.

Flow Admin Center

We can go to the Flow admin center and have a look at all the flows in the environment, but that does not seem to help with error monitoring:


Error-handling steps

As explained in the post below, we can add error-handling steps to our flows. Of course, we have to remember to add those steps. But, also, this kind of notification may have to be more intelligent since, if we ever end up distributing those flows as part of our solutions, we might have to somehow adjust the recipient emails depending on the environment. It may still be doable, but it does not seem particularly convenient:

Also, there are some limitations. We can’t configure “run after” for the actions immediately following the trigger (whether it’s a single action or a few parallel actions).


And, also, sometimes we can set up the trigger so that it “fails”, in which case there would be no Flow run recorded in the history. One example would be an “Http Request Received” trigger with JSON schema validation enabled:


Whenever schema validation fails, no error is reported for the Flow. That means this kind of integration error would have to be tracked on the other side of the communication channel, and that might not even be possible.
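For illustration, a minimal request schema on such a trigger might look something like the fragment below (the property names here are hypothetical, just to match the transaction example from later in this post). Any incoming request whose body does not match the schema would be rejected before a run ever shows up in the history:

```json
{
  "type": "object",
  "properties": {
    "transactionId": { "type": "string" },
    "amount": { "type": "number" }
  },
  "required": [ "transactionId", "amount" ]
}
```

So a POST with, say, a string in the amount field would fail validation on the caller’s side, with nothing recorded on the Flow side.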

Out of the box error notifications

These could be useful. However, since they are not sent on each and every failure (and, realistically, they should not be sent on each and every failure), they are only of limited use.

Per-flow analytics

There are some good per-flow analytics available:


This might be handy, but these analytics are per-flow. And, also, they are only available for the flows owned by and/or shared with the current user.

Of course we can also go to the list of flows, but this kind of chart is not available there.

Admin analytics

Admin analytics comes close:

But there is no detailed information about the errors:


Well, at least we have errors from different flows in one place. But we can’t see them right away – the data is cached (the same way it’s cached for any other Power BI-based report)


The Get-FlowRun cmdlet from the “PowerShell Cmdlets for PowerApps and Flow creators and administrators” gives us almost what we need:


So, if we import the required modules:

Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Force

Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber -Force

And utilize the Get-FlowRun cmdlet:


We’ll get Flow runs for a specific flow or for all the flows. Except that, just like with everything above, a user running this cmdlet won’t be able to get flow runs for any flows which the user does not own and which are not shared with that user.
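As a rough sketch, assuming the two modules above are installed and a session has been opened with Add-PowerAppsAccount, a scan of all visible flows for failed runs might look like this (the exact property names on the run objects may differ slightly between module versions, so treat this as a starting point rather than a finished script):

```powershell
# Sketch only: enumerate the flows visible to the signed-in user
# and report any runs that ended in a "Failed" status.
foreach ($flow in Get-Flow) {
    $failedRuns = Get-FlowRun -FlowName $flow.FlowName |
        Where-Object { $_.Status -eq "Failed" }

    foreach ($run in $failedRuns) {
        Write-Output "$($flow.DisplayName): failed run at $($run.StartTime)"
    }
}
```

A script along these lines could be scheduled (in Azure Automation, for example) and extended to send an email or post to Teams whenever it finds new failures.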

After looking at all those options, there has to be some conclusion, and I’m thinking it’s like this:

From the centralized error monitoring standpoint, there seems to be no ideal option for Flows. One way to make it easier might be to make sure that all “system” Flows are shared with a dedicated support group:

This way, at least, any member of that group would be able to use PowerShell and/or per-flow/admin analytics to see how those “system” flows have been doing. There will still be no alerts and notifications, but that’s neither better nor worse than the classic Dynamics workflows – it’s pretty much the same.

From the automation perspective, PowerShell looks the most promising; just make sure to use the Add-PowerAppsAccount cmdlet, or the script will ask you for the user/password (which is not going to work for automation):

$pass = ConvertTo-SecureString "password" -AsPlainText -Force
# the credentials below are placeholders – use your own account
Add-PowerAppsAccount -Username "admin@contoso.onmicrosoft.com" -Password $pass

Update (May 13): it turned out error monitoring works much better for the solution-aware flows. There is no need to share the flows there; one just needs to have appropriate CDS permissions:

UpdateRequest vs SetStateDynamicEntity


I had a nice validation plugin that used to work perfectly whenever a record was updated manually. When deactivating a record with a certain status reason, a user would see an error message if some of the conditions were not met.

For example, a transaction can’t be completed till the amount field has been filled in correctly.

Then I created a workflow which would apply the same status change automatically. The idea was that my plugin would still kick in, so all the validations would work. Surprisingly, that’s not exactly what happened.

Whenever I tried using a workflow to change the record status, my plugin just would not fire, and the status would change.

Here is a version of the validation plugin that’s been stripped of any validation functionality – it’s just expected to throw an error every time:
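In case the screenshot is hard to read, such a stripped-down plugin boils down to just a few lines (this is a sketch; the class name is mine, and the unconditional throw matches the description above):

```csharp
// Sketch of the stripped-down "validation" plugin: it throws unconditionally,
// so any pipeline step it is registered on should surface this error.
using System;
using Microsoft.Xrm.Sdk;

public class TestValidationPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // If the step fires at all, the operation must fail with this message
        throw new InvalidPluginExecutionException("Validation error (test)");
    }
}
```

With a plugin like this, the question of whether a given message/stage fires becomes trivially observable: either the operation fails with this message, or the step never ran.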


The plugin has been registered on the Update of the new_test entity:


When trying to deactivate a record manually, I’m getting the error as expected:


However, when using a workflow:


Which, by the way, is set up like this:


The plugin does not kick in and the record gets deactivated:


Registering an additional step for the same plugin on the SetStateDynamicEntity message does help, though:


I am now getting the correct “validation” error when trying to run my workflow as well.

So, it seems, the SetStateDynamicEntity request (and, possibly, SetState) is still being used internally, even though I used to think it had been deprecated for a while now:

By the way, while trying to come up with this simplified repro, I noticed that this may have something to do with the stage in which the plugin is registered. Everything seems to be working correctly in PostOperation, but not in PreValidation. And there are some quirks there, too. However, if you are trying to test your validations and are observing this kind of behavior, it might help to simply move your plugin from PreValidation to PostOperation.

Of course, the problem with that is that PreValidation happens outside of the database transaction, so I can write some validation results back to Dynamics while in that stage – which is not possible in Pre/Post Operation, since all such changes will be lost once an error is raised. So, eventually, SetStateDynamicEntity might still be a better option.

Things are certainly different when working in the cloud


Ever since I started working on the current project, there probably has not been a day when I would not discover something new (of course, what’s new for me is not necessarily new for somebody who’s been working in an online environment for a while).

When working on-premise, you get to know what to expect, what’s doable, what’s going to cause problems. At some point, you just settle into a certain rhythm, you learn to avoid some solutions in favor of those which are more likely to succeed in that particular on-premise environment, and that’s how you deliver the project.

Compared to that, working in a cloud environment sometimes feels like visiting some kind of wonderland. There are wonders everywhere, good and bad; they never cease to amuse you, and you can’t help but keep wondering, for a couple of reasons:

  • It’s literally impossible to know everything about everything, since the Microsoft cloud ecosystem is huge
  • Even what you knew yesterday might be absolutely irrelevant today since new and updated features get released all the time


Even just for Dynamics, there were 6 (six!) updates in April. However small they were, that still means something was fixed, some changes may have been introduced, etc.:


Is it good or bad?  Or, at least, is it better or worse than working on-premise? Hard to say – for all I know, it’s very different.

I am happy to see the latest and greatest features at my disposal. Even though this certainly comes with a greater probability of seeing some sneaky new bugs.

Even if some features are not that new, it’s great to try what the community has been talking about for a while (to name a few: Canvas Apps, Flows, Forms, etc.). Although, it does not take long to realize that there are limitations.

When it comes to the limitations, they are probably the most challenging factor for me personally, since it’s difficult to figure them out until you’ve tried, and, back to what I wrote above, you can’t possibly know everything about everything. So there are likely more features that I have not tried than ones that I have. For the ones I have not tried, I may know what the idea is and what they are meant for, but how do they really perform when tested against specific requirements? A lot of what I’ve been doing lately can really be summarized as “research and development”, which has never been as much the case while working on-premise.

And, of course, there is very little control we have over the environment/API limits/logging/etc. Plus, there are licensing considerations almost everywhere (can we use Power Apps? Can we use Flows? Can we use this or that? What is currently covered by our licenses and what will have to be added? If we need to add more licenses, how do we justify this decision and how do we get it through procurement?)

Still, there is something I heard earlier today that makes up for at least some of those hassles. It’s when a developer said, “This is a great idea, and minimal effort”. You know what he said this about? Using Microsoft Flow with an HTTP request trigger to accept JSON data from a web form and send it to Dynamics. It literally takes half an hour to prototype such a flow (and maybe another hour to adjust the JSON), which is much less than the few hours/days he would otherwise have to spend figuring out the details of Azure app registration, OAuth, etc.

So, yes, it’s a wonderland. Of course you never know what kind of surprise is awaiting you, but that just makes it more interesting.