
Data Loss Prevention policies in Power Platform – a quick recap

What are Data Loss Prevention policies in Power Platform and why would you use them?

That’s the question I had to answer recently in a relatively concise manner, and, perhaps, what came out could be useful for others, too.

Essentially, the purpose of DLP in Power Platform is to help prevent users from unintentionally exposing organizational data.

DLP policies take care of one common scenario where such exposure may happen. Power Platform in general, and Power Automate / Canvas Apps in particular, rely heavily on the concept of data connectors, and those connectors are meant to be combined in the same Power Automate flows / canvas applications.

When two or more connectors are combined that way, sensitive data available through one connector might be unintentionally exposed through the other connector.

As an extreme example, imagine Social Insurance Numbers (SINs) stored in the Dataverse database, and imagine a Power Automate flow utilizing both the Dataverse and Twitter connectors, where some information would be pushed to Twitter in response to certain events in Dataverse.

If, for whatever reason, sensitive data (a SIN in this example) were exposed on Twitter, there might be implications, and the easiest way to prevent this from happening would be to ensure that only specific connectors are allowed to be used together in the same flows and/or apps.

Implementing that kind of protection is exactly what Data Loss Prevention policies in Power Platform are meant for.

Any DLP policy in a Power Platform environment splits connectors into three categories:

– Business

– Non-business

– Blocked

[Screenshot: Business / Non-business / Blocked connector groups in a DLP policy]

As soon as a DLP policy has been applied to a Power Platform environment, connectors from the Business and Non-business categories can’t be mixed in the same flows/apps. And, of course, blocked connectors will be blocked.

DLP policies can be combined, and, when that happens, the most restrictive ones will apply to any combination of data connectors.

For example, if there are 9 policies which have all identified SharePoint and Twitter as non-business connectors, Power Automate flows using both connectors will still be running fine.

However, if a new policy were added which would turn SharePoint into a “business” connector, all those existing flows would stop working.
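Just to illustrate that evaluation logic, here is a minimal conceptual sketch. This is not how the platform actually implements DLP, and all of the class and method names below are made up for the example; the assumption that unclassified connectors fall into the Non-business group is also just an assumption for this sketch. The idea is simply that a flow/app passes a single policy when none of its connectors are blocked and its connectors don’t span both groups, and, with multiple policies, it has to pass every one of them:

using System.Collections.Generic;
using System.Linq;

// Conceptual model only - this is not how the platform implements DLP internally
public enum ConnectorGroup { Business, NonBusiness, Blocked }

public class DlpPolicy
{
   // Connector name -> the group it has been assigned to in this policy
   public Dictionary<string, ConnectorGroup> Classification { get; set; }
      = new Dictionary<string, ConnectorGroup>();

   // A flow/app passes a single policy when none of its connectors are blocked
   // and its connectors do not span both the Business and Non-business groups
   public bool Allows(IEnumerable<string> connectors)
   {
      var groups = connectors
         // Assumption for this sketch: unclassified connectors count as Non-business
         .Select(c => Classification.TryGetValue(c, out var g) ? g : ConnectorGroup.NonBusiness)
         .ToList();

      if (groups.Contains(ConnectorGroup.Blocked)) return false;

      return !(groups.Contains(ConnectorGroup.Business)
            && groups.Contains(ConnectorGroup.NonBusiness));
   }
}

public static class DlpEvaluator
{
   // With multiple policies applied to an environment, the most restrictive
   // outcome wins: the connector combination must be allowed by every policy
   public static bool IsAllowed(IEnumerable<DlpPolicy> policies, IEnumerable<string> connectors)
      => policies.All(p => p.Allows(connectors));
}

In the SharePoint / Twitter example above, nine policies classifying both connectors as non-business would allow the flow, while a tenth policy moving SharePoint into the Business group would make IsAllowed return false.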

There are different ways to apply DLP policies.

They can be applied to all environments, they can be applied to specific environments only, or they can be applied to all environments with the exception of a few:

[Screenshot: DLP policy environment scope options]

On top of that, there are a few additional considerations.

DLP policies can treat new pre-built connectors differently by default:

[Screenshot: default group assignment for new connectors]

Rules for adding custom connectors can be configured separately for each DLP policy:

[Screenshot: custom connector rules in a DLP policy]

Certain connectors can be configured in a more granular manner to allow/disallow specific connector actions:

[Screenshot: granular, action-level connector configuration]

And the rest would be all about setting it up. Want to know more? Have a look at the docs: https://docs.microsoft.com/en-us/power-platform/admin/wp-data-loss-prevention

How to: debug plugins in shared environments

When debugging a plugin, I often throw InvalidPluginExecutionException to display debugging messages. Of course, that works great when my development environment (or the plugin) is relatively isolated, and it all falls apart when there are other people working in the same environment and they suddenly start seeing those error messages.

[Screenshot: the plugin error dialog other users would suddenly see]

There is a simple workaround, though. Basically, I want to see those error messages, but I don’t want anyone else to be affected by them, so I just need to make sure they are only displayed for my user id, which is easy to do with the help of the function below:

// Throws the exception (and, therefore, shows the message) only when the plugin
// was initiated by the specified user, so nobody else is affected
public static void ThrowDebuggingError(IPluginExecutionContext context, string msg, string userId)
{
   if (context.InitiatingUserId == new Guid(userId))
   {
      throw new InvalidPluginExecutionException(msg);
   }
}

With this in place, I can now call ThrowDebuggingError, pass the plugin context, a message, and my userId, and that error will only be displayed when the plugin has been initiated by me.

It’s worth noting that “context.InitiatingUserId” might work better than “context.UserId” in the code above, since InitiatingUserId would correspond to my user account id even if the plugin is configured to run in the context of the “System” (or another) user account.
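For example, the call could look like the sketch below from the plugin’s Execute method. This assumes ThrowDebuggingError is defined in the same plugin class (as above), and the GUID is just a placeholder for your own user id:

public void Execute(IServiceProvider serviceProvider)
{
   var context = (IPluginExecutionContext)serviceProvider
      .GetService(typeof(IPluginExecutionContext));

   // ... the actual plugin logic goes here ...

   // Only the user with this id will ever see the message below
   ThrowDebuggingError(context,
      $"Message: {context.MessageName}, Depth: {context.Depth}",
      "00000000-0000-0000-0000-000000000000");
}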

Power Platform / Dataverse development vs Classic Development – what’s the problem?

The question of setting up a branching strategy for Dataverse development comes up almost inevitably. It’s a discussion that happens at some point on pretty much every project, and it can happen more than once on the same project as new people join.

It would probably make sense to explain why we are having this issue (and, as to “whether we have a good solution”, I think it depends on the project requirements).

The first diagram below shows your classic project, where every developer has their own workstation to do development, with a local execution environment on that workstation.

Once the changes are ready, they would be pushed into git, and, from there, they would go to the QA/UAT/Prod environments.

This also allows for branching, since every developer can work on their own branch in git – after all, there is a separate execution environment per developer in this scenario, and each developer can maintain their own.

[Diagram: classic development – a workstation and a local execution environment per developer, with changes pushed to git and deployed to QA/UAT/Prod]

However, once we have switched to Dataverse / Power Platform, it all becomes different. Every developer would still have a workstation, but they might now be sharing the execution environment:

[Diagram: Power Platform development – individual workstations sharing one execution environment]

That makes branching in git nearly impossible. So, of course, you might try changing the setup a little more: why not have a Power Platform / Dataverse environment per developer?

[Diagram: one Power Platform / Dataverse environment per developer]

This may work better, but you’ll have to figure out source control, since you won’t be making “code” changes in those individual development environments – you’ll be making configuration and customization changes, and those are all supposed to be transported through Dataverse solutions.

Solutions, however, are still far from perfect when it comes to branching and merging support in git, and, more often than not, that last mile between local development environments and git will have to be covered by manually re-implementing changes in some kind of “integration” environment:

[Diagram: per-developer environments, with changes manually consolidated in an integration environment before reaching git]

That’s when you finally have a setup somewhat similar to classic development, but it involves a few more Power Platform environments now, and it also involves that manual/semi-manual step of pushing ready-to-deploy changes to the “integration” environment.
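As a side note, at least the “get the changes out of a development environment” part can be automated, since solutions can be exported programmatically. Here is a minimal sketch using the Dataverse SDK – the connection string and solution name are placeholders, and, in practice, the Power Platform CLI (pac solution export / pac solution unpack) covers the same ground:

using System.IO;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.PowerPlatform.Dataverse.Client;

class SolutionExporter
{
   static void Main()
   {
      // Placeholder connection string - point it at your development environment
      var service = new ServiceClient(
         "AuthType=OAuth;Url=https://yourorg.crm.dynamics.com;LoginPrompt=Auto");

      // Export the unmanaged solution containing the configuration/customization changes
      var request = new ExportSolutionRequest
      {
         SolutionName = "MySolution", // placeholder
         Managed = false
      };

      var response = (ExportSolutionResponse)service.Execute(request);

      // Save the zip; from here it can be unpacked and committed to git
      File.WriteAllBytes("MySolution.zip", response.ExportSolutionFile);
   }
}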

Are your flows “within context”?

Yesterday, I posted a question asking what it means for a flow to be “within context” of the licensed application. Based on what I was able to figure out eventually, the technical answer might be somewhat different from what we could be assuming.

Note: you may want to confirm licensing with Microsoft whenever you have a question. What I am posting in my blog on this topic is just my opinion influenced by the reading / discussions I might have had. Some of those may have been with the product team, but, just to clarify, it’s the information in the licensing guides / docs we can safely reference when it comes to the licensing. Everything else falls into the “opinion” category.

In the FAQs, you’ll see this note (as of Feb 2):

However, flows will need to run within the context of the Power Apps application, which refers to using the same data sources for triggers or actions as the Power Apps application.

https://docs.microsoft.com/en-us/power-platform/admin/powerapps-flow-licensing-faq

So, for a flow to be “within context”, it has to use the same data sources for triggers or actions, and, it seems, here is how we should read the note above in the case of model-driven applications, for example:

  • When looking at the application, see which tables it’s using in Microsoft Dataverse
  • For a flow to be within context of that app, the flow has to have triggers and/or actions which are using the same tables
  • Such a flow, then, can still use non-Dataverse connectors if it needs to connect to other data sources, too

Where does it matter?

If you went with the “spirit” of being “within context”, you might think that a flow picking Dataverse data from the Service Bus and sending it somewhere else might be “within context”. But, since such a flow would not be using the Dataverse connector (and, hence, would not be using any of the application tables directly), it actually would not be “within context”, at least not technically.

Another example: maybe there is a manually triggered flow that’s using a Power BI connector to generate a paginated report and to send it by email. That report would be connecting to Dataverse and would be using application tables, but the flow itself would not. So, technically, such a flow would not be within context.

Or here is another example: if you had a scheduled flow that was querying application data from Microsoft Dataverse, processing that data, and sending an email, it would be fine. But, if you moved that processing into an Azure Function, and, instead of calling Dataverse from your flow directly, you started calling that Azure Function, the flow would not be “within app context” anymore. You may have optimized things, but the flow would need to be licensed differently now.

Well, hope this adds some clarity (even if it complicates licensing a little more).

Flow execution “within context” of the app

If you look at the Power Apps licensing guide, you will see that flow execution is, usually, permitted within app context:

[Screenshot: licensing guide wording on flow execution within app context]

That same wording applies to pretty much all licence types, including Dynamics 365. And there is a corresponding note in the Power Apps Licensing FAQs:

[Screenshot: the corresponding note in the Power Apps licensing FAQ]

https://docs.microsoft.com/en-us/power-platform/admin/powerapps-flow-licensing-faq

Question, though. How do you read this?

For example, if you have a model-driven application, are you allowed to use a flow to read data from, for instance, Azure SQL and write it to Microsoft Dataverse?

It’s a different data source, after all, and we are supposed to use the same data sources for triggers or actions as the Power Apps application – in the case of a model-driven application, the only data source it would actually be “using” is Microsoft Dataverse, and only specific tables for that matter.

Should SharePoint / Outlook / Word be included? Technically, a model-driven app is not using those in the “Power Automate data source” sense.

Or, thinking of canvas apps, can users who were given a per-app plan for a specific app use that app’s data source in combination with the Power BI connector?

(Updated on Feb 2) It seems I got the answer here: https://www.itaintboring.com/power-platform/are-your-flows-within-context

How to: make a form-scoped business rule application-specific

In Microsoft Dataverse, business rules are not application-specific. They can be scoped to a certain main form, they can be scoped to a table, or they can be scoped to all forms in that table (in the latter case, this would mean the rules would also apply to all quick create forms).

Model-Driven applications will mix and match table components, including form components, as they need. However, business rules are not considered to be application components, so they will kick in for the forms / tables depending on the business rule’s scope and irrespective of the model-driven application.

Is there a way to make business rules application-specific? Of course it’s possible to scope a business rule to a specific main form only, but what if we wanted it to work on all forms utilized in the application (including quick create forms), and, yet, not to work on any other forms?

It’s easy to do.

Since a business rule is not going to kick in on a form if a column referenced in that rule has not been added to the form, we can just create a column in the table, add it to the application-specific form, and reference it in the rule. For example, it could be a text column, and we could use it in the business rule condition like this:

[Screenshot: business rule condition referencing the extra column]

Then, if that column were on the form, the rule would kick in. Otherwise, it would not. So, all that’s left to do is add that column to the application-specific forms.

Model-Driven Applications for Application Users

Recently, I have been working on a training for application users which might be helpful to both users and new model-driven application makers:

https://www.itaintboring.com/downloads/training/ModelDrivenApplicationsforApplicationUsers.pdf

It is meant to cover model-driven application functionality from the user perspective. To be fair, there is a lot to cover, so this is not going to be absolutely exhaustive, but it might be something to start with. There are still a few missing pieces there which I am working on, and I’m thinking of adding a few practical exercises in the next few days (and a Quiz, of course), so there will be an update soon.

In fact, it is going to be part of the Power Platform training, but it may be helpful on its own.

Model-Driven applications and shared data

For some reason, this issue comes up every now and then. It may come up when talking to new application users, to new developers, sometimes to business analysts… and, usually, it needs to be addressed quickly.

Basically, it might not be obvious that different applications in the same environment might still be accessing the same data.

Here is what that means:

  • There could be multiple environments in the same tenant
  • Each environment can have multiple applications
  • Each of those applications in a specific environment may expose pieces of the same data, and, when that data is updated in one application, those changes will surface in the others
  • Model-driven applications cannot cross the boundaries of their respective environments when accessing data (at least not unless developers put significant effort into making it work between environments)

So, then, what if we need to isolate data between different applications? That’s doable, of course, but it’s a question of setting up a proper security model, or, possibly, creating custom tables so that each application exposes its own table instead of using the same shared ones. That’s something application makers / developers would have to take care of when creating the application.

Power Platform Licensing (Condensed Version)

I was working on a condensed version of the licensing guide which could be presented without really going into the details, and here is what I ended up with so far (this is ignoring Virtual Agents, Power BI, and Portals).

A user can get licensed for Power Platform and/or Power Automate through the following means:

  • Microsoft 365 licensing
  • Power Apps plans
  • Power Automate plans
  • Pay-as-you-go
  • Dynamics 365 plans

With any of those, the users are going to be licensed within one tenant, and, for at least a couple of plans, it will be within a specific environment. There is no licence which can cover multiple tenants, but some licences can cover multiple environments.

At a high level, Microsoft 365 licences will offer access to Power Apps and Power Automate “within the scope of Microsoft 365”. In general, there would be no access to Dataverse or Dynamics 365, and there would be no premium connectors.

PowerApps plans come in a few different flavours. There is a per user plan which allows access to any number of Power Platform applications within the tenant (canvas, model-driven, portals). There is a per app pass which gives access to one app within a specific environment. And there is a pay-as-you-go subscription which, once activated “on demand”, gives access to a specific app in a specific environment for one month.

All of the PowerApps plans include some Power Automate allowance.

However, there are also dedicated Power Automate plans, meant to support Power Automate usage outside of (or without) the Power Apps scenarios. There is a per user plan, there is a per user plan with unattended RPA, and there is a per flow plan.

The latter is relatively expensive, but, in larger organisations, it may definitely make sense when there are lots of users who can benefit from a limited number of flows without ever having to create their own flows, for instance.

Also, all those plans can be mixed together. So, for example, a user having access to Power Automate through Microsoft 365 licensing might not be able to utilize premium connectors in a specific flow, in which case they might still be able to use such a flow if it were assigned to a “per flow” plan.

Pay-as-you-go is in preview as of this writing. It does offer a lot of flexibility and can simplify licence management, but, since it’s, essentially, per-app licensing, it may end up being quite expensive compared to the other plans if your users regularly require access to multiple apps at the same time. Unlike the “per app pass”, pay-as-you-go kicks in automatically, so you will be charged as soon as the usage occurs.

None of the plans discussed so far cover Dynamics 365 – to give access to the first-party apps, you will need to get Dynamics 365 licences for your users. In general, those licences will also cover Power Apps and Power Automate to some extent; however, that allowance is meant to be used within the same environments as the licensed Dynamics 365 applications, and within the context of those applications. Not all of that might be technically enforced by Microsoft, but, strictly speaking, if you create a completely custom application in a Dynamics 365 environment, Dynamics 365-licensed users might not be permitted to use such an app, even if, technically, they might be able to. I guess that’s a bit of a grey area for now.

Here are some other considerations to keep in mind:

  • Dataverse storage capacity in the tenant depends on the licences you have there. Most licences will bring in additional capacity which will be accumulated at the tenant level (with the exception of pay-as-you-go). File storage capacity is provided in a similar way. Dataverse log storage capacity does not depend on the number of licences though, and, once it’s exceeded, you may need to purchase add-on log storage 
  • Aside from the app usage allowance, every licence comes with some other limitations: there are request limits, flow usage limits, etc. As I have already mentioned above, some licences may permit premium connectors usage, and others might only permit standard connectors
  • Non-licensed users (Application, administrative, system) will get a number of requests allocated to them in the tenant, but they will all be sharing that pool of requests
  • In the “Power Apps Admin Portal”, there are Capacity reports. From the licensing perspective, this is what you can use to track and analyze database, file, and log storage utilization (and, also, to manage various add-ons)
  • You can have guest users from other tenants accessing applications in your tenant. Those users have to be licensed, too, either through their own licenses, or through the licenses you will provide

For more details on the requests allocation, have a look at the page below:

https://docs.microsoft.com/en-us/power-platform/admin/api-request-limits-allocations

And, of course, there is a lot more to talk about, but this should give you an idea of how licensing works. When and if you need specific details, make sure to download the latest licensing guides and have a closer look at them.

How I spent 7 hours fixing my hacked WordPress site

Usually, I don’t need to deal with hacked WordPress sites. However, I’m not using a hosted WordPress blog – instead, my blog is running on a hosting account where I have installed WordPress… to some extent, that gives me quite a bit of flexibility. But, on the other hand, when things go wrong, it can easily turn into a disaster, and that’s exactly what happened a few days ago.

My blog was hacked.

I have no idea how that happened; I just know that I had to spend 7 hours to fix it. Although, I guess if I had known enough about WordPress “internals” when I started looking into it, it would not have taken that long.

The good thing is that everything seems to be fine now (unless I missed some of the corrupted files, which is quite possible). The bad thing is that, well, it may well happen again. And the moral of the story is that, even though “cloud offerings” might be somewhat more expensive… it’s almost like paying for insurance. You might be thinking it’s not worth it until one day the problem hits, and, unlike in the old days, you don’t have to tackle it personally – it’s the cloud service provider who will be dealing with it.

So, yep, this latest experience with self-hosted WordPress reminds me of how the world of on-premise Dynamics 365 is different from the Power Platform world. But, maybe, that’s a topic for another post. Or, more likely, just a good analogy to mention when someone brings up the topic of relatively expensive licensing in the cloud?

Right here and right now, I just wanted to summarize what the “hackers” did.

Basically, it was extremely simple.

There are .htaccess and index.php files in the WordPress root directory.

I have no idea how the attackers managed to update “index.php” – that’s a really good question, but that seems to be how they eventually got control of the site.

Normally, index.php is less than 1 KB in size, and it just tells WordPress to load a theme.

In my case, it ended up being over 25 KB, and there was a long base64-encoded string in it which would be passed to the “eval” function.

Then there is .htaccess which, among other things, is meant to define rewrite rules. It’s, actually, used by WordPress to support permalinks, for example.

Anyway, here is what was happening:

  • On each page load, index.php would replace .htaccess and set its permissions to 444 so that WordPress itself would not change it. From what I understand, WordPress internally checks whether the “write” operation is allowed and, if not, does not try to replace .htaccess
  • The modified .htaccess would redirect all requests to where they are not supposed to go at all, but it would still allow direct access to a few other files in the root folder which had been added by that malicious person
  • One of those files was probably used as a backdoor to secure “future access”, since it seemed to allow file uploads. And you would probably have to look twice to notice that its name does not make sense – it looks very similar to a normal file name, though:

[Screenshot: the suspicious file name among the regular WordPress files]

There were some other unexpected files, and all of them were given direct access through the modified .htaccess.

And the funniest thing is that, most of the time, my blog was still working. Except that there was a bug in that .htaccess which locked me out of the WordPress area, and that’s how it got my attention.

But, of course, other problems had already been brewing there – now that the issue is gone, Google Analytics is complaining that it can’t find some links:

[Screenshot: Google Analytics reporting links it can’t find]

That’s the result of having that hacked .htaccess / index.php – I wonder how many other non-existent links have been added to my blog this way, and how many of the real ones got lost (temporarily, I guess).

So how did it get fixed?

  • What you need to do is replace “index.php” with the original index.php from the same version of WordPress. In my case, I took a copy of that file from another WordPress site
  • You may also want to replace .htaccess with the default WordPress .htaccess. After that, make sure to re-apply the permalink structure from the admin panel
  • With that done, just get rid of the remaining malicious files – not sure I can name them all, but, if you save a copy of the hacked .htaccess, you will actually see a regex there from which you can reconstruct the file names:
  [Screenshot: the regex in the hacked .htaccess]

Anyway, the problem seems to be gone now, and, hopefully, it’s not coming back any time soon. And I’m thinking of the advantages of “cloud offerings”… so, going back to Power Platform 🙂