Monthly Archives: October 2021

Here is an Azure Function that’s using the Power BI REST API (although it’s still no faster than the Power BI connector)

Earlier this month, I blogged about how the “Export to File for Paginated Reports” action seems to be relatively slow (even when compared to doing the export manually from the Power BI Service):

So I thought maybe an alternative could be to write custom code that uses the Power BI REST API to do the same – after all, it’s probably exactly the same API that’s utilized by the Power BI connector.

There are two API calls I would need to use: one to start the export, and one to check the export status.

The second call, once the file is ready, would give me the URL I could use to finally download the file.
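To sketch the idea (this is an illustration rather than the exact code from the repo – it assumes an access token has already been acquired, and a fetch-compatible function is passed in so the logic can be exercised without a live service):

```javascript
// Sketch of the Power BI REST API export flow: start the export,
// poll the status, then download the file from the returned URL.
// Endpoint paths follow the documented "Export To File In Group" API;
// token acquisition and error handling are omitted for brevity.
const API_ROOT = "https://api.powerbi.com/v1.0/myorg";

async function exportReportToFile(fetchFn, token, groupId, reportId, format, pollDelayMs = 1000) {
  const headers = {
    "Authorization": "Bearer " + token,
    "Content-Type": "application/json"
  };
  const base = `${API_ROOT}/groups/${groupId}/reports/${reportId}`;

  // First call: start the export job
  const start = await fetchFn(`${base}/ExportTo`, {
    method: "POST",
    headers,
    body: JSON.stringify({ format })
  });
  const { id: exportId } = await start.json();

  // Second call: poll until the export has succeeded
  let status;
  do {
    await new Promise(resolve => setTimeout(resolve, pollDelayMs));
    const poll = await fetchFn(`${base}/exports/${exportId}`, { headers });
    status = await poll.json();
  } while (status.status !== "Succeeded");

  // Finally, download the file from the URL returned by the status call
  const file = await fetchFn(status.resourceLocation, { headers });
  return file.arrayBuffer();
}
```

In the Azure Function, the returned bytes would then be written to the HTTP response as the docx/pdf content.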

Fast forward, and, despite all those efforts, it’s still taking 25 seconds to get the file exported programmatically – here is a screenshot from the Azure portal for the corresponding Azure Function:


Given that I could probably do it faster by opening Power BI Service in the browser, running the report, and using “Export To” there… that’s a little disappointing, of course, since it basically means there is some kind of issue with the API which prevents it from exporting reports quickly. I guess that’s not a problem for long-running reports, but it is for the “print forms”.

However, aside from that, there is still the code itself, so I thought I’d share it anyway.

You will find the source code in git:

There is an Azure Function there that takes two parameters:


  • groupId corresponds to the workspace id in Power BI
  • reportId corresponds to the report id in Power BI

As a result, unless there is an error, the function will return a docx file representing the exported version of the report. So, for instance, if you wanted to utilize it in Power Automate, you could use an HTTP action to call the function:


And then you could use the HTTP action output to, for example, send the generated report by email:


Setting this all up involves a few steps:

1. You will need to register an app in Azure

Interestingly, I did not have to add any permissions:


But you would still need to add a secret:


2. You will need to grant that app access to the Power BI workspace

It can be done by going to the Power BI Service, navigating to the desired workspace, and then using the “Access” option to add your app to the workspace (I used the “admin” role):


3. You will need to publish the Azure Function from the project in git

Once it’s there, add two configuration parameters (CLIENTID_KEY and SECRET_KEY):


For the values, use the application id and secret from the app registration step.

Also, make sure to grab the function URL from the portal for the ExportToFile function in the newly registered function app:


That URL is what you can use in the Flows to call the function later.

4. You might also need to ensure that REST API usage is allowed in Power BI

You can do that in the Power BI Admin portal:



As you can see, there are different options there, so you might also use a security group (in which case you’d likely need to add your application principal to that group).

With that done, you should now be able to use that Azure Function to download reports.

Again, though, even if this little project shows the steps required to set everything up for Power BI REST API usage, and there is sample (working) code in git, it’s only so useful. In the end, there is still that slowness when exporting reports to files through the API, and there seems to be nothing I can do about it, whether it’s done with the help of the out-of-the-box Power BI connector or with completely custom code (an Azure Function, in this case).

Connection references and disabled flows – what’s the connection?

Have you ever had your flows turned off when importing them through a solution? It’s been a bit of a mystery to me for a while. I had some idea of what to look at, but it’s only now that I’ve looked closer.

Maybe you’ll find it useful, too.

First of all, every Flow can have multiple connection references. If you are not familiar with the concept yet, have a look here:

When importing a solution with such flows, if the required connection references do not exist in the target environment yet, you’ll be presented with a dialog to set them up. At a high level, I believe this is how it works:


And here is connections “configuration” dialog:


In the simple scenario where neither the connection references nor the flows have ever been imported into the target environment, you’ll get everything up and running as a result.

However, there are some edge cases, it seems.

For example, imagine that there are two different developers working on this solution in the dev environment. Also, imagine that the connection references have been separated out, so the Flows are still in the main solution, and the connection references are in a different one.

In my case, the idea was that connection references would be imported and configured by somebody who has the required SharePoint etc. permissions in the target environment. In the end, that seems to go against the “flow”, but I’ll explain below.

For now, just look at how, depending on who’s importing the Flows solution (once the connection references have been imported and configured), the Flows end up turned on or off.

Here, the flows are “on”:


And they are “off” below:


In both cases, the same solution was imported into the environment where that solution had not been imported before. However, you can see how the owner of the Flows is different, and, normally, you’ll see the name of the user who imported that solution in the “Owner” column in those views.

Here is another screenshot – notice who owns connection references:


My flows were not turned on when they were imported by a different user. Which seems to correspond to what’s mentioned in the docs (there was a link above):


It’s probably not so much about who owns the connection references – more about whose connections are linked to them.

However, there is, also, another twist to this.

What if I added all my connection references to the flows solution in the source? Let’s say I kept the connection references in the separate solution, too, and that solution has already been imported into the target.

Here is what my solution in dev would look like:


When trying to import that solution into the target environment, I’d get the same connection references configuration dialog, no matter which user account I use:


In other words, whether I have connection references imported into the target or not, I will see the dialog above when those connection references are part of my “flows” solution.

However, if they are not part of the solution, and the connection references had already been imported into the target prior to starting the solution import for the flows, the dialog above will not be presented, and no dependency checks will fail. Still, whether the flows end up turned on or off depends on whether the Flow owner is the same as the user owning those connections.

It seems there is some inconsistency there? Although, maybe I’m still missing something.

What’s actually bugging me is that, it seems, in order to make these deployments work, we always need to use the same deployment account (to make sure connections and flows are in sync). That would work nicely with ALM, of course, but what if we wanted to do manual deployments every now and then?
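One thing that may help with those manual deployments is a deployment settings file – the pac CLI can generate one from a solution zip (“pac solution create-settings”) and consume it on import, so the connection reference mappings are stated explicitly rather than picked up from whoever happens to be importing. A typical file, with made-up placeholder values, looks roughly like this:

```json
{
  "EnvironmentVariables": [],
  "ConnectionReferences": [
    {
      "LogicalName": "new_sharedcommondataserviceforapps_12345",
      "ConnectionId": "<id of an existing connection in the target environment>",
      "ConnectorId": "/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps"
    }
  ]
}
```

The file would then be passed to “pac solution import” through its settings file parameter.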

Canvas app error: the requested operation is invalid. Server Response: _scrubbedSensitiveData_

Here is an interesting error I ran into the other day:


It does sound like there was something in the server response that was considered “sensitive data”, but what could have happened?

The screenshot above is, actually, from the application I created specifically to reproduce the error and see why it’s happening, so it’s very simple, and it only does this on click of the button:


That seems pretty innocent, right? And it is – turned out the problem is not, really, on the application side (well, sort of).

There is a plugin involved that kicks in on update of the account record. In that plugin, an exception is raised:


Could you guess what’s causing scrubbedSensitiveData error on the app side?

Here is another line where you can still try guessing before you continue reading.

Ok, so it’s, actually, not the quote characters.

It is the word “token”. Huh?

By the way, I also tried the word “password” and got exactly the same results.

In the actual app, I was getting the “token” keyword in the exception message since I was doing JSON deserialization, and “token” can appear in error messages raised by JSON libraries when the JSON is incorrect.
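This is easy to reproduce with pretty much any JSON parser. For example, in Javascript (just an illustration – the actual plugin would be using a .NET JSON library), a parse error message typically contains the word “token”:

```javascript
// JSON parse errors commonly mention the offending "token" -
// once such a message makes it into the server response,
// that word alone is enough to trigger the scrubbing
try {
  JSON.parse("{ this is not valid json }");
} catch (e) {
  console.log(e.message); // mentions an unexpected token
}
```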

Export to File for Paginated Reports execution time

For the last few days, I’ve been trying to figure out one mysterious aspect of the “Export to File for Paginated Reports” action: it was always taking just a little more than 30 seconds to complete.

Which meant that, even for the simplest static paginated report that would not even connect to a data source, the flow duration would always be just over 30 seconds as well:



There was a support ticket, yet I kept asking around, and, eventually, it was confirmed that this is just how it works because of some internal details, so we should expect 30+ seconds there.

Which is an interesting caveat, since one of the scenarios I was trying to implement is printing invoice documents from a model-driven app. That would include a flow triggered from a ribbon button using Javascript, and, as a result, the generated invoice would be downloaded so the user could print it.

Would it be reasonable to ask the users to wait 40–50 seconds just to get the invoice generated when it’s part of a process where the end client is waiting for that invoice? It’s a good question, and I don’t know yet what the answer will be, so I may actually have to fall back to Word templates in this particular scenario.

There are other scenarios, though, where a 30-second wait would not make any difference (when it takes longer to generate the report anyway, or when the process is asynchronous by nature – as in, when the report is supposed to be sent by email or stored in SharePoint, for example).

Anyways, at least now I know what’s happening, so I can stop thinking I’m losing my mind 🙂

PS. There is an ItAintBoring PowerPlatform chat session coming up on Oct 12, where we’ll talk about Power BI paginated reports in particular, but we will also, hopefully, have a broader discussion about document/report generation in Power Apps. If you have something to share, or if you just want to listen in, don’t forget to register for the event:

When “adding required components”, be cautious of the solution layers

You probably know how we can add required components to our solutions – there is an option for that in the maker portal, and it’s easy to use. For example, if I had the “contact” table in the solution already, I could add required components for that table:


And the idea is that all the relationships, web resources, etc. would be brought into the solution when we use that option. In theory, this is very useful: if some of the required components were missing in the target environment, we just would not be able to import our solution there until all those components were included.

However, depending on the environment, there could be different, and possibly unexpected, consequences. In my slightly customized environment above, I got two more tables added to the solution when I used that option. One of them got added with all subcomponents, probably since it’s a custom table created in the same environment. I also got a few subcomponents from the Account table:


In a different environment, in exactly the same situation, I got a lot more components added to the solution as a result, and that does not seem to have much to do with where those components were created:


For example, the lookup column below is there now, and we don’t even touch that column in any of our solutions – it is there out of the box, and it’s part of the Field Service:


And that is where one potential problem with “add required components” seems to show up. Once this kind of solution is imported into the downstream environment (as a managed solution), we’ll be getting an additional layer for the subcomponents that were included as a result of “adding required components”, and it might not be what we meant to do. If those components are supposed to be updated through other solutions, too, this new layer might prevent changes applied through those other solutions from surfacing in the target environment, since the layers would now look like this:

  • New solution with required components that now includes Column A
  • Original solution that introduced Column A

If we try deploying an updated version of the original solution to update Column A, those changes will still land in the same original layer where Column A was introduced, which means we’ll be somewhat stuck there.

And this is, actually, exactly what the docs are talking about:


So, I guess, beware of the side effects.

DataSourceInfo – checking table permissions

You might have seen Lance Delano’s post about permissions support functions:

This can be very useful, and it sort of feels like this might be part of the functionality required to use Power FX when configuring command bar buttons in model-driven apps (since we would often need to enable/disable those buttons based on permissions), but it’s also useful in general when we need to check permissions.

The syntax is very simple, and it’s described in the post above.

One thing I noticed while playing with these functions is that it may make sense to use the disabled display mode by default, to avoid the effect below where the button shows up as enabled and then flips to disabled:


This happens because it takes some time for DataSourceInfo to check permissions, so, basically, the button shows up first, and, after a second or so (it probably depends on the load/connection/etc), it flips to the desired state once the function comes back with the result of the permission check:


It works, and it might not be that big of an issue, but if you wanted to avoid that flipping so it would work more or less the way it does below:


You might do this, for example.

1. In the OnStart of the application, you could set IsDeleteAllowed variable using DataSourceInfo function:


This would set IsDeleteAllowed to “false” initially, and then the second line would set it to the correct value. In between, the button would show up as disabled (see below).

2. Then you might use the value of that variable to set DisplayMode property of the button


Of course, this way the button will start flipping from “disabled” to “enabled” for those who can use it, but I usually prefer interface elements to work that way, since there is no confusion caused by them being enabled for a little while on load of the app (a user might try clicking those buttons) before becoming disabled.
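Put together, the two steps above might look like this in Power FX (a sketch – the Accounts data source and the variable name are just the ones used in this example):

```
// OnStart of the app: default to "not allowed", then ask DataSourceInfo
Set(IsDeleteAllowed, false);
Set(IsDeleteAllowed, DataSourceInfo(Accounts, DataSourceInfo.DeletePermission));

// DisplayMode property of the "Delete" button
If(IsDeleteAllowed, DisplayMode.Edit, DisplayMode.Disabled)
```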

When your Power Automate flow history run would not stop spinning

This is probably one of those things the product team is going to fix really soon, but, in the meantime, if you started to see spinner recently when looking at the flow run history, you might want to try one of the following:

  • Resize the browser window (could just maximize it as well)
  • Click somewhere in the “white” area
  • Possibly open dev tools

As of today (Oct 4), resizing the browser seems to work for me. The third option (opening dev tools) is a workaround suggested by a colleague. And the second one (clicking somewhere) used to work in the preview, but it does not seem to work now.

Here is an example:


Open in new window button – would you keep it or would you disable it?

Just noticed a new button showing up in my environments this morning, and, at first, I thought “wow”, but then I thought a bit more, and now I’m wondering.

It does allow opening the same model-driven screen in a separate window, but is it worth giving up some real estate in the command bar for the sake of this button? It seems we can do the same just by dragging the current tab out of the browser?


What are your thoughts?

Child flows and retry policy

It is often helpful to split bigger Power Automate flows into a bunch of smaller ones, so there is a parent flow, and, possibly, a few child flows.

However, when the child flow fails, here is what you might see as a result in the flow run history:


In the example above, there was a parent flow, and there was a child flow. For the same parent flow run, the child flow tried to run 5 times. It failed each and every time, and, of course, it has done whatever it was doing 5 times for the same parent flow run. That might involve creating new data, updating existing data, etc.

Which might or might not be a problem, depending on what exactly is happening in your flows.

Either way, if you wanted to make sure there are no retries in this situation, you’d need to remember to update the retry policy of the “Run a Child Flow” action. This can easily be done through the action’s Settings menu:


By default, it’ll be configured to retry 4 times:


You can either choose “none” to specify no retries at all, or, possibly, you can choose other options if retries are, actually, expected:
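For reference, in the underlying flow definition (visible through “Peek code”), the setting ends up as a retryPolicy object on the action’s inputs – roughly like this (the values below are just an example of a fixed-interval policy):

```json
{
  "retryPolicy": {
    "type": "fixed",
    "count": 2,
    "interval": "PT30S"
  }
}
```

With “none” selected, it becomes simply a policy object with "type" set to "none", and no retries happen at all.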


Quick testing of Javascript web resources

When working with model-driven applications, we always have to publish our changes to see them applied to the user interface. In some cases, that’s ok. We can keep adding columns to the form and publishing those changes one by one, for example. That’s a little frustrating, but it’s acceptable.

In other cases, this can become much more of a problem. Let’s say there is a javascript web resource I need to test/fix. If I went about it the regular way, I’d have to do this:

  • Update the javascript
  • Update corresponding web resource in the application
  • Publish my changes
  • Refresh the form and see if the fix was successful
  • If not, repeat this process until I have desired result

This always feels like a slow-motion movie, to be honest: it may take a minute to update the script, but it takes a few more minutes to see whether the change was successful because of all the subsequent publishing/refreshing.

Well, there is a faster way to do this, though it all needs to be done slightly differently.

  • Event handlers should be attached programmatically in the OnLoad of the form
  • The only event handler that should be configured in the form designer is form’s “OnLoad” handler

With that, I could easily use web browser dev tools to add / update my web resources any time I want without having to go through the regular “save->publish->refresh” route. And, once the script is working as expected, I could finally update the web resource.

For example, let’s see how I could use this to add a side pane (see my other post) on form load.

First of all, since onFormLoad will need an executionContext parameter, I need a mock context to call onFormLoad when I’m doing it outside of the regular execution pipeline:


Also, since I am trying to avoid having to use “publish” every time, in my onFormLoad function I need to make sure all event handlers are attached programmatically. And, if I end up calling it more than once, I need to make sure those handlers are not attached multiple times (so, in the example, I am just removing and re-adding them).

Note: once the script has been tested, I’ll need to update the web resource so all other users can enjoy the benefits of having this script there. I’ll just need to remember not to add the mock context and the onFormLoad call to the web resource, since that part will be taken care of by the regular execution pipeline.

Then, of course, I need lookupTagClickHandler in my script, so here is the full version:

function lookupTagClickHandler(executionContext)
{
	var formContext = executionContext.getFormContext();
	// prevent the default "open the record" behavior of the lookup tag
	executionContext._eventArgs._preventDefault = true;
	var tagValue = executionContext.getEventArgs().getTagValue();
	var pane = Xrm.App.sidePanes.getPane("LookupDetails");
	if(typeof(pane) == "undefined" || pane == null){
		Xrm.App.sidePanes.createPane({
			 title: "Lookup details",
			 paneId: "LookupDetails",
			 canClose: true,
			 width: 500
		}).then((pane) => {
			displayLookupInPane(pane, tagValue.entityType, tagValue.id);
		});
	}
	else{
		displayLookupInPane(pane, tagValue.entityType, tagValue.id);
	}
}

function displayLookupInPane(pane, entityType, id)
{
	pane.navigate({
	   pageType: "entityrecord",
	   entityName: entityType,
	   entityId: id
	});
}

function onFormLoad(executionContext)
{
	var columnName = "parentcustomerid"; 
	var formContext = executionContext.getFormContext();
	// remove and re-add the handler so it does not get attached twice
	// when the script is pasted into the console more than once
	formContext.getControl(columnName).removeOnLookupTagClick(lookupTagClickHandler);
	formContext.getControl(columnName).addOnLookupTagClick(lookupTagClickHandler);
}

// mock context for dev tools testing only - do not include this part
// in the web resource
var executionContext = {
	getFormContext: function(){
		return Xrm.Page;
	}
};
onFormLoad(executionContext);
Now all I need to do to test it out (even if there is no web resource at all yet) is open the form in the app, open the browser dev tools, and paste my script into the console. Then, if I want to make changes to see how they work, I just need to update the script and paste it into the console again. Here, have a look – I’ll open the form, paste the script into the dev tools console, and my side pane event handler will start working:

That was fast, wasn’t it?

Now all I need is to create a web resource and configure the on-load event handler for the form, but I already know my script is good to go, so I am not anticipating any issues and will only need to go through saving/publishing once.

PS. Another practical application of this approach is that we can test changes locally without “bugging” other team members/users.