Monthly Archives: April 2021

Why would you want to restart your portal?

A few days ago, a colleague of mine ran into the error below with Power Apps Portals:

[Screenshot: portal error message]

From the stack trace, we can probably guess that the problem is somehow related to permissions:

Adxstudio.Xrm.Cms.Security.WebsiteAccessPermissionProvider.TryAssertRightProperty

And, yes, this issue started to happen once an Administrator web role had been deleted. That was not intentional – it was a mistake – but the end result was that a portal administrator was getting this error pretty much anywhere on the portal while logged in.

So, I figured I’d sacrifice my own portal for the experiment, and, it seems, this is what eventually helped – even though in a somewhat unexpected way…

Once I had no Administrator role, I did not start seeing the error above right away. Instead, I saw a custom error page (which is the portal’s default behavior for errors). So, I had to go to the portal admin area and disable the custom error page (as per the instructions here: https://docs.microsoft.com/en-us/powerapps/maker/portals/admin/view-portal-error-log#display-a-custom-error-message), and, then, I figured I should also restart the portal to clean up any cache… Once back in the portal, I did not see any errors at all – it was working flawlessly (of course, I did not have admin permissions anymore, but I was not experiencing any errors either).

This is when my colleague had an “aha” moment: while watching me do all that, he figured he should try restarting his portal as well. And, just like that, the error was gone.

You can try it for yourself (if you are not afraid of having to restore “Administrator” role):

  • Log in to the portal under a user account that has the “Administrator” web role
  • Go to the Portal Management app
  • Delete the “Administrator” web role
  • Log out of the portal
  • Log in under the same user account again and click through a few pages. It may not happen right away, but, after just a little while, you will likely run into the same error above (the Contact-Us page seems to be more vulnerable to this than other pages; on some of the others, you might just see “liquid error” messages instead)
  • Then go to the portal admin center, restart the portal, and the error will be gone

And, as it often happens, this is not necessarily new, though this is new to me. I guess this is just a specific example of what Colin Vermander blogged about back in 2019: https://colinvermander.wordpress.com/2019/09/18/powerapps-portal-force-restart/

PS. In order to get things back on track, create a new Administrator role, add the required website access permissions, entity permissions, etc. Assign that role to the user and restart the portal again. Things should be back to normal after that.

How do we pass configuration settings to the virtual entity plugins?

When configuring a data source in the plugin registration tool, there is this note which implies that we should be able to supply configuration settings to the data provider plug-ins:

[Screenshot: data source note in the Plugin Registration Tool]

There are a few more details in the docs:

https://docs.microsoft.com/en-us/powerapps/maker/data-platform/create-edit-virtual-entities

[Screenshot: data source configuration details from the docs]

Which is encouraging, but it’s still not clear how we are supposed to read those configuration settings in the plugin.

So, with some help from Andrew Butenko, who had a very handy code sample in his GitHub repo, I figured I’d summarize how it seems to work.

1. We need to add configuration columns to the data source table

Note: the data source table is, also, a virtual table, though it’s not the same virtual table we are creating the plugins for. It’s the one we specify when registering a new data provider in the Plugin Registration Tool. On the screenshot below, you can see the “Connection String” column added to that table:

[Screenshot: “Connection String” column on the data source table]

I’ve also added that column to the form.

2. What we are doing above is defining the data source metadata. What shows up in Settings->Administration->Virtual Entity Data Sources are “instances” of that metadata (it’s tables vs rows… entities vs records…)

[Screenshot: Virtual Entity Data Sources list]

In the case of the ITA Crud Test data source, I can now configure the connection string settings there:

[Screenshot: connection string settings on the ITA Crud Test data source]

3. Finally, here is how we can read that connection string from within the plugin

[Screenshot: plugin code reading the connection string]

What’s that IEntityDataSourceRetrieverService? There is not a lot in the docs, but here is a link anyway:

https://docs.microsoft.com/en-us/dotnet/api/microsoft.xrm.sdk.ientitydatasourceretrieverservice?view=dynamics-general-ce-9

Still, it works, and we can use the RetrieveEntityDataSource method to retrieve the “instance” of the data source, from which, as you can see on the screenshot above, I can read the connection string settings.
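
Just in case, here is a minimal sketch of what that plugin code might look like. The ita_connectionstring column name is made up for this example – use the schema name of whatever configuration column you created:

using System;
using Microsoft.Xrm.Sdk;

public class CrudRetrievePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // The retriever service is available to virtual table data provider plugins
        var retriever = (IEntityDataSourceRetrieverService)serviceProvider
            .GetService(typeof(IEntityDataSourceRetrieverService));

        // Returns the data source "instance" configured under
        // Settings->Administration->Virtual Entity Data Sources
        Entity dataSource = retriever.RetrieveEntityDataSource();

        // Read the custom configuration column added to the data source table
        // ("ita_connectionstring" is a made-up schema name)
        var connectionString = dataSource.GetAttributeValue<string>("ita_connectionstring");

        // ... use the connection string to query the external system ...
    }
}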

There seems to be one additional twist to all of this. Even though the plugin is running under the “logged in” user account, that account does not have to be given access to the data source entity – RetrieveEntityDataSource will still work either way.

Which means a System Administrator could configure the data source, provide the required connection string and other “secure” configuration details, and none of those would have to be exposed to the regular users, yet the virtual entity plugin would keep working. Neat, isn’t it?

Have fun!

Turns out there are solutions we can’t export

And, no, it’s not a typo. It’s easy to have a solution that you can’t import – as long as some dependencies are missing both from the solution being imported and from the target environment, such a solution cannot be imported.

But we also can’t export some solutions – I don’t see such errors often, so I was quite surprised the other day when I ran into one of those:

[Screenshot: solution export error]

Here is a link to the support article mentioned in the error above:

https://support.microsoft.com/en-us/topic/invalid-export-business-process-entity-missing-3bd9228b-4d1f-9871-7007-fb7239559251

I used to think that “exporting” a solution is always allowed – apparently, business process flows are an exception, for some reason. Although, I’d think it would be no different from all the other dependencies, so I’m not sure why an exception was made for the BPF tables – they have to be added to the same solution the BPF is in (and, normally, that’s what we do; when we don’t, that’s how the error above happens).

What’s also interesting about this is that the solution checker will fail to run for such a solution, although all we will see in the history is that the solution check “couldn’t be completed”:

[Screenshot: solution check history – “couldn’t be completed”]

Looking at the solution history, though, there is a clue there:

[Screenshot: solution history showing the failed export]

Apparently, the solution checker exports the solution prior to running the check. That export fails for the reasons described above, and there we go… the solution check just “couldn’t be completed”.

PS. Come to think of it, there is a CRM Tip of the Day that’s been around since 2019: https://crmtipoftheday.com/1294/export-solution-before-running-solution-checker Well, I just sort of rediscovered the wheel 🙂

When thinking of an SSRS report in Power Platform, think Power BI Premium

Or, at least, give it a try, since you can get Power BI Premium per User now, and it’s not even that difficult to try it out:

[Screenshot: Power BI Premium per User trial offer]

If you have missed that notification on your first login, you can still start the trial from the profile menu:

[Screenshot: starting the trial from the profile menu]

Which will bring you to this:

[Screenshot: trial sign-up screen]

Now what is Power BI Premium per User? It’s, basically, Power BI Premium without the Premium price tag:

[Screenshot: Premium per User pricing (in CAD)]

Note: those prices above are in CAD, so make sure to do the conversion.

You can find GA announcement for this feature here:

https://powerbi.microsoft.com/en-us/blog/power-bi-premium-per-user-now-generally-available-for-purchase/

Power BI Premium per User has almost all the same features you’ll find in Power BI Premium, although there is no unlimited distribution. Which is understandable:

[Screenshot: Premium vs Premium per User feature comparison]

With this license, you can now create a Premium workspace:

[Screenshot: creating a Premium workspace]

With that, you can create and publish a paginated report. Why a paginated report? Because it’s essentially your SSRS report – it just lives in Power BI now:

[Screenshot: “Paginated report” option]

Once you choose that option above, you’ll have to download Power BI Report Builder and start creating a report there. For anyone who has worked with SSRS, this will look very familiar:

[Screenshot: Power BI Report Builder]

In the example above, while creating a Table or Matrix report using the Wizard, I’d have to specify the dataset, and that’s going to be my Dataverse instance:

[Screenshot: dataset selection in the report wizard]

Using that “Build” button on the screenshot above, it’s easy to set up the dataset:

[Screenshot: setting up the dataset with the “Build” button]

From there, off we go to the next step to choose the table and fields (Dataverse table and fields):

[Screenshot: choosing the Dataverse table and fields]

If, somehow, you wanted to edit the query manually, there is that option there (Edit as Text).

So, go through the steps – it takes literally a few minutes, and you have a very simple report ready there:

[Screenshot: the resulting report in Report Builder]

Does this look familiar? How about the screenshot below, which comes straight from the SSRS tutorial (https://docs.microsoft.com/en-us/sql/reporting-services/lesson-1-creating-a-report-server-project-reporting-services?view=sql-server-ver15):

[Screenshot: report from the SSRS tutorial]

Anyways, you can run the report locally right away:

[Screenshot: running the report locally]

And you know what? You did not even need a license for that.

You will need a license to publish this report to the Power BI Service:

[Screenshot: publishing the report to the Power BI service]

So, let’s publish it to the Premium workspace created earlier:

[Screenshot: choosing the Premium workspace]

And here it is, published there:

[Screenshot: the report published to the workspace]

Now, before running the report, I have to update connection settings:

[Screenshots: updating the connection and credential settings for the report]

With that checkbox above selected (since we want each user to use their own credentials when accessing the data), I would sign in, re-run the report, and here we go:

[Screenshot: the report running in the Power BI service]

Now the functionality there is, basically, what you always dreamed about in SSRS (at least as far as Dynamics/Power Platform goes). You can export to different formats, you can create a subscription (didn’t you always want to create a subscription?), and you can share the report:

[Screenshot: export, subscription, and share options]

There is one caveat with the “Share” option. Since I’ve been talking about the “Power BI Premium per User” license in this post, keep in mind that users licensed that way can only share paginated reports (which require either a per user license or a Premium capacity license) with users who have been licensed as well. Unlicensed users will get the message below:

[Screenshot: sharing error shown to unlicensed users]

That’s where “Per Capacity” licensing comes with unlimited distribution – per user licensing does not have that included.

Once you have dealt with the licensing, though, you can start sharing those reports. This is where you have different options:

  • You can add links to the left-hand navigation in your model-driven apps
  • You can simply point users to the premium workspace
  • You can even organize those reports into an App in Power BI (although, keep in mind there is one app per workspace)

With all those updates and features, is there still a reason you need SSRS in Power Platform? Or, to ask a different question… what are you going to use to build your next pixel-perfect report and why? If you are not sure, give Power BI Premium a try, and see how it works out!

Form component controls and field validations / event handlers

Form component controls (https://docs.microsoft.com/en-us/powerapps/maker/model-driven-apps/form-component-control) can be extremely useful.

It may not look so at first glance, since, after all, what they give us is, essentially, one extra tab on the “main” table form. In that tab, we can display columns from the secondary table. And this comes at the cost of not having access to the command bar of the secondary table, not being able to use a BPF there, having to work with just a single tab…

Wouldn’t it be easier to simply add those other columns to the first table instead?

This is one of those “it depends” situations. In the most straightforward scenario, the need to streamline the user interface may arise when both tables have been around for a while, so re-organizing all the dependencies and underlying data might turn out to be a pain in the neck – this is when you might appreciate the option of having a form component control.

There is one interesting caveat, though. It’s ok if we have to update the main table’s command bar by adding a button here and there to support operations that would normally be available through the secondary table’s command bar. It’s ok if we have to create a new main form for the secondary table to accommodate the “one tab has it all” requirement.

However, and this may not be obvious: when such a control is placed in a separate tab, the control is not loaded until the user navigates to that tab. Which leads to somewhat inconsistent behavior if and when there are validations on the secondary table’s form – those could be required fields, OnSave event handlers, etc.

In the example below, you’ll see that, on the account form, I can keep updating the phone # and saving the record without any popup messages up until the moment I switch to the primary contact tab – this is when the contact form gets loaded, and it has an OnSave event handler which will start displaying alert messages on each subsequent “Save”:

The same goes for “Required” field validation, for example. So, if we wanted all those validations and scripts to kick in whether or not the user ever switches to that other tab, what can we possibly do?

Short of redesigning the main table’s form to display everything on the same tab, including the secondary form component control, one other option might be to programmatically switch to the tab that displays the other table and, then, switch back to the first tab of the main table’s form:

You can see the tabs switching on load of the main table’s form, and that’s the disadvantage of this approach, but, at least, the secondary form’s scripts and validations start working right away, which is what I needed there.

And here is the script which I added to the “OnLoad” of the account form:

function loadTab(executionContext, tab, returnTab)
{
    var formContext = executionContext.getFormContext();
    // Switch to the tab that hosts the form component control,
    // so the secondary form (and its scripts/validations) gets loaded
    formContext.ui.tabs.get(tab).setFocus();
    if (typeof(returnTab) != 'undefined')
    {
        // Give the other tab a moment to load, then switch back
        setTimeout(function() {
            formContext.ui.tabs.get(returnTab).setFocus();
        }, 1000);
    }
}

Here is the event handler (note that “Pass execution context as first parameter” has to be selected, and the two tab names are passed in the comma-separated parameters field):

[Screenshot: OnLoad event handler configuration]

I wonder if there is a better option, though? Let me know if there is!

Print function in Canvas Apps

I have been waiting for a little while to try this in my environment, and, finally, “print” functionality is there.

There are a couple of additional screen templates we can add to the canvas apps now:

[Screenshot: the new print screen templates]

Those two screens come with a print button added by default, and, other than that, there is, actually, nothing on them. The “Print” button itself will be hidden when printing:

[Screenshot: print preview with the button hidden]

Which is all good, but it still leaves a somewhat mixed impression. Unless I’m missing how it is supposed to be configured, it seems “print” will always print one page only.

For example, I have a screen with Lorem Ipsum here:

[Screenshot: screen with Lorem Ipsum text]

There is more text in the label than could possibly fit on one screen, and it just gets cut off when printing – only one page will be printed:

[Screenshot: print preview cut off at one page]

So this is probably not meant for printing whole books (yet?), but it can still be very useful for printing application screens. For example, it takes no time at all to add a “print” button to the screen below so it can be sent to a printer:

[Screenshot: app screen with a print button added]

Anyways, that’s some neat functionality we can start adding to our applications – we just need to be mindful of the limitations.

Protecting shared email messages and attachments

In the previous post, there was a quick video of how otherwise shared emails could be protected using a few relatively simple plugins.

Just to re-cap, why would you want to do this?

The scenario we are talking about is:

  • We want to keep everyone in the know of the email correspondence with the client (on the contact timeline, for example)
  • At the same time, there could be different business groups involved, and some of them might feel that their emails have sensitive information that should not be shared with others

Both of those statements apply to the attachments, too.

So, for a user who has limited access, we would want the system to hide the email message and, also, to prevent such a user from downloading the email attachments.

Which, in the end, is totally doable with a relatively simple plugin. Whether it’s worth it or not depends, but, really, if you can’t get the requirements changed, you may have to change the system:

[Screenshot: protected email message]

Here is how we can do it with a plugin.

We need to intervene at 4 different points in the execution pipeline:

[Screenshot: the four plugin registration steps]

On create of an email, as well as on update, we need to store the original email body in a secured entity, and we need to replace the email body with a “this email is protected” message.
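
Here is a minimal sketch of that create/update step, registered in the pre-operation stage. All the schema names below (ita_issecured, ita_securedata, ita_data, ita_securedataid) are made up for illustration – the actual implementation is in the github repo mentioned further down:

using System;
using Microsoft.Xrm.Sdk;

public class SecureEmailCreatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider
            .GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Pre-operation Create/Update: the email comes in the Target parameter
        var email = (Entity)context.InputParameters["Target"];
        if (!email.GetAttributeValue<bool>("ita_issecured")) return;

        // Move the original body to a record in the secured table...
        var secureData = new Entity("ita_securedata");
        secureData["ita_data"] = email.GetAttributeValue<string>("description");
        var secureDataId = service.Create(secureData);

        // ...link that record to the email, and replace the body with a stub
        email["ita_securedataid"] = new EntityReference("ita_securedata", secureDataId);
        email["description"] = "This email is protected.";
    }
}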

On retrieve of an email, we need to substitute the “protected” email body with the actual message from the secured entity for those users who have access, and display a “please contact the owner…” message for those who don’t.

Finally, whenever a user tries to download an attachment, we need to check whether the user has access to the protected email to start with. Which is another “retrieve” plugin.

For this to work, we’ll need to add a new table (Secure Data), create a lookup column on the email table (a lookup to that new table), and, then, have some sort of trigger/identifier for the secure emails. I did that by adding a “Yes/No” column to the email entity, but there might be other ways.

Basically, the plugin, so far, assumes that, once the value of that column is “Yes”, the email is supposed to be protected. That column could be updated manually by the owner, through a Power Automate flow, or through another plugin – it depends on whether emails should be protected automatically, manually, or both ways.

You will find the complete source code on github; however, just to get you going, here is an example of the “Retrieve” code:

[Screenshot: the “Retrieve” plugin code]

The plugin runs under the interactive user account in this case, so it’s running in the same security context. Which means that, if it can’t access the “Secure Data” record through the lookup, it’ll fail, and, then, it’ll replace the email body with a message advising the user to contact the email owner for the details of that email.
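
For reference, here is a minimal sketch of how that “Retrieve” step might look, using the same made-up schema names as above:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class SecureEmailRetrievePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider
            .GetService(typeof(IOrganizationServiceFactory));
        // Calling user's context, so Dataverse security applies to the retrieve below
        var service = factory.CreateOrganizationService(context.UserId);

        // Post-operation Retrieve: the email is in the output parameters
        var email = (Entity)context.OutputParameters["BusinessEntity"];
        var secureDataRef = email.GetAttributeValue<EntityReference>("ita_securedataid");
        if (secureDataRef == null) return; // not a protected email

        try
        {
            // This will throw if the calling user has no access to the secured record
            var secureData = service.Retrieve(secureDataRef.LogicalName,
                secureDataRef.Id, new ColumnSet("ita_data"));
            email["description"] = secureData.GetAttributeValue<string>("ita_data");
        }
        catch (System.ServiceModel.FaultException<OrganizationServiceFault>)
        {
            email["description"] = "This email is protected. Please contact the owner for the details.";
        }
    }
}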

And the rest of the plugin is more or less all about creating and managing that secure data record.

Have fun!

Securing email details while keeping emails visible to all users

Sometimes, project requirements can take an interesting turn, and we may need to think of unusual solutions. Fortunately, we have powerful options up our sleeve, since, after all, we are not talking about some kind of csv file here – it’s Microsoft Dataverse!

So, when it comes to the activity tables in Power Platform, there are a few characteristics that are specific to activities:

  • This is a hierarchical table – there is a parent activity table, and there is a bunch of child tables such as emails, phone calls, etc
  • Which is sometimes a problem from the security perspective, since we can’t grant permissions separately per activity type
  • And, also, because of the timeline control, it may be undesirable to hide some activities from users, since the timeline then becomes broken (for example, some emails might show up there while others are hidden because of the permissions)

However, in environments where, for instance, users from different business units might start sending emails to the same contact, it might not take long before we are asked to start hiding some of the emails so that different business units would not see each other’s emails (or, at least, some of those).

Which might mean the timeline would get broken, and, in general, it would be difficult to rely on the timeline control to see whether there was any communication with the contact.

So, what if we wanted to keep the emails visible to everyone in the system, but, at the same time, what if we wanted to protect the actual messages on some of the emails?

Would it be possible to have it working like this?

[Screenshot: protected email on the timeline]

Everybody would still have “read” access to the activities/emails, but the actual email message would be protected and stored in a separate table, where users would have access permissions different from those they’d been given for the activities.

There is a plugin behind this solution, and I’ll talk about it in more detail later, but, if you are wondering, feel free to have a look:

https://github.com/ashlega/ItAintBoring.SecureData

There are, also, additional explanations in the next post.

And below is a quick video for this proof of concept solution:

Let the Power be with you!

Power Platform Developers – what does the future hold?

It seems this old debate is starting anew now that Jukka and Natraj have posted great articles on the topic, and I really can’t help but chime in, even if not on the same scale.

See, I code. However, I’ve written about this before, and I’m still saying the same thing: I’d be almost insulted if somebody called me a developer (and that happens almost daily, so you can imagine…)

Do I have anything against development? Not really. I just don’t think that’s who I am, and, to start with, I’d be bored to death if I had to write code all day long. But doing it a few hours per day? That’s just what I need.

Which, of course, affects my overall development skills, since, even though I used to be a .NET developer before switching to Dynamics in 2010, ever since then I have been playing catch-up with the development tools and technologies – I just don’t need to use them that often anymore. That was a conscious choice back then, though, so, essentially, I got what I wanted.

Do I miss development? Sometimes, I do. Even while reading Natraj’s article, I noticed that I had never even heard of some of the tools he mentioned. That definitely hurts, but this is where I need to remember why I transitioned from a traditional pro developer to a Dynamics/Power Platform consultant with a development background.

And the reason is that, as much as I like coding every now and then, there is another area which is at least as interesting, and which is on the edge of business analysis and development. It’s designing solutions in collaboration with the clients. Sometimes, this is called architecture, but there are nuances there, since there is classic enterprise architecture that has very little to do with the solution design, and there is solution design/architecture which is very specific to the tools utilized in the solution.

That latter one is exactly what makes this job attractive to me, and, of course, there is the added layer of occasional development I get to do when prototyping or implementing more advanced customizations. It’s sort of a perfect mix.

However, thinking of whether I could be a pro developer focusing on the Power Platform, I wonder if that option really exists now. I could probably make an argument that, from the beginning, Dynamics CRM was meant to be a low-code platform. We were not supposed to code the user interface from scratch, and we were not supposed to develop lots of business logic in plugins. We were supposed to do some of that, but only sporadically, because, after all, if you had to re-write Dynamics functionality using custom code, why in the world would you even go with Dynamics to start with?

And it’s the same with the Power Platform today, just Microsoft has taken it even further by introducing all those extra tools such as Canvas Apps, Power Fx, Power Automate, Virtual Agents, etc.

A lot of work in Power Platform is (and always was) supposed to be done without pro code. And I think you would actually have to work for a big implementation partner to specialize in Power Platform development as a pro developer, since pro development is an expensive activity, and, the way I see it, one might have difficulty finding a dedicated pro developer role on a single project directly with the Power Platform clients. Unless, of course, those are huge clients that may have such requirements occasionally.

It’s much more likely, though, that the clients would need a consultant – in the sense that development would not be the only (or even the main) activity. There would be other things to do: setting up users, providing training, providing support, troubleshooting, evaluating requirements, configuring the system, setting up data migration, and, occasionally, still doing development. And not just pro development, since you’d have to be doing low-code development as well.

Do you think that, after a few years of this work, you’ll still be able to say you are working on the cutting edge of pro development? I’m sorry, but it’s not going to be the case (at least not through your work experience). You would definitely be on the cutting edge of the Power Platform technology, though.

So, I guess I’m with Natraj in terms of what pro developers could be doing if they wanted to stay pro developers, and I’m with Jukka in terms of what Power Platform consultants could be doing, but there is one caveat.

Essentially, I don’t think pro developers should really be focusing on the Power Platform. If they do want to keep advancing their pro development skills, they might choose to venture into the Power Platform world every now and then, but there is not going to be a lot of cutting-edge work there (except, maybe, for PCF controls – but how often would you be writing an advanced PCF?), so, in terms of maintaining and exercising their skills, they somehow need to find a healthy mix of Power Platform and non-Power Platform work.

That’s, actually, the curse of the Power Platform – I don’t think it’s that attractive to pro developers, and that might be another reason why there is so much push for the low-code tools. Basically, if we can’t get enough pro developers, but we still need code-level customizations, why not democratize this area and come up with a low-code approach, right?

Except that, when low-code meets pro-code, there can be a clash of cultures. And this is where we would have to start talking about unit testing, devops, pipelines, git, and all the other things which are normal for pro developers but could be almost alien to the functional consultants or business users who would be using low-code.

Well, it seems there can be no short post on this topic 🙂 Have fun!

Power Apps Portals: header and footer caching

When setting up a profile page for newly registered users yesterday (which I wrote about here: https://www.itaintboring.com/dynamics-crm/power-apps-portals-redirecting-newly-registered-users-to-a-custom-page/), I ran into something that I did not realize at first.

And, then, while digging around tonight, it all became crystal clear. Yep, headers and footers are cached, so some of the Liquid code we may expect to work in other places won’t work there as is. I mean, it will work once, but, then, the header content will become cached, and, from there, we’ll be getting the cached header from the portal server.

Come to think of it, that’s going to be a problem when using such Liquid objects as request.url – and that’s exactly what I tried to do in the Liquid-based version of the redirect implementation, which is why it did not work.

Here is a documentation page that’s talking about header/footer caching:

https://docs.microsoft.com/en-us/powerapps/maker/portals/configure/enable-header-footer-output-caching

Luckily, there is a special tag we can use to prevent caching of some areas of the header/footer:

[Screenshot: the substitution tag from the docs]

https://docs.microsoft.com/en-us/powerapps/maker/portals/liquid/template-tags

It’s funny that request.url is, actually, called out in the tip there…
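
For reference, here is roughly what that looks like in Liquid – anything wrapped in the substitution tag is excluded from the output cache and re-evaluated on every request (a sketch based on the docs page above):

{% substitution %}
  {{ request.url }}
{% endsubstitution %}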

Anyways, just to try it out, I have updated the header to display request.url twice – without the substitution tag and with it:

[Screenshot: header displaying request.url with and without the substitution tag]

The difference is quite obvious – notice how the values displayed in the top left corner change as I keep navigating through different pages, even though it’s request.url in both cases. The first value has been cached, while the second one has not, since it’s within the “substitution” tag.

So there you go – when using Liquid in the header (or in the footer), keep caching in mind.