
Readonly = impression, FieldSecurity = impression + access restrictions, Plugins = controlled changes

 

Why is it not enough to make a field readonly on the form if you want to secure your data from unintended changes?

Because there are at least 2 simple ways to unlock such fields:

1. The “Level up” extension from Natraj Yegnaraman

https://chrome.google.com/webstore/detail/level-up-for-dynamics-crm/bjnkkhimoaclnddigpphpgkfgeggokam

Here is how read-only fields look before you apply “God Mode”:

image

Then I apply the “God Mode”:

image

And I can happily update my read-only field:

image

Which is absolutely awesome when I am a system administrator trying to quickly fix some data in the system, but it can become a nightmare from the data consistency standpoint if I am not a system administrator, or if, even as a system administrator, I am not really supposed to be making those changes.

2. When in the grid view, you can use the “Open in Excel Online” option

image

“Read-only” is how you mark fields on the forms – it is not enforced on the server side, in Excel, etc. So, once you’ve updated the records in Excel, you can just save the changes:

image

Of course you can also export/import, but “online” is the quickest in that sense.

What about the field-level security, though?

It does help when the user in question is not a system admin:

https://docs.microsoft.com/en-us/power-platform/admin/field-level-security

“Record-level permissions are granted at the entity level, but you may have certain fields associated with an entity that contain data that is more sensitive than the other fields. For these situations, you use field level security to control access to specific fields.”

Is that good enough? It is certainly better, but it does not help with opportunistic system admins who just want to deal with the immediate data-entry problem. They will still have access to all the fields.

Is there a way to actually control those changes?

Well, strictly speaking, you can’t beat a system admin in this game. However, you might create a synchronous plugin, or, possibly, a real-time workflow to run on update of some entities and to raise errors whenever a user is not supposed to modify certain data.

Would it help all the time? To some extent – a system admin can just go into the environment and disable the plugin. Of course, that’s a little more involved, and it’s the kind of intervention not every system admin would be willing to make, but still. However, for the most critical data you could, at least, use this method to explain to the system administrator why updating such fields would not be desirable. Which is, often, part of the problem – if there is a read-only field and no explanation of why it is read-only, then… why not unlock it, right? But, if a plugin throws an error with some extra details, even the system admin might decide not to proceed. Technically, you might also use a real-time workflow for this (just stop the real-time workflow with the “cancelled” status), but it might be difficult or impossible to verify the conditions properly in the workflow.

Anyway, those are the options, and, of course, in terms of complexity, making a field read-only would be the easiest, but it would also be the least restrictive. Using field-level security would be more involved, but it would restrict data access for anyone but the system administrators. Plugins might give even more control, but that would certainly be development.

Trying out Microsoft bot technologies

 

I’ve got a slightly different idea for the month of October, and it is not directly related to PowerApps development, but it does depend on what I can do with the bots. So, naturally, I’ve been trying the Bot Framework and the Virtual Agent, and I may need to try the Virtual Assistant samples as well.

If this interests you, carry on reading.

image

First of all, the Bot Framework is at the core of any “agent/assistant/bot” technology from Microsoft. However, the way I see it:

  • Bot Framework is the underlying technology
  • Virtual Agent is a self-driving vehicle
  • Virtual Assistant is a Formula 1 car

And, of course, you can always develop your own bot from scratch directly on top of the Bot Framework.

Either way, let’s talk about the Virtual Agent in this post.

To start with, the Virtual Agent is a Dynamics/PowerApps-connected technology. You can easily utilize Microsoft Flow directly from the Virtual Agent (but you can’t do much more in terms of “development”… which is how PowerPlatform often is – it’s a “low code”/“no code” platform in many cases).

Then, it’s in preview. Don’t start posting questions to the community forum in the first 10 minutes after you’ve created a trial through the link below:

https://dynamics.microsoft.com/en-us/ai/virtual-agent-for-customer-service/

Wait till you see the confirmation:

image

Until then, you will see what can probably be described as “deployment work in progress” – some areas will be working, but others won’t. For example, all the buttons to create new topics or new bots will be greyed out and unavailable.

Either way, here is what I wanted to make my virtual agent do:

  • Wake up – easy to do, just need to define some trigger phrases. That said, the agent does not seem to recognize a trigger phrase if it’s even slightly different
  • Greet the visitor – not a problem, it’s just something the bot needs to say
  • Take the answer and get the name

 

This last one seems simple, but it actually requires a Flow. The user might say “I am…”, “My name is…”, etc. However, all I can do in the agent designer is take that answer into a variable and use an “is equal to” condition in the expressions:

image

Which is not quite what I need, since I’d rather know the visitor’s name. Hence, I need to call a Flow through what is called an “action” in the Virtual Agent to try to parse the answer. Actually, there is a demo of a similar Flow here:

https://www.youtube.com/watch?v=joXCzvi38Fo&feature=youtu.be

That’s overly simplified, though, since I need to parse the user response to remove “I am…”, “My name is…”, etc.
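Just to illustrate the kind of parsing I mean (the actual logic lives in the Flow, and the phrase list and function name below are made up for this example), here is a quick TypeScript sketch:

// A rough sketch of the name-parsing logic (the real implementation is in the Flow)
function extractVisitorName(answer: string): string {
    const prefixes = ["my name is", "i am", "i'm", "this is"];
    let name = answer.trim();
    for (const prefix of prefixes) {
        // Strip the prefix if the response starts with it (case-insensitive)
        if (name.toLowerCase().startsWith(prefix + " ")) {
            name = name.substring(prefix.length).trim();
            break;
        }
    }
    return name;
}

// extractVisitorName("My name is Alex") -> "Alex"
// extractVisitorName("Alex") -> "Alex"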

This is where I quickly found out that error reporting between Flows and Virtual Agent may still need some work because, once I had my Flow created and added as an action to the bot, I started to get this:

image

If only I could see what was wrong somehow… Apparently, the Flow was running ok:

image

Turned out it was just a matter of the HTTP response being formed incorrectly in the Flow:

image

The agent was expecting “VisitorName”, since that’s how the parameter was named in the schema, but the Flow was returning “Name” instead. In the absence of detailed error reporting, it’s always the simplest stuff that you verify last, so it took me a while to figure it out – it was a quick fix after that.
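In other words, the JSON coming back from the Flow’s “Response” action has to use exactly the property name declared in the schema – something like this (the value is, of course, just an example):

// What the bot was expecting from the Flow – the property name must match the action schema
const expectedResponseBody = {
    VisitorName: "Alex" // returning "Name" instead is what was breaking the bot
};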

In the end, it was not too complicated, and, apparently, this relative simplicity is why we’d want to use a Virtual Agent:

 

image

From there, if I wanted this agent to run on my own web site, there is an iframe deployment option. Basically, as I mentioned before, this is all in line with the low-code/no-code approach.

And, because of the Flow integration (which, in turn, can connect to just about anything), the possibilities there are just mind-blowing. We can use a virtual agent to:

  • Confirm site visitor identity by comparing some info with what’s stored in Dynamics/PowerApps
  • Help site visitors open a support ticket
  • Provide an update on their support tickets
  • Surface knowledge base articles from Dynamics
  • Help them navigate through the phone directory
  • Search for the special offers
  • Etc etc

Besides, there is a Flow connector for LUIS, and I could probably add intent recognition and do other cool stuff using Flows:

https://flow.microsoft.com/en-US/connectors/shared_luis/luis/

 

I would definitely keep exploring it, but I really wanted to integrate my bot with speech services, and, since this feature is not available yet (as per this thread: https://community.dynamics.com/365/virtual-agent-for-customer-service/f/dynamics-365-virtual-agent-for-customer-service-forum/358349/does-dynamics-365-virtual-agent-for-ce-support-voice-bot), I will be moving on to the Bot Framework and the Virtual Assistant for now.

Which means leaving the familiar PowerPlatform domain. Let’s see what’s out there in the following posts…

 

Lookups behavior in Wave 2 – recent items, wildcard search, magnifying glass button, etc

 

I am wondering how many of us have missed this post:

“Preview for usability enhancements to lookups in Unified Interface”

https://powerapps.microsoft.com/en-us/blog/preview-for-usability-enhancements-to-lookups-in-unified-interface/

I certainly did. So, to summarize, here is what happens now when you click that magnifying glass button in the lookup field:

  • If “recent list” is enabled for the lookup, you will see recent items only
  • If “recent list” has been disabled through the configuration, you will see nothing (there will still be “new <record>” link though)
  • If you try typing “*” and clicking the magnifying glass after that, you will get nothing as well (there seems to be a bug when using “*” on its own)

 

If you wanted to bring up the list of records (not the “recent” records), there seem to be two options:

  • When in the lookup field, while it is still empty, press the “Enter” key
  • OR enter some text to search for and press “Enter” or click the magnifying glass button (for example, “T*123” would work fine… as long as it’s not just “*”)

Using PowerShell to export/import solutions, data, and Word Templates

 

I blogged about it before, but now that the ItAintBoring.CDS.PowerShell library has been fine-tuned, it might be time to do it again.

There are three parts to this post:

  • Introduction
  • ItAintBoring.CDS.PowerShell installation instructions
  • ItAintBoring.CDS.PowerShell usage example

 

INTRODUCTION

There are different ways we can set up ci/cd – we can use Azure DevOps, we can use PowerApps Build Tools, we can use various community tools, or we can even create our own PowerShell library.

Each has some pros and cons, but this post is all about that last option, which is using our own “PowerShell library”.

What are the main benefits? Well, it’s the amount of control we have over what we can do. You might say that “no code” is always better, but I would argue that, when you are setting up ci/cd, “no code” will probably be the least of your concerns.

Anyway, in this case we can use the library to export/import solutions, and, also, to export/import configuration data. Including, as of v 1.0.1, Word templates.

INSTALLATION INSTRUCTIONS

1. Create a new folder, call it “CDS Deployment Scripts” (although you can give it a different name if you prefer)

2. Create a new ps1 file in that folder with the content below:

function Get-NuGetPackages{
    # Download the latest nuget.exe into the current folder
    $sourceNugetExe = "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe"
    $targetNugetExe = ".\nuget.exe"
    # Clean up any previously downloaded tools
    Remove-Item .\Tools -Force -Recurse -ErrorAction Ignore
    Invoke-WebRequest $sourceNugetExe -OutFile $targetNugetExe
    Set-Alias nuget $targetNugetExe -Scope Global -Verbose

    # Install the ItAintBoring.CDS.PowerShell package into the current folder
    ./nuget install ItAintBoring.CDS.PowerShell -O .
}
Get-NuGetPackages

Here is how it should all look:

image

3. Run the file above with PowerShell – you will have the scripts downloaded

image

image

4. Update the system PATH variable so it includes the path to deployment.psm1

image

 

USAGE EXAMPLE

For the remaining part of this post, you will need to download the sources from GitHub (just clone the repo if you prefer):

https://github.com/ashlega/ItAintBoring.Deployment

That repository includes all script sources, but, also, a sample project.

1. In the file explorer, navigate to the Setup/Projects/DeploymentDemo folder

image

2. Update settings.ps1 with the correct environment URLs

Open settings.ps1 and update your connection settings for the source/destination. If you are just trying the demo, don’t worry about the source, but make sure to update the destination.

image

3. IF YOU HAD A SOURCE environment

Which you don’t, at least while working on this example. But the idea is that, if you start using the script in your ci/cd, you would have the source.

So, if you had a source environment, you would now need to update Export.ps1. The purpose of that file is to export solutions and data from the source:

image

You can see how, for the data, it’s using FetchXml, which also works for the Word Document Templates.

4. Run import_solutions.ps1 to deploy solutions and data to the destination environment

image

5. Run import_data.ps1 to deploy data (including Word Templates) for the new entities to the destination environment

image

 

As a result of those last two steps above, you will have the solution deployed and some data uploaded:

image

image

image

Most importantly, it literally takes only a few clicks once those export/import files have been prepared.

And what about source control? Well, it can still be there if you want. Once the solution file has been exported, you can run the Solution Packager to unpack the file, then use Git to put it in Azure DevOps or on GitHub. Before running the import, you will need to get those sources from source control, package them back using the Solution Packager, and run the import scripts.
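For reference, this is roughly what those two Solution Packager calls look like (the file and folder names below are just placeholders):

SolutionPackager.exe /action:Extract /zipfile:MySolution.zip /folder:.\MySolutionSrc

SolutionPackager.exe /action:Pack /zipfile:MySolution.zip /folder:.\MySolutionSrc /packagetype:Unmanaged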

Test harness for PCF controls – we can also use “start npm start”

While developing a PCF control, you might want to start the test harness.

Although, if you are, like me, not doing PCF development daily, let me clarify what the “test harness” is. Essentially, it’s what you can/should use to test your PCF control without having to deploy it into the CDS environment. Once it’s started, you’ll be able to test the behavior of your PCF control outside of the CDS in a special web page – you’ll be able to modify property values, to see how your control responds to those changes, debug your code, etc.

If you need more details, you might want to open the docs and have a look:

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/debugging-custom-controls

Either way, in order to start the test harness, you would use the following command:

npm start

This is fast and convenient, but this command will lock your terminal session. For example, if you are doing PCF development in Visual Studio Code, here is what you will see in the terminal window:

image

You won’t be able to use that window for any additional commands until you have terminated the test harness. Of course you could open yet another terminal window, but it just occurred to me that we could also type

start npm start

Those two “start”-s surrounding the “npm” have completely different meanings – the first one is the regular Windows “start” command, which launches whatever follows it in a new window. So, when done this way, a new command prompt window will show up and “npm start” will run in that additional window:

image

It’ll be the same result – you will have the harness started, but, also, your original terminal session will continue to work, and you won’t need to open another one.

Managed solutions – let’s debunk some misconceptions

I have a strange attitude towards managed solutions. In general, I don’t always see great benefits in using them. On the other hand, this is what Microsoft recommends for production environments, so should I even be bringing this up?

This is why I probably don’t mind it either way now (managed or unmanaged); although, if somebody decides to go with unmanaged, I will be happy to remind them about this:

image

Still, it’s good to have a clear recommendation, but it would be even better to understand the “why-s” of it. Of course it would be much easier if we could describe the difference between managed/unmanaged in terms of the benefits each of those solution types is bringing.

For example, consider Debug and Release builds of your C# code in the development world. Debug version will have some extra debugging information, but it could also be slower/bigger. Release version might be somewhat faster and smaller, so it’s better for production. However, it’s not quite the same as managed/unmanaged in PowerApps since we can’t, really, say that managed is faster or slower, that there is some extra information in the unmanaged that will help with debugging, etc.

I am not sure I can do it that clearly for “managed” vs “unmanaged”, but let’s look at a few benefits of the managed solutions which are, sometimes, misunderstood.

1. There is no ability to edit solution components directly

The key here is “directly”. The screenshot below explains it very clearly:

image

In other words, you would not be able to lock solution components just by starting to use a managed solution. You’d have to disable customizations of those components (and, yes, those settings will be imported as part of the solution). Without that, you might still go to the default solution and modify the settings.

However, locking those components makes your solution much less extendable. This is probably one of the reasons why Microsoft is not overusing that technique, and we can still happily customize contacts, accounts, and other “managed” entities:

image

Despite the fact that Account is managed (see screenshot above), we can still add forms, update fields, create new fields, etc.

Then, do managed solutions help with “component locking”? From my standpoint, the answer is “sometimes”.

2. Ability to delete solution components is always great

This can be as risky as it is useful. It’s true that, with the unmanaged solutions, we can add components but we can’t delete them (we can do it manually or through the API, but not as part of a solution removal). Managed solutions can be of great help there. However, even with managed solutions there can be some planning involved.

Would you delete an attribute just like that or would you need to ensure data retention? What if an attribute is removed from the dev environment by a developer who did not think of the data retention, and, then, that solution is deployed in production? The attribute, and the data, will be gone forever.

Is it good or bad? Again, it depends. You may not want to automate certain things to that extent.

 

3. Managed can’t be exported, and that’s ok

For a long time, this has been my main concern. If the dev environment gets broken or becomes out of sync with production, how do we restore it?

This is where, I think, once we start talking about using managed solutions in production, we should also start talking about using some form of source control and devops. Whether it’s Azure DevOps, or whether it’s something else, but we need a way to store solution sources somewhere just in case we have to re-build our dev environment, and, also, we need to ensure we don’t deploy something in prod “manually”, but we always do it from the source control.

Which is great, but, if you ever looked at setting up devops for PowerApps, you might realize that it is not such a simple exercise. Do you have the expertise (or developers/consultants) for that?

So, at this point, do you still want to use managed solutions?

If you are not that certain anymore, I think that’s exactly what I wanted to achieve, but, if you just said “No”, maybe that’s taking it too far.

All the latest enhancements in that area (solution history, for example) are tied into the concept of managed solutions. The product team has certainly been pushing managed solutions lately. I still think there is a missing piece that will make the choice obvious as far as internal development goes, but, of course, as far as ISV-s are concerned managed is exactly what they need. Besides, you might be able to put “delete” feature to good use even for internal development, and you may be ok with having to figure out proper source control, and, possibly, ci/cd for your solutions. Hence, “managed” might just be the best way to go in those scenarios.

Choosing the right Model-Driven App Supporting Technology

 

While switching between Visual Studio, where I was adding yet another plugin, and Microsoft Flow designer, where I had to tweak a flow earlier this week, I found myself going over another episode of self-assessment which, essentially, was all about trying to answer this question: “why am I using all those different technologies on a single project”?

So, I figured why don’t I dig into it a little more? For now, let’s not even think about stand-alone Canvas Apps – I am mostly working with model-driven applications, so, if you look at the diagram below, it might be a good enough approximation of what model-driven application development looks like today. Although, you will notice that I did not mention such products as Logic Apps, Azure Functions, Forms Pro, etc. This is because those are not PowerPlatform technologies, and they all fall into the “Custom or Third-Party” boxes on this diagram.

image

On a high level, we can put all those “supporting technologies” into a few buckets (I used “print forms”, “integrations”, “automation”, “reporting”, “external access” on the diagram above); however, there will usually be a few technologies within each bucket, and, at the moment, I would have a hard time identifying a single technology that would be superior to the others in that same bucket. Maybe with the exception of external access where Power Platform can offer only one solution, which is the Portals. Of course we can always develop something custom, so “custom or third-party” would normally be present in each bucket.

So, just to have an example of how the thinking might go:

  • Flows are the latest and greatest, of course, but there are no synchronous Flows
  • Workflows will probably be deprecated
  • Plugins are going to stay, so might work well for synchronous
  • However, we need developers to create plugins

 

I could probably go on and on with this “yes but” pattern – instead, I figured I’d create a few quick comparison tables (one per bucket), so here is what I ended up with – you might find it useful, too.

1. Print forms

image

It seems Word Templates would certainly be better IF we could use unlimited relationships and, possibly, subreports. However, since we can’t, and since, at some point, we will almost inevitably need to add data to the report that can only be reached through a relationship Word Templates don’t support, that would, sooner or later, become a problem. So, even if we start with Word Templates only, at some point we may still end up adding an SSRS report.

2. Integrations and UI

image

Again, when comparing, I tried to make sure that each “technology” has something unique about it. What is “ad-hoc development”? Basically, it’s the ability to adjust the script right in the application without having to first compile the TypeScript and re-deploy the whole thing (which is one of the differences between web resources and PCF).

3. Automation

image

So, as far as automation goes, Microsoft Flow is the latest and greatest, except that it does not support synchronous events. And, of course, you might not like the idea of having custom code, but, quite often, it’s the only efficient way to achieve something. Classic workflows are not quite future-proof, keeping in mind that Flow has been slowly replacing them. Web hooks require URLs, so those may have to be updated once the solution has been deployed. And, also, web hooks are a bit special in the sense that they still have to be developed; it’s just that we don’t care about how and where they are developed (as long as they do exist) while on the PowerApps side.

4. Reporting

image

Essentially, SSRS is a very capable tool in many cases, and we don’t need a separate license to use it. However, compatible dev tools are, usually, a few versions behind. Besides, you would need a “developer” to create SSRS reports. None of the other tools are solution-aware yet. Excel templates have limitations on what we can do with the relationships. Power BI requires a separate license. Power Query is not “integrated” with PowerApps.

5. External access

This one is really simple since, out-of-the-box, there is nothing to choose from. It’s either the Portals or it has to be something “external”.

So, really, in my example with the plugins and Flows, it’s not that I want to make the project more complicated by utilizing plugins and Flows together. In fact, I’m trying to utilize Flow where possible to avoid “coding”, but, since Flows are always asynchronous, there are situations where they just don’t cover the requirements. And, as you can see from the above tables, it’s pretty much the same situation with everything else. It’s an interesting world we live in.

PCF controls in Canvas Apps and why using Xrm namespace might not be a good idea

 

I wanted to see how a custom PCF control would work in a canvas app, and, as it turned out, it just works if you make sure this experimental feature has been enabled in the environment:

https://powerapps.microsoft.com/en-us/blog/announcing-experimental-release-of-the-powerapps-component-framework-for-canvas-apps/

You also need to enable components for the app:

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/component-framework-for-canvas-apps

So, since I tried it for the Validated Input control, here is how it looks in the canvas app:

image

image

image

Here is what I realized, though.

If you tried creating PCF components before, you would have probably noticed that you can use WebAPI to run certain queries (CRUD + retrieveMultiple):

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/reference/webapi

Often, those 5 methods provided in that namespace are not enough – for example, we can’t associate N:N records using WebAPI.
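Just for illustration, this is more or less what a retrieveMultiple call looks like from inside a control (a minimal sketch – “account” and “name” are just placeholder entity/attribute names, and the method would live in your control class):

public loadAccounts(context: ComponentFramework.Context<IInputs>): void {
    // Query a few records through the PCF WebAPI and log one attribute
    context.webAPI
        .retrieveMultipleRecords("account", "?$select=name&$top=10")
        .then(result => {
            result.entities.forEach(e => console.log(e["name"]));
        })
        .catch(error => console.error(error.message));
}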

So, when implementing a different control earlier, I sort of cheated and assumed that, since my control would always be running in a model-driven application entity form, there would always be an Xrm object. Which is why I was able to do this in the index.ts file:

declare var Xrm: any;

And, then, I could use it this way:

var url: string = (<any>Xrm).Utility.getGlobalContext().getClientUrl();

Mind you, there is probably a different method of getting the url, so, technically, I might not have to use Xrm in this particular scenario.

Still, that example above actually shows how easy it is to get access to the Xrm namespace from a PCF component, so it might be tempting. Compare that to the web resources where you have to go to the parent window to find that Xrm object, yet you may have to wait till Xrm gets initialized.

However, getting back to the Canvas Apps. There is no Xrm here… who would have thought, huh?

WebAPI might become available, even though it’s not there yet. Keep in mind it’s still an experimental preview, so things can change completely. However, there might be no reason at all for the Canvas Apps to surface the complete Xrm namespace, which means that, if you decide to use Xrm in your PCF component, you will only be able to utilize such a component in the model-driven applications. It seems almost obvious, but, somehow, I only realized it once I started to look at my control in the Canvas App.
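If nothing else, it probably makes sense to at least guard against Xrm not being there, so the control degrades gracefully instead of throwing an error – something along these lines (just a sketch):

declare var Xrm: any;

// Returns the org url when the control is hosted on a model-driven form, undefined otherwise
function getClientUrlIfAvailable(): string | undefined {
    if (typeof Xrm !== "undefined" && Xrm.Utility) {
        return Xrm.Utility.getGlobalContext().getClientUrl();
    }
    return undefined; // no Xrm – most likely a canvas app (or the test harness)
}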

If there are errors you can’t easily see in the Flow designer, look at the complex actions – the errors might be hidden inside

Having deployed my Flow in the Test environment earlier today, I quickly realized it was not working. Turned out the Flow designer was complaining about the connections in the Flow with the following error:

Some of the connections are not authorized yet. If you just created a workflow from a template, please add the authorized connections to your workflow before saving. 

image

I fixed the connection and clicked “save”, hoping to see the error gone, but:

image

Huh?

It seems the Flow designer is a little quirky when it comes to highlighting where the errors are. There was yet another connection error in the “Apply to each”, but I could not really see it till I expanded that step:

image

Once that other connection error had been fixed, the Flow came back to life. By the way, it did not take me long to run into exactly the same situation with yet another Flow, but, this time, the error was hidden inside a condition action.

Upcoming API call limits: a few things to keep in mind

 

Microsoft is introducing daily API call limits in the October licensing:

image

https://docs.microsoft.com/en-ca/power-platform/admin/api-request-limits-allocations

This will affect all user accounts, all types of licenses, all types of applications/flows. Even non-interactive/application/admin user accounts will be affected.

This may give you a bit of a panic attack when you are reading it the first time, but there are a few things to keep in mind:

1. There will be a grace period until December 31, 2019

You also have an option to extend that period till October 1, 2020. Don’t forget to request an extension:

image

This grace period applies to the license plans, too.


2. From the WebAPI perspective, a batch request will still be counted as one API call

Sure, this might not be very helpful in the scenarios where we have no control over how the requests are submitted (individually or via a batch), but, when looking at data integrations/data migrations, we should probably start using batches more aggressively, especially since such SSIS connectors as KingswaySoft or CozyRoc allow you to specify the batch size.

Although, you may also have to be careful with various text lookups, since, technically, they would likely break the batch. From that standpoint, the “local DB” pattern I described a couple of years ago might be helpful here as well – instead of using a lookup, we might load everything into the local DB using a batch operation, then do a lookup locally, then push the data to the target without having to do lookups there:

https://www.itaintboring.com/dynamics-crm/data-migration-how-do-you-usually-do-it/
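Just to illustrate what “one API call” means here, below is a rough sketch of a WebAPI $batch request – a single POST to the $batch endpoint that wraps a number of individual requests inside. The org URL, the token, and the query are all placeholders:

// A minimal sketch of a CDS WebAPI $batch request (the whole thing counts as one API call)
const orgUrl = "https://yourorg.crm.dynamics.com"; // placeholder
const boundary = "batch_itaintboring";

const batchBody =
    "--" + boundary + "\r\n" +
    "Content-Type: application/http\r\n" +
    "Content-Transfer-Encoding: binary\r\n\r\n" +
    "GET " + orgUrl + "/api/data/v9.1/accounts?$select=name&$top=5 HTTP/1.1\r\n" +
    "Accept: application/json\r\n\r\n" +
    "--" + boundary + "--\r\n";

fetch(orgUrl + "/api/data/v9.1/$batch", {
    method: "POST",
    headers: {
        "Authorization": "Bearer <access token>", // placeholder
        "Content-Type": "multipart/mixed;boundary=" + boundary,
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0"
    },
    body: batchBody
}).then(response => response.text()).then(console.log);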

3. Those limits are not just for the CDS API calls

https://docs.microsoft.com/en-us/power-platform/admin/api-request-limits-allocations#what-is-a-microsoft-power-platform-request

image