Monthly Archives: September 2019

Using PowerShell to export/import solutions, data, and Word Templates


I blogged about it before, but now that the ItAintBoring.CDS.PowerShell library has been fine-tuned, it might be time to do it again.

There are three parts to this post:

  • Introduction
  • ItAintBoring.CDS.PowerShell installation instructions
  • ItAintBoring.CDS.PowerShell usage example



There are different ways we can set up CI/CD – we can use Azure DevOps, we can use PowerApps Build Tools, we can use various community tools, or we can even create our own PowerShell library.

Each option has its pros and cons, but this post is all about that last one: using our own “PowerShell library”.

What are the main benefits? Well, it’s the amount of control we have over what we can do. You might say that “no code” is always better, but I would argue that, when you are setting up ci/cd, “no code” will probably be the least of your concerns.

Anyway, in this case we can use the library to export/import solutions and, also, to export/import configuration data – including, as of v1.0.1, Word templates.


1. Create a new folder and call it “CDS Deployment Scripts” (although you can give it a different name if you prefer)

2. Create a new ps1 file in that folder with the content below

function Get-NuGetPackages{

    # Assumption: the original URL was lost in publishing; this is the standard
    # download location for the latest nuget.exe
    $sourceNugetExe = "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe"
    $targetNugetExe = ".\nuget.exe"
    Remove-Item .\Tools -Force -Recurse -ErrorAction Ignore
    Invoke-WebRequest $sourceNugetExe -OutFile $targetNugetExe
    Set-Alias nuget $targetNugetExe -Scope Global -Verbose

    # Download the ItAintBoring.CDS.PowerShell package into the current folder
    ./nuget install ItAintBoring.CDS.PowerShell -O .
}

Get-NuGetPackages

Here is how it should all look:


3. Run the file above with PowerShell – you will have the scripts downloaded



4. Update the system PATH variable so it includes the path to deployment.psm1




For the remaining part of this post, you will need to download sources from Git (just clone the repo if you prefer):

That repository includes all script sources, but, also, a sample project.

1. In the file explorer, navigate to the Setup/Projects/DeploymentDemo folder


2. Update settings.ps1 with the correct environment URLs

Open settings.ps1 and update your connection settings for the source/destination. If you are just trying the demo, don’t worry about the source, but make sure to update the destination.


3. IF YOU HAD A SOURCE environment

Which you don’t, at least while working on this example. But the idea is that, if you start using the script in your ci/cd, you would have the source.

So, if you had a source environment, you would now need to update Export.ps1. The purpose of that file is to export solutions and data from the source:


You can see how, for the data, it’s using FetchXml, which also works for the Word Document Templates.
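For illustration, here is a sketch of the kind of FetchXml such an export might use for Word templates, expressed as a TypeScript string. The “documenttemplate” entity is part of the standard CDS schema, but the exact query used in Export.ps1 may differ:

```typescript
// A sketch of a FetchXml query for Word document templates.
// "documenttemplate" is the standard CDS entity that stores Word/Excel
// templates; the attribute list here is illustrative, and the actual
// query in Export.ps1 may look different.
const wordTemplateFetch: string = `
<fetch>
  <entity name="documenttemplate">
    <attribute name="name" />
    <attribute name="documenttype" />
    <attribute name="content" />
  </entity>
</fetch>`.trim();
```

Since document templates are just records of that entity, the same export/import pipeline that handles configuration data can handle them as well.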

4. Run import_solutions.ps1 to deploy the solutions to the destination environment


5. Run import_data.ps1 to deploy data (including Word Templates) for the new entities to the destination environment



As a result of those last two steps above, you will have the solution deployed and some data uploaded:




Most importantly, it literally takes only a few clicks once those export/import files have been prepared.

And what about source control? Well, it can still be there if you want. Once the solution file has been exported, you can run the solution packager to unpack it, then use Git to push the sources to Azure DevOps or GitHub. Before running the import, you would get those sources from source control, package them with the solution packager, and run the import scripts.

Test harness for PCF controls – we can also use “start npm start”

While developing a PCF control, you might want to start the test harness.

Although, if you are, like me, not doing PCF development daily, let me clarify what the “test harness” is. Essentially, it’s what you can/should use to test your PCF control without having to deploy it into the CDS environment. Once it’s started, you’ll be able to test the behavior of your PCF control outside of the CDS in a special web page – you’ll be able to modify property values, to see how your control responds to those changes, debug your code, etc.

If you need more details, you might want to open the docs and have a look:

Either way, in order to start the test harness, you would use the following command:

npm start
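Under the hood, npm start just runs whatever the project’s package.json defines as its start script; in a PCF project scaffolded with the Power Apps CLI it typically looks something like this (a sketch – the exact entries in your project may vary):

```json
{
  "scripts": {
    "build": "pcf-scripts build",
    "start": "pcf-scripts start"
  }
}
```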

This is fast and convenient, but this command will lock your terminal session. For example, if you are doing PCF development in Visual Studio Code, here is what you will see in the terminal window:


You won’t be able to use that window for any additional commands until you have terminated the test harness. Of course you could open yet another terminal window, but it just occurred to me that we could also type

start npm start

Those two “start”-s surrounding “npm” have completely different meanings: the first one is the Windows start command, which launches a new command prompt window, and “npm start” will run in that additional window:


The result will be the same – you will have the harness started, but your original terminal session will continue to work, and you won’t need to open another one.

Managed solutions – let’s debunk some misconceptions

I have a strange attitude towards managed solutions. In general, I don’t always see great benefits in using them. On the other hand, this is what Microsoft recommends for production environments, so should I even be bringing this up?

This is why I probably don’t mind it either way now (managed or unmanaged); although, if somebody decides to go with unmanaged, I will be happy to remind them about this:


Still, it’s good to have a clear recommendation, but it would be even better to understand the “why” behind it. Of course, it would be much easier if we could describe the difference between managed/unmanaged in terms of the benefits each of those solution types brings.

For example, consider Debug and Release builds of your C# code in the development world. The Debug version will have some extra debugging information, but it could also be slower/bigger. The Release version might be somewhat faster and smaller, so it’s better for production. However, it’s not quite the same as managed/unmanaged in PowerApps, since we can’t really say that managed is faster or slower, or that there is some extra information in the unmanaged version that will help with debugging, etc.

I am not sure I can do it that clearly for “managed” vs “unmanaged”, but let’s look at a few benefits of the managed solutions which are, sometimes, misunderstood.

1. There is no ability to edit solution components directly

The key here is “directly”. The screenshot below explains it very clearly:


In other words, you would not be able to lock solution components just by starting to use a managed solution. You’d have to disable customizations of those components (and, yes, those settings will be imported as part of the solution). Without that, you might still go to the default solution and modify the settings.

However, locking those components makes your solution much less extendable. This is probably one of the reasons why Microsoft is not overusing that technique, and we can still happily customize contacts, accounts, and other “managed” entities:


Despite the fact that Account is managed (see screenshot above), we can still add forms, update fields, create new fields, etc.

So, do managed solutions help with “component locking”? From my standpoint, the answer is “sometimes”.

2. Ability to delete solution components is always great

This can be as risky as it is useful. It’s true that, with unmanaged solutions, we can add components but we can’t delete them as part of solution removal (we can still delete them manually or through the API). Managed solutions can be of great help there. However, even with managed, there can be some planning involved.

Would you delete an attribute just like that or would you need to ensure data retention? What if an attribute is removed from the dev environment by a developer who did not think of the data retention, and, then, that solution is deployed in production? The attribute, and the data, will be gone forever.

Is it good or bad? Again, it depends. You may not want to automate certain things to that extent.


3. Managed can’t be exported, and that’s ok

For a long time, this has been my main concern. If the dev environment gets broken or becomes out of sync with production, how do we restore it?

This is where, I think, once we start talking about using managed solutions in production, we should also start talking about using some form of source control and DevOps. Whether it’s Azure DevOps or something else, we need a way to store solution sources somewhere in case we have to re-build our dev environment, and, also, we need to ensure we never deploy to prod “manually” – we always do it from source control.

Which is great, but, if you have ever looked at setting up DevOps for PowerApps, you might realize that it is not such a simple exercise. Do you have the expertise (or the developers/consultants) for that?

So, at this point, do you still want to use managed solutions?

If you are not that certain anymore, I think that’s exactly what I wanted to achieve, but, if you just said “No”, maybe that’s taking it too far.

All the latest enhancements in that area (solution history, for example) are tied to the concept of managed solutions, and the product team has certainly been pushing managed solutions lately. I still think there is a missing piece that would make the choice obvious as far as internal development goes; as far as ISVs are concerned, though, managed is exactly what they need. Besides, you might be able to put the “delete” feature to good use even for internal development, and you may be ok with having to figure out proper source control and, possibly, CI/CD for your solutions. Hence, “managed” might just be the best way to go in those scenarios.

Choosing the right Model-Driven App Supporting Technology


While switching between Visual Studio, where I was adding yet another plugin, and Microsoft Flow designer, where I had to tweak a flow earlier this week, I found myself going over another episode of self-assessment which, essentially, was all about trying to answer this question: “why am I using all those different technologies on a single project?”

So, I figured, why don’t I dig into it a little more? For now, let’s not even think about stand-alone Canvas Apps – I am mostly working with model-driven applications, so, if you look at the diagram below, it might be a good enough approximation of what model-driven application development looks like today. Although, you will notice that I did not mention such products as Logic Apps, Azure Functions, Forms Pro, etc. That’s because those are not Power Platform technologies, and they all fall into the “Custom or Third-Party” boxes on this diagram.


On a high level, we can put all those “supporting technologies” into a few buckets (I used “print forms”, “integrations”, “automation”, “reporting”, “external access” on the diagram above); however, there will usually be a few technologies within each bucket, and, at the moment, I would have a hard time identifying a single technology that would be superior to the others in that same bucket. Maybe with the exception of external access where Power Platform can offer only one solution, which is the Portals. Of course we can always develop something custom, so “custom or third-party” would normally be present in each bucket.

So, just to have an example of how the thinking might go:

  • Flows are the latest and greatest, of course, but there are no synchronous Flows
  • Workflows will probably be deprecated
  • Plugins are going to stay, so might work well for synchronous
  • However, we need developers to create plugins


I could probably go on and on with this “yes but” pattern – instead, I figured I’d create a few quick comparison tables (one per bucket), so here is what I ended up with – you might find it useful, too.

1. Print forms


It seems Word Templates would certainly be better IF we could use unlimited relationships and, possibly, subreports. However, since we can’t, and since, at some point, we will almost inevitably need to add data to the report that can only be reached through a relationship Word Templates don’t support, that would, sooner or later, become a problem. So, even if we start with Word Templates only, at some point we may still end up adding an SSRS report.

2. Integrations and UI


Again, when comparing, I tried to make sure that each “technology” has something unique about it. What is “ad-hoc development”? Basically, it’s the ability to adjust the script right in the application without having to first compile the TypeScript and re-deploy the whole thing (which is one of the differences between web resources and PCF).

3. Automation


So, as far as automation goes, Microsoft Flow is the latest and greatest, except that it does not support synchronous events. And, of course, you might not like the idea of having custom code, but, quite often, it’s the only efficient way to achieve something. Classic workflows are not quite future-proof, keeping in mind that Flow has been slowly replacing them. Web hooks require URLs, so those may have to be updated once the solution has been deployed. Also, web hooks are a bit special in the sense that they still have to be developed – it’s just that, on the PowerApps side, we don’t care how and where they are developed (as long as they exist).

4. Reporting


Essentially, SSRS is a very capable tool in many cases, and we don’t need a separate license to use it. However, compatible dev tools are usually a few versions behind. Besides, you would need a “developer” to create SSRS reports. None of the other tools are solution-aware yet. Excel templates have limitations on what we can do with relationships. Power BI requires a separate license. Power Query is not “integrated” with PowerApps.

5. External access

This one is really simple since, out-of-the-box, there is nothing to choose from. It’s either the Portals or it has to be something “external”.

So, really, in my example with the plugins and Flows, it’s not that I want to make the project more complicated by utilizing plugins and Flows together. In fact, I’m trying to utilize Flow where possible to avoid “coding”, but, since Flows are always asynchronous, there are situations where they just don’t cover the requirements. And, as you can see from the tables above, it’s pretty much the same situation with everything else. It’s an interesting world we live in.

PCF controls in Canvas Apps and why using Xrm namespace might not be a good idea


I wanted to see how a custom PCF control would work in a canvas app, and, as it turned out, it just works if you make sure this experimental feature has been enabled in the environment:

You also need to enable components for the app:

So, since I tried it with the Validated Input control, here is how it looks in the canvas app:




Here is what I realized, though.

If you tried creating PCF components before, you would have probably noticed that you can use WebAPI to run certain queries (CRUD + retrieveMultiple):

Often, those five methods provided in that namespace are not enough – for example, we can’t associate records in an N:N relationship using that WebAPI.

So, when implementing a different control earlier, I sort of cheated and assumed that, since my control would always be running in a model-driven application entity form, there would always be an Xrm object. Which is why I was able to do this in the index.ts file:

declare var Xrm: any;

And, then, I could use it this way:

var url: string = (<any>Xrm).Utility.getGlobalContext().getClientUrl();

Mind you, there is probably a different method of getting the URL, so, technically, I might not have to use Xrm in this particular scenario.

Still, the example above shows how easy it is to get access to the Xrm namespace from a PCF component, so it might be tempting. Compare that to the web resources, where you have to go to the parent window to find that Xrm object, and you may have to wait till Xrm gets initialized.
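To make the N:N limitation above more concrete, here is a rough sketch of how one might associate two records in an N:N relationship by composing a raw WebAPI request with the client URL. The entity set names are standard, but the relationship name and record ids are made up for illustration:

```typescript
// Hypothetical sketch: associating two records in an N:N relationship via a
// raw WebAPI call, since the PCF-provided WebAPI does not expose "associate".
// The relationship name and ids below are illustrative only.
function buildAssociateRequest(
  clientUrl: string,
  entitySet: string,        // e.g. "accounts"
  recordId: string,
  relationship: string,     // N:N relationship schema name (assumed)
  relatedEntitySet: string, // e.g. "contacts"
  relatedId: string
): { url: string; body: string } {
  return {
    // POSTing to .../<entitySet>(<id>)/<relationship>/$ref creates the link
    url: `${clientUrl}/api/data/v9.1/${entitySet}(${recordId})/${relationship}/$ref`,
    body: JSON.stringify({
      "@odata.id": `${clientUrl}/api/data/v9.1/${relatedEntitySet}(${relatedId})`
    })
  };
}

const req = buildAssociateRequest(
  "https://org.crm.dynamics.com",
  "accounts",
  "00000000-0000-0000-0000-000000000001",
  "new_account_contact",
  "contacts",
  "00000000-0000-0000-0000-000000000002"
);
```

The request itself would then be sent with something like fetch(req.url, { method: "POST", body: req.body, … }) – which, of course, is exactly the kind of code that assumes a model-driven context for getting the client URL.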

However, getting back to the Canvas Apps. There is no Xrm here… who would have thought, huh?

WebAPI might become available there, even though it’s not there yet. Keep in mind it’s still an experimental preview, so things can change completely. However, there might be no reason at all for the Canvas Apps to surface the complete Xrm namespace, which means that, if you decide to use Xrm in your PCF component, you will only be able to utilize such a component in the model-driven applications. It seems almost obvious, but, somehow, I only realized it once I started to look at my control in the Canvas App.

If there are errors you can’t easily see in the Flow designer, look at the complex actions – the errors might be hidden inside

Having deployed my Flow to the Test environment earlier today, I quickly realized it was not working. It turned out the Flow designer was complaining about the connections in the Flow with the following error:

Some of the connections are not authorized yet. If you just created a workflow from a template, please add the authorized connections to your workflow before saving. 


I’ve fixed the connection and clicked “save” hoping to see the error gone, but:



It seems the Flow designer is a little quirky when it comes to highlighting where the errors are. There was yet another connection error in the “Apply to each”, but I could not really see it till I expanded that step:


Once that other connection error was fixed, the Flow came back to life. By the way, it did not take me long to run into exactly the same situation with yet another Flow, but, this time, the error was hidden inside a condition action.

Upcoming API call limits: a few things to keep in mind


Microsoft is introducing daily API call limits in the October licensing:


This will affect all user accounts, all types of licenses, all types of applications/flows. Even non-interactive/application/admin user accounts will be affected.

This may give you a bit of a panic attack when you are reading it the first time, but there are a few things to keep in mind:

1. There will be a grace period until December 31, 2019

You also have an option to extend that period till October 1, 2020. Don’t forget to request an extension:


This grace period applies to the license plans, too.

2. From the WebAPI perspective, a batch request will still be counted as one API call

Note: as of December 2019, this is incorrect (a note added in June 2020). For the details, check this out:

Sure, this might not be very helpful in scenarios where we have no control over how the requests are submitted (individually or in a batch), but, when looking at data integrations/data migrations, we should probably start using batches more aggressively, especially since SSIS connectors such as KingswaySoft or CozyRoc allow you to specify the batch size.
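As a rough sketch of what a batch actually is on the wire, here is how a WebAPI $batch payload bundling several creates into a single HTTP request might be composed. The boundary names and the sample “contacts” records are arbitrary; the payload shape follows the standard OData batch conventions:

```typescript
// Hypothetical sketch: composing an OData $batch body that wraps several
// create requests in one change set, so they count as a single HTTP call.
// Boundary names ("batch_AAA", "changeset_BBB") and record data are arbitrary.
function buildBatchBody(entitySet: string, records: object[]): string {
  const batchId = "batch_AAA";
  const changesetId = "changeset_BBB";

  // Each record becomes one POST inside the change set
  const changesetParts = records
    .map((record, i) =>
      [
        `--${changesetId}`,
        "Content-Type: application/http",
        "Content-Transfer-Encoding: binary",
        `Content-ID: ${i + 1}`,
        "",
        `POST /api/data/v9.1/${entitySet} HTTP/1.1`,
        "Content-Type: application/json",
        "",
        JSON.stringify(record)
      ].join("\r\n")
    )
    .join("\r\n");

  return [
    `--${batchId}`,
    `Content-Type: multipart/mixed;boundary=${changesetId}`,
    "",
    changesetParts,
    `--${changesetId}--`,
    `--${batchId}--`,
    ""
  ].join("\r\n");
}

const batchBody = buildBatchBody("contacts", [
  { lastname: "Smith" },
  { lastname: "Jones" }
]);
```

The whole string would then be POSTed once to the /api/data/v9.1/$batch endpoint – two creates, one API call.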

Although, you may also have to be careful about various text lookups, since, technically, they would likely break the batch. From that standpoint, the “local DB” pattern I described a couple of years ago might be helpful here as well – instead of using a lookup, we might load everything into the local DB using a batch operation, do the lookup locally, then push data to the target without having to do lookups there:

3. Those limits are not just for the CDS API calls