Author Archives: Alex Shlega

Setting up nested security groups for CDS/Dynamics instances

This post is a proof of concept for the following scenario:

image

Wouldn't it be nice if we could use more than one security group so that, for example, we could have admins in one group and all the other users in another? This way, we might add that admin group to SharePoint so the admin users would become site owners/site collection admins there.

Unfortunately, nested security groups are not supported for CDS/Dynamics:

https://community.dynamics.com/crm/b/workandstudybook/archive/2018/05/16/gotcha-don-t-use-nested-security-group

Do we have any options? Well, just continuing on the PowerShell exploration path that I started earlier this month, we could use this kind of script to emulate nested groups:

Install-Module AzureAD
Import-Module AzureAD
Connect-AzureAD

$GroupStartWith = 'Root_CRM'

$ADGroups = Get-AzureADGroup -Filter "startswith(DisplayName, '$GroupStartWith')"

foreach ($ADGroup in $ADGroups) {
    $Members = Get-AzureADGroupMember -ObjectId $ADGroup.ObjectId

    #Delete all "user-members"
    foreach ($Member in $Members)
    {
        if($Member.ObjectType -eq "User")
        {
            Remove-AzureADGroupMember -ObjectId $ADGroup.ObjectId -MemberId $Member.ObjectId
        }
    }

    #Re-add all nested user-members
    foreach ($Member in $Members)
    {
        if($Member.ObjectType -eq "Group")
        {
            $NestedMembers = Get-AzureADGroupMember -ObjectId $Member.ObjectId
            foreach ($NestedMember in $NestedMembers)
            {
                if($NestedMember.ObjectType -eq "User")
                {
                    #Write-Host $NestedMember
                    Add-AzureADGroupMember -ObjectId $ADGroup.ObjectId -RefObjectId $NestedMember.ObjectId
                }
            }
        }
    }
}

Here is what the script above does:

– It connects to Azure AD

– It finds all groups whose names start with "Root_CRM" – that's just a quick naming convention I came up with for this example

– It then loops over all of those groups and does two things for each of them:

  • It removes all of the group's direct user members
  • It looks at the nested groups and adds their members to the parent group

As a result, I can have security groups configured exactly the way it’s shown on the diagram above since I’m not limited to having just one group anymore.

Of course it's PowerShell, so the script has to be started somehow. It might be doable with Azure Runbooks – for example, this script could be scheduled to run a few times per day to do the automated sync. It will have to be updated, though, so it does not ask for the credentials, which should be doable with Get-AutomationConnection.

The script can also likely be improved so it does not remove users which will be re-added right away anyway – that would avoid possible glitches where a user loses access to CDS/Dynamics for a moment while being re-added.

TypeScript from the c# developer standpoint

 

A lot of TypeScript tutorials are available online, so I was looking for one recently and, it seems, this one is really good IF you are familiar with some other language already:

https://www.tutorialspoint.com/typescript

So, while working through it, I noticed a few things which might strike you as somewhat unusual if you are more used to languages like C# (or even to javascript itself). I figured I'd put together a short list to use as a refresher for myself later; although, if you are new to TypeScript, you might find some of the items useful, too.

1. To declare a variable, we need to use <var_name>:<type_name> syntax

var x:number = 1

2. There is no need to use semicolons as long as we have one instruction per line

var x:number = 1
x = 2
x++

3. There are no separate numeric types

There is no int or double – all numbers are represented by the “number” type.
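So, for example, both of the variables below are simply of the "number" type (a quick illustration of my own):

var count:number = 42
var total:number = 42.5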

4. There are arrays and there are tuples

For an array, here is an example:

TS: var names:string[] = ["Mary","Tom","Jane","Andy"]
JS: var names = ["Mary","Tom","Jane","Andy"]

And here is an example for a tuple:

TS: var arr = ["Mary","Tom",1,2]
JS: var arr = ["Mary","Tom",1,2]

The difference between those two on the TS side is that arrays are strongly typed, so there will be type validation at compile time, while a tuple lets you mix values of different types. Once compiled into JS, though, both arrays and tuples look the same.
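By the way, TypeScript also lets you declare the element types of a tuple explicitly – this is just a quick sketch of my own, not from the tutorial, but, with an explicit tuple type, each position gets type-checked individually:

var pair:[string, number] = ["Tom", 1]
pair[0] = "Mary" //ok - position 0 is a string
pair[1] = 2      //ok - position 1 is a number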

5. Unions

This is a somewhat crazy concept which is basically saying that a variable can store values of more than one type, but those types have to be mentioned:

var val:string|number
val = 12
val = "This is a string"

Of course this means very little for the compiled javascript code, but it allows TypeScript to do compile-time type validations.

What's unusual about this (besides the fact that I have not seen this concept in the .NET world so far) is that "union" has a somewhat different meaning in other areas. For example, unions allow you to combine results from different queries in SQL, but those queries have to match each other on the "metadata" level (same columns). In the case of TypeScript, it's the "metadata" that becomes different through the use of unions.
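Just to illustrate how the compiler actually uses those union types (a small sketch of my own): before doing anything type-specific with such a variable, you have to narrow the type first, for example with typeof:

function describe(val:string|number):string {
    if (typeof val === "string") {
        return "A string of length " + val.length
    }
    return "A number: " + val.toFixed(2)
}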

6. Interfaces

In TypeScript, an interface is more than a contract to be implemented by a class later. An interface defines a type that you can use in variable declarations:

interface IPerson {
    firstName:string,
    lastName:string
}

var customer:IPerson = {
    firstName:"Tom",
    lastName:"Hanks"
}

In C#, you would first have to create a class that implements the interface. In TS, you can declare a variable of the "interface type" and implement that interface right there.

What I mean is that, in the example above, you can’t just omit one of the properties:

var customer:IPerson = {
    firstName:"Tom"
}

If you do, you'll get an error message from the compiler:

main.ts(6,5): error TS2322: Type '{ firstName: string; }' is not assignable to type 'IPerson'.
Property 'lastName' is missing in type '{ firstName: string; }'.


Also, there is a special construct for defining array interfaces – it seems I don't fully understand the "index" part at the moment, so I'll just leave it at this:

interface namelist {
    [index:number]:string
}
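For future reference, though, here is a small usage sketch (my own example, not from the tutorial): a variable of that indexable type can be assigned an array of strings, and anything read by a numeric index comes back typed as a string:

var list:namelist = ["Mary","Tom","Jane"]
var first:string = list[0]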

7. Somewhat special syntax when defining classes

  • When defining class methods, we don’t use “function” keyword in TS
  • When defining class properties, we don’t need to use “var” keyword
  • To define a constructor, we need to use “constructor” keyword
  • To reference a property in a function, we need to access that property through “this” object

 

class Car {
    //field
    engine:string;

    //constructor
    constructor(engine:string) {
        this.engine = engine
    }

    //function
    disp():void {
        console.log("Engine is  :   " + this.engine)
    }
}
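And, just to complete the picture, here is how that class could be used (the "V8" value below is simply a made-up example):

var myCar = new Car("V8")
myCar.disp() //prints "Engine is  :   V8"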

8. Type templates

There is a notion of type templates which basically means that, since every object in TS has to have a type, that type will be inferred from the variable declaration if you do it this way:

var person = {
    firstname: "Tom",
    lastname: "Hanks"
}

With this, you can no longer dynamically add another property to the person object:

person.age = 1

If you try, you'll get an error:
main.ts(7,8): error TS2339: Property 'age' does not exist on type '{ firstname: string; lastname: string; }'

Instead, you have to add that age property to the original declaration somehow:

var person = {
    firstname: "Tom",
    lastname: "Hanks",
    age: NaN
}

person.age = 1
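Another option (not from the tutorial – just something that also works) is to describe the object with an interface that has an optional property, so there is no need for the NaN placeholder:

interface Person {
    firstname: string;
    lastname: string;
    age?: number;
}

var person:Person = {
    firstname: "Tom",
    lastname: "Hanks"
}

person.age = 1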

9. Prototypes vs objects

When you have a class, all class methods will be defined in the prototype:

TS:

class Car {
    disp():void {
        console.log("Ford")
    }
}

Compiled JS:

var Car = /** @class */ (function () {
    function Car() {
    }
    Car.prototype.disp = function () {
        console.log("Ford");
    };
    return Car;
}());

And, then, you can actually change what “disp” means for a specific object of the class:

var a:Car = new Car()
a.disp = () => console.log("Nissan")

Which compiles into the following JS:

var a = new Car();
a.disp = function () { return console.log("Nissan"); };

Well, that’s some strange stuff…

10. Namespaces

If you want to use a namespace from a different ts file, you have to reference the file:

/// <reference path = "SomeFileName.ts" />

There is a special “export” keyword which must be added to anything you want to be visible outside of the namespace.

Interestingly, you can actually define the same namespace more than once in the same file:

namespace Test {
    export interface Displayable {
        disp():void;
    }
}

namespace Test {
    export class Car implements Displayable {
        disp():void {
            console.log("Ford")
        }
    }
}

var a:Test.Car = new Test.Car()
a.disp()

But, unless you’ve added those “export” keywords to the interface above and to the class, it’s not going to compile.

11. Modules

There is no equivalent concept in C#, it seems, so, for now, I'd rather just put a link to the page which seems to explain the concept… I'm still working through it.

https://www.typescriptlang.org/docs/handbook/modules.html
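In the meantime, here is a minimal sketch of what a module might look like (the Validator.ts and app.ts file names are made up for illustration): anything marked with "export" in one file can be pulled into another file with an "import" statement, and no /// <reference> directive is required:

// Validator.ts
export class Validator {
    isPositive(value:number):boolean {
        return value > 0
    }
}

// app.ts
import { Validator } from "./Validator"

var v = new Validator()
console.log(v.isPositive(42))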

12. Ambients

Essentially, it’s similar to adding a dll reference to your .NET project so that

  • Typescript compiler could do compile-time validations
  • You would not have to rewire existing javascript code in TS

Ambient declarations are done through "d.ts" files, and those files are added to the TS files using the "reference" directive:

/// <reference path = "Calc.d.ts" />

However, d.ts files themselves do not contain any actual implementations – there are only type definitions there:

declare module TutorialPoint {
    export class Calc {
        doSum(limit:number) : number;
    }
}

When it comes to actually adding the implementation of the doSum function to your final HTML, you have to do it using the script tag:

<script src = "doSumSource.js"></script>
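On the TypeScript side, the ambient class can then be used as if it were regular TypeScript code (a quick sketch reusing the names from the declaration above):

/// <reference path = "Calc.d.ts" />

var calc = new TutorialPoint.Calc()
console.log(calc.doSum(10))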

Beyond the PowerApps Component Framework

The more I’m looking into the PowerApps Component Framework, the more obvious it becomes that the era of “do-it-all”-ers in Dynamics/PowerApps might be coming to an end.

Although, it’s not just the PCF that’s making me say this.

I turned to Dynamics some time ago since I did not want to stick to pure development – I liked messing with the code every now and then, but I could hardly imagine myself spending whole days coding. That was back in 2011 when this kind of skillset worked great for a Dynamics Consultant since what was missing out of the box I could always extend with Javascript/HTML/Plugins. And I did not have to be a full-stack developer familiar with the latest frameworks, since, really, all I needed was basic javascript skills, some C# development skills, probably SQL… but nothing too advanced.

We can still do it today in the model-driven PowerApps, but, it seems, the world is changing.

It's been a while since the term Citizen Developer was introduced, and this is really what we call a developer who is, well, not a developer in the regular sense, but who is still doing "development" using more high-level tools:

https://www.mobile-mentor.com/blog/citizen-developer-powerapps/

For example, there can be Flow developers, and there can be Canvas Apps developers. Interestingly, those tools are not that straightforward, so somebody just starting to work with them may need to go through quite a bit of learning.

On the other hand, PowerApps Component Framework hardly belongs to the realm of the Citizen Developer – instead, it’s meant to be utilized by the professional developers:

image

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/overview

And it's not just wording (although, of course, one might argue that plugins were always meant for the professional developers as well). If you look at the sample code of PCF components, you'll see something like this:

image
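Roughly, the skeleton of a PCF component looks like this in TypeScript – this is a sketch based on the standard template rather than the exact sample from the docs (IInputs/IOutputs come from the generated manifest typings):

import { IInputs, IOutputs } from "./generated/ManifestTypes";

export class SampleControl implements ComponentFramework.StandardControl<IInputs, IOutputs> {
    private container: HTMLDivElement;

    // Called once when the control is initialized
    public init(context: ComponentFramework.Context<IInputs>, notifyOutputChanged: () => void,
        state: ComponentFramework.Dictionary, container: HTMLDivElement): void {
        this.container = container;
        // HTML elements have to be created from code
        const label = document.createElement("div");
        label.innerText = "Hello from PCF";
        this.container.appendChild(label);
    }

    // Called when any of the bound properties change
    public updateView(context: ComponentFramework.Context<IInputs>): void {
    }

    // Returns the control's outputs back to the framework
    public getOutputs(): IOutputs {
        return {};
    }

    // Cleanup
    public destroy(): void {
    }
}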

So, unlike with a web resource, there are a few more things to consider:

  • Typescript
  • The framework itself
  • The fact that you need to create HTML elements from code

 

Now, to use TypeScript (unless you are already familiar with it), you'll probably need to catch up first; you'll also need to learn what NodeJs is, what NPM is, how to use them, etc.

Compare that to classic web resource development, where all you need to know is HTML, CSS, and Javascript. Even though I think web resources are not going anywhere yet, since things like javascript-based validation/visibility rules/etc do not belong to the PCF, the difference between PCF and web resources is that somebody working with the PCF is supposed to be a "professional developer" rather than just an opportunistic consultant utilizing web resources when and if it's appropriate.

To start with, you may need to have all those tools configured on your laptop to use them (whereas, with the web resources, we could just paste javascript to the editor in Dynamics if we wanted to).

But that’s just one example of where the line between Citizen Developers and Professional Developers is becoming more clear. There are other examples, too:

  • Do you know what CI/CD is and how to set it up?
  • Are you familiar with Azure DevOps?
  • Do you know how to use Git?
  • Can you use the solution packager to parse a solution so you could put solution components in source control?
  • Are you familiar with PowerShell scripting?
  • Do you know how to write plugins?
  • Are you familiar with TypeScript? How about Angular? NodeJs?
  • Can you explain what solution layering means and how it works?

 

On any relatively complex implementation, those are the skills you may need to have as a model-driven PowerApps developer.

Although, as a Citizen Developer, you might not even need to bother learning those things.

And, as a Functional Consultant, you might need to be aware of what can be done through either sort of development, but you have your own toys to play with – think of all the configuration options, security model, licensing, different first-party applications.

A few years ago Microsoft began to introduce "Code-Less" features such as Business Rules, Microsoft Flow, Canvas Apps… and it almost started to look like the days of real developers in Dynamics were numbered.

Then there was OData, WebAPI, Liquid for Dynamics Portals… Now there is PCF which is targeting professional developers from the start. Add plugins to this mix, and, suddenly, the outlook for developers starts looking much better.

However, what it all means is that every “group” is getting their own tools, and, so, they have to spend time learning those tools and frameworks. As that is happening, former Dynamics consultants “with developer skills” may have to finally choose sides and start specializing, since there is only so much one can learn to use efficiently in the everyday work. Personally, I’ll probably try to stick to the “do it all” approach for a little longer, but I’m curious how it’s all going to play out. We’ll see…

 

Error monitoring for the solution-aware Flows

 

Some flows are solution-aware, and some flows are not:

https://docs.microsoft.com/en-us/flow/overview-solution-flows

It turned out the difference between those two is not just that solution-aware flows are portable – somehow, it goes deeper.

Just a few days ago I wrote a post where I was trying to summarize error monitoring options for the Flows:

https://www.itaintboring.com/dynamics/microsoft-flow-monitoring/

It was not working out all that well there, since the main problem I was having is that, for the Flows I was looking at, I could not get the history of Flow runs unless those Flows were shared with me.

However, that’s only a problem for the non-solution aware Flows.

If a Flow is created in a solution, things get much better.

Through the solution, I can open Flow Analytics for any Flow that’s been added to the solution:

image

Magically, PowerShell starts showing Flow Runs history for those flows, too:

image

Of course one possible problem here is that we can't add a non-solution-aware flow to a solution, but that's probably going to be resolved one way or another at some point. Right now, though, if you are looking into Flow portability and/or some kind of error monitoring approach, don't make the mistake I did – make sure you are working with the solution-aware flows.

PS. In order for the analytics/powershell to work, we do need to have System Admin or System Customizer permissions in the CDS environment.

Microsoft Flow Monitoring

 

I often read/hear that Microsoft Flow is not suitable for the advanced integration scenarios where Logic Apps should be used instead. That statement is probably coming from the comparison below:

image

https://docs.microsoft.com/en-us/azure/azure-functions/functions-compare-logic-apps-ms-flow-webjobs

This is all great; however, unlike the Logic Apps which are living their own life somewhere in Azure, Microsoft Flow is a first-class citizen in the Power Platform world, so, even if Logic Apps might be more suitable for the Advanced Integration scenarios, Flow might still be preferred in a number of situations.

There are at least a few reasons for that:

Flow is integrated with Power Apps – every user having appropriate permissions will be able to create and/or run Flows:

image

Unlike the Logic Apps, Flows are solution-aware and can be deployed through PowerApps solutions. Potentially, that makes them very useful for in-house solution creators and/or external ISVs. This is similar to how we've always been using classic workflows in the solutions (and not SSIS, for example, no matter how useful SSIS can be in other scenarios):

image

Besides, every fully-licensed Dynamics user brings an extra 15K flow runs per month allowance to the tenant, and that's not the case with the Logic Apps.

As such, and since Flows are generally viewed as a replacement for the classic Dynamics workflows (of course once they have reached parity), I think it’s only fair to assume that Flows will actually be utilized quite extensively even in the more advanced scenarios.

That brings me to the question I was asking myself the other day – what monitoring options do we have when it comes to Microsoft Flow? With the workflows, we used to have System Jobs, so a Dynamics administrator could go to the System Jobs view and review the errors.

Although, to be fair, I’ve probably never seen automated monitoring implemented for those jobs.

Still, now that we have Flows, how do we go about error monitoring?

Surprisingly, there are a few options, but none of them is as simple as the good old system jobs view.

Flow management connector

This is where Flows can manage flows:

https://docs.microsoft.com/en-us/connectors/flowmanagement/

Actually, I am mentioning it here only because this one was a bit confusing/misleading to me. This connector offers a lot of actions to manage flows, but it offers no trigger, and, also, it does not seem to support querying flow errors:

image

In other words, from the monitoring perspective it does not seem to be helping.

Flow Admin Center

We can go to the flow admin center and have a look at all the flows in the environment, but that does not help with the error monitoring, it seems:

image

Error-handling steps

As explained in the post below, we can add error-handling steps to our flows. Of course we have to remember to add those steps. But, also, this kind of notification may have to be more intelligent since, if we ever end up distributing those flows as part of our solutions, we might have to somehow adjust the recipient emails depending on the environment. It may still be doable, but it does not seem extremely convenient:

https://flow.microsoft.com/en-us/blog/error-handling/

Also, there are some limitations there. We can't configure "run after" for the actions immediately following the trigger (whether it's a single action or a few parallel actions):

image

And, also, sometimes we can set up the trigger so that it "fails", in which case there will be no Flow run recorded in the history. One example would be an "Http Request Received" trigger when json schema validation is enabled:

image

Whenever schema validation fails, no error will be reported for the Flow, meaning that this kind of integration error would have to be tracked on the other side of the communication channel, and that might not even be possible.

Out of the box error notifications

https://flow.microsoft.com/en-us/blog/microsoft-forms-tables-flow-failures/

These could be useful. However, since they are not sent on each and every failure (and, realistically, they should not be sent on each and every failure), they are only of limited use.

Per-flow analytics

There is some good per-flow analytics at flow.microsoft.com

image

This might be handy, but the analytics is per-flow. And, also, it's only available for the flows owned by and/or shared with the current user.

Of course we can also go to admin.flow.microsoft.com and see the list of flows, but this kind of chart is not available there.

Admin analytics

Admin analytics comes close:

https://flow.microsoft.com/en-us/blog/admin-analytics/

But there is no detailed information about the errors:

image

Well, at least we have errors from different flows in one place. But we can't see them right away – the data is cached (the same way it's cached for any other Power BI-based report).

PowerShell

Get-FlowRun cmdlet from the “PowerShell Cmdlets for PowerApps and Flow creators and administrators” gives us almost what we need:

image

https://powerapps.microsoft.com/en-us/blog/gdpr-admin-powershell-cmdlets/

So, if we import required modules:

Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Force

Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber -Force

And utilize Get-FlowRun cmdlet:

image

We'll get Flow runs for a specific flow or for all the flows. Except that, just like with everything above, a user running this cmdlet won't be able to get flow runs for any flows which the user does not own and which are not shared with that user.

After looking at all those options, there has to be some conclusion, and I'm thinking it's like this:

From the centralized error monitoring standpoint for Flows, there seems to be no ideal option. One way to make it easier might be by making sure that all "system" Flows are shared with a dedicated support group:

https://flow.microsoft.com/en-us/blog/share-with-sharepoint-office-365/

This way, at least, any member of that group would be able to use PowerShell and/or Per-Flow/Admin analytics to see how those “System” flows have been doing. There will still be no alerts and notifications, but that’s neither better nor worse when compared to the classic Dynamics workflows – that’s pretty much the same.

From the automation perspective, PowerShell looks most promising – just make sure to use the Add-PowerAppsAccount cmdlet, or the script will be asking you for the user/password (which is not going to work for the automation):

$pass = ConvertTo-SecureString "password" -AsPlainText -Force
Add-PowerAppsAccount -Username foo@bar.com -Password $pass

Update (May 13): it turned out error monitoring works much better for the solution-aware flows. There is no need to share the flows there, one just needs to have appropriate CDS permissions: https://www.itaintboring.com/flow/error-monitoring-for-the-solution-aware-flows/

UpdateRequest vs SetStateDynamicEntity

 

I had a nice validation plugin that used to work perfectly whenever a record was updated manually. When deactivating a record with a certain status reason, a user would see an error message if some of the conditions were not met.

For example, a transaction can’t be completed till the amount field has been filled in correctly.

Then I created a workflow which would apply the same status change automatically. The idea was that my plugin would still kick in, so all the validations would work. Surprisingly, that's not exactly what happened.

Whenever I tried using a workflow to change the record status, my plugin just would not fire, and the status would change anyway.

Here is a version of the validation plugin that's been stripped of any validation functionality – it's just expected to throw an error every time:

image

The plugin has been registered on the Update of the new_test entity:

image

When trying to deactivate a record manually, I’m getting the error as expected:

image

However, when using a workflow:

image

Which, by the way, is set up like this:

image

The plugin does not kick in and the record gets deactivated:

image

Registering an additional step for the same plugin on the SetStateDynamicEntity message does help, though:

image

I am now getting the correct "validation" error when trying to run my workflow as well.

So, it seems, the SetStateDynamicEntity request (and, possibly, SetState) is still being used internally, even though I used to think it had been deprecated for a while now:

https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/gg308277(v%3Dcrm.8)

By the way, while trying to come up with this simplified repro, I noticed that this may have something to do with the stage in which the plugin is registered. Everything seems to be working correctly in PostOperation, but not in PreValidation. And there are some quirks there, too. However, if you are trying to test your validations, and if you are observing this kind of behavior, it might help to simply move your plugin from PreValidation to PostOperation.

Of course the problem is that PreValidation happens outside of the database transaction, so I can write some validation results back to Dynamics while in that stage – that's not possible in the Pre/Post Operation stages, since all such changes will be lost once an error is raised. So, eventually, SetStateDynamicEntity might still be a better option.

Things are certainly different when working in the cloud

 

Ever since I have started working on the current project, there probably was not a day when I would not discover something new (of course what’s new for me is not necessarily new for somebody who’s been working in the online environment for a while).

When working on-premise, you get to know what to expect, what’s doable, what’s going to cause problems.. and, at some point, you just settle into a certain rhythm, you learn to avoid some solutions in favor of those which are more likely to succeed in that particular on-premise environment, and that’s how you deliver the project.

Compared to that, working in the cloud environment sometimes feels like visiting some kind of a wonderland. There are wonders everywhere, they are good and bad, they never cease to amuse you, and you can’t help but keep wondering for a couple of reasons:

  • It’s literally impossible to know everything about everything since Microsoft cloud ecosystem is huge
  • Even what you knew yesterday might be absolutely irrelevant today since new and updated features get released all the time

 

Even just for Dynamics, there were 6 (six!) updates in April. However small they were, that still means something was fixed, possibly some changes were introduced, etc:

https://support.microsoft.com/en-us/help/2925359/microsoft-dynamics-crm-online-releases

image

Is it good or bad?  Or, at least, is it better or worse than working on-premise? Hard to say – for all I know, it’s very different.

I am happy to see the latest and greatest features at my disposal. Even though this certainly comes with a greater probability of seeing some sneaky new bugs.

Even if some features are not that new, it’s great to try what the community has been talking about for a while (just to name a few: Canvas Apps, Flows, Forms, etc). Although, it does not take long to realize that there are limitations.

When it comes to the limitations, it’s probably the most challenging factor for me, personally, since it’s difficult to figure them out until you’ve tried, and, back to what I wrote above, you can’t possibly know everything about everything. So there are, likely, more features that I have not tried than those that I have tried. For the ones I have not tried, I may know what the idea is and what they are meant for, but how do they really perform when tested against the specific requirements? A lot of what I’ve been doing lately can really be summarized as “research and development”, which has never been that much of a case while working on-premise.

And, of course, there is so little control we have over the environment/API limits/logging/etc.. Plus there are licensing considerations almost everywhere (can we use Power Apps? Can we use Flows? Can we use this or that? What is currently covered by our licenses and what will have to be added? If we need to add more licenses, how do we justify this decision and how do we get it through the procurement?)

Still, there is something I heard earlier today that makes up for at least some of those hassles. It’s when a developer said “This is a great idea, and minimal effort”. You know what he said this about? Using Microsoft Flow with an http request trigger to accept json data from a web form and to send it to Dynamics.  It literally takes half an hour to prototype such a flow (and maybe another hour to adjust the json). Which is much less than a few hours/days he would have to spend otherwise figuring out the details of Azure App registration, OAuth, etc.

So, yes, it’s a wonderland. Of course you never know what kind of surprise is awaiting you, but that just makes it more interesting.

image

A PowerShell script to import/export solutions and data

 

I have never been a big supporter of the configuration migration tool simply because it seems strange that we need to export data manually while preparing a package for the package deployer. I am also on the fence when it comes to the package deployer itself – on the one hand, those packages can certainly do the job. On the other hand, one needs Visual Studio to develop a package.

At some point, we’ve developed EZChange  for one of the implementations – that’s an actual no-code approach utilizing mostly unmanaged solutions.

But when it comes to a solution that would work both on-premise and with Azure DevOps, PowerShell sounds like a really good option. There are a few projects out there, such as Xrm CI Framework by Wael Hamze.

I just needed something relatively simple this time – with a twist, though, since I also needed a simple mostly automated data export/import feature (and, ideally, manual control over that data)

If you are curious to see how it worked out, keep reading… By the end of this post you should be able to set everything up so that you can export solution and data from one environment using one script and import to another using another script.

There is a GitHub project where you can get the scripts:

https://github.com/ashlega/ItAintBoring.Deployment

That project also includes demo solution files for Dynamics:

image

image

To set up your own project, do this

  • Either clone or download the whole project from GitHub
  • Browse to the PSModules subfolder and create deploymentSettings.psm1 file as a copy of deploymentSettingsTemplate.psm1:

image

  • Open that file and configure connection strings for both Source and Destination connections:

image

 

Essentially, that’s it. Since that GitHub project already has a demo solution and some data, you can try importing both into your destination instance. To do that, run Import.ps1 script from the SampleProject folder:

image

Note: it’s an unmanaged solution that has one demo entity. I don’t think there is any danger in installing it, but, of course, try it at your own risk

Below is a quick demo of what’s supposed to happen. Notice how, at first, there is no DemoDemployment solution in Dynamics. I will run import.ps1, will get the solution imported, but, also, will have some demo data added as you will see in the advanced find.

So what does the import.ps1 script look like?

Here it is:

image

The work is, mostly, done in the last 4 lines:

  • The first two of them are all about setting up CDSDeployment object and initializing it (the class for that object is defined in the PSModules\deployment.psm1 file)
  • Once the object is ready, I can use it to import my solution
  • And, finally, I’m importing demo data in the last line

What about the 10 or so lines at the beginning of the script? That's reading configuration settings from either the environment variables (for Azure DevOps) or from the deploymentSettings.psm1 file created above.

What if I wanted to export the solution file and/or the data?

This is what export.ps1 script is for.

It’s very similar to the import.ps1, but it’s using the other two methods of the CDSDeployment object:

image

ExportSolution method will export a solution. The second parameter there stands for “managed” (so, if it’s “false”, then it’ll be an unmanaged solution)

ExportData will export, well, data.

Now keep in mind that there are two files in the Data folder:

image

You can have more than two, since both the ExportData and PushData methods accept file name parameters. However, at least for now, you will need to create the schema.txt file manually. That's, basically, an entity metadata file in json format:

image

And, of course, data.txt is the actual data file:

image

You can either export data from the dev environment, or you can create that file manually (and, possibly, use it to populate all environments with the configuration data, including the dev environment).

Would it work with Azure Pipelines? Sure:

image

What about solution packager etc? That was not, really, the purpose of this set up, but it can certainly be added.

PowerShell and Dynamics/PowerApps

 

I was working on a PowerShell script, and I just could not get past the error below:

image

"Element 'http://schemas.datacontract.org/2004/07/System.Collections.Generic:value' contains data from a type that maps to the name 'System.Management.Automation:PSObject'. The deserializer has no knowledge of any type that maps to this name"

Yet I was not trying to do anything extraordinary (other than writing a PowerShell script which is something I had not really done before).

In a nutshell, I would create a connection in my script, then I would create an entity, and, then, set an EntityReference attribute there (the code below is not working code – it's a simplified sample):

$conn = Get-CrmConnection
$entity = New-Object Microsoft.Xrm.Sdk.Entity -ArgumentList $entityName
$value = New-Object -TypeName Microsoft.Xrm.Sdk.EntityReference

$entity[$name] = $value
$conn.Update($entity)

The error would be happening in the Update call above.

If you do a search for the error above, you’ll see a few references. It still took me more than a day to figure out a workaround!

Turned out there is something with boxing/unboxing of objects between PowerShell and .NET which I really don't fully understand at the moment, but, once I realized that, I figured: what if I just bypassed boxing/unboxing altogether? It worked in my case – here is what I had to do:

Define a helper class

$HelperSource = @"
public class Helper
{
    public static void SetAttribute(Microsoft.Xrm.Sdk.Entity entity, string name, object value)
    {
        entity[name] = value;
    }
}
"@

Define a helper function 

You might not need it, by the way. It depends on how the script is structured – I had to use it in another class, and, so, my Helper class would not be available to the compiler yet… unlike the function below:

function Set-Attribute($entity, $name, $value)
{
    [Helper]::SetAttribute($entity, $name, $value)
}

Add that helper class to my PowerShell script

$assemblyDir = Get-Location
$refs = @("$assemblyDir\Microsoft.Xrm.Sdk.dll", "System.Runtime.Serialization.dll", "System.ServiceModel.dll")
Add-Type -TypeDefinition $script:HelperSource -ReferencedAssemblies $refs

Now, instead of using $entity[$name] = $value, I can use Set-Attribute $entity $name $value.

Seems to be working so far – I’m getting all my entity references updated correctly.

 

 

Creating an unsupported javascript

 

Earlier today, Shidin Haridas and I were discussing something on LinkedIn, and we realized that we were talking about an undocumented javascript method in the Client API framework.

You know you are not supposed to use undocumented methods, right?

So I thought what if… I mean, don’t get me wrong, if somebody suggested to use unsupported javascript methods on a Dynamics/PowerApps project, I’d be the first to say “NO WAY!”. And, yet, what if… just this one time.

Ah, whatever, let’s do it just for the sake of experiment.

We want to be smart, though. It’s one thing if the script stops working, but it’s a totally different thing if the rest of the application stops working because of that.

Therefore, let’s do it this way:

  • We’ll double check that the method is, indeed, undocumented
  • We will create a wrapper script
  • That wrapper script, if it fails, should handle the errors silently without making the rest of our app fail

 

All good? Let’s do it.

Here is what I want to achieve. When “Display Related” is selected, I want to display “Related Demos” link under the “related” dropdown:

image

When it’s not selected, I want the link to disappear:

image

It's not really clear how to achieve this goal when looking at the related documentation page:

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/clientapi/reference/formcontext-ui-navigation

That page is talking about navigation items, but how do we get that specific item?

image

Turns out all the methods are there. Whether they are supported or not is a question.

In my example, what would work is this:

var related = formContext.ui.navigation.items.getByName("nav_ita_ita_relateddemo_ita_relateddemo_ParentDemo");
related.setVisible(isVisible);

So what if it ends up being an unsupported method in the future? It's ok if the navigation link shows up – the users will probably notice, but it's not going to be the end of the world. It would be much worse if that script breaks and everything else breaks because of it.

Let’s create a wrapper then! We just need to make it a safe method by adding “try-catch” around the call that may start failing.

There you go:

image
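In case the screenshot above does not render, here is roughly what that wrapper might look like – it's a sketch, and the navigation item name, the parameters, and the runUnsupportedScript name are just from my example:

function runUnsupportedScript(executionContext, isVisible) {
    try {
        var formContext = executionContext.getFormContext();
        // Undocumented call: getByName on the navigation items collection
        var related = formContext.ui.navigation.items.getByName("nav_ita_ita_relateddemo_ita_relateddemo_ParentDemo");
        related.setVisible(isVisible);
    }
    catch (e) {
        // If the undocumented method ever stops working, fail silently and just log it
        console.log("runUnsupportedScript failed: " + e.message);
    }
}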

Now, if we stick to using runUnsupportedScript in the various event handlers, there is no risk of running into an error, since the actual method that calls the undocumented function will handle the error and log a message to the console instead.

So, I’m not implying this is the best practice. I’m not even saying it’s the recommended practice. I would even say don’t do it on your projects! Well, unless you have to…