Monthly Archives: May 2019

Why would you use “Stage for Upgrade” with managed solutions?

 

In case you were wondering what that “stage for upgrade” option means and why it’s there in the first place:

image

It’s all about deleting components from managed solutions. The ALM whitepaper actually does a good job describing this feature, though there is a lot of other content in that document, so “stage for upgrade” may seem less important at first glance:

https://www.microsoft.com/en-us/download/details.aspx?id=57777

Anyway, here is an example of the managed solution I have installed in the production instance:

image

What if I did not need that workflow anymore, and, also, what if I did not need the Test Field:

image

I could go to the development environment, delete the workflow and the field, update the version number, and import a new version of the same managed solution. While importing that updated solution, I’d have to choose whether I wanted to “Stage for Upgrade”. What if I did not?

image

The solution would be imported, but the workflow would still be there:

image

How come?

The problem is that solution components can’t be deleted just like that. It all comes down to the reference count – the solution that’s already in the instance is still referencing the component, so the system can’t remove that component while the reference exists. And the reference won’t go away until the solution that holds it is removed (or upgraded).

Historically, this is where we’d be using holding solutions, and this is exactly what we are doing when choosing the “stage for upgrade” option.

Let’s make a note of the solution ID. When not using stage for upgrade, it stays the same:

image

Now let’s export another version of the same solution (it’ll be 1.2.0.0) and try importing it with the “stage for upgrade” option. This time, there will be two solutions in the target instance:

image

That “_Upgrade” solution is, actually, what used to be called a “holding” solution – there is no workflow in that solution, but the entity is still there:

image

If I now applied solution upgrade to the original solution:

image

The workflow would be removed since the system would do three things:

  • It would remove my original solution
  • It would remove any components that no longer have references (the workflow)
  • It would rename that “_Upgrade” solution so the name matches the original solution name

As a side effect, the solution ID would actually change (since it would now be the ID of the former “_Upgrade” solution):

image
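
By the way, this whole sequence can also be scripted. Below is a minimal PowerShell sketch of the same process using the Microsoft.Xrm.Tooling.CrmConnector.PowerShell module and the SDK’s ImportSolutionRequest/DeleteAndPromoteRequest messages – the connection string, file path, and solution name are placeholders, so treat it as an outline rather than a ready-to-run script:

# Connect to the target instance (connection string values are placeholders)
Import-Module Microsoft.Xrm.Tooling.CrmConnector.PowerShell
$conn = Get-CrmConnection -ConnectionString "AuthType=Office365;Url=https://yourorg.crm.dynamics.com;Username=admin@yourorg.onmicrosoft.com;Password=..."

# Import the new version as a holding solution - this is what "Stage for Upgrade" does
$import = New-Object Microsoft.Crm.Sdk.Messages.ImportSolutionRequest
$import.CustomizationFile = [System.IO.File]::ReadAllBytes("C:\Temp\TestSolution_1_2_0_0.zip")
$import.HoldingSolution = $true
$conn.Execute($import) | Out-Null

# Apply the upgrade: delete the original solution, remove the components
# that are no longer referenced, and promote the "_Upgrade" solution
$promote = New-Object Microsoft.Crm.Sdk.Messages.DeleteAndPromoteRequest
$promote.UniqueName = "TestSolution"
$conn.Execute($promote)

Under the hood, that’s all the “Stage for Upgrade” checkbox and the “Apply Solution Upgrade” button are doing.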

PS. Through this same process, my Test Field finally got deleted as well:

image

Data Center Locations for different services (including Dynamics 365 and SharePoint)

I went to the Azure Active Directory properties earlier today for a completely different reason, and, then, I noticed something that made me jump:

image

What?! With the client being in Canada, all Dynamics/SharePoint data needs to be physically stored in Canada, or this whole implementation is going to stop quite abruptly. Unfortunately, when looking at the environment properties in the PowerApps Admin Center, all we see is the region:

image

So, where is the data stored? Can it really be stored in the United States?

Usually I don’t even need to deal with this, so many of you probably know it already, but there is a separate page that explains where your data is stored:

https://products.office.com/en-us/where-is-your-data-located

Here is what comes up for the Canada region there:

image

And there is another page that clarifies where the data is stored for Dynamics 365:

http://o365datacentermap.azurewebsites.net/

image

Interestingly, I don’t see anything specifically for CDS (what if it’s not a Dynamics 365 CDS instance, but a regular CDS instance?)

Well, I probably would not know where the data is either way – it’s really up to Microsoft to ensure it’s stored in the right place – but it would be great if there were a single page listing all those locations per service, including the locations for Dynamics and/or regular CDS data.

Anyway, with those two links above combined, it seems all our data will actually be stored in Canada, so it’s business as usual.

Dynamics 365 App for Outlook – a few gotchas that hit us

 

We’ve been implementing Dynamics 365 App for Outlook lately – that’s been a bumpy road, so I figured I’d list some of the things which hit us along the way, and which were either not expected or, in some cases, underestimated.

1. Dynamics 365 App for Outlook does not work with shared mailboxes

Actually, this seems to be more of a “client-side” limitation, so, for example, if you add that shared mailbox as a regular account to your Outlook, it might still work. Of course, you would have to share credentials with whoever needs access to that mailbox, and that might not fly with the security team.

As simple as it is, this one is quite annoying, actually, since pretty much every group of users has a shared mailbox in our environment.

2. All emails have to be approved by the global admin

That’s a mere inconvenience, but there is no way around it. I could be a Dynamics Service Administrator capable of creating/deleting the instance, but I won’t be able to approve an email address in that instance unless I’m a global admin.

3. Email addresses in Dynamics should match user names in Office 365

Otherwise, you may get an error message stating that the UPN doesn’t match. This is, likely, more of a problem when setting up a mailbox for Queues since you’ll run into this error while approving the email address – there will be no actual user experiencing login errors.
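
If you want to check for such mismatches in advance, here is a rough PowerShell sketch that compares the primary email and the user name of the Dynamics users with what’s in Azure AD. It assumes the AzureAD and Microsoft.Xrm.Data.PowerShell modules are installed, that Connect-AzureAD has been run, and that $conn holds a connection created with Connect-CrmOnline:

# Pull full name, primary email, and user name (UPN) for the Dynamics users
$users = Get-CrmRecords -conn $conn -EntityLogicalName systemuser -Fields fullname,internalemailaddress,domainname

foreach ($u in $users.CrmRecords) {
    $adUser = Get-AzureADUser -Filter "userPrincipalName eq '$($u.domainname)'"
    if ($null -eq $adUser) {
        Write-Host "No Azure AD user found for $($u.fullname) ($($u.domainname))"
    }
    elseif ($adUser.UserPrincipalName -ne $u.internalemailaddress) {
        Write-Host "UPN/email mismatch for $($u.fullname): $($adUser.UserPrincipalName) vs $($u.internalemailaddress)"
    }
}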

4. Items sent from Dynamics won’t be stored in the Outlook sent folders

This is just how it works – you can send an email from Dynamics, but it won’t show up in the “Sent” folder in Outlook.

5. Items sent from Outlook won’t show up in Dynamics (unless tracked manually through the app)

This is the default behavior, though there is an organization setting that can be updated to change it.

For more details, have a look at this support article – look up “AutoTrackSentFolderItems” there:

https://support.microsoft.com/en-us/help/2691237/orgdborgsettings-tool-for-microsoft-dynamics-crm

What’s interesting about it is that this setting only applies if the mailbox is configured to track “All Email Messages”, which may defeat the purpose of tracking in some situations.
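
If you just want to check what the setting is currently set to, the same XML that the OrgDBOrgSettings tool edits is stored in the orgdborgsettings attribute of the organization record, so a quick (read-only) PowerShell sketch along these lines should show it – assuming $conn is a Microsoft.Xrm.Data.PowerShell connection:

# Dump the raw OrgDBOrgSettings XML so you can look for AutoTrackSentFolderItems
$org = (Get-CrmRecords -conn $conn -EntityLogicalName organization -Fields orgdborgsettings).CrmRecords[0]
$org.orgdborgsettings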

6. Dynamics App for Outlook is an actual Model-Driven App in Dynamics

image

What it means is that, if you have a custom entity you need to use for the email “regarding” field, for example, you may have to add that entity to the Dynamics 365 App for Outlook just like you would do with any other model-driven app.

    And you may need to add that app to your solutions if you want the same changes applied in production.

7. And, just in case, Dynamics App for Outlook is only available in the Dynamics environments

In other words, a regular CDS instance won’t have it (hopefully, that’s just a “not yet”).

How to: using PowerShell for automated testing of PowerApps

 

If you have not looked at EasyRepro yet, you probably should:

https://github.com/microsoft/EasyRepro

I wrote a post about EasyRepro before with some explanations of how it works, so this may also be helpful:

https://www.itaintboring.com/dynamics-crm/easy-repro-what-is-it/

Now, I am not sure if Microsoft is “all in” on making PowerApps development a primarily dev-only activity, but, at the moment, it seems not everyone has the skills and licenses to build tests in Visual Studio. Otherwise, this whole post might not make a lot of sense, since one could use Visual Studio Test tasks instead:

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/vstest?view=azure-devops

Yet the EasyRepro GitHub solution already provides over 200 sample tests:

image

However, what if we were more on the functional consulting side, with limited access to Visual Studio licenses (and to the skillset required to utilize all that)?

This is where, in theory, one might use PowerShell. Of course PowerShell itself is not necessarily one of those tools everyone would know how to use either, but, at least, it’s available pretty much anywhere. It’s also available in Azure Pipelines.

First things first, though. In order to use EasyRepro with PowerShell, some kind of wrapper module is required which would expose EasyRepro methods as PowerShell cmdlets. I’ve started building one out, and it’s on GitHub here: https://github.com/ashlega/ItAintBoring.EasyReproPowershell

image

The EasyRepro folder contains compiled EasyRepro DLLs. There is also a wrapper PowerShell module which defines the cmdlets – of course, it’s called EasyRepro.psm1. A few cmdlets have been added so far, but there is still work to do.

Azure-pipelines.yml is a build pipeline definition for the sample pipeline – you will see that pipeline below.

Settings.ps1 defines a couple of environment variables in case you wanted to run the tests manually (otherwise, those environment variables should be configured in Azure Pipelines)

Finally, RunTests.ps1 is an example of how to run the tests:

image

That script starts by importing the EasyRepro.psm1 module. Then it defines a hashtable to store test results and a sample test (creating an account). Finally, it initializes EasyRepro, runs the tests, clears the EasyRepro objects (mostly to close the browser), and does a bit of reporting on the results.
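
For reference, the overall structure of that script looks roughly like the sketch below. The cmdlet names and environment variable names here are illustrative only – check the repository for the actual ones, since the wrapper module is still a work in progress:

Import-Module .\EasyRepro.psm1

# Hashtable to collect the test results
$testResults = @{}

# A sample test: create an account and set a couple of fields
# ("123name123" does not exist on the account form, so this test is expected to fail)
$createAccountTest = {
    Open-XrmEntityForm -EntityName "account"
    Set-XrmFieldValue -FieldName "name" -Value "Test Account"
    Set-XrmFieldValue -FieldName "123name123" -Value "This will fail"
    Save-XrmEntityForm
}

# Initialize EasyRepro (opens the browser and logs in)
Initialize-XrmApp -Url $env:TEST_URL -Username $env:TEST_USERNAME -Password $env:TEST_PASSWORD

try {
    & $createAccountTest
    $testResults["Create Account"] = "Passed"
}
catch {
    $testResults["Create Account"] = "Failed: $($_.Exception.Message)"
}

# Clear EasyRepro objects (mostly to close the browser)
Clear-XrmApp

# Report the results and fail the run if any test failed
$testResults.GetEnumerator() | ForEach-Object { Write-Host "$($_.Key): $($_.Value)" }
if ($testResults.Values -match "Failed") { throw "One or more tests failed" }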

For example, the test above will fail since there is no “123name123” field on the account form. It’s easy to see if I just run the script manually. However, I can also use it in a build pipeline like this (notice the variables, by the way – when running manually, they are defined in settings.ps1):

image

This pipeline, once started, will fail with the error below:

image

And, if I wanted to review the details, I could always click on that failed step and see what happened:

image

Just as expected, there was a failure in the “Create Account” test.

Of course, writing and maintaining those tests can still be a challenge, so I probably would not suggest going “all in” on automated testing with this. Although, it might make sense to spend that extra time and cover at least some of the core regression testing. As in the example above – creating an account should just work, all the time, so when there is a failure, it’s certainly a problem. Mind you, it’s a problem with the test in this particular case :)

Model-driven apps and form security

Earlier today, we got a strange request from a user who was wondering why he kept getting an error message. Indeed, we knew he should not be getting that error. However, after looking into it, we found that the form the user was working on – a regular main form – did have a script error. We knew that form was broken, but we also knew the user was not supposed to see it at all, since it was not added to the model-driven application.

Fast forward – here is what it turned out to be:

  • There was an entity added to the application
  • That entity had multiple main forms, but only one of the forms was added to the application
  • The user experiencing that error was not given access to the form above through the security roles
  • Turned out the user was able to see all the other forms he had access to

Interestingly, once the user was granted access to the form, it became the only form he could see while working with the application. Problem solved.

But… Was it a bug? Was it a feature? I am not sure – it’s just something to keep in mind.

Actually, it’s easy to reproduce even with a System Admin account.

Here, I can see two forms:

image

Even though my “Simple App” only has one form for that Test entity:

image

And this is all because I did not give myself access to the form (yes, this is one of those cases where System Admins don’t get full permissions by default):

image

And, once I’ve enabled that form for System Customizer & System Administrator, here is what I see in the application:

image

Using Microsoft Flow to Accept Data Submissions from Form.IO

Recently, we’ve been prototyping a solution which would use Form.IO to collect web form data, so we needed a way to somehow store submitted requests in Dynamics.

So what we came up with looked more or less like this:

image

Form.IO offers different types of actions, and one of those allows forwarding submitted data to a URL in JSON format. That’s exactly how we can hook up Form.IO to the Microsoft Flow in the scenario above.

However, rather than walking you through the details of how the “HTTP Request is received” trigger works in Flow, I’ll provide a link to this other blog post by Serge Luca:

https://sergeluca.wordpress.com/2019/02/19/protect-your-nested-service-flows-with-azure-api-management-service/

That blog post also covers something else – specifically, it explains how to protect this kind of Flow through Azure API Management. That came up as a question from the architecture/security folks almost right away, once they looked at the implementation above.

IP access restriction policy (which is described in the post above) seems to be most useful in this scenario, although, at the moment we are not sure we can use it with Form.IO since we would need to know the IP addresses Form.IO is using.

Still, basic authentication is supported on the API management side and on the Form.IO side. Also, we might add a policy for a custom HTTP header…

image
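
By the way, while everything is being wired up, you can exercise that HTTP trigger without Form.IO at all – a quick PowerShell call with a sample JSON payload (and, if you go the custom header route, with that header included) does the job. The URL, header name, and payload below are placeholders:

# Placeholder URL - use the actual HTTP POST URL generated by the Flow trigger
$flowUrl = "https://prod-00.canadacentral.logic.azure.com/workflows/..."

# Sample payload imitating a Form.IO submission
$payload = @{
    firstName = "Test"
    lastName  = "Submission"
    email     = "test@example.com"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $flowUrl -Body $payload -ContentType "application/json" -Headers @{ "x-custom-auth" = "shared-secret-value" }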

Other than that, the flow above is relatively straightforward. Of course, Form.IO can accept file attachments (and, in our case, we used Azure Blob storage to save those attachments), so at least half of the actions in that Flow are really dealing with creating SharePoint document locations for the newly submitted case records. That part is, basically, a replica of the flow described in the post below:

https://www.itaintboring.com/dynamics-crm/creating-custom-folder-structure-for-sharepoint-integration-using-flows/

Except that we don’t need to create a custom folder structure this time – we just need to create a folder per case in SharePoint, then upload submitted files to that folder through the SharePoint connector.

Setting up nested security groups for CDS/Dynamics instances

This post is a proof of concept for the following scenario:

image

Wouldn’t it be nice if we could use more than one security group so that, for example, we could have admins in one group and all the other users in another? This way, we might add that admin group to SharePoint so admin users would become site owners/site collection admins there.

Unfortunately, nested security groups are not supported for CDS/Dynamics:

https://community.dynamics.com/crm/b/workandstudybook/archive/2018/05/16/gotcha-don-t-use-nested-security-group

Do we have any options? Well, just continuing on the PowerShell exploration path that I started earlier this month, we could use this kind of script to emulate nested groups:

install-module azuread
import-module azuread
Connect-AzureAD

$GroupStartWith = 'Root_CRM'

$ADGroups = get-azureadgroup -Filter "startswith(DisplayName, '$GroupStartWith')"
 
foreach ($ADGroup in $ADGroups) {
    $Members = Get-AzureADGroupMember -ObjectId $ADGroup.ObjectId

    #Delete all "user-members"
    foreach ($Member in $Members)
    {
        if($Member.ObjectType -eq "User")
        {
            Remove-AzureADGroupMember -ObjectId $ADGroup.ObjectId -MemberId $Member.ObjectId
        }
    }

    #Re-add all nested user-members
    foreach ($Member in $Members)
    {
        if($Member.ObjectType -eq "Group")
        {
            $NestedMembers = Get-AzureADGroupMember -ObjectId $Member.ObjectId
            foreach ($NestedMember in $NestedMembers)
            {
                if($NestedMember.ObjectType -eq "User")
                {
                    Add-AzureADGroupMember -ObjectId $ADGroup.ObjectId -RefObjectId $NestedMember.ObjectId
                }
            }
        }
    }
}

What the script above does:

– It connects to Azure AD

– It finds all groups named “Root_CRM*” – that’s just a quick “naming convention” I came up with for this example

– It then loops over all of those groups and does a couple of things for each of them:

  • It removes all current user members
  • It looks at the nested groups and adds the nested group members back to the main group

As a result, I can have security groups configured exactly the way it’s shown on the diagram above since I’m not limited to having just one group anymore.

Of course, it’s PowerShell, so the script has to be started somehow. It might be doable with Azure Runbooks – for example, this script could be scheduled to run a few times per day to do the automated sync. Although, it will have to be updated so it does not ask for credentials, which should be doable with Get-AutomationConnection.
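
For example, in an Azure Automation runbook, the Connect-AzureAD line above could be replaced with something like this (assuming a Run As account with a certificate has been configured for the Automation account):

# Inside an Azure Automation runbook: connect without being prompted for credentials
$connection = Get-AutomationConnection -Name "AzureRunAsConnection"

Connect-AzureAD -TenantId $connection.TenantId `
                -ApplicationId $connection.ApplicationId `
                -CertificateThumbprint $connection.CertificateThumbprint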

The script can also likely be improved so it does not remove users which will be re-added anyway – this is to avoid possible glitches where a user loses access to CDS/Dynamics for a moment while that user is being re-added.

TypeScript from the c# developer standpoint

 

A lot of TypeScript tutorials are available online, so I was looking for one recently and, it seems, this one is really good IF you are familiar with some other language already:

https://www.tutorialspoint.com/typescript

So, while working through it, I noticed a few things which might strike you as somewhat unusual if you are more used to languages like C# (or even JavaScript itself). Figured I’d come up with a short list to use as a refresher for myself later; although, if you are new to TypeScript, you might find some of the items useful, too.

1. To declare a variable, we need to use <var_name>:<type_name> syntax

var x:number = 1

2. There is no need to use semicolons as long as we have one instruction per line

var x:number = 1
x = 2
x++

3. There are no separate numeric types

There is no int or double – all numbers are represented by the “number” type.

4. There are arrays and there are tuples

For an array, here is an example:

TS: var names:string[] = ["Mary","Tom","Jane","Andy"]
JS: var names = ["Mary","Tom","Jane","Andy"]

And here is an example for a tuple:

TS: var arr = ["Mary","Tom",1,2]
JS: var arr = ["Mary","Tom",1,2]

The difference between the two on the TS side is that arrays are strongly typed, so there will be type validation at compile time, while a tuple can mix values of different types. Once compiled into JS, though, both arrays and tuples look the same.

5. Unions

This is a somewhat crazy concept which is basically saying that a variable can store values of more than one type, but those types have to be mentioned:

var val:string|number
val = 12
val = "This is a string"

Of course this means very little for the compiled javascript code, but it allows TypeScript to do compile-time type validations.

What’s unusual about this (besides the fact that I have not seen this concept in the .NET world so far) is that “union” has a somewhat different meaning in other areas. For example, unions allow you to combine results from different queries in SQL, but those queries have to match each other on the “metadata” level (same columns). In the case of TypeScript, it’s the “metadata” that becomes different through the use of unions.

6. Interfaces

In TypeScript, an interface is more than a contract to be implemented by a class later. An interface defines a type that you can use in variable declarations:

interface IPerson {
    firstName: string,
    lastName: string
}

var customer: IPerson = {
    firstName: "Tom",
    lastName: "Hanks"
}

In C#, you would have to implement the interface first through a class which inherits from and implements the interface. In TS, you can declare a variable of the “interface type”, and you can implement that interface right there.

What I mean is that, in the example above, you can’t just omit one of the properties:

var customer: IPerson = {
    firstName: "Tom"
}

Since you’ll get an error message from the compiler:

main.ts(6,5): error TS2322: Type '{ firstName: string; }' is not assignable to type 'IPerson'.
Property 'lastName' is missing in type '{ firstName: string; }'.


Also, there is a special construct for defining array interfaces – it seems I don’t fully understand the “index” part at the moment, so I will just leave it at this:

interface namelist {
    [index: number]: string
}

7. Somewhat special syntax when defining classes

  • When defining class methods, we don’t use “function” keyword in TS
  • When defining class properties, we don’t need to use “var” keyword
  • To define a constructor, we need to use “constructor” keyword
  • To reference a property in a function, we need to access that property through “this” object

 

class Car {
    //field
    engine: string;

    //constructor
    constructor(engine: string) {
        this.engine = engine
    }

    //function
    disp(): void {
        console.log("Engine is  :   " + this.engine)
    }
}

8. Type templates

There is a notion of type templates, which basically means that, since every object in TS has to have a type, that type will be inferred from the variable declaration if you do it this way:

var person = {
    firstname: "Tom",
    lastname: "Hanks"
}

With this, you can’t dynamically add another property to the person anymore:

person.age = 1

Since you’ll get an error:
main.ts(7,8): error TS2339: Property 'age' does not exist on type '{ firstname: string; lastname: string; }'

Instead, you have to add that age property to the original declaration somehow:

var person = {
    firstname: "Tom",
    lastname: "Hanks",
    age: NaN
}

person.age = 1

9. Prototypes vs objects

When you have a class, all class methods will be defined in the prototype:

TS:

class Car {
    disp(): void {
        console.log("Ford")
    }
}

Compiled JS:

var Car = /** @class */ (function () {
    function Car() {
    }
    Car.prototype.disp = function () {
        console.log("Ford");
    };
    return Car;
}());

And, then, you can actually change what “disp” means for a specific object of the class:

var a: Car = new Car()
a.disp = () => console.log("Nissan")

Which compiles into the following JS:

var a = new Car();
a.disp = function () { return console.log("Nissan"); };

Well, that’s some strange stuff…

10. Namespaces

If you want to use a namespace from a different ts file, you have to reference the file:

/// <reference path = "SomeFileName.ts" />

There is a special “export” keyword which must be added to anything you want to be visible outside of the namespace.

Interestingly, you can actually define the same namespace more than once in the same file:

namespace Test {
    export interface Displayable {
        disp(): void;
    }
}

namespace Test {
    export class Car implements Displayable {
        disp(): void {
            console.log("Ford")
        }
    }
}

var a: Test.Car = new Test.Car()
a.disp()

But, unless you’ve added those “export” keywords to the interface above and to the class, it’s not going to compile.

11. Modules

There is no equivalent concept in C#, it seems, so, for now, I’d rather just put a link to the page which seems to explain the concept… I’m still working through it :)

https://www.typescriptlang.org/docs/handbook/modules.html

12. Ambients

Essentially, it’s similar to adding a dll reference to your .NET project so that

  • Typescript compiler could do compile-time validations
  • You would not have to rewire existing javascript code in TS

Ambient declarations are done through “d.ts” files; those files are added to the TS files using the “reference” directive:

/// <reference path = "Calc.d.ts" />

However, d.ts files themselves do not contain any actual implementations – there are only type definitions there:

declare module TutorialPoint {
    export class Calc {
        doSum(limit: number): number;
    }
}

When it comes to actually adding the implementation of the doSum function to your final HTML, you have to do it using the script tag:

<script src = "doSumSource.js"></script>

Beyond the PowerApps Component Framework

The more I’m looking into the PowerApps Component Framework, the more obvious it becomes that the era of “do-it-all”-ers in Dynamics/PowerApps might be coming to an end.

Although, it’s not just the PCF that’s making me say this.

I turned to Dynamics some time ago since I did not want to stick to pure development – I liked messing with the code every now and then, but I could hardly imagine myself spending whole days coding. That was back in 2011 when this kind of skillset worked great for a Dynamics Consultant since what was missing out of the box I could always extend with Javascript/HTML/Plugins. And I did not have to be a full-stack developer familiar with the latest frameworks, since, really, all I needed was basic javascript skills, some C# development skills, probably SQL… but nothing too advanced.

We can still do it today in the model-driven PowerApps, but, it seems, the world is changing.

It’s been a while since the term Citizen Developer was introduced, and this is really what we call a developer who is, well, not a developer in the regular sense, but who is still doing “development” using more high-level tools:

https://www.mobile-mentor.com/blog/citizen-developer-powerapps/

For example, there can be Flow developers, and there can be Canvas Apps developers. Interestingly, those tools are not that straightforward, so somebody just starting to work with them may need to go over quite a bit of learning.

On the other hand, PowerApps Component Framework hardly belongs to the realm of the Citizen Developer – instead, it’s meant to be utilized by the professional developers:

image

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/overview

And it’s not just wording (although, of course, one might argue that plugins were always meant for professional developers as well). If you look at the sample code of PCF components, you’ll see something like this:

image

So, unlike with a web resource, there are a few more things to consider:

  • Typescript
  • The framework itself
  • The fact that you need to create HTML elements from code

 

Now, to use TypeScript (unless you are already familiar with it), you’ll probably need to catch up first; you’ll also need to learn what NodeJS is, what NPM is, how to use them, etc.
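
Just to illustrate what that tooling looks like, here is, roughly, the inner loop with the PowerApps CLI, Node, and npm (the namespace and control name below are examples, and the exact flags may differ between CLI versions):

# Scaffold a new field component
pac pcf init --namespace SampleNamespace --name SampleControl --template field

# Restore the npm packages the generated project depends on
npm install

# Compile the TypeScript and bundle the control
npm run build

# Run the local test harness in the browser
npm start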

Compare that to classic web resource development, where all you need to know is HTML, CSS, and JavaScript. Even though I think web resources are not going anywhere yet, since things like JavaScript-based validation/visibility rules do not belong to the PCF, the difference between PCF and web resources is that somebody working with the PCF is supposed to be a “professional developer” rather than just an opportunistic consultant utilizing web resources when and if it’s appropriate.

To start with, you may need to have all those tools configured on your laptop to use them (whereas, with the web resources, we could just paste javascript to the editor in Dynamics if we wanted to).

But that’s just one example of where the line between Citizen Developers and Professional Developers is becoming more clear. There are other examples, too:

  • Do you know what CI/CD is and how to set it up?
  • Are you familiar with Azure DevOps?
  • Do you know how to use Git?
  • Can you use the Solution Packager to unpack a solution so you can put solution components in source control?
  • Are you familiar with PowerShell scripting?
  • Do you know how to write plugins?
  • Are you familiar with TypeScript? How about Angular? NodeJS?
  • Can you explain what solution layering means and how it works?

 

On any relatively complex implementation, those are the skills you may need to have as a model-driven PowerApps developer.

Although, as a Citizen Developer, you might not even need to bother learning those things.

And, as a Functional Consultant, you might need to be aware of what can be done through either sort of development, but you have your own toys to play with – think of all the configuration options, security model, licensing, different first-party applications.

A few years ago, Microsoft began to introduce “code-less” features such as Business Rules, Microsoft Flow, Canvas Apps… and it almost started to look like the days of real developers in Dynamics were numbered.

Then there was OData, WebAPI, Liquid for Dynamics Portals… Now there is PCF which is targeting professional developers from the start. Add plugins to this mix, and, suddenly, the outlook for developers starts looking much better.

However, what it all means is that every “group” is getting their own tools, and, so, they have to spend time learning those tools and frameworks. As that is happening, former Dynamics consultants “with developer skills” may have to finally choose sides and start specializing, since there is only so much one can learn to use efficiently in the everyday work. Personally, I’ll probably try to stick to the “do it all” approach for a little longer, but I’m curious how it’s all going to play out. We’ll see…

 

Error monitoring for the solution-aware Flows

 

Some flows are solution-aware, and some flows are not:

https://docs.microsoft.com/en-us/flow/overview-solution-flows

It turned out the difference between those two is not just that solution-aware flows are portable – somehow, it goes deeper.

Just a few days ago, I wrote a post where I was trying to summarize error monitoring options for Flows:

https://www.itaintboring.com/dynamics/microsoft-flow-monitoring/

It was not working out quite well there – the main problem was that, for the Flows I was looking at, I could not get the history of Flow runs unless the Flows were shared with me.

However, that’s only a problem for the non-solution aware Flows.

If a Flow is created in a solution, things get much better.

Through the solution, I can open Flow Analytics for any Flow that’s been added to the solution:

image

Magically, PowerShell starts showing Flow run history for those flows, too:

image
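
For example, with the Microsoft.PowerApps.PowerShell module, listing the runs and picking out the failed ones might look roughly like this – the environment and flow names below are placeholders, and the property names may differ slightly between module versions:

Add-PowerAppsAccount

# Placeholders: use your environment name and the flow's name (a GUID)
$environmentName = "Default-00000000-0000-0000-0000-000000000000"
$flowName = "00000000-0000-0000-0000-000000000000"

# List the runs and keep only the failed ones
Get-FlowRun -EnvironmentName $environmentName -FlowName $flowName |
    Where-Object { $_.Status -eq "Failed" }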

Of course, one possible problem here is that we can’t add a non-solution-aware flow to a solution, but that’s probably going to be resolved one way or another at some point. Right now, though, if you are looking into Flow portability and/or some kind of error monitoring approach, don’t make the mistake I did – make sure you are working with solution-aware flows.

PS. In order for the analytics/PowerShell to work, we do need to have System Admin or System Customizer permissions in the CDS environment.