Monthly Archives: November 2019

Do you want to become an MVP?

I was watching the video Mark Smith just posted, and, as often happens, got a mixed impression.

Of course I do agree that there is always this idea that becoming an MVP should bring in some benefits. When thinking of “becoming an MVP”, you inevitably start to think of those benefits at some point. As in, “is it worth putting in all that effort to be awarded?”

In that sense, Mark has done a great job explaining the benefits.

However, I wanted to give you a little different perspective.

First of all, let’s be honest, I am not a great speaker. My main contribution to the community has always been this blog and LinkedIn. There were a few tools, there were occasional presentations, and there were forum answers at some point. But all those videos and conferences… I’m just not wired that way.

Somehow, I have still been awarded twice so far. What did it really give me?

Consider the NDA access. I do appreciate it since, that way, I often hear about upcoming changes and can provide early feedback before the changes become public. However, I can rarely act on that information since I can’t reference it. In other words, if I knew of an upcoming licensing change (it’s only an example, please don’t start pulling your hair), I could only influence license purchase decisions indirectly. On the technical side, it could be more helpful. But, again, how do you explain technical or architecture decisions which are made based on the NDA information?

Do I appreciate NDA access, though? Absolutely. Even if, more often than not, I can’t use it to justify my decisions, it gives me the feeling that I can influence product group direction. How about that? All of a sudden, I am not just a “client” utilizing the product – I am a bit of a stakeholder who can influence where the product goes.

What about the money? In my “independent consultant” world, I know a lot of people who are making more even though they are not MVPs. Maybe it’s different for full-time employees, but I can’t say much about that.

Speaking engagements. Personally, I am not looking for them that actively. On a practical note, though, I think those engagements are tied to the previous point, which was “money”. More speaking engagements and more publicity mean better recognition, and, in the end, more opportunities to land better jobs/contracts. On the other hand, that’s travel, that’s spending time away, etc.

How about free software and tools? I have MSDN and Azure credits. I have Camtasia. Etc. That does help. The tricky part there is… what if I don’t get renewed next year? I will lose all that. But, then, to what extent can I rely on those benefits when preparing my sample solutions, tools, posts, presentations, etc.? The way I personally deal with this: I am trying to use these kinds of benefits, of course, but I am trying not to over-rely on them. For example, rather than getting an MVP instance of Dynamics 365, I’m getting one through the Microsoft Action Pack subscription. Am I using MSDN? Of course. If I lose it, I’ll deal with it when the time comes.

So, in general, I think my overall idea of the MVP program has not changed much in the last year.

However, again on a practical note, what if, after doing all your research, you still wanted to be an MVP? My personal recipe is relatively simple:

  • Find your own motivation for making those community contributions. As for me… I always thought that I could learn more through sharing. No, I am not always sharing just because I want to share. I am sharing because, while doing so, I can fine-tune my own skills and knowledge. After all, how do you write about something if you don’t understand it? The same goes for the tools – it’s one thing to have something developed for myself, but it’s another thing to have a tool that somebody else can use. In the same manner, how do you answer a forum question if you don’t know the answer? You’ll just have to figure out that answer first.
  • Once your motivation and efforts are aligned with the MVP program, and assuming you’ve been doing whatever it is you’ve been doing for some time, you will be awarded. Yes, you may have to get in touch with other MVPs just to become nominated, but, more likely than not, by the time you do it (and assuming you’ve been making quality contributions), you will already be on the radar, so the question of being nominated won’t be a question at all.

Of course, this recipe does not guarantee the award, since there is no formula to calculate the value of your contributions ahead of time. Well, you may just have to start doing more of those, and then, a little more again. And you’ll get there.

CDS (current environment) connector is playing hide and seek?

Have you ever seen a connector playing hide and seek? Just look at the recording below:

  • The Common Data Service (current environment) connector does not show up when I type “related records” on the first screen
  • But it does show up when I do the same on the other screen


What’s the difference?

From what I could see so far, the only difference is that, in the first case, my Flow is created outside of the solution. In the second case, I’m creating a Flow within a solution.

But, that magic aside, if you have not seen that connector yet, it’s definitely worth looking at since we are getting additional functionality there:

  • FetchXML
  • Relate/Unrelate records
  • Actions
  • A unified on update/create/delete trigger
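For the FetchXML part, here is a rough sketch of the kind of query you might paste into the List Records action (the account entity and the attribute names are placeholder examples, not from any specific solution):

```python
import xml.etree.ElementTree as ET

# A minimal FetchXML query of the kind the List Records action accepts.
# The account entity and the attribute names are placeholder examples.
fetch_xml = """
<fetch top="10">
  <entity name="account">
    <attribute name="name" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>
"""

# Sanity-check that the query is well-formed before pasting it into the action
root = ET.fromstring(fetch_xml)
print(root.find("entity").get("name"))  # -> account
```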


And, also, this connector can be deployed through your solutions without having to update the connections.

Actually, it’s a bit more complicated. If you deploy the Flow with such a connector through a managed solution, it will start working in the new environment.

BUT. If you choose to look at that flow in the new environment, you’ll notice that the connections are broken, so you won’t be able to change anything in the Flow until you fix the connections.

Here, notice how the Flow ran a few times:


But the connections, if you decide to look at them, are broken:


The trick with this connector is not to touch those connections. Just leave them be, deploy that Flow through a solution, and, probably, don’t forget to delete the solution when deploying an updated version of the Flow (for more details on this, have a look at the previous post).

It’s the balloons time (when the help panes are feeling empty)


I just learned how to create balloons!


At first, it actually did not look that interesting at all when I clicked the question mark button:




Yep, it felt a little empty there. So, I started to wonder, what can I do to make it more interesting?

Turned out there is quite a bit we can do:

We can add sections, images, bullet lists, videos, some other things… and, of course, those balloons.

The thing about the balloons, though, is that they are linked to the page elements, so, if the elements change (or if you navigate to a different page), the balloons might stop flying. Well, that’s just a note – other than that, the balloons are still awesome.

So, what is it we may want to keep in mind about the help panes?

We can add them to solutions, though, apparently, only while in the classic mode



We can work with the help XML using the definition below, though I guess that involves extracting solution files, updating them, and then packing them back into a solution file (which would be a good use for the SolutionPackager tool)
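As a sketch of that round trip, assuming the standard SolutionPackager command line (the solution file and folder names below are made up):

```python
import subprocess

def packager_args(action, zipfile, folder):
    # Argument list for SolutionPackager.exe (the tool that ships with the
    # Dynamics 365 SDK); the file/folder names here are just placeholders.
    return ["SolutionPackager.exe",
            f"/action:{action}",
            f"/zipfile:{zipfile}",
            f"/folder:{folder}"]

extract = packager_args("Extract", "HelpSolution.zip", "extracted")
pack = packager_args("Pack", "HelpSolution_updated.zip", "extracted")

# After extracting, edit the help XML under "extracted", then pack it back:
# subprocess.run(extract, check=True)
# subprocess.run(pack, check=True)
print(extract[1])  # -> /action:Extract
```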

The help pane will stay open as the user keeps navigating in the app

This may bring the help pane a little bit out of context, so the users would have to remember to either close it or click the “home” button at the top left corner to change the context for the help pane.

Help panes are language-specific

I just switched to French, and the help pane is feeling empty again


I used a Dynamics 365 environment everywhere above, but this actually works in CDS environments, too



Well, it seems to be a pretty cool feature. Of course, help authoring may take time, and keeping it up to date may take time, too. But it seems to be a nice, easy-to-use feature which can help even if we choose to only use it sporadically where it’s really needed.

A tricky Flow

I got a tricky Power Automate Flow the other day – it was boldly refusing to do what it was supposed to do. In retrospect, as much as I would want to say that it was all happening because Power Automate was in a bad mood, there seem to be a couple of things we should keep in mind when creating Flows, and, more specifically, when using the Common Data Service (current environment) connector:


That connector supports FetchXML queries in the List Records action, which makes it very convenient in situations where you need to query data based on some conditions.

Here is what may happen, though.

Let’s imagine some simple scenario for the Flow:

  • The Flow will start on create of the lead record
  • When a lead is created, the Flow would use “List records” action to select a contact with the specific last name
  • Finally, the flow would send an email to that contact
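The steps above can be sketched in plain code (the “Parker” last name, the email address, and the field names are made-up examples, not from the actual Flow):

```python
def on_lead_created(lead, contacts, send_email):
    # Sketch of the Flow's logic: on create of a lead, find the contacts
    # with a specific last name and email each of them. "Parker" and the
    # CDS-style field names below are illustrative assumptions.
    matches = [c for c in contacts if c.get("lastname") == "Parker"]
    for contact in matches:
        send_email(contact["emailaddress1"],
                   f"Hello {contact.get('firstname', '')}".strip())

sent = []
contacts = [{"lastname": "Parker", "firstname": "Peter",
             "emailaddress1": "peter@example.com"}]
on_lead_created({"subject": "New lead"}, contacts,
                lambda to, body: sent.append((to, body)))
print(sent[0][1])  # -> Hello Peter
```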


And there will be two environments, so the idea is that we’ll use a managed solution to move this flow from development to production:



Let’s see if it works. I’ve created a lead, and here is my email notification:


But wait, wasn’t it supposed to greet me by name, not just say “Hello”?

Problem is, even though I can use all those attributes in the flow, they have to be added to the FetchXML in order to be queried through the List Records action. Since I did not have firstname included in the fetch, it came up empty.

The fix is simple:
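In FetchXML terms, the corrected query might look roughly like this – firstname is now listed explicitly (the contact entity and the “Parker” condition are just assumed for illustration):

```python
import xml.etree.ElementTree as ET

# Before the fix, only lastname was listed, so firstname came back empty
# even though the attribute exists on the entity. After the fix, firstname
# is listed explicitly. Entity/attribute names follow the standard contact
# entity; the "Parker" value is made up.
fixed_fetch = """
<fetch top="1">
  <entity name="contact">
    <attribute name="lastname" />
    <attribute name="firstname" />
    <filter>
      <condition attribute="lastname" operator="eq" value="Parker" />
    </filter>
  </entity>
</fetch>
"""

attrs = [a.get("name") for a in ET.fromstring(fixed_fetch).iter("attribute")]
print("firstname" in attrs)  # -> True
```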


And I have my email with the proper name now:


Now let’s bring this flow through a managed solution to another environment.

  • Export as managed
  • Import into the prod environment


Before I continue, let’s look at the solution layers for that flow in production:


Everything is perfect, but now we need to fix the connections for the flow:

  • Once the connections have been fixed, apparently we need to save the Flow.
  • What happens to the solution layers when we click “save”, though?



That is, actually, unfortunate. Let’s say I need to update the Flow now.

In the development environment, I can add an extra attribute to the Fetch:


That solution is, then, exported with a higher version, and I’m about to bring it over to production:


I should see that attribute added in production now, right?


You can see it’s not there.

I would guess this problem is related to the solution layering – when updating connections in production, I had to save the flow there, and that created a change in the unmanaged layer. Importing the updated managed solution made changes to the managed layer, but, since it was an existing solution/flow, those changes went just under the unmanaged layer, so they did not show up on the “surface”.

If I go to the solution layers for my flow in production and remove active customizations:


All connections in the Flow will be broken again, but that additional attribute will finally show up:


This is when I can fix the connections, and, finally, get the Flow up and running as expected.

Of course, another option might be to remove the managed solution completely and to re-import the updated version. Since I normally have Flows/Workflows in a separate solution, that would probably work just fine, even if I had to request a downtime window.


Bulk-loading inactive records to CDS


When implementing the ItAintBoring.Deployment PowerShell modules, I did not initially add support for the “status” and “status reason” fields. Usually, we don’t need to migrate inactive reference data, but there are always exceptions, and I just hit one the other day. Reality check.

There is an updated version of the PowerShell script now, and there is an updated version of the corresponding NuGet package.

But there is a caveat.

In CDS, we cannot create inactive records. We have to create a record as “active” first, and, then, we can deactivate it.

Just to illustrate what happens when you try, here is a screenshot of the Flow where I am trying to create a record using one of the inactive status reasons:


The error goes like this:

7 is not a valid status code for state code LeadState.Open on lead with Id d794b380-0501-ea11-a811-000d3af46cc5.

In other words, CDS is trying to use an inactive status reason with the active status, and, of course, those are incompatible.

The workaround here would be to create the record first using one of the active status reasons, and, then, to change the status/status reason.
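A minimal sketch of that two-step workaround, assuming the usual statecode/statuscode field names (the record itself is a made-up example):

```python
def plan_inactive_create(record, statecode, statuscode):
    # CDS won't create a record in an inactive state, so split the load
    # into two operations: first a create without the state fields, then
    # an update that deactivates the record. statecode/statuscode are the
    # standard CDS field names; everything else here is illustrative.
    create_payload = {k: v for k, v in record.items()
                      if k not in ("statecode", "statuscode")}
    update_payload = {"statecode": statecode, "statuscode": statuscode}
    return [("create", create_payload), ("update", update_payload)]

steps = plan_inactive_create(
    {"subject": "Old lead", "statecode": 1, "statuscode": 7},
    statecode=1, statuscode=7)
print([op for op, _ in steps])  # -> ['create', 'update']
```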

If we get back to the bulk data load through the PowerShell scripts above, then it would look like this:

  • Export data without status/status reason into one file
  • Export data with status/status reasons into another file
  • Import the first file
  • Import the second file


In other words, in the export script I would use these two queries (notice how there is no status/status reason in the first one, while the second one is querying all attributes):
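Roughly, the two queries might look like this (the entity and attribute names are made-up examples; the second query simply asks for all attributes):

```python
import xml.etree.ElementTree as ET

# First pass: no status fields, so every record gets created as active.
# The entity and attribute names are made up for this sketch.
fetch_no_status = """
<fetch>
  <entity name="new_referencedata">
    <attribute name="new_name" />
  </entity>
</fetch>
"""

# Second pass: all attributes, including statecode/statuscode, so the
# import can deactivate the records that are supposed to be inactive.
fetch_with_status = """
<fetch>
  <entity name="new_referencedata">
    <all-attributes />
  </entity>
</fetch>
"""

attrs = [a.get("name") for a in ET.fromstring(fetch_no_status).iter("attribute")]
print("statecode" in attrs)  # -> False
```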


Once I’ve run the export, here is what the exported data looks like – you can see the difference:


And, then, I just need to import those files in the same order.

Here is what I had before I ran the import:


Here is what I have after:


It takes a little bit of planning to bulk-load reference data this way, but, in the end, it’s just an extra run for the script, an extra FetchXML for me, and quite a bit of time saved when automating the deployment.

UI Flows in Power Automate (formerly Microsoft Flow)


If you have not heard about UI Flows, give them a try! As in, right now… that’s just some cool stuff from Microsoft which is now in preview.

Log in to your PowerApps portal, select Flows, and choose UI Flows:


The coolest part about it is that, right from the start, you can probably appreciate where it’s all going:

There are no connectors, there is no API, there is nothing BUT recording and re-running of the user actions.

You want to automatically open an application, fill in some fields, click the save button, etc.? There you go. You want to open a web site, populate some fields, click “next”, etc.? That’s another scenario. Basically, we should be able to automate various usage scenarios – I am not sure if this also means we’ll be able to use this for automated testing, and I am also not sure to what extent this will work with various web applications where controls are created/loaded/updated on the fly… But, if it all works out, this is going to be really interesting.

And I am wondering about the licensing, since, technically, it seems there will be no API calls or connectors involved, so there might not be a lot of load on the Microsoft servers when running such flows. Does it mean “not that expensive” licensing? We’ll see, I guess.

Anyway, let’s just try something.

Let’s say I wanted to create a desktop UI flow:


Apparently, I need to download a browser extension. Presumably, that’s to actually perform actions on my computer (heh… how about security… anyway, that’s for later):



Here is a funny hiccup – the installer asked me to close all Edge/Chrome windows. I lost the Flow, and had to re-open and re-create it again.

Continuing from there, I was still getting the same error.

Some back and forth, tried installing the new Edge (Chromium), still the same… Eventually, I tried updating that same Flow through a different portal:

It finally worked that way, and it kept working after that, too.

And I have just recorded a UI Flow which is going to put some text into a notepad window!


Here, have a look (it takes a few seconds to start the test):


This might not seem like much for now, but keep in mind this was just a simple test. At the moment, I am not even sure of the practical applications and/or limitations yet, but that’s for later.

Workflow “TimeOut Until” vs Flow “delay until”

I was looking for a way to implement “timeout until” in Microsoft Flow, and, although there seems to be a way to do it, there is at least one caveat.

First of all, here is the problem. Let’s say I wanted to send a notification email (or assign a case to somebody) on the Follow Up By date:


In the classic workflows, I would set up a workflow like this:


Once the workflow reaches that step, it would be postponed until the follow up date:


Where it becomes interesting is what happens if I update that date on the case:


Looking at the same workflow session, I can see that new date has been reflected there – exactly the same workflow session is now postponed till Nov 11:


Which takes care of a lot of “date-based” notification scenarios.

Can we do it with Microsoft Flows? Well, sort of. There is “delay until” in Flows, so a Flow which seems similar to the workflow above might look like this:




Looks good? Let’s move follow up date to the 15th of November, then.

The classic workflow is, now, waiting for the 15th:


The Flow is still waiting till the 12th, though:


Although, since the Flow is configured to start on the update of the “Follow Up” date, there are, actually, two Flows running now:


One of those flows is waiting till the 15th, and another one is waiting till the 12th.

There are, also, multiple workflow sessions running (for the same reasons – every update/create would start a session):


The difference between those is that “delay until” conditions in the Flows are not updated with the modified date value, so each of those Flows is waiting for a different date/time, but all classic workflows have picked up the new date and are, now, waiting for exactly the same date/time.

Having multiple Workflows or Flows trying to take the same action against the record might be a bit of a problem either way (unless it does not matter how many times the action happens), but, more importantly, where classic workflows will run on time as expected, each Flow might be taking that action at a different time, and the “context” for the action might be different. For all we know, that “follow up date” may have moved by a week, but some instance of the Flow will still be using the original follow up date.

In theory, I guess we could add a condition to the flow to ensure that the flow should still take the action – possibly check if the planned time is within a few minutes of the current time:
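A rough sketch of that check (the five-minute tolerance is an arbitrary assumption, and the dates are made up):

```python
from datetime import datetime, timedelta, timezone

def still_due(planned, now, tolerance_minutes=5):
    # Once "Delay until" fires, only act if the record's follow-up date
    # (re-read after the delay) is still within a few minutes of the
    # current time; otherwise the date was moved and a newer Flow run
    # owns the notification. The tolerance value is arbitrary.
    return abs(now - planned) <= timedelta(minutes=tolerance_minutes)

now = datetime(2019, 11, 12, 9, 0, tzinfo=timezone.utc)
print(still_due(datetime(2019, 11, 12, 9, 2, tzinfo=timezone.utc), now))  # -> True
print(still_due(datetime(2019, 11, 15, 9, 0, tzinfo=timezone.utc), now))  # -> False
```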


There is, also, a 30-day limit for the flows, so this may only work for short-term reminders.

For everything else, it seems we should start using the “recurrence” trigger – on every run, we would need to query the list of records that are past the “due date”, then use “for each” to go over all those records and take the actions. Although, if we wanted those actions not to be delayed by a few hours, we would likely have to use a frequent recurrence schedule (for example, to have the flow started every hour). Or, for the time being, we might keep using classic workflows in these kinds of scenarios, though this would not be the recommended approach.
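A sketch of what each recurrence run would do (the “followupby”/“title” field names and the dates are made up):

```python
from datetime import datetime, timezone

def overdue(records, now):
    # On each recurrence run, pick the records whose due date has passed,
    # then the "for each" branch would act on every match. The field
    # names here are made up for this sketch.
    return [r for r in records if r["followupby"] <= now]

now = datetime(2019, 11, 12, tzinfo=timezone.utc)
cases = [
    {"title": "Case A", "followupby": datetime(2019, 11, 10, tzinfo=timezone.utc)},
    {"title": "Case B", "followupby": datetime(2019, 11, 20, tzinfo=timezone.utc)},
]
for case in overdue(cases, now):
    print(case["title"])  # -> Case A
```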