Monthly Archives: February 2021

Creating a Business Process Flow for multiple entities

Business process flows can definitely help guide our application users – they always look nice in the demos, and they easily draw positive feedback. But, it seems, when a BPF is designed to cover multiple entities, there are a few quirks.

Here is the BPF I wanted to create:

image

There are two entities involved:

  • Main Entity
  • Stage 1 Entity

Once that simple BPF above is created, here is how it looks in the user interface while I’m still looking at the Main Entity record:

image

In order to move that BPF to the next stage, I can create a Stage 1 Entity record, save it, and it’ll be fine.

But what if I made the “Start” stage active again?

image

My BPF would lose that link to the Stage 1 Entity record – I could try moving that process to the next stage at this point, but I would have to create another record:

image

In order to fix this, I need a lookup from Stage 1 Entity back to the Main entity:

image

With that new relationship in place, it will also show up in the BPF designer:

image

That relationship will now be used as a filter for the BPF.

For example, once the lookup is set on the Stage 1 Record:

image

I will see that Stage 1 Record in the dropdown when trying to navigate to the next stage in the BPF:

image

Well, let’s say it’s all been set up as per the above, so the BPF is, currently, in the second (“Details”) stage:

image

What happens if I choose to move that BPF back to the “Start” stage again? It will go back to the previous state – Stage 1 Entity records won’t be in the “process path” anymore:

image

Looking at the Advanced Find for the “BPF Automation” entity at this point, it’s obvious that the link to the “Stage 1…” record has been removed:

image

Wouldn’t it be great if the BPF kept that relationship somehow instead?

Well, since it does not, this is where we may end up with more than one “Stage 1 Entity” record for the same BPF if the user decides to create another one. Which is not, necessarily, what we want.

One workaround would be to raise an error when an attempt is made to create another “Stage 1 Entity” record for the same BPF, but how do we do it?

For example, here is how it might work:

  • We could create a lookup from the Main Entity to the Stage 1 Entity
  • Then, we could use a real-time workflow on create of the Stage 1 Entity record to check whether that lookup on the Main Entity has already been populated. If it has, we could terminate the workflow and raise an error. If it has not been populated yet, the workflow could put a value in that lookup and allow the BPF to proceed
  • Here is what that lookup would look like in my example:
  • image

And the workflow follows below.

Checking the condition (if that new lookup from the Main entity to the Stage 1 Entity has been set):

image

Terminating the workflow if the condition is met:

image

Setting the lookup otherwise:

image

Now if I used the same BPF again, went to the “Details” stage, got a “Stage 1 Entity” record created, and went back to the “Start” stage, here is what it would look like:

image

And, if I tried creating another Stage 1 Record, I’d get an error:

image

Actually, since Quick Create forms are supported in business process flows, we can make this experience a little better by adding a quick create form for Stage 1 Entity:

image

  • In the example above, our users would get an error message without really leaving the “Main Entity” screen.

Is there a better option to handle this scenario? Let me know if you have other suggestions!

Feature of my day: Version history for Canvas Apps

This is definitely not something new – you’ll find quite a few blog posts on this topic. But if, for some reason, you are not aware of this feature, it might make your day. As it made mine a few months back… and the only reason I’m talking about it now is that, well, it’s better late than never.

image

This actually just shows how different model-driven applications and canvas applications are, right?

Canvas apps are, actually, the containers that encapsulate all the functionality. To an extent, this is what solutions do in the model-driven applications world.

But, even so, I’m not sure that’s the right analogy. After all, even though we can specify a version # for our solutions in Dataverse, we can’t restore solutions from previous versions – there is no option like the one below:

image

With a canvas app, though, I can opt to restore it from a previous version, and here is what happens:

image

That’s simple and efficient. It used to be Version 1, now it’ll be restored as Version 3:

image

And, just like that, if I messed something up in Version 2 so it’s not usable anymore, I can simply restore my app from the older version and continue from there.

Well, I wish we had that for model-driven apps, too 🙂

Let’s show an image on the Power Apps Portal form, too

In the previous post, I wrote about how we can use Dataverse image columns in Power BI Paginated reports (or in SSRS reports, for that matter). What kept bugging me was whether we could actually use a Power Apps Portal to surface that image. It turned out that is not doable, at least not “out of the box”, so the question had to be turned on its head, really: can we display that kind of image on the portal to start with?

Again, out of the box we can’t – instead of the nice image, we will see this kind of “Unsupported” error:

image

But, of course, we can still resort to pretty much the same workaround mentioned in the previous post for the reports and display the image:

image

You will find all of the Power Automate flow setup instructions in the previous post, but, just to summarize the steps:

  • There is a Flow that’s accepting documentId as a query parameter
  • It, then, reads the image from the corresponding image attribute of the Dataverse record
  • And it sends that image back through the “Response” action

Therefore, we just need to re-use that flow on the portal. Which is easy to do with a custom web template – we just need to throw in this kind of JavaScript there:

// Assumes the record id is passed in the query string as the "id" parameter,
// and <flow_trigger_url> stands for the flow's HTTP trigger URL (left as a placeholder here)
var recordId = (location.search.split('id=')[1] || '').split('&')[0];
var image = "<img width='200px' src='https://<flow_trigger_url>&documentId=" + recordId + "'/>";
document.write(image);


Assuming the record id is passed through the query string (as an “id” parameter), the script above will add a new image to the document, and it will use the flow trigger URL (extended with the documentId query parameter) as the source.

From there, I just had to create a custom web template, copy source from the template I used before, and add the script above in the right place:

image

And then, of course, associate my page template with the new web template.

This does not, really, make it an editable image, since there is no “edit” button there. But, at least, we can present all those images to the portal users this way and ensure they can see what model-driven app users can see (actually, we can do the same with the documents stored in the “file” columns).

Here is a model-driven app form and a portal form side-by-side:

image

Have fun!

Paginated Power BI / SSRS reports and Dataverse image columns

With the image columns in Dataverse, we can easily add fields to store images directly in the tables like this:

image

This is great, since it makes all those images/documents much easier to identify than if they are stored in SharePoint.

In terms of storage, those images go into file storage, which is cheaper than Dataverse database storage, and that makes this a viable option in many cases. However, perhaps partly because of that, working with images is not always that straightforward.

For example, what if I wanted to build a Power BI Paginated report (the same approach should work with SSRS) and incorporate those images into the report? It’s actually doable, though, it seems, it’s not as simple as just adding an Image control to the report canvas and pointing it to the image attribute. Here is an example:

image

The Dataverse data source exposes a few image-related columns:

image

However, when I tried using the ita_image attribute directly with the image control, it did not work:

image

The result of this is not exactly what we would probably expect:

image

There is another attribute there – the image URL. That’s an actual URL you can download the image from; however, it only works for an authenticated user. If I use it to specify the image control’s URL:

image

The result is no different from what we saw before:

image

That makes sense, since the report is trying to open that URL outside of the authenticated data source connection, so it’s not working.

So, how do we make it work?

I guess there are at least a couple of options:

  • We could use a Flow or a plugin to upload images to a web site, and, then, we could reference those images in the report through the external URL parameter of the image control. That involves a bit of setup, and we would be giving up some of the security, so it might not be that fast or even acceptable
  • Another solution might be to use something else to sort of fake the URL. Below, I’ll do it with a Power Automate flow, but it could have been an Azure Function, a Logic App, or, possibly, something else (I wonder if those images could have been exposed through a Power Apps portal? It’s something to try…)

Here is the flow:

image

On the report side, there is an image control that’s configured to use an external URL. That URL is, really, the trigger URL for the flow above, extended with a custom “documentId” parameter:

image

On the flow side, there is a Compose action which will extract documentId from the query:

image
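
As an aside, the exact setup is in the screenshot above, but, assuming the flow starts with the “When a HTTP request is received” trigger, that Compose step would typically use an expression along the lines of triggerOutputs()['queries']['documentId'] to pull the parameter out of the query string.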

And, then, here is what the remaining flow steps look like:

image

I guess you could say this is not necessarily very efficient – there is a report that’s utilizing a flow to grab some data, not to mention the API calls and Flow usage involved… This might not scale well if you need thousands of images per report, but, for a small one (and if you need to prototype quickly), this works just fine.

For a more robust and scalable solution, and unless there is, actually, a way to somehow display that image directly in the report (without having to use “external url”), a better option might be to use a Flow/plugin to upload those images somewhere else first, and, then, display them in the report from that new external location.

“When an item is created” SQL trigger in Power Automate

There is a SQL Connector in Power Automate, and it has a couple of triggers which are dependent on the IDENTITY/ROWVERSION columns in SQL:

image

The existence of those two columns in the table is a pre-requisite for the trigger to work:

https://docs.microsoft.com/en-us/connectors/sql/#when-an-item-is-created-(v2)

image
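
Just for illustration, here is a minimal table shape that would satisfy that prerequisite (the WatchTable and id_num names come from the example further down; the dbo schema and the other column names are my own assumptions):

CREATE TABLE dbo.WatchTable (
    id_num INT IDENTITY(1,1) PRIMARY KEY,  -- IDENTITY column, used by the "when an item is created" trigger
    payload NVARCHAR(100) NULL,            -- just a placeholder data column
    row_ver ROWVERSION                     -- ROWVERSION column, used by the "when an item is modified" trigger
);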

There is a catch there, though, and I probably would not know about it if not for a co-worker who shared it with me: if we truncate the table, that may lead to some unexpected results.

Imagine that there is a table and there is a bunch of records there:

image
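
The actual script is in the screenshot above; just for illustration, a hypothetical equivalent that seeds 600 rows into that kind of table could look like this:

INSERT INTO dbo.WatchTable (payload)
SELECT TOP (600) 'test row'
FROM sys.objects a CROSS JOIN sys.objects b;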

The query above will insert 600 records into the table, which means id_num will go up to 600:

image

Which will make the flow run 600 times:

image

So what if I truncated the table at this point? It would be empty after that:

image

This would reset the identity column seed, so the next time I added data to the table, the numbering would start from 1 again.

And the flow would not be triggered:

image

It won’t start for the next insert, and for the next one… not until I have 601 records in the table – that’s when it will kick in.

It seems the connector stores the last identity column value somehow, and it won’t trigger the Flow until there is a record where the identity column value exceeds that last known value.

Which makes sense, except that in testing, where we tend to delete or truncate test data every now and then, this may lead to the Flow not running at all.

If you’ve managed to run into this, there are at least a couple of workarounds:

  • Reset the SQL identity seed so that the new value exceeds what’s been cached by the connector: DBCC CHECKIDENT (WatchTable, RESEED, 700). Of course you’d need to know, at least approximately, what that value should be, but, unless you are concerned about gaps, that should not be a big problem. This is, probably, the fastest solution
  • Another option would be to recreate the trigger. That’s easy to do in the dev environment – just add that Flow to a different solution, export it as unmanaged, delete the flow, then re-import the solution. In the downstream environments, where your solutions are managed, this might be more complicated. It’s still doable, but you may first need to re-organize your managed solutions so that that particular Flow is in its own solution. In which case you should be able to delete that solution, then re-import it, effectively re-creating the trigger along the way

Although, it might be easier to avoid this altogether by using “delete from…” instead of “truncate” on the SQL side. Since, well, “delete” won’t reset the identity seed.
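
In T-SQL terms, with the same hypothetical WatchTable as above:

-- Clears the test data but keeps the current IDENTITY seed, so the trigger keeps firing:
DELETE FROM dbo.WatchTable;

-- This, on the other hand, would reset id_num back to 1, and the trigger would stay silent
-- until the previously reached maximum is exceeded again:
-- TRUNCATE TABLE dbo.WatchTable;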

This connector has some other limitations/caveats. It is definitely very useful when/if you need SQL integration in your flows, but you might want to be aware of those – especially since they are all conveniently listed in the docs: https://docs.microsoft.com/en-us/connectors/sql/#when-an-item-is-created-(v2)

Have fun!

This blog just hit 20K users per month. What’s next?

I’ve been trying to promote my technology blog through LinkedIn for as long as it’s been out there, and all has been going pretty well.

The Google Analytics audience overview for the last 3 years would please any stock trader if this were a stock chart:

image

Well, at this rate my blog might be following in the footsteps of Tesla stock… in the sense that one day it might just experience a real breakthrough, hit unexpected highs, and…

And, actually, I would not be sure what’s next 🙂

The blog itself has always been my way of summarizing the knowledge I have acquired – it was not even so much about sharing that knowledge, a big part of it was literally my way of digesting pieces of information, bringing them together, and expressing them in the blog just to understand them even better.

Over time, it has proven to be a good tactic, as you can see from the chart above, but the interesting part is that those numbers – they are just numbers. It’s not a money-making blog, I am not selling anything here…

There are, of course, some reputational gains, though I am not sure they matter that much. To start with, this blog is, probably, the main reason I’ve been recognized by Microsoft as a Most Valuable Professional (MVP). Don’t get me wrong, that recognition is a big deal in my field – in a way, the fact that I was recognized as an MVP is something I can probably keep mentioning for the rest of my professional career. And it happened not once, but 3 times in a row so far (which is nothing compared to 15+ times for some of the fellow MVPs, but it’s still quite a bit). Yet, I know a lot of great professionals in my field who have never tried becoming an MVP, but who might, in many ways, be more knowledgeable than I am.

It’s this blog that made me an MVP, and, for that, I’m definitely grateful.

So, where does it go from here?

The answer might not be available right away, but this is what led me to looking at the numbers a bit more, and they are, well, interesting.

It’s great to see tens of thousands of visitors coming to this blog monthly, but where are they coming from?

This is where Google Analytics can be very handy. Well, if only I had been paying attention to it earlier 🙂

image

As you can see, Google search is where I’m getting most of the traffic from.

I’ve been very diligent in my efforts to promote this blog through LinkedIn and Twitter. Every now and then, somebody else in the technical community would post a link to my blog, and those links and mentions are always great to see. But, as far as traffic goes, it’s still Google that’s generating most of it.

Of course it is all related in a way, since the more references there are, the more credibility my blog has with Google, so the higher it’s likely ranked in the search results, right? That’s just basic SEO.

For what it’s worth, at this point I can say only a few things with certainty about the blog here:

  • Getting a blog from almost zero to 20K a month is possible, and, I guess, if it were the goal here, it could have been done sooner
  • No matter what, the main source of traffic, at least in my case, is not direct links – it’s Google
  • And, yet, if not for the links, who knows where it would be now?

Where the blog goes from here is a different question entirely, though. And I might share a few thoughts later, but, for now, I’ll just sleep on it.