Author Archives: Alex Shlega

Creating a Business Process Flow for multiple entities

Business process flows can definitely help guide our application users – they always look nice in the demos, and they easily draw positive feedback. But, it seems, when a BPF is designed to cover multiple entities, there are a few quirks.

Here is the BPF I wanted to create:

image

There are two entities involved:

  • Main Entity
  • Stage 1 Entity

Once that simple BPF above is created, here is what it looks like in the user interface while I’m still looking at the Main Entity record:

image

In order to move that BPF to the next stage, I can create a Stage 1 Entity record, save it, and it’ll be fine.

But what if I made “Start” stage active again?

image

My BPF would lose that link to the Stage 1 Entity record – I could try moving that process to the next stage at this point, but I would have to create another record:

image

In order to fix this, I need a lookup from Stage 1 Entity back to the Main entity:

image

With that new relationship in place, it will also show up in the BPF designer:

image

That relationship will now be used as a filter for the BPF.

For example, once the lookup is set on the Stage 1 Record:

image

I will see that Stage 1 Record in the dropdown when trying to navigate to the next stage in the BPF:

image

Well, let’s say it’s all been set up as per the above, so the BPF is, currently, in the second (“Details”) stage:

image

What happens if I choose to move that BPF back to the “Start” stage again? It will go back to the previous state – Stage 1 Entity records won’t be in the “process path” anymore:

image

Looking at the Advanced Find results for the “BPF Automation” entity at this point, it’s obvious that the link to the “Stage 1…” record has been removed:

image

Wouldn’t it be great if the BPF kept that relationship somehow instead?

Well, since it does not, it seems this is where we may end up with more than one “Stage 1 Entity” record for the same BPF if the user decides to create another one. Which is not, necessarily, what we wanted.

One workaround would be to raise an error when an attempt is made to create another “Stage 1 Entity” record for the same BPF, but how do we do it?

For example, here is how it might work:

  • We could create a lookup from the Main Entity to the Stage 1 Entity
  • Then, we could use a real-time workflow on create of the Stage 1 Entity record to check whether that lookup on the Main Entity had already been populated. If it had, we could terminate the workflow and raise an error. If it had not, the workflow could put a value in that lookup and allow the BPF to proceed

Here is what that lookup would look like in my example:

image

And the workflow follows below.

Checking the condition (if that new lookup from the Main entity to the Stage 1 Entity has been set):

image

Terminating the workflow if the condition is met:

image

Setting the lookup otherwise:

image

Now if I used the same BPF again, went to the “Details” stage, got a “Stage 1 Entity” record created, and went back to the “Start” stage, here is what it would look like:

image

And, if I tried creating another Stage 1 Record, I’d get an error:

image

Actually, since Quick Create forms are supported in business process flows, we can make this experience a little better by adding a quick create form for Stage 1 Entity:

image

In the example above, our users would get an error message without really leaving the “Main Entity” screen.
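
For what it’s worth, the same guard could also be implemented in code rather than through a real-time workflow – for example, as a synchronous plugin registered on the Create message of the Stage 1 Entity. The sketch below is not what this post uses, and all the schema names in it (ita_mainentityid, ita_stage1entityid) are hypothetical:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class PreventDuplicateStage1Record : IPlugin
{
    // Assumes synchronous registration on the post-operation stage of Create
    // for the Stage 1 Entity; all attribute names are hypothetical
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // The Stage 1 Entity record being created, with its lookup to the Main Entity
        var stage1Record = (Entity)context.InputParameters["Target"];
        var mainRef = stage1Record.GetAttributeValue<EntityReference>("ita_mainentityid");
        if (mainRef == null) return;

        // Check whether the Main Entity record already points at a Stage 1 Entity record
        var main = service.Retrieve(mainRef.LogicalName, mainRef.Id, new ColumnSet("ita_stage1entityid"));
        if (main.GetAttributeValue<EntityReference>("ita_stage1entityid") != null)
        {
            throw new InvalidPluginExecutionException(
                "A Stage 1 Entity record has already been created for this Main Entity record.");
        }

        // Otherwise, link the Main Entity record to the newly created Stage 1 Entity record
        var update = new Entity(mainRef.LogicalName, mainRef.Id);
        update["ita_stage1entityid"] = stage1Record.ToEntityReference();
        service.Update(update);
    }
}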

Is there a better option to handle this scenario? Let me know if you have other suggestions!

Feature of my day: Version history for Canvas Apps

This is definitely not something new – you’ll find quite a few blog posts on this topic. But if, for some reason, you are not aware of this feature, it might make your day. As it made mine a few months back… and the only reason I’m talking about it now is that, well, it’s better late than never.

image

This actually just shows how different model-driven applications and canvas applications are, right?

With canvas apps, the app itself is, actually, the container that encapsulates all the functionality. To an extent, this is what solutions do in the model-driven applications world.

But, even so, I’m not sure that’s the right analogy. After all, even though we can specify a version # for our solutions in Dataverse, we can’t restore solutions from previous versions – there is no option like the one below:

image

I can opt to restore my app from the previous version, and here is what happens:

image

That’s simple and efficient. It used to be Version 1, now it’ll be restored as Version 3:

image

And, just like that, if I messed something up in Version 2 so it’s not usable anymore, I can simply restore my app from the older version and continue from there.

Well, I wish we had that for model-driven apps, too 🙂

Let’s show an image on the Power Apps Portal form, too

In the previous post, I wrote about how we can use Dataverse image columns in Power BI Paginated reports (or in SSRS reports, for that matter). What kept bugging me is whether we could actually use a Power Apps Portal to surface that image for the report. It turned out that is not doable, at least not “out of the box”, so the question had to be turned on its head, really: can we display that kind of image on the portal to start with?

Again, out of the box we can’t – instead of the nice image, we will see this kind of “Unsupported” error:

image

But, of course, we can still resort to pretty much the same workaround mentioned in the previous post for the reports and display the image:

image

You will find all of the Power Automate flow setup instructions in the previous post, but, just to summarize the steps:

  • There is a Flow that’s accepting documentId as a query parameter
  • It, then, reads the image from the corresponding image attribute of the Dataverse record
  • And it sends that image back through the “Response” action

Therefore, we just need to re-use that flow on the portal. Which is easy to do with a custom web template – we just need to throw in this kind of JavaScript there:

var image = "<img width='200px' src='https://<flow_trigger_url>&documentId=" + 
(location.search.split('id=')[1] || '').split('&')[0] + "'/>"; 
document.write(image);


Assuming the record id is passed through the query string (as an “id” parameter), the script above will add a new image to the document, using the flow trigger url (extended with the documentId query parameter) as the source.

From there, I just had to create a custom web template, copy source from the template I used before, and add the script above in the right place:

image

And then, of course, associate my page template to the new web template.

This does not, really, make it an editable image, since there is no “edit” button there. But, at least, we can present all those images to the portal users this way and ensure they can see what model-driven app users can see (actually, we can do the same with the documents stored in the “file” columns).

Here is a model-driven app form and a portal form side-by-side:

image

Have fun!

Paginated Power BI / SSRS reports and Dataverse image columns

With the image columns in Dataverse, we can easily add fields to store images directly in the tables like this:

image

This is great, since this makes all those images/documents much easier to identify than if they were stored in SharePoint.

In terms of storage, those images are stored in file storage, which is cheaper than Dataverse database storage, and that makes this a viable option in many cases. However, I guess partly because of that, working with images is sometimes not that straightforward.

For example, what if I wanted to build a Power BI Paginated report (the same approach should work with SSRS) and incorporate those images into the report? It’s actually doable, though, it seems, it’s not as simple as just adding an Image control to the report canvas and pointing it to the image attribute. Here is an example:

image

Dataverse datasource exposes a few image-related columns:

image

However, when I tried using the ita_image attribute directly with the image control, it did not work:

image

The result of this is not exactly what we would probably expect:

image

There is another attribute there, which is the image url. That’s an actual url you can download the image from; however, it only works for an authenticated user. If I use it to specify the image control’s url:

image

The result is no different from what we saw before:

image

That makes sense, since the report is trying to open that url outside of the authenticated Datasource connection, and it’s not working.

So, how do we make it work?

I guess there are at least a couple of options:

  • We could use a Flow or a plugin to upload images to a web site, and, then, we could reference those images in the report through the external url parameter of the image control. That involves a bit of setting up and we are giving up some of the security, so it might not be that fast or even acceptable
  • Another solution might be to use something else to sort of fake the url. Below, I’ll do it with a Power Automate flow. But it could have been an Azure Function, a Logic App, or, possibly, something else (I wonder if those images could have been exposed through a Power Apps portal? It’s something to try…)

Here is the flow:

image

On the report side, there is an image control that’s configured to use an external url. That url is, really, the trigger url of the flow above, extended with a custom “documentId” parameter:

image
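
Just to illustrate, the external url expression on that image control might look something along these lines (the flow trigger url placeholder and the field name are hypothetical):

="https://<flow_trigger_url>&documentId=" & Fields!ita_documentid.Value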

On the flow side, there is a Compose action which will extract documentId from the query:

image
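
If it helps, the expression behind that Compose action would typically be something like this (assuming the flow starts with the “When a HTTP request is received” trigger):

triggerOutputs()['queries']['documentId']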

And, then, here is what the remaining flow steps look like:

image

I guess you could say this is not necessarily very efficient – there is a report that’s utilizing a flow to grab some data, not to mention the API calls and Flow usage involved… This might not scale well if you need thousands of images per report, but, for a small one (and if you need to prototype quickly), this works just fine.

For a more robust and scalable solution, and unless there is, actually, a way to somehow display that image directly in the report (without having to use “external url”), a better option might be to use a Flow/plugin to upload those images somewhere else first, and, then, display them in the report from that new external location.

“When an item is created” SQL trigger in Power Automate

There is a SQL Connector in Power Automate, and it has a couple of triggers which are dependent on IDENTITY/ROWVERSION columns in SQL:

image

The existence of those two columns in the table is a pre-requisite for the trigger to work:

https://docs.microsoft.com/en-us/connectors/sql/#when-an-item-is-created-(v2)

image
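
For example, a table that satisfies that pre-requisite might look roughly like this (a sketch – the table and id_num names come from the example below, the other columns are assumptions):

CREATE TABLE WatchTable (
    id_num INT IDENTITY(1,1) PRIMARY KEY,  -- the IDENTITY column the trigger relies on
    name NVARCHAR(100) NULL,
    row_ver ROWVERSION                     -- the ROWVERSION column the trigger relies on
);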

There is a catch there, though, and I probably would not know about it if not for a co-worker who shared it with me: if we truncate a table, that may lead to some unexpected results.

Imagine that there is a table and there is a bunch of records there:

image
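
The query in that screenshot is not reproduced here, but it might have been something along these lines (a sketch, with an assumed “name” column):

DECLARE @i INT = 1;
WHILE @i <= 600
BEGIN
    INSERT INTO WatchTable (name) VALUES (CONCAT('Record ', @i));
    SET @i = @i + 1;
END;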

The query above will insert 600 records into the table, which means id_num will go up to 600:

image

Which will make the flow run 600 times:

image

So what if I truncated the table at this point? It would be empty after that:

image

This would reset the identity column seed, so the next time I added data to the table, the numbering would start from 1 again.

And the flow would not be triggered:

image

It won’t start for the next insert, and for the next one… not until I have 601 records in the table – that’s when it will kick in.

It seems the connector stores the last identity column value somehow, and it won’t trigger the Flow till there is a record where the identity column value exceeds the last known value.

Which makes sense, except that in testing, where we tend to delete or truncate test data every now and then, this may lead to the Flow not running at all.

If you’ve managed to run into this, there are at least a couple of workarounds:

  • Reset the SQL identity seed so that new values exceed what’s been cached by the connector: DBCC CHECKIDENT (WatchTable, RESEED, 700). Of course, you’d need to know, at least approximately, what that value should be, but, unless you are concerned about gaps, that should not be a big problem. This is, probably, the fastest solution
  • Another option would be to recreate the trigger. That’s easy to do in the dev environment – just add that Flow to a different solution, export it as unmanaged, delete the flow, then re-import the solution. In the downstream environments, where your solutions are managed, this might be more complicated. It’s still doable, but you may first need to re-organize your managed solutions so that that particular Flow is in its own solution. In which case you should be able to delete that solution, then re-import it, effectively re-creating the trigger along the way

Although, it might be easier to avoid all this altogether by using “delete from…” instead of “truncate” on the SQL side. Since, well, “delete” won’t reset the identity seed.
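
In other words (using the same hypothetical table):

TRUNCATE TABLE WatchTable;  -- resets the IDENTITY seed, which is what confuses the trigger
DELETE FROM WatchTable;     -- removes the rows but leaves the current IDENTITY seed intact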

This connector has some other limitations/caveats. It is definitely very useful when/if you need SQL integration in your flows, but you might want to be aware of those. Especially since they are all conveniently listed in the docs: https://docs.microsoft.com/en-us/connectors/sql/#when-an-item-is-created-(v2)

Have fun!

This blog just hit 20K users per month. What’s next?

I’ve been trying to promote my technology blog through LinkedIn for as long as it’s been out there, and all has been going pretty well.

The Google Analytics audience overview for the last 3 years would please any stock trader if this were a stock chart:

image

Well, at this rate my blog might be following in the footsteps of Tesla stock… in the sense that one day it might just experience a real breakthrough, hit unexpected highs, and…

And, actually, I would not be sure what’s next 🙂

The blog itself has always been my way of summarizing the knowledge I have acquired – it was not even so much about sharing that knowledge, a big part of it was literally my way of digesting pieces of information, bringing them together, and expressing them in the blog just to understand them even better.

Over time, it has proven to be a good tactic, as you can see from the chart above, but the interesting part is that those numbers – they are just numbers. It’s not a money-making blog, I am not selling anything here…

There are, of course, some reputational gains, though I am not sure they matter that much. To start with, this blog is, probably, the main reason I’ve been recognized by Microsoft as a Most Valuable Professional (MVP). Don’t get me wrong, that recognition is a big deal in my field – in a way, the fact that I was recognized as an MVP is something I can probably keep mentioning for the rest of my professional career. And it happened not once, but 3 times in a row so far (which is nothing compared to 15+ times for some of the fellow MVPs, but it’s still quite a bit). Yet, I know a lot of great professionals in my field who have never tried becoming an MVP, but who might, in many ways, be more knowledgeable than I am.

It’s this blog that made me an MVP, and, for that, I’m definitely grateful.

So, where does it go from here?

The answer might not be available right away, but this is what led me to look at the numbers a bit more, and they are, well, interesting.

It’s great to see tens of thousands of visitors coming to this blog monthly, but where are they coming from?

This is where Google Analytics can be very handy. Well, if only I had been paying attention to it earlier 🙂

image

As you can see, Google search is where I’m getting most of the traffic from.

I’ve been very diligent in my efforts to promote this blog through LinkedIn and Twitter. Every now and then, somebody else in the technical community would post a link to my blog, and those links and mentions are always great to see. But, as far as traffic goes, it’s still Google that’s generating most of it.

Of course, it is all related in a way: the more references there are, the more credibility my blog has with Google, so the higher it’s likely ranked in the search results, right? That’s just basic SEO.

For what it’s worth, at this point I can say only a few things with certainty about the blog here:

  • Getting a blog from almost zero to 20K visitors a month is possible, and, I guess, if that had been the goal here, it could have been done sooner
  • No matter what, the main source of traffic, at least in my case, is not direct links – it’s Google
  • And, yet, if not for the links, who knows where it would be now?

Where does the blog go from here is a different question entirely, though. And I might share a few thoughts later, but, for now, I’ll just sleep on it.

Sometimes, we still do certain things using on-premise technologies. I just spent half of my day solving a strange problem with SSIS

It’s been a while since I wrote about SSIS – most of what we do these days is happening in the cloud.

But, sometimes, we still need to resort to the good old time-proven technologies… such as SSIS.

What’s the de-facto standard toolset of any Dynamics/Power Platform consultant who needs to use SSIS? Well, that’s, of course, SSIS itself, and the Kingsway connector in particular.

If you ever had to do the following:

  • Read rows from the file
  • Adjust Dynamics CRM Source query for each line to query data from one environment
  • So you could push certain changes to another environment

You probably know that Dynamics CRM Source does not accept inputs, so you can’t just use a File Source and connect File Source’s output to the Dynamics CRM Source’s input:

image

Actually, this does not have anything to do with Kingsway, since other data sources (OLE DB, for instance) would have exactly the same behavior.

I don’t know if I’ll ever get a chance to write about it again given that I only need to dig into SSIS every so often these days, so, here is a workaround just for the record.

Instead of doing everything in the same data flow task, you can do this:

image

In the first task there, read data from the file and push all file lines to an object variable using a script component:

image

Here is what the script looks like:

image

So, basically, just push everything to an array list, and pass it back to the package through a variable.
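
The actual script is in the screenshot above, but, roughly, a script component like that (with the object variable added to its ReadWriteVariables) might look something like this – the variable and column names below are just placeholders:

using System.Collections;

public class ScriptMain : UserComponent
{
    private ArrayList lines = new ArrayList();

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Collect every incoming file line
        lines.Add(Row.Line);
    }

    public override void PostExecute()
    {
        base.PostExecute();
        // ReadWrite variables can only be set here, so this is where the collected
        // lines are handed back to the package through the object variable
        Variables.FileRows = lines;
    }
}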

From there, set up a Foreach Loop Container to work off your variable:

image

For every iteration, you’ll need to push each individual line to another variable:

image

From there, you can now work with each row separately in the loop. This should also be applicable to other data sources – it could even be Dynamics CRM Source originally, though I’m not sure how to pass complex objects through the enumeration in this case.

Anyways, since we now have an individual file row in the variable, we can use a Script Task to parse that line and populate additional variables; from there, we can use those variables in the other tasks within each iteration to query Dataverse and/or do whatever else we need there.

image

And here is how the parsing is done in that Script Task:

image
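
Roughly, that parsing might look like this (assuming a comma-separated line; the variable names are placeholders and would have to be listed in the Script Task’s ReadOnlyVariables/ReadWriteVariables):

public void Main()
{
    // The current line, as mapped by the Foreach Loop Container
    string line = (string)Dts.Variables["User::CurrentLine"].Value;
    string[] parts = line.Split(',');

    // Populate the variables used by the Dynamics CRM Sources further down the line
    Dts.Variables["User::RecordId"].Value = parts[0];
    Dts.Variables["User::RecordValue"].Value = parts[1];

    Dts.TaskResult = (int)ScriptResults.Success;
}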

And the end result is that I can now use those variables to tweak FetchXml as required for all Dynamics CRM Sources in those other tasks within the Foreach Loop Container.

That seems to be a lot of work for a simple task… and I can’t help but notice that I could probably do this in Power Automate more efficiently. But, well, we don’t always have the choice of technology, so, if you have to do this with SSIS, this might help.

TDS endpoint – getting closer to GA?

Just a few days ago, when I was writing a post about Power BI Paginated Reports, the “Dataverse” connection type was still called “Common Data Service (Preview)” there.

As of today, in the updated version of the Power BI Report Builder, it’s called “Dataverse”:

image

There is no reference to the preview anymore, but, apparently, it’s still a SqlClient connection.

That said, it seems Dataverse SQL Query is still in preview:

https://docs.microsoft.com/en-us/powerapps/developer/data-platform/dataverse-sql-query

Yet there is no mention of the Dataverse datasource in the docs listing the datasources for Power BI Paginated Reports:

https://docs.microsoft.com/en-us/power-bi/paginated-reports/paginated-reports-data-sources

So is it official? Is it not official? Probably not quite, yet. Not until we see it documented in the docs. But, that said, it’s clear some work is happening behind the scenes, and TDS endpoint-based connectors are starting to make it into the Power Platform (and, intentionally or not, they are not even explicitly defined as preview features anymore).

Power BI Paginated Reports and data security

In the previous post, I looked at some of the features of Power BI paginated reports. What was still unclear is how security works.

It turned out it works pretty much as one would expect: users who have no access to the data won’t see that data in the report.

Unless, that is, the report has NOT been configured to use the credentials of the viewing user.

But first things first. For those new to Power BI administration, keep in mind you may need to be given permissions to share reports from the previous workspace, and, for that, you need to be given the “Member” role:

https://docs.microsoft.com/en-us/power-bi/collaborate-share/service-new-workspaces

image

Unless you are given a role that allows sharing, you will likely be getting an ugly little error when trying to share. Although, you will still be able to publish a report if you are at least a contributor, and, then, somebody else could share it with the other users.

That’s the first level of “security” – if a user does not have access to the workspace, the report would have to be shared with them.

Once it is shared, you will find that the user’s ability to see data in the report depends on how the datasource has been configured.

You can configure report datasource through the “manage” action:

image

image

image

Then just make sure the checkbox above (“Report viewers can only access this data source with their own Power BI identities using DirectQuery”) is selected. When that option is selected, Power BI will use the report viewer’s credentials to load data from Dataverse.

If the option is not selected, Power BI will be using the default data source credentials (those will usually be the report author’s).

In other words, if the option is not selected OR if the user has permissions to see data, all data will be visible (#1 on the screenshot below).

If the option is selected AND if the user viewing the report does not have permissions to see certain data, that data will not show up:

image

So, if you are concerned about the security in paginated reports… it seems you should not be. It’s all been taken care of.

The one remaining piece is better integration with model-driven apps (running reports for the grids, running them from the command bar, etc.). That will likely require some scripting, but let me see if I can come up with something in one of the next posts.