Monthly Archives: September 2021

Long functions in Dataverse plugins: Part 2

I wrote my previous post knowing full well it was going to sound controversial to any pro-dev, that’s if a pro-dev were to read it. Turned out a fellow MVP, Daryl LaBar, did, and he just raised the bar by responding with a well-articulated article of his own:

https://dotnetdust.blogspot.com/2021/09/Long-Functions-Are-Always-A-Code-Smell.html

Before I continue, I have to mention that I would often question seemingly obvious things – sometimes, that leads to useful findings. Other times, those questions end up being dismissed easily, and life just goes on. No harm done.

In this case, I am definitely not implying that "long code" is inherently better for the plugins, but, having read Daryl's post, I am still not convinced it's inherently worse either, so I'll try to play Devil's advocate and explain why.

First of all, that analogy with the book makes sense, but only to an extent. There are other practices which can improve readability and understanding – I could add a documentation-style comment just before the Execute method, and, then, I could use regions in the plugin to isolate pieces of code in the same manner it's done with the functions:

image
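Just to illustrate what I mean, here is a simplified sketch (not the actual plugin from the screenshot above – the regions and the summary are just examples):

/// <summary>
/// Validates the incoming record, recalculates the totals,
/// and updates the related records (see the regions below).
/// </summary>
public void Execute(IServiceProvider serviceProvider)
{
    #region Initialization
    // ...
    #endregion

    #region Validation
    // ...
    #endregion

    #region Recalculation
    // ...
    #endregion

    #region Related records update
    // ...
    #endregion
}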

Would there be any benefits from the readability perspective? Not necessarily (except that, yes, I don’t need to go back and forth and can follow plugin logic from start to end). Would there be drawbacks from the readability standpoint? It seems it’s the same answer – not necessarily.

Would there be some pieces of code where a function would work better? Of course – when that piece is reusable. But, then, the assumption so far is that not a lot of code is, really, re-usable in the plugins.

Now, Daryl writes that having smaller functions is beneficial for error logging since function names show up in the log:

“if the 300 lines where split into 10 functions of 30 lines, then I’d know the function that would be causing the error and would only have a tenth of the code to analyze for null ref.  That’s huge!”

There is nothing wrong with that, but here is what usually happens:

  • A bug will show up in production
  • We will need to make sure we have a repro to start with
  • At which point we will start debugging to pinpoint the line of code that’s causing the error

Let’s assume we have 30 lines of code – in order to identify one specific line (let’s say we can’t guess easily), we still need to isolate that line to be able to fix it. So, we will either have to add additional diagnostics to the code, or we will try to guess and build different versions of the plugin to see which one stops failing.

If we had to add diagnostics/tracing to each and every line where, potentially, a "null reference" error might happen, an argument could be made that all such lines in the plugin should be instrumented just as a best practice (to make sure we have all the required info the next time an error happens in production).

Which, then, negates the difference between having a function and not having it.

If we went with guessing above (let's say we had 5 educated guesses about which line, exactly, is failing), then we'd need to build 5 versions of the plugin (worst case). Which is not that far from having to build 7-8 versions if we just kept splitting the code in half to see if we could still reach that point when running the test (since 2^8 = 256). However, with those 7-8 runs, we'll know exactly where the error is happening, while with those 5 educated guesses we will often end up just where we started, since we might have guessed wrong to start with.

There is a potential problem with that, of course, and it's all those "if" statements, since they can quickly confuse this kind of "divide and conquer" strategy. And this is where I'd agree with Daryl – we should not have complex "if" structures in the long code, or, at least, we should try making those if-s sequential, as in:

if (…) { }

if (…) { }

Instead of

if (…) {
    if (…) {
    }
}

But that's not the same as having to split the code into shorter functions.

Also, I would often add almost line-by-line tracing to the plugin. Once that's done, it does not matter how long the function is, and/or how complex those if-s are, since you will have enough details in the error message to land on the block of code where the error occurred.

It’s taken to the extreme in the example below, but, if you absolutely need to know where your plugin fails without having to guess, that’s more or less how you’d do it:

image
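Just for illustration, a simplified version of that kind of tracing might look like this (the attribute names below are made up, and the screenshot above shows the same idea taken much further):

public void Execute(IServiceProvider serviceProvider)
{
    var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    tracingService.Trace("Getting the target entity");
    var target = (Entity)context.InputParameters["Target"];

    tracingService.Trace("Reading the credit limit");
    var creditLimit = target.GetAttributeValue<Money>("creditlimit");

    tracingService.Trace("Calculating the new account status");
    // ...

    tracingService.Trace("Done");
}

Whenever the plugin fails, the trace log will show everything written before the error, so there is no need to guess which block of code it was.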

That said, do I always do it? Not at all. I'd often start doing it once there is an issue, and, then, I'd keep that tracing in the plugin – until there is another issue, at which point I'll add more. And so on.

Of course, it would all be different if we were able to actually debug plugins, but, well… and I don't mean the plugin profiler, though it can be quite useful sometimes.

In the last part of his post, Daryl is talking about unit-testing and cherry-picking what to test. Which makes sense to me, but I'd just add that, with a very limited amount of reusable code (that's the basic assumption here), everything else should probably be tested as a whole anyway (in which case the "shorter functions vs longer functions" difference might not be that important). This is because, for instance, it's very likely that one short function will affect what's happening in another short function. As in (there is a quick sketch of this right after the list):

  • We would set max credit limit
  • We would create account log entries (possibly based on what we did above)
  • And we would update account status (might be a VIP or regular account… based on the credit limit we set above)
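Here is a rough sketch of that kind of interdependency (the helper, the tables, and the columns below are all made up for the example):

// Step 1: set the max credit limit - everything below depends on it
var creditLimit = CalculateMaxCreditLimit(account);   // hypothetical helper
account["creditlimit"] = creditLimit;

// Step 2: the log entry is based on the credit limit we just set
var logEntry = new Entity("new_accountlog");
logEntry["new_details"] = $"Credit limit set to {creditLimit.Value}";
service.Create(logEntry);

// Step 3: the account status depends on the same credit limit, too
account["new_accounttype"] = creditLimit.Value > 100000 ? "VIP" : "Regular";

Testing those three steps in isolation would not tell us much about whether they still work correctly together.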

Admittedly, I don't use unit-testing that much in the plugins – I would normally prefer integration testing with EasyRepro (and, quite often, no matter what I prefer, dev testing would still be the last thing we ever get to do on the project, which is not great at all, of course, but that is a different story).

Long functions in Dataverse plugins – is it still a "code smell"?

I am, usually, not too shy about writing relatively long functions in the plugins. And I mean long – not 15 lines, not 30 lines. In a relatively complex plugin I would, occasionally, have 200+ lines in the Execute method. Here is an example:

image

Notice the line numbers. Even if I removed all the comments and empty lines from that code, I’d still be looking at around 200 lines of code.

This does not happen with simple plugins (since there is just not a lot of code which needs to be there in the first place), but this tends to happen with relatively complex ones.

So, of course, by the commonly accepted rules of thumb this is not good, and you can easily find examples where it is suggested to split long functions into smaller ones based on the functionality that can be reused or unit-tested independently.

Which, in turn, brings me to the question of whether that always applies to the plugins, or whether we are basically talking about personal preferences and it all depends?

Where plugins are somewhat special is that they are:

  1. Inherently stateless
  2. Often developed to provide a piece of very specific business logic

The first item above just means that there is very little re-use of in-memory objects. Of course, we can create classes in the usual way, and we can create smaller methods in those classes to sort of split the code into smaller pieces, but instances of those classes will always be short-lived. Which is very different from how they would normally be used in the actual applications, where some objects would be created when the application starts and disposed of when it stops. That might be minutes, hours, days, or even months, and different parts of the same application might be reusing those shared objects all the time.

In the plugins, on the other hand, everything is supposed to be short-lived, and, at most, we are looking at a couple of minutes (although, normally it should be just a fraction of a second).

This is not to mention that the same plugin can run on different servers, and, even when it’s running on the same server, there is no guarantee it won’t be a new instance of the same plugin.

That seems to render the object-oriented approach somewhat useless in the plugins (other than, maybe, for "structuring"), but that does not necessarily render functions useless.

Although, as far as functions go, imagine a plugin that needs to do a couple of queries, then cycle through the results, do something in the loops (different things in each loop), and, possibly, update related records in each loop.

Below is a relatively simple example with just a few conditions, and it already has 15 lines.

image

The first 4 lines there are setting up the query. Although, what if we wanted to add a related table to the query? And, maybe, more than one. That would be 3-4 lines per link, so we could easily end up with 10+ lines of code just to configure the query in those cases.
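For example, a query with a single linked table might look roughly like this (the columns and the filter below are just for illustration):

var query = new QueryExpression("account")
{
    ColumnSet = new ColumnSet("name", "creditlimit")
};
query.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0);

// Each linked table adds another 3-4 lines like these
var contactLink = query.AddLink("contact", "primarycontactid", "contactid");
contactLink.Columns = new ColumnSet("emailaddress1");
contactLink.EntityAlias = "primarycontact";

var results = service.RetrieveMultiple(query);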

And there might be a few blocks like that in the same plugin.

We should also add the usual plugin initialization – that’s 4-6 lines more:

image
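For reference, it's usually some variation of this:

public void Execute(IServiceProvider serviceProvider)
{
    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
    var serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
    var service = serviceFactory.CreateOrganizationService(context.UserId);
    var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

    var target = (Entity)context.InputParameters["Target"];
    // ...
}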

Although, the initialization part is, often, isolated into the PluginBase class, which saves us a few lines at the beginning of the plugin.

But, still, with each plugin doing relatively unique queries, and applying relatively unique business logic to the specific operation/table, the functionality is, often, not that reusable. Which means that, up until this point, a lot of this seems to be personal preference and general "rule of thumbing". Plus, maybe, personal preference from the code readability standpoint; although (and that is an example of the personal preference), I'd often prefer longer code in such cases, since I don't have to jump back and forth when reading it / debugging it.

But what about unit testing? FakeXrmEasy is the testing framework that probably comes to mind whenever we start thinking of writing unit tests for the plugins. The concept there is very interesting – we would create a fake context, which we would, then, execute our plugin against. This is a great idea, but, of course, it implies that we'd need to pre-create the context properly, to pre-populate it with the test data, etc. Which might result in quite a bit of work/maintenance, too.

Would it really matter for FakeXrmEasy how long our functions are in the plugins? Not that much, it seems, since FakeXrmEasy is treating plugins as black boxes, more or less. It does not care about the internal structure of the plugins being tested. Its main purpose is to fake the live context, as the name implies :)
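For instance, a test might look roughly like this (a sketch along the lines of the FakeXrmEasy 1.x API – the plugin class and the column names below are made up):

[TestMethod]
public void Plugin_Should_Set_Account_Type()
{
    var context = new XrmFakedContext();
    var service = context.GetOrganizationService();

    // Pre-populate the fake context with test data
    var account = new Entity("account") { Id = Guid.NewGuid() };
    account["creditlimit"] = new Money(200000);
    context.Initialize(new List<Entity> { account });

    // The whole plugin runs as a black box against the fake context
    context.ExecutePluginWithTarget<MyAccountPlugin>(account);

    // Assert on the resulting data, not on the internal functions
    var updated = service.Retrieve("account", account.Id, new ColumnSet(true));
    Assert.AreEqual("VIP", updated.GetAttributeValue<string>("new_accounttype"));
}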

In which case, just on principle, and with the assumptions above (which are that the majority of the code in each individual plugin is, generally, not reusable across the project, and, also, that unit testing may have to be done at the plugin level rather than at the function level), does it matter how long that code is?

What do you think?


PS. This led to a bit of a discussion

Here is what Daryl LaBar wrote in reply:

https://dotnetdust.blogspot.com/2021/09/Long-Functions-Are-Always-A-Code-Smell.html

And here is my reply to his reply above:

Long functions in Dataverse plugins: Part 2

C# code in Power Automate: let’s sort a string array?

In the previous post, I wrote about how we can use C# code in Power Automate connectors now. The example given there (converting a string to uppercase) was not that useful, of course, since we could simply use an expression and a “toUpper” function directly in the expression.

So I thought of a more useful application of this new functionality, and, it seems, sorting string arrays might be one of those.

Mind you, it can be done without C# code. For example, we could run an Office Script:

https://www.tachytelic.net/2021/04/power-automate-sort-array-objects/

Which looks like a valid option (it's coding anyway), but, if we did not want to use Office connectors (or, perhaps, for those of us who would prefer C#), it can also be done with C# using the "code" functionality in a custom connector:

image

You will find both the swagger definition and the script files for this connector in the git repo:

https://github.com/ashlega/ITAintBoringITAFunctions/tree/main/ConnectorFiles

And here are some explanations below:

1. The SortStringArray operation will have arrays as both the input and the output parameters

image

2. In the code, the incoming array will be converted into a string array, then sorted, then sent back through the response

image
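The actual script is in the repo above, but, roughly, it boils down to something like this:

public class Script : ScriptBase
{
    public override async Task<HttpResponseMessage> ExecuteAsync()
    {
        var contentAsString = await this.Context.Request.Content.ReadAsStringAsync().ConfigureAwait(false);

        // Incoming JSON array -> string array
        var items = JArray.Parse(contentAsString).Select(t => t.ToString()).ToArray();

        // Sort it and send it back through the response
        Array.Sort(items);

        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = CreateJsonContent(new JArray(items).ToString());
        return response;
    }
}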

That’s quick and simple; although, I have to admit it took me a few hours to figure out all the plumbing (swagger definitions, using JArray above, etc). But, I guess, it’s fine to expect a bit of a learning curve initially.

What definitely helped, though, was creating a small project in Visual Studio and testing some of the same code there.

Anyways, that’s all for now!

C# code in Power Automate? No way…

Did you know that you can easily add C# code to your Power Automate flows?

And you will need no Azure Functions, no plugins, no external web services. You will only need 5 minutes.

Although, your code will only have 5 seconds to run, and you will be limited in what exactly you can do in that code. But still:

image

The code above will simply convert a given string value to uppercase – that's not much, but you should get the idea.

Here is how you do it:

1. In the maker portal, go to Data->Custom Connectors, and start creating a new connector

You will need to provide the host name… Want to try google.com? That's fine. Once there is code, it's going to take over the codeless definition, so here we go:

image

2. To make it simple, let’s use no security

image

3. Now, for the definition, you might want to use the swagger below

swagger: '2.0'
info: {title: TestCodeConnector, description: Test Code Connector, version: '1.0'}
host: google.com
basePath: /
schemes: [https]
consumes: []
produces: []
paths:
  /:
    post:
      responses:
        default:
          description: default
          schema: {type: string}
      summary: StringToUpperCase
      operationId: stringtouppercase
      parameters:
      - name: value
        in: body
        required: false
        schema: {type: string}
definitions: {}
parameters: {}
responses: {}
securityDefinitions: {}
security: []
tags: []

This just says that a string comes in, and a string comes out.

4. Finally, for the code, you can use something like the example below (my apologies for the formatting – it seems there are some special characters, so copy-paste would not work otherwise)

public class Script : ScriptBase
{
    public override async Task<HttpResponseMessage> ExecuteAsync()
    {
        return await this.HandleToUpperCase().ConfigureAwait(false);
    }

    private async Task<HttpResponseMessage> HandleToUpperCase()
    {
        HttpResponseMessage response;

        var contentAsString = await this.Context.Request.Content.ReadAsStringAsync().ConfigureAwait(false);

        response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new StringContent(contentAsString?.ToUpper());

        return response;
    }
}

That one above will take a string and upper case it:

image

5. And that’s about it – you can test it now

image

From there, just create a flow, pick your new connector, and use it in the flow:

image

And do the test:

image

There are a few more notes:

  • This is a preview feature
  • You can find some extra details in the docs: https://docs.microsoft.com/en-us/connectors/custom-connectors/write-code
  • Custom code execution time is limited to 5 seconds
  • Only a limited number of namespaces is available for you to use
  • When there is code, it takes precedence over the codeless definition. In other words, no API calls are, actually, made – unless you make them from the code

So, there are limits. But, then, there are opportunities. There is a lot we can do in the code in 5 seconds.

PS. And, by the way, don’t forget about the upcoming PowerPlatform Chat event: https://www.itaintboring.com/itaintboring-powerplatform-chat/

Dataverse dilemma: should it be a flow or should it be a plugin?

When it comes to implementing business logic in the Dataverse environments, we have a few valid options:

  • Cloud flows
  • Plugins
  • Classic workflows

Yes, classic workflows are not at the top of our minds these days, but, with no Flow alternative to the real-time workflows, I don't see them going away easily yet.

However, for all other purposes, Power Automate flows are a superior technology (compared to the workflows), which is why classic workflows did not even show up in the title of this post.

When it comes to the plugins and Flows, the question seems to have a simple answer if you ask a non-developer, since, of course, you do need to know how to develop plugins, and, in general, it's easier to jump into Flow development instead.

That said, I’ve been developing plugins for a long time, and I did do other development every now and then (less so once I switched to Dynamics, to be fair), but, learning Power Automate to the point where I felt comfortable with it took quite a bit of time. There are little tricks that eventually make you productive, and you need to learn them as you go.

Yet there are still areas of the flows where I feel a bit dizzy. As in, how do we handle errors so we can report on those errors properly? How do we make multiple updates consistent if there is no concept of transactions? Those are just two which are always at the top of my mind, but there are smaller ones, too (as in, “what’s the exact syntax for that expression I need to write here?”)

Of course, flows are great for integrations. Forget about Dataverse for a second – there are hundreds of connectors to other systems. It might be really time-consuming to implement those integrations from scratch in the plugins, so, if there is a connector in Power Automate already, creating a Power Automate flow is always my first choice in such cases.

However, what if we are, mostly, focused on the Dataverse?

Is it still better to use Flows, or is it better to use plugins? From what I've seen so far, the answer seems to depend on the business logic complexity, but, also, on the team's tolerance towards pro-dev:

image

From the complexity perspective, the inherent problem of the flows is that they are no-code.

How come it's a problem? Well, Flows just do not support the classic concepts and tools of code development – there are no functions, there is no object-oriented programming, error handling is not as simple, and there is no way to easily "refactor" a flow (for example, to do find and replace). Besides, the usual code is much more concise when looking at it in the dev tools (compared to a Flow in the flow designer), which makes it easier to grasp what's happening, especially as the code/flow starts growing.

I've seen it a bunch of times already: we would start with a flow (easy to start, and it seems to be the preferred way), and, then, that Flow would grow out of proportion quickly, so we'd either have to start using child flows (which does not necessarily help that much), or we'd have to give up on the flow and create a plugin instead.

And remember, this is all in the context of the Dataverse business logic – as soon as we throw in some kind of integration requirement (even with SharePoint), things can easily change, since this is where the Flows approach gets a significant boost.

Then, of course, there is that other aspect of the decision-making process.

Are there enough developers who can develop plugins, and, also, do we want to lock into that mode of development? This, of course, is the main reason Flows showed up in the first place – democratizing development and making it less dependent on the pro-dev team members. That's the y-axis on the diagram above: team tolerance towards pro-dev development. It's not, necessarily, team experience that matters. It's, also, whether the management is ready to commit and say "we are fine with having plugin developers on the team", and that would be for both initial development and ongoing maintenance.

My personal feeling is that, no matter how good Flows are for relatively simple tasks, business logic complexity will still drive teams towards plugin development as soon as they have access to the proper "dev resources", though I might be wrong – time will tell.

PS. There is, also, a mixed model. We can create Custom API-s (that would be plugin development, basically), and we can use those custom API-s from our flows. We can also create flows and call them from our plugins (using HTTP triggers, for example). The former seems to be how things should be mixed, but the latter is also doable. And, still, in this mixed model, the need for dev resources on the team is not going away, so nothing changes for the diagram above.
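Just to illustrate the latter option, calling an HTTP-triggered flow from a plugin is, essentially, a matter of posting to the trigger URL – something along these lines (the URL and the payload below are placeholders):

using (var client = new HttpClient())
{
    // The "When an HTTP request is received" trigger URL - ideally stored
    // in a secure configuration rather than hardcoded
    var flowUrl = "https://prod-00.region.logic.azure.com/workflows/...";

    var payload = new StringContent(
        "{ \"accountid\": \"" + accountId.ToString() + "\" }",
        Encoding.UTF8,
        "application/json");

    var result = client.PostAsync(flowUrl, payload).Result;
    if (!result.IsSuccessStatusCode)
    {
        throw new InvalidPluginExecutionException("The flow call failed: " + result.StatusCode);
    }
}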

Dataverse Custom API

It’s funny – I have been hearing about Custom API-s for a while now, but, until today, it’s been just “oh, there is something out there I need to try out. But it seems to be a reincarnation of custom actions, so, maybe, I’ll look at it later”.

So, the day has come to look at it. And, at the end of the day, I am neither extremely excited nor disappointed. Except for some edge cases, there seems to be nothing there I could not do with the classic custom actions, and, yet, we got a few extra features to cover those edge cases.

Originally, custom actions (not the newer custom API) were meant to allow non-programmers to build shared processes. In a somewhat stretched way, one could say they were a pre-historic version of Power Automate… although, maybe I’m overstretching too much.

But, then, what actually happened to them is that developers realized at least a couple of other benefits:

  • It turned out custom actions were great for calling from JavaScript web resources
  • There was also that idea of using custom actions to store configuration values (without the need to create configuration tables… sorry, they were called entities back then)

Do you still remember the latter? Quite a few people used to entertain that idea, and, if you do remember it, you’ve probably been around long enough!

But, of course, now that there are Environment Variables, there is Power Automate, and classic workflows are, mostly, discouraged… the only real benefit of custom actions seems to be the ability to register a message, to associate a plugin with the message, and to have that message, and the plugin, exposed to other parts of the system.

And that's exactly where Custom API comes in. Basically, it picks up where custom actions left off. In the sense that, since there is no need to use custom actions to expose application configuration properties, and there is no need to use them to define "shared" workflows, the rest might better be done slightly differently. Besides, when doing it differently, a few extra features can be thrown in.

It's easy to define a new Custom API using the maker portal:

image

Yet there are other ways of doing it, just have a look at the docs if you are curious.

There are a few interesting features there:

1. Plugin type

You can associate a plugin with your custom API. But, unlike most of the other plugins, that one will run in stage 30. Normally, we can only register plugins in stage 20 (pre-operation) and stage 40 (post-operation).

And we don't even need to use the Plugin Registration Tool to register those steps. Well, we still need to use it to register the assembly, but that's about it.
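The plugin itself is just a regular plugin that reads the request parameters and sets the response parameters – something along these lines (the parameter names below are made up):

public void Execute(IServiceProvider serviceProvider)
{
    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    // "Request" parameters defined on the Custom API
    var inputValue = (string)context.InputParameters["new_InputValue"];

    // Whatever the "main operation" is supposed to do
    var result = inputValue?.ToUpper();

    // "Response" parameters defined on the Custom API
    context.OutputParameters["new_OutputValue"] = result;
}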

2. And, yet, we can still register plugins for pre and post operations

For that, we just need to make sure that "Allowed Custom Processing Step Type" is set to something other than "None".

For example, with the custom API above, I could make use of the usual execution pipeline and register another plugin like this:

image

My original plugin would run in stage 30.

Another plugin could run in stage 20 (suddenly, "Pre-Operation" starts making even more sense, since stage 30 would be for the "main operation", and that's where my original plugin would run).

Then, of course, I could also have another plugin in "post-operation" (which is stage 40), and that one would run after the "main operation".

There is, also, the ability to make the API private, to associate a security role with it, and to make it a function (which would render it useless for Power Automate). But this is where I'd call those scenarios "edge cases".

The fact that we can have the main operation plugin running in stage 30 looks interesting, though, since it all but guarantees that the order of the plugins will be as expected (pre-operation plugins will run first, then the main operation one will run, then post-operation plugins will take over).

Also, it seems we have more options when defining parameter types, and we can tie a custom API to entity collections, not just to entities. That might be handy, too.

With that, I guess I’ll just be using Custom API, and not a custom action, the next time I need “custom action” functionality.