Monthly Archives: September 2021

Skeletons in the closet: missing technical documentation

When working with Microsoft Dataverse, have you ever tried to figure out what exactly is happening behind the scenes, and why a specific column is getting a value you would not expect it to get?

One might think it is easy to do, but consider how many ways there are to actually change that column value:

image

  • A Power App (canvas or model-driven) can make a change
  • A Power Automate Flow can make a change
  • A plugin can make a change
  • A classic workflow can make a change
  • And this is not to mention possible integrations/external processes

The first 4 on this list are not even exceptional in any way – they are just tools we use as and when we need them, so, on a relatively large project, we would often end up having all of those mixed and matched based on factors such as:

  • Available functionality (synchronous vs asynchronous is, probably, the most prominent example)
  • Microsoft recommendations (use Power Automate, avoid workflows)
  • “Center of excellence” recommendations (if a version of it exists in the organization)
  • Business preferences, as strange as it sounds (low-code as much as possible)
  • Project team preferences overall
  • Personal preferences of the team members

And this can turn into a nightmare when there is no obvious structure to it, and when there is no documentation.

A record is created, and a Flow/workflow/plugin kicks in. However, precisely because there are different ways to respond, some logic might end up in the Flow, other logic might end up in the plugin, and there could be a business rule or a JavaScript on top of that. Now, there could be valid reasons for mixing everything – for instance, it’s quite possible that we would only want synchronous logic to be in the plugin, and the rest might be in the Flow since we would want functional consultants to work on those.

I always feel bad when another team member has to ask “where does this value get set – is it in the plugin? Is it in the Flow? Is it happening in JavaScript?”

And I always feel frustrated when it’s me who is asking the same question.

Because, in either case, I usually have to spend quite a bit of time figuring out how all those pieces work together and where exactly the change is happening.

Wouldn’t it be nice to have some kind of documentation, or, at least, some way of answering these kinds of questions faster? This is exactly why, in the pro-code world, adding comments directly to the code has been recognized as a best practice (and has been well documented). Here is one example:

https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/xmldoc/
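
For instance, an XML documentation comment on a plugin’s Execute method might look like this (a generic illustration – the summary text and the Flow name are made up):

/// <summary>
/// Recalculates the account credit limit on update.
/// Synchronous logic only – asynchronous follow-ups live in the "Account updates" Flow.
/// </summary>
/// <param name="serviceProvider">Service provider supplied by the Dataverse plugin pipeline.</param>
public void Execute(IServiceProvider serviceProvider)
{
    // ...
}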

However, there is no universal way of documenting Power Automate flows, workflows, plugins, scripts, etc. We could, probably, start creating diagrams/documents, but all those details are extremely difficult to document, and, even if someone were diligent enough to maintain that level of documentation, it might turn into a full-time job for that person on anything but a relatively simple project.

This is why, sooner or later, almost every project becomes a bit of a “black box” for new team members. This is not to say the same problem does not exist on pro-code projects, but, because of the variety of tools, there just seems to be an extra layer of it in Power Platform.

Well, now that the skeleton has been uncovered… what do we do about it? I have no perfect answer, but here is what I’ve seen so far:

  • For the plugins, apparently, commenting in the code helps. Putting some thought into the structure of those plugins helps, too
  • With Classic Workflows, adding notes to the description may help to an extent
  • With Power Automate flows, adding “notes” to the actions may be helpful, as well as filling in the Flow description
  • It also helps to keep high-level diagrams updated every now and then so that the relationships between different parts of the system are a bit clearer

Although, as helpful as those can be, you will usually need a way to perform solution-wide search, and this is one benefit of using source control:

https://docs.microsoft.com/en-us/dynamics365/customerengagement/on-premises/developer/use-source-control-solution-files?view=op-9-1

Personally, I don’t believe a lot of folks would be able to make advanced changes directly in the extracted XML/JSON files in source control without breaking something. However, it’s easy to search over all those files for a column name, and that can quickly help identify all the low-code/pro-code components that are using that column.

That does require some discipline, though, and the clients need to understand how important this is for future maintenance and support. Which, again, is not always the case – it usually comes as the clients, projects, and teams mature.

And, yes, I have had my share of projects where I did not put enough effort into documenting the implemented solutions. Although, as my (somewhat lame) excuse, in many cases there was no one to provide this kind of documentation to. Which is just how those skeletons end up stuck in the closet.

PS. Do you have a “skeletons story” to share for the month of October? Everyone’s invited:

image

Skeletons in the closet – are there things that should have been done differently on your Power Platform projects?

The month of October is upon us, and, with that, there are all the pumpkins, costumes, decorations, and all the other Halloween stuff.

So, then, why don’t we talk about the skeletons we have either found, or, possibly, left in the virtual closets of all those Dynamics/PowerPlatform projects we have worked on? While doing this, we might also talk about why we should have done certain things differently, and how that might have improved the outcomes.

Pretty sure that could turn into an interesting series, but, of course, each of us, alone, only has so many skeletons – you would not be working on projects just to create them, right?

Although, who am I to say… Just a few months ago I was chatting with someone who mentioned they were working on a solution I had worked on 6 or 7 years earlier. And that scared me – I could hardly stop myself from asking how many problems they had found in my implementation, since I knew there were some.

In either case, everyone is invited. Share your story in your blog, and I’ll put a link here. If you don’t have a blog, send me a message on LinkedIn (https://www.linkedin.com/in/alexandershlega/), and I’ll post it on my blog. Or do a YouTube video. Or reach out to the community some other way and let me know.

Those should be real-life stories, though!

Just like with every Halloween trick, there will be a treat. At the end of October, I’ll write a blog post featuring the best stories. I mean, it’s you who will be featured there! Unless you choose to stay anonymous, of course:)

Also, the best stories will be featured in the ItAintBoring PowerPlatform Chat session on October 26. You are welcome to co-present if you’d like, or I can just present them with your permission.  If you want to hear the stories, or if you want to co-present, don’t forget to register:

https://www.linkedin.com/events/skeletonsinthecloset-whenthings6849130439559516160/

Blog users by country – is this a reflection of PowerPlatform usage by country?

I was looking at my blog statistics in Google Analytics today (I do a review every now and then), and noticed something I’m curious about.

To start with, I don’t do a lot of presentations, so my blog, for the most part, is all about delivering written material. Which is available in any time zone at any time, and it’s all about Power Platform.

However, looking at the number of users by country in the last 6 months, here is what I see:

image

Don’t get me wrong – I would not have dreamed of having that many visitors when I was starting, and I’m grateful to all of you for coming here every now and then.

However, the numbers above mean that about 50% of those reading my blog are from countries where English is the first language. India somewhat stands out there, of course, and I’m assuming this is partially because there is a lot of IT outsourcing, and, because of that, India is almost literally “on the same page” as the United States in that sense.

However, the rest of the world is definitely under-represented relative to its population. Consider Germany and France, for example. Together, they are about 5 times as big as Canada, yet, combined, they yield only about as many users for my blog as Canada does.

There are, likely, at least a few possible explanations:

  • Language barrier
  • PowerPlatform usage is different in different parts of the world
  • All those numbers misrepresent the real picture, and there is a lot of noise in them (spammer probes, irrelevant Google search hits, etc.)

It seems there is no way of telling which one it is by looking at my own blog stats, but I am wondering, from your experience, whether the chart above represents PowerPlatform usage by country, or whether it is more a reflection of some other factors?

Side panes in model-driven apps

I was playing with side panes tonight, and, even though there is no arguing they have great potential, I wonder if you’ve already started using them, or if there are a few missing features holding you off?

To start with, side panes are pretty well documented. They are in public preview, so, on the one hand, the implementation might not be final, but, on the other hand, it is considered stable enough to be previewed:

https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/clientapi/create-app-side-panes

If you want to test side panes quickly, you can do it in less than a minute:

  • Open your model-driven app in the browser
  • Press F12 to open the dev tools
  • Copy-paste the code below to the console in the dev tools

Xrm.App.sidePanes.createPane({
    title: "Accounts",
    paneId: "AccountList",
    canClose: false
}).then((pane) => {
    pane.navigate({
        pageType: "entitylist",
        entityName: "account"
    });
});

This will create a side pane which will display the list of accounts:

image

In the same manner, you can add more panes if you wish.

There could be some interesting usage scenarios there, since, for instance, we could use this to display user tasks, cases, etc. Basically, to some extent, this would replace (or, possibly, complement) dashboard functionality, since we could mix and match different views on the same screen.

There are a couple of interesting caveats there, though.

When creating a side pane, we can specify the pane width. If we don’t specify it properly, table forms displayed in the side pane might not be fully visible:

image

So we probably need to allocate a little more width for those panes when creating them, but that might be taking space away from everything else we want to display there.

It seems it would be useful if we could open those records in a popup somehow, or, possibly, if we could instruct the model-driven app to open them in the main area of the app (not in the side pane). Not sure if this is something that will be coming eventually.

Then, it seems, there is no way to pass “context” from the main area to the side pane and back. In other words, what if I wanted to display a view in the main app area (just like we normally do), then display the selected record in the side pane?

Last but not least, there is no supported way to load side panes on startup of a model-driven application. We would need to execute JavaScript to do that, but that’s not something we can normally do. Although some things have certainly been brewing there – Mehdi El Amri had a great post on this topic:

https://xrmtricks.com/2021/05/07/how-to-run-javascript-code-when-loading-a-model-driven-app/

All that said, there is at least one very legitimate scenario at the moment, and that is using side panes to display lookup forms:

For that to work, you can use the following javascript web resource:

function onFormLoad(executionContext)
{
    var columnName = "parentcustomerid"; // the lookup column to intercept - change this to your own column
    var formContext = executionContext.getFormContext();
    formContext.getControl(columnName).addOnLookupTagClick(
        function(executionContext){
            var formContext = executionContext.getFormContext();
            // Prevent the default behavior (opening the record in the main window)
            executionContext._eventArgs._preventDefault = true;
            var lookupValue = formContext.getAttribute(columnName).getValue()[0];
            var pane = Xrm.App.sidePanes.getPane("LookupDetails");
            if(typeof(pane) == "undefined" || pane == null){
                // Create the side pane on first use
                Xrm.App.sidePanes.createPane({
                    title: "Lookup details",
                    paneId: "LookupDetails",
                    canClose: true,
                    width: 400
                }).then((pane) => {
                    displayLookupInPane(pane, lookupValue.entityType, lookupValue.id);
                });
            }
            else{
                // Reuse the existing pane
                displayLookupInPane(pane, lookupValue.entityType, lookupValue.id);
            }
        }
    );
}

function displayLookupInPane(pane, entityType, id)
{
    pane.navigate({
        pageType: "entityrecord",
        entityName: entityType,
        entityId: id
    });
}


Just make sure to do a couple of things:

  • Update that script to use your lookup attribute name for the columnName variable
  • Configure the form to use onFormLoad function as an “on load” event handler

Anyways, have fun!

ItAintBoring PowerPlatform Chat session summary, Sept 28

First of all, it was nice to see familiar faces there, and it was at least equally nice to see unfamiliar ones! Thank you all for showing up to participate in the discussion – it was much appreciated.

To recap, we had a quick presentation – you will find the link below:

https://docs.google.com/presentation/d/116l7TaolAAIjOrbJv1PxuhGNr1eFG58-iCe26Tu0qNg/edit?usp=sharing

image

And, in the discussion that followed, a few additional items were brought up, so I’ve listed them below (hopefully, I did not miss anything important):

1. In general, it seems the consensus was that, for complex calculations, plugins might be somewhat more suitable.

2. Flows definitely outmatch plugins when it comes to the integrations.

3. Where plugins can use the tracing service, flows have “Run History” available right away, and, in many cases, it is much more convenient to look at the run history than at the traces.

4. Plugins can literally plug into the execution pipeline and make changes to the records without incurring extra API calls (as in, in the pre-operation stage we can set target record attributes without calling service.Update – see the sketch below). In a flow, that would have to be a separate update, which is an API call. Incidentally, there were a few people on the call who considered this important, and that probably has to do with the amount of data processed on their projects. Which makes sense, since every API call counts in those cases (keeping in mind the current API limits).
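
To illustrate that last point, here is a minimal sketch of a pre-operation plugin (the column name is just a placeholder): the Target attribute is set in memory, and the platform persists it as part of the same operation, so no extra API call is made.

using Microsoft.Xrm.Sdk;

public class SetDefaultsPlugin : IPlugin
{
    public void Execute(System.IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // Pre-operation: modifying the Target changes the record "in flight",
        // so no service.Update (and, therefore, no additional API call) is needed
        if (context.InputParameters.Contains("Target")
            && context.InputParameters["Target"] is Entity target)
        {
            target["new_somecolumn"] = "calculated value";
        }
    }
}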

Aside from that, I was wondering if there is a scenario where a business user would actually be creating their own Flows. Still looking for somebody who could share their story on this one! One potential scenario that was mentioned is, possibly, asking users to follow/use a template when creating their own Flows.

We also talked about classic workflows and how it might be challenging for Microsoft to take them away, although I think we were somewhat divided as to whether we should still be using them. I’d say the use of classic workflows is generally discouraged these days. However, that does not mean they are just going to break all of a sudden. Besides, Power Automate is asynchronous, and, in that sense, real-time workflows are often the only possible low-code option.

With that, another session is coming up in two weeks – we are going to talk about how Power BI can replace SSRS when it comes to reporting/print forms. Although, this could also turn into a broader discussion of how to do “reporting and print forms”. The event link is below – I will be happy to see you there:

https://www.linkedin.com/events/oct12itaintboringpowerplatformc6843586222267478016/

Using C# code in custom connector to connect to Dataverse?

This one is almost of academic interest, since, after all, there is a Dataverse connector available already. But, just on principle, I was wondering whether it was possible to use C# code in a custom connector to connect to Dataverse.

Actually, there is nothing special about Dataverse – the same approach could be used with other cloud services. Basically, all we need to do is:

  • Figure out how to do Azure authentication
  • Write some code to call required API-s
  • Process results and return them to the calling Power Automate flow

And, of course, we need to keep in mind the 5-second execution limit, which makes this even more academic in a way (since some calls are just meant to take longer, and there is nothing we can do about that).

Anyways, it turned out the above is totally doable and not even that complicated.

First things first – we will need to register an app in Azure (unless you have one already) to configure Azure Authentication for the connector.

The process is described here: https://docs.microsoft.com/en-us/powerapps/developer/data-platform/walkthrough-register-app-azure-active-directory

Below are a few screenshots from my Azure portal.

There is an application:

image

There is a secret:

image

You may also need to grant administrative consent for the API-s:

image

And there is a redirect URI:

image

As for the redirect URI, you are, actually, supposed to add it to the application once you have created the connector (see below). Although, I have a feeling it’s going to be the same URI in this case.

Still, having registered the application, you can start setting up the connector:

image

Just like before, you can use pretty much anything for the host name (this time I figured I’d go with microsoft.com):

image

On the security tab, start with the swagger file – you can get it from git:

https://github.com/ashlega/ITAintBoringITAFunctions/tree/main/ITACDSConnectorFiles

image

Close the swagger editor and configure the remaining part of the security tab as per the screenshot below:

image

Use OAuth 2.0 for the authentication type.

Use Azure Active Directory for the identity provider.

For the client ID, use the application ID from the Azure portal.

For the client secret, use the client secret of the application you registered above in the Azure portal.

For the resource URL, use your environment url.

Then proceed to the “Definition” stage, and to “Code” right away – enable code, and copy-paste the contents of the Script.cs file from git: https://github.com/ashlega/ITAintBoringITAFunctions/tree/main/ITACDSConnectorFiles

image

Make sure to replace the URL there with your own environment URL.

The interesting part about that code is that, essentially, we can just reuse the incoming request. The authentication token is already there, so we don’t need to worry about that part at all.
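
In other words, the script can simply forward the caller’s bearer token to the Dataverse Web API. Roughly like this (a simplified sketch, not the exact Script.cs from the repo; the environment URL and the query are placeholders):

public class Script : ScriptBase
{
    public override async Task<HttpResponseMessage> ExecuteAsync()
    {
        // The name fragment to search for comes in the request body
        var name = await this.Context.Request.Content.ReadAsStringAsync().ConfigureAwait(false);

        // Query the Dataverse Web API for matching users (replace the URL with your environment URL)
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://yourorg.crm.dynamics.com/api/data/v9.1/systemusers?$select=fullname&$filter=contains(fullname,'" + name + "')");

        // Reuse the token that arrived with the incoming request -
        // the connection has already been authenticated against Azure AD
        request.Headers.Authorization = this.Context.Request.Headers.Authorization;

        return await this.Context.SendAsync(request, this.CancellationToken).ConfigureAwait(false);
    }
}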

Now you can create the connector, and you should be able to test it from there. Once you try testing it, it will ask you to create a connection first, which is where you’ll provide your credentials:

image

Once the connection is there, you can run the actual test – enter part of a user name, and the connector will return a JSON array with all the users whose names match:

image

Then, of course, you can use it in your Flows.

Is there practical value there? Not sure, to be honest. But, I guess, if you wanted to quickly create a simplified, re-usable Dataverse connector which would do some targeted/specific things in Dataverse, you could do it this way without having to go through the process of setting up the out-of-the-box Dataverse connector actions. And this applies to other Azure services, too (we could probably do the same with SharePoint, etc.).

Interested to know more? Here are a couple of other posts:

C# code in Power Automate? No way…

C# code in Power Automate: let’s sort a string array?

Long functions in Dataverse plugins: Part 2

I wrote my previous post knowing full well it was going to sound controversial to any pro-dev – that’s if a pro-dev were to read it. It turned out a fellow MVP, Daryl LaBar, did, and he just raised the bar by responding with a well-articulated article of his own:

https://dotnetdust.blogspot.com/2021/09/Long-Functions-Are-Always-A-Code-Smell.html

Before I continue, I have to mention that I often question seemingly obvious things – sometimes, that leads to useful findings. Other times, those questions end up being dismissed easily, and life just goes on. No harm done.

In this case, I am definitely not implying that “long code” is inherently better for plugins, but, having read Daryl’s post, I am still not convinced it’s inherently worse either, so I’ll try to play Devil’s advocate and explain why.

First of all, the book analogy makes sense, but only to an extent. There are other practices which can improve readability and understanding – I could add a documentation-style comment just before the Execute method, and, then, I could use regions in the plugin to isolate pieces of code in the same manner it’s done with functions:

image
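
In text form, that structure might look something like this (just a sketch – the region names and the summary are made up):

/// <summary>
/// Handles account updates: credit limit, account log, account status.
/// </summary>
public void Execute(IServiceProvider serviceProvider)
{
    // ...initialization...

    #region Recalculate the credit limit
    // (logic goes here)
    #endregion

    #region Create account log entries
    // (logic goes here)
    #endregion

    #region Update the account status
    // (logic goes here)
    #endregion
}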

Would there be any benefits from the readability perspective? Not necessarily (except that, yes, I don’t need to go back and forth and can follow plugin logic from start to end). Would there be drawbacks from the readability standpoint? It seems it’s the same answer – not necessarily.

Would there be some pieces of code where a function would work better? Of course – when that piece is reusable. But, then, the assumption so far is that not a lot of code is, really, re-usable in the plugins.

Now, Daryl writes that having smaller functions is beneficial for error logging since function names show up in the log:

“if the 300 lines where split into 10 functions of 30 lines, then I’d know the function that would be causing the error and would only have a tenth of the code to analyze for null ref.  That’s huge!”

There is nothing wrong with that, but here is what usually happens:

  • A bug will show up in production
  • We will need to make sure we have a repro to start with
  • At which point we will start debugging to pinpoint the line of code that’s causing the error

Let’s assume we have 30 lines of code – in order to identify one specific line (let’s say we can’t guess easily), we still need to isolate that line to be able to fix it. So, we will either have to add additional diagnostics to the code, or we will try to guess and build different versions of the plugin to see which one stops failing.

If we had to add diagnostics/tracing to each and every line where, potentially, a “null reference” error might happen, an argument could be made that all such lines in the plugin should be instrumented just as a best practice (to make sure we have all the required info the next time an error happens in production).

Which, then, negates the difference between having those functions and not having them.

If we went with guessing above (let’s say we had 5 educated guesses about which exact line is failing), then we’d need to build 5 versions of the plugin (worst case). Which is not that far from having to build 7-8 versions if we just kept splitting the code in half and checking whether we could still reach that point when running the test (since 2^8 = 256, that’s enough to pinpoint one line out of a few hundred). However, with those 7-8 runs, we’ll know exactly where the error is happening, whereas with the 5 educated guesses we will often end up just where we started, since we might have guessed wrong to begin with.

There is a potential problem with that, of course, and it’s all those “if” statements, since they can quickly confuse this kind of “divide and conquer” strategy. And this is where I’d agree with Daryl – we should not have complex “if” structures in long code, or, at least, we should try making those if-s sequential, as in:

if (…) { }

if (…) { }

Instead of

if (…) {
    if (…) {
    }
}

But that’s not the same as saying we have to have shorter functions.

Also, I would often add almost line-by-line tracing to the plugin. Once that’s done, it does not matter how long the function is, or how complex those if-s are, since you will have enough detail in the error message to land on the block of code where the error occurred.

It’s taken to the extreme in the example below, but, if you absolutely need to know where your plugin fails without having to guess, that’s more or less how you’d do it:

image
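
For reference, a toned-down sketch of the same idea (the business logic here is just a placeholder):

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class TracedPlugin : IPlugin
{
    public void Execute(System.IServiceProvider serviceProvider)
    {
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Each step leaves a breadcrumb, so the failing block shows up
        // in the error details / trace log without any guessing
        tracing.Trace("Retrieving the target account");
        var account = service.Retrieve("account", context.PrimaryEntityId, new ColumnSet("creditlimit"));

        tracing.Trace("Recalculating the credit limit");
        account["creditlimit"] = new Money(100000);

        tracing.Trace("Updating the account");
        service.Update(account);
    }
}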

That said, do I always do it? Not at all. I’d often start doing it once there is an issue, and, then, I’d keep that tracing in the plugin – till there is another issue, when I’ll add more. Etc.

Of course, it would be a different story if we were able to actually debug plugins, but, well… And I don’t mean the plugin profiler, though it can be quite useful sometimes.

In the last part of his post, Daryl talks about unit-testing and cherry-picking what to test. Which makes sense to me, but I’d just add that, with a very limited amount of reusable code (that’s the basic assumption here), everything else should probably be tested as a whole anyway (in which case the difference between shorter and longer functions might not be that important). This is because, for instance, it’s very likely that one short function will affect what’s happening in another short function. As in:

  • We would set max credit limit
  • We would create account log entries (possibly based on what we did above)
  • And we would update account status (might be a VIP or regular account… based on the credit limit we set above)

Admittedly, I don’t use unit-testing that much in the plugins – I would normally prefer integration testing with EasyRepro (and, quite often, no matter what I prefer, dev testing is still the last thing we ever get to do on a project. Which is not great at all, of course, but that is a different story).

Long functions in Dataverse plugins – is it still a “code smell”?

I am usually not too shy about writing relatively long functions in plugins. And I mean long – not 15 lines, not 30 lines. In a relatively complex plugin I would occasionally have 200+ lines in the Execute method. Here is an example:

image

Notice the line numbers. Even if I removed all the comments and empty lines from that code, I’d still be looking at around 200 lines of code.

This does not happen with simple plugins (since there is just not a lot of code which needs to be there in the first place), but this tends to happen with relatively complex ones.

So, of course, by the commonly accepted rule of thumb this is not good, and you can easily find articles suggesting that long functions be split into smaller ones based on functionality that can be reused or unit-tested independently.

Which, in turn, brings me to the question of whether that always applies to plugins, or whether we are basically talking about personal preferences and it all depends.

Where plugins are somewhat special is that they are

  1. Inherently stateless
  2. Often developed to provide a piece of very specific business logic

The first item above just means that there is very little reuse of in-memory objects. Of course, we can create classes in the usual way, and we can create smaller methods in those classes to split the code into smaller pieces, but instances of those classes will always be short-lived. Which is very different from how they would normally be used in actual applications, where some objects are created when the application starts and disposed of when the application stops. That might be minutes, hours, days, or even months, and different parts of the same application might be reusing those shared objects all the time.

In the plugins, on the other hand, everything is supposed to be short-lived, and, at most, we are looking at a couple of minutes (although, normally it should be just a fraction of a second).

This is not to mention that the same plugin can run on different servers, and, even when it’s running on the same server, there is no guarantee it won’t be a new instance of the same plugin.

That seems to render the object-oriented approach somewhat useless in plugins (other than, maybe, for “structuring”), but it does not necessarily render functions useless.

Although, as far as functions go, imagine a plugin that needs to do a couple of queries, then cycle through the results, do something in the loops (different things in each loop), and, possibly, update related records in each loop.

Below is a relatively simple example with just a few conditions, and it already has 15 lines.

image

The first 4 lines there are setting up the query. But what if we wanted to add a related table to the query? And, maybe, more than one? That would be 3-4 lines per link, so we can easily end up with 10+ lines of code just to configure the query in such cases.

And there might be a few blocks like that in the same plugin.
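
For illustration, here is roughly what such a query block looks like once one linked table is added (a sketch with placeholder table and column names):

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class QuerySamples
{
    public static EntityCollection GetActiveAccountsWithContacts(IOrganizationService service)
    {
        var query = new QueryExpression("account")
        {
            ColumnSet = new ColumnSet("name", "creditlimit")
        };
        query.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0);

        // Each linked table adds another 3-4 lines
        var contactLink = query.AddLink("contact", "primarycontactid", "contactid");
        contactLink.Columns = new ColumnSet("emailaddress1");
        contactLink.EntityAlias = "primarycontact";

        return service.RetrieveMultiple(query);
    }
}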

We should also add the usual plugin initialization – that’s 4-6 lines more:

image

Although the initialization part is often isolated into a PluginBase class, which saves us a few lines at the beginning of the plugin.

But, still, with each plugin doing relatively unique queries and applying relatively unique business logic to a specific operation/table, the functionality is often not that reusable. Which means that, up to this point, a lot of this seems to be personal preference and general “rule of thumbing” – including preferences from the code readability standpoint; although (and this is an example of such a preference), I’d often prefer longer code in these cases since I don’t have to jump back and forth when reading or debugging it.

But what about unit testing? FakeXrmEasy is the testing framework that probably comes to mind whenever we start thinking of writing unit tests for plugins. The concept there is very interesting – we create a fake context, which we then execute our plugin against. This is a great idea, but, of course, it implies that we’d need to set up that context properly, pre-populate it with test data, etc. Which might result in quite a bit of work/maintenance, too.
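
A minimal sketch of what that setup might look like (assuming the FakeXrmEasy 1.x API, and a hypothetical MyPlugin class):

using System;
using System.Collections.Generic;
using FakeXrmEasy;
using Microsoft.Xrm.Sdk;

public class MyPluginTests
{
    public void Should_Create_Account_Log_Entry()
    {
        var fakedContext = new XrmFakedContext();

        // Pre-populate the fake context with test data
        var account = new Entity("account") { Id = Guid.NewGuid() };
        account["creditlimit"] = new Money(1000);
        fakedContext.Initialize(new List<Entity> { account });

        // Execute the plugin against the faked pipeline
        fakedContext.ExecutePluginWithTarget<MyPlugin>(account);

        // ...then assert against fakedContext.CreateQuery("account") etc.
    }
}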

Would it really matter to FakeXrmEasy how long our functions are in the plugins? Not that much, it seems, since FakeXrmEasy treats plugins as black boxes, more or less. It does not care about the internal structure of the plugins being tested – its main purpose is to fake the live context, as the name implies :)

In which case, just on principle, and with the assumptions above (that the majority of the code in each individual plugin is generally not reusable across the project, and that unit testing may have to be done at the plugin level rather than at the function level), does it matter how long that code is?

What do you think?


PS. This led to a bit of a discussion.

Here is what Daryl LaBar wrote in reply:

https://dotnetdust.blogspot.com/2021/09/Long-Functions-Are-Always-A-Code-Smell.html

And here is my reply to his reply above:

Long functions in Dataverse plugins: Part 2

C# code in Power Automate: let’s sort a string array?

In the previous post, I wrote about how we can now use C# code in Power Automate custom connectors. The example given there (converting a string to uppercase) was not that useful, of course, since we could simply use the “toUpper” function directly in an expression.

So I thought of a more useful application of this new functionality, and, it seems, sorting string arrays might be one of those.

Mind you, it can be done without C# code. For example, we could run an Office Script:

https://www.tachytelic.net/2021/04/power-automate-sort-array-objects/

Which looks like a valid option (it’s coding anyway), but, if we did not want to use Office connectors (or, perhaps, for those of us who prefer C#), it can also be done with C# using the “code” functionality in a custom connector:

image

You will find both swagger definition and script files for this connector in the git repo:

https://github.com/ashlega/ITAintBoringITAFunctions/tree/main/ConnectorFiles

And there are some explanations below:

1. The SortStringArray operation will have arrays as both its input and output parameters

image

2. In the code, the incoming array will be converted into a string array, then sorted, then sent back in the response

image
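
In essence, the script does something along these lines (a simplified sketch of what is in the repo; JArray comes from Newtonsoft.Json.Linq, which is available to connector code):

public class Script : ScriptBase
{
    public override async Task<HttpResponseMessage> ExecuteAsync()
    {
        // Read the incoming JSON array from the request body
        var contentAsString = await this.Context.Request.Content.ReadAsStringAsync().ConfigureAwait(false);

        // Convert it to a string array and sort it
        var values = JArray.Parse(contentAsString).ToObject<string[]>();
        Array.Sort(values);

        // Send the sorted array back in the response
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = CreateJsonContent(JArray.FromObject(values).ToString());
        return response;
    }
}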

That’s quick and simple, although I have to admit it took me a few hours to figure out all the plumbing (the swagger definition, using JArray above, etc.). But, I guess, it’s fine to expect a bit of a learning curve initially.

What definitely helped, though, was creating a small project in Visual Studio and testing some of the same code there.

Anyways, that’s all for now!

C# code in Power Automate? No way…

Did you know that you can easily add C# code to your Power Automate flows?

And you will need no Azure Functions, no plugins, no external web services. You will only need 5 minutes.

Your code will only have 5 seconds to run, though, and you will be limited in what exactly you can do in that code, but still:

image

The code above will simply convert a given string value to uppercase – that’s not much, but you should get the idea.

Here is how you do it:

1. In the maker portal, go to Data->Custom Connectors, and start creating a new connector

You will need to provide the host name… Want to try google.com? That’s fine. Once there is code, it’s going to take over the codeless definition, so here we go:

image

2. To make it simple, let’s use no security

image

3. Now, for the definition, you might want to use the swagger below

swagger: '2.0'
info: {title: TestCodeConnector, description: Test Code Connector, version: '1.0'}
host: google.com
basePath: /
schemes: [https]
consumes: []
produces: []
paths:
  /:
    post:
      responses:
        default:
          description: default
          schema: {type: string}
      summary: StringToUpperCase
      operationId: stringtouppercase
      parameters:
      - name: value
        in: body
        required: false
        schema: {type: string}
definitions: {}
parameters: {}
responses: {}
securityDefinitions: {}
security: []
tags: []

This just says that a string comes in, and a string comes out.

4. Finally, for the code, you can use something like the example below

public class Script : ScriptBase
{
    public override async Task<HttpResponseMessage> ExecuteAsync()
    {
        return await this.HandleToUpperCase().ConfigureAwait(false);
    }

    private async Task<HttpResponseMessage> HandleToUpperCase()
    {
        // Read the incoming string from the request body
        var contentAsString = await this.Context.Request.Content.ReadAsStringAsync().ConfigureAwait(false);

        // Return the upper-cased value
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new StringContent(contentAsString?.ToUpper());
        return response;
    }
}

The code above will take a string and upper-case it:

image

5. And that’s about it – you can test it now

image

From there, just create a flow, pick your new connector, and use it in the flow:

image

And do the test:

image

There are a few more notes:

  • This is a preview feature
  • You can find some extra details in the docs: https://docs.microsoft.com/en-us/connectors/custom-connectors/write-code
  • Custom code execution time is limited to 5 seconds
  • Only a limited number of namespaces is available for you to use
  • When there is code, it takes precedence over the codeless definition. In other words, no API calls are, actually, made. Unless you make them from the code

So, there are limits. But, then, there are opportunities. There is a lot we can do in the code in 5 seconds.

PS. And, by the way, don’t forget about the upcoming PowerPlatform Chat event: https://www.itaintboring.com/itaintboring-powerplatform-chat/