Monthly Archives: July 2020

Adaptive Cards – PowerStorm session findings

Just had a really cool PowerStorm session with Aric Levin and Linn Zaw Win. You might think that's not a lot of people, and there would be no argument there, but, that said, the idea of those sessions is not to do a demo/presentation, but, rather, to try something out together.

Long story short, I like how it worked out, since we've managed not only to run into a bunch of issues along the way, but, also, to resolve them. Which is exactly what makes up the experience.

So, fresh off the storm, here is my recollection of what we’ve learned today:

1. What are adaptive cards?

“Adaptive Cards are platform-agnostic snippets of UI, authored in JSON, that apps and services can openly exchange. When delivered to a specific app, the JSON is transformed into native UI that automatically adapts to its surroundings. It helps design and integrate light-weight UI for all major platforms and frameworks.”

 

To be a little more specific, we can use adaptive cards with Teams, Outlook, Bots, JavaScript, etc. We can even create a PCF control to render adaptive cards in canvas apps/model-driven apps (there is an example here).

To be absolutely specific, here is an example of the rendered adaptive card:

image

You can have a look at the JSON for that card here: https://adaptivecards.io/samples/CalendarReminder.html
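
If you are curious what the underlying JSON looks like, a trimmed-down sketch of a card along those lines could be as simple as this (not the full sample, just the general shape):

    {
      "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
      "type": "AdaptiveCard",
      "version": "1.0",
      "body": [
        {
          "type": "TextBlock",
          "text": "Adaptive Card design session",
          "size": "Large",
          "weight": "Bolder"
        },
        {
          "type": "TextBlock",
          "text": "Conf Room 112/3377 (10)",
          "isSubtle": true
        }
      ],
      "actions": [
        {
          "type": "Action.Submit",
          "title": "Snooze"
        }
      ]
    }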

Which brings me to the next point.

2. There is an adaptive cards designer

Using the adaptive cards designer (https://adaptivecards.io/designer/), you can quickly build your own adaptive cards.

It's worth mentioning that different host apps (Teams, Outlook, etc.) may be using slightly different schemas for the adaptive cards; however, the adaptive cards designer is aware of those differences, and this is exactly why it allows us to select a host app:

image

For instance, Outlook allows usage of Adaptive Cards to create so-called actionable messages, and there is a special action called Action.Http which we might use to post card data to a URL. That action is only available in Outlook, and it won't work anywhere else. An adaptive card meant for Teams, on the other hand, might use the Action.Submit action, but it would not be able to use Action.Http.
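
To illustrate, here is roughly what those two actions look like in the card JSON (a sketch; the URL is just a placeholder). A Teams card would use the first one; an Outlook actionable message could use the second one, and Outlook would issue an HTTP POST to the specified URL when the button is clicked:

    {
      "type": "Action.Submit",
      "title": "Approve",
      "data": { "decision": "approve" }
    }

    {
      "type": "Action.Http",
      "title": "Approve",
      "method": "POST",
      "url": "https://contoso.example.com/api/approvals",
      "body": "{ \"decision\": \"approve\" }"
    }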

3. So, how do you send an adaptive card to Teams?

We were using Power Automate Flows during this session, which is, of course, just one of the options.

Still, in order to send an adaptive card from the Flow, we need to use a connector. With Teams, it turned out to be relatively straightforward – there are a few actions we can use:

image

There are actions to send adaptive cards to a user or to a channel. And, for each of those, you can choose to wait for the response (in which case the Flow will pause) or not to wait for the response (in which case the Flow will continue running).

There are a few caveats there:

When a card is sent to a channel, a Flow that's set up to wait for the response will resume after the first response.

When a card is sent to multiple users from the same Flow, you can either do a "for each" loop to send the cards concurrently, or you can send them one after another. In the first case, all users will see the card right away; however, the Flow will still have to wait for everyone's response.

In the second case, adaptive cards will show up sequentially. Once the first user provides their response, the Flow will continue by sending the same card to the second user, then it will wait for that user to respond, and so on.

Which means it might be challenging to implement a Flow that sends a card to multiple users and then analyzes each and every response as those responses start coming in (without waiting for all of them first).

Because, as it turned out, we can't terminate a Flow from within a "for each" loop.

So that’s one of the challenges we did not have time to dig into.

4. And how do you send an adaptive card by email?

There are a few good resources:

https://docs.microsoft.com/en-us/outlook/actionable-messages/adaptive-card

https://spodev.com/flow-and-adaptive-cards-post-1/

Sending an adaptive card by email proved to be extremely simple and, yet, quite complicated at the same time:

image

Btw, pay attention to that script tag – it’s important.
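
For those who have not tried it yet: the card JSON travels inside the HTML body of the email, embedded through exactly that kind of script tag. A simplified sketch of such an email body might look like this:

    <html>
      <head>
        <script type="application/adaptivecard+json">
        {
          "type": "AdaptiveCard",
          "version": "1.0",
          "body": [
            { "type": "TextBlock", "text": "Please review this request" }
          ]
        }
        </script>
      </head>
      <body>
        Fallback text for email clients that don't support actionable messages
      </body>
    </html>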

Anyway, originally we tried sending an adaptive card without that highlighted originator attribute. It worked… but it only worked when an email was sent to ourselves. I could send an email to Aric, and he would not see the adaptive card. Aric could send an email to Linn, and Linn would not see the card. But, when I was sending an email to myself, it was all working. It was the same for Linn and Aric.

It did not take long for Aric to find a page talking about the security requirements:

https://docs.microsoft.com/en-us/outlook/actionable-messages/security-requirements

Then we also found this excerpt:

https://docs.microsoft.com/en-us/outlook/actionable-messages/email-dev-dashboard

image

While Aric and I were messing with the Flow, Linn was trying a different approach. He found the Actionable Messages Debugger for Outlook:

https://appsource.microsoft.com/en-us/product/office/WA104381686?tab=Overview

image

Once it was installed, we could finally see the error:

image

So, it was a problem with the security. We needed to set that originator field, and the URL in that message led us straight to where we needed to register a new originator:

https://outlook.office.com/connectors/oam/publish

So we did, and, once we had the originator id, we put it in the adaptive card JSON:

image
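
In other words, it comes down to one extra property at the root of the card JSON (the value below is a placeholder – you'd use the provider id generated during the registration):

    {
      "type": "AdaptiveCard",
      "version": "1.0",
      "originator": "<provider id from the registration page>",
      "body": [ ... ]
    }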

That was the last step, after which the card started to show up for all 3 of us, no matter who was sending it.

5. What have we not tried?

Of course there is probably more we have not tried than what we have tried. One thing I am thinking of trying on my own (unless somebody does it first) is creating a Flow which would be triggered by an HTTP POST request (sent "from" the Outlook actionable message). This would allow such a Flow to kick in once a user responds to the adaptive card, and, essentially, that would mean we can use actionable messages to create a completely custom email-based approval workflow.
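
If it works, the card side of that experiment would probably be just an Action.Http pointing at the URL generated by the Flow's "When a HTTP request is received" trigger – something along these lines (a sketch; the URL is a placeholder, and {{commentInput.value}} is the actionable messages syntax for substituting a user-entered input value):

    {
      "type": "Action.Http",
      "title": "Approve",
      "method": "POST",
      "url": "<HTTP POST URL from the Flow trigger>",
      "headers": [
        { "name": "Content-Type", "value": "application/json" }
      ],
      "body": "{ \"decision\": \"approve\", \"comment\": \"{{commentInput.value}}\" }"
    }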

Anyway, those turned out to be 2.5 hours where I learnt quite a bit, so this session format seems to make sense. Will try it again in a couple of weeks, so stay tuned.

PowerStorm watch for July 30 – Adaptive Cards possible

Hey everybody – another PowerStorm watch has just been confirmed, and, apparently, it's going to happen on Thursday, July 30, at 9 PM EST.

 

 

According to the itaintboring powerologists, here are some examples of the conditions you may expect during the event:

  • Different ideas of using Adaptive Cards with Power Platform will be floated around
  • The session will start with a quick overview of the adaptive cards
  • Following the overview, possible scenarios of adaptive cards usage will be reviewed
  • We will have a quick brainstorming session to see what other ideas we may come up with (hopefully, those who end up in the center of this event will feel recharged enough to start generating ideas :) )
  • Finally, and this may depend on the experience of the folks attending the event, we will try building out a few samples of using adaptive cards in the Flows/Teams/Canvas Apps/Model-Driven Apps

 

There are still a few slots available – register now!

July 16 PowerStorm session summary & lessons learned

Have you heard of the PowerStorms? As Microsoft keeps changing the application development climate for the better, some fascinating things start happening around the globe. Four of us happened to find ourselves right in the center of one of those on July 16 – that's when the first ever PowerStorm session happened!

There were four of us to witness it: Arjun, Greg, Linn… and myself.

By the way, between the four of us, we had a 16-hour time difference, and no two of us were in the same time zone.

Originally, I thought this would be more of a training-like event, but, since there was no set agenda other than "let's do something interesting with Power Platform", I guess I just wanted to see how it would work out.

So… Arjun brought up a problem to solve, Greg was absolutely instrumental in organizing the process, Linn and I were mostly messing with the Flow, and, despite all this… we ended up not being able to solve the problem at hand.

How so?

Well, we just wanted to achieve a seemingly simple goal of creating a Flow that would trigger whenever a new file is added to a blob and that would post that file to a Teams channel so it all looks like this:

image

As a result of this, we were hoping to get a message posted to Teams which would have a link to the uploaded file:

image

What we found out is:

  • We can use Azure Blob Storage connector to trigger our Flow
  • We can actually connect to a completely different tenant (since we did not have Blob Storage in the trial environment we created that time)
  • We can also use that connector to retrieve the file
  • We can use SharePoint connector to upload that file to the Team Channel's SharePoint folder
  • And we can use Teams connector to post a message

What we have not quite figured out is how to display that file link in the same manner it's done on the screenshot above. It must be something simple we've missed? Adaptive Cards, maybe? Have no idea how to use them yet :)

Anyway, it seems there are quite a few ways to conduct these sessions, so it's something to think about in my spare time.

In the meantime, there are a few other lessons learned:

  • If we are to do another hackathon-style session, I should have the trial ready in advance. Otherwise, it can easily take half an hour just to set everything up and to add participants as trial users
  • For those interested in the E5 trial licenses, you might also want to look at the following link: http://aka.ms/m365devprogram This kind of tenant won't have D365 instances, but you will get everything that comes with E5, including Power Apps for Office 365 (https://docs.microsoft.com/en-us/office/developer-program/microsoft-365-developer-program-faq). These developer instances are good for 90 days, and they can be extended; although, they are not necessarily the best option for Power Platform trainings/hackathons

Well, it was a good 3 hours of learning/trying/brainstorming. We have not solved the problem, and it's still bugging me, but I've definitely learned quite a few things.

Thank you folks, hope to see you around next time!

Setting up sprints for dev/qa in Azure DevOps

It seems to be a very common situation: there is a project which is trying to implement SCRUM, and which can never really do it because of the inherent problem which occurs when there is a dedicated QA team.

SCRUM assumes QA happens within the sprint, but, when the scrum development team is different from the QA team, it can get really messy.

Since, of course, QA can't start till development is done. Which means QA usually starts close to the end of the development sprint, and that leads to all sorts of workarounds, with each of them eventually breaking the SCRUM process:

  • There might be 3-week sprints with the last week being reserved for the QA. But what are the developers supposed to do during that last week?
  • QA team may choose to do their work outside of SCRUM; however, if “QA test passed” is part of the definition of done for the dev team, they’ll never be able to close work items within their sprints, and, at the very least, that’s going to render burndown charts useless

 

The problem here is that it's usually almost impossible to blend traditional QA and development within the same sprint, and, one way or another, we end up with the diagram below if we are trying to complete both development and QA within the sprint:

image

So what if we treated DEV and QA processes as two different SCRUMs?

That would require a few adjustments to the definitions, but, I guess, we could think of it this way:

  • The dev team would still be responsible for their own testing, and they'd be doing it within dev sprints. This might not be the same level of comprehensive testing a QA team can deliver, but, as far as the dev team is concerned, there would be release-ready candidates after each sprint; it's just that they'd be released to the QA team first
  • QA team would be responsible for comprehensive testing, possibly for adding test automation, etc. They’d be doing it within their own set of sprints, and, by the end of each sprint, they’d have added known issues to the release candidate provided to them by the dev team. That release candidate (with the associated list of known issues) could be made available to the end users

 

Of course this would assume some level of communication between the two teams, but this is where Azure DevOps can really help because:

  • Within the same project, we can create multiple teams
  • If we choose so, each team will be able to see the same work items
  • Each team can have its own set of iterations

 

And, of course, we can have joint sprint reviews, scrum of scrum meetings, etc.

So, how do we set it up in DevOps? Here is an example:

For a Scrum project in Azure DevOps, create a Dev team and a QA team

image

There will always be a default project team as well, btw. You can use that one in place of one of the other ones if you want.

Configure project iterations

image

Notice how DEV and QA sprints are aligned. They don't have to be, though.
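
For reference, here is the kind of iteration hierarchy I ended up with in that project (the names are just an example):

    Project
      Dev
        Dev Sprint 1
        Dev Sprint 2
        Dev Sprint 3
      QA
        QA Sprint 1
        QA Sprint 2
        QA Sprint 3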

For my example, I've set up each team to have access to the default area AND to all sub-areas (sub-areas are included on the screenshot below)

image

Dev team should have access to the “dev” set of iterations

image

QA team should have access to the QA set of iterations

image

With the setup done, I can now go to the sprint boards for the Dev Team (notice team selection at the top), and add a work item to the board + a task:

image

There would still be nothing on the taskboard for the QA team:

image

By the end of the sprint, my task #5077 would move to the “done” state on the Dev Team taskboard:

image

And I (as a developer) could create another task for the QA and put it into the upcoming QA sprint (which is not quite in compliance with SCRUM principles… but the QA team can, then, decide to get those items off the upcoming sprint and handle them in a sprint farther away):

image

Now if I look at the QA sprint taskboard, here is what shows up there:

image

And there you go: the Dev Team will be working on their own set of sprints, and the QA Team will be working on theirs. Each team can plan their work, they don't depend on each other when closing the sprints, and it's just that the "release candidate" has to go through a two-step process now:

image

XRM, CDS, Microsoft Dataflex… What’s in the name?

image

So, CDS is Dataflex now. Actually, it's important to call it Microsoft Dataflex since there is a separate non-Microsoft product called DataFlex.

It’s the second rebranding of what we used to know as XRM, but it may be the one that will stick for a little longer (I know what you are thinking… but we should have some faith).

To start with, I never really understood what Common Data Service meant. In the last few years, I've been on a lot of projects utilizing CDS, but, even when those projects were for the same client, most of the data was unique. There might be some Common Data Model, but that data model would still be customized and extended for every project's needs. And, besides, what if we take it one step further and look at the data utilized by different clients? We will likely see different models, and we will certainly see different data.

Then why call it "common" if it's not that common at all?

That said, I think the intention was to really start building around the common data model when CDS/CDM first came out. It's just not necessarily how things worked out, it seems.

In that sense, XRM might make more sense even today, but XRM has a different flavour to it. It has a lot of “CRM” legacy, and what became of it has very little to do with CRM.

This is why, leaving our personal preferences and attachments aside, neither CDS nor XRM seem to be the right name for this product/service.

Is it different with Microsoft Dataflex? Quite frankly, the main difference seems to be that it does not have any meaning embedded into it except that it's dealing with data, and it's a product from Microsoft. From that standpoint, it could easily be called something else, and it would not matter. New services may be added and existing services can be removed, the product may keep shifting its focus, licensing may keep changing, Microsoft may choose to rename Common Data Model to something else… None of that would automatically imply another renaming for Microsoft Dataflex at this point.

Which might actually be great for everyone in the long term.

But, for the time being, it seems we all just got work to do. Updating the slide decks, writing these blog posts, making the clients aware of this new naming, etc. Still, for the reasons I mentioned above, I'm hoping that, once the dust settles, this name will stay for a little longer :)

When would onSave form event occur in model-driven apps?

We can use three different functions to save a form:

  • formContext.data.entity.save
  • formContext.data.save
  • formContext.data.refresh(true…)

 

Off the top of your head, do you know which one will result in the “onSave” form event and when?

Those details might not be quite obvious when looking at the documentation for each of the functions separately, but the documentation for the onSave event gives us the answer:

https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/clientapi/reference/events/form-onsave

image

Notice the difference between formContext.data.entity.save and the other two. For those two, whether onSave will be called depends on whether there is unsaved data; for formContext.data.entity.save, it does not matter. It's one of those strange cases where onSave will actually be called even if there is no changed data to be saved, which makes it an equivalent of the "save" button (see the first bullet item).
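
Here is a quick sketch to illustrate (standard Client API; assuming executionContext is passed to the handler, and, in practice, you'd use only one of these calls):

    function saveExamples(executionContext) {
        var formContext = executionContext.getFormContext();

        // Always raises onSave, even if there is no unsaved data -
        // just like clicking the "save" button
        formContext.data.entity.save();

        // Raises onSave only if there is unsaved data on the form
        formContext.data.save().then(
            function () { console.log("saved"); },
            function (error) { console.log(error.message); }
        );

        // Saves first (onSave fires only if the form is dirty), then refreshes
        formContext.data.refresh(true).then(
            function () { console.log("refreshed"); }
        );
    }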

Why would it matter? I am actually having a hard time coming up with the scenario :) Except, possibly, if you have some custom logic in the onSave that is supposed to kick in whenever the user clicks the "save" button (counting save button clicks? Well…), and that logic might not kick in when formContext.data.save or formContext.data.refresh is used.

Using application monitor to troubleshoot Model-Driven apps

You may have heard that we can now use Monitor with model-driven applications. It can provide some very useful context about the client-side behavior of your apps, but there is one option which may be particularly useful when troubleshooting end-user issues.

It seems this process is supposed to be done through the “Invite” button below:

image

However, there are a few things which did not work for me when I tried doing it that way, so here is a workaround for now.

Using your own account, open Monitor page for your app from the maker portal:

image

Once the monitor shows up, click “Play Model-Driven App”

image

Copy the URL so you can send it to the user

On the screen below, do not choose "join" right away – first, copy the URL from the browser address bar:

image

Then send that URL to the user and click Join

Once the user receives the URL, they can open it in the browser and join the session, and you will see all their session details in your original monitor session:

image

How/why is this helpful?

Well, you can spare your users from having to send you the log files to start with. Besides, you may be able to see in the monitor session not just the error itself, but, also, a lot of contextual client-side information about the error, and about the client-side events which happened before or after it and which might have led to it.

We can now do column comparison with Fetch/Web API!

Wow, we’ve been asking about it for a (long) while:

image

https://powerapps.microsoft.com/en-us/blog/announcing-column-comparison-through-fetchxml-sdk-and-odata/

Do you want to do column comparison with Fetch? You can do it now, though it's only supported for a few conditions so far:

  • Equal
  • NotEqual
  • GreaterThan
  • GreaterEqual
  • LessThan
  • LessEqual

 

It's not yet supported in the UI query-building tools (Advanced Find), and it can only be done within the same entity, but… even with all those limitations, which will hopefully go away over time, I can still do this, for example:

Find all accounts where the primary contact's email address is identical to the account's email address

Didn't I just say it only works within the same entity? Yeah, but… it's problem solving 101 – let's reduce this more complicated problem to problems we can solve.

1. No multiple entities allowed? Let’s create a calculated field and populate it from the primary contact

image

2. Can’t use the UI to build that query yet? XrmToolBox to the rescue

image
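
For reference, the resulting FetchXML looks roughly like this (assuming the calculated field is called new_primarycontactemail; notice the valueof attribute – that's what makes it a column comparison):

    <fetch>
      <entity name="account">
        <attribute name="name" />
        <attribute name="emailaddress1" />
        <filter>
          <condition attribute="emailaddress1" operator="eq" valueof="new_primarycontactemail" />
        </filter>
      </entity>
    </fetch>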

And here we go:

image

And if you wanted to utilize this in the system views, you might just use the View Designer tool in XrmToolBox to update a system view:

https://www.xrmtoolbox.com/plugins/Cinteros.XrmToolBox.ViewDesigner/

Advanced Find: the relationship you are adding already exists in the query

I was looking at this post by CRM Ninja, and it hit me that, even though that post had a different purpose, it had an interesting workaround for the “one relationship only” problem which happens when trying to add the same relationship more than once to the query. For example, on the screenshot below, I just tried adding Tasks one more time:

image

Why would I want to do it? What if I wanted to find all contacts where there are both "Important" and "Not important" tasks? Such as this one below:

image

If I used an "or" condition on the linked task, that would give me contacts with one or both of those tasks. But I need "both".

If I used "and", it would not work, apparently, since no task can be important and not important at the same time.

However, I can do it this way to get the contacts I want:

image

Basically, the first condition would limit my contacts to those which have "Important" tasks. But, from there, I'd go back to the contacts (and that would be the same contact, since it's just going back through the same relationship)… then again to the tasks… to make sure there are, also, "Not important" tasks.

It may feel somewhat convoluted with this kind of looping through the relationships, but it does the trick.
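
In FetchXML terms, that double pass through the same relationship would look roughly like this (a sketch; assuming the tasks are linked through regardingobjectid and "Important"/"Not important" are in the subject):

    <fetch distinct="true">
      <entity name="contact">
        <attribute name="fullname" />
        <link-entity name="task" from="regardingobjectid" to="contactid">
          <filter>
            <condition attribute="subject" operator="eq" value="Important" />
          </filter>
          <link-entity name="contact" from="contactid" to="regardingobjectid">
            <link-entity name="task" from="regardingobjectid" to="contactid">
              <filter>
                <condition attribute="subject" operator="eq" value="Not important" />
              </filter>
            </link-entity>
          </link-entity>
        </link-entity>
      </entity>
    </fetch>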

Thank you, CRM Ninja :)


Canvas App Code Reuse Tool for XrmToolBox has just been listed in the repository

 

The Canvas App Code Reuse tool has just been listed, so it should be much easier to install it now:

image

Just a quick summary:

  • You can use this tool to implement repeatable code in your canvas apps
  • Essentially, you'll do a bit of markup to identify places in your canvas app where the code should show up, and the tool will do all required markup replacements for you

 

Since I already have a separate page for this tool, just have a look there for more details:

https://www.itaintboring.com/canvasappcodereuse/