
From “just make it work” to Low-Code to Pro-Dev

A few years ago, there was a common mantra on pretty much any project I was on:

“Stick to the configuration. There should be no javascripts and/or plugins”

This came about because quite a few people had run into problems with those low-level customizations in the past. Which is understandable – to start with, you need somebody who can support those customizations going forward, and, quite often, the plugin source code would have been lost by then.

That’s about when Microsoft came up with the concept of “low code” – those are your Canvas Apps and Microsoft Flow (which is Power Automate now). At first, the idea seemed quite ridiculous, but, by constantly pushing the boundaries of low code, Microsoft has turned Canvas Apps and Power Automate into very powerful tools.

Which did not come without some sacrifices, since, if you think “low code” means “low effort”, that is not always the case anymore. Learning the syntax and figuring out the various tricks and limitations of those tools takes time. Besides, “low code” is not the same as “no code” – just think about all that json parsing in Power Automate, organizing actions into the correct sequences, writing up canvas app formulas, etc. And it presents its own problems – what somebody can do easily with a few lines of code may actually require a few separate actions in Power Automate or a tricky formula in a Canvas App. Does it save time? Not necessarily. Does it open up “development” to those folks who would not know how to create a javascript/.NET app? For sure.

In the meantime, plugins and custom workflow activities were still lingering there. Those of us not afraid of these monsters kept using them to our advantage, since, for instance, there are situations where you need synchronous server-side logic. Not to mention that it may be faster and easier to write a for loop in .NET than to do it in Power Automate. But, it seemed, those technologies were not quite encouraged anymore.

On the client side, we got Business Rules. Which were supposed to become a replacement for various javascript web resources… except that, of course, it did not quite work out. The Business Rules designer went through a few iterations and, eventually, got stuck at the point where it’s only usable for simple scenarios. For example, if I have 20 fields to lock on the form, I’ll go with javascript over the business rules designer, since it would be faster to do and easier to support (see the sketch below). For something simpler, though, I might create a business rule.
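For illustration, here is a minimal sketch of what that javascript could look like (the function name and the field names are placeholders):

function lockFields(executionContext) {
	var formContext = executionContext.getFormContext();
	// One line per field here vs one business rule condition per field there
	["field1", "field2", "field3" /* ...all 20 of them */].forEach(function (name) {
		var attribute = formContext.getAttribute(name);
		if (attribute != null) {
			// Disable every control bound to the attribute
			attribute.controls.forEach(function (control) {
				control.setDisabled(true);
			});
		}
	});
}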

But then we got PCF components, and, so, the whole “low code” approach was somewhat ditched.

How come?

Well, think of it. There are lots of components in the PCF gallery, but none of the clients I know would agree to rely on the open-source code unless that code is, somehow, supported. And, since a lot of those components are released and supported by PCF enthusiasts (rather than by Microsoft partners, for example), there is absolutely no guarantee that support will last.

At least I know I can’t even support my PCF components beyond providing occasional updates. Basically, if there is a bug… and if you discover it once the component is in production… you are on your own.

Which means anyone considering using PCF components in their environments should assume that a pro-dev person will be required to support such solutions.

PCF is only one example, though. There has been a lot of emphasis on proper ALM and integration with DevOps in recent years, and those are topics which are pro-dev by definition.

What else… Custom Connectors? Data providers for Virtual Entities? Azure Functions to support various integrations and/or as extensions for the Apps/Flows? Web resources are still alive since there is no replacement (PCF components were never meant to replace web resources), and plugins are still there.

The whole concept of Dynamics CRM/D365/PowerApps development has come full circle, it seems. From the early days when everything was allowed, all the way through the days when scared clients would insist on not having anything to do with the plugins/javascripts, and back to the point where we actually do need developers to support our solutions.

So, for now, see ya “no code”. I guess we’ll be there again, but, for the time being, we seem to be very much on the opposite side.

Connection references

Connection references have been released (well, not quite, but they are in public preview, which is close enough), and, from the ALM support perspective, it might be one of the most anticipated features for those of us who have been struggling with all those connections in the Flows (chances are, if you are using Flows, and if your Flows are doing anything other than connecting to the current CDS environment, you have most likely been struggling).

The announcement came out earlier today:

https://powerapps.microsoft.com/en-us/blog/announcing-the-new-solution-import-experience-with-connections-and-environment-variables/

And, right away, when looking at the connections in my newly created Flow, I see connection references instead of connections:

[screenshot: connection references shown instead of connections in the Flow designer]

Which is, honestly, a very pro-dev way of calling things, but, I guess, they had to be called something different from the former connections… and there we go, we have connection references now. Still, the name captures the nature of this new thing quite accurately.

It’s interesting that my older Flows are still using “Connections”, not “Connection References”:

[screenshot: an older Flow still showing “Connections”]

And it does not matter whether I am adding new actions or updating existing ones – it seems older Flows just keep using connections.

This can be solved by re-importing the Flow (unmanaged in my case), though:

[screenshot: the re-imported Flow now using connection references]

Not sure if there is an easier way to reset the Flow so it starts using connection references, but I just added it to a new solution, exported the solution, deleted both the Flow and my new solution, then imported it back.

By the way, I made a rookie mistake while trying it all out. When I tried importing my new solution into another environment, I did not get the connections setup dialog.

This is because I should have included connection references into the solution to get it to work:

[screenshot: adding connection references to the solution]

Yeah, but… well, I added my connection reference, and it just did not show up. Have to say PowerApps were a bit uncooperative this afternoon:

[screenshot: the connection reference not showing up in the solution]

Turned out there is a magic trick. Instead of using the “All” filter, make sure it’s “Connection References”:

[screenshot: selecting the “Connection References” filter instead of “All”]

Now we are talking! And now I’m finally getting the connections setup dialog when importing my solution into another environment:

[screenshot: the connections setup dialog shown during solution import]

Although, to be fair, maybe I did not even need connection references for CDS (current environment). But, either way, even if only for the sake of the experiment :)

PS. As exciting as it is, the sad part about this last screen is that we finally have to say farewell to the classic solution import experience. It does not support this new feature, so, as of now, it’s technically obsolete. You might still want to do some things in the classic solution designer, but make sure you are not using it for import.

For example, here is a Flow that’s using the Outlook connector. I just imported a managed solution through the new solution import experience. My flow is on, and there is a correctly configured connection reference in it:

[screenshot: the imported Flow turned on, with a correctly configured connection reference]

When the same solution is imported using the classic experience, the flow is off:

[screenshot: the Flow turned off after a classic solution import]

Add intelligent File Download button to your model-driven (or canvas) apps

How do we add a file download button to model-driven apps? To start with, why would we even want to do it?

There can be some interesting scenarios there, one being to allow your users to download PowerAutomate-generated word templates (see my previous post).

That, of course, requires some custom development, since you may want to pass the current record id and/or other parameters to the API/url you’ll be using to download the file. You may also need to use different HTTP methods, specify different titles for the button, and have the downloaded file name adjusted somehow.

So, building on my earlier post, here is another PCF control – it’s a generic file download button this time (which we can also use with PowerAutomate):

 

downloadbutton.pcf

Unlike the earlier control, this one has a few other perks:

  • First of all, there is a separate solution (to make it easier to try)
  • Also, the download url is represented by 3 parameters this time. This is in case the url is longer than 100 characters (just split it as needed between those 3 parameters) – it seems this is still an issue for PCF components
  • There is an HTTP method parameter (it should normally be “GET” or “POST”; it should be “POST” for PowerAutomate flows)
  • In model-driven apps, you can use attribute names to sort of parameterize those parameters (just put those names within ## tags). You can also use the “id” parameter, which is just the record id (see the sketch below)
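Just to illustrate the idea, here is a hypothetical sketch of how that kind of ##attribute## substitution could work (the helper name is made up, and reading values through Xrm is an assumption for illustration – it’s not necessarily what the control does internally):

// Hypothetical helper: replaces ##attribute## tokens with form values
function resolveTemplate(template, recordId) {
	return template.replace(/##(\w+)##/g, function (match, name) {
		if (name === "id") {
			return recordId; // the current record id
		}
		// Assumption: attribute values are read from the form context
		var attribute = Xrm.Page.getAttribute(name);
		return attribute != null ? String(attribute.getValue() || "") : match;
	});
}

// The three url parameters would simply be concatenated first:
// var url = resolveTemplate(urlPart1 + urlPart2 + urlPart3, recordId);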

Here is an example of control settings – notice how file name template is parameterized with the ita_name attribute:

[screenshot: control settings, with the file name template referencing the ita_name attribute]

Last but not least, this PCF control can work in two modes: it can download the file, or it can open the file in a new tab. What happens after that depends on whether the file can be viewed in the browser; for example, a pdf file will show up in the new tab right away.

You can control the component’s behavior through a dedicated parameter – use “TRUE” or “FALSE” for the value (the two modes are sketched below).
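Conceptually, the difference between the two modes comes down to something like this (a sketch; “openInNewTab” is a placeholder name, and downloadFile is the kind of helper shown in the word templates post below):

if (openInNewTab) {
	// Let the browser decide: a pdf will render right in the new tab,
	// other content types will usually be downloaded anyway
	var url = URL.createObjectURL(blob);
	window.open(url, "_blank");
} else {
	// Force a download through a temporary <a download> link
	downloadFile(blob);
}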

To add this control to your forms, just put some text field on the form and replace the out-of-the-box control with ITAFileDownloadButton.

The source code is on github: https://github.com/ashlega/ITAintBoring.PCFControls/tree/master/Controls/ITAFileDownloadButton

And here is a link to the packaged (unmanaged) solution:

https://github.com/ashlega/ITAintBoring.PCFControls/raw/master/Controls/Deployment/Solutions/ITAFileDownloadButtonSolution.zip

Using flow-generated word documents in model-driven apps

Document Templates have been available in model-driven apps for a while now – they are integrated with the model-driven apps, and it’s easy for the users to access them.

They do have limitations, though. We cannot filter the records that go into those documents, we can only use 1 level of relationships, on each relationship we can only load 100 records max, etc.

There is a “Populate a Microsoft Word Template” action in PowerAutomate, which might be even more powerful, but the problem here is that it’s not quite clear how to turn it into a good user experience. We’d have to let users download those generated documents from the model-driven apps somehow, and, ideally, the whole thing would work like this:

[diagram: the intended user experience for downloading Flow-generated documents]

So, while thinking about it, I recalled an old trick we can use to download a file through javascript: https://www.itaintboring.com/dynamics-crm/dynamics-365-the-craziest-thing-i-learned-lately/

It proved to be quite useful in the scenario above, since, in the end, here is how we can make it all work with a PCF control:

 

As usual, you will find all the source code on github:

https://github.com/ashlega/ITAintBoring.PCFControls

For this particular component, look in the ITAWordTemplate folder.

If using the component as is, you’ll need to configure a few properties. In a nutshell, here is how it works:

  • You will need a flow that is using HTTP request trigger
  • You will need to configure that trigger to accept a docId parameter:

[screenshot: the HTTP request trigger configured with a docId parameter]
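If you were to define that trigger schema by hand, the request body json schema would be something along these lines (a sketch, shown here as a javascript object literal – in the Flow designer you would paste it as json):

var triggerSchema = {
	type: "object",
	properties: {
		docId: { type: "string" }
	}
};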

  • After that, you can do whatever you need to generate the document, and, eventually, you’ll need to pass that document back through the Response action:

[screenshot: the Response action returning the generated document]

Here is a great post that talks about the nuances of working with the “word template” action (the version in my Flow above is much simpler):

https://flow.microsoft.com/en-us/blog/intermediate-flow-of-the-week-create-pdf-invoices-using-word-templates-with-microsoft-flow/

  • Then you will need to put the ITAWordTemplate component on the form, configure its properties (including the Flow url), and that’s about it

 

Technically, most of the work happens in these two javascript methods:

public downloadFile(blob: any) {
	if (navigator.msSaveBlob) { // IE 10+
		navigator.msSaveBlob(blob, this._fileName);
	} else {
		// Create a temporary <a download> link and "click" it
		var link = document.createElement("a");
		if (link.download !== undefined) {
			var url = URL.createObjectURL(blob);
			link.setAttribute("href", url);
			link.setAttribute("download", this._fileName);
			link.style.visibility = 'hidden';
			document.body.appendChild(link);
			link.click();
			document.body.removeChild(link);
		}
	}
}

public getFile() {
	// Pass the current record id to the Flow
	var docId: string = this.getUrlParameter("id");
	var data = {
		docId: docId
	};
	fetch(this._flowUrl, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json'
		},
		body: JSON.stringify(data)
	}).then(response => {
		// The Flow's Response action returns the file content
		response.blob().then(blob => {
			this.downloadFile(blob);
		});
	});
}

 

Just one note on the usage of “fetch” (it has nothing to do with FetchXML, btw). At first, I tried using XMLHttpRequest, but it kept breaking the encoding, so I figured I’d try fetch. And it worked like a charm. Well, it’s the preferred method these days anyway, so there you go – there is no XMLHttpRequest in this code.

One question you may have here is: “what about security?” After all, that’s an http request trigger, so it’s not quite protected. If that’s what you are concerned about, there is another great post you might want to read: https://demiliani.com/2020/06/25/securing-your-http-triggered-flow-in-power-automate/

 

PS. Also, have a look at the follow-up post, which features an improved version of this control.

Business rules and editable subgrids

What seems to be the most popular reason why a business rule would not be working?

There is very little that can really break in business rules, except for one thing: we can include certain fields in the business rule conditions and then forget to add those fields to the context (which can be a form, or an editable grid).

When working with forms, we can always make such a field hidden – it won’t be visible, but it will still allow the business rule to work.

When it comes to the editable grids, though, it seems to be just plain dangerous to use the business rules.

Because:

  • Editable grids are using views
  • Those views can be updated any time
  • Whoever is updating the views will, sooner or later, forget (or simply won’t know) to add a column for one of the fields required by the business rules

 

And voila, the business rule will not be working anymore. What’s worse, this kind of bug is not that easy to notice. There will be no errors, no notifications, no signs of a problem at all. Instead, you’ll suddenly realize something is off (and you might not even know why it’s off by that time)… or, maybe, it’s the users who will notice, long after the changes are in production…

This just happened to me again today – there is an editable subgrid for an entity, and that subgrid shows up on two different forms (even more, those forms are for different entities). There is an attribute that must be editable on one of the forms, but it should be readonly on the other. The condition in my business rule would have to look at whether there is data in a certain lookup field, and that would only work if I had that lookup field added to the subgrid. Which means the interface would become more crowded, and the users would immediately want to get rid of that column.

Anyway, this is exactly why I removed a couple of business rules from the system just now and replaced them with the following javascript:

function onFeeSelect(executionContext) {
	// For the onRecordSelect event, getFormContext() returns the selected row's context
	var gridContext = executionContext.getFormContext();
	if (gridContext.getAttribute("<attribute_name>") != null) {
		// Disable the cell for the selected record
		gridContext.getAttribute("<attribute_name>").controls.get(0).setDisabled(true);
	}
}

 

That script is now attached to the onRecordSelect subgrid event only on the forms I need.

And this should do it – users will no longer be able to update that attribute in the editable subgrid on that particular form.

 

Flow connections and CI/CD

I am wondering whether PowerAutomate flows can really be part of CI/CD when there are non-CDS connections.

There seem to be a few problems here:

  • Once deployed, the flow is turned off, and all non-CDS connections have to be re-wired in order to turn it on. That’s a manual step
  • While re-wiring the connections, we’ll be creating an unmanaged customization for a managed flow (assuming all deployments are using managed solutions)

The first item undermines the idea of fully automated deployments.

The second item means that we might not be able to deploy flow updates through a managed solution unless we remove the unmanaged customizations (or the flow) first.

 

Here is how the flow looks once it’s been deployed through a managed solution:

[screenshot: the Flow turned off after being deployed through a managed solution]

It’s off, since, in addition to the CDS (current environment) connector used for the trigger and one of the actions, there is an Office 365 Outlook connector in that flow, and the connection needs to be re-wired for that one:

[screenshot: the Office 365 Outlook connection that has to be re-wired]

If I tried turning the Flow on in the target environment, I’d get this error:

[screenshot: the error shown when trying to turn the Flow on]

So… Have to edit the flow, and, to start with, have to sign into that Outlook connection:

[screenshot: trying to sign in to the Outlook connection]

Surprisingly, I can’t. Well, I can’t from the managed solution. Which is not that surprising, come to think of it, but still…

From the default solution, I can do it:

[screenshot: editing the connection from the default solution]

The CDS connection re-wires automatically once I click “continue” (even though, presumably, it does not need to – at least not when there are other connections in the Flow), and, now, I can activate the Flow.

[screenshot: the Flow activated after re-wiring the connections]

So far, it seems, I’ve just managed to demonstrate how automated deployment becomes broken.

But what about those unmanaged customizations?

Well, by re-wiring the connections, I got an unmanaged customizations layer for the Flow:

[screenshot: the unmanaged customizations layer on the Flow]

What if the Flow were updated in the source environment?

For example, let’s change the email body. It used to be like this in the first version:

[screenshot: the original email body]

Let’s make it slightly different:

[screenshot: the updated email body]

Once deployed in the target environment, the Flow is on. But that email action is still using the original text:

[screenshot: the email action still using the original text]

Now, when importing the solution, we have a few options. What if I used the one which is not recommended?

[screenshot: the “not recommended” solution import option]

This will take care of the updates, but the flow will be turned off. Because, I’m assuming, those connections were originally fixed in the unmanaged layer, and now at least some of those changes have been rolled back. Which means the connections have to be re-wired again before I can turn on the flow.

From the CI/CD perspective, this all seems to be a little cumbersome, so I am wondering how everybody else is doing CI/CD with flows?

Adaptive Cards – PowerStorm session findings

Just had a really cool PowerStorm session with Aric Levin and Linn Zaw Win. You would think that’s not a lot of people, and there would be no argument there, but, that said, the idea of those sessions is not to do a demo/presentation, but, rather, to try something out together.

Long story short, I like how it worked out, since we managed not only to run into a bunch of issues along the way, but, also, to resolve them. Which is exactly what makes up the experience.

So, fresh off the storm, here is my recollection of what we’ve learned today:

1. What are adaptive cards?

“Adaptive Cards are platform-agnostic snippets of UI, authored in JSON, that apps and services can openly exchange. When delivered to a specific app, the JSON is transformed into native UI that automatically adapts to its surroundings. It helps design and integrate light-weight UI for all major platforms and frameworks.”

 

To be a little more specific, we can use adaptive cards with Teams, Outlook, Bots, Javascript, etc. We can even create a PCF control to render adaptive cards in the canvas apps/model-driven apps (there is an example here).

To be absolutely specific, here is an example of the rendered adaptive card:

[screenshot: a rendered adaptive card]

You can have a look at the json for that card here: https://adaptivecards.io/samples/CalendarReminder.html
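By the way, if you ever need to render a card like that from your own javascript (in a PCF control, for instance), a minimal sketch using the “adaptivecards” npm package could look like this (the container element id and the cardPayload variable are placeholders):

import * as AdaptiveCards from "adaptivecards";

// cardPayload would be the card json (the calendar reminder sample, for example)
var adaptiveCard = new AdaptiveCards.AdaptiveCard();
adaptiveCard.parse(cardPayload);
// render() produces a regular HTMLElement we can attach anywhere
var renderedCard = adaptiveCard.render();
document.getElementById("cardContainer").appendChild(renderedCard);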

Which brings me to the next point

2. There is an adaptive cards designer

Using the adaptive cards designer, you can quickly build your own adaptive cards.

It’s worth mentioning that different host apps (Teams, Outlook, etc.) may be using slightly different schemas for the adaptive cards; however, the adaptive cards designer is aware of those differences, and this is exactly why it allows us to select a host app:

[screenshot: host app selection in the adaptive cards designer]

For instance, Outlook allows usage of Adaptive Cards to create so-called actionable messages, and there is a special action called Action.Http which we might use to post card data to a url. That action is only available in Outlook, and it won’t work anywhere else. An adaptive card meant for Teams, on the other hand, would use the Action.Submit action, but would not be able to use Action.Http (see the sketch below).
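To illustrate the difference, here is a sketch of the two kinds of actions, written as javascript object literals (the url, titles, and data values are placeholders):

// Outlook actionable message: posts the card data to a url
var outlookAction = {
	type: "Action.Http",
	title: "Respond",
	method: "POST",
	url: "https://example.com/api/respond",
	body: "{ \"comment\": \"{{commentText.value}}\" }"
};

// Teams: hands the data back to the host (a waiting Flow, a bot, etc.)
var teamsAction = {
	type: "Action.Submit",
	title: "Respond",
	data: { action: "respond" }
};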

3. So, how do you send an adaptive card to Teams?

We were using PowerAutomate Flows during this session. Which is, of course, just one of the options.

Still, in order to send an adaptive card from the Flow, we need to use a connector. With Teams, it turned out to be relatively straightforward – there are a few actions we can use:

[screenshot: the Teams connector actions for posting adaptive cards]

There are actions to send adaptive cards to a user or to a channel. And, for each of those, you can choose to wait for the response (in which case the Flow will pause) or not to wait for the response (in which case the Flow will continue running).

There are a few caveats there:

When a card is sent to a channel, a Flow that’s set up to wait for the response will resume after the first response

When a card is sent to multiple users from the same flow, you can either use a “for each” loop to send the cards concurrently, or you can send them one after another. In the first case, all users will see the card right away; however, the Flow will still have to wait for everyone’s response.

In the second case, adaptive cards will be showing up sequentially. Once the first user provides their response, the Flow will continue by sending the same card to the second user, then it will wait for that user to respond, and so on.

Which means it might be challenging to implement a Flow which will be sending a card to multiple users, but which will be analyzing each and every response as those responses start coming in (without waiting for all of them first).

Because, as it turned out, we can’t terminate a Flow from within the foreach.

So that’s one of the challenges we did not have time to dig into.

4. And how do you send an adaptive card by email?

There are a few good resources:

https://docs.microsoft.com/en-us/outlook/actionable-messages/adaptive-card

https://spodev.com/flow-and-adaptive-cards-post-1/

Sending an adaptive card by email proved to be extremely simple and, yet, quite complicated at the same time:

[screenshot: the email action with the adaptive card json embedded in a script tag]

Btw, pay attention to that script tag – it’s important.
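For reference, an actionable message is just an email whose HTML body carries the card json in a script tag of that special type. Roughly like this (a sketch; cardJson stands for the actual card):

// Building the email html body with the embedded card
var emailBody =
	'<html><head>' +
	'<script type="application/adaptivecard+json">' +
	JSON.stringify(cardJson) +
	'</script>' +
	'</head><body>Fallback text for clients that cannot render the card</body></html>';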

Anyway, originally, we tried sending an adaptive card without that highlighted originator attribute. It worked… but only when an email was sent to ourselves. I could send an email to Aric, and he would not see the adaptive card. Aric could send an email to Linn, and Linn would not see the card. But, when I was sending an email to myself, it was all working. It was the same for Linn and Aric.

It did not take long for Aric to find a page talking about the security requirements:

https://docs.microsoft.com/en-us/outlook/actionable-messages/security-requirements

Then we also found this excerpt:

https://docs.microsoft.com/en-us/outlook/actionable-messages/email-dev-dashboard

[screenshot: excerpt from the “email developer dashboard” documentation]

While Aric and I were messing with the Flow, Linn was trying a different approach. He found an actionable messages debugger for Outlook:

https://appsource.microsoft.com/en-us/product/office/WA104381686?tab=Overview

[screenshot: the actionable messages debugger add-in]

Once it was installed, we could finally see the error:

[screenshot: the error reported by the debugger]

So, it was a problem with security. We needed to set that originator field. And the url in that message led us straight to where we needed to register a new originator:

https://outlook.office.com/connectors/oam/publish

So we did, and, once we had the originator id, we put it in the adaptive card json:

[screenshot: the originator attribute added to the adaptive card json]
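That is, the originator value sits right at the top level of the card payload (a sketch with a placeholder id):

var cardJson = {
	type: "AdaptiveCard",
	version: "1.0",
	// The provider id issued when registering the originator
	originator: "00000000-0000-0000-0000-000000000000",
	body: [
		{ type: "TextBlock", text: "Hello from an actionable message" }
	]
};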

That was the last step, after which the card started to show up for all 3 of us, no matter who was sending it.

5. What have we not tried?

Of course, there is probably more we have not tried than what we have tried. One thing I am thinking of trying on my own (unless somebody does it before me) is creating a Flow which would be triggered by a POST http request (sent “from” the outlook actionable message). This would allow such a Flow to kick in once a user responds to the adaptive card, and, essentially, that would mean we can use actionable messages to create a completely custom email-based approval workflow – see the sketch below.
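The wiring would presumably be as simple as pointing an Action.Http at the url generated by the Flow’s “When a HTTP request is received” trigger (a sketch; the url and body are placeholders):

var approveAction = {
	type: "Action.Http",
	title: "Approve",
	method: "POST",
	// The url generated by the Flow's http request trigger
	url: "https://prod-00.westus.logic.azure.com/workflows/…/invoke",
	headers: [
		{ name: "Content-Type", value: "application/json" }
	],
	body: "{ \"decision\": \"approve\" }"
};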

Anyway, those turned out to be 2.5 hours where I learned quite a bit, so this session format seems to make sense. Will try it again in a couple of weeks, so stay tuned.

PowerStorm watch for July 30 – Adaptive Cards possible

Hey everybody – another PowerStorm watch has just been confirmed, and, apparently, it’s going to happen on Thursday, July 30, at 9 PM EST.

 

 

According to the itaintboring powerologists, here are some examples of the conditions you may expect during the event:

  • Different ideas of using Adaptive Cards with Power Platform will be floated around
  • The session will start with a quick overview of the adaptive cards
  • Following the overview, possible scenarios of adaptive cards usage will be reviewed
  • We will have a quick brainstorming session to see what other ideas we may come up with (hopefully, those who end up in the center of this event will feel recharged enough to start generating ideas :) ).
  • Finally, and this may depend on the experience of the folks attending the event, we will try building out a few samples of using adaptive cards in the Flows/Teams/Canvas Apps/Model-Driven Apps

 

There are still a few slots available – register now!

July 16 PowerStorm session summary & lessons learned

Have you heard of the PowerStorms? As Microsoft keeps changing the application development climate for the better, some fascinating things start happening around the globe. Four of us happened to find ourselves right in the center of one of those on July 16 – that’s when the first ever PowerStorm session happened!

There were four of us to witness it: Arjun, Greg, Linn, and myself.

By the way, between the four of us, we had a 16-hour time difference, and no two of us were in the same time zone.

Originally, I thought this would be more of a training-like event, but, since there was no set agenda other than “let’s do something interesting with Power Platform”, I guess I just wanted to see how it works out.

So… Arjun brought up a problem to solve, Greg was absolutely instrumental in organizing the process, Linn and I were mostly messing with the Flow, and, despite all this… we ended up not being able to solve the problem at hand.

How so?

Well, we just wanted to achieve a seemingly simple goal of creating a Flow that would trigger whenever a new file is added to a blob container, and that would post that file to a Teams channel, so it all looks like this:

[screenshot: the desired result in Teams]

As a result of this, we were hoping to get a message posted to Teams which would have a link to the uploaded file:

[screenshot: a Teams message with a link to the uploaded file]

What we found out is:

  • We can use Azure Blob Storage connector to trigger our Flow
  • We can actually connect to a completely different tenant (since we did not have Blob Storage in the trial environment we created that time)
  • We can also use that connector to retrieve the file
  • We can use Sharepoint connector to upload that file to the Team Channel’s sharepoint folder
  • And we can use Teams connector to post a message

What we have not quite figured out is how to display that file link in the same manner it’s done on the screenshot above. It must be something simple we’ve missed. Adaptive Cards, maybe? Have no idea how to use them yet :)

Anyway, it seems there are a few different ways to conduct these sessions, so it’s something to think about in my spare time.

In the meantime, there are a few other lessons learned:

  • If we are to do another hackathon-style session, I should have the trial ready in advance. Otherwise, it can easily take half an hour just to set everything up and to add participants as trial users
  • For those interested in the E5 trial licenses, you might also want to look at the following link: http://aka.ms/m365devprogram . This kind of tenant won’t have D365 instances, but you will get everything that comes with E5, including Power Apps for Office 365 (https://docs.microsoft.com/en-us/office/developer-program/microsoft-365-developer-program-faq). These developer instances are good for 90 days, and they can be extended. Although, they are not necessarily the best option for PowerPlatform trainings/hackathons.

Well, it was a good 3 hours of learning/trying/brainstorming. We have not solved the problem, and it’s still bugging me, but I’ve definitely learned quite a few things.

Thank you folks, hope to see you around next time!

Setting up sprints for dev/qa in Azure DevOps

It seems to be a very common situation: there is a project trying to implement SCRUM, and it can never really do it because of the inherent problem that occurs when there is a dedicated QA team.

SCRUM assumes QA happens within the sprint, but, when the scrum development team is different from the QA team, it can get really messy.

Since, of course, QA can’t start till development is done. Which means QA usually starts close to the end of the development sprint, and that leads to all sorts of workarounds, each of them eventually breaking the SCRUM process:

  • There might be 3-week sprints with the last week reserved for QA. But what are the developers supposed to do during that last week?
  • QA team may choose to do their work outside of SCRUM; however, if “QA test passed” is part of the definition of done for the dev team, they’ll never be able to close work items within their sprints, and, at the very least, that’s going to render burndown charts useless

 

The problem, usually, is that it’s almost impossible to blend traditional QA and development within the same sprint, and, one way or another, we end up with the diagram below if we are trying to complete both within the sprint:

[diagram: development and QA squeezed into the same sprint]

So what if we treated DEV and QA processes as two different SCRUMs?

That would require a few adjustments to the definitions, but, I guess, we could think of it this way:

  • The dev team would still be responsible for their own testing, and they’d be doing it within dev sprints. This might not be the same level of comprehensive testing a QA team can deliver, but, as far as Dev team is concerned, there would be release-ready candidates after each sprint, it’s just that they’d be released to the QA team first
  • QA team would be responsible for comprehensive testing, possibly for adding test automation, etc. They’d be doing it within their own set of sprints, and, by the end of each sprint, they’d have added known issues to the release candidate provided to them by the dev team. That release candidate (with the associated list of known issues) could be made available to the end users

 

Of course, this would assume some level of communication between the two teams, but this is where Azure DevOps can really help, because:

  • Within the same project, we can create multiple teams
  • If we choose so, each team will be able to see the same work items
  • Each team can have its own set of iterations

 

And, of course, we can have joint sprint reviews, scrum of scrum meetings, etc.

So, how do we set it up in devops? Here is an example:

For a Scrum project in Azure Devops, create a Dev team and a QA team

[screenshot: Dev and QA teams created for the project]

There will always be a default project team as well, btw. You can use that one in place of one of the other ones if you want.

Configure project iterations

[screenshot: the DEV and QA sets of project iterations]

Notice how the DEV and QA sprints are aligned here. They don’t have to be.

For my example, I’ve set up each team to have access to the default area AND to all sub-areas (sub-areas are included on the screenshot below)

[screenshot: team areas configuration, with sub-areas included]

Dev team should have access to the “dev” set of iterations

[screenshot: the Dev team’s iterations]

QA team should have access to the QA set of iterations

[screenshot: the QA team’s iterations]

With the setup done, I can now go to the sprint board for the Dev Team (notice the team selection at the top) and add a work item to the board, plus a task:

[screenshot: the Dev team sprint board with a work item and a task]

There would still be nothing on the taskboard for the QA team:

[screenshot: the empty QA team taskboard]

By the end of the sprint, my task #5077 would move to the “done” state on the Dev Team taskboard:

[screenshot: task #5077 in the “done” column on the Dev Team taskboard]

And I (as a developer) could create another task for QA and put it in the upcoming QA sprint (which is not quite in compliance with SCRUM principles… but the QA team can, then, decide to take those items off the upcoming sprint and handle them in a later sprint):

[screenshot: a QA task added to the upcoming QA sprint]

Now, if I look at the QA sprint taskboard, here is what shows up:

[screenshot: the QA sprint taskboard showing the new task]

And there you go: the Dev Team will be working on their own set of sprints, and the QA Team will be working on theirs. Each team can plan their work, they don’t depend on each other when closing the sprints, and it’s just that the “release candidate” has to go through a two-step process now:

[diagram: the two-step release process]