Monthly Archives: January 2020

Reactivating a classic workflow that’s part of a managed solution

Managed solutions are recommended for production, and we’ve been using them lately without much trouble, but, occasionally, something does come up.

One of the solutions had a workflow which required reference data. It should not have been included in that solution to start with, but, since it was, it could not be activated.

We’ve got the reference data deployed, and I was trying to activate the workflow… when I ran into the error below:

image

As it often happens, the error is not too helpful:

“Action could not be taken for few records before of status reason transition restrictions. If you contact support, please provide the technical details”.

Apparently it’s talking about the status reason transitions… that kind of threw me off at first, since I thought I just couldn’t reactivate “managed” workflows at all for some reason. That would have been a real bummer.

Well, it turned out there is still a way. As has been the case for a while now, if you can’t do something from the managed solution, try doing it from the default solution. It worked like a charm in this case, too, and I got my workflow activated.

But, of course, I should not have had this problem to start with if I had put all those workflows in the right solutions and done my deployment in the right order. Still… if there is a third-party solution in the system, it might be helpful to know that what’s been deactivated can still be reactivated – as long as it’s done from the default solution.

Word Templates and default printer

Have you ever used a Word Template in Power Apps?

Choose Word Template and select entity

If you have not, have a look at this documentation page. For the model-driven apps, it’s one of the easiest ways to quickly create standardized Word documents for your entities.

Although, Word templates do come with some limitations – I won’t go into the details here since it’s not what this post is about. I use Word templates occasionally, and they work great where I don’t hit those limitations.

This was one of the projects where Word templates seemed to fit great. We had a few different documents to print, there were no deep relationships to display, we could live without conditional logic, etc. And, then, just about the time we were supposed to go live, one of the business folks was looking at it and posed a very interesting question:

“So, do I have to remember to switch the printer every time I use this?”

See, for some of the records, there would be more than one template, and they would have to be printed on different printers. One of the printers would be a regular printer, but the other one would be a plastic card printer. And, yes, if somebody did send a 10-page regular document to the card printer, that would be a waste of plastic cards. The opposite of that would be sending a card template to the regular printer, but that’s much less problematic.

Seems simple, right? Let’s just set the default printer and be done with it, or, at least, so I thought.

Unfortunately for us, Microsoft Word (2016 in our case) turned out to be more optimized than expected :)

If you have 2 printers, and if you set one of those as the default printer, you would probably expect the default printer to be selected by default, right?

image

The way it works, though, is:

  1. Imagine you’ve opened a document in Word
  2. Then you printed that document to a non-default printer
  3. Then you opened another document in a different Word window
  4. And you are trying to print that second document

The printer you’ll see selected by default is the same printer that you used for the first document:

image

Isn’t that awesome? You don’t need to choose, that’s the one you used before… except that we’d just waste a bunch of plastic cards in our scenario.

The problem seems to be related to the fact that there is only one winword process, no matter how many Word documents you have open on the screen:

image

And, it seems, it’s that process that’s actually storing “current” printer selections for the user.

So, how can we work around this performance optimization in Microsoft Word?

We have to close all Word windows, then the process is unloaded from memory, and the next time we open a document in Word and try sending it to the printer, Word will be using the default printer again:

image

I wish there were a setting somewhere…

Well, there are articles suggesting the use of macros in this scenario to choose the printer, but, since it’s a Word template, and since there will be different users even on the same “terminal” machine, I am not sure how well this will work, or whether it will work at all. It might still be worth a try.
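
For what it’s worth, the same idea does not have to live in a VBA macro. Here is a minimal sketch of the kind of script that could be run outside of Word, using COM automation through pywin32 – the document path and printer name below are made up, and I have not verified how this behaves in the “same terminal machine, different users” scenario:

import win32com.client  # pip install pywin32

# Hypothetical values – adjust to your environment
DOCUMENT_PATH = r"C:\Temp\MembershipCard.docx"
CARD_PRINTER = "Plastic Card Printer"

word = win32com.client.Dispatch("Word.Application")
try:
    original_printer = word.ActivePrinter      # remember whatever Word had selected
    word.ActivePrinter = CARD_PRINTER          # switch to the card printer explicitly

    doc = word.Documents.Open(DOCUMENT_PATH)
    doc.PrintOut(Background=False)             # print synchronously on the card printer
    doc.Close(SaveChanges=False)

    word.ActivePrinter = original_printer      # restore the previous selection
finally:
    word.Quit()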

Power Platform Dataflows vs … Taking a cruise to see Microsoft cloud ETL/ELT capabilities

Sometimes I think that Microsoft Cloud is not quite a cloud – it’s, actually, more like an ocean (which is, probably, similar to how things are with other “clouds” to be fair).

As an on-premise consultant, I did not use to appreciate the depth of Microsoft cloud at all. As a Power Platform consultant, I started to realize some of the extra capabilities offered by the Power Platform, such as:

  • Canvas Applications
  • Power Automate Flows
  • Different licensing options (can be good and bad)
  • Integration with Azure AD

 

Yet I was suffering quite often since, you know, there is “no way I can access the database”.

And, then, I tried the Dataflows recently. Which took me on a slightly different exploration path and made me realize that, as much as I’m enjoying swimming in the familiar lake, it seems there is so much more water out there. There is probably more than I can hope to cover, but I certainly would not mind going on a cruise and seeing some of it. So, this post is just that – a little cruise into the cloud ETL/ELT capabilities:

image

And, by the way, normally, you don’t really do deep diving on a cruise. You are out there to relax and see places. Here is the map – there will be a few stops, and, of course, you are welcome to join (it’s free!):

image

Stop #1: On-Premise ETL tools for Dynamics/Power Platform

If you have not worked with Dynamics on-premise, and I am assuming it’s about time for the pure-bred cloud consultants to start showing up, on-premise ETL tools might be a little unfamiliar. However, those are, actually, well-charted waters. On-premise ETL tools have been around for a long time, and, right off the top of my head, I can mention at least a few that I’ve touched in the past:

  • SSIS
  • Scribe (now Tibco – thank you, Shidin Haridas, for mentioning they were acquired)
  • Informatica

 

They all used to work with Dynamics CRM/Dynamics 365 just fine. Some of them turned into SaaS tools (Scribe Online, for example), and some of them took a different route by merging into the new cloud tools (SSIS). Either way, in order to use those tools we had to deploy them on premise, we had to maintain them, we had to provide the required infrastructure, etc. Although, on the positive side, the licensing was never about “pay per use” – those tools were, usually, licensed by the number of connections and/or agents.

We are still just near the shore, though.

Stop #2: Power Platform ETL capabilities

This is where we are going a little beyond the familiar waters – we can still use those on-premise ETL tools, but things are changing. Continuing the analogy, the cruise ship is now somewhere at sea.

Even if you’ve been working with the Power Platform for a while now, you might not be aware of the ETL capabilities embedded into the Power Platform. As of now, there are, actually, at least 3 options which are right there:

  • Power Platform Dataflows
  • Data Lake Export
  • Power Automate Flows

And, of course, we can often still use on-premise tools. After all, we are not that far from the shore. Though we are far enough for a bunch of things to have changed. For example, this is where an additional Power Platform licensing component kicks in since Power Apps licenses come with a certain number of allowed API calls.

Still, why would I call out those 3 options above? Technically, they are offering everything you need to create an ETL pipeline:

  • A schedule/a trigger/manual start
  • A selection of data sources
  • A selection of data destinations

 

Well, data lake export is special in that sense, since it’s hardwired for the CDS to Azure Data Lake export, but, when in the cloud, that’s an important route, it seems.

How do they compare to each other, though? And, also, how do they compare to the on-premise ETL tools (let’s consider SSIS for example):

image

The interesting part about Data Lake Export is that it does not seem to have any obvious advantages over any of the other tools EXCEPT that setting up CDS to Data Lake export looks extremely simple when done through “data lake export”.

Stop #3: Azure Data Factory

Getting back to the analogy of Azure being the ocean, it should not surprise you that, once in the ocean, we can probably still find the water somewhat familiar, and, depending on where we are, we might see familiar species. Still, the waters are certainly getting deeper, and there can be some interesting ocean-only life forms.

Hey, there is one just off the port side… Have you seen Azure Data Factory? That’s a real beast:

image

This one is strong enough to survive in the open waters – it does not care about Power Platform that much. It probably thinks Power Platform is not worth all the attention we are paying it, since here is what Azure Data Factory can offer:

image

  • It has data flows to start with
  • It can copy data
  • It has connectors
  • It has functions
  • It has loops
  • It is scalable
  • Pipeline designer looks somewhat similar to SSIS
  • It can actually run SSIS packages
  • It allows deployment of a self-hosted (on-premise) integration runtime to work with on-premise data
  • It offers pipeline triggers
  • It has the ability to create reusable data flows
  • It has native support for CI/CD (so, there is dev-test-prod)

 

And I think it has much more, but, well, it’s a little hard to see everything there is to it while on a cruise. Still, this screenshot might give you an idea of what it looks like:

image

In terms of data transformations, it seems there is a lot more one can do with the Data Factory than we can possibly do with the Dataflows/Data Lake Export/Power Automate Flows.

Although, of course, Data Factory does not really care about the Power Platform (I was trying to show it Power Platform solutions, and it just ignored them altogether. Poor thing is not aware of the solutions)

Finally, going back and relaxing in the sun…

image

It’s nice to be on a cruise, but it’s also great to be going home. And, as we are returning to the familiar Power Platform waters, let’s try putting all the above in perspective. The way I see it now, and I might be more than a little wrong, since, really, I did not have an opportunity to do a deep dive on this cruise, here is how it looks:

  • SSIS will become less and less relevant
  • Azure Data Factory will take over (it probably already has)
  • Power Platform’s approach is almost funny in that sense. And, yet, it’s extremely useful. Following the familiar low code/no code philosophy, Power Platform has introduced its own tools. Which often look like simplified (and smaller) versions of their Azure counterparts, but which are meant to solve common Power Platform problems, and which are sometimes optimized for the Power Platform scenarios (environments, solutions, CDS data source, etc). The funny part there is that we, Power Platform consultants, are treated a little bit like kids who can’t be trusted with the real things. But, well, that approach does have some advantages :)

 

Power Platform dataflows

Have you tried Power Platform dataflows yet?

image

I would not be too surprised if you have not – I had not tried them until this weekend either. Might not have completely figured them out yet, but here is a quick rundown so far.

Basically, a data flow is an ETL process that takes data from the source, uses Power Query to transform it, and places this data in one of the two possible destinations:

image

Among those sources, there are some really generic ones – you can use Web API, OData, JSON, XML… They can be loaded from OneDrive, they can be loaded from a URL, etc:

image

For the Power Automate/Power Apps folks reading this – Data Flows are not using all the familiar connectors you may be used to when creating Power Automate flows, for instance. As I understand it, Data Flows cannot be extended by throwing in yet another data source in the same way you would do it for Power Automate, for example. Although, since there are those generic “Web API/OData” sources, the extensibility is still there.

However, Data Flows did not start in Power Platform – they were first introduced in Power BI. There is a great post that explains why they were introduced there:

https://powerbi.microsoft.com/fr-fr/blog/introducing-power-bi-data-prep-wtih-dataflows/

“Previously, ETL logic could only be included within datasets in Power BI … Power BI dataflows store data in Azure Data Lake Storage Gen2”

In other words, the problem Data Flows were meant to solve in the Power BI world was about doing all that data transformation work outside of the Power BI dataset to make it much more reusable.

Power Platform dataflows seem to be doing exactly the same, although they can also store data in the Common Data Service. Actually, by default they will target Common Data Service. If you choose “Analytical entities only”, you’ll get data stored in Azure Data Lake Storage Gen2:

image

But what if you wanted to move data from CDS to Azure Data Lake Storage Gen2? Potentially (and I have not tried), you can probably choose “Analytical entities only” on the screenshot above, then connect to CDS using the Web API, and then move that data to the data lake.
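
Just to illustrate what that generic “Web API” source would be talking to, here is a minimal sketch of a CDS Web API (OData) call in Python – the org URL is made up, and the token would have to come from a regular OAuth flow:

import requests

# Hypothetical values – replace with your own environment URL and a valid OAuth token
ORG_URL = "https://yourorg.crm.dynamics.com"
ACCESS_TOKEN = "<bearer token obtained through OAuth>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

# Pull a few contacts through the Web API (OData) endpoint
response = requests.get(
    f"{ORG_URL}/api/data/v9.1/contacts?$select=fullname,emailaddress1&$top=5",
    headers=headers,
)
response.raise_for_status()

for contact in response.json()["value"]:
    print(contact["fullname"], contact.get("emailaddress1"))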

There is another option in the Power Platform which is called Export to Data Lake:

image

There is some initial setup, but, once it’s all done, you can enable CDS entities for export to data lake:

image

Important: don’t forget to enable Change Tracking on your CDS entity if you want it to show up on the list above.

So, with all the above in mind, here are two other facts / observations (in no particular order):

  • When setting up a data flow, you need to configure the refresh frequency. For the data lake “target”, you can refresh the target dataset up to 48 times per day. It seems there is no such limitation for CDS.
  • “Export to data lake” works somewhat differently from a regular data flow. It does create files for the records, but it also creates snapshots. The snapshots are not updated at once – they are updated at a certain frequency (about once an hour?)

 

Notice how, in the storage explorer, I have snapshots dated Jan 11:

image

However, the contacts files for 2018 have already been updated on Jan 12:

image

Have a look at the following post for a bit more details on this:

https://powerapps.microsoft.com/en-us/blog/exporting-cds-data-to-azure-data-lake-preview/

Compare those screenshots above to a regular Data Flow which has been configured with a 1-minute refresh frequency (and which has, therefore, stopped running because of the 48-runs-per-day limitation):

image

As you can see, there is a snapshot every minute, at least for as long as the data flow kept running.
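
By the way, if you’d rather look at those exported files programmatically instead of through the storage explorer, here is a minimal sketch using the Azure Data Lake Storage Gen2 SDK for Python – the account name, credential, file system, and folder below are all made up:

from azure.storage.filedatalake import DataLakeServiceClient  # pip install azure-storage-file-datalake

# Hypothetical storage account and file system names
service = DataLakeServiceClient(
    account_url="https://yourdatalakeaccount.dfs.core.windows.net",
    credential="<storage account key or Azure AD credential>",
)

file_system = service.get_file_system_client("commondataservice-yourorg")

# List the files/snapshots the export has produced for the contact entity
for path in file_system.get_paths(path="contact"):
    print(path.name, path.last_modified)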

Compose action, dynamic content, and data conversions

Earlier today, a colleague of mine, who tends to spend his days developing Power Automate flows lately, showed me something that seemed confusing at first. Now, having dug into it a bit more, I think it makes sense, but let’s see what you think.

Here is a Flow where I have an HTTP trigger, a Compose action, an Initialize Variable action, and a “send an email” action:

image

When trying to add dynamic content to the email body, I see the Compose action outputs, but I don’t see the variable. I also see “name” and “value” from the HTTP request JSON payload.

What’s interesting about all this is that:

  • Presumably, email “body” is of “string” type
  • Compose action is of “any” type
  • “Name” and “Value” are of “string” type, too

 

As for the email “body”, I am not really sure how to verify the type there, but it’s a reasonable assumption.

I was not able to find that statement about the “Compose” action in the Power Automate documentation, but here is what the Logic Apps documentation has to say:

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-actions-triggers#compose-action

image

As for the HTTP request, here is the schema:

image

So, what if I changed the type of my variable to make it a “string”? I would not be able to use “body” from the Dynamic content to initialize it:

image

BUT. I would be able to use that variable for the email body:

image

Or I could just use “Compose”, since, the way I see it, it can take “any” type for input, and it produces “any” type for output. Which makes it compatible with any other type, and which is different from variables, since they are actually strongly-typed.

PS. Of course I might also use the triggerBody() function to achieve the same result without using Compose, but what would I write about then? :)

Power App Portals, Azure AD B2C, and external identities

Before you read this post, let me suggest two earlier posts first, since they are all part of the same series: “Power App Portals and Azure AD B2C” and “OAuth, Implicit Flow, and Authorization Code Flow” (both below).

Power App Portals have identity management functionality available out of the box. What it means is that the portals can use local identities, but they can also use external identities (Azure, Google, Facebook, etc.). All those identities can be linked to the same user profile in the portal (a contact record in CDS):

image

Once a portal user has logged in using some kind of authentication, they can manage their other external authentications from the profile page:

image

For example, I just set up Azure AD B2C integration for my portal (have a look at the previous post for more details). However, I did not limit portal sign-in options to the azureb2c policy only (through the LoginButtonAuthenticationType parameter), so “local account” and “Azure AD” are still there:

image

If I sign in through Azure AD, I’ll be able to connect my other external identities to my portal profile – in this case I only have azureb2c configured, so there are not a lot of options, but I could have configured Google and Facebook, for example, in which case they would be showing up on the list as well:

image

This is where the difference between using Azure AD B2C as an external identity provider and utilizing those other “individual” identity providers becomes clearer.

When Azure AD B2C is available, it’s likely the only identity provider the portal needs to know about, so it only makes sense to instruct the portal to use that identity provider all the time through the following site setting:

Authentication/Registration/LoginButtonAuthenticationType

image

When done that way, the “sign in” link on the portal will bring users directly to the Azure AD B2C sign-in page:

image

So… there are no Azure AD or other options there? This is because I now need to go back to Azure AD B2C and configure the required identity providers as described in the docs:

https://docs.microsoft.com/en-us/azure/active-directory-b2c/tutorial-add-identity-providers

Note: it seems the Azure AD application setup instructions provided there might not work as is – at least, they did not work for me. When specifying the redirect URL for my Azure AD application, I had to use the following format:

https://treecatsoftwareb2c.b2clogin.com/5cb9b89d-d5d2-4e31-….-e82a2cf12121/oauth2/authresp

That ID in the URL is my Azure AD B2C tenant ID:

image

Otherwise, I kept getting an error when trying to authenticate through Azure AD, since the redirect URL specified for my application was different from the redirect URL added to the request by Azure AD B2C when it was “redirecting” authentication to Azure AD (Uh… it would be good if you are still following me, since I seem to be losing it myself in all those redirects).

Anyway, once I’ve done that, Azure AD is now showing up as a “social account” sign-in option on the Azure AD B2C sign-in page:

image

If I use it to sign in, that brings me to the other screen:

image

Another note: I did not enable the email claim on my B2C sign-in flow, so, at first, once I passed through the screen above, I got the following page displayed on the portal:

image

This is not how it should be, so, if you happen to forget to enable that claim as well, just go to your Azure AD B2C portal, find the sign-in policy you have set up for the portal, and add the email claim there:

image

Once I’ve done that, though, the portal is complaining again:

image

But this is normal. The portal is not allowing a registration for an email that’s already there – remember that the original portal account was using an Azure AD external identity; however, right now I’m trying to register with an Azure AD B2C external identity, and it’s different. So, the portal is trying to create a new contact record in CDS with the same email address, and it can’t.

There is a portal setting that allows auto-association to a contact record based on email:

https://docs.microsoft.com/en-us/powerapps/maker/portals/configure/azure-ad-b2c#claims-mapping

If I wanted to enable that setting, I would need to add the following site setting to the portal for my Azure AD B2C external provider (and set the value to true):

Authentication/OpenIdConnect/azureb2c/AllowContactMappingWithEmail

Finally, once that is done, I can now log in to the portal through Azure AD B2C… but still using my Azure AD identity.

Since I did set up the portal (see above) to use Azure AD B2C exclusively, I don’t see my other external identities (or the local portal identity) on the profile page:

image

However, behind the scenes the portal just created another external identity for my contact record:

image

It’s almost all good so far except for one remaining question (I know there are more questions, but this one is important). Having the portal integrated with Azure AD B2C, I would think there should be some easy way to link multiple external identities to the same user account. Basically, what if a portal user had different external identities (Azure AD, Google, Facebook, etc.) and wanted to use any of them to log in to the same portal account?

While identity management was done by the portal, it was possible to connect external identities from the user profile screen.

However, since I have just outsourced identity management to the Azure AD B2C, that kind of linkage would have to be done through Azure AD B2C now.

This seems to be what the GitHub repository below is meant for, but I am certainly going to have to spend some more time on it:

https://github.com/Azure-Samples/active-directory-b2c-advanced-policies/tree/master/account-linking

And this will have to wait until the next post.

Power App Portals and Azure AD B2C

The whole reason I started to look into the details of OAuth in the previous post is that I really wanted to see how to set up external identity providers for the portals.

There are some great blog posts out there which describe the process in a step-by-step kind of way with all the necessary screenshots:

https://readyxrm.blog/2019/07/24/configure-azure-ad-b2c-for-powerapps-portals/

There is a documentation page as well which can walk you through pretty much the same steps:

https://docs.microsoft.com/en-us/powerapps/maker/portals/configure/azure-ad-b2c

What I was looking for is a bit better understanding of what’s happening behind the scenes, though.

As a result, I think there are three items to discuss in this post:

  • OpenID Connect
  • Azure AD B2C
  • Setting up the portal to work with Azure AD B2C

 

But, again, if you have not looked at OAuth, or if the term “implicit flow” still sounds too alien to you, have a look at the previous post and all the references there.

Because here is how it all works:

  • We can configure portals to use Azure AD B2C as an identity provider
  • Azure Active Directory B2C is a service from Microsoft that enables external customer sign-ins through local credentials and federation with various common social identity providers
  • Portals do support Open ID Connect, Azure AD B2C does support Open ID Connect… so there you have it: one can work with the other using Open ID Connect

 

What is OpenID Connect, though? It’s an extension of OAuth to start with, so we are still talking about all those client id-s and implicit/code flows. However, when utilizing OpenID Connect, we can get not only the authorization token but also the so-called id_token, which will actually represent the user identity – there is a nice walkthrough in the post below if you are interested:

https://connect2id.com/learn/openid-connect
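
Just to make the id_token part a bit less abstract: an id_token is a JWT, so, once you have one, you can peek at the identity claims inside it. The sketch below only base64-decodes the payload segment (no signature validation, so it’s for illustration only), and the token itself is, of course, just a placeholder:

import base64
import json

# Placeholder – a real id_token is a JWT of the form header.payload.signature
id_token = "<id_token returned by the identity provider>"

# Decode the payload segment only; this does NOT validate the signature,
# so it is for looking at the claims, not for trusting them
payload_segment = id_token.split(".")[1]
padded = payload_segment + "=" * (-len(payload_segment) % 4)
claims = json.loads(base64.urlsafe_b64decode(padded))

print(claims.get("sub"), claims.get("email"), claims.get("name"))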

Azure AD B2C supports Open ID Connect: https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-reference-oidc

Portals support Open ID Connect and can be configured to work with Azure AD B2C: https://docs.microsoft.com/en-us/powerapps/maker/portals/configure/azure-ad-b2c

What’s interesting is that Azure AD B2C can also work as a “proxy” between the portal and external identity providers:

image

https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-overview

Those external identity providers have to be configured in your instance of Azure AD B2C, though, since, from the external identity provider’s standpoint, your users would have to authorize Azure AD B2C to access user identity information. So, for example, for the identity providers which rely on OAuth, you’d have to go through the regular client registration steps to get a client id & client secret so you could set up those providers in Azure AD B2C:

image

As I mentioned before, Azure AD B2C will work as a “proxy” in that sense. The portal will ask Azure AD B2C for the user identity, but Azure AD B2C will offer your users an option to authenticate through a configured external provider (and the portal does not need to even know about it).

Which may give you the benefit of single sign-on between the portal and other applications using Azure AD B2C (no matter if, ultimately, your users are using a Google/Facebook/Twitter/etc. identity).

As a side note, what if you did not have Azure AD B2C and still wanted to use Google for portal authentication, for example? That would still be doable:

https://docs.microsoft.com/en-us/powerapps/maker/portals/configure/configure-oauth2-settings

With all the above, it should be easier now to answer some of the questions about this whole setup process, such as:

Why do we need to register an app (OAuth client) in Azure AD B2C for the portal?

That’s simply because it’s OAuth, and we need a client id to make requests to the OAuth server

Why do we need to register an app (OAuth client) in Google if we wanted to add a Google identity provider to Azure AD B2C?

That’s because Azure AD B2C will be using OAuth to request authorization from the Google OAuth servers for the usage of Google profile API-s, etc.

Why would we choose Azure AD B2C over other external identity providers?

Essentially, this is because we’d be outsourcing identity management to a separate service that has a bunch of useful features available “out of the box”: https://docs.microsoft.com/en-us/azure/active-directory-b2c/technical-overview

 

As for setting up your portal to work with Azure AD B2C, I’ll just refer you to the same two pages I mentioned earlier in this post:

https://readyxrm.blog/2019/07/24/configure-azure-ad-b2c-for-powerapps-portals/

https://docs.microsoft.com/en-us/powerapps/maker/portals/configure/azure-ad-b2c

PS. There is a continuation to this post here – you will find additional details on how to set up the portals with Azure AD B2C, and, also, how to enable additional external identity providers through Azure AD B2C: https://www.itaintboring.com/powerapps/power-app-portals-and-multiple-external-identities/

Have fun!

OAuth, Implicit Flow, and Authorization Code Flow

If you ever tried registering applications in Azure, you have probably seen the term “implicit flow”. I’ve seen it a few times, and, finally, I figured I needed to get to the bottom of it. What I ended up with is the post below – it’s not meant to cover OAuth in detail, but it is meant to provide a conceptual explanation and the required references, even if only so I would know where to look for all this information when I need it again. If you find inaccuracies there, please drop me a note.

The purpose of OAuth is to provide a way for the users to authorize application access to various API-s. Once the authorization is provided, a token will be issued which the application will be able to utilize to call those API-s.

It all starts with registering a client (which is represented by a client id) on the authorization server. That client is normally set up to require access to certain API-s. However, required access is not granted automatically – it’s the user who has to authorize the client first.

So, you might ask, why can’t we keep utilizing user identity all the time? Why introduce those client id-s, etc.? Actually, it’s just a matter of reducing the “attack surface”. For example… As an Office 365 user, you might be able to access the Common Data Service Web API, SharePoint API-s, Exchange API-s, and a whole lot of other services. However, when authorizing a client, you will only be authorizing access to certain API-s (so, an authorized client app might get access to the CDS API-s, while it won’t have access to the Exchange API-s).

Now, imagine there is a web page, and there is some JavaScript that needs to call a certain API. When the page is loaded, it should not be able to just call that API – some kind of authentication and authorization has to happen first. Imagine there is an OAuth server, and there is a client registered there which can access the required API-s. The idea is that, knowing the client ID, our imaginary web page needs to do a few things:

  • It needs to somehow ask the user to authenticate and authorize the usage of that client (which essentially means providing authorization to access those API-s)
  • Once this happens, it needs to somehow confirm to the API-s that it’s been authorized to use them

 

Let’s assume for a moment that the authentication and authorization have already happened. How does the second part work?
That is, actually, relatively straightforward (at least conceptually). On the client side, we just need to add the authorization token to all API calls as a request header:


POST /api?param=123 HTTP/1.1
Host: apiserver.com
Authorization: Bearer AbCdEf123456
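
In Python, for example, the same call might look like this (the endpoint and token are the made-up values from the raw request above):

import requests

response = requests.post(
    "https://apiserver.com/api",
    params={"param": "123"},
    headers={"Authorization": "Bearer AbCdEf123456"},  # the token goes into the Authorization header
)
print(response.status_code)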

It will be up to the API to validate those tokens – for example, the API might just confirm token “validity” with the authorization server. Well, if you want to explore this topic a little more, have a look at this post:
https://dzone.com/articles/oauth2-tips-token-validation


But how does our imaginary web page get that token to start with?

That’s what happens as part of the authorization grant, and this is where things get messy since there are different authorization grant flows. In other words, there are different ways our web page (or our application) can get a token from the authorization server.

You’ve probably spotted two of those authorization grant flows while looking at the Azure B2C configuration, or while trying to create app registrations in Azure portal:

  • Authorization code flow
  • Implicit flow

 

However, even though the authorization server might be able to support different authorization grant flows, not all of those flows might be supported on the client side.

There is a detailed explanation of how those flows work in the following post:

https://developer.okta.com/blog/2018/12/13/oauth-2-for-native-and-mobile-apps

I’ll copy one of the images from the post above just to illustrate, quickly, what’s happening in the implicit flow:

Implicit Flow

There is a bunch of redirects in this flow. You will open the browser, it will load the page, and the script in that page will realize that it needs to get a token. So, the script will redirect your browser to the authorization server, and, as part of that redirect, it will also specify that it wants to use implicit flow by passing “token” for the “response_type” in the query string:

https://alexbilbie.com/guide-to-oauth-2-grants/
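
Just to make it a bit more concrete, here is roughly what building that redirect URL by hand could look like – every value below (authorization server, client id, redirect URI, scope) is made up:

from urllib.parse import urlencode

# Hypothetical authorization server and client registration values
params = {
    "response_type": "token",          # "token" = implicit flow
    "client_id": "11111111-2222-3333-4444-555555555555",
    "redirect_uri": "https://myapp.example.com/callback",
    "scope": "api.read",
    "state": "xyz123",                 # a value the client verifies later to prevent CSRF
}

authorization_url = (
    "https://login.authserver.example.com/oauth2/authorize?" + urlencode(params)
)
print(authorization_url)  # this is where the script would redirect the browser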

From there, the user will provide the authorization, the token will be issued, and it will be sent back to the client browser as a URL fragment…

What’s a URL fragment? That’s any part of the URL following the ‘#’ character. URL fragments are special since browsers won’t add fragments to the requests – instead, fragments live on the client side, and they are available to the JavaScript running in the browser. If you are interested in how fragments behave, have a look at the post below:

https://blog.httpwatch.com/2011/03/01/6-things-you-should-know-about-fragment-urls/
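
For example, here is how the token could be pulled out of such a redirect on the client side – the URL below is, again, completely made up:

from urllib.parse import urlparse, parse_qs

# A made-up URL the browser might end up on after the implicit flow completes
redirect = (
    "https://myapp.example.com/callback"
    "#access_token=AbCdEf123456&token_type=Bearer&expires_in=3600&state=xyz123"
)

fragment = urlparse(redirect).fragment   # everything after '#', never sent to the server
token = parse_qs(fragment)["access_token"][0]
print(token)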

That reduces the “exposure” of OAuth tokens on the network, so this flow becomes more secure. However, it is still less secure than the other one (authorization code flow), and, actually, it’s been deprecated:

https://oauth.net/2/grant-types/implicit/

Why was it introduced in the first place, though? This is because the authorization code flow usually requires cross-domain calls, and, come to think of it, cross-domain calls from JavaScript were not really supported when OAuth was introduced.

Things have changed, though. JavaScript-based applications should not have a problem utilizing cross-domain calls today:

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

Although, there are probably still a lot of apps which have not been migrated, so the implicit flow may still be needed in many cases.

There is one important aspect of the authorization flows which I have not mentioned so far, and it’s the “redirect url-s”.

Imagine that our web page has redirected the browser to the authorization server, the user has provided the required authorization, the token is ready… where should the authorization server “redirect” the browser now (since it’s all happening in the browser in this scenario)? This is what redirect url-s are for, and, if you are interested in a bit more detail, have a look at the page below:

https://www.oauth.com/oauth2-servers/redirect-uris/

Hope this helps, though, as usual in such cases, somehow I have a feeling there is still more to it:)

PS. There is a continuation to this post here: https://www.itaintboring.com/power-platform/power-app-portals-and-azure-ad-b2c/