Connecting to Dataverse from Function App using Managed Identity

EDIT (19-Nov-21): Getting the token through the Azure Identity SDK means caching has to be handled in the Function App as well (see https://github.com/microsoft/PowerPlatform-DataverseServiceClient/issues/161#issuecomment-912870265). So, I have slightly refactored the Function App to use caching.

Everyone hates passwords. But passwords are easy, because most people use the same password across multiple services. When the PowerPlatform DataverseServiceClient was first released, everyone was excited, because you could use it with the v3 runtime. But at the same time, there was no option to use a username and password like you would with CrmServiceClient. So, the next easiest option was to go with an AppId and Secret, or a certificate if you wanted a bit more security. The only problem is that both secrets and certificates expire, and when they do, it creates a big headache, especially in integration scenarios.

Meme about Azure AD, Function App and authentication. Iron Man 2
Iron Man 2 featuring Azure AD, Power Apps and Function App

So, now let us look at a better option – connecting to Dataverse using Managed Identity. If you would rather read the code than this post, you can head to https://github.com/rajyraman/PowerApps-Managed-Identity-Demo-Functions. You can provision the Function App and associated resources to your Azure tenant by clicking the “Deploy to Azure” button on the repo. After this you need to deploy the Function App and create the Application User in Power Apps.

I have also simplified the process with a PowerShell script that you can run from deploy/run.ps1. This PowerShell script provisions the required Azure resources, deploys the Function App, and also creates the Function App Service Principal as an Application User in your Power Apps environment.

This is where the magic happens.

Getting Token using Managed Identity with Azure Identity SDK
Getting Token using Managed Identity

Three key things in this code (sketched below):

  1. The token is retrieved transparently using the Managed Identity
  2. The ServiceClient uses the Azure Identity SDK’s GetTokenAsync to get the token. This token is used on all subsequent calls to the Dataverse Web API
  3. We don’t mention a specific scope, so the scope URI ends with .default
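
If you prefer to see this in code, here is a minimal sketch of the approach (the full implementation is in the repo linked above; the class name and the five-minute expiry buffer are illustrative assumptions):

using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.PowerPlatform.Dataverse.Client;

public static class DataverseClientFactory
{
    // Cache tokens so we do not call Azure AD on every Dataverse request (see the edit note above).
    private static readonly IMemoryCache TokenCache = new MemoryCache(new MemoryCacheOptions());
    private static readonly DefaultAzureCredential Credential = new DefaultAzureCredential();

    public static ServiceClient Create(string environmentUrl) =>
        new ServiceClient(new Uri(environmentUrl), tokenProviderFunction: _ => GetTokenAsync(environmentUrl));

    private static async Task<string> GetTokenAsync(string environmentUrl)
    {
        var accessToken = await TokenCache.GetOrCreateAsync(environmentUrl, async entry =>
        {
            // No specific scope is mentioned, so the scope URI ends with .default.
            var token = await Credential.GetTokenAsync(
                new TokenRequestContext(new[] { $"{environmentUrl.TrimEnd('/')}/.default" }));

            // Assumption: drop the cached token five minutes before it expires.
            entry.AbsoluteExpiration = token.ExpiresOn.AddMinutes(-5);
            return token;
        });

        return accessToken.Token;
    }
}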

The DefaultAzureCredential class figures out the credentials in the following order:

  1. EnvironmentCredential
  2. ManagedIdentityCredential
  3. SharedTokenCacheCredential
  4. VisualStudioCredential
  5. VisualStudioCodeCredential
  6. AzureCliCredential
  7. AzurePowerShellCredential
  8. InteractiveBrowserCredential

Since I launch Azure Functions Core Tools from Visual Studio, it will use the credentials I specified in Visual Studio.

Azure Service Authentication on Visual Studio
Visual Studio Azure Account

We are using Azure.Identity to connect to Azure AD. When you run this Function App locally, it will use the credentials you specified in Visual Studio or Visual Studio Code, or the credentials you used to log in to the Azure CLI; but when this line runs on the live Function App, it will use the Managed Identity of the Function App. The Function App is an Application User in Power Apps with a security role.

Function App shown as Application User in Power Apps
Function App as an Application User

We can also confirm that the Function App is running with Managed Identity (System Assigned).

Function App's Managed Identity on Azure Portal
Function App Managed Identity

The Function App also has OpenAPI attributes (provided by the Azure Functions OpenAPI Extension), so I can use Swagger UI to interact with the Functions. I love this new capability.

Swagger UI for Function App
Swagger UI
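
For reference, the OpenAPI metadata comes from attributes on the HTTP-triggered Functions. A rough sketch of what this looks like, assuming the Microsoft.Azure.WebJobs.Extensions.OpenApi package and using illustrative function and route names (the attributes in the repo may differ):

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Attributes;

public static class GetContactsFunction
{
    [FunctionName("GetContacts")]
    [OpenApiOperation(operationId: "GetContacts", tags: new[] { "contacts" })]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "application/json", bodyType: typeof(string))]
    public static Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "contacts")] HttpRequest req)
    {
        // The real Function would query Dataverse using the ServiceClient created earlier.
        return Task.FromResult<IActionResult>(new OkObjectResult("[]"));
    }
}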

I hope this makes your life simpler when using Azure Functions to connect to the Dataverse Web API.

Repo:

https://github.com/rajyraman/PowerApps-Managed-Identity-Demo-Functions

Credits:

  1. This post would not be possible without the conversation I had with Amrit last Friday, where he casually mentioned this idea. Before that, I was always confused about how the end-to-end process would work without an AppId or certificate. The article he pointed me to was https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet#asal. That article uses the Microsoft.Azure.Services.AppAuthentication package, which is deprecated, so I have switched to Azure.Identity. Huge thanks to Amrit. You can follow him on @amritsidhu61 (Twitter) or https://www.linkedin.com/in/sidhuamrit
  2. Justin Yoo – For helping me sort through the OpenAPI issues. You can follow him at https://twitter.com/justinchronicle?s=20

Building Exposure Bot using Google Actions, Azure Functions and Power Automate

Almost everyone now has a smartphone. The two dominant platforms are Google Android and Apple iOS. Both platforms have a virtual assistant built into the OS: Android’s is Google Assistant, and Apple’s is Siri. In this post, I will show you how you can build a custom action for Google Assistant. The equivalent of a custom action in Bot Framework is a Skill.

Here is the high level architecture diagram of the solution.

Architecture diagram of the virtual assistant solution
Architecture of virtual assistant solution

The starting point for building the virtual assistant solution is creating a new Actions project. Head over to https://console.actions.google.com/ to create one.

Google Actions projects screen
Google Actions projects

The first thing you need to do is name your bot. I have named mine “Exposure Bot”. This means that the user can trigger the bot by saying “Talk to exposure bot” to Google Assistant.

Settings screen
Settings screen

We then need to create one or more scenes to handle the user’s input. After the user has triggered the bot using the phrase “Talk to exposure bot”, she is then transferred to a scene called triggerExposure_scene.

Transitioning from main scene to triggerExposure scene
Transitioning from main scene to triggerExposure_scene

We can now design the main scene that handles multiple inputs. This scene is called triggerExposure_scene.

trigger exposure scene
trigger exposure scene

In this scene you can use intents to decide what to do based on user input. We have three intents in the Actions project:

  1. Suburb_intent – When the user says something like “Show me the numbers for Melbourne”, it will be mapped to this intent. A Flow (webhook) will be called to get the response for this intent
  2. Notifications_intent – When the user says something like “Send me daily notifications”, it will be mapped to this intent. The user is transitioned over to a scene called subscription_scene so that we can get accurate GPS information for the subscription and user’s permission for sending push notifications
  3. IncomingPushNotifications_Intent – There is a scheduled Flow that sends exposure notifications three times a day to the user. When the user clicks on that notification, this intent is invoked. But, as you can see when that happens, the conversation immediately ends. The user should never hit this intent in this scene.

Now let us look at intents. An intent basically captures what the user intends to do, based on training phrases. For example, below are the training phrases for Suburb_intent. Notice that you can map the types (Entity in PVA) right inside the training phrase itself.

List of intents
Intents

For figuring out whether the user wants to receive push notifications, we have a different set of training phrases.

Push notifications intent
Push Notifications Intent

The suburb type itself is just free text.

Suburb type
Suburb type

All the above scenes and intents are used when the user is chatting with the bot. But, in the case of push notifications, the IncomingPushNotifications_Intent will be invoked only when the user clicks on the notification in Android. Here is how that scene looks. Notice that it is a global intent, which means it can be invoked even in the middle of the conversation.

Push Notifications scene
Push Notifications scene

When the user clicks on the push notification, she is transferred to the pushNotification_scene, which grabs the current location using Android’s native location API and calls the Flow to get the current exposure data.

Push Notification scene
Push Notification scene

This is also how Suburb_intent gets the current exposure data.

Suburb intent webhook
Suburb intent webhook

We have a similar webhook call for setting up the push notification subscription. Notice how we are grabbing the location and notification permissions and then calling the Flow with the message name.

Push notification subscription scene
Push notification subscription scene

We are calling webhooks in a couple of places. But where are we defining the Flow that is called? It is in the webhook area. The key to call the Flow is on the URL itself. I am not aware of any alternative method to transparently use OAuth in Google Actions. If you know a better way to handle authentication in this context, please add a comment on the post.

Webhook setup
Webhook setup

There are two more things you need to make sure of:

  • Actions API is enabled for the project
  • Service Account is created for the project
Service Account

Both enabling the Actions API and creating the service account for the project are documented in https://codelabs.developers.google.com/codelabs/actions-user-engagement/#3, which you can refer to for additional instructions. After you download the JSON for the service account, copy only the key and keep it aside; we will need it when adding secrets to Key Vault. Don’t include the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- parts. Copy just the key, as highlighted.

Service Account key
Service Account Key
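
For context, the Function App uses this key to get an access token for the Actions API when it sends push notifications. A rough sketch of how that can look, assuming the Google.Apis.Auth NuGet package (variable names are illustrative and the repo's implementation may differ):

using System.Threading.Tasks;
using Google.Apis.Auth.OAuth2;

public static class GoogleTokenHelper
{
    // googleAccountEmail and googlePrivateKey come from the service account JSON,
    // read from Key Vault by the Function App.
    public static Task<string> GetActionsApiTokenAsync(string googleAccountEmail, string googlePrivateKey)
    {
        var credential = new ServiceAccountCredential(
            new ServiceAccountCredential.Initializer(googleAccountEmail)
            {
                // Scope used for Actions API push notifications.
                Scopes = new[] { "https://www.googleapis.com/auth/actions.fulfillment.conversation" }
            }.FromPrivateKey(googlePrivateKey));

        // Returns the bearer token used when POSTing the push notification.
        return credential.GetAccessTokenForRequestAsync();
    }
}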

The next part is setting up the Azure resources: Function App, Storage Account, App Insights and Key Vault. Head over to https://github.com/rajyraman/Google-Actions-Push-Notifications where you have a Deploy to Azure button. Click the button and fill in the Google Account Email, Google Secret and your Azure User Id.

Deploy Azure Resources
Deploy Azure Resources

You can click on the visualise button to see what resources will be deployed.

Both the Google Account Email and Google Secret are in the Service Account JSON file which you would have downloaded. To get the User Id, run the following command in the Azure CLI.

 az ad signed-in-user show --query objectId --output tsv

After the resources are provisioned, you need to deploy the Function code to Azure by running this command.

func azure functionapp publish FUNCTIONAPPNAME

If you want to deploy from the command line and have Bicep and the Azure CLI installed, I have also included a PowerShell script that you can use to deploy from your local machine.

Deployment Script
Deployment Script

Since the Function App has OpenAPI enabled, you can also access it from the browser to quickly understand the APIs it exposes.

Swagger UI
Swagger UI

In production it is not a good idea to expose this to everyone, as the documentation might contain confidential data, so you can lock the Swagger UI down further or remove it altogether. Refer to https://github.com/Azure/azure-functions-openapi-extension/blob/main/docs/openapi.md#configuration for additional information.

The Function App uses a Durable Entity that is persisted in Azure Storage. Since we just need to store the Google User Id and the location for which exposure notifications are required, it is a good fit for small applications like this one.
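
A Durable Entity for this scenario can be as small as a class that holds the location, keyed by the Google User Id. A minimal sketch (the entity in the repo may be shaped differently; names here are illustrative):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Newtonsoft.Json;

[JsonObject(MemberSerialization.OptIn)]
public class SubscriptionEntity
{
    // The entity key is the Google User Id; the state is the location to watch.
    [JsonProperty("location")]
    public string Location { get; set; }

    // Called when the user asks for daily notifications for a location.
    public void Subscribe(string location) => Location = location;

    // Called when the user unsubscribes; removes the entity state from Azure Storage.
    public void Unsubscribe() => Entity.Current.DeleteState();

    [FunctionName(nameof(SubscriptionEntity))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx) =>
        ctx.DispatchAsync<SubscriptionEntity>();
}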

I have also uploaded all the Google Actions code to the repo, so to deploy this to Google Actions, you can use the gactions CLI.

Google Actions source code
Google Actions source code

The commands to run are:

gactions login
gactions push

Before deploying, you need to update the Flow URL to the URL of the Get Exposure Data Flow. This Flow gets the COVID-19 exposure data provided by the Data.VIC Victorian Exposure Sites API.

Update Flow URL for webhook
Update Flow URL for webhook

You can download the Managed Solution with Custom Connectors and Flows from https://1drv.ms/u/s!AvzjERKFC6gOyAZf2Mw1f2QkgYft?e=HL1bnV. Here is how the solution looks.

Flow and Custom Connector Solution

After importing, you also have to change the base URL of the Google Assistant custom connector to point to the correct base URL of your Function App. This is the Function App that manages subscriptions for locations, Google API authentication and sending push notifications to the user.

After all of this is set up, you will be able to chat with the bot from your Google Assistant.

Push notifications from exposure bot
Push notifications from exposure bot
Google Assistant – Exposure Bot

All Actions messages are captured in the logs, so you can always refer to them if you are stuck.

Google Cloud Logs

You can refer to the source code at https://github.com/rajyraman/Google-Actions-Push-Notifications

Credit: flaticon.com for the icons used in custom connector

References:

  1. Push Notifications on Google Actions
  2. gactions CLI
  3. Discover Azure Functions: OpenAPI and Power Apps
  4. Conversational Actions

Resubmitting failed Logic Apps using Power Automate

Even though each Logic App action has a retry policy, after a certain number of retries, the Logic App engine gives up and fails the whole execution. In these scenarios the Logic App needs to be re-run. You can use Power Automate to handle this scenario. You can download the Power Automate solution from https://1drv.ms/u/s!AvzjERKFC6gOx3-NptgmlSyD4FVu?e=vwxyAE

After you import the solution, you need to set the value of the Azure Subscription Id environment variable to the Azure subscription where the Logic Apps are.

Environment variable to store subscriptionid
Subscription Id environment variable

You need to have an HTTP with Azure AD connection that can be mapped to the HTTP with Azure AD Connection Reference in the solution. Both the Base Resource URL and Azure AD Resource URL should be: https://management.azure.com/

HTTP with Azure AD connection
HTTP with Azure AD connection to Azure RM API

You also need to have an Azure Resource Manager connection that can be mapped to the Azure Resource Manager Connection Reference.

The Flow runs on an HTTP trigger, so it is manual at the moment. But you can easily change it to a Schedule trigger.

This is the key part where the Flow resubmits failed executions.

Re-running all failed Logic App executions
Segment to resubmit failed executions
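
Under the hood, the Flow is simply calling the Azure Resource Manager REST API through the HTTP with Azure AD connection. Roughly the equivalent calls, sketched in C# with illustrative placeholder values:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Azure.Core;
using Azure.Identity;

// Placeholder values: the Flow reads these from the environment variable and the list of runs.
var subscriptionId = "<subscription id>";
var resourceGroup = "<resource group>";
var workflowName = "<logic app name>";
var triggerName = "<trigger name>";
var failedRunName = "<run name of a failed run>";

var token = await new DefaultAzureCredential().GetTokenAsync(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

using var http = new HttpClient { BaseAddress = new Uri("https://management.azure.com/") };
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

// 1. List the failed runs of the Logic App.
var failedRuns = await http.GetStringAsync(
    $"subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}" +
    $"/providers/Microsoft.Logic/workflows/{workflowName}/runs" +
    "?api-version=2016-06-01&$filter=status eq 'Failed'");

// 2. Resubmit a failed run using its trigger name and run name.
await http.PostAsync(
    $"subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}" +
    $"/providers/Microsoft.Logic/workflows/{workflowName}/triggers/{triggerName}" +
    $"/histories/{failedRunName}/resubmit?api-version=2016-06-01",
    content: null);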

The Flow returns JSON with both the old runId and the new runId of the resubmitted Logic App.

Flow response with Logic App execution detail
Output from Flow

This Flow can be made even smarter by persisting this JSON into Cosmos DB or Table Storage, so that you can stop retrying Logic Apps that have failed more than twice. That way you don’t keep resubmitting executions that will always fail due to data validation errors.

Acknowledgements:

  1. Pieter Veenstra – Pieter’s method – https://sharepains.com/2019/07/09/compose-apply-to-each-power-automate/
  2. Pieter Veenstra – Unnest nested arrays – https://sharepains.com/2021/02/10/unnest-nested-arrays-in-power-automate/

Paging while using FetchXML in Dataverse Connector

If you want to retrieve more than 5,000 records in a Flow using the List Rows action from Dataverse, you need to page through the records. Flow does not automatically do this for you. This is not a new topic; it has already been explored by Linn and Debajit. You can read their posts below:

  1. Linn – https://linnzawwin.blogspot.com/2021/01/retrieve-more-than-100000-dataverse.html
  2. Debajit – https://debajmecrm.com/how-to-query-more-than-5k-cds-records-using-fetchxml-in-powerautomate-microsoft-flow/

When I was looking into this same problem, I did two things differently:

  1. Using iterationIndexes for paging
  2. Using the xml function for encoding the paging cookie, instead of manually encoding “<” to “&lt;”, “>” to “&gt;” etc.

Here is the Flow

I use iterationIndexes like this to set the page number for each iteration.

The paging cookie returned with the first page looks like this:

<cookie page="1"><systemuserid last="{7A44238F-9894-EB11-B1AC-002248153EDE}" first="{5C4D8EE8-62FF-E911-A811-000D3A799417}" /></cookie>

I then use the expression below to sanitise the characters so that I can use it for the subsequent page. It looks a bit clunky and verbose, but the key thing here is that when the xml function encodes the XML, it also sanitises it. encodingJSON is a temporary object used to store the paging cookie XML; this JSON is what gets converted to XML and then cleaned up with split and substring.

if(equals(iterationIndexes('Do_until'),0),
	'',
	concat(
		'paging-cookie=''', 
		substring(
			first(
				skip(
					split(
						string(xml(setProperty(variables('encodingJSON'),'x',variables('pagingCookieCleansed'))))
					,'<')
				,1)
			)
		,2),
	'''')
)

Here is the Flow running through all pages.

You can download this sample Flow from https://1drv.ms/u/s!AvzjERKFC6gOx3ZbdK8YOBYat0CV?e=M41ONd

Using Custom API as a trigger for Flow

Dropping new goodies straight into Microsoft Docs, without any formal announcement, has now been normalised. A couple of Virtual Table features have been “announced” without much fanfare this way. The ability to trigger Flows from a Custom API is one such unannounced feature.

Custom API is a feature in Dataverse that is very similar to Custom Process Actions (formerly known as Actions). You can refer to the Compare Custom Process Action and Custom API doc to understand the differences. There are several useful XrmToolBox tools to help you work with Custom API:

  1. Custom API Manager by David Rivard
  2. Custom API Tester by Jonas Rapp
  3. Custom Action to Custom API Convertor by Mark Carrington

The UI to create a new Custom API involves a few too many clicks. So, we will use Custom API Manager to create our API and Custom API Tester to trigger it. You can easily create a new Custom API using Custom API Manager.

XrmToolBox Custom API Manager Tool

The important points to note when you create a Custom API that can be used as a trigger are:

  1. You cannot have the IsFunction set to true
  2. You cannot have IsPrivate set to true
  3. You cannot have Allowed Custom Processing Step Type set to None

After you have created your Custom API, you need to create:

  1. A Root Catalog
  2. At least one Child Catalog
  3. One Catalog Assignment record for each Custom API or Table

Note that you cannot add Catalog Assignment records straight to the root catalog.

You can do these right inside the solution. Here is how my root catalog record looks.

Root Catalog

The unique name of the catalog needs to have a publisher prefix, otherwise you will get this error.

Root Catalog no prefix exception

After creating the root catalog, you can create the child catalog from the related records area in the form.

Child Catalog list

This is how my child catalog looks.

Child Catalog Record

I am going to add all my Custom APIs to this Custom APIs sub-catalog as Catalog Assignments. You can add both Custom APIs and Tables/Entities (if the Custom API is bound to a Table) from this screen.

Catalog Assignment

Here is how my solution looks after creating the Catalog, Custom API and Catalog Assignment records.

Solution with Custom API and Catalog records

The next step is to create the Flow with the “When an action is performed” trigger. In this trigger you need to choose the root catalog, the sub-catalog and the Custom API in that sub-catalog.

If the Custom API is bound to a Table, you need to choose the Table and then the Custom API, as the list of Custom APIs is filtered by Table.

Custom API bound to Table trigger

Here is how my trigger looks. Since my Custom API is not bound to any entity, I choose none for the Table name.

Unbound Custom API Trigger

My Flow will now run when I trigger the Custom API. I can do this using the Custom API Tester tool.

Custom API Tester tool
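
Outside of XrmToolBox, the Custom API can also be invoked from code. A minimal sketch using the Dataverse ServiceClient, assuming a hypothetical unbound Custom API named raj_EscalateCase with a single request parameter:

using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk;

var serviceClient = new ServiceClient("<Dataverse connection string>");

// The request name is the unique name of the Custom API; parameters map to its request parameters.
var request = new OrganizationRequest("raj_EscalateCase")
{
    ["CaseNumber"] = "CAS-01234-XYZ"   // hypothetical request parameter
};

// Executing the request raises the Custom API message, which in turn fires the Flow trigger.
var response = serviceClient.Execute(request);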

I can also trigger the Custom API from Flow itself.

Trigger Custom API from Flow

This will trigger my Flow that was waiting for that Custom API call.

Flow executed on Custom API call

One thing to note: if you created the Custom API with Is Private set to true, you will still see that Custom API in the Flow trigger, but when you save the Flow, you will get this exception.

Private Custom API error

One word of warning: this is a preview feature, so don’t use it in production yet. But it is a welcome addition, and it opens up Flow to more integration scenarios.

References:

  1. Trigger flows when a Microsoft Dataverse action is called
  2. Catalog and Catalog Assignment Tables (Thanks to Jim Daly for sharing this docs link)

Future of developers in Power Platform

Generally all my posts are problem-solution type posts. I never write anything that is purely my opinion. But, considering the recent developments, I thought I should sit down and write down my thoughts. Before we even consider whether developers have a future in Power Platform, we need to first delve into the historical context of the developer’s role and their tasks in Dynamics CRM.

In the beginning of time

I come from a technical background, starting with ASP.NET WebForms, C# and Dynamics CRM OnPremise. Back in those days my role mostly revolved around plugins, custom workflow assemblies, and a bit of JavaScript to do data validation and hide/show or enable/disable fields. I also occasionally had to use Deployment Manager to do some backup/restore or Update Rollup installations.

A CRM developer just had to know C#, the CRM SDK, and enough JavaScript to write crmForm.all scripts. You never had to be highly skilled in these languages, since you needed to know just enough to write a plugin, a custom workflow and some JavaScript. This might also be because CRM developers were previously C# or SharePoint developers. Since your code ran in a managed environment, you did not need to worry about a whole lot of things a .NET developer had to worry about, like caching, multi-threading/parallelisation, memory leaks and unit tests (💣).

ALM was non-existent because there was no concept of Solutions. Even when Solutions were introduced, there was not a whole lot of enthusiasm for Managed Solutions, which, I would say, is still the case. Most people just used unmanaged because it was the default. A developer’s role on the ALM (?!) front was to export the solution from DEV, import it into TEST/PROD, and keep a copy of the solution zip file in a shared network folder.

Then came Solution Packager (2012), AdxStudio ALM Toolkit (2013) and XRM CI Framework (2014). These, along with TFS, helped folks unpack, check in, repack and deploy the solution. Only a few people were passionate about ALM back then, even with this new tooling, since it was just as easy to take a database backup (in the OnPrem world), export and import the unmanaged solution, and restore the database from the backup if there was a solution issue.

Once the size of projects grew, along with the number of other developers in the team, it became necessary to write unit tests and integration tests. Everyone was simply adopting what was available in the .NET world. I was using NUnit, Moq and Fakes. Then came FakeXrmEasy (2015), a testing framework specifically for Dynamics CRM. It became easy to do unit testing by setting up test records in memory and using the fake execution context to validate plugin/workflow behaviour. Jasmine and Sinon.JS were used for testing client-side code, but these were front-end developer frameworks, not Dynamics CRM specific.

So, by the end of 2016, Dynamics CRM developers were mostly doing plugin, custom workflow, JavaScript development, deployment, unit testing and DevOps using TFS. Since this is not based on feedback from other developers across the world in 2016, you could also interpret this as what I was doing back in those days.

Project Siena – The one that got away

My only encounter with the precursor to canvas apps, known as Project Siena (2013), was very brief. As someone who was doing full-time dev work by this point, I downloaded the app purely to play with it, dropped a few controls to understand what it was doing, and lost interest because I compared it to what I could do in a WinForms or ASP.NET application. As someone who could build apps, it felt like a downgrade rather than a productivity add-on. So, my life continued to revolve mostly around C#, JavaScript and Dynamics CRM.

CRM Online vs OnPremise – EV vs Petrol

My first encounter with CRM Online was around 2013. The biggest shock for me was the lack of access to the underlying database or IIS logs. It took away two items critical to my debugging process. But it was merely a precursor of things to come as everything transitioned into the SaaS world. It was good in a way, though, as you didn’t have to spoil your weekend looking through IIS logs and trace logs or querying the database, trying to come up with a solution. All you needed to do was raise a support ticket.

So when things moved to CRM Online, developers just had to schedule the deployment window, rather than doing database backups, installing the Update Rollup, etc. This was a good change for developers, as they could focus on dev work and ALM, and not have to be the Windows admin or database admin.

A lot of the time developers had to write SSRS reports as well, which meant you needed to understand complex T-SQL, create indexes/statistics, etc. With CRM Online you can only use FetchXML, which meant that a lot of the reports could not be authored. When more SQL reports moved to Power BI, due to the limitations of SSRS reports, CRM developers got more time to work on their areas of interest.

Power Platform – Shock and Awe

Prior to Power Platform, there were two worlds: the world of a CRM developer who might occasionally do Azure things, and the SharePoint + canvas apps world. I think even now, when people say Power Apps they mean canvas apps, while CRM developers use the term model-driven apps instead of Power Apps. Even though Microsoft is persisting with its unification efforts, these two terms will most likely continue to exist.

Power Apps entered public preview in April 2016 and became generally available in October 2016. With GA, we got the Common Data Service and the Common Data Model. Even after this, I still continued to do CRM things, only because I was stuck in the CRM 2015 OnPremise world while Power Platform was beginning to take shape. People from the Office 365 E5 world eagerly boarded the Power Platform rocket ship, which started flying a million miles an hour.

  • XRM became Common Data Model
  • People got confused between Common Data Model and Common Data Service
  • People were surprised to find that there can be multiple “Common” Data Services per tenant
  • Cool kids started using Flow rather than Workflows
  • It became even cooler to use Logic Apps over Flow
  • Flow/Workflow parity, aka “in the fullness of time”, became a fun topic to discuss
  • Choosing the right CDS connector became the new “have you restarted your machine?”
  • SharePoint/Canvas apps people still did not care for Solutions, which remained a CRM thing
  • It looked like canvas apps was a gaming platform
  • SharePoint List was the most popular “database” for canvas apps

No Code/Low Code – Reality Distortion Field

The origins of No Code/Low Code probably lie in the whole SQL vs NoSQL debate. While NoSQL is technically feasible, and there are umpteen NoSQL databases that are alternatives to SQL Server or Oracle, can you really do everything you can do with code using low code? Microsoft’s stance these days seems to be that low code is not meant to be a replacement for code, but more like a power tool.

Also, with code you commit it into source control, use version control and write tests to mitigate bugs, but you still write Power Fx code inside Power Apps without any of these additional “overheads”. You can of course do this now with Power Apps Language Tools, and test to a degree with Test Studio, but back then it was just the appx-based deployment and manual testing.

Initially the marketing for Power Apps was around how it is the great enabler, how it transfers power to the people, how you can quickly build something without writing any code, and how you can transition into Power Apps from any non-tech role if you just learn Power Apps. It was all about striking an emotional chord. There were “Happy Gilmore” and “Good Will Hunting” moments, because it is all about an unexpected outsider making an impact. But the campaign inadvertently turned code, IT admins and developers into bogeymen and gatekeepers. “If you don’t use Power Apps/Flow, you might have to write code” became the scary proposition. Is writing code that scary?

Screenshot of Bane's speech

It might be inefficient to write code when you could use low code tools to save dev time. But that was not how I remember the initial marketing efforts. They focused on how a regular business user can have the power (#LessCodeMorePower) using low code tools and how they can create apps to make their lives better. It was not pitched as a productivity add-on for a CRM/Power Apps developer. But, considering that it was predominantly marketed to the “no code” crowd, that is not hugely surprising.

For example, consider this tweet from a developer’s perspective.

If you are a developer, it is hard not to feel disillusioned about the future. If one person could achieve something in a short time that a whole team of developers could not, what does that do to your self-esteem and craftsmanship? Does this mean that you should give up on coding and switch to low code tools? The short answer is no.

Developers are not going to end up like Zune*

According to the State of the Octoverse 2020, there were 60M+ new repositories created between October 2019 and September 2020. So, code and developers are not going away any time soon. Even if low code tools get incrementally more powerful year after year, there will always be a gap that needs to be bridged with code. Also, sometimes it might be much simpler to do something in code rather than build a 40-step multi-branch Flow or a canvas app with duct-taped Power Fx expressions. If you have a strong and inflexible opinion about either low-code or pro-code, the end result might not be optimal. So, it is important to keep an open mind.

There was a period of time where developers could focus purely on the CRM components, i.e. form scripts, plugins, custom workflow steps, etc. and nothing else. As Power Platform + Azure convergence seems to be the emerging pattern in a lot of upcoming projects, developers also need to diversify, rather than relying only on model-driven apps.

Since Power Fx is now a language, everyone who uses it could technically call themselves a developer. It might start showing up inside other products in the Power Platform.

Power Fx started with Power Apps canvas apps and that is where you can experience it now. We are in the process of extracting the language from that product so that we can use it in more Microsoft Power Platform products and make it available here for you to use. That’s going to take some time and we will report on our progress here and on the Power Apps blog.

https://github.com/microsoft/Power-Fx

Power Fx might take off, or it might not, but pro developers need to at least keep a watch on how it shapes up. Is it all hype and marketing, or is it something that can improve your productivity? Things can change quite dramatically in a short period of time, like Nokia vs Apple or Chrome vs the rest.

https://zdnet1.cbsistatic.com/hub/i/2018/11/26/ba7e61c7-73c6-442d-b93a-c8f1c892ac32/4b4d5930ea733c2cb9f2a299e0b7ee94/apple-nokia-revenue.jpg
Apple beats Nokia in under 2 years
https://upload.wikimedia.org/wikipedia/commons/7/71/StatCounter-browser-ww-yearly-2009-2020_%28updated_until_November%29.png
Chrome took 2 years to become the dominant browser

Here are some areas developers can focus on:

  1. Power Apps Component Framework + React + TypeScript – PCF components can be used in Power Apps portals, canvas apps and model-driven apps, so it saves a lot of dev time since you don’t need to target individual products
  2. Virtual Tables – This is even more exciting now as you get CRUD support
  3. Azure DevOps – Citizen developers are going to rely on developers to manage the pipeline and releases, review/approve pull requests, and work through solution layering, environment variable and connection reference issues. Since you might be using this for other Azure components like Azure Functions, Logic Apps etc., this is a good one to learn
  4. Azure Static Web Apps – If you need to build a custom portal entirely in code, or a single-page app to embed inside an existing portal and interact with APIs, this is a good one to consider
  5. Azure Functions – While you might be able to implement what a single Function does using Logic Apps or Flow, it is hard to replicate what a Durable Function or a Function with a SignalR binding does. Functions give you greater control, since they are entirely code. As a developer in a Fusion team, you might be in charge of creating the API using Functions and exposing it using Azure APIM, so that citizen developers can consume it via a custom connector
  6. Canvas Apps/Flow/Logic Apps – Even though these are low-code, having code skills can be quite beneficial when you write the expressions or Power Fx. It is also a good opportunity to learn about UX, app design principles, accessibility etc.
  7. Power Virtual Agents Custom Skills – While you could call a Flow and use the response in a PVA bot, using custom skills is a code-first way to do it
  8. Docker – Containerisation and Dev Containers are so useful in getting a consistent dev environment set up. It also gives you the ability to switch between your local machine and Codespaces. So, it is good to learn and experiment with this

At the end of the day, if you love to code and would like to keep coding, learning new things and playing with emerging tech, you still have choices, but most of them seem to be in Azure these days. That might not be a bad thing, as you could potentially switch over to full stack or Azure development if low code takes away a lot of the opportunities for a developer. It might also end up being better career-wise, as having Azure/PCF/React/TypeScript skills might open up a whole new world of opportunities outside of the Microsoft partner ecosystem.