Everyone hates passwords. But passwords are easy, because most people use the same password across multiple services. When the Power Platform DataverseServiceClient was first released, everyone was excited, because you could use it with the v3 runtime. But at the same time, there was no option to use a username and password like you would with CrmServiceClient. So, the next easiest option was to go with AppId and Secret, or a certificate if you wanted a bit more security. The only problem is that both secrets and certificates expire. When they expire, it creates a big headache, especially in integration scenarios.
So, now let us look at a better option – connecting to Dataverse using Managed Identity. If you would rather read the code than this post, you can head to https://github.com/rajyraman/PowerApps-Managed-Identity-Demo-Functions. You can provision the Function App and associated resources to your Azure tenant by clicking the “Deploy to Azure” button on the repo. After this, you need to deploy the Function App code and create the Application User in Power Apps.
I have also simplified the process with a PowerShell script that you can run from deploy/run.ps1. This PowerShell script provisions the required Azure resources, deploys the Function App, and also adds the Function App’s Service Principal as an Application User in your Power Apps environment.
This is where the magic happens.
There are three key things in this code (a sketch follows the list below):
Token is retrieved transparently using the Managed Identity
Service Client uses the Azure Identity SDK’s GetTokenAsync to get the token. This token is used on all subsequent calls to the Dataverse Web API
We don’t request a specific scope, so the scope URI ends with .default
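If you want a feel for what this looks like, here is a minimal sketch using the Azure Identity SDK and the ServiceClient constructor that accepts a token provider function. The environment URL is a placeholder, and this is an illustration of the pattern rather than the exact code in the repo:

using System;
using Azure.Core;
using Azure.Identity;
using Microsoft.PowerPlatform.Dataverse.Client;

// Locally this picks up your Visual Studio/VS Code/Azure CLI credentials;
// on the live Function App it picks up the Managed Identity.
var credential = new DefaultAzureCredential();

var serviceClient = new ServiceClient(
    new Uri("https://yourorg.crm.dynamics.com"), // placeholder environment URL
    async (string instanceUri) =>
    {
        // No specific scope is requested, so the scope URI ends with .default
        var scope = $"{new Uri(instanceUri).GetLeftPart(UriPartial.Authority)}/.default";
        var token = await credential.GetTokenAsync(new TokenRequestContext(new[] { scope }), default);
        return token.Token;
    });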
Since I launch Azure Functions Core Tools from Visual Studio, it will use the credentials I specified in Visual Studio.
We are using Azure.Identity to connect to Azure AD. When you run this Function App locally, it will use the credentials you signed in with in Visual Studio or Visual Studio Code, or the credentials you used to log in to the Azure CLI; but when this line runs on the live Function App, it will use the Managed Identity of the Function App. The Function App is an Application User in Power Apps with a security role.
We can also confirm that the Function App is running with Managed Identity (System Assigned).
The Function App also has OpenAPI attributes (provided by the Azure Functions OpenAPI Extension), so I can use Swagger UI to interact with the Functions. I love this new capability.
I hope this makes your life simpler when using Azure Functions to connect to Power Apps Web API.
Almost everyone now has a smartphone. The two dominant platforms are Google Android and Apple iOS. Both of these platforms have a virtual assistant built into the OS: Android’s virtual assistant is called Google Assistant, and Apple’s is called Siri. In this post, I will show you how you can build a custom action for Google Assistant. The Bot Framework equivalent of a custom action is Skills.
Here is the high level architecture diagram of the solution.
The starting point for building the virtual assistant solution is to create a new Actions project, by heading over to https://console.actions.google.com/.
The first thing you need to do is name your bot. I have named mine “Exposure Bot”. This means that the user can trigger the bot by saying “Talk to exposure bot” to Google Assistant.
We then need to create one or more scenes to handle the user’s input. After the user has triggered the bot using the phrase “Talk to exposure bot”, she is then transferred to a scene called triggerExposure_scene.
We can now design the main scene that handles multiple inputs. This scene is called triggerExposure_scene.
This scene uses three intents to decide what to do based on the user’s input. We have the following three intents in the Actions project.
Suburb_intent – When the user says something like “Show me the numbers for Melbourne”, it will be mapped to this intent. A Flow (webhook) will be called to get the response for this intent
Notifications_intent – When the user says something like “Send me daily notifications”, it will be mapped to this intent. The user is transitioned over to a scene called subscription_scene so that we can get accurate GPS information for the subscription and the user’s permission for sending push notifications
IncomingPushNotifications_Intent – There is a scheduled Flow that sends exposure notifications three times a day to the user. When the user clicks on that notification, this intent is invoked. But, as you can see, when that happens the conversation immediately ends. The user should never hit this intent in this scene.
Now let us look at intents. An intent is basically figuring out what the user intends to do, based on training phrases. For example, below are the training phrases for Suburb_intent. Notice that you can map the types (the equivalent of Entities in PVA) right inside the training phrase itself.
For figuring out whether the user wants to receive push notifications, we have a different set of training phrases.
The suburb type itself is just free text.
All the above scenes and intents are used when the user is chatting with the bot. But in the case of push notifications, the IncomingPushNotifications_Intent will be invoked only when the user clicks on the notification in Android. Here is how that scene looks. Notice that it is a global intent, which means it can be invoked even in the middle of a conversation.
When the user clicks on the push notification, she is transferred to the pushNotification_scene, which grabs the current location using Android’s native location API and calls Flow to get the current exposure data.
This is how suburb_intent also gets the current exposure data.
We have a similar webhook call for setting up the push notification subscription. Notice how we are grabbing the location and notification permission and then calling the Flow with the message name.
We are calling webhooks in a couple of places. But where are we defining the Flow that is called? It is in the webhook area. The key to call the Flow is on the URL itself. I am not aware of any alternative method to transparently use OAuth in Google Actions. If you know a better way to handle authentication in this context, please add a comment on the post.
Both enabling the Actions API and creating a service account for the project are documented in https://codelabs.developers.google.com/codelabs/actions-user-engagement/#3, which you can refer to for additional instructions. After you download the JSON for the service account, copy only the key and keep it aside; we will need it when adding it to Key Vault. Don’t include the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- parts. Copy just the key, as highlighted.
The next part is setting up the Azure resources: Function App, Storage Account, App Insights and Key Vault. Head over to https://github.com/rajyraman/Google-Actions-Push-Notifications, where you have a “Deploy to Azure” button. Click the button and fill in the Google Account Email, Google Secret and your Azure User Id.
You can click on the visualise button to see what resources will be deployed.
Both the Google Account Email and Google Secret are in the Service Account JSON file that you downloaded earlier. To get the User Id, run the following command in the Azure CLI.
az ad signed-in-user show --query objectId --output tsv
After the resources are provisioned, you need to deploy the Function code to Azure by running this command.
func azure functionapp publish FUNCTIONAPPNAME
If you want to deploy from command line and have Bicep and Az CLI installed, I have also included a PowerShell script that you can use to deploy from your local machine.
Since the Function App has OpenAPI enabled, you can also access it from the browser to quickly understand the Function App.
The Function App uses a Durable Entity that is persisted in Azure Storage. Since we just need to store the Google User Id and the Location for which exposure notification is required, it is a good fit for small applications like this one.
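As a rough illustration of the idea, a Durable Entity holding that subscription state might look something like this sketch (the class and property names are my assumptions, not necessarily the ones in the repo):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Newtonsoft.Json;

// A tiny entity that stores one user's exposure notification subscription.
[JsonObject(MemberSerialization.OptIn)]
public class Subscription
{
    [JsonProperty] public string GoogleUserId { get; set; }
    [JsonProperty] public string Location { get; set; }

    // Called with the user id and suburb when the user subscribes.
    public void Set((string UserId, string Location) value) =>
        (GoogleUserId, Location) = value;

    // Called when the user unsubscribes.
    public void Delete() => Entity.Current.DeleteState();

    [FunctionName(nameof(Subscription))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx) =>
        ctx.DispatchAsync<Subscription>();
}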
I have also uploaded all the Google Actions code to the repo, so you can use the gactions CLI to deploy it to Google Actions.
After importing, you also have to change the base URL of the Google Assistant custom connector to point to the correct base URL of your Function App. This is the Function App that manages subscriptions for locations, Google API authentication, and sending push notifications to the user.
After all of this is set up, you will be able to chat with the bot from your Google Assistant.
All Actions messages are captured in logs, so you can always refer to them if you are stuck.
Even though each Logic App action has a retry policy, after a certain number of retries, the Logic App engine gives up and fails the whole execution. In these scenarios the Logic App needs to be re-run. You can use Power Automate to handle this scenario. You can download the Power Automate solution from https://1drv.ms/u/s!AvzjERKFC6gOx3-NptgmlSyD4FVu?e=vwxyAE
After you import the solution, you need to set the value for the environment variable: Azure Subscription Id to your Azure Subscription where the Logic Apps are.
You need to have an HTTP with Azure AD connection that can be mapped to the HTTP with Azure AD Connection Reference in the solution. Both the Base Resource URL and Azure AD Resource URL should be: https://management.azure.com/
You also need to have an Azure Resource Manager connection that can be mapped to the Azure Resource Manager Connection Reference.
The Flow runs on an HTTP trigger, so it is manual at the moment. But you can easily modify it to a Schedule trigger.
This is the key part where the Flow resubmits failed executions.
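Under the hood, resubmitting a failed run comes down to a single call to the Azure Management API; the HTTP with Azure AD action posts to the run history’s resubmit endpoint. Roughly (the placeholder names are mine):

POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Logic/workflows/{workflowName}/triggers/{triggerName}/histories/{runId}/resubmit?api-version=2016-06-01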
The Flow returns JSON with both the old runId and the new runId of the resubmitted Logic App.
This Flow can be made even smarter by persisting this JSON into Cosmos DB or Table Storage, so that you can stop retrying Logic Apps that have failed more than 2 times. This way you don’t keep retrying executions that will fail due to some data validation error.
If you want to retrieve more than 5,000 records in a Flow using the List Rows action from Dataverse, you need to page through the records. Flow does not automatically do this for you. This is not a new topic; it has already been explored by Linn and Debajit. You can read their posts below:
I then use the expression below to sanitise the characters so that I can use the paging cookie on the subsequent page. It looks a bit clunky and verbose, but the key thing here is that when the xml function encodes the XML, it also sanitises it. encodingJSON is a temporary object used to store the paging cookie XML. This JSON is what gets converted to XML and cleaned up with split and substring.
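The exact expression isn’t reproduced here, but the shape of the idea, assuming an object variable named encodingJSON of the form { "cookie": "<paging cookie XML>" }, would be something like this sketch:

first(split(last(split(string(xml(variables('encodingJSON'))), '<cookie>')), '</cookie>'))

The xml function serialises the object as <cookie>…</cookie> and XML-encodes the angle brackets inside the value, and the split calls strip the wrapper element, leaving a sanitised cookie you can drop into the next page’s FetchXML.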
Dropping new goodies straight into Microsoft Docs, without any formal announcement, has now been normalised. A couple of Virtual Table features have been “announced” without much fanfare this way. The ability to trigger Flows from Custom API is one such unannounced feature.
Custom API is a feature in Dataverse that is very similar to Custom Process Actions (formerly known as Actions). You can refer to the Compare Custom Process Action and Custom API doc to understand the differences. There are several useful XrmToolBox tools to help you work with Custom API, such as Custom API Manager and Custom API Tester.
The UI to create a new Custom API takes a few too many clicks. So, we will use Custom API Manager to easily create our API, and Custom API Tester to trigger it.
The important points to note when you create a Custom API that can be used as a trigger are:
You cannot have IsFunction set to true
You cannot have IsPrivate set to true
You cannot have Allowed Custom Processing Step Type set to None
After you have created your Custom API, you need to create
At least one Child Catalog
One Catalog Assignment record for each Custom API or Table
You cannot add Catalog Assignment records straight to the root catalog
You can do these right inside the solution. Here is how my root catalog record looks.
The unique name of the catalog needs to have a publisher prefix, otherwise you will get this error.
After creating the root catalog, you can create the child catalog from the related records area in the form.
This is how my child catalog looks.
I am going to add all my Custom APIs to this Custom APIs sub-catalog as Catalog Assignments. You can add both Custom APIs and Tables/Entities (if the Custom API is bound to a Table) from this screen.
Here is how my solution looks like after creating Catalog, Custom APIs and Catalog Assignment records.
The next step is to create the Flow with the “When an action is performed” trigger. In this trigger you need to choose the root catalog, the sub-catalog and the Custom API in that sub-catalog.
If the Custom API is bound to a Table, you need to choose the Table and then the Custom API, as the trigger filters down the Custom API list by Table.
Here is how my trigger looks. Since my Custom API is not bound to any entity, I chose none for the Table name.
My Flow will now run when I trigger the Custom API. I can do this using Custom API Tester.
I can also trigger the Custom API from Flow itself.
This will trigger my Flow that was waiting for that Custom API call.
One thing to note: if you created the Custom API with Is Private set to true, you will still see that Custom API in the Flow trigger, but when you save the Flow, you will get this exception.
One word of warning: This is a preview feature. So, don’t use it in production yet. This is a welcome feature, and it opens up Flow to more integration scenarios.
Generally, all my posts are problem–solution type posts; I never write anything that is purely my opinion. But, considering recent developments, I thought I should sit down and write down my thoughts. Before we even consider whether developers have a future in Power Platform, we need to first delve into the historical context of the developer’s role and their tasks in Dynamics CRM.
In the beginning of time
ALM was non-existent because there was no concept of Solutions. Even when Solutions were introduced, there was not a whole lot of enthusiasm for Managed Solutions, which, I would say, is still the case. Most people just used unmanaged because it was the default. A developer’s role on the ALM (?!) front was to export the solution from DEV, import the solution into TEST/PROD, and keep a copy of the solution zip file in a shared network folder.
Then came Solution Packager (2012), AdxStudio ALM Toolkit (2013) and XRM CI Framework (2014). These, along with TFS, helped folks unpack, check in, repack and deploy the solution. There were only a few people who were passionate about ALM back then, even with this new tooling, since it was just as easy to take a database backup (in the on-premise world), export and import the unmanaged solution, and restore the database from backup if there was some solution issue.
Once the size of projects grew, along with the number of other developers in the team, it became necessary to write unit tests and integration tests. Everyone simply adopted what was available in the .NET world; I was using NUnit, Moq and Fakes. Then came FakeXrmEasy (2015), a testing framework specifically for Dynamics CRM. It became easy to do unit testing by setting up test records in-memory and using the fake execution context to validate plugin/workflow behaviour. Jasmine and Sinon.JS were used for testing client-side code, but these were front-end developer frameworks, not Dynamics CRM specific.
Project Sienna – The one that got away
CRM Online vs OnPremise – EV vs Petrol
My first encounter with CRM Online was around 2013. The biggest shock for me was the lack of access to the underlying database or IIS logs. It took away two items critical to my debugging process. But it was merely a precursor of things to come, as things transitioned into the SaaS world. And it was good in a way, as you didn’t have to spoil your weekend looking through IIS logs and trace logs or querying the database, trying to come up with a solution. All you needed to do was raise a support ticket.
So, when things moved to CRM Online, developers just had to schedule the deployment window, rather than doing database backups, installing the Update Rollup, etc. This was a good change for developers, as they could focus on dev work and ALM, and not be the Windows Admin or Database Admin.
A lot of the time developers had to write SSRS reports as well, which meant you needed to understand complex T-SQL, create indexes/stats, etc. With CRM Online you can only use FetchXML, which meant that a lot of the reports could not be authored. When more SQL reports moved to Power BI, due to the limitations of SSRS reports, this gave CRM developers more time to work on their areas of interest.
Power Platform – Shock and Awe
Prior to Power Platform, there were two worlds: the world of the CRM Developer, who might occasionally do Azure things, and the SharePoint + canvas apps world. I think even now, when people say Power Apps they mean canvas apps, while CRM Developers use the term Model-Driven Apps instead of Power Apps. Even though Microsoft is persisting with its unification efforts, these two terms will most likely continue to exist.
Power Apps entered public preview in April 2016 and became generally available in October 2016. With GA, we got the Common Data Service and the Common Data Model. Even after this, I still continued to do CRM things, only because I was stuck in the CRM 2015 on-premise world while Power Platform was beginning to take shape. People from the Office 365 E5 world eagerly boarded the Power Platform rocket ship, which started flying a million miles an hour.
XRM became Common Data Model
People got confused between Common Data Model and Common Data Service
People were surprised to find that there can be multiple “Common” Data Services per tenant
Cool kids started using Flow rather than Workflows
It became even cooler to use Logic Apps over Flow
Flow/Workflow parity, aka “in the fullness of time”, became a fun topic to discuss
Choosing the right CDS connector became the new “have you restarted your machine?”
SharePoint/Canvas apps people still did not care for Solutions, which remained a CRM thing
It looked like canvas apps were a gaming platform
SharePoint List was the most popular “database” for canvas apps
No Code/Low Code – Reality Distortion Field
The origins of No Code/Low Code probably lie in the whole SQL vs NoSQL debate. While NoSQL is technically feasible, and there are umpteen NoSQL databases that are alternatives to SQL Server or Oracle, can you really do everything you can do with code using low code? Microsoft’s stance these days seems to be that it is not meant to be a replacement for code, but more like a power tool.
Also, with code you commit to source control, use version control, and write tests to mitigate bugs; yet you write Power Fx code inside Power Apps without any of these additional “overheads”. You can of course do this now with Power Apps Language Tools, and do testing to a degree with Test Studio, but back then it was just the appx-based deployment and manual testing.
Initially, the marketing for Power Apps was around how it is the great enabler: how it transfers power to the people, how you can quickly build something without writing any code, how you can transition into Power Apps from any non-tech role if you just learn Power Apps. It was all about striking an emotional chord. There were “Happy Gilmore” and “Good Will Hunting” moments, because it is all about an unexpected outsider making an impact. But the campaign inadvertently turned code, IT Admins and developers into bogeymen and gatekeepers. “If you don’t use Power Apps/Flow, you might have to write code” became the scary proposition. Is writing code that scary?
It might be inefficient to write code when you could use low-code tools to save dev time. But that is not how I remember the initial marketing efforts. They were focused on how a regular business user can have the power (#LessCodeMorePower) using low-code tools, and how they can create apps to make their life better. It was not pitched as a productivity add-on for a CRM/Power Apps developer. But, considering that it was predominantly marketed to the “no code” crowd, that is not hugely surprising.
For example, consider this tweet from a developer’s perspective.
If you are a developer, it is hard not to feel disillusioned about the future. If one person could achieve something in a short time that a whole team of developers could not, what does that do to your self-esteem and craftsmanship? Does this mean that you should give up on coding and switch to low-code tools? The short answer is no.
Developers are not going to end up like Zune*
According to the State of the Octoverse – 2020, there were 60M+ new repositories created between October 2019 and September 2020. So, code and developers are not going away any time soon. Even if low-code tools get incrementally more powerful year after year, there will always be a gap that needs to be bridged with code. Also, sometimes it might be much simpler to do something in code, rather than build a 40-step multi-branch Flow or a canvas app with duct-taped Power Fx expressions. If you have a strong and inflexible opinion about either low-code or pro-code, the end result might not be optimal. So, it is important to keep an open mind.
There was a period of time where developers could focus only on the CRM components, i.e. form scripts, plugins, custom workflow steps, etc., and nothing else. As Power Platform + Azure convergence seems to be the emerging pattern in a lot of upcoming projects, developers also need to diversify, rather than rely only on model-driven apps.
Since Power Fx is now a language, everyone who uses it could technically call themselves a developer. It might start showing up inside other products in the Power Platform.
Power Fx started with Power Apps canvas apps and that is where you can experience it now. We are in the process of extracting the language from that product so that we can use it in more Microsoft Power Platform products and make it available here for you to use. That’s going to take some time and we will report on our progress here and on the Power Apps blog.
Power Fx might take off, or it might not, but pro-developers need to at least keep a watch on how it shapes up. Is it all hype/marketing, or is it something that can improve your productivity? Things can change quite dramatically like Nokia vs Apple or Chrome vs the rest in a short period of time.
Here are some areas developers can focus on:
Power Apps Component Framework + React + TypeScript – PCF components can be used in Power Apps portals, canvas apps and model-driven apps, so it saves a lot of dev time since you don’t need to target individual products
Virtual Tables – This is even more exciting now as you get CRUD support
Azure DevOps – Citizen developers are going to rely on developers to manage the pipelines and releases, review/approve pull requests, and work through solution layering, environment variable and connection reference issues. Since you might be using this for other Azure components like Azure Functions, Logic Apps etc., this is a good one to learn
Azure Static Web Apps – If you need to build a custom portal entirely in code, or a single-page app to embed inside an existing portal and interact with APIs, this is a good one to consider
Azure Functions – While you might be able to implement what a single Function does using Logic Apps or Flow, it is hard to replicate what a Durable Function or a Function with a SignalR binding does. Functions give you greater control, since it is entirely code. As a developer in a Fusion team, you might be in charge of creating the API using Functions and exposing it via Azure APIM, so that citizen developers can consume it using a custom connector
Canvas Apps/Flow/Logic Apps – Even though these are low-code, having code skills can be quite beneficial when you write the expressions or Power Fx. It might also be a good opportunity to learn about UX, app design principles, accessibility, etc.
Docker – Containerisation and Dev Containers are so useful in getting a consistent dev environment set up. It also gives you the ability to switch between your local machine and Codespaces. So, it is good to learn and experiment with this
At the end of the day, if you love to code and would like to keep coding, learning new things and playing with emerging tech, you still have choices, but most of them seem to be in Azure these days. That might not be a bad thing, as you could potentially switch over to Full Stack or Azure dev if low code takes away a lot of the opportunities for a developer. It might also end up being better career-wise, as having Azure/PCF/React/TypeScript skills might open up a whole new world of opportunities outside of the Microsoft partner ecosystem.
You cannot use act or tmate because you are using a Windows runner. One way to work around the jam is to use a self-hosted runner when you are troubleshooting the Action. But rather than configuring the runner on your local machine, what if you could run it inside a container? This way you can run multiple runners on the same machine, all using the same underlying Windows Server Core image.
You need to install the following on your local machine:
Once you have the token, you can run the runner-setup PowerShell script to create the environment file for the container. This file holds the URL of the repo where the Actions with the self-hosted runner will run, and the personal access token required to connect to the GitHub API.
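For illustration, the environment file ends up looking something like this (the variable names here are hypothetical; the runner-setup script creates the real one):

REPO_URL=https://github.com/yourname/yourrepo
GITHUB_PAT=<personal access token>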
Now run docker-compose up --build and you should see the container being built and started with the GitHub runner.
You should now be able to see the runner on the repo that you set up the env file for.
Now let us create a new workflow that utilises the self-hosted runner. The key thing here is to specify the self-hosted runner in “runs-on”.
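A minimal workflow along these lines shows where “runs-on” fits (the workflow name and steps are placeholders):

name: self-hosted-demo
on: workflow_dispatch
jobs:
  build:
    # Target the self-hosted Windows container runner instead of a GitHub-hosted one
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - run: echo "Hello from the container runner"
        shell: pwsh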
Since this workflow uses the workflow_dispatch trigger, you can run it manually. If you run the workflow now, you should start to see the logs appear on the container’s console.
With the latest version of the Docker extension for VS Code, you can also explore the container’s file system, which is quite handy in case you want to inspect the artifacts produced by the workflow.
I hope this tip will help you when you are stuck building workflows with Power Platform Actions.
It is not an overstatement to say that there is still so much buzz around PCF components, even though there are no monthly announcements like there are for canvas apps or Power Automate. Just look at the sheer number of components on pcf.gallery and you will be amazed by the creativity of the community.
To help people create new PCF components, there is the Yeoman PCF Generator to do the scaffolding. The Power Apps community is significantly lagging behind in using Dev Containers, and believe me, by 2022 Dev Containers and GitHub Codespaces will become so commonplace that we won’t blink an eyelid.
With that in mind, I thought it would be good if I could move the scaffolding process entirely to GitHub, rather than doing it locally. The PCF CLI is not cross-platform; this is the blocker for Codespaces at the moment, but that is being worked on.
In order to create a new PCF Repo, just click the “Use this Template” button.
When you create a new repo from the template, use this convention: “PCF-ComponentName-Component”. The important thing is to have the hyphens in the repo name; otherwise you will have issues scaffolding out the repo, and the PCF CLI will throw an exception.
Your new repo will also have a GitHub workflow to build the component and release the solution. It can be run manually, and it will also run automatically when you push a commit to the repo with a tag that begins with “v”, e.g. v1.5. You can refer to https://git-scm.com/book/en/v2/Git-Basics-Tagging for information on Git tags.
I hope this will help people create new PCF components.
Before anyone can start developing PCF components, they have to prepare their local machine and get it ready. This usually means that a whole bunch of tools and frameworks have to be installed. As per the Microsoft documentation, you need these prerequisites:
.NET 4.6.2 Developer Pack
Power Apps CLI
Visual Studio Build Tools (for Solution packaging)
What if you could start with just VS Code, without wasting time on any of this, with a bit of templating thrown in? It is possible, with one secret ingredient: Docker.
When I started this exploration around April, I tried using Linux containers, since both VS Dev Containers and GitHub Codespaces only support Linux containers. But I faced an issue straight away during the npm build.
I tried to edit the files in node_modules to make it work, but I encountered more issues, due to the fact that the PCF CLI specifically targets Windows only. So, I gave up on that idea until about a couple of weeks back.
I thought, why not use Windows containers in Docker rather than Linux containers? The first image of my choice was Nano Server, because of the image size. But the Nano Server image did not seem to play nicely when I wanted to install VS Build Tools and Chocolatey, so I ended up moving to Windows Server Core.
Below are the relevant links you can look into for the Docker file and the image:
Step 2: Once Docker is up and running, right-click on the icon and choose “Switch to Windows Containers”. The screenshot below shows “Switch to Linux Containers” because I am already using Windows containers
Step 3: Choose the folder where you want your source code, and create two items: a folder called src and a file called pcf.env
Step 4: Add the details about your PCF component to the pcf.env file. These correspond to the command-line args in the PCF CLI. Below is a sample.
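As an illustrative sample only (the actual variable names come from the repo’s Docker setup; these hypothetical ones mirror the pac pcf init and pac solution init arguments):

PCF_NAMESPACE=SampleNamespace
PCF_NAME=TestComponent
PCF_TEMPLATE=field
PUBLISHER_NAME=samplepublisher
PUBLISHER_PREFIX=sample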
Step 5: Install the Docker extension for VS Code, if you want to use the GUI rather than commands for working with containers.
Step 6: Run the command below in VS Code’s PowerShell terminal window to create the container from the image in the GitHub Container Registry. I have specified the name of the container as “TestComponent” in the example below, but pick a different name based on the component name.
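The command would have roughly this shape (IMAGE stands for the image from the GitHub Container Registry link above; the flags shown are illustrative):

docker run -it --name TestComponent --env-file .\pcf.env -v ${PWD}\src:C:\src IMAGE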
Guido runs a great site called pcf.gallery, where there are over 200 PCF components. The components on the site are hosted on GitHub and have solutions that can be installed on CDS. Some components have a managed solution, some have an unmanaged one, and some have both. Since a lot of the components are open source and hosted on GitHub, I thought it would be good to automate this using GitHub Actions and ensure every component has both managed and unmanaged solutions.
To use this workflow on your repo, follow these steps:
Clone this repo and copy the .github folder from the cloned repo into the root of your repo.
Update the msbuildtarget environment variable in the yml to point to the folder containing the “Other” folder. This folder would have been created when you ran the “pac solution init” command.
The “Other” folder will have the xml files that will be packaged up into the solution.
Make sure that you update the value of “SolutionPackageType” in your cdsproj file to “Both”, so that the Solution Packager can build both Managed and Unmanaged Solutions.
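In the cdsproj, that property looks roughly like this (in the template it ships commented out, so you may only need to uncomment it and change the value):

<PropertyGroup>
  <SolutionPackageType>Both</SolutionPackageType>
</PropertyGroup>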
Once you make these changes, commit them to your repo. The build command will run every time you commit something to the repo. You can download the managed and unmanaged solutions from the artifact area of the build.
Once you are ready to release the solutions, tag the commit using “git tag”. If the tag is in the format v*, e.g. v1.0, v1.0.0 etc., the workflow will run the Release steps, which will create a new release and add both the managed and unmanaged solutions to the release.
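For example, to cut a release:

git tag v1.0.0
git push origin v1.0.0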
Both the Solution Name and the Solution Version will be picked up from the Solution.xml file, so as you release new versions, make sure you update this information on the file.
I hope this makes it a little easier to automate your PCF builds and releases on GitHub.