Integrating Slack with Dynamics 365 Customer Engagement

In the previous post, I described how easy it is to use Microsoft Flow to interact with Dynamics 365 Customer Engagement by letting Azure Functions handle the core logic. In this post, I will show how to integrate Slack with Dynamics 365 Customer Engagement using Flow and Functions.

This is the objective: from my Slack channel, I want to quickly query the record count of an entity using a slash command, without having to jump into XrmToolBox or the Dynamics 365 Customer Engagement application itself. I took the record count as a simple use case; you can create multiple slash commands, with each one doing a different targeted action in Dynamics 365.

The first step is to create the new app in Slack. Navigate to https://api.slack.com/apps/new

New Slack App.png

Since this is an internal app that I won’t be distributing, I am choosing a simple name. If you plan to distribute this app, choose a more appropriate name.

Now you will be taken to the app’s initial config screen.

New App Initial Screen.png

We will be creating a new slash command that will return the record count of the entity from Dynamics 365 Customer Engagement. Click on “Create a new command”

Slash Commands.png

Choose the name for the slash command. I am just going with “/count”.

Add new slash command.png

 

The critical part here is the Request URL. This is the URL that Slack will POST to with some information about the command. What is this information and what does it look like? I used RequestBin* (see footnote) to find out.

Request Bin.png

 

Note the two relevant parameters:

  • command – This is the actual slash command the user executed
  • text – This is the text that comes after the slash command

For example, if I typed “/count account” into the Slack chat window, the command parameter’s value will be “/count” and the text parameter’s value will be “account”. During the development phase, I put RequestBin’s URL in the Request URL field. We will come back later, once the Flow is complete, and replace this placeholder URL with the actual Flow URL.
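
For reference, Slack sends the slash command as an application/x-www-form-urlencoded POST. Decoded, the parameters look roughly like this (the values below are placeholders rather than the real ones from my workspace):

token=xxxxxxxxxxxxxxxxxxxxxxxx
team_id=T0001
team_domain=yourteam
channel_id=C1234567890
channel_name=general
user_id=U1234567890
user_name=natraj
command=/count
text=account
response_url=https://hooks.slack.com/commands/T0001/1234567890/xxxxxxxxxxxx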

Now you can see the list of slash commands in this app.

List of slash commands.png

Now click on “Basic Information” on the left, and then on “Install your app to the workspace”. This should expand the section, and you can install the app into your workspace by clicking on “Install App to Workspace”.

Slack App Information.png

Grant the required permissions for the app.

Authorise App.png

Now it is time to develop the Flow, which looks very similar to the one in my previous post about Flow and Functions. The difference here is that the Flow is triggered by an HTTP POST, and not manually using a Flow button. Flow will receive the slash command from Slack. Here is what the Flow looks like.

Flow Execution Log

Here is what the Flow does:

  1. When an HTTP POST request is received from Slack, it posts a message back to Slack asking the user to wait while the record count is retrieved.
  2. Checks if the slash command is “count”
  3. If the slash command is “count”, calls the Azure Function using the custom connector (refer to the previous post on how to create a custom connector to the Azure Function that you can use in Flow)
  4. Parses the response received from the Azure Function, which queries Dynamics 365 Customer Engagement for the entity’s record count
  5. Sends a mobile notification, which shows up if the user has the Flow app installed
  6. Sends a message back to the channel that the slash command was executed on, with the record count

There are three important bits in the Flow:

The first is getting the slash command from the POST message.

Parse command.png

The second is posting into the right Slack channel i.e. the channel that was the source of the slash command. You can get the channel from the “channel_name” parameter.

Post message step.png
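
Since Slack posts a form-encoded body, one way to pull these values (the command, its text and the channel name) out in the Flow is the triggerFormDataValue() expression from the workflow expression language. Treat these as a sketch of the idea rather than the exact expressions in my Flow:

triggerFormDataValue('command')
triggerFormDataValue('text')
triggerFormDataValue('channel_name')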

The third is parsing the JSON returned by the Azure Function. This is the schema of the JSON returned:

{
    "type": "object",
    "properties": {
        "entityName": {
            "type": "string"
        },
        "count": {
            "type": "number"
        }
    }
}
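
The record-count Function itself is not listed in this post, but a minimal sketch of what it could look like is below. Treat it as an illustration rather than my exact code: the entityname body parameter, the “CRM” connection string name and the <entityname>id primary key convention are all assumptions.

using System.Net;
using System.Configuration;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Tooling.Connector;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Hypothetical: the entity logical name is read from the POST body
    dynamic data = await req.Content.ReadAsAsync<object>();
    string entityName = data?.entityname;
    if (entityName == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, "entityname not found in the request body");
    }

    // Connect using the "CRM" connection string from Application Settings (assumed name)
    var client = new CrmServiceClient(ConfigurationManager.ConnectionStrings["CRM"].ConnectionString);

    // Aggregate count via FetchXML. Assumes the primary key follows the <entityname>id convention
    // and ignores the 50,000 record aggregate limit for simplicity.
    var fetchXml = $@"<fetch aggregate='true'>
                        <entity name='{entityName}'>
                          <attribute name='{entityName}id' alias='recordcount' aggregate='count'/>
                        </entity>
                      </fetch>";
    var result = client.RetrieveMultiple(new FetchExpression(fetchXml)).Entities.First();
    var count = (int)((AliasedValue)result["recordcount"]).Value;

    // Returns JSON matching the schema above: { "entityName": "...", "count": ... }
    return req.CreateResponse(HttpStatusCode.OK, new { entityName = entityName, count = count });
}

The Flow then simply maps entityName and count from this response into the message that is posted back to the Slack channel.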

You can get the Flow URL by clicking on the HTTP step that is the first step of the Flow.

Flow URL.png

Grab the whole HTTP URL and plug it into the slash command’s Request URL.

Now, you can use the slash command on your workspace to get the record count.

Slack Workspace

Slack Workspace result

Note: When I worked on this post last month, RequestBin had the capability to create private bins. But when I looked into this again this week, it looks like they have taken this capability away due to abuse -> https://github.com/Runscope/requestbin.

Request Bin message.png

You would have to self-host RequestBin to inspect the POST message from Slack. The other option is to create the Flow with just the HTTP request step and look at the execution log to see what was posted, like below.

HTTP Post.png

 


Introduction to integrating Azure Functions & Flow with Dynamics 365

I haven’t paid much attention to what is happening in the Azure space (Functions, Flow, Logic Apps etc.), because I was under the impression that it is a daunting task to set up the integration, i.e. Azure AD registration, getting tokens, auth headers and the whole shebang.

As a beginner trying to understand the stack and how to integrate the various applications, I had been postponing exploring this due to the boilerplate involved in setting it up. But then I read this post from Yaniv Arditi: Execute a Recurring Job in Microsoft Dynamics 365 with Azure Scheduler. Things started clicking, and I decided to spend some days exploring Functions & Flow.

I started with a simple use case: as a Dynamics 365 Customer Engagement administrator, I need the ability to do some simple tasks from my mobile during my commute. A Flow button fits this requirement perfectly. The scenario I looked into solving is how to manage the Dynamics 365 Customer Engagement trace log settings from the Flow app on my mobile, in case I get a call about a plugin error on my way to work and need the logs waiting for me when I get to work.

As I wanted to get a working application as fast as possible, I did not start writing the Functions code from Visual Studio. Instead, I tested my code from LINQPad, as it is easier to import NuGet packages and also get IntelliSense (Premium version). If you want to execute Azure Functions locally, read Azure Functions Tools for Visual Studio on the docs site. I did install and play with it once I completed the Flow+Function integration. When you install the Azure Functions Tools for Visual Studio, you also get the capability to run and debug the functions locally. How awesome is that ❤️!

There are two minor annoyances that I encountered with Visual Studio development locally:

  1. There is no IntelliSense for csx files. Hopefully this will be fixed soon. The suggested approach in the meantime appears to be “Pre-compiled Azure Functions”, which also improves the function’s cold-start execution time. But I did not try it in this exploration phase.
  2. I had to install the NuGet packages locally using Install-Package, even though they were specified in project.json. I could not debug the Azure Functions locally without this, as the NuGet restore did not seem to happen automatically on build.

I will now go through the steps involved in creating the Flow button to update the Trace Log setting in Dynamics 365 Customer Engagement.

Step 1: Head to the Azure Portal (https://portal.azure.com/).

Azure Portal.png

Step 2: Search for Function App, select the row that says “Function App” and click on “Create” in the right-most pane.

Functions App.png

Step 3: Specify the Functions App name and click “Create”.

Function App Settings

Step 4: Navigate to the newly created Function App from the notification area. It is also good to “Pin to dashboard” for easier access the next time you log in to the portal.

Azure Notifications

Step 5: Click on the “Application Settings” link from the initial Functions App screen.

Functions Initial Screen.png

Step 6: Choose the Platform as 64-bit. I got compilation errors with the CrmSdk NuGet packages when this was set to 32-bit. You will also have to add the connection string to your CRM instance. The connection string name that I have specified is “CRM”. You may want to make this a bit more descriptive.

Functions General settings

Connection String.png
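
For reference, an XrmTooling connection string for a Dynamics 365 (online) instance looks something like the line below. The org name and credentials are placeholders, and OnPremise/IFD instances use a different AuthType.

AuthType=Office365;Username=admin@contoso.onmicrosoft.com;Password=<password>;Url=https://contoso.crm.dynamics.com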

Step 7: Now comes the exciting part. Click on the “+” button and then click on the “Custom Function” link.

Custom Function.png

Step 8: This new function will execute on an HTTP trigger and is coded in C#.

Functions Http Trigger

Step 9: After this, I sporadically experienced a blank right-hand pane with nothing in it. If this happens, simply do a page refresh and repeat steps 6-8. If everything goes well, you should see this screen. I left the Authorization level as “Function”, which means that the auth key needs to be in the URL for invocation.

New Function creation screen

Step 10: You are now presented with some quick start code. Click on the “View Files” pane, which is collapsed on the right-hand side.

Default Functions Code.png

Step 11: Click on “Add” and enter the file name as “project.json”

Add Project Json

Step 12: Paste the following JSON, which pulls the CRM SDK assemblies from NuGet, into the “project.json” file and press “Save”. The NuGet packages should begin to download.

{
  "frameworks": {
    "net46":{
      "dependencies": {
        "Microsoft.CrmSdk.CoreAssemblies": "9.0.0.7",
        "Microsoft.CrmSdk.XrmTooling.CoreAssembly": "9.0.0.7"
      }
    }
   }
}

Project Json Updated.png

Step 13: Now open the “run.csx” file, paste in the following code and save.

using System.Net;
using System.Configuration;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Tooling.Connector;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Get the requested trace log level from the request body
    dynamic data = await req.Content.ReadAsAsync<object>();
    string traceLogLevel = data?.traceloglevel;

    if (traceLogLevel == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, "TraceLog Level not found in the request body");
    }

    // Connect to Dynamics 365 using the "CRM" connection string from Application Settings
    var client = new CrmServiceClient(ConfigurationManager.ConnectionStrings["CRM"].ConnectionString);

    // Retrieve the current plug-in trace log setting from the organization record
    var organizationSetting = client.RetrieveMultiple(new FetchExpression("<fetch><entity name='organization'><attribute name='plugintracelogsetting' /></entity></fetch>")).Entities.First();
    var oldTraceLogValue = (TraceLog)organizationSetting.GetAttributeValue<OptionSetValue>("plugintracelogsetting").Value;
    var newTraceLogValue = (TraceLog)Enum.Parse(typeof(TraceLog), traceLogLevel, true);

    if (oldTraceLogValue == newTraceLogValue)
    {
        return req.CreateResponse(HttpStatusCode.OK, $"TraceLog Level has not changed from {traceLogLevel}. No update.");
    }

    // Update the setting only when it has actually changed
    organizationSetting["plugintracelogsetting"] = new OptionSetValue((int)newTraceLogValue);
    client.Update(organizationSetting);

    return req.CreateResponse(HttpStatusCode.OK, $"Trace Log updated from {oldTraceLogValue} to {newTraceLogValue}");
}

enum TraceLog
{
    Off,
    Exception,
    All
}

Step 14: You can now execute this function by clicking “Run” and using the JSON in the screenshot for the POST body. The “traceloglevel” can be one of three values: Off, Exception and All.

Execute Function.png
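
In case the screenshot is hard to read, the test body is just a small JSON object with the single property that the function reads, along these lines:

{
  "traceloglevel": "All"
}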

As you can see, the function:

  1. Connected to the organization specified in the Application Settings using the connection string
  2. Retrieved the current trace setting and updated it, if there is a change, using the SDK
  3. Returned the response as text/plain.

If you want to execute the same using Postman or Fiddler, you can grab the Function URL as well. Note that the auth key is in the URL.

Function Url

Step 15: Since the function performs an update, I don’t want a simple GET to trigger the change. So, just turn off “GET” and save. This means that the “traceloglevel” will only be updated on a “POST”, and not on a “GET” with a query string.

Functions Integrate.png

Step 16: Now it is time to export the API definition JSON for consumption by Flow.

API Definition.png

Step 17: Choose “Function Preview” as the API Definition Key and then click “Generate API Definition Template” button to generate the Swagger JSON.

API Definition Generate.png

Step 18: Now click on the “Authenticate” button, enter the function auth key (see Step 14) in the API Key textbox and click on the “Authenticate” button in the dialog box.

Authenticate.png

You should see a green tick next to the apiKeyQuery. This means that the key has been accepted.

Authenticated.png

Step 19: Now it is time to add the POST body structure to the Swagger JSON. I used the Swagger editor to play around with the schema and understand how this works. Thank you Nishant Rana for this tip.

Swagger JSON
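
The exact JSON depends on the generated template, but the addition for the POST body is roughly along these lines (a Swagger 2.0 sketch of just the parameters section, not the full definition):

"parameters": [
  {
    "name": "traceloglevel",
    "in": "body",
    "required": true,
    "schema": {
      "type": "object",
      "properties": {
        "traceloglevel": {
          "type": "string"
        }
      }
    }
  }
]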

You should now be able to POST to this function easily and inspect the responses.

Swagger Response.png

Step 20: Now click on the “Export to PowerApps+Flow” button and then on the “Download” button. You should now be prompted to save ApiDef.json into your file system.

Export to PowerApps Flow.png

Step 21: Now it is time to navigate to Flow

Microsoft Flow

Step 22: You can now create a custom connector to hookup Function and Flow.

Custom Connector.png

Step 23: It is now time to import the Swagger JSON file from Step 20. Choose “Create custom connector” and then “Import an OpenAPI file”, and in the dialog box choose the downloaded file.

Step 24: Specify the details about the custom connector. This will be used later to search for the connector when you build the Flow.

Connector Information.png

Step 25: Just click next, as the API key will be specified on the connection, not on the connector. The URL query string parameter is “code”.

Connector Api Key

Step 26: Since I have only the “ModifyTraceLogSetting” action, this is the only one that shows up. If you have multiple functions in the Function App, multiple operations should be displayed on this screen.

Connector Action Definitions

Step 27: If you scroll down, you can see that the connector has picked up the message body that is to be sent with the POST.

Connector Message Body.png

Step 28: If you click on the “traceloglevel” parameter, you can see details about the POST body.

Connector Post Message Param.png

Step 29: This is the time to create the connection that will be used by the connector.

Connector Test.png

Step 30: Enter the Function API key that you got from Step 14. This will be used to invoke the Function.

Connections Api Key

Step 31: The connection does not show up straight away. You will have to click on the little refresh icon that is to the right of the Connections section. You can now test the connection by clicking the “Test Operation” button, and choosing the parameter value for “traceloglevel” that will be sent with the POST body. You can also see the live response from the Function on this screen.

Connections Test with body.png

Connections Result

Step 32: Once you have saved your connector, you will see something like this below, on the list of custom connectors.

Custom Connector View

Step 33: Now is the time to create the Flow. Choose My Flows -> Create from blank -> Search hundreds of connectors and triggers

Create Flow

Create blank flow

Step 34: Enter the Flow name and since this will be invoked from the Flow app on mobile, choose “Flow button for mobile” as the connector.

Flow Button.png

Step 35: The Flow button will obviously be triggered manually.

Manually trigger flow.png

Step 36: When the user clicks the Flow button, it is time to grab the input, which in this case will be the Trace Log Level setting. Choose “Add a list of options” and also choose a name for the input.

Trigger Flow Input

Step 37: You don’t want the user to enter free-text or numbers, hence you present a list of options from which the user will choose one.

Trace Level Options.png

Step 38: After clicking “Add an action”, you can now choose the custom connector that you created. Search and locate your custom connector.

Flow Custom Connector.png

Step 39: Flow has magically populated the actions that are exposed by this connector. In this case there is only one action to modify the Trace Log setting.

Flow Custom Connector Action.png

Step 40: In this step, you don’t want to choose a value at design time; rather, you want to map the user-entered value to the custom connector. So, choose “Enter custom value”.

Trace Log Level Custom Connector.png

Step 41: The name of the input in Step 37 is “Trace Level”, so choose this value as the binding value that will be used in the custom connector.

Trace Log Level Custom Connector Bind

Step 42: In this case, I have a simple action: I just want to receive a mobile notification.

Trace Log Notification.png

Step 43: I just want to receive a notification on my mobile, since I have the Flow app installed. When my custom connector calls the function that updates the trace log level, the response text returned by the function comes through as the Body in the Flow app.

This text is displayed as a notification. If the Function returns JSON, you have to use the Parse JSON action to grab the right property. In this case, it is not required as the response is plain text.

Send Mobile Notification.png

Send Mobile Notification Body.png

Step 44: When the Flow design is complete, it should look like this.

Flow Design Complete.png

Step 45: You can run the Flow from either the Flow app on mobile or right from here. I click “Run Now” to check if everything is OK. You can also specify the “Trace Level” here that will be passed to the Function.

Run Flow.png

Run Flow Trace Level Parameter.png

Step 46: I can check the status of the Flow easily. The cool thing about this screen is that it logs a lot of information that is useful when troubleshooting what went wrong.

Flow Execution Log.png

I can also invoke this Flow on my mobile, using the Flow App. I get a native notification when the Flow completes.

What’s next

While I was experimenting with Flow and Functions, I wanted to test the integration between Slack and Dynamics 365. As a proof of concept, I am running a custom command (“/recordcount”) on a Slack channel to retrieve records from Dynamics 365.

Slack Channel.png

I will blog about this next.

Conclusion: I am really excited about the future of Flow & Functions and what this brings to the table for both developers, who want to get their hands dirty, and power users, who want something that they can hook up easily without writing any code.

If you have any feedback, suggestions or errors in this post, please comment below, so that I can learn and improve.

Export all attachments using LINQPad

I was playing around with LINQPad today and wrote this C# code to export all attachments from CRM. You can customise the query to export only certain attachments if required. You could also modify the code to gather the output location from the user, instead of asking them to choose between “My Documents” or “Desktop”. This could also be potentially written as an XrmToolBox tool.

I executed the code in LINQPad v5.26 against a Dynamics CRM 2016 OnPremise 8.1 environment. I tried to retrieve the attachments using LINQ, but decided to use a normal QueryByAttribute with paging for performance reasons.

Util.RawHtml("<h4>Choose an output path</h4>").Dump();
var folders = new List<Environment.SpecialFolder> { Environment.SpecialFolder.Desktop, Environment.SpecialFolder.MyDocuments };
folders.ForEach(x => new Hyperlinq(() => DumpFiles(Environment.GetFolderPath(x)), x.ToString()).Dump());

void DumpFiles(string selectedFolder)
{
	Util.ClearResults();
	new Hyperlinq(selectedFolder).Dump("Chosen output path");
	var progress = new Util.ProgressBar("Writing files: ").Dump();
	progress.HideWhenCompleted = true;
	var retrieveQuery = new QueryByAttribute("annotation")
	{
		ColumnSet = new ColumnSet("documentbody","filename"),
		PageInfo = new PagingInfo{ Count = 500, PageNumber = 1 }
	};
	retrieveQuery.AddAttributeValue("isdocument", true);
	var resultsDc = new DumpContainer().Dump($"Results");
	EntityCollection results;
	int totalRecordCount = 0;
	do
	{
		results = ((RetrieveMultipleResponse)this.Execute(new RetrieveMultipleRequest { Query = retrieveQuery })).EntityCollection;
		var files = results.Entities.Cast<Annotation>();
		totalRecordCount += results.Entities.Count;
		resultsDc.Content = $"Completed Page {retrieveQuery.PageInfo.PageNumber}, Files: {totalRecordCount}";
		int fileNumber = 0;
		foreach (var f in files)
		{
			fileNumber++;
			var fileContent = Convert.FromBase64String(f.DocumentBody);
			File.WriteAllBytes(Path.Combine(selectedFolder, f.FileName), fileContent);
			progress.Caption = $"Page {retrieveQuery.PageInfo.PageNumber} - Writing files: {fileNumber}/{retrieveQuery.PageInfo.Count}";
			progress.Percent = fileNumber * 100 / retrieveQuery.PageInfo.Count;
		}
		retrieveQuery.PageInfo.PageNumber++;
		retrieveQuery.PageInfo.PagingCookie = results.PagingCookie;
	} while (results.MoreRecords);
	resultsDc.Content = $"{totalRecordCount} files saved.";
}

LINQPad Annotation Export User Input.png

LINQPad Annotation Export.png

Basic CRUD using Xrm.WebApi

UPDATE (30/10): Official documentation has been published -> https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/clientapi/reference/xrm-webapi. Andrii got it right. IMHO this feels a little clunky and incomplete, as you need to know the message parameters along with the types. Luckily, it appears Jason is already working on this issue -> https://github.com/jlattimer/CRMRESTBuilder/issues/30, and once that is done it will make life easy again.

UPDATE (23/10): Part 2 (http://butenko.pro/2017/10/18/microsoft-dynamics-365-v9-0-usage-of-new-oob-webapi-functions-part-2/) & Part 3 (http://butenko.pro/2017/10/18/microsoft-dynamics-365-v9-0-usage-of-new-oob-webapi-functions-part-3/) have been published. I am not sure if this is how MS intends this to be used. I’ll wait for official MS documentation for confirmation regarding this.

UPDATE (18/10): It appears Andrii got to this topic first -> http://butenko.pro/2017/10/05/microsoft-dynamics-365-v9-0-usage-of-new-oob-webapi-functions-part-1/. I should have probably subscribed to his RSS feed – it could have saved me some time. Anyway, there is also a Part 2 that he has not posted yet, so I am looking forward to seeing what I missed.

Dynamics 365 Customer Engagement v9 has added the ability to perform CRUD operations against the Web API endpoint using the Client API.

Xrm Web Api.png

Based on my initial analysis, this seems to be a work in progress and more functions will be added over time. Below is some sample code showing how you can do basic CRUD using this new feature. This is not exhaustive documentation, but considering that there is nothing about this in the official documentation, it is a starting point.

Create : Method signature is ƒ (entityType, data)

Sample code to create 3 contact records

[...new Array(3).keys()].forEach(x => Xrm.WebApi.createRecord('contact', {
    firstname: 'Test',
    lastname: `Contact${x}`
}).then(c => console.log(`${x}: Contact with id ${c.id} created`))
  .fail(e => console.log(e.message)))

WebApi Create.png

Retrieve: Method signature is ƒ (entityName, entityId, options)

Sample code to retrieve contact record based on the primary key

Xrm.WebApi.retrieveRecord('contact', 'cadf8ac6-17b1-e711-a842-000d3ad11148', '$select=telephone1')
  .then(x => console.log(`Telephone: ${x.telephone1}`))
  .fail(e => console.log(e.message))

WebApi Retrieve

RetrieveMultiple: Method signature is ƒ (entityType, options, maxPageSize)

Sample code to retrieve 10 contact records without any conditions.

Xrm.WebApi.retrieveMultipleRecords('contact', '$select=fullname,telephone1', 10)
  .then(x => x.entities.forEach(c => console.log(`Contact id: ${c.contactid}, fullname: ${c.fullname}, telephone1: ${c.telephone1}`)))
  .fail(e => console.log(e.message))

WebApi RetrieveMultiple.png

Update: Method signature is ƒ (entityName, entityId, data)

Sample code to update field on contact record

Xrm.WebApi.updateRecord('contact', 'cadf8ac6-17b1-e711-a842-000d3ad11148', {
    telephone1: '12345'
}).then(x => console.log(`Contact with id ${x.id} updated`))
  .fail(x => console.log(x.message))

WebApi Update.png

Delete: Method signature is ƒ (entityName, entityId)

Xrm.WebApi.deleteRecord('contact', '88E682D8-18B1-E711-A842-000D3AD11148')
  .then(c => console.log('Contact deleted'))
  .fail(x => console.log(x.message))

WebApi Delete.png

What is not yet done/appears to be in progress

  1. Xrm.WebApi.offline not yet implemented
  2. Ability to construct custom OData requests to pass into Xrm.WebApi.execute (Refer Andrii’s post)
  3. Batching multiple requests (Refer Andrii’s post)

You can use this in your client-side code on v9. It is quite basic at the moment, but you don’t need to include any external libraries. For more advanced scenarios, you can always use the Xrm WebAPI Client until these features are made available in the Client API.

Reference: https://docs.microsoft.com/en-au/dynamics365/get-started/whats-new/customer-engagement/new-in-july-2017-update-for-developers#new-client-apis

Cancelling save event based on the result of async operation

EDIT (06/02/2018): Tanguy reported a scenario where the original code did not work when the user did a “saveandclose” instead of “save”. I have updated the code to handle this scenario. The updated code uses the jQuery library on the parent frame to do the deep clone, but you could very well do the same using lodash’s cloneDeep so that you don’t have to rely on CRM’s jQuery to do the job.

When you want to cancel a save event in Dynamics 365 Customer Engagement, you use “preventDefault()” to block the save operation. This works when you block the operation based on the information that is currently on the form/page, but it does not work if you want to block the save based on the result of an async operation.

In this contrived example, I would like to block the save of the current form if there exists a user with the “homephone” field set to 12345. The async operation is performed by “retrieveMultipleRecords”, which returns a Promise.

The code below does not work

Xrm.Page.data.entity.addOnSave((e)=>{
	Xrm.WebApi.retrieveMultipleRecords('systemuser','$select=fullname,jobtitle,homephone').then(x=>{
		console.log(`DataXml OnSave: ${Xrm.Page.data.entity.getDataXml()}`);
		if(x.entities.some(x=>x.homephone == '12345')){
			e.getEventArgs().preventDefault();
			console.log('User with homephone 12345 exists. Save blocked.');
		}
	});
});

Result

Notice that the save event completed and the form’s load event fired even though preventDefault ran. The update to the “jobtitle” field that I modified also succeeded, when I expected it not to.

Async Save block does not work

In order to block the save, you’ll have to restructure the code a little differently, like the one below: block the save before the async operation, explicitly call save when your criteria for saving are met, and use a closure variable to keep track of whether to save or not.

Working code

Xrm.Page.data.entity.addOnSave((()=>{
	let isSave = false;
	var uiClone = parent.jQuery.extend(true, {}, Xrm.Page.ui);
	var entityClone = parent.jQuery.extend(true, {}, Xrm.Page.data.entity);

	var closeHandler = ()=>{
		console.log('local. close blocked.');
	};

	var saveHandler = (ev)=>{
			console.log('local. save blocked.');
			Xrm.WebApi.retrieveMultipleRecords('systemuser','$select=fullname,jobtitle,homephone').then(x=>{
				isSave = !x.entities.some(x=>x.homephone == '12345');
				if(isSave){
					Xrm.Page.data.entity.save = entityClone.save;
					Xrm.Page.ui.close = uiClone.close;
					if((typeof ev === 'string' && ev === 'saveandclose') ||
						(ev.getEventArgs && ev.getEventArgs() && ev.getEventArgs().getSaveMode() === 2)){
						console.log('saveandclose');
						entityClone.save('saveandclose');
					}
					else{
						console.log('save');
						entityClone.save();
					}
				}
				else{
					console.log('User with homephone 12345 exists. Save blocked.');
				}
			});
	};

	return (e)=>{
		var eventArgs = e.getEventArgs();
		console.log(`DataXml OnSave: ${Xrm.Page.data.entity.getDataXml()}`);
		console.log(`Save Mode: ${eventArgs.getSaveMode()}`);
		if(isSave) {
			console.log('proceed to save');
			Xrm.Page.data.entity.save = entityClone.save;
			Xrm.Page.ui.close = uiClone.close;
			return;
		}
		else{
			Xrm.Page.data.entity.save = saveHandler;
			Xrm.Page.ui.close = closeHandler;
			if(eventArgs.getSaveMode() !== 2){
				eventArgs.preventDefault();
			}
			saveHandler(e);
		}
	}
})());

Result

Console Log Save Blocked

I have tested this only in Chrome on Dynamics 365 Online v9. Hope this is useful.

Puppeteer and Dynamics 365

Puppeteer is a Node API to drive headless Chrome. I have used Selenium and DalekJS in the past to do some UI testing. I have been experimenting with and learning Puppeteer for the past few weeks and have found it relatively easy to learn and use. It is still in alpha though, so there are some bugs.

In my sample repo (https://github.com/rajyraman/Puppeteer-Dynamics-365), I demonstrate:

  1. How to use puppeteer to login to ADFS OnPrem CRM
  2. How to use puppeteer to take full page screenshot
  3. Annotate the screenshot using imagemagick

I envision this repo providing documentation assistance by capturing and annotating screenshots. Below are the steps to run this project:

  1. After cloning the GitHub repo, run the following command to download the npm packages: yarn
  2. Install imagemagick from https://www.imagemagick.org/script/download.php#windows
  3. Confirm that the path to magick.exe exists in PATH

     Imagemagick path.png

  4. Create a new .env file in the root of the repo. Below are the .env files that I used for OnPrem and Online:

     env onprem

     env online.png

  5. Change USER_SELECTOR, PASSWORD_SELECTOR and LOGIN_SUBMIT_SELECTOR if they are different. These were the ids in the OnPrem ADFS login page.
  6. Check the runsheet.csv file provided in the repo and change it to suit your screenshot requirements. The run sheet specifies the sequence of clicks. In this file, on line 2, I am specifying that I should first click the Workspace group and then the Clients subgroup; the screenshot should be annotated with the text “Clients list”. On line 3, I am specifying that the “NEW” button should be clicked, the screenshot should be annotated as “New client form” and the file name should be “New Client Form.png”. The command bar clicks are always specified on a new line with a blank group and subgroup.

     run sheet.png

  7. Run the node application using “node index.js”

     Run application.png

 

The screenshots will be captured with headless Chrome and annotated using imagemagick. Here is a sample screenshot:

Administration-Annotated.png

Possible future improvements:

  1. Build the exe using pkg and distribute the exe, .env and runsheet.csv. Building the exe using pkg requires a copy of the puppeteer folder from node_modules alongside the exe
  2. Navigate to a record based on id
  3. Run workflow/dialogs
  4. Populate new entity form with data before command bar button click
  5. Automatically scroll if group is outside of viewport

Please submit your feedback/ideas/criticism in the comments area or as an issue in the repo.

Bug: Appointment organizer not being set

This bug was one of the toughest for me to identify and work around because:

  • It happens only one time in each session
  • A quick glance at the form during create does not provide any clues about the bug

The bug is this: when a new appointment is saved, the organizer field is not set, even though the field appears to be populated during form load and save.

Here is how the appointment form looks when you create a new appointment:

Organizer

The organizer field appears to be populated, but it is not. This is the dataxml:

<appointment>
	<instancetypecode>0</instancetypecode>
	<prioritycode>1</prioritycode>
	<scheduledstart>2017-07-07T11:30:00</scheduledstart>
	<ownerid type="8" name="Natraj Yegnaraman">{40327227-A570-E611-80FA-005056A6A11C}</ownerid>
	<scheduleddurationminutes>30</scheduleddurationminutes>
	<isalldayevent>0</isalldayevent>
	<scheduledend>2017-07-07T12:00:00</scheduledend>
	<statuscode>5</statuscode>
	<transactioncurrencyid name="Australian Dollar" type="9105">{F0B901CE-7692-E211-82F1-0050569B4EC3}</transactioncurrencyid>
	<isbilled name="">false</isbilled>
	<ismapiprivate name="">false</ismapiprivate>
	<attachmenterrors name="">0</attachmenterrors>
	<isworkflowcreated name="">false</isworkflowcreated>
</appointment>

Compare this with the dataxml when the organizer field is really populated:

<appointment>
	<instancetypecode>0</instancetypecode>
	<organizer>
		<activityparty>
			<partyid type="8" name="Natraj Yegnaraman">{40327227-A570-E611-80FA-005056A6A11C}</partyid>
		</activityparty>
	</organizer>
	<prioritycode>1</prioritycode>
	<scheduledstart>2017-07-07T11:30:00</scheduledstart>
	<ownerid type="8" name="Natraj Yegnaraman">{40327227-A570-E611-80FA-005056A6A11C}</ownerid>
	<scheduleddurationminutes>30</scheduleddurationminutes>
	<isalldayevent>0</isalldayevent>
	<scheduledend>2017-07-07T12:00:00</scheduledend>
	<statuscode>5</statuscode>
	<transactioncurrencyid name="Australian Dollar" type="9105">{F0B901CE-7692-E211-82F1-0050569B4EC3}</transactioncurrencyid>
	<isbilled name="">false</isbilled>
	<ismapiprivate name="">false</ismapiprivate>
	<attachmenterrors name="">0</attachmenterrors>
	<isworkflowcreated name="">false</isworkflowcreated>
</appointment>

Also, notice that the Organizer field does not have a person icon next to the name, like the Owner field.

This is the fix I implemented to work around the issue:

	var organizer = Xrm.Page.getAttribute('organizer').getValue();
	if(Xrm.Page.ui.getFormType() === 1 && !organizer){
		Xrm.Page.getAttribute('organizer').setValue(Xrm.Page.getAttribute('ownerid').getValue());
	}

The script basically runs on form save, provided it is a new record, and copies the Owner lookup value to the Organizer lookup if the Organizer lookup does not contain any value.

The reason the Organizer field is important is that, without it being set, the appointment won’t show up in the appointment creator’s Outlook (even though they created it and they are also the owner).

This issue happens in 8.1.0 and has possibly been fixed in 8.2.0.