Designing Accountable Integration

Reviewing various Dynamics 365 integration mechanisms for over a decade now (yes, Dynamics CRM back then) has taught me some valuable lessons:

  • Integration mechanisms break down silently the second you look away. Usually metadata or data has changed without notification, an endpoint has moved, or some port has been closed due to security considerations.
  • Sooner or later after an integration mechanism breaks, someone will claim the problem is on your side of the integration.
  • An integration mechanism’s performance may degrade over time. Sometimes the data volume you started with on day one has grown rapidly, so queries and business logic operations now consume more time and resources. Other times, an external endpoint you are using is now responding slowly.
  • The application user used for integration on the Dynamics 365 side almost always has more privileges than it actually requires, often being granted the System Administrator role.

In this post I would like to share some design tips that can help you build an accountable integration. For me, an accountable integration implements a few key concepts I’ll explain next.
While these design concepts may sound trivial, it is surprising how rarely they are actually implemented.

  1. Failing gracefully
    When an integration mechanism fails, the system administrator must be able to get the full failure details, but end users and the integrating party should not be exposed to them, as the details usually mean nothing to them and they can’t do anything about them anyway. For them, it is enough to know a failure occurred, with the minimum required details. Another concern here is security: unhandled exception details can expose sensitive information to hackers and the like.

    That means that no exception should go unhandled. Hermetically seal your code with try/catch blocks, and if you do let an exception bubble up, make sure it does not expose any sensitive details (a sketch combining the first three concepts appears after this list).
  2. Diagnostic enabled
    The first element I look for in a failure scenario is diagnostics, as it should point to the problem’s cause and sometimes even allow solving it without recompiling code.
    In many implementations, diagnostic elements are missing or turned off. In others, critical information is missing.
    Any diagnostics element should at least be able to confirm that an integration mechanism actually executed, even if it did not operate on any data or perform any action. Additionally, did it succeed or fail, and if it failed, what is the failure message? What is the time stamp for each execution, and what were the incoming and outgoing parameters?

    If you are using Custom Workflow Activities or Plug-ins, that means using the Trace service. Azure elements like Functions and Logic Apps also allow tracing.
    Make your own convention, but make sure you are tracing the following key events:

    1. Execution start/end along with time stamps. This will give you an indication of any execution, even if it did nothing.
    2. Incoming and outgoing parameters. This will allow you to diagnose problems faster as unexpected parameter values are the culprit most of the time.
    3. Any outgoing calls to external APIs. Trace before and after any such call, as well as the outgoing message. This will allow you to measure the time each call actually took and capture the outgoing request and response.
    4. Any exception.
  3. Proactive
    Many implementations have logging/tracing enabled, but no one even knows how to access or analyze it. Sometimes a failure is detected days or weeks after it actually occurred.
    When a failure occurs, the system administrator should know about it ASAP. I don’t expect system administrators to spend time scanning traces and logs searching for anomalies.
    An integration mechanism should be able to send an immediate notification of failure in the form of an email, SMS or push notification.

    If you are using Custom Workflow Activities, you can leverage the hosting process’s built-in emailing capabilities. Set one of the outgoing parameters to indicate success/failure, so you can send an email to the system administrator every time a failure occurs.
    Azure elements can leverage the Application Insights Alerts capability to send an SMS, email or push notification on failure.
  4. Least Exposed
    What’s wrong with granting a designated application user (used only for integration scenarios) the System Administrator Role? Everything.
    If this omnipotent user’s credentials leak (and they sometimes do), they expose all of your data and allow an attacker to do maximum damage.

    When setting up an application user to perform integration operations within Dynamics 365, make sure it has the least privileges possible for the required operations. It is best to create a custom Security Role.
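
To make the first three concepts concrete, here is a minimal Custom Workflow Activity sketch, assuming the standard Dynamics 365 SDK assemblies. The names, arguments and messages are illustrative, not a prescribed implementation:

    using System;
    using System.Activities;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Workflow;

    // Illustrative skeleton: trace start/end and parameters, let no exception
    // go unhandled, and expose success/failure outputs the calling Workflow
    // can react to (e.g. by emailing the system administrator).
    public class AccountableActivity : CodeActivity
    {
        [Input("Request Payload")]
        public InArgument<string> RequestPayload { get; set; }

        [Output("Is Success")]
        public OutArgument<bool> IsSuccess { get; set; }

        [Output("Failure Message")]
        public OutArgument<string> FailureMessage { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            ITracingService tracing = context.GetExtension<ITracingService>();

            // trace execution start along with the incoming parameters
            tracing.Trace("Execute start: {0:u}", DateTime.UtcNow);
            tracing.Trace("Incoming payload: {0}", RequestPayload.Get(context));

            try
            {
                // business logic / external API calls go here,
                // traced before and after so call duration can be measured

                IsSuccess.Set(context, true);
            }
            catch (Exception ex)
            {
                // full details go to the trace, for the system administrator only
                tracing.Trace("Exception: {0}", ex.ToString());

                // callers get a minimal, sanitized indication
                IsSuccess.Set(context, false);
                FailureMessage.Set(context, "Integration failed. Contact your system administrator.");
            }
            finally
            {
                // trace execution end, so even a no-op run leaves evidence
                tracing.Trace("Execute end: {0:u}", DateTime.UtcNow);
            }
        }
    }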

In the past, I posted about Custom Code Tracing & Exception Logging in Dynamics. That solution embeds most of the accountable integration concepts discussed here and may be a good starting point.

Implementing No Code Dynamics 365 Service Bus Listener using Logic Apps

My last post demonstrated how to implement a Dynamics 365 Service Bus listener using Azure Function.
This implementation type requires writing code, and if you need absolute flexibility, Azure Function is a great solution. Additionally, Azure Function Apps support pay-as-you-go pricing with free grants, which should be considered when planning your architecture costs.

So why would you want to replace Azure Function with Logic Apps?
Mainly because in many cases, no code is required to implement patterns like Inbound Asynchronous Integration. This enables non-developers to implement complex integration scenarios in a very short time.
How is this achieved? By leveraging Logic Apps’ built-in connectors to ASB, Dynamics 365 and numerous other applications. You can even expose custom connectors to your organizational APIs.
When working with Logic Apps, you should note that triggers (except the HTTP WebHook trigger) use polling to detect relevant business events. While you can manage the trigger polling interval, you are billed for each polling action (trigger invocation) even if there is no relevant business event to operate on.

The built-in Dynamics 365 connector allows simple CRUD operations, but applying the Command pattern enables a more advanced approach, where the Logic App creates a Command (custom entity) record and a Dynamics 365 asynchronous workflow operates on the request.

In this post I’ll walk through the process of setting up a Logic Apps process as an ASB one-way listener which posts incoming messages into Dynamics 365.
To demonstrate posting a message to ASB I’ll use Postman.

Business Scenario:

Whenever a new Account record is created in Contoso’s ERP application, reflect the new Account data into Dynamics 365 CE.

Prerequisites:

  1. Azure subscription allowing you to provision Azure Service Bus and Azure Logic Apps
  2. Microsoft Dynamics 365 9.x instance with admin privileges, a trial will do

Walkthrough:

  1. Set up an Azure Service Bus queue
  2. Set up a Logic App process as an ASB listener
    In the Azure portal, click ‘Create a resource’, type ‘Logic App’ and then click ‘Create’.
    Type in the Logic App details and click ‘Create’.
    Once created, navigate to the new resource.
    Select the ‘When a message is received in a Service Bus queue’ trigger.
    Click +
    Enter a meaningful connection name and click to select the existing ASB queue. Then click ‘Create’ and ‘Continue’.
    Change the polling interval if you want and click ‘+ New step’
    Type ‘Data operations’ and select it from the menu. Then select the ‘Parse JSON’ action.
    Select the Content field and feed in the following expression (the ASB message body arrives base64-encoded in the ContentData property):
    decodeBase64(triggerBody()?['ContentData'])
    Click ‘Use sample payload to generate schema’ and then paste in the following JSON sample payload.
    {"name": "The Krusty Krab", "websiteurl": "http://thekrustykrab.com", "numberofemployees": 3}
    Click ‘Save’ on the command bar.

    Click ‘New step’ on your process, then type in ‘Dynamics 365’. Select Dynamics 365 and then the ‘Create Dynamics 365 record’ action.
    Click ‘Sign in’ and sign in to your Dynamics 365 organization.
    Select your organization name, the target entity and then map each required field to the matching detail in the JSON payload.
    To map the numberofemployees integer data, put in the following expression:
    int(body('Parse_JSON')?['numberofemployees'])
    You should now have the following process defined:
  3. Test by clicking ‘Run’, which will make your process start polling ASB.
    Next, post a message with the required format to the ASB queue.
    Note that if you left the trigger interval at its default of 3 minutes, you may wait up to 3 minutes for the message to be processed.
    You can do this using Postman and this useful collection.

    Once Logic Apps picks up the new message, you’ll see the process run.
    Finally, check the new Account record created in your Dynamics 365 organization

    If you just set this listener up for learning purposes and don’t actually need it running, make sure you disable your Logic App process so it will not burden your Azure budget for nothing.


Implementing Dynamics 365 Service Bus Listener using Azure Function

One of the built-in triggers for Azure Function App is the Service Bus (ASB) queue trigger, which makes Azure Function a listener for ASB events.
While Azure Service Bus provides a robust decoupling mechanism, the receiving side has to poll for messages or implement a listener.

This trigger makes Azure Functions an ideal implementation component for Inbound Asynchronous Integration with Microsoft Dynamics 365,
one that allows you to write custom code but releases you from explicitly writing a listener.
The Inbound Asynchronous Integration pattern describes scenarios where a business event occurs in an external application and must be reflected into Dynamics 365.
Processing of the event in Dynamics 365 does not occur immediately, and sometimes no response is required from the Dynamics 365 side (one way).

In this post I’ll walk through the process of setting up an Azure Function as an ASB one-way listener which posts incoming messages into Dynamics 365.
To demonstrate posting a message to ASB I’ll use Postman.

Business Scenario:

Whenever a new Account record is created in Contoso’s ERP application, reflect the new Account data into Dynamics 365 CE.

Prerequisites:

  1. Azure subscription allowing you to register apps, provision Azure Service Bus and Azure Function App
  2. Microsoft Dynamics 365 9.x instance with admin privileges, a trial will do

Walkthrough:

  1. Set up an Azure Service Bus queue and make a note of the connection string
  2. Register an AAD App and an application user to allow using the Microsoft Dynamics 365 Web API. Note the Application Id and secret
  3. Set up an Azure Function App with a Service Bus queue trigger
     
    1. In Azure Portal, click ‘Create a resource’
    2. Select Function App
    3. Search for Function App (classic) and create a new one

    4. Set Function details and click ‘Create’
    5. Select the new Function and click Function app settings
    6. Select 1 as Runtime version
    7. Select Functions and click ‘New function’
    8. Select Azure Service Bus Queue trigger with C#
    9. Set the trigger’s details. Under ‘Service Bus connection’, click ‘new’ and select the details for the ASB queue you set up in stage 1

    10. Once back in the Function code, replace the existing code with the code found here.
      This code parses the incoming message, authenticates to Dynamics 365 and creates a new Account record.
      Replace the ApplicationId, Secret and WebAPIURL to match the details you noted on step 2 and your organization’s Web API URL.
      Save the Function and make sure it compiles. A sketch of what such a function might look like appears after this walkthrough.

  4. Test by posting a message with the required format to the ASB queue.
    You can post messages to ASB using Postman and this useful collection.
    Make sure the Azure Function processes the message
    and that the record was created in Dynamics 365
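
For orientation, here is a rough sketch of what such a function might look like in a v1 (classic) Function App. This is not the linked sample: the authority, resource and placeholder values are assumptions to be replaced with the details you noted, and the ADAL package (Microsoft.IdentityModel.Clients.ActiveDirectory) must be referenced via project.json:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.IdentityModel.Clients.ActiveDirectory;

    // illustrative placeholders - replace with the details noted on step 2
    const string ApplicationId = "<application id>";
    const string Secret = "<client secret>";
    const string WebAPIURL = "https://yourorg.api.crm.dynamics.com/api/data/v9.1/";
    const string Authority = "https://login.microsoftonline.com/<tenant id>";
    const string Resource = "https://yourorg.crm.dynamics.com";

    public static async Task Run(string myQueueItem, TraceWriter log)
    {
        log.Info($"Incoming message: {myQueueItem}");

        // authenticate to Dynamics 365 with the AAD app registered on step 2
        var authContext = new AuthenticationContext(Authority);
        AuthenticationResult authResult = await authContext.AcquireTokenAsync(
            Resource, new ClientCredential(ApplicationId, Secret));

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", authResult.AccessToken);

            // the queue message is expected to carry the Account JSON payload,
            // e.g. {"name": "The Krusty Krab", "numberofemployees": 3}
            var content = new StringContent(myQueueItem, Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PostAsync(WebAPIURL + "accounts", content);

            log.Info($"Create Account response: {response.StatusCode}");
            response.EnsureSuccessStatusCode();
        }
    }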

Post message to Azure Service Bus via Custom Workflow Activity

While Dynamics 365 Plug-ins allow declarative integration with Azure Service Bus via ServiceEndPoints, integrating through a Custom Workflow Activity requires some coding.

So why would you prefer a Custom Workflow Activity over a Plug-in?
Here are just a few good reasons:

  1. Cost-efficiency: a Custom Workflow Activity component can be embedded in Custom Actions, asynchronous/synchronous Workflows and Dialogs (while they’re still here), while a Plug-in component can only be called and operate via the Plug-in execution context. When you put time into writing code, you want to allow as much re-use as possible.
  2. Declarative: once registered, a Custom Workflow Activity can be leveraged by non-developers in various Processes, while a Plug-in is usually accessible only to developers.
  3. Ubiquity: a Custom Action wrapping a Custom Workflow Activity can be accessed from other processes, client-side code, server-side code and consumers external to Dynamics 365.
  4. Flexibility: unlike the built-in declarative integration, addressing Azure Service Bus via code allows you to add conditional logic and also reduce the message payload (see the component code).

Convinced or not, the following walkthrough demonstrates using the Post Message To Azure Service Bus Custom Workflow Activity component to post a message to an ASB queue.
You can view the source code here and modify it as required.

Prerequisites:

  1. Azure subscription allowing you to provision Azure Service Bus
  2. Microsoft Dynamics 365 9.x instance with admin privileges, a trial will do
  3. Access to Plugin Registration Tool

Walkthrough:

  1. Set up Azure Service Bus and make a note of the connection string
  2. Use the Plugin Registration Tool to register a ServiceEndPoint with the noted connection string
  3. Download, import and publish the unmanaged solution available here
  4. Configure the Workflow contained in the solution by selecting the registered ServiceEndPoint, specifying a semicolon-delimited properties list and an optional custom message. Activate the Workflow rule.


  5. Test by manually activating the Workflow on a Contact record.
    If all goes well, a new message will appear in the ASB queue. You can view the queue dashboard in the Azure Portal.

    If you want to view the actual message content, you can use Azure Service Bus Explorer


A few implementation notes:

  1. While the IWorkflowContext allows accessing the target record properties via Pre/Post entity images, this technique is not documented and is therefore considered unsupported.
    For this reason, the code explicitly retrieves and pushes the required attribute values into the SharedVariables collection.
    The propertiesSet input parameter allows sending in a list of required properties.
  2. Clearing the IWorkflowContext data-heavy collections before posting it to Azure Service Bus can reduce the message payload weight by ~80%, which can simplify handling the message and improve performance. A condensed sketch combining these notes appears below.
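
For reference, here is a condensed sketch of how such a component might look; the actual source is linked above, and the names here are illustrative:

    using System;
    using System.Activities;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;
    using Microsoft.Xrm.Sdk.Workflow;

    // condensed, illustrative version of the component described above
    public class PostMessageToAzureServiceBus : CodeActivity
    {
        [Input("Service EndPoint")]
        [ReferenceTarget("serviceendpoint")]
        public InArgument<EntityReference> ServiceEndPoint { get; set; }

        [Input("Properties Set")]
        public InArgument<string> PropertiesSet { get; set; }

        protected override void Execute(CodeActivityContext executionContext)
        {
            IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();
            ITracingService tracing = executionContext.GetExtension<ITracingService>();
            IOrganizationService service = executionContext
                .GetExtension<IOrganizationServiceFactory>()
                .CreateOrganizationService(context.UserId);
            IServiceEndpointNotificationService notificationService =
                executionContext.GetExtension<IServiceEndpointNotificationService>();

            // explicitly retrieve the requested properties (supported),
            // rather than reading the undocumented Pre/Post entity images
            string[] properties = (PropertiesSet.Get(executionContext) ?? string.Empty)
                .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
            Entity target = service.Retrieve(
                context.PrimaryEntityName, context.PrimaryEntityId, new ColumnSet(properties));

            // push the required attribute values into the SharedVariables collection
            foreach (string property in properties)
            {
                if (target.Contains(property))
                    context.SharedVariables[property] = target[property];
            }

            // clear the data-heavy collections to minimize the message payload
            context.InputParameters.Clear();
            context.OutputParameters.Clear();
            context.PreEntityImages.Clear();
            context.PostEntityImages.Clear();

            // post the minimized context to the registered ServiceEndPoint
            string response = notificationService.Execute(
                ServiceEndPoint.Get(executionContext), context);
            tracing.Trace("ASB response: {0}", response);
        }
    }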





Plug-in ServiceEndPoint Integration – Minimize IPluginExecutionContext payload

Integrating Microsoft Dynamics 365 with Azure Service Bus (ASB) can be easily implemented via the Plug-in Registration Tool without writing any code.
With this integration method, each time the target business event occurs, the complete operation IPluginExecutionContext payload is automatically sent to ASB, though often you need only a small portion of the data.

For some integration scenarios you need to apply conditional logic to decide if, and which, ASB queue to target. For these scenarios, you can write custom code which allows you to programmatically post messages to ASB from Plug-in/Custom Workflow Activity components.
This method also allows minimizing the IPluginExecutionContext payload, which can simplify the logic required to parse it and generally improve performance. Unfortunately, I could not find a way to reduce the payload to just the required data, only to minimize it.

The following screenshots (ASB Explorer) show two similar messages (Contact creation event) posted to ASB before (4656 bytes) and after (1842 bytes) minimizing the IPluginExecutionContext payload.
As you can see, the full payload weighs ~2.5 times the minimized payload.


To minimize the IPluginExecutionContext payload while maintaining the data you need, follow these simple steps in your code:

  1. Extract the required data from IPluginExecutionContext and add it to the SharedVariables collection
  2. Clear the data heavy IPluginExecutionContext collections
    • InputParameters
    • OutputParameters
    • PreEntityImages
    • PostEntityImages

The following sample code extracts the lastname attribute from the IPluginExecutionContext, clears the redundant collections and posts the minimized context to ASB:

using System;
using Microsoft.Xrm.Sdk;

// illustrative wrapper (class name and configuration handling are assumed);
// serviceEndpointId holds the Id of the ServiceEndPoint registered via the
// Plugin Registration Tool, passed here as the plug-in unsecure configuration
public class MinimizedContextPlugin : IPlugin
{
    private readonly Guid serviceEndpointId;

    public MinimizedContextPlugin(string unsecureConfig)
    {
        serviceEndpointId = new Guid(unsecureConfig);
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        // Retrieve the execution context.
        IPluginExecutionContext context =
            (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        IServiceEndpointNotificationService cloudService =
            (IServiceEndpointNotificationService)serviceProvider.GetService(typeof(IServiceEndpointNotificationService));

        // Extract the tracing service.
        ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        // extract the target record data from the InputParameters collection
        Entity contact = (Entity)context.InputParameters["Target"];
        // extract the required data
        string lastName = contact["lastname"].ToString();

        // add the required data to the SharedVariables collection
        context.SharedVariables.Add("lastname", lastName);

        // clear redundant data from the context
        context.PreEntityImages.Clear();
        context.PostEntityImages.Clear();
        context.InputParameters.Clear();
        context.OutputParameters.Clear();

        try
        {
            tracingService.Trace("Posting the execution context.");

            string response = cloudService.Execute(new EntityReference("serviceendpoint",
                serviceEndpointId),
                context);

            if (!String.IsNullOrEmpty(response))
            {
                tracingService.Trace("Response = {0}", response);
            }

            tracingService.Trace("Done.");
        }
        catch (Exception e)
        {
            tracingService.Trace("Exception: {0}", e.ToString());
            throw;
        }
    }
}

Dynamics 365 Power Tel Aviv – Azure Integration Patterns

I had the privilege of delivering a lecture about Dynamics 365 Azure Integration Patterns at the DynamicsPower! Tel Aviv event last week.

This is the first 365 Saturday event in Israel, gathering Power Platform MVPs and experts from around the world to share their insights with the community.

I would like to thank Victor Dantas & Liran Debbi for their efforts in organizing this awesome event.

Click here to download my presentation.

Dynamics 365 Power Israel team

The Strange Case of The Liquid Template fetchxml Tag

I have been asked by one of my customers to investigate a problem related to the Liquid Template fetchxml tag.
If you aren’t familiar with this powerful Liquid tag yet, it allows retrieving Dynamics 365 data using a standard FetchXML query. You can even leverage this tag to retrieve data asynchronously.
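
For example, a minimal usage might look like this (the query, entity and attribute names are illustrative):

    {% comment %} illustrative query - adapt entity/attribute names {% endcomment %}
    {% fetchxml contacts %}
    <fetch top="5">
      <entity name="contact">
        <attribute name="fullname" />
        <filter>
          <condition attribute="statecode" operator="eq" value="0" />
        </filter>
      </entity>
    </fetch>
    {% endfetchxml %}
    {% for item in contacts.results.entities %}
      {{ item.fullname }}<br />
    {% endfor %}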

Trying to reproduce the problem, I added a simple FetchXML query within a fetchxml tag in one of the Portal pages via the actual Web Page record, knowing that the front-side editor disrupts FetchXML query syntax.

Fetchxml query

Testing the query results in the target Portal page, I noticed that the query condition was ignored – the query returned all Contact records.

Returning to the Web Page and refreshing it, I noticed that the FetchXML query had mysteriously changed – the closing slash for each attribute element was replaced with a full closing </attribute> tag.

Changed query

It seems that before saving the Web Page form, the text editor diligently ‘corrects’ my query to comply with the XML hierarchy format. This behavior was reproduced with the Chrome/Edge browsers on both the Unified Interface and the old forms UI.
Strangely enough, the changed query syntax is considered valid by Dynamics 365, but any query condition is completely ignored.
I am guessing a recent feature update changed the text editor’s behavior, as this behavior does not exist in earlier Dynamics 365 versions.
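
To illustrate the change described above, a self-closing element saved as

    <attribute name="fullname" />

comes back from the editor as

    <attribute name="fullname"></attribute>

(the attribute name here is illustrative).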

After searching the Microsoft Portal documentation for the fetchxml Liquid Template tag and finding no results, I started considering the option that this is not a bug, but maybe some kind of feature phase-out. Naaah.

So until Microsoft fixes this glitch, how can you use conditional FetchXML queries and fetchxml Liquid tags? By using the XrmToolBox Portal Code Editor plug-in, which allows Portal content editing (among other features) without disrupting FetchXML queries.

XrmToolBox Portal Code Editor Plug-in