Sonoma Partners Microsoft CRM and Salesforce Blog

Changing Themes in Microsoft Dynamics CRM

Today's post was written by Neil Erickson, Development Principal at Sonoma Partners.

Themes in Microsoft Dynamics CRM are a great way to brand your company’s application, allowing you to choose a logo and color scheme that differ from what CRM provides when first installed. 

Themes were released in CRM Online 2015 Update 1 and in CRM 2016 for On-Premise environments.

After rolling this out at Sonoma Partners, we had one user report that upon logging into our CRM system, they continued to see the out-of-the-box experience and not our logo and colors.  

A quick search turned up the resolution: go into the affected user’s Personal Options and verify that high contrast settings are not enabled. It turns out that although the high contrast theme looked similar to the out-of-the-box theme, there were subtle differences.  For example, the background color of some clickable items was black instead of dark blue, and hovering over items behaved differently.


While this solved the issue, we were curious whether the theme used for high contrast settings could be customized as well.  We decided to compare the requests from this user’s browser to a request that loaded properly using Fiddler, one of our favorite tools from Telerik.  Here we could see that two different Theme IDs were being requested from the server.

The organization’s database also showed that the problematic theme was the one that came with CRM when the organization was first provisioned.  Unfortunately, none of its columns appeared to relate to high contrast settings.  Even swapping the Theme IDs yielded no change in the high contrast appearance. 

For this reason, we do not believe that the High Contrast theme can be changed.

Having trouble? Feel free to contact us.

Topics: Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016

Need to export more records to Excel? We’ve got you covered with the OrgDBOrgSettings Editor!

With Dynamics, the default maximum record count for exporting to Excel is 10,000.  While this may work for smaller businesses without much data, it won’t work for most organizations.  An instance of this came up recently when a client of ours kept hitting the 10,000 record limit even though they had many more records to export.

In the past, if the customer was CRM On-Premise, you could access this setting (along with the other OrgDBOrgSettings) using direct SQL.  Updating these values with SQL definitely wasn’t supported, but at least you could discuss updating the settings if you had individuals who knew what they were doing, or you could create a support ticket with Microsoft to help you out.

However, if you had CRM Online, these settings weren’t available to you through the UI or even through SQL since with Online, you don’t have direct SQL access to your database.  What can you do?

That’s where the OrgDBOrgSettings editor comes into play.  Getting it installed and running is simple: download the managed solution from this link, import it as a normal solution into your environment, and then open the solution.

From the configuration page of the solution, you’ll see the different settings that you have access to, what the default value is, what the current value is, and what the maximum value is (there are some limitations – you cannot update the MaxRecordsForExportToExcel to 500,000,000).


To edit a value, either double-click a row or click the Edit link in the row for that setting.  When you do so, you have the option to set a custom value or revert back to the default.  A checkbox at the bottom of the configuration page controls whether a confirmation prompt is displayed when you make an update.



If you try to set a value over the maximum, you’ll get a message stating the requested change wasn’t saved, and the value will remain as it currently is.


This is a great utility for making supported updates to the OrgDBOrgSettings without having to reach out to Microsoft Support.  For a full list of all the settings that can be updated and a description of what each setting drives, navigate to this link.  Also, for more explanation of how to use the tool and what it can be used for, see this post from Sean McNellis, who created the solution.  While this solution has been available for some time now, we’re hoping this is a great refresher to let you know what tools are available for free to help you make changes on your own.

Topics: CRM Best Practices Microsoft Dynamics CRM Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Turbo Forms: The Time is Nigh

Today's post is written by Mike Dearing, Development Principal at Sonoma Partners.

Turbo Forms was introduced with CRM 2015 Online Update 1 and CRM 2016 on-prem to significantly decrease form load times.

And with the recent announcement that the legacy form rendering option will be deprecated, our clients that had yet to take the plunge have begun to express concern.  While it is true that for more heavily customized environments the switch to Turbo Forms can be quite intimidating, hopefully some of the issues that I recently faced will help you on your journey.

Accessing a Form’s Xrm Context From Within an Iframe

Prior to Turbo Forms, web resources could reliably access their host form's Xrm context by calling parent.Xrm.  With Turbo Forms enabled, we have found that setting the parent’s Xrm context onto an exposed variable within the resource works well as an alternative.  The parent form can, in a supported way, retrieve the iframe control and attach an onload event to it.  The onload event handler can then pass the parent’s Xrm context into the iframe, as per the following:
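A minimal sketch of that pattern might look like the following, where the web resource control name "WebResource_myframe" and the resourceXrm variable are assumptions:

```javascript
// Sketch only: "WebResource_myframe" and resourceXrm are assumed names.

// Runs from the parent form's onload. When the iframe finishes loading,
// hand the form's Xrm context to a variable the web resource can read
// instead of relying on parent.Xrm.
function wireUpIframe(formXrm, iframeControl) {
    var frameElement = iframeControl.getObject();
    frameElement.addEventListener("load", function () {
        frameElement.contentWindow.resourceXrm = formXrm;
    });
}

// Form onload registration, roughly:
// wireUpIframe(Xrm, Xrm.Page.ui.controls.get("WebResource_myframe"));
```

Inside the web resource, code would then reference window.resourceXrm rather than parent.Xrm.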

Asynchronous Script Loading

It has always been best practice to ensure that scripts dependent on one another are fully loaded before invoking each other.  The new script engine enforces this as well.  The easiest way to safeguard from falling victim to race conditions is to refrain from calling into separate scripts immediately upon load of a script.  Instead, defer these calls to an on load event handler through the form's properties.  The script engine will load all scripts before calling your load handlers, ensuring that you won’t end up calling a script that hasn’t fully loaded yet.
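As a quick sketch of the difference (sharedUtils and the function names here are hypothetical):

```javascript
// Hypothetical example: sharedUtils lives in a separate web resource.

// Risky: this line would run as soon as this file loads, and under the new
// script engine sharedUtils.js may not have finished loading yet.
// sharedUtils.formatPhoneNumbers(Xrm.Page);

// Safe: register initForm as the form's OnLoad handler in the form properties.
// The script engine loads every web resource before calling OnLoad handlers.
function initForm(executionContext) {
    sharedUtils.formatPhoneNumbers(Xrm.Page);
}
```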

GetFormType() Returning the Wrong Form Type

One bug that we noticed during our upgrade is that the Turbo Forms implementation of the Xrm.Page.ui.getFormType() method doesn't return the proper type for a disabled form, despite showing the correct form type in the footer.  Instead, it returns a form type of 'update'.  This bug is still present as of 2016 SP1/Update 1.  Our workaround for the time being is to make an organization service call to get the access rights of the current user and determine whether the form is disabled.  This is unfortunately much less efficient because of the overhead of the service call, but it is enough of an edge case that hopefully it won't affect too many implementations. Microsoft support confirmed that a fix will be in place for CRM 2016 Update 2, with a projected release date of 2017 Q2.


Thinking of making the switch soon?  You don’t have to upgrade alone… we’re here to help!

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016

Microsoft Text Analysis and CRM–Tracking Client Sentiment

Microsoft has released a set of intelligence APIs known as Cognitive Services which cover a wide range of categories such as vision, speech, language, knowledge and search.  The APIs can analyze images, detect emotions, spell check and even translate text, recommend products and more.  In this post I will cover how the Text Analysis API can be used to determine the sentiment of your client based on the emails they send.

The idea is that any time a client (Contact in this case) sends an email that is tracked in CRM, then we will pass it to the Text Analysis API to see if the sentiment is positive or negative.  In order to do this, we will want to register a plugin on create of email.  We will make the plugin asynchronous since we’re using a third party API and do not want to slow down the performance of the email creation if the API is executing slowly.  We will also make the plugin generic and utilize the secure or unsecure configuration when registering the plugin to pass in the API Key as well as the schema name of the sentiment field that will be used.

Below is the constructor of the plugin, which takes in either a secure or unsecure configuration in the expected format of “<API_KEY>|<SENTIMENT_FIELD>”.
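The plugin itself is written in C#; purely to illustrate the expected configuration format, here is the parsing step sketched in JavaScript (the function name is hypothetical):

```javascript
// Illustration only (the real plugin is C#): splits the registration
// configuration string "<API_KEY>|<SENTIMENT_FIELD>" into its two parts.
function parsePluginConfig(config) {
    var parts = (config || "").split("|");
    if (parts.length !== 2 || !parts[0] || !parts[1]) {
        throw new Error("Expected configuration in the form <API_KEY>|<SENTIMENT_FIELD>");
    }
    return { apiKey: parts[0], sentimentField: parts[1] };
}
```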

Next is the Execute method of the plugin, which retrieves the description from the email record and passes it to the AnalyzeText method.  The AnalyzeText method returns the sentiment value, which we then use to populate the sentiment field on the email record.

Then we have the AnalyzeText method which will pass the email body to the Text Analysis API which then returns the sentiment value.
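The C# itself isn't reproduced here, but the request and response shapes for the sentiment endpoint can be sketched as follows (the v2.0 URL, region, and JSON field names reflect the Text Analytics API as documented at the time and should be treated as assumptions):

```javascript
// Sketch of the Text Analytics sentiment request and response shapes.
// Endpoint URL, region, and field names are assumptions.
function buildSentimentRequest(emailBody) {
    return {
        method: "POST",
        url: "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment",
        headers: {
            "Ocp-Apim-Subscription-Key": "<API_KEY>",
            "Content-Type": "application/json"
        },
        body: JSON.stringify({
            documents: [{ id: "1", language: "en", text: emailBody }]
        })
    };
}

// The service replies with a score from 0 (negative) to 1 (positive).
function extractSentimentScore(responseJson) {
    return JSON.parse(responseJson).documents[0].score;
}
```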

And finally the classes used as input and output parameters for the Text Analysis API.

Now register the plugin on post-Create of Email with the Text Analysis API key and the schema name of the sentiment field in either the secure or unsecure configuration.


Now when an email is created in CRM, once the asynchronous job is complete, the email record will have a Sentiment value set from a range of 0 (negative) to 1 (positive).


The sentiment field on the email record can then be used in a rollup field on the contact that averages the sentiment values of all the email records where the contact is the sender, letting you track the sentiment of your clients based on the emails they send you.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Building CRM Web Resources with React

Web Resource Development

Microsoft Dynamics CRM has allowed us to develop and host custom user interfaces as Web Resources since CRM 2011.  Since then, the web has exploded with JavaScript frameworks.  In addition, browsers have started to converge on standards both in JavaScript object support and CSS.  In short, it’s a great time to be building custom user interfaces on top of Microsoft Dynamics CRM.

Today we’ll be working with React, an emerging favorite in the JavaScript world.  React’s key benefits are its fast rendering time and its support of JSX.  React is able to render changes to the HTML DOM quickly, because all rendering is first done to JavaScript objects, which are then compared to the previously generated HTML DOM for changes.  Then, only those changes are applied to the DOM.  While this may sound like a lot of extra work, typically changes to the DOM are the most costly when it comes to performance.  JSX is a syntax that combines JavaScript and an XML-like language and allows you to develop complex user interfaces succinctly.  JSX is not required to use React, but most people typically use it when building React applications.

The Sample Application

To demonstrate these benefits, we’ll build a simple dashboard component that displays a list of the top 10 most recently created cases.  We’ll have the web resource querying for new cases every 10 seconds and immediately updating the UI when one is found.

CaseSummary

The files that I will be creating will have the following structure locally:

CaseSummary/ 
├── index.html 
├── styles.css 
├── app.jsx 
└── components/ 
    ├── CaseSummary.jsx     
    ├── CaseList.jsx 
    └── Case.jsx

However, when we publish them as web resources in CRM, they will be simplified to the following:

demo_/
└── CaseSummary/ 
    ├── index.html 
    ├── styles.css 
    └── app.js

Other than including the publisher prefix folder, the main change is that all of the JSX files have been combined into a single JavaScript file.  We’ll step through how to do this using some command line tools.  There are a few good reasons to “compile” our JSX prior to pushing to CRM:

  1. Performance – We can minify the JavaScript and bundle several frameworks together, making it more efficient for the browser to load the page.
  2. More Performance – JSX is not a concept that browsers understand by default.  By converting it to plain JavaScript at compile time, we can avoid paying the cost of conversion every time the page is loaded.
  3. Browser Compatibility – We can write our code using all of the features available in the latest version of JavaScript and use the compiler to fill in the gaps for any older browsers that might not support these language features yet.
  4. Maintainability – Separating our app into smaller components makes the code easier to manage.  As you build more advanced custom UI, the component list will grow past what I am showing here.  Because multiple files are merged together, no matter how many JSX files we add to the project we just need to push the single app.js file to the CRM server when we are ready.
  5. Module Support – Many JavaScript components and libraries are distributed today as modules.  By compiling ahead of time we can reference modules by name and still just deploy them via our single app.js file.

Exploring the Source Code

The full source code for the example can be found at https://github.com/sonomapartners/web-resources-with-react, but we will explore the key files here to add some context.

index.html

This file is fairly simple.  It includes a reference to CRM’s ClientGlobalContext, the compiled app.js and our style sheet.  The body consists solely of a div to contain the generated UI.

app.jsx

Now things start to get more interesting.  We start by importing a few modules.  babel-polyfill will fill in some browser gaps.  In our case it defines the Promise object for browsers that don’t have a native version (Internet Explorer).  The last three imports will add React and our top level CaseSummary component.  Finally we register an onload event handler to render our UI into the container div.

components/CaseSummary.jsx

CaseSummary is our top level component and also takes care of our call to the CRM Web API.  This is our first look at creating a component in React, so let’s take a look at each function.  React.createClass will take the object passed in and wrap it in a class definition.  Of the five functions shown here, four are predefined by React as component lifecycle methods: getInitialState, componentDidMount, componentWillUnmount and render.  getInitialState is called when an instance of the component is created and should return an object representing the starting point of this.state for the component.  componentDidMount and componentWillUnmount are called when the instance is bound to and unbound from the DOM elements respectively.  We use the mounting methods to set and clear a timer, which calls the loadCases helper method.  Finally, render is called each time the state changes and a potential DOM change is needed.  We also have an additional method, loadCases, where we use the fetch API to make a REST call.  The call to this.setState will trigger a render whenever cases are loaded.  We definitely could have made this component smarter by only pulling case changes, but this version demonstrates the power of React by having almost no impact on performance even though it loads the 10 most recent cases every 10 seconds.
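The repo has the full component, but purely as an illustration of the Web API query a loadCases-style helper makes, the URL can be built like this (the v8.0 endpoint and the selected fields are assumptions):

```javascript
// Sketch: builds the Web API query for the 10 newest cases.
// The v8.0 endpoint and selected fields are assumptions.
function buildTopCasesUrl(clientUrl) {
    return clientUrl + "/api/data/v8.0/incidents" +
        "?$select=title,ticketnumber,createdon" +
        "&$orderby=createdon desc&$top=10";
}

// Usage inside loadCases, roughly:
// fetch(buildTopCasesUrl(Xrm.Page.context.getClientUrl()), {
//     headers: { "Accept": "application/json" },
//     credentials: "same-origin"
// }).then(function (response) { return response.json(); });
```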

components/CaseList.jsx

By comparison, CaseList.jsx is pretty straightforward.  There are two interesting parts worth pointing out.  The use of this.props.cases is possible because CaseSummary.jsx set a property on the CaseList like this: <CaseList cases={this.state.cases} />.  Also, it is important to notice the use of the key attribute on each Case.  Whenever you generate a collection of child elements, each one should get a value for the key attribute that React can use when comparing the Virtual DOM to the actual DOM.
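As a sketch of that keyed-children pattern (not the repo's exact code; createElement is injected here so the snippet isn't tied to a React build):

```javascript
// Sketch: each child in a generated collection gets a stable key so React
// can match elements up when diffing the virtual DOM against the real DOM.
// createElement is passed in, standing in for React.createElement.
function renderCaseRows(cases, createElement) {
    return cases.map(function (c) {
        return createElement("li", { key: c.incidentid }, c.title);
    });
}
```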

components/Case.jsx

The simplest of the components, Case.jsx outputs some properties of the case with some simple HTML structure.

Compiling the Code

We’re going to start with using NodeJS to install both development tools and runtime components that we need.  It is important to note that we’re using NodeJS as a development tool, but it isn’t being used after the code is deployed to CRM.  We’ll start by creating a package.json file in the same folder that holds our index.html file.

package.json

After installing NodeJS, you can open a command prompt and run “npm install” from the folder with package.json in it.  This will download the packages specified in package.json to a local node_modules folder.  At a high level, here is what the various packages do:

  • webpack, babel-*, imports-loader, and exports-loader: our “compiler” that will process the various project files and produce the app.js file.
  • webpack-merge and webpack-validator: used to help manipulate and validate the webpack.config.js (we will discuss this file next).
  • webpack-dev-server: a lightweight HTTP server that can detect changes to the source files and compile on the fly.  Very useful during development.
  • react and react-dom: The packages for React.
  • babel-polyfill and whatwg-fetch: They are bringing older browsers up to speed.  In our case we are using them for the Fetch API (no relation to Fetch XML) and the Promise object.

The scripts defined in the package.json are runnable by typing npm run build or npm run start from the command prompt.  The former will produce our app.js file and the latter will start up the previously mentioned webpack-dev-server.  Prior to running either of them, though, we need to finish configuring webpack. This requires one last config file to be placed in the same folder as package.json. It is named webpack.config.js.

webpack.config.js

As the file name implies, webpack.config.js is the configuration file for webpack.  Ultimately it should export a configuration object which can define multiple entries.  In our case we have a single entry that monitors app.jsx (and its dependent files) and outputs app.js.  We use webpack.ProvidePlugin to inject whatwg-fetch for browsers that lack their own fetch implementation.  We also specify that webpack should use babel-loader for any .jsx or .js files it encounters and needs to load.  The webpack-merge module allows us to conditionally modify the configuration.  In our case we are setting the NODE_ENV environment variable to “production” for a full build and turning on JavaScript minification.  Finally we use webpack-validator to make sure that the resulting configuration is valid.
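An abridged sketch of such a configuration is below. It targets the webpack 1.x era this post describes; the exact whatwg-fetch loader string and other details should be treated as assumptions, not the repo's actual file:

```javascript
// Abridged, illustrative webpack 1.x-style configuration (details assumed).
var webpack = require("webpack");

module.exports = {
    entry: "./app.jsx",
    output: { filename: "app.js" },
    module: {
        loaders: [
            // Run .js/.jsx through Babel so JSX and newer syntax compile down.
            { test: /\.jsx?$/, exclude: /node_modules/, loader: "babel-loader" }
        ]
    },
    plugins: [
        // Provide a fetch implementation (via whatwg-fetch) wherever code
        // references fetch and the browser lacks a native one.
        new webpack.ProvidePlugin({
            fetch: "imports-loader?this=>global!exports-loader?global.fetch!whatwg-fetch"
        })
    ]
};
```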

Deploying and Continuing Development

At this point all of the files should be set up.  To deploy the code, you would run npm run build and then deploy index.html, app.js, and styles.css as web resources to CRM. 

If it becomes tedious to keep deploying app.js to CRM as you make small changes, you can set up an AutoResponder rule in Fiddler to point at the webpack-dev-server.  Once this rule is in place, when the browser requests files like index.html and app.js from the right subfolder of the CRM server, Fiddler will intercept the request and provide the response from webpack-dev-server instead.  This way you can just save your local JSX files and hit refresh in the browser as you develop.  Of course, you need to be sure that you have started webpack-dev-server by running npm run start from the command line.  I have included an example of the rule I set up for this demo below:

fiddlerAutoResponder

With that you should be set to start building your own CRM Web Resources using React!

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Analyzing Audit Logs using KingswaySoft

If you have ever looked into analyzing audit log records in Dynamics CRM, you know how hard it can be.  Using the API, there isn’t a good way to retrieve all the audit log records for a specific entity.  You can only retrieve either all the changes for a certain attribute or all the changes for a specific record.  If you’re on-premise and have access to the database, you can get to the audit detail records, but you will find that the data is very hard to parse through.

Thanks to the wonderful folks at KingswaySoft, with version 7.0, this is no longer the case.  With KingswaySoft v7.0, audit details can easily be retrieved for a specific entity and then can be dumped into a file or a database for further reporting or analysis.

In order to accomplish this, first you will need to make sure you have the SSIS Toolkit installed and then download KingswaySoft v7.0 here.  Then open up Visual Studio and create a new Integration Services project.


Next add a Data Flow Task and drill into it.


Then we will set up a Dynamics CRM Connection using the Connection Manager.  In the Connection Manager view, right-click and select “New Connection”.


Now select the DynamicsCRM connection and click Add


This will pop open the Dynamics CRM Connection Manager which will allow you to connect to your Dynamics CRM organization.


Now use the SSIS Toolbox view to drag the Dynamics CRM Source component onto the canvas.


Double-click the Dynamics CRM Source component to pop open the editor.  Select the Connection Manager that you created earlier and set AuditLogs as the Source Type.  In the FetchXML text editor, write a FetchXML query to pull back the records of the entity you want to retrieve audit details for.  In my example I’m retrieving 25 account records with my FetchXML query.
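For reference, a FetchXML query in that shape might look like the following (the selected attribute and ordering are illustrative):

```xml
<!-- Illustrative only: returns 25 account records -->
<fetch count="25">
  <entity name="account">
    <attribute name="name" />
    <order attribute="createdon" descending="true" />
  </entity>
</fetch>
```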


Select Columns on the left and pick the columns you would like to be a part of your report.  In my example I’m going to use action (Create, Update, Delete, etc), the objectid and objecttypecode (the record that was changed), and the userid and useridname (the user that triggered the change).


The Dynamics CRM Source component will have two outputs, one for the header audit record and one for the list of audit detail records.  In my example I want to join these two outputs into one dataset so I can display both sets of data in the same report.  In order to do this we will need to drag two Sort components onto the canvas and then connect each output into the separate Sort components.  The result should look something like this:

clip_image018

Now double-click the first Sort to open the editor.  Select the auditid as the sort attribute as it is the unique key to join the two datasets together and check the “Pass Through” box for all the other columns that you want to use in your report.


Now double-click the other Sort component and perform the same steps.


Next drag the Merge Join component onto the canvas, connect the two outputs from the two Sort components into the new Merge Join component and then double-click the Merge Join component to open the editor.  Select Inner join as the Join type and then select any columns you want in your report and map them in the bottom pane.


Now we need to drag a Derived Column component onto the canvas and connect the output from the Merge Join into the Derived Column component.  This component is needed because we’re going to output the data to a CSV file, so the oldvalue and newvalue columns need to be converted from DT_NTEXT to DT_TEXT.  Open the editor for the component and set the expression to convert oldvalue to DT_TEXT using the 1252 codepage, then repeat the same for newvalue.
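In SSIS expression syntax, those two derived column expressions look roughly like this (the cast operator and 1252 codepage come from the step above):

```
(DT_TEXT,1252)oldvalue
(DT_TEXT,1252)newvalue
```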


Lastly, use a Flat File Destination to output the audit records into a CSV file that can be opened in Excel.  The screenshot below is the columns I used for my output file. 

image

Now your Data Flow should look like the following:

image

Then you can run the SSIS package and you should get an output file that displays all the audit records for the first 25 retrieved accounts.  The output will show the name of the user that made the change, the field that was changed, the old value, the new value as well as if it was a Create or Update.


So there you have it!  Thanks to the wonderful KingswaySoft toolkit, it is now possible to extract audit logs into a readable output that can be analyzed as needed.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Lookups - Null vs Empty Array

The other day I discovered an interesting ‘gotcha’ when working with a lookup in JavaScript.  A business requirement called for some JavaScript to be registered when a lookup value changed, which would then execute certain logic based on whether the lookup had a value or not. 

This is pretty straightforward logic and could be handled easily with the following code:

var customerValue = Xrm.Page.getAttribute('parentcustomerid').getValue();
if (!customerValue) {
   // do some logic
}
else
{
   // do some other logic
}

As it turns out, this works for the most part, but there is one scenario where it falls short, and that is where the ‘gotcha’ comes in.  Consider the ways a lookup can end up without a value: 

  1. Record form loads and the lookup doesn’t have a value
  2. The lookup has a value and the user selects the lookup and hits the “Delete” key
  3. The lookup has a value and the user clicks the magnifying glass, then “Look Up More Records”, and then clicks “Remove Value” on the subsequent dialog

In the first two scenarios, the above JavaScript code works as expected because the customerValue variable will be null.  In the third scenario, the customerValue variable will be an empty array, which isn’t null, so the code does not work as expected.


Therefore we need to update the block of code with the following:

var customerValue = Xrm.Page.getAttribute('parentcustomerid').getValue();
if (!customerValue || customerValue.length === 0) {
   // do some logic
}
else
{
   // do some other logic
}

Now the code is flexible and will handle all 3 scenarios where the lookup value doesn’t exist.

Note:  This was tested in CRM 2015 Update 1 and CRM 2016

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Firing events in JavaScript when status changes in Dynamics CRM 2013/ 2015/ 2016

Today's post is written by Rob Montague, a Developer at Sonoma Partners.

I needed to show a specific section based on the status of the entity.  I was trying to use onload only, but unfortunately, this doesn’t trigger when the status changes even though the page pseudo-refreshes.

You can, however, add an on-change event to the statecode field and whenever the state changes, it will fire your event.

Sample Code:

function opportunityExecuteOnLoad() {
    hideShowSectionsBasedOnState();
    attachEvents();
}

function attachEvents() {
    var statecode = Xrm.Page.getAttribute("statecode");
    if (statecode) {
        statecode.addOnChange(hideShowSectionsBasedOnState);
    }
}

function hideShowSectionsBasedOnState() {
    // Make sure statecode exists and you can read its value
    var stateCodeAttribute = Xrm.Page.getAttribute('statecode');
    if (!stateCodeAttribute || !stateCodeAttribute.getSelectedOption()) {
        return;
    }

    var stateCode = stateCodeAttribute.getSelectedOption(),
        // You can change "open" to whatever status you need
        isStatusOpen = stateCode.text.toLowerCase() === "open",
        generalTab = Xrm.Page.ui.tabs.get("general");

    // If the general tab doesn't exist, exit
    if (!generalTab) {
        return;
    }

    // Get the first section. If it doesn't exist, do nothing; otherwise
    // show it when the state is open and hide it when it isn't.
    var sectionToShowIfOpen = generalTab.sections.get("sectionToShowIfOpen");
    if (sectionToShowIfOpen) {
        sectionToShowIfOpen.setVisible(isStatusOpen);
    }

    // Get the second section. If it doesn't exist, do nothing; otherwise
    // hide it when the state is open and show it when it isn't.
    var sectionToHideIfOpen = generalTab.sections.get("SectionToHideIfOpen");
    if (sectionToHideIfOpen) {
        sectionToHideIfOpen.setVisible(!isStatusOpen);
    }
}

You are now able to harness the status change event and customize your page whenever the state changes.

Topics: Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016

FetchXML: Left Outer Joins with Multiple On Clauses

Having worked with CRM for ten years, I thought I understood everything that was possible with FetchXML. After all, it seems pretty straightforward, and each clause has an almost one-to-one equivalent in SQL. However, while recently doing some work on a project that required me to find records missing certain child records, I was surprised to find my understanding of left outer joins was incomplete.

The Requirement

In the project I was working on, we were using the native Connection entity to track relationships between contacts and users. Some of the Connections were manually created, but others needed to be automatically created based on business logic. In some cases we needed to detect all contacts with whom a user did not have a connection of a specified role.  This seemed like a good case for a left outer join, so I sat down and wrote the following FetchXML:
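The original query isn't reproduced here, but a query in that shape looks roughly like the following sketch (the selected attributes and GUID placeholders are illustrative):

```xml
<!-- Illustrative sketch: contacts lacking a connection of a specific
     role to a given user. The conditions inside the link-entity are the
     "multiple on clauses" discussed below. -->
<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <link-entity name="connection" from="record1id" to="contactid"
                 link-type="outer" alias="conn">
      <filter>
        <condition attribute="record2id" operator="eq" value="{USER-GUID}" />
        <condition attribute="record2roleid" operator="eq" value="{ROLE-GUID}" />
      </filter>
    </link-entity>
    <filter>
      <condition entityname="conn" attribute="connectionid" operator="null" />
    </filter>
  </entity>
</fetch>
```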

The Concern

As I reviewed the FetchXML, I became concerned that I wouldn’t get the proper results with this query. I was assuming that CRM only used the to and from attributes on the link-entity element to build the on clause for the left join in SQL.  I knew that if the additional conditions inside the link-entity were added to the SQL where clause, I would get no rows back.  In short, I was worried the generated SQL would look something like this:
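To make the concern concrete, here is the difference sketched in simplified SQL (table and column names are illustrative, not CRM's actual schema):

```sql
-- Conditions in the WHERE clause: unmatched contacts are filtered out,
-- because their connection columns are NULL and fail the comparisons.
SELECT c.FullName
FROM Contact c
LEFT OUTER JOIN Connection conn ON conn.Record1Id = c.ContactId
WHERE conn.Record2Id = @userId
  AND conn.Record2RoleId = @roleId;

-- Conditions in the ON clause: unmatched contacts survive with NULL
-- connection columns, which is exactly what a "missing child" query needs.
SELECT c.FullName
FROM Contact c
LEFT OUTER JOIN Connection conn
    ON conn.Record1Id = c.ContactId
   AND conn.Record2Id = @userId
   AND conn.Record2RoleId = @roleId
WHERE conn.ConnectionId IS NULL;
```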

It was enough of a concern that I decided to fire up SQL Profiler on my local dev org and see what exactly CRM generates in this case.  Much to my surprise I found the following (slightly cleaned up to help legibility):

In Summary

So in the end, CRM came through and put the link-entity’s conditions in the on clause of the left join.  This subtle difference has a huge effect on the results returned and makes left joins much more useful in CRM than one might assume from the FetchXML structure.  This left me with an efficient query to solve the business requirement and a newfound respect for FetchXML.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Editable Grid for 2015 is Now Available

Editable Grid for 2015 is now available!

Some background information for you: years ago, Sonoma Partners developed Editable Grid, a popular tool that allowed users to edit records inline within a View. Based on user feedback, we have rebuilt the tool for CRM 2015, with improvements that increase both functionality and usability. Updates include:

  • Users can edit any field type, allowing update of multiple records at one time. This saves the user from having to open each record individually.
  • Works with native, custom, and personal Views.
  • Works with native and custom entities. Previously, Editable Grid was limited to the core entities: Contact, Account, Lead, and Opportunity.

Editable Grid

Download Editable Grid for 2015 now. If you have any questions about the Editable Grid utility, or anything related to Microsoft Dynamics CRM 2015, please contact us.


Topics: Microsoft Dynamics CRM 2015