Sonoma Partners Microsoft CRM and Salesforce Blog

Need to export more records to Excel? We’ve got you covered with the OrgDBOrgSettings Editor!

With Dynamics, the default maximum record count to export to Excel is 10,000.  While this may work for smaller businesses without a lot of data, it won’t work for most organizations.  An instance of this came up recently where a client of ours kept hitting the 10,000 record limit even though they had many more records to export.

Typically in the past, if the customer was CRM On-Premise, you would be able to access this setting (along with the other OrgDBOrgSettings) using direct SQL.  Updating these values with SQL definitely wasn’t supported, but at least you could have a conversation about updating the settings if you had individuals who knew what they were doing, or you could create a support ticket with Microsoft to help you out.

However, if you had CRM Online, these settings weren’t available to you through the UI or even through SQL since with Online, you don’t have direct SQL access to your database.  What can you do?

That’s where the OrgDBOrgSettings editor comes into play.  You can download the managed solution from this link.  The process to get it installed and working is pretty simple: download the managed solution, import it as a normal solution into your environment, and then open up the solution.

From the configuration page of the solution, you’ll see the different settings that you have access to, what the default value is, what the current value is, and what the maximum value is (there are some limitations – you cannot update the MaxRecordsForExportToExcel to 500,000,000).


To edit a value, either double click on a row, or click the Edit link in the row for that setting.  When you do so, you have the option to set a custom value or revert back to the default.  A checkbox at the bottom of the configuration page controls whether a confirmation prompt is displayed when you make an update.



If you try to set a value over the maximum, you’ll get a message stating the requested change wasn’t saved, and the value will remain as it currently is.


This is a great utility to make supported updates to the OrgDBOrgSettings without having to reach out to Microsoft Support.  For a full list of all the settings that can be updated and a description of what each setting drives, navigate to this link.  Also, for more explanation on how to use the tool and what it can be used for, see this post from Sean McNellis, who created the solution.  While this solution has been available for some time now, we’re hoping this is a great refresher to let you know what tools are available for free to help you make changes on your own.

Topics: CRM Best Practices Microsoft Dynamics CRM Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Building CRM Web Resources with React

Web Resource Development

Microsoft Dynamics CRM has allowed us to develop and host custom user interfaces as Web Resources since CRM 2011.  Since then, the web has exploded with JavaScript frameworks.  In addition, browsers have started to converge on standards both in JavaScript object support and CSS.  In short, it’s a great time to be building custom user interfaces on top of Microsoft Dynamics CRM.

Today we’ll be working with React, an emerging favorite in the JavaScript world.  React’s key benefits are its fast rendering time and its support of JSX.  React is able to render changes to the HTML DOM quickly, because all rendering is first done to JavaScript objects, which are then compared to the previously generated HTML DOM for changes.  Then, only those changes are applied to the DOM.  While this may sound like a lot of extra work, typically changes to the DOM are the most costly when it comes to performance.  JSX is a syntax that combines JavaScript and an XML-like language and allows you to develop complex user interfaces succinctly.  JSX is not required to use React, but most people typically use it when building React applications.

The Sample Application

To demonstrate these benefits, we’ll build a simple dashboard component that displays a list of the top 10 most recently created cases.  We’ll have the web resource querying for new cases every 10 seconds and immediately updating the UI when one is found.


The files that I will be creating will have the following structure locally:

CaseSummary/ 
├── index.html 
├── styles.css 
├── app.jsx 
└── components/ 
    ├── CaseSummary.jsx     
    ├── CaseList.jsx 
    └── Case.jsx

However, when we publish them as web resources in CRM, they will be simplified to the following:

demo_/
└── CaseSummary/ 
    ├── index.html 
    ├── styles.css 
    └── app.js

Other than including the publisher prefix folder, the main change is that all of the JSX files have been combined into a single JavaScript file.  We’ll step through how to do this using some command line tools.  There are a few good reasons to “compile” our JSX prior to pushing to CRM:

  1. Performance – We can minify the JavaScript and bundle several frameworks together, making it more efficient for the browser to load the page.
  2. More Performance – JSX is not a concept that browsers understand by default.  By converting it to plain JavaScript at compile time, we can avoid paying the cost of conversion every time the page is loaded.
  3. Browser Compatibility – We can write our code using all of the features available in the latest version of JavaScript and use the compiler to fill in the gaps for any older browsers that might not support these language features yet.
  4. Maintainability – Separating our app into smaller components makes the code easier to manage.  As you build more advanced custom UI, the component list will grow past what I am showing here.  By merging multiple files together, no matter how many JSX files we add to the project, we just need to push the single app.js file to the CRM server when we are ready.
  5. Module Support – Many JavaScript components and libraries are distributed today as modules.  By compiling ahead of time we can reference modules by name and still just deploy them via our single app.js file.

Exploring the Source Code

The full source code for the example can be found at https://github.com/sonomapartners/web-resources-with-react, but we will explore the key files here to add some context.

index.html

This file is fairly simple.  It includes a reference to CRM’s ClientGlobalContext, the compiled app.js and our style sheet.  The body consists solely of a div to contain the generated UI.
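
A sketch of what this can look like (the container id and the relative path to ClientGlobalContext.js.aspx are assumptions that depend on your web resource folder structure):

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Case Summary</title>
    <!-- CRM's client-side context; adjust the relative path to match the web resource's folder depth -->
    <script src="../../ClientGlobalContext.js.aspx"></script>
    <link rel="stylesheet" href="styles.css" />
</head>
<body>
    <!-- app.js renders the React component tree into this container -->
    <div id="container"></div>
    <script src="app.js"></script>
</body>
</html>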

app.jsx

Now things start to get more interesting.  We start by importing a few modules.  babel-polyfill will fill in some browser gaps.  In our case it defines the Promise object for browsers that don’t have a native version (Internet Explorer).  The last three imports will add React and our top level CaseSummary component.  Finally we register an onload event handler to render our UI into the container div.
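
A sketch of the file (the container id matches the index.html sketch above):

// app.jsx -- the entry point for the web resource
import "babel-polyfill";            // provides Promise for browsers that lack it (Internet Explorer)
import React from "react";
import ReactDOM from "react-dom";
import CaseSummary from "./components/CaseSummary.jsx";

window.addEventListener("load", function () {
    // Render the top level component into the container div from index.html
    ReactDOM.render(<CaseSummary />, document.getElementById("container"));
});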

components/CaseSummary.jsx

CaseSummary is our top level component and also takes care of our call to the CRM Web API.  This is also our first look at creating a component in React, so let’s take a look at each function.  React.createClass will take the object passed in and wrap it in a class definition.  Of the five functions shown here, four of them are predefined by React as component lifecycle methods: getInitialState, componentDidMount, componentWillUnmount and render.  getInitialState is called when an instance of the component is created and should return an object representing the starting point of this.state for the component.  componentDidMount and componentWillUnmount are called when the instance is bound to and unbound from the DOM elements respectively.  We use the mounting methods to set and clear a timer, which calls the loadCases helper method.  Finally, render is called each time the state changes and a potential DOM change is needed.  We also have an additional method, loadCases, where we use the fetch API to make a REST call.  The call to this.setState will trigger a render whenever cases are loaded.  We definitely could have made this component smarter by only pulling case changes, but this version demonstrates the power of React by having almost no impact on performance even though it loads the 10 most recent cases every 10 seconds.
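
A sketch of the component (the Web API version, selected fields, and CSS class names are assumptions):

// components/CaseSummary.jsx
import React from "react";
import CaseList from "./CaseList.jsx";

var CaseSummary = React.createClass({
    getInitialState: function () {
        return { cases: [] };
    },

    componentDidMount: function () {
        this.loadCases();
        this.timer = setInterval(this.loadCases, 10000); // poll every 10 seconds
    },

    componentWillUnmount: function () {
        clearInterval(this.timer);
    },

    loadCases: function () {
        // GetGlobalContext comes from ClientGlobalContext.js.aspx (referenced in index.html);
        // fetch is injected by webpack's ProvidePlugin (see webpack.config.js below)
        var url = GetGlobalContext().getClientUrl() +
            "/api/data/v8.0/incidents" +
            "?$select=incidentid,title,ticketnumber,createdon&$orderby=createdon desc&$top=10";

        fetch(url, {
            credentials: "same-origin",
            headers: { "Accept": "application/json" }
        })
            .then(function (response) { return response.json(); })
            .then(function (data) {
                // setState triggers a render; React only touches the DOM where something changed
                this.setState({ cases: data.value });
            }.bind(this));
    },

    render: function () {
        return (
            <div className="case-summary">
                <h2>Most Recent Cases</h2>
                <CaseList cases={this.state.cases} />
            </div>
        );
    }
});

export default CaseSummary;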

components/CaseList.jsx

By comparison, CaseList.jsx is pretty straightforward.  There are two interesting parts worth pointing out.  The use of this.props.cases is possible because CaseSummary.jsx set a property on the CaseList like this: <CaseList cases={this.state.cases} />.  Also, it is important to notice the use of the key attribute on each Case.  Whenever you generate a collection of child elements, each one should get a value for the key attribute that can be used when React is comparing the Virtual DOM to the actual DOM.
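
A sketch of the component (the prop names and the use of incidentid as the key are assumptions):

// components/CaseList.jsx
import React from "react";
import Case from "./Case.jsx";

var CaseList = React.createClass({
    render: function () {
        // "key" gives React a stable identity for each child when diffing the Virtual DOM
        var items = this.props.cases.map(function (c) {
            return <Case key={c.incidentid} caseRecord={c} />;
        });

        return <ul className="case-list">{items}</ul>;
    }
});

export default CaseList;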

components/Case.jsx

The simplest of the components, Case.jsx outputs some properties of the case with some simple HTML structure.
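
A sketch of the component (field and class names are assumptions):

// components/Case.jsx
import React from "react";

var Case = React.createClass({
    render: function () {
        return (
            <li className="case">
                <span className="ticket-number">{this.props.caseRecord.ticketnumber}</span>
                <span className="case-title">{this.props.caseRecord.title}</span>
            </li>
        );
    }
});

export default Case;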

Compiling the Code

We’re going to start with using NodeJS to install both development tools and runtime components that we need.  It is important to note that we’re using NodeJS as a development tool, but it isn’t being used after the code is deployed to CRM.  We’ll start by creating a package.json file in the same folder that holds our index.html file.

package.json
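
A sketch of what this file can look like (the package list matches the bullets below; the exact version numbers are assumptions):

{
  "name": "case-summary",
  "version": "1.0.0",
  "scripts": {
    "build": "webpack",
    "start": "webpack-dev-server"
  },
  "dependencies": {
    "babel-polyfill": "^6.9.0",
    "react": "^15.2.0",
    "react-dom": "^15.2.0",
    "whatwg-fetch": "^1.0.0"
  },
  "devDependencies": {
    "babel-core": "^6.10.0",
    "babel-loader": "^6.2.4",
    "babel-preset-es2015": "^6.9.0",
    "babel-preset-react": "^6.11.0",
    "exports-loader": "^0.6.3",
    "imports-loader": "^0.6.5",
    "webpack": "^1.13.1",
    "webpack-dev-server": "^1.14.1",
    "webpack-merge": "^0.14.0",
    "webpack-validator": "^2.2.7"
  }
}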

After installing NodeJS, you can open a command prompt and run “npm install” from the folder with package.json in it.  This will download the packages specified in package.json to a local node_modules folder.  At a high level, here is what the various packages do:

  • webpack, babel-*, imports-loader, and exports-loader: our “compiler” that will process the various project files and produce the app.js file.
  • webpack-merge and webpack-validator: used to help manipulate and validate the webpack.config.js (we will discuss this file next).
  • webpack-dev-server: a lightweight HTTP server that can detect changes to the source files and compile on the fly.  Very useful during development.
  • react and react-dom: The packages for React.
  • babel-polyfill and whatwg-fetch: They are bringing older browsers up to speed.  In our case we are using them for the Fetch API (no relation to Fetch XML) and the Promise object.

The scripts defined in the package.json are runnable by typing npm run build or npm run start from the command prompt.  The former will run and produce our app.js file, and the latter will start up the previously mentioned webpack-dev-server.  Prior to running either of them, though, we need to finish configuring webpack.  This requires one last config file to be placed in the same folder as package.json: webpack.config.js.

webpack.config.js

As the file name implies, webpack.config.js is the configuration file for webpack.  Ultimately it should export a configuration object, which can define multiple entries.  In our case we have a single entry that monitors app.jsx (and its dependent files) and outputs app.js.  We use the webpack.ProvidePlugin plugin to inject whatwg-fetch for browsers that lack their own fetch implementation.  We also define that webpack should use the babel-loader for any .jsx or .js files it encounters and needs to load.  The webpack-merge module allows us to conditionally modify the configuration.  In our case we are setting the NODE_ENV environment variable to “production” for a full build and turning on JavaScript minification.  Finally we use the webpack-validator to make sure that the resulting configuration is valid.
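
Putting that together, here is a sketch of the file (the entry path, Babel presets, and the npm_lifecycle_event check for the build script are assumptions):

// webpack.config.js
var webpack = require("webpack");
var merge = require("webpack-merge");
var validate = require("webpack-validator");

// npm sets npm_lifecycle_event to the name of the script being run ("build" or "start")
var target = process.env.npm_lifecycle_event;

var config = {
    entry: { app: "./app.jsx" },
    output: { path: __dirname, filename: "app.js" },
    module: {
        loaders: [
            // Run .js/.jsx files through Babel with the ES2015 and React presets
            {
                test: /\.jsx?$/,
                exclude: /node_modules/,
                loader: "babel-loader",
                query: { presets: ["es2015", "react"] }
            }
        ]
    },
    plugins: [
        // Inject the whatwg-fetch polyfill wherever "fetch" is referenced
        new webpack.ProvidePlugin({
            "fetch": "imports-loader?this=>global!exports-loader?global.fetch!whatwg-fetch"
        })
    ]
};

if (target === "build") {
    // Full build: flag production (so React strips its dev-only code) and minify the output
    config = merge(config, {
        plugins: [
            new webpack.DefinePlugin({ "process.env.NODE_ENV": JSON.stringify("production") }),
            new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } })
        ]
    });
}

module.exports = validate(config);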

Deploying and Continuing Development

At this point all of the files should be set up.  To deploy the code, you would run npm run build and then deploy index.html, app.js, and styles.css as web resources to CRM. 

If it becomes tedious to keep deploying app.js to CRM as you make small changes, you can set up an AutoResponder rule in Fiddler to point at the webpack-dev-server.  Once this rule is in place, when the browser requests files like index.html and app.js from the right subfolder of the CRM server, Fiddler will intercept the request and provide the response from webpack-dev-server instead.  This way you can just save your local JSX files and hit refresh in the browser as you are developing.  Of course you need to be sure that you have started webpack-dev-server by running npm run start from the command line.  I have included an example for the rule I set up for this demo below:

[Screenshot: Fiddler AutoResponder rule mapping the CRM web resource URLs to the local webpack-dev-server]

With that you should be set to start building your own CRM Web Resources using React!

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Firing events in JavaScript when status changes in Dynamics CRM 2013/ 2015/ 2016

Today's post is written by Rob Montague, a Developer at Sonoma Partners.

I needed to show a specific section based on the status of the entity.  I was trying to use onload only, but unfortunately, this doesn’t trigger when the status changes even though the page pseudo-refreshes.

You can, however, add an on-change event to the statecode field and whenever the state changes, it will fire your event.

Sample Code:

function opportunityExecuteOnLoad() {
    hideShowSectionsBasedOnState();
    attachEvents();
}

function attachEvents() {
    var statecode = Xrm.Page.getAttribute("statecode");
    if (statecode) {
        statecode.addOnChange(hideShowSectionsBasedOnState);
    }
}

function hideShowSectionsBasedOnState() {
    // Make sure statecode exists and you can read its value
    var stateCodeAttribute = Xrm.Page.getAttribute("statecode");
    if (!stateCodeAttribute || !stateCodeAttribute.getSelectedOption()) {
        return;
    }

    var stateCode = stateCodeAttribute.getSelectedOption(),
        // You can change "open" to whatever status you need
        isStatusOpen = stateCode.text.toLowerCase() === "open",
        generalTab = Xrm.Page.ui.tabs.get("general");

    // If the general tab doesn't exist, exit
    if (!generalTab) {
        return;
    }

    // Get the first section.  If it doesn't exist, do nothing; otherwise
    // show it when the state is open and hide it when it isn't.
    var sectionToShowIfOpen = generalTab.sections.get("sectionToShowIfOpen");
    if (sectionToShowIfOpen) {
        sectionToShowIfOpen.setVisible(isStatusOpen);
    }

    // Get the second section.  If it doesn't exist, do nothing; otherwise
    // hide it when the state is open and show it when it isn't.
    var sectionToHideIfOpen = generalTab.sections.get("SectionToHideIfOpen");
    if (sectionToHideIfOpen) {
        sectionToHideIfOpen.setVisible(!isStatusOpen);
    }
}

You are now able to harness the status change event and customize your page whenever the state changes.

Topics: Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016

FetchXML: Left Outer Joins with Multiple On Clauses

Having worked on CRM for ten years, I thought I understood everything that was possible with FetchXML. After all, it seems pretty straightforward, and each clause has almost a one-to-one equivalent in SQL. However, while I was recently doing some work on a project that required me to find records missing certain child records, I was surprised to find my understanding of left outer joins was incomplete.

The Requirement

In the project I was working on, we were using the native Connection entity to track relationships between contacts and users. Some of the Connections were manually created, but others needed to be automatically created based on business logic. In some cases we needed to detect all contacts that a user did not have a connection of a specified role with.  This seemed like a good case for using a left outer join and I sat down and wrote the following FetchXML:
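
A sketch of the shape of that query (the alias, selected attributes, and placeholder GUIDs are illustrative):

<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <!-- Outer join to connections for a specific user and connection role -->
    <link-entity name="connection" from="record1id" to="contactid" alias="conn" link-type="outer">
      <filter type="and">
        <condition attribute="record2id" operator="eq" value="{USER-GUID}" />
        <condition attribute="record2roleid" operator="eq" value="{ROLE-GUID}" />
      </filter>
    </link-entity>
    <!-- Keep only contacts where no such connection exists -->
    <filter type="and">
      <condition entityname="conn" attribute="connectionid" operator="null" />
    </filter>
  </entity>
</fetch>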

The Concern

As I reviewed the FetchXML, I became concerned that I wouldn’t get the proper results with this query. I was assuming that CRM only used the to and from attributes on the link-entity element to build the on clause for the left join in SQL.  I knew that if the additional conditions inside the link-entity were added to the SQL where clause, I would get no rows back.  In short, I was worried the generated SQL would look something like this:
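
In rough terms (table and column names simplified), the fear was SQL along these lines:

-- Feared shape: the link-entity conditions pushed down into the WHERE clause,
-- which would eliminate every row the LEFT OUTER JOIN produces
SELECT contact.FullName
FROM ContactBase AS contact
LEFT OUTER JOIN ConnectionBase AS conn
    ON conn.Record1Id = contact.ContactId
WHERE conn.Record2Id = @userId
  AND conn.Record2RoleId = @roleId
  AND conn.ConnectionId IS NULL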

It was enough of a concern that I decided to fire up SQL Profiler on my local dev org and see what exactly CRM generates in this case.  Much to my surprise I found the following (slightly cleaned up to help legibility):
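
Roughly, the generated SQL had this shape (simplified here rather than the verbatim trace):

-- The link-entity conditions end up in the JOIN's ON clause, so the outer join
-- still returns contacts that have no matching connection
SELECT contact.FullName
FROM ContactBase AS contact
LEFT OUTER JOIN ConnectionBase AS conn
    ON conn.Record1Id = contact.ContactId
   AND conn.Record2Id = @userId
   AND conn.Record2RoleId = @roleId
WHERE conn.ConnectionId IS NULL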

In Summary

So in the end, CRM came through and put the link-entity’s conditions in the on clause for the left join.  This subtle difference has a huge impact on the results returned and makes left joins much more useful in CRM than one might assume based on the FetchXML structure.  This left me with an efficient query to solve the business requirement and a newfound respect for FetchXML.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

CRM DevTools - Easily Update Hidden or Read-Only Fields for Testing

When testing functionality in CRM, most likely you will need to update some data in order to complete the test.  But what happens if the field doesn’t exist on the record form or is read-only?  One option would be to edit the form temporarily to display the field on the form, update the field value and then remove the field from the form after your test.  A better option, to save the hassle of all that publishing, would be to create an on demand workflow that updates your field directly and then manually run that workflow against your record.  However, CRM DevTools provides an even better option through the Test tab by allowing a System Administrator to update the field value through the record form even if the field doesn’t exist on the form or it is read-only.

First, you need to head over to the Chrome web store and add the CRM DevTools extension to your Chrome browser.  Also make sure you are a System Administrator of your environment, otherwise the Test tab will not display.

Next, head to the record you want to update and press F12 which will pop open Chrome’s DevTools pane.  There should be a CRM DevTools tab at the top of the pane.


Click CRM DevTools and once it loads, click on the TEST tab.  You can now select up to three fields at a time.  Once you select a field, it will display the existing value in the Value column and then you can change that value.  If the field is a Lookup then you will need to enter the schema name for that Lookup in the Entity Schema Name column.  Then click Update and your field will be set to the new value.


Now you can easily update hidden or read-only fields in just a few easy steps without messing with the form or creating unnecessary workflows! 

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Access Teams: One developer’s journey of hunting down his white whale and living to tell the tale

Today's post is written by Mike Dearing, a development principal here at Sonoma Partners.

Since the introduction of Access Teams in CRM 2013, I’ve had several opportunities to implement creative ways of leveraging this leaner form of record sharing to address our clients’ more advanced security needs.  For those unfamiliar with this feature, it is a simplified and higher-performing alternative to sharing records with individual users who have no relation to one another other than needing access to a particular record.

MSDN has a great write-up if you want to learn more about when to use access teams versus owner teams.  Though implementing and using access teams is fairly straightforward, I recently ran into some snags when programmatically adding members to these teams that I’d like to share with my fellow CRM developers, to hopefully save you the headache of troubleshooting them should you be as unfortunate as I once was.

The user id is invalid

What user id, you ask?  After some sleuthing, I determined it to be the team administrator assigned when CRM programmatically created my access team.  If you find yourself programmatically adding members to access teams, make sure that you are not instantiating the organization service as SYSTEM, and that a prior plugin that kicks off your code isn’t doing so either.

The initial access team creation, which happens when the first member is added to it, needs to run as an actual CRM user.  Whether you leave that as the current user or decide to elevate to a system administrator service account is up to you.  That user then becomes the team administrator for the access team, which CRM prevents from being SYSTEM.  Otherwise you will get an error of "the user id is invalid."

The wrong way is to create the organization service as SYSTEM (UserId will be SYSTEM); the right way is to ensure the service is created for an actual CRM user, as sketched below.
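
A minimal sketch of what this looks like in plugin code (the entity name, GUID sources, and team template id below are illustrative):

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

// Sketch of a plugin step that adds a user to a record's access team.
public class AddAccessTeamMemberPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));

        // Wrong: passing null runs the service as SYSTEM, so when the first member is added
        // and CRM creates the access team, SYSTEM would become the team administrator and
        // the call fails with "the user id is invalid".
        // IOrganizationService service = factory.CreateOrganizationService(null);

        // Right: run as the calling user (or elevate to a real system administrator account)
        // so that user becomes the access team's administrator.
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        service.Execute(new AddUserToRecordTeamRequest
        {
            Record = new EntityReference("contact", context.PrimaryEntityId),   // the record being shared
            SystemUserId = context.InitiatingUserId,                            // the user being granted access (illustrative)
            TeamTemplateId = new Guid("00000000-0000-0000-0000-000000000000")   // your access team template id
        });
    }
}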

Everything the light touches is our kingdom

The cascading section of this post has been updated to include a leaner cascading configuration, thanks to clarifications from Adam Vero.

If you want child records to inherit their parent record’s access team privileges, you will need to set up the behavior for the child relationship to have, at a minimum, cascade share, cascade unshare, and cascade reparent. Cascade share and unshare ensure that any child records created before your access team has its first member added will inherit the parent's access team template's rights appropriately. Cascade reparent ensures that any records created after the access team has been added (or that switch parents from a different contact to the one with the access team) will also inherit these rights.



That’s it for now, my friends.  May your future programmatic access team implementations be merry and bright!

Facing your own CRM white whale? Contact Sonoma Partners, we’d love to help out!


Topics: CRM Best Practices CRM for Professional Services Microsoft Dynamics CRM 2013

Permissions Issues for CRM 2013/2015 Forms

Today's post is written by Kyle Bippus, an Associate Software Developer at Sonoma Partners.

I was on a support case with a client where a sales representative was getting a strange error from CRM when converting a Lead into a Contact:

[Screenshot: the “Insufficient Permissions” error dialog]

This case was unusual because the sales representative had the correct Read permissions for the Lead and Contact entities, as well as all other related records on the form.  Furthermore, if the same user was given the System Administrator role, accessed the newly-created Contact record, and then had the role removed, the user could from then on view the record instead of getting the above error.

Since the error does not give any more information beyond “Insufficient Permissions,” the next step was to scan the trace logs and look for any error messages that look similar.  After some searching I came across this message, which occurred around the time I replicated the error:

Principal user…is missing prvReadComplexControl Privilege

According to the Microsoft Dynamics CRM SDK, the prvReadComplexControl privilege is the Read permission for the ComplexControl entity (also known as Process Configuration).  Below is the SDK’s description for this entity:

This entity is for internal use only. However, users need read access to this entity in order to see the updated experience for lead and opportunity forms.

Essentially what this means is that users need the Read privilege for this entity in order to view the user interface for entity forms in CRM 2013 and 2015.  This applies not only to Lead and Opportunity, but to almost all entities in CRM (some entities, such as Resource Group, still retain the look and feel from CRM 2011).  All security roles should have the Read privilege for the Process Configuration entity by default, but sometimes this configuration gets messed up when an organization is upgraded from a previous version of CRM.

So all we need to do is set the Read privilege for this entity to “Organization” for each affected security role.  Below are the steps to do just that:

  1. From your home page, navigate to Settings, then to Security, then to Security Roles
  2. Double-click on the role you want to modify
  3. Click the “Customization” tab, then set the Read privilege on the Process Configuration entity to “Organization” (see picture below)

[Screenshot: the Process Configuration entity’s Read privilege set to “Organization” on the Customization tab]

You should no longer see the “Insufficient Permissions” error. I hope this helps! If you have any additional questions, contact us to speak with one of our Microsoft Dynamics CRM experts.

Topics: Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015

How to Mass Update Business Process Stages

Today's guest blogger is Rachel Sullivan, a Senior Consultant at Sonoma Partners

CRM allows you to easily update fields on records using its native bulk edit functionality.  Any records that you can query and return in a list or grid can be edited using this functionality.

However, since the Business Process stage is currently not a field on the Opportunity form, there is no way to update it using this native bulk edit feature.  Fear not, though, as Sonoma is here to provide a workaround for editing this field using native export/import.

To update the Business Process Stage using this workaround, follow the steps below.

1. You’ll first need to find the unique identifier of each process stage (the GUID).  To accomplish this, use Advanced Find to identify the opportunities that you would like to update.

2. Edit the columns and include Process Stage.

3. Click Results.  From this view, you can see all of the unique Process Stages and their GUIDs.


You will need to match each unique GUID with its Process Stage Name.  The easiest way to do this is to open a record for each unique Process Stage value and view the Process Stage name.

For Example:

  • d3ca8878-8d7b-47b9-852d-fcd838790cfd = Propose
  • 650e06b4-789b-46c1-822b-0da76bedb1ed = Develop


4. Once all GUIDs are gathered, you’ll need to export the records that you’d like to update.  When doing this, make sure to check the box that makes the file available for re-import (note that with CRM 2015 Online Update 1, all records are automatically exported for re-import and this step isn’t necessary.  This should come to CRM 2015 On-Premise in the fall).  Save the file locally and open it with Excel.


5. Once in Excel, copy and paste the GUID of the business process stage (that you’d like to change each record to) into the Process Stage column.  Save the file and import it back into CRM.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM Online

Dynamics CRM 2015 – Advanced Find Returns

Like most users of Dynamics CRM 2013, it’s taking me a while to get used to the new navigation.  However, as with any software deployment, it takes users time to get used to the new functionality and especially the new look and feel.

It’s hard to remember back to when CRM 2011 came out (way back almost 4 years ago now) and introduced the ribbon.  What a crazy concept the ribbon was, and how would we ever get used to it?  However, after using 2011 over time (and honestly any Microsoft product), the ribbon became second nature.  When Microsoft removed the ribbon in 2013, everyone complained it was missing.  How would we now get used to not having a ribbon?  I believe that over time Dynamics CRM 2013 and 2015 will fall into the same camp as 2011, where users will become comfortable using the new navigation and will have forgotten the ribbon ever existed.

We just recently upgraded our internal CRM deployment to 2015, and I’m forced to get used to the navigation even quicker than originally anticipated.  Of course with every new release there are learning curves and the questions of “why did they do it this way?”, but the good news is that Microsoft is listening.

One of the biggest complaints about 2013 was that Advanced Find was buried and not readily available on the global tool bar like it was in 2011.  In some areas of the application you couldn’t even initiate Advanced Find, and in those areas where you could, you had to click on the ellipsis to bring down additional contextual menu items to find it.


However, the good news is that in 2015, Microsoft has listened to the initial feedback from users of 2013 and added the Advanced Find button back to the global tool bar, so you can always initiate Advanced Find no matter where you are in the application. Enjoy!


Topics: CRM Upgrade Microsoft Dynamics CRM Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015

Data De-Duplication through Fuzzy Matching

Today's guest blogger is Chris LaBadie, a Senior Database Developer at Sonoma Partners

Whenever we work on a data project for a customer, the subject of cleaning their client data is always a discussion point.  The project could involve a migration from one CRM platform to another, integrating an ERP system with CRM, or even cleaning their data “in place” – but the message is usually the same: “we know we have duplicates in our data, help us clean it up!”

Anybody that has ever tried to track people or companies knows that it can be a huge challenge to avoid duplicate data.  When you have multiple users maintaining data, it is very common to introduce duplicate data no matter the de-duplication safeguards your system uses. 

Working with people:

  • Names can be difficult to track (misspellings, maiden names, nicknames, etc.).
  • People move/change contact information.
  • Even unique fields like address or email address can be shared by more than one person.

Working with companies:

  • Abbreviations or acronyms in names can present a challenge.
  • Companies can have multiple locations.
  • Companies often use different addresses to track billing, shipping, etc.

In our experience, there isn’t a magic bullet to eliminate duplicate data.  The best solution is usually a layered approach: use form validation to ensure quality data entry, intelligent system design to store records in an organized manner, and system de-duplication rules to search out potential duplicates and present to the user any records that might match the information they are attempting to enter.

However, a common project for Sonoma Partners is moving a client to a new CRM platform, and an important part of migrating data to the new platform is identifying potential duplicate data before it ever reaches the new system.  While this may sound like a large effort, it is actually pretty easy thanks to the Fuzzy Grouping functionality built into SQL Server Integration Services (SSIS).

Fuzzy Grouping allows SSIS to inspect a set of data and compare one or more fields in the dataset.  Rather than comparing the field data exactly, Fuzzy Grouping will match strings based on their sounds, giving more accurate results based on how a person would hear the string while overcoming misspellings, typos, abbreviations, nicknames, etc.  Note that you will need SQL Server Enterprise or SQL Server Developer edition to use Fuzzy Grouping.  This example was developed using Business Intelligence Development Studio (BIDS) in SQL Server 2012.

In this demonstration we will process an Excel file of contacts.  This file can be generated from any other system and used in a process like this to identify potential duplicate data before that data is migrated to CRM.  The end result of this process is to produce a file containing potential duplicates so they can be reviewed and cleaned up in the source system before data migration.

Fuzzy Grouping evaluates the file of Contacts and compares them based on selected fields.  Potentially duplicate records are grouped together and assigned a group number.

After Fuzzy Grouping, the process splits so it can sort the results based on their Fuzzy Grouping group number and count the number of records per group. 

Finally, a step to check the group count determines which records are potential dupes: a group count of 1 means the record is unique, while anything more than 1 means the group contains records that should be reviewed.  The potential duplicates are then exported to an Excel file for review.


Configuring Fuzzy Grouping is a pretty straightforward process: just select the fields you want to compare and set the minimum amount of similarity (roughly a percentage of matching).  This process can involve a little bit of trial and error while you fine-tune the Fuzzy Grouping to identify the records that are potential duplicates without letting through any false positives.

Typically we will start with lower minimums and go up until we are seeing the desired results.


When you run the process, you can see how many records it processes and how many records SSIS ultimately decided to export to Excel for review.


When the process is complete, you can view the results in your Excel file. Here you can see the unique record number assigned to the record (KeyIn), group number (KeyOut), overall score (percentage of match to the potential duplicate record), similarities for First Name/Last Name/City (percentage of match for each column), and the Group Count (number of potential duplicates per record group).  The highlighted values in the screenshot show some of the values that were compared and demonstrate how Fuzzy Grouping can identify potential duplicates despite common misspellings, nicknames, and partial matches. 

[Screenshot: the exported Excel results showing KeyIn, KeyOut, overall score, column similarity scores, and group count]

In conclusion, Fuzzy Grouping is an easy-to-use and powerful tool to assist in any data cleanup effort.  It is simple to set up, and it quickly evaluates large amounts of data.  SSIS can provide you with all of the tools to make informed decisions regarding your customer data, your most valuable asset.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2013