Sonoma Partners Microsoft CRM and Salesforce Blog

Turbo Forms: The Time is Nigh

Today's post is written by Mike Dearing, Development Principal at Sonoma Partners.

Released with CRM 2015 Online Update 1 and CRM 2016 on-premises, Turbo Forms was introduced to significantly decrease form load times.

And with the recent announcement that the legacy form rendering option will be deprecated, our clients that had yet to take the plunge have begun to express concern.  While it is true that for more heavily customized environments the switch to Turbo Forms can be quite intimidating, hopefully some of the issues that I recently faced will help you on your journey.

Accessing a Form’s Xrm Context From Within an Iframe

Prior to Turbo Forms, web resources could reliably access their host form's Xrm context by calling parent.Xrm.  However, with Turbo Forms enabled, we have found that setting the parent's Xrm context onto an exposed variable within the resource works well as an alternative.  The parent form can retrieve the iframe in a supported way and attach an onload event to it.  The onload event can then be used to pass along the parent's Xrm context, as per the following:
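
A minimal sketch of that pattern follows; the iframe name (IFRAME_myresource) and the setClientApi function exposed by the web resource are illustrative, not CRM APIs:

// Form script: register onFormLoad as an OnLoad handler in the form properties.
function onFormLoad() {
    var iframeControl = Xrm.Page.ui.controls.get("IFRAME_myresource");
    iframeControl.getObject().addEventListener("load", function () {
        // Hand the form's Xrm context to the web resource once it has loaded.
        var resourceWindow = this.contentWindow;
        if (resourceWindow && typeof resourceWindow.setClientApi === "function") {
            resourceWindow.setClientApi(Xrm);
        }
    });
}

// Web resource script: expose a setter instead of relying on parent.Xrm.
var clientXrm = null;
function setClientApi(xrm) {
    clientXrm = xrm;
}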

Asynchronous Script Loading

It has always been best practice to ensure that scripts dependent on one another are fully loaded before invoking each other.  The new script engine enforces this as well.  The easiest way to safeguard against race conditions is to refrain from calling into separate scripts immediately upon load of a script.  Instead, defer these calls to an OnLoad event handler registered through the form's properties.  The script engine will load all scripts before calling your load handlers, ensuring that you won't end up calling a script that hasn't fully loaded yet.
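
For example, a quick sketch (the script and function names are made up for illustration):

// script_a.js - depends on shared_library.js

// Risky: runs as soon as this file is parsed, while other scripts
// may still be loading asynchronously.
// SharedLibrary.initialize();

// Safe: register onFormLoad as an OnLoad handler in the form properties.
function onFormLoad() {
    // The script engine guarantees all form scripts are loaded by now.
    SharedLibrary.initialize();
}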

GetFormType() Returning the Wrong Form Type

One bug that we noticed during our upgrade is that Turbo Forms' implementation of the Xrm.Page.ui.getFormType() method doesn't return the proper type for a disabled form, despite showing the correct form type in the footer.  Instead, it returns a form type of 'update'.  This bug is still present as of 2016 SP1/Update 1.  Our workaround for the time being is to make an organization service call to get the access rights of the current user and determine whether the form is disabled.  This is unfortunately less efficient because of the overhead of the service call, but the scenario is enough of an edge case that hopefully it won't affect too many implementations. Microsoft support confirmed that a fix will be in place for CRM 2016 Update 2, with a projected release date of 2017 Q2.
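
For reference, here is a sketch of that workaround using the CRM 2016 Web API's RetrievePrincipalAccess function (the entity set name and API version are illustrative; older organizations would make the equivalent SOAP organization service call instead):

function checkFormDisabled(callback) {
    var context = Xrm.Page.context;
    var userId = context.getUserId().replace(/[{}]/g, "");
    var recordId = Xrm.Page.data.entity.getId().replace(/[{}]/g, "");
    // "accounts" is illustrative; use the entity set of the current record.
    var target = JSON.stringify({ "@odata.id": "accounts(" + recordId + ")" });
    var url = context.getClientUrl() + "/api/data/v8.0/systemusers(" + userId + ")" +
        "/Microsoft.Dynamics.CRM.RetrievePrincipalAccess(Target=@tid)?@tid=" +
        encodeURIComponent(target);

    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.setRequestHeader("Accept", "application/json");
    request.setRequestHeader("OData-MaxVersion", "4.0");
    request.setRequestHeader("OData-Version", "4.0");
    request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
            var rights = JSON.parse(request.responseText).AccessRights;
            // Treat a record the user cannot write to as a disabled form.
            callback(rights.indexOf("WriteAccess") === -1);
        }
    };
    request.send();
}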


Thinking of making the switch soon?  You don’t have to upgrade alone… we’re here to help!

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016

Get ready, Dynamics is about to get “Linked” in

Today's post is written by Aaron Robinson, an Engagement Manager at Sonoma Partners.

It’s true. Microsoft dropped a cool $26.2B to purchase LinkedIn.

Unless you have been on vacation for the last month in Vinh Hy Bay or just living in a cave, you probably already know this. At Sonoma Partners, this has been quite the buzz, filled with hope and speculation about what this means for the CRM world. It’s also personally interesting because the LinkedIn corporate offices sit in the floors directly underneath us in our Chicago office, giving us a unique perspective on their corporate culture.

These days there are a lot of things to get excited about in the world of CRM. Both Microsoft and Salesforce are continually adding to their product stack through their own product development cycles. Salesforce just launched their Spring ’16 release with enhancements to the Lightning Experience, Mobile Apps, Chatter, Analytics and more.  Microsoft’s Dynamics CRM 2016 Spring Wave for online customers added Learning Path, enhanced Office 365 group collaboration, multi-entity search, a new field service module, and the first release of CRM Online Portals.

In addition to developing their stacks, both companies are always on the hunt for best-of-breed solutions to add to their product arsenal. Salesforce most recently acquired several companies such as SteelBrick for quote-to-cash functionality, Implisit for predictive analytics, and Demandware for enterprise cloud commerce. Microsoft has also been on a buying spree, with targets such as FieldOne for field service functionality and ADXstudio for portals.  Frankly, I think all of this pales in comparison to the LinkedIn acquisition.

Why LinkedIn?

Do you have a LinkedIn account?  Do you have many colleagues that have LinkedIn accounts? Have you benefited in some way from having access to the largest self-updated business database in the world?  I’m sure you answered yes to at least one of these questions, if not all three.  Don’t worry, you’re in a small group of 433 million people who also subscribe to the site.

I think the most compelling part of LinkedIn is the self-updating component. I, like many users, regularly update the content associated with my account – both on my own and from the occasional prod from LinkedIn to do so. LinkedIn provides other value for its member base. It updates my profile for me and notifies my network about content that I’m producing, such as this blog post. Additionally, it is pretty much the de facto tool used by recruiters (judging by our use at Sonoma and the number of outreach messages I receive each week). Their communication platform generates significant content and membership views.  With such significant value as a stand-alone app, there are many possibilities for use in CRM.

Linking In CRM

Many organizations we work with have inquired about, requested, and sometimes even demanded LinkedIn integration with their CRM system. In recent years (2014 on), LinkedIn closed access to their API to all outside parties except Salesforce and Microsoft. LinkedIn currently supports integration with both platforms through their Sales Navigator app; however, many of our clients have complained about the price of the solution (top-level team edition) on top of their CRM platform cost.

With this acquisition, the Dynamics CRM product team has to be salivating over the potential use cases of LinkedIn data in CRM.  This could take the form of the current integration Dynamics online customers have with the InsideView add-on, which is available at no cost to those subscribers.  This add-on enables the use and sync of InsideView data in CRM.  With a similar methodology and pricing approach, Dynamics CRM could have the same capability with contact data that is more relevant and available in real time than the data sourced from InsideView.  As my colleague Bryson Engelen pointed out in his post, having accurate contact information and understanding the relationships between contacts is extremely important, especially as individuals in your network change companies and/or roles. It will be interesting to see if Microsoft even continues the relationship with InsideView, but don’t expect it to go anywhere in the immediate future. Of course we at Sonoma would like to see this applied to Salesforce as well in the interest of fairness to our customers, but I can appreciate that Microsoft may play this close to the vest as a competitive advantage. After all, I think I can say with a pretty good degree of certainty that had Salesforce been the buyer, they wouldn’t be in a sharing mood either.

The Future

It is going to be very exciting to see what comes of this acquisition, although don’t expect to see major headline shifts in the next 4-6 months. As we have seen with other Microsoft deals, it takes closer to 12-18 months for things to shake out a bit and a strategic direction and product roadmap to be determined. But with possibilities such as native contact enrichment and sync and deep relationship mapping, I can’t wait to see CRM linked in!

Topics: Microsoft Dynamics CRM

The Best Method to Demo Mobile Applications

Today's post is written by Brian Kasic, Principal Consultant at Sonoma Partners.

Mobile applications are becoming more and more prevalent within CRM projects.

Seeing how they function is critical to the success of any mobile deployment. However, in my experience, screen prints seem to be the standard way of showing functionality or training end users on how a mobile CRM application works. Typically users learn how to use mobile apps by interacting with them. They should be intuitive and straightforward. Demoing an app or training end users on how to use an app should be just as intuitive. However, getting started can sometimes be challenging, especially when there is a process involved or if end users are seeing the app for the first time.

At Sonoma Partners, we utilize a product called Reflector 2 to mirror your phone on your computer over Wi-Fi via AirPlay. The set-up and steps to reflect your app to your computer are simple. The Reflector license pricing is reasonable and worth the corporate license fee. By mirroring the app on your computer, you can quickly create re-usable videos to assist end users when they are first starting to work with their new mobile CRM applications.

Here are the steps for getting started on an iPhone:

  1. Download a trial of Reflector 2 to your computer. If you end up liking it, the pricing and licensing can also be found on the website.
  2. Launch Reflector 2 on your computer.

  3. Next, make sure your computer's Wi-Fi and your phone's Wi-Fi are on the same network. This is important and can be where you run into trouble. Also be aware that if you get bounced off of Wi-Fi, you need to start over; I’ve seen Reflector need to be completely closed and restarted to begin a new session after a Wi-Fi drop.

  4. From your phone, open Control Center. This can be done by swiping up from the bottom of an iPhone. Then tap “AirPlay”.

  5. Within AirPlay, find your laptop and turn on the Mirroring switch.

  6. This will mirror the content from your phone screen to your computer screen without wires.

  7. You can move the mirrored phone display around your computer screen by hovering over it with your mouse and dragging it to another part of your screen.

From here, customers and end users can see the actions on your phone directly on your computer. I’ve given demos showing data being entered on the phone, then immediately refreshed my CRM Online environment to show the data being updated. You can also demonstrate voice activation on the phone and the time savings it can provide when entering notes or activities. Both of these demo tricks of showing the mobile app in action have drawn very positive feedback from the audience.

Android installation instructions can be found here.

Have a question about mobile CRM applications? We're here to help.

Topics: Enterprise Mobility

Microsoft Text Analysis and CRM – Tracking Client Sentiment

Microsoft has released a set of intelligence APIs known as Cognitive Services which cover a wide range of categories such as vision, speech, language, knowledge and search.  The APIs can analyze images, detect emotions, spell check and even translate text, recommend products and more.  In this post I will cover how the Text Analysis API can be used to determine the sentiment of your client based on the emails they send.

The idea is that any time a client (a Contact in this case) sends an email that is tracked in CRM, we will pass it to the Text Analysis API to see if the sentiment is positive or negative.  In order to do this, we will register a plugin on create of email.  We will make the plugin asynchronous since we’re using a third-party API and do not want to slow down email creation if the API is executing slowly.  We will also make the plugin generic and utilize the secure or unsecure configuration when registering the plugin to pass in the API key as well as the schema name of the sentiment field that will be used.

Below is the constructor of the plugin, which takes in either a secure or unsecure configuration and expects the format “<API_KEY>|<SENTIMENT_FIELD>”.
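
A sketch of what that constructor might look like (the class and field names are illustrative):

using System;
using Microsoft.Xrm.Sdk;

public class EmailSentimentPlugin : IPlugin
{
    private readonly string apiKey;
    private readonly string sentimentField;

    public EmailSentimentPlugin(string unsecureConfig, string secureConfig)
    {
        // Prefer the secure configuration and fall back to the unsecure one.
        var config = string.IsNullOrWhiteSpace(secureConfig) ? unsecureConfig : secureConfig;
        var parts = (config ?? string.Empty).Split('|');

        if (parts.Length != 2)
        {
            throw new InvalidPluginExecutionException(
                "Expected a plugin configuration of the form <API_KEY>|<SENTIMENT_FIELD>.");
        }

        apiKey = parts[0];
        sentimentField = parts[1];
    }

    // Execute, AnalyzeText, and the data contracts are shown below.
}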

Next is the Execute method of the plugin, which will retrieve the email description from the email record and pass it to the AnalyzeText method.  The AnalyzeText method will return the sentiment value, which we will then use to populate the sentiment field on the email record.
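
A sketch of the Execute method along those lines:

public void Execute(IServiceProvider serviceProvider)
{
    var context = (IPluginExecutionContext)serviceProvider
        .GetService(typeof(IPluginExecutionContext));
    var factory = (IOrganizationServiceFactory)serviceProvider
        .GetService(typeof(IOrganizationServiceFactory));
    var service = factory.CreateOrganizationService(context.UserId);

    var email = (Entity)context.InputParameters["Target"];
    var description = email.GetAttributeValue<string>("description");
    if (string.IsNullOrWhiteSpace(description))
    {
        return;
    }

    // Score the email body and write the result to the configured field.
    var update = new Entity(email.LogicalName) { Id = email.Id };
    update[sentimentField] = AnalyzeText(description);
    service.Update(update);
}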

Then we have the AnalyzeText method, which passes the email body to the Text Analysis API and returns the sentiment value.
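
A sketch of AnalyzeText against the v2.0 sentiment endpoint (the region in the URL is illustrative, and DataContractJsonSerializer is used to keep the plugin sandbox-friendly; this needs using directives for System.Collections.Generic, System.IO, System.Net, System.Runtime.Serialization.Json, and System.Text):

private double AnalyzeText(string text)
{
    var input = new SentimentInput
    {
        Documents = new List<SentimentDocument>
        {
            new SentimentDocument { Id = "1", Language = "en", Text = text }
        }
    };

    // Serialize the request body to JSON.
    string requestJson;
    using (var stream = new MemoryStream())
    {
        new DataContractJsonSerializer(typeof(SentimentInput)).WriteObject(stream, input);
        requestJson = Encoding.UTF8.GetString(stream.ToArray());
    }

    using (var client = new WebClient())
    {
        client.Headers[HttpRequestHeader.ContentType] = "application/json";
        client.Headers["Ocp-Apim-Subscription-Key"] = apiKey;

        var responseJson = client.UploadString(
            "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment",
            requestJson);

        // Deserialize the response and return the score of our single document.
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(responseJson)))
        {
            var result = (SentimentResult)new DataContractJsonSerializer(
                typeof(SentimentResult)).ReadObject(stream);
            return result.Documents[0].Score;
        }
    }
}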

And finally the classes used as input and output parameters for the Text Analysis API.
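
One possible shape for them, using [DataContract]/[DataMember] attributes (from System.Runtime.Serialization) to map the lowercase JSON property names:

[DataContract]
public class SentimentInput
{
    [DataMember(Name = "documents")]
    public List<SentimentDocument> Documents { get; set; }
}

[DataContract]
public class SentimentDocument
{
    [DataMember(Name = "id")]
    public string Id { get; set; }

    [DataMember(Name = "language")]
    public string Language { get; set; }

    [DataMember(Name = "text")]
    public string Text { get; set; }
}

[DataContract]
public class SentimentResult
{
    [DataMember(Name = "documents")]
    public List<SentimentScore> Documents { get; set; }
}

[DataContract]
public class SentimentScore
{
    [DataMember(Name = "id")]
    public string Id { get; set; }

    [DataMember(Name = "score")]
    public double Score { get; set; }
}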

Now register the plugin on post-Create of Email, with the Text Analysis API key and the schema name of the sentiment field in either the secure or unsecure configuration.


Now when an email is created in CRM, once the asynchronous job is complete, the email record will have a Sentiment value set from a range of 0 (negative) to 1 (positive).


The sentiment field on the email record can then be used in a rollup field on the contact that averages the sentiment values of all the email records where the contact is the sender, letting you track the sentiment of your clients based on the emails they send you.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Integrating QuickBooks and Dynamics CRM

Today's post is written by Rob Jasinski, Development Principal at Sonoma Partners.

We recently needed to integrate QuickBooks Desktop with our Microsoft Dynamics CRM solution, specifically for invoices.

Since we have invoices that are generated from data that originates in CRM, our current process had us generate a report in CRM, then manually create the invoice in QuickBooks and CRM. We wanted to automate this process.

For the integration, we wanted to use SQL Server Integration Services and create an SSIS package that we could schedule to run on a nightly basis to create invoices in QuickBooks from data generated in CRM.

The first thing we needed to do was to choose a tool that would allow us to connect to, and access the data stored in, QuickBooks. We looked at many tools, but the one thing we found in common was that every tool required a proxy to be running on the QuickBooks server (if someone is aware of a way to interface with QuickBooks directly, without the use of a proxy, please feel free to leave a comment in the comments section below).

The SSIS package then communicates with QuickBooks via this proxy, so it isn’t a direct connection from SSIS to QuickBooks; if the proxy isn’t running, a connection to QuickBooks can’t be established. We ultimately chose the QuickBooks Desktop connector from CData, as it seemed to meet all of our needs.

In the following example, we’ll give a brief demo of setting up an SSIS package to create an invoice in QuickBooks.

The first step was to create a connection to the QuickBooks server (don’t forget that the proxy application must be running on the QuickBooks server). The only required fields are the URL (of the QuickBooks server), user name, and password.


Then I set up a simple data flow task that queries invoice data from our CRM system and passes it to the CData QuickBooks destination component, which then creates the invoice and invoice detail records in QuickBooks.


When creating an invoice in QuickBooks, there are a couple of things to note. First, there are some required fields that need to be passed in, and the invoice must have at least one detail record. At first this posed a problem for me, in that I was hoping to first create the invoice and then add detail lines later. Then I discovered there is a field on the invoice called ItemAggregate, which allows you to pass in one or more invoice detail records in an XML format, essentially creating the invoice and all detail records in one call. Below is an example of ItemAggregate data:

<InvoiceLineItems>
  <Row>
    <ItemName>Professional Fees - Consultant</ItemName>
    <ItemDescription>Consultant</ItemDescription>
    <ItemQuantity>210.7500</ItemQuantity>
    <ItemRate>10.00</ItemRate>
    <ItemAmount>2107.50</ItemAmount>
  </Row>
  <Row>
    <ItemName>Professional Fees - Sr. Consultant</ItemName>
    <ItemDescription>Sr. Consultant</ItemDescription>
    <ItemQuantity>84.0000</ItemQuantity>
    <ItemRate>15.00</ItemRate>
    <ItemAmount>1260.00</ItemAmount>
  </Row>
</InvoiceLineItems>

Once all the detail records and required fields were passed in, the invoice was successfully created in QuickBooks. Note that during this last step I logged all errors returned by QuickBooks into an error log table. This allowed me to do some trial-and-error runs of creating invoices, which helped me determine which fields were required, since missing fields were returned as errors.

I hope this small introduction to integrating Dynamics CRM with QuickBooks can help kick start any similar projects you’ve been thinking about. Have a question about integrating QuickBooks with Microsoft Dynamics CRM? We're here to help.

Topics: Microsoft Dynamics CRM

Support Work is Never Done

Today's post is written by Kristie Reid, VP of Consulting at Sonoma Partners.

In order for your CRM deployment to be successful, it is often what you do after go-live that is the determining factor.

CRM systems are different from other application deployments in that they must continue to evolve after launch. If not, the system quickly becomes stale and user adoption will steadily decline.

We often get asked to provide guidance on what types of resources will be needed once a CRM application is up and running. Our answer: it depends! Yes, we’ve done this hundreds of times and yes, we have some best-practice guidelines. But we prefer to work with your unique organization to determine exactly what it will take to keep your CRM application widely used, and successful, after you go live.

You aren’t reading this blog to hear that you need to hire a consulting firm to identify your post-deployment resources. So here are some general guidelines we use (please note that many of these roles are not full-time positions but rather shared across multiple applications):

[Table: typical post-go-live support roles]

Most of the roles are self-explanatory for IT organizations, but one resource that often gets overlooked is the Product Manager.

Similar to Product Managers for any off-the-shelf application, your CRM Product Manager has the final say over what features get released when, based on business criticality. However, the hidden talent that this individual must also possess is the ability to continually sell your CRM system to the organization.

That selling occurs from the user level, where new people are being introduced to and trained on the system, all the way up to the executive level each time a new VP of Sales comes onboard and wants to understand where their salespeople are spending their time. Don’t overlook this person – they will be the champion of your CRM application and will make sure its usefulness is embraced for years to come!

Do you need help shaping your support strategy? Contact us to learn more.

Topics: CRM Best Practices

Inside Edition: How Sonoma Partners Uses CRM to Track Time

Today's post is written by Matt Weiler, a Principal Architect at Sonoma Partners.

As a consulting company, tracking time spent on projects is critical to billing our customers correctly, making sure people are busy, and making sure we're estimating accurately. In Grapevine (our internal name for CRM), Time has relationships to:

  • Projects (which is the level the time is billed at)
  • Items (which is the unit of development or customization work that has to be done)
  • Project Task (which bundles related time together so we can see how much time was spent developing vs. testing vs. designing, etc.)
  • Cases

Each Time record also captures the actual amount of time and assorted other fields.

We've covered Time entry on our blog in the past where you can see some screenshots of how this process worked in our CRM 4.0 environment.

As you can see, something that is a relatively simple idea (what did I do during these 15 minutes?) becomes a much more complicated process because of the way we want to use the data.

One of our first cracks at making this process easier was adding time-related fields to the Item entity. Most Time records are related to an Item in some way (development, testing, defects, design, etc.), so this was a way to easily enter the amount of time and the Task being performed, and a plugin behind the scenes would fill in fields like the Item and Project. While this worked well for some scenarios, others still required entry through the full Time form. This includes any time not related to an Item (internal meetings, time spent writing specifications, etc.). In addition, at the time, most of us kept track of our time either on Excel spreadsheets or (GASP!) on pen and paper. It was kind of a laborious task to enter time every week, especially if you waited until the beginning of the next week, when time had to be entered and finalized.

So, as an intern project, our developer Mike Dearing created what is now known as Time Buddy. Time Buddy was designed to make the process of tracking, entering, and reviewing time much easier and faster. Here's a quick look:

[Screenshot: Time Buddy]

It's a Windows desktop app, so it only works on Windows PCs, and it has to be installed everywhere, so there's a bit of a maintenance downside compared to a centralized web site. However, it has built-in timers, connects directly to Grapevine to pull back active Projects, Items, and Project Tasks, and has a bunch of great time savers like the ability to split or join multiple records and the ability to import your weekly calendar from Outlook, thus saving the entry time for non-Item-based Time entries. And it caches data offline, so as long as you connect occasionally, you can track time while not connected to the Internet.

As Sonoma Partners continued to grow, we had more and more non-.NET developers using Macs instead of Windows PCs, so our next step was to add an editable grid inside CRM. This not only gave our non-Windows users a quick-entry option, but also let us incorporate the grid into a larger CRM dashboard that broke down time entries by day and by project, making it easier to review the entered time and validate that simple mistakes hadn't been made before submitting the time for final approval.


Our latest updates have been in response to a more diverse set of users utilizing Time Buddy. As we’ve added an iOS practice and graphic and UX designers, those people are utilizing Macs day to day and have had to log time the old-fashioned way. When we thought about how to give them an easier way to enter time, we took a step back and thought it also might be cool if we had an iOS app to allow time entry as well. So we created a set of web services to abstract the time entry process from CRM and utilized those services to build our new clients. We’ll also be looking to update the Windows version of Time Buddy to utilize the same services. Thus we're shielded a little bit from CRM upgrades, we can more aggressively use new features or APIs in CRM without having to update a bunch of apps, and all external time entry is routed through the same place.

The history of Time entry at Sonoma is, I think, a classic example of the crawl, walk, run CRM strategy that makes sense:

  1. Identify the data you'd like to start tracking
  2. Build out a basic implementation of a way to track and report on that data
  3. Identify inefficiencies through talking to employees or looking at the data you've already collected
  4. Develop targeted apps and websites to make the process easier, more efficient, and increase data reliability
  5. Repeat steps 3 & 4 as your business, process, and/or people change

Topics: Microsoft Dynamics CRM

Building CRM Web Resources with React

Web Resource Development

Microsoft Dynamics CRM has allowed us to develop and host custom user interfaces as Web Resources since CRM 2011.  Since then, the web has exploded with JavaScript frameworks.  In addition, browsers have started to converge on standards both in JavaScript object support and CSS.  In short, it’s a great time to be building custom user interfaces on top of Microsoft Dynamics CRM.

Today we’ll be working with React, an emerging favorite in the JavaScript world.  React’s key benefits are its fast rendering time and its support of JSX.  React is able to render changes to the HTML DOM quickly, because all rendering is first done to JavaScript objects, which are then compared to the previously generated HTML DOM for changes.  Then, only those changes are applied to the DOM.  While this may sound like a lot of extra work, typically changes to the DOM are the most costly when it comes to performance.  JSX is a syntax that combines JavaScript and an XML-like language and allows you to develop complex user interfaces succinctly.  JSX is not required to use React, but most people typically use it when building React applications.

The Sample Application

To demonstrate these benefits, we’ll build a simple dashboard component that displays a list of the top 10 most recently created cases.  We’ll have the web resource querying for new cases every 10 seconds and immediately updating the UI when one is found.

[Screenshot: the CaseSummary dashboard component]

The files that I will be creating will have the following structure locally:

CaseSummary/ 
├── index.html 
├── styles.css 
├── app.jsx 
└── components/ 
    ├── CaseSummary.jsx     
    ├── CaseList.jsx 
    └── Case.jsx

However, when we publish them as web resources in CRM, they will be simplified to the following:

demo_/
└── CaseSummary/ 
    ├── index.html 
    ├── styles.css 
    └── app.js

Other than including the publisher prefix folder, the main change is that all of the JSX files have been combined into a single JavaScript file.  We’ll step through how to do this using some command line tools.  There are a few good reasons to “compile” our JSX prior to pushing to CRM:

  1. Performance – We can minify the JavaScript and bundle several frameworks together, making it more efficient for the browser to load the page.
  2. More Performance – JSX is not a concept that browsers understand by default.  By converting it to plain JavaScript at compile time, we can avoid paying the cost of conversion every time the page is loaded.
  3. Browser Compatibility – We can write our code using all of the features available in the latest version of JavaScript and use the compiler to fill in the gaps for any older browsers that might not support these language features yet.
  4. Maintainability – Separating our app into smaller components makes the code easier to manage.  As you build more advanced custom UI, the component list will grow past what I am showing here.  Because multiple files are merged together, no matter how many JSX files we add to the project, we just need to push the single app.js file to the CRM server when we are ready.
  5. Module Support – Many JavaScript components and libraries are distributed today as modules.  By compiling ahead of time we can reference modules by name and still just deploy them via our single app.js file.

Exploring the Source Code

The full source code for the example can be found at https://github.com/sonomapartners/web-resources-with-react, but we will explore the key files here to add some context.

index.html

This file is fairly simple.  It includes a reference to CRM’s ClientGlobalContext, the compiled app.js and our style sheet.  The body consists solely of a div to contain the generated UI.
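
A minimal sketch of the markup (the container id is illustrative, and the relative path to ClientGlobalContext.js.aspx depends on your web resource folder depth):

<!DOCTYPE html>
<html>
<head>
    <title>Case Summary</title>
    <link rel="stylesheet" href="styles.css" />
    <!-- Makes GetGlobalContext() available to app.js. -->
    <script src="../ClientGlobalContext.js.aspx"></script>
</head>
<body>
    <div id="case-summary"></div>
    <script src="app.js"></script>
</body>
</html>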

app.jsx

Now things start to get more interesting.  We start by importing a few modules.  babel-polyfill will fill in some browser gaps; in our case it defines the Promise object for browsers that don’t have a native version (Internet Explorer).  The last three imports add React and our top-level CaseSummary component.  Finally, we register an onload event handler to render our UI into the container div.
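
A sketch of app.jsx matching that description (the container id is illustrative):

import "babel-polyfill";
import React from "react";
import ReactDOM from "react-dom";
import CaseSummary from "./components/CaseSummary.jsx";

// Render once the page (and ClientGlobalContext) has finished loading.
window.addEventListener("load", function () {
    ReactDOM.render(<CaseSummary />, document.getElementById("case-summary"));
});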

components/CaseSummary.jsx

CaseSummary is our top-level component and also takes care of our call to the CRM Web API.  This is our first look at creating a component in React, so let’s take a look at each function.  React.createClass will take the object passed in and wrap it in a class definition.  Of the five functions shown here, four are predefined by React as component lifecycle methods: getInitialState, componentDidMount, componentWillUnmount, and render.  getInitialState is called when an instance of the component is created and should return an object representing the starting point of this.state for the component.  componentDidMount and componentWillUnmount are called when the instance is bound to and unbound from the DOM elements, respectively.  We use the mounting methods to set and clear a timer, which calls the loadCases helper method.  Finally, render is called each time the state changes and a potential DOM change is needed.  We also have an additional method, loadCases, where we use the fetch API to make a REST call.  The call to this.setState will trigger a render whenever cases are loaded.  We definitely could have made this component smarter by only pulling case changes, but this version demonstrates the power of React by having almost no impact on performance even though it loads the 10 most recent cases every 10 seconds.
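
Here is a sketch of the component (the queried fields are illustrative; the full version is in the repository):

import React from "react";
import CaseList from "./CaseList.jsx";

var CaseSummary = React.createClass({
    getInitialState: function () {
        // The starting point of this.state before any cases have loaded.
        return { cases: [] };
    },

    componentDidMount: function () {
        this.loadCases();
        this.timerId = setInterval(this.loadCases, 10000);
    },

    componentWillUnmount: function () {
        clearInterval(this.timerId);
    },

    loadCases: function () {
        var url = GetGlobalContext().getClientUrl() +
            "/api/data/v8.0/incidents" +
            "?$select=title,ticketnumber,createdon&$orderby=createdon desc&$top=10";

        fetch(url, {
            headers: { "Accept": "application/json" },
            credentials: "same-origin"
        }).then(function (response) {
            return response.json();
        }).then(function (json) {
            // setState triggers render, and React diffs the virtual DOM.
            this.setState({ cases: json.value });
        }.bind(this));
    },

    render: function () {
        return (
            <div className="case-summary">
                <h2>Recently Created Cases</h2>
                <CaseList cases={this.state.cases} />
            </div>
        );
    }
});

export default CaseSummary;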

components/CaseList.jsx

By comparison, CaseList.jsx is pretty straightforward.  There are two interesting parts worth pointing out.  The use of this.props.cases is possible because CaseSummary.jsx set a property on the CaseList like this: <CaseList cases={this.state.cases} />.  Also, it is important to notice the use of the key attribute on each Case.  Whenever you generate a collection of child elements, each one should get a value for the key attribute that can be used when React is comparing the virtual DOM to the actual DOM.
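
A sketch of the list component:

import React from "react";
import Case from "./Case.jsx";

var CaseList = React.createClass({
    render: function () {
        var cases = this.props.cases.map(function (c) {
            // The key attribute lets React match items between renders.
            return <Case key={c.incidentid} case={c} />;
        });

        return <ul className="case-list">{cases}</ul>;
    }
});

export default CaseList;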

components/Case.jsx

The simplest of the components, Case.jsx outputs some properties of the case with some simple HTML structure.
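
For completeness, a sketch of it as well:

import React from "react";

var Case = React.createClass({
    render: function () {
        return (
            <li className="case">
                <span className="ticket-number">{this.props.case.ticketnumber}</span>
                <span className="title">{this.props.case.title}</span>
            </li>
        );
    }
});

export default Case;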

Compiling the Code

We’re going to start with using NodeJS to install both development tools and runtime components that we need.  It is important to note that we’re using NodeJS as a development tool, but it isn’t being used after the code is deployed to CRM.  We’ll start by creating a package.json file in the same folder that holds our index.html file.

package.json

After installing NodeJS, you can open a command prompt and run “npm install” from the folder containing package.json.  This will download the packages specified in package.json to a local node_modules folder.  At a high level, here is what the various packages do (a sample package.json follows the list):

  • webpack, babel-*, imports-loader, and exports-loader: our “compiler” that will process the various project files and produce the app.js file.
  • webpack-merge and webpack-validator: used to help manipulate and validate the webpack.config.js (we will discuss this file next).
  • webpack-dev-server: a lightweight HTTP server that can detect changes to the source files and compile on the fly.  Very useful during development.
  • react and react-dom: The packages for React.
  • babel-polyfill and whatwg-fetch: They are bringing older browsers up to speed.  In our case we are using them for the Fetch API (no relation to Fetch XML) and the Promise object.
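
Putting that together, a minimal package.json might look like the following sketch (the version numbers are illustrative of the 2016-era toolchain; the actual file is in the linked repository):

{
  "name": "case-summary",
  "version": "1.0.0",
  "scripts": {
    "build": "webpack",
    "start": "webpack-dev-server"
  },
  "dependencies": {
    "babel-polyfill": "^6.9.0",
    "react": "^15.1.0",
    "react-dom": "^15.1.0",
    "whatwg-fetch": "^1.0.0"
  },
  "devDependencies": {
    "babel-core": "^6.9.0",
    "babel-loader": "^6.2.0",
    "babel-preset-es2015": "^6.9.0",
    "babel-preset-react": "^6.5.0",
    "exports-loader": "^0.6.3",
    "imports-loader": "^0.6.5",
    "webpack": "^1.13.0",
    "webpack-dev-server": "^1.14.0",
    "webpack-merge": "^0.14.0",
    "webpack-validator": "^2.2.0"
  }
}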

The scripts defined in the package.json are runnable by typing npm run build or npm run start from the command prompt.  The former will produce our app.js file, and the latter will start up the previously mentioned webpack-dev-server.  Prior to running either of them, though, we need to finish configuring webpack. This requires one last config file placed in the same folder as package.json, named webpack.config.js.

webpack.config.js

As the file name implies, webpack.config.js is the configuration file for webpack.  Ultimately it should export a configuration object, which can define multiple entries.  In our case we have a single entry that monitors app.jsx (and its dependent files) and outputs app.js.  We use the webpack.ProvidePlugin plugin to inject whatwg-fetch for browsers that lack their own fetch implementation.  We also specify that webpack should use babel-loader for any .jsx or .js files it encounters and needs to load.  The webpack-merge module allows us to conditionally modify the configuration; in our case we set the NODE_ENV environment variable to “production” for a full build and turn on JavaScript minification.  Finally, we use webpack-validator to make sure that the resulting configuration is valid.
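
Here is a sketch of such a configuration for the webpack 1.x toolchain listed above (the entry point and the npm_lifecycle_event check are illustrative; the actual file is in the repository):

var webpack = require("webpack");
var merge = require("webpack-merge");
var validate = require("webpack-validator");

var common = {
    entry: "./app.jsx",
    output: { path: __dirname, filename: "app.js" },
    module: {
        loaders: [
            // Run .jsx and .js files through Babel with the ES2015/React presets.
            {
                test: /\.jsx?$/,
                exclude: /node_modules/,
                loader: "babel-loader",
                query: { presets: ["es2015", "react"] }
            }
        ]
    },
    plugins: [
        // Inject whatwg-fetch wherever a bare fetch() call is found.
        new webpack.ProvidePlugin({
            fetch: "imports-loader?this=>global!exports-loader?global.fetch!whatwg-fetch"
        })
    ]
};

var config;
if (process.env.npm_lifecycle_event === "build") {
    // Full build: mark as production and minify.
    config = merge(common, {
        plugins: [
            new webpack.DefinePlugin({
                "process.env.NODE_ENV": JSON.stringify("production")
            }),
            new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } })
        ]
    });
} else {
    // Development: readable output for webpack-dev-server.
    config = merge(common, { devtool: "eval-source-map" });
}

module.exports = validate(config);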

Deploying and Continuing Development

At this point all of the files should be set up.  To deploy the code, you would run npm run build and then deploy index.html, app.js, and styles.css as web resources to CRM. 

If it becomes tedious to keep deploying app.js to CRM as you make small changes, you can set up an AutoResponder rule in Fiddler to point at the webpack-dev-server.  Once this rule is in place, when the browser requests files like index.html and app.js from the right subfolder of the CRM server, Fiddler will intercept the request and provide the response from webpack-dev-server instead.  This way you can just save your local JSX files and hit refresh in the browser as you are developing.  Of course, you need to be sure that you have started webpack-dev-server by running npm run start from the command line.  I have included an example of the rule I set up for this demo below:

[Screenshot: Fiddler AutoResponder rule mapping the CRM web resource URLs to webpack-dev-server]

With that you should be set to start building your own CRM Web Resources using React!

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online