Today's post is written by Richard Failla, a Salesforce Business Analyst at Sonoma Partners.
One of our clients recently asked us to design a survey solution for them. At that time, they had a separate system they used to manage, administer, and report on customer satisfaction surveys. Much of this work was done manually, especially the reporting piece, which involved some heavy Excel crunching. Our project goal? Use Clicktools to create a more automated solution that brought survey data into Salesforce, thus creating a more holistic customer picture.
After reviewing the requirements, I noticed some issues common to survey implementations that would call for creative approaches.
The Problem: “Surveys need to be personalized for each of our clients.”
Since you need a custom field for any survey question you want to pass back into Salesforce, there’s a problem of rigidity: how do we build a solution that allows survey questions to be customized without needing to constantly add new custom fields? The trick was to take each survey question and ask, “What metric does this question probe?”
The Solution: “Don’t fixate on specific survey questions, group them into question themes.”
Let’s look at an example question from a survey:
“Our team is responsive and acts with a sense of urgency.”
Mapping that question verbatim to a custom picklist field would make it inflexible and cumbersome for an admin to manage should the question change in subsequent surveys. So instead, we created a custom picklist field and called it “Responsiveness.” Mapping the question to a more general theme gives us the flexibility to change the wording of the survey question without having to touch the metadata it maps to. We then applied this logic to all relevant survey questions. Here’s an example:
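As an illustration, the mapping might look something like this (the themes beyond “Responsiveness” are hypothetical examples, not the client’s actual fields):

| Survey question | Question theme (custom field) |
| --- | --- |
| “Our team is responsive and acts with a sense of urgency.” | Responsiveness |
| “Our team communicates clearly and proactively.” | Communication |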
Now, in a future survey, we can change the wording of the question in Clicktools without having to change anything in Salesforce, so long as the question still gets to the heart of the metric we’re after.
The Problem: “Each new survey must be re-mapped by an admin.”
The value of automating survey data into Salesforce comes at a cost: each new question or survey you want to ask needs to be re-mapped to Salesforce. This takes unnecessary time to plan and coordinate with admins and can easily delay pushing out surveys. While not ideal, it’s doable if you have an admin, but without one the solution looks less sustainable over time.
The Solution: “Create a master survey template.”
Using Clicktools, we created a master survey template that included every question theme across all surveys. To clarify, no survey questions were added to this template, only the themes we created back in Salesforce.
One of the great things about Clicktools is that it allows you to hide questions on surveys. By hiding all the questions, users can create new surveys from this template simply by replacing a theme with the specific question they want to ask and un-hiding the field. Since the master template is already mapped to all the themes in Salesforce, admins never have to re-map. We also took some time to think through question themes that could come up in the future but aren’t necessarily on any current surveys, and included them in this master template.
The Problem: “Feedback should be collected for any response that scores poorly.”
Client feedback can help you identify areas in need of improvement, but to eliminate the guesswork in rectifying a problem, you need to know why you’re not meeting expectations. For our solution, we could have dropped optional open-ended feedback boxes into the survey after each question, but that would only muddy an otherwise simple form.
The Solution: “Create dynamic question conditions in Clicktools.”
A neat feature in Clicktools is the ability to dynamically pop questions onto the form based on a certain condition. In our case, for any response that scored “Neutral,” “Disagree,” or “Strongly Disagree,” we popped an open-ended feedback box.
So we created an open-ended feedback box for each question on our master template (and, likewise, created these fields in Salesforce). Since these questions appear dynamically, there’s no need to “hide” them on the master form, sparing admins yet more manual manipulation.
The Problem: “How do we use calculations to assess our performance if our response options aren’t numbers?”
For the majority of questions in our survey, respondents had the following options to choose from when answering:
- Strongly Agree
- Agree
- Neutral
- Disagree
- Strongly Disagree
- Don’t Know
These options are easy to map to picklist values, but we lose native reporting functionality if we leave them as text. What happens when you want to see the average score for a particular question over time?
The Solution: “Use formula fields to convert picklist values to numbered scores.”
Lucky for us, we can create a simple formula field that assigns a number value for each response:
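The exact formula wasn’t shown here, but a Salesforce formula field along these lines would do the conversion. We’re assuming a picklist with the API name Responsiveness__c and the 5-point scale above; “Don’t Know” and blanks fall through to 0 and get excluded by the counting formulas later:

```
CASE( TEXT( Responsiveness__c ),
  "Strongly Agree", 5,
  "Agree", 4,
  "Neutral", 3,
  "Disagree", 2,
  "Strongly Disagree", 1,
  /* "Don't Know" and blank both fall through to 0 */
  0
)
```

TEXT() is needed because Salesforce formulas can’t compare a picklist field directly inside CASE().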
Without this formula, we’d have to either change the survey response options to numbers (which may not be an option) or forgo basic reporting functionality for trend analysis. With it, we can both preserve the original survey and still leverage native report summarizations in Salesforce. So we applied this logic to all survey theme questions in Salesforce.
There’s nothing crazy here, but this little formula allows you to leverage native reporting functionality in Salesforce without compromising the original survey.
The Problem: “What’s the top-box and top-2-box score for a particular group of questions?”
If you don’t know, the top-box is the most positive response possible for a given survey question; similarly, the top-2-box is any response that is at least the second most positive. To determine these scores, we used the following formula:
Total Number of Top-Box Responses / Total Number of Responses
We knew some combination of field and report formulas would get us there, but we scratched our heads a bit figuring out how to tie it all together.
The Solution: “Create parent groupings for your question themes and reference them in formulas.”
To begin designing a solution to this problem, we bundled our themes into parent groupings called “dimensions.” The dimension level is where we apply our top-box scores. For example, we grouped the following question themes into a dimension called “Relationship”:
We created three other dimensions and assigned every theme to one of them. We then used five formula fields per dimension to calculate the scores.
Not every question theme was included in a survey so the first formula needed to count the number of total questions in a dimension with a valid response (excluding “Don’t Know”):
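A sketch of that counting formula, assuming a “Relationship” dimension containing hypothetical theme fields Responsiveness__c and Communication__c; each IF contributes 1 only when its theme has a response other than blank or “Don’t Know”:

```
IF( AND( NOT( ISBLANK( TEXT( Responsiveness__c ) ) ),
         NOT( ISPICKVAL( Responsiveness__c, "Don't Know" ) ) ), 1, 0 )
+
IF( AND( NOT( ISBLANK( TEXT( Communication__c ) ) ),
         NOT( ISPICKVAL( Communication__c, "Don't Know" ) ) ), 1, 0 )
/* ...repeat one IF per theme in the dimension */
```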
Now we needed to find how many of those questions contained either a top-box or a top-2-box response.
Here’s the formula for the number of top-box questions:
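The original formula wasn’t preserved here, but using the numeric score fields described earlier (field names are our assumption), counting top-box responses reduces to checking for a score of 5 on each theme in the dimension:

```
IF( Responsiveness_Score__c = 5, 1, 0 )
+ IF( Communication_Score__c = 5, 1, 0 )
/* ...one IF per theme in the dimension */
```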
And here’s the formula for the number of top-2-box questions:
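For top-2-box, the same pattern widens the test to the two most positive scores (again, field names are hypothetical):

```
IF( Responsiveness_Score__c >= 4, 1, 0 )
+ IF( Communication_Score__c >= 4, 1, 0 )
/* ...one IF per theme in the dimension */
```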
At this point, we had everything we needed to calculate our top-box scores, so we created two more formula fields.
Here’s the formula for the Top-Box Score:
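A sketch, assuming the earlier counting formulas are saved as fields named Relationship_Response_Count__c and Relationship_Top_Box_Count__c (hypothetical names); the IF guards against dividing by zero when no theme in the dimension was answered:

```
IF( Relationship_Response_Count__c = 0, 0,
    Relationship_Top_Box_Count__c / Relationship_Response_Count__c )
```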
And here’s the formula for the Top-2-Box Score:
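And the top-2-box version follows the same shape, swapping in a hypothetical Relationship_Top_2_Box_Count__c field:

```
IF( Relationship_Response_Count__c = 0, 0,
    Relationship_Top_2_Box_Count__c / Relationship_Response_Count__c )
```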
We applied this same logic to each dimension and now we can see how each dimension performed on a given survey:
Furthermore, we can see how these scores change over time by rolling them into a dashboard with a simple average summary:
Collecting customer feedback in Salesforce requires some consideration before implementation, but with only native functionality we were able to build a flexible, dynamic solution that requires minimal admin intervention, automates reporting, and can easily be adapted to account for new surveys in the future.