TURNING YEARS OF SURVEYS INTO INSIGHT
Updating the Orientation Feedback Process with Qualtrics and Tableau

My favorite thing about the design process is that, at its heart, it is an approach to problem solving. That means it can be applied to a variety of mediums, from tangible products to intangible processes. While working as a Data and Analytics intern at Mastercard, my first project, done with the help of my manager, was redesigning the process for gathering feedback about the New Hire Orientation within the Global Business Shared Center (GBSC).

Heads up: I can't show pictures of the dashboard I made because it's sensitive material to Mastercard, and there aren't many easy ways to take a picture of a process, so I'm going to do my best to illustrate my design process with relevant pictures from Unsplash.com and PowerPoint replications of my visualizations.

BACKGROUND

The GBSC is Mastercard’s support wing. Most non-revenue portions of the company, such as Law, Finance, and half of HR, fall under this part of the business. Part of the onboarding process for new hires inside Mastercard’s GBSC is an orientation. During the orientation, a business assistant runs through the overall layout of the campus, the layout of the GBSC, how to use some of Mastercard’s virtual infrastructure, and other basic things you need to know before you can get any work done. After the orientation, new hires are asked to fill out a quick survey about the quality of the content, the quality of the instruction, and what they’d like to see changed.

THE PROBLEM

Since the inception of the GBSC 4 years ago, HR had been collecting feedback about the orientation via pen and paper surveys. This was an issue because there is no easy way to pool answers from paper surveys. The feedback data could never be summarized into information, which prevented Mastercard from taking any steps forward based on the feedback it was receiving. Long story short, Mastercard had 4 years of feedback about its orientation and no idea what that feedback said.

My job was to update the process of collecting and analyzing the orientation feedback so that it provided actionable insights for the business assistant responsible for the orientation.

RESEARCH

The first step in determining how to update the feedback process was to familiarize myself with the onboarding of new hires and with what the business assistant wanted to get out of her feedback. I talked with both my manager and the business assistant to build a better understanding of what new hires should take away from the orientation, and what she wanted to learn from the feedback survey.

I learned that since the inception of the GBSC, Mastercard had hired approximately 200 new employees to work inside of it. Hiring occurred year-round, but peaked during the summer months as interns and college hires joined the company. Additionally, I learned that the business assistant responsible for the orientation had written the feedback survey herself. This was useful to know, as it meant the data points we already had from the survey largely related to what she wanted to know.

In regards to the orientation feedback, the business assistant’s main goal was to figure out what people didn’t like or understand about her orientation, so that she could work to improve those areas. She was also interested in how different demographics, such as interns versus full-time hires, rated the orientation, and how satisfaction changed over time. This was important because no question asked whether a new hire was an intern or a full-time employee, meaning we would have to devise a way to pull that information out of the existing data points.

IDEATION

Knowing more about the process and what our audience wanted, the next step was to come up with potential solutions. The overall solution of digitizing the survey and visualizing the information via a Tableau dashboard seemed clear from the beginning (I was mainly hired to design and build Tableau dashboards), but the exact method of executing it was not so clear.

There were challenges and decisions to be made. There were over 200 individual survey responses, and each of these responses somehow needed to be digitized. Each survey contained both Likert-scale multiple choice questions (“to what extent do you agree with the following statement”-type questions) and free response questions. With the help of my manager, I was tasked with figuring out which data was worth using, based on the benefits that data provided, how long it would take to extract actionable information from it, and how much that extraction would cost.

The potential choices were: use both the free response and Likert-scale questions, use just the free response questions, or use just the Likert-scale questions. Based on our criteria, I decided to focus solely on the responses to the Likert-scale questions.

In order to digitize the responses, I had to manually enter each of the survey responses from the prior four years as a response to the digital survey that I created on Qualtrics. One major concern we had with the free response data was whether it provided enough additional information, compared to the multiple choice answers, to be worth the extra time spent manually entering it and figuring out how to process it. We decided that while an individual free response answer contains more information than a multiple choice one, as a collective, the multiple choice answers provide more information. There was a lot of variance between the free responses, making them difficult to summarize into a few actionable takeaways. The responses to the Likert-scale questions, on the other hand, could be averaged to create a metric measuring satisfaction, which was useful when gauging and comparing satisfaction levels across categories.

IMPLEMENTING THE PROCESS

Once we figured out what we wanted the process to be, the next step was to put that process in place. I recreated the pen and paper version of the survey on Qualtrics, and input the 4 years’ worth of responses (cry). Using a REST API, I created an Alteryx flow that fetched new responses every week and exported them as a Tableau data file on Mastercard’s Tableau Server. Then, I created an interactive Tableau dashboard that displays historical overall satisfaction, when the surveys are filled out, and categorical satisfaction.
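For a sense of what that weekly fetch looks like, here is a minimal Python sketch of the same idea. To be clear, the actual pipeline was an Alteryx flow, and the endpoint shape, survey ID, and token below are illustrative placeholders rather than the exact Qualtrics API contract:

```python
# Minimal sketch of a weekly "fetch responses and export for Tableau" step.
# The real pipeline used Alteryx; everything named here (endpoint shape,
# survey ID, token) is a placeholder, not the exact Qualtrics API contract.
import requests
import pandas as pd

BASE_URL = "https://yourdatacenter.qualtrics.com/API/v3"  # placeholder data center
SURVEY_ID = "SV_xxxxxxxx"                                 # placeholder survey ID
API_TOKEN = "YOUR_QUALTRICS_API_TOKEN"                    # placeholder credential

def fetch_responses() -> pd.DataFrame:
    """Pull all responses for the survey and return them as a DataFrame."""
    resp = requests.get(
        f"{BASE_URL}/surveys/{SURVEY_ID}/responses",  # illustrative endpoint
        headers={"X-API-TOKEN": API_TOKEN},
    )
    resp.raise_for_status()
    return pd.DataFrame(resp.json()["responses"])  # assumed response shape

if __name__ == "__main__":
    df = fetch_responses()
    # Write a flat extract for Tableau to pick up. (The Alteryx flow
    # published a Tableau data file to Tableau Server instead.)
    df.to_csv("orientation_responses.csv", index=False)
```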

Now, after a new hire goes through the orientation, they are no longer handed a piece of paper with a survey on it. Instead, they are emailed a link to the Qualtrics survey. At the end of the week, my Alteryx flow downloads their response and incorporates it into the data source behind the Tableau dashboard, and the dashboard automatically updates its visualizations to match the new data. This provides a constantly updated stream of actionable feedback for the business assistant who runs the orientation.

DESIGNING THE DASHBOARD

Just as important as redesigning the process was designing, and then creating, the visualization of the orientation feedback. Describing exactly what the dashboard looks like is a little tricky because it’s sensitive information to Mastercard, so I can’t show the exact dashboard. But I’ll do my best to describe the specific visualizations that I used and their role within the dashboard, and approximate the visualizations using PowerPoint slides. Additionally, if you’d like to know a bit more about my takeaways from a summer of designing Tableau dashboards, here is a link to a Medium article I wrote detailing my 5 keys to a successful dashboard design: https://blog.prototypr.io/5-tips-for-designing-a-better-dashboard-69a82526184b

There were three pieces of information to visualize: how satisfaction has changed over time, when people fill out the survey, and how satisfaction varies across the subcategories of the orientation.

Before any visualization, the first step was to prep the data so that it could be properly displayed. The survey asked whether respondents thought portions of the orientation were “extremely helpful”, “somewhat helpful”, “neutral”, “not very helpful”, or “useless”. I converted these categorical answers into percent levels of satisfaction: “extremely helpful” responses were assigned a satisfaction level of 100%, “somewhat helpful” a level of 75%, “neutral” a level of 50%, “not very helpful” a level of 25%, and “useless” responses were said to be 0% satisfied. While translating Likert-style responses into numeric levels of satisfaction is admittedly inexact, the benefit it provides in quantifying satisfaction outweighs the potential inaccuracies that stem from assigning arbitrary numbers to categorical data.
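As a minimal sketch of that conversion (the column handling and exact label spellings here are my assumptions):

```python
import pandas as pd

# Percent satisfaction assigned to each Likert label, as described above.
SATISFACTION = {
    "extremely helpful": 1.00,
    "somewhat helpful":  0.75,
    "neutral":           0.50,
    "not very helpful":  0.25,
    "useless":           0.00,
}

def to_satisfaction(answers: pd.Series) -> pd.Series:
    """Convert a column of Likert labels into numeric satisfaction levels."""
    return answers.str.strip().str.lower().map(SATISFACTION)

# Hypothetical usage: convert every question column in the response table.
# question_cols = ["content_sat", "instruction_sat"]
# df[question_cols] = df[question_cols].apply(to_satisfaction)
```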

To visualize overall satisfaction over time, I chose a line graph. The line graph tracked the overall satisfaction level, calculated by averaging the satisfaction level from each question, over calendar year intervals. I chose an annual interval instead of a monthly one because many months contained only one or two survey responses, while others contained upwards of 10. Choosing a monthly interval would unevenly weight the feedback of new hires hired in months without many other hires.
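In pandas terms, the series behind that line graph might be computed like this (assuming the `df` from the earlier sketch, a datetime "submitted" column, and hypothetical question columns already converted to numeric satisfaction):

```python
# Average each response's per-question satisfaction into one overall score,
# then average those scores by calendar year to drive the line graph.
question_cols = ["content_sat", "instruction_sat"]  # hypothetical names
df["overall_sat"] = df[question_cols].mean(axis=1)
annual_satisfaction = df.groupby(df["submitted"].dt.year)["overall_sat"].mean()
```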

To visualize when people fill out the survey, I chose a heat map. This was the most useful method of visualization because it makes filtering by year, month, or both very easy, while still allowing for month-to-month comparisons. While a bar chart may make comparing the differences in hires across months easier, it does not serve as a very intuitive filter. Given that the main purpose of this visualization was filtering, I went with the heat map.
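The table behind a year-by-month heat map is just a count of responses per cell; a quick sketch, again using the hypothetical `df`:

```python
# Count responses per (year, month) cell; this grid backs the heat map.
response_counts = (
    df.groupby([df["submitted"].dt.year.rename("year"),
                df["submitted"].dt.month.rename("month")])
      .size()
      .unstack(fill_value=0)  # rows = years, columns = months
)
```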

While the business assistant was not necessarily interested in which months people fill out the feedback, she was interested in how satisfaction differs between interns and full-time hires. Since this was not asked directly in the survey in the past, I figured the best way to manipulate the data we had into this information was through the date the survey was filled out. Interns only fill out the survey in June, so by comparing satisfaction in June to satisfaction outside of June, the business assistant can get an understanding of how the two demographics differ.
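That derivation is a one-liner in code; a sketch with hypothetical column and label names:

```python
# Derive an approximate demographic from the submission month:
# interns only take the orientation in June.
df["cohort"] = df["submitted"].dt.month.map(
    lambda m: "intern" if m == 6 else "full-time"
)
satisfaction_by_cohort = df.groupby("cohort")["overall_sat"].mean()
```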

I believe this is a better method of comparing demographics than simply adding a question to the survey asking whether respondents are interns or full-time hires, and adding a corresponding filter to the dashboard, because it is more flexible. For example, the Law department is joining the GBSC this September, and every person in that department will have to go through some amended version of the orientation. Using the year and month heat map, the business assistant can compare the levels of satisfaction for September 2017 against the rest of the responses, contrasting the transferred Law department employees with traditional external hires.

To display the Likert-scale questions, I used a stacked, segmented, and centered bar chart that categorizes the responses as either positive or negative. The stacked, segmented, and centered bar chart allows for at-a-glance comparison of satisfaction levels across questions. Additionally, given that the distributions were often largely similar and positive, I added a text bubble displaying the average level of satisfaction.

Responses were grouped into 3 categories: “Content”, “Instruction”, and “Overall”. The “Content” section contains questions about how people felt about what was said and how useful it was. The “Instruction” section covers how people felt about how the content was delivered. The “Overall” section provided a mean level of satisfaction by averaging every question. Each category could be viewed in its entirety, or collapsed into a summary.
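To make the grouping concrete, here is a rough sketch of the category math. The question-to-category mapping and the above-neutral cutoff for "positive" are my assumptions, not the dashboard's exact logic:

```python
# Hypothetical mapping of question columns to dashboard categories.
CATEGORIES = {
    "Content":     ["content_sat", "materials_sat"],
    "Instruction": ["instruction_sat", "pace_sat"],
}

# Average satisfaction per category (shown in each category's text bubble).
category_avgs = {name: df[cols].mean(axis=1).mean()
                 for name, cols in CATEGORIES.items()}
# "Overall" averages every question equally.
all_cols = [c for cols in CATEGORIES.values() for c in cols]
category_avgs["Overall"] = df[all_cols].mean(axis=1).mean()

# Positive/negative split for the centered bar chart, treating anything
# above the 50% "neutral" level as positive (an assumed cutoff).
positive_share = {col: (df[col] > 0.5).mean() for col in all_cols}
```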

TRADEOFFS

The main tradeoff we made was sacrificing depth of information for quantifiable metrics by deciding to use only the multiple choice responses. By focusing only on the multiple choice data, we lose out on the deeper stories that the answers to the free response questions can tell. However, we found that those deeper stories actually made it more difficult to reach actionable insights, so we decided to exclude them. I stand by the decision to make the Likert-style questions the basis of the dashboard, as I believe the free response feedback could actually lead to a lack of action by providing too much information with no clear takeaways. By limiting the total information visible, the feedback is actually more useful. However, I do think we could have made better use of the free response questions instead of discarding them.

REFLECTIONS

Looking back on this project now that my internship is over, I see two main opportunities for improvement. While we decided to work solely with the multiple choice portion of the survey because it led to more actionable insights once visualized, I think it was wrong to completely discard the free responses. The multiple choice responses can tell us how satisfied or dissatisfied people are, but they can’t tell us why. Using a dashboard action to jump from a specific demographic of responses (for example, new hires in June who were dissatisfied with the orientation) to their free response answers would have been a great way to connect metrics to the stories behind them.

Secondly, I should have taken the time to better reflect on the project during the internship. With some distance between me and the project, and a better understanding of what Tableau is capable of, it’s a lot easier for me to see ways to improve than it was while working on it. Taking the time to step back and evaluate my decisions in terms of the larger picture while still interning would have allowed this project to be even more impactful.