By now we all know the adage: modern data analysis is like drinking from a firehose. We might argue it’s more like standing under a waterfall. Though the data we collect is abundant, our human capacity to take it in and analyze it is limited. Given that abundance, it is increasingly difficult to tell where to look for interesting insights and useful analysis. This challenge is only going to keep growing, so many of us are developing tools to help tackle it. Our newest: volatility scores.
A volatility score measures how much and how often a respondent changes their mind on an issue, opinion, or view. The measurement came from looking at years of data on the Trendency platform and realizing that we have probably been thinking about changes in opinion in the wrong way. Typically, we think of movement in broad brush strokes across demographic categories: men under 30, People of Color, and so on. But what if volatility around a belief is driven more by the person than by the categories they tend to be branded with? Taking this question a step further, what if individuals are volatile in some views but not others, and what if that volatility changes from week to week and month to month?
Getting technical for a moment, Trendency’s volatility scores are found by taking the intraclass correlation of a respondent’s answers to a specific question over time. This metric comes from the fields of psychology and medicine, where intraclass correlation is used to measure test-retest reliability. Our purpose for using the correlation is slightly different, but the measure is still appropriate. To make the results more intuitive for everyone, even those who don’t exactly enjoy a good statistical discussion, we subtract the resulting coefficient from one and multiply by 100, producing a ‘score’ that ranges from 0 to 100. A score near zero implies a person’s mind changes little or not at all; a score near 100 implies it changes often and dramatically. We can also run a test of significance on the resulting volatility scores to detect and categorize highly volatile respondents.
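The transformation above can be sketched in a few lines. This is a minimal illustration, not Trendency’s actual implementation: it assumes a one-way random-effects ICC, the ICC(1,1) form, computed on a matrix of repeated answers, and it clips the result to keep the stated 0–100 range (a sample ICC can dip below zero, which would otherwise push the score above 100).

```python
import numpy as np

def icc_1_1(answers: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an n-respondents x k-waves matrix."""
    n, k = answers.shape
    grand_mean = answers.mean()
    respondent_means = answers.mean(axis=1)
    # Between- and within-respondent mean squares from a one-way ANOVA
    ms_between = k * ((respondent_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((answers - respondent_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def volatility_score(answers: np.ndarray) -> float:
    """Map the ICC to a 0-100 score: stable answers -> low volatility."""
    raw = (1.0 - icc_1_1(answers)) * 100.0
    # A sample ICC can fall below zero, so clip to the stated 0-100 range
    return max(0.0, min(100.0, raw))
```

Respondents whose answers barely move from wave to wave produce an ICC near one and therefore a score near zero, while erratic answers drive the ICC down and the score up.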
The usefulness of such a score is immense. It makes it easy to identify which kinds of respondents often change their minds, allowing our clients to zero in on the most persuadable people, saving time and money. Let’s take, as an example, the generic congressional ballot, which asks respondents how likely they are to vote for a Democrat, a Republican, or a third-party candidate. Below is a density plot showing the distribution of respondents’ volatility scores. As with most political questions, the vast majority of respondents have very low volatility scores, with the average being a paltry 13.76.
Given this distribution, we know that while volatile individuals are present, their numbers are small. With this information, we can try to get a better sense of which groups contain more individuals likely to change their minds or be swayed in one direction or another. Below is a breakdown of average volatility scores by three identifiers: the respondent’s level of education, gender, and location. One group stands out as particularly volatile (relative to the question): non-college-educated men in cities.
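A breakdown like this is a straightforward group-by aggregation over respondent-level scores. The sketch below uses pandas with hypothetical column names and made-up scores, since the underlying schema isn’t shown here:

```python
import pandas as pd

# Hypothetical respondent-level scores; the column names and values are
# illustrative, not Trendency's actual schema.
scores = pd.DataFrame({
    "education":  ["non-college", "college", "non-college", "college"],
    "gender":     ["male", "female", "male", "female"],
    "location":   ["urban", "suburban", "urban", "rural"],
    "volatility": [28.0, 9.0, 24.0, 11.0],
})

# Average volatility by the combination of the three identifiers,
# with the most volatile group first
by_group = (scores
            .groupby(["education", "gender", "location"])["volatility"]
            .mean()
            .sort_values(ascending=False))
print(by_group)
```

Sorting the grouped means in descending order surfaces the most volatile cohort at the top, which is exactly the "where should we look?" question the scores are meant to answer.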
Having identified a volatile group on this issue, we can look at how the group’s opinion has been shifting. Above are two trendlines showing how monthly averages change over time. The first is of urban non-college-educated men, the group we identified as most volatile. We can see the group generally prefers Democrats. However, this preference was briefly amplified after the passage of the GOP tax bill, with a spike of support for Democrats followed by a sharp decline. That large swing and the subsequent smaller swings are certainly volatile behavior. Indeed, by May of this year, the two main parties ran almost even among this cohort. It is safe to say this group holds weaker partisan allegiances.
For comparison, the trendline above shows all other respondents. It displays much less dramatic shifts in preference. While not a steady state, the movements are far less pronounced, and the difference between August of 2017 and June of 2018 is minimal.
We are only beginning to implement volatility scores, but their utility is already clear. These scores help direct our research and analysis to where it matters most. By identifying where opinions are changing the most, we can better understand our data, making it more useful and less overwhelming. In addition, we can start to build a better understanding of which events, advertisements, and other interventions truly have an effect, and which changes in overall opinion are little more than fluctuations amplified by traditional research methods.