Got a question about the percentages in our platform? Check out the list of FAQs below to see if it's covered.
What’s the difference between audience and data point %?
In charts, the audience % tells you the proportion of your audience who match with a given data point. For example, an audience % of 50 means that half of the people in your audience match with that data point.
Meanwhile, the data point % tells you the contribution your audience makes to a given data point. For example, a data point % of 50 means that half of the people who match with that data point are also in your audience.
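The difference between the two percentages comes down to which group you divide by. Here's a quick sketch of the arithmetic using made-up universe figures (the variable names and numbers are illustrative, not real platform data):

```python
# Hypothetical universe figures (illustrative only)
audience_size = 10_000_000    # people in your audience
data_point_size = 20_000_000  # people matching the data point
overlap = 5_000_000           # people in both groups

# Audience %: what share of your audience matches the data point
audience_pct = overlap / audience_size * 100

# Data point %: what share of the data point is made up by your audience
data_point_pct = overlap / data_point_size * 100

print(audience_pct, data_point_pct)
```

Same overlap, different denominators: here the audience % is 50 (half your audience matches), while the data point % is 25 (your audience makes up a quarter of everyone who matches).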
What’s the difference between column and row %?
In crosstabs, the column and row % are exactly the same as the audience and data point %, so you can refer to the explanation above for more detail. Note that the explanation above works best if your audience is in your columns and the data points you're comparing against are in your rows.
How are percentages calculated?
The percentages shown in our platform are all calculated using universe figures, not response figures.
For example, if the universe figures show there are 10,000,000 people in your audience, and 2,500,000 of those people use a particular social network, then you should have an audience % of 25 against that particular data point.
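That worked example, expressed as a quick calculation (figures taken from the paragraph above):

```python
# Universe figures from the example above
audience_universe = 10_000_000   # people in your audience
matching_universe = 2_500_000    # of those, people using the social network

# Audience % = matching universe / audience universe
audience_pct = matching_universe / audience_universe * 100
print(audience_pct)  # 25.0
```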
Why are my percentages lower than expected?
Ask yourself if all of the data points you’re looking at are:
Featured in all selected markets
Available in all selected waves
Asked of all respondents (e.g. is the question only asked of certain age groups, or respondents who answered a previous question in a particular way? Check the question notes to find out)
All of these factors could lead to percentages that are lower than expected. For example, if a brand you’re looking at isn’t featured in our APAC markets, then over 50% of the world’s online population are, by default, not going to be users.
Why are my percentages higher than expected?
Consider who’s being represented by the data set you’re using and remember that our data sets:
Represent internet users only
Have a lower and upper age limit (typically 16-64, but the age range represented varies by data set)
Sometimes represent a specific group of people with a shared trait (for example, GWI Work and business professionals)
In other words, our data sets aren’t designed to represent absolutely everyone in a particular market. This means percentages can appear higher, particularly in emerging markets where fewer people have internet access and those that do tend to be younger and more affluent, educated, and urban than the national average.
Why don’t my percentages add up to 100%?
We all like things to be tidy, and there’s nothing tidier than a chart in which all the data points you’ve selected add up to 100%. If you’re expecting this to be the case but aren’t getting 100%, don’t panic! Chances are it’s for a logical reason.
If your percentages are adding up to more than 100%, this is usually because the options you’re looking at aren’t mutually exclusive. In other words, respondents can select more than one of them, so adding them together will give you more than 100%.
For example, let’s take a look at the race and ethnicity questions we field in the US. Here, respondents can select more than one race to describe themselves and are asked if they identify as Hispanic in a separate question. This means the sum of all race and ethnicity options combined exceeds 100%. This example is no outlier, either: most questions aren’t mutually exclusive, so you shouldn’t expect things to add up to 100% most of the time.
If your percentages are adding up to less than 100%, it could be because the options you're looking at aren't asked of everyone. For example, the corresponding question may only be asked of people who answered a previous question in a particular way. You can check the question notes in our platform to see if a question has been "routed" from a previous question.
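To see why multi-select options can sum past 100%, here's a small sketch with a made-up multi-select question (the options and responses are hypothetical):

```python
# Hypothetical multi-select question: each respondent can pick
# several options, so option percentages can sum past 100%.
respondents = [
    {"TV", "Radio"},
    {"TV"},
    {"TV", "Radio", "Podcasts"},
    {"Podcasts"},
]
n = len(respondents)

# Percentage of respondents selecting each option
pct = {
    option: sum(option in answers for answers in respondents) / n * 100
    for option in ("TV", "Radio", "Podcasts")
}

print(pct, sum(pct.values()))
```

Here TV comes out at 75%, Radio at 50%, and Podcasts at 50%, for a combined total of 175%: nothing is wrong, the options just aren't mutually exclusive.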
How do I check if a set of answer options are mutually exclusive?
Start by putting yourself in the respondent’s shoes: look at the question and ask yourself if more than one answer option applies to you. If so, they’re almost definitely not going to be mutually exclusive. If you’re still unsure, try putting all of the options in both the columns and rows of a crosstab. If you see any overlap between them, you can be sure they’re not mutually exclusive.
Why are my manual calculations giving me different results?
This usually happens when you compare a question that’s been asked of all respondents with one that’s been asked of a representative subsample only. Common examples include:
Comparing a question from the main Core survey with one from the brand & media module
Comparing a question from the main USA survey with one from the CPG & healthcare module
Comparing a question from a primary data set with a question from an add-on study
In these cases, respondents from outside the relevant subsample have to be excluded from the calculation. Our platform does this automatically so you don’t have to. However, because this all takes place behind the scenes, any calculations you make with the figures you can see won’t give you the same results.
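As a sketch of why your manual figures won't match, here's the rebasing arithmetic with hypothetical sample sizes (all numbers are illustrative):

```python
# Hypothetical figures: a question asked of all respondents compared
# with one asked only of a representative subsample (e.g. a module).
full_sample = 20_000           # respondents asked the first question
subsample = 5_000              # respondents also asked the module question
matches = 1_000                # respondents matching both data points

# A naive manual calculation against the full sample understates the figure:
naive_pct = matches / full_sample * 100

# Rebasing to the smaller subsample, which is what the platform
# does automatically behind the scenes:
rebased_pct = matches / subsample * 100

print(naive_pct, rebased_pct)
```

The naive figure is 5%, while the rebased figure the platform shows is 20%, because respondents who were never asked the module question are excluded from the base.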
If you want to take a different approach (e.g. you want to be able to make manual calculations or would like a consistent sample to feature throughout your entire analysis) you can apply the relevant subsample as a base using the corresponding “audience size” data point from the “survey details” folder. This would allow you to apply a base of brand & media module or Core Plus respondents to your whole analysis, for instance.
However, before making the above changes to your analysis, it's important to note you don’t have to do this: when comparing questions asked of different subsamples, our platform automatically uses the smaller of those two subsamples as a base when making the relevant calculations even if that base isn’t applied to the analysis as a whole. For more detail on all things rebasing, check out this article.