Throughout 2021 BehaviourWorks Australia (BWA) is publishing a book to help policymakers and program managers use tools within our ‘Method’ to design and deliver more effective behaviour change programs. We are releasing one free chapter each month.
In Chapter 7 (published 1 July 2021), we discussed how to research audiences.
In this post we revisit some of that content and explore some common mistakes with audience research that are best to avoid.
What is audience research?
Audience research is a process to understand a group of people whom you intend to influence (i.e., an audience). It is usually done to inform a strategy for changing that audience's behaviour.
Ideally, it is done scientifically, using rigorous and replicable methods.
Often audience research includes things such as:
Basic demographics: Age profile, gender, education levels, income, employment, occupation and cultural background.
Household characteristics: Number of people living there, how many children, the type and size of the house.
Life experience and psychosocial measures: Personal history, values, social norms, previous participation in the target behaviours, attitudes towards the topic (e.g., recycling or e-waste), and knowledge about it (e.g., collection site locations).
There are many methods you can use, but some of the most common are questionnaires, interviews, observation and focus groups, or a mix of these.
You can read more about audience research in Chapter 7.
Common mistakes with audience research (part 1)
Next we are going to discuss some mistakes that people make when doing audience research. There are quite a few, so we have broken them into two parts. We will cover the rest in the next post.
Asking the wrong people
Your sample (i.e., those people you engage with) should be representative of your audience.
It should be unbiased. A survey of barbers would probably give you a very distorted impression of how important haircuts are.
It should be genuine. Be careful with cheap recruitment options offered by some panel companies. People who answer surveys all day for small payments may be quite different from the audience you want to generalise to, and may not answer very authentically, so their responses are likely to differ markedly from those of a more genuine sample.
Avoid self-selection bias. This happens when letting people opt in to your study biases your results. For instance, if you run a survey to understand the public's views on a given topic and advertise for a sample, you might only attract people who are very interested in that topic, which could distort your results.
Engaging with too few people
Engage with enough people in your audience to draw appropriately strong conclusions. If you ask one audience member whether they think something is a good idea, you shouldn't run a million-dollar campaign on the back of their answer.
How many is enough depends on your method and aims. The main thing to know is that there is a generally accepted lower limit for most methods, and you should make sure you are above it.
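As a rough illustration of where such lower limits come from, here is a standard margin-of-error calculation for a survey estimating a population proportion. This is an illustrative sketch (the formula and function name are our own, not from the chapter), and it only covers simple proportion estimates, not interviews or focus groups:

```python
import math

def min_sample_size(margin_of_error=0.05, confidence_z=1.96, p=0.5):
    """Minimum sample size to estimate a population proportion.

    Standard formula: n = z^2 * p * (1 - p) / e^2.
    p = 0.5 is the most conservative (largest-n) assumption.
    confidence_z = 1.96 corresponds to 95% confidence.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

# About 385 respondents gives a +/-5% margin at 95% confidence
print(min_sample_size(0.05))   # -> 385
# Halving the margin of error roughly quadruples the sample needed
print(min_sample_size(0.025))  # -> 1537
```

Note the diminishing returns: tightening the margin of error gets expensive quickly, which also foreshadows the next mistake.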
Engaging with too many people
Don’t engage with more people than needed. You usually don’t need a census to draw strong conclusions about your audience. At a certain point, each new participant brings sharply diminishing returns.
Don’t overpower samples. With a very large sample, certain statistical analyses will flag almost any comparison as significant, because even trivially small differences become statistically detectable. Statistical significance then stops telling you anything about practical importance.
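A small sketch makes the overpowering problem concrete. Here a two-proportion z-test (computed by hand, with a hypothetical helper of our own) compares 50% against 51% agreement between two groups. The 1-percentage-point difference is trivial, yet it becomes "statistically significant" once each group contains enough people:

```python
import math

def two_prop_z(p1, p2, n):
    """Z statistic comparing two proportions, n respondents per group."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(2 * pooled * (1 - pooled) / n)  # pooled standard error
    return (p2 - p1) / se

# A trivial 1-percentage-point difference (50% vs 51%):
for n in (100, 1_000, 100_000):
    z = two_prop_z(0.50, 0.51, n)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n={n}: z={z:.2f} ({verdict})")
```

At n = 100 or 1,000 per group the difference is (rightly) not significant; at n = 100,000 it crosses the conventional |z| > 1.96 threshold, even though nothing practically meaningful has changed.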
Asking the wrong types of questions
It is generally best to use open-ended questions for complicated topics (when you do, ensure you have planned time and an approach for synthesising the answers).
It is generally best to use closed-ended questions (e.g., ‘please pick a or b’) when you need to save time and there is little risk of confusion or misrepresentation.
Avoid forcing potentially misleading responses – you will get answers, but they may be misrepresentative. If you force respondents to state an opinion on a topic they know nothing about, you won’t get good data.
Avoid double-barrelled or ambiguous questions – For instance, suppose someone selects ‘disagree’ when asked to “Please agree or disagree with the following statement: We should have more recycling and waste management plants.” What does this mean exactly? Do they oppose more recycling, more waste management plants, or both?
Underestimating time and effort
Expect research to be more time consuming and demanding than you predict. The planning fallacy is perhaps particularly common with research.
Data cleaning is often more time consuming than expected. As the joke goes, half the work in data science is cleaning the data, and the other half is complaining about data cleaning.
Research design and analysis are often harder than expected, particularly if you are doing fieldwork, as you will often have difficulty setting up clean conditions and are more likely to end up with ‘messy’ data.
Interviews can take a considerable amount of time to transcribe, and the transcripts take further time to code (i.e., interpret).
Synthesising and preparing summaries and reports is often more work than expected, particularly if no template exists.
Making the task harder than it should be
Make your research task easy enough that you will definitely get enough audience members to complete it.
Don’t assume that you have a captive audience (unless you really do).
Make sure you ask the minimum amount from the audience – make everything as quick and easy as possible.
Don’t assume that asking more questions will get you more answers – it might just lead to fewer or lower quality completions. SurveyMonkey gives some useful information on recommended survey length here.
In this post we introduced audience research and explored some common mistakes.
In part 2 we will look at a few more issues that you should consider before you do audience research and discuss ways to overcome the challenges.

Please offer feedback if you think that there’s anything you’d like us to do differently in future posts. Please see our chapters, website and previous posts for more information about how to do research effectively.
You can download Chapter 7 and all previous chapters here.