It’s time to experiment with social impact
Only by combining scientific methods with creative thinking will we develop novel ideas that genuinely improve people’s lives.
I don’t like the term “social impact”.
There, I said it.
It’s jargon, easy to throw about but difficult to understand.
Social Enterprise UK defines it as “the effect of an activity on the social fabric of the community and well-being of individuals and families”.
Endless articles, presentations and toolkits describe the importance of data, measurement, analysis, goals, values and frameworks.
Wikipedia has three lines.
So, clear as mud then.
But they all gloss over one important word: “effect”.
Social impact is about demonstrating that an activity is changing people’s lives for the better. In other words, it’s about demonstrating a cause-and-effect relationship between an activity and a set of outcomes that benefit people.
It’s science.
Except it’s not. It’s currently pseudoscience. The methods are unreliable and the claims are often vague or exaggerated. The Guardian article, “Best bits: how to measure your social impact”, provides nothing of the sort.
The integration of scientific methods into the creative process is exciting and should be supported. However, what “social impact” currently stands for is not fit for purpose. The intention is good but the approach is misguided.
Getting the science right
You can do randomised controlled trials for social policy. You can put social innovation to the same rigorous scientific test.
Esther Duflo, French economist and Co-Founder of the Abdul Latif Jameel Poverty Action Lab
To demonstrate whether an activity causes an outcome to occur, you need to provide evidence for two statements:
- If the activity is given, then the outcome occurs.
- If the activity is not given, then the outcome does not occur.
The best way to do this is with an experiment. In its simplest form, an experiment involves randomly assigning people either to receive an activity or not. The assumption is that randomisation controls for all other differences, so any change in outcomes must be due to the activity. Randomised controlled trials are not always appropriate, but they are the gold standard to aim for.
Experimentation is the first principle of science. The second is reproducibility. It must be possible for independent groups to reproduce the results of an experiment for the findings to be accepted. Scientific progress is a shared responsibility.
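To make this concrete, here is a minimal sketch in Python of a simulated trial. Everything in it is invented for illustration: the wellbeing scale, the group sizes and the true effect are assumptions, and a real trial would also need power calculations and ethical approval. The fixed random seed is a nod to the second principle: anyone running the script reproduces the same result.

```python
import random
import statistics

random.seed(42)  # fixed seed so independent runs reproduce the same numbers

# Hypothetical example: 40 people, half randomly assigned to receive an
# activity. All numbers are invented; none come from a real programme.
def wellbeing_score(received_activity):
    """Simulate an outcome measure; the activity adds a small true effect."""
    baseline = random.gauss(5.0, 1.5)           # individual variation
    effect = 1.0 if received_activity else 0.0  # the effect we hope to detect
    return baseline + effect

treated = [wellbeing_score(True) for _ in range(20)]
control = [wellbeing_score(False) for _ in range(20)]

observed = statistics.mean(treated) - statistics.mean(control)

# Permutation test: if the activity did nothing, how often would randomly
# reshuffled group labels produce a difference at least this large?
pooled = treated + control
extreme = 0
runs = 10_000
for _ in range(runs):
    random.shuffle(pooled)
    if statistics.mean(pooled[:20]) - statistics.mean(pooled[20:]) >= observed:
        extreme += 1

print(f"Observed difference in means: {observed:.2f}")
print(f"Chance of seeing this with no real effect: {extreme / runs:.3f}")
```

A permutation test is used here because it needs no statistical libraries and makes the core question explicit: could random assignment alone explain the difference we saw?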
Scientific methods can revolutionise how organisations demonstrate and improve the effect their activities are having but scientists are often flippant about how straightforward the process is.
You can’t just slam two worlds together and expect them to get along. You’ve got to empathise with the other side.
Getting personal
The first principle is that you must not fool yourself – and you are the easiest person to fool.
Richard Feynman, American physicist and Nobel Laureate
Making something is a deeply personal process. Whether it’s a crafted object, impeccable service or an imaginative story, it involves pouring reason and emotion into something that may last forever or not last the night. The creative industries are characterised by uncertainty, sometimes leading to results that are neither predicted nor understood. Success comes in many forms, often in the eye of the beholder. Trying to evaluate this success naturally leads to tension.
Evaluation strategies currently focus on providing feedback to investors and commissioners at the end of the development process. A minimum standard of evidence is used to weed out ideas.
It’s like crash testing a new car once it’s rolled off the production line.
The result is a selection bias against novel activities that may be making a positive difference but don’t have the time or money, or aren’t established enough, to commission the required standard of evaluation.
Research – rather than evaluation – strategies should help designers and developers get feedback throughout the creative process. Experiments can help ventures understand what works and improve what doesn’t from an early stage, while building an evidence base for stakeholders.
This is the approach we took with the teams going through the Knee High Design Challenge, as written about recently by Ingrid Melvaer.
Ideas should be nurtured, not monitored.
Back to the science
The use of experiments in the creative process will no doubt cause some people discomfort. Small sample sizes, fragile data, overinflated results and ethical implications are all points for debate.
But these are challenges for science as a whole. John Ioannidis’s influential paper, “Why most published research findings are false”, argues that fewer than half of published findings are likely to be true.
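His argument is, at heart, simple arithmetic about false positives. The sketch below uses illustrative numbers (assumptions for this post, not figures from the paper): if only one in ten tested hypotheses is true and studies are underpowered, most “positive” findings are false.

```python
# How many "positive" findings are actually true? (After Ioannidis, 2005.)
# The inputs are illustrative assumptions, not figures from the paper.
prior = 0.10   # share of tested hypotheses that are actually true
power = 0.20   # chance an underpowered study detects a true effect
alpha = 0.05   # conventional false-positive rate

true_positives = power * prior          # true effects correctly detected
false_positives = alpha * (1 - prior)   # null effects flagged by chance

ppv = true_positives / (true_positives + false_positives)
print(f"Share of positive findings that are true: {ppv:.0%}")  # about 31%
```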
It’s no longer enough for scientists to publish novel, interesting (and valid) insights. Findings must have real-world relevance.
Universities are increasingly being judged on their research impact, defined by the ESRC as “the demonstrable contribution that research makes to society and the economy”.
The creative industries, meanwhile, are under pressure to demonstrate their social impact by embedding scientific research into their activities.
There is a natural synergy that can be sparked by common interest.
Only by combining rigorous scientific methods with novel, creative ideas will we design products, services and places that truly benefit people’s lives.
Find out more
Find out more about the Knee High Design Challenge.