It’s not an experiment, I’m being experimental

Sorrel Harriet, PhD
7 min read · Sep 29, 2022


Image credit: Alex Kondratiev, Unsplash

In this article I want to explore the differences in how we talk about and approach experiments in business vs academia, and share my tips for designing and running experiments in an industry setting.

What’s an experiment?

Experiment: a scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact.

Experimental: based on untested ideas or techniques and not yet established or finalised.

Both of the above derive from the Latin experimentum, meaning knowledge based on experience as opposed to authority or conjecture, where ‘experience’, in medieval times, equated to personal experience and observation.

Why am I telling you this? Well, because I think we have a tendency in the tech industry to conflate the two. Having myself made the leap from academia to industry, I’ve found the differences in how we approach and talk about experiments, and research more generally, both exciting and challenging.

Evidence-based != scientific

There is a healthy appetite within the tech community for evidence-based practices, and that can only be a good thing. However, when we conflate what is ‘evidence-based’ with what is ‘scientific’, in the strictly academic sense, it can have the following negative consequences:

  1. We undervalue experience-based research because it isn’t deemed scientific (a phenomenon that often manifests as an obsession with quantifying the unquantifiable);
  2. We fail to interrogate ‘science-based’ research adequately because it feels or sounds compelling, or consistent with our worldview.

Solutions?

Well, in the age of information overload and disinformation, we could all do with getting better at judging the value of information, objectively as well as pragmatically (there’s a difference). That’s the short answer. Now for the long one…

I work with consultants, for whom research is an integral part of what they do. It is my belief that anyone who does research, academic or otherwise, should endeavour to do so with consideration for its ethics, integrity and limitations. That is, of course, very context-dependent, and making the right choices for a given situation means having an awareness of relevant research methodologies and knowing when and how to apply them. It also means understanding the limitations of the tools we use, so that we can be transparent about them in our communications.

Let’s return to this idea of ‘experiments’. As someone schooled in running scientific experiments, it feels disingenuous to claim to have run an ‘experiment’ knowing full well it could never stand up to academic scrutiny. Allow me to explain…

True experiments vs non-experiments

Generally speaking, in scientific circles, for something to be a ‘true experiment’ it needs to meet certain criteria. Specifically, there need to be both experimental and control groups, and these need to be determined through random assignment. You also need to be able to isolate the effects of the independent variable under test. For example, let’s say I take a large group of school children, all of whom meet certain selection criteria, and split them into two smaller groups at random. One group of children is given jumbo slushies at snack time; the other gets their usual carton of milk. Only in this way is it possible to establish a causal relationship between the independent variable (sugar) and one or more dependent variables (e.g. tooth decay). Obviously, there’s a bit more to it than that, not to mention some questionable ethics at play, but that’s basically how true experiments work.
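
To make the idea concrete, here is a minimal sketch of random assignment in Python. The participant IDs and group sizes are made up for illustration; the point is that shuffling the pool before splitting it means any systematic differences between the two groups arise by chance alone.

```python
import random

# Hypothetical pool of participants who all meet the selection criteria
participants = [f"pupil_{i}" for i in range(1, 61)]

random.seed(42)               # fixed seed so the assignment is reproducible
random.shuffle(participants)  # random order removes selection bias from the split

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]  # e.g. gets the jumbo slushie
control_group = participants[midpoint:]    # e.g. keeps the usual carton of milk

print(len(treatment_group), len(control_group))  # 30 30
```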

When control groups are used but participant selection is not randomised, that may be referred to as a quasi-experiment; if neither criterion is met, it’s deemed a non-experiment. In the case of non-experiments, it is not possible to establish cause and effect. You may be able to establish correlation, and you can certainly present case studies, but you can never establish a cause-and-effect relationship between variables (for more, see Price et al., 2014).

But wait…there’s hope!

Non-experiments can still hold enormous value, just as true experiments may turn out to be utterly useless. And that’s OK, because science isn’t a monolith. It consists of many distinct and overlapping disciplines, each with their own methodologies and techniques for analysing data.

So, to all those highly intelligent, analytical people out there busy tying yourselves in knots trying to measure the unmeasurable: relax! We live in a world full of uncertainty. Sometimes you’ve got to rumble with it. And quell the myth that, for evidence to be worth anything, it has to be accompanied by graphs and statistics, irrespective of how robust they are. Used wisely, lived experience can be a powerful form of evidence.

Which brings me to another trap we fall into: believing that “some data is better than no data.” Some data needs to be treated carefully. You need to pay attention to the data that is missing, and to the possibility that your very limited data set is grossly skewed. Some data has the potential to be worse than no data, depending on its quality and what you do with it (to be fair, as a colleague rightly pointed out, it’s not the data we need to be wary of so much as the information synthesised from it).

OK, so I’ve given you lots of reasons why I think people, consultants included, need to be savvy as well as open-minded when it comes to experiments and research. But where does one start?!

Tips for designing and running experiments

  • Define the purpose of the experiment. Is it to persuade? Inform? Contribute new knowledge? Create buy-in for a new approach or way of working? Build a business case for a new idea or product? Change hearts and minds? Win business? Is an experiment the only way to do this?
  • Define what you want to learn from it. In many cases, it can help to articulate this as a hypothesis statement, even if it isn’t strictly testable (i.e. whether it is true or false can’t be determined absolutely.)

An example of an untestable hypothesis statement:

“If we increase the coaching capabilities of our consultants, we will receive more positive feedback from our clients.”

There is no way I can realistically test this hypothesis via a true experiment. It is also highly unlikely I will accumulate sufficient empirical evidence to draw statistically robust conclusions, at least not within a realistic timeframe. But that doesn’t mean the experiment isn’t worth doing. There can still be value in approaching this as an experience-based learning opportunity, or as experiential qualitative research. And who knows…maybe other people will run similar experiments. Ever heard of meta-analyses? Meta-analyses combine secondary data from multiple sources in order to identify patterns and inconsistencies, and can be performed on both qualitative and quantitative data (a great argument for open data right there!)
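
For the curious, here is a rough sketch of how a quantitative meta-analysis might pool results, assuming a common fixed-effect, inverse-variance approach (one of several possible methods, and not something the example above prescribes). The effect sizes and variances are entirely made up and stand in for hypothetical studies of similar coaching interventions.

```python
import numpy as np

# Hypothetical effect sizes (e.g. standardised mean differences) and their
# variances, reported by three imaginary studies of similar interventions
effects = np.array([0.30, 0.45, 0.10])
variances = np.array([0.04, 0.09, 0.02])

# Fixed-effect, inverse-variance weighting: more precise studies count for more
weights = 1.0 / variances
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect ~ {pooled_effect:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```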

  • Evolve your critical and statistical thinking. When you read a statistic like, “9 out of 10 people are willing to earn less money to do more meaningful work”, you’ll ask questions like: Which people? How are we defining ‘meaningful’?
  • Get curious about different research designs and methodologies, while seeing them as ‘tools to pick up when you need them’. For example, you don’t need to become an expert in research design, but if you are running a survey, take the time and trouble to learn good survey design principles.
  • Be open to all of the metrics available to you, including human success metrics, and choose wisely based on your context. Take time to justify the rationale for your choice of metrics. How do they connect you to your experimental learning objectives?
  • Consider the type of data you are collecting and how best to work with it. Look at how others have approached analysing similar data. Terms to be aware of include: qualitative vs quantitative, discrete vs continuous, statistical significance (beware of the misuse of P values! see the sketch after this list), false precision and other types of bias, and so on. See recommended resources below.
  • Consider sources of uncertainty and risk. Here it can help to think about all the ways the experiment could go wrong and cause harm.
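
On the misuse of P values mentioned above, here is a small, self-contained sketch using made-up noise data. It shows why running lots of comparisons and reporting only the ‘significant’ ones is misleading: even when there is no real effect at all, roughly 1 in 20 tests will dip below p < 0.05 by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 100
false_positives = 0

for _ in range(n_tests):
    # Both groups are drawn from the same distribution, so there is no real effect
    group_a = rng.normal(loc=0, scale=1, size=30)
    group_b = rng.normal(loc=0, scale=1, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} comparisons look 'significant' despite no real effect")
```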

Going back to the example experiment, let’s assume I have opted to implement a coaching intervention within one or more client engagements in order to collect qualitative data from the affected teams. There are things I can do to improve the quality of my experimental results, such as skilfully designed and administered surveys and interviews, followed by careful analysis. Or I could do things really badly, and, in the worst case, cause harm to participants or the business.

Example of how not to present data. Source: So Dutch people are about 3 times as big as Philippinos

At this point, it can also be good to consider whom you are reporting to, and what you want from them. Are they an audience who will respond better to facts and figures, or to a compelling story? Will facts and figures distract them from something more important?

  • Get help and feedback. Yes, it’s a minefield, and no, having a PhD doesn’t always help. Seek advice and feedback as often as necessary.
  • Follow 6 simple steps to guide your experimental design process:
Experimental design in 6 simple steps

Finally…

  • Keep being experimental. Not everything needs to be an experiment. Being experimental is a creative and joyful state with huge potential for learning and innovation. Call it what it is, and celebrate it.

Suggested further reading and learning resources
