AI experimentation for innovation. Trial and error.

Date: 20.11.2023

Reading time: 3 minutes

Author: Andreas D.

Failure is good. That's quite a statement, but it's true. Innovation goes hand in hand with trial and error, and so does AI experimentation. A company that doesn't try to move forward or innovate is standing still. Organizations need projects and experiments to excel and rise above the competition. But what makes a good experiment?

Standing still is going backwards.

AI experimentation, the 'why'

Experimentation with AI in its many forms is an important step to gain a better understanding of how AI models are trained and which data is key. Organizations need to map out which data points they have gathered, how to link and use all this information and lastly, which data is important for training an AI model.

What makes a good experiment?

A good experiment includes 4 steps:

  1. Question
  2. Hypothesis
  3. Formulation
  4. Results

Let's explain with some theoretical basics and a recognizable webshop use case.

Question

A question can be as simple as 'How will our customers react if we implement X?' or as complex as 'How will the disruptive nature of venture capital startups affect our business?' The key in this step is a defined goal: a question our experiment should be able to answer.

A webshop wants to lower transport costs by increasing the number of purchased items per customer. Their question might be: 'Which item is a customer most likely to buy next?'

Hypothesis

Next up is the hypothesis. Which data do we have, or do we need, to be able to answer our question? And is this data usable, or do we need to refine and distill it? This step, while it might seem easy, is often one of the most labor-intensive ones. Cleaning, refining, and generally distilling data is very important, though: an incorrectly refined data set can ruin an entire model.

The webshop wants to see if they can sell extra items by suggesting the item a customer is most likely to buy next. This means they need data on what each customer bought, the time between purchases, and so on.
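As a minimal sketch of this refinement step, assuming a toy order log with hypothetical customer IDs, item names, and dates (real data would come from the shop's order database and need far more cleaning):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw order log: (customer_id, item, purchase_date).
orders = [
    ("c1", "phone",   "2023-01-05"),
    ("c1", "case",    "2023-01-12"),
    ("c2", "phone",   "2023-02-01"),
    ("c2", "case",    "2023-02-20"),
    ("c3", "phone",   "2023-03-03"),
]

def purchase_histories(rows):
    """Group purchases per customer, ordered by date: the refined data set."""
    histories = defaultdict(list)
    for customer, item, date in rows:
        histories[customer].append((datetime.fromisoformat(date), item))
    # Sort each customer's purchases chronologically, keep item names only.
    return {c: [item for _, item in sorted(events)]
            for c, events in histories.items()}

print(purchase_histories(orders))
```

The output is one chronological purchase sequence per customer, which is the shape of data a next-item model can actually learn from.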

Formulation

Formulation. This is where we start training a model. Usually, several different models are tested in parallel to reduce costs. In this phase, we learn which data points the AI model considers most important. Some extra refinement is typically required in this step as well.

The webshop further refines their data and trains a couple of different models to try and predict which items a customer might buy next.
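One of the simplest models the webshop could formulate is a transition-count model: count how often item B follows item A across all customers, then suggest the most frequent follower. This is a hypothetical baseline sketch with toy purchase sequences, not the shop's actual model:

```python
from collections import Counter, defaultdict

# Hypothetical purchase sequences per customer (output of the hypothesis step).
histories = {
    "c1": ["phone", "case", "charger"],
    "c2": ["phone", "case"],
    "c3": ["phone", "charger"],
}

def train_transitions(histories):
    """Count how often each item is followed by each other item."""
    transitions = defaultdict(Counter)
    for items in histories.values():
        for current, nxt in zip(items, items[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, last_item):
    """Suggest the item most often bought after `last_item`, if any."""
    followers = transitions.get(last_item)
    return followers.most_common(1)[0][0] if followers else None

model = train_transitions(histories)
print(predict_next(model, "phone"))  # most frequent follower of "phone"
```

In practice this baseline would be compared against a couple of richer models, which is exactly the 'several models in parallel' idea of the formulation step.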

Results

The last part - and not one to be forgotten - is the results. We analyze what these models predict and what they can do for the organization. Do we need more data? Can we verify our hypothesis?

The webshop can now analyze the results. 'If we suggest the predicted items to 50% of our customers, do we see an increase in sales?' 'Are customers buying our predicted items?' The key takeaway is that the answer might be no. They might not have the correct data, or enough of it. This means the webshop has follow-up steps, even after a failed experiment: 'Which data are we lacking?', 'How can we improve our models?', 'Do we need more training?' or 'Do we need other models?'
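Measuring that 50/50 split can be as simple as comparing the rate of extra items sold between the group that saw suggestions and the group that did not. The numbers below are invented for illustration:

```python
# Hypothetical A/B results: suggestions shown to half of the customers.
control   = {"customers": 1000, "extra_items_sold": 80}
treatment = {"customers": 1000, "extra_items_sold": 112}

def uplift(control, treatment):
    """Relative change in extra items sold per customer vs. the control group."""
    base = control["extra_items_sold"] / control["customers"]
    test = treatment["extra_items_sold"] / treatment["customers"]
    return (test - base) / base

print(f"{uplift(control, treatment):+.0%}")
```

A positive uplift supports the hypothesis; a flat or negative one sends the team back to the data and model questions above. A real analysis would also check statistical significance before drawing conclusions.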

AI experimentation takeaways

There have been several successful and less successful experiments done in this field. Some have changed the world and others have been relegated to the scrapheap. And that's just what experiments - and in the end, life - are about.

In need of a guide to get your experiment started? Download our free AI Canvas and kick off your AI experimentation!

about the author

Andreas D.

Andreas D. is Business & Functional Analyst at The Value Hub. His expertise lies in integration and project finalization, and his strengths in business understanding and requirement distillation make him a truly valuable team member. On top of that, he's particularly interested in and focused on AI, analysis and predictive machine learning, an interest and expertise he has used and shared in his latest projects. Andreas is the one to delve deeper and to realize future-proof and innovative projects.