AI experiments supporting our decision-making
It was taking too long to get new ideas into production. Our IT director believed that a stronger engineering culture could speed things up, but for that to happen we first needed to change the management team, giving the engineering teams the space and support to improve things themselves.
We also set up a separate enablement team to accelerate development by building tools and sharing skills and knowledge. Over time, this led to big changes: we went from releasing every two weeks (with two full days spent just on testing and releasing) to releasing up to eight times a day. Operational work dropped from 70% to 25%, and it's still going down.
The improvements mostly came from the teams themselves, and they soon started asking whether AI could speed things up even more.
So, at the end of 2024, we started using AI. We introduced GitHub Copilot in VS Code, hired a few AI engineers, and ran small experiments to automate boring or repetitive work, with help from Patrick Debois and Microsoft.
We also started defining where and how AI could be used. Some experiments stayed small; others grew into stable, production-ready solutions. Our enablement team is now adding AI tools to our toolbox, for example an AI-based reviewer that checks whether code follows our internal guidelines.
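To make the reviewer idea concrete: the abstract doesn't describe how the tool is built, but a minimal version could be a script that feeds a diff plus the guidelines document to an LLM and asks for violations. The sketch below assumes the OpenAI Python SDK, a hypothetical guidelines file path, and a placeholder model name; it is an illustration, not the team's actual implementation.

```python
# Hypothetical sketch of an AI-based guideline reviewer (not the actual tool).
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
import subprocess
from pathlib import Path

from openai import OpenAI

# Assumed location of the internal guidelines document.
GUIDELINES = Path("docs/coding-guidelines.md").read_text()


def review_diff(base: str = "origin/main") -> str:
    """Ask an LLM whether the current branch's changes follow the guidelines."""
    # Collect the diff against the base branch.
    diff = subprocess.run(
        ["git", "diff", base],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You review code changes against internal guidelines. "
                    "List each violation with the file, the offending lines, "
                    "and the rule it breaks.\n\n"
                    f"Guidelines:\n{GUIDELINES}"
                ),
            },
            {"role": "user", "content": f"Review this diff:\n\n{diff}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review_diff())
```

A script like this could run locally or as a CI step that comments on pull requests; the enablement-team version described in the talk is presumably more integrated than this standalone sketch.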
In this talk, I'll walk you through our real-life experiments: the ones that failed, the ones that were "meh," and the ones that actually worked. I'll also share what we're doing right now with RB2, a dev company, to compare how AI is really impacting our development lifecycle.