What does “using AI at work” look like once you move past the hype and actually try it, in two very different organizations?
In this talk, Pascal Dufour shares a set of hands-on AI experiments from the past ~6 months, comparing adoption in:
RB2 (~60 people, multinational dev setup; e-commerce delivery projects)
DELA (large Dutch insurer; ~6 million customers, corporate governance, ~250 people in IT)
Rather than presenting AI as a silver bullet, Pascal focuses on experiments that support decision-making: where AI fits, where it doesn’t, and what changes when teams use it in real delivery pipelines.
What you’ll learn
Why AI adoption moves faster in small companies and slower in corporates (governance, tooling restrictions, regional differences, procurement)
How teams tried PR review bots / review agents (and why their usefulness can fade over time unless prompts and standards evolve)
Using multiple “review agents” (e.g., Gemini + a Cursor bot) to cross-check changes before shipping; see the sketch after this list
How a “tests pass → ship to production” policy changes the risk profile when AI is involved
Figma + MCP experiments for design systems: where it works well (Storybook, tests, code generation) and where it becomes too rigid
Why some teams pivot from Figma-driven UI work to vibe-coded prototypes for faster feedback, then rebuild them “properly” in the actual architecture
The emerging reality of AI budget limits / premium quotas and what that does to adoption behavior
A practical observation: teams that already excel at DevOps experimentation tend to adopt AI effectively too
The less-talked-about downsides: AI as a “pleaser,” addictive loops, and the need to tighten testing strategies because agents change things you didn’t intend
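To make the cross-checking and shipping-gate ideas concrete, here is a minimal sketch: two independent review agents look at the same diff, and a change ships only when the test suite passes and both agents approve. The endpoints, payload shapes, and helper names are illustrative assumptions, not the actual RB2/DELA tooling.

```ts
// Dual-agent PR cross-check (hypothetical endpoints and payloads).

type Verdict = "approve" | "concerns";

interface Review {
  agent: string;
  verdict: Verdict;
  notes: string;
}

// Ask one review agent for a verdict on a diff. The { prompt } request and
// { verdict, notes } response shapes are assumptions made for this sketch.
async function reviewDiff(agent: string, endpoint: string, diff: string): Promise<Review> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt:
        "Review this diff against our coding standards. " +
        'Reply with JSON {"verdict": "approve" | "concerns", "notes": "..."}.\n\n' +
        diff,
    }),
  });
  const { verdict, notes } = (await res.json()) as { verdict: Verdict; notes: string };
  return { agent, verdict, notes };
}

// Gate: ship only when the tests pass AND every agent approves.
async function shouldShip(diff: string, testsPassed: boolean): Promise<boolean> {
  const reviews = await Promise.all([
    reviewDiff("gemini", "https://ci.example.internal/review/gemini", diff),
    reviewDiff("cursor-bot", "https://ci.example.internal/review/cursor", diff),
  ]);
  for (const r of reviews) {
    console.log(`${r.agent}: ${r.verdict} (${r.notes})`);
  }
  return testsPassed && reviews.every((r) => r.verdict === "approve");
}
```

The interesting signal is disagreement: when one agent flags what the other waves through, that is exactly where a human review pays off, and it is also why “tests pass → ship” on its own becomes a weaker gate once agents write part of the code.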
A key takeaway
AI experiments succeed more often in teams with ownership, fast feedback loops, and an experiment culture—the same traits that tend to predict success in DevOps transformations.
🎤 Speaker: Pascal Dufour
📍 Recorded at the LeSS Conference