At last week’s fantastic Beyond Boundaries conference, we networked, we listened, we shared. And we reflected.

Beyond Boundaries is a global Conversational AI festival hosted by the CDI Foundation, focusing on human-centric AI, agentic systems, and building conversational AI applications. The event brings together professionals to discuss ethical AI, multimodal experiences, and practical enterprise implementation.

There were far too many brilliant speakers to mention every highlight properly, but our Co-Founder Alice Kerly has summarised a few takeaways that are worth sharing.

Trust

Hans van Dam opened with a question that cuts to the core of any project: “Will your automation build or destroy conversational capital?” Trust compounds slowly, but collapses suddenly. This feels like such an important starting point for any conversation about AI, automation or agents, especially when the pressure to “do something with AI” is so high.

Intentionality before implementation

Several speakers challenged the idea that launching an agent is, in itself, progress.

Steve Bell made the point clearly: organisations are under significant pressure to apply AI, but the work must start with identifying the problem to solve, and only then choosing the right technology.

Daniel Layne built on this with a practical reminder that content debt, skills debt and technical/tooling debt need attention before building CAI. Otherwise we risk automating weakness, not improving experience.

Lorraine Burrell and Helene Benz put it perfectly: the things you didn’t design will also scale. Even the poor parts.

And Grace Hughes’ work on mapping unintended consequences as a design activity was a useful reminder that surfacing uncomfortable conversations early is not a blocker but a highly pragmatic and valuable risk mitigation strategy.

Responsible AI cannot be something we bolt on later. Allegra Guinan framed this well: we are already making decisions about use cases, metrics, data inputs and outputs, and technology choices. The next step is to ensure we are making them intentionally across every layer of the enterprise architecture, because you cannot compensate in later layers for errors, or decisions not made, in the preceding ones.

Humans and conversations before technology

Elizabeth Stokoe talked about what it means to be conversationable and reminded us that we need to build from how people actually talk, not how we think they talk. Chatbots can’t do reciprocal self-disclosure – they have no self. Human communication demands vulnerability and holds power, and we are wise to keep working to remain aware of where chatbots only simulate authenticity and reciprocity.

Daniel Padgett’s session on the foundations of good conversation (common ground, relevant contribution and robust repair) was a powerful reminder that conversation is much more than turn-taking. At every step in a conversation, we can reason over a combination of shared experiences, data about the current state, and our knowledge of the world to find the sweet spot for the next best response.

Elizabeth Rodwell’s culture-first approach to design also really resonated with me. Cultural bias in AI is not neutral, and it is not inevitable. It’s shaped by design choices. We should ask not only the engineering questions needed to build and ship, but also the humanistic questions about what kind of system this should be.

And Mark van der Heijden’s comment about Bol’s call centre agents not working from scripts was striking. Instead, their customer service team are trained to empathise and create personal connections. I found it both refreshing and a little sad that this feels so different from the norm.

Know how well you’re really doing

Another strong thread was measurement.

Lorraine Burrell and Helene Benz spoke about the challenge of defining what “good” actually is, and how that definition needs feeding and defending.

Darren Ford’s point also landed strongly: many organisations are measuring activity, but do not always know whether outcomes are being met. That matters because if you make system updates based on the wrong data, you will simply make things worse, not better.

Content quality isn’t optional

AI is not a magic plaster for content chaos (I forget whether it was Lorraine or Helene who said this!). The pair shared Lloyds’ approach to creating content specifically for good experiences when retrieved by their AI agent, which felt like a mature and intentional shift.

Rahel Anne Bailie also made an important distinction: LLMs are trained on content, not data. We need to stop conflating the two.

If the content is unclear, inconsistent or poorly governed, the experience will reflect that.

Boundaries, hybrid systems and smaller specialised agents

There was also a very practical theme emerging around architecture. A number of teams seem to be moving away from trying to create one all-purpose agent, and towards many smaller, specialised agents, each undertaking a narrower task.

Shauna Griffin’s line stayed with me: “Your enterprise needs boundaries and isolation. Your user doesn’t.” It captures one of the central design tensions in enterprise CAI. Internally, we need clear boundaries, controls and ownership. Externally, the user should experience something coherent.

Guillermo Vazquez’s point was a useful rule of thumb too: use generative AI for probabilistic problems, and code for deterministic ones.

Rebecca Evanhoe’s reflections on intentionally hybridising a product, rather than going all-in with LLMs, also felt very grounded. Understand which topics are hard to handle deterministically, where an LLM can genuinely do better, and use smaller controllable pieces to build organisational confidence in the overall system.
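To make the hybrid idea concrete, here is a minimal sketch of how that routing might look. Everything in it is illustrative, not from any of the talks: the intents, the order lookup, and the stubbed-out generative call (which stands in for a real LLM API) are all assumptions.

```python
# A hypothetical hybrid router: deterministic intents are handled by plain
# code, and anything open-ended falls through to a generative model.
# All names and data here are illustrative stand-ins.

def check_order_status(order_id: str) -> str:
    # Deterministic: a lookup with one correct answer belongs in code.
    orders = {"A123": "shipped", "B456": "processing"}
    status = orders.get(order_id)
    if status is None:
        return f"Order {order_id} not found."
    return f"Order {order_id} is {status}."

def generate_reply(utterance: str) -> str:
    # Probabilistic: a stub standing in for an LLM call, which would
    # need its own guardrails (grounding, moderation, fallbacks).
    return f"[LLM] Drafting a response to: {utterance!r}"

# Map each intent we can handle deterministically to its handler.
DETERMINISTIC_INTENTS = {"order_status": check_order_status}

def route(intent: str, payload: str) -> str:
    handler = DETERMINISTIC_INTENTS.get(intent)
    if handler is not None:
        return handler(payload)   # safe, testable, repeatable
    return generate_reply(payload)  # flexible, but less predictable

print(route("order_status", "A123"))
print(route("small_talk", "How's your day going?"))
```

The appeal of this shape is exactly what the speakers described: each deterministic handler is a small, controllable piece you can test in isolation, and the generative path is confined to the topics where it genuinely does better.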

For me, the takeaways are more questions (always ask more questions!). Before building more automation, we need to pause and ask:

  • Are we solving the right problem?
  • Do we know what “good” looks like?
  • Is our content fit to be retrieved and reused?
  • Have we surfaced the uncomfortable risks early?
  • Are we using generative AI where it adds value, and deterministic approaches where they are safer and more reliable?
  • Can we measure whether outcomes are improving, not just whether activity is increasing?

At The CAI Company, we’re so grateful for the opportunity to sponsor and attend such an inspiring event hosted by Conversation Design Institute / CDI Foundation.

Find out more about our upcoming events and webinars.