
About us

We are commercially focused AI consultants who have turned a combined 40+ years of technology leadership experience into an industry-leading blueprint for how to lead AI transformation.
We don't sell AI tools. Instead, we work with businesses to educate, inform, and identify real problems that AI solutions can solve. We guide the end-to-end process, from upfront thinking through investment strategy to partner selection and deployment.
We specialise in turning complex topics such as AI ethics, workforce impact, and EU AI Act compliance into practical advantage for leadership teams.
Our Founders


Antony Roberts - Co-Founder & Chief AI Officer
With over 20 years’ experience leading cutting-edge technology and digital transformation at senior leadership level, Antony has led on strategy, people, data, and emerging technology across complex, regulated environments.
Antony leads Full Fathom Five’s work on AI strategy, AI technology assessment, AI audit, and commercial value realisation. He specialises in helping organisations move beyond experimentation to make informed decisions about where AI genuinely adds value, how it should be governed, and how risks can be identified and managed early.
A recognised expert in EU AI Act compliance, Antony advises boards, executives, and risk leaders on proportionate, risk-based approaches to AI governance. This includes AI inventories, shadow AI discovery, policy development, and workforce training, ensuring organisations meet regulatory obligations while enabling innovation rather than constraining it.
Claire Roberts - Co-Founder & Chief Transformation Officer
Alongside authoring the FFF AI Training programmes and working with boards to craft AI strategies, Claire is a passionate speaker on diversity, women in tech, and AI ethics. She works closely with government agencies, charities, and organisations campaigning for safer AI and raising awareness of the impacts of AI on women and underrepresented groups.
Claire is a member of the UKAI Women in AI taskforce; a board advisor for multiple organisations including the Safe AI for Children Alliance (SAIFCA); and part of the National Council of Women’s research team investigating internet harms.

The way we use AI
Our Commitment to Responsible AI Use
We practise what we preach
At Full Fathom Five, we use AI as an analytical aid in our own work, supporting research, analysis, and documentation.
We believe organisations using AI responsibly should be transparent about how they use it, and willing to hold themselves to the same standards they recommend to others.
That’s why we’ve documented how we use AI internally, and why we’re happy to share it.
Why this matters
AI is now embedded in everyday business tools, from productivity software to analytics platforms. The risk isn’t that organisations are using AI, it’s that they don’t fully understand how it’s configured, governed, or relied upon.
For us, responsible AI isn’t about fear or restriction. It’s about clarity, control, and accountability.
By being open about how we use AI ourselves, we aim to remove ambiguity, build trust, and demonstrate what good, proportionate governance looks like in practice.
Our principles for responsible AI use
Our internal approach to AI is guided by a small number of clear principles:
- Enterprise-grade platforms with contractual data protection guarantees
- Human oversight for all decisions with legal or similarly significant effects
- Data minimisation and explicit purpose limitation
- Risk-based assessment aligned with EU AI Act thinking
- Clear accountability and access controls for AI usage
- Transparent disclosure where AI materially contributes to our work
These principles reflect the same expectations we help our clients design and adopt.
Our Responsible AI Usage Commitment
We’ve captured these principles in the FFF Responsible AI Usage Commitment.
The Commitment sets out minimum standards for safe, defensible AI use in regulated and business-critical contexts. It covers areas such as licensing decisions, data handling, human oversight, AI tool governance, and alignment with EU AI Act risk categories.
It is the same Commitment we follow internally, and the same framework we reference when supporting clients with AI governance, discovery, and EU AI Act readiness.
The Commitment is principles-led, deliberately proportionate, and designed to be adapted to each organisation’s risk profile and regulatory environment.
What this Commitment is (and is not)
The FFF Responsible AI Usage Commitment:
- Shows how Full Fathom Five uses AI in its own work
- Provides a practical reference point for organisations building AI governance
- Supports informed discussion around EU AI Act readiness
​
It is not:
- Legal advice
- A compliance certification
- A substitute for formal risk assessment or regulatory guidance
Why we share this
AI governance shouldn’t be abstract.
If we’re advising organisations on responsible AI use, licensing choices, and regulatory readiness, we believe clients deserve to see how we handle the same challenges ourselves.
Sharing our baseline is part of that commitment. It demonstrates our approach, encourages open conversation, and provides a replicable reference point for organisations developing their own AI policies.
You can request a copy of the Commitment by emailing us at hello@fullfathomfive.uk

