AI is powerful, but without responsibility it can cause harm.
Poorly managed AI can exacerbate existing inequalities, concentrate power, or create negative externalities that harm communities.
Your people, customers, and stakeholders need confidence that AI is safe, fair, and aligned with your values.

What is Responsible AI?

The Six Essentials of Responsible AI

Roll over each card below to read more about how to get started with each area (or click the document to the right to see it in full screen).


Whose job is it?

Using AI in your organisation will mean new skills, new responsibilities, or even completely new roles.

People Who Will Need New AI Skills

Legal Counsel: Needs to develop expertise in emerging AI regulations and standards to advise on compliance and risk mitigation specific to AI applications.

Risk Managers: Need to incorporate AI-specific risks into enterprise risk frameworks, including reputational, operational, and regulatory dimensions.

Product Managers: Must embed ethical AI considerations directly into product development lifecycles, ensuring responsible design from concept through to deployment.

HR: Should develop guidelines for using AI in hiring, promotion, and employee assessment that protect against discrimination while capturing benefits.

Privacy Officers: Must address new privacy challenges created by AI's data requirements and potential for unintended inference about individuals.

Programme / Portfolio Management: Needs to build industry standards and ethical frameworks into AI project planning and prioritisation processes.

Data Managers/ML Engineers: Must expand beyond technical optimisation to consider fairness, explainability, and potential social impacts of their models.

Learning & Development: Should develop and deliver educational programmes to ensure all employees understand the responsible AI principles relevant to their roles.

IT System Testers: Should introduce specific testing to detect and mitigate bias in AI systems.

Cross-Functional / Board Roles

Chief AI Officer: Provides executive leadership on responsible AI practices, sets organisational standards, and ensures alignment between AI implementations and company values.

AI Governance Lead: Develops and maintains the frameworks, policies, and processes that guide responsible AI development and deployment across the organisation.


Regulations & Future Landscape

European Union

EU AI Act

The world's first comprehensive AI regulation, categorising AI systems by risk level with varying obligations. While the UK is no longer an EU member, this regulation affects any UK business serving EU customers or using AI systems that impact EU citizens.

https://artificialintelligenceact.eu/

General Data Protection Regulation (GDPR)

Though primarily a data protection law, it contains important provisions affecting AI, including restrictions on solely automated decision-making, requirements for data minimisation, and the right to explanation.

https://gdpr.eu/what-is-gdpr/


Coming soon...

See below for info on the UK AI Regulation Bill

UK

Data Protection and Digital Information Bill

The post-Brexit evolution of UK data protection law, maintaining many GDPR principles while potentially diverging in areas relevant to AI innovation and automated decision-making.

https://ico.org.uk/about-the-ico/the-data-use-and-access-dua-bill/

UK Algorithmic Transparency Standard

A government initiative requiring public sector organisations to publish information about how they use algorithmic tools in decision-making, potentially expanding to affect private sector suppliers.

https://www.gov.uk/government/collections/algorithmic-transparency-recording-standard-hub

National AI Strategy

While not regulation per se, the UK's strategic approach to AI includes regulatory components and signals future regulatory direction, particularly around innovation-friendly frameworks.

https://www.gov.uk/government/publications/national-ai-strategy

Financial Conduct Authority (FCA) AI Guidelines

Specific regulatory guidance for financial services firms using AI, focusing on explainability, fairness, and governance of AI systems.

https://www.fca.org.uk/publications/corporate-documents/artificial-intelligence-ai-update-further-governments-response-ai-white-paper

Cross-Border Considerations

OECD AI Principles

International guidelines that the UK has adopted, emphasising that AI systems should be transparent, explainable, robust, secure, and respectful of human rights.

ISO/IEC Standards

Emerging technical standards for AI systems (such as ISO/IEC 42001 for AI Management Systems) that are increasingly referenced in regulatory frameworks and procurement requirements.

Sectoral Regulations

Industry-specific rules covering AI applications in healthcare (MHRA), advertising (ASA), and employment (Equality Act implications), which impose additional requirements beyond general AI regulations.

Emerging

UK Artificial Intelligence Regulation Bill

The UK's proposed Artificial Intelligence (Regulation) Bill aims to create a central AI Authority, mandate AI Officers in businesses, enforce transparency on training data, and introduce independent audits. It promotes ethical, accountable AI use without stifling innovation, offering regulatory sandboxes for testing. Unlike the EU AI Act, which enforces strict, risk-based rules across all sectors, the UK's approach is more principles-led and flexible. Although still a private member's bill, it signals growing pressure for formal regulation. UK businesses should prepare for likely changes by strengthening governance, data practices, and audit readiness, especially if competing in or partnering with EU-regulated markets.

Algorithmic Impact Assessments

Increasingly required by various regulations, these structured evaluations help organisations systematically assess potential harms of AI systems before deployment.

Corporate AI Transparency Requirement

Evolving expectations for organisations to disclose how they develop, deploy and govern AI systems, including to shareholders and in ESG reporting.

AI Assurance Ecosystem

The UK's approach to building trust in AI through standards, tools, and services that help organisations verify claims about their AI systems.


Come and talk to us...

If you're looking for support to scale and future-proof your business, let's talk. We offer a free introductory consultation to understand your ambitions and explore how AI and digital transformation can unlock new growth opportunities for you.

enquiries@fullfathomfive.uk

or call us on +44 (0)7788 548088
