Working Together
ETHICAL STRATEGY /
The 3Cs Decision Framework
for Responsible AI Use

An Ethical AI Strategy defines intentions and direction - but to make it real, we need to establish practical governance that makes responsible use easy to deliver.


To do this we create two key governance mechanisms:

Responsible AI Design Principles
Six essential factors in designing and implementing responsible AI
The 3Cs Decision Framework for Responsible AI Usage (this page)
A simple set of questions that supports users to think ethically about how and where they use AI

Our Responsible AI Decision Framework helps users of AI by asking nine simple questions that prompt you to pause, reflect, and decide whether your use of AI is ethical, inclusive, and appropriate.

The 3Cs: care, consent & constraint
care

Who could be affected, and how?

Care is about recognising that AI outputs don’t exist in isolation. They can shape decisions, opportunities, perceptions, and outcomes for real people. We challenge learners to think beyond efficiency and ask whether their use of AI could reinforce bias, exclusion, harm, or unfairness - intentionally or not.


Care brings DEI, societal impact, and environmental responsibility into everyday AI use.

consent

Who has agreed to this use of AI?

Consent addresses permission, power, and agency. It challenges the assumption that because AI tools are available, their use is automatically acceptable. This includes consent to use someone’s work, data, likeness, or to make decisions that affect them using AI.


This lens is especially critical in workplaces, creative contexts, and decision-making scenarios where power imbalances exist.

constraint

Where should human judgement remain in control?

Constraint is about setting boundaries. It asks learners to recognise where AI should support thinking, not replace it - and where using AI at all may be inappropriate. This lens tackles over-reliance, deskilling, emotional dependency, and abdication of responsibility.


Constraint protects human agency, learning, and accountability.


9 Big Responsible AI Questions for Users

Care

Who could be affected, and how?

me

“Can I take full responsibility for the outputs I’ve created, and how I created them?”

you

“Could this output exclude, disadvantage, or misrepresent anybody?”

the world

“Is there an environmental cost to using AI for this?”
