The Fair Imagery Manifesto

For the responsible use of AI-generated images of people and communities

Why This Matters

Images shape how the world sees people, places and power.
With generative AI, we can now digitally produce faces, bodies, crowds and crises at scale.
With that power comes responsibility.

This manifesto is our shared commitment as enterprises and NGOs:
to use AI-generated imagery in ways that protect dignity, truth and justice – especially for people and communities most affected by inequality, conflict or crisis.

Our Commitments

1. People come before visuals

We will never sacrifice the dignity, safety or rights of individuals and communities for attention, clicks or brand impact.

2. We refuse AI-fabricated suffering

We will not use AI to create “realistic” images of poverty, crisis, disaster, conflict or trauma that did not happen to real people in real places.
If an image looks like documentation of reality, it must be reality.

3. We do not trade consent for convenience

We will not use AI imagery as a shortcut to avoid informed consent, safeguarding or fair collaboration with real people.
Consent, participation and shared decision-making remain non-negotiable.

4. We reject stereotypes, always

We will not generate or circulate AI images that rely on harmful stereotypes based on race, gender, age, disability, class, religion, geography or any other identity.
We will actively check prompts and outputs for bias and correct them.

5. We are honest about what is real and what is not

Whenever we use AI-generated or heavily AI-manipulated imagery, we will say so clearly.
Our audiences have the right to know when they are seeing a constructed image.

6. We use AI where it does least harm

When we use AI imagery, we limit it to conceptual, abstract or clearly illustrative uses – icons, diagrams, textures, animation-style visuals – not to depict real people or real events.

7. We prioritise local, ethical creators

Wherever possible, we choose real, consented imagery created by photographers and filmmakers, especially local visual creators who know their own contexts.
AI is not a replacement for human storytelling.

8. We hold our partners to the same standard

We expect agencies, platforms, stock providers and technology partners to meet these principles.
We will ask hard questions, refuse harmful content and demand fast correction when things go wrong.

9. We build systems for accountability

We will create clear internal processes for reviewing high-risk imagery, flagging concerns and acting quickly – including removing or correcting misleading or harmful visuals.

10. When in doubt, we don’t use it

If we are unsure whether an AI-generated image might harm, mislead or stereotype, we choose not to use it.

11. We open our prompts to scrutiny

We commit, wherever feasible, to making the prompts and instructions used to generate AI imagery publicly available or easily accessible alongside key campaigns and materials.
Where full disclosure is not possible (for example, for privacy, security or contractual reasons), we provide an anonymised or representative version.
Transparency includes how an image was made – not just the fact that AI was involved.

Our Call to Action

We invite all organisations that use images of people and communities to adopt, adapt and publicly endorse this manifesto.

Make these principles part of your brand, not just your policies.

Share them with your teams, agencies and partners.

Publish your prompts and practices so that communities, peers and audiences can understand, question and improve them.

Listen to the communities you portray and let their feedback shape your practice.

AI will transform visual storytelling.
Together, we can ensure it does so fairly – with transparency, consent and respect at its core.

Pledge supporters


Add Your Voice

You're joining a growing community of people who believe in this mission. Sign the pledge and stand with us.