
How We Built Our AI Ethics Framework (And How You Can Build Yours)

AI-generated image of an ethical compass with the word "ethics" where north would be; cameras, maps, and journals sit on the desk, all in the style of Antoni Gaudí. (Generated with Imagen 4)

What's possible with generative AI is changing faster than many of us can keep up with. Establishing an ethical framework for emerging AI tools can feel like trying to saddle a horse at full gallop.

While some bristle at any form of restraint in this fast-moving era, I’ve found that constraints often catalyze creativity. More importantly, transparency about those constraints builds trust with audiences.

Even if you dislike creative restraints, they're coming. The EU AI Act has implications for creators, and pending court cases will likely shape how we use AI tools. Having established guidelines and principles makes it easier to navigate these changes as they arise.

Building Origen Story’s Ethical Framework

When crafting our initial AI guidelines, I started with my personal experience using various AI tools over the past few years, plus early thinking on new approaches I expect to employ in the coming year.

I also ran Deep Research queries using Gemini and Claude to gather AI guidelines related to visual storytelling and journalism. My plan: gather these resources in NotebookLM, identify any generative AI applications or ethical considerations I might be missing, and figure out how to incorporate them into our workflows.

Here's my initial draft:


Origen Story ethical guidelines for the use of generative AI

  1. Research - We use LLMs to research topics, gather sources, and generate internal reports. All factual information is independently verified before sharing publicly or with clients.
  2. Writing and editing - We use AI tools for organization, planning, and idea iteration. Initial drafts and significant revisions are done by humans. LLMs assist with spelling, grammar, and structural analysis.
  3. Transparency - We identify to audiences any use of generative AI for video, images, audio, or other content, including the tools and models used and how AI content differs from documentary material.
  4. Digital recreations of individuals - We may create "deepfake" versions of individuals to protect their identity or in extraordinary situations. This requires consent from the individual or, if deceased, their estate. If no estate is available, decisions are based on weighing potential harm against public good, with our reasoning disclosed to audiences. All deepfake techniques are clearly identified through on-screen text, visual cues, or description.
  5. Digital recreations of notable public figures - In rare cases where consent is impossible but recreation serves public interest, depictions must be clearly stylized as recreations (e.g., sketch animation rather than realistic footage that could be mistaken for historical record).
  6. Post-production polishing - We routinely use AI-based finishing tools native to post-production software for color correction, EQ adjustment, and background noise removal. Common applications listed below may not be disclosed project-by-project.
  7. "Vibe-coding" and tool development - We use AI-assisted coding for non-linear documentary presentations and internal tools. Specific instances where these techniques have content and information implications will be disclosed.
  8. Accessibility - We use automated tools for transcription, translation, and other features that make content more accessible. These tools will be added to the common applications list below.
  9. Social impact - We believe in the power of storytelling and thoughtfully choose topics that address pressing social issues.
  10. Sustainability - We recognize the demands AI places on water and energy resources and regularly review our workflows to minimize or offset our carbon impact.

Common Applications

Regular uses that may not require project-level disclosure:

  • Interview transcription
  • Research and analysis of large datasets
  • Translation
  • Native post-production tools for color correction, audio cleanup, and general refinements that don't alter content substance
  • Basic spelling, grammar, and copy-editing
  • Pre-visualization and storyboarding when they don't significantly influence the final visual output

What the Research Revealed

I entered my draft into NotebookLM and compared it to more than 15 AI guidelines I had gathered (full list below). Because most input came from guidelines for large newsrooms and organizations, many recommendations fit that mold. However, the process surfaced several valuable suggestions:

  1. Bias and discrimination - Many LLMs demonstrate clear bias in their output. Mitigating this requires intentional action.
  2. Sensitivity in recreations - Based on the Archival Producers Alliance's suggestions, I should explicitly state that AI recreations require the same care for accuracy and sensitivity as other forms of recreation.
  3. Privacy and data protection - For sensitive material from sources or vulnerable persons, avoid using identifiable information with cloud-based tools or processes that might cause data leakage.
  4. Structured review process - Regular guideline updates and a system for collecting audience feedback and suggestions.

You can read the updated version incorporating these changes on the Origen Story ethics page: https://www.origenstory.com/ethics-guidelines/

Your Turn

Use these resources to craft your own ethical framework for generative AI. If you do, please share it with us at ethics@origenstory.com.

Resources: AI Ethics Guidelines