Let's talk about the ethics of using AI Art in humanitarian storytelling
I didn’t make this - but also, I kind of did?
A few weeks ago, my colleague shared with me a paper she’d been working on - it was the culmination of months of effort and she wanted to make it as appealing as possible to outside readers. I hadn’t joined the team yet, so when it came to presentation and editorial design, she had to improvise. She showed me the cover: to me, it was fairly obvious that she’d used ChatGPT to generate the illustration featured on it. “I wanted it to look nice“, she said, “But I can’t draw and ChatGPT’s image was exactly what I was looking for.“ I agreed; sometimes, when a team has limited resources and time, you have to take a shortcut or two. New tools and services are making it easier than ever to do that. “Do you think I should disclose it as a photo credit?“, she asked me. I found myself not knowing how to respond.
I’ve been working remotely for years now, and I’ve just recently started sharing an office with fellow UN workers. It actually caught me a bit off guard just how much the daily, informal discourse had changed around the use of AI text and image generation. New questions and dilemmas were coming to the surface that I hadn’t had to think about before. What my colleague asked me was a fair question:
Do we have an ethical obligation to disclose AI-generated illustrations when we use them in our reports and publications?
It’s tempting to skip that step - AI-generated illustrations can be a practical solution when there’s no graphic design support available and deadlines are knocking at the door. You can adjust the image to your preferences within seconds, without back and forth with an illustrator that might take days. A migration or social protection expert would likely prefer to focus their time and effort on the content and impact of the report, rather than the aesthetics of its design.
But shortcuts always carry risks, and in this case, the potential ethical implications of using AI-generated images are worth considering. One of the main issues is the risk of misrepresentation or oversimplification of complex humanitarian situations. AI models are trained on existing data and can inadvertently perpetuate biases or stereotypes. Without context or explanation, AI-generated illustrations may not accurately reflect the nuances of the stories being told.
Not to mention the potential negative impact this would have on artists and illustrators, particularly those from developing countries and underrepresented communities. Relying too heavily on AI-generated visuals could lead to missed opportunities to showcase the talent and perspectives of these artists. Women-led art collectives, for example, have a rich history of using traditional techniques to create powerful and culturally relevant visual narratives. The UNFPA State of the World Population Report 2024 used such beautiful artwork that I just had to look up the creative minds behind it. Sure enough, UNFPA had included the work of artists such as Nneka Jones and Pankaja Sethi to illustrate both the print and digital versions of the report. It was an inspirational collaboration that supported these creators, not only enhancing the authenticity and cultural relevance of the publication but also helping to uplift the communities they serve.
Nneka Jones is a Trinidadian artist and activist based in the United States. Her work uses embroidery and mixed media, and mainly focuses on social issues.
Of course, we’d probably all want to include beautiful art in our reports, but we have to be realistic; sometimes good art is just not in the budget. Sometimes you’re just trying to get the information out while time and money are running out, so the best you can do is generate a simple and straightforward image on ChatGPT. I don’t think that’s inherently wrong, and I think there is a balance we should aim for between the practical benefits of AI creations and the importance of transparent representation. “Let’s aim for this next time“, I said to my colleague, “Let’s disclose the ‘artist’ in the photo credits of the cover; but be as honest as possible. Something like ‘AI-generated illustration created with Midjourney by Jane Smith based on the theme of children's education in rural areas’. This way, we can ensure transparency and allow readers to understand the nature of the visual content.“
I’m still trying to navigate these murky waters and work through issues we haven’t had to face before. I feel passionately about art and about encouraging creatives from all over the world, particularly artists from developing countries who continue to make beautiful works even in the face of adversity. It’s crucial that we don’t lose sight of the value and impact of art created by humans, and as humanitarians, we have a responsibility to be intentional and careful when producing our content.
I do believe that moving forward, it will be essential for humanitarian organisations to develop clear guidelines and best practices for the use of AI-generated images in reports and other publications. These guidelines should emphasise the importance of disclosure, context, and collaboration with local artists and communities. By fostering open dialogue and shared learning around these issues, we can work towards a more ethical and impactful approach to visual storytelling in the humanitarian sector.