Is generative AI creating a race & ethnicity representation crisis in comms?

Can AI imagery be used to ensure campaigns are diverse and representative, or are we hurtling towards a DE&I crisis? Our Head of DE&I and Senior Account Director, Sandy Downs, shares her thoughts.

By Sandy Downs on Friday, 27 October 2023

I’m not the first to acknowledge that artificial intelligence (AI) has an inherent and systemic risk of producing biased results, which can have discriminatory consequences – especially sexist and racist ones. It’s a product of the society which built it – expecting it to be exempt from those trappings is unrealistic. I’m also certainly not well-versed enough in computer science, sociology, or AI to posit a solution to the problem – and this piece isn’t about that.

Instead, I’m interested in an AI issue which is very particular to the comms world. I recently joined a webinar on generative AI in PR and beyond – we’re not talking about using AI data tools to track coverage, spit out monthly reports, or create Gantt campaign timelines. We are, however, talking about leaning on generative technology to:

a) Create mood boards and inspire creativity

b) Help build bespoke pitches/press releases

c) Create imagery to align with PR and advertising campaigns

The first two make me a bit nervous – I worry about the role of discerning human professionals in a world where creativity and personalisation can be outsourced, and the impact of this on trust, learning, and human evolution. But the third terrifies me. The ability to create imagery that mimics real people without having to involve them fills me with dread. Part of this dread comes from a point made by a (well-meaning) comms professional in the webinar itself, which was that AI imagery can be used to ensure campaigns are diverse.

Plenty of research (e.g. this piece on The Conversation and this by Time Magazine) has shown that if you prompt AI image generators such as Midjourney with job roles (journalist, doctor, etc.), you’ll usually get young, white, and predominantly male images – because that’s what the majority of the images in the training data look like. But you can of course fix that by specifying what you’re looking for – ‘black doctor woman’, or ‘Asian journalist man’, for example.

The well-meaning comms practitioner says, ‘great! Let’s do that and have a truly representative campaign full of visual diversity’. And there are several blogs already talking about which search terms to use (e.g. this blog on Midjourney and this Medium article) to create imagery which is ‘authentically diverse’.

In the DE&I world, we all know the phrase, ‘if you can’t see it, you can’t be it’ – and I understand this instinct to create diverse imagery, especially now we have the power to seemingly create representation via a simple keyword search.

But in my view, this cannot be considered true representation, as these images aren’t truly human – and as such, no human will be remunerated for their use. Models (regardless of gender, race, or anything else) deserve to be paid for their work, and the increased desire for greater representation in comms campaigns in recent years has drawn more diverse individuals towards the industry.

How can hiring a human model for a shoot – who looks a set way, needs to eat and travel, and comes with other inconveniences of that nature – compete with 10 minutes of clicking about online? We’ve already seen this play out in the real world. Shudu Gram, the first digital supermodel, is a recognisable, dark-skinned black woman who has featured in Vogue – but she’s computer-generated, and her creator is a white man who reaps the profits of her work. And for every slot she secures, a real black model loses an opportunity.

To me, this feels like a looming crisis. This year’s Black History Month theme is ‘saluting our sisters’, and it must not escape our notice that the most common ‘diverse go-to’ is a woman of colour. If we start plucking diverse images from thin air, there’s no opportunity to include, represent, and remunerate real people – and that’s a catastrophe.

So, what’s the solution?

My personal, and admittedly pretty radical, view is that we need to slow to a stop on using generative AI imagery – certainly of people, and arguably altogether (given that artists aren’t paid when their work is used to train these tools). AI is set to change the world, and we’ve already seen the benefits of AI-driven tools in enhancing efficiency – but if we’re going to use it to be creative, more conversations (and regulation) have to happen first to ensure we’re not sliding backwards on hard-won societal progress.

Speaking to colleagues, a different view shines through – that truly responsible comms professionals can take a strong ethical approach to mitigate these risks while still seizing the creative opportunities that AI brings. We’ve done a lot of work as an agency to ensure our creative outputs are diverse and authentic, be those stock library images or original photoshoots. Many of those learnings are transferable to working with AI tools, so long as we consider the ethical ramifications and raise them with our clients to reach the best outcome for all.

And of course, there’s a question mark around whose responsibility this really is. Does it lie with comms professionals, or with the companies behind the generative AI tools themselves? If the likes of Midjourney are going to generate imagery from art scraped from the web, should those companies be responsible for remunerating the artists, and even the people depicted in that art? And should they also be working to mitigate the inherent bias in their tools – for example, by building inclusive prompts around the specifications users enter, to help everyone consider the impact of the work they’re doing?

The answer isn’t clear, and there are other views in this discussion I haven’t covered – but what matters most is that the debate happens, and that we don’t just charge full steam ahead without pausing as an industry to discuss these complex and thorny topics. Jumping feet first into murky water is a dangerous game – companies which value societal progress shouldn’t just hop on the fastest-moving train, but should consider the destination we’re speeding towards.
