Recently at CogX in London, Stephen Fry, the British thespian and audiobook narrator, voiced his solidarity with actors striking in Hollywood. His personal anecdote of how an AI used his voice, without his permission or knowledge, to narrate a historical documentary struck a chord with creatives the world over. While there's little doubt that Stephen Fry's narration would improve a listener's enjoyment of almost any historical documentary, using Fry's voice without compensation or attribution is a whole different ball of wax.
Imagine having Stephen Fry's voice narrate your how-to manuals, for instance. An ingenious marketer might easily, and rightly, conclude that Generative AI has the potential to spice up even the dullest subject matter. But making it dead simple for anyone to give, ahem, voice to their creativity doesn't make doing so a good idea. One obvious issue is that Stephen Fry may not want to be associated with narrating how-to manuals. As a public personality, he should have agency over how his voice is used. Any number of other creators have equally strenuous objections to having their creativity co-opted without their permission or knowledge.
With great Generative AI comes great copyright problems. The US Copyright Office recently granted a limited copyright registration to a graphic novel, Zarya of the Dawn. The registration protected the text and the selection and arrangement of the images, but it denied protection for the individual images themselves, which were machine generated. US copyright is explicitly limited to the works of human creators. Clearly there are many pitfalls and consequences for the eager marketer who wants to use tools like Midjourney and DALL-E in a marketing project.
There is no clear answer for what marketers should watch out for because the law isn't yet settled. Keeping in mind that the US Copyright Office only registers works of human creation, brand campaigns reliant on generative AI may not be able to easily secure their originality with copyright. The same goes for copy. The expectation of a client paying for authorship of copy, whether for use on a website or in other marketing materials like sell sheets, is that they are engaged in a "work for hire" relationship and that they get to keep and own the work product. If a freelancer or agency uses Generative AI to "write" the copy, in part or in whole, that potentially throws the whole relationship between client and consultant out of whack.
So what's a marketer to do? There aren't many clear-cut answers here, since the law isn't settled and most answers end in "it depends." For in-house marketers, one clear path is to work with IT to make sure that any third-party AI services used in the creation of copy, images, or video are consistent with governance, risk, and compliance policies.
The real wake-up call is for the entire Generative AI ecosystem. If this technology is to meet its transformative potential, it has to support human creativity, not supplant it. The best way to achieve that is through transparency. Open-source models with clear documentation of the data used for training are a start, as are explainability standards and usage guidelines.
While transparency can clear up a lot of this doubt, it isn't a single step. As Reid Blackman and Beena Ammanath point out, "transparency is not something that happens at the end of deploying a model when someone asks about it. Transparency is a chain that travels from the designers to developers to executives who approve deployment to the people it impacts and everyone in between." Everyone in the Generative AI world should want that.
At bookend AI, we are driven by the motivation to make Safe AI simple. We believe AI is safe when it is transparent, trustworthy, accessible, and assistive. To make that happen, we're building a platform that makes it easy for enterprise application developers, including marketing developers, to use Generative AI to power their applications. Select, secure, and scale models in minutes with bookend AI.