How we learned to stop worrying and love AI.
The industry’s response to the emergence of Generative AI has been wide and varied, from welcoming our robot overlords to fears that sentient computers will replace us all. Or, you know, Terminators. But what’s it really like working with these tools and platforms? Join us on a (mid)journey into the world of AI-dvertising, where we share the good, the bad, and the ugly… like, say, what generative AI does with hands. 😳
Our first foray into professionally using AI-generated art came as part of a campaign for our client Erik’s Delicafé. The California-based sandwiches, salads, soups and sweets brand turns fifty this year, so we decided to take consumers on a visual tour through their five fantastic decades. Or more specifically, their five groovy, awesome, fly, OMG and fire decades (one might say eras, but one’s lawyers might advise one not to 😉).
There’s a mutual love affair between Erik’s and their consumers – people are passionate about the brand and its quirky “Crafted with Character” personality. The idea of character became our hook, with a vision of showcasing fifty years of characters at Erik’s. Which is all well and good if you’ve got a massive budget to cast, stage, set and shoot an elaborate photo session, commission an expensive series of intricate illustrations or break out the big licensing bucks; but, to put it simply… we didn’t. And truth be told, most clients don’t either.
That’s where AI came in – or to be more specific, that’s where our stellar Art Director Amy Buchanan brought AI in. An early tissue session featured this as her contribution…
…and that inspiration launched what became the “Take Me Back” campaign. Using Midjourney through Discord, Amy created an evocative vision of what it must have felt like to walk into an Erik’s 50 years ago: cinematic yet hyperrealistic. At this point you might think great – one down, four to go, just feed the machine some words and it’ll spit out the art, call it a day and let’s break for lunch. Yeah… not so much.
While it’s certainly true that tools like these are incredibly powerful, it became immediately clear that they still need to be guided by a talented designer… who’s also a writer. Because these things need words. Typing in “70s guy at deli” might point you to the team photo of what you’re looking for, but you won’t find your star unless you’re able to articulate your vision in exacting detail. But in a twist, despite their desire for literary clarity, generative AI platforms are oddly terrible at creating text. Some platforms like Adobe’s Firefly do interesting things dimensionalizing or even reconceptualizing existing letterforms, but it turns out that recreating your signature in letters covered in leaves is far easier than coaxing out the correct spelling of your brand’s name.
Here is a sample prompt we used to create our characters:
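If you haven’t wrangled Midjourney prompts before, a hypothetical example in this vein – illustrative only, not the campaign’s actual prompt – might look something like:

```
cinematic hyperrealistic photo of a 1970s man with feathered hair and a
mustache ordering a sandwich at a cozy California deli counter, warm film
grain, natural light, nostalgic mood --ar 4:5
```

Note how much of the prompt is spent describing mood, lighting and era details rather than the subject itself – that’s the “designer who’s also a writer” skill in action.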
And once your descriptive prompt becomes an AI vision, there are a surprising number of “last mile” design needs that aren’t immediately apparent. First there’s the aforementioned hands issue. Right now AI has a real problem getting people’s digits right – from oddly low-poly mitten blocks to so many extra fingers they become centipedal nightmare fuel – so prepare to do some work if your AI image includes real people. Also keep those Photoshop skills sharp for feedback rounds, because it turns out that making client changes can be a challenge (more than usual – jk jk we kid 😉). Often even small tweaks create huge ripples that rework your entire image. See? AI can’t actually render your diploma worthless (although you can ask it to render a worthless diploma, which is actually fairly successful given its difficulties accurately forming words). Speaking of Photoshop, Adobe has introduced a “generative fill” function that lets you isolate sections of an image to make iterations, but as of this writing it’s still in beta. Regardless, these AI models certainly don’t have a designer’s eye for things like color, type and composition.
Before (untouched AI image) and After (agency designed and client approved).
At the end of the day (ok, several days) Amy was able to create a series of evocative images that represented Erik’s throughout the years – after doing a significant amount of work to bring them up to snuff. They became the backbone of Erik’s ongoing anniversary celebration, as these unique visualizations take consumers back across the decades and point the way toward promotions, offers, incentives and deals that will guide Erik’s into the future.
Using AI to create unique, ownable art enabled E29 to execute what could’ve been a multi-million-dollar campaign on a $25k budget – completing a journey that was anything but mid. This campaign was the perfect representation of the E29 ethos: relentlessly forward-thinking and innovative, consistently driven to help clients do big things no matter their budgets. And while fears for our creative careers may be a bit premature, generative AI is undoubtedly going to make an impact on our industry. It’s staggering to envision what we could’ve achieved with this technology fifteen, even five years ago. In properly trained hands like Amy’s and ours, you can harness this tech to do work that would’ve taken thousands of hours (and even more dollars).
For now, we’re planning on incorporating AI imagery where it makes sense – in concept-driven campaigns like this one, in storyboards for content concepts, and when your ideas are just too out there for traditional tools to visualize. Always in the hands of creatives who keep your brand’s goals, promise, vision and vibes in mind, and deployed to stretch any size budget to incredible levels.
Unless you’ve got a brand that’s all about hands. We’d be more than happy to help you move gloves, paint nails or create recipes… but we’ll stick with the traditional to elevate those ideas and spare you any multi-fingered nightmares.
Contact E29 for innovative solutions to your brand’s challenges – you’ll hear back from a real person, not a robot. We promise!