
On November 2, 2023, the world welcomed what is probably the last Beatles song ever to be released.
While a 1970s demo tape recording of John Lennon’s “Now and Then” was considered for release in the 1990s, the producers felt the quality of the recording would be a great disappointment to fans, so it was never made public.
But today, we live in the age of AI. When director Peter Jackson was working on the documentary Get Back, based on old grainy film and spotty audio recorded in 1969, he employed machine learning algorithms trained, among other things, to identify different sounds and instruments and isolate each digitally.
Thus Jackson not only gave us a clear, colorful new look at film footage that had seen the wear and tear of decades; he was also able to digitally isolate the voice of Lennon singing “Now and Then” amid the background noise of the cassette tape.
With Consent
But this is just the start of an amazing journey down the generative AI rabbit hole. We’re now beginning to see the virtual and the real blur. We have seen actors brought back to life on the screen; one of the more famous examples is Rogue One, the prequel to Star Wars: A New Hope. In this 2016 film, digital replicas of Peter Cushing and Carrie Fisher were superimposed over the actors Guy Henry and Ingvild Deila, respectively, to ensure continuity in the storyline despite the unavailability of the original actors.
In the Marvel film Captain America: Civil War, the entrepreneurial engineer Tony Stark demonstrates a virtual reality construct in which his younger self interacts with his parents just before they leave, the last time he ever sees them. It is, he says, a way “to clear traumatic memories.”
We’re not at Tony Stark’s level yet, but the company HereAfterAI offers a way today for you to have audio conversations with your parents, or any loved ones, after their deaths, while Storyfile lets you record yourself for future video interactions with people after you pass.
More amazingly, digital images of famous people can interact with others live. In the 2022 season finale of America’s Got Talent, Elvis tribute performer Emilio Santoro sang Presley songs in front of a bank of video cameras and sensors while a digital young Elvis Presley, driven by Santoro’s voice and movements, appeared on a big screen, bringing the King back to life.
Without Consent
But like all powerful tools, generative AI is double-edged. Its ability to create digital replicas of people, their expressions, their voices, and their movements evokes instinctively powerful connections to people we love and admire, and that power cuts both ways.
In the case of the deceased Beatles who perform on “Now and Then,” the estates of Lennon and George Harrison allowed the song to be produced and monetized. But we are seeing many instances where that is not the case, as explained in this Mashable article. And when a digital double is perceived as an attempt to deceive, we call it a deepfake.
There have been controversies around the use of AI to create audio deepfakes, notably a song by the anonymous creator Ghostwriter that used AI-cloned voices of Drake and The Weeknd. Much of the outrage stems from a lack of consent.
The likeness of Tom Hanks was apparently hawking dental services, and the real Hanks vociferously objected. “BEWARE!! There’s a video out there promoting some dental plan with an AI version of me,” he wrote, superimposed over the deepfake ad. “I have nothing to do with it.”
The case of Greg Rutkowski is not about digital copies of his face or voice, but his artwork. Popular for his fantasy landscapes, this Polish artist has been openly ripped off by AI-driven image generators. In fact, according to the MIT Technology Review article “This artist is dominating AI-generated art. And he’s not happy about it,” Rutkowski’s artistic style is one of the most popular on AI art platforms like Stable Diffusion. Yet he sees none of the benefit.
“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”
With Consent (But Not Really)
There are also cases where consent is given, perhaps unwittingly, because we are impatient or unwilling to forgo a short-term gain. We do it all the time when we click “I agree” at the end of a long legal document we don’t read.
In another MIT Technology Review article, “How Meta and AI companies recruited striking actors to train AI,” we learned that out-of-work actors made money without crossing the picket lines by signing up for a research project. For $150 an hour, they would perform in front of cameras. The footage would not be shared with the public. Instead it would be fed into a generative AI algorithm as data, “to help train ‘virtual avatars’ for Meta.”
The irony is that actors who were paid for this research could very well be training AI to replace them and their peers in the future.
Many actors across the industry, particularly background actors (also known as extras), worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements.
Yet again, another example of AI outsourcing humanity to us.
Cory Doctorow is a science fiction author, as well as a journalist and blogger. A recent TikTok post of his stated the issue succinctly and insightfully, sounding the alarm about the existential risk to creatives. There is genuine concern in his tone, one tinged with futility.
There are a bunch of people who work in voice acting who are rightfully worried that their employers would like to put them out of a job, who have argued vociferously for this new right to basically think really hard about their voice, and who are now going into recording sessions with the games companies that constitute the largest plurality of voice actor work, and every session is required to begin with a phrase to the effect of “My name is Cory Doctorow, and I hereby assign in perpetuity, freely and of my own free will the right to train a machine learning system with the recording from this session.” And that has just become the boilerplate for these large concentrated firms. So my concern is that if we just create this right and we give it to people who have no bargaining power, and we say, “go and bargain with it,” that they’ll just bargain it away, that it’s just a roundabout way of telling companies, “Here, go make some AI.”
ARTICLE FAQS
1. What are digital doubles and why are they significant?
Digital doubles are AI-generated replicas of people’s voices, images, or movements, often used in film, music, or advertising. They allow creators to bring back deceased actors or musicians, or to extend performances beyond natural limits. While they can preserve cultural icons, they also raise ethical concerns about consent and authenticity.
2. What is the difference between a digital double and a deepfake?
A digital double is typically created with permission, such as the Beatles’ estates approving Lennon’s voice isolation for “Now and Then.” A deepfake, by contrast, is an unauthorized imitation intended to deceive, such as AI ads using Tom Hanks’ likeness without consent.
3. Why is consent such a major issue in generative AI?
Consent determines whether a digital reproduction is ethical. Without it, creators and performers lose control over their voices, faces, or artistic styles. Even when consent is technically given—like actors signing away rights in contracts—power imbalances often leave individuals with little real choice.
4. How is AI affecting artists and performers economically?
AI systems are trained on existing creative work, often without compensation to the original artists. Visual artists like Greg Rutkowski see their styles replicated across AI platforms with no benefit to them, while voice actors and extras worry about being replaced by AI-generated avatars trained on their own performances.
5. Can digital doubles be beneficial?
Yes, when used responsibly. Families, estates, or performers themselves may choose to use AI to preserve voices, likenesses, or memories. Services like HereAfterAI and Storyfile let people record interactive digital legacies for loved ones. In entertainment, digital doubles can enhance storytelling while respecting the rights of those represented.
6. What is the larger risk of generative AI in this space?
The danger lies in normalizing exploitation, where people sign away rights with little bargaining power, or where cultural icons are endlessly recycled without originality or respect. This can undermine trust, devalue human creativity, and erode livelihoods in the arts.
