Amazon has unveiled an experimental Alexa feature that lets the AI assistant mimic the voices of users’ dead relatives.
The company demonstrated the feature at its annual re:MARS conference, showing a video in which a child asks Alexa to read her a bedtime story in the voice of her dead grandmother.
“As you saw in this experience, instead of Alexa’s voice reading the book, it’s the child’s grandmother’s voice,” said Rohit Prasad, Amazon’s lead researcher for Alexa AI. Prasad introduced the clip by saying that adding “human traits” to AI systems became increasingly important “in these times of the ongoing pandemic, when so many of us have lost someone we love.”
“While AI may not eliminate the pain of loss, it can definitely make their memories last,” Prasad said. You can see the demo itself below:
Amazon has not given any indication as to whether this feature will ever be publicly released, but says its systems can learn to imitate someone’s voice from just one minute of recorded audio. In an age of abundant videos and voice memos, this means that cloning the voices of loved ones – or of anyone else – is well within the reach of the average consumer.
Although this specific application is already controversial, with users on social media calling the feature “scary” and a “monstrosity”, such AI voice mimicry has become increasingly common in recent years. These imitations are often known as “audio deepfakes” and are already used regularly in industries such as podcasting, film and TV, and video games.
Many audio recording suites, for example, offer users the ability to clone individual voices from their recordings. That way, if a podcast host flubs a line, an audio technician can edit what they said simply by typing in a new script. Generating long stretches of seamless synthetic speech still requires a lot of work, but small edits can be made with a few clicks.
The same technology has been used in film as well. Last year, it was revealed that a documentary about the life of chef Anthony Bourdain, who died in 2018, used AI to clone his voice to read quotes from emails he sent. Many fans were disgusted by the use of the technology, calling it “scary” and “misleading.” Others defended the use of the technology as similar to other reconstructions used in documentaries.
Amazon’s Prasad said the feature could allow customers to have “lasting personal relationships” with the deceased, and it is true that many people around the world already use AI for this purpose. People have created chatbots that mimic dead loved ones, for example by training AI on archived conversations. Adding accurate voices to these systems – or even video avatars – is entirely possible with current AI technology, and is likely to become more widespread.
Whether customers want their dead loved ones to become digital AI dolls or not, however, is a completely different matter.