Amazon Alexa unveils new technology that can mimic voices, including the dead


Propped atop a bedside table during this week’s Amazon Tech Summit, an Echo Dot was asked to complete a task: “Alexa, can Grandma finish reading me ‘The Wizard of Oz’?”

Alexa’s typically cheerful voice rang out from the kids-themed smart speaker with its panda design: “Okay!” But when the device began narrating a scene of the Cowardly Lion begging for courage, Alexa’s robotic timbre was replaced by a more human-sounding narrator.

“Instead of Alexa’s voice reading the book, it’s the kid’s grandma’s voice,” Rohit Prasad, Alexa’s senior vice president and head artificial intelligence scientist, enthusiastically explained Wednesday during a keynote speech in Las Vegas. (Amazon founder Jeff Bezos owns The Washington Post.)

The demo was the first taste of Alexa’s new feature, which, while still under development, would allow the voice assistant to replicate people’s voices from short audio clips. The goal, Prasad said, is to create greater trust with users by infusing artificial intelligence with “the human attributes of empathy and affection.”

The new feature could “make [loved ones’] memories last,” Prasad said. But while the prospect of hearing a dead relative’s voice may tug at heartstrings, it also raises a myriad of ethical and security concerns, experts said.

“I don’t think our world is ready for easy-to-use voice-cloning technology,” Rachel Tobac, chief executive of San Francisco-based SocialProof Security, told The Washington Post. Such technology, she added, could be used to manipulate audiences through fake audio or video clips.

“If a cybercriminal can easily and credibly replicate another person’s voice with a small voice sample, they can use that voice to impersonate other individuals,” added Tobac, a cybersecurity expert. “That bad actor can then trick others into believing they are the person they are impersonating, which can lead to fraud, data loss, account takeover and more.”

Then there is the risk of blurring the lines between what is human and what is mechanical, said Tama Leaver, a professor of internet studies at Curtin University in Australia.

“You won’t remember that you’re talking to the depths of Amazon … and its data-collection services if it’s speaking with the voice of your grandmother or grandfather or that of a lost loved one.”

“In some ways, it’s like an episode of ‘Black Mirror,’” said Leaver, referring to the sci-fi series that envisions a tech-themed future.


Alexa’s new feature also raises questions about consent, Leaver added, particularly for people who never imagined their voice would be channeled by a robotic personal assistant after they die.

“There’s a real slippery slope there of using deceased people’s data in a way that is both just creepy on the one hand, but deeply unethical on the other, because they never considered those traces being used in that way,” Leaver said.

Having recently lost her grandfather, Leaver said she empathizes with the “temptation” of wanting to hear a loved one’s voice. But the possibility opens a floodgate of implications that society may not be prepared to take on, she said. For instance: who has the rights to the little fragments people leave behind in the ethers of the World Wide Web?

“If my grandfather had sent me 100 messages, should I have the right to feed that into the system? And if I do, who owns it? Does Amazon then own that recording?” she asked. “Have I given away the rights to my grandfather’s voice?”

Prasad did not address such details during Wednesday’s speech. He mused, however, that the ability to mimic voices was a product of “unquestionably living in the golden era of AI, where our dreams and science fiction are becoming a reality.”


Should Amazon’s demo become a real feature, Leaver said, people may need to start thinking about how their voices and likenesses can be used after they die.

“Do I have to think about, in my will, saying: ‘My voice and my pictorial history on social media is the property of my children, and they can decide whether they want to reanimate that in a chat with me or not’?” Leaver wondered.

“That’s a weird thing to say now. But it’s probably a question that we should have an answer to before Alexa starts talking like me tomorrow,” she added.
