OpenAI’s Approach to Synthetic Voices in an Era of Misinformation


A new technology developed by OpenAI, called Voice Engine, has been withheld from general release over concerns about potential misuse during significant global events, particularly elections. The tool can replicate a person’s voice from just 15 seconds of recorded audio. It was first developed in 2022 and has been used in the text-to-speech functionality of ChatGPT, OpenAI’s flagship AI product, but the company has refrained from publicly unveiling its full capabilities, prioritizing a cautious approach to its release.

OpenAI aims to stimulate discussions on the responsible implementation of synthetic voices and how society can adapt to this emerging technology. Through small-scale tests and dialogue, they intend to make informed decisions regarding its future deployment. Examples shared by the company include Age of Learning using Voice Engine for scripted voiceovers in educational technology and HeyGen offering users the ability to generate translations while preserving the original speaker’s accent and voice.

Notably, researchers at the Norman Prince Neurosciences Institute used Voice Engine to restore the voice of a young woman who had lost the ability to speak due to a brain tumor, working from a poor-quality 15-second clip of her speech.

While audio generated by OpenAI’s Voice Engine is watermarked for traceability, competitors have already released similar tools to the public. Companies such as ElevenLabs can create full voice clones from just a few minutes of audio. To mitigate potential misuse, ElevenLabs has implemented safeguards, such as blocking the creation of voice clones that mimic political candidates actively involved in elections, starting with those in the US and the UK.

OpenAI emphasizes the importance of protecting individuals’ voices in AI and suggests policies to address the challenges posed by increasingly convincing generative models. They advocate for public education on AI technologies’ capabilities and limitations, including the potential for deceptive content. Additionally, OpenAI’s partnerships require explicit consent from the original speaker, and they do not allow developers to enable individual users to create their own voices.

