OpenAI, the generative artificial intelligence (AI) giant and publisher of ChatGPT, on Friday introduced a voice cloning tool whose use will be restricted to prevent fraud and crimes such as identity theft.
This AI model, called “Voice Engine,” can reproduce a person's voice from a 15-second audio sample, according to a statement from OpenAI about the results of a small-scale test.
“We recognize that the ability to generate human-like voices carries significant risks, and is especially important in this election year,” the San Francisco-based company said.
“We are working with U.S. and international partners from government, media, entertainment, education, civil society, and other sectors and taking their feedback into account as we develop the tool.”
In this crucial election year around the world, disinformation researchers are concerned about the misuse of generative AI applications (automated generation of text, images, etc.), especially voice cloning tools, which are cheap, easy to use, and difficult to trace.
OpenAI said it had taken a “cautious and informed approach” ahead of wider distribution of the new tool “due to the potential for misuse of synthetic voices.”
This cautious rollout comes after a major political incident: a consultant working for the presidential campaign of a Democratic rival of Joe Biden developed an automated program that impersonated the US president, who is seeking re-election.
The calls, delivered by telephone in a voice imitating Joe Biden's, urged voters to abstain from voting in the New Hampshire primary.
Since then, the United States has banned calls that use AI-generated cloned voices, in order to combat political and commercial fraud.
OpenAI said that partners testing Voice Engine have accepted rules requiring, among other things, the explicit and informed consent of anyone whose voice is replicated, as well as transparency for listeners: they must be clearly informed that the voices they are hearing are AI-generated.
“We have implemented a range of security measures, including watermarking so we can trace the origin of all voices produced by Voice Engine, as well as proactively monitoring its use,” OpenAI insisted.
Last October, the White House unveiled rules and principles governing the development of artificial intelligence, including transparency.
Joe Biden said he was struck by the idea that criminals could use this technology to trap people by impersonating family members.