
Fake videos mean you can't trust everything you watch. Now, audio deepfakes mean you can no longer trust your ears either. Was that really the president declaring war on Canada? Is it really your dad on the phone asking for your email password?

An audio deepfake uses a "cloned" voice, one potentially indistinguishable from the real person's, to produce synthetic audio.

"It's like Photoshop for speech," said Zohaib Ahmed, CEO of Resemble AI, of her company's speech cloning technology.

However, a bad Photoshop job is easily debunked. One security company we spoke to said that people can generally only guess whether deepfake audio is real or fake with about 57 per cent accuracy - little better than flipping a coin.
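To see why 57 per cent accuracy is so close to chance, here is a quick back-of-the-envelope check. The 20-clip sample size is an illustrative assumption, not from the article: over 20 clips, a 57-per-cent-accurate listener expects to get about 11 right, and we can compute how often a pure coin flip scores at least that well.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at
    least k correct answers out of n guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Over 20 clips, a 57%-accurate listener expects ~11 correct.
# How often does a pure coin flip (p = 0.5) do at least that well?
print(round(binom_tail(20, 11, 0.5), 3))  # → 0.412
```

A blind coin flip matches or beats the "skilled" listener's expected score roughly 41 per cent of the time, which is why such listeners are effectively indistinguishable from guessing.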

Also, because many voice recordings come from low-quality phone calls (or were recorded in noisy places), audio deepfakes can be even harder to detect. The poorer the sound quality, the harder it is to pick up the telltale signs that a voice isn't real.

But why would anyone need a Photoshop for voices, anyway?


An article by Munna Suprathik