Pindrop voice report finds 90 fraud attacks every minute
12 November 2019 19:36 GMT

Pindrop has released its annual Voice Intelligence Report uncovering skyrocketing fraud rates, with 90 voice channel attacks occurring every minute in the US. Additional key findings include: 

·  Voice fraud remains a major threat, with rates climbing more than 350 percent from 2014 to 2018

·  The 2018 fraud rate was 1 in 685 calls, a five-year high

·  Insurance voice fraud has increased by 248 percent as fraudsters chase policies that exceed $500,000

·  In 2018, 446 million records were exposed from more than 1,200 data breaches

·  The industries facing the highest fraud risk, from most to least frequent, include retail (1 fraudulent call in 325), card issuers (1 in 740), banking (1 in 755), credit unions (1 in 1,339), brokerages (1 in 1,742), and insurance (1 in 7,500)

“Cybersecurity crimes are increasing each and every day as fraudsters and the technologies they use become more sophisticated,” said Vijay Balasubramaniyan, CEO and cofounder of Pindrop. “As we examine the biggest threats and trends impacting the insurance, financial, and retail sectors and prepare to battle emerging technologies, we urge enterprises to assess their fraud and authentication strategies to ensure they are prepared to safeguard their customers’ assets.”

Fraud losses will continue to rise as bad actors make fewer attempts but inflict bigger losses on companies through more sophisticated, targeted tactics. The report details emerging fraud threats, the birth of the conversational economy, and why voice-authenticated customer experience is the next revenue battleground for enterprises.

Emerging Threats in Deepfakes and Synthetic Voices

Deepfakes made headlines in 2019 as creators and fraudsters alike put the technology to both entertaining and malicious use. The impact ranged from non-consensual pornography featuring Scarlett Johansson to harmless novelties such as Steve Buscemi’s face swapped onto Jennifer Lawrence’s body. Deepfakes have also entered the political arena, with federal and state lawmakers passing legislation to combat the technology.

In the report, Pindrop highlights how synthetic voice attacks are poised to become the next vector for data breaches. In the near future, fraudsters will call into contact centers using synthetic voices to probe whether companies, particularly in the banking sector, have the technology in place to detect them. These attacks depend on deep learning and Generative Adversarial Networks (GANs), a deep neural network architecture in which two networks are trained against each other. GANs can learn to mimic any distribution of data, augmenting images with animation or video with sound. Applied to speech, these systems generate audio from scratch, analyzing waveforms from a database of human speech and re-creating them at a rate of 24,000 samples per second. The resulting voices include subtleties such as lip smacks and accents, making it easier for bad actors to commit breaches.
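To make the adversarial mechanism concrete, below is a minimal, illustrative sketch of the GAN pattern described above: one network generates raw waveforms at 24,000 samples per second while a second tries to tell real speech from synthetic, and each improves against the other. This is a toy PyTorch example, not Pindrop's tooling or any real deepfake system; all layer sizes, names, and the random stand-in "speech database" are assumptions chosen for brevity.

```python
# Toy GAN for raw-waveform audio, illustrating the two-network architecture
# described in the report. NOT a production deepfake model; sizes are arbitrary.
import torch
import torch.nn as nn

SAMPLE_RATE = 24_000          # the report cites 24,000 samples per second
CLIP_SECONDS = 0.1            # generate 0.1 s clips to keep the example small
N_SAMPLES = int(SAMPLE_RATE * CLIP_SECONDS)
LATENT_DIM = 64

class Generator(nn.Module):
    """Maps random noise to a synthetic waveform in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, N_SAMPLES), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a waveform: high logit = judged real, low = judged synthetic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SAMPLES, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial round: D learns to spot fakes, G learns to fool D."""
    batch = real_batch.size(0)
    z = torch.randn(batch, LATENT_DIM)
    fake = G(z)

    # Discriminator update: push real clips toward 1, generated clips toward 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_batch), torch.ones(batch, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make D label the fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in for a "database of human speech": random waveforms, demo only.
real = torch.rand(16, N_SAMPLES) * 2 - 1
print(train_step(real))
```

Real voice-cloning systems use far larger convolutional or autoregressive architectures than this sketch, but the training loop, two networks pitted against each other until the fakes become hard to distinguish, is the same in spirit.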