News

AI Could Stop Snooping By Predicting What You’ll Say

But is it a good idea to add another listening device?

  • Researchers have devised a method to scramble speech so that rogue microphones cannot capture our conversations.
  • The method is significant because it works on streaming audio in real time and needs minimal training.
  • Experts applaud the research but think it does little for the average smartphone user.

BraunS/Getty Images

We’re surrounded by smart devices with microphones, but what if they’ve been compromised to eavesdrop on us?

In an effort to shield our conversations from snoopers, researchers at Columbia University have developed a Neural Voice Camouflage method that disrupts automatic speech recognition systems in real time without inconveniencing people.

“With the invasion of [smart voice-activated devices] into our lives, the idea of privacy starts to evaporate as these listening devices are always on and monitoring what is being said,” Charles Everette, Director of Cyber Advocacy at Deep Instinct, told Lifewire via email. “This research is a direct response to the need to hide or camouflage an individual’s voice and conversations from these electronic eavesdroppers, known or unknown in an area.”

Talking Over

The researchers have developed a system that generates whisper-quiet sounds you can play in any room to block rogue microphones from spying on your conversations.

The way this type of technology counters eavesdropping reminds Everette of noise-canceling headphones. But instead of generating quiet sounds to cancel out background noise, the researchers broadcast background sounds that disrupt the artificial intelligence (AI) algorithms that interpret sound waves into intelligible audio.
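At its core, this is an adversarial attack on the recognizer. The sketch below is a toy illustration of the general recipe, not the authors’ implementation: treat the noise waveform as a trainable tensor and push it in the direction that maximizes a speech model’s transcription loss, while keeping its amplitude whisper-quiet. `ToyASR` and the fixed frame labels are hypothetical stand-ins for a real trained recognizer and its transcript.

```python
import torch
import torch.nn as nn

class ToyASR(nn.Module):
    """Hypothetical stand-in for a speech recognizer: waveform -> per-frame letter logits."""
    def __init__(self, n_classes=29):  # e.g. 26 letters + space + apostrophe + blank
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=160, stride=80),  # 10 ms windows, 5 ms hop at 16 kHz
            nn.ReLU(),
            nn.Conv1d(16, n_classes, kernel_size=1),
        )

    def forward(self, wave):                # wave: (batch, samples)
        return self.net(wave.unsqueeze(1))  # -> (batch, classes, frames)

model = ToyASR()
speech = torch.randn(1, 16000)              # one second of 16 kHz "speech"
labels = torch.randint(0, 29, (1, 199))     # stand-in per-frame transcript labels

noise = torch.zeros_like(speech, requires_grad=True)
optimizer = torch.optim.Adam([noise], lr=1e-3)
for _ in range(100):
    optimizer.zero_grad()
    logits = model(speech + noise)
    # Gradient *ascent* on the recognizer's loss: make transcription worse.
    (-nn.functional.cross_entropy(logits, labels)).backward()
    optimizer.step()
    with torch.no_grad():
        noise.clamp_(-0.01, 0.01)           # keep the perturbation whisper-quiet
```

In the real method, the perturbation must also be computed ahead of the speech it masks, which is the constraint the researchers describe next.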

Such mechanisms to camouflage a person’s voice aren’t unique, but what sets Neural Voice Camouflage apart from other methods is that it works on streaming audio in real time.

“To operate on live speech, our approach must predict [the correct scrambling audio] into the future so that they may be played in real-time,” the researchers note in their paper. Currently, the method works for the majority of the English language.

Brand3D CEO Hans Hansen told Lifewire that the research is crucial as it addresses a major weakness in current AI systems.

In an email interview, Hansen explained that current deep learning AI systems in general, and natural speech recognition systems in particular, work only after processing millions of speech records collected from thousands of speakers. In contrast, Neural Voice Camouflage works after conditioning itself on just two seconds of input speech.
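To make those two constraints concrete, prediction into the future and a two-second conditioning window, here is a minimal streaming-loop sketch. Everything in it is a hypothetical stand-in: a real system would run a trained predictive model where this one emits shaped random noise.

```python
import numpy as np

SAMPLE_RATE = 16_000
CONTEXT_SEC = 2.0        # the two seconds of input speech noted above
CHUNK = 1_024            # samples of masking audio emitted per step

class PredictiveScrambler:
    """Hypothetical stand-in: keeps a rolling two-second context and emits
    masking audio for the *next* chunk, before that speech is spoken."""
    def __init__(self):
        self.context = np.zeros(int(SAMPLE_RATE * CONTEXT_SEC))

    def observe(self, chunk):
        # Slide the newest audio into the rolling two-second context window.
        self.context = np.concatenate([self.context[len(chunk):], chunk])

    def predict_mask(self):
        # A trained model would predict an adversarial waveform from the
        # context; shaped random noise stands in for it here.
        scale = 0.05 * (np.std(self.context) + 1e-8)
        return scale * np.random.randn(CHUNK)

scrambler = PredictiveScrambler()
for _ in range(10):                      # fake ten chunks from a microphone
    mask = scrambler.predict_mask()      # played *now*, ahead of the speech it masks
    heard = np.random.randn(CHUNK)       # stand-in for the live speech chunk
    scrambler.observe(heard)
```

The essential point is the ordering inside the loop: the masking audio for a chunk is computed and played before that chunk of speech exists, using only what has already been heard.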

“Personally, if I am concerned about devices listening in, my solution would not be to add another listening device that seeks to generate background noise.”

Wrong Tree?

Brian Chappell, chief security strategist at BeyondTrust, believes the research is most beneficial to business users who fear being in the midst of compromised devices that listen for keywords indicating valuable information is being spoken.

“Where this technology would potentially be more interesting is in a more authoritarian surveillance state where AI video and voice print analysis is used against citizens,” James Maude, lead cybersecurity researcher at BeyondTrust, told Lifewire via email.

Maude suggested that a better alternative would be to implement privacy controls on how data is captured, stored, and used by these devices. Additionally, Chappell believes the usefulness of the researchers’ method is limited since it isn’t designed to stop human eavesdropping.

“Be aware that for the home, at least in theory, using such a tool will cause Siri, Alexa, Google Home, and any other system activated by a spoken trigger word to ignore you,” Chappell said.

A businessman hiding under a conference table takes notes during a meeting with other people.

Jacobs Stock Photography Ltd/Getty Images

However, experts believe that with the increasing integration of AI/ML-specific technology in our smart devices, it is quite possible that this technology will find its way into our phones in the near future.

Maude worries that AI technologies can quickly learn to distinguish between noise and real audio. While the system might be successful at first, he believes it could quickly turn into a game of cat and mouse as listening devices learn to filter out the jamming sounds.

More worryingly, Maude noted that anyone using it could actually attract attention, as disrupting voice recognition would seem unusual and could indicate you’re trying to hide something.

“Personally, if I am concerned about devices listening in, my solution would not be to add another listening device that seeks to generate background noise,” Maude said. “Especially as it just increases the risk of a device or app being hacked and able to listen to me.”

