
Scammers Use AI to Clone Voices and Trick People in Emergencies

  • Scammers are using AI to clone people's voices from short audio clips, then placing fake calls to family and friends requesting money.

  • They only need a few seconds of audio to make a realistic voice clone.

  • The calls try to create urgency, asking for cash transfers to deal with fake emergencies.

  • Scammers research targets online for personal info to make the calls more believable.

  • People should be cautious about posting audio and video of themselves online, and wary of urgent calls from unknown numbers.

foxbusiness.com
Relevant topic timeline:
Thousands of websites belonging to US government agencies, universities, and professional organizations have been hijacked over the last half decade. The scams aim to trick children into downloading apps or malware, or into submitting personal details, in exchange for nonexistent rewards in popular games like Fortnite and Roblox. The compromises are linked to affiliate users of the advertising company CPABuild, which pushes advertising campaigns into compromised infrastructure.
Mandiant has observed the use of AI-generated content in politically motivated online influence campaigns since 2019. Generative AI models make it easier to create convincing fake videos, images, text, and code, posing a growing threat; while the impact of these campaigns has been limited so far, AI's role in digital intrusions is expected to grow.
Hackers are finding ways to exploit AI chatbots, such as ChatGPT, through social engineering. At a recent Def Con event, participants were challenged to crack AI-powered chatbots and expose vulnerabilities; one participant manipulated a chatbot by providing a false identity and tricking it into revealing a credit card number. Exploiting AI chatbots this way is a growing trend as these tools become more integrated into everyday life.
A recent study by the Observatory on Social Media at Indiana University revealed that X (formerly Twitter) has a bot problem: approximately 1,140 AI-powered accounts generate fake content, steal selfies to create fake personas, promote suspicious websites, spread harmful content, and even attempt to steal from existing crypto wallets. These accounts interact with human-run accounts and distort online conversations, making their activity increasingly difficult to detect and underscoring the need for countermeasures and regulation.
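One simple signal such studies lean on is repetitive, near-duplicate posting. The sketch below flags accounts that repeatedly post near-identical text; it is a toy heuristic, not the Indiana University team's actual methodology, and the input format and thresholds are assumptions for illustration.

```python
from collections import defaultdict
from difflib import SequenceMatcher

def flag_suspected_bots(posts, similarity=0.9, min_repeats=5):
    """Flag accounts whose consecutive posts are near-duplicates.

    posts: iterable of (account_id, text) tuples (assumed format).
    Thresholds are illustrative, not tuned values from the study.
    """
    by_account = defaultdict(list)
    for account, text in posts:
        by_account[account].append(text)

    flagged = set()
    for account, texts in by_account.items():
        repeats = sum(
            1
            for prev, curr in zip(texts, texts[1:])
            if SequenceMatcher(None, prev, curr).ratio() >= similarity
        )
        if repeats >= min_repeats:
            flagged.add(account)
    return flagged
```

Real bot-detection systems combine many such signals (timing patterns, network structure, profile features) rather than relying on text similarity alone.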
The proliferation of deepfake videos and audio, fueled by the AI arms race, is impacting businesses by increasing the risk of fraud, cyberattacks, and reputational damage, according to a report by KPMG. Scammers are using deepfakes to deceive people, manipulate company representatives, and swindle money from firms, highlighting the need for vigilance and cybersecurity measures in the face of this threat.
AI is being used by cybercriminals to create more powerful and authentic-looking emails, making phishing attacks more dangerous and harder to detect.
There has been a surge in online scams in India, with individuals losing money to fraudsters who lure them with the promise of easy income through part-time employment or online tasks, emphasizing the need for vigilance and precautionary measures to avoid falling victim to these scams.
Seniors are increasingly falling victim to online scams, losing thousands of dollars to cyber con artists who use artificial intelligence, social engineering, and widely-available apps to target them, according to a report from the FBI.
The Prescott Valley Police Department warns of the "Grandparent Scam" where scammers use AI technology to create realistic audio of a family member to urgently ask for money.
Scammers are increasingly using artificial intelligence to generate voice deepfakes and trick people into sending them money, raising concerns among cybersecurity experts.
The increasing sophistication of AI phishing scams poses a significant threat to crypto organizations as scammers utilize AI tools to execute highly convincing and successful attacks, warns Richard Ma, co-founder of Quantstamp. These AI-powered attacks involve scammers posing as key personnel within targeted companies to establish legitimacy and request sensitive information, making it crucial for individuals and organizations to avoid sending sensitive information via email or text and instead utilize internal communication channels like Slack. Investing in anti-phishing software is also advised to filter out automated emails from bots and AI.
Tech scammers are using phony cryptocurrency accounts to dupe victims into investing large sums of money, resulting in billions of dollars in stolen cryptocurrency and financial ruin for many victims.
A father fell for a phone scam after hearing what he believed was his daughter's voice, along with threats of harm against her, highlighting how advanced technology can make fraudulent calls sound convincing.
Voice cloning technology, driven by AI, poses a risk to consumers as it becomes easier and cheaper to create convincing fake voice recordings that can be used for scams and fraud.
Generative AI is empowering fraudsters with sophisticated new tools, enabling them to produce convincing scam texts, clone voices, and manipulate videos, posing serious threats to individuals and businesses.
Adversaries and criminal groups are exploiting artificial intelligence (AI) technology to carry out malicious activities, according to FBI Director Christopher Wray, who warned that while AI can automate tasks for law-abiding citizens, it also enables the creation of deepfakes and malicious code, posing a threat to US citizens. The FBI is working to identify and track those misusing AI, but is cautious about using it themselves. Other US security agencies, however, are already utilizing AI to combat various threats, while concerns about China's use of AI for misinformation and propaganda are growing.
AI-aided cyber scams, including phishing emails, smishing texts, and social media scams, are on the rise, with Americans losing billions of dollars each year; however, online protection company McAfee has introduced an AI-powered tool called AI Scam Protection to help combat these scams by scanning and detecting malicious links in real time.
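McAfee has not published how its scanner works; as a rough sketch of what real-time link checking can look like, the snippet below extracts URLs from a message and tests them against a blocklist and two common phishing heuristics. The blocklist entries and patterns are made up for illustration.

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist; a real product would query a continuously
# updated reputation service rather than a hard-coded set.
BLOCKLIST = {"evil.example", "phish.example"}
SUSPICIOUS = [
    re.compile(r"^\d{1,3}(\.\d{1,3}){3}$"),  # raw IP address as hostname
    re.compile(r"xn--"),                     # punycode (homograph) domains
]

def scan_message(text: str) -> list[str]:
    """Return URLs in `text` that look malicious under these toy rules."""
    flagged = []
    for url in re.findall(r"https?://\S+", text):
        host = urlparse(url).hostname or ""
        if host in BLOCKLIST or any(p.search(host) for p in SUSPICIOUS):
            flagged.append(url)
    return flagged

print(scan_message("Urgent! Verify your account at http://evil.example/login"))
```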
The use of generative AI poses risks to businesses, including the potential exposure of sensitive information, the generation of false information, and the potential for biased or toxic responses from chatbots. Additionally, copyright concerns and the complexity of these systems further complicate the landscape.
Scammers are increasingly impersonating property owners and attempting to sell homes or vacant land they do not own, making buyers vulnerable to losing their down payments in seller impersonation fraud.
Criminals are increasingly using artificial intelligence, including deepfakes and voice cloning, to carry out scams and deceive people online, posing a significant threat to online security.
Real estate companies are using AI assistants to communicate with tenants and apartment seekers, with some individuals being deceived into believing they were interacting with human brokers.
Artificial intelligence (AI) threatens to undermine advisors' authenticity and trustworthiness as machine learning algorithms become better at emulating human behavior and conversation, blurring the line between real and artificial personas and causing anxiety about living in a post-truth world inhabited by AI imposters.
AI-driven fraud is increasing, with thieves using artificial intelligence to target Social Security recipients. Many beneficiaries are unaware of these scams, but guidelines exist for protecting personal information and staying safe from them.
A new phone scam in New York City involves thieves using artificial intelligence to simulate a distressed child calling for help, leading parents to hand over cash for bail.
Call Assistant AI has released an app that allows users to block unwanted spam calls and provides features such as call screening, call blocking, hold music, voicemail service, and smart scheduling to enhance call management.
Scammers using AI to mimic human writers are becoming more sophisticated, as evidenced by a British journalist discovering a fake memoir about himself published under a different name on Amazon, leading to concerns about the effectiveness of Amazon's enforcement policies against fraudulent titles.
Celebrities such as Tom Hanks and Gayle King have become victims of AI-powered scams, with AI-generated versions of themselves being used to promote fraudulent products, raising concerns about the use of AI in digital media.
AI technology is making advancements in various fields such as real estate analysis, fighter pilot helmets, and surveillance tools, while Tom Hanks warns fans about a scam using his name.
Artificial Intelligence is being misused by cybercriminals to create scam emails, text messages, and malicious code, making cybercrime more scalable and profitable. However, the current level of AI technology is not yet advanced enough to be widely used for deepfake scams, although there is a potential future threat. In the meantime, individuals should remain skeptical of suspicious messages and avoid rushing to provide personal information or send money. AI can also be used by the "good guys" to develop software that detects and blocks potential fraud.
Americans are projected to lose over $90 billion to phone scams by the end of this year, as robocalls and robotexts continue to increase in volume and sophistication, prompting experts to advise caution and updated contact lists.
Online scammers posing as fake tech support specialists, referred to as "phantom hackers," are preying on older adults and fraudulently extorting large sums of money from them, with one Navy veteran losing a staggering $800,000 to these scams.
Cybersecurity firm Avast has exposed an upgraded tool called "LoveGPT" that uses artificial intelligence to create fake profiles on dating apps and manipulate unsuspecting users, with capabilities to bypass CAPTCHA, interact with victims, and anonymize access using proxies and browser anonymization tools. The tool uses OpenAI's AI models to generate interactions, and it can create convincing fake profiles on at least 13 dating sites while scraping users' data. Romantic scams are becoming more common, ranking among the top five scams, and users are advised to be cautious of AI-powered deception on dating apps.
The prevalence of online fraud, particularly synthetic fraud, is expected to increase due to the rise of artificial intelligence, which enables scammers to impersonate others and steal money at a larger scale using generative AI tools. Financial institutions and experts are concerned about the ability of security and identity detection technology to keep up with these fraudulent activities.
Experts recommend setting up anti-impostor code words with friends and family to protect against scams and hoaxes pretending to be a loved one in distress, although the risk of such incidents is relatively low.
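The code-word check happens in conversation, not software, but the underlying idea is a simple challenge-response. Purely as a toy illustration, the snippet below stores only a hash of a made-up family code word and compares answers in constant time:

```python
import hashlib
import hmac

# Store a hash of the agreed code word, never the plaintext.
# "blue giraffe" is a made-up example, not a recommendation.
STORED_HASH = hashlib.sha256(b"blue giraffe").hexdigest()

def verify_code_word(answer: str) -> bool:
    """Check a caller's answer; compare_digest resists timing attacks."""
    candidate = hashlib.sha256(answer.strip().lower().encode()).hexdigest()
    return hmac.compare_digest(candidate, STORED_HASH)

print(verify_code_word("Blue Giraffe"))  # True
print(verify_code_word("wrong word"))    # False
```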
The emergence of AI tools designed for cybercrime, such as WormGPT and FraudGPT, highlights the potential risks associated with AI and the urgent need for responsible and cautious usage.
New York City is using artificial intelligence to send robocalls featuring Mayor Eric Adams' voice in different languages, drawing criticism from privacy advocates who argue the practice is deceptive and reminiscent of "deepfakes."
American venture capitalist Tim Draper warns that scammers are using AI to create deepfake videos and voices in order to scam crypto users.
Cryptocurrency scammers are now using Artificial Intelligence (AI) to steal cryptocurrencies, and investors need to be cautious and take necessary measures to protect themselves and their investments. Scammers commonly rely on social engineering and trick users into revealing their private keys or wallet addresses. Warning signs of cryptocurrency scams include the absence of a whitepaper, lack of relevant background information on the cryptocurrency, and promises of guaranteed returns. To protect your crypto wallet, use a secure and encrypted wallet, avoid sharing your wallet address, and enable two-factor authentication. Additionally, offline crypto storage is considered the safest option. AI can also be used to detect and prevent hacking attempts in the cryptocurrency space.
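The two-factor authentication mentioned above is most often implemented with time-based one-time passwords (TOTP, RFC 6238). Below is a minimal sketch using only the Python standard library; the demo secret is made up, as real services generate one during 2FA setup.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                    # 30 s time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret
```

Both the wallet app and the server derive the same code from the shared secret and the current time, so a scammer who phishes a password alone still cannot log in.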