Online Scams Soar as AI Makes Cybercrime More Sophisticated

  • Americans lost $10.3 billion to online scams last year, with phishing being the top threat.

  • Scammers are using AI to create increasingly sophisticated fake emails, texts, calls, and social media scams.

  • Gen Z lost $210 million to online scams in 2021, up 2,500% since 2017.

  • New AI tools like McAfee's AI Scam Protection can help combat AI-powered cybercrime.

  • Stay vigilant: don't click unsolicited links, slow down, and use antivirus protection.

Source: usatoday.com
Relevant topic timeline:
Main topic: Cyabra's new tool, botbusters.ai, uses artificial intelligence to detect AI-generated content online. Key points: (1) the tool can identify fake social media profiles, catch catfishers, and determine whether content is AI-generated; (2) it uses machine learning algorithms to analyze content against various parameters and provide a percentage estimate of its authenticity; (3) Cyabra aims to make the digital sphere safer by exposing AI-generated content and helping restore trust in social media. An illustrative sketch of this scoring pattern follows.
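Cyabra has not published botbusters.ai's internals, but the general pattern the summary describes, scoring a piece of content and returning a percentage estimate of authenticity, can be illustrated with a minimal sketch. The toy training data, TF-IDF features, and logistic regression model below are illustrative assumptions, not Cyabra's actual method.

```python
# Minimal sketch of a text classifier that returns a percentage
# "authenticity" estimate, in the spirit of tools like botbusters.ai.
# The training data is a toy stand-in; a real detector would train on
# large labeled corpora and use many more signals than text alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = AI-generated, 0 = human-written.
texts = [
    "As an AI language model, I am delighted to assist you today.",
    "Unlock exclusive rewards now, limited time only, act fast!",
    "ugh my train was 40 min late again, coffee barely saved me",
    "grandma's recipe uses way too much butter but we love it anyway",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def authenticity_score(text: str) -> float:
    """Estimated probability (as a percentage) that `text` is human-written."""
    p_ai = model.predict_proba([text])[0][1]  # column 1 = class "AI-generated"
    return round((1.0 - p_ai) * 100, 1)

print(authenticity_score("As an AI model, I can certainly help with that."))
```

A production system would also weigh account metadata and posting patterns, which is presumably part of what "various parameters" refers to in Cyabra's description.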
Main topic: The use of AI-powered bots and the challenges they pose for organizations. Key points: (1) the prevalence of bots on the internet and their potential threats; (2) the rise of AI-powered bots and their impact on organizations, including ad fraud; (3) the innovative approach of Israeli start-up ClickFreeze in combating malicious bots through AI and machine learning.
Main topic: The role of artificial intelligence (AI) in cybersecurity and the need for regulation. Key points: (1) AI-powered cybersecurity tools automate tasks, enhance threat detection, and improve defense mechanisms; (2) AI brings advantages such as rapid data analysis and continuous learning and adaptation; (3) challenges include potential vulnerabilities, privacy concerns, ethical considerations, and regulatory compliance.
WormGPT, an AI program advertised on the dark web, poses a dangerous cyber threat by allowing anyone to craft convincing scam emails and engage in potential ransomware attacks.
Cybercriminals are increasingly using artificial intelligence (AI) to create advanced email threats, while organizations are turning to AI-enabled email security systems to combat these attacks. The perception of AI's importance in email security has significantly shifted, with the majority of organizations recognizing its crucial role in protecting against AI-enhanced attacks. Strengthening email defenses with AI is vital, and organizations are also looking to extend AI-powered security to other communication and collaboration platforms.
An AI survey identifies Sunday as the ideal email sending time; a surge in cyberattacks is linked to misuse of AI; AI's impact on jobs is more about disruption than elimination; AI integration into combat raises concerns; and AI-based solutions show promise for compliance in the IT/ITeS industry.
Google has introduced new AI-based solutions at its Google Next conference to enhance the cybersecurity capabilities of its cloud and security solutions, including integrating its AI tool Duet AI into products such as Mandiant Threat Intelligence, Chronicle Security Operations, and Security Command Center, to improve threat detection, provide response recommendations, and streamline security practices.
Scammers are increasingly using artificial intelligence to generate voice deepfakes and trick people into sending them money, raising concerns among cybersecurity experts.
Amazon Web Services (AWS) is working to democratize access to artificial intelligence (AI) tools, making it easier for small and medium-sized businesses (SMBs) to benefit from these technologies and disrupt their industries, according to Ben Schreiner, head of innovation for SMBs at AWS. He advises SMBs to identify the business problem they want AI to solve and focus on finding the right tool for that specific problem. Additionally, Schreiner emphasizes the importance of having reliable and clean data to achieve accurate and valuable insights from AI tools. SMBs should also prioritize data security and protect their data from unauthorized use. In the future, AI advancements are expected to enhance customer support tools like chatbots, making them more lifelike and conversational, but not replacing human customer support roles.
The increasing sophistication of AI phishing scams poses a significant threat to crypto organizations as scammers utilize AI tools to execute highly convincing and successful attacks, warns Richard Ma, co-founder of Quantstamp. These AI-powered attacks involve scammers posing as key personnel within targeted companies to establish legitimacy and request sensitive information, making it crucial for individuals and organizations to avoid sending sensitive information via email or text and instead utilize internal communication channels like Slack. Investing in anti-phishing software is also advised to filter out automated emails from bots and AI.
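Quantstamp's advice above stops at "invest in anti-phishing software"; the kind of filtering such software performs can be sketched with a few simple heuristics. The keyword list, flagged domains, and thresholds below are hypothetical illustrations, not any vendor's actual rules.

```python
# Hedged sketch of heuristic checks an anti-phishing filter might run.
# Real products layer hundreds of signals and trained models on top of
# rules like these; every list and threshold here is illustrative.
import re

# Words that often signal manufactured urgency.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "wire"}

def phishing_signals(sender, subject, body):
    """Return a list of red flags found in one message."""
    flags = []
    # Flag sender domains on TLDs often abused for throwaway addresses.
    match = re.search(r"@([\w.-]+)", sender)
    if match and match.group(1).endswith((".ru", ".top", ".xyz")):
        flags.append("suspicious sender domain: " + match.group(1))
    # Two or more urgency words is treated as a signal here.
    words = set(re.findall(r"[a-z]+", (subject + " " + body).lower()))
    hits = URGENCY_WORDS & words
    if len(hits) >= 2:
        flags.append("urgency language: " + ", ".join(sorted(hits)))
    # Links pointing at a bare IP address are a classic lure.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        flags.append("raw IP address link")
    return flags

print(phishing_signals(
    "IT Support <admin@corp-login.xyz>",
    "URGENT: verify your account immediately",
    "Your access is suspended. Confirm at http://192.0.2.10/login",
))
```

Even AI-written phishing tends to keep the urgency-plus-link pattern, which is why simple checks like these can still catch some of it.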
Financial institutions are using AI to combat cyberattacks, utilizing tools like language data models, deep learning AI, generative AI, and improved communication systems to detect fraud, validate data, defend against incursions, and enhance customer protection.
Norton Genie is a free AI-powered scam detector tool that analyzes suspicious messages and websites to determine if they are scams or legitimate, though it may not be accurate all the time due to its early access phase.
Generative AI is empowering fraudsters with sophisticated new tools, enabling them to produce convincing scam texts, clone voices, and manipulate videos, posing serious threats to individuals and businesses.
Adversaries and criminal groups are exploiting artificial intelligence (AI) technology to carry out malicious activities, according to FBI Director Christopher Wray, who warned that while AI can automate tasks for law-abiding citizens, it also enables the creation of deepfakes and malicious code, posing a threat to US citizens. The FBI is working to identify and track those misusing AI, but is cautious about using it themselves. Other US security agencies, however, are already utilizing AI to combat various threats, while concerns about China's use of AI for misinformation and propaganda are growing.
Scammers are using AI tools to clone the voices of their targets and make fraudulent phone calls, impersonating their loved ones and requesting money or sensitive information. Individuals should be cautious about what they post online and be wary of urgent calls from unknown numbers.
Criminals are increasingly using artificial intelligence, including deepfakes and voice cloning, to carry out scams and deceive people online, posing a significant threat to online security.
AI-driven fraud is increasing, with thieves using artificial intelligence to target Social Security recipients, and many beneficiaries are not aware of these scams; however, there are guidelines to protect personal information and stay safe from these AI scams.
Artificial intelligence is being misused by cybercriminals to create scam emails, text messages, and malicious code, making cybercrime more scalable and profitable. However, current AI technology is not yet advanced enough to be widely used for deepfake scams, although it poses a potential future threat. In the meantime, individuals should remain skeptical of suspicious messages and avoid rushing to provide personal information or send money. AI can also be used by the "good guys" to develop software that detects and blocks potential fraud.
Cybersecurity firm Avast has exposed an upgraded tool called "LoveGPT" that uses artificial intelligence to create fake profiles on dating apps and manipulate unsuspecting users, with capabilities to bypass CAPTCHA, interact with victims, and anonymize access using proxies and browser anonymization tools. The tool uses OpenAI's AI models to generate interactions, and it can create convincing fake profiles on at least 13 dating sites while scraping users' data. Romantic scams are becoming more common, ranking among the top five scams, and users are advised to be cautious of AI-powered deception on dating apps.
The prevalence of online fraud, particularly synthetic fraud, is expected to increase due to the rise of artificial intelligence, which enables scammers to impersonate others and steal money at a larger scale using generative AI tools. Financial institutions and experts are concerned about the ability of security and identity detection technology to keep up with these fraudulent activities.
The field of cybersecurity is experiencing significant growth, with AI-powered products playing a crucial role, and AI will eventually surpass human defenders in handling critical incidents and making high-stakes decisions. However, human involvement will still be necessary to train, supervise, and monitor these AI systems: humans must set the right parameters and ensure accurate data input for AI to function effectively. As AI becomes part of the cybersecurity architecture, protecting AI itself from threats and attacks will become a crucial responsibility, and the rise of AI in cybersecurity will require the industry to adapt and evolve.
The emergence of AI tools designed for cybercrime, such as WormGPT and FraudGPT, highlights the potential risks associated with AI and the urgent need for responsible and cautious usage.