1. Home
  2. >
  3. AI 🤖
Posted

Crypto is in ‘arms race’ against AI-powered scams: Quantstamp co-founder

The increasing sophistication of AI phishing scams poses a significant threat to crypto organizations as scammers utilize AI tools to execute highly convincing and successful attacks, warns Richard Ma, co-founder of Quantstamp. These AI-powered attacks involve scammers posing as key personnel within targeted companies to establish legitimacy and request sensitive information, making it crucial for individuals and organizations to avoid sending sensitive information via email or text and instead utilize internal communication channels like Slack. Investing in anti-phishing software is also advised to filter out automated emails from bots and AI.

cointelegraph.com
Relevant topic timeline:
Cybercriminals are increasingly using artificial intelligence (AI) to create advanced email threats, while organizations are turning to AI-enabled email security systems to combat these attacks. The perception of AI's importance in email security has significantly shifted, with the majority of organizations recognizing its crucial role in protecting against AI-enhanced attacks. Strengthening email defenses with AI is vital, and organizations are also looking to extend AI-powered security to other communication and collaboration platforms.
A recent study conducted by the Observatory on Social Media at Indiana University revealed that X (formerly known as Twitter) has a bot problem, with approximately 1,140 AI-powered accounts that generate fake content and steal selfies to create fake personas, promoting suspicious websites, spreading harmful content, and even attempting to steal from existing crypto wallets. These accounts interact with human-run accounts and distort online conversations, making it increasingly difficult to detect their activity and emphasizing the need for countermeasures and regulation.
AI survey identifies ideal email sending time on Sundays, surge in cyber attacks linked to misuse of AI, AI's impact on jobs is more about disruption than elimination, AI integration into combat raises concerns, and AI-based solutions offer promise for compliance in IT/ITeS industry.
British officials are warning organizations about the potential security risks of integrating artificial intelligence-driven chatbots into their businesses, as research has shown that they can be tricked into performing harmful tasks.
Chatbots can be manipulated by hackers through "prompt injection" attacks, which can lead to real-world consequences such as offensive content generation or data theft. The National Cyber Security Centre advises designing chatbot systems with security in mind to prevent exploitation of vulnerabilities.
Scammers are increasingly using artificial intelligence to generate voice deepfakes and trick people into sending them money, raising concerns among cybersecurity experts.
Amazon Web Services (AWS) is working to democratize access to artificial intelligence (AI) tools, making it easier for small and medium-sized businesses (SMBs) to benefit from these technologies and disrupt their industries, according to Ben Schreiner, head of innovation for SMBs at AWS. He advises SMBs to identify the business problem they want AI to solve and focus on finding the right tool for that specific problem. Additionally, Schreiner emphasizes the importance of having reliable and clean data to achieve accurate and valuable insights from AI tools. SMBs should also prioritize data security and protect their data from unauthorized use. In the future, AI advancements are expected to enhance customer support tools like chatbots, making them more lifelike and conversational, but not replacing human customer support roles.
Using AI tools like ChatGPT to write smart contracts and build cryptocurrency projects can lead to more problems, bugs, and attack vectors, according to CertiK's security chief, Kang Li, who believes that inexperienced programmers may create catastrophic design flaws and vulnerabilities. Additionally, AI tools are becoming more successful at social engineering attacks, making it harder to distinguish between AI-generated and human-generated messages.
Twitter is plagued by scam bots that impersonate users and offer fraudulent support for cryptocurrency and NFT services, highlighting the platform's lack of effective moderation and the growing problem of crypto scams.
The UK's National Cyber Security Centre has warned against prompt injection attacks on AI chatbots, highlighting the vulnerability of large language models to inputs that can manipulate their behavior and generate offensive or confidential content. Data breaches have also seen a significant increase globally, with a total of 110.8 million accounts leaked in Q2 2023, and the global average cost of a data breach has risen by 15% over the past three years. In other news, Japan's cybersecurity agency was breached by hackers, executive bonuses are increasingly tied to cybersecurity metrics, and the Five Eyes intelligence alliance has detailed how Russian state-sponsored hackers are using Android malware to attack Ukrainian soldiers' devices.
Hackers targeted Ethereum co-founder Vitalik Buterin's Twitter account, swindling nearly $700,000 from users by posting a fraudulent ConsenSys link that led to a trap. This incident highlights growing concerns about the increase in phishing scams on the platform formerly known as Twitter.
Financial institutions are using AI to combat cyberattacks, utilizing tools like language data models, deep learning AI, generative AI, and improved communication systems to detect fraud, validate data, defend against incursions, and enhance customer protection.
Generative AI is empowering fraudsters with sophisticated new tools, enabling them to produce convincing scam texts, clone voices, and manipulate videos, posing serious threats to individuals and businesses.
Adversaries and criminal groups are exploiting artificial intelligence (AI) technology to carry out malicious activities, according to FBI Director Christopher Wray, who warned that while AI can automate tasks for law-abiding citizens, it also enables the creation of deepfakes and malicious code, posing a threat to US citizens. The FBI is working to identify and track those misusing AI, but is cautious about using it themselves. Other US security agencies, however, are already utilizing AI to combat various threats, while concerns about China's use of AI for misinformation and propaganda are growing.
Scammers are using artificial intelligence and voice cloning to convincingly mimic the voices of loved ones, tricking people into sending them money in a new elaborate scheme.
AI-aided cyber scams, including phishing emails, smishing texts, and social media scams, are on the rise, with Americans losing billions of dollars each year; however, online protection company McAfee has introduced an AI-powered tool called AI Scam Protection to help combat these scams by scanning and detecting malicious links in real-time.