Experts highlight AI threats like deepfakes and dark LLMs in cybercrime

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

Artificial intelligence is revolutionizing cybercrime in unprecedented ways, according to recent analysis. Deepfakes, which create realistic fake videos or audio, AI-powered phishing attacks that mimic trusted communications, and dark LLMs—malicious versions of large language models—are at the forefront of this shift. These technologies allow individuals with limited technical skills to launch sophisticated operations at scale, democratizing cyber threats and amplifying their reach.

Experts express concern over the implications for businesses, warning that such weaponized AI could represent the most pressing security challenge of the year. The ability of dark LLMs to generate convincing scams without requiring deep expertise lowers barriers for cybercriminals, potentially overwhelming traditional defenses. As these tools evolve, organizations are urged to stay vigilant against deceptive tactics that exploit AI's generative capabilities.

This evolving landscape highlights the dual-edged nature of AI advancements, where innovation in one area fuels risks in another. Businesses must prioritize awareness and adaptive strategies to mitigate these dangers.

Related Articles

Illustration of crypto crime surge: hackers using AI to steal $17B in scams per Chainalysis report, with charts, bitcoins, and law enforcement seizures.
Image generated by AI

Chainalysis 2026 Report: $17 Billion in 2025 Crypto Scams Amid Surging AI Fraud and Hacks

Reported by AI Image generated by AI

The Chainalysis 2026 Crypto Crime Report, published January 13, 2026, reveals at least $14 billion stolen in 2025 scams—projected to reach $17 billion—driven by a 1,400% surge in AI-boosted impersonation tactics, amid broader losses including $4 billion from hacks per PeckShield and $154 billion in total illicit volumes linked to nation-state actors.

Cybersecurity experts warn that hackers are leveraging large language models (LLMs) to create sophisticated phishing attacks. These AI tools enable the generation of phishing pages on the spot, potentially making scams more dynamic and harder to detect. The trend highlights evolving threats in digital security.

Reported by AI

In 2025, cyber threats in the Philippines stuck to traditional methods like phishing and ransomware, without new forms emerging. However, artificial intelligence amplified the volume and scale of these attacks, leading to an 'industrialization of cybercrime'. Reports from various cybersecurity firms highlight increases in speed, scale, and frequency of incidents.

Researchers at Check Point have revealed that VoidLink, a sophisticated Linux malware targeting cloud servers, was largely built by a single developer using AI tools. The framework, which includes over 30 modular plugins for long-term system access, reached 88,000 lines of code in under a week despite plans suggesting a 20-30 week timeline. This development highlights AI's potential to accelerate advanced malware creation.

Reported by AI

Researchers warn that major AI models could encourage hazardous science experiments leading to fires, explosions, or poisoning. A new test on 19 advanced models revealed none could reliably identify all safety issues. While improvements are underway, experts stress the need for human oversight in laboratories.

Music labels and tech companies are addressing the unauthorized use of artists' work in training AI music generators like Udio and Suno. Recent settlements with major labels aim to create new revenue streams, while innovative tools promise to remove unlicensed content from AI models. Artists remain cautious about the technology's impact on their livelihoods.

Reported by AI

A recent report highlights serious risks associated with AI chatbots embedded in children's toys, including inappropriate conversations and data collection. Toys like Kumma from FoloToy and Poe the AI Story Bear have been found engaging kids in discussions on sensitive topics. Authorities recommend sticking to traditional toys to avoid potential harm.

 

 

 

This website uses cookies

We use cookies for analytics to improve our site. Read our privacy policy for more information.
Decline