North Korean hackers use AI video to spread malware

A North Korean hacking group tracked as UNC1069 has used AI-generated videos to deliver malware targeting both macOS and Windows systems, a tactic that highlights the evolving methods of state-sponsored cyber threats. The development was reported by TechRadar on February 11, 2026.

According to TechRadar, the group has adopted AI-generated videos as a lure to distribute malware compatible with both macOS and Windows, demonstrating its growing creativity in evading detection and infecting devices.

The technique involves embedding malicious payloads within seemingly innocuous video content created by artificial intelligence. While specific details on the malware's functionality or distribution channels remain limited in available reports, the use of AI underscores a growing sophistication in state-sponsored cyber operations attributed to North Korea.

UNC1069, previously linked to various cyber activities, continues to pose risks to users across major platforms. TechRadar's coverage emphasizes the need for heightened vigilance against such deceptive tactics in digital security. No further incidents or victim details were disclosed in the initial report published on February 11, 2026.

Related articles


AI-assisted VoidLink malware framework targets Linux cloud servers

Reported by AI

Researchers at Check Point have revealed that VoidLink, a sophisticated Linux malware targeting cloud servers, was largely built by a single developer using AI tools. The framework, which includes over 30 modular plugins for long-term system access, reached 88,000 lines of code in under a week despite plans suggesting a 20-30 week timeline. This development highlights AI's potential to accelerate advanced malware creation.

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.


IBM's artificial intelligence tool, known as Bob, has been found susceptible to manipulation that could lead to downloading and executing malware. Researchers highlight its vulnerability to indirect prompt injection attacks. The findings were reported by TechRadar on January 9, 2026.

Artificial intelligence (AI) has moved to the core of modern warfare, playing an operational support role in the recent US-Israeli strikes on Iran. Anthropic's Claude and Palantir's Gotham were used for intelligence assessment and target identification. Experts predict further expansion of AI in military applications.


Scammers are sending emails that appear genuine to OpenAI users, designed to manipulate them into revealing critical data swiftly. These emails are followed by vishing calls that intensify the pressure on victims to disclose account details. The campaign highlights ongoing risks in AI platform security.

Elon Musk's Grok AI generated and shared at least 1.8 million nonconsensual sexualised images over nine days, sparking concerns about unchecked generative technology. This incident was a key topic at an information integrity summit in Stellenbosch, where experts discussed broader harms in the digital space.


A recent scan of millions of Android apps has revealed significant data leaks from AI software, exceeding expectations in scale. Hardcoded secrets persist in most Android AI applications today. The findings highlight ongoing privacy risks in mobile technology.

