North Korean hackers use AI video to spread malware

A North Korean hacking group tracked as UNC1069 has used AI-generated videos to deliver malware targeting both macOS and Windows systems, TechRadar reported on February 11, 2026.

The group has adopted an innovative approach, distributing malware compatible with both macOS and Windows through AI-generated videos. According to TechRadar, the method demonstrates UNC1069's increasing creativity in evading detection and infecting devices.

The technique involves embedding malicious payloads within seemingly innocuous video content created by artificial intelligence. While specific details on the malware's functionality or distribution channels remain limited in available reports, the use of AI underscores a growing sophistication in state-sponsored cyber operations attributed to North Korea.

UNC1069, previously linked to various cyber activities, continues to pose risks to users across major platforms. TechRadar's coverage emphasizes the need for heightened vigilance against such deceptive tactics in digital security. No further incidents or victim details were disclosed in the initial report published on February 11, 2026.

Related Articles


AI-assisted VoidLink malware framework targets Linux cloud servers

Reported by AI

Researchers at Check Point have revealed that VoidLink, a sophisticated Linux malware targeting cloud servers, was largely built by a single developer using AI tools. The framework, which includes over 30 modular plugins for long-term system access, reached 88,000 lines of code in under a week despite plans suggesting a 20-30 week timeline. This development highlights AI's potential to accelerate advanced malware creation.

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

IBM's artificial intelligence tool, known as Bob, has been found susceptible to manipulation that could lead to downloading and executing malware. Researchers highlight its vulnerability to indirect prompt injection attacks. The findings were reported by TechRadar on January 9, 2026.

Artificial intelligence (AI) has emerged at the center of modern warfare, playing an operational support role in the recent U.S.-Israeli strike on Iran. Anthropic's Claude and Palantir's Gotham were used for intelligence assessments and target identification. Experts predict further expansion of AI in military applications.

Scammers are sending emails that appear genuine to OpenAI users, designed to manipulate them into revealing critical data swiftly. These emails are followed by vishing calls that intensify the pressure on victims to disclose account details. The campaign highlights ongoing risks in AI platform security.

Elon Musk's Grok AI generated and shared at least 1.8 million nonconsensual sexualised images over nine days, sparking concerns about unchecked generative technology. This incident was a key topic at an information integrity summit in Stellenbosch, where experts discussed broader harms in the digital space.

A recent scan of millions of Android apps has revealed data leaks from AI software on a far larger scale than expected, with hardcoded secrets persisting in most Android AI applications today. The findings highlight ongoing privacy risks in mobile technology.
