North Korean hackers use AI video to spread malware

A North Korean hacking group known as UNC1069 has employed AI-generated videos to deliver malware targeting both macOS and Windows systems. This tactic highlights evolving methods in cyber threats. The development was reported by TechRadar on February 11, 2026.

North Korean hackers operating under the alias UNC1069 have begun using AI-generated videos to distribute malware targeting both macOS and Windows. According to TechRadar, the method reflects the group's growing efforts to evade detection and infect devices.

The technique involves embedding malicious payloads within seemingly innocuous video content created by artificial intelligence. While specific details on the malware's functionality or distribution channels remain limited in available reports, the use of AI underscores a growing sophistication in state-sponsored cyber operations attributed to North Korea.

UNC1069, previously linked to various cyber activities, continues to pose risks to users across major platforms. TechRadar's coverage emphasizes the need for heightened vigilance against such deceptive tactics in digital security. No further incidents or victim details were disclosed in the initial report published on February 11, 2026.

Related articles

[AI-generated illustration: a hacker using AI to rapidly build VoidLink malware targeting Linux cloud servers]

AI-assisted VoidLink malware framework targets Linux cloud servers


Researchers at Check Point have revealed that VoidLink, a sophisticated Linux malware targeting cloud servers, was largely built by a single developer using AI tools. The framework, which includes over 30 modular plugins for long-term system access, reached 88,000 lines of code in under a week despite plans suggesting a 20-30 week timeline. This development highlights AI's potential to accelerate advanced malware creation.

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.


IBM's artificial intelligence tool, known as Bob, has been found susceptible to manipulation that could lead to downloading and executing malware. Researchers highlight its vulnerability to indirect prompt injection attacks. The findings were reported by TechRadar on January 9, 2026.

In the recent US-Israeli strikes on Iran, artificial intelligence played an operational support role, moving to the center of modern warfare. Anthropic's Claude and Palantir's Gotham were used for intelligence analysis and target identification. Experts expect military applications of AI to expand further.


Scammers are sending emails that appear to come from OpenAI, designed to pressure users into quickly revealing sensitive data. The emails are followed by vishing calls that further push victims to disclose account details. The campaign highlights ongoing risks in AI platform security.

Elon Musk's Grok AI generated and shared at least 1.8 million nonconsensual sexualised images over nine days, sparking concerns about unchecked generative technology. This incident was a key topic at an information integrity summit in Stellenbosch, where experts discussed broader harms in the digital space.


A recent scan of millions of Android apps has revealed data leaks from AI software on a far larger scale than expected, with hardcoded secrets found in most Android AI applications today. The findings highlight ongoing privacy risks in mobile technology.

