Kali Linux publishes guide for fully local AI penetration testing

The Kali Linux team has released a guide for running AI-driven penetration testing entirely on local hardware, eliminating cloud dependencies. This setup uses Ollama, 5ire, and MCP Kali Server to enable natural language commands for security tools. Published on March 10, 2026, the guide addresses privacy concerns in sensitive environments.

The Kali Linux team published a new guide on March 10, 2026, as part of its series on large language model (LLM)-driven security tools. This entry focuses on a fully self-hosted stack that processes all AI operations on local hardware, avoiding third-party cloud services. The approach tackles privacy and operational security issues that have limited cloud-based AI in penetration testing.

The setup requires an NVIDIA GPU with CUDA support; the guide uses an NVIDIA GeForce GTX 1060 with 6 GB of VRAM as reference hardware. Installation involves replacing the open-source Nouveau driver with NVIDIA's proprietary driver to enable CUDA acceleration. After installation and a reboot, the system reports Driver Version 550.163.01 and CUDA Version 12.4.
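On Kali, the driver swap described above typically looks like the following sketch. The package names assume Kali's non-free components are enabled, and the guide's exact commands may differ:

```
# Install the proprietary NVIDIA driver and CUDA toolkit from Kali's repos;
# installing nvidia-driver also blacklists the open-source Nouveau driver.
sudo apt update
sudo apt install -y nvidia-driver nvidia-cuda-toolkit
sudo reboot

# After the reboot, confirm the driver and CUDA versions are picked up:
nvidia-smi
```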

Ollama serves as the core LLM engine, acting as a wrapper for llama.cpp to simplify model management. Installed via a Linux AMD64 tarball and set up as a systemd service, it runs in the background. The guide evaluates three models with tool-calling support: llama3.1:8b (4.9 GB), llama3.2:3b (2.0 GB), and qwen3:4b (2.5 GB), all fitting within the 6 GB VRAM limit.
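Pulling a model and sanity-checking the VRAM budget can be sketched as follows. The ~1 GB headroom figure for context and CUDA overhead is an assumption for illustration, not a number from the guide:

```shell
# Pull one of the evaluated models (requires the Ollama service to be running):
#   ollama pull qwen3:4b

# Rough fit check: model size plus ~1 GB headroom must stay within VRAM.
# (The 1 GB headroom is an assumed margin, not a figure from the guide.)
fits() { awk -v s="$1" -v v="$2" 'BEGIN { exit !(s + 1.0 <= v) }'; }

# Model sizes in GB, from the article; VRAM is the GTX 1060's 6 GB.
for m in "llama3.1:8b 4.9" "llama3.2:3b 2.0" "qwen3:4b 2.5"; do
  set -- $m
  fits "$2" 6 && echo "$1 fits in 6 GB VRAM"
done
```

By this estimate, even the largest model (llama3.1:8b at 4.9 GB) stays within the card's 6 GB, which matches the article's claim that all three candidates fit.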

The Model Context Protocol (MCP) integrates the AI with security tools through the mcp-kali-server package, available in the Kali repositories. This creates a local Flask server on 127.0.0.1:5000 and verifies the availability of tools such as nmap, gobuster, dirb, and nikto. The setup supports tasks such as web application testing, CTF challenges, and interaction with platforms like Hack The Box or TryHackMe.
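Installing and smoke-testing the server might look like the sketch below. The binary name and the loopback check are assumptions based on the article's description (package name and port 127.0.0.1:5000 are from the article); consult the package's documentation for the actual invocation:

```
# Install the MCP bridge from the Kali repositories:
sudo apt update
sudo apt install -y mcp-kali-server

# Start it (binary name assumed to match the package) and confirm
# the Flask API is listening only on the loopback interface:
mcp-kali-server &
ss -ltn | grep 127.0.0.1:5000
```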

To connect Ollama and MCP, the guide uses 5ire, an open-source AI assistant and MCP client distributed as a Linux AppImage (version 0.15.3). Installed to /opt/5ire/ and given a desktop entry, it is configured with Ollama as the model provider and registers mcp-kali-server for tool access.
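The AppImage installation described above can be sketched as shell steps. The file name and desktop-entry contents are illustrative; check the 5ire release page for the actual asset name:

```
# Place the AppImage under /opt/5ire/ and make it executable:
sudo mkdir -p /opt/5ire
sudo mv 5ire-0.15.3.AppImage /opt/5ire/
sudo chmod +x /opt/5ire/5ire-0.15.3.AppImage

# Minimal desktop entry so 5ire appears in the application launcher:
cat <<'EOF' | sudo tee /usr/share/applications/5ire.desktop
[Desktop Entry]
Type=Application
Name=5ire
Exec=/opt/5ire/5ire-0.15.3.AppImage
Categories=Development;Utility;
EOF
```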

Validation involved a natural language prompt in 5ire, using qwen3:4b, to scan scanme.nmap.org on ports 80, 443, 21, and 22. The LLM invoked nmap via MCP, delivering structured results offline, with full GPU processing confirmed.
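The same model can also be exercised outside 5ire by querying Ollama's local REST API directly, which confirms that inference runs entirely on localhost with no cloud round trip. `/api/generate` on port 11434 is Ollama's standard endpoint; the prompt below is illustrative, not the guide's exact wording:

```
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen3:4b",
  "prompt": "Plan an nmap scan of scanme.nmap.org on ports 21, 22, 80 and 443.",
  "stream": false
}'
```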

According to the Kali Linux team, the full stack of Ollama, mcp-kali-server, and 5ire is "open source, hardware-dependent rather than service-dependent, and tunable based on available VRAM." This configuration offers a privacy-preserving option for red teams and researchers in air-gapped or data-sensitive settings.

Related articles


Linux Foundation announces AI security initiative with tech partners


The Linux Foundation has launched a new initiative using Anthropic's Claude Mythos preview for defensive cybersecurity in open source software. Partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan, Microsoft, NVIDIA, and Palo Alto Networks. The effort aims to help open source maintainers secure critical software amid the rise of AI.

The open-source project Ollama has announced version 0.17, featuring enhancements to OpenClaw onboarding. The news was reported by Phoronix.


A new tutorial shows how to run large language models and vision-language models locally on the Arduino UNO Q microcontroller. Edge Impulse's Marc Pous has outlined steps using the yzma tool to enable offline AI inference on the board's Linux environment. This approach allows for privacy-focused applications in edge computing.

Following its January launch, the Linux Foundation is promoting its LFWS307 'Deploying Small Language Models' course, highlighting SLM deployment as a key AI skill for IT professionals. The training emphasizes efficient, portable models via hands-on labs, aligning with MLOps and Edge AI trends.


The Linux kernel project has officially documented its policy on AI-assisted code contributions with the release of Linux 7.0. The guidelines require human accountability, disclosure of AI tool use, and a new 'Assisted-by' tag for patches involving AI. Sasha Levin formalized the consensus reached at the 2025 Maintainers Summit.

China's national cybersecurity authority has warned of security risks in the OpenClaw AI agent software, which could allow attackers to gain full control of users' computer systems. The software has seen rapid growth in downloads and usage, with major domestic cloud platforms offering one-click deployment services, but its default security configuration is weak.
