Kali Linux launches local AI tools for penetration testing

The Kali Linux team has released a guide for running AI-driven penetration testing entirely on local hardware, eliminating cloud dependencies. This setup uses Ollama, 5ire, and MCP Kali Server to enable natural language commands for security tools. Published on March 10, 2026, the guide addresses privacy concerns in sensitive environments.

The Kali Linux team published a new guide on March 10, 2026, as part of its series on large language model (LLM)-driven security tools. This entry focuses on a fully self-hosted stack that processes all AI operations on local hardware, avoiding third-party cloud services. The approach tackles privacy and operational security issues that have limited cloud-based AI in penetration testing.

The setup requires an NVIDIA GPU with CUDA support; the guide uses a GeForce GTX 1060 with 6 GB of VRAM as reference hardware. Installation involves replacing the open-source Nouveau driver with NVIDIA's proprietary driver to enable CUDA acceleration. After installation and a reboot, the system reports Driver Version 550.163.01 and CUDA Version 12.4.
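The driver and CUDA versions the guide reports can be confirmed from a terminal once the system has rebooted; `nvidia-smi` ships with NVIDIA's driver package, so this sketch assumes only a standard proprietary-driver install:

```shell
# Confirm the proprietary driver loaded and CUDA is exposed; the guide's
# reference output shows Driver Version 550.163.01 and CUDA Version 12.4.
nvidia-smi

# The Nouveau driver should no longer be loaded; this should print nothing.
lsmod | grep nouveau
```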

Ollama serves as the core LLM engine, acting as a wrapper for llama.cpp to simplify model management. Installed via a Linux AMD64 tarball and set up as a systemd service, it runs in the background. The guide evaluates three models with tool-calling support: llama3.1:8b (4.9 GB), llama3.2:3b (2.0 GB), and qwen3:4b (2.5 GB), all fitting within the 6 GB VRAM limit.
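The tarball install and systemd setup described above might look roughly like the following sketch. The tarball filename and unit-file contents follow Ollama's standard Linux layout rather than the guide's exact text, so treat the paths as assumptions:

```shell
# Unpack the Linux AMD64 tarball (filename assumed from Ollama's releases).
sudo tar -C /usr -xzf ollama-linux-amd64.tgz

# Minimal systemd unit so Ollama runs in the background at boot.
sudo tee /etc/systemd/system/ollama.service >/dev/null <<'EOF'
[Unit]
Description=Ollama LLM service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now ollama

# Pull the tool-calling models the guide evaluates; all fit in 6 GB of VRAM.
ollama pull llama3.1:8b
ollama pull llama3.2:3b
ollama pull qwen3:4b
```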

The Model Context Protocol (MCP) integrates the AI with security tools through the mcp-kali-server package, available in the Kali repositories. The package runs a local Flask server on 127.0.0.1:5000 and verifies the presence of tools such as nmap, gobuster, dirb, and nikto. The setup supports tasks such as web application testing, CTF challenges, and interaction with platforms like Hack The Box or TryHackMe.
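Since the package comes from the Kali repositories, installation is a plain apt operation; checking that something is bound to the stated port avoids assuming anything about the server's API, which the article does not detail:

```shell
# mcp-kali-server is packaged in the Kali repositories.
sudo apt update && sudo apt install -y mcp-kali-server

# The server listens locally on 127.0.0.1:5000; confirm a listener is bound
# there (the start command and any endpoints depend on the package itself).
ss -tlnp | grep ':5000'
```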

To connect Ollama and MCP, the guide uses 5ire, an open-source AI assistant and MCP client distributed as a Linux AppImage (version 0.15.3). Installed to /opt/5ire/ and given a desktop entry, it is configured with Ollama as the model provider and mcp-kali-server registered for tool access.
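The AppImage placement and desktop entry can be sketched as follows; the AppImage filename is an assumption based on the stated version 0.15.3, and the .desktop fields are the minimal set the freedesktop format requires:

```shell
# Install the 5ire AppImage to /opt/5ire/ (filename is hypothetical).
sudo mkdir -p /opt/5ire
sudo mv 5ire-0.15.3.AppImage /opt/5ire/5ire.AppImage
sudo chmod +x /opt/5ire/5ire.AppImage

# Minimal desktop entry so 5ire appears in the application launcher.
sudo tee /usr/share/applications/5ire.desktop >/dev/null <<'EOF'
[Desktop Entry]
Type=Application
Name=5ire
Exec=/opt/5ire/5ire.AppImage
Categories=Utility;
EOF
```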

Validation involved a natural language prompt in 5ire, using qwen3:4b, to scan scanme.nmap.org on ports 80, 443, 21, and 22. The LLM invoked nmap via MCP, delivering structured results offline, with full GPU processing confirmed.
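The scan the LLM composed through MCP is roughly equivalent to invoking nmap directly with the ports from the prompt; exact flags the model chose are not given in the article, so this is the plainest form:

```shell
# Scan the four ports named in the validation prompt on nmap's test host.
nmap -p 21,22,80,443 scanme.nmap.org
```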

According to the Kali Linux Team, "the full-stack Ollama, mcp-kali-server, and 5ire are open source, hardware-dependent rather than service-dependent, and tunable based on available VRAM." This configuration offers a privacy-preserving option for red teams and researchers in air-gapped or data-sensitive settings.


