cURL scraps bug bounties due to AI-generated slop

The cURL project, a key open-source networking tool, is ending its vulnerability reward program after a flood of low-quality, AI-generated reports overwhelmed its small team. Founder Daniel Stenberg cited the need to protect maintainers' mental health amid the onslaught. The decision takes effect at the end of January 2026.

Daniel Stenberg, founder and lead developer of the open-source cURL project, announced on January 22, 2026, that the team is terminating its bug bounty program due to an influx of substandard submissions, many produced by large language models (LLMs). cURL, first released three decades ago as httpget and later renamed urlget, is an essential tool for file transfers, web troubleshooting, and automation, and is integrated into Windows, macOS, and most Linux distributions.
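For readers less familiar with the tool, a brief sketch of everyday curl usage illustrates the file-transfer and troubleshooting roles described above. The URLs are placeholders, and the flags shown are standard curl options:

```shell
# Confirm curl is installed and see which protocols this build supports
curl --version

# Typical uses matching the article's description (URLs are illustrative):
# download a page, following redirects (-L), into a local file (-o):
#   curl -L -o page.html https://example.com/
# inspect only a server's response headers (HEAD request):
#   curl -I https://example.com/
```

The first line of `curl --version` reports the version number; the following lines list supported protocols and enabled features, which is often the quickest way to troubleshoot a missing-protocol error.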

Stenberg explained the rationale in a statement: “We are just a small single open source project with a small number of active maintainers. It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.” He warned that poor reports would face consequences, stating, “We will ban you and ridicule you in public if you waste our time on crap reports.” The change was formalized in an update to cURL’s GitHub repository, effective at the end of the month.

Users expressed concerns that the move addresses symptoms rather than the root cause of AI misuse, potentially undermining cURL's security. Stenberg acknowledged the issue but noted limited options for the team. In May 2025, he had already highlighted the problem: “AI slop is overwhelming maintainers today and it won’t stop at curl but only starts there.”

Examples of bogus reports include LLM hallucinations, such as code snippets that won't compile and fabricated changelogs. In one case, a maintainer responded to a reporter: “I think you’re a victim of LLM hallucination.” Stenberg added, “You were fooled by an AI into believing that.”

Stenberg is not opposed to all AI use; in September 2025, he praised researcher Joshua Rogers for submitting a “massive list” of bugs found with AI tools like ZeroPath, leading to 22 fixes. However, he criticized casual reliance on AI without verification, suggesting such thoughtful applications are rare. This development signals broader challenges for open-source security amid rising AI-generated content.
