cURL scraps bug bounties due to AI-generated slop

The cURL project, a key open-source networking tool, is ending its vulnerability reward program after a flood of low-quality, AI-generated reports overwhelmed its small team. Founder Daniel Stenberg cited the need to protect maintainers' mental health amid the onslaught. The decision takes effect at the end of January 2026.

Daniel Stenberg, founder and lead developer of the open-source cURL project, announced on January 22, 2026, that the team is terminating its bug bounty program because of an influx of substandard submissions, many of them produced by large language models (LLMs). cURL, first released roughly three decades ago under the names httpget and later urlget, is an essential tool for file transfers, web troubleshooting, and automation, and ships with Windows, macOS, and most Linux distributions.
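For readers unfamiliar with the tool, curl's role as a general-purpose transfer client can be illustrated with a couple of standard invocations. The snippet below is a minimal sketch using only offline-safe commands (the `/tmp/demo.txt` path is an arbitrary example, not from the article):

```shell
# Show the installed curl version and the protocols it supports.
curl --version

# curl speaks many protocols; file:// works even without a network,
# which makes it handy for a quick local demonstration.
echo 'hello from curl' > /tmp/demo.txt
curl -s file:///tmp/demo.txt   # prints the file's contents
```

The `-s` flag silences the progress meter; for real HTTP transfers, flags such as `-L` (follow redirects) and `-o` (write to a file) are among the most commonly used.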

Stenberg explained the rationale in a statement: “We are just a small single open source project with a small number of active maintainers. It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.” He warned that poor reports would face consequences, stating, “We will ban you and ridicule you in public if you waste our time on crap reports.” The change was formalized in an update to cURL’s GitHub repository, effective at the end of the month.

Users expressed concerns that the move addresses symptoms rather than the root cause of AI misuse, potentially undermining cURL's security. Stenberg acknowledged the issue but noted limited options for the team. In May 2025, he had already highlighted the problem: “AI slop is overwhelming maintainers today and it won’t stop at curl but only starts there.”

Examples of bogus reports include LLM hallucinations, such as code snippets that won't compile and fabricated changelogs. In one case, a maintainer responded to a reporter: “I think you’re a victim of LLM hallucination.” Stenberg added, “You were fooled by an AI into believing that.”

Stenberg is not opposed to all AI use: in September 2025, he praised researcher Joshua Rogers for submitting a “massive list” of bugs found with AI tools such as ZeroPath, which led to 22 fixes. However, he criticized casual reliance on AI without verification, suggesting that such careful applications remain rare. The decision signals broader challenges for open-source security as AI-generated content proliferates.

