The cURL project, a key open-source networking tool, is ending its vulnerability reward program after a flood of low-quality, AI-generated reports overwhelmed its small team. Founder Daniel Stenberg cited the need to protect maintainers' mental health amid the onslaught. The decision takes effect at the end of January 2026.
Daniel Stenberg, founder and lead developer of the open-source cURL project, announced on January 22, 2026, that the team is terminating its bug bounty program due to an influx of substandard submissions, many produced by large language models (LLMs). cURL, first released nearly three decades ago under the name httpget and later renamed urlget, is an essential tool for file transfers, web troubleshooting, and automation, and ships with Windows, macOS, and most Linux distributions.
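To illustrate the kinds of everyday tasks mentioned above, here is a minimal sketch of typical curl invocations. The URLs and file paths are hypothetical examples, not from the announcement; the only command actually executed uses a local `file://` URL so the sketch works without network access, assuming curl is installed.

```shell
#!/bin/sh
# Illustrative curl usage (hypothetical URLs, commented out so the sketch runs offline):
#   curl -O https://example.com/file.tar.gz   # file transfer: download, keep remote name
#   curl -I https://example.com               # web troubleshooting: fetch headers only
#   curl -s https://example.com/status        # automation: silent output for scripting
# curl also speaks file:// URLs, so this demo needs no network:
printf 'hello from curl\n' > /tmp/curl_demo.txt
curl -s file:///tmp/curl_demo.txt
```

The `-s` flag suppresses the progress meter, which is why curl is a common building block in scripts and cron jobs.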
Stenberg explained the rationale in a statement: “We are just a small single open source project with a small number of active maintainers. It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.” He warned that poor reports would face consequences, stating, “We will ban you and ridicule you in public if you waste our time on crap reports.” The change was formalized in an update to cURL’s GitHub repository, effective at the end of the month.
Users expressed concerns that the move addresses symptoms rather than the root cause of AI misuse, potentially undermining cURL's security. Stenberg acknowledged the issue but noted limited options for the team. In May 2025, he had already highlighted the problem: “AI slop is overwhelming maintainers today and it won’t stop at curl but only starts there.”
Bogus reports typically contain LLM hallucinations, such as code snippets that won't compile and references to fabricated changelogs. In one case, a maintainer responded to a reporter: “I think you’re a victim of LLM hallucination.” Stenberg added, “You were fooled by an AI into believing that.”
Stenberg is not opposed to all AI use; in September 2025, he praised researcher Joshua Rogers for submitting a “massive list” of bugs found with AI tools like ZeroPath, leading to 22 fixes. However, he criticized casual reliance on AI without verification, suggesting such careful applications are the exception. The episode signals broader challenges for open-source security as AI-generated content proliferates.