I love giving talks that explore big questions — the kind that are facing all of us right now and that nobody has fully figured out yet. “Is AI Killing Open Source Software?” is exactly that kind of question. It actually reminds me of a talk I gave really early in my career where I was worried that paying maintainers to work on open source would kill open source. (Spoiler: it didn’t.) I like doing research, having lots of conversations, and then bringing it all to an audience to start a bigger conversation.
That’s what I did at SCALE 23x in March, and I was super excited that the audience participated fully. I learned a lot from them, and I’m hoping we will all continue this conversation.
What I Thought Would Happen with AI
When I first started exploring this topic, I thought the answer would be pretty straightforward. AI makes it so easy to create software now that nobody will bother to share it. If it takes you 20 seconds to generate a feature, why would you bother contributing it back to a project? Anybody else could generate it in 20 seconds too, so it just wouldn’t feel worth your time.
And then when I talked to people, I found another layer. They wouldn’t contribute back because the code didn’t feel like theirs. They didn’t understand it as fully as code they’d written entirely by themselves, and they didn’t feel like they could maintain it if they contributed it upstream. So there’s this strange thing happening where AI helps you create code faster, but it also creates a kind of ownership gap.
I think both of those things are real. But when I brought up the topic, people’s main concern wasn’t what I expected.
The Slop Problem (and How Big It Actually Is)
What people really wanted to talk about was slop contributions. These have gotten a lot of press, and for good reason. Some really big projects – cURL is the famous example – have had serious problems with people submitting AI-generated PRs and bug reports that overwhelm maintainers who are already facing burnout. Daniel Stenberg ended up shutting down cURL’s bug bounty program after six years because the flood of low-quality, AI-generated security reports was taking so much time to deal with, and none of them actually identified a real vulnerability.
I think the projects most affected by this are the really famous ones, especially those that offered financial incentives like bug bounties. If you’re paying people to submit bug reports, and AI makes it trivially easy to generate plausible-sounding ones, you’re going to get a lot of them from people trying to make money.
But during my talk at SCALE, very few people in the audience said their own projects had been overwhelmed by slop PRs. In fact, one person told me that AI was actually making it easier for new people to contribute. They were getting new community members that way! And any way we can encourage new contributors is really important right now.
The New Contributor Problem
A real problem we don’t have a good answer for yet: how do new contributors get started in this world of AI?
The traditional path used to be pretty clear: you find a “good first issue,” you submit a small PR, it gets reviewed, you learn something from the feedback, and you gradually build up from there. But AI has broken that. AI can help a new contributor create a PR very quickly (and we do encourage people to use AI to learn!). But that first contribution, even with the help of AI, is frequently “slop,” because they don’t have the experience to review what the AI generated. They’re submitting code they don’t fully understand, and the whole point of that learning loop is lost.
I asked the audience at both Planet Nix and SCALE how they thought new contributors should get started instead. Multiple people independently suggested the same thing: new contributors should review PRs instead of submitting them. That’s the work maintainers don’t want to do anymore, and it would teach newcomers a lot about the codebase.
But then someone pointed out that AI could help them review the PR too.
There’s another path that’s closing off too. Documentation used to be a great way for new contributors to get started. You’d contribute to docs first, learn the project, and then move on to code. (Same with localization: people started by translating docs.) But now some projects are generating all their docs with AI, so they’re not even taking docs contributions anymore. That’s another getting-started path that’s disappearing.
Who’s Admitting They Use AI?
One fun data point from my research: 84% of developers are now using AI tools to code, but only about 29.5% are disclosing that they used AI to help write their code. Turns out, whether or not a developer admits it depends a lot on which tool they used. If they used Claude, they’re most likely to divulge that they had AI help (80.5% disclosure rate). GitHub Copilot users? Only 9%. And that is probably at least somewhat attributable to how those tools help you. GitHub Copilot is usually in your IDE, whereas Claude Code is a standalone tool.
So… Is AI Killing Open Source?
No. But it is for sure changing it.
AI is creating noise. It’s corrupting some incentives. It’s changing the type of project that will be successful. Small utility libraries, for instance, might get replaced by people just asking an LLM to generate what they need. (And that was another path to maintainership.) And it’s making things more time-consuming for maintainers in the short term, even as it promises to help in the long term.
But it’s also changing the way we create software solutions, and that’s really good in many cases. 36% of developers are using AI to learn new skills. AI-assisted tools are helping some projects with triage, finding real bugs, and managing review workloads. When used right, AI really helps drive better solutions.
There’s a lot of really good data available on how AI is affecting open source. I’ve included some of it in my slides. (These were generated by Claude and are useful and editable, but kind of ugly. For a previous talk I used NotebookLM slides and they were beautiful and not editable.)
I encourage you to keep reading about how AI is affecting open source software, thinking about it, and talking to others about how we can use AI tools to make open source software processes better.
What We Need to Do
We need to make sure maintainers have tools that enable AI to help them, not just tools that generate more work for them to review. And we need to give them time to figure out how AI is affecting their projects. We already ask a lot of maintainers. 60% are unpaid. 60% have considered quitting. 44% cite burnout. And now we’ve sprung this massive technology shift on them without giving them two weeks off to go figure out what to do about it.
If we want open source to thrive in the AI era, we need to support it: fund it, contribute upstream, submit quality bug reports and PRs (whether you use AI or not), and be transparent about when and how AI helped. Good documentation helps everyone, including the AI tools, navigate your project. And take a hard look at your dependencies: understand what you depend on, reduce where you can, and support what you can’t.
This transition is far from over, and I’d love to hear your thoughts. What are you seeing in your projects? How is AI changing the way you contribute? Let’s keep talking about it, learning, and trying new things.
Stormy Peters works at AWS on open source strategy and communities. She has spent her career at GitHub, Microsoft, Red Hat, Mozilla, and the GNOME Foundation building the programs and communities that help people be successful with open source software. She speaks regularly on open source, community strategy, and developer relations. Find her speaking history and past talks at stormyscorner.com/speaking.
