Well, with Subversion you can sort of have different repositories in one repository... I think almost nobody had a checkout of the full monorepository (it was too large), so changes in several parts of the stack (e.g. to a public API) were not perfectly synchronized: such a change landed as one check-in per sub-repository rather than one atomic commit.
Note that this number doesn't include any pre-git history -- all of those commits were made after April 2005 (v2.6.12-rc2 is the first commit within git).
It also has 622 releases, so on average about 1,400 commits make it into a single release. And those releases are at the RC level, not just version bumps; a single version bump averages well over 10,000 commits.
Greg KH gives a pretty good talk about the rate of kernel development fairly regularly[1]. But yes, it's been consistently getting faster every kernel release.
Absolutely. It's a crude estimate; if anything, the real number of commits per release is higher. The point is that it's impressive that this many changes go into a single release of something as critical as the Linux kernel.
The number of commits depends heavily on the people in a project and the team's workflow. I commit every 5-10 minutes, so I make 30-70 commits per day[1]; some people on another team make one commit per day. Linux on GitHub is close to 900k[2].
Chromium recently crossed a million bugs. And if you count Chromium and Blink together (they've been in one repository for quite some time now), it's probably over a million commits too.
Nice plot. Interestingly, relatively speaking, by far the most removed code is from 1999. What did they add that year that ended up being removed (around 2014)?
Relatedly, in the OpenBSD world, Ted Unangst is well enough known for auditing and removing old/unused code that there's a slang verb "tedu" (from his handle; usage e.g. "it got tedu'd"), which basically means zapping old stuff. See the first comment in the Twitter thread.
It's also probably one of the oldest open source repositories. OpenBSD pretty much pioneered the concept of making their VCS open to the public over the Internet (hence the name "anoncvs").
It was already ubiquitous in 1997 when I got started working on open source software, so I took it for granted. I was surprised to find out 20 years later how new anoncvs had been and how fast it spread to other projects like FreeBSD and Apache httpd.
Because that’s the number of commits they recently passed. What about being coders makes it more interesting for us to wait a few more years than talk about it now?
I read it more like a quantum quandary: you never know exactly how many commits there are, but we'll celebrate one possible measure because it comes close and that's a landmark. One would think his statement also applies to himself:
>If you think you've got a great way of measuring, don't be so sure of yourself -- you may have overcounted or undercounted.
By his own admission, his own counting method is probably flawed.
> .. because yes the code quality is mostly very high
Because he is, actually, smart? I'm in a totally different field, devops (don't laugh :)), but I religiously follow their approach to security and design in general.
Yes, he is. I complimented the code quality, and that doesn't happen by itself.
But "being smart" does not require being arrogant and condescending. Theo is both, and seems to need to say "I AM SMART" all the time. Like: "Hey, I found a case where the manpage says X but the implementation actually does Y. POSIX says X, so Y is probably a bug." Answer: "I AM SMART. I will fix this." Ooooo-kay. I didn't say you weren't smart.
Imagine you're part of a development team where the tech lead in every thread says "I am the tech lead because I'm the smartest".
It's made me contribute less to OpenBSD than I otherwise would have (luckily other people are more welcoming, and Theo isn't a bottleneck on all things), and it's not just me. Other potentially good contributors have stayed away. Now, of course, other bad contributors have stayed away too.
The other bad aspect of arrogance is that you miss out on research the rest of the world has done, because you think nobody else can think. OpenBSD got W^X years and years after Linux (though OpenBSD shipped it on by default first), because, in their own words, they don't look at what Linux is doing. And x86 took one more release still, because they said it couldn't be done there (even though it worked just fine on Linux).
It looks like it was the same with the Intel branch-prediction bugs. They say they did a huge amount of research over weeks or months, and then ended up with kernel memory maps containing... exactly what Linux chose. Why do this from scratch?
I wouldn't say I'm bitter, but I'm resigned to accepting that, because of this attitude, OpenBSD's way of doing things misses out on exactly what they want to achieve.