
> Keep it short. 3-7 minutes should be max.

Who has a 3-7m CI build here?



/me raises hand

On our newer services there is a CircleCI pipeline which parallelises work and takes ~1-2 minutes on a branch, and maybe an extra minute at most on master - where it automatically deploys to production if the build is green.

If you make the choice to prioritise this from the start, it isn’t all that difficult.
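A fast, parallelised CircleCI pipeline of the kind described above might look roughly like this. This is a minimal sketch, not the poster's actual config; the image, glob pattern and job names are hypothetical:

```yaml
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/node:20.11   # hypothetical base image
    parallelism: 4               # split the suite across 4 containers
    steps:
      - checkout
      - run:
          name: Run sharded tests
          command: |
            # the circleci CLI distributes test files across containers,
            # using historical timing data to balance the shards
            TESTFILES=$(circleci tests glob "test/**/*.test.js" \
              | circleci tests split --split-by=timings)
            npx jest $TESTFILES
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - run: ./scripts/deploy.sh   # placeholder deploy step
workflows:
  build-deploy:
    jobs:
      - test
      - deploy:
          requires: [test]
          filters:
            branches:
              only: master   # auto-deploy only when master is green
```

The `parallelism` key plus `circleci tests split` is what keeps the branch build in the 1-2 minute range as the suite grows.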


Maybe for software it's reasonable (tests should be parallelised more as you go above a 5-minute build), but for infrastructure it's ridiculous. No IaaS API responds remotely quickly enough to bring up whole environments from scratch that fast.


Agree. I would definitely put anything infrastructure-related on the CD side though, since it does take longer.

You can test your software faster in CI using Docker Compose. Something along these lines: https://fire.ci/blog/api-end-to-end-testing-with-docker/
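The Compose-based setup in that post boils down to something like the following sketch (service names, images and the test command are hypothetical, not taken from the linked article):

```yaml
# docker-compose.yml - end-to-end API test environment
version: "3.8"
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test
  api:
    build: .
    depends_on: [db]
    environment:
      DATABASE_URL: postgres://postgres:test@db:5432/postgres
  tests:
    build: ./e2e       # test-runner image that hits the API over the network
    depends_on: [api]
    command: ["npm", "test"]
```

Running `docker compose up --exit-code-from tests` then makes the CI job's exit status mirror the test runner's, so the whole environment lives and dies inside a single CI step.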


Our entire test suite takes about six minutes to run; we run all our tests in CI with Capybara. Our CD pipeline runs the same tests, but against Chrome, Firefox, Safari and Edge. It takes more than an hour.


Our biggest Go project has a sub-3-minute build time, including running a bunch of tests, staticcheck, pushing to the registry, etc.

It doesn't have to spin up any DB or things like that though.
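A Go pipeline like that usually reduces to a handful of commands; here is a hedged sketch (the registry host and image name are placeholders, and staticcheck is the third-party tool from staticcheck.dev):

```shell
#!/bin/sh
set -e                                   # stop on the first failure
go vet ./...                             # built-in static analysis
staticcheck ./...                        # third-party linting
go test -race ./...                      # unit tests with the race detector
docker build -t registry.example.com/myservice:"$GIT_SHA" .
docker push registry.example.com/myservice:"$GIT_SHA"
```

With the Go build cache warm, the compile and test steps often finish in seconds; the Docker push tends to dominate.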


it takes 3 min to run linters here...


I would say for me that's a pretty reasonable estimate for microservice-architecture applications / services. Of course, large legacy monoliths take longer, but not more than say 15-20 minutes at most.


3m seems aggressive to do builds and spin up infrastructure for anything non-trivial.

Reading a bit closer, I see the author describes CI as a sanity check, "ensur[ing] the bare minimum" and doesn't consider deploying on every commit. Maybe 3-7m is more realistic then.

However, I'm slightly surprised by this definition of CI. According to Fowler [0], "Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time. ... The key test is that a business sponsor could request that the current development version of the software can be deployed into production at a moment's notice." So having CI gates on the development version that are weaker than the release tests would not seem to be continuous delivery according to his definition.

We're currently releasing on every commit and our CI build (which implements continuous delivery) takes about 15m.

[0] https://martinfowler.com/bliki/ContinuousDelivery.html


> the author describes CI as a sanity check

Which is nonsense, since CI is the practice of merging to master frequently, in a state that can be released if need be.

We won't understand it unless we distinguish the practice from the supporting tools that help us do it safely:

In this case, the practice is frequent merging to a shared trunk, and the supporting tools are as many automated checks before and after that merge as can be done quickly.

A "CI build" of a branch is a tool to help you do CI, but unless you merge that branch when it's green, you're not _doing_ CI.

Misunderstanding this and doing "CI on a branch" means that you are mistaking the tool for the practice, and not doing the practice: by delaying integration, you will be accomplishing the opposite of CI.


Yeah, I totally agree with Fowler’s definition of CI more than I do this article.

In my case it wasn't so much a need to spin up infrastructure as just pulling a few container images and starting them. The longest CI builds were when you were, say, loading and indexing test data from a database (container) into Elasticsearch, etc. But overall, moving images around and starting containers to build and test some Ruby / Python was usually around 1-3 minutes or thereabouts.


Hey, I've been working on a CI tool that skips the "non-trivial" bits for arbitrary Linux workflows, would love your feedback: https://layerci.com


Is this doing anything other than leveraging Docker's multi-layer caching?


Agree. CI=3-7 minutes. CD can be 30-60 if needed.


We have a large Java monolith application. Builds ran for 30 minutes. Then we said let's only run the unit tests and critical smoke tests. The build time went down to 7 minutes ... on 12 CPU and 32 GB of RAM build slaves :) There's always a way.
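Restricting a Maven build to unit tests plus a tagged smoke suite, while using all the cores on those big agents, can be sketched like this (the JUnit 5 tag names are hypothetical, and `groups` filtering assumes the Surefire plugin with JUnit 5):

```shell
# one build thread per CPU core; run only tests tagged "unit" or "smoke",
# and skip Failsafe integration tests entirely
mvn -T 1C test -Dgroups="unit | smoke" -DskipITs
```

The full integration suite then moves to a separate, slower pipeline that doesn't gate every commit.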


Building a decently sized C++ codebase takes 20 minutes. Then you want to run tests on it.


What about splitting it into smaller parts and applying the CI process to each module?


I've worked on C++ codebases where linking modules took over 10 minutes with a trivial change. Per config. Granted, that was with BFD, and I sped it up a good bit by switching to gold where we could... which meant our Windows MinGW builds still spent 10+ minutes per config, since those were stuck with BFD. But at least our Android and Linux builds were a bit faster!

But I like to touch common headers - say to document logging macros, and printf-format annotations to catch bugs, or maybe to optimize their codegen - and those logging macros are in our PCHes for good reason - so that still means rebuilding everything frequently. Which ties up a lot of build time (30-60 minutes per config per platform, and I typically have a lot of configs and a lot of platforms!)
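For anyone curious, the gold switch mentioned above is normally just a compiler-driver flag at link time; a hedged sketch (lld shown as well, which is typically faster still where available, though like gold it isn't an option for MinGW-targeting BFD setups):

```shell
# GCC/Clang: choose the linker when linking
g++ -fuse-ld=gold -o app *.o      # gold instead of the default BFD ld
clang++ -fuse-ld=lld -o app *.o   # lld, often faster again

# CMake equivalent, applied project-wide
cmake -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=gold" ..
```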


Sometimes that can help, but don't forget that you still need to run integration tests on the whole thing.



