
> Calls to the DB are tested in integration tests

This might be our disagreement -- this is IMO outdated terminology, and in my experience not modern practice. It's super easy nowadays to swap in a local lightweight database like SQLite, which is 99.9% compatible with your production database, and use it even for tiny unit tests. Validating any single database call would definitely be a unit test in any of the codebases I've worked with.
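For illustration, here's a minimal sketch of the kind of test the parent describes, using Python's stdlib sqlite3 with an in-memory database standing in for the production one (the `fetch_user_email` function and its schema are hypothetical):

```python
import sqlite3

def fetch_user_email(conn, user_id):
    """The query under test; in a real app this would live in a data-access layer."""
    row = conn.execute(
        "SELECT email FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None

# "Unit" test: swap in an in-memory SQLite database for the production DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

assert fetch_user_email(conn, 1) == "a@example.com"
assert fetch_user_email(conn, 2) is None
```

Whether this counts as a unit test or an integration test is, of course, exactly the disagreement in this thread.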

"Integration" tests in my mind cover cross-service interfaces and infrastructure that is a PITA to access locally -- for example, interacting with AWS infrastructure whose behavior you can't reliably mock locally.



If you're spinning up a database, however lightweight, I'd still call it an integration test.

I think part of the terminology problem is that a lot of apps and services are basically code for turning an HTTP request into an SQL request, and then some more code for turning a SQL response into an HTTP response. That means there's very little in the way of discrete logic to unit test without a bunch of mocking. In a system like that, I think the only meaningful tests are integration tests.

For people who only work on that kind of glue code, it's easy to forget that testable units really do occur in other kinds of coding. But I promise they do.


> this is IMO outdated terminology, and in my experience not modern practice. It's super easy nowadays to swap in a local lightweight database like SQLite which is 99.9% compatible with your production database, and use this for even tiny unit tests.

I actually see it as more "modern" to use an actual database.

I've been around long enough to have seen trends go from databases -> mocks -> fakes -> back to databases again!

Mocking is fine for some cases, as are fakes (e.g. the lightweight SQLite wrapper you mention), but for anything non-trivial, such solutions often have "blind spots" -- foreign keys, say, or unsupported data types. Around the time Docker was becoming a thing, I (and others) were growing tired of seeing issues caused by these limitations.

At some point, it pays to have integration tests that just hit a real database. And with Docker, you can do that trivially - it takes only a few seconds to spin up and populate a fresh Postgres instance. I haven't looked back since.


https://www.testcontainers.org/modules/databases/ is brilliant for this in the JVM world.


There's a fair amount of truth to this. Many applications are glorified SQL generators that effectively JIT-compile the application logic to SQL. So if you want to test the application logic, you need to run it on the runtime it has been compiled for, that is: the database.

Testing the application code in isolation would be like trying to test that your compiled binary contains the machine code instructions you expect, rather than testing that it actually runs correctly. There are niche times when that's useful, but most of the time, not really.


>>This might be our disagreement -- this is IMO outdated terminology,

It really isn't. The concept of integration tests, and the test pyramid, is not only the standard practice but also basic concepts that somehow some developers struggle to understand and use.

> and in my experience not modern practice.

That could only start to make sense if modern practice consisted of screwing up test plans and failing to do any form of testing of how different components integrate.

I assure you that that is not the standard practice in any competent shop, big or small.

> It's super easy now-days to swap in a local lightweight database like sqlite which is 99.9% compatible with your production database, and use this for even tiny unit tests.

You're just stating that it's super easy to screw up badly by not knowing what you are doing. And I agree.

Take a step back and read what you're saying. You're replacing an external service, accessed over a network by multiple clients, with an internal service running in the same process that only your app accesses, which generates entirely different databases and SQL; you seed the database with little to no data, and then run a few isolated tests.

And somehow this makes you believe you're testing how your system integrates?

I mean, even if you use frameworks with support for code-first persistence models, you should be aware that different targets generate different databases. Take for example ASP.NET Core's Entity Framework Core, which notoriously fails to impose constraints with its SQLite driver.

How, pray tell, do you believe your tests are meaningful if you're not even testing a system with the same topology, let alone the system you expect to put in production?

This is by no means "modern". This is outright wrong on so many levels, including the mental model of how the system operates.

If you're really interested in best practices and in doing an adequate job testing your stuff, you should be aware that every single test beyond unit tests is supposed to be performed on the application intended to be deployed to production. The application is built and packaged after unit tests pass, and that same package passes through a whole obstacle course until it reaches production. It makes sense if you start to think about it. I mean, isn't the whole point of testing to ensure that the app you want to run in production -- not a variant of it or a different build -- runs as expected?



