Hacker News

I test things that seem like they're important to test. I also do a lot of manual checking which boils down to "does it work?" When the manual checking is too tedious I'll write code to help. I don't do unit tests (but I don't think most people who think they're doing unit tests are, either). In general I have three big problems with the philosophy of testing, especially test-first. (Though I don't feel incredibly strongly about these--software is a big field of possibilities, to suggest One Way is the Only Way is pretty crazy.)

The biggest is that it encourages carelessness. I want to grow more careful and work with careful people, not the other way around. Tests don't seem to make people better at doing science--that is, people test the happy case and don't try to falsify. Testing doesn't seem to make people better at writing code, and may even be hurtful. Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down. Sure, I guess your entire project might depend on that one innocent-looking line of code you just changed, but if that's true, you have some serious design problems and testing is going to make it hard to fix them. Because, thirdly, it hinders design: it's very easy to code yourself into a corner in the name of passing a test suite.

Related to the design issue is a simple fact of laziness. Your code makes a test fail. Is your code wrong? Or is the test wrong? Or are both wrong? If just the code is wrong, the correct action is to fix your code to fit the test. (Which may have serious ramifications anyway.) If just the test is wrong, the correct action is to change the test. (How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.) If both are wrong, you have to change both. Obviously people will be motivated to assume that only one is wrong rather than both because both means more work.



> Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down

In my experience, testing frees you from that fear. You have empirical evidence that you haven't broken things.

My company does Continuous Integration as a service. You would be utterly amazed at how often our customers break their code with tiny innocuous changes.

> How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.

Try to think of testing in terms of the value it brings to your business. Adding the first few tests to a module has immense value. Adding tests for the edge cases has some value, but you're probably at break even unless it's breaking in production [1]. Adding tests to test the tests? I would say that is valueless in nearly all cases [2].

[1] Bonus: use Airbrake to find the edge cases that happen in real life, and only add tests for them

[2] If you're writing software for cars, planes, medical stuff or transferring money, there is probably value here.


Asking if the tests are correct is really asking if the requirements are correct. If this happens a lot, it means developers are writing code before they really understand the requirements. If developers have to rewrite behavioral-level tests a lot, it probably means the product owner/project manager/stakeholders/etc. are changing the requirements. A lot of pain should be felt gathering and verifying what the customer wants before a single line of code is written. Really, code is bad and as little of it should be written as possible. Developers should yell loudly when they have to rewrite behavioral-level tests.

Testing at the behavioral level/systems level/UX level is really verifying a lot more than just "is this code right". It provides a way to check correctness on the specifications, correctness on the behavior, complete coverage of expected usage by the end user, and assures that only the code necessary to get the behavior to work is being written (to name a few).

The carelessness I see is developers writing code without fully understanding the needs of the stakeholders. The industry would be in a much better position if managers/product owners/stakeholders/etc. were expected to provide a good set of behaviors to develop against (for example, in Gherkin or similar tools) before they start pushing developers to "deliver something on time". Note this is at the systems/behavior level and not at the unit level.

Unit-level tests provide robustness. Developers can never assure that software has no "bugs".

Behavior-level tests assure completeness. Developers can assure they are meeting the requirements. (Developers can't assure they are making what the customer wants, but that is not the responsibility of a developer. That is the responsibility of the product owner/project manager/etc. I'm not saying that a developer can't wear that hat, but a developer not wearing that hat should not be held responsible for failing to provide for the wants of the customer.)
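To make the unit-level vs. behavior-level distinction concrete, here is a minimal sketch in Python. The checkout module (`apply_discount`, `checkout`) and the "members get 10% off" requirement are invented for illustration; the point is only the difference in what each kind of test asserts.

```python
# Hypothetical checkout module, invented for illustration.

def apply_discount(price, rate):
    """Small unit: pure price arithmetic."""
    return round(price * (1 - rate), 2)

def checkout(cart, member=False):
    """Behavior the customer actually sees."""
    total = sum(cart)
    return apply_discount(total, 0.10) if member else total

# Unit-level test: robustness of one small piece, including an edge case
# (rounding to cents) that no requirement mentions explicitly.
def test_apply_discount_rounds_to_cents():
    assert apply_discount(19.99, 0.10) == 17.99

# Behavior-level test: reads like the requirement itself,
# "members get 10% off their cart total".
def test_member_gets_ten_percent_off():
    assert checkout([10.00, 10.00], member=True) == 18.00
```

The unit test survives requirement changes as long as the arithmetic helper exists; the behavior test should only ever be rewritten when the stakeholders change the requirement.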

All that being said, I cannot emphasize enough how important I think behavior-level testing is.

My 3 cents.


What one person calls carelessness, another would call freeing up the time to consider other things--such as whether the code actually does what it needs to do. We are limited beings and can only keep so much in our heads at one time. If I have to remember how everything works at some level and then want to tackle how to clean it up (refactoring) or add something new without breaking it, that is a tremendous amount of state to manage in my brain.

Better to write tests asserting that something works as expected. Then focus on what you actually want to do, finally returning to your tests and focusing on the impact of your changes.
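That workflow can be sketched in a few lines of Python. The `slugify` helper here is invented for illustration; the idea is to pin down the behavior you rely on before refactoring, so the expectations live in a test file instead of in your head.

```python
import re

def slugify(title):
    """Current implementation, about to be refactored."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Assert the behavior you depend on. Now any rewrite of slugify can be
# checked mechanically instead of held in working memory.
def test_slugify_expectations():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already--slugged  ") == "already-slugged"
```

With these assertions in place, you are free to rework the implementation and let the tests report whether the observable behavior changed.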

If people are writing shitty tests, that is a different problem.

As to your second point, I am fearful of code that does not have tests. I do not know what it does, I have next to no confidence that it does what it is supposed to and no way to validate that I haven't broken it if I change it.

I find the whole pushback against test automation very odd. Here we are working towards automating some business process, while manually testing that it works. Why wouldn't we automate the testing too? If you are not good enough to automate most of your testing, what business do you have automating something else?



