Hacker News | dimes's comments

I wrote Dihedral, a compile-time dependency injection framework for Go [0]. It was inspired by the Java framework Dagger. It worked pretty well, but was a little clunky with Go's syntax. Ultimately, I decided it wasn't worth it given the simplicity of manually constructing Go objects in a service setting.

0: https://github.com/dimes/dihedral
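For contrast, the "manually constructing Go objects" alternative is plain constructor injection. A minimal sketch (the Store/Server types are made up for illustration, not taken from the Dihedral repo):

```go
package main

import "fmt"

// Store is a hypothetical dependency.
type Store struct{ dsn string }

func NewStore(dsn string) *Store { return &Store{dsn: dsn} }

// Server depends on Store and receives it through its constructor.
type Server struct{ store *Store }

func NewServer(s *Store) *Server { return &Server{store: s} }

func main() {
	// Manual wiring: construct each dependency once, in order.
	store := NewStore("postgres://localhost/app")
	server := NewServer(store)
	fmt.Println(server.store.dsn)
}
```

In a typical service, this wiring lives in one place (main or a small setup function), which is why a framework can feel like overkill.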


I rewrote the backend on a team I used to work on. The service had a ton of unit tests. Given that this was a full rewrite, those unit tests were useless. I spent the first few days writing a comprehensive suite of integration tests I could run against the existing service. These tests directly mimicked client calls, so the same tests should be just as valid for the rewritten service. Using these tests, I was able to catch 90%+ of potential issues before cutting over to the new service.

Personally, I find unit tests to be mostly useless. Every time I touch code with a unit test, I also need to change the unit test. Rather than testing, it feels like writing the same code twice.


> Personally, I find unit tests to be mostly useless. Every time I touch code with a unit test, I also need to change the unit test. Rather than testing, it feels like writing the same code twice.

I think they're mostly useless when refactoring, but they're useful when writing new code and making relatively small to medium-sized changes. For new code, it's helpful to me at least to express my intentions in a more concrete form, and it gives me more confidence that I didn't miss something. For relatively small changes, they help catch fine-grained regressions. Even if I meant to make a change, a failing test forces me to think about handling a particular case correctly that I might have forgotten.

The kind of unit tests I do hate are the ones that are so mock-heavy that they're pretty much only testing the structure of your codebase (did you call all the methods in the right order, and nothing more?). I was once on a team where that was pretty much all they wrote, and they were very resistant to any more integrated level of testing because (I think) they read in an opinionated book somewhere that low-level tests were good enough (they weren't).


When refactoring, unit tests confirm that you did it right (or wrong).


Except you often need to rewrite them, so now you've got two places (per 'unit') where you could have introduced a bug. Integration tests and E2E tests are far more valuable because they're attacking it at the business logic side, which is far less volatile, and particularly in a refactor, a useful invariant.


I often feel that people take "unit" tests too literally, and E2E as well. You can write perfectly valid, fast, and useful partly-integrated tests with common unit-testing frameworks.

The other thing is that, like you and some siblings have pointed out, many if not most people write unit tests all wrong and in the end just test the mocks. Those are really bad and you can just throw them away. Same with all those tests that just check that the right internal calls are being made. They test nothing.

You need to attack the "business end" of your unit (or small groups of units). Inputs in and assert the outputs. Asserting that a certain collaborator was called can still make sense but if that's literally the only thing you do it's not very valuable at all.

You can generally see whether a unit test was a good unit test based on the fact that you were able to refactor the implementation of the method _without_ having to change the unit test. Yes, those definitely do exist, even in larger systems.
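A toy illustration of that refactor-survival property (the Slugify function and its behavior are made up for the example): the test pins only the input/output contract, so the body can be rewritten freely without touching the test.

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a stand-in "unit": input in, output out, no collaborators.
// Any implementation that satisfies the same contract passes the same test.
func Slugify(title string) string {
	lowered := strings.ToLower(strings.TrimSpace(title))
	return strings.Join(strings.Fields(lowered), "-")
}

func main() {
	fmt.Println(Slugify("  Hello   World "))
}
```

Contrast this with a mock-heavy test: replacing strings.Fields with a hand-rolled scanner here would break nothing, because nothing about the implementation is asserted.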


> You can generally see whether a unit test was a good unit test based on the fact that you were able to refactor the implementation of the method _without_ having to change the unit test. Yes, those definitely do exist, even in larger systems.

"Test the interface, not the implementation."


Even better, test the specification


The speci-what?

We might be in different types of software development. There's seldom an actual "specification" to a level of detail that you could test to in the sense you're probably thinking of (but I'm having to guess here for lack of detail and context from your end).

In the field I work in for example, a detailed specification in the way I'm guessing you mean would be prohibitively expensive and just not cost effective at all vs. the benefit you can get from throwing something together from imperfect information and improving upon it iteratively.

There was a (WP?) article on HN recently about "Releasing software the right way" (or a similar title), which basically said to use whatever approach actually makes sense in your circumstances. The example IIRC was hardware development (detailed specs) vs. a SaaS company.


In terms of something like embedded systems, aerotech, etc., the specification extends all the way to the unit. In terms of a SaaS, the specifications extend all the way to business logic such as (cartoon examples):

- don't charge the customer twice

- or when I click submit on the front-end the following possibilities happen according to the back-end response

- or when the back-end receives x, the inventory should be updated according to this business logic, as well as y

Say you're doing this capital-A Agile style: all of these should be present in the acceptance criteria for any user story. As someone who's worked in rapid application development for mobile, I can say reaching this level of specification increases speed and reduces redundant communication. It often takes less than half an hour for someone to write, and then it's iterated on by the team before implementation, and during.


> I often feel that people take "unit" test too literal

Perhaps. And this is where I think a useful distinction can be made between the "unit" (usually a function, or sometimes a single file, e.g., a C-style compilation unit) and a "system" (a collection of functions that perform complementing tasks, sometimes also a unit).

> You need to attack the "business end" of your unit (or small groups of units)

It's worth noting here that sometimes the business logic extends all the way to the "unit". This is usually in very technical domains. Say, a maths helper library written for consumption by other programmers (either internally or externally) would often have clearly specified outcomes at the unit level.


> Except you often need to rewrite them, so now you've got two places (per 'unit') where you could have introduced a bug.

That's not a bad thing, though. DRY might be fine for your main implementation, but redundancy is a time-tested way of catching errors (a.k.a. "double checking").


In theory, if adequate attention is spent on both maintaining the implementation as well as the tests, this is perfectly valid. In practice, this trade-off between expedience and verification goes towards the former when it comes to rapid development.


I'm a game dev and over time I've settled on using two groups of tests for my projects. Both at opposite ends of the spectrum.

.

1. Unit tests. But I only write them for stuff that needs them. E.g. some complex math functions that translate between coordinate systems; the point of the unit tests is to confirm that the functions are doing exactly what I think they are doing. With mathsy stuff it can be very easy to look at the output of some function and think that looks fine, but in reality it's actually slightly off, and not exactly what it should be. The unit tests are to confirm that it's really doing what I think it's doing.

.

2. Acceptance tests by a human. There's a spreadsheet of everything you can do in the game and what should happen. E.g. press this button -> door should open. As we add features we add more stuff to this list. At regular intervals and before any release, several humans try every test on various hardware. This is to catch big / complex bugs and regressions. It's super tedious but it has to be done imo. Automating this would be an insane amount of work and also pointless, as we are also testing the hardware; you get weird problems with certain GPUs, gamepads, weird smartphones etc.

.

I find those two types of tests to be essential, the bare minimum. But anything in between, like some kind of automated integration testing, is just a shit-ton of work and will only be useful for a relatively brief period of development; changes will quickly render those sorts of tests useless.


Yes, totally agree. Any code that has complicated logic with few / no dependencies benefits from unit testing.


I've arrived at this exact same conclusion for frontend work as well. I always go for integration tests first, and only rely on unit tests if hitting some edge case is hard via integration test.


And to clarify, if any individual function reaches some arbitrary level of irreducible complexity, then I'll absolutely unit test that. It's kind of a "you know it when you see it" kind of thing.


I find that, in this life, you usually get what you pay for, and, compared to other options, unit tests' primary virtue is that they're inexpensive.


Unit tests help verify individual components of a system - which makes them top-of-mind for library code.

I think the issue with them lies in that most developers aren't shipping libraries, they're shipping integrated systems, so there's no component worth testing. (You can always invent one, but that's just overcomplicating the code.)

At the same time, it's also genuinely hard to write good, principled tests of integrated systems, harder than it is to code up a thing that kinda works and then manually debug it enough to ship. You have to have the system set up to be tested, and feature complexity actively resists this: you fight a losing battle against "YOLO code" that gets the effect at the expense of going around the test paradigm.


How does this scale though? If you've got integration tests that include state, now you've got to either run your tests serially or set up and tear down multiple copies of the state to prevent tests from clobbering each other. As your project expands, the tests will take longer and longer to run. Worse, they'll start to become unreliable due to the number of operations being performed. So you'll end up with a test suite that takes potentially multiple hours to run, and may periodically fail just because. The feedback loop becomes so slow that it's not helpful during actual coding. At best, it's a semi-useful release gate. Is there another way?


> If you've got integration tests that include state, now you've got to either run your tests serially or set up and tear down multiple copies of the state to prevent tests from clobbering each other.

That is a very normal setup.

> Worse, they'll start to become unreliable due to the number of operations being performed. So you'll end up with a test suite that takes potentially multiple hours to run, and may periodically fail just because.

This is called flakiness and is generally a symptom not to be ignored, as it is almost always indicative of bigger issues. It's rare that flakiness is limited to test environments. Instead, it's much more likely that whatever your smoke tests are experiencing is something end-users are also intermittently hitting.

> The feedback loop becomes so slow that it's not helpful during actual coding.

Devs can write their own unit tests when working on their assigned tasks. Smoke tests are designed to run when you're trying to integrate those changes into the existing codebase. At that point, you have the calculus all wrong. Smoke tests slow down devs enough that they don't merge broken code into production. That is a useful release gate unto itself.

If unit tests pass but smoke tests fail, then often (the vast majority of the time in my experience) the issue is that either the dev didn't understand the task or, more often, didn't understand the system they were integrating into.


If you have some code that, were its callers to change, they would stop using that code or use it in a different place, it's a unit and it's a good idea to unit test it.

If you have some code that if its callers changed you would want to change it too, then it's on the same unit as the calling code, and it's bad to divide it away.


You probably have "resist fingerprinting" turned on.


Most likely. I don't remember all the settings that I have turned on at some point. :)


Will one be able to embed a Hetchr ATOM into Hetchr in the future?


Mind elaborating? All Hetchr ATOMs are embedded into your workspace.


Modern bridges made from concrete are not designed to last more than a century. I believe the expected useful life of something like a cable-stayed bridge is around 100 years. Compare that to a suspension bridge, which can be used almost indefinitely.


It would be much better to store the blog content directly on the blockchain. This is very expensive to do on Ethereum, but should hopefully get cheaper over time.

Is IPFS really resistant to censorship? It seems like any state actor could easily block access to IPFS nodes if they were serving a specific CID.


I could be incorrect, but my understanding is that the DHT used to route through IPFS is not robust to attack. A bunch of nodes could join the DHT and start maliciously routing data in circles, which would be a very low-cost way to significantly disrupt data availability and uptime on IPFS.

A few papers exist that describe byzantine fault tolerant DHTs, but they make assumptions about the percentage of evil nodes, which requires some method of authentication / Sybil resistance to be effective. Also the network cost blows up substantially, and DHTs already aren't very fast in terms of loading things like web pages.


It is not censorship proof but it is resistant to censorship.

A state actor would need to block all IPFS hosts as soon as they begin to serve a specific CID. This is a game of whack-a-mole that would be difficult to maintain.


> but should hopefully get cheaper over time.

It will get cheaper to "save" but more expensive to access


I made a spreadsheet comparing renting to buying a few years ago. The premise was that I wanted to compare my investment in my home vs. paying less to rent and investing the difference in the stock market. The assumptions were something like a 3.5% mortgage rate (30y) and 4% appreciation on my home, vs. a 7% appreciation in the market.

What I found is that on a time horizon of ~7 years, it absolutely makes sense to buy a home. The reason is that when you have a mortgage, you're making a highly leveraged investment. If you buy a house for 100k and put 20k down, then you're 5x leveraged. If you're able to sell the house for 110k, the price has increased 10%, but your ROI is 50%.

There was a definite inflection point after 7 years, however, where the amount of leverage decreases to the point that the higher gains in the market begin to dominate the modest increase in home value.
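The leverage arithmetic above can be made concrete with the numbers from the example (a simplified sketch that ignores interest, fees, and taxes):

```go
package main

import "fmt"

// leveragedROI returns the return on the cash actually invested
// (the down payment), not on the full purchase price.
func leveragedROI(price, downPayment, salePrice float64) float64 {
	return (salePrice - price) / downPayment
}

func main() {
	// 100k house, 20k down (5x leverage), sold for 110k:
	// the price rose 10%, but the cash-on-cash return is 50%.
	fmt.Printf("%.0f%%\n", leveragedROI(100000, 20000, 110000)*100)
}
```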


Did you include closing fees?

Those can take a big chunk out of the appreciation when you sell and buy, and could make a difference of not breaking even for an additional year or two.


Yes, fees were included for selling the house, but I did not include capital gains taxes in the investment calculation.


You should because that can be a big factor in comparing returns between home ownership and equities. You can keep $250k of capital gains on your home tax free, while the lowest capital gains rate you’ll pay when selling equities is 15%. That can be a significant difference in what you take home.


> The reason is that when you have a mortgage, you're making a highly leveraged investment.

You can lever up an equity portfolio as well. Moreover, the going interest rate for a margin loan is only 1-2% (less than the cost of a mortgage).


Although mortgages have some big advantages:

1. Interest rate can be fixed for 30 years.

2. Interest is tax-deductible.

3. No margin call. If the price drops, you can wait until it recovers.

There’s really nothing similar available to the average person for other investments.


Is that the going rate for a margin loan? I have only found IBKR to have rates that low.


Can you just take the equity back out of your home and put it into the market?


The alternative methods of transport will be more expensive, which will lead to a decrease in demand.


Yeah, I’ll just not drive to work...

This has never been true for consumers. The only “demand” it impacts are companies, who then move their jobs to China where they can use coal powered plants.

There is no world where this artificial increase in prices is good.

Green energy is improving, nuclear is improving. It’s improving because it has to compete with alternatives, like oil. Green energy is not cheap or widespread enough today for consumers or industry. So it’s only the people of the country that lose.


This has always been true of consumers.

- Let's buy a smaller/more efficient car instead of the bigger/less efficient one

- Let's move closer to work

- Let's carpool

- Let's not buy our teenager a car just yet

Etc, etc. Cost of ownership affects all sorts of choices, always has, always will.


I was wondering, just for the effect, if I could translate this list into the world of computers:

-Buy smaller/more efficient computers instead of the bigger/less efficient one

-let's move closer to work

-let's share computers

-let's not buy our teenagers a computer just yet

Feels strange to read. But ok, who knows, maybe compu, umm, smartphones fall out of favour some day as well.


> -Buy smaller/more efficient computers instead of the bigger/less efficient one

I've run laptop-CPU-based desktops instead of desktop CPUs (chromebox, NUC, lots of similar products with less-known branding). It probably makes a difference, especially if you have a dedicated GPU in the desktop. Max power is certainly reduced, usually idle power too; for raw compute tasks there's a balance, because the tasks will take longer and maybe end up using similar power. For games and stuff, you'll get way less fps (or lower settings, or realistically both), but use less power if you play the same amount of time.

> -let's move closer to work

I dunno how much power difference this makes.

> -let's share computers

I've done this; one user at a time limits max power consumption, but may lead to higher utilization. Multiple monitors and keyboards/mice are an option too, but more fiddly. Maybe a little power savings vs. two desktops, but it worked better with two GPUs, so maybe not.

> -let's not buy our teenagers a computer just yet

This is kind of like the sharing one earlier.


We do that stuff.

> -Buy smaller/more efficient computers instead of the bigger/less efficient one

Smaller process size and better power management makes your mobile device have better battery life. 80 Plus, energy star, performance-per-watt benchmarks.

> -let's move closer to work

Hughesnet and dial-up are not good WFH options.

> -let's share computers

Cloud, client/server.

> -let's not buy our teenagers a computer just yet

Cancelled Instagram-for-kids?


From a consumer perspective

- Cheaper phones, they're just slower

- Lower resolution videos and images, less data, less cpu cost.

- Single shared desktop instead of everyone having a laptop

- Literally the same

---

From a business/software eng perspective

- Less compute, let things take longer.

- More efficient programming languages. Simplify problems (e.g. use aggregate statistics instead of working on the whole data set).

- Timesharing on servers

- Pen and paper for people who don't really need one? Don't buy the delivery driver a computer, just tell them where to go? This one is hard to make a business analogy out of.


If you work with information, there is no fundamental reason to live close to "work". It is much more efficient to live where you want to live as there is no need to travel - you are already there.


Yep. Americans used to drive enormous boat sized cars, then suddenly a lot of people liked these small Japanese cars.

I wonder why that happened????


That’s my point, you should always try to do what benefits your people (as a leader). Idk what this is, it’s just disproportionately negatively impacting the poor and middle class.

At the same time, this forces jobs overseas, and Biden lets Russia build a pipeline to Europe, limiting the USA's ability to sell and compete. Literally none of this is helping the environment or the citizens he leads.


I'm just addressing this part "This has never been true for consumers. The only “demand” it impacts are companies", it's wrong, it does impact demand from consumers, substantially.

Whether impacting consumer demand is a net positive for the population (due to climate change) or net negative for the population (because it's preventing useful stuff) isn't something I really want to weigh in on on the internet, I don't hold strong enough or well supported enough opinions.


I, a consumer, chose to live in a place where I could get to work without a car (and not having to drive was an explicit part of my decision-making). Consumers make this kind of choice all the time. That not every consumer can choose not to drive doesn't mean the idea of trying to influence consumer behavior is unreasonable.


This has never been true for consumers.

Well, that's just not true. People make decisions about what to drive, how to drive it, and how far to drive it based on the price of gas.


Why not tax the pipeline to simulate that cost increase? Isn't that the best of all worlds?


In an ideal world, probably, but unfortunately taxes are politically toxic in the US, to the extent that imposing a new tax whose costs will ultimately be borne by consumers is impolitic in a way that creating new consumer costs whose revenue accrues to private actors isn't. Voters aren't rational, and elected officials can't do a lot about that.

Also, revoking the permit can be done entirely by executive action, where a new tax would require cooperation from Congress, which the administration would never get.


This was actually the argument of Canada's governments: build it, but add a carbon tax.

Canada now has a federal carbon tax, BC has a carbon tax, and Alberta did have one, but the previous government was thrown out in favour of a deeply conservative one (that is now woefully unpopular, so the pro-carbon-tax party is probably coming back).

You can see one possible issue with the case of the Albertan government changing. Easy to add/remove taxes, hard to add/remove pipelines. There's also even some question of whether the carbon tax will be effective. It arguably hasn't done that much in BC. This is probably because for political reasons it has been set low. A previous conservative government "froze" the tax at low rates. Scientists note the tax needs to be literally $100s more per ton of CO2 to be effective.

In my experience the carbon tax has seemed like a good idea, but has seemed in practice more of a political tool to generate the public acceptance of pipelines. The actual CO2 impact of the pipeline is absolutely in zero way mitigated by the carbon tax.


That pipeline that now will never exist was substantially owned by the Canadian and Albertan governments. If it had been built, we would have seen a very efficient transfer of profit directly to new initiatives. Now we're no further ahead AND we own a pipe-less pipeline project.


Unfortunately that demand is sticky. In North America, we have built our entire societies around plentiful fossil fuels. We cannot just turn that ship around on a dime.


Except I doubt everyone is going to walk to work, eat only locally grown produce in the winter (hope you like potatoes), feed the world without fertilizer and replace plastics with ... what, smug self-satisfaction?


Why is it statistically impossible? Where one chooses to live is likely highly correlated with the job market for their skillset in that region.


My comment was more about the demographics of the group (young, white, male) than where they chose to live.


This stereotype seems like one you got from the press rather than from observing what’s happening on the ground - the best companies in Silicon Valley have overwhelmingly hired East and some South Asians over the last 5 years. The not-at-all-atypical team I am part of currently at Facebook is entirely East/South Asian - zero whites, zero blacks, zero Hispanics. It does represent over 40% of the planet, though.


What are you thoughts on the demographics of the NBA?


Even better. The demographics of Gangsta Rap please?

Downvoters: What is so problematic about this sentence versus the parent's sentence?

Care to explain?


Why is it, in your view, a better example?


Genuinely curious about alternatives. I have a simple Node application where all I really need to do is run npm install and node index.js. Even with such a simple setup, provisioning hosts was a huge pain. Each one needed Node installed on it, and I had to deploy the app to each new host. With k8s, I just change the number of hosts I want in a config file and everything just works.
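The "number in a config file" is the replicas field of a Deployment manifest. A minimal sketch (names and image are illustrative; a real manifest would use an image that bundles the app):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app            # illustrative name
spec:
  replicas: 3               # change this number to scale out
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: node:20-alpine        # illustrative base image
          command: ["node", "index.js"]
```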

