I know this isn't your actual question, but I think you should consider whether the tone of your article represents the company you're trying to build. To me it reads as informal (the "yes, but weird" entry in the comparison table), jokey (the meme at the top) & one-sided (I can't find one point where you lose). It feels like poking at your competitor, not like an attempt at a neutral comparison. Now they're poking back via lawyers.
Maybe you do need a lawyer in this instance, maybe not, but if you're committed to this style when dealing with competitors you'll likely need a lawyer in the future.
Thank you for the comments and for reading the article. I appreciate it very much!
Based on your feedback and other comments in this thread, I have realized that the jokey/cocky tone of voice (including the article's images) damages what I am trying to build more than it helps communicate it.
I am taking the article offline and will later replace it with a more mature no-nonsense comparison table.
Good on you for considering other perspectives, and acting. Many people only come to reinforce their existing ideas. FWIW your product itself looks good, if I'm ever in the market for a Datagrid type component, I'll consider it.
With the increasing balkanization of streaming TV/film services, I wonder if piracy will start making a comeback. When everything (or most things) is under one roof, streaming makes sense: a single bill, one place to find content, etc. Wondering which show is on which platform, or having to sign up for a subscription to a new service for a single show, is not a good user experience, to say nothing of the single monthly bill ballooning to three or more.
If you call out any of the rampant self-aggrandizement, exaggeration, humble-brags, or in some cases outright lies, there's no real upside, and plenty of potential downside.
For a small team of capable engineers, working on something that has low consequence for any single change failing plus a strong business motivation to move fast: sure, do without code reviews if you want.
There is no universal "best practice" for building software; there is just whatever works best for your context. Practices can, and should, change as your business context does. There are some things that are net beneficial for a team in the majority of situations, though. I'd consider code reviews one of those things and make them the default.
A person doing manual testing as a gate to deployment would certainly be hard to make work for deploys as fast as you're targeting. It may still be valuable to have someone separate from the devs testing your product (in production, not as a gate to release), looking for things your automated testing, customer feedback, or metrics may have missed. Whether this would be valuable is really context-dependent.
I've also arrived at this approach and don't think it's that uncommon - IMO the article is presenting a false dichotomy.
There are still gotchas to look out for in the team-embedded QA approach. In typical team sizes, you often end up with only one QA per team - you need to make sure they have cover (everyone needs a break), and they need support in their discipline (do something to share QA knowledge across teams).
Absolutely. Disappointing to see this sort of shallow sales pitch blogspam making it to the frontpage. I'm surprised more HN readers don't see through this.
The entire purpose of this article is to self-servingly attempt to convince the reader that their product is the only solution to QA problems.
It correctly identifies some challenges with QA, but this solution is certainly not the only way to have effective QA. That Rainforest is resorting to such a disingenuous presentation of solutions to QA issues makes me think they probably don't actually solve QA problems very well.
"We have researched what makes QA successful and X, Y, and Z are what we found. Here's how we believe we're solving Z for our customers" would be a much more honest pitch.
I totally see your point here - it's frustrating that how I presented it undermined the impact of the core point, which is that siloed ownership of quality is an anti-pattern, and an anti-pattern which is often a response to the limitations and design principles of quality tooling.
We are to my knowledge the only product built for this kind of cross-functional ownership, that's why I don't recommend other products.
While I agree the post is too marketing-y - to be honest I didn't expect it to appeal so much to the HN audience - it was written in good faith. We built the product because after 7 years of building a product for one of those silos, we became convinced that the only long-term durable solution was to empower everyone to own quality. Clearly I need to figure out how to strike the balance that you outline in your last sentence, which was the goal!
Oh c'mon. Of course a blog on the website of a company is ultimately trying to pitch their product, one way or another. That does not invalidate the points they bring up per se.
It obviously needs a bit of critical thinking when reading it and taking everything with a grain of salt, but that's something I really hope we can expect people on HN to be capable of?
QA also works best for features that the user can meaningfully interact with. The more esoteric, interconnected, or complex the failure is, the more it gets muddled with other reports and weird conditions ("it breaks at 3pm when the moon is in equinox and I have my mouse pointed north"). That’s an exaggeration, but the sentiment is accurate. QA will frequently lack a deeper understanding of how the software works and is connected. That ignorance can be valuable (it finds blind spots) but has trade-offs (assumptions get made based on imperfect observation of cause and effect, where human biases lead you down dark hallways). I’m speaking from experience with lots of manual QA, which is actually a rarity these days.
The other thing to consider is that, if you’re careful, user feedback can be obtained more quickly. If you can keep things stable, you won’t upset your users too badly and you’ll avoid the need for QA to act as a stand-in.
I agree, this all makes sense. Although I think team-embedded QA is generally the right thing, I wouldn't apply it blindly in all cases. Some teams I manage only produce HTTP APIs; these are ideal candidates for automated testing (incl. end-to-end integration tests), and the developers are happy to own this without a QA on the team.
Agreed it's presenting a false dichotomy. The article ends with a pitch for Rainforest's "no-code" QA platform so it makes sense they want to present their product as the clear solution. I could take the article more seriously if it at least mentioned a more integrated approach.
Most projects I've worked on have been games where the studio was split into teams of about 6-10 people, with one dedicated in-house QA member for some but not all teams, a few in-house QA workers not assigned to a specific team, plus a much larger external QA team (from the parent company/publisher). That has worked great.
I've also been on projects with only external QA. It works, but the lack of internal QA has been a frequent annoyance.
Yeah - in my experience 6-10 people per team is typical, often with one of them being dedicated QA. Having this, plus a separate central QA team is one way to address the pitfalls of embedded QA I was pointing out - they can cover for time away for team QA, or act as bench capacity.
There is still competition to consider though. If a SaaS product has competitors that are more reliable, or more feature-rich, those competitors should win & retain more customers over time.
Data lock-in, due to proprietary file formats, makes switching software extremely difficult, especially for companies that interchange data with outside entities.
For instance, companies that do design and/or A/V work have no real alternatives to paying Adobe monthly--regardless of how little work Adobe puts into Creative Suite--because it's an industry standard and there are no alternatives that offer friction-free interoperability.
As a casual observer of self-driving, I feel like this is a really interesting part of the overall problem space. Until a vehicle fleet is 100% autonomous, human-piloted & autonomous vehicles need to co-exist. Humans bend & break driving rules through social interactions, which autonomous vehicles don't currently participate in.
I’m a manager of ~25 developers (who report through a few leads). I also come from a dev background. IMO metrics are dangerous, but can also be useful. Often I find basic code metrics useful as smell tests, but would never use them alone, wouldn’t set them as goals, and wouldn’t use them alone to rank performance in a team.
If someone has a very low number of commits relative to others on their team, I'll investigate deeper: review a few pull requests, check if they're working on something outside source control, etc. Sometimes I find problems, other times not.
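To make the "smell test" concrete, here's a minimal sketch of the kind of check described above. It assumes you've already extracted (author, date) pairs from your history, e.g. via `git log --format='%an|%ad' --date=short`; the sample data and the "half the median" threshold are hypothetical illustrations, not a recommended performance metric — the output is only a prompt to go look closer.

```python
from collections import Counter
from datetime import date

# Hypothetical sample of (author, commit date) pairs, as might be
# parsed from `git log --format='%an|%ad' --date=short`.
commits = [
    ("alice", date(2023, 5, 1)),
    ("alice", date(2023, 5, 1)),
    ("alice", date(2023, 5, 2)),
    ("bob",   date(2023, 5, 2)),
]

# Commit count per author -- a smell test, not a ranking.
per_author = Counter(author for author, _ in commits)

# Flag anyone well below the team median as worth a closer look
# (review their PRs, ask about work outside source control, etc.).
counts = sorted(per_author.values())
median = counts[len(counts) // 2]
flagged = [a for a, n in per_author.items() if n < median / 2]

print(per_author)  # Counter({'alice': 3, 'bob': 1})
print(flagged)     # ['bob']
```

In a real setting the window (e.g. the last month), the threshold, and what counts as a "commit" all need tuning to the team's workflow; squash-merge-heavy teams will look very different from trunk-based ones.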
I would think this is a common approach. I use the same approach. Scales up to hundreds of developers.
I do generally try to enforce the goal that on any working day there should be at least one commit from the developer. If I notice a developer is consistently missing this metric it usually indicates trouble in the project they are working on or that the developer is just not delivering enough.