
Ah yes, one step outside of New York City, and I'm immediately in the boondocks.

Assuming one has access to a credit card and is financially literate, I agree that using BNPL is a hard sell (I've never used it myself). But there is a large population--the unbanked--that lacks access to credit cards. For that population, the choice is not between a credit card and BNPL, but between BNPL and cash.

I found the article's description of how BNPL structurally differs from credit cards interesting, as it is a reasonable explanation for how BNPL can serve the unbanked and still have functioning credit risk models:

> Even with adverse selection for BNPL, the underwriting is for each transaction, not for all monthly spending like that in credit cards; so if a consumer misses a payment, the BNPL provider can stop lending immediately, as opposed to the credit card company which has to underwrite the person’s full ability and willingness to repay their debts. This tech-enabled granularity allows for legibility and hence greater precision and predictability.


> But there is a large population--the unbanked--that lacks access to credit cards. For that population, the choice is not between credit card and BNPL, but BNPL and cash.

Which is an indication that the status quo will not last.

The median person who cannot get a bank account and a credit card is in that state due to a history of either fraud or nonpayment.

Sooner or later, that preponderance of risk will catch up to BNPL, and it will either become less accessible, like credit cards, or become more like the services that already serve that population segment, like check-cashing locations and payday loans.

Having better technology and granularity, as you say, won't save them. At best it can delay the inevitable.


Yes, that's a good point. However, how much of this finer-grained lending flexibility translates into more favorable underwriting (to help cover the unbanked) is not clear. The author did not cite any data to that effect.


How does a person without a bank account repay their Klarna debt?


Probably meant "underbanked": has an account, but credit is shot, so no credit cards.


If you have a lot of files, the initial (dev server) page load time increases linearly with the number of files you have.

With a slow bundler, that tradeoff made sense, but with a fast bundler, it is suboptimal.

Also, typically the application is split into multiple smaller bundles, so only a slice of the application is rebundled on change.
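As a minimal sketch of that multi-bundle setup (assuming esbuild as the bundler; the entry points here are made up), each section of the app gets its own entry, and code splitting extracts shared code into common chunks:

```javascript
// build.mjs -- hypothetical esbuild build script; entry names are made up.
import * as esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["src/app.ts", "src/admin.ts"], // one entry per app section
  bundle: true,
  splitting: true, // shared code is extracted into common chunks
  format: "esm",   // esbuild's code splitting requires ESM output
  outdir: "dist",
});
```

With this layout, an edit only forces rebundling of the affected entry's slice (plus any shared chunks it touches), which is the tradeoff described above.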


Next.js et al. provide a set of opinionated packages designed to enable a specific paradigm. For Next.js, that's server-side rendering. For Remix, that's progressive enhancement.

If you are happy with client-side rendering and do not desire React on the server, there is not a strong reason to use Next.js; it introduces complexity and churn.


Note that while Vite transpiles with esbuild, it bundles with Rollup, which is single-threaded JS.

Vite also uses esbuild to prebundle dependencies for the dev server, but this is separate from production builds.
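For illustration (a hypothetical vite.config.js; the package names are just stand-ins), the two steps are configured in separate places:

```javascript
// vite.config.js -- dev prebundling (esbuild) vs. production bundling (Rollup)
export default {
  optimizeDeps: {
    // controls esbuild's dependency prebundling for the dev server
    include: ["lodash-es"], // force this dep to be prebundled
  },
  build: {
    rollupOptions: {
      // options here are passed straight to Rollup for the production build
      output: { manualChunks: { vendor: ["react"] } },
    },
  },
};
```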


If you are also looking for context beyond what a bundler is, I have written a broader exposition on frontend builds that may be useful in understanding how bundlers compare to adjacent build tools: https://sunsetglow.net/posts/frontend-build-systems.html.


Thanks, that's actually exactly what I was after without realising it!


Esbuild lacks the featureful plugin API that Webpack and Rollup have. It's pretty easy to avoid packages that require transpiler/bundler plugins and still have a great set of dependencies, but it's a deal breaker for many.


I have recently written a broader exposition on frontend build tooling, perhaps it will be useful: https://sunsetglow.net/posts/frontend-build-systems.html.

The performance gains in the recent past have mostly been due to moving away from single-threaded JavaScript to multi-threaded compiled languages. This requires a complete rewrite, so existing tools rarely take this step. We see this optimization in Farm alongside "partial bundling," which strikes a performance-optimal balance between full bundling (Webpack) and no bundling (Vite) in development.

Vite abstracts over a preconfigured set of "lower-level" frontend build tools consisting of a mixture of older single-threaded JavaScript tools and newer multi-threaded compiled language tools. Vite can adopt the partial bundling of Farm, but dispensing with its remaining JavaScript tools is a major breaking update.


> The performance gains in the recent past have mostly been due to moving away from single-threaded JavaScript to multi-threaded compiled languages.

This is overly simplistic. Parcel had far better performance than Webpack before they added native code or threading.

Webpack remained slow because it didn’t have a good public/private interface for its plugins, so the changes that could be made were limited.

> Vite can adopt the partial bundling of Farm, but dispensing with its remaining JavaScript tools is a major breaking update.

Turbopack and Parcel both have excellent performance without any compromises to their bundling strategy. Vite skipping this likely just simplifies its architecture. Bundling creates an opportunity to be slow, but it doesn't necessitate it.


There's also Rolldown (Rollup-compatible) for Vite, which is written in Rust and still in active development.


Busy reading this and it's great so far. One comment: you've referenced "tree shaking" a few times without explaining what it is. I think I know, but it might help others to explain it before you reference it.


I would just use "dead code elimination" instead. They're the same but that actually tells you what it is.


Since tree-shaking is a common term across frontend build tooling documentation, I adopted it as well.

Dead code elimination in its traditional form also runs during code minification, which is a separate build step from bundler tree shaking. Having separate terms avoids ambiguity.


Tree shaking is the removal of unused exports, a very specific thing for JS. Dead code elimination is a broader term that includes tree shaking, but in the front-end world it is usually used for the elimination of code after compilation (or transpilation/minification, in the JS/TS case).

A practical example: tree shaking wouldn't remove the variable b in "export function foo(x) { let b = 10; let a = 20; return x + a; }", but if this export isn't imported anywhere, it would remove the whole function definition. Uglify.js, which does dead code elimination, would remove the unused b variable.
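To make the distinction concrete, here's a toy sketch (not any real bundler's algorithm) of tree shaking as a reachability analysis over exports; the module and export names are made up:

```javascript
// Each module maps its exports to the exports of other modules they use.
const modules = {
  "entry.js": { main: ["utils.js#add"] }, // main() uses add()
  "utils.js": { add: [], sub: [] },       // sub() is never imported anywhere
};

// Walk the export graph from the entry point; anything unvisited is "shaken".
function treeShake(modules, entry) {
  const live = new Set();
  const visit = (ref) => {
    if (live.has(ref)) return;
    live.add(ref);
    const [mod, name] = ref.split("#");
    for (const dep of modules[mod][name]) visit(dep);
  };
  for (const name of Object.keys(modules[entry])) visit(`${entry}#${name}`);
  return live;
}

const live = treeShake(modules, "entry.js");
// "utils.js#sub" is not in `live`, so a tree shaker drops that whole export.
// An unused variable inside a kept function is invisible at this granularity;
// that's the part a minifier's dead code elimination handles.
```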


Sure, tree shaking is just a very basic dead code elimination algorithm. But there's no reason to give it such a prominent and confusing name. Just call it "basic dead code elimination"! If you must be specific (why?) call it "dead export elimination".


I don't disagree with you, but on the other hand, it was really a hard problem in JS (partly because functions capture outside context, and mainly because of how mutable modules are, or were, with CommonJS), so it became a huge race for optimization. Now it's really mostly dead code elimination because of how much saner ES modules are, yet the name stays. But hey, we also don't call televisions "bigger monitors with built-in spying OSes"; names have a tendency to stick :)


Thanks for the feedback!

I struggled with the ordering since the sections are somewhat mutually dependent; it's arranged more like a thematic history than a chronological one.

Tree-shaking naturally fits under bundling, and I'm afraid that explaining it earlier would leave the explanation itself without context, since without bundling there is nothing to tree shake.

I can hyperlink those references to the tree-shaking section tomorrow so that there is an action for the confused.


Thanks. I did see that it was covered later as I continued reading the article, and looking back I see that it's in the TOC as well, so maybe I just didn't pay enough attention. I think just hyperlinking to that section would do the trick (and maybe a small comment like "(covered below)" on the first occurrence, but that might not be necessary).

Thanks for the response and more importantly the article! It covered exactly all the points that were opaque to me about frontend build processes. I've also forwarded it to a couple of other backend devs that I know.


What helps me when writing about complex things is introducing a concept with a simple (perhaps even slightly wrong) explanation when it's first needed, then explaining it in greater detail and clearing up any prior simplifications once the reader is able to grok it fully.

Not to take away from the excellent writing in your original essay!


Thank you for this. This looks like just what I was looking for!

