The advantage of frameworks is having a "common language" for achieving goals together with a team. A good framework also prevents some of the stupid mistakes you would make if you tried to develop that "language" from scratch.
When you do a project from scratch, if you work on it long enough, you end up wishing you had started differently, and you refactor pieces of it. While using a framework, I sometimes have moments where I suddenly get the underlying reasons and advantages of doing things a certain way, but that comes once you become more of a power user, not at the start, and only if you put in the effort to question things. And other times the framework is just bad and you have to switch...
I used Claude to document, in great detail, a 500k-line codebase in about an hour of well-directed prompts. Just fully explained it, how it all worked, how to get started working on it locally, the nuance of the old code, pathways, deployments using salt-stack to AWS, etc.
I don't think the moat of "future developers won't understand the codebase" exists anymore.
This works well for devs who write their codebase using React, etc., and also for the ones rolling their own JavaScript (which I personally prefer).
To make a parallel to actual human language: you can understand a foreign language well without being able to speak it at the same level.
I found myself in that situation with both foreign languages and with programming languages / frameworks - understanding is much easier than creating something good. You can of course revert to a poorer vocabulary / simpler constructions (in both cases), but an "expert" speaker/writer will get a better result. For many cases the delta can be ignored, for some cases it matters.
> I used Claude to document, in great detail, a 500k-line codebase in about an hour of well-directed prompts
Yes, but have you fully verified that the generated documentation matches the code? This is like me saying I used Claude to generate a year-long workout plan. And that is lovely. But the generated thing needs to match what you wanted it for. And for that, you need verification. For all you know, half of your document is not only nonsense but it is not obvious that it's nonsense until you run the relevant code and see the mismatch.
Most however are surely capable of understanding a simple metaphor, in which "magic" in the context of coding means "behavior occurring implicitly/as a black box".
Yes, it's not magic as in Merlin or Penn and Teller. But it is magic in the aforementioned sense, which is also what people complain about.
In my experience, among programmer personality types, both laborers and artists are opposed to reading guides: the laborers, I think, due to laziness, and the artists due to a high susceptibility to boredom, since most guides are not written at the intellectually engaging level of SICP.
Craftsmen are naturally the type to read the guide through.
Of course, if you spend enough time in the field you end up just reading the docs, more or less, because everybody ends up adopting craftsman habits over time.
Do people generally dislike magic once they have formed an opinion, or are people who dislike magic just more prone to voicing that opinion? And if magic is disliked by people experienced enough to form opinions, why does it keep coming back around?
I would suppose the people who create "magic" solutions have at least voiced an opinion that they like magic, and the same goes for the people who take up those solutions. For the record, I too dislike magic, but my feeling is that I am somewhat in the minority on that.
It's funny how Lisp has been criticized for its ability to create lots of macros and DSLs, and then Java & JavaScript came along and there was an explosion of frameworks and transpiled languages on the JVM, in Node, and in the browser.
"The problem with Scheme is all of the implementations that are incompatible with one another because they each add their own nonstandard feature set because the standard language is too small." Sometimes with an added subtext of "you fools, you should have just accepted R6RS, that way all Schemes would look like Chez Scheme or Racket and you'd avoid this problem".
Meanwhile in JavaScript land: Node, Deno, Bun, TypeScript, JSX, all the browser implementations which may or may not support certain features, polyfills, transpiling, YOLOOOOO
React is a weird beast. I've been using it for years. I think I like it? I use it for new projects too, probably somewhat as a matter of familiarity. I'm not entirely convinced it's a great way to code, though.
My experience with it is that functional components always grow and end up with a lot of useEffect calls. Those useEffects make components extremely brittle and hard to reason about. Essentially it's very hard to know what parts of your code are going to run, and when.
I'm sure someone will argue: just refactor your components to be small, avoid useEffect as much as possible. I try! But I can't control for other engineers. And in my experience, nobody wants to refactor large components, because they're too hard to reason about! And the automated IDE tools aren't really built to handle refactoring these things, so either you ask AI to do it or it's kind of clunky by hand. (WebStorm is better than VSCode at this, but they're both not great.)
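A minimal sketch of the failure mode (component and endpoint names are hypothetical): each effect looks harmless in isolation, but what actually runs, and when, depends on dependency arrays scattered across the file:

```tsx
import { useEffect, useState } from "react";

function UserPanel({ userId }: { userId: string }) {
  const [user, setUser] = useState<unknown>(null);
  const [status, setStatus] = useState("idle");

  useEffect(() => {
    // Runs on mount AND whenever userId changes.
    setStatus("loading");
    fetch(`/api/users/${userId}`)
      .then((r) => r.json())
      .then(setUser);
  }, [userId]);

  useEffect(() => {
    // Runs after every render where `user` changed -- including the
    // initial null. The ordering between this and the fetch above is
    // implicit, which is exactly the "what runs, and when?" problem.
    if (user) setStatus("ready");
  }, [user]);

  return <div>{status}</div>;
}
```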
The other big problem with it is that it's just not very efficient. I don't know why people think the virtual DOM is a performance boost. It's a performance hack to get around this being a really inefficient model. Yes, I know computers are fast, but they'd be a lot faster if we were writing with better abstractions.
I dunno, AI tools love adding not only useEffect but also unnecessary useMemo.
> I don't know why people think the virtual DOM is a performance boost.
It was advertised as one of the advantages when React was new: thanks to the diffing, browsers would only need to render the parts that changed, instead of shoving a whole subtree into the page and then having to re-render all of it (because, remember, this came out in the era of jQuery and mustache.js generating strings of HTML from templates instead of making targeted updates).
Patching the DOM existed long before React; that wasn't a new technique. IIRC, the idea was more that the VDOM helped by making batching easier and reducing layout thrashing, where you write to the DOM (scheduling an asynchronous layout update) and then read from the DOM (forcing that layout update to be executed synchronously, right now).
That said, none of that is specific to the VDOM, and I think a lot of the impression that "VDOM = go fast" comes from very early marketing that was later removed. I also think people understand that the VDOM is a lightweight, quick-to-generate version of the DOM, and then assume that the VDOM therefore makes things fast, but forget about (or don't understand) the patching part of React, which is also necessary if you've got a VDOM and which is slow.
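For what it's worth, the thrashing pattern is easy to reproduce in plain DOM code; the batched-commit behavior described above is one way to avoid it. A hedged sketch, not React internals:

```ts
const items = Array.from(document.querySelectorAll<HTMLElement>(".item"));

// Thrashing: each style write invalidates layout, and each offsetHeight
// read forces the browser to recompute layout synchronously -- once per item.
for (const el of items) {
  el.style.width = "50%";       // write (invalidates layout)
  console.log(el.offsetHeight); // read (forces a synchronous reflow)
}

// Batched: all reads first, then all writes, so layout is recomputed once.
// A VDOM commit phase effectively gives you the write-batching half of this.
const heights = items.map((el) => el.offsetHeight); // reads
items.forEach((el) => { el.style.width = "50%"; }); // writes
console.log(heights);
```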
The "magic" of React though is in its name, it's reactive. If all you're doing is creating static elements that don't need to react to changes in state then yeah, React is overkill. But when you have complex state and need all your elements to update as that state changes, then the benefits of React (or similar frameworks) become more apparent. Of course it's all still possible in vanilla JS, but it starts to become a mess of event handlers and DOM updates and the React equivalent starts to look a lot more appealing.
Given the verbosity of Java's hello world vs Python's, you'd walk away with the conclusion that Java should never be used for anything, but that would be a mistake.
Thanks for this! I've mostly avoided getting too into React and its ilk, mainly because I hate how bloated the actual code generated by that kind of application tends to be. But also I am enjoying going through this. If I can complete it, I think I will be more informed about how React really works.
I feel like a lot of the comments here are from people who either weren't around for, or didn't grow up in, the era where adactio and the wider web dev scene (Zeldman, etc) were the driving force of things on the web.
If you've only been in a world with React & co, you will probably have a more difficult time understanding the point they're contrasting against.
I was around for that era (I may have made an involuntary noise when Zeldman once posted something nice about a thing I made), but being averse to "abstraction in general" is a completely alien concept to me as a software developer.
Yes, but I'm in so many words stating that that particular era of web dev was notorious for the discussion of "is this software engineering or not".
It's just such a different concept/vibe/whatever compared to modern frontend development. Brad Frost is another notable person in this overall space who's written about the changes in the field over the years.
The AI-pilled view is that coding is knitting and AI is an automated loom.
But it is not quite the case. The hand-coded solution may be quicker than AI at reaching the business goal.
If there is an elegant, crafted solution that stays in prod for 10 years and just works, it is better than an initially quicker AI-coded solution that needs more maintenance and demands a team to maintain it.
If AI (and especially bad operators of AI) codes you a city tower when you need a shed, the tower works and looks great, but now you have $500k/yr in maintenance.
The difference is the loom is performing linear work.
Programming is famously non-linear: small teams build billion-dollar companies thanks to tech choices that avoid the need to scale up headcount.
Yes you need marketing, strategy, investment, sales etc. But on the engineering side, good choices mean big savings and scalability with few people.
The loom doesn't have these choices. There is no "make a billion t-shirts a day" setting for a well-configured loom.
Now AI might end up on either side of this. It may be too sloppy to compete with very smart engineers, or it may become so good that, like chess, no one can beat it. At that point, let it do everything and run the company.
Anything that can be automated can be automated poorly, indeed. But while it has been proven that textile manufacturing can be automated well (or at least better than a hand weaver ever could), the jury is still out on whether programming can be sufficiently automated at all. Even if it can be, it's also unclear whether the current LLM strategy will be enough or whether we'll have another 30-year AI winter before something better comes along.
The difference is that one can make good cloth with a loom using less effort than before. With AI one has to choose between less effort, or good quality. You can't get both.
I get the sentiment, but "I don’t like magic" feels like a luxury belief.
Electricity is magic. TCP is magic. Browsers are hall-of-mirrors magic. You’ll never understand 1% of what Chromium does, and yet we all ship code on top of it every day without reading the source.
Drawing the line at React or LLMs feels arbitrary. The world keeps moving up the abstraction ladder because that's how progress works; we stand on layers we don't fully understand so we can build the next ones. And yes, LLM outputs are probabilistic, but that's how random CSS rendering bugs felt to me before React took care of them.
The cost isn’t magic; the cost is using magic you don’t document or operationalize.
I took classes on how hardware works with software, and I still am blown away when I really think deeply about how on earth we can make a game render something at 200fps, even if I can derive how it should work.
When everything is magic, I think we need a new definition of magic, or maybe a new term to encapsulate what's being described here.
The key feature of magic is that it breaks the normal rules of the universe as you're meant to understand it. Encapsulation or abstraction therefore isn't, on its own, magical. Magic variables are magic because they break the rules of how variables normally work. Functional components/hooks are magic because they're a freaky DSL written in JS syntax where your code makes absolutely no sense taken as regular JS. Type-hint- and doctest-based programming in Python is super magical because type hints aren't supposed to affect behavior.
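A quick sketch of the hooks case, since it's the clearest example: the snippet below is syntactically ordinary JavaScript, but it's broken React, because hook state is matched up by call order across renders:

```tsx
import { useState } from "react";

function Profile({ showBio }: { showBio: boolean }) {
  const [name] = useState("Ada");
  if (showBio) {
    // Perfectly legal JS, but it violates the Rules of Hooks: when
    // showBio flips between renders, the call order changes and React's
    // internal state slots no longer line up with the variables here.
    // (bio is unused; the point is the conditional hook call itself.)
    const [bio] = useState("(no bio yet)");
  }
  return <span>{name}</span>;
}
```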
If you have this attitude I hope you write everything in assembly. Except assembly is compiled into micro-ops, so hopefully you avoid that by using an 8080 (according to a quick search, the last Intel CPU to not have micro-ops.)
In other words, why is one particular abstraction (e.g. JavaScript, or the web browser) OK, but another abstraction (e.g. React) not? This attitude doesn't make sense to me.
Did someone ask about Intel processor history? :-) The Intel 8080 (1974) didn't use microcode, but there were many later processors that didn't use microcode either. For instance, the 8085 (1976). Intel's microcontrollers, such as the 8051 (1980), didn't use microcode either. The RISC i860 (1989) didn't use microcode (I assume). The completely unrelated i960 (1988) didn't use microcode in the base version, but the floating-point version used microcode for the math, and the bonkers MX version used microcode to implement objects, capabilities, and garbage collection. The RISC StrongARM (1997) presumably didn't use microcode.
As far as x86, the 8086 (1978) through the Pentium (1993) used microcode. The Pentium Pro (1995) introduced an out-of-order, speculative architecture with micro-ops instead of microcode. Micro-ops are kind of like microcode, but different. With microcode, the CPU executes an instruction by sequentially running a microcode routine, made up of strange micro-instructions. With micro-ops, an instruction is broken up into "RISC-like" micro-ops, which are tossed into the out-of-order engine, which runs the micro-ops in whatever order it wants, sorting things out at the end so you get the right answer. Thus, micro-ops provide a whole new layer of abstraction, since you don't know what the processor is doing.
My personal view is that if you're running C code on a non-superscalar processor, the abstractions are fairly transparent; the CPU is doing what you tell it to. But once you get to C++ or a processor with speculative execution, one loses sight of what's really going on under the abstractions.
The only reason why you regard JavaScript as “fundamental” is that it’s built into the browser. Sure, you can draw that line, but at least acknowledge that there’s many places to draw the line.
I’d rather make comparative statements, like “JavaScript is more fundamental than React,” which is obviously true. And then we can all just find the level of abstraction that works for us, instead of fighting over what technology is “fundamental.”
Sure you can, why can't you? Even if it's deprecated in 20 years, you can still run it and use it, fork it even to expand upon it, because it's still JS at the end of the day, which based on your earlier statement you can code for life with.
A good abstraction relieves you of concern for the particulars it abstracts away. A bad abstraction hides the particulars until the worst possible moment, at which point everything spills out in a messy heap and you have to confront all the details. Bad abstractions existed long before React and long before LLMs.
Are you seriously saying that you can't understand the concept of different abstractions having different levels of usefulness? That's the law of averages taken to cosmic proportions.
If this is true, why have more than one abstraction?
Is a compiler magic? Did they come from an electronic heaven? There are plenty of books, papers, courses, ... that explain how compilers work. When people talk about "magic", it usually means choosing a complex solution over a simple one, but with an abstraction that is ill-fitted. Then they use words like user-friendly, easy to install with curl|bash, etc. to lure us into using it.
So you don't like compilers? Or do you really fully understand how they work? How they transform your logic and your asynchronous code into machine code, etc.?
Yeah, the pervasiveness of this analogy is annoying because it's wrong (a compiler is deterministic and tends to be a single point of trust, rather than a crowdsourced package manager or a fuzzy machine-learning model trained on a dubiously curated sampling of what is often the entire internet), but it's hilarious because it's a bunch of programmers telling on themselves. You can know, at least at a high level of abstraction, what a compiler is doing with some basic googling, and a deeper understanding is a fairly common requirement in computer science education at the undergrad level.
Don't get me wrong, I don't think you need (or should need) a degree to program, but if your standard for which abstractions to trust is "all of them; it's perfectly fine to use a bunch of random stuff from anywhere without the first clue how it works or who made it", then I don't trust you to build stuff for me.
I think you're mistaken on that. Maybe me and the engineers I know are below average on this, but even our combined knowledge of the kinds of things _real_ compilers get up to probably only scratches the surface. Don't get me wrong, I know what compilers do _in principle_. Hell, I've even built a toy compiler or two. But the compilers I use for work? I just trust that they know what they're doing.
I'd wager a lot of money that the huge majority of software engineers are not aware of almost any transformations that an optimizing compiler does. Especially after decades of growth in languages where most of the optimization is done in JIT rather than a traditional compilation process.
The big thing here is that the transformations maintain the clearly and rigorously defined semantics such that even if an engineer can't say precisely what code is being emitted, they can say with total confidence what the output of that code will be.
> the huge majority of software engineers are not aware of almost any transformations that an optimizing compiler does
They may not, but they can be. Buy a book like "Engineering a Compiler", familiarize yourself with the optimization chapters, study some papers and the compiler source code (most are OSS). Optimization techniques are not spells locked in a cave under a mountain waiting for the chosen one.
We can always verify the compiler that way, but it's costly. Instead, we trust the developers, just like we trust that the restaurant's chefs are not poisoning our food.
They can't! They can fairly safely assume that the binary corresponds correctly to the C++ they've written, but they can't actually claim anything about the output other than "it compiles".
Not in any great detail. Gold vs. ld isn't something I bet most programmers know rigorously, and that's fine! Compilers aren't deterministic, but we don't care, because they're deterministic enough. Debian started a reproducible-builds project in 2013 and, thirteen years later, we can maybe have that happen if you set everything up juuuuuust right.
They also realize that adding two integers in a higher level language could look quite different when compiled depending on the target hardware, but they still understand what is happening. Contrast that with your average llm user asking it to write a parser or http client from scratch. They have no idea how either of those things work nor do they have any chance at all of constructing one on their own.
Sure, obviously, we will not understand every single little thing down to the tiniest atoms of our universe. There are philosophical assumptions underlying everything, and you can question them (quite validly!) if you so please.
However, there are plenty of intermediate mental models (or explicit contracts, like assembly, ELF, etc.) to open up, both in "engineering" land and "theory" land, if you so choose.
Part of good engineering is also deciding exactly where the boundary between "don't cares" and "cares" lies, and how you allow people to easily navigate the abstraction hierarchy.
That is my impression of what people mean when they don't like "magic".
> Then, when it fails [...], you can either poke it in the right ways or change your program in the right ways so that it works for you again. This is a horrible way to program; it’s all alchemy and guesswork and you need to become deeply specialized about the nuances of a single [...] implementation
In that post, the blanks reference a compiler's autovectorizer. But you know what they could also reference? An aggressively opaque and undocumented, very complex CPU or GPU microarchitecture. (Cf. https://purplesyringa.moe/blog/why-performance-optimization-....)
I'm not sure this is a useful way to approach "magic". I don't think I can build a production compiler or linker. It's fair to say that I don't fully understand them either. Yet, I don't need a "full" understanding to do useful things with them and contribute back upstream.
LLMs are vastly more complicated and unlike compilers we didn't get a long, slow ramp-up in complexity, but it seems possible we'll eventually develop better intuition and rules of thumb to separate appropriate usage from inappropriate.
The advantage of frameworks is that there are about 20-ish security/critical usage considerations, of which you will remember about 5. If you don't use a framework, you are so much more likely to get screwed. You should use a framework when there's just shit you don't think of that could bite you in the ass[0]. For everything else, use libraries.
[0] This includes, for example, int main(), which is a hook for a framework. C does a bunch of stuff in __start (e.g. on Linux; I don't know what the entry point is in other languages) that you honestly don't want to write every. single. time. for every single OS.
> And so now we have these “magic words” in our codebases. Spells, essentially. Spells that work sometimes. Spells that we cast with no practical way to measure their effectiveness. They are prayers as much as they are instructions.
Autovectorization is not a programming model. This still rings true day after day.
I've used React on projects and understand its usefulness, but also React has killed my love of frontend development. And now that everyone is using it to build huge, clunky SPAs instead of normal websites that just work, React has all but killed my love of using the web, too.
If you are the only person who ever touches your code, fine, otherwise I despise this attitude and would insta-reject any candidate who said this. In a team setting, "I don't like magic" and "I don't want to learn a framework" means: "I want you to learn my bespoke framework I'm inevitably going to write."
Predicated upon the definition of "magic" provided in the article: What is it, if anything, about magic that draws people to it? Is there a process wherein people build tolerance and acceptance to opaque abstractions through learning? Or, is it acceptance that "this is the way things are done", upheld by cargo cult development, tutorials, examples, and the like, for the sake of commercial expediency? I can certainly understand that seldom is time afforded to building a deep understanding of the intent, purpose, and effect of magic abstractions under such conditions.
Granted, there are limits to how deep one should need to go in understanding their ecosystem of abstractions to produce meaningful work on a viable timescale. What effect does it have on the trade to, on the other hand, have no limit to the upward growth of the stack of tomes of magical frameworks and abstractions?
> What is it, if anything, about magic that draws people to it?
Simple: if it's magic, you don't have to do the hard work of understanding how it works in order to use it. Just use the right incantation and you're done. Sounds great as long as you don't think about the fact that not understanding how it works is actually a bug, not a feature.
Or it's just a specialization choice. Taxi drivers don't care how a car works; they hire a mechanic for that. Doctors don't care how a CAT scan works; they just care that it provides the data they need in a useful format.
I like the definition of magic I learned from Penn Jillette, (paraphrased): magic is just someone spending way more resources to produce the result than you expected.
This analogy baffles me. I don't think anybody here is making the argument that we must know how all of our tools work at an infinitesimally fundamental level. Rather, I think software is an endless playground and refuge for people who like to make their own flavours of magic for the sake of magic.
I feel like I'm responding more to the op. Maybe a more concrete example, there are several hit games, Undertale is one I know personally, where the creator is an artist who learned just enough programming in a relatively high level language to ship a hit and beloved game. They didn't need to know the details of how graphics get put on the screen, nor did they need to learn memory management or bytes and bits.
> I don’t like using code that I haven’t written and understood myself.
Maybe it's true for the author but it's not true for lots of productive people in every field and there's plenty of examples of excellence operating at a higher level.
> Sounds great as long as you don't think about the fact that not understanding how it works is actually a bug, not a feature.
That's such a wrong way of thinking. There is simply a limit on how much a single person can know and understand. You have to specialize otherwise you won't make any progress. Not having to understand how everything works is a feature, not a bug.
You not having to know the chemical structure of gasoline in order to drive to work in the morning is a good thing.
But having to know how a specific ORM composes queries targeting a specific database backend is where the magic falls apart; I would rather go without than deal with such pitfalls. If I were to hazard a guess, things like this are where the author and I are aligned.
> having to know how a specific ORM composes queries targeting a specific database backend is where the magic falls apart
I've never found this to be a particular problem. Most ORMs are actually quite predictable. I've seen how my ORM constructs queries for my database, and it's pretty ugly, but it's actually also totally fine. I've never really gained any insight that way.
But the sheer amount of time and effort I've saved by using an ORM to do the same boring load/save pattern over and over is immeasurable. I can't even imagine going back and doing that manually -- what a waste of time, effort, and experience that would be.
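To make that concrete, here is the shape of the load/save pattern against a deliberately hypothetical repository interface (real ORMs such as Prisma or TypeORM differ in detail, but the ergonomics are similar):

```ts
// Hypothetical ORM-style interface, for illustration only.
interface Repo<T> {
  findById(id: string): Promise<T | null>;
  save(entity: T): Promise<T>;
}

interface User {
  id: string;
  email: string;
}

// One line to load, one line to persist -- versus hand-writing the
// SELECT, the row-to-object mapping, the UPDATE, and the param binding.
async function changeEmail(users: Repo<User>, id: string, email: string) {
  const user = await users.findById(id);
  if (!user) throw new Error(`user ${id} not found`);
  user.email = email;
  return users.save(user);
}
```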
I know magic has a nice Arthur C. Clarke ring to it, but I think arguing about magic obscures the actual argument.
It's about layers of abstraction, the need to understand them, modify them, know what is leaking etc.
I think people sometimes substitute "magic" when they mean "I suddenly need to learn a lower layer I assumed was much less complex". I don't think anyone is calling the Linux kernel magic. Everyone assumes it's complex.
Another use of "magic" is when you find yourself debugging a lower layer because the abstraction breaks in some way. If it's highly abstracted and the inner loop gives you few starting points ( while (???) pickupWorkFromAnyWhere() )). It can feel kafkaesque.
I sleep just fine not knowing how much software I use exactly works. It's the layers closest to application code that I wish were more friendly to the casual debugger.
To me, it's much less of an issue when it works, obviously, but far more of a headache when I need to research the "magic" in order to make something work which would be fairly trivially implemented with fewer layers of abstraction.
I think it's "this is the way things are done in order to achieve X". Where people don't question neither whether this is the only way to achieve X, nor whether they do really care about X in the first place.
It seems common with regard to dependency injection frameworks. Do you need them for your code to be testable? No, even if it helps. Do you need them for your code to be modular? You don't, and do you really need modularity in your project? Reusability? Loose coupling?
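For what it's worth, plain constructor injection already gets you the testability without any framework; a minimal sketch with made-up names:

```ts
// The dependency is just an interface...
interface Clock {
  now(): Date;
}

// ...and injection is just a constructor parameter.
class InvoiceService {
  constructor(private clock: Clock) {}

  stamp(): string {
    return `issued at ${this.clock.now().toISOString()}`;
  }
}

// Production wiring: a few explicit lines instead of container config.
const service = new InvoiceService({ now: () => new Date() });

// Test wiring: hand in a fake -- no container, no reflection, no magic.
const frozen = new InvoiceService({ now: () => new Date(0) });
console.log(service.stamp(), frozen.stamp());
```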
This person's distinction between "library" and "framework" is frankly insane.
React, which is just functions to make DOM trees and render them, is a framework? There is a reason hundreds of actual frameworks exist to impose structure on using those functions.
At this point, he should stop using any high-level language! Java/Python are just big frameworks calling his bytecode, what magical frameworks!
I also don't like magic, but React is the wrong example of magic in this case. It's an abstraction layer for UI, and one that is pretty simple when you think about it conceptually. The complexity comes from third-party libraries that build on top of it but propose complex machinery instead of simple machinery. Then you have a culture of complexity around a simple technology.
But it does seem that the culture of complexity is more pervasive lately. Things that could have been a simple gist or a config change become a whole program that pulls in tens of dependencies from who knows whom.
> I’ve always avoided client-side React because of its direct harm to end users (over-engineered bloated sites that take way longer to load than they need to).
A couple of megabytes of JavaScript is not the "big bloated" application in 2026 that it was in 1990.
Most of us have phones in our pockets capable of 500Mbps.
The payload of a single-page app is trivial compared to the bandwidth available to our devices.
I'd much rather optimise for engineer ergonomics than shave a couple of milliseconds off the initial page load.
React + ReactDOM adds ~50kb to a production bundle, not even close to a couple of mbs. React with any popular routing library also makes it trivial to lazy load js per route, so even with a huge application your initial js payload stays small. I ship React apps with a total prod bundle size of ~5mb, but on initial load only require ~100kb.
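The per-route splitting described here is the standard React.lazy pattern; a minimal sketch (the module path is hypothetical):

```tsx
import { lazy, Suspense } from "react";

// The dynamic import becomes its own chunk at build time, so this
// route's code is only fetched when the component first renders.
const SettingsPage = lazy(() => import("./SettingsPage"));

function App() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <SettingsPage />
    </Suspense>
  );
}
```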
The idea that React is inherently slow is totally ignorant. I'm sympathetic to the argument that many apps built with React are slow (though I've not seen data to back this up), or that you as a developer don't enjoy writing React, but it's a perfectly fine choice for writing performant web UI if you're even remotely competent at frontend development.
This reads like a transcript of a therapy session. He never gives any real reasons. It's mostly a collection of assertions. This guy must never have worked on anything substantial. He also must underestimate the difficulty of writing software as well as his reliance on the work of others.
> I don’t like using code that I haven’t written and understood myself.
Why stop with code? Why not refine beach sand to grow your own silicon crystal to make your own processor wafers?
Division of labor is unavoidable. An individual human being cannot accomplish all that much.
> If you’re not writing in binary, you don’t get to complain about an extra layer of abstraction making you uncomfortable.
This already demonstrates a common misconception in the field. The physical computer is incidental to computer science and software engineering per se. It is an important incidental tool, but conceptually, it is incidental. Binary is not some "base reality" for computation, nor do physical computers even realize binary in any objective sense. Abstractions are not over something "lower level" and "more real". They are the language of the domain, and we may simulate them using other languages. In this case, physical computer architectures provide assembly languages as languages in which we may simulate our abstractions.
Heck, even physical hardware like "processors" are abstractions; objectively, you cannot really say that a particular physical unit is objectively a processor. The physical unit simulates a processor model, its operations correspond to an abstract model, but it is not identical with the model.
> My control freakery is not typical. It’s also not a very commercial or pragmatic attitude.
No kidding. It's irrational. It's one thing to wish to implement some range of technology yourself to get a better understanding of the governing principles, but it's another thing to suffer from a weird compulsion to want to implement everything yourself in practice...which he obviously isn't doing.
> Abstractions often really do speed up production, but you pay the price in maintenance later on.
What? I don't know what this means. Good abstractions allow us to better maintain code. Maintaining something that hasn't been structured into appropriate abstractions is a nightmare.
>> Abstractions often really do speed up production, but you pay the price in maintenance later on.
> What? I don't know what this means. Good abstractions allow us to better maintain code. Maintaining something that hasn't been structured into appropriate abstractions is a nightmare.
100% agree with this. Name it well, maintain it in one place ... profit.
It's the not-abstracting-up-front that can catch you: the countless times I have been asked to add feature X, but told that it is a one-off/PoC. Which sometimes even means it might not get the full TDD/IoC/feature-flag treatment (which aren't always available depending upon the client's stack).
Then, months later, I get asked to create an entire application or feature set on top of that. Abstracting that one-off up into a method/function/class tags and bags it: it is now named and better documented, can be visible in the IDE, called from anywhere, and looped over if need be.
There is obviously a limit to where the abstraction juice isn't worth the squeeze, but otherwise, it just adds superpowers as time goes on.
> I get that. But I still draw a line. When it comes to front-end development, that line is for me to stay as close as I can to raw HTML, CSS, and JavaScript. After all, that’s what users are going to get in their browsers.
No it’s not. They will get shown a collection of pixels, a bunch of which will occupy coordinates (in terms of an abstraction that holds the following promise) such that if the mouse cursor (which is yet another abstraction) matches those coordinates, a routine derived from a script language (give me an A!) will be executed mutating the DOM (give me a B!) which is built on top of more abstractions than it would take to give me the remaining S.T.R.A.C.T.I.O.N. three times over. Three might be incorrect, just trying to abstract away so that I don’t end up dumping every book on computers in this comment.
Ignorance at a not so fine level. Reads like “I’ve established myself confidently in the R.A.C. band, therefore anything that comes after is yucky yucky”.