
Programming is a necessary but not sufficient condition for software products to exist. So while the programming has to be good, so too do many other things: product vision, product management, project management. And there still needs to be feedback between all of the above, so that engineering isn't implementing a misunderstood version of the product and product isn't asking for 5 years and a PhD research team. And on and on and on. Typing the code is like 2-10% of actually ending up with a software product, and it's more toward the 2% for a software business.

So while AI made coding maybe 110% faster, it has also made literally every other person in the process lose their gd minds, and now they want to break or skip everything else in the process to just shit out code faster.


Going faster only works WHEN you know EXACTLY (or close to it) what you want.

Going faster when experimenting? Nah, you actually need a mix of slow and fast, and mostly slow stuff up-front.

There's a fundamental misunderstanding of how people actually do stuff, imo - it's akin to force-fitting a square peg into a round hole. I'm sure many are hoping it's just a "your organisation is designed wrong" problem. I doubt it, though.


I meant 10% faster btw, typo

I don't think it's all that bad. There's definitely vibe coding that is "copy paste / throw away" programming on ultra steroids. But after vibe coding two products and then finding them essentially impossible to get to a quality bar I considered ready to launch, I've been working on a more measured approach that leverages AI in a way that simply speeds up traditional programming. I use it to save tons of time on "why is pylance mad about X", "X works from the docs example but my slightly modified X gives error Y", "how do I make a toggle switch in CSS and HTML", "how am I supposed to do Python context managers in 2026 (I didn't know about the generator wrapper thing)" - all the bullshit that constantly slows you down but needs to be right. AI is great at helping you kickstart and then keeping you unblocked.
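(For anyone curious, the "generator wrapper thing" referenced here is presumably `contextlib.contextmanager` - a minimal sketch under that assumption, not the commenter's own code:)

```python
# A minimal sketch of contextlib.contextmanager, likely the "generator
# wrapper thing" mentioned above: a generator function becomes a context
# manager without writing a class with __enter__/__exit__ methods.
from contextlib import contextmanager

events = []  # recorded so the setup/teardown ordering is visible

@contextmanager
def managed_resource(name):
    events.append(f"acquire {name}")      # runs on entering the with-block
    try:
        yield name                        # the value bound by "as"
    finally:
        events.append(f"release {name}")  # runs even if the block raises

with managed_resource("db") as r:
    events.append(f"use {r}")

# events is now ["acquire db", "use db", "release db"]
```

Everything before the `yield` acts as `__enter__`, everything in the `finally` as `__exit__`, which is why cleanup still happens on exceptions.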

I've been using Gemini chat for this, and specifically only giving it my code via copy paste. This sounds Luddite, but it's actually been pretty interesting. I can show it my couple "core" library files and then ask it to do the next thing. I can inspect the output and retool it to my satisfaction, then slot it into my program, or use it as an example to then hand-code it.

This very intentional "me being the bridge" between AI and the code has helped so much in getting speed out of AI but then not letting it go insane and write a ton of slop.

And not to toot my own horn too much, but I think AI accelerates people more the wider their expertise is, even if it's not incredibly deep. E.g. I know enough CSS to spot slop, correct mistakes, and verify the output. But I HATE writing CSS. So the AI and I pair really well there, and my UIs look way better than they ever have.


Pure 'vibe coding' is essentially technical 'tittytainment'. Using AI for the horizontal spread while you enforce vertical architectural depth is true deep work.

This is great - something about programming has always felt adjacent to esotericism and the occult to me. Serial Experiments Lain is kind of in this vein too.

Luckily electromagnetism is the great equalizer. I'm imagining guerrilla warfare involving giant (in terms of GWh stored) jerry-rigged capacitors driving electromagnets that are lobbed into places that would be extremely unappreciative surprise recipients of magnetic fields with flux densities measured in full teslas.

The point of Wayland, though, is that back then 13-year-old you would get an application that "works", but to support myriad things (like HiDPI) you'd have to DIY it. Whereas now, sure, a 13-year-old perhaps won't write directly to Wayland's APIs, but you'll use a library and have a much more globally usable result. And honestly probably have a better time - less effort for the same result, and a more maintainable project in the long run.

HiDPI has always been perfectly supported by X11.

The only problem that has existed is that originally there was a single DPI value, not a different DPI value for each monitor.

This has never created any problem for people using multiple monitors with the same resolution - only for people using multiple monitors with different resolutions, who might not have liked the changes in window size when moving a window from one monitor to another.

That was indeed a problem, but it affected a rather niche use case, and it was also trivial to solve without any change in the X11 design, by just making DPI a per-monitor variable, which was done long ago.

So criticizing X11 about a supposed problem with HiDPI is incorrect. I have used only multiple 4k monitors with my PCs, with X11, for more than a dozen years, and I never had any problem with HiDPI, with the exception of many Java programs written by morons, which ignore the system settings and which also do not allow the user to change the font used by them. I do not know what the problem with the Java programmers is, but I have never encountered programs with such behavior except those written in Java. Moreover, Java programs are also the only ones that had problems with monitors using 10 bits per color component.

While X11 itself never had problems with supporting HiDPI, at least not in the XFCE that I am using, I have heard that other desktop environments created problems with HiDPI that have nothing to do with X11, by not exposing the X11 DPI settings but providing instead some "window scaling" setting. I do not know how that is implemented, but there is a good chance it is implemented in a wrong way, judging from the complaints I have seen.

I cannot imagine how one could correctly use a "window scaling" factor, because the font rendering program must know the true DPI value when rendering, for instance, a 12-point font. If rendering is done at a wrong DPI and then the image is scaled, the result is garbage. So in that case it would not be surprising that people claimed HiDPI works badly in X11, when in fact it was GNOME or whatever desktop environment was used that was guilty of bad support, not X11. I never had to fight with those desktop environments, but I assume that even those would have worked correctly with HiDPI when using xrandr to configure X11 instead of using the settings of the desktop environment.


There is nothing "niche" about plugging in a modern (e.g. made within last 5 years) laptop into an external display.

These kinds of posts just show how disconnected some of y'all are from what most Linux desktop users nowadays actually need from the desktop platform.


I always plug my laptop into one or two external displays.

Even without configuring distinct DPIs per monitor, that was not a problem for me, because on the small screen of the laptop I kept only some less important applications, like the e-mail program, while working on the bigger external displays; so I had no reason to move windows between the laptop screen and the external displays.

But like I said, setting a different DPI value for each monitor was added to X11 many years ago; I do not remember how many.

I do not see why one would want to move windows between the external displays and the laptop screen when external displays are connected, so I consider moving windows between small screens and big screens a niche use case. I agree with you that having big screens and small screens simultaneously is not niche; I was not referring to that.

Without a per-screen DPI value you cannot control the ratio between the sizes of a window when it is moved between the big screen and the small screen. But even when you control the ratio, moving windows between screens of different sizes does not work well, because you must choose some compromise: if you keep the same physical size, some windows from the big screen will not fit on the small screen, and if you make the windows occupy the same fraction of the screen size, they will change their sizes during moving and be more difficult to use on the small screen.
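(A toy arithmetic sketch of that compromise - the DPI values and window width are made up for illustration:)

```python
# Hypothetical setup: a 96 DPI external monitor and a 192 DPI laptop panel.
def width_px(physical_inches, dpi):
    return round(physical_inches * dpi)

external_dpi, laptop_dpi = 96, 192
inches = 800 / external_dpi   # an 800 px window is ~8.33 inches wide externally

# Option 1: keep the same physical size when moving to the laptop panel.
# The window now needs 1600 px of width, which may not fit the small screen.
same_physical = width_px(inches, laptop_dpi)

# Option 2: keep the same pixel size (800 px). The window then shrinks to
# half its physical width on the HiDPI panel - the other half of the tradeoff.
```

Either way something gives: pixel dimensions or physical dimensions, which is the compromise described above.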

But like I have said, this no longer matters, as the problem has been solved even for this niche use case. I do not even remember whether this problem still existed by the time Wayland became usable.


Asymmetric laptop+monitor setups are very popular; people buy stands to prop their laptop up next to their monitor and just use both as normal displays. When I only had one monitor at my desk, I did this a lot, and I see people at work propping up their MBP next to two other monitors and using all 3. The idea that people wouldn't regularly go back and forth between both screens is sadly completely wrong.

You don't even need Fedora - clean Arch install, install vim, GNOME, and Firefox, and boom, your computer now just works.

Hyprland's been making waves

A commodity, yes, but it could be wrapped up to work very nicely with the latest and greatest in Python tooling. Remember, the only 2 ways to make money are bundling and unbundling. This seems like a pretty easy bundling story.

Juggalos, bronies, 9th doctor fans, billionaires, royals (baseball team), and royals (landed nobility)?

In _Inside Job_, it was Juggalos, the Illuminati, the Catholic Church, Cognito Inc [the main feature of the show, kind of the Deep State], the Atlanteans, and the Reptoids.

It really is past time for another expansion set for Illuminati.

This is absurdly misanthropic and dehumanizing.

I firmly believe that every single person on this entire planet has a depth to them that far, far exceeds anything an LLM could even begin to approximate. I'm sorry you're in a position that you can't see that at all - that each and every one of them feel happiness and sadness and love and hate and fear and rage and inspiration and passion and are utterly human. I hope you see it someday.


I agree with you: every living being is beautiful in their own way, and good in their own way. But we are not very well equipped to follow the rules of logic. The most capable of us can follow a very limited number of logical threads, with very limited steps. I don't consider it dehumanizing, because attaining true knowledge is not necessarily natural for the human apparatus. Given enough strain and effort, we can only dabble in the elevated grounds of information and logic. Most people cannot, and that is okay. I am not comparing LLMs to humans in a broad sense, but specifically on reasoning, LLMs are complete shit and so is the average human. I am not impressed by LLMs, if that was your impression, and I am not saying the average human is inferior or deserves suffering lol

Yes, somehow. I have been dealing with an awful lot of people who basically have what are theoretically logic degrees who suddenly just take LLMs at face value, or quote them to me like that actually means anything. People I formerly thought were sane.

I don't mean to put words in your mouth, but from what I've seen, in person but mostly online, the "problem" (and I put that in quotes because I don't even know what to call it... it seems deeper than a mere "problem") is that they quote them as if they are autonomous, sentient beings.

The problem is that LLM output looks like a human conversation. People believe it.

Which is more believable?

“The sky is filled with a downpour of squealing pigs. Would you like me to suggest the best type of umbrella?”

“Sky pigs squealing”


I am not sure I would even say "believe"; I would think of it more as short-circuiting our critical thinking. I think it taps into something at the core of our tribal instincts. It was famously present in even basic systems like ELIZA. And it's not just machines... the same tricks are used by conmen, politicians, and psychopaths - which is more negative than I intend. Even with good intentions and positive outcomes, I feel we need to remember that we drive it, not the other way around.

People just don't like to be played for fools. Perhaps us giving into this is progress? I'd give a big ol' "fuck you" to anyone who claims it is, but I'm also pretty old.

Some of this might depend on the source.

I’ve seen some people quote AI like you’re saying. However, when I preface something with “ChatGPT said…”, my intention is to convey to the listener that they should take it with a grain of salt, as it might be complete bullshit. I suppose I should consider who I’m talking to when I make that assumption.


It’s a slightly orthogonal problem to using the active voice of “XYZ says…” - it’s treating the text continuation engine as an “other” that may know better than they do, playing into sci-fi conceptions of AI having its personal positronic brain or whatever, having its own ideas and deciding to carve a horse out of driftwood.

It’s not quite anthropomorphizing either that’s the issue; we need a word for “treating it as though it were a machine consciousness that exists alongside humanity*”. How does cyborgropomorphizing sound?

   * and not merely a Markov chain running in Sam Altman’s closet

You might consider prefixing with 'ChatGPT claims…' as a clearer expression of uncertainty.

Surely the correct conclusion is to question the value/veracity of those degree issuing institutions and rituals?

And if you previously were unaware of the insanity and irrationality passing under the surface of such human activity, I guess it can come as a bit of a shock :)


>take llms at face value

It happened with science, politics, traditional media, history books, "good engineering practices" applied to IT, OOP, TDD, DDD, server-side rendering, containerization... Literally every bullshit idea shilled to the moon is accepted without second-guessing, and you would be without a job, in an asylum, for questioning 2 of them in a row.

Why is it different now? EVERYTHING is bullshit, only attention matters. And craftsmanship.


and ruthless efficiency

I don't think this has anything to do with sanity. This has to do with people seeking self-confirmation instead of disproof.

For pretty much everything there is a conspiracy theory out there claiming the opposite, and these types usually started out by searching the internet for someone else who believed the same thing they did at the time.

But, as we all know, this technique will eventually lead to overfitting. And that's what those types of people have done to themselves.

Well, and as lack of education is the weakness of democracy, there are a lot of interested parties out there that invest money in these types of conspiracy websites. Even more so after LLMs.

Whoever controls the news controls the perpetual present, where everything is independent of the forgotten history.

