Hacker News | lukax's comments

Combine that with CVE-2025-24010 and any website was able to read any file on developers' computers.

https://github.com/advisories/GHSA-vg6x-rcgg-rjx6


Looks nice but still a bit sad that Flutter is used instead of something native given that they don't need the app to be cross-platform.

Well, even Microsoft uses React Native for a lot of Windows-only apps.


The React Native reason is called C++/WinRT. The teams that internally rioted against C++/CX and came up with C++/WinRT (now in maintenance, go figure) never cared one second about the Visual Studio experience that everyone else was losing (using C++/CX gave a Delphi/VB-like experience, similar to C++ Builder).

Thus React Native was the only alternative those teams had left for some kind of nice GUI design experience alongside C++/WinRT when using WinAppSDK; think Qt with QML/C++.

Agree on the Flutter comment.


I'm glad I'm not alone in missing native apps. I get that it's easier to code a note-taking Electron app, but every time I look at a Linux terminal app written in JavaScript it makes me want to cry.

I know I'm beating a dead HN horse here, but how the hell have we normalized megabytes of embedded JavaScript in websites? LinkedIn loads 32 MB of JavaScript files, about half the RAM I had in my first computer. Jira loads 50 (!) MB of JavaScript, enough to include the uncompressed text of War and Peace 16 times, and about the size of a full installation of the entire Windows 95 operating system. GitLab's static landing page is 13 MB of JavaScript, larger than Twitter's.

What the hell are we doing here? Why can I spin up a 100 MHz, 8 MB RAM VM with a hard drive 1/16th the size of your entry-level MacBook's RAM and have apps open immediately? I understand some backsliding as things get more high-level, but a website loading 50 megabytes of JavaScript is, to me, a bright neon sign screaming "something has gone terribly wrong here". Obviously programs, websites and operating systems have become incredibly more complex, but your $200 A-series Samsung phone has about 8 cores at 2.2 GHz each. A $200 processor when Windows 95 was released had one core at 0.1 GHz, so the aggregate clock rate is now roughly 176x higher. Keep in mind this $200 includes a fully functioning computer in the palm of your hand. The actual CPU in a midrange phone like the Samsung A15 5G is the Dimensity 6100+, which costs all of $25.

There must be some sort of balance between the ease of prototyping, creating and deploying an application without bundling it with Electron, and not building websites that use tens of megabytes of a scripting language for seemingly nothing. Especially when we can see what a fast, usable website looks like: this very website, or the blogs of many of the people who post here, compared to Reddit or your average Medium or Substack blog.

How the hell do we fix this? The customers clearly don't want this bloat, and companies clearly shouldn't want it either: research indicates users exposed to a 200 millisecond load delay on Google performed 0.2-0.6% fewer searches, an effect that remained even after the artificial delay was removed, and this was replicated by Microsoft, Amazon and others. It's frequently claimed that Amazon found every 100 milliseconds of page load time cost them 1% in sales, though definitive attribution is hard to find. Programmers should no more want to build bloated websites than mechanics should want to install brake pads that squeal every time the driver brakes.

This got way longer than the two sentences I expected the post to be, so my apologies.

[1] https://tonsky.me/blog/js-bloat/ [2] Velocity and the Bottom Line, Shurman and Brutlag


> The customers clearly don't want this bloat, companies clearly shouldn't want it either

Both of these statements are false. If that were really the case, a competing company or dev could ship a native counterpart and just siphon off all the users. I've only seen this happen with CLI tools (e.g. esbuild, rollup, uv, etc.)


Most of the JS bloat comes from really aggressive analytics, error tracking, and A/B testing. Not many developers are willing, or given approval, to give up these features for smaller bundle sizes.


> The customers clearly don't want this bloat,

Citation needed. The customers clearly want it; for example, most programmers chose VS Code over a native app.


If there were a native VS Code alternative at feature parity, that might not be the case. That's apples and oranges.


Sublime Text? Sure, doesn't have the long tail of extensions, but surely most people don't need those. The biggest issue with ST being the fact that it costs money...


> The biggest issue with ST

The biggest issue is that it’s not open source.


And we don't want to pay for tools, while expecting to be paid ourselves, right?


Zed?


I did not choose VSCode; I only touch it because there are SDKs whose developers, being Electron fans, decided to support only VSCode.

Thus I begrudgingly use VSCode when forced to do so, otherwise I use the IDEs of each OS vendor.


Or you could use Soniox Real-time (supports 60 languages), which natively supports endpoint detection: the model is trained to figure out when a user's turn has ended. This generally works better than plain VAD.

https://soniox.com/docs/stt/rt/endpoint-detection

Soniox also wins the independent benchmarks done by Daily, the company behind Pipecat.

https://www.daily.co/blog/benchmarking-stt-for-voice-agents/

You can try a demo on the home page:

https://soniox.com/

Disclosure: I used to work for Soniox

Edit: I commented too soon. I only saw VAD and immediately thought of Soniox which was the first service to implement real time endpoint detection last year.


If you read the post, you'll see that I used Deepgram's Flux. It also does endpointing and is a higher-level abstraction than VAD.


Sorry, I commented too soon. Did you also try Soniox? Why did you decide to use Deepgram's Flux (English only)?


I didn't try Soniox, but I made a note to check it out! I chose Flux because I was already using Deepgram for STT and just happened to discover it when I was doing research. It would definitely be a good follow-up to try out all the different endpointing solutions to see what would shave off additional latency and feel most natural.

Another good follow-up would be to try PersonaPlex, Nvidia's new model that would completely replace this architecture with a single model that does everything:

https://research.nvidia.com/labs/adlr/personaplex/


I second Soniox as well, as a user. It really does do way better than Deepgram and others. If your app architecture is good enough then maybe replacing providers shouldn't be too hard.


I'm using them. What has it been like working there? I see they have some consumer products as well. I wonder how they achieve state of the art at such low prices relative to the competition.


Wow, XSS just waiting to happen.

  <h3>${this.getAttribute('title')}</h3>


It looks similar to Lit code, but it's not Lit, so yes, it is XSS waiting to happen all right. If it were Lit, it would be escaped: the template would start with html`, which evaluates to a TemplateResult, and the render() function only accepts a TemplateResult.
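A minimal sketch of the difference, in plain JavaScript with no Lit. The `escapeHtml` helper is hypothetical, just to illustrate the kind of escaping a tagged template like Lit's html`` applies to interpolated values:

```javascript
// Raw string interpolation: an attacker-controlled attribute value
// lands in the markup verbatim, so markup in the value becomes live
// HTML once the string is assigned to innerHTML.
const title = '<img src=x onerror=alert(1)>';
const unsafe = `<h3>${title}</h3>`;

// Hypothetical escaping helper, similar in spirit to what an
// HTML-aware template system does for interpolated values.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// Escaped interpolation: the payload is rendered as inert text.
const safe = `<h3>${escapeHtml(title)}</h3>`;
```

Here `unsafe` still contains the executable `<img onerror=...>` payload, while `safe` contains only `&lt;img ...&gt;` as display text.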


How? If the attribute is not trusted doesn’t that mean the dom is already compromised?


Never any mention of Soniox, and they are on the Pareto frontier [1]:

https://www.daily.co/blog/benchmarking-stt-for-voice-agents/


Also see how it compares to other providers:

https://soniox.com/compare


Not really in the PagedAttention kernels. Paged attention was integrated into FlashAttention so that the FlashAttention kernels can be used for both prefill and decode with a paged KV cache. The only paged-attention-specific kernels are for copying KV blocks (device to device, device to host, and host to device). At least for FA2 and FA3, vLLM maintained a fork of FlashAttention with paged attention patches.



Maybe AWS ParallelCluster, which is managed SLURM on AWS.


It's not that simple to safely parse HTTP request forms. Just look at the Go security releases related to form parsing (a new fix was released just today).

https://groups.google.com/g/golang-announce/search?q=form

Five fixes related to HTTP forms (url-encoded and multipart):

- Go 1.20.1 / 1.19.6: Multipart form parsing could consume excessive memory and disk (unbounded memory accounting and unlimited temp files)

- Go 1.20.3 / 1.19.8: Multipart form parsing could cause CPU and memory DoS due to undercounted memory usage and excessive allocations

- Go 1.20.3 / 1.19.8: HTTP and MIME header parsing could allocate far more memory than required from small inputs

- Go 1.22.1 / 1.21.8: Request.ParseMultipartForm did not properly limit memory usage when reading very long form lines, enabling memory exhaustion.

- Go 1.25.6 / 1.24.12: Request.ParseForm (URL-encoded forms) could allocate excessive memory when given very large numbers of key-value pairs.

Probably every HTTP server implementation in every language has similar vulnerabilities. And these are logic errors, not even memory safety bugs.
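The mitigation pattern behind most of those Go fixes is language-agnostic: put explicit caps on bytes and field counts before (or while) parsing. A hedged sketch in Node, with illustrative limits (the function name and the specific caps are my own, not from any stdlib):

```javascript
// Parse a url-encoded form body with explicit caps, so a hostile
// client can't force unbounded buffering. Limits are illustrative.
const MAX_BODY_BYTES = 1 << 20; // 1 MiB body cap
const MAX_FIELDS = 1000;        // cap on key-value pairs

function parseFormBounded(body) {
  if (Buffer.byteLength(body) > MAX_BODY_BYTES) {
    throw new Error('request body too large');
  }
  const params = new URLSearchParams(body);
  let count = 0;
  const form = {};
  for (const [key, value] of params) {
    // Reject pathological inputs with huge numbers of fields,
    // the shape of the Go 1.25.6 ParseForm issue above.
    if (++count > MAX_FIELDS) throw new Error('too many form fields');
    form[key] = value;
  }
  return form;
}
```

For example, `parseFormBounded('a=1&b=2')` returns `{ a: '1', b: '2' }`, while a body with thousands of `k=v` pairs is rejected before it can be fully materialized.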


I consider it a small win that those are _only_ resource-exhaustion attacks. Denial-of-service potential, to be sure, and something nice to avoid / put limits on.

However, I'd rather have that than a more dire consequence.

