The main thing holding most people back from web-based IDEs is restricted filesystem and tooling integration, but cloud office suites are extremely popular. Google has excellent infrastructure for distributed build and test cycles built into Cider, to go along with its entirely remote version control system.
Best of luck on your web-based demos! Dropping people into a working dummy environment with a few tutorial prompts should really help conversions.
I have an XOR128-style mode and a byte-transpose/byte-split-like mode, but I should not claim that as a proper Chimp128 or Arrow/Parquet byte-stream-split comparison yet. I will add direct baselines for Chimp128 and Arrow/Parquet BSS+zstd to the harness.
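For anyone unfamiliar, here's a minimal sketch of the byte-stream-split idea; this is not the Arrow/Parquet implementation, and the function names are mine:

    import numpy as np

    def byte_stream_split(values):
        # Regroup so byte 0 of every float32 comes first, then byte 1, etc.
        # Homogeneous streams (exponent bytes together, low mantissa bytes
        # together) compress far better under zstd than interleaved floats.
        raw = values.astype("<f4").view(np.uint8).reshape(-1, 4)
        return raw.T.tobytes()

    def byte_stream_merge(data):
        streams = np.frombuffer(data, np.uint8).reshape(4, -1)
        return streams.T.copy().view("<f4").ravel()

    vals = np.linspace(0.0, 1.0, 8, dtype=np.float32)
    assert np.array_equal(byte_stream_merge(byte_stream_split(vals)), vals)

A real baseline would just call Arrow/Parquet's own BYTE_STREAM_SPLIT encoding rather than reimplementing it.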
Apple could easily support eGPUs if they wanted to, but they choose vertical integration over fragmentation or usefulness. It's the same as them not supporting OpenGL or Vulkan: they could, if they wanted to be a better gaming/porting target, but compatibility of any sort is not a priority.
Apple did recently approve drivers for both Nvidia and AMD, but not for gaming purposes.
Apple supported OpenGL plenty; it's just that the world moved on.
Apple created Metal shortly before Vulkan was created.
"They could support it if they wanted to" is almost a tautology.
Of course they could.
But then they have to support another thing.
They are on the hook when something goes wrong.
It's not that it's reserving power; rather, you hit some bottleneck on a 3070 Ti before running into thermal limits. It's likely limited by either tensor core saturation or RAM throughput. Running the workload with Nvidia's profiling tools should make the bottleneck obvious.
Generally the bottleneck is RAM throughput. Inference, in particular token generation on a single-user instance, is not all that computationally complex: you're doing some fairly simple calculations for each parameter, and the time is dominated by just transferring each parameter from RAM to the cores. A 31B dense model like Gemma 4 has to transfer all 31B parameters (at 16 bits per parameter for the full model, though on consumer hardware people generally run 4-8 bit quantizations) from RAM to the cores for every generated token; that's a lot of memory traffic.
Prompt processing or parallel token generation can do a bit more work per memory transfer, since you can use the same weights for a few different calculations in parallel. But even then, memory bandwidth is a huge factor.
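To put rough numbers on that, here's a back-of-envelope sketch (the function is mine, and the ~608 GB/s figure is the 3070 Ti's GDDR6X bandwidth, used purely for illustration since a 31B model wouldn't actually fit in its 8 GB of VRAM):

    def max_tokens_per_sec(params_b, bits_per_param, bandwidth_gb_s):
        # Upper bound on single-stream decode speed if every parameter
        # must cross the memory bus once per generated token.
        bytes_per_token = params_b * 1e9 * bits_per_param / 8
        return bandwidth_gb_s * 1e9 / bytes_per_token

    print(max_tokens_per_sec(31, 16, 608))  # ~9.8 tok/s at fp16
    print(max_tokens_per_sec(31, 4, 608))   # ~39 tok/s at 4-bit

Real decode speeds land below these bounds, since KV-cache reads and kernel overhead add traffic on top of the weights.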
They are bullshit machines because they do not have an internal mental model of truth the way a human does. The flagship models bullshit less, but their fundamental architecture keeps truth from constraining the output.
"Bullshit" is a human concept. LLMs do not work like the human brain, so to call their output "bullshit" is ascribing malice and intent that is simply not there. LLMs do not "think." But that does not mean they're not incredibly powerful and helpful in the right context.
I sort of agree. In this context "bullshit" means "speech intended to persuade without regard for truth", and while it's true that LLM output is generated without regard for truth, an LLM isn't an entity with the agency to persuade, although functionally that is what it can appear like.
I'm glad there are many teams running automated scans of PyPI and npm. It raises the bar for making a backdoor that can survive for any length of time.
That's a narrow fair-use exception. Many of these open game engines are effectively 1:1 decompilations of the original games, and it would be shocking if they were not effectively covered by the same copyright as the originals.
I don't think this has been tested in court, but the recent flood of Nintendo game decompilations is likely to change that.
Early BSDs (386BSD) were in the same situation with AT&T Unix, and after a few years of rewriting code under BSD licenses they were perfectly OK to ship, from NetBSD 0.9 to FreeBSD; OpenBSD was a NetBSD fork.
Current OpenTTD has had no original TTD code for decades now. I remember Solene@ from OpenBSD (now an ex-user) playing OpenTTD on macppc (a PowerPC G4) a few years ago, as she had an issue with mouse input.
Microsoft released a video that covers effectively all of the Xbox One security system, and it's referred to extensively in the talk. The specific methods of glitching don't require any insider knowledge.
They also told everyone they added more anti-glitching measures to later hardware revisions, which by process of elimination tells everyone they thought this was possible.
The whole initiative was a success when it gave them a year, an unqualified triumph when it gave them the whole generation; they really are not going to be too sad after 12 years.
Right, as Markus says: even gods can bleed. And he's right. Tony Chen's team did god-level work on the Xbox One security system, so whatever followed in the Xbox Series S is truly unknowable; I don't think there's even a tech talk on it. This is probably the most elite hacking talk I've ever watched. Everyone who worked on this stuff at MS can and obviously should be very proud of what it took, especially as this probably won't have any commercial impact on Xbox game devs or multiplayer.
Many engineers get paid a lot of money to write low-complexity code gluing things together and tweaking features according to customer requirements.
When the difficulty of a task is neatly encompassed in a 200-word ticket and the implementation lacks much engineering challenge, AI can pretty reliably write the code: mediocre code for mediocre challenges.
A huge fraction of the software economy runs on CRUD and some business logic. There just isn't much complexity inherent in any of the feature sets.
Complexity is not where the value to the business comes from. In fact, it's usually the opposite. Nobody wants to maintain slop, and whenever you dismiss simplicity you ignore all the heroic hard work done by those at the lower levels of indirection. This is what politics looks like when it finally gets its dirty hands on the tech industry, and it's probably been a long time coming.
As annoying as that is, we should celebrate a little that the people who understand all this most deeply are gaining real power now.
Yes, AI can write code (poorly), but the AI hype is now becoming pure hate against the people who sit in meetings quietly gathering their thoughts and distilling them down to the simple, almost poetic solutions that nobody but those doing the heads-down work actually cares about.
> A huge fraction of the software economy runs on CRUD and some business logic.
You vastly underestimate what CRUD means when applied that directly. You're right in some sense that "we have the technology", but we've had this technology for a very long time now. The business logic is pure gold. You dismiss it without realizing how many other thriving, well-established industries operate by doing simple things applied precisely.
Most businesses can, and many do, run efficiently out of shared spreadsheets. Choosing the processes well is the hard part; there's just not much computational complexity in the execution, nor more data than a single machine can easily process.
That's a false dilemma. If that's what you want, you absolutely can use the AI levers to get more time and less context switching, so you can focus more on the "simple and poetic solutions".