Just browse their Facebook profile. It's evidence enough. If that's not enough, just like a couple of their posts and you'll witness the b.s. in their ads.
"As temperature approaches zero from the negative side, the model output will again be deterministic — but this time, the least likely tokens will be output."
I understand this as: a negative temperature far from zero is also quite random (just with a distribution that favors unlikely tokens).
Yep! Very large negative temperatures and very large positive temperatures have essentially the same distribution. This is clearer if you consider thermodynamic beta, where T = ±∞ corresponds to β = 0.
> Personally, I hate it; I don't like magic or black boxes.
So, no compilers for you either?
(To be fair: I'm not loving the whole vibe-coding thing. But I'm trying to approach this wave with an open mind, looking for the good arguments on both sides. This is not one of them.)
Accidentally non-deterministic compilers are fairly easy to get if you use sort algorithms and containers that aren't "stable". You can then get situations where OS page allocation and things like different filenames give different output. This is why "deterministic builds" weren't just the default.
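As a sketch of the failure mode and the usual fix (a hypothetical mini code emitter; the symbol names are invented), emitting straight from an unordered container ties the output to iteration order, while sorting first makes it reproducible:

```python
# Hypothetical fragment of a code generator. Emitting symbols straight
# from a set depends on hash/iteration order, which can differ across
# runs and machines; sorting first makes the output byte-for-byte stable.
symbols = {"init", "main", "helper", "cleanup"}

def emit(syms):
    # non-reproducible: set iteration order is an implementation detail
    return "\n".join(f".global {s}" for s in syms)

def emit_deterministic(syms):
    # reproducible: impose a total order before emitting
    return "\n".join(f".global {s}" for s in sorted(syms))
```

The same content comes out either way; only the deterministic version can be diffed or hashed meaningfully between builds.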
Actual randomness is used in FPGA and ASIC compilers which use simulated annealing for layout. Sometimes the tools let you set the seed.
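A toy version of a seeded annealing placer (a sketch, nothing like a real EDA tool: the cost is 1-D wirelength and the net list is invented) shows why exposing the seed makes runs reproducible:

```python
import math
import random

def anneal_placement(nets, n_cells, seed=42, steps=2000):
    """Toy simulated-annealing placer.

    nets: list of (cell_a, cell_b) connections.
    cost: total wirelength sum(|pos_a - pos_b|) on a 1-D row of slots.
    The move is swapping two cells; the explicit seed mirrors tools
    that let you pin the layout run.
    """
    rng = random.Random(seed)
    pos = list(range(n_cells))  # cell i sits at slot pos[i]

    def cost(p):
        return sum(abs(p[a] - p[b]) for a, b in nets)

    cur = cost(pos)
    t = float(n_cells)  # initial temperature
    for _ in range(steps):
        i, j = rng.randrange(n_cells), rng.randrange(n_cells)
        pos[i], pos[j] = pos[j], pos[i]
        new = cost(pos)
        # accept improvements always, uphill moves with Boltzmann probability
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
        else:
            pos[i], pos[j] = pos[j], pos[i]  # reject: undo the swap
        t = max(t * 0.995, 1e-6)  # cool down
    return pos, cur
```

Running it twice with the same seed gives an identical placement; with the seed hidden, every run would hand you a different layout.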
I think you're misunderstanding.
AI is not a black box, and neither is a compiler. We (as a species) know how they work, and what they do.
The 'black-boxes' are the theoretical systems non-technical users are building via 'vibe-coding'. When your LLM says we need to spin up an EC2 instance, users will spin one up. Is it configured? Why is it configured that way? Do you really need a VPS instead of a Pi? These are questions the users, who are building these systems, won't have answers to.
If there are cryptographically secure program obfuscation (in the sense of indistinguishability obfuscation) methods, and someone writes some program, applies the obfuscation method to it, publishes the result, deletes the original version of the program, and then dies, would you say that humanity "knows how the (obfuscated) program works, and what it does"? Assume that the obfuscation method is well understood.
When people do interpretability work on some NN, they often learn something. What is it that they learn, if not something about how the network works?
Of course, we (meaning humanity) understand the architecture of the NNs we make, and we understand the training methods.
Similarly, if we have the output of an indistinguishability obfuscation method applied to a program, we understand what the individual logic gates do, and we understand that the obfuscated program was a result of applying an indistinguishability obfuscation method to some other program (analogous to understanding the training methods).
So, like, yeah, there are definitely senses in which we understand some of "how it works", and some of "what it does", but I wouldn't say of the obfuscated program "We understand how it works and what it does.".
(It is apparently unknown whether there are any secure indistinguishability obfuscation methods, so maybe you believe that there are none, and in that case maybe you could argue that the hypothetical is impossible, and therefore the argument is unconvincing? I don't think that would make sense though, because I think the argument still works as a counterfactual even if there are no cryptographically secure indistinguishability obfuscation methods. [EDIT: Apparently it has in the last ~5 years been shown, under relatively standard cryptographic assumptions, that there are indistinguishability obfuscation methods after all.])
There are plenty of smart devices (including lightbulbs, motion sensors, and whatnot) that use Bluetooth, or protocols like Zigbee, that enable all kinds of functionality without a Wi-Fi password.
Well, if you break everything down to the lowest level of how the brain works, then so do humans. But I think there's a relevant higher level of abstraction in which it isn't -- it's probabilistic and as much intuition as anything else.
Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.
Perhaps you can explain your point in a different way?
Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.
You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.
Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.
> Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.
Not in a way that would make the non-computability problems of Turing machines apply.
> Perhaps you can explain your point in a different way?
An LLM is not a logic program finding the perfect solution to a problem; it's a statistical model for finding the next likely word. The model code does not solve a (let's say) NP problem to find the solution to a puzzle; the only thing it is doing is finding the next best possible word through statistical models built on top of neural networks.
This is why I think Gödel's theorem doesn't apply here: the LLM does not encode a strict and correct logical or mathematical system that could be incomplete.
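To make the distinction concrete (with a hypothetical toy bigram model, nowhere near a real LLM): next-token prediction just picks a statistically likely continuation from counts; no logical search or proof happens anywhere.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real LLM uses a neural network, not raw counts,
# but the point stands: the "model" only scores continuations.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def next_token(word):
    # pick the most frequent continuation; no theorem is proved here
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

next_token("the")  # → "cat" ("cat" follows "the" twice, "mat" once)
```

Whatever "reasoning" falls out of such a system is an emergent property of the statistics, not the execution of a formal deductive procedure, which is why incompleteness arguments about formal systems don't transfer directly.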
> Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.
I agree with you, though I had a different angle in mind.
> You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.
> Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.
This has been the real game changer for me.
Since the instant gratification is no longer so instant, I can go much longer stretches of time without checking my phone.
Also, since nowadays most people message and calls aren't so common (at least in my circles), there is no "harm".
But it would be silly to classify opioid use as gambling, which was the proposal. Figuring out the real issue, and banning that for kids, might well be a good idea. But to my mind, parents' thinking about what kids should be allowed to do is still very much unsettled, so it might be too soon.
https://www.state.gov/releases/office-of-the-spokesperson/20...
https://apnews.com/article/chile-united-states-china-visa-sa...