Hacker News
BriggyDwiggs42 · 8 months ago | on: We Can Just Measure Things
If you knew that the model never changed, it might be very helpful, but most of the big providers constantly mess with their models.
cwillu · 8 months ago
Even if you used a local copy of a model, it would still just be a semi-quantitative version of “everyone knows ‹thing-you-don't-have-a-grounded-argument-for›”.
layer8 · 8 months ago
Their performance also varies depending on load (concurrent users).
BriggyDwiggs42 · 8 months ago
Dear god does it really? That’s very funny.
wiseowise · 8 months ago
Why are you surprised? It’s a computational thing, after all.
BriggyDwiggs42 · 8 months ago
It’s not that crazy; it’s just that the serving architecture you’d need for that, routing between differently quantized models and so on, is impressive, all things considered.
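For illustration, a minimal sketch of the load-based routing between differently quantized replicas imagined above. Everything here is hypothetical: the tier names, load caps, and `pick_variant` function are assumptions for the sketch, not any provider's actual setup.

```python
# Hypothetical illustration only: route each request to a quantization
# tier based on the current number of concurrent users.
QUANT_TIERS = [
    ("fp16", 50),    # full-precision replica while load is light
    ("int8", 500),   # cheaper quantized replica under moderate load
    ("int4", None),  # fallback tier with no upper cap
]

def pick_variant(concurrent_users: int) -> str:
    """Return the quantization tier that serves the current load level."""
    for name, cap in QUANT_TIERS:
        if cap is None or concurrent_users <= cap:
            return name
    raise RuntimeError("unreachable: the last tier has no cap")

print(pick_variant(10))    # fp16
print(pick_variant(200))   # int8
print(pick_variant(5000))  # int4
```

Because the last tier's cap is `None`, every load level maps to some tier, so the router never rejects a request outright; it only degrades precision.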
layer8 · 8 months ago
The models are the same; it's the surrounding processing, like the number of "thinking" iterations, that is adjusted.
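A minimal sketch of that alternative: the weights stay fixed, and only a per-request "thinking" token budget is throttled as concurrent load rises. The function, thresholds, and budget numbers are illustrative assumptions, not a known provider implementation.

```python
# Hypothetical illustration only: same model at every load level, but the
# reasoning-token budget granted per request shrinks once load passes a
# soft cap. All constants here are made up for the sketch.

def thinking_budget(concurrent_users: int,
                    max_budget: int = 8192,
                    min_budget: int = 1024,
                    soft_cap: int = 100) -> int:
    """Return the reasoning-token budget for the current load level.

    At or below soft_cap concurrent users, the full budget is granted;
    above it, the budget shrinks inversely with load, never dropping
    below min_budget.
    """
    if concurrent_users <= soft_cap:
        return max_budget
    scaled = max_budget * soft_cap // concurrent_users
    return max(min_budget, scaled)

print(thinking_budget(10))    # light load: full budget, 8192
print(thinking_budget(400))   # heavy load: 8192 * 100 // 400 = 2048
print(thinking_budget(5000))  # extreme load: floored at 1024
```

This also matches the observation upthread that performance varies with concurrent users: outputs from the same weights get shorter and shallower as the budget tightens.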
BriggyDwiggs42 · 8 months ago
That only works for LRMs, no? Not for traditional LLM inference.