For you... But the results are different for different users.
For me Google shows the .net site first and the GitHub one second.
Asking ChatGPT 5.2 (Auto mode) to search for the nanoclaw site, it says the same: it links the .net site first and shows the GitHub repo as an optional page.
When I try to give it a hint by asking "are you sure?", it even hallucinates that the site is linked from the GitHub repo:
"Yes — nanoclaw.net is the official documentation/site for the NanoClaw project, in the sense that it’s the project’s published homepage and is directly linked from its canonical open-source repository. It describes the project, features, installation steps, and links to the source code on GitHub, which is the authoritative source for the project’s codebase."
ChatGPT 5.2 (Thinking mode) and Claude get it right on the first try: they answer with the official .dev page first, and Claude shows the .net site second as "another site covering the project".
Yeah, even if they could match human-level stereo depth perception with AI, why would they say "no" to superhuman lidar capabilities? Cost could be a somewhat acceptable answer if there weren't problems with the camera-only approach, but there are still examples of its silly failures.
And if I remember correctly, they also removed another superhuman sensor, the radar, in their newer models; in certain conditions it was capable of sensing multiple cars ahead by bouncing the signal under other cars.
It all depends on how cheap they can get.
And another interesting thought: what if you could stack them? For example, you have a base model module, then new ones come out that can work together with the old ones and expand their capabilities.
It sounds similar, but doesn't sound the same to me.
Also, how would you determine the allowed similarity? If we had such a measure, they could use it in voice-model training to disallow too much similarity to any single voice. But if we don't have an agreed-upon value, it's a subjective "sounds the same to me" rule, and that's hard to enforce.
Ok, they can say "don't train on my voice", but it's very likely that a blend of voices from an "allowed" set could still produce a voice very similar to his.
Yeah, LLMs have prompt, harness, and even random-seed variability, and it leaves you wondering whether a model could perform better with a different prompt or system instruction. Too bad most benchmarks don't report that variability, because it could reveal that a model only performs well when prompted in the style of its training data and doesn't generalize to unseen prompt styles. It could also explain some of the benchmark-vs-real-world usage gaps.
I remember some papers about earlier models showing around 15% variability across prompts, and with different tool use there are sometimes even bigger jumps. And if I remember correctly, reasoning models reduce some of this because many of the early prompting tricks, like "think step-by-step", "think carefully", and other "magic" phrases, are effectively baked in.
Another trick is to ask the model to rephrase the prompt in its own words, because that may produce a prompt that better aligns with its training prompts.
For sure the big model developers are aware of this and are constantly improving on it; I just don't see much discussion or numbers about it.
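Benchmarks could report this cheaply: run each task under several paraphrased prompts and publish the spread, not just the best score. A minimal sketch of what that reporting would look like (the prompt labels and accuracy numbers below are made up for illustration):

```python
import statistics

# Hypothetical per-prompt accuracies for the same model on the same
# benchmark, varying only the prompt phrasing (illustrative numbers).
scores_by_prompt = {
    "original wording":   0.78,
    "terse instruction":  0.66,
    "step-by-step style": 0.81,
    "rephrased by model": 0.79,
}

scores = list(scores_by_prompt.values())
mean = statistics.mean(scores)
spread = max(scores) - min(scores)

# Reporting mean, stdev, and min-max spread would show whether a
# headline number is robust or just a lucky prompt.
print(f"mean={mean:.2f} stdev={statistics.stdev(scores):.3f} spread={spread:.2f}")
```

A 0.15 spread like the one above would mean the "benchmark score" is mostly a property of the prompt, not the model.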
I haven't been able to find it again, but a few years ago I read a paper that found that certain prompts massively improved the performance of some LLMs on benchmarks. But the same prompt massively reduced the performance of some other LLMs. I assume this is still true, though perhaps not as dramatically as before.
With a high-speed camera, any vibrating reflective object, like a potato chip bag, can become a weak microphone if you have line of sight, even behind a soundproof window:
https://www.youtube.com/watch?v=FKXOucXB4a8
Yes, it's a controlled environment, not a noisy real-world one.
But this is more stealthy than a camera and could potentially work with non-line-of-sight or even through walls.
And based on that, I could imagine combining a camera with this method: you train the model on data where both the camera and the WiFi sensing see the individual, then keep tracking them with WiFi sensing plus the trained model even where the camera can no longer see them.
But yeah, the real world is noisy, so it could be very challenging.
You are confusing it with the earlier methods. This is a similar but not identical method: it doesn't use RSSI or CSI, and it is passive.
This approach relies solely on the "unencrypted parts of legitimate traffic".
The attacker does not need to send any packets or "generate" their own traffic; they simply "listen" to the natural communication between an access point and its clients.
BFI is much more complex than simple signal strength. RSSI is an aggregation of information that the researchers describe as "not robust" for fine-grained tasks.
In contrast, BFI is a high-resolution, compressed representation of signal characteristics.
This rich data allows the system to distinguish between 197 different individuals with 99.5% accuracy, something impossible with basic RSSI.
While older CSI methods often focused on walking directly between a specific transmitter and receiver (Line-of-Sight), BFI allows a single malicious node to capture "every perspective" between the router and all its legitimate clients.
Also, CSI requires specialized hardware and custom firmware; this one doesn't, just a WiFi module in monitor mode.
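For a rough idea of what "just a WiFi module in monitor mode" buys you: BFI travels in ordinary, unencrypted 802.11 Action No Ack management frames (type 0, subtype 14), which a passive listener can pick out from the Frame Control field alone. A minimal sketch; the bit layout follows the 802.11 spec, but the captured frames here are hypothetical stand-ins for a real monitor-mode capture:

```python
# Classify raw 802.11 frames by the Frame Control field (first byte)
# to spot candidates for beamforming feedback reports.

def frame_kind(frame: bytes) -> str:
    """Decode the first Frame Control byte of an 802.11 frame.

    Bit layout (LSB first): bits 0-1 protocol version,
    bits 2-3 frame type, bits 4-7 subtype.
    """
    fc = frame[0]
    ftype = (fc >> 2) & 0b11
    subtype = (fc >> 4) & 0b1111
    if ftype == 0 and subtype == 14:
        return "action-no-ack"   # management frame that may carry BFI
    if ftype == 0 and subtype == 13:
        return "action"
    return f"type={ftype} subtype={subtype}"

# Hypothetical captured frames; only the first byte matters here.
for raw in (b"\xe0\x00", b"\xd0\x00", b"\x08\x02"):
    print(frame_kind(raw))
```

The actual beamforming report parsing is far more involved, but this is the level of access a commodity card in monitor mode already gives an entirely passive observer.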
Thank you for adding this context about this particular research. I do see that it relies on MU-MIMO information, which requires more powerful WiFi infrastructure than the basic ESP32s I was referring to.
Yeah, but most advertisers only come after something has gone viral, not while you are building it and trying to tell a potential advertiser "this will go viral, trust me bro". And such small viral things are usually short-lived; by the time the advertisers come, it has probably already started to die down.
But yeah, maybe he would have gotten a little more financial support than donations alone if he had put up ads after it went viral.
Another way he could benefit is if people want his skills to build similar things for them, so it's basically already an advertisement for his skills.
This is such a weird comment. Not all advertising follows the influencer model. Banner ads have been funding small internet operations since before Hacker News existed. Do we really have no long-term memory at all?
Only true for our current computers, not for reversible computing.
With reversible computing you can use electricity to perform a calculation and then "push" that electricity back into a battery or a capacitor instead of dumping it into the environment as heat.
It's still a huge challenge, but there is a recent promising attempt:
"British reversible computing startup Vaire has demonstrated an adiabatic reversible computing system with net energy recovery"
Actually pretty cool - I was about to comment “nice perpetual motion machine” but looked into it a bit more and it’s much more interesting than that (well, a real perpetual motion machine would be interesting but…)
This kind of stuff could trigger the next revolution in computing, as the theoretical minimum energy cost of computing is pretty insignificant. Imagine if we could make computers with near-zero energy dissipation! A "solid 3D" computer would then become possible, and Moore's law might keep going until we exhaust the new dimension ;)
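To put a number on "pretty insignificant": Landauer's principle sets the minimum heat for irreversibly erasing one bit at k_B·T·ln 2, about 3×10⁻²¹ J at room temperature, and reversible computing aims to sidestep even that bound by never erasing bits. A quick back-of-the-envelope sketch (my own arithmetic, not from the article):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0            # room temperature, K

# Landauer limit: minimum heat dissipated per irreversibly erased bit.
landauer_j = K_B * T * math.log(2)
print(f"{landauer_j:.3e} J per erased bit")

# Even erasing 1e18 bits/s (far beyond any current chip) would dissipate
# only a few milliwatts at this limit; reversible logic can in principle
# go below it because, ideally, it erases nothing.
print(f"{landauer_j * 1e18 * 1e3:.2f} mW at 1e18 bit erasures/s")
```

Real chips today sit many orders of magnitude above this limit, which is exactly why the "solid 3D computer" is thermally impossible now but not forbidden by physics.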