I understood that it was common to post a paper to arxiv while it was being peer reviewed by a journal. The problem pre-print archives like arxiv try to solve is the long lag between submission and final publication at most journals. Do people really just "submit to arxiv" and that's it? I didn't know that, but it's not common in my field to use arxiv, so I haven't kept up with the practices.
There have been cases where arxiv has been used as a venue of final publication, but the general intent is that stuff submitted there will eventually find a home in the peer-reviewed literature. There are sometimes exchanges of arguments there that never make it to prime-time, but that's not the dominant use-case.
The reality is, though, that people working in a given field are much more likely to use the arxiv version as the basis for further work, simply because it is available so much earlier. It is not uncommon to reach the point of publication and then run around to try and find out where all the arxiv submissions you used were published, which can sometimes be challenging.
Cool! I'm glad to see RFID and localization being merged together like this. I think this really shows that RFID is a much more suitable technology for this type of application than a computation-heavy CV algorithm.
There was an article posted on this the other day. It operates at 24 and 60 GHz, whereas most RFID is in the UHF (900 MHz) band. This radio harvests energy to run an oscillator and actively transmits data, whereas RFID only reflects back power (no active RF transmission). Also, this device requires a fairly massive transmit power (42 dBm / ~16 watts RF) for a few cm of operating distance, compared to RFID, which achieved several meters of range using 36 dBm / 4 watt transmitters. The biggest achievement is the size reduction. In the previous HN thread, someone pointed out that this is academically very interesting, but probably not practical.
42 dBm is crazy! Do you know the FCC limitations for that frequency? For the ISM bands it's +36 dBm EIRP maximum. I'm not familiar with the rules at those higher GHz frequencies.
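For anyone following along, the dBm-to-watts conversion these numbers rest on is easy to sanity-check (standard formula, nothing specific to the article):

```python
import math

def dbm_to_watts(dbm: float) -> float:
    """Convert power in dBm to watts: P(W) = 10^(dBm/10) / 1000."""
    return 10 ** (dbm / 10) / 1000.0

def watts_to_dbm(watts: float) -> float:
    """Convert power in watts to dBm: dBm = 10*log10(P(mW))."""
    return 10 * math.log10(watts * 1000.0)

print(round(dbm_to_watts(36), 2))  # 36 dBm (ISM EIRP limit) -> ~3.98 W
print(round(dbm_to_watts(42), 2))  # 42 dBm -> ~15.85 W
```

Note the scale is logarithmic, so those extra 6 dB are roughly a 4x jump in power.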
Maybe you can describe this a bit more. The Great Seal Bug was used for wireless audio surveillance in the 1940s, but it operated by using a microphone/cavity to modulate the load of an antenna, thereby embedding audio in the reflected fields.
Bell did a similar demonstration in 1880 with the photophone, using a mirror to embed audio in reflected light, and demonstrated wireless audio transmission over about 200 meters.
However, I'm not sure why audio would cause a frequency shift in a radio circuit. Perhaps you mean that the antenna will be perturbed and that perturbation could be recovered? I'd be interested to know.
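To make the load-modulation idea concrete, here's a toy model of it (purely illustrative; real backscatter involves antenna impedance and microwave frequencies, not this clean multiply):

```python
import numpy as np

fs = 48_000                                  # sample rate (Hz)
t = np.arange(fs) / fs                       # one second of time samples
carrier = np.cos(2 * np.pi * 1000 * t)       # stand-in carrier tone
audio = 0.5 * np.sin(2 * np.pi * 300 * t)    # 300 Hz "voice" signal

# Sound pressure on the cavity/diaphragm changes the antenna's reflection
# coefficient, so the reflected field is the carrier scaled by it:
reflection = 0.5 + audio                     # swings between 0 and 1
backscatter = carrier * reflection

# An eavesdropper illuminating the device recovers the audio with plain
# envelope (AM) detection of the reflected signal:
envelope = np.abs(backscatter)
```

The point is that nothing in the bug itself transmits; the audio rides on amplitude changes of the reflected carrier.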
In the times that I was still building radio transmitters (for a very illicit living, selling them to pirate radio stations in Amsterdam) I had to hot-melt each and every long wire and coil in place so that it wouldn't vibrate.
The oscillator circuitry of a transmitter is (even when crystal controlled) sensitive to mechanical perturbation, which typically leads to spurious AM and FM modulation of the outgoing signal. To demonstrate the effect I once held a half-hour session on air with a guy on the other side of the city by just talking to the circuit board.
In a PLL or crystal controlled transmitter, modulating the carrier in such a coarse way is much harder. Typically the modulation is done using a capacitive diode (a varicap), a diode whose capacitance changes with the reverse voltage. Because this voltage has to be applied to the diode somehow (in the days before SMD), that wire was again susceptible to microphony: air pressure on the wire changed its location relative to the ground plane, and that caused a measurable frequency shift. Not nearly as big a shift as in the older stuff, but it was definitely a factor.
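To put a rough number on how little capacitance change it takes to move an LC oscillator, here's a back-of-the-envelope sketch (the component values are made up for illustration, not from any transmitter I built):

```python
import math

def lc_freq(L: float, C: float) -> float:
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

L = 100e-9           # 100 nH coil (hypothetical)
C = 25e-12           # 25 pF tank capacitance (hypothetical)
f0 = lc_freq(L, C)   # ~100.7 MHz

# A wire flexing relative to the ground plane might change C by only
# 0.01 pF (0.04%), yet that already shifts the carrier by ~20 kHz:
f1 = lc_freq(L, C + 0.01e-12)
print(f"{f0/1e6:.3f} MHz, shift of {f0 - f1:.0f} Hz")
```

A ~20 kHz swing is well within the deviation of broadcast FM, which is why talking at the board ends up on the air.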
Wifi radios are much more robust than the stuff that I built. But I suspect that, given a sensitive enough detector, a residual audio component might be extracted from an otherwise non-audio signal by direct interaction between the sound waves and the transmitter hardware.
In a nutshell, it is very much harder to make something that does not exhibit microphony than to make something that does. You'd have to take that into account from the beginning of the design.
It's far-field, so it couples completely differently than NFC tags do. Also, NFC communicates by reflecting signals; this device communicates by active transmission (according to the article; I haven't read their paper yet).
What's new? It's tiny, and I'm very curious how they got an oscillator to work at such extremely low power. But it is really just an extension of RFID/NFC/IoT miniaturization work. Then again, just about everything starts out that way.
It's a link to a blog post on compressive sensing pointing out that conventional sensing (or rather sampling) is mathematically equivalent to measuring with the identity matrix. It goes on as an introduction to compressive sensing.
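The "sampling is the identity matrix" framing is easy to see in code (my own sketch, not taken from the linked post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.zeros(n)
x[[5, 40, 77]] = [1.0, -2.0, 0.5]   # a sparse signal: 3 nonzeros out of 100

# Conventional sampling: measure y = A @ x with A = I. One measurement
# per sample, and you trivially get the signal back.
A_identity = np.eye(n)
y_full = A_identity @ x
assert np.allclose(y_full, x)

# Compressive sensing swaps I for a short, fat random matrix: m << n
# measurements, each a random projection of the *whole* signal.
m = 20
A_random = rng.standard_normal((m, n))
y_compressed = A_random @ x          # only 20 numbers instead of 100

# Recovering x from y_compressed needs a sparsity-exploiting solver
# (e.g. L1 minimization), which is where the actual theory lives.
```

The identity-matrix view makes clear that ordinary sampling is just one (very inefficient, for sparse signals) choice of measurement matrix.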
Not for any practical measurements. This would work in, say, an anechoic environment (save for the wall under test), but otherwise you'd end up sampling multipath constructive/destructive interference instead of the wall reflections.
>I'm hoping for precise indoor navigation

Google and Apple are currently working on that. All you'd have to do is walk around with your phone, measuring signal strength, and then visualize that.
I'd wager there are _a lot_ more folks than that looking at the problem. It's difficult to just use signal strength, since it is heavily multipath-dependent and time-varying: anything in the environment changing (e.g., moving your phone or hand around) changes the reading.
There are results in the literature floating around that show some basic success, but nothing at all like the dreams of indoor GPS we're all hoping for. It's a fun problem space, but still in its infancy.
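To see why raw signal strength is such a shaky ranging input, consider the standard log-distance path-loss model (the numbers below are illustrative, not from any particular paper):

```python
import math

def rssi_at(d: float, n: float = 3.0, rssi_1m: float = -40.0) -> float:
    """Log-distance path loss: RSSI(d) = RSSI(1m) - 10*n*log10(d)."""
    return rssi_1m - 10 * n * math.log10(d)

def distance_from(rssi: float, n: float = 3.0, rssi_1m: float = -40.0) -> float:
    """Invert the model to estimate distance from a single reading."""
    return 10 ** ((rssi_1m - rssi) / (10 * n))

true_d = 10.0
rssi = rssi_at(true_d)             # -70 dBm in this model

# Multipath fading routinely swings a reading by +/- 6 dB, which turns
# a true 10 m into an estimate anywhere from ~6 m to ~16 m:
print(distance_from(rssi - 6))     # ~15.8 m
print(distance_from(rssi + 6))     # ~6.3 m
```

Because distance sits inside the exponent, a few dB of fading blows up into meters of position error, which is roughly why pure-RSSI indoor localization stalls.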