ivegotnoaccount's comments | Hacker News

Wait, 114°C? Is that safe, especially long term? I guess Intel has margins in place, but I recall their processors having a Tjunction around 100°C.


> You should be able to get a few MB on a page easily

A few hundred kB, maybe, but a few MB seems unlikely. Taking the best-case scenario (no optical aberration, perfect sensor...): my phone has a 48MP camera. We cannot get 48M * 3 colors, as those 48MP are what remains after Bayer-matrix demosaicing. We also cannot count on more than one bit of information per pixel, because printing often has no "real" greyscale, only some kind of dithering, and you don't know for sure how it would interact with your sensor. So in the best case, 6MB. But then you need to add error correction: book prints are not perfect, especially at a sub-0.1mm scale. Pages are also not perfectly planar, as paper is grainy. And the camera is never at exactly a zero angle... on all three axes. So you would need the printed pixels to be much larger than the sensor ones to offset all this. And so on.
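
To put numbers on it, here is the back-of-the-envelope version in Python (the 50% ECC overhead is just an assumption of mine for illustration):

    # Best-case capacity of one photographed page, per the numbers above.
    sensor_pixels = 48_000_000   # 48MP phone camera, after demosaicing
    bits_per_pixel = 1           # dithered print: one bit per pixel at best

    raw_bytes = sensor_pixels * bits_per_pixel / 8
    print(f"best case: {raw_bytes / 1e6:.1f} MB")   # 6.0 MB

    # Illustrative assumption: half the raw budget goes to error
    # correction (print defects, paper grain, camera angle...).
    ecc_overhead = 0.5
    print(f"after ECC: {raw_bytes * (1 - ecc_overhead) / 1e6:.1f} MB")  # 3.0 MB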


I think you confused printed pixels with camera pixels along the way.


I explicitly talked about the best-case scenario, which would have been "one camera pixel is able to retrieve precisely the information of one printed pixel (or several printed pixels that act as one)".

From there, I simply listed sources of error that would make it worse. The only place I could see conflation occurring is where I wrote "sub-0.1mm", which indeed refers to the printed page, and corresponds roughly to the tolerance that would be needed to reach millions of "printable" pixels. But even though it's about the printed pixels, it still limits the amount of information that can be stored.

Would you mind indicating where the confusion is, according to you? Re-reading it quickly, the comment seems consistent and indicates that both the camera pixels AND the printed pixels would cause issues.


I don’t know, it still seems feasible. High-quality full-page images can be printed at ~2500x3500 pixels per page [1] (a random print shop from the internet, but representative AFAICS), and that’s just what printers recommend for human-readable images; they can easily produce images at up to 1200 DPI.

In another example, take the Machine Identification Code (MIC) tracking dots that basic home printers can produce [2]. (I bet the publishing industry could do much better and produce a higher-density grid.)

These dots have a stated diameter of 0.1 mm, and on an 8”x10” area one could get a grid of 80 x 25.4 x 25.4 x (1/0.1mm) x (1/0.1mm) = ~5.1M dots. (Wikipedia claims larger spacing, but that must be the protocol; printing itself allows for highly precise positioning.) And that’s just for one color. Use CMYK (you could imagine a sub-project to design an 8-bit color-based dot scheme) and compression, and even with ECC losses I can see a few MB of encoded source per page being possible (rough numbers in the sketch below the links). And writing that decoder would be a great exercise!

[1]: https://www.docucopies.com/image-resolution/

[2]: https://en.m.wikipedia.org/wiki/Machine_Identification_Code
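
For what it's worth, the dot arithmetic checks out; a quick Python sketch (the bits-per-dot figure and ECC rate are assumptions of mine, just to get an order of magnitude):

    MM_PER_INCH = 25.4

    # Dot budget for an 8" x 10" area covered in 0.1 mm dots.
    area_mm2 = (8 * MM_PER_INCH) * (10 * MM_PER_INCH)   # ~51,613 mm^2
    dots = area_mm2 * (1 / 0.1) ** 2                    # 100 dots per mm^2
    print(f"{dots / 1e6:.2f}M dots")                    # ~5.16M dots

    # Assumptions for illustration only: 4 bits per dot via a CMYK-based
    # scheme, and half the raw capacity spent on error correction.
    bits_per_dot, ecc_rate = 4, 0.5
    payload = dots * bits_per_dot * (1 - ecc_rate) / 8  # bytes
    print(f"~{payload / 1e6:.1f} MB payload per page")  # ~1.3 MB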


The dithering in the page pixels doesn't limit the amount of information you can get through your camera pixels.


Not sure I want to use what seems like a fully-closed standard for something as frequent as sharing locations when open alternatives exist.


Also, it doesn't "blink", nor does what's inside it "disappear" from perception.


Isn't it only on their consumer lines that Intel removed AVX-512?

Sapphire Rapids is indicated as having support for it.

I know that all Zen 4 chips support it (even though you pointed to an EPYC CPU), but Intel probably released it more for professional users than for their non-pro ones.


Yes, it's only on Alder Lake (i.e. the cheaper, consumer-oriented CPUs) that it has been removed. Server chips still have it AFAIK.

Even on Alder Lake, the official explanation is that it has both P(erformance) and E(fficiency) cores, with the E cores being significantly more power efficient and the P cores being significantly faster. The P cores have AVX512, the E cores don't. Since most kernels have no idea that this is a thing and treat all CPUs in a system as equal, they will happily schedule code with AVX512 instructions on an E core. This obviously crashes, since the CPU doesn't know how to handle those instructions. Some motherboard manufacturers allowed you to work around this by simply turning off all the E-cores, so only the P-cores (with AVX512 support) remained. Intel was not a fan of this and eventually disabled AVX512 in hardware.
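
On Linux you can see why naive detection breaks: /proc/cpuinfo lists a flags line per logical CPU, so a capability check done on whichever core you happen to be scheduled on says nothing about the others. A rough sketch (Linux/x86 only, and purely illustrative: on shipping Alder Lake parts AVX-512 is fused off, so the list comes back empty):

    # Which logical CPUs advertise AVX-512F? On a hybrid part with the
    # feature left enabled, only the P cores would list it -- exactly
    # the mismatch that trips up schedulers that treat all cores alike.
    avx512_cpus, cpu = [], None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("processor"):
                cpu = int(line.split(":")[1])
            elif line.startswith("flags") and "avx512f" in line.split():
                avx512_cpus.append(cpu)
    print("logical CPUs with AVX-512F:", avx512_cpus)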

As ivegotnoaccount mentioned, the Sapphire Rapids range of CPUs will have AVX512. Those are not intended for the typical consumer or mobile platform though, but for servers and big workstations where power consumption is much less of a concern. You would probably not want such a chip in your laptop or phone.


It would have been possible to devise a system call to 'unlock' AVX512 for an application that wants to use it, which would pin it to only be scheduled on P cores.
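
Userspace can already approximate the pinning half of that on Linux via sched_setaffinity; only the "unlock AVX512" side would need new kernel support. A Python sketch (the P-core IDs here are made up for illustration; on a real chip you would discover them from the CPU topology):

    import os

    # Hypothetical P-core IDs -- 0-7 is an assumption, not a real layout.
    # On an actual system, read /sys/devices/system/cpu/ to find them.
    P_CORES = {0, 1, 2, 3, 4, 5, 6, 7}

    # Restrict the current process (pid 0 = self) to the P cores, so
    # any AVX-512 code path can never land on an E core.
    os.sched_setaffinity(0, P_CORES)
    print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))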


You end up with the issue of what happens if a commonly used library (or even something like glibc) wants to use AVX512 for some common operation: you could end up with most or all processes pinned to the P cores.


If you explicitly have to request AVX512, that might discourage glibc from using it.


Yup. I think it was from Commitstrip.

edit: found it. https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...?


Yeah that's the one.


Well, as explained in some of my other comments, it doesn't seem that effective. Most GPT-generated garbage will probably be GPT-3 based, which doesn't seem to trigger as "fake" very often (I got more "human" results when I tested it with ChatGPT), while on the other hand it says that several of my comments are fake with >99% certainty.


I see, that's obviously an issue. I only tested it with something I generated and someone's long-form blog post.


While I understand the reasons why the author thinks it will go that way, I'm not fully convinced:

- [in average rich countries in 2030] Fast gigabit internet is cheap and everywhere (5G or mesh wifi) => Glad to hear it will be that way, unlike 4G, for which we got the same promises yet I often have issues on my phone in the suburbs of a mid-sized city in a rich country. Also glad to hear ISP issues will be a thing of the past. Sorry for the slight sarcasm here, but this point is a prerequisite for the whole post.

- Outside of work, where expenses are not mine to pay, I'm really not a fan of paying extra for cloud services when I can do without.

- On the "editing code" part => That seems to introduce quite a lot of friction. What if the tools I like to use are not supported as part of this workflow? Which of those services support private clouds if I'm working on things that must be kept private?

Last but not least, and this one is more a nitpick of mine than a real complaint: the article uses "localhost" quite ambiguously, sometimes to mean "running things locally" and other times as a stand-in for the whole development machine (editing things locally, having the code on your drive...), and it talks about things like OS development. While this may work fine for webdevs, I'm not so sure about other kinds of developers. GUI dev would get added friction, same for work needing low latency (the article talks about edge computing, but that requires paying even more if your organization is small). Maybe that works for Tesla OS devs, but not all companies have what is needed to create a full-fledged cloud environment for embedded developers.


Seems to have major biases against who knows what sentence structures. Even without trying to make it say "fake", for some of my messages and text I write into it, it is pretty confident I'm GPT-2...


https://news.ycombinator.com/item?id=32447928 is marked as nearly 100% fake, whereas I can assure you it was written by a human.

Maybe I was just unlucky with the comment I tried it on (I took the longest one I saw in my history), but I don't think I would have liked seeing it either removed or spat at for being considered "AI generated"...

The detector also thinks this comment is fake. It seems influenced by certain flavors of mistakes.

Idiomatic ones. Spelling ones. Grammar. All non-native speakers will easily get flagged. Doesn't look spot-on for now. I checked all those assertions by live-typing them into the demo. 0.09% real.

