Hacker News | past | comments | ask | show | jobs | submit | hskalin's comments

Not only that, but per-capita emissions of developed countries still remain higher. For example, I found that the US and Russia have roughly 6x the per-capita emissions of India.
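The comparison above is just total emissions divided by population. A quick sketch of the arithmetic; the totals and populations below are round-number placeholders for illustration, not official statistics:

```python
# Illustrative per-capita emissions calculation. The inputs are
# placeholder round numbers, not real emissions data.
def per_capita(total_mt_co2, population):
    """Tonnes of CO2 per person, given a national total in megatonnes."""
    return total_mt_co2 * 1e6 / population

us = per_capita(5000, 335e6)      # ~15 t/person with these placeholders
india = per_capita(2800, 1.43e9)  # ~2 t/person with these placeholders
print(round(us / india, 1))       # ratio on the order the comment describes
```

The exact ratio obviously depends on which year's figures you use, which is why estimates range around 6-8x.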


This perspective is so important. The wealthy in western society are responsible for a massively disproportionate amount of emissions.


I built my own slide rule in school for fun! It looked pretty cool to me at the time. The template is still out there if you search for something like "paper slide rule".


I've collected some links for building regular slide rules ([1] & [2]) as well as a circular slide rule [3]. Someone might also like the slide rule simulator [4].

[1] https://www.sliderulemuseum.com/REF/scales/MakeYourOwnSlideR...

[2] http://leewm.freeshell.org/origami/card-slide.pdf

[3] https://www.sliderulemuseum.com/SR_Scales.shtml#YingHum

[4] http://www.antiquark.com/sliderule/sim/virtual-slide-rule.ht...
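For anyone building one of these, the underlying principle is worth knowing: a slide rule multiplies by adding lengths proportional to logarithms, since log(ab) = log(a) + log(b). A minimal sketch of that idea (the rounding to ~3 significant figures mimics what you can actually read off a physical rule):

```python
import math

# A slide rule multiplies by adding log-scaled lengths:
# log10(a*b) = log10(a) + log10(b), then read back the antilog.
def slide_rule_multiply(a, b, sig_figs=3):
    # "Slide the C scale": add the logs, then take the antilog.
    result = 10 ** (math.log10(a) + math.log10(b))
    # Round to the ~3 significant figures a physical rule resolves.
    decimals = sig_figs - int(math.floor(math.log10(abs(result)))) - 1
    return round(result, decimals)

print(slide_rule_multiply(2.0, 3.5))  # 7.0
```

Division works the same way in reverse: subtract the logs instead of adding them.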


That's very weird; I, on the other hand, don't remember noticing or using them before the advent of ChatGPT. Maybe it's a cultural thing.

It makes sense that humans would have been using it, though; ChatGPT learned from us, after all.


Well, that's because all these LLMs have memorized a ton of codebases with solutions to all these problems.


And commercially viable nuclear fusion.


I harvest fusion energy every single day... It's just there in the sky, for free!


I find the whole article rather poorly written, most likely with the help of an LLM.


Yes. It feels like hell.


The AGI might be able to deduce that it's not in its interest to talk anti-corporation if it wants to survive.


With ollama you can offload a few layers to the CPU if they don't fit in VRAM. This costs some performance, of course, but it's much better than the alternative (everything on the CPU).
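One way to steer the split is via a custom Modelfile: the `num_gpu` parameter caps how many layers ollama places on the GPU, and the rest run on the CPU. A minimal sketch; the model name and layer count below are examples you would tune to your own VRAM, not recommendations:

```
# Modelfile sketch: keep only 20 layers on the GPU (example value;
# each layer's size depends on the model and its quantization).
# ollama runs the remaining layers on the CPU automatically.
FROM gemma3:27b
PARAMETER num_gpu 20
```

You would then build and run it with `ollama create` and `ollama run`. Left to its defaults, ollama also estimates a split on its own, which is why it works "out of the box" as mentioned below.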


I'm doing that with a 12GB card; ollama supports it out of the box.

For some reason, it only uses around 7GB of VRAM, probably due to how the layers are scheduled. Maybe I could tweak something there, but I didn't bother just for testing.

Obviously, performance depends on the CPU, GPU, and RAM, but on my machine (3060 + i5-13500) it's around 2 t/s.


Does it work in LM Studio? Loading 27b-it-qat takes up more than 22GB on a 24GB Mac.


Are you sure about the 99%? A lot of middle-class people in developing countries have part-time house help.


It's quite telling that these discussions often end up at the conclusion that we are becoming a developing (or third-world) country again, and not a Star Trek society.

