
I agree. I like using Antigravity for some of my frontend work, and I find it does a better job than Claude Code - Opus 4.6. I’ve also found the Gemini Flash models to be good at legal defense research—I use them to help New Yorkers fight parking tickets (https://nyceasyparking.com). That said, the Claude models are still amazing at agentic work.

I’m working on yet another cloud-based coding agent, https://seniordev.io/, that connects to an existing GitHub repo, spins up a feature branch, commits incremental changes, and opens a PR. You can jump into an embedded VS Code server to review and tweak the code before merging, with no local setup needed. Any feedback is greatly appreciated. Thanks!


Great work! I love your videos; they've taught me so much. Any plans for a Mixture of Experts (MoE) video? My understanding is that, starting with GPT-4, most advanced models use MoE to some extent. For example, can I take the model from your GPT-2 video and just swap the feed-forward layer for an MoE layer like the one found here (1)? I could just try it myself, but I enjoy the expert guidance you provide in your videos. Please don't stop; great content!

1. https://github.com/mistralai/mistral-inference/blob/main/src...
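For anyone curious about the swap being asked about: the core MoE idea is to replace the single feed-forward block with several and route each token to the top-k of them by a learned gate. Here's a minimal NumPy sketch of top-k routing; all shapes, names, and values are mine for illustration, not taken from the linked Mistral code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, top_k = 8, 16, 4, 2

# Each "expert" is an independent two-layer feed-forward network.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.1,
     rng.standard_normal((d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.1  # router weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (tokens, d_model) -> (tokens, d_model), top-k expert routing per token."""
    logits = x @ gate_w                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of the top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                   # softmax over selected experts only
        for w, e in zip(weights, top[t]):
            w1, w2 = experts[e]
            h = np.maximum(x[t] @ w1, 0.0)         # ReLU feed-forward
            out[t] += w * (h @ w2)
    return out

y = moe_layer(rng.standard_normal((3, d_model)))
print(y.shape)  # (3, 8) -- same shape as a dense feed-forward layer would return
```

Since input and output shapes match a dense FFN, it is a drop-in replacement in a transformer block; real implementations batch the routing instead of looping per token.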


The 555 timer is considered one of the most successful chip designs in electronics. It has a simple, well-understood design, and it can be configured to generate a periodic signal with just a few resistors and capacitors, which is why it is so often used as a timer in electronic circuits. The Linux scheduler is normally driven by timer interrupts generated by the CPU. Replacing those CPU interrupts with a simple 555 timer is impressive precisely because of how unconventional the setup is.
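The periodic-signal (astable) configuration follows standard textbook formulas; a quick sketch, with component values made up for illustration:

```python
# 555 timer in astable mode, using the standard datasheet formulas:
#   f    = 1.44 / ((R1 + 2*R2) * C)
#   duty = (R1 + R2) / (R1 + 2*R2)

def astable_555(r1_ohms: float, r2_ohms: float, c_farads: float):
    """Return (frequency_hz, duty_cycle) for a 555 in astable mode."""
    freq = 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)
    duty = (r1_ohms + r2_ohms) / (r1_ohms + 2 * r2_ohms)
    return freq, duty

# Hypothetical values: R1 = 1 kOhm, R2 = 10 kOhm, C = 10 uF
f, d = astable_555(1_000, 10_000, 10e-6)
print(f"{f:.1f} Hz, {d:.0%} duty")  # prints "6.9 Hz, 52% duty"
```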


Also, the 555 is an ancient chip that was pretty much the basic building block of hobby electronics (along with its friend, the 741 general-purpose op-amp); the two of them starred in maybe 70% of the hobby circuits of the '80s.


Thanks for your helpful comment! I was also struggling with the bad circuit's voltage divider.


Sure thing! One of the things that helps me sometimes is re-arranging things visually.

A real divider:

  Input o----vvvvv----o-------------o Output
                      |
                      |
                      Z
                      Z
                      Z
                      |
                      |
                      o
                    Ground
  
The bad one:

  Input o-------------o----vvvvv----o Output
                      |
                      |
                      Z
                      Z
                      Z
                      |
                      |
                      o
                    Ground
  
Looking at it this way, it might be a bit easier to see that in the good circuit both resistors sit in a Kirchhoff voltage loop that includes the input, while in the bad one they can't.
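The difference also shows up numerically. A tiny sketch of the unloaded case (all values hypothetical):

```python
# Real divider: series R1 from input, R2 from the junction to ground.
# With no load current drawn at the output, Vout = Vin * R2 / (R1 + R2).

def divider_out(v_in: float, r1_ohms: float, r2_ohms: float) -> float:
    return v_in * r2_ohms / (r1_ohms + r2_ohms)

# In the bad circuit the series resistor sits between the junction and the
# output; with no load current, nothing drops across it, so Vout == Vin and
# the grounded resistor just burns power without dividing anything.
print(divider_out(9.0, 10_000, 10_000))  # prints 4.5 -- half of 9 V with equal resistors
```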

(Edit: it appears I don't understand linebreaks on HN, so things are gonna appear garbled until I fix it.)

(Edit 2: Fixed.)


It also works really nicely with the Raspberry Pi 4 and QEMU.


I was trying to get your idea, but then realised that you probably meant Raspberry Pi or QEMU. Or do you run it on a Raspberry Pi with QEMU? If so… why?


Hi, apologies for any confusion. What I meant to say is that OpenWrt can be run on the Raspberry Pi. I did this because I had a Raspberry Pi lying around and didn't want to invest in additional hardware just to experiment with OpenWrt on my internal network. You can also run OpenWrt in a virtual environment like QEMU, which is useful if you're aiming to connect multiple VMs to OpenWrt, effectively setting up a virtual network with OpenWrt at its core. Please see the links below for more info:

https://firmware-selector.openwrt.org/?version=21.02.3&targe...
https://openwrt.org/toh/raspberry_pi_foundation/raspberry_pi
https://openwrt.org/docs/guide-user/virtualization/qemu


It could be that the corpus ChatGPT was trained on is full of ‘confidently wrong’ answers from these ‘experts’. One solution could be to train these LLMs on a higher-quality corpus from real experts instead of random text from the internet. But would that just bring us back to the days of expert systems?


That will not solve the problem, because when GPT doesn't have the answer, it will make one up by copying the structure of correct answers but without any substance.

For instance, let's say your LLM has never been told how many legs a snake has; it knows, however, that a snake is a reptile and that most reptiles have four legs. It will then confidently tell you "a snake has four legs", because that mirrors sentences like "a lizard has four legs" and "a crocodile has four legs" from its training set.


I don't think this is necessarily the case anymore. The Bing implementation of ChatGPT has a toggle for how cautious it should be about getting things wrong. I was working on a very niche issue today and asked it what a certain pin is designated for on a control board I am working on. I believe it is actually undocumented and wanted to see what ChatGPT would say. It actually said it didn't know and gave some tips on how I might figure it out. I suppose it is possible that it synthesized that answer from some previous Q&A somewhere, but I couldn't find any mention of it online except in the documentation.


If we weren't completely hamstrung by copyright law we could legally train it on lots of actual books.


Wow, this looks amazing! I can’t wait to get my copy. I wonder, with all the available resources for learning electronics, whether someone can become an electronics engineer without going to college (assuming enough mathematical maturity to understand electrodynamics, etc.)?


Ghidra was released 2 years ago. Am I missing something?


Version 10 recently came out, now featuring a debugger.


I can’t either. It reads like something out of GPT-3 or maybe even GPT-2. Lol no offense.

