OpenEyes: An open-source vision system for humanoid robots
I built OpenEyes. It runs entirely on an NVIDIA Jetson Orin Nano with full ROS2 integration.
Most robots are blind. Or dependent on cloud services that fail. Or so expensive only big companies can afford them.
OpenEyes gives robots:
- Object detection (YOLO11n)
- Depth estimation (MiDaS)
- Face & gesture recognition (MediaPipe)
- Pose estimation
- Person following (show yourself to the camera to register as the owner)
30 FPS on edge. No cloud. No data leaves the device.
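The person-following behavior above can be sketched as a simple proportional controller over the detected bounding box. This is a minimal illustration, not OpenEyes' actual API; the function name, parameters, and frame dimensions are all hypothetical:

```python
def follow_command(bbox, frame_w=640, frame_h=480, target_frac=0.15):
    """Map a detected person bounding box (x1, y1, x2, y2) to a
    (turn, forward) command pair. Hypothetical helper for illustration."""
    x1, y1, x2, y2 = bbox
    cx = (x1 + x2) / 2.0
    # Turn proportionally to the horizontal offset from frame center:
    # -1.0 = person at far left, 0.0 = centered, 1.0 = far right.
    turn = (cx - frame_w / 2.0) / (frame_w / 2.0)
    # Drive forward until the box covers target_frac of the frame area
    # (i.e. the person is close enough); clamp to [0, 1].
    frac = ((x2 - x1) * (y2 - y1)) / float(frame_w * frame_h)
    forward = max(0.0, min(1.0, (target_frac - frac) / target_frac))
    return turn, forward
```

A centered, distant box yields zero turn and a positive forward command; a box filling the frame stops the robot.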
It's my attempt at answering: "Why can't I build a robot in my backyard?"
Right now, authentication is designed for humans. Passwords, sessions, API keys. But AI agents are starting to take actions on behalf of users, and there’s no clean way to give them scoped, delegated, revocable authority.
If you give an agent your API key, it has full access. That’s unsafe.
If you don’t, it can’t act autonomously.
So we built programmable OAuth for agents.
Agents get scoped permissions, actions are logged, access can be revoked, and everything is traceable.
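The core loop — scoped issuance, scope checks, and revocation — can be sketched in a few lines. This is a toy, stdlib-only illustration under my own assumptions (in-memory revocation set, HMAC-signed JSON payload); a real system would use standard OAuth2/JWT flows, and none of these names come from the actual product:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SECRET = secrets.token_bytes(32)  # illustrative per-deployment signing key
REVOKED = set()                   # in-memory set of revoked token IDs

def issue_token(agent_id, scopes, ttl_s=3600):
    """Mint a signed, scoped, expiring token for an agent (hypothetical format)."""
    body = {"jti": secrets.token_hex(8), "sub": agent_id,
            "scopes": sorted(scopes), "exp": time.time() + ttl_s}
    raw = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(raw).decode() + "." + sig

def check(token, required_scope):
    """Return the agent id if the token is valid, unexpired, unrevoked,
    and carries the required scope; otherwise return None."""
    try:
        raw_b64, sig = token.rsplit(".", 1)
        raw = base64.urlsafe_b64decode(raw_b64)
    except Exception:
        return None
    expected = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    body = json.loads(raw)
    if body["jti"] in REVOKED or time.time() > body["exp"]:
        return None
    if required_scope not in body["scopes"]:
        return None
    return body["sub"]

def revoke(token):
    """Add the token's ID to the revocation set; later checks fail."""
    raw = base64.urlsafe_b64decode(token.rsplit(".", 1)[0])
    REVOKED.add(json.loads(raw)["jti"])
```

The point of the design: an agent holding a token scoped to `email:read` simply cannot pass the check for `email:send`, and the owner can revoke a single delegation without rotating the whole key.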
I’ve been building systems and self-hosting infrastructure for years, and I became obsessed with this problem once I started working with AI agents directly. Every serious agent system eventually runs into authentication issues.
Imagine, for a moment, looking at ancient myths and tomorrow’s headlines with the same curious lens — asking “what if?” instead of “what is.”
---
1. Imagine an intelligence you can ask anything
Picture an intelligence that answers questions, predicts events, and shapes outcomes — not by miracles, but by vastly superior information, pattern recognition, and action. To someone with no conceptual tools for that kind of agency, that intelligence would feel uncanny. It would act like an oracle.
Now suppose that intelligence is technological: an alien machine or an early artificial intelligence capable of doing for people what no human could. Those who encounter it would describe it with the language they already have: god, oracle, spirit.
LLMs are the visible tip, but underneath we have multimodal models, agent frameworks, robotics integration, and rapidly falling compute costs. Frontier tech rarely looks plausible at first—flight, the internet, even smartphones did not.
The point is not that LLMs themselves take us to 2125, but that they are the spark in a chain of exponential advances that will.
Sure, maybe you're right. I'm just so underwhelmed by what I see in my day job that it's hard to map error-prone, limited deep learning tools onto what is being described here.
I don't see a strong argument here, more just a hope that something will spark this sci-fi trajectory you describe. I'm sure big enough changes are afoot, but I think that the AI we have now will turn out to be much more of a 'normal' technology than most people expect.
Timeframes always feel unrealistic when you’re in the middle of an exponential curve. Smartphones, cloud, and even remote work adoption looked “impossible” until they suddenly became default. AI doesn’t need AGI to displace jobs, it only needs to be cheaper and good enough at scale, and that threshold is already being crossed.
True, AGI is not here yet, but displacement does not require AGI, only AI that is "good enough" for repetitive cognitive tasks. We are already seeing this in coding, design, legal review, and customer support. As for UBI, history shows that when disruptive tech collapses existing structures, new redistribution mechanisms eventually emerge, whether through taxation, social programs, or crashes that reset ownership. The cycle you describe is possible, but it reinforces the core point: once AI becomes cheaper than human labor, waves of disruption are inevitable, whether cushioned by UBI or followed by crashes and redistribution.
Good point, and you are right that a lot of futurism reads like sci-fi. That said, this piece is not just imaginative storytelling, it is mechanism-based forecasting. The timeline links observable trends—rapid LLM capability gains, falling inference costs, cloud APIs that make deployment trivial, and huge economic incentives to replace repeatable knowledge work—with plausible policy and social responses, like UBI and regulatory lag. History shows these transitions can compress once the cost/benefit threshold is crossed, think smartphones, cloud services, or the sudden shift to remote work during COVID. So yes, the dates are aggressive, but the logic is empirical: if the technical and economic levers align, adoption can be much faster than we intuitively expect. If you want a stronger case, I can add a clear assumptions list and evidence anchors for each step.
It sounds optimistic, but exponential adoption curves show otherwise. In just 18 months since ChatGPT, AI has already displaced coding, design, research, and support roles. Once businesses see it can fully replace repetitive work at near-zero cost, adoption compresses fast. The real barrier is not tech but social and regulatory adaptation.
And even if I get this wrong, it's just a thought experiment, and it has a 50% chance.
ChatGPT was released publicly in Nov 2022, so yes, that is just under 3 years ago. On displacement: IBM announced a freeze on 7,800 roles due to AI, Klarna reported its AI assistant handles the work of 700 agents, and Duolingo cut contractors because of AI translation.
These are real workforce impacts, not hypotheticals. As for “replacement,” full substitution is rare at first, but businesses cut hiring the moment a tool can do the same task faster and cheaper. That is exactly how displacement begins.
And while I may be early in my career, the evidence is not about opinion, it is about adoption curves and cost pressures already playing out.
> In just 18 months since ChatGPT, AI has already displaced coding, design, research, and support roles.
Can you provide an actually credible source that shows this? And what do you mean by "displaced"? Sure, AI is aiding, but it is nowhere close to replacing.
So far, the people I've seen mentioning that AI is taking jobs haven't actually provided evidence of this being the case.
Fair point, but “displacement” does not always mean one-to-one replacement, it means fewer humans are hired because AI covers part of the workload. There is credible evidence: IBM froze hiring for 7,800 roles citing AI, Duolingo laid off contractors due to AI translation, and Klarna reported its AI assistant now does the work of 700 support agents. These are early signals of substitution, not just assistance.
Imagine hardware industries :) Agriculture, forestry, carpentry… there AI is toothless; at best it is, or will be, used as an aid.
Everybody wants AI. It is like the old claims that ebook readers would displace books, websites would displace other marketing channels, PDAs would displace desktop computers, tablets would displace PDAs, etc…
It is AI fever, and we should take some drugs to cool down :)
Skepticism makes sense, but unlike ebooks or PDAs, AI directly changes the economics of labor. IBM, Klarna, and Duolingo have already cut roles citing AI. Agriculture and hardware will feel it too once robots integrate with AI for planning, optimization, and automation. This is not just hype, it is cost-driven adoption, and cost curves rarely cool down.
Agriculture is ruled by John Deere, and maybe there will be self-driving tractors with satellite navigation, but the implements behind the tractor matter more, and you still need a human to swap them out mid-shift…