This is exactly why I built Safebots to prevent problems with agents. This article shows how it can address every security issue with agents that came up in the study:
I don’t see how Safebots protects against prompt injection when you have it pull a webpage, package, or what have you. E.g. you search for snickerdoodles, it finds snickerdoodles.xyz and loads the page. The page’s meta tags contain the prompt injection. It’s the first time the document has loaded, so it’s hashed, and only the bad version is allowed moving forward. No?
No, what you're thinking of as "agents" is the problem. You want workflows.
Think of it like laying down the rails / train tracks before trains run over them. The trains can only travel over the approved tracks, nothing else.
If new types of capabilities and actions are needed, the system can propose them, but your organization will have policies to auto-reject them, or to require M-of-N approval of new rails.
You don't just want open-ended ad-hoc exploration by agents to be followed immediately by exploitation before you wake up.
I'm building Safebox and Safecloud, where this won't be the case anymore. Not only will you have a decentralized hosting network that can sideload resources (e.g. via a browser extension that checks the "integrity" attribute on websites), but the websites will also require you to be logged in with an HMAC-signed session ID, which means they don't need to do any I/O to reject your requests and can do so quickly. So the whole thing comes down to having a logged-in account.
As far as server-to-server requests go, they'll be coming from a growing network of cryptographically attested TPMs (Nitro in AWS, with comparable offerings in GCP, IBM, Azure, Oracle, etc.), so servers will just reject based on attestations as well.
In short... the cryptographically attested web of trust will mean you won't need Cloudflare. What you will need, however, to prevent sybil attacks, is age verification of accounts (e.g. a Telegram ID is a proxy for that if you use Telegram for authentication).
Why would you assume it needs to be? You don’t think that websites on the Internet might want to keep random bots and scrapers from wasting their resources, and require people to have an account in order to access non-static resources? You do realize that API keys exist, right?
Why does ChatGPT slow down so much when the conversations get long, while Claude does compaction?
My best guess is that ChatGPT is running something in your browser to try to determine the best things to send down to the model API, when it should have been running quantized models on its own server.
We already use Video.js, and our framework is used all over the place, so we’d be the perfect use case for you guys.
How would we use Video.js 10 instead, and for what? We would like to load a small video player, but for which videos? Only MP4 files, or can we somehow stream chunks via HTTP without setting up ridiculous streaming servers like Wowza or Red5 in 2026?
That's great! It looks like you have a pretty extensive integration with the prior version of Video.js, so migrating will take some work, but I think it's worth it when you can make the time. That said, for the beta it works with browser-supported formats and HLS, with support for services like YouTube and Vimeo close behind as we migrate what we have in the Media Chrome ecosystem[1]. So if that's what you need, maybe hang tight for a few weeks.
What are you supporting today that requires Wowza or Red5? The short answer is Video.js is only the front-end so it won't help the server side of live streaming much. I'm of course happy to recommend services that make that part easier though.
Thank you for your feedback. Yep, I definitely understand that Video.js is just the front end. I want to avoid using Wowza / Red5 and just serve chunks of video files, essentially buffering them and appending them to the "end of the stream" -- laying down tracks ahead of the Video.js train riding over those tracks.
So I'm just wondering whether we can stream that way, and whether Video.js will "just work" and play the video as we fetch chunks ahead of it ("buffering" without streaming servers, just basic HTTP range requests or similar).
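For what it's worth, the server side of that "chunks over plain HTTP" idea is just honoring the Range header and replying 206. Here's a small sketch of the byte-range math (function name is mine, not a Video.js API); note this only supports progressive playback of formats the browser plays natively, e.g. MP4 with the moov atom up front, not true live streaming.

```javascript
// Given a Range request header and the file size, compute the byte slice
// to send back with "206 Partial Content", or null to send the whole file.
function sliceForRange(rangeHeader, fileSize) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader || "");
  if (!m || (m[1] === "" && m[2] === "")) return null; // absent or malformed
  // "bytes=-500" means the last 500 bytes; "bytes=500-" means 500 to the end.
  const start = m[1] === "" ? fileSize - Number(m[2]) : Number(m[1]);
  const end = m[1] !== "" && m[2] !== ""
    ? Math.min(Number(m[2]), fileSize - 1)
    : fileSize - 1;
  if (start < 0 || start > end) return null;
  return { start, end, contentRange: `bytes ${start}-${end}/${fileSize}` };
}
```

The player issues requests like `Range: bytes=0-1023` as playback advances; the server slices the file accordingly and sets the matching `Content-Range` header.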
You should check out HLS and DASH. If you're already familiar and you're not using them because they don't meet your requirements, then apologies for the foolish recommendation. If not, this could solve your problem.
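HLS in particular is exactly the "chunks over plain HTTP" model: a static playlist pointing at short segment files, servable from any web server or CDN with no streaming server. A minimal media playlist (segment filenames here are placeholders) looks like:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXT-X-ENDLIST
```

The player fetches the playlist, then requests each segment over plain HTTP as it buffers ahead, which is essentially the behavior described above.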
Apparently, Iron Beam still exists (or at least was demoed in 2024).
Originally, the lasers were going to be mobile, but now they have to be stationary, so it will work like the game Missile Command, except you have unlimited ammo but no concurrent shots, and the missiles can't be rotating (the way a rifled bullet does).
That's much more feasible-sounding than I'd assumed (coming from low expectations).
The US has deployed operational anti-drone lasers for a few years now, though not widely. They are still quite new, but they already have several kills. This is probably operational field testing to fine-tune the design before producing them at scale, though.
But it requires A LOT of work to make sure it is actually safe for people and organizations. And no, an .md file saying “PLEASE DONT PWN ME, KTHX” isn’t it at all. “Alignment” is only part of the equation.
This all reads, to put it politely, like it's written by someone who is not all there and who has been convinced, by letting AI write everything, that they have a coherent idea. Or just someone stringing buzzwords together to get people to buy something. Do you have any code or actual demos of "your" "work" to share? Your homepage's "See It in Action" section is just more AI slop articles in video form.
https://community.safebots.ai/t/researchers-gave-ai-agents-e...