This spiel is hilarious in the context of the product this company (https://juno-labs.com/) is pushing – an always-on, always-listening AI device that inserts itself into your and your family’s private lives.
“Oh but they only run on local hardware…”
Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.
Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Have all your guests consented to this?
What happens when someone breaks in and steals the box?
What if the government wants to take a look at the data in there and serves a warrant?
What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?
The fundamental problem with a lot of this is that the legal system is absolute: if information exists, it is accessible. If the courts order it, nothing you can do can prevent the information being handed over, even if that means a raid of your physical premises. Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it, the only way to have privacy is for information not to exist in the first place. It's a bit sad that, just as the potential for technology to assist us grows, this may actually be the limit on how much we can fully take advantage of it.
I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc). But everything we see says the wind is blowing the opposite way.
Agreed. While we've tried to think through this and build in protections, we can't pretend that there is a magical perfect solution. We do have strong conviction that doing this inside the walls of your home is much safer than doing it within any company's datacenter (I accept that some just don't want this to exist, period, and we won't be able to appease them).
Some of our decisions in this direction:
- Minimize how long we have "raw data" in memory
- Tune the memory extraction to be very discriminating and err on the side of forgetting (https://juno-labs.com/blogs/building-memory-for-an-always-on-ai-that-listens-to-your-kitchen)
- Encrypt storage with hardware-protected keys (we're building on top of the Nvidia Jetson SOM; see the sketch after this list)
We're always open to criticism on how to improve our implementation around this.
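To make the storage point concrete, here's a rough sketch (an illustration, not our production code) of sealing an extracted memory with a key released by the hardware keystore; `load_hardware_wrapped_key` is a hypothetical stand-in for the Jetson keystore interface:

```python
# Minimal sketch: encrypting extracted memories at rest with AES-GCM.
# load_hardware_wrapped_key() is a hypothetical stand-in for however the
# device's secure keystore releases a data-encryption key; the point is
# that the key material never lives in the stored blob.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def load_hardware_wrapped_key() -> bytes:
    # Hypothetical: on real hardware this would come from the SoC keystore,
    # never from disk. Stubbed here so the sketch runs.
    return AESGCM.generate_key(bit_length=256)

def seal_memory(key: bytes, memory_text: str) -> bytes:
    nonce = os.urandom(12)                       # unique per record
    ct = AESGCM(key).encrypt(nonce, memory_text.encode(), None)
    return nonce + ct                            # store nonce alongside ciphertext

def open_memory(key: bytes, blob: bytes) -> str:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()

key = load_hardware_wrapped_key()
blob = seal_memory(key, "out of milk and eggs")
assert open_memory(key, blob) == "out of milk and eggs"
```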
> Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it,
In the US, it is not legal to compel you to turn over a password. It's a violation of your Fifth Amendment rights. In the UK you can be jailed until you turn over the password.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Is this somehow fundamentally different from having memories?
Because I thought about it, and decided that personally I do - with one important condition, though. I do because my memories are not as great as I would like them to be, and they decline with stress and age. If a machine can supplement that in the same way my glasses supplement my vision, or my friend's hearing aid supplements his hearing - that'd be nice. That's why we have technology in the first place, to improve our lives, right?
But, as I said, there is an important condition. Today, what's in my head stays in there, and is only directly available to me. The machine-assisted memory aid must provide the same guarantees. If any information leaves the device without my direct instruction - that's a hard "no". If someone with physical access to the device can extract the information without a lot of effort - that's also a hard "no". If someone can too easily impersonate me to the device and improperly gain access - that's another "no". Maybe there are a few more criteria, but I hope you got the overall idea.
If a product passes those criteria, then it - by design - cannot violate others' privacy - no more than I can do myself. And then - yeah - I want it, wish there'd be something like that.
Memories are usually private. People can make them public via a blog.
AI feels more like an organized sniffing tool here.
> If a product passes those criteria, then it - by design - cannot violate others' privacy
A product can most assuredly violate privacy. Just look at how Facebook gathered offline data to connect people to real-life data points, without their consent - and without them knowing. That's why I call it Spybook.
Ever since the USA became hostile to Canadians and Europeans this has also become much easier to deal with anyway - no more data is to be given to US companies.
> AI feels more like an organized sniffing tool here.
"AI" on its own is an almost meaningless word, because all it tells is that there's something involving machine learning. This alone doesn't have any implied privacy properties, the devil is always in the untold details.
But, yeah, sure, given the current trends I don't think this device will be privacy-respecting, not to say truly private.
>That's why we have technology in the first place, to improve our lives, right?
No, we have technology to show you more and more ads, sell you more and more useless crap, and push your opinions on Important Matters toward the state-approved ones.
Of course indoor plumbing, farming, metallurgy and printing were great hits, but technology has had a bit of a dry spell lately.
If "An always-on AI that listens to your household" doesn't make you recoil in horror, you need to pause and rethink your life.
If you can't think of an always-on AI that listens but doesn't cause any horrors (even though it's improbable that it would make it to market in the world we live in), I urge you to exercise your imagination. Surely, it's possible to think of an optimistic scenario?
Even more so if you think technology is here to unconditionally screw us over no matter what. Honestly - when the world is so gloomy, seek something nice, even if it's a fantasy.
It’s definitely a strange pitch, because the target audience (the privacy-conscious crowd) is exactly the type who will immediately spot all the issues you just mentioned. It's difficult to think of any privacy-conscious individual who wouldn't want, at bare minimum, a wake word (and more likely just wouldn't use anything like this period).
The non privacy-conscious will just use Google/etc.
A good example of this is what a family member's partner said: "Isn't it creepy that you just talked about something and now you're seeing ads for it? Guess we just have to accept it."
My response was: no, I don't get any of that, because I disable that technology; it is always listening and can never be trusted. There is no privacy in those services.
I used to be considered a weirdo and a creep because I would answer the question of why I don't have WhatsApp with "I do not accept their terms of service". Now people accept this answer.
I don't know what changed, but the general public is starting to figure out that they actually can disagree with large tech companies.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Typically not how these things work. Speech is processed using ASR (automatic speech recognition), and then run through a prompt that checks for appropriate tool calls.
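Roughly, the loop looks like this (a sketch under my own assumptions, not any particular product's code; `llm` is whatever local model you run and the tool names are made up):

```python
# Rough sketch of the pipeline described above: audio -> ASR text -> a prompt
# that decides whether any tool call applies. The raw audio is never stored;
# `transcribe_chunk` would sit upstream and `llm` is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str        # e.g. "add_to_shopping_list"
    argument: str

def route(transcript: str, llm) -> ToolCall | None:
    prompt = (
        "You control a home assistant. Given this transcript, either emit a "
        "tool call (add_to_shopping_list, set_reminder) or NONE.\n"
        f"Transcript: {transcript}"
    )
    answer = llm(prompt).strip()
    if answer == "NONE":
        return None                     # nothing actionable: transcript is dropped
    name, _, arg = answer.partition(":")
    return ToolCall(name=name.strip(), argument=arg.strip())
```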
I've been meaning to basically make this myself but I've been too lazy lately to bother.
I actually want a lot more functionality from a local only AI machine, I believe the paradigm is absurdly powerful.
Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then switching the browser to a different tab.
Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
I understand that for people who aren't neurodiverse, the idea of just forgetting to do something that is incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps them remember important things can be dramatically life-changing.
> Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then switching the browser to a different tab.
> Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
> I understand that for people who aren't neurodiverse, the idea of just forgetting to do something that is incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps them remember important things can be dramatically life-changing.
Those don't sound like things that you need AI for.
> > Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then switching the browser to a different tab.
This would be its death sentence. Nuked from orbit:
sudo rm -rfv --no-preserve-root /
Or maybe if there's any slower, more painful way to kill an AI then I'll do that instead. I can only promise the most horrible demise I can possibly conjure is that clanker's certain end.
> Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
I push a button on the phone and then say them. I've been doing this for over twenty years. The problem is ever getting back to those voice notes.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Maybe I missed it, but I didn't see anything there that said it saved conversations. It sounds like it processes them as they happen and then takes actions that it thinks will help you achieve whatever goals of yours it can infer from the conversation.
I agree. I also don't really have an ambient assistant problem. My phone is always nearby and Siri picks up wake words well (or I just hold the power button).
My problem is Siri doesn't do any of this stuff well. I'd really love to just get it out of the way so someone can build it better.
Some of the more magical moments we’ve had with Juno are automatic shopping-list creation (saying “oh no, we’re out of milk and eggs” out loud becomes a shopping list, without having to remember to tell Siri) and event tracking around kids (“Don’t forget next Thursday is early pickup”). A nice freebie is moving the wake word to the end: “What’s the weather today, Juno?” is much more natural than a prefixed wake word.
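The trailing wake word is simpler than it sounds; a toy illustration (not our actual detector):

```python
# Buffer the transcribed utterance and treat the text before a terminal
# "juno" as the request. Purely a sketch, not the production implementation.
WAKE = "juno"

def extract_request(utterance: str) -> str | None:
    words = utterance.lower().rstrip("?!.").split()
    if words and words[-1] == WAKE:
        return " ".join(words[:-1])     # everything before the wake word
    return None

assert extract_request("what's the weather today Juno?") == "what's the weather today"
assert extract_request("we're out of milk") is None
```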
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
One of our core architecture decisions was to use a streaming speech-to-text model. At any given time about 80ms of actual audio is in memory and about 5 minutes of transcribed audio (text) is in memory (this helps the STT model know the context of the audio for higher transcription accuracy).
Of these 5-minute transcripts, those that don't become memories are forgotten. So only selected extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users, to help them build confidence in the transcription accuracy), but we'll continue to iterate based on feedback on whether this is the correct decision.
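In rough Python terms, the retention model looks something like this (a sketch; `stt.feed` stands in for the streaming recognizer):

```python
# Each ~80ms audio chunk is transcribed and immediately discarded; only a
# rolling ~5-minute window of text is kept as context for the STT model.
import time
from collections import deque

WINDOW_SECONDS = 5 * 60

transcript_window: deque[tuple[float, str]] = deque()  # (timestamp, text)

def on_audio_chunk(chunk: bytes, stt) -> None:
    text = stt.feed(chunk)          # chunk is dropped as soon as this returns
    now = time.time()
    if text:
        transcript_window.append((now, text))
    # Evict transcript text older than the context window.
    while transcript_window and now - transcript_window[0][0] > WINDOW_SECONDS:
        transcript_window.popleft()
```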
Not if you use open source. Not if you pay for services that contractually will not mine your data. Not if you support start-ups that commit to privacy and the banning of ads.
I said on another thread recently that we need to kill Android, that we need a new Mobile Linux that gives us total control over what our devices do, our software does. Not controlled by a corporation. Not with some bizarre "store" that floods us with millions of malware-ridden apps, yet bans perfectly valid ones. We have to take control of our own destiny, not keep handing it over to someone else for convenience's sake. And it doesn't end at mobile. We need to find, and support, the companies that are actually ethical. And we need to stop using services that are conveniently free.
The concern is real, but the local solution is not ready. The author does not seem to think about this from the perspective of an "average consumer". I have been running my personal AI assistant on a consumer-grade computer for almost a year now. It can do only one in a thousand of the tasks that cloud models can do, and that at a much slower pace. A local AI assistant on consumer-grade hardware is at least a few years away, and "always-on" is much further than that IMO.
Maybe I'm just getting old, but I don't understand the appeal of the always-on AI assistant at all. Even leaving privacy/security issues aside, and even if it gets super smart and capable, it feels like it would have a distancing effect from my own life, and undermine my own agency in shaping it.
I'm not against AI in general, and some assistant-like functionality that functions on demand to search my digital footprint and handle necessary but annoying administrative tasks seems useful. But it feels like at some point it becomes a solution looking for a problem, and to squeeze out the last ounce of context-aware automation and efficiency you would have to outsource parts of your core mental model and situational awareness of your life. Imagine being over-scheduled like an executive whose assistant manages their calendar, but it's not a human, it's a computer, and instead of it being for the purpose of maximizing the leverage of your attention as a captain of industry, it's just to maintain velocity on a personal rat race of your own making, with no especially wide impact, even on your own psyche.
Totally agree. It sounds like some envision a sort of Downton Abbey without the humans as service personnel: a footman or maid in every room or corner to handle your requests at any given moment.
No matter how useful AI is and will become - I use AI daily, it is an amazing technology - so much of the discourse is indeed a solution looking for a problem. I have colleagues suggesting for just about everything, "can we put an MCP in it", and they don't even know what the point of MCP is!
It's the rat race. I gotta get my cheese, and fuck you, because you getting cheese means I go hungry. The kindergarten lesson on sharing got replaced by a lesson on intellectual property. Copyright, trademark, patents, and you.
Or we could opt out, and help everyone get ahead, on the rising tide lifts all boats theory, but from what I've seen, the trickle of trickle down economics is urine.
I agree with the core premise that the big AI companies are fundamentally driven towards advertising revenue and other antagonistic but profit-generating functionality.
Also agree with paxys that the social implications here are deep and troubling. Having ambient AI in a home, even if it's caged to the home, has tricky privacy problems.
I really like the explorations of this space done in Black Mirror's The Entire History of You[1] and Ted Chiang's short story The Truth of Fact, the Truth of Feeling[2].
My bet is that the home and other private spaces almost completely yield to computer surveillance, despite the obvious problems. We've already seen this happen with social media and home surveillance cameras.
Just as in Chiang's story spaces were 'invaded' by writing, AI will fill the world and those opting out will occupy the same marginal positions as those occupied by dumb phone users and people without home cameras or televisions.
This strikes me as a pretty weak rationalization for "safe" always-on assistants. Even if the model runs locally, there's still a serious privacy issue: unwitting victims have everything they say recorded.
Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.
Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup ever sits well with people who value their privacy.
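Any credible version of this would need something like enrollment gating, sketched below; `embed_speaker` is a hypothetical speaker-embedding model, and real diarization is far messier than a single cosine threshold:

```python
# Sketch: compare a speaker embedding for each audio segment against enrolled
# (opted-in) household members and drop everything else before transcription.
import numpy as np

ENROLLED: dict[str, np.ndarray] = {}    # name -> voice embedding, from opt-in
THRESHOLD = 0.75

def keep_segment(audio_segment: np.ndarray, embed_speaker) -> bool:
    emb = embed_speaker(audio_segment)
    for ref in ENROLLED.values():
        cos = float(emb @ ref / (np.linalg.norm(emb) * np.linalg.norm(ref)))
        if cos >= THRESHOLD:
            return True                 # recognized, opted-in speaker
    return False                        # unknown voice: never transcribed
```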
I wonder if the answer is that it is stored and processed in a way that a human can’t access or read, like somehow it’s encrypted and unreadable but tokenized and can be processed. I don’t know how, but it feels possible.
This is something we call out under the "What we got wrong" section. We're currently collecting an audio dataset that should help create a speech-to-text (STT) model that incorporates speaker identification, and that tag will be woven into the core of the memory architecture.
> The shared household memory pool creates privacy situations we’re still working through. The current design has everyone in the family sharing the same memory corpus. Should a child be able to see a memory their parents created? Our current answer is to deliberately tune the memory extraction to be household-wide with no per-person scoping, because a kitchen device hears everyone equally. But “deliberately chose” doesn’t mean “solved.” We’re hoping our in-house STT will allow us to do per-person memory tagging, and then we can experiment with scoping memories to certain people or groups of people in the household.
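If per-person tagging lands, scoping could be as simple as attaching the speaker tag at extraction time and filtering at recall; a hypothetical sketch of that data model (not our actual schema):

```python
# Hypothetical memory scoping: each memory carries its speaker tag and an
# optional visibility set; an empty set means it is household-wide.
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    speaker: str                        # from STT speaker identification
    visible_to: set[str] = field(default_factory=set)  # empty = household-wide

def recall(memories: list[Memory], requester: str) -> list[Memory]:
    return [m for m in memories
            if not m.visible_to or requester in m.visible_to]

corpus = [
    Memory("early pickup Thursday", speaker="parent_a"),   # shared with everyone
    Memory("birthday gift ideas", "parent_a", visible_to={"parent_a", "parent_b"}),
]
assert len(recall(corpus, "child")) == 1    # the scoped memory stays hidden
```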
Yes! We see a lot of the same things that really should have been solved by the first wave of assistants. Your _Around The House_ reads a lot like our goals, though we would love the system to be much more proactive than current assistants.
Feel free to reach out. Would love to swap notes and send you a prototype.
> I hope the memory crisis isn't hurting you too badly.
Oh man, we've had to really track our bill of materials (BOM) and average selling price (ASP) estimates to make sure everything stays feasible. Thankfully these models quantize well and the size-to-intelligence frontier is moving out all the time.
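For anyone curious why quantization is such a cost lever, a toy int8 round-trip shows the idea (illustration only; real schemes quantize per-channel and more carefully):

```python
# Toy weight quantization: store/move weights as int8, dequantize at load.
import numpy as np

w = np.random.randn(4, 4).astype(np.float32)   # fp32 weights
scale = np.abs(w).max() / 127.0                # one scale for the whole tensor
q = np.round(w / scale).astype(np.int8)        # 4x less memory than fp32
w_hat = q.astype(np.float32) * scale           # dequantized at load time
print(float(np.abs(w - w_hat).max()))          # small reconstruction error
```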
First it's ads, then it's political agenda. We've seen this inconspicuous transition happen with social media and it will happen even more inconspicuously with LLMs.
> The most helpful AI will also be the most intimate technology ever built. It will hear everything. See everything
Big Brother is watching you. Who knew it would be AI ...
The author is quite right. It will be an advertisement scam. I wonder whether people will accept that, though. Anyone remember uBlock Origin? Google killed it on Chrome. People are not going to forget that. (It still works fine on Firefox, but Google bribed Firefox into submission; all that Google ad money made Firefox weak.)
Recently I had to use Google search again. I was baffled at how useless it has become - not just the raw results but the whole UI - the first few entries are links to useless YouTube videos (also owned by Google). I don't have time to watch a video; I want the text info, and to extract it quickly. Using AI "summaries" is also useless - Google is just trying to waste my time compared to the "good old days". After those initial YouTube videos, I get about 6 results, three of which are to some companies writing articles so people visit their boring website. Then I get "other people searched for candy" and other useless links. I never understood why I would care what OTHER people search for when I want to search for something. Is this now group-search? Group-think 1984? And then after that, I get some more YouTube videos.
Google is clearly building a watered-down private variant of the web. Same problem with AMP pages. Google is annoying us - and has become a huge problem. (I am writing this on Thorium right now, which is also Chrome-based; Firefox does not allow me to play videos with audio since I don't have or use PulseAudio, whereas the Chrome-based browser does not care and my audio works fine - that shows you the level of incompetence at Mozilla. They don't WANT to compete against Google anymore, and haven't wanted to for decades. Ladybird unfortunately also is not going to change anything; after I criticized one of their decisions, they banned me. Well, that's a great way to build up an alternative, dealing with criticism via censorship - all before even leaving alpha or beta. Now imagine the amount of censorship you would get if millions of people WERE to use it... something is fundamentally wrong with the whole modern web, and corporations have a lot to do with this; to a lesser extent also people, but of course not all of them.)
True, we focused on hardware-embodied AI assistants (smart speakers, smart glasses, etc.) as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are orders of magnitude higher than OpenClaw, which you intentionally interact with.
This. Kids already have tons of those gadgets on. Previously I only really had to worry about a cell phone, so even if someone was visiting it was a simple case of "plop all electronics here", but now with glasses I am not even sure how to reasonably approach this, short of not allowing them period. Eh, brave new world.
Both are Pandora's boxes. Open Claw has access to your credit cards, social media accounts, etc. by default (i.e. if you have them saved in your browser on the account that Open Claw runs on, which most people do).
Just when you've asked if there are eggs, the doorbell rings; the neighbor stands there in disbelief: it told me to bring you eggs? Give him the half bottle of vodka, it's going to expire soon and his son will make a surprise visit tonight. An argument arises and it participates by encouraging both parties with extra talking points.
But this was only the beginning: after gathering a few TB worth of micro-expressions, it starts to complete sentences so successfully that the conversation gradually dies out.
After a few days of silence... Narrator mode activated....
> There needs to be a business model based on selling the hardware and software, not the data the hardware collects. An architecture where the company that makes the device literally cannot access the data it processes, because there is no connection to access it through.
Genuine Q: Is this business model still feasible? It's hard to imagine anyone other than Apple sustaining a business off of hardware; they have the power to spit out full hardware refreshes every year. How do you keep a team of devs alive on the seemingly one-and-done cash influx of first-time buyers?
I really dislike the preorder page. The fact that it's a deposit is in a different color that fades into the background, and it refers to it as a "price" multiple times. I don't know if it was intentionally deceptive, but it made me dislike this company.
This was the inevitable endpoint of the current AI unit economics. When inference costs are this high and open-source models are compressing SaaS margins to zero, companies can't survive on standard subscription models. They have to subsidize the compute by monetizing the user's context window. The real liability isn't just ads; it's what happens when autonomous agents start making financial decisions influenced by sponsored retrieval data.
I think local inference is great for many things - but this stance seems to conflate two things: that you can’t have privacy with server-side inference, and that you can’t have nefariousness with client-side inference. A device that does 100% client-side inference can still phone home unless it’s disconnected from the internet. Most people will want internet-connected agents, right? And server-side inference can be private if engineered correctly (strong zero-retention guarantees, maybe even homomorphic encryption).
This isn't a technology issue. Regulation is the only sane way to address the issue.
For once, we (as the technologists) have a free translator to layman's speak via the frontier LLMs, which can be an opportunity to educate the masses as to the exact world on the horizon.
> This isn't a technology issue. Regulation is the only sane way to address the issue.
It is actually both a technology and regulation/law issue.
What can be solved with the former should be. What is left, solved with the latter. With the best cases where both consistently/redundantly uphold our rights.
I want legal privacy protections, consistent with privacy preserving technology. Inconsistencies create technical and legal openings for nefarious or irresponsible powers.
Ads in AI should be banned right now. We need to learn from mistakes of the internet (crypto, facebook) and aggressively regulate early and often before this gets too institutionalized to remove.
Boomers in government would be clueless on how to properly regulate and create correct incentives. Hell, that is still a bold ask for tech and economist geniuses with the best of intentions.
Would that be the same cohort of boomers jamming LLMs up our collective asses? So they don’t understand how to regulate a technology they don’t understand, but fucking by golly you’re going to be left behind if you don’t use it?
It's mostly SV grifters who shoved LLMs up our asses. They then get in cahoots with boomers in the government to create policies and "investment schemes" that inflate their stock in a ponzi-like fashion and regulate competition.
Why do you think Trump has some no-name crypto firm, or why Thiel has Vance as his whipping boy, and Elon spent a fortune trying to get Trump to win? This is a multiparty thing, as most politicians are heavily bought and paid for.
I'm not sure how anyone could reasonably argue that Alaska would be orders of magnitude better off if they reversed the implementation of their billboard-banning ballot measure and put up billboards everywhere.
I trust corporations far far far less than government or lawmakers (who I also don’t trust). I know corporations will use ads in the most manipulative and destructive manner. Laws may be flawed but are worth the risk.
I mean, why is it so difficult for such companies to understand the core thing: irrespective of whether the data related to our daily lives gets processed on their servers or ours, we DON'T want it stored beyond a few minutes at most.
Even if these folks were giving away this device 100% free, I still wouldn't keep it inside my house.
Because storing, analyzing, and selling access to your data is massively profitable and they don’t care what the (not even vocal) privacy focused minority wants.
Who would buy OpenAI's spy device? I think a lot of the public discourse and backlash about the greedy, anticompetitive, and exploitative practices of the Silicon Valley elite have gone mainstream and will hopefully course-correct the industry in time.
> ...exploitative practices of the Silicon Valley elite have gone mainstream and will hopefully course-correct the industry in time.
I have little hope that is true. Don't expect privacy laws and boycott campaigns. That very same elite control the law via bribes to US politicians (and indirectly the laws of other countries via those politicians' threats; see the ongoing watering down of EU laws). They also directly control public discourse via ownership of the media and mainstream communication platforms. What backlash can they really suffer?
With cloud-based inference we agree; this is just one more benefit of doing everything with "edge" inference (on device, inside the home) as we do with Juno.
Pretty sure a) it's not a matter of whether you agree and b) GDPR still considers always-on listening to be something the affected user has to actively consent to. Since someone in a household may not realize that another person's device is "always on" and may even lack the ability to consent - such as a child - you are probably going to find that it is patently illegal according to both the letter and the spirit of the law.
Is your argument that these affected parties are not users and that the GDPR does not require their consent?
Don't take this as hostility. I am 100% for local inference. But that is the way I understand the law, and I do think it benefits us to hold companies to a high standard. Because even such a device could theoretically be used against a person, or could have other unintended consequences.
How long was web search objective, nice, and helpful - 10 years? Now things are happening faster, so there are at most 5 years total of AI prompts pretending that they want to help.
I guess it goes to show that the real value is in the broader market to a certain extent; if they can't just sell people the power, they end up just earning a commission for helping someone else sell a product.
It's interesting to me that there seems to be an implicit line being drawn around what's acceptable and what's not between video and audio.
If there's a camera in an AI device (like Meta Ray Ban glasses) then there's a light when it's on, and they are going out of their way to engineer it to be tamper resistant.
But audio - this seems to be on the other side of the line. Passively listening ambient audio is being treated as something that doesn't need active consent, flashing lights or other privacy preserving measures. And it's true, it's fundamentally different, because I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.
I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables. A lot of them already have them. They don't use a lot of power, it won't be too hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere will be presumed recorded? Is that OK?
> Passively listening ambient audio is being treated as something that doesn't need active consent
That’s not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.
Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.
AI "recording" software has never been tested in court, so no one can say what the legality is. If we are having a conversation (in a two party consent state) and a secret AI in my pocket generates a text transcript of it in real time without storing the audio, is that illegal? What about if it just generates a summary? What about if it is just a list of TODOs that came out of the conversation?
Speech-to-text has gone through courts before. It's not a new technology. You're out of luck on sneaking the use of speech-to-text in 2-party consent states.