Hacker News | sam0x17's comments

Have fun when your use of this service is itself used in court as evidence that you created a malicious copy

Get a pet and narrate everything you do, you can even talk to the pet, they love that. I do this whenever my husband is away and I find it pretty calming/enjoyable

I've always felt very alone in my view on this, so don't feel bad if you disagree with me because most people probably do, but I just feel super morally icky when I hear about how part of our justice system is built around "retribution" / "vindication". Like it is one thing to punish, it is quite another to allow others to derive some sort of satisfaction from that punishment, even if they were victims, I just find it sick. It means as a society we are no better than the perpetrators at the end of the day.

> it is quite another to allow others to derive some sort of satisfaction from that punishment

I sometimes see this behavior in close friends, and it totally changes the way I see them. I don't know if it's a moral failing on their part, but I just don't experience the desire for vengeance the same way they do, and it really scares me to see how they experience it. What will they do when they start to have mental decline, and (incorrectly) decide they were wronged in some way? :(


I think this thought process is something that only people who have never been wronged can afford. There comes a time in life where the punishment must fit the crime, even if it's only to make an example of the criminal.

Life is hard enough; we should deter crime at every opportunity, and people are rarely punished for every evil they commit.


I was mugged as a teenager, and my house was burned down as an adult because a drug dealer lived on the same street.

Does that count me as sufficiently wronged to not be dismissed for sharing the parent poster's viewpoint?


If it doesn't... it wasn't.

See, this is my point though: it shouldn't matter what has happened to you. If that matters, then this is 100% emotional and not based on reason or justice.

You're right. Justice should be served with reason and justice, at the moment it is served with far too much compassion for the criminal.

You're definitely not alone and I 100% share the thought in your last sentence.

I feel you, but hear me out. OP is right. I've wanted pretty much everything he's talking about here for years, I just never thought of all of this in as quite a formal way as he has. We need the ability to say "this piece of code can't panic". It's super important in the domains I work in. We also need the ability to say "this piece of code can't be non-deterministic". It's also super important in the domains I work in. Having language level support for something like this where I add an extra word to my function def and the compiler guarantees the above would be groundbreaking

IMO Rust started at this from the wrong direction. Compare it to something like Zig, which just cannot panic unless the developer wrote the thing that does the panic, cannot allocate unless the developer wrote the allocation, etc.

Rust instead has all these implicit things that just happen, and now needs ways to specify that in particular cases, it doesn't.


The problem isn't implicit things happening.

He's talking about this problem. Can this code panic?

    foo();
You can't easily answer that in Rust or Zig. In both cases you have to walk the entire call graph of the function (which could be arbitrarily large) and check for panics. It's not feasible to do by hand. The compiler could do it though.
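To make that concrete, here's a toy sketch (all function names made up) of how a panic can hide arbitrarily deep in a call graph, invisible from the outer signature:

```rust
fn foo(data: &[u32]) -> u32 {
    bar(data)
}

fn bar(data: &[u32]) -> u32 {
    baz(data) + 1
}

fn baz(data: &[u32]) -> u32 {
    data[0] // panics if `data` is empty -- nothing in `foo`'s signature hints at this
}

fn main() {
    assert_eq!(foo(&[41]), 42);
    // foo(&[]) would panic at runtime, three calls deep
}
```

Nothing short of walking every reachable function tells you whether `foo` can panic, which is exactly the check a compiler-tracked effect could automate.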

"Panic-free" labels are so difficult to ascribe without being misleading because temporal memory effects can cause panics. Pusher too much onto your stack because the function happened to be preceded by a ton of other stack allocations? Crash. Heap too full and malloc failed? Crash. These things can happen from user input, so labelling a function no_panic just because it doesn't do any unchecked indexing can dangerously mislead readers into thinking code can't crash when it can.

There's plenty of independent interest in properly bounding stack usage because this would open up new use cases in deep embedded and Rust-on-the-GPU. Basically, if you statically exclude unbounded stack use, you don't even need memory protection to implement guard pages (or similar) for your call stack usage, which Rust now requires. But this probably requires work on the LLVM side, not just on Rust itself.

Fallible memory allocations are already needed for Rust-for-Linux, so that also has independent interest.


What about doing something that Java does with the throws keyword? Would that make the checking easier?

I think that's exactly what's being asked for here (via the "panic effect" that the article refers to)

Although I think I'd prefer a "doesn't panic" effect, just to keep backwards compatibility (allowing functions to panic by default)


Or effect aliases. But given that it's strictly a syntactic transformation, it seems like we could make the wrong default today and fix it in the next edition. (Editions come with tools to update syntax changes.)

Something like that, except you probably also want to be able to express things like “whatever the callback I’m passed can throw, I can throw all of that and also FooException”. And correctly handle the cases when the callback can throw FooException itself, and when one of the potential exceptions is dependent on a type parameter, and you see how this becomes a whole thing when done properly. But it’s doable.
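In Rust terms, that "whatever the callback can throw, plus my own failure" pattern is roughly what you get by being generic over the callback's error type. A sketch with made-up names:

```rust
// Illustrative only: `StepError` and `run_step` are invented for this example.
#[derive(Debug, PartialEq)]
enum StepError<E> {
    Callback(E), // whatever the callback can fail with
    BadInput,    // this function's own failure mode (the "FooException")
}

fn run_step<T, E>(input: i32, f: impl Fn(i32) -> Result<T, E>) -> Result<T, StepError<E>> {
    if input < 0 {
        return Err(StepError::BadInput);
    }
    f(input).map_err(StepError::Callback)
}

fn main() {
    let double = |x: i32| -> Result<i32, String> { Ok(x * 2) };
    assert_eq!(run_step(21, double), Ok(42));
    assert_eq!(run_step(-1, double), Err(StepError::BadInput));
}
```

The wrinkles mentioned above (the callback failing with `BadInput` itself, errors depending on type parameters) show up here as nesting and extra bounds, which is where the design gets hairy.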

> Comparing to something like zig which just cannot panic unless the developer wrote the thing that does the panic

The zig compiler can’t possibly guarantee this without knowing which parts of the code were written by you and which by other people (which is impossible).

So really it’s not “the developer” wrote the thing that does the panic, it’s “some developer” wrote it. And how is that different from rust?


Huh? It seems to me that in these respects the two languages are almost identical. If I tell the program to panic, it panics, and if I divide an integer by zero it... panics and either those are both "the developer wrote the thing" or neither is.

In Zig, dividing by 0 does not panic unless you decide that it should or go out of your way to use unsafe primitives [1]. Same for trying to allocate more memory than is available. The general difference is as follows (IMO):

Rust tries to prevent developers from doing bad things, then has to include ways to avoid these checks for cases where it cannot prove that bad things are actually OK. Zig (and many others such as Odin, Jai, etc.) allow anything by default, but surface the fact that issues can occur in its API design. In practice the result is the same, but Rust needs to be much more complex both to do the proving and to allow the developers to ignore its rules.

[1]: https://ziglang.org/documentation/0.15.2/std/#std.math.divEx...
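For comparison, Rust's standard library also surfaces fallibility in the API via the `checked_*` arithmetic methods, alongside the panicking operators:

```rust
fn main() {
    let a: u32 = 10;

    // Checked variant: failure is a value, not a panic.
    assert_eq!(a.checked_div(0), None);
    assert_eq!(a.checked_div(2), Some(5));

    // By contrast, the plain `/` operator panics at runtime on a zero
    // divisor (and dividing by a literal 0 is rejected at compile time).
}
```

The difference being argued about is which style is the default, not whether both exist.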


Could you clarify what's going on in the Zig docs[0], then? My reading of them is that Zig definitely allows you to try to divide by 0 in a way the compiler doesn't catch, and this results in a panic at runtime.

I'd be interested if this weren't true, since the only feasible compiler solutions to preventing division-by-0 errors are either: defining the behaviour, which always ends up surprising people later on, or; incredibly cumbersome or underperformant type systems/analyses which ensure that denominators are never 0.

It doesn't look like Zig does either of these.

[0]: https://ziglang.org/documentation/master/#Division-by-Zero


> the only feasible compiler solutions to preventing division-by-0 errors are either: defining the behaviour, which always ends up surprising people later on, or; incredibly cumbersome or underperformant type systems/analyses which ensure that denominators are never 0.

I don't think it's very cumbersome if the compiler checks whether the divisor could be zero. Some programming languages (Kotlin, Swift, Rust, Typescript...) already do something similar for possible null pointer access: they require that you add a check "if s == null" before the access. The same can be done for division (and remainder / modulo). In my own programming language, this is what I do: you cannot have a division by zero at runtime, because the compiler does not allow it [1]. In my experience, integer division by a variable is not all that common in reality. (And floating point division does not panic, and integer division by a non-zero constant doesn't panic either.) If needed, one could use a static function that returns 0 or panics or whatever is best.

[1] https://github.com/thomasmueller/bau-lang/blob/main/README.m...


>Some programming languages (Kotlin, Swift, Rust, Typescript...) already do something similar for possible null pointer access: they require that you add a check "if s == null" before the access.

For Rust, this is not accurate (though I don't know for the other languages). The type system instead simply enforces that pointers are non-null, and no checks are necessary. Such a check appears if the programmer opts in to the nullable pointer type.

The comparison between pointers and integers is not a sensible one, since it's easy to stay in the world of non-null pointers once you start there. There's no equivalent ergonomics for the type of non-zero integers, since you have to forbid many operations that can produce 0 even on non-0 inputs (or onerously check that they never yield 0 at runtime).
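A small illustration of that asymmetry using std's `NonZeroU32`: one up-front check gets you into the non-zero world, but ordinary arithmetic drops you right back out of it:

```rust
use std::num::NonZeroU32;

fn main() {
    // One up-front check gets you a value the type system knows is non-zero...
    let d = NonZeroU32::new(4).expect("4 is non-zero");

    // ...and dividing by it can then never hit a zero divisor:
    assert_eq!(100u32 / d.get(), 25);

    // But ordinary arithmetic leaves that world immediately:
    // subtraction can produce 0, so you're back to checking an Option.
    assert_eq!(NonZeroU32::new(d.get() - 4), None);
}
```

With references, by contrast, staying non-null is free: no operation on a `&T` ever produces null, which is why that guarantee composes so well.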

>The same can be done for division (and remainder / modulo). In my own programming language, this is what I do: you can not have a division by zero at runtime, because the compiler does not allow it... In my experience, integer division by a variable is not all that common in reality

That's another option, but I hardly find it a real solution, since it involves the programmer inserting a lot of boilerplate to handle a case that might actually never come up in most code, and where a panic would often be totally fine.

Coming back to the actual article, this is where an effect system would be quite useful: programmers who actually want to have their code be panic-free, and who therefore want or need to insert these checks, can mark their code as lacking the panic effect. But I think it's fine for division to be exposed as a panicking operation by default, since it's expected and not so annoying to use.


The syntax in Kotlin is: "val name: String? = getName(); if (name != null) { println(name.length) // safe: compiler knows it's not null }". So, there is no explicit type conversion needed.

I'm arguing that for integer / and %, there is no need for an explicit "non-zero integer" type: the divisor is just an integer, and the compiler needs to have a proof that the value is not zero. For places where a panic is fine, there could be a method that explicitly panics in case of zero.

I agree an annotation / effect system would be useful, where you can mark sections of the code "panic-free" or "safe" in some sense. But "safe" has many flavors: array out of bounds, division by zero, stack overflow, out of memory, endless loops. Ada SPARK allows proving the absence of runtime errors using "pragma annotate". Dafny and Lean have similar features (in Lean you can give a proof).

> I think it's fine for division to be exposed as a panicking operation by default

That might be true. I think division (by non-constants) is not very common, but it would be good to analyze this in more detail, maybe by analyzing a large codebase... Division by zero does cause issues sometimes, and so the question is, how much of a problem is it if you disallow unchecked division, versus the problems if you don't check.


More specifically, Zig will return an error type from the division and if this isn't handled THEN it will panic, kind of like an exception except it can be handled with proper pattern matching.

I can't find anything related to division returning an error type. Looking at std.math.divExact, rem, mod, add, sub, etc. it looks to me like you're expected to use these if you don't want to panic.

Actually you're right, I was going by the source code which was in the link of the comment you replied to, but I missed that that was specifically for divExact and not just primitive division.


I have no argument against using the right tool for a job. Decorating a function with a keyword to get more compile-time guarantees does sound great, but I bet it comes with strings attached that affect how it can be used, which will lead to strange business logic. Anecdotally, I have not (perhaps yet) run into a situation where I needed more language features; I felt Rust had enough primitives that I could adapt the current feature set to my needs. Yes, at times I had to scrap what I was working on and rewrite it another way so I could have compile-time guarantees. Yes, language features here would offer speed of implementation.

Could you share a situation where the behavior is necessary? I am curious if I could work around it with the current feature set.

Perhaps I take issue with peers that throw bleeding-edge features at situations that don't warrant them. Last old-man anecdote: as a hobbyist woodworker, it pains me to see people buying expensive tools to accomplish something. It's almost a lack of creativity with the tools they already have. "If I had xyz tool I would build something so magnificent," they say. This amounts to having many low-quality, single-purpose tools where a single high-quality table saw could fit the need. FYI, a table saw could suit 90% of your cutting/shaping needs with the right jig. I don't want this to happen in Rust.


> Could you share a situation where the behavior is necessary?

The effects mentioned in the article are not too uncommon in embedded systems, particularly if they are subject to more stringent standards (e.g., hard realtime, safety-critical, etc.). In such situations predictability is paramount, and that tends to correspond to proving the absence of the effects in the OP.


Ah, the embedded application. Very valid point. I'm guilty of forgetting about that discipline.

I do wonder if it is possible to bin certain features to certain, uh, distributions(?), of rust? I'm having trouble articulating what I mean but in essence so users do not get tempted to use all these bells and whistles when they are aimed at a certain domain or application? Or are such language features beneficial for all applications?

For example, sim cards are mini computers that actually implement the JVM and you can write java and run it on sim cards (!). But there is a subset of java that is allowed and not all features are available. In this case it is due to compute/resource restrictions, but something to a similar tune for rust, is that possible?


I guess the no_std/alloc/std split is sort of like what you're talking about? It's not an exact match though; I think that split is more borne out of the lack of built-in support some targets have for particular features rather than trying to cordon off subsets of the language to try to prevent users from burning themselves.

On that note, I guess one could hypothetically limit certain effects to certain Rust subsets (for example, an "allocates" effect may require alloc, a "filesystem" effect may require std, etc.), but I'd imagine the general mechanism would need to be usable everywhere considering how foundational some effects can be.

> Or are such language features beneficial for all applications?

To (ab)use a Pixar quote, I suppose one can think of it as "not all applications may need these features, but these features should be usable anywhere".


> Could you share a situation where the behavior is necessary? I am curious if I could work around it with the current feature set.

But this kinda isn't about "behavior" of your code; it's about how the compiler (and humans) can confidently reason about your code?


Sorry, I don't understand. The result we all want is ensuring at compile time that some behavior cannot happen at runtime. OP argues we need these features built into the language; I am trying to understand what behavior we cannot achieve with the current primitives. So far the only compelling argument is that embedded applications have different requirements (that I personally cannot speak to) that separate their use case from, say, deploying to a server for your SaaS company. No doubt there are more; I am trying to discover them.

I am biased to think more features negatively impact how humans can reason about code, leading to more business logic errors. I want to understand: can we make the compiler understand our code differently without additional features, by wielding mastery of the existing primitives? I very well may be wrong in my bias. Human ingenuity and creativity are not to be underestimated. But neither is laziness. Users will default to "out of the box" solutions over building with language primitives. Adding more and more features will dilute our mastery of the fundamentals.


For example, in Substrate-based blockchains, if you panic in runtime code the chain is effectively bricked, so they use hundreds of clippy lints to try to prevent this from happening, but you can only do so much statically without language-level support. There are crates like no_panic; however, they basically can't be used anywhere there is dynamic memory allocation, because that can panic.

Same thing happens in real time trading systems, distributed systems, databases, etc., you have to design some super critical hot path that can never fail, and you want a static guarantee that that is the fact.


Perhaps there are similarities to Scala, from my anecdotal observation. Coming from Java and doing the Scala coursera course years ago, it feels like arriving in a candy shop. All the wonderful language features are there, true power yours to wield. And then you bump into the code lines crafted by the experts, and they are line for line so 'smart' they take a real long time to figure out how the heck it all fits together.

People say "Rust is more complex to onboard to, but it is worth it", but a lot of the onboarding hurdle is the extra complexity added in by experts being smart. And it may be a reason why a language doesn't get the adoption that the creators hoped for (Scala?). Rust does not have issues with popularity, and the high onboarding barrier may eventually have a positive impact, where "Just rewrite it in Rust" is no more and people only choose Rust where it is most appropriate. Use the right language for the job.

The complexity of Rust made me check out Gleam [0], a language designed for simplicity, ease of use, and developer experience. A wholly different design philosophy. But not less powerful, as a BEAM language that compiles to Erlang, but also compiles to Javascript if you want to do regular web stuff.

[0] https://gleam.run


At least from what I’ve seen around me professionally, the issue with most Scala projects was that developers started new projects in Scala while also still learning Scala through a Coursera course, without having a FP background and therefore lacking intuition/experience on which technique to apply when and where. The result was that you could see “more advanced” Scala (as per the course progression) being used in newer code of the projects. Then older code was never refactored resulting in a hodgepodge of different techniques.

This can happen in any language and is more indicative of not having a strong lead safeguarding the consistency of the codebase. Now Scala has had the added handicap of being able to express the same thing in multiple ways, all made possible in later iterations of Scala, and finally homogenised in Scala 3.


I agree. IMO, Scala can be written in Li Haoyi's way, and it's a pleasure to work with. However, the FP and effect-system Scala people are so loud and so smart that if I write Scala in Li Haoyi's way, I feel like I'm too stupid. I like Rust because of no GC, no VM, and memory safety. If Rust gets features that a Joe Java programmer like me can't understand, I guess it'll end up like Scala.

I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python. It just does not match my experience at all. I've been a professional Rust developer for about three years. Every time I look at Python code, it's doing something insane where the function argument definition basically looks like line noise with args and kwargs, with no types, so it's impossible to guess what the parameters will be for any given function. Every Python developer I know makes heavy use of the REPL just to figure out what methods they can call on some return value of some underdocumented method of a library they're using. The first time I read pandas code, I saw something along the lines of df[df["age"] < 3] and thought I was having a stroke. Yet Python has a reputation for being easy to learn and use. We have a Python developer on our team and it probably took me about a day to onboard him to Rust and get him able to make changes to our (fairly complicated) Rust codebase.

Don't get me wrong, rust has plenty of "weird" features too, for example higher rank trait bounds have a ridiculous syntax and are going to be hard for most people to understand. But, almost no one will ever have to use a higher rank trait bound. I encounter such things much more rarely in rust than in almost any other mainstream language.
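For anyone curious, here's roughly what that higher-rank trait bound syntax looks like (a toy example with made-up names):

```rust
// `F` must work for *every* lifetime 'a, not one particular
// caller-chosen lifetime -- that's what `for<'a>` expresses.
fn apply_to_parts<F>(text: &str, f: F) -> usize
where
    F: for<'a> Fn(&'a str) -> &'a str,
{
    text.split(' ').map(|word| f(word).len()).sum()
}

// A plain fn item satisfies the bound: it borrows from its own argument.
fn strip_first(w: &str) -> &str {
    &w[1..]
}

fn main() {
    // "hello" -> "ello" (4), "world" -> "orld" (4)
    assert_eq!(apply_to_parts("hello world", strip_first), 8);
}
```

In practice `for<'a>` bounds like this are usually inferred behind the scenes when you write `Fn(&str) -> &str`, which is part of why most people never need to spell them out.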


The language itself is not more complex to onboard. For Scala also not. It feels great to have all these language features at one's disposal. The added complexity is in the way expert code is written. The experts are empowered and productive, but their practices heighten the barrier to entry for newcomers. Note that they also might expertly write more accessible code to avoid the issue, and then I agree with you (though I can't compare to Python, never used it).

Hm, you claim that Rust and Scala are not more complex to onboard than Python... but then you say you never used Python? If that's the case, how do you know? Having used both, I do think Rust is harder to onboard, just because there is more syntax that you need to learn. And Rust is a lot more verbose. And that's before you are exposed to the borrow checker.

I am not mentioning Python at all. I contrasted Rust with Scala.

Well, the parent wrote "I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python." And you wrote "The language itself is not more complex to onboard." So... to contrast Rust with Scala, I think it's clearer to write "The language itself is not more complex to onboard _than Scala_."

To that, I completely agree! Scala is one of the most complex languages, similar to C++. In terms of complexity (roughly the number of features) / hardness to onboard, I would have the following list (hardest to easiest): C++, Scala, Rust, Zig, Swift, Nim, Kotlin, JavaScript, Go, Python.


I see the confusion. ChadNauseam mentions Python to another comment of mine, where I mentioned Gleam. In your list hardest-to-easiest perhaps Gleam is even easier than Python. They literally advertise it as "the language you can learn in a day".

Thanks a lot! I wasn't aware of Gleam; it really seems simple. I probably wouldn't say "learn in a day", and I'm not sure if it's simpler than Python, but it's statically typed, and that necessarily adds some complexity.

> I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python.

Most people conflate "complexity" and "difficulty". Rust is a less complex language than Python (yes, it's true), but it's also much more difficult, because it requires you to do all the hard work up-front, while giving you enormously more runtime guarantees.


Doing the hard work up front is easier than doing it while debugging a non-trivial system. And there are boilerplate patterns in Rust that allow you to skip the hard work while doing throwaway exploratory programming, just like in "easier" languages. Except that then you can refactor the boilerplate away and end up with a proper high-quality system.
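A tiny illustration of that workflow (hypothetical example): clone and unwrap freely while exploring, then refactor to borrows and explicit `Option` handling once the design settles:

```rust
fn main() {
    let names = vec!["ada".to_string(), "grace".to_string()];

    // Exploratory pass: clone and unwrap to sidestep the borrow
    // checker and error handling while the design is still in flux.
    let first = names.first().cloned().unwrap().to_uppercase();
    assert_eq!(first, "ADA");

    // Refactored later: borrow instead of clone, handle the empty case.
    let first = names.first().map(|n| n.to_uppercase());
    assert_eq!(first.as_deref(), Some("ADA"));
}
```

The throwaway version compiles and runs just as easily as in a "simpler" language; the difference is that the cleanup path to a proper version is mechanical.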

> And then you bump into the code lines crafted by the experts, and they are line for line so 'smart' they take a real long time to figure out how the heck it all fits together.

Thing is, the alternative to "smart" code that packs a lot into a single line is code where that line turns into multiple pages of code, which is in fact worse for understanding. At least with PL features, you only have to put in the work once and you can grok how they're meant to be used anywhere.


You can see:

* no-panic: https://docs.rs/no-panic/latest/no_panic/

* Safe Rust has no undefined behavior: https://news.ycombinator.com/item?id=39564755


Would be great to have no-alloc too, as OP requested.

Yeah, what a lot of people are missing here is that tons of small startups are laying people off, but it's not because they don't need engineers; it's because they are out of runway because their entire vertical (usually some sort of SaaS, often b2b SaaS) is basically now nonexistent. Traditionally businesses favored buying software over building it for cost reasons. Now they can cheaply build exactly what they want instead of paying through the nose for something that is only slightly like what they want. This doesn't mean the work is gone, but it does largely mean large swathes of the SaaS vertical will be gone. The work itself is shifting to the individual businesses that were once the customers of the SaaS.

SWEs will be fine, all these small VC-funded startups building another CRUD app will not.


I work for a startup that makes a b2b SaaS that is _way_ too complex for anyone to spec out in a markdown file, especially when taking things like ITAR compliance into consideration.

We have seen steady growth and there’s been no signs of slowing down.

Our software facilitates order/quote/factory floor workflow automation with auditable trails in the manufacturing space, with cad file analysis and complex procedural pricing equations for quote generation, alongside a Shopify style storefront and many more goodies. We interface with things like shipping, taxes, erp integrations, and so much more.

I don’t see anyone vibe coding an alternative to our software even if they could. Manufacturers have enough on their plate managing their factory floors.

That said, we facilitate $millions in manufacturing orders per week and our engineering team is 3 people. We couldn’t do what we do without AI, and we would have needed to hire more engineers to handle the scale of our business if it weren’t for the power of Claude Code and Cursor.


There is a lot less of having the picture-taking-man come to take a picture for $$

I see it the opposite way actually with respect to the CS degree. If you earned your CS degree (or any degree) before 2022 or so, the value of that degree is going to grow and grow and grow until the last few people who had to learn before AI are dying out like the last COBOL developers

AI has fundamentally broken the education system in a way that will take decades for it to fully recover. Even if we figure out how to operate with AI properly in an educational setting in such a way that learners actually still learn, the damage from years of unqualified people earning degrees and then entering academia is going to reverberate through the next 50 years as those folks go on to teach...


What I think is disappearing is not so much the quality of academic education, but the baptism by firehose that entry level CS positions used to offer - where you had no choice but learn how things actually work while having a safe space to fail during a period in your career when productivity expectations of you were minimal to none.

That time when you got to internalise through first hand experience what good & bad look like is when you built the skill/intuition that now differentiates competent LLM wielding devs from the vibers. The problem is that expectations of juniors are inevitably rising, and they don't have the experience or confidence (or motivation) to push back on the 'why don't you just AI' management narrative, so are by default turning to rolling the dice to meet those expectations. This is how we end up with a generation of devs that truly don't understand the technology they're deploying and imho this is the boringdystopia / skynet future that we all need to defend against.

I know it's probably been said a million times, but this kinda feels like global warming, in that it's a problem that we fundamentally will never be able to fix if we just continue to chase short term profit & infinite growth.


> What I think is disappearing is not so much the quality of academic education, but the baptism by firehose that entry level CS positions used to offer - where you had no choice but learn how things actually work while having a safe space to fail during a period in your career when productivity expectations of you were minimal to none

I would say that baptism by fire _is_ where the quality of an academic education comes from, historically at least. They are the same picture.


Agreed. I remember (a long time ago) being on an internship (workterm) and after doing some amount of work for the day, I spent some time playing around with C pointers, seeing what failed, what didn't, what the compiler complained about, etc.

That's not something enthusiasts here and elsewhere want to hear; that's pretty obvious in this discussion too. People seem extremely polarized these days.

AI is either the next wheel or abysmal doom for future generations. I see both and neither at the same time.

In a corporate environment, where navigating processes, politics, and other non-dev tasks takes significantly longer than actual coding, AI is just a slightly better Google search. And trust me, all these non-dev parts are still growing, and growing fast. It's useful, but it's not elevating people beyond their true levels in any significant way (I guess we can agree that e.g. number of lines produced per day isn't a good measure; it's more material for a Dilbert-esque comic on a Friday afternoon).


Agree that AI is a force multiplier in small-cap and a better search at best in large-cap due to internal bureaucracy.

The bigger question (that remains to be seen and played out) is whether AI will be a forcing function towards small-cap. If smaller companies can build the same products as larger companies with fewer people then they can compete on price and win, hollowing out the revenue base of large-cap companies and leading to their downfall due to the dwindling workforce not having the culture to adapt. Of course reality is more complicated (moats, larger companies making too-good-to-be-true offers to acquire smaller competitors to prevent the emergence of genuine competition, boys-club who-you-know-not-what-you-know corruption via lawfare ...) so, yeah, remains to be seen.


> If you earned your CS degree (or any degree) before 2022 or so, the value of that degree is going to grow and grow and grow

In my experience, target schools are the only universities now that can make their assignments too hard for AI.

When my university tried that, the assignments were too hard for students. So they gave up.


This comment would make sense 6 months ago. Now it is much, much, much more likely any given textually answerable problem will be way easier for a bleeding edge frontier AI than a human, especially if you take time into account

What university is assigning undergrads assignments too hard for AI?

Funnily enough, my Science Fiction class graded like that.

If you didn't have high information density in essays you were torn into. AI was a disadvantage due to verboseness.

Most people dropped the class and prof went on sabbatical.


Besides being tough, it's shaping students' writing in a specific direction. That dense style I think of as 19th century English philosophy prose, though I hear it may still be the ideal in parts of Europe.

We're now reaching the point where people have gone their whole college education on AI, and I've noticed a huge rise in the number of engineers that struggle to write basic stuff by hand. I had someone tell me they forgot how to append to a list in their chosen language, and couldn't define a simple tree data structure with correct syntax. This has made me very cautious about maintaining my fluency in programming, and I'll usually turn off AI tools for a good chunk of the day just to make sure I don't get too rusty.
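For reference, the basics being described really are just a few lines. A minimal sketch in Python (the names here are illustrative, not from the anecdote):

```python
from dataclasses import dataclass

# Appending to a list:
items = [1, 2]
items.append(3)  # items is now [1, 2, 3]

# A simple binary-tree node:
@dataclass
class TreeNode:
    value: int
    left: "TreeNode | None" = None
    right: "TreeNode | None" = None

root = TreeNode(1, left=TreeNode(2), right=TreeNode(3))
print(items, root.left.value)  # prints [1, 2, 3] 2
```

That this level of fluency can atrophy is exactly the concern the comment raises.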

  > I'll usually turn off AI tools for a good chunk of the day just to make sure I don't get too rusty.
same, but it's hard to do when $work has set a quota on AI usage and # of AI-related PRs every month...

That’s an insight that a project I’m working on has built upon: https://unratified.org/connection/ai/higher-order-effects/#1...

Education and training and entry level work build judgement.


"Those who can't, do..."

Streisand effect I think this will boost sales

I hope so. I will never type a single thought of my own or personal detail into an OpenAI product again. I have no doubt at some point OpenAI will be asked by DoD to hand over customer data and they will do so. If I use AI at all for nonprofessional reasons it will be Anthropic/Claude.

The US government doesn't really need a contract with Anthropic to force them to hand over customer data, does it? What would prevent that from happening as long as they are a US-based company?

Well I guess Proton cannot be trusted. You know what they say, centralization corrupts absolutely

What Proton sells you is a reduction of anxiety. But that's a lie.

The whole idea of encrypted email is pointless. There's absolutely no guarantee it's encrypted in transit, or at rest on any machine it passes through, unless you encapsulate the messages with PGP, and even then you still leave a trail of envelopes everywhere. Any government that wants your data will come round and beat it out of you, or out of the provider, as best it can. And if you have to pay the provider, as evidenced here, they can point to you, and you can then be beaten for it. Beating being metaphorical or otherwise.

Use any old shitty email provider and make sure you can move off it quickly if you need to. Standard IMAP, not weird-ass proprietary stuff like Proton's. Think carefully about what you do and say. Use a side channel for anything that actually requires security.
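The "trail of envelopes" point can be made concrete with a short sketch (addresses, subject, and the placeholder ciphertext are all hypothetical): even when the body is PGP-encrypted, the routing metadata stays in plaintext at every hop.

```python
from email.message import EmailMessage

# Placeholder standing in for an actual PGP-encrypted payload.
ciphertext = (
    "-----BEGIN PGP MESSAGE-----\n"
    "...\n"
    "-----END PGP MESSAGE-----\n"
)

msg = EmailMessage()
msg["From"] = "alice@example.org"  # visible to every relay
msg["To"] = "bob@example.net"      # visible to every relay
msg["Subject"] = "lunch plans"     # visible to every relay
msg.set_content(ciphertext)        # only the body is protected

# Any SMTP hop (or a subpoenaed provider) can read the envelope:
print(f"{msg['From']} -> {msg['To']} | subject: {msg['Subject']}")
```

This is why who-talked-to-whom metadata leaks regardless of which provider relays the message.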


Thanks for pointing that out. I always do too. I'm always surprised how many people here aren't aware of this.

You can pay proton anonymously according to some other comments here...

As a long-time Proton customer, I am fairly certain Proton has always been completely upfront that it will comply with lawful requests for information from the Swiss authorities, if a response is obligated by Swiss law. Therefore this isn't especially surprising.

The key is and always has been to make sure that someone like Proton simply doesn't have the information so they can't give it away.

This is just impossible. If they're going to be sending your email to Gmail, then they need to see what's in it. So they will have the data at some point. You have to take them at their word that they don't look at it while it's going through their inbound and outbound servers. But they're selling it as a technical protection, not a trust-based one.

Personally, if you want private comms, just don't use email. The protocol is simply not suitable.


Exactly: you can use Bitcoin, even cash. You can even add credits with PayPal or a credit card, in which case (I assume) Proton won't retain your payment data. But if you attach credit card info permanently to your account, then it can be retrieved.

It's wild to me that people are downvoting this. Nobody is going to jail for you...

I don't think any commercial entity can be trusted to break the law on behalf of customers who only pay a small fee each

In this case, it was Swiss courts who forced them to comply, not foreign courts.

And from what little I can tell from the article, it was account payment data, not content from the account.

Proton was never designed or advertised to resist this kind of threat.


Given that they were praising Trump, Vance, and the gang, I called it then.

I cancelled my Proton account when all of that hit Mastodon. Their VPN was good, but I don't support Nazis and their toadies.


I wasn't even aware of anything around Proton and specific US political parties. Thank you for your post, as it led me to some searching.

The single most useful link I found was this Reddit thread:

https://www.reddit.com/r/ProtonMail/comments/1i2nz9v/on_poli...


Good enough reason to never trust them.

In trying to check this claim (I thought Proton did sensible things), I found that the submitted news article is not new at all:

> [Proton's] homepage touts that “With Proton, your data belongs to you, not tech companies, governments, or hackers.” However, [...] Proton previously handed over an IP address at the request of French authorities made via Europol to Swiss police. Yen wrote a Twitter post at the time, stating, “Proton must comply with Swiss law. As soon as a crime is committed, privacy protections can be suspended and we’re required by Swiss law to answer requests from Swiss authorities.” ---https://theintercept.com/2025/01/28/proton-mail-andy-yen-tru...

Big surprise: Swiss company complies with Swiss law!

And the same happened now, quoting the part of the submission that you can read without signing up:

> privacy-focused email provider Proton Mail handed over payment data related to a Stop Cop City email account to the Swiss government, which handed it to the FBI.

Anyway, regarding your claim, it's a whole rabbit hole of statements they made, but broadly speaking it sounds like you're right: Vance supported legislation that Proton campaigned for, and subsequently (as of 2025-01) Proton loves the US Republican Party, believing it would stand up for 'the little guy'. To be fair, they bring some evidence that sounds verifiable and backs this opinion up somewhat, but even if it's a correct opinion on this sub-topic, it's still supporting authoritarianism. Anyway, this is where I'm going to stop trying to politically analyze their situation and just not recommend Proton anymore...


It seems they aren't violating the promise about "your data", and are handing over "their data" (IP addresses, payment data)

That seems quite irrelevant in the bigger picture. I'm not recommending a company that campaigns for an authoritarian party, not even to other people in that camp. It's just immoral

[flagged]


We do care. Someone's gotta stand up to it.

They’ll still be in business in 20 years. So much for all that standing up.

Well I care. I am more informed because of their comment. I now know that I must avoid Proton.

More informed by that comment, really? Did you read this[0]? As someone disinterested in the topic, the controversy seems very overblown and a knee jerk response. His position seems to have been pretty consistent over time.

[0]: https://medium.com/@ovenplayer/does-proton-really-support-tr...


I don't know what Proton did regarding Trump, but if you follow this principle to the end you might as well ditch technology and live in the forest. I'm not being hyperbolic; everyone does business with, or endorses, someone on either side who does stupid shit.

[flagged]


[flagged]


> The only whinging I see is people using terms like Nazi for everyone they disagree with

what an oddly specific example


What’s oddly specific about it?

And oddly, the daily 10 AM Jane Street BTC sells have suddenly stopped, and crypto is able to rally...


Probably coincidence - general market is up strongly too. Or, too hard to tell anyway.


You have to connect this to the market or the article. I certainly don’t believe Jane Street is keeping crypto down somehow. What kind of non-conspiracy theory are you proposing?


The article links to https://x.com/InvestWithD/status/2026381475776692426 which purportedly quotes a Jane Street insider that after pausing BTC activity, they expect BTC will go up.

I personally don't understand how any of this works.


The tweet used as the source is a troll. That guy uses the "I just got off the phone with..." template all the time to joke/troll about things going on in markets.


Ah, what a world.


I suspect this theory is cope from people who missed the top.


I bet that's the case. Crypto bros need a villain to justify being trigger-happy top buyers flooded by greed every time. Jane Street is particularly useful to KOLs who have been shilling crypto all the way down from the top. There is already strong confirmation bias from everyone yapping "See? You lost moniez because of market manipulation! I'm a successful trader (here is my paid Substack and exchange ref link)"

