My reading of [1] is that Palantir does data fusion. Their software, when installed on an organization's peripheral systems by Palantir's forward-deployed engineers (FDEs), centralizes all the org's data (within the org, not at Palantir) and lets the org's management run analyses on the pool.
I'm guessing that people are scared that the state will install one big palantir instance on all its systems. So that anything any part of the state learns about you, in any context or interaction, can be effortlessly used against you in every other context (perhaps via parallel construction in a lawsuit).
Basically, the fear would be that palantir makes mass surveillance data actionable, fuses surveillance programs, and incorporates most IT into mass surveillance programs.
The government would become less like a series of separate agencies and more like one big consciousness that knows things (knows centrally everything it was told anywhere).
Note this is just my interpretation of the fear.
It's fuzzy. Others may know more about Palantir than I do and thus have a more precise and grounded concern.
Funny, because I was thinking of Evangelion's predecessor, Gunbuster, in which cadets are shown undergoing grueling physical training both in and out of their mechs to prepare for space combat.
You're saying maybe people have mistakenly accepted incorrect proofs now and again, so some theorems that people think are proven are unproven. I agree that this seems very likely.
In practice when proofs of research mathematics are checked, they go out to like 4 grad students. This isn't a very glamorous job for those grad students. If they agree then it's considered correct...
But note this is just the bleeding edge stuff. The basic stuff is checked and reproven by every math undergrad that learns math. Literally millions of people have checked all the proofs. As long as something is taught in university somewhere, all the people who are learning it (well, all the ones who do it well) are proving / checking the theory.
Anyway, when the scientific community accepts a bad proof what effectively happens is that we've just added an extra axiom.
Like when you deliberately add new axioms, there are three cases:
- Axiom is redundant: it can be proven from the other axioms. (This is... relatively fine? We tricked ourselves into believing something true is true; the reason is just bad.)
This can get discovered when people try to adapt the bad proof to prove other things and fail.
Also, people find and publish "more interesting", "different" proofs of old theorems all the time. Now you have redundancy.
- Axiom contradicts other axioms: We can now prove p and not p.
I wonder if this has ever happened? I.e. people proving contradictions, leading them to discover that a generally accepted theorem's proof is incorrect. It must have happened a few times in history, no?
Of course, maybe the reason this hasn't happened is that the whole logical foundation of mathematics is new, dating back to the Hilbert program (1920s).
There are well-known instances of "proofs" being overturned before that, but they're not strictly logical proofs in the Hilbert-program sense, just arguments. (Of course they contain most of the work and ideas that would go into a correct proof, and if you understand them you can produce a modern proof.)
Example: Cauchy's proof that if a sequence of continuous functions converges [pointwise] to a function, the limit function is also continuous. Cauchy's argument only holds for uniform convergence, not pointwise convergence, but people didn't really know the difference at the time. (The standard counterexample: f_n(x) = x^n on [0,1] converges pointwise to a function that is 0 on [0,1) and 1 at x = 1, which is discontinuous.)
- Axiom is independent of other axioms: You can't prove or disprove the theorem.
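To make the contradictory-axiom case concrete, here's a minimal Lean sketch. The axiom `bad` is hypothetical, standing in for a wrongly accepted statement: once the system is inconsistent, every proposition, including "p and not p", becomes provable.

```lean
-- Hypothetical: a wrongly accepted "theorem", taken as an axiom,
-- that turns out to be inconsistent with the rest of the system.
axiom bad : False

-- Ex falso quodlibet: from an inconsistency, anything follows,
-- including "p and not p".
theorem p_and_not_p (P : Prop) : P ∧ ¬P :=
  ⟨False.elim bad, fun _ => bad⟩
```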
English doesn't have an "I'm just hypothesizing all of this" grammatical mood; if it did, this post would be in it. I didn't do enough research to answer your question, and some of the above may be wrong, e.g. the part about the 4 grad students.
One should probably look for historical examples.
Maybe not individual warrants (at least not warrants for non-scalable collection, like a hardware bug in one person's phone, i.e. warrants that most users, with high probability, are not subject to). But mass surveillance that everyone is subject to, e.g. by the NSA, even under "mass warrants" (e.g. the Verizon FISA warrant), is probably in most people's attacker model. I don't have a study handy, but it seems reasonable that most users use Signal to protect against mass surveillance, and Signal advertises itself as being good for this.
Also Marlinspike and Whittaker are quite outspoken about mass surveillance.
If Cloudflare can compile a big part of the "who chats with whom" graph, that is a system design defect.
I thought it was digits only but see there's always been the option to use an alphanumeric passphrase as the "PIN". That prevents brute-forcing for anyone that bothered to use one, right?
It was only digits initially (https://old.reddit.com/r/signal/comments/oc6ow4/so_a_four_di...), with nothing preventing very easy ones like "1234". Even after they fixed that, they continued to call it a PIN, and many people would just assume it was a number ("number" is right in the acronym), and often a very short one. Most people didn't want to set a PIN at all; they were nagged into setting one and then nagged again and again to reenter it.
It was not clear to most people that their highly sensitive info was being uploaded to the cloud at all let alone that it was only protected by the PIN. I wouldn't be surprised if a lot of people picked something as simple as possible.
Their announcement post says "at least 4 digits, but they can also be longer or alphanumeric", though maybe the feature had launched before that was written? https://signal.org/blog/signal-pins/
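For intuition on why the digits-only default mattered, here's a rough keyspace comparison. This is a toy model: the lengths chosen are illustrative, and it deliberately ignores Signal's server-side guess rate limiting (Secure Value Recovery), which is the real mitigation against brute force.

```python
import math

def keyspace_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random string over the alphabet."""
    return length * math.log2(alphabet_size)

# A 4-digit PIN: 10^4 = 10,000 possibilities, trivially brute-forceable
# offline without rate limiting.
pin_bits = keyspace_bits(10, 4)

# A 10-character alphanumeric passphrase (a-z, A-Z, 0-9): far beyond
# practical offline brute force if chosen randomly.
phrase_bits = keyspace_bits(62, 10)

print(f"4-digit PIN:         {pin_bits:.1f} bits")   # ~13.3 bits
print(f"10-char alnum pass:  {phrase_bits:.1f} bits")  # ~59.5 bits
```

The point is that the gap is not incremental: each added alphanumeric character multiplies the search space by 62, while real users mostly stayed near the floor of this scale.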
Doesn't matter. As long as the code is open source and end-to-end encrypted, Signal staff could be official NSA employees and it wouldn't matter (in the short term; in the long term, you would see these things change, of course).
I'd change my mind on Signal if you can demonstrate an attack that assumes an evil signal operator, or evil signal servers.
Signal knows they just need to keep themselves open to the possibility of this kind of demonstration. Then any mistrust, combined with the fact that there is no exploit at the next CCC or DEF CON, becomes evidence that it's secure. More mistrust -> more attempts to prove it's insecure + no demonstration of insecurity -> a better argument that it's secure. It's a negative feedback loop. It's also honest: you could actually break it. Did I miss how you can break it? Link to the demo.
Signal the program doesn't trust Signal the organization, as it should be. That's the core idea. It's what lets them not get fucked by the government: they cooperate fully and ensure they have nothing to tell (privacy by design, data minimization, self-blinding). And by having a lot of users they make themselves impossible to ban and thereby protect the whole concept.
Whittaker is very smart politically. The software isn't perfect, sure, but it's polished and reliable and secure. Make a better one... it is fine.
Also, are you reading what she's saying? This is not what compromise looks like. Here is what compromise looks like: when you see them starting to talk about protecting people by establishing police control to fight the bogeyman. When they start talking about threats here, threats there, enemies here, enemies there... When they say that, because of big tech, we need things like the DSA (enforcement regimes, access for police) [1]. Whittaker says that, because of big tech, we need a lot of open source projects backed by nonprofit organizations that don't advertise, don't surveil, and have no incentive to start doing either... and that build stuff that has no backdoors and makes no affordances for the state, or anyone else in power, to compromise it.
[1] and then add-ons like E-Evidence, and finally rules like those in England that prohibit privacy by design... which would prohibit: Signal... but which the English are not enforcing because of protests by: Signal.
Note that abolishing the common-law reasonableness standard isn't the only change Netanyahu originally had planned. He's been forced to backtrack a lot.
The original proposals intended to allow the government to override supreme court decisions generally (not just ones based on the reasonableness doctrine). I don't see how this wouldn't have eliminated all legal limits on government power in Israel.
In a sense there already aren't any limits: 50% of parliament is enough to change the basic laws.
The biggest threat to democracy is moral panics about fake enemies, instigated by politicians (specifically, demagogues). This helps with that by replacing politicians with direct democracy. It's more feasible than full-on direct democracy since it avoids the need for everyone to become an expert in every legal area: you can delegate your vote to a non-politician expert you know personally, and to different people depending on the proposal.
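The per-proposal delegation mechanism described above can be sketched in a few lines. This is a toy illustration, not any real system's API; all names and the dict-based data layout are invented for the example.

```python
def resolve_vote(voter, proposal, delegations, direct_votes, seen=None):
    """Follow a voter's delegation chain for one proposal until a direct
    vote is found. Missing votes and delegation cycles count as abstention."""
    seen = seen or set()
    if voter in seen:
        return None                      # delegation cycle -> abstain
    seen.add(voter)
    if (voter, proposal) in direct_votes:
        return direct_votes[(voter, proposal)]
    delegate = delegations.get((voter, proposal))
    if delegate is None:
        return None                      # no vote and no delegate -> abstain
    return resolve_vote(delegate, proposal, delegations, direct_votes, seen)

# Alice delegates privacy questions to Bob and tax questions to Carol,
# who each vote directly on their own topic.
delegations = {("alice", "privacy"): "bob", ("alice", "tax"): "carol"}
direct_votes = {("bob", "privacy"): "yes", ("carol", "tax"): "no"}

print(resolve_vote("alice", "privacy", delegations, direct_votes))  # yes
print(resolve_vote("alice", "tax", delegations, direct_votes))      # no
```

The design choice that matters here is that delegation is keyed on (voter, proposal), not just on voter: that's what lets you lean on different trusted people per domain.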
Heat pumps can't be more efficient than the theoretical Carnot cycle run in reverse, whose heating COP is T_hot / delta_T (temperatures in kelvin). In this case, with -28 °C outside and delta_T = 98 K (i.e. a 70 °C supply temperature), that's (273+70)K/98K ≈ 3.5.
I guess being 2x as efficient (cheap) as electric resistive heating isn't super-terrible, but it's not great either.
Compare this to a favorable groundwater heat pump configuration with good radiators and insulation, where the source (groundwater) is maybe 10 °C and the target temperature 30 °C (close to room temp): (273+30)K/20K ≈ 15.
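A quick sanity check of the two scenarios, using the standard Carnot bound for heating, COP_max = T_hot / (T_hot - T_cold) with absolute temperatures (the -28 °C / 70 °C and 10 °C / 30 °C pairs are the ones assumed above):

```python
def carnot_cop_heating(t_hot_c: float, t_cold_c: float) -> float:
    """Carnot upper bound on heating COP: T_hot / (T_hot - T_cold),
    with Celsius inputs converted to kelvin."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

# Air-source pump: -28 C outside, 70 C supply (delta_T = 98 K)
print(f"{carnot_cop_heating(70, -28):.1f}")   # ~3.5
# Groundwater source at 10 C, 30 C target (delta_T = 20 K)
print(f"{carnot_cop_heating(30, 10):.1f}")    # ~15.2
```

Real heat pumps reach only a fraction of the Carnot bound, which is why a practical COP around 2 in the cold-air case is plausible.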
[1] https://archive.ph/6ljwy#selection-2539.194-2539.400
See also: https://redlib.privadency.com/r/Futurology/comments/4o02p3/o...