Dear Eric, the proper response is I'm sorry (gigaom.com)
75 points by mattjung on Feb 19, 2010 | hide | past | favorite | 63 comments


Whenever privacy as a topic comes up in discussions on the forums I follow, a lot of people chime in with a fairly dismissive "no harm, no foul" kind of attitude. They don't mind the Eric Schmidts and Jonathan Schwartzes and Mark Zuckerbergs of the world proclaiming the death of privacy, because apparently no-one in the Facebook generation cares.

It's odd how, when something like this becomes public and everyday, non-geek people actually appreciate the implications of what is going on, there never seems to be any shortage of people who care. And whatever Schmidt says, there obviously are examples of real harm being done.

I have been saying for a while that I think privacy and data protection will have to get worse before they get better. Right now, our societies are drifting into a situation where governments and megacorps can build databases for whatever purpose they want, because as long as that is all they are doing, the average guy in the street doesn't know or care.

But as we are seeing increasingly frequently now, those databases are subject to both deliberate abuse and accidental compromise. There can be consequences for very large numbers of people and/or very serious consequences for some of those people.

We need serious laws, with company-destroying penalties attached, to protect the privacy of individuals and regulate the collection of any kind of potentially sensitive, personally identifiable data in any database, and we need them before the frog is dead.


We need serious laws, with company-destroying penalties attached, to protect the privacy of individuals

We already have them, and I don't think we need more. There is a fundamental difference between the government spying on citizens or abusing the information they collect, and companies doing so. People voluntarily sign onto Facebook every day, and the company does not violate its privacy policy (if it did, you'd have grounds for a lawsuit). If Facebook users find its privacy policy acceptable, I don't think establishing contrarian laws is a proper response. It's not the government's place to institute laws to protect consumers from themselves. I don't believe Facebook abuses its customers' trust (and there are already contract laws against such things). The proper action here is educating people about privacy, not adding more legislation.


Lenders have begun mining Facebook and similar graphs, and turning down credit for people whose graph ("friends") includes people who are deemed risky.

So, companies are, or may soon be, dictating who our friends are. If we transgress against their judgment, we are punished in real and financial terms.

Examine this from the other side of those relationships, and you have corporate driven ostracism of members of society who are not deemed "worthy".

And Facebook now provides no means of opting out of participating in this graphing. Whenever a "friend" uses a Facebook application, that application has access not only to their list of friends but to yours.

Recently, Facebook took information that was shared by users under an agreement and setting that kept it private (e.g. profile pictures), and forced it, pre-emptively, to be public. Except for the weasel words that were surely already in their user agreement, they broke if not their legal contract then certainly their social contract with their users.

What is reasonable? Will we all have to acquire legal degrees and spend hours each day analyzing endless online and offline contracts, just to participate in society?


> Lenders have begun mining Facebook and similar graphs, and turning down credit for people whose graph ("friends") includes people who are deemed risky.

Source(s)?


Googling, I find this one claims they are not using friends' credit scores, but are using other aspects of your social graph and content:

http://www.wtopnews.com/?nid=111&sid=1869379

A different news item, upon which I based my comment, said (IIRC) that they were using friends' creditworthiness as a metric. Call me cynical, but I find it hard to believe they won't, sooner or later, as such data aggregates throughout commercial databases and so moves beyond the requirement to request reports from (in the U.S.) the "big three" credit reporting agencies.

I don't have that item at hand, but here are a couple more that Google turned up (quick search and scan). The link above and the first below both seem to be based on the final, third link.

http://www.internetevolution.com/author.asp?section_id=697&#...

http://www.creditcards.com/credit-card-news/social-networkin...


The problem is that most sites' "privacy policy" is squishy. Even if they guarantee they won't give out "private information" to anybody, they are free to define what information they consider "private." (For example, some web email providers do not consider your email address to be private information. Go figure!)

I suspect we will need laws that establish a clear baseline for what does and does not constitute "private information," and what services must offer in the way of protecting that information. This is one place where the free market has no incentive to fix the problem on its own, and cannot be trusted to do so.


Furthermore, the government, which has a lot more data and can do a lot more damage with it, will be far harder to bind with any silly laws you pass.


> There is a fundamental difference between the government spying on citizens or abusing the information they collect, and companies doing so.

I respectfully disagree. It doesn't matter whether, for example, an identity thief obtains the personal information necessary to take over my life from a government database leak or from a private company. The consequences are just as damaging.

Moreover, I find no meaningful distinction between the power of government and the power of megacorps any more. The gap between the power of the individual and the power of the state/megacorp is so vast either way that either is a fundamental threat to quality of life if they screw up.

> People voluntarily sign onto Facebook every day, and the company does not violate its privacy policy (if it did, you'd have grounds for a law suit).

Well, firstly, I challenge your claim that they always operate within their privacy policy. For example, Facebook collected vast amounts of information about me without my knowledge or consent, because a huge part of their modus operandi is to get friends to contribute information about each other and develop their database via networking effects.

Secondly, I don't think their privacy policy really matters, because it's one of those legal cop-out things (if it even has any legal weight at all) that companies hide behind when they do something unreasonable. As we see in cases like the recent Buzz mess, there is a world of difference between what people might think they are agreeing to and what a company's lawyers think their privacy policy lets them get away with. This is a well-known legal problem, and indeed most jurisdictions have provisions in law specifically for cases where there is an agreement between two parties with disproportionate power to negotiate the terms.

In any case, these companies can and do vary their privacy policies on a whim. A privacy policy is a PR exercise, and little more. The big names have shown repeatedly that they cannot be trusted to safeguard sensitive information properly, and that they will push and if necessary break the boundaries of reasonable behaviour as much as they can get away with. People's privacy is too important to allow that kind of unregulated power, precisely because once lost it is difficult to ever regain.

Finally, the "you can raise a lawsuit" argument is pretty meaningless in a pay-your-own-fees jurisdiction. No individual is likely to receive sufficient compensation for any abuses even to cover the costs of bringing the case, unless there has been a serious event that follows a breach. The big companies know this, so they can happily go around risking everyone's privacy, and write off the costs of paying out compensation in any cases that do go to court as a mere business expense. This is not an effective deterrent, which is why I say we need company-destroying penalties for people who deliberately violate people's privacy or abuse their databases. Making it a criminal act and throwing the directors in jail is an acceptable alternative. Making them turn up in court as a formality and issuing a fine so puny that a first-line manager can pay it out of his expenses budget is not.

> It's not the government's place to institute laws to protect consumers from themselves.

Oh, I disagree. For one thing, I disagree with your characterisation of people as mere consumers. In many cases where databases are being built, the information is collected indirectly, not via a simple supplier/consumer relationship. And of course there are some services that are effectively essential, or indeed things you are legally required to have, and while you may be a consumer of such a service you don't get any choice to opt out (or your only choice is between providers with similarly dangerous policies regarding personal data).

In any case, I think it is the government's job to protect the little guy. It is one of the most fundamental and necessary requirements of any civilised government that it protect the individual against abuses by those more powerful than the individual alone can fight. Pretty much all law is based on this premise.

Of course people have to take reasonable responsibility for their own behaviour. I'm not disputing that, or saying that the government should clean up after someone who is grossly negligent or knowingly does something that will endanger themselves. But there is a difference between that and expecting an average person going about their daily life to fully comprehend all the legal subtleties and dangers in the world around them, which no-one on this planet can do completely.

> The proper action here is educating people about privacy, not adding more legislation.

I'm all for putting education ahead of legislation, and I absolutely agree that awareness about privacy issues needs to be raised.

But you have to have legislation to back things up. There's no point raising awareness if you don't provide an effective redress if megacorps violate the privacy policies that an educated user carefully read. There's no point expecting people to look after themselves if you're going to uphold a 20 page privacy policy written by a team of professional lawyers that would take a typical person several days and several thousand in legal fees to understand.

What is needed is a combination. By all means, make education the priority. But right now, there are abuses going on that affect even savvy users; it's not as if everyone who just got done over by Google's Buzz problems is a clueless person who should have known that by signing up to a completely unrelated Google service they were opting in to this.


Uhh, I voluntarily signed up for GMail and GMail ONLY. This, however, does not imply that I agreed that Google can package their "Facebook-Killer" without so much as asking.

In my opinion that's actually worse than Apple's completely slimy attempt to smuggle their software onto my box without asking, just because I had the bad sense to install iTunes.

Don't be evil; Ha!


"This, however, does not imply that I agreed that Google can package their "Facebook-Killer" without so much as asking."

I'm curious; did you sign into Gmail one day and see the Buzz splash asking if you wanted to check it out?

If so, did you click yes or no?


Has your privacy been compromised in a way that conflicts with their privacy policy? If that's the case, you can file a lawsuit. Otherwise, the legal system cannot help you (and should not be able to help you). If this is a huge concern for you and it wasn't covered in their privacy policy, you shouldn't have used their product. If you implicitly trusted them, you can stop doing so now.

I agree that they are wrong. I just don't think we need more legislation to patch it up. It's between consumers and the companies - the government has nothing to do with it.


And when privacy policies change, do I have to re-evaluate every time, for each product that may or may not be involved (since Google products are sort of all merged into one)? If I do re-evaluate, decide I no longer agree with the policy, and delete my account, is my data really gone? Or is it still there, just hidden, and subject to the new policy, which may give them more rights, for example to sell that information? I believe Facebook sends notifications of these changes; does GMail?

There are lots of issues that a simple 'privacy policy' document does not solve, and some that it adds.




Why do we need the kind of privacy you speak of? Because while people nod in approval at your rhetoric, their behavior shows that they feel free to give up the "privacy" of what they're eating for breakfast en masse.


I get the point you are trying to make, but I think it would only be valid if people fully understood the transaction they are making and chose to make it anyhow. I do not think we are there. Nor do I think this is because people are "stupid"; these issues are complicated. Over time, many of these people have discovered the issues that can arise and regret their decisions in hindsight; many more will come to the same realization in the future.

It should also be pointed out that privacy advocates in general are not saying these transactions should be impossible; what they are saying is that it should be possible to not conduct these transactions. There's basically no option to participate on Facebook and, say, pay them money to leave you the hell alone.


it would only be valid if people fully understood the transaction they are making and chose to make it anyhow

It's really not that hard: if you're trying to keep something from your ex-husband, don't tweet or Buzz it publicly.

it should be possible to not conduct these transactions

But that's the case now - I have the option to exert complete control over who gets to see my facebook profile, my tweets, and my buzz-es.


"I have the option to exert complete control over who gets to see my facebook profile, my tweets, and my buzz-es."

To my understanding, those buttons aren't anywhere near as powerful as you may think; I'm pretty sure Facebook affiliates see right through them if you do much more than glance at an app. How many randomly-selected people will be able to correctly answer that question?

Pointing out that there are parts of privacy that people do get ("Gosh, maybe I shouldn't make a post about how much fun my mistress and I had last night") doesn't negate the point that there's a huge chunk that they don't understand at all, and therefore we can't look at their actions as endorsement of the current way private information is being handled.

(Both of you who have replied do not seem to be getting my point here. I'm not making the general purpose argument that "we should have privacy". I'm making the argument that we can't read people's apparent acceptance of the current privacy regime as real acceptance when they don't actually know what the current regime is. Which is why my post here focuses on the issue of what an average person understands is going on, rather than whether what is going on is intrinsically good or bad.)


Do people ever fully understand what they're getting into? I'd say not.

As for your second point: you can take Facebook as they present it, or you don't have to use it; it's not for you to decide what they can and cannot do.


If you can say both of those things at once, you missed my point. You can't deliberately "take it or leave it" (your second paragraph) if you don't know what "it" is (your first paragraph).

The point is not that I demand that Facebook be legally constrained to provide me a high-privacy option. I would fight such a law. The point is that nearly nobody would even understand what such a choice is, and in an environment where nobody fully understands the implications of their actions, it is not a legitimate argument to then cite people's actions as supporting the current system. You could argue that people would still make the same choices even if they did fully understand the transactions they are making (and I'd counterargue after that), but you can't base an argument on current people making conscious choices when they aren't in fact making conscious choices.


People should just assume everything that is on the internet and not encrypted is public. It's safer that way. Heck, people think email is private. I think people need to realize this is something new and that there might be negative consequences they cannot foresee. They should think: if this data is public, what is the worst that can happen? If this data is correlated with other data I have online, what can happen?


I believe the reason is freedom of choice. Clearly not everyone wants to be part of this system. Do you not know of people who lock their profiles on facebook, or people who strictly avoid social networking sites in the first place? If that is the case, I would be very surprised. Do we remove their ability to choose?


Did you actually read the article we're discussing? There's a great example of why privacy is important right there.

As for people being willing to give up their privacy, I think I already addressed that in my previous post: most of those people don't understand the implications and just go along with it because of peer pressure, but when they discover things like the Buzz problem or Facebook's Beacon mess, they are actually quite upset.


Of course I read it. IMO, it's the-sky-is-falling linkbait. Just one example: a woman not understanding the privacy settings of the Buzz account she opted into is not comparable to publishing Eric Schmidt's home address. If I were a Google shareholder, I would either expect him to move, or expect a security detail at his house for a year after such a leak. The cost of his being killed by a disgruntled employee/shareholder/user/random person is just too high.

What percentage of facebook users would you guess know what "Facebook Beacon mess" you're referring to? In particular, more or less than 0.1%?


> Just one example - a woman not understanding the privacy settings of the Buzz account she opted into is not comparable to publishing Eric Schmidt's home address.

No, it's not, because in her case there was probably far more sensitive information than just her address, and it was being routed directly to someone who was known to be physically abusive and to that person's friends. Schmidt's address, on the other hand, is presumably a matter of public record already as he's a director of a corporation, and as far as I'm aware he has not personally been physically abused for a prolonged period at any point in his past.

Also, I'm not familiar with exactly how the abused ex opted into Google's new service and thus attracted the unwanted attention. She certainly seems to feel that the information was shared more widely than before, automatically and without her knowledge or consent. Could you explain her misunderstanding and how she opted into the extra service without knowing it?

> If I were a Google shareholder, I would expect a security detail at Eric's house for a year after such a leak.

You don't think one of the most high profile and wealthiest men in the US makes any security arrangements normally?

> What percentage of facebook users would you guess know what "Facebook Beacon mess" you're referring to?

I don't know, but it was enough that Facebook themselves pulled a fast U-turn when they got the feedback, and they haven't gone anywhere near the same idea since. Presumably they have a pretty good idea of how strong the opposition was to pull out like that.

To answer your other question, I find it unlikely that if 99.9% of Facebook users were happy with a change, they would work so hard and so fast to revert it.


Google doesn't know which of her most-emailed contacts are likely to abuse her. She does. It's her responsibility to set her privacy settings and buzz accordingly. I'd support laws that would prohibit Google from intentionally misleading people about what is public and what stays private, but Google Buzz is not a good example of that.

Schmidt's address is certainly not a matter of public record. Read the original source [1]. The town and his wife's name were found through a local newspaper, and his home address was found through her political donations, which she unwisely entered as originating from her home address, without realizing that those are a matter of public record and would be published by the FEC. The only role Google played here is indexing the newspaper article and providing the software The Huffington Post used to visualise FEC data. Google did not "collect" this in any sense appropriate here.

1. http://news.cnet.com/Google-balances-privacy,-reach/2100-103...


I agree with everything that you say, except that if we let things get worse, I believe it will be too late to make them better.


There is a Pandora's box effect here, to be sure.

I think the best we can hope for is one or two really spectacular failures, by organisations on the scale of Google or Facebook, where the potential damage becomes obvious to everyone, but where, because of the scale of the problem, the plug gets pulled quickly before too much damage can be done.

IMHO, this would be preferable to slowly building up lots of databases and having an endless stream of small-scale leaks, each of which is also damaging to those involved, but none of them big enough to make this a serious political issue.


Can you step me through this worst-case situation you're thinking of? People's most-emailed contacts being auto-added to their buzz stream doesn't count.


> People's most-emailed contacts being auto-added to their buzz stream doesn't count.

Does that depend on how many people's previously private information is now available to abusive ex-partners? Is it OK in this case, because so far only one person has come forward and exposed that threat? What if it happened to 10 people? 100? 1,000,000?


Does that depend on how many people's previously private information is now available to abusive ex-partners?

Some people will use Buzz to announce their previously private information to the world, knowing they have abusive ex-partners. Others will stab themselves in the face with a nice pen. Neither Google nor Montblanc is to blame.


I think the question is, did the person make the previously private information available more widely, or did Google (or Facebook, or whoever else controls the big database and previously implied that access to that data would be restricted)?


Agreed.


If we fix the problems now, wouldn't it get better in 75 years or so, when all the currently surveilled people are dead?

(Admittedly, that's still a terrible situation.)


I completely disagree -- the law moves slowly, and we don't want to jump the gun and get bad laws. If, in the meantime, some people are inconvenienced, or worse, someone dies (a stalker finds them, etc.), those are risks we have to take for progress. If you have a stalker, stay away from new things; don't hurt everyone by forcing them to slow down.

Premature laws are worse than no laws.


I agree that bad laws are worse than no laws, but how long exactly does it take? It's really not that hard to think through the consequences of modern technology providing massive data storage, fast data mining, and a worldwide communications network with near instant data transmission and plenty of bandwidth. It's also not as if these are new developments or things we could not see coming.

I can completely accept not rushing out a law in five minutes, but this has been coming for at least a decade, and it's been a huge and blindingly obvious issue for several years, ever since the social networking sites and data hoarders like Google got big. There have been plenty of serious data loss cases, including those with very serious consequences. There have been plenty of cases where databases have become incorrect, deliberately or carelessly, and those have serious consequences too.

Failing to update privacy and data protection laws by now isn't caution, it's negligence.


He's not very good as the public face of Google, is he?

Especially as his privacy blind spot corresponds to the one PR topic that has the ability to genuinely harm Google.

I wonder why the geek-friendly faces of Sergey and Larry don't appear more often and why someone doesn't persuade Eric to stay in his lair, errr, office a bit more.


No, he's not a good public face if you want to hear him tell you what you want to hear. He is an excellent public face if you want him to tell you what really went down. It's refreshing and encouraging. No company should have to play to the lowest common denominator.


No company should have to play to the lowest common denominator.

Well, if you boast that you have hundreds of millions or even a billion users... You might want to spend a lot of time thinking about the lowest common denominator.


I understand what you mean, and why you say it. I do, however, disagree. From a pure business standpoint, it makes sense. From a technological standpoint, I think it hurts them. I'm having a hard time expressing this nicely, so I'll say it as plainly as I can. Hopefully no one takes offense.

I don't want Google wasting time on things that are only there because people are dumb. I'd rather them spend time on things that help people get smarter, not allow them to remain ignorant.

Again, this is a lot harsher than I intend, so hopefully you understand the meaning. My apologies in advance.


I don't want Google wasting time on things that are only there because people are dumb. I'd rather them spend time on things that help people get smarter, not allow them to remain ignorant.

I think we are in violent agreement on this principle. In the case of Buzz, I think they failed us both. I understand we shouldn't have to build software that throws up massive confirmation dialogs "Really, REALLY publish those public photos on Picasa of you mountain biking on a "sick day" to the following 12 frequently emailed people at your workplace including your boss and the dragon lady in HR?"

But at the same time, making people smarter shouldn't mean making people figure out the non-obvious consequences of the choices you give them. I still can't figure out exactly how the original Buzz worked or didn't work... I think it boiled down to whether you created a profile, but I'm not sure and I'm not going to try it to find out.

I hope Google finds the middle ground and does a better job of designing software with great user interfaces: software that makes its affordances visible without nannying or nagging users.


"Software that makes its affordances visible without nannying or nagging users."

That's a difficult task, one which I think we'll be wrestling with for a long time.


I get the feeling that Eric doesn't always remember he's in PR and that no prizes are awarded for arguing with the company's critics.

He should really stick to simple, upbeat, sympathetic messages. To be honest I'm surprised he didn't learn that after his last privacy debacle.


The thing is, the last one wasn't even bad at all. I don't think he actually made the usual "if you're not doing anything bad what do you have to hide" argument, I think he said "if you have something you want to keep secret, maybe you shouldn't be on the Internet" which I agree about.

Though I guess your point is about PR, and I guess I have to agree it could have been phrased more positively.


If I had to guess, Eric Schmidt thinks he can't apologize. At this point, they've set a big precedent: they're willing to automatically network you based on information that you've given them. If they apologized, they'd have to neuter future roll-outs and remove this automatic user base, removing a major competitive advantage. How many other businesses have products that are popular enough to piggyback a social network? Not many. Google can, and they want to keep it that way.


The level of outrage being directed at Google over this seems to be far greater than that directed at Facebook when they made a lot of their users' information public by default.


Google messed with email. Email is one of our oldest electronic media; we all know how it is supposed to work. Changing the default privacy rules of your email inbox and address book without asking you very clearly and explicitly in advance is... well, sabotage. Also: Bait and switch.

Facebook is a sandbox. People kind of understand that. I sign on to Facebook knowing that (a) I don't know for sure what it is; (b) its inventors aren't quite sure what it is, either; (c) we're going to find out by trial and error. Whereas I sign on to GMail because I want a personal email system, which despite all its flashy trappings retains the underlying semantics of email. If I wanted a social network, or another sandbox, I'd have signed up for one. And I'm not sure anyone wants a thing that impersonates a personal email system for a decade and then decides to become something else overnight without warning or clarity.


> Facebook is a sandbox. People kind of understand that.

Which is why I tend to cite Beacon as their biggest failing.


"very clearly and explicitly in advance"

This is the problem. They did ask for these things, just not clearly and explicitly. However, they did have these things in place, and people did use them. Just not everyone did.

So, what does he apologize for? A mistake that didn't happen or for mistakes that did happen? Granted, the popular opinion wants them to apologize for the perceived slight, but doing so would be admitting to something that in reality, didn't happen as popular opinion would suggest.

Either way, any apology would be bad. In this case, being honest is probably the best thing to do.


Two things:

1) When your company motto is "Don't be evil" people tend to hold you to a higher standard.

2) Facebook was a place where you'd put stuff that at least some other people would see. Google, particularly search, is a place where some of the things you'd type you wouldn't want anyone to see.


1) Don't be evil doesn't mean don't make mistakes. Mistakes happen. 2) Despite the mistake, the features and protections were put in place to prevent the problems reported. His comment is right on the money. Maybe not what everyone wanted to hear, but what everyone wants to hear isn't always the truth.


I forgive mistakes. I don't forgive denial. As a user, I'd like to hear "I'm sorry" rather than "our users' concerns were unjustified and they don't know what they're talking about"


First, they've already issued their apologies for the confusion publicly. I don't see the need to repeat what's been said. Secondly, I don't find anything wrong with what he said. Again, it might not be popular, but I can't find fault in it. After all, they can't very well apologize for things that didn't happen, and it would be insulting for them to try.


The leader of a company absolutely should repeat what his company's PR statement/blog/whatever said, not contradict it. The fault is that it's the opposite of his company's official statement.


I don't believe Buzz was a mistake. I think it was a deliberate action that had an unexpected reaction. I think the execs at Google thought that people didn't care as much about privacy as it turned out we actually do.


That's awesome, but when you're Google and your entire business is based on people trusting you with their data you tell them whatever the hell will make them keep handing their data over.


Also, Facebook threw up a big window telling you to review the new settings, and it didn't allow you to do anything else on the site till you had checked the privacy settings.


In other words: Facebook knew its average users were less than intelligent, and it had to force them to accept the privacy settings. Google assumed its average user cared about privacy and so would investigate the matter themselves.


What the other guys said.

Also: Facebook is for kids playing Farmville and posting drunk pictures. I don't care what happens on there since all they have on me at this point is pretty much my name. I use Google daily for a variety of tasks and my life would be severely diminished if I had to stop using Google, so I would rather not. However, their behavior makes me question some of their services, policies and views on who can look at what. I wouldn't say I'm outraged, but I am at least surprised at the lack of tact from Eric.


I think people expect, or expected, better from Google.


Eric Schmidt is painting a pretty grim picture, especially when considering his other views on privacy:

"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. If you really need that kind of privacy, the reality is that search engines -- including Google -- do retain this information for some time and it's important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities." - Eric Schmidt


You are twisting his words (or at least your comment makes it seem like you are). He doesn't suggest that you shouldn't do these things, but rather that the reality these days is that if you don't want people to find things out, the only guaranteed way of preventing that is by not doing them. This isn't his view on privacy; rather, it's his view on the state of privacy, where it currently stands.


At some point, the extreme conclusion of what happened to the lady with the violent ex will happen. Once you pass a certain number of users, you need to expect that people using your service aren't that computer savvy and really aren't going to spend the time doing more than the bare minimum to use your service (e.g. check email, post a picture for friends). After all, a lot of people are put on these services by their tech-savvy children or siblings, who spend the bare minimum of time setting up a to-do list or giving basic tutoring on how to use the service for its basic purpose. It seems a pretty bad idea to change the rules and not think of the implications for these people of revealing data more publicly to the world, particularly when you take what is generally regarded as a private service (email) and turn it into a public service.


bs. GOOG is business, not personal -- inherently no privacy.

Along with the run-up fanfare about "privacy settings", GOOG knows exactly what it is doing.

And the title just emphasizes cynicism: It's easier to ask forgiveness than it is to get permission.



