Hacker News

> The internet in the 90s seemed more fun and more open. Perhaps it's because the only people really participating were not interested in abusing sites or people. It was mostly nerds sharing nerdy things. Once money got involved, and free everything was available, it turned into this soup of bots, trolls, AI, fake this and that, big money, swaying public opinion, and gross content.

The 90s had its fair share of the above as well:

* bots: I used to write bots to troll 90s HTML chatrooms (I was young and an idiot). IRC bots have been around since forever as well.

* trolls: trolling is older than the web. Platforms like IRC and newsgroups used to be rife with trolls if you wandered into the wrong place or said something stupid to the wrong people.

* AI: this isn't really a web problem but more just a natural advancement of technology. I mean we had bots in the 90s so you can bet if AI was as far along then as it is now then we'd have seen AI then as well.

* fake this and that: this has always been a problem. Let's not forget that Snopes.com was launched in 1994.

* big money: The web definitely attracts big money now, but even in the 90s some businesses were sinking huge quantities of money into the bet that it would pay off big. Probably the most famous example is Amazon, which was founded in 1994.

* swaying public opinion: I agree here. This more recent trend of using user identifiable information to target persuasive pieces (eg what Cambridge Analytica were doing) is very worrying too.

* gross content: shock sites are nearly as old as the web itself. Goatse, for example, is so old it's now part of the mainstream consciousness.

* spam: Spam on forums is less of a problem now than it's ever been thanks to new techniques in user verification (captcha and similar, developers being more aware to validate users with an activation email, etc). And spam email is an order of magnitude better now than it's been in years.
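The "activation email" technique mentioned above can be sketched roughly like this. Everything here is illustrative (function names, storage, expiry window are my own choices, not any particular site's implementation):

```python
import secrets
import time

# token -> (email, expiry); a real site would use a database, not a dict
PENDING = {}

def start_signup(email: str) -> str:
    """Generate a one-time activation token; a real site would email it
    to the address as a link rather than returning it directly."""
    token = secrets.token_urlsafe(32)
    PENDING[token] = (email, time.time() + 24 * 3600)  # valid for 24 hours
    return token

def activate(token: str) -> bool:
    """Activate the account only if the token is known and unexpired.
    pop() makes the token single-use."""
    entry = PENDING.pop(token, None)
    if entry is None:
        return False
    _email, expiry = entry
    return time.time() < expiry

token = start_signup("user@example.com")
print(activate(token))  # True: a valid, fresh token activates the account
print(activate(token))  # False: the token has already been consumed
```

The point of the pattern is just to prove the signup controls a real mailbox, which raises the cost of bulk account creation.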

* sock accounts: To be honest I think this is another area where they were more common then than they are now. This time I think it is due to the current trend of using real world identities. In the late 90s it was particularly easy to create sock accounts due to how easy it became to create a multitude of free email accounts (eg Yahoo Mail).

* very often lose out to the big sites: this is where I think the biggest shift has happened. People seem less interested in stumbling on new content than they did in the 90s. Of course this might just be age bias on my part; I was in college in the 90s so had both the time and the social crowd to stumble upon random stuff online. Whereas these days I'm older and want different things from the web, so I look to it more as a tool than a toy.

I'm not saying things are better now or then (actually I do kind of miss the 90s web), but there was definitely still a darker undercurrent present even in the 90s.



There were some bad actors in the 90s, but the scale is different now. And the effect is now raised to the level of concern that elections are affected. Some people are even talking about how it is threatening democracy. We don't think an arms race of technology with the bots and bad actors is going to work long-term. We need to change the economics so that bad actors go broke trying to act bad, while real people have to pay very little and not have to give up privacy. Government regulation might solve some problems, but it might also just put Big Social Media in bed with politicians and then little guys will be prevented from playing, and/or the control over online speech will just flip back and forth between opposing ideologies every election.


> There were some bad actors in the 90s, but the scale is different now.

It depends on what you're measuring. Take trolls, for example; is the proportion actually any bigger? Sure, there are more trolls, but there are also more users overall, so you'd expect the number of trolls to grow while the percentage stays the same.

> And the effect is now raised to the level of concern that elections are affected. Some people are even talking about how it is threatening democracy.

That's not really the same point you were making in your first post. I agreed with you about how worrying those specific cases are, but you were originally complaining about a more general problem of rot and giving examples of stuff that also existed in the 90s. But yes, I too am concerned about targeted "marketing" being abused in a way that is new to anything we had seen in the decades previous.

> We need to change the economics so that bad actors go broke trying to act bad, while real people have to pay very little and not have to give up privacy.

I don't disagree with you in principle, but it's a lot easier said than done. I mean, just look at how hard it has been getting a handle on spam email, and as a result it's now harder than ever to host your own mail server.

Ultimately I don't think it is possible to have privacy / anonymity and to prevent spam. I also don't think it's possible to prevent bad actors from automation while allowing the good actors to do the same things on the cheap. The problem is the same controls that are used to make it difficult for bad actors will also make it difficult for the good ones. Equally, the same controls that give us privacy also make it easy to create malicious sock accounts. It's a double-edged sword, like free speech allowing opinions we don't want to hear alongside those we do.

I think the best approach is education. There was a time when we were taught not to trust what we read online. Not to trust other people online. But things have since flipped and perhaps it's time to re-educate everyone to be cautious of anything presented online?


> I also don't think it's possible to prevent bad actors from automation while allowing the good actors to do the same things on the cheap. The problem is the same controls that are used to make it difficult for bad actors will also make it difficult for the good ones,

That's how RealPerson.io is different. It's pay to play so it doesn't try to out-tech the bad actors, and so it doesn't make it harder for the good actors.


I can think of a few ways it makes it harder for the good actors just from an initial scan of the site:

* It's an additional service that people need to discover / learn and sign up for

* It's not free. While the cost might seem cheap for people like ourselves in well-paid jobs, not everyone has a disposable income. Anyone in a poorly paid job or unemployed, or with expensive bills, family commitments, etc, wouldn't want to, or even might not be able to, afford such a service

* It requires people to pay with a bank card, which excludes anyone who doesn't have a bank account / credit card (only a small group of people, but they do exist). It also excludes anyone who doesn't feel comfortable entering payment details online (I personally only use PayPal these days on all bar a very small handful of sites)

* I also don't trust handing an "identity token" over to that site any more than I trust Facebook. What happens if/when they get hacked? Will they then have my bank card details? Will they be able to use my identity token to access other sites? These points matter to me because I know nothing about the company and they are gearing themselves up to be an obvious target for attackers.

So in summary there is no such thing as a perfect solution. By making it harder for bad actors you're going to make it harder for at least some good actors as well. That is an inescapable truth.


> I also don't trust handing an "identity token" over to that site any more than I trust Facebook. What happens if/when they get hacked? Will they then have my bank card details? Will they be able to use my identity token to access other sites? These points matter to me because I know nothing about the company and they are gearing themselves up to be an obvious target for attackers.

A RealPerson code is not an access token. It's a unique code generated for a particular website, more like a coupon code, and RealPerson.io will tell a website if it's valid. The website still handles creating the account and authentication like it did before, however it wants, using whatever authentication it wants. But now the website can make a backend call and ask RealPerson.io if the code given is valid, meaning someone (who knows who) generated a code for this site. That's it. Then the website can check that no other user has used that code when signing up on their website. The website doesn't know what account on RealPerson.io has the code. The website doesn't know what other websites the user uses (the codes are unique to each website). So RealPerson.io just knows codes and websites, and websites just know if the codes are valid. Nothing else is shared.
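From the website's side, the flow you describe might look something like this sketch. To be clear, this is my reading of your description, not RealPerson.io's actual API: the verification call is stubbed out, and the endpoint, code format, and table layout are all assumptions.

```python
import sqlite3

def code_is_valid(code: str, site_id: str) -> bool:
    """Ask the verification service whether this code was generated for
    this site. Stubbed here; a real site would make an HTTPS backend call
    to the service (endpoint and response shape are assumptions)."""
    return code.startswith("rp_")  # placeholder check for the sketch

def signup(db: sqlite3.Connection, username: str, code: str) -> bool:
    """Create an account only if the code is valid and unused on this site."""
    if not code_is_valid(code, site_id="example.com"):
        return False
    try:
        # the UNIQUE constraint enforces one account per code on this site
        db.execute("INSERT INTO accounts (username, code) VALUES (?, ?)",
                   (username, code))
        db.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # someone already signed up here with this code

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (username TEXT, code TEXT UNIQUE)")
print(signup(db, "alice", "rp_abc123"))    # True: fresh, valid code
print(signup(db, "mallory", "rp_abc123"))  # False: code already consumed
```

If that's roughly right, the site only ever stores opaque codes, so a breach of either party reveals nothing that links a person to their accounts elsewhere.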

Stripe processes the payment and credit card details are not stored in RealPerson.io. There are no identity tokens to steal. You have codes but you generate those on demand when you are signing up on websites. Once they are used, then there's nothing more you can do with them.

RealPerson.io doesn't have any personal details on you besides the payment token from Stripe. No bank details. No usernames or passwords for other sites. No usage on other sites.


Yeah, a lot of things that are seen as modern social media problems have been issues online (especially on community websites) for a while now.

Of course, part of the reason they seem worse now is because sites like Facebook, Twitter and Reddit have completely ignored all community management advice found online and done nothing to discourage bad actors or keep the quality control up.



