I want to make sure it's said that effective altruism is intertwined with the rationalist movement (Scott Alexander[1], Eliezer Yudkowsky[2], Robin Hanson[3]), and that among these so-called rationalists, and the parts of the effective altruist community where they hold sway, there are a lot of advocates of AI risk mitigation research (what this means, who knows). These people see AI as the greatest risk to humanity, provided you agree to a very long list of tenuous assumptions and implications.
I don't think the one paragraph in this article where this is mentioned does enough to emphasize this part of the community. Many members of effective altruism use it as a front to recruit people into their belief system, which is centered around devotion to / fear of future AIs that exist solely as thought experiments. And while their leaders don't necessarily explicitly endorse it, the communities they foster tend to also be fairly right-wing / race realist / misogynistic / bigoted.
Many of the people in the rationalist and effective altruist community believe that if they don't help create AI, any future AIs will create a hell and punish them in it. Seriously. That is a serious belief.
First off, yes, there is overlap between effective altruism and the rationalism community. I think that makes sense when you want to try to use reason instead of intuition to make decisions.
I fail to see why you should qualify them as "so-called" rationalists. Though besides Yudkowsky (and maybe even him), I honestly haven't read enough of them to defend them. If you do want to sling some defamatory remarks at their expense, though, I feel they should be backed up (or left unsaid).
You mention that some people see AI as the greatest risk to humanity. Perhaps I misunderstand, but the way you phrase this, it sounds like you think this is absolutely ridiculous. If so, why would that be? And what long list of assumptions would you need to agree to? And why would it be bad for there to be a common starting point for discussing this? I think there's a fair amount of uncertainty in how any superintelligent entity would act, so certainty that AI will be terrible seems silly. However, a strong belief that it can pose a large threat seems, honestly, evident.
You say people use "AI risk" as a front to recruit people into their belief system. There is so much wrong with this... First off, why is it a front? That implies deception. Secondly, it is one of many facets. The core of EA is the desire to have a (large) positive impact. If some people think they can make their impact by working on the AI safety issue, why do you feel the need to portray that as nefarious? Finally, "belief system" sounds incredibly dogmatic. EA is not a church. Yes, there is a set of beliefs that most people in the EA community would subscribe to. But I don't experience EA as some echo chamber where everyone is forced into some kind of mold. Rather, people challenge both each other's and their own ideas. There are inevitably going to be some biases and filters, but your portrayal of EA as a cult (purposeful or not) is inaccurate.
As far as EA communities tending to be alt-right... what on earth are you smoking? I help run a local chapter and the focus is highly left-wing. And anyone I've noticed who's slightly more right-wing is definitely not on the misogynistic or racist side. I recently listened to an 80,000 Hours podcast with Bryan Caplan and noticed he's libertarian. While I think libertarian views are mostly bonkers, at the very least the way his libertarian views showed (e.g. arguing for open borders) was not insane on the level I'm used to from libertarians. Either way, this is an exception. Even if you can list some well-known names who also hold some strange views, I can say with a high degree of certainty that they are not even remotely representative of the community as a whole, especially not as I've seen it in the Netherlands.
FINALLY: I've honestly only ever seen Roko's Basilisk being mentioned on a meme page for EA. So much for taking it seriously.
Eloquent is good if you have prior programming experience. It starts easy, but the fourth chapter quickly moves to a level above what I'd expect a beginner to understand without some exposure to programming or general CS theory. The exercises include building a recursive list and writing a recursive deep comparison.
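To give a sense of the level that exercise expects: a recursive deep comparison checks whether two values are structurally equal, recursing into object properties rather than comparing references. Here's a minimal sketch of one possible solution (my own illustrative version, not the book's official answer):

```javascript
// deepEqual: true if a and b are the same primitive value, or are
// objects with the same set of keys whose values are deepEqual.
function deepEqual(a, b) {
  // Identical primitives (or the very same object reference).
  if (a === b) return true;
  // If either isn't a non-null object, they can't be deeply equal now.
  if (typeof a != "object" || a == null ||
      typeof b != "object" || b == null) return false;
  let keysA = Object.keys(a), keysB = Object.keys(b);
  if (keysA.length != keysB.length) return false;
  // Every key of a must exist in b with a deeply-equal value.
  for (let key of keysA) {
    if (!keysB.includes(key) || !deepEqual(a[key], b[key])) return false;
  }
  return true;
}
```

The recursion is the part that tends to trip up beginners: the base cases (primitives, nulls) have to come before the structural case, or the function loops on non-object inputs.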
Check out more: https://rationalwiki.org/wiki/Effective_altruism
http://benjaminrosshoffman.com/effective-altruism-is-self-re...
[1] http://slatestarcodex.com/2018/03/26/book-review-twelve-rule...
[2] https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs...
[3] https://twitter.com/robinhanson/status/989535565895864320?la...