
> Even a marginal reduction in the risk of nuclear war, for instance, would be a very good thing.

Nobody disagrees with this. But since you brought it up, I think it's worth asking: what precisely has the EA movement done to reduce, even marginally, the risk of a nuclear war?



I believe the general consensus is that nuclear war is a grave risk and important to prevent, but that the effectiveness of philanthropic efforts in this area is low. Fighting malaria is still a more effective form of altruism.

https://www.givingwhatwecan.org/cause-areas/long-term-future...

https://80000hours.org/problem-profiles/nuclear-security/

https://www.effectivealtruism.org/articles/carl-robichaud-fa...


> what precisely has the EA movement done to reduce, even marginally, the risk of a nuclear war?

Nothing that I'm aware of. In general I think that most of the current "longtermist" organizations are useless - but that doesn't mean they're incorrect.


Here’s how I think about it: they aren't necessarily incorrect, but they are behaving in a misleading manner by labeling speculative ethical work as “EA” when it’s neither effective nor particularly altruistic (in the sense in which Singer uses the term).

I don’t belong to the EA movement, so maybe it isn’t my place to say what should or shouldn’t count as EA. But this kind of bootstrapping of classroom ethical dilemmas into a justification for diverting donated money away from human-welfare efforts astounds me.


> misleading manner by labeling speculative ethical behavior as “EA” when it’s neither effective

I agree that they're ineffective, but I don't see any evidence that the MIRI types are lying - they're trying to help, they think they're helping ... and they're mostly wrong. It happens.

> But this kind of bootstrapping of classroom ethical dilemmas into actually diverting donated money away from human welfare efforts astounds me.

Are you sure that's what's actually going on? Donations, and especially effort, are not fungible. The techy libertarian futurist types in the MIRI orbit, for instance, are, so far as I can tell, simply not as bothered by poverty as Singer et al. A world in which they're not worried about AI risk is not a world in which they give just as much to other causes; it's a world in which they give less.



