I don't know about "EA rationales", but rule utilitarianism is unpopular among philosophers largely because it's believed to collapse into act utilitarianism. The phil101 caricature is that rule utilitarianism says "no stealing, because 'no stealing' is a rule that increases utility" and act utilitarianism says "steal if you think it maximizes utility" - but on the one hand "no stealing, except for bread to feed your starving child" is an even better rule, and so on; and on the other hand not incorporating uncertainty into your decision procedure isn't being a good act utilitarian, it's just being an idiot. And so in practice they endorse the same decision procedures in pursuit of the same end.
> Consequently, we should divert money away from the welfare of people living today and towards efforts that maximize the likelihood of a galaxy filled with colonized planets. If that sounds ridiculous, it's because it is.
It sounds ridiculous to you because you framed it in an uncharitable way. Another way of saying "maximize the likelihood of a galaxy filled with colonized planets", so far as contemporary people are concerned, is "minimize the likelihood of civilizational collapse". Even a marginal reduction in the risk of nuclear war, for instance, would be a very good thing.
> Even a marginal reduction in the risk of nuclear war, for instance, would be a very good thing.
Nobody disagrees with this. But since you brought it up, I think it's worth asking: what precisely has the EA movement done to reduce, even marginally, the risk of a nuclear war?
I believe the general consensus is that nuclear war is a great risk and important to prevent, but that the effectiveness of philanthropic efforts in that area is low. Fighting malaria is still a more effective form of altruism.
> what precisely has the EA movement done to reduce, even marginally, the risk of a nuclear war?
Nothing that I'm aware of. In general I think that most of the current "longtermist" organizations are useless - but that doesn't mean they're incorrect.
Here’s how I think about it: they aren't necessarily incorrect, but they are behaving in a misleading manner by labeling speculative ethical behavior as “EA” when it’s neither effective nor particularly altruistic (in the sense that Singer uses).
I don’t belong to the EA movement, so maybe it just isn’t my place to say what should or shouldn’t be EA. But this kind of bootstrapping of classroom ethical dilemmas into actually diverting donated money away from human welfare efforts astounds me.
> misleading manner by labeling speculative ethical behavior as “EA” when it’s neither effective
I agree that they're ineffective, but I don't see any evidence that the MIRI types are lying - they're trying to help, they think they're helping ... and they're mostly wrong. It happens.
> But this kind of bootstrapping of classroom ethical dilemmas into actually diverting donated money away from human welfare efforts astounds me.
Are you sure that's what's actually going on? Donations and especially effort are not fungible. The techy libertarian futurist types in the MIRI orbit, for instance, are so far as I can tell simply not as bothered by poverty as Singer et al. A world in which they're not worried about AI risk is not a world in which they give just as much to other causes; it's a world in which they give less.
> Haskell is basically a category theory framework.
Haskell probably draws more from category theory than any other mainstream language, but in absolute terms that's still not very much. It's okayish for modelling cartesian closed categories, but if you want any more structure than that things get quite painful. Even something as simple as a category with finitely many objects requires stupid amounts of type-level boilerplate.
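For a sense of what I mean, here's a sketch (mine, and only one of several possible encodings) of a category with exactly two objects and one non-identity arrow. Even this toy needs DataKinds and GADTs plus a hand-written composition table, and you still can't directly give it a Control.Category instance, because a single polymorphic identity arrow would need extra singleton machinery:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
    module TwoObjectCategory where

    import Data.Kind (Type)

    -- The two objects, promoted to the type level.
    data Ob = A | B

    -- Hom-sets indexed by source and target object.
    data Hom :: Ob -> Ob -> Type where
      IdA :: Hom 'A 'A
      IdB :: Hom 'B 'B
      F   :: Hom 'A 'B   -- the single non-identity arrow

    -- Composition has to be written out case by case; the type indices rule
    -- out the illegal pairings, but every legal one must still be enumerated.
    compose :: Hom b c -> Hom a b -> Hom a c
    compose IdA IdA = IdA
    compose IdB IdB = IdB
    compose IdB F   = F
    compose F   IdA = F

    -- Note: there is no way to write a single polymorphic identity
    --   identity :: Hom a a
    -- without a typeclass or singleton per object, which is why this
    -- doesn't fit Control.Category's `id` directly.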
No, it isn't. A star-autonomous cartesian category is just a preorder. Chu(Set, n) is star-autonomous but not a preorder, and therefore not cartesian. Or more concretely: cartesian categories are models for type systems with copying and deleting, but the Chu construction builds a model for a linear type system.
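If it helps to see it concretely, here's a rough Haskell rendering of Chu objects and morphisms (my own sketch, nothing canonical, and it elides the actual star-autonomous structure): an object of Chu(Set, S) is a set of points, a set of states, and a pairing into S; a morphism is a forward map on points plus a backward map on states, with a compatibility condition the types alone don't enforce. The involutive duality is just swapping points and states.

    -- An object of Chu(Set, s): points a, states x, and a pairing into s.
    data Chu s a x = Chu { pairing :: a -> x -> s }

    -- A morphism Chu s a x -> Chu s b y: forward on points, backward on states.
    data ChuMap a x b y = ChuMap
      { fwd :: a -> b
      , bwd :: y -> x
      }

    -- The condition a morphism must satisfy (a property to check, not
    -- something the types enforce): pairing a point pushed forward against a
    -- target state agrees with pairing it against the pulled-back state.
    compatible :: Eq s => Chu s a x -> Chu s b y -> ChuMap a x b y -> a -> y -> Bool
    compatible cA cB f a y = pairing cB (fwd f a) y == pairing cA a (bwd f y)

    -- Involutive duality: swap points and states.
    dual :: Chu s a x -> Chu s x a
    dual c = Chu (\x a -> pairing c a x)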
Indeed, but the other way around. Tolkien hates industrialization; he hates technology as a whole. He hates GRRM-esque questions like "Does Aragorn have a tax policy that makes sense?"
So he would hate Thiel and Luckey for their technology and their control, which is the stuff of Sauron and the Orcs. Maybe Luckey even more, since virtual reality would be a horror show to a traditionalist mind anyway.
But that's fine, since the entire point of the Sagas for Anglo culture is for the people in it to twist them the way they want, after all.
> But some sets of data and some operations on them that fulfill some formally stated requirements are just an abstract algebra, aren't they?
Not quite. A variety of algebras (which is usually what people have in mind when they talk about "algebraic structures" in general) is a collection of operations with equational laws, meaning that they're of the form "for all x_0, x_1, ... (however many variables you need), expression_1(x_0, x_1, ...) = expression_2(x_0, x_1, ...)", where the expressions are built up out of your operators.
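A loose Haskell analogy (mine, and it is only an analogy): a variety looks like a typeclass whose laws are all unconditional equations between terms built from the class operations, e.g. monoids:

    -- The signature: one constant and one binary operation.
    class MyMonoid m where
      unit  :: m
      (<+>) :: m -> m -> m

    -- The laws are unconditional equations, quantified over all elements:
    --   (x <+> y) <+> z == x <+> (y <+> z)
    --   unit <+> x      == x
    --   x <+> unit      == x
    -- which is exactly the "expression_1 = expression_2" shape, so they can
    -- be stated (and property-tested) without any side conditions.
    assocLaw :: (MyMonoid m, Eq m) => m -> m -> m -> Bool
    assocLaw x y z = ((x <+> y) <+> z) == (x <+> (y <+> z))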
Fields are the classic example of a structure studied by algebraists that is nonetheless not a variety of algebras: the inverse law can't be stated equationally, because x * x⁻¹ = 1 only holds for x ≠ 0, and "for all x except zero" isn't an equation. This makes fields much harder to work with than e.g. groups or rings, in both math and programming.
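A sketch (again mine, with made-up names) of how that shows up in code: a total `recip` has to do something arbitrary at zero, so an honest signature surfaces the partiality explicitly:

    class MyField f where
      zero, one :: f
      add, mul  :: f -> f -> f
      neg       :: f -> f
      -- No total inverse: zero has none, so the operation is partial.
      recipF    :: f -> Maybe f

    -- Law (conditional, hence not "equational" in the variety sense):
    --   x /= zero  implies  fmap (mul x) (recipF x) == Just one

    instance MyField Double where
      zero = 0
      one  = 1
      add  = (+)
      mul  = (*)
      neg  = negate
      recipF 0 = Nothing          -- the "except zero" clause made explicit
      recipF x = Just (1 / x)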