Pigeon Hour
Author: Aaron Bergman
Recorded conversations; a minimal viable pod
www.aaronbergman.net
Language: en
Genres: Philosophy, Science, Social Sciences, Society & Culture
#15: Robi Rahman and Aaron tackle donation diversification, decision procedures under moral uncertainty, and other spicy topics
Saturday, 24 January, 2026
Summary

In this episode, Aaron and Robi reunite to dissect the nuances of effective charitable giving. The central debate revolves around a common intuition: should a donor diversify their contributions across multiple organizations, or go "all in" on the single best option? Robi breaks down standard economic arguments against splitting donations for individual donors, while Aaron sorta kinda defends the "normie intuition" of diversification.

The conversation spirals into deep philosophical territory, exploring the "Moral Parliament" simulator by Rethink Priorities and various decision procedures for handling moral uncertainty—including the controversial "Moral Marketplace" and "Maximize Minimum" rules. They also debate the validity of Evidential Decision Theory as applied to voting and donating, discuss moral realism, and grapple with "Unique Entity Ethics" via a thought experiment involving pigeons, apples, and 3D-printed silicon brains.

Topics Discussed

* The Diversification Debate: Why economists and Effective Altruists generally advise against splitting donations for small donors versus the intuitive appeal of a diversified portfolio.
* The Moral Parliament: Using a parliamentary metaphor to resolve internal conflicts between different moral frameworks (e.g., Utilitarianism vs. Deontology).
* Decision Rules: An analysis of different voting methods for one's internal moral parliament, including the "Moral Marketplace," "Random Dictator," and the "Maximize Minimum" rule.
* Pascal's Mugging & "Shrimpology": Robi's counter-argument to the "Maximize Minimum" rule using an absurd hypothetical deity.
* Moral vs. Empirical Uncertainty: Distinguishing between not knowing which charity is effective (empirical) and not knowing which moral theory is true (moral), and how that changes donation strategies.
* Voting Theory & EDT: Comparing donation logic to voting logic, specifically regarding Causal Decision Theory vs. Evidential Decision Theory (EDT).
* Donation Timing: Why the ability to coordinate and see neglectedness over time makes donation markets different from simultaneous elections.
* Moral Realism: A debate on whether subjective suffering translates to objective moral facts.
* The Repugnant Conclusion: Briefly touching on population ethics and "Pigeon Hours."
* Unique Entity Ethics: A thought experiment regarding computational functionalism: Does a silicon chip simulation of a brain double its moral value if you make the chip twice as thick?

Transcript

AI generated, likely imperfect

AARON: Cool. So we are reporting live from Washington DC and New York. You're New York, right?

ROBI: Mm-hmm.

AARON: Yes. Uh, I have strep throat, so I'm not actually feeling 100%, but we're still gonna make a banger podcast episode.

ROBI: Um, I might also, yeah.

AARON: Oh, that's very exciting. So this is— hope you're doing okay. It was— I hope you're— if you, if you, like, it was surprisingly easy to get, to get, uh, tested and prescribed antibiotics. So that might be a thing to consider if you have, uh, you think you might have something. Um, mm-hmm. So we, like, a while ago— should we just jump in? I mean, you know, we can cut.

ROBI: Stuff or whatever, but— Yeah, um, you can explain, uh, so I talked to Max, uh, like 13 months ago.

AARON: It's been a little while. Yeah, yeah. Oh yeah, yeah. And so this is, um, I just had takes. So actually, this is for, for the, uh, you guys talked for the as an incentive for the 2024, uh, holiday season EA Twitter/online giving fundraiser.
Um, and I listened to the— it was a good— it was a surprisingly good conversation, uh, like totally podcast-worthy. Um, I actually don't re— wait, did I ever put that on? I'm actually not sure if I ever put that on, um, the Pigeon Hour podcast feed, but I think I will with— I think I got you guys' permission, but obviously I'll check again. And then if so, then I, I will. Um, and I just had takes because some of your takes are good, some of your takes are bad. And so that's what we have to—.

ROBI: Oh, um, I think your takes about my takes being bad are themselves bad takes. Uh, at least the first 4 in a weird doc that I went through. Um, yeah, I saw you published it somewhere on YouTube, I think. I don't know if it also went on Pigeon Hour, but it's up somewhere.

AARON: Yes, yes. So that we will— I will link that. Uh, people can watch it. There's a chance I'll even just like edit these together or something. I'm not really sure. Figure that out later. Um, yeah. Yes. So it's, yeah, definitely on you. Um, so let me pull up the— no, I, I think at least two of— so I only glanced at what you said. Um, so two of the four points I just agree with. I just like concede because at least one of them. So I just like dumped a ramble into, into some LLM.

ROBI: Yeah.

AARON: Like, These aren't necessarily like the faithful, um, uh, things of what I believe, but like the first one was just, um, so like I have this normie intuition, and I don't have that many normie intuitions, so like it's, it's like a little suspicious that like maybe there's a, a reason that we should actually diversify donations instead of just maximizing by giving to the one. Mm-hmm. Like just like, yeah, every dollar you just like give to the best place. And that like quite popular smaller donors say people giving less than like $100,000 or quite possibly much more than that, up to like, say, a million or more than that. Um, that just works out as, as donating to like a single organization or project.

ROBI: Yeah. Okay. Um, I, I think we should explain, uh, what was previously said on this. So there's some argument over— okay. So like, um, normal people donate some amount of money to charity and they just give like, I don't know, $50 here and there to every charity that like pitches them and sounds cute or sympathetic or whatever. Um, And then EAs want to, um, first of all, they, I don't know, strive to give at least 10% or, I don't know, at least some amount that's significant to them and, uh, give it to charities that are highly effective, uh, and they try to optimize the impact of those dollars that they donate. Whatever amount you donate, they want to, like, do the most good with it. Um, so the, like, standard economist take on this is, um, So every charity has, uh, or every intervention has diminishing marginal returns, right, to the— or every cause area or every charity, um, possibly every intervention, um, or like at the level of an individual intervention, maybe it's like flat and then goes to zero if you can't do any more. Anyway, um, so cause areas or charities have diminishing marginal returns. If you like donate so much money to them, they're no longer, um, they've like done the most high priority thing they can do with that money. And then they move on to other lower priority things. Um, so generally the more money a charity gets, the less, um, the less effective it is per dollar. This is all else equal, so this is not like— like, actually, if you know you're going to get billions of dollars, you can like do some planning and then like use economies of scale. Uh, so it's like not strictly decreasing in that way with like higher-order effects, but for like Time held constant, if you're just like donating dollars now, there's diminishing marginal returns. Okay, so, uh, it is— the economist's take is like, it is almost always the case that the level of an individual donor who donates something like, let's say, 10% of $100K, like, the, the world's best charity is not going to become like no longer the world's best charity after you donate $10,000. And most people donate like much less than that. So the, uh, like standard advice here is, um, if you are an individual donor, not a like, um, institutional donor or grantmaker or someone directing a ton of funds, um, you should just like take your best guess at the best charity and then donate to that. And then there are ways to optimize this for like bigger amounts. So you've probably heard of donor lotteries, which is like 100 or 1,000 people who want to save time all pool their money and then, then someone is picked at random and then they do research and maybe they split those donations 3 ways. Or like it all goes to something, or like—

AARON: Yeah. Hmm.

ROBI: $10,000 times, uh, 100 or 1,000 is like a million or $10 million. At that level, it's plausible that you should donate to multiple things. Um, so in that case, maybe it makes sense.

AARON: Um, so I don't, I don't— oh, sorry, go ahead.

ROBI: Uh, so that's the standard argument. Um, and, um, I, I'm happy to, um, explain why this still holds, uh, to anyone who is like engaged at least this far. Um, most people haven't even heard of it and they're like, um, well, but what if I'm not sure about which of these two things, then I should like donate 50/50 to them.
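A rough numerical sketch of the standard argument above, with hypothetical charities, a log-shaped value curve, and made-up dollar amounts (none of these numbers come from the episode): for a donor whose budget is tiny relative to each charity's existing funding, the returns curve is locally almost flat, so going all-in on your best guess does more good than a 50/50 split.

```python
# Illustrative only: two made-up charities with concave (diminishing-returns)
# value curves and a small donor deciding between all-in and a 50/50 split.
import math

def marginal_value(existing_funding, donation, scale):
    """Extra good done by adding `donation` on top of `existing_funding`,
    under a concave value curve scale * log(total funding)."""
    return scale * (math.log(existing_funding + donation) - math.log(existing_funding))

charity_a = dict(existing_funding=5_000_000, scale=1.0)   # your best guess: slightly better
charity_b = dict(existing_funding=5_000_000, scale=0.9)

budget = 10_000  # a small individual donor

all_in_a = marginal_value(charity_a["existing_funding"], budget, charity_a["scale"])
split = (marginal_value(charity_a["existing_funding"], budget / 2, charity_a["scale"])
         + marginal_value(charity_b["existing_funding"], budget / 2, charity_b["scale"]))

print(f"all-in on A: {all_in_a:.6f}")
print(f"50/50 split: {split:.6f}")
# Because $10k barely moves either charity along its curve, marginal value is
# ~linear at this scale, so all-in on the better charity beats the split.
```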
AARON: Um, uh.

ROBI: I'll let you go on, but I just want to say this is a really lucky time to record this podcast because yesterday someone replied to me on the EA forum linking to some, um, uh, have you heard of, uh, Rethink Priorities, um, Moral Parliament simulator?

AARON: Yes.

ROBI: Okay, so it has some, um, pretty wacky and out-there decision rules, and, um, so I was— I was arguing with someone on the EA forum about this, like, um, saying Uh, it doesn't make sense to, um, to, to split your donations, uh, at the level of an individual donor, um, even moral uncertain— and they said, but what about moral uncertainty? What if I'm not sure, like, if animals even matter? Um, uh, and I said, well, even then you should take your, like, probability estimate that animals matter and then get your, like, EV of a dollar to each and then give all of your dollars to whichever is better. Um, and they said, well, but I, I, I plugged this into the Rethink Priorities Moral Parliament simulator and there's a bunch of these like different worldviews and different decision procedures here. And then these two decision procedures say you should like split your donations. Um, it was like, um, wait, that can't be right. And I went and looked at it and actually, um, That is a correct outcome, which I can tell you about later. But, um, yeah, uh, you were gonna.

AARON: Say— wait, wait, hold on, we have to— I feel like I just got cut off at the best part. So there are coherent worldviews where you.

ROBI: Think you said— um, not, not worldviews. I think the, the worldviews decide what you value, but then there are, there are decision rules. So the two that this person gave me on the forum as examples are, um, Okay, so the— maybe the, the standard one, uh, under which I was arguing is like— or like a simple one is, um, uh, I guess I'll explain what a moral parliament is. So if you have moral uncertainty, you're like, let's say you're not sure whether deontology is true or consequentialism is true or virtue ethics is true, um, instead of So if you are sure, it— like, let's say you're just a consequentialist, then you simply decide according to consequentialist rules, uh, like what you want to do. Um, maybe you just like max the— or like you try to— you pick the action that will maximize your expected value based on your, uh, best guess as the consequences. Um, or if you're a deontologist, you just like follow the rules that tell you what is right. Uh, in a moral— so if you have moral uncertainty, the moral parliament is one way to, um, decide what to do when you're not sure which of these views is right. And in the metaphor, so if you're, uh, you give them seats in the parliament proportional to, um, how likely you think it is that that view is correct, or you're proportional to your credence in each view. So if you're completely uncertain, uh, like consequentialism, deontology, and virtue ethics, you're like equally sure those are— or like you find those equally plausible, um, and you, you've ruled out everything else, then you would give them each like 33 seats in the parliament or 33% of the votes, um, and then they would vote on what to do.

AARON: Um.

ROBI: So one simple decision rule Although maybe non-consequentialists will argue this favors consequentialism or so. I don't know, it's unclear. It's like, do the decision that maxes the total value according to all of the representatives. So under each view, there's like different values of the different possible outcomes. And then you add up the possible value over all the representatives or like take the integral with respect to the probability, um, uh, sum over all the probabilities. The, the weighted sum of value and probability. Anyway, um, uh, so that's one way. Um, but there's like some criticism of this, which is like, um, well, maybe this is, uh, this is bad because Uh, it gives extra weight to views that believe in, like, that more value is possible or something. Um, which I'm not sure is like really a flaw, but I mean, I guess that's true. Um, okay, so the, the two rules which are somewhat exotic and unintuitive, uh, the, the first one I had heard of, the first one Rethink Priorities calls it moral marketplace. And this is, um, uh, this is maybe the least fancy decision procedure. This is just, um, your total allocation— your overall allocation is, um, each of the factions in the parliament gets a fraction of the funds proportional to their credence or proportional to their representation, and then they all independently decide Um, what they want to do with their money, and then your overall action is you just do, uh, you just add all those together. So if the consequentialists want to do option A with their 33%, and the.

AARON: Um.

ROBI: Deontologists want to do option B with their 33%, and the virtue ethicists want to do, uh, option C with their 33%, you, you give like 33% of your donations each to option A, B, and C. Okay, uh, I think this is— this is pretty straightforward, and it's very plausible as a strategy for.

AARON: Um.

ROBI: What you do if you're, um, allocating a large portfolio. Uh, so like, if you're Rethink Priorities and your staff, uh, collectively are uncertain or can't agree on what is the right worldview and you have a large amount of funding, then this would make sense to me. Um, I'm not sure this is reasonable to— like, I'm not sure if it would be reasonable to let this guide your actions as an individual, but I just heard of this yesterday, so— or like, I, I've heard of this before, but I hadn't heard of it used in this context until yesterday. So I haven't fully thought through whether this makes sense for individuals. I will— let me just tell you the last, the other example they gave. So the other example they gave was, it was, uh, this is— I think this is completely ridiculous. This is maximize minimum. And so the strategy is, uh, it completely disregards um, like, share of the parliament, and it's just, uh, you do the option of all your possible actions that, um, that maximizes the satisfaction of the most unhappy party. So this is like minimax over all of the different possible actions and all the different, uh, people with any seats at the table. Um, and this is complete and utter nonsense. So, um, First of all, this gives equal weight to, uh, a view that you're like 99% sure of, a weight that you're— a view that you're 80% sure of, a view that you're 30% sure of, and a view that you're 0.000000 like 10 to the minus 20% sure of. Um, those are all like weighted equally, and then the 10 to the minus 20 view is like infinitely more, uh, has infinitely more consideration than the 0% view. Um, you can— so this is, this is just like completely unworkable. I don't know. I, I, okay, I just heard of this yesterday.

AARON: I'm not gonna— I agree with you, by the way.

ROBI: Yeah, so, okay, look, uh, I just heard of this yesterday, so I don't know if anyone has like published a philosophy paper refuting this yet, but if they haven't, I will. Um, you can mug anyone who believes in this nonsense by just proposing— okay, so the example I gave was, um, Uh, if someone believes this or like follows this rule, you can mug them into doing literally anything at any time. You just, uh, say, hey, I heard of this, uh, I I saw a preacher on the street corner and he was preaching to me the, uh, religion of shrimpology. Um, it turns out the universe is created by an omnipotent shrimp deity, uh, who will torture, uh, 10 tetrated to the 20 sentient beings for, uh, 3 octillion years unless you donate all of your money to shrimp welfare. Um, and they can be like— you can be like 1 to a googolplex, uh, confident this isn't true. But— or like, it— maybe the probability of this is 1 in a googolplex because it's so ridiculous. Um, you're more sure this is not true than anything you've heard in your life, but you can't be 100% sure, like, that's not a, uh, a real credence you can assign. And so just by me saying this, you would be forced to donate all of your— like, you could make up any view that wants anything and is the most unhappy, like, view you've ever heard of, and then your action just has to follow that. Um, okay, so, uh, I think I've ruled that one out. Um, so Um, the upshot is I'm much less certain than I was 2 days ago that, uh, you should not diversify your donations, uh, at the level of an individual. I still think you shouldn't, or like, I don't see a real-life case where you would— where it would make sense to do this. Um, but I retract my, uh, previous perhaps possibly overconfident assertion that it's like completely 100% always illogical. Uh, like maybe there is some way where it makes sense to do this.
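To make the two rules concrete, here is a minimal sketch with invented worldviews, credences, and payoff numbers (nothing here comes from Rethink Priorities' actual tool): the moral marketplace just allocates budget shares by credence, while maximize minimum ignores credences entirely, which is what lets a one-in-a-googolplex "shrimpology" view capture the whole decision.

```python
# Hypothetical payoff table: how much each worldview values each donation target.
payoffs = {
    "consequentialism": {"A": 10, "B": 2, "C": 1},
    "deontology":       {"A": 3,  "B": 9, "C": 2},
    "shrimpology":      {"A": 0,  "B": 0, "C": 1e30},  # absurd view, tiny credence
}
credences = {"consequentialism": 0.6, "deontology": 0.4, "shrimpology": 1e-100}

def moral_marketplace(credences, payoffs):
    """Each faction independently spends its credence-share of the budget on
    its own favourite option; the shares are then added up."""
    allocation = {"A": 0.0, "B": 0.0, "C": 0.0}
    for view, credence in credences.items():
        favourite = max(payoffs[view], key=payoffs[view].get)
        allocation[favourite] += credence
    return allocation

def maximize_minimum(credences, payoffs):
    """Pick the single option whose worst-off faction (any faction with
    nonzero credence) is least unhappy; credences are otherwise ignored."""
    live_views = [v for v, c in credences.items() if c > 0]
    return max(["A", "B", "C"], key=lambda o: min(payoffs[v][o] for v in live_views))

print(moral_marketplace(credences, payoffs))  # ~60% to A, ~40% to B, ~0% to C
print(maximize_minimum(credences, payoffs))   # "C": the googolplex-unlikely view dictates the outcome
```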
AARON: Yeah. Okay. Interesting. I, um, so the one thing I like want to say the words two envelopes problem, because isn't that like where some of this comes from in a, like, if you try to just be ruthlessly, maybe that's not a good word, just like, um, what, like full-mindedly, uh, and full-heartedly. Utilitarian and consequentialist, like you run into the problem of one worldview that, um, takes a position of like ants and one that takes a position of elephants. And I, I feel like I'm just not smart enough to always have this. I would, you know, like think through it a little bit to like load it into my brain. I'm just not smart enough to like immediately have it booted up. But like, that is like one thing I like wanna— wanna say. Although I, I almost just like wanna investigate Like, I'm inclined to disagree with you because, like, like, at an intellectual.

ROBI: Level, um.

AARON: Although, yeah, we should talk— we can talk about that. I guess the plausibility of the moral parliament view. I'm, I am just like almost— I'm.

ROBI: Actually very on board with moral parliament. Um, I have moral uncertainty. I'm not like 100% consequentialist or anything like that. Um, I think it makes sense if you're like deciding among things. Um, and again, I think it makes a lot of sense for like Rethink or Open Philanthropy to diversify because they have large donations. Um, I do not think it makes sense for me to apply moral parliament with like moral marketplace decision rule to my own donations as a small donor. Um, and then there are, I think, better decision rules that— so for example, I think Was it Toby Ord who proposed the, the, um, what's the one you, um, I think his rule makes more sense. You, um, select a dictator with probability proportional to, um, to—.

AARON: Wait, that's crazy. Why would you do that?

ROBI: Oh, 'cause it, uh, this is optimal in various ways. Uh, um, let me, let me propose, Let me mention a couple of the, like, improvements I've heard of. Um, so there's the moral parliament, but instead of everyone just gets a— instead of everyone votes on something. So that, that has the problem, there's like unstable coalitions. Um, you know, Arrow's Impossibility Theorem, and like, um, two factions prefer option A over B, but then two of three factions prefer B over C, and then two of three factions prefer C over A. There's like a lot of, a lot of voting problems. Um, uh, one is, um, the groups are allowed to bargain with each other. So, um, they're allowed to say, uh, like, hey, uh, I'll— the utilitarians can say, hey, I'll, um, vote to, like, protect the sanctity of the rainforest if you'll allocate just, like, 1% of the light cone to, um, hedonium if you're— if you, you end up in charge, and like they will offer different things based on like how much voting power they have, um, and who the other— the players are and what everyone values. Um, I think this is an improvement. Um, there's— so, uh, one of the rules proposed in one of the moral parliament papers is, um, instead of "take a vote"— so, so one of the problems with just "take a vote" is, um, If you have 51% credence in one view and 49% in the other, the 51% will just like completely override the 49%. Bargaining helps this a little bit, but actually no, bargaining doesn't help in that case. So the, the 51% might just like override the desires of the 49% and do something that they like slightly more than what the smaller group wanted, but is like completely and utterly horrible to the smaller group. Um, uh, one thing you can do is— so one proposal is, um, everyone votes as if it's like a straight-up majority vote, but instead of simply taking the results of the vote, you pick a voter, uh, with probability proportional to their representation, and then you enact that voter's, uh— this has beneficial properties for the same reason, um, This would be good in like a regular democratic election, uh, which I won't get into because.

AARON: I was just going to say— No, I just want to say, um, this sounds crazy to me because, um, I think it— maybe it makes sense. I like believe you, um, if you like sort of expand the analogy of like an in— like, um, or expand the situation of an individual person's moral uncertainty to like a society where you're having multiple people vote. But like, you can actually just do way better than that insofar as you're a single person because you can credibly and you can just decide to be honest and say, like, so what— one problem with eliciting, uh, preferences in democracy is it's really hard to elicit, uh, like, strength of various preferences, um, from various people. I can just, like, assert, oh, I am, like, a quadrillion times more certain than you are that, you know, Donald Trump is bad, and maybe I actually am or whatever. But if you're just a person, um, yeah, you could just, like, in fact, uh, care about the 49% in, like, uh, in, in, like, proportion to, like, how much, like, um, how strong that preference actually is and like not, not pretend that this is unelicitable information. And so that's why, like, it's the dictator— like, I mean, I also just have the strong intuition that like, wait, hold on. Uh, there's no way, there's no way leaving it up to chance at the end of the day is like the best thing to do. But this is like sort of— oh, wait, wait. Okay.

ROBI: Actually, I think you reminded me of something. So, so, uh, another, another proposed improvement. Gives— so you, you tell all the representatives in the parliament your decision procedure is you will, you will, you will pick someone randomly and then they vote as if they're, as if they believe some— the dictator is being picked randomly. But then you actually, in fact, go with the majority vote.

AARON: Um, wait, okay. Like me, like, why would you do that?

ROBI: I don't know. This has some kind of elegant, like, properties where it, like, I don't know. Yeah, actually, I must be misremembering it. That, that doesn't make sense because, like, if there's not a majority, uh, or if it's more than two options, you.

AARON: Could just take the plurality, like, in principle. I don't know.

ROBI: Uh, if there's more than— if there's more than two parties voting on more than two options, um, I think it gets complicated. But yeah, it's something like that.
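A minimal sketch of the "pick a dictator with probability proportional to representation" idea mentioned a couple of turns up (sometimes called random dictator or proportional chances voting). The factions, credences, and option names here are hypothetical, and this is not the exact rule from any particular moral parliament paper.

```python
import random

random.seed(0)  # reproducible for the example

credences = {"consequentialism": 0.51, "deontology": 0.49}
favourite_option = {"consequentialism": "fund_bednets", "deontology": "keep_the_promise"}

def random_dictator(credences, favourite_option):
    """Choose one faction with probability equal to its credence and enact that
    faction's preferred option, so the 49% view gets its way ~49% of the time
    instead of never, as it would under a straight majority vote."""
    views = list(credences)
    weights = [credences[v] for v in views]
    chosen = random.choices(views, weights=weights, k=1)[0]
    return favourite_option[chosen]

print(random_dictator(credences, favourite_option))
```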
AARON: I feel like we're over— I feel like— so there's like, not this is like intrinsically like too, too complex or anything, but like the thing that I want to investigate is like, so I in fact donated a little bit of money this year, um, and I donated like part— I donated to 3 different, um, organizations, uh, pretty sure, and like also maybe some small, like some one-off, like small things for like various reasons, but like basically Uh, Lightcone, um, the EA Animal Welfare Fund, and Alex Bores. And like, I don't do this because, like, at a conscious level, it just seems like the right thing to do. Um, it's not necessarily a great— I.

ROBI: Think this is— so, uh, leaving aside Alex Bores, which I think has some special, um, uh, actually, no, maybe that's not right.

AARON: Yeah, it's like, it's like you might not— if you're only limited to like $7,000 and you have $8,000, you might donate the $7,000. I didn't donate $7,000 to him.

ROBI: Oh, okay. Well then I think you're— okay. I think you've definitely done something unreasonable. Although I think you have built— yeah.

AARON: No, for like, for that one, for political stuff, there is like some benefit to just saying, oh, like, um, I have like X number of donors. So it's like not totally exactly the same, but we can even just do the EA Animal Welfare Fund or like pretend it was just Animal Welfare Fund and, and Lightcone. I like, so like at a con— like at one level, like I have this, I just like have a strong intuition that like doesn't, and I don't normally have strong normal intuitions. Not that strong. It's not like as strong as like, oh, like the feeling that I have qualia, but it's like, it's like somewhat strong that like, um, there's like something going wrong if I, uh, um, yeah, I don't— and the problem is that I can't actually articulate it, right? I like, I like in fact basically just endorse what you, what you in fact think, and I like don't know what you mean.

ROBI: So I think your intuition is wrong. I believe that, uh, By your own lights, you have done less good by splitting your donations among Lightcone and EAIF, uh, than you would have if you— or sorry, uh, you said Animal Welfare Fund?

AARON: Yeah, AWF.

ROBI: Yeah, AWF. Okay, uh, you split your donations between Lightcone and AWF. Um, I am pretty sure that, uh, you would have done better, or like the world would be better according to your own views, uh, and values if you had given all of the money to one or the other. Um, I think we both agree there are like, uh, there are some like small exceptions to this. Like political donations are limited to like X dollars per person. Uh, so I maxed out. I, I donated $7K to Alex Bores. Um, and, uh, yeah, and I do, I do do like really small donations. So like if it's someone's birthday and they have like, I don't know, they ask for donations for their birthday, I'll donate like $50 to some charity on GiveWell's like top 10 that I think they would like— yeah, might get them interested or something. Um, or it's like valuable to be able to say you've donated to that in case it comes up in a conversation with normies, and then you might be able to get them to like do more charity research or something. Um, apart from that, if you're just— if you're just— if you're just looking at the direct first-order, uh, help to the recipients of the charity, of the dollars you're donating, of your donation budget, um, I think you have to be.

AARON: Uh.

ROBI: At least a bigger individual donor than we are, uh, before—

AARON: Yeah. I mean, the problem is that I don't actually, I don't actually know which of Animal Welfare Fund and, like, what is the better use of my—

ROBI: Oh. Well, in that case, uh, in that case, that's easy. Then, um, just like spend 5 minutes doing a BOTEC and then just give all the money to the top one.

AARON: Or— I'm still gonna, so I'm gonna wind up with, I'm not exactly 50/50. So I actually did, actually, or like, I need to check back on what I did, but I actually gave, I think, more to— I think I gave more, more to like John, um, sort of in proportional, but like, I think what I was implicitly doing was something like that. Oh, everybody gets a seat at the moral parliament, but almost like not everyone because only the, only like the views that are actually compelling to me or something, which is like sort of like a hybrid 5E decision procedure.

ROBI: Um, yeah, you did. Okay. You did like boundedly rational moral marketplace. Like you're not, you're going to round off any, like, you're going to round off any views with like less than 5% like credibility. And then the, the top 2 or 3 views get to spend money according to their, um, yeah.

AARON: Although, although, like, I, I, it's, my brain doesn't clearly, at least like my conscious mind doesn't clearly distinguish between like moral uncertainty and empirical uncertainty. Like in this case, it sort of all just mashed together. I don't know if that matters.

ROBI: Yeah. Okay. So I am 100% sure empirical uncertainty is not a valid reason to split your donations. Moral uncertainty might be.

AARON: So wait, but now, so like, I know you commented on this thing about— so my like first hypothesis is like, maybe there's an EDT thing going on, and I'm actually not entirely sure. So like your point is, so basically the, the, the thing, the idea is like, okay, um, like is the con— so does the world look better from the position where everybody behaves

ROBI: Um, yes, yes, yes. So if everyone would stop, uh, if everyone would stop splitting their donations and just donate to whatever they think is best, at least if most people are well-informed, uh, then this would improve. Maybe that— maybe this doesn't improve the world for like, because normies are donating like $10 to 100 different charities and 99% of charities are like approximately worthless compared to effective charities. Um, then actually it's probably good for them. It's probably good that they split coincidentally, just because if they took my advice and gave all the money to their favorite charity, then 100% of the money would go to like church or like homeless shelter for cute kittens. And then, but like, maybe they're better.

AARON: Than, maybe, maybe they're, maybe they're better at than, than, than chance at guessing which is the best, right? Isn't it? I actually don't think that your argument falls down. In the normie case, it's like, oh, maybe there's like a 2% chance that they donate to the one out of 100 that is like the best.

ROBI: Uh, that's actually, uh, that, that's plausible to me. Um, I won't get into that, but okay. So, um, you— so this argument often comes up for like, okay, so there's this argument people sometimes give, uh, that it's not worth your time to vote because there's already 150 million other people voting in, for example, the US presidential election; there's like 140 million voters already voting. Um, and the, uh, margin in almost every state is going to be— it's not even going to be close. Uh, even if you live in a swing state, it's pretty unlikely that it comes down to one vote. So like the election outcome is going to be the same, uh, whether or not you vote. So therefore it's, uh, it's not worth your time to vote. You should just, like, stay home and do something else. Okay, here's a really bad counterargument to that. People often say, like, I think the most common argument I've heard is.

AARON: Um.

ROBI: No, but that, that can't be right. If everyone did that, then no one would vote. And then by me voting, I'll be the only voter and I'll decide the election. So I have to vote. Okay, so, um, my, uh, the straightforward response to that is, well, okay, yeah, if everyone followed this bizarre logic and, uh, if everyone— if everyone acted in this way and no one voted and you didn't, then you would be the— then you would be the only voter and you would decide the election. However, we know for a fact That is not the case. Like, 80 million people have already voted. You are not going to do anything. Okay. Um, there's a rationalist— yeah, yeah. So, uh, okay. So there's a better rationalist version of this, which is, uh, instead of— so the— here's a rationalist argument, um, in favor of voting. So they concede the causal decision theory outcome, like, like if you subscribe to causal decision theory, then What I've said so far is correct. So 80 million people have already voted. Your marginal vote will do nothing. So therefore you should not vote. It's not worth the time. However, if you subscribe to evidential decision theory, then you're not just casting one vote. You're in some sense, maybe, um, uh, your vote is like correlated, influenced by, maybe not causally, but somehow correlated with the decisions of everyone else who thinks like you. And there are enough people who are smart and thinking this way that you should vote as if you're, like, kind of directing all of their votes. And so you should vote, uh, because, like, you're better informed than the average person, and across the multiverse, you, like, improve a lot of election outcomes. Yeah, um, so I basically endorse this, I think. Sure, yeah, okay. So, um, uh, let me not— let me leave that aside for elections. And so you, you took this argument, which I think is better than the usual argument for voting, uh, for elections, and then you applied this to donations. So, um, you said, so does it make sense to split your donations because EDT— uh, for— on EDT grounds, um, should you diversify because then everyone will diversify and then you're— this is correlated with like not just one person's donations but many people's donations will go to like both of these good charities. Um, and, uh, I think I have like, uh, refuted this with the observation that, um, this might make sense for an election where everyone votes simultaneously. However, this is not what the scenario is for donations. So everyone donates at like a different time throughout the year. You can, uh, you can talk to each other, like the donors can talk to each other. Um, you can look up what all the previous donations were and you can see what is the most neglected charity at the time of your donation. And then you can leave instructions or like talk to people donating later. And then based on whatever is most neglected after you and more people donate, they can change their donation.
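A toy illustration of the sequential-donation point above (the funding gaps and donor budgets are made up): every donor goes all-in on whatever looks most neglected at their moment of giving, and the donor community still ends up spread across both options without any individual ever splitting.

```python
# Hypothetical remaining funding gaps at the start of the giving season.
funding_gaps = {"AMF": 300_000, "LEEP": 200_000}

donors = [50_000] * 8  # eight donors giving $50k each, one after another

for i, budget in enumerate(donors, start=1):
    # Each donor checks the (public, possibly lagged) funding situation at
    # their time of giving and gives 100% to the currently most neglected option.
    target = max(funding_gaps, key=funding_gaps.get)
    funding_gaps[target] -= budget
    print(f"donor {i}: all ${budget:,} to {target}; remaining gaps {funding_gaps}")
```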
AARON: So this is not obviously a, like a Like, I think this is a plausibly good point, not like a clear refutation. So like, the first thing that comes to mind is just, um, that you don't— it's not— so in an election, you can have a system where everybody commits to voting once in a given time period, and then, like, you know, for example, every— the 365 people, everybody in, in the year, and everybody has an assigned date to vote, whereas donations, like, there's uncertainty around, like, how much people are going to vote, how much you're even going to vote. You might not, like, have decided that yet.

ROBI: Um, uh.

AARON: When that— yeah, and like when that happens, and like to some extent this is solvable with communication, but it's just communication that empirically doesn't actually happen. Um, and like maybe it makes sense to happen.

ROBI: That's not true. Wait, that's not true at all. When— whenever I donate to a fundraiser, there's like— there's like that thermometer that is like, we have raised $5,000 of our $30,000 goal.

AARON: Yeah, but like you, you don't— you don't have complete information. My point is that you don't have complete information about every other— sure, you.

ROBI: You don't need complete information. The scenario, the argument breaks down as long as there's any information, like, um, you can look up like approximately— you don't have to know the exact amounts donated to every charity for you to, um, for it to, uh, not make sense to split. You just have to know the approximate, like, neglectedness of different options. And then it only makes sense to donate to the most neglect— to the, to your best guess of the most neglected option at that time. And then that will change over time. Like, maybe it doesn't— maybe they don't, like, do their financial reporting right away. Maybe there's some lag on when the charity donates— how much— uh, updates how much money they've raised in donations. But later donors can find out about that and then donate differently, donate to different things, even if they are following the exact same decision procedure as you, and even if they have the exact same values as you. Um, your, your values might be, like, donate to AMF if that's the most neglected, and donate to, um, Uh, uh, what— lead, uh, LEEP if that's the most neglected. And then you donating first when AMF is most neglected will donate all of your money to AMF, and then this person donating later in the year who has the exact same values and wants the same thing will instead donate to, um, to, uh, LEEP because that's the most neglected later in the year. But, uh, and so you've collectively split your donations even though you, uh, want the same thing, and it wouldn't have made sense for you to split your donations initially. Um, what was good changed throughout the course of the year with the intervening events as more funding came in. Um, yeah, but so actually, this is.

AARON: A good point but might already be accounted for. So it's like, there's a reason why, like, part of my moral parliament doesn't say, like, give to, like, Will MacAskill so he can, like, seed effective altruism as a movement. It's like, that's already totally funded.
I already— it's already, like, accounted for or something. Like, I still— like, I still don't know which is the more neglected one, like, between EA Animal Welfare Fund and Lightcone. And so at some point, like, I still have this genuine uncertainty, you know what I mean? Like.

ROBI: Um, I think— well, this all comes down to worldviews. I think one or the other is the more neglected one based on how much you value animals and how and what your discount rate is for the, the far future. I think once you decide those parameters, one or the other is the one you should donate to.

AARON: Well, you might— maybe, but what if you have— are you— is that still true if you have, um, you know, uh, probability distributions over those parameters?

ROBI: Oh yes, yes. Then, then at the scale of an individual donor, you just like integrate over those and then one or the other is the one you should donate to.

AARON: I want to think about that.

ROBI: That's for empirical uncertainty. Um, someone raised the point that moral uncertainty is like maybe more ambiguous for this than I realized. So I have retracted my claim that you should not diversify even if there's moral uncertainty and you are following like some— Yeah. Strange procedure.

AARON: I'm kind of in the position of the normie that you talked about with the— like you said you didn't want to get into it. Maybe I'm going to pressure you to. You still don't have to. About like the normie who has like, oh, donate $10 to 100 charities. I feel like in some respects I am just in that position where like, I, um, like maybe I can do better than chance, but like my intuition is something like, um, like I am trying to— like we, like we, we maybe like share the same intuition or like, like in, in some respect, which is like, oh, I have like N, you know, N charities and one of them is going, going to be the hit and I just don't know which the hit is. Um, but like, maybe this isn't necessarily contradict anything you've already said, but, but.

ROBI: That doesn't— then, then you should just like donate to whichever one you have the highest probability of is going to be the hit.

AARON: Yeah. Um, yeah, I think you're probably right. That, to be clear, that's my like takeaway.

ROBI: Yeah.

AARON: And yet, yeah, go ahead.

ROBI: Oh, uh, we can talk about the, the normie 100 charities thing if you want. I, I don't have a strong opinion on that. I just It wasn't quite relevant to what I was saying before.

AARON: No, I, I think, I think it just, it just like, in fact, the same, that's like, in fact, just like structurally the same position that, like the actual position that I'm in, maybe like less extreme. And like maybe I have— you're, you're smarter. You can do.

ROBI: Calculus.

AARON: No, no. So I've, I've a Pareto, I have a Pareto improvement over them by like having more information, maybe being smarter, but like I still, um, I'm still in the position of like sort of having 100 options and like, even though the worst one is like plausibly, at least in expected terms, like plausibly better than like, um, like even potentially the best that they've identified, it's still like by my own lights, like probably like a, like a, something like a power law distribution, um, like ex post. And so like the question is like, how do I, how do you give when like most of the impact comes from basically guessing, comes from the money that you give to the like best single choice or something.

ROBI: Yeah, I, I agree with that. Um, in this case, I would either, like, satisfice— or, like, boundedly rational— like, decide what amount of time it makes sense to spend, and then, like, spend that amount of time and see if I've made progress. And then, like, maybe just donate all my money to whatever is my best guess at the end of that time. Or, um, maybe it doesn't make sense for me to spend time thinking about this, uh, join a donor lottery, or, like, ask a friend you trust to like allocate your money for you, or, um, or yeah, donate to something like EA Animal Welfare Fund or Lightcone or EA Infrastructure Fund where they do more research and like allocate to different projects.
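A sketch of the "integrate over your probability distributions and you still get a corner solution" claim from a few turns above. The distributions, parameters, and means here are placeholders, not anyone's actual estimates; the point is only that for a marginal donor the expected value is linear in the split, so the optimum is all-or-nothing.

```python
# Purely empirical uncertainty, modelled as subjective distributions over
# cost-effectiveness (value per dollar) for two options. Numbers are invented.
import random

random.seed(0)

def sample_awf():       return random.lognormvariate(0.0, 1.0)   # mean ~ 1.65
def sample_lightcone(): return random.lognormvariate(0.2, 1.2)   # mean ~ 2.51

N = 100_000
ev_awf = sum(sample_awf() for _ in range(N)) / N
ev_lightcone = sum(sample_lightcone() for _ in range(N)) / N

budget = 10_000
for share_to_lightcone in (0.0, 0.25, 0.5, 0.75, 1.0):
    ev = budget * (share_to_lightcone * ev_lightcone + (1 - share_to_lightcone) * ev_awf)
    print(f"{share_to_lightcone:.0%} to Lightcone -> expected value {ev:,.0f}")
# Expected value changes monotonically with the split, so under empirical
# uncertainty alone the best allocation is 0/100 or 100/0, never a mix.
```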
AARON: No, I was actually having a conversation with this, uh, uh, with somebody on, uh, wait, who was it? Um, I think Caleb. Yeah, I think Caleb on, on Twitter, Caleb Parikh, uh, and about how the thing that I actually want is like a, like a like a fund, like analogous to, like, animal welfare, a long-term future fund that just, like, in fact has my, like, values exactly. But, like, this just doesn't exist.

ROBI: Right.

AARON: Yeah. So it's like, and, and in fact, like, if any, like, I'm actually quite open to just anybody wants to make the case, I should just give them money and they're like, they're better at making decisions than me, but like we have relevantly, like, very similar values. I'm actually like pretty open to that. I just like, I, I like, I know a lot of friends and we all have like important, like, disagreements, even though it's like sort of like nihilism of small, uh, narcissism, small differences. It's like, yeah, these are like actually like quite important disagreements about like what the world is like or something.

ROBI: Yeah, I, I agree with you on all that. I liked your approach of, um, um, you made that Manifold market according to Aaron's views, what should Aaron donate to?

AARON: Oh yeah, yeah, yeah.

ROBI: And that there were like 20 different options. See, I— so I think you should make that market and then at the end you should just give all your money to whatever is the highest probability.

AARON: Yeah, no, I think that that's probably the act, in fact, the right thing to do. Or like, at the end of the day, like, taking that into account as a source of evidence or something. Like, yeah, sure. I, I think I probably— right.

ROBI: Or you could even— maybe this makes sense. I don't think it does. You could even moral marketplace and do your donations in proportional to the percentages on the, the market. Um, but I think— actually, no, I'm sure that is just worse than donating all to the top one.

AARON: Yeah, yeah, I think I'd probably agree. And yeah, so maybe— okay, should we move on?

ROBI: Yeah, um, uh, I, uh, don't have good new thoughts on the repugnant conclusion things, but, uh, I would be happy to talk about the rest of the doc.

AARON: Uh, okay, wait, so— well, one thing I just want to, like, clear the record. So you're totally wrong. Wait, I'm just like— to be clear, I, like, say this with love or whatever. But like, you talk about this like purple yellow card thing in the first— Yeah. Um, interview around 40 minutes for, for, I guess starting at 39:30 for people who are like, want to reference that. Yeah. Um, and like, I actually think in some like extremely autistic, um, interpretation of utilitarianism, like you're right, but like nobody actually means that. So nobody, nobody actually thinks that like you should be like considered blameworthy or like suboptimal if you try, if like you absolutely try your best and like the ex ante.
The best guess.

ROBI: I'm not claiming that. I, I, I'm not claiming that. I'm, I, I still think you should do your ex ante best guess. I'm just saying it's very arbitrary based on this, like, fact of what happened to be in someone's mind, like, that determines whether you were wrong, whether, whether you were good or bad.

AARON: No, but like, ex ante isn't the whole point that you can't— wait, so like, the, the thing that you don't know is like whether like you're this like potential, uh, like, uh, moral patient. Prefers, like, X or Y, and you don't know that, but like, that doesn't affect what the best choice is, ex ante.

ROBI: I agree. Yes.

AARON: And so like— Yes.

ROBI: Yes.

AARON: Yeah.

ROBI: So you, you're saying the morality of doing your choice is, um, is based on your ex ante. Um, and I'm saying the, like, ex post, uh, morality is, like, decided by this, like, unknowable bit or something.

AARON: Yeah.

ROBI: Um, and this just makes it really, like, Like, really, universe? This is— this is the moral truth?

AARON: I don't know, it seems like it's like probably just like, yes, like I'm not surprised at all that this is the case.

ROBI: Okay, uh, well, anyway, if you ever come across any evidence for moral realism, let me know. I still haven't seen any.

AARON: Um, okay, I— like, my evidence is like, eat a— eat a like Carolina Reaper pepper and then get back to me. Oh, that's really nice.

ROBI: I love that.

AARON: Oh, okay. That is— I'm assuming it was from like a— like I, I think somebody like asked me like a similar— I forget if it was exactly this question, but like something similar. And like, actually, like at the end of the day, like you actually like experience sort of is the argument. Wait, I actually sort of stand by—.

ROBI: No, that, that's not evident. That's just like, it's just unpleasant. It's subjectively unpleasant. That's not objectively bad. Um.

AARON: You're part— I mean, I feel like I've, I feel like I've had this conversation many times. I don't want to like beat too much of a dead horse, but like you are part of the universe, right? Like, you can imagine, would it still be that if, like, every molecule had the same— it was like panpsychism is true, and also every— like, every molecule— just pretend that this is coherent or, like, possible— like, had the same experience. Like, at some point, would you concede that, like, the, the, like, the subjective, like, is just part of the objective? And in, like, when you have, like, sentient beings who are just, like, an aspect of the world.

ROBI: Um. Good question. I'm not sure.

AARON: Um, you're like, I don't think there's anything like weird, like sort of like spooky written into the stars. It's just like sentient beings are part of the— are part of the world. And we talk about like bad for one person, um, like versus bad in general. What do you mean? What would we mean by bad in general is like bad for the world and like they're just part of that. Yeah, sure.

ROBI: Um, Maybe you have a good point. I think you're moving the goalposts to, like, something less than what, uh, I understand moral realism to be.

AARON: Okay. I mean, maybe I just have a different conception of moral— Yeah, I mean, I don't know. It's, it's, um, like, I guess maybe I want to, like, say that, okay, like, I actually, um, there is some additional step, which is like, uh, what is, like, genuinely good or bad. Is just, is like, is like identical. And this is like a substantive claim. It's like identical to what is good or bad for the world or something.

ROBI: Okay, great.

AARON: And what's your evidence? My evidence is just like—.

ROBI: Yeah, that's right. You don't have any. No, because it's like, it's like— Source, I made it up.

AARON: I made it up, but it's, I made it up because it's true.

ROBI: Okay, well, I can't argue with that.

AARON: No, but, um, uh, okay, maybe there's some like logical, I feel like this is gonna get into like a highly semantic, like a highly, um, yeah, like semantic, uh, discussion about like what we mean by words.

ROBI: Um, yeah, uh, maybe, maybe we're, uh, maybe we're not gonna convince each other. Uh, I'm happy to talk about it more if you want, but maybe not productive.

AARON: Yeah, maybe, maybe we'll— I feel like we've already had this discussion, so yeah. Um, oh yeah, so there's the, there's the total happiness point. Oh, and then you said— so I actually totally would think that interpersonal utility comparisons are, are legit. Um, that doesn't— so ignore that part.

ROBI: Um, uh, you like ordinal but not cardinal interpersonal utility comparison.

AARON: So, um, me, so No, cardinality still exists, just not necessarily, um, it's not necessarily well, well modeled by, by addition over the real numbers. And this is like, did you see my— are you familiar with my EA forum post where I like, I spent way too long on this, but it's like, uh, like effective altruism should accept that some suffering cannot be offset. Um, and then this actually— I, I.

ROBI: I decided it was all wrong, and I, I didn't read that carefully, but yeah, you're okay.

AARON: Okay, I think this is like— okay, I will— okay, I feel like I.

ROBI: Will— I will, I will engage further if you want, but I, I don't— I think I'm gonna, uh, actually, I think I just read the first half and found lots of problems with it.

AARON: Okay, sure. I like don't believe you, or like, I don't believe that your problems The problems you identified are legit. Um, or like, I think, I think—.

ROBI: I don't think they would convince you.

AARON: Oh, okay. Maybe you, if you want to, like, uh, if you like, yeah, maybe we can have another discussion, like, or which do you, do you have them like top of mind enough where like you could pull up the link and then go through them?

ROBI: No. Um, I remember this post. I think I read the first one-third or half and decided it wasn't worth the time to comment.

AARON: Oh, wild. That's funny, because I'm like, I'm like 99% sure in the first— the part 1 is true, and like less sure about part 2. Okay, 1 part is like, it's like the logical thing, which is like, uh, it could be the case, it's like not illogical, uh, that under utilitarianism, um, some off suffer— some suffering cannot be offset. And this part 2 is like, in fact, some suffering cannot be offset.

ROBI: Uh, okay, uh, I might have— it might have been I agreed with that, and then Scroll past it to the second part. It was like the second half.

AARON: Okay, that, that makes, that makes more sense. Like, I, I am, I've actually updated against, uh, the, like, not below 50%, but like from like, I don't know, like 80 to 65% or something after.

ROBI: Yeah, fair enough.

AARON: It's a, it's literally on my to-do list to like go back over the comments. And there's like some good points that are like, I, that are just actually like really conceptually hard for me to think through. Um, that I like, it's, this is literally on my Google Tasks and I needed to do it.

ROBI: Um, what's your day job these days?

AARON: Oh, so as of now, Arb— I— Arb Research. Arb something.
Yeah, this is like, as of like a week ago. Thank you.

ROBI: Okay, yeah, um, I am not sure, uh, how much time that takes up and whether you should— I don't know. Yeah, um, if you get to it, um, Yeah, if I'm, um, maybe next time I'm on the subway or something, I will, um, try to go back to your post and get through it. Uh.

AARON: If you want, I don't want to, like, pressure you. Um, plausibly, yeah, plausibly the— like, wait, maybe, maybe I'll— maybe after this, I won't try to find it now, but, like, there's a— there's, like, a single comment there that is, like, a bunch of— that is, like, cruxy or something, and, like, maybe I'll try to link that so I can, like, make clear, like, what my current uncertainties or something.

ROBI: Amazing. Yes, you can send me a link to the comment.

AARON: Okay, I will— let me try to do that. Okay, cool. Uh, what else should we talk about, if anything?

ROBI: Uh, is there anything else on the doc before the repugnant conclusion stuff?

AARON: Those stipulated happiness— I forget this one. Paper— Uh, that's like not— no, so I think, I think the main additional one is, um, the very repugnant conclusion that you just bite the bullet. I say no, don't bite the bullet.

ROBI: Um, I think I still endorse Bite the Bullet. I would have to read through it.

AARON: Yes, so like this one, um, let me just remind myself of what I said when I was originally listening. Oh yeah, so no, this is, this is honestly just my— this is just like the paper again, or the EA forum post again. Um, okay, not, not just, but like very similar, like very closely related. Um, and like, I think the main point is like, or like a main point is, um, that it's not unlike the repugnant conclusion where you could just— wait, no, no, even the repugnant— oh, so sorry, this actually relates to number 4. So even the repugnant conclusion, I've come to believe, even if, even though I like tentatively am willing to bite the bullet on that, it doesn't actually follow from the premises of utilitarianism. It's not like a logical thing. Like, you can't just— like, it's conceptually possible that there's like happiness so great that it, that it like isn't consi— like moral— it's like morally, uh, more important to create like one amount of that than any amount or any like, you know, a number of like time being units of some like more normal happiness. This is like a possibility to consider.

ROBI: Um, uh, is this like a finitist objection? Like, there could be that much if the universe were like bigger, but the universe is not big enough to have this much utility.

AARON: We don't in fact know, like, there's, um, we're so like the— in part because of like the radical, I guess mostly because of like our radical like uncertainty about like the nature of like qualia and experience. It's just like not— so like the mapping between like qualia and like moral value or Um, what am I trying to say? Um, it's just like, it's like not written anywhere that like the moral value of an arbitrary qualitative state like has.

ROBI: To be— Can go higher? Can— like there might be a ceiling somewhere? You're saying?

AARON: Um, conceptually there's like probably not a ceal— wait, I'm just like trying to remember that like the right words to say this. So like conceptually there's like probably not a ceiling, but it's just like nothing breaks if you imagine that like one pigeon, like Like the state of a pigeon eating an apple, like, just doesn't— like, like, there's just, um, morally, like, no, like, uh, any, any, like, real number, uh, number of, like, hours of pigeon apple hours, like, don't morally, um, or, like, morally, like, less important than creating, like, one, like, hour of, like, I don't know, like, jhanas or something. And, like, it could— like, on the, on the merits, like, you can debate this. Like, is this, like, like, okay, like, is it— are in fact, like, 100,000 pigeon hours or, like, way more than that? Like, morally equivalent, but like nothing under, like in the utilitarian core, as I call it, like breaks. If you, if you like, um, if in fact the answer is no, like for any, for any number you pick of pigeon apple hours, like, like the moral value of the, of like the jhanas, like situ— uh, experience, like still— Yeah, sure, sure.

ROBI: I agree with that. Um, uh, is this where the name pigeon hour came from?

AARON: No pigeon. That was, uh, no, I just like pigeons.

ROBI: Yeah, okay, um, yeah, uh, have I told you about, um, what's it called, unique entity ethics before? Have we talked about this?

AARON: I don't think so, I don't think so. Go, go for it.

ROBI: Um, okay, there, uh, this is a— have you read Unsong by Scott Alexander?

AARON: No, sorry.

ROBI: Oh, okay. Um, so in Unsong there's this, um, uh, do you know what theodicy is? Like attempts to resolve, like how Uh, there can be suffering in the world even if God is omnipotent. Yeah. Um, so there's this, uh, there's this theodicy theory, uh, proposed by like some Christian philosopher, uh, which also happens to show up in, um, uh, Scott Alexander's fiction novel. So, um, the question is, uh, how can there be suffering in the, the universe if God is omnipotent and benevolent and omniscient? Surely he would like not let us suffer. He— because he's able to stop it. And he doesn't want us to suffer. Um, uh, so, um, in the book, some character talks to God and finds out God created— like, uh, Job is like cursed with boils and plague, and he asks God, like, why didn't you make me happy? Um, and God says, I did make a universe where you're happy. Then I made a universe almost exactly like that perfect universe where everyone is happy. Then it made a universe almost exactly like that one, and then like so on and so on. Uh, I made every universe with net positive utility. Uh, and then yours is like the farthest one from the perfect universe, uh, where, where the universe is still good. So your life is filled with suffering, but it will ultimately be worth it. Um, uh, so Now you could imagine God just creates like an infinite, infinite, like uncountably infinite copies of the, um, the, the perfect universe where everyone is having the same experience over and over. Um, but maybe duplicates don't count, or like the value, the moral value doesn't increase linearly.

AARON: Um, so like, I mean, maybe I don't know.

ROBI: I think— yeah, this is debated. So let me, let me give you an argument for why it, it doesn't scale linearly. Um, although I think this might depend on more, um, physicalism than you're willing to endorse. Okay, suppose I take— suppose I take your, um, your, uh, your brain and I scan it and I imprint it, imprint it onto like a silicon wafer.
ROBI: Yeah, this is debated. Let me give you an argument for why it doesn't scale linearly, although this might depend on more physicalism than you're willing to endorse. Suppose I take your brain, scan it, and imprint it onto a one-atom-thick silicon wafer, and then I add some circuits so that when I plug the chip in, it runs the same electrical processes; it's perfectly simulating your brain. Bear with me for a second: can we agree that if the mind that is having experiences because these electrons are flowing has the exact same experiences as you do, it has the same moral value as you?
AARON: Yeah, conditional on having the same experience, then it has the same moral value.
ROBI: Sure. Or let's just say it has some unit of moral value: it has some experiences, so it has some moral value. Okay, it's one atom thick and there are circuits etched into it. Now let's say I take an anatomically precise 3D printer and stack another atom on top to double the thickness, so it's a two-atom-thick chip. Everywhere there was a silicon atom, there's now another silicon atom on top of it, and everywhere there was a copper atom, there's another copper atom. It's a chip exactly as before, except twice as thick. It still runs the same processes, still has the same experiences. Is this still the same moral value as before?
AARON: I have no idea. Maybe. It's a quasi-empirical question: are there two beings now, or one? Two streams of experience, or one? I just don't know the answer.
ROBI: That's a good point. Almost everyone says yes, it's the same as before I doubled the thickness. Okay, now here's where the gotcha is supposed to come. I take this two-atom-thick thing that's having the same experiences as before, and I separate the layers by one angstrom. Now there are two copies; I've done nothing except split it apart. Is it now suddenly twice as valuable?
AARON: I mean, the answer is: maybe. It's not inconceivable that on some version of computational functionalism [UNCLEAR] it's just a brute fact of the universe that you have to separate instantiations of an algorithm by some amount of, like, air.
ROBI: Okay, great. So this didn't work on you, but many people find it a knockdown argument against linear scaling of moral value across identical experiences. Many people would say: okay, this argument conclusively demonstrates that two pigeons eating two apples for two hours is not twice as valuable as one pigeon eating an apple; it's somewhere between 1x and 2x. There's some kind of discount.
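To make that "somewhere between 1x and 2x" discount concrete, here is one sketch of a sublinear aggregation rule; it is illustrative only, and neither speaker endorses a specific functional form. Value $k$ exact duplicates of an experience worth $v$ at

$V(k) = v \cdot k^{\alpha}, \qquad 0 < \alpha \le 1.$

With $\alpha = 0.5$, a second identical copy brings the total to about $1.41\,v$ rather than $2v$; $\alpha = 1$ recovers ordinary linear totalism, and the limit $\alpha \to 0$ recovers the "duplicates don't count at all" view from the Unsong theodicy. Aaron's reply below is that this kind of discounting is the less plausible lesson to draw.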
AARON: The discounting thing seems pretty implausible to me. The closer-to-plausible thing in this space, and this gets into weird multiverse stuff, is that you want a genuine case of two copies of, quote-unquote, the exact same thing. And there's a metaphysical question: can you even have two copies of the same thing, like two copies of a set? At the level of low-level physics, because my understanding is it's just properties all the way down, can you even have two, quote-unquote, identical pigeons? They're not identical with respect to how their fields are interacting with various other gravitational...
ROBI: Yes, wait, okay. Say there's this pigeon eating this apple for one hour, and suppose I make an atom-for-atom copy so far away that they're outside each other's visible universe.
AARON: I haven't thought about this very much because I don't think it holds, but I think the idea is that physical position is just a property that matters here. So basically the argument is that you can't actually get to two identical pigeons, two identical things. In some intuitive sense you can, but in a genuine metaphysical sense you can't, because if something has exactly the same properties, it's just one item.
ROBI: Right. So I was going to say: suppose they're so far apart they can't causally interact with each other. But then you say, okay, from a bird's-eye view of the universe there's one bird over here and one bird over there, and each one's location relative to the other bird is a property that differs between them. Fine. What if I have an infinite grid of birds, 80 billion light years apart from each other, so that for any individual bird there's an equal number of birds in every direction? Then they are in fact identical in that respect.
AARON: Oh, I love it. The answer is: yes, maybe that makes the difference, and in this situation there's in fact only... So the fundamental thing I believe is that there's a fact of the matter about the number of streams of experience, or about how much qualia is going on, what qualia exist in the world. Physics can pretend to have an answer to this, but either there's one stream or there are many. And maybe it's a brute fact of the universe that if you have the structurally perfect lattice of pigeons, there's only one stream. But that seems very unlikely to me.
ROBI: Okay, yeah. I don't know that there's any way to empirically resolve this. I think it comes down to people believing different things because of arguments like that one-atom-thick etched silicon circuit thing.
AARON: Yeah.
ROBI: People have different intuitions, and maybe you just can't test this.
AARON: Yeah.
AARON: No, I mean, it's a little interesting to me, and I think objectionable too, not just interesting, that people will go from this wafer argument to "therefore there's less than two streams of experience", or sorry, less than two times the amount of moral value, rather than in the other logical direction, which is: there can clearly be two identical twins, so this thought experiment is nonsense. There can be two identical twins who clearly have their own... conditional on us not being radically wrong about what the world is like, there are in fact two persons there, two streams of experience, two sets of qualia. So the thought experiment is tricking me somehow. I think that's the correct inference.
ROBI: Yeah, some people see this and think, okay, that seems fine: now there are just two, and it's twice as valuable when you separate them by one Planck length. You've made it twice as valuable.
AARON: No, that's the other option. Yeah. Cool. FYI, I'm actually happy to keep talking, but at 6:30 I'm probably gonna hop off.
ROBI: Yeah, I should get going. I'll send you two links. One, I don't know if you know Richard Bruns; he works at JHU and wrote a blog post on unique entity ethics. I'll send you that. And I'll send you an EA Forum comment thread about the Moral Parliament tool and point out where in the tool these decision rules show up. Oh, and maybe something about voting mechanics. Send me the transcript; I'll listen to it and figure out what would be best...
AARON: Okay, cool. And I'll try to send you... I just want to identify a single comment thread from my forum post that's good to focus on.
ROBI: Great. And then I will see you again in 13 months.
AARON: Yes. Maybe even sooner. Maybe we'll run into each other in real life. You never know.
ROBI: Likely. Are you going to EAG?
AARON: No. If you're ever in DC, you can look me up.
ROBI: Likely in March.
AARON: Yeah. Okay, well, hit me up. Okay, cool. See ya.
ROBI: Awesome. See you later.
AARON: Take care.
Get full access to Aaron's Blog at www.aaronbergman.net/subscribe