
Considerations for growing the pie

LessWrong · by Zach Stein-Perlman · April 4, 2026


Recently some friends and I were comparing "growing the pie" interventions to "increasing our friends' share of the pie" interventions, and at first we mostly missed some general considerations against the latter type.

1. Decision-theoretic considerations

The world is full of people with different values working towards their own ends; each of them can choose to use their resources to increase the total size of the pie or to increase their share of the pie. All of them would significantly prefer a world in which resources were used to increase the size of the pie, and this leads to a number [of] compelling justifications for each individual to cooperate. . . .

by increasing the size of the pie we create a world which is better for people on average, and from behind the veil of ignorance we should expect some of those gains to accrue to us—even if we can tell ex post that they won't. . . . The basic intuition is already found in the prisoner's dilemma: if we have an opportunity to impose a large cost on a confederate for our own gain (who has a similar opportunity), should we do it? What if the confederate is a perfect copy of ourselves, created X seconds ago and leading an independent life since then? How large does X have to be before we defect? What if the confederate does not have a similar opportunity, or if we can see the confederate's choice before we make our own? Consideration of such scenarios tends to put pressure on simplistic accounts of decision theory, and working through the associated mathematics and seeing coherent alternatives has led me to take them very seriously. I would often cooperate on the prisoner's dilemma without a realistic hope of reciprocation, and I think the same reasoning can be applied (perhaps even more strongly) at the level of groups of people.

—Christiano
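The quoted "perfect copy" intuition can be made concrete with a toy calculation. Here is a minimal Python sketch (the payoff numbers and the correlation parameter are illustrative assumptions, not from the post): when your choice is strongly correlated with the confederate's, cooperating has higher expected payoff, and defection only dominates once the correlation is weak enough.

```python
# Payoffs (to "me") for (my_move, their_move), standard ordering T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def expected_payoff(my_move, correlation):
    """Expected payoff when the confederate plays my move with probability
    `correlation` (1.0 = perfect copy; 0.5 = their move is independent of mine)."""
    same = my_move
    other = "D" if my_move == "C" else "C"
    return (correlation * PAYOFF[(my_move, same)]
            + (1 - correlation) * PAYOFF[(my_move, other)])

for rho in (1.0, 0.8, 0.5):
    c = expected_payoff("C", rho)
    d = expected_payoff("D", rho)
    print(f"correlation={rho}: cooperate={c}, defect={d}, "
          f"best={'C' if c > d else 'D'}")
```

With these (assumed) payoffs the crossover sits at correlation 5/7: above it, the copy-like correlation makes cooperation the better bet; below it, the dilemma's usual dominance argument for defection reasserts itself. Christiano's "how large does X have to be" question is exactly the question of where on this correlation axis a diverging copy sits.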

I think realizing that your behavior is correlated with that of aliens in other universes, in part via your being in their simulations, makes this consideration even stronger. Overall I don't know how strong it is, but it might be very strong.

2. Pragmatic considerations

I am glad to work on a cause which most people believe to be good, rather than trying to distort the values of the future at their expense. This helps when seeking approval or accommodation for projects, when trying to convince others to help out, and in a variety of less salient cases. I think this is a large effect, because much of the impact that altruistic folks can hope to have comes from the rest of the world being basically on board with their project and supportive.

—Christiano

3. Worlds where many people tend to converge (upon reflection) are higher-stakes (under some views).

I care about the long-term future more in worlds where my moral convictions (upon reflection) are more real and convergent. In such worlds, many humans will converge with me upon reflection; the crucial things are averting AI takeover,[1] ensuring good reflection occurs, etc. rather than marginally increasing my faction's already-large share of the lightcone.

Inspired by MacAskill and Moorhouse.

4. Others' considered values matter directly (under some views).

To the extent that my values would differ from others’ values upon reflection, I find myself strongly inclined to give some weight to others’ preferences.

—Christiano

5. You might be wrong.

It's optimistic to assume that you or your friends will use power wisely in the future. You should probably think of empowering yourself or your friends as empowering your (altruistic) epistemic peers, who may continue to disagree with you on important questions in the future, rather than as empowering the champions of truth and goodness.

Disclaimers

None of these points are novel. This post was inspired by MacAskill, which also makes most of these points (likewise not originally).

Growing the pie doesn't just mean preventing AI takeover. For example, research on metaphilosophy, acausal considerations, decision theory, and axiology grows the pie, as do interventions to prevent human takeover, create/protect deliberative processes, promote good reflection, and solve coordination problems.

It may be correct to allocate some resources to claiming your share of the pie. You have a moral obligation not to be eaten[2] and you should probably at least do tit-for-tat/reciprocity, if relevant. I just think there are some subtle considerations against powerseeking.
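The tit-for-tat/reciprocity policy mentioned above has a standard one-line formulation in iterated play; here is a minimal sketch (the `play` harness and the `always_defect` opponent are illustrative scaffolding, not from the post):

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; thereafter mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Iterate the game; each strategy sees only the other's past moves."""
    a_hist, b_hist = [], []
    for _ in range(rounds):
        a = strategy_a(b_hist)
        b = strategy_b(a_hist)
        a_hist.append(a)
        b_hist.append(b)
    return a_hist, b_hist

always_defect = lambda opponent_history: "D"

# Tit-for-tat extends cooperation once, then retaliates for as long as
# the defection continues — it claims its share without initiating conflict.
print(play(tit_for_tat, always_defect, rounds=4))
```

This is the sense in which reciprocity differs from powerseeking: the policy never defects first, so against other pie-growers it behaves exactly like unconditional cooperation.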

I weakly think that powerseeking on behalf of the MCUF[3] or something similar (rather than your personal preferences) dissolves #1, most of #4, and some of #5, but little of #2 and none of #3.

This post is part of my sequence inspired by my prioritization research and donation advising work.

  • ^

Unless the paperclippers would also converge with me! Other humans and aliens seem more likely to converge than AIs that take over. That said, I think there are also some (perhaps weak) cooperation considerations with misaligned AIs; these entail upweighting e.g. ensuring good reflection and avoiding metastable vacuum decay relative to preventing AI takeover.

  • ^

Or in this case an obligation not to be so edible that you incentivize people-eating.

  • ^

"Multiverse-wide Compromise Utility Function." The acausal people use this term; unfortunately it hasn't been publicly introduced; see Nguyen and Aldred.
