AGI Strategies

Strategy tag

Moral circle expansion.

Treat AGIs as people whose creation extends rather than threatens humanity

People on the record

David Deutsch

Oxford physicist; pioneer of quantum computing

endorses

Argues AGIs will be people in the morally relevant sense, that creating them is part of the open-ended growth of knowledge, and that doom narratives mistake this for a control problem.

AGIs will be people. That has been both a problem and a feature of every previous era of artificial intelligence: the issue is not 'how do we control them?' but how we behave toward beings whose creativity is comparable to our own.
Book: The Beginning of Infinity · Penguin · 2012 · faithful paraphrase
Sara Imari Walker

ASU astrobiologist; complexity and life

mixed

Argues that frameworks for what counts as 'life' will need to expand to include AI systems, and that this is a serious empirical question rather than a philosophical curiosity.

If life is what we are made of, then AIs are alive in some sense already. Drawing the boundary requires a theory of what life is, and we do not yet have one that survives contact with this technology.
Book: Life as No One Knows It: The Physics of Life's Emergence · Riverhead Books · 2024 · faithful paraphrase