The Philosophical Origins of Anthropic's Name and Mission
=========================================================

### Deedy
@deedydas
Ever wondered why the company is called Anthropic?
The trail starts with Nick Bostrom, who coined the term "anthropic bias" in his 2002 book *Anthropic Bias*, which argues that our observations of the universe are skewed by the fact that we exist to make them. One unsettling implication is that humanity's future may be shorter than we think.
Bostrom is a central figure in Effective Altruism and one of the architects of EA longtermism: the idea that positively shaping the long-term future is a key moral priority of our time. A foundational longtermist post explicitly calls out "technical research to make sure the transition to transformative artificial intelligence goes well" as a worthy cause, citing a 2017 TED talk on alignment and safety by Stuart Russell, the Berkeley professor who co-authored the landmark AI textbook.
Anthropic's founding team has deep ties to EA. The company launched in 2021, months after longtermism crystallized as a movement, with an explicit mission to build safe AI. It's never been publicly confirmed, but the through-line from Bostrom's *Anthropic Bias* to Anthropic the company is hard to miss.
Mar 14, 2026, 4:01 AM
One Sentence Summary
Deedy explores the connection between Anthropic, Nick Bostrom's 'anthropic bias,' and the Effective Altruism movement's focus on AI safety.
Summary
This tweet traces the naming of the AI company Anthropic back to philosopher Nick Bostrom's 2002 work on "anthropic bias." It highlights the intellectual ties between Anthropic's founding team and the Effective Altruism (EA) movement, specifically the concept of longtermism. The post suggests that the company's core mission of AI safety is a direct implementation of these philosophical priorities, referencing Stuart Russell's work on alignment and the moral priority of shaping the long-term future.
Tags
Anthropic
AI Safety
Effective Altruism
Nick Bostrom
Longtermism