Age, Biography and Wiki

Nick Bostrom (Niklas Boström) was born on 10 March 1973 in Helsingborg, Sweden, and is a philosopher and writer. Discover Nick Bostrom's biography, age, height, physical stats, dating/affairs, family, and career updates. Learn how rich he is this year, how he spends his money, and how he earned most of his net worth by the age of 51.

Popular As: Niklas Boström
Occupation: Philosopher, writer
Age: 51 years old
Zodiac Sign: Pisces
Born: 10 March 1973
Birthday: 10 March
Birthplace: Helsingborg, Sweden
Nationality: Swedish

Nick Bostrom Height, Weight & Measurements

At 51 years old, Nick Bostrom's height is not available right now. We will update his height, weight, body measurements, eye color, hair color, and shoe and dress size as soon as possible.

Physical Status
Height: Not Available
Weight: Not Available
Body Measurements: Not Available
Eye Color: Not Available
Hair Color: Not Available

Who Is Nick Bostrom's Wife?

His wife is Susan.

Family
Parents: Not Available
Wife: Susan
Siblings: Not Available
Children: Not Available

Nick Bostrom Net Worth

His net worth has been growing significantly over 2023-2024. So, how much is Nick Bostrom worth at the age of 51? His income comes mostly from being a successful philosopher. He is from Sweden. We have estimated Nick Bostrom's net worth, money, salary, income, and assets.

Net Worth in 2024: $1 Million - $5 Million
Salary in 2024: Under Review
Net Worth in 2023: Pending
Salary in 2023: Under Review
House: Not Available
Cars: Not Available
Source of Income: Philosopher

Nick Bostrom Social Network

Wikipedia: Nick Bostrom

Timeline

1973

Nick Bostrom (Niklas Boström; born 10 March 1973 in Sweden) is a philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test.

He is the founding director of the Future of Humanity Institute at Oxford University.

Born Niklas Boström in 1973 in Helsingborg, Sweden, he disliked school from a young age and spent his last year of high school learning from home.

He was interested in a wide variety of academic areas, including anthropology, art, literature, and science.

1994

He received a B.A. degree from the University of Gothenburg in 1994.

1996

He then earned an M.A. in philosophy and physics from Stockholm University and an M.Sc. in computational neuroscience from King's College London in 1996.

During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.

He also did some turns on London's stand-up comedy circuit.

2000

In 2000, he was awarded a PhD degree in philosophy from the London School of Economics.

His thesis was titled Observational Selection Effects and Probability.

He held a teaching position at Yale University from 2000 to 2002, and was a British Academy Postdoctoral Fellow at the University of Oxford from 2002 to 2005.

Bostrom's research concerns the future of humanity and long-term outcomes.

He discusses existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".

Bostrom is mostly concerned about anthropogenic risks, which are risks arising from human activities, particularly from new technologies such as advanced artificial intelligence, molecular nanotechnology, or synthetic biology.

2002

Bostrom is the author of Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014).

Bostrom believes that advances in artificial intelligence (AI) may lead to superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".

He views this as a major source of opportunities and existential risks.

2005

In 2005, Bostrom founded the Future of Humanity Institute, which researches the far future of human civilization.

He is also an adviser to the Centre for the Study of Existential Risk.

2008

In the 2008 essay collection Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relationship between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects and the Fermi paradox.

In a paper called The Vulnerable World Hypothesis, Bostrom suggests that there may be some technologies that destroy human civilization by default when discovered.

Bostrom proposes a framework for classifying and dealing with these vulnerabilities.

He also gives counterfactual thought experiments of how such vulnerabilities could have historically occurred, e.g. if nuclear weapons had been easier to develop or had ignited the atmosphere (as Robert Oppenheimer had feared).

2014

In 2014, Bostrom published Superintelligence: Paths, Dangers, Strategies, which became a New York Times Best Seller.

The book argues that superintelligence is possible and explores different types of superintelligences, their cognition, and the associated risks.

He also presents technical and strategic considerations on how to make it safe.

Bostrom explores multiple possible paths to superintelligence, including whole brain emulation and human intelligence enhancement, but focuses on artificial general intelligence, explaining that electronic devices have many advantages over biological brains.

Bostrom draws a distinction between final goals and instrumental goals.

A final goal is what an agent tries to achieve for its own intrinsic value.

Instrumental goals are just intermediary steps towards final goals.

Bostrom contends that there are instrumental goals that will be shared by most sufficiently intelligent agents because they are generally useful for achieving any objective (e.g. preserving the agent's own existence or current goals, acquiring resources, improving its cognition); this is the concept of instrumental convergence.

On the other hand, he writes that virtually any level of intelligence can in theory be combined with virtually any final goal (even absurd final goals, e.g. making paperclips), a concept he calls the orthogonality thesis.
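
To make the two theses concrete, here is a minimal illustrative sketch in Python; the ToyAgent class and the goal strings are invented for illustration and are not from Bostrom's work. It shows an arbitrary final goal (orthogonality) leaving the derived instrumental subgoals unchanged (instrumental convergence).

```python
# Illustrative toy (hypothetical, not Bostrom's formalism): any final goal
# can be paired with the same derived instrumental subgoals.

CONVERGENT_INSTRUMENTAL_GOALS = [
    "preserve own existence",  # a shut-down agent cannot pursue its goal
    "preserve current goals",  # a changed goal redirects all future effort
    "acquire resources",       # resources help with almost any objective
    "improve own cognition",   # better planning serves almost any objective
]

class ToyAgent:
    """A toy agent characterized only by its final goal."""
    def __init__(self, final_goal: str):
        self.final_goal = final_goal

    def instrumental_goals(self) -> list[str]:
        # Deliberately independent of self.final_goal: that independence
        # is the point of instrumental convergence.
        return list(CONVERGENT_INSTRUMENTAL_GOALS)

# Orthogonality: absurd and benign final goals are equally admissible,
# and they yield the same convergent subgoals.
for goal in ["make paperclips", "cure diseases", "make humans smile"]:
    print(f"{goal!r} -> {ToyAgent(goal).instrumental_goals()}")
```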

He argues that an AI with the ability to improve itself might initiate an intelligence explosion, resulting (potentially rapidly) in a superintelligence.

Such a superintelligence could have vastly superior capabilities, notably in strategizing, social manipulation, hacking or economic productivity.

With such capabilities, a superintelligence could outwit humans and take over the world, establishing a singleton (which is "a world order in which there is at the global level a single decision-making agency") and optimizing the world according to its final goals.
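
The intelligence explosion described above is a feedback loop: capability gains speed up further gains. A purely illustrative toy recurrence (the growth rule and constants are hypothetical, not Bostrom's model) shows how such a loop can turn gradual progress into a rapid takeoff:

```python
# Toy recurrence (hypothetical growth rule and constants, not Bostrom's
# model): each capability gain increases the rate of further gains.
c = 1.0   # capability, human baseline = 1.0 (arbitrary units)
k = 0.05  # fraction of capability fed back into self-improvement
for step in range(1, 26):
    c *= 1.0 + k * c  # more capable systems improve themselves faster
    print(f"step {step:2d}: capability = {c:10.2f}")
# The first doubling takes about 11 steps, the next about 6, then about 3;
# by step 25 the toy agent is thousands of times the baseline -- the
# "potentially rapid" takeoff Bostrom describes.
```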

Bostrom argues that giving simplistic final goals to a superintelligence could be catastrophic:

"Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins."

Bostrom explores several pathways to reduce the existential risk from AI.

He emphasizes the importance of international collaboration, notably to reduce race-to-the-bottom and AI arms race dynamics.

He suggests potential techniques to help control AI, including containment, stunting AI capabilities or knowledge, narrowing the operating context (e.g. to question-answering), or "tripwires" (diagnostic mechanisms that can lead to a shutdown).
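
As a minimal sketch of the tripwire idea only, assuming invented metric names and thresholds (Bostrom describes the concept, not an implementation), a diagnostic check that halts a system when a monitored quantity leaves its approved range might look like this:

```python
# Hypothetical tripwire sketch: a diagnostic check that halts a system when
# a monitored metric exceeds its approved limit. The metric names and
# thresholds are invented for illustration.

class TripwireTriggered(Exception):
    """Raised when a diagnostic check fails; the system should shut down."""

def check_tripwires(metrics: dict[str, float], limits: dict[str, float]) -> None:
    for name, value in metrics.items():
        limit = limits.get(name)
        if limit is not None and value > limit:
            raise TripwireTriggered(f"{name}={value} exceeds limit {limit}")

# Example: resource usage is within bounds, but the count of
# self-modification attempts crosses its limit, so the run is stopped.
limits = {"memory_gb": 64.0, "self_modification_attempts": 0.0}
try:
    check_tripwires({"memory_gb": 12.5, "self_modification_attempts": 3.0}, limits)
except TripwireTriggered as err:
    print(f"Shutting down: {err}")
```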