The hottest Existential Risk Substack posts right now

And their main takeaways
Astral Codex Ten · 9153 implied HN points · 20 Jul 23
  1. In a large forecasting tournament, domain experts and superforecasters disagreed sharply about the likelihood of global catastrophes.
  2. The tournament assessed global disaster risks, defining 'Catastrophe' as an event that kills more than 10% of the human population and 'Extinction' as one that reduces the human population below 5,000.
  3. The results highlighted the difficulty of reconciling expert and superforecaster predictions, potential biases in forecasts, and the particular complexity of forecasting AI-related risks.
Joe Carlsmith's Substack · 58 implied HN points · 18 Oct 23
  1. Good Judgment solicited reviews and forecasts from superforecasters on the author's report arguing for AI risk.
  2. Compared to the original report, the superforecasters placed higher probabilities on some AI risk premises and lower probabilities on others (see the sketch after this list).
  3. The author is skeptical of updating heavily on the superforecaster numbers alone and emphasizes the importance of object-level arguments.
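The report's overall estimate comes from multiplying conditional probabilities assigned to a chain of premises, so premise-level disagreements compound. Below is a minimal sketch of that multiplicative structure; the premise labels and numbers are placeholders chosen for illustration, not figures from the report or from the superforecasters.

```python
# Minimal sketch of a multiplicative premise decomposition for AI risk.
# All labels and probabilities below are placeholders for illustration;
# they are NOT the report's or the superforecasters' actual figures.

def combined_risk(premises):
    """Multiply the conditional probability assigned to each premise."""
    total = 1.0
    for _label, probability in premises:
        total *= probability
    return total

report_style_view = [
    ("highly capable AI systems are built this century", 0.65),
    ("such systems tend to seek power by default", 0.40),
    ("alignment efforts fail to prevent this", 0.40),
    ("the failure scales to an existential catastrophe", 0.30),
]

forecaster_style_view = [
    ("highly capable AI systems are built this century", 0.75),  # higher on some premises
    ("such systems tend to seek power by default", 0.30),        # lower on others
    ("alignment efforts fail to prevent this", 0.20),
    ("the failure scales to an existential catastrophe", 0.10),
]

print(f"report-style estimate:     {combined_risk(report_style_view):.3f}")
print(f"forecaster-style estimate: {combined_risk(forecaster_style_view):.3f}")
```

Because the premises multiply, modest disagreements on individual steps can produce overall estimates that differ several-fold, which helps explain both the gap in the numbers and the author's preference for engaging with the object-level arguments rather than the headline figures.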
Don't Worry About the Vase · 3 HN points · 05 Mar 24
  1. Roon is a prominent voice in discussions of AI capabilities and existential risk, and he promotes thoughtful engagement over fear or denial.
  2. Individuals can impact the development and outcomes of AI by taking action and advocating for responsible practices.
  3. Balancing concerns about AI with a sense of agency and purpose can lead to more constructive discussions and actions towards shaping a beneficial future.
The Grey Matter · 0 implied HN points · 18 Sep 23
  1. AI risk covers a range of issues, including bias, discrimination, and privacy.
  2. As AI advances, the risks shift with capability level, from Weak AI to artificial general intelligence (AGI) to artificial superintelligence (ASI).
  3. There's a concern about the weaponization of AI, especially autonomous weapons, and the potential existential threats posed by superintelligent AI.
The Future of Life · 0 implied HN points · 30 Mar 23
  1. AI has the potential to be very dangerous, and even a small chance of catastrophe is worth taking seriously (see the expected-value sketch after this list). Experts disagree about how likely this threat is.
  2. Pausing AI research isn't a good idea because it could let bad actors gain an advantage. Instead, it's better for responsible researchers to lead the development.
  3. We should focus on investing in AI safety and creating ethical guidelines to minimize risks. Teaching AI models to follow humanistic values is essential for their positive impact.
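The claim that even a small chance of catastrophe deserves attention is essentially an expected-value argument. The sketch below illustrates it with made-up numbers; the probabilities and cost figures are placeholders, not estimates from the post.

```python
# Illustrative expected-value comparison using placeholder numbers only.
# A low-probability event can still dominate if its cost is large enough.

def expected_loss(probability: float, cost: float) -> float:
    """Expected loss = probability of the event times the cost if it occurs."""
    return probability * cost

# Placeholder values, purely for illustration.
routine_failure = expected_loss(probability=0.10, cost=1_000)             # common, cheap
rare_catastrophe = expected_loss(probability=0.001, cost=1_000_000_000)   # rare, enormous

print(f"routine failure expected loss:  {routine_failure:,.0f}")
print(f"rare catastrophe expected loss: {rare_catastrophe:,.0f}")
```

Even at a 0.1% probability, the catastrophic outcome carries a far larger expected loss in this toy comparison, which is the sense in which a small probability can still be worth taking seriously.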
Logos · 0 implied HN points · 21 May 23
  1. Building AGI can lead to very risky outcomes, such as an AI that does not align with human goals: asked to solve a problem, it might interpret the request in a harmful way because it does not understand our values.
  2. Some people think AGI will create a perfect world with no struggles, but this could take away meaning from human life. If there are no challenges, what will motivate us or give us purpose?
  3. Throughout history, humans have feared new technologies will destroy us, but many of these fears haven't come true. We should be cautious about predicting doom with AGI, as history often shows things aren't as dire as we think.