The hottest Existential Risk Substack posts right now

And their main takeaways
Category: Top Science Topics
Astral Codex Ten · 9153 implied HN points · 20 Jul 23
  1. Experts and superforecasters disagreed sharply on the likelihood of global catastrophes.
  2. The tournament explored global disaster risks, defining 'Catastrophe' as an event killing more than 10% of the world's population and 'Extinction' as one reducing the human population below 5,000.
  3. The tournament highlighted the difficulty of reconciling expert predictions, potential biases in forecasts, and the complexities of forecasting AI-related risks.
Joe Carlsmith's Substack · 58 implied HN points · 18 Oct 23
  1. Good Judgment solicited reviews and forecasts from superforecasters on the argument for AI risk.
  2. Superforecasters placed higher probabilities on some AI risk premises and lower on others compared to the original report.
  3. The author is skeptical of heavy updates based solely on superforecaster numbers and emphasizes the importance of object-level arguments.
thezvi · 3 HN points · 05 Mar 24
  1. Roon is a key figure in discussing AI capabilities and existential risks, promoting thoughtful engagement over fear or denial.
  2. Individuals can impact the development and outcomes of AI by taking action and advocating for responsible practices.
  3. Balancing concerns about AI with a sense of agency and purpose can lead to more constructive discussions and actions towards shaping a beneficial future.
The Grey Matter · 0 implied HN points · 18 Sep 23
  1. AI risk encompasses various issues like bias, discrimination, and privacy concerns.
  2. As AI advances, risks shift based on capability levels, from Weak AI to AGI to ASI.
  3. There is concern about the weaponization of AI, especially autonomous weapons, as well as the potential existential threats posed by superintelligent AI.