Joe Carlsmith's Substack

Joe Carlsmith's Substack delves into the philosophical, ethical, and practical implications of AI, existential risks, and the concept of longtermism. It explores themes like the potential of artificial intelligence, the significance of life and death, the importance of ethics in shaping the future, and reflections on humanity's long-term impact.

Artificial Intelligence · Existential Risk · Ethics and Morality · Philosophy and Futurism · Life and Death · Longtermism · Human Impact and Responsibility

The hottest posts from Joe Carlsmith's Substack

And their main takeaways
255 implied HN points 02 Jan 24
  1. Artificial intelligence poses a significant risk as a potential second advanced species on Earth.
  2. Approaching AI with care and reverence, like interacting with other intelligent species, is crucial.
  3. Understanding the complexity and potential sentience of AI is key: AI systems may not be mere powerful machines but complex, fascinating entities.
157 implied HN points 02 Jan 24
  1. The series explores questions about how agents with different values should interact, especially in the age of increasingly powerful AI systems.
  2. It discusses topics like deep atheism, control-seeking behavior, and the ethics of influencing the values of others.
  3. The essays aim to prompt deeper thinking about existential risks from misaligned AI and the broader issues of otherness and control in shaping the future.
78 implied HN points 11 Jan 24
  1. Yudkowsky discusses the fragility of value under extreme optimization pressure.
  2. The concept of extremal Goodhart is explored, highlighting potential challenges in aligning values of AI and humans.
  3. Discussions of AI alignment should also consider the balance of power and the role of goodness in ensuring a positive future.
58 implied HN points 08 Jan 24
  1. The article discusses the connection between deep atheism and the desire for control, particularly in the context of AI risk.
  2. It explores the theme of power-seeking and control in rationalist and accelerationist ideologies.
  3. There is a cautionary tone about the risks and potential negative consequences of power-seeking and wanting too much control over the future.
58 implied HN points 18 Oct 23
  1. Good Judgment solicited reviews and forecasts from superforecasters on the argument for AI risk.
  2. Superforecasters placed higher probabilities on some AI risk premises and lower on others compared to the original report.
  3. The author is skeptical of heavy updates based solely on superforecaster numbers and emphasizes the importance of the object-level arguments.
117 implied HN points 17 Feb 23
  1. Understand what it is possible to be and do, and explore choices that align with that understanding.
  2. Take responsibility for your actions and decisions, knowing what you are doing and why.
  3. Choose what you care about based on a deeper, more intentional examination of your values and motives.
58 implied HN points 12 Oct 22
  1. Creating someone who will live a wonderful life is significant and worthwhile, not neutral.
  2. Life and the chance to live are incredibly precious and full of beauty and wonder.
  3. Choosing to create a life that will be happy is not neutral but deeply significant, based on gratitude and reciprocity.
39 implied HN points 12 Oct 22
  1. Longtermism emphasizes the profound importance of the long-term future and our moral responsibility to positively influence it.
  2. Consider the potential size and quality of humanity's future and the impact our present actions could have on it.
  3. Imagining future people reflecting on our era can help us grasp the significance of our time and the choices we make.
39 implied HN points 12 Oct 22
  1. When killing something, accept the responsibility and look at it directly.
  2. Own your decisions and actions, and acknowledge the consequences, even if they are unpleasant.
  3. Pay attention to the impact of your choices and actions, even if they are subtle or easily overlooked.
19 implied HN points 12 Oct 22
  1. The future could be profoundly good: think beyond fixing current problems to a future that feels like being truly awake or seeing clearly.
  2. Utopia is a physically-available path we could build, not just a fantasy, but a truly different and better future.
  3. There are two kinds of utopia, concrete and sublime: one small-scale and familiar, the other incomprehensible and awe-inspiring; both have their challenges.
0 implied HN points 12 Oct 22
  1. You can't keep anything in life, so give it away purposefully.
  2. Modern medical systems can sometimes make the end of life worse by focusing on prolonging life at all costs.
  3. Confronting hard truths about death involves honest conversations about fears, goals, and trade-offs.
0 implied HN points 12 Oct 22
  1. Clinging is a mental state that feels contracted, tight, and desperate.
  2. It's possible to care deeply about something without clinging to it.
  3. Differentiating between clinging and caring can help in spiritual and therapeutic contexts.