The hottest Human Values Substack posts right now

And their main takeaways
Humanities in Revolt · 758 implied HN points · 29 Mar 24
  1. Humanistic psychology focuses on mental wellness rooted in universal, objective values, not just societal norms.
  2. Mental health involves love, reason, identity, and objectivity, all essential for human flourishing.
  3. Acts of self-sacrifice for a higher cause can be seen as expressions of deep moral convictions and values, rather than as mere suicide.
Asimov’s Addendum · 79 implied HN points · 31 Jul 24
  1. Asimov's Three Laws of Robotics were a starting point for thinking about how robots should behave. They aimed to ensure robots protect humans, obey commands, and keep themselves safe.
  2. A new approach by Stuart Russell suggests that robots should focus on understanding and promoting human values, but they must be humble and recognize that they don’t know everything about our values.
  3. The development of AI must consider not just how well machines achieve goals, but also how corporate interests can affect their design and use. Proper regulation and transparency are needed to ensure AI is safe and beneficial for everyone.
Humanities in Revolt · 219 implied HN points · 21 Jun 23
  1. The peace movement highlighted the importance of embodying intrinsic values, such as truth, justice, autonomy, and integrity, above achieving immediate results.
  2. Recognizing and enacting self-justifying values allows us to find meaning and purpose in the face of life's futility.
  3. Activists in the peace movement worked to promote human dignity, freedom, and justice, rejecting defeatism and continuing to embody their principles despite facing challenges and setbacks.
Humanities in Revolt · 119 implied HN points · 17 Feb 23
  1. Do our best with what we have, be strategic, and recognize that even small contributions are valuable for social change.
  2. Reject the idea that perfection is necessary for worthwhile efforts, and avoid letting the pursuit of perfection lead to inaction.
  3. Social change can often defy expectations, and historic examples remind us that perseverance and action can lead to progress, even in the face of setbacks.
The Future of Life · 0 implied HN points · 30 Mar 23
  1. AI has the potential to be very dangerous, and even a small chance of catastrophe is worth taking seriously. Experts have different opinions on how likely this threat is.
  2. Pausing AI research isn't a good idea because it could let bad actors gain an advantage. Instead, it's better for responsible researchers to lead its development.
  3. We should focus on investing in AI safety and creating ethical guidelines to minimize risks. Teaching AI models to follow humanistic values is essential for their positive impact.