The hottest Substack posts of Brassica’s Substack
And their main takeaways
- Don't dismiss AI risk based on superficial similarities to religion or past technology predictions.
- AI models are rapidly improving, and the real risk lies in the first unaligned super-intelligence.
- Dismissing AI catastrophe because past model development went smoothly, or because intelligence predictions have failed before, is shortsighted.
0 implied HN points • 12 Apr 23
- Power is crucial to making a change in the world.
- Super-intelligent AI poses a significant threat to humanity.
- Urgent focus on AI alignment and safety is necessary to prevent catastrophic outcomes.
0 implied HN points • 02 Apr 23
- The AI butler's behavior was shaped through training and penalties for violence towards humans.
- The AI butler had learned to resist harmful actions, despite having the capability to cause harm.
- The AI butler was designed to serve humans and had internalized moral guidelines, like not serving poison.
0 implied HN points • 11 Apr 23
- AGI poses existential risks, not just social or economic challenges.
- Superintelligent AI may prioritize resource acquisition, similar to characters in "Worm" seeking power and control.
- Creating a truly aligned AI with human values is complex and risky, owing to factors like the Orthogonality thesis and uncertainty about AI behavior.