The hottest Complex Systems Substack posts right now

And their main takeaways
Subconscious β€’ 829 implied HN points β€’ 26 Feb 24
  1. Create good problems to have only after the flywheel is already spinning: rapid growth motivates the ecosystem to solve them.
  2. Avoid building perfect technology up front; it front-loads work, still requires an ecosystem flywheel, and cannot anticipate problems of scale.
  3. Creating good problems to have encourages co-evolution with the community and provides opportunities for others to contribute.
The Seneca Effect β€’ 176 implied HN points β€’ 11 Feb 24
  1. The attempt to improve science by 'free-access publishing' has led to unintended consequences, like the proliferation of mediocre papers.
  2. The concentration of scientific power in a few elite institutions is not enough to drive innovation and creativity, mirroring the limitations faced by the Roman Empire.
  3. The collapse of science, exemplified by issues in scientific publishing, follows the pattern of other systemic collapses and may signal the need for renewal through unconventional sources and ideas.
The Upheaval β€’ 4 HN points β€’ 24 Oct 23
  1. Israel's high-tech border defenses failed to stop an attack, showing that complex systems can fail in unexpected ways.
  2. Overreliance on technology can lead to overconfidence, strategic rigidity, and a false sense of security.
  3. Simpler and more robust systems can be more reliable, adaptable, and resilient compared to complex, high-tech solutions.
The Grey Matter β€’ 0 implied HN points β€’ 17 Jul 23
  1. The book argues that machines will never rule the world, contending that AGI is fundamentally impossible due to computational limitations.
  2. The definitions of intelligence and machine intelligence play a crucial role in the argument against AGI.
  3. Language, context-dependence, and complex systems are central themes analyzed in the book to challenge the possibility of AGI.
Engineering Ideas β€’ 0 implied HN points β€’ 24 Apr 23
  1. Multiple theories of cognition and value should be used simultaneously for alignment.
  2. Focus on engineering the alignment process rather than trying to solve the alignment problem with a single theory.
  3. Having diversity in approaches across AGI labs can be more beneficial than sticking to a single alignment theory.