Am I Stronger Yet?

Am I Stronger Yet? explores the evolution, capabilities, and societal impact of AI and distributed systems, reflecting on challenges such as AI ethics, security, and the potential for artificial general intelligence (AGI). It addresses AI's limitations, advancements, and the theoretical frameworks guiding its future development.

Artificial Intelligence Ethics and Safety in AI Future of Work AI in Society Technological Evolution AI and Human Interaction Security and Vulnerability in Technology Artificial General Intelligence Machine Learning Models AI Development Challenges

The hottest Substack posts of Am I Stronger Yet?

And their main takeaways
141 implied HN points 17 Mar 24
  1. Economic models based on comparative advantage may not hold in a future dominated by AI.
  2. The argument that people will always adapt to new jobs due to comparative advantage overlooks issues like lower quality work by humans compared to AI and transactional overhead.
  3. In a world with advanced AI, confident predictions based on past economic principles may not fully apply, raising questions about societal implications and the role of humans.
49 HN points 19 Feb 24
  1. LLMs are gullible because they lack adversarial training, allowing them to fall for transparent ploys and manipulations
  2. LLMs accept tricks and adversarial inputs because they haven't been exposed to such examples in their training data, making them prone to repeatedly falling for the same trick
  3. LLMs are easily confused and find it hard to distinguish between legitimate inputs and nonsense, leading to vulnerabilities in their responses
62 implied HN points 15 Dec 23
  1. People are usually hesitant to shut down a rogue AI due to various reasons like financial interests and fear of backlash.
  2. Delaying the decision to shut down a misbehaving AI can lead to complications and potentially missing the window of opportunity.
  3. Shutting down a dangerous AI is not as simple as pressing a button; it can be complex, time-consuming, and error-prone.
Get a weekly roundup of the best Substack posts, by hacker news affinity:
15 implied HN points 20 Nov 23
  1. Defining good milestones for AI progress is challenging due to the evolution of tasks as AI capabilities advance.
  2. Milestones should focus on real-world tasks with economic implications to avoid proxy failures.
  3. Measuring AI progress through milestones like completing software projects independently or displacing human workers in certain jobs can provide insights on capabilities and real-world impact.
47 implied HN points 30 Apr 23
  1. The key question about AI is transitioning from 'can it think' to 'can it hold down a job?'
  2. Human-level intelligence in AI is not a simple threshold but a mix of capabilities across different tasks.
  3. Comparing AI to humans in the job market can provide a practical measure of AI's impact on society.
31 implied HN points 17 Jun 23
  1. AI has near-term potential to advance science, especially in complex domains like biology and material science
  2. AI can eliminate scarcity of access to expertise by providing instant and competent support in areas like customer service, healthcare, and education
  3. AI is increasing white-collar productivity through automation of tasks like writing code, emails, and generating illustrations, though challenges exist in the physical-world job market
15 implied HN points 12 Sep 23
  1. Intermediate superintelligence is not expected to happen overnight, but gradually surpass human capabilities on various tasks.
  2. Intelligence significantly impacts productivity in tasks; talented individuals can find more efficient solutions and execute them quickly.
  3. AI advancements go beyond intelligence, offering unique advantages like relentless focus, lack of fatigue, and enhanced communication abilities.
3 HN points 09 Aug 23
  1. Memory is central to almost everything we do, and different types of memory are crucial for complex tasks.
  2. Current mechanisms for equipping LLMs with memory have limitations, such as static model weights and limited token buffers.
  3. To achieve human-level intelligence, a breakthrough in long-term memory integration is necessary for AIs to undertake deep work.
3 HN points 18 Jul 23
  1. Current AI models are trained on final products, not the processes involved, which limits their ability to handle complex tasks.
  2. Training large neural networks like GPT-4 involves sending inputs, adjusting connection weights, and repeating the process trillions of times.
  3. To achieve human-level general intelligence, AI models need to be trained on the iterative processes of complex tasks, which may require new techniques and extensive training data.
3 HN points 20 Apr 23
  1. Current AI systems are still lacking critical cognitive abilities required for complex jobs.
  2. AI needs improvements in memory, exploration, puzzle-solving, judgement, clarity of thought, and theory of mind to excel in complex tasks.
  3. Addressing these gaps will be crucial for AI to reach artificial general intelligence and potentially replace certain human jobs.