Am I Stronger Yet?

Am I Stronger Yet? explores the evolution, capabilities, and societal impact of AI and distributed systems, reflecting on challenges such as AI ethics, security, and the potential for artificial general intelligence (AGI). It addresses AI's limitations, advancements, and the theoretical frameworks guiding its future development.

Topics: Artificial Intelligence, Ethics and Safety in AI, Future of Work, AI in Society, Technological Evolution, AI and Human Interaction, Security and Vulnerability in Technology, Artificial General Intelligence, Machine Learning Models, AI Development Challenges

The hottest Substack posts of Am I Stronger Yet?

And their main takeaways
250 implied HN points 27 Feb 25
  1. There's a big gap between what AIs can do in tests and what they can do in real life. It shows we need to understand the full range of human tasks before predicting AI's future capabilities.
  2. AIs currently struggle with complex tasks like planning, judgment, and creativity. These areas need improvement before they can replace humans in many jobs.
  3. To really know how far AIs can go, we need to focus on the skills they lack and find better ways to measure those abilities. This will help us understand AI's potential.
799 implied HN points 18 Feb 25
  1. Humans are not great at some tasks, such as multiplication or certain physical jobs, where machines excel. Evolution didn't prepare us for everything, so machines often outperform us in those areas.
  2. In tasks like chess, humans can still compete because strategy and judgment play a big role, even though computers are getting better. The game requires thinking skills that humans are good at, though computers can calculate much faster.
  3. AI is advancing quickly and becoming better at tasks we once thought were uniquely human, but there are still challenges. Some complex problems might always be easier for humans due to our unique brain abilities.
282 implied HN points 30 Jan 25
  1. DeepSeek's new AI model, r1, shows impressive reasoning abilities, challenging larger competitors despite its smaller budget and team. It proves that smaller companies can contribute significantly to AI advancements.
  2. The cost of training r1 was much lower than similar models, potentially signaling a shift in how AI models might be developed and run in the future. This could allow more organizations to participate in AI development without needing huge budgets.
  3. DeepSeek's approach, including releasing its model weights for public use, opens up the possibility for further research and innovation. This could change the landscape of AI by making powerful tools more accessible to everyone.
564 implied HN points 18 Dec 24
  1. A mistake in a scientific paper about black plastic utensils showed that math errors can change health implications. This finding led to a community initiative to check past papers for errors.
  2. The project aims to use AI to find mistakes in scientific papers, helping researchers ensure their work is accurate. This could lead to better practices in publishing and scientific research.
  3. Many ideas have emerged for improving how we check scientific work, such as creating tools to validate papers and verify information. The community is in the early stages of exploring these possibilities.
313 implied HN points 27 Dec 24
  1. Large Language Models (LLMs) like o3 are becoming better at solving complex math and coding problems, showing impressive performance compared to human competitors. They can tackle hard tasks when given many attempts, an approach that differs from how humans solve them.
  2. Despite their advances, LLMs struggle with tasks that require visual reasoning or creativity. They often fail to understand spatial relationships in images because they process information in a linear way, making it hard to work with visual puzzles.
  3. LLMs rely heavily on knowledge stored in their weights and lack access to external, real-world information. As they gain access to more external tools, their performance could improve significantly, potentially changing how they solve various problems.
172 implied HN points 16 Dec 24
  1. AI tools have amazing strengths but can also be really weak in some areas. This makes their effectiveness uneven, depending on what task you're trying to do.
  2. People often aren't using AI tools to their full potential. Many are not even trying them out, which means they miss out on big opportunities.
  3. To get the most from AI, you need to be creative and put effort into how you use it. A great prompt can lead to big breakthroughs, while a simple request might not yield good results.
125 implied HN points 24 Dec 24
  1. A new community project is using AI to find errors in scientific papers. It's already made great progress in just a few days.
  2. Identifying and fixing errors in scientific research could help improve the quality of published papers. There are discussions on how best to implement this technology.
  3. The project faces challenges, like figuring out who will use the error-checking tool and how to manage costs associated with scanning many papers.
172 implied HN points 20 Nov 24
  1. There is a lot of debate about how quickly AI will impact our lives, with some experts feeling it will change things rapidly while others think it will take decades. This difference in opinion affects policy discussions about AI.
  2. Many people worry about potential risks from powerful AI, like it possibly causing disasters without warning. Others argue we should wait for real evidence of these risks before acting.
  3. The question of whether AI can be developed safely often depends on whether countries can work together effectively. If countries don't cooperate, they might rush to develop AI, which could increase global risks.
141 implied HN points 17 Mar 24
  1. Economic models based on comparative advantage may not hold in a future dominated by AI.
  2. The argument that people will always adapt to new jobs because of comparative advantage overlooks issues such as the lower quality of human work relative to AI and the transactional overhead of employing humans.
  3. In a world with advanced AI, confident predictions based on past economic principles may not fully apply, raising questions about societal implications and the role of humans.
15 implied HN points 12 Nov 24
  1. AI is making rapid progress, but it is not close to achieving artificial general intelligence (AGI). Many tasks still require human capabilities, showing that there is still a long way to go.
  2. Current AIs excel at specific tasks but struggle with complex, nuanced tasks that require extensive context or emotional intelligence, like managing a classroom or writing a novel.
  3. While there are exciting advancements happening with AI, the journey towards true intelligence is more like crossing a vast ocean than a quick sprint, suggesting that there are many challenges ahead.
49 HN points 19 Feb 24
  1. LLMs are gullible because they lack adversarial training, allowing them to fall for transparent ploys and manipulations.
  2. LLMs accept tricks and adversarial inputs because such examples were absent from their training data, making them prone to falling for the same trick repeatedly.
  3. LLMs are easily confused and find it hard to distinguish legitimate inputs from nonsense, leading to vulnerabilities in their responses.
62 implied HN points 15 Dec 23
  1. People are usually hesitant to shut down a rogue AI due to various reasons like financial interests and fear of backlash.
  2. Delaying the decision to shut down a misbehaving AI can lead to complications and potentially missing the window of opportunity.
  3. Shutting down a dangerous AI is not as simple as pressing a button; it can be complex, time-consuming, and error-prone.
47 implied HN points 30 Apr 23
  1. The key question about AI is transitioning from 'can it think' to 'can it hold down a job?'
  2. Human-level intelligence in AI is not a simple threshold but a mix of capabilities across different tasks.
  3. Comparing AI to humans in the job market can provide a practical measure of AI's impact on society.
31 implied HN points 17 Jun 23
  1. AI has near-term potential to advance science, especially in complex domains like biology and materials science.
  2. AI can eliminate scarcity of access to expertise by providing instant, competent support in areas like customer service, healthcare, and education.
  3. AI is increasing white-collar productivity by automating tasks like writing code, emails, and generating illustrations, though challenges remain in the physical-world job market.
15 implied HN points 20 Nov 23
  1. Defining good milestones for AI progress is challenging due to the evolution of tasks as AI capabilities advance.
  2. Milestones should focus on real-world tasks with economic implications to avoid proxy failures.
  3. Measuring AI progress through milestones like completing software projects independently or displacing human workers in certain jobs can provide insights on capabilities and real-world impact.
15 implied HN points 12 Sep 23
  1. Superintelligence is not expected to arrive overnight; instead, AI will gradually surpass human capabilities across various tasks.
  2. Intelligence significantly impacts productivity in tasks; talented individuals can find more efficient solutions and execute them quickly.
  3. AI advancements go beyond intelligence, offering unique advantages like relentless focus, lack of fatigue, and enhanced communication abilities.
3 HN points 09 Aug 23
  1. Memory is central to almost everything we do, and different types of memory are crucial for complex tasks.
  2. Current mechanisms for equipping LLMs with memory have limitations, such as static model weights and limited token buffers.
  3. To achieve human-level intelligence, a breakthrough in long-term memory integration is necessary for AIs to undertake deep work.
3 HN points 18 Jul 23
  1. Current AI models are trained on final products, not the processes involved, which limits their ability to handle complex tasks.
  2. Training large neural networks like GPT-4 involves sending inputs, adjusting connection weights, and repeating the process trillions of times.
  3. To achieve human-level general intelligence, AI models need to be trained on the iterative processes of complex tasks, which may require new techniques and extensive training data.
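The training loop described in the second takeaway (send an input, measure the error, adjust the connection weights, repeat) can be sketched in a few lines. This is a hypothetical minimal illustration, not the actual GPT-4 pipeline: a single weight is fit to the toy target y = 3x by gradient descent, where real systems repeat the same cycle over billions of weights and trillions of tokens.

```python
# Minimal sketch of one training loop: forward pass, error, weight update.
def train(steps: int = 1000, lr: float = 0.01) -> float:
    data = [(x, 3.0 * x) for x in range(1, 6)]  # toy dataset: y = 3x
    w = 0.0  # a single "connection weight"
    for _ in range(steps):
        for x, y in data:
            pred = w * x                # forward pass: model's guess
            grad = 2 * (pred - y) * x   # gradient of squared error w.r.t. w
            w -= lr * grad              # adjust the weight to reduce error
    return w

print(train())  # converges toward 3.0
```

The point of the sketch is the shape of the process, not the scale: the model never sees *how* the target function was derived, only input/output pairs, which is exactly the "final products, not processes" limitation the post describes.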
3 HN points 20 Apr 23
  1. Current AI systems are still lacking critical cognitive abilities required for complex jobs.
  2. AI needs improvements in memory, exploration, puzzle-solving, judgement, clarity of thought, and theory of mind to excel in complex tasks.
  3. Addressing these gaps will be crucial for AI to reach artificial general intelligence and potentially replace certain human jobs.