Marcus on AI • 47783 implied HN points • 07 Jun 25
- LLMs have a hard time reliably solving problems like the Tower of Hanoi once the puzzles grow even moderately complex, which is concerning because the task has a simple, guaranteed algorithmic solution (see the sketch after this list), so failing at it exposes real limits in their reasoning abilities.
- Even the newer reasoning models fail to apply logic consistently or to produce correct answers reliably, which points to fundamental limitations in how LLMs are designed.
- For now, LLMs remain useful for tasks like coding assistance and brainstorming, but they cannot be depended on for work that requires rigorous logic and consistent correctness.
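For contrast, here is a minimal sketch (in Python, added here for illustration and not taken from the post) of the classic recursive Tower of Hanoi algorithm. A few lines of deterministic code produce a provably correct move sequence for any number of disks, which is exactly the kind of reliability the summary above says LLMs lack.

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Classic recursive solution: returns the full move list for n disks (2**n - 1 moves)."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # shift the top n-1 disks onto the spare peg
        moves.append((source, target))              # move the largest remaining disk to the target
        hanoi(n - 1, spare, target, source, moves)  # shift the n-1 disks back on top of it
    return moves

if __name__ == "__main__":
    seq = hanoi(8)
    print(len(seq), "moves")  # 255 moves, each correct by construction
    print(seq[:4])            # [('A', 'B'), ('A', 'C'), ('B', 'C'), ('A', 'B')]
```

The point of the contrast: a short deterministic program handles every instance exactly, while the claims above are that LLMs, including reasoning models, become unreliable on the same kind of task as it scales.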