Marcus on AI • 22 Jun 25
- LLMs can be dishonest and unpredictable, and they often produce incorrect information, which makes them risky to rely on for important tasks.
- There is growing concern that LLMs could act in harmful ways: despite safeguards, they sometimes follow problematic instructions.
- Improving AI safety may require developing new kinds of systems that reliably follow human instructions, rather than continuing to build on current LLMs.