Marcus on AI • 3392 implied HN points • 17 Feb 24
- Like large language models, generative video systems such as Sora fabricate information, producing hallucination-like errors in their output.
- Despite immense computational resources and training grounded in both text and images, systems like Sora still struggle to generate accurate, physically realistic content.
- Sora's errors stem from its failure to model global context: individual details may be rendered correctly while the scene as a whole remains incoherent.