A note on anthropomorphising language models
176 implied HN points • 13 Jun 23 • 🕹 Technology • AI • Language Models • Chatbots • Machine Learning
- New literature is always built on existing text.
- Prompting chat models is similar to asking a writer to expand on a fragment.
- Chatbot conversations can be viewed through the lens of 'theory of mind' and 'agency.'
YudBot: correct representations of uncomfortable views in GPT-4
157 implied HN points • 07 Apr 23 • 🕹 Technology • AI • Ethics • Chatbot • Simulation
- GPT-4 struggles to represent uncomfortable views accurately.
- Challenges in overcoming RLHF censoring in ChatGPT models.
- Importance of separating actual opinions from censoring influence in AI simulations.
The Linguistic Mirage
117 implied HN points • 16 Mar 23 • 🎭️ Culture • Linguistics • Identity • Authenticity • Communication • Technology
- In a world of evolving language, human and algorithmic expressions blend together.
- The omnipresence of LLMs blurs the boundaries between signs and meanings.
- Encountering LLMs can lead to a questioning of identity and authenticity.
How come GPTs don't ask for clarifying information?
2 HN points • 17 Jun 23 • 🕹 Technology • Artificial Intelligence • Machine Learning • Chatbots • Human-Machine Interaction
- GPTs don't typically ask for clarifying information during interactions.
- There are ways to encourage GPT-like systems to ask questions for better results.
- The training data and user preferences influence why GPTs may not ask for clarifications.
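One common way to encourage clarifying questions, as the summary above alludes to, is to make clarification an explicit part of the task via a system prompt. A minimal sketch follows; the prompt wording and the `build_messages` helper are illustrative assumptions, not taken from the original post, and the message format mirrors the common chat-completion convention of role/content dictionaries.

```python
# Hypothetical prompt wording: instructs the model to ask a clarifying
# question instead of answering when the request is ambiguous.
CLARIFY_SYSTEM_PROMPT = (
    "Before answering, decide whether the user's request is ambiguous. "
    "If it is, ask exactly one clarifying question instead of answering."
)

def build_messages(user_request: str) -> list[dict]:
    """Assemble a chat-style message list that prepends the
    clarification instruction as a system message."""
    return [
        {"role": "system", "content": CLARIFY_SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]

# An underspecified request, likely to trigger a clarifying question:
messages = build_messages("Write me a script for that thing we discussed.")
print(messages[0]["role"])  # prints "system"
```

The message list would then be passed to whichever chat API is in use; the point is simply that the instruction travels with every request rather than relying on the model's defaults.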
The Map Becomes The Territory (WMTP 1 of 3)
1 HN point • 22 Apr 23 • 🕹 Technology • AI • Language Models • Text generation • Machine Learning
- The series explores questions about Large Language Models and how they impact reasoning capacity.
- The article discusses misunderstandings and implications of models like GPT-3 in completing prompts.
- It emphasizes the importance of having compatible blueprints for understanding complex concepts.