Introducing ConfirmationBiasGPT
10 implied HN points • 03 Dec 23 • 🎭️ Culture
Tags: Media, Community, Identity, Purpose, Reality
- New tools can help launder opinions into looking like scientific knowledge.
- Increasing personalization of content can lead to fragmentation of shared reality.
- Fragmentation of shared reality may disrupt community, identity, and purpose in life.
Coding LLMs are here to stay: Here’s how to write your code so that LLMs can extend it
10 HN points • 16 Jul 23 • 🕹 Technology
Tags: Coding, Large Language Models, Software Engineering, Unit Testing
- Write precise instructions so that Large Language Models can be productive.
- Design codebases with reduced complexity and ambiguity for better LLM performance.
- Use widely adopted coding conventions and avoid hidden logic so that LLMs can understand the code.
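A minimal sketch of the "avoid hidden logic" guideline, with hypothetical names: the same computation written twice, first relying on module-level state with a cryptic name, then with explicit parameters and conventional naming that make the contract visible to an LLM reading only this function.

```python
# LLM-unfriendly: hidden logic in module-level state, cryptic naming.
_cfg = {"r": 0.2}

def calc(x):
    return x * (1 + _cfg["r"])

# LLM-friendly: explicit parameters, type hints, a docstring, and a
# conventional name make the behavior clear from the function alone.
def apply_tax(net_price: float, tax_rate: float = 0.2) -> float:
    """Return the gross price after applying tax_rate to net_price."""
    return net_price * (1 + tax_rate)
```

Both functions compute the same result; the second gives an LLM (or a human reviewer) everything it needs without reading the rest of the module.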
Using GPT for backend logic is 10x harder
10 implied HN points • 29 Mar 23 • 🕹 Technology
Tags: AI, Personalization
- Using GPT for backend logic can be 10x harder than expected.
- Cherry-picking successful results from GPT requires a lot of effort and experimentation.
- GPT works well in chatbot interfaces with human feedback but is challenging for backend logic, where there are no real-time corrections.
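Because a backend has no human in the loop to catch bad outputs, a common mitigation is to validate the model's response and retry on failure. A minimal sketch, with a hypothetical `call_llm` stub standing in for the real chat-completion API:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return '{"sentiment": "positive"}'

def classify_sentiment(text: str, max_retries: int = 3) -> str:
    """Ask the model for JSON and validate it, retrying on bad output."""
    prompt = (
        "Classify the sentiment of the following text as positive, "
        'negative, or neutral. Reply with JSON like {"sentiment": "..."}.'
        f"\n\nText: {text}"
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            sentiment = json.loads(raw)["sentiment"]
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # malformed output: retry instead of crashing
        if sentiment in {"positive", "negative", "neutral"}:
            return sentiment
    raise ValueError("model never returned a valid label")
```

The validate-and-retry loop is the backend's substitute for the corrections a chat user would make by hand; it adds latency and cost, which is part of why backend use is harder than it looks.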
Large Language Models and the end of Moore’s Law
3 HN points • 22 Mar 23 • 🕹 Technology
Tags: Computing, Machine Learning, AI, Chip Design
- Moore's Law may be reaching its limits due to the challenges of making transistors smaller.
- Investment in computation offers exponential returns, impacting industries well beyond computing.
- Machine learning, especially through large language models, is advancing rapidly and reshaping how we use technology.
Prompting GPT is hard - here’s what I learned building a creativity training app using the ChatGPT API
2 HN points • 28 Mar 23 • 🕹 Technology
Tags: AI, APIs, Prompting, Latency, Model performance
- You can ask GPT to determine whether user submissions belong to a category.
- Allow users to enter any category they want for more flexibility.
- Simplicity matters in prompt engineering and improves AI performance.
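The category-membership check described above can be sketched as follows. The `ask_model` stub is hypothetical and stands in for a call to the ChatGPT API; the point is the prompt itself, kept deliberately simple: one question, one-word answer.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stub for a ChatGPT API call."""
    return "yes"

def belongs_to_category(submission: str, category: str) -> bool:
    """Keep the prompt simple: a single question with a one-word answer."""
    prompt = (
        f'Does "{submission}" belong to the category "{category}"? '
        "Answer with exactly one word: yes or no."
    )
    answer = ask_model(prompt).strip().lower()
    return answer.startswith("yes")
```

Letting the user supply any `category` string gives the flexibility the post describes, while constraining the model to "yes or no" keeps parsing trivial and latency low.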