The Counterfactual

The Counterfactual explores cognitive science, AI, and statistics through topics like large language models, their cognitive capabilities, tokenization, theory of mind, human irrationality, language understanding, and the impact of AI on culture and communication. It discusses methods for evaluating linguistic and statistical claims and the broader cognitive implications of AI technologies.

Cognitive Science • Artificial Intelligence • Language Models • Statistics • Human Cognition • Language Understanding • AI and Society • Ethics in AI • Cognitive Diversity

The hottest Substack posts of The Counterfactual

And their main takeaways
119 implied HN points • 22 Jul 22
  1. Language is shaped by how we use it, and machine learning models might influence our language by suggesting words or phrases. Over time, these suggestions could change the way we communicate.
  2. The widespread use of predictive text and language models could either slow down language change by promoting similar expressions, or lead to new and unexpected language innovations.
  3. We could see personalized language models that adapt to individual users, potentially changing how we write and understand language, and perhaps reducing how clearly we need to spell things out.
59 implied HN points • 20 Feb 23
  1. Cognitive science and linguistics are often too focused on English, which means we miss out on understanding how different languages work. Studying only a few languages makes it hard to see the full picture of language and cognition.
  2. Different languages influence how we think and perceive the world. For example, some languages have unique ways of expressing colors or time that can change how speakers of those languages understand these concepts.
  3. To improve our understanding of cognition, researchers need to include a wider variety of languages in their studies. We should explore languages beyond English to get a better grasp on how the human mind works across different cultures.
39 implied HN points • 29 May 23
  1. Large language models (LLMs) like GPT-4 are often referred to as 'black boxes' because they are difficult to understand, even for the experts who create them. This means that while they can perform tasks well, we might not fully grasp how they do it.
  2. To make sense of LLMs, researchers are trying to use models like GPT-4 to explain the workings of earlier models like GPT-2. One model generates explanations of another model's neuron activations, aiming to uncover how they function (a toy version of the scoring step is sketched after this list).
  3. Despite the efforts, current methods only explain a small fraction of neurons in these LLMs, which indicates that more research and new techniques are needed to better understand these complex systems and avoid potential failures.
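To make the second point concrete: one common way to score such an explanation is to have a simulator predict the neuron's activations from the explanation alone, then check how well those predictions track the real activations. Below is a toy sketch of that scoring step; all numbers are invented, and in the real pipeline both series would come from model runs.

```python
# Toy sketch of explanation scoring: judge an explanation by how well activations
# simulated from it correlate with the neuron's actual activations.
import numpy as np

# Actual activations of one GPT-2 neuron across a few tokens (made up here).
actual = np.array([0.1, 2.3, 0.0, 1.8, 0.2, 2.9])

# Activations a simulator predicted from an explanation like "fires on food words".
simulated = np.array([0.0, 2.0, 0.1, 1.5, 0.3, 2.5])

# Correlation-style score: closer to 1.0 means the explanation predicts the neuron well.
score = np.corrcoef(actual, simulated)[0, 1]
print(f"explanation score ~ {score:.2f}")
```

Low scores on most neurons are exactly the limitation the third point describes.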
59 implied HN points • 07 Dec 22
  1. Understanding language might not need physical experiences. This means that Large Language Models could potentially understand language differently than humans do.
  2. People can grasp abstract concepts and visual information even without direct experiences, like those who are blind or those with aphantasia. This challenges the idea that you must physically experience something to understand it.
  3. Using language itself can be a way to learn about the world. Language helps us form ideas and understand concepts, even if we haven't experienced everything firsthand.
59 implied HN points • 04 Oct 22
  1. Recommendation systems can help us find new favorites but also risk making our choices repetitive. If we're only shown what we already like, we might miss out on discovering exciting new things.
  2. There's a balance between exploring new options and sticking to what we know. Too much of either can lead to boredom or discomfort, so it's important to mix both approaches in our choices (a tiny explore/exploit sketch follows this list).
  3. Serendipity, or those happy accidents that lead to great moments, can be lost with strict recommendation systems. Sometimes the best experiences come from unexpected encounters, not just from things we already enjoy.
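The explore/exploit balance in the second point is often formalized as a bandit problem. The sketch below is a tiny epsilon-greedy example with invented "enjoyment" probabilities; it isn't from the post, just an illustration of how a recommender might mix familiar picks with occasional experiments.

```python
# Epsilon-greedy sketch: mostly exploit the current favorite, but explore sometimes.
import random

random.seed(0)
true_enjoyment = {"old favorite": 0.7, "new band": 0.5, "weird pick": 0.9}  # unknown to the system
estimates = {k: 0.0 for k in true_enjoyment}
counts = {k: 0 for k in true_enjoyment}
epsilon = 0.2  # fraction of the time we deliberately try something new

for _ in range(500):
    if random.random() < epsilon:
        choice = random.choice(list(true_enjoyment))   # explore
    else:
        choice = max(estimates, key=estimates.get)     # exploit the current favorite
    reward = random.random() < true_enjoyment[choice]  # did we enjoy it this time?
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]  # running average

print(estimates)  # with some exploration, the "weird pick" eventually surfaces as best
```

Set epsilon to zero and the loop never discovers the better option, which is the loss of serendipity the third point worries about.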
59 implied HN points • 17 Aug 22
  1. Consumer acceptance of cultured meat varies widely. Some people are very open to trying it, while others are quite resistant and refuse even to consider it.
  2. Concerns about the unnaturalness and safety of cultured meat are significant barriers to its acceptance. Many people are worried about how it is made, even if it tastes similar to traditional meat.
  3. Economic factors are key in determining whether people will choose cultured meat over conventional options. If the price of cultured meat becomes competitive, it could lead to more widespread adoption.
59 implied HN points • 17 Jul 22
  1. The newsletter will cover topics like language, statistics, and AI, mixing research with personal thoughts. Expect both solid research reviews and imaginative columns about the future.
  2. Posts will be written in a clear, clean format using Substack. This platform helps catch mistakes easily and connects with a larger community of writers and readers.
  3. The author aims to write about things that are interesting and useful, hoping to share knowledge and insights that spark curiosity in readers.
39 implied HN points • 19 Sep 22
  1. GPT-3 interprets 'some' as meaning about two out of three letters, but it doesn't adjust that interpretation based on how much the speaker knows. Humans do adjust their reading to the context.
  2. When asked whether the speaker knows how many letters have checks, GPT-3 answers correctly if the question comes before the speaker says 'some' or 'all'; once those words appear, it leans on them too heavily.
  3. GPT-3's way of interpreting language differs from ours: it seems to assign words a fixed meaning regardless of the situation, whereas humans use context to arrive at a better interpretation (a rough probing setup is sketched after this list).
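As a rough illustration of this kind of probing, the sketch below compares how strongly a language model favors different numeric answers after the word 'some', using GPT-2 from Hugging Face transformers as a freely available stand-in for GPT-3; the prompt wording is invented, not the stimuli from the post.

```python
# Score candidate answers by the total log-probability the model assigns to them.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities of `continuation` given `prompt`."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(full_ids).logits.log_softmax(dim=-1)
    total = 0.0
    for pos in range(prompt_len, full_ids.shape[1]):
        # Logits at position pos - 1 predict the token at position pos.
        total += logprobs[0, pos - 1, full_ids[0, pos]].item()
    return total

context = ("A speaker checked some of the three letters. "
           "Q: How many of the letters have checks? A:")
for answer in [" Two", " Three"]:
    print(answer.strip(), continuation_logprob(context, answer))
```

Varying what the context says about the speaker's knowledge, and comparing the scores, is the basic logic of the experiment described above.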
39 implied HN points • 07 Sep 22
  1. Language models like GPT-3 could have different effects on how language evolves, including slowing it down, speeding it up, or having no effect at all.
  2. One possible outcome is that language models might make our communication more concise, which could lead to unusual and harder-to-understand language forms.
  3. While GPT-3 can generate reasonable ideas about language change, it's important to be skeptical of its understanding and treat its responses as interesting but not always reliable.
1 HN point • 08 Jul 24
  1. Mechanistic interpretability helps us understand how large language models (LLMs) like ChatGPT work, breaking down their 'black box' nature. This understanding is important because we need to predict and control their behavior.
  2. Different research methods, like classifier probes and activation patching, are used to explore how components in LLMs contribute to their predictions. These techniques help researchers pinpoint which parts of the model are responsible for specific tasks (a minimal probe sketch follows this list).
  3. There's a growing interest in this field, as researchers believe that knowing more about LLMs can lead to safer and more effective AI systems. Understanding how they work can help prevent issues like bias and deception.
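To make the 'classifier probe' idea concrete, here is a minimal sketch of the recipe using synthetic activations with a planted linear signal; with a real model, the states would come from forward hooks on a chosen layer rather than a random number generator.

```python
# Classifier-probe sketch: can a simple linear model decode a property from hidden states?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d_model = 2000, 768                       # pretend these are residual-stream states from one layer
labels = rng.integers(0, 2, size=n)          # some property of each input (e.g. past vs. present tense)
direction = rng.normal(size=d_model)         # plant a linear signal so the probe has something to find
states = rng.normal(size=(n, d_model)) + np.outer(labels - 0.5, direction)

X_train, X_test, y_train, y_test = train_test_split(states, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
# High held-out accuracy suggests the property is linearly decodable from that layer;
# activation patching goes further by editing states and watching the output change.
```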
19 implied HN points • 27 Mar 23
  1. Disgust sensitivity and gender are important factors in whether people want to try cultured meat. Generally, men are more willing than women, and those who feel more disgusted are less likely to try it.
  2. How people feel about cultured meat really matters. If they express positive feelings, they're more likely to want to try it and even pay extra for it.
  3. Even with different factors considered, only about 25% of what makes people willing to try cultured meat can be explained. This shows there's still a lot to discover about what influences these decisions.
19 implied HN points • 16 Jan 23
  1. People often think cultured meat is unnatural, which makes them hesitant to eat it. This feeling comes from a fear of trying new foods and being disgusted by the idea of lab-grown meat.
  2. Discussions around meat can shift when we point out that conventional meat production is also unnatural. Many people are surprised to learn how far modern farming practices are from anything that might be considered natural.
  3. It helps to present cultured meat as food rather than as a lab product. When people see it served on a plate instead of in a lab, they tend to feel more positive about it.
19 implied HN points • 20 Dec 22
  1. Metaphors shape how we think about emotions like anger. For example, saying we need to 'blow off steam' suggests that expressing anger can help relieve it.
  2. Some people feel that expressing anger, like 'picking at a wound,' can make it worse over time. It may lead to more anger instead of helping to heal it.
  3. Choosing a metaphor for anger depends on the person and situation. Both 'blowing off steam' and 'picking a scab' have valid points about handling anger, but they suggest different approaches.
19 implied HN points • 06 Nov 22
  1. We usually judge whether humans understand language by their behavior: when people respond appropriately to what is said, we assume they understand it.
  2. There are deeper properties, like grounding or compositionality, that some believe are essential for true understanding. These properties are often debated in relation to how we define understanding.
  3. The ongoing discussion about human language understanding can help us figure out if machines, like language models, can genuinely understand language too.
0 implied HN points • 07 Feb 23
  1. It's tough to tell if text is written by a human or a language model like ChatGPT. People are concerned about students using it for school work or spreading false information.
  2. Different methods are being proposed to detect machine-generated text, like checking word patterns or adding hidden markers to the text (a toy version of the marker idea follows this list). However, each method has its own challenges and limitations.
  3. As more tools become available for generating text easily, it raises worries about the quality and authenticity of online content. Many fear this could make online information less trustworthy.
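As a toy illustration of the 'hidden marker' (watermark) idea mentioned above, the snippet below checks what fraction of tokens fall into a pseudo-random 'green' half of the vocabulary determined by the preceding token. Real schemes bias generation toward green tokens and apply a proper statistical test; the hashing details here are made up for illustration.

```python
# Count how often each token lands in the "green" half of a pseudo-random vocabulary
# split seeded by the previous token; watermarked text should score well above 0.5.
import hashlib

def green_fraction(tokens: list[str]) -> float:
    green_hits = 0
    for prev, curr in zip(tokens, tokens[1:]):
        digest = hashlib.sha256(f"{prev}|{curr}".encode()).hexdigest()
        if int(digest, 16) % 2 == 0:          # current token falls in the "green" half
            green_hits += 1
    return green_hits / max(len(tokens) - 1, 1)

sample = "the model generated this passage one token at a time".split()
print(f"green fraction: {green_fraction(sample):.2f}")  # around 0.5 for unwatermarked text
```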
0 implied HN points • 16 Nov 22
  1. Humans understand language through experiences and actions. This means that we connect words with real-world meanings based on what we sense and do.
  2. Large Language Models (LLMs) struggle with understanding because they learn only from text. They lack the real-life experiences that humans have to ground their understanding in reality.
  3. Research shows that our brains activate specific areas related to actions when we comprehend language. This suggests that our ability to understand words may rely on these experiences and not just on the words themselves.
0 implied HN points • 13 May 24
  1. Subscribers can vote on topics each month for future posts. This means readers have a say in what gets discussed.
  2. Past post topics have included readability and tokenization in language models. These topics show a focus on language and technology.
  3. There’s a free trial offered for new subscribers. People can explore content before committing to a paid subscription.
0 implied HN points • 01 Apr 24
  1. The study tested whether human readers actually find text easier or harder to read after a large language model rewrites it to be 'easier' or 'harder'. People did rate the 'easier' versions as more readable than the 'harder' ones.
  2. Different readability metrics correlated with human ratings, but they were often more aligned with each other than with actual human judgment (a small example of this kind of comparison follows this list). This suggests that while these tools can help gauge readability, they might not capture all aspects of what makes a text readable.
  3. The research highlights that 'readability' is complex and subjective. Future studies should explore how different audiences might interpret readability, and consider other factors like comprehension and enjoyment when assessing text.
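For a sense of what comparing readability metrics against human ratings looks like, here is a small sketch using the textstat package as a stand-in for whichever formulas the study actually used, with invented passages and ratings in place of the real data.

```python
# Compare two off-the-shelf readability formulas with (hypothetical) human ratings.
import textstat
from scipy.stats import spearmanr

passages = [
    "The cat sat on the warm mat.",
    "We went to the store and bought bread, milk, and eggs for breakfast.",
    "The committee deliberated at length before reaching a provisional consensus.",
    "Notwithstanding the aforementioned stipulations, ratification remains contingent on review.",
]
human_ratings = [6.8, 6.1, 3.5, 1.9]  # hypothetical 1-7 "easy to read" judgments

ease = [textstat.flesch_reading_ease(p) for p in passages]
grade = [-textstat.flesch_kincaid_grade(p) for p in passages]  # negate so higher = easier

rho_ease, _ = spearmanr(ease, human_ratings)
rho_grade, _ = spearmanr(grade, human_ratings)
rho_metrics, _ = spearmanr(ease, grade)
print(rho_ease, rho_grade, rho_metrics)
# With real data, the post found the metrics agreed with each other more than with the humans.
```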