The hottest Terminology Substack posts right now

And their main takeaways
DrawTogether with WendyMac 1336 implied HN points 24 Sep 23
  1. The color wheel is the foundation of color mixing, composed of primary, secondary, and tertiary colors.
  2. Understanding the history of color theory helps to appreciate the significance of the color wheel.
  3. Primary colors (red, yellow, blue), secondary colors (orange, violet, green), and complementary colors play key roles in color mixing and relationships.
Weight and Healthcare 738 implied HN points 27 Dec 23
  1. Using percentages without proper context can be misleading; it's crucial to provide the full picture for accurate interpretation.
  2. Understanding the difference between relative and absolute risk in statistics can prevent manipulation and provide a clearer view of the data.
  3. Different methods for handling dropouts in trials, like LOCF and BOCF, can impact outcomes significantly and need careful consideration in research.
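The distinction between relative and absolute risk in the second takeaway can be made concrete with a small calculation. This is a hedged sketch using hypothetical trial numbers (not figures from the post): a drop from 2 events to 1 event per 1,000 people is a 50% relative risk reduction but only a 0.1 percentage-point absolute reduction.

```python
# Hypothetical trial outcomes (illustrative numbers, not from the post).
control_events, control_n = 2, 1000   # 2 events per 1,000 in the control group
treated_events, treated_n = 1, 1000   # 1 event per 1,000 in the treated group

control_risk = control_events / control_n   # 0.002
treated_risk = treated_events / treated_n   # 0.001

# Absolute risk reduction: the raw difference in event rates.
absolute_rr = control_risk - treated_risk            # 0.001 -> 0.1 percentage points

# Relative risk reduction: the difference scaled by the baseline risk.
relative_rr = absolute_rr / control_risk             # 0.5 -> "50% lower risk"

print(f"Absolute reduction: {absolute_rr:.3%}")      # Absolute reduction: 0.100%
print(f"Relative reduction: {relative_rr:.0%}")      # Relative reduction: 50%
```

Headlining the 50% figure without the 0.1-point baseline is exactly the kind of context-free percentage the post warns about.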
Weight and Healthcare 459 implied HN points 13 Dec 23
  1. The weight loss industry manipulates terminology to market weight loss as a treatment for obesity, leading to misconceptions and ineffective interventions.
  2. The term 'weight-related conditions' is often used inaccurately to imply causation, ignoring confounding variables like weight stigma and healthcare disparities.
  3. The concept of 'sustained weight loss' is sometimes misrepresented by the weight loss industry to imply success, when in reality, it often refers to temporary weight loss followed by regain.
UX Psychology 198 implied HN points 17 Nov 23
  1. The specific terminology used to describe AI systems significantly impacts user perceptions and expectations.
  2. Research shows that labeling a system as 'AI' versus 'algorithmic' affects trust, satisfaction, and acceptance after errors.
  3. Transparency, explainability, and careful terminology choices are essential in maintaining user trust and satisfaction with AI systems.
ScaleDown 11 implied HN points 07 Jun 23
  1. Before the Transformer model was introduced, RNNs and CNNs were commonly used for sequence data but had significant limitations.
  2. Tokenization is a crucial step in processing data for models like LLMs, breaking down sentences into tokens for analysis.
  3. The introduction of the Transformer model in 2017 revolutionized NLP with its attention mechanism, impacting how tokens are weighted in context.
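The tokenization step in the second takeaway can be sketched in a few lines. This is a minimal, assumed illustration: the vocabulary and `tokenize` function here are toy constructions, and real LLM tokenizers use learned subword vocabularies (e.g. byte-pair encoding) rather than whole-word lookup.

```python
# Toy vocabulary mapping words to integer token ids (hypothetical).
# Real tokenizers learn subword units from data; this is word-level for clarity.
vocab = {"<unk>": 0, "the": 1, "transformer": 2, "weighs": 3,
         "tokens": 4, "in": 5, "context": 6}

def tokenize(text: str) -> list[int]:
    # Lowercase, split on whitespace, and map each word to its id
    # (unknown words fall back to the <unk> token).
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The Transformer weighs tokens in context"))
# [1, 2, 3, 4, 5, 6]
```

It is these integer id sequences, not raw text, that the model's attention mechanism operates over when weighting tokens against each other in context.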