The hottest Substack posts of Rozado’s Visual Analytics

And their main takeaways
283 implied HN points 29 Jan 25
  1. DeepSeek AI models show political preferences similar to those of American-made models. This suggests that AI systems may reflect the human biases embedded in their training.
  2. The findings indicate that AI models can carry the same ideologies as the people who create and train them, and it's important to be aware of this influence.
  3. For those curious about how political preferences impact large language models, there are more detailed analyses available to explore.
150 implied HN points 28 Jan 25
  1. OpenAI's new o1 models are designed to solve problems better by thinking through their answers first. However, they are much slower and cost more to run than previous models.
  2. The political preferences of these new models are similar to earlier versions, despite the new reasoning abilities. This means they still lean left when answering political questions.
  3. Even with their advanced reasoning, these models didn't change their political views, which leads to questions about how reasoning and political bias work together in AI.
183 implied HN points 23 Jan 25
  1. Large language models (LLMs) like ChatGPT may show political biases, but measuring these biases can be complicated. The biases may be more visible in long-form AI-generated text than in short, direct responses.
  2. Different types of LLMs exist: base models trained only to predict text, and conversational models fine-tuned to respond helpfully to users. These models often lean toward left-leaning language when generating text.
  3. By using a combination of methods to check for political bias in AI systems, researchers found that most conversational LLMs lean left, but some models are less biased. Understanding AI biases is essential for improving these systems.
350 implied HN points 13 Dec 24
  1. English Wikipedia mentions far-right political extremism three times more than far-left extremism. This shows a noticeable difference in how each side is portrayed.
  2. The terms used to describe political extremism vary, with 'extreme' more often linked to the right and 'radical' to the left. Even so, the overall pattern holds: right-wing extremism is mentioned more frequently.
  3. These patterns in Wikipedia echo trends found in news media, suggesting that the way political extremism is discussed might be influenced by broader social and historical factors.
383 implied HN points 28 Oct 24
  1. Most AI models show a clear left-leaning bias in their policy recommendations for Europe and the UK. They often suggest ideas like social housing and rent control.
  2. AI models tend to describe left-leaning political leaders and parties more positively than their right-leaning counterparts.
  3. When discussing extreme political views, AI models generally express negative sentiments towards far-right ideas, while being more neutral toward far-left ones.
316 implied HN points 22 Feb 24
  1. Customizable AI systems could be an alternative to one-size-fits-all AI systems, offering users the freedom to adjust settings based on their preferences.
  2. There's a debate about balancing truth and diversity/inclusion in AI systems, which raises questions about who should control how these systems are configured.
  3. Personalized AI systems where users can adjust settings themselves present a potential solution to the truth vs. values trade-off, though they come with risks like filter bubbles and societal polarization.
150 implied HN points 08 Mar 23
  1. Spanish media stand out for mentioning sexism and misogyny much more than media in other countries.
  2. The frequency of references to gender bias in Spanish media increased significantly between 2004 and 2008.
  3. Despite comparatively low rates of violence towards women in Spain, Spanish media heavily focus on gender prejudice, revealing a discrepancy between media coverage and sociological reality.
2 HN points 26 Feb 24
  1. AI models are being tested for 'Wokeness' along various dimensions, such as social justice and climate sustainability.
  2. Google's Gemini is not the most 'Woke' AI, with other companies having developed even more 'Woke' AIs.
  3. Experimental fine-tuned AI models like LeftWing GPT and Depolarizing GPT have been created for specific ideological alignments.