jonstokes.com · $5 / month

Jonstokes.com delves into the complexities and societal implications of AI and ML, exploring themes like AI safety, censorship, and power dynamics. It discusses the technical nuances of generative AI, its potential to innovate across fields, regulatory challenges, and ethical considerations surrounding AI's impact on society and individual freedoms.

AI/ML Technology · Censorship and Power · AI Safety and Ethics · Generative AI Applications · AI Regulation and Governance · Innovation and Societal Change

The hottest Substack posts of jonstokes.com, and their main takeaways
587 implied HN points 01 Mar 23
  1. Understand the basics of generative AI: a generative model produces a structured output from a structured input.
  2. The more complex the relationships between symbols, the more computational power it takes to relate them effectively.
  3. Language models like ChatGPT have no personal experiences or persistent memory; they respond using only the conversation context that fits in their token window (see the sketch below).
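
To make the token-window point concrete, here is a minimal Python sketch of the idea: the model remembers nothing between calls, so the application replays as much recent conversation as fits in the window. The window size, the whitespace token count, and build_prompt are illustrative assumptions, not any particular vendor's API.

    MAX_TOKENS = 4096  # assumed window size, for illustration only

    def count_tokens(text: str) -> int:
        # Crude whitespace approximation; real systems use a tokenizer.
        return len(text.split())

    def build_prompt(history: list[str], user_input: str) -> str:
        # Keep only as much recent history as fits in the token window.
        prompt_lines = [user_input]
        budget = MAX_TOKENS - count_tokens(user_input)
        for turn in reversed(history):
            cost = count_tokens(turn)
            if cost > budget:
                break
            prompt_lines.insert(0, turn)
            budget -= cost
        return "\n".join(prompt_lines)
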
391 implied HN points 30 Mar 23
  1. The AI safety debate involves technical details about AI systems like GPT-4 and cultural dynamics around the issue.
  2. The discussion includes concerns about regulating and measuring AI capabilities, as well as the divisions and allegiances within different groups.
  3. Some groups, such as the "Intelligence Deniers," firmly believe AI is a scam and refuse to credit AI progress, creating potential rifts among AI safety proponents.
319 implied HN points 21 Feb 23
  1. Generative AI is rapidly changing many aspects of society, affecting everything from artistic creation to education.
  2. Efforts to detect AI-generated content are ineffective, posing challenges for access control and gatekeeping.
  3. AI tools have the potential to enhance educational experiences and improve learning outcomes, but they may also disrupt traditional credentialing systems.
237 implied HN points 28 May 23
  1. Foundation language models go through fine-tuning phases to make them more user-friendly.
  2. Humans play a critical role in shaping the values and behaviors of these models during the fine-tuning process.
  3. Supervised fine-tuning exposes the model to smaller sets of carefully selected examples that anchor its output and establish dominant language structures (see the sketch below).
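
A toy sketch of the supervised fine-tuning loop described above, assuming a miniature stand-in model and a single fabricated demonstration; real SFT starts from a large pretrained model and thousands of curated examples.

    import torch
    import torch.nn as nn

    VOCAB, DIM = 1000, 64  # toy sizes; a real foundation model is far larger

    class TinyLM(nn.Module):
        # Stands in for a pretrained foundation model.
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.proj = nn.Linear(DIM, VOCAB)

        def forward(self, tokens):                # tokens: (seq_len,)
            return self.proj(self.embed(tokens))  # logits: (seq_len, VOCAB)

    model = TinyLM()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # One fabricated "carefully selected" demonstration: token ids plus
    # the next-token targets (shifted by one position).
    example = torch.randint(0, VOCAB, (33,))
    inputs, targets = example[:-1], example[1:]

    for _ in range(3):  # a few gradient steps anchored on the example
        loss = loss_fn(model(inputs), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
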
206 implied HN points 10 Jun 23
  1. Reinforcement learning is a technique that trains models through reward and penalty signals, the machine analogue of experiencing pleasure and pain in an environment over time.
  2. Human feedback plays a crucial role in fine-tuning language models: raters score outputs, indicating which responses people prefer.
  3. To train models effectively at scale, a preference model can emulate those human ratings and provide feedback without extensive human involvement (sketched below).
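
A minimal sketch of one common way such a preference model is trained, using a pairwise (Bradley-Terry style) loss; the toy feature vectors here stand in for real response representations.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy 16-dim "response features"; a real preference model reads the
    # full text of both candidate responses.
    reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.AdamW(reward_model.parameters(), lr=1e-3)

    preferred = torch.randn(8, 16)  # responses human raters liked
    rejected = torch.randn(8, 16)   # responses human raters disliked

    for _ in range(10):
        # Pairwise loss: the preferred response should score higher.
        margin = reward_model(preferred) - reward_model(rejected)
        loss = -F.logsigmoid(margin).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
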
237 implied HN points 01 May 23
  1. AI safety involves the debate between AI as a tool or an agent, impacting approaches to AI explainability and safety.
  2. There are conflicting folk conceptions of alignment, including individualist and collectivist perspectives centered around control.
  3. Whether we view AI as the genie or the lamp, that is, as an agent with its own goals or as a software tool, is crucial in shaping AI safety discussions and applications.
237 implied HN points 15 Mar 23
  1. Developers will build apps on top of ChatGPT and similar models to create interactive, knowledgeable AI assistants.
  2. The CHAT stack (Context, History, API, and Token window) describes how many software applications will operate in the near future (see the sketch below).
  3. GPT-4 introduces an enlarged token window, improved control surfaces, and a better ability to follow human instructions.
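
A minimal sketch of the CHAT stack's moving parts; call_llm is a hypothetical stand-in for a real chat-completion API, not any particular vendor's client.

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real chat-completion API call.
        return "stubbed model reply"

    def chat_turn(context: str, history: list[str], user_msg: str) -> str:
        # Context: background the assistant needs (docs, instructions).
        # History: prior turns, replayed on every call.
        # Both must fit inside the model's Token window.
        prompt = "\n".join([context, *history, f"User: {user_msg}", "Assistant:"])
        reply = call_llm(prompt)  # API: the call to the hosted model
        history += [f"User: {user_msg}", f"Assistant: {reply}"]
        return reply
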
175 implied HN points 22 Jun 23
  1. AI rules are inevitable, but the initial ones may not be ideal. It's a crucial moment to shape discussions on AI's future.
  2. Different groups are influencing AI governance. It's important to be aware of who is setting the rules.
  3. A product-safety approach is preferable in AI regulation: focus on validating specific AI implementations rather than regulating AI in the abstract.
164 implied HN points 15 Jun 23
  1. Generative AI has the potential to revolutionize the media industry and improve the quality of news stories.
  2. AI can streamline the news reporting process by assisting with drafting, editing, and formatting content.
  3. Creating AI-powered tools for editing, production, art, and promotion can enhance storytelling and make news creation more accessible.
195 implied HN points 21 Apr 23
  1. The rise of AI agents is introducing a new software paradigm that allows AI to make plans from text prompts.
  2. LLM-powered agents can generate detailed plans for achieving goals, revolutionizing the way tasks are accomplished.
  3. The agent paradigm is cheaper to develop in but costs more per run than traditional software, a trade-off akin to the cloud computing model (sketched below).
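
A minimal sketch of the agent loop implied above: plan first, then execute each step. plan_with_llm and run_step are hypothetical stand-ins; a real agent would prompt an LLM at both points, which is exactly why per-run costs climb.

    def plan_with_llm(goal: str) -> list[str]:
        # A real agent would prompt an LLM here, e.g. "Break this goal
        # into numbered steps: ..."
        return [f"step 1 toward: {goal}", f"step 2 toward: {goal}"]

    def run_step(step: str) -> str:
        # Each step may itself cost an LLM or tool call, which is why
        # per-run compute is higher than in traditional software.
        return f"result of {step!r}"

    def run_agent(goal: str) -> list[str]:
        return [run_step(step) for step in plan_with_llm(goal)]

    print(run_agent("summarize this week's AI policy news"))
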
175 implied HN points 21 Mar 23
  1. A skilled human editor can spot viral potential in stories better than AI models like GPT-4 or GPT-5.
  2. The cost per token for AI models like GPT-4 is high, making human editing more cost-effective for steering content into the viral spotlight.
  3. Context compression and token-window optimization are key challenges AI models must solve to catch up with human editors in understanding and writing content (see the sketch below).
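
A minimal sketch of one context-compression strategy: fold the oldest turns into a summary so recent turns stay verbatim within the token budget. The budget, the token count, and the summarize placeholder are all illustrative assumptions.

    BUDGET = 2000  # assumed token budget, for illustration only

    def tokens(text: str) -> int:
        return len(text.split())  # crude proxy for a real tokenizer

    def summarize(turns: list[str]) -> str:
        # Placeholder; a real system would ask an LLM for the summary.
        return "SUMMARY: " + " / ".join(t[:40] for t in turns)

    def compress(history: list[str]) -> list[str]:
        # Fold the oldest turns into summaries until the history fits.
        while sum(tokens(t) for t in history) > BUDGET and len(history) > 2:
            history = [summarize(history[:2]), *history[2:]]
        return history
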
154 implied HN points 18 May 23
  1. Different approaches to evaluating AI performance have practical implications in development, deployment, and regulation.
  2. Language models like GPT-4 struggle with resolving ambiguity in human language due to limitations in understanding context.
  3. Taking an engineering approach, providing relevant context, and improving language parsing can all help mitigate language model biases and inaccuracies (sketched below).
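
A minimal sketch of that context-engineering idea: wrap an ambiguous question with explicit context before it reaches the model. call_llm is a hypothetical stand-in for any chat-completion API.

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real chat-completion API call.
        return "stubbed model reply"

    def ask_with_context(question: str, context: str) -> str:
        # Supplying explicit context narrows the space of readings the
        # model must choose between.
        prompt = (
            f"Context: {context}\n"
            f"Answer using only the context above.\n"
            f"Question: {question}"
        )
        return call_llm(prompt)

    # "Bank" alone is ambiguous (river bank? financial bank?); the
    # context pins down the intended sense.
    ask_with_context("When was the bank founded?",
                     "The article discusses the First Bank of the United States.")
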