ML in Practice

ML in Practice explores the diverse landscape of artificial intelligence and machine learning, addressing topics such as AI hype, the sentience of language models, challenges in ML tooling and project implementation, ethical considerations, and the future direction of AI research. It combines theoretical discussions with practical insights from an experienced practitioner's perspective.

Artificial Intelligence, Machine Learning, AI Ethics and Society, Software Development, Data Science, ML Tools and Practices, Project Management in ML, AI Research Trends

The hottest Substack posts of ML in Practice

And their main takeaways
58 implied HN points 20 Mar 23
  1. The discussion around AI is often fragmented, focusing on specific aspects and avoiding deeper questions.
  2. AI marketing often emphasizes sensationalized features and capabilities for publicity.
  3. The AI industry is driven by significant financial investments, with companies capitalizing on the technology for profit.
0 implied HN points 23 Jun 22
  1. There is ongoing debate about whether recent large language models are sentient.
  2. Language models like LaMDA may seem sentient, but they are essentially matching patterns learned from training data.
  3. Consciousness in AI is complex due to the lack of clear definitions.
0 implied HN points 24 Nov 21
  1. Consulting workload can be tricky to balance between too little and too much.
  2. Transitioning ML projects to production involves considerable engineering work and challenges.
  3. Companies often lag behind in adopting new technology due to the complexity of their existing tech stack.
0 implied HN points 07 Jun 21
  1. The discussion highlights the need for innovation in ML tools beyond existing offerings from cloud companies.
  2. Improving ML practices involves managing uncertainty in projects and addressing data quality and organizational issues, not just tooling.
  3. Transformative tools can change the way people work and think, like Hadoop did with its functional architecture.
0 implied HN points 10 May 21
  1. The book 'Hooked' discusses how products are designed to be habit-forming, focusing on cognitive biases and ethical considerations.
  2. There is ongoing discussion in the ML community about the reviewing and publication model, with a need for constructive review processes.
  3. The question of how much math knowledge is needed for ML practice is raised, comparing it to using guitar pedals for making music.
0 implied HN points 23 Apr 21
  1. People are struggling to create software at scale.
  2. Open source is seen as a scalable approach to writing software.
  3. There is a need for better programming languages and tooling.
0 implied HN points 18 Jun 21
  1. Building data products involves challenges like data acquisition, cleaning, and creating pipelines for data retrieval.
  2. Managing data dependencies among different parts of an organization is crucial when building systems like recommender engines.
  3. Distributed responsibility for data quality, understanding the value of data, and leaving room for exploration are key aspects of effective data management.
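The data-product challenges above (acquisition, cleaning, and quality) can be sketched as a tiny pipeline. This is an illustrative example, not code from the newsletter; the CSV data, field names, and quality rules are all invented for the sketch.

```python
import csv
import io

# Invented raw input: a ratings feed with a missing value and a bad id.
RAW_CSV = """user_id,rating
1,4.5
2,
3,5.0
bad,3.0
"""

def acquire(raw: str) -> list[dict]:
    """Acquisition step: parse raw CSV into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def clean(records: list[dict]) -> list[dict]:
    """Cleaning step: drop rows with missing ratings or non-numeric ids."""
    cleaned = []
    for r in records:
        if r["rating"] and r["user_id"].isdigit():
            cleaned.append({"user_id": int(r["user_id"]),
                            "rating": float(r["rating"])})
    return cleaned

def validate(records: list[dict]) -> list[dict]:
    """Quality step: fail fast if a data-quality rule is violated."""
    for r in records:
        assert 0.0 <= r["rating"] <= 5.0, f"rating out of range: {r}"
    return records

result = validate(clean(acquire(RAW_CSV)))
print(result)  # only users 1 and 3 survive cleaning
```

Keeping each stage a separate function is one way to make the "distributed responsibility" point concrete: different teams can own acquisition, cleaning, and validation independently.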
0 implied HN points 28 May 21
  1. MIT research suggests deep learning is reaching computational limits.
  2. Questioning the necessity of vast computing power in deep learning research.
  3. Exploring more resource-efficient approaches in AI research is crucial.
0 implied HN points 17 May 21
  1. Consider the challenges of transitioning from exploratory notebooks to production code in MLOps.
  2. Be aware of survivorship bias when evaluating success stories of founders.
  3. Learn from Bruno Lowagie's experience in building a business around Open Source Software.
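The notebook-to-production transition mentioned above often starts with a small refactor: inline exploratory code becomes a pure, testable function with explicit inputs and edge cases. The example below is a hypothetical illustration; the names and numbers are invented.

```python
# --- as it might appear in an exploratory notebook cell ---
# prices = [100, 102, 98, 105]
# returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

# --- as production code: named function, typed signature, edge cases ---
def simple_returns(prices: list[float]) -> list[float]:
    """Period-over-period returns; inputs shorter than 2 yield []."""
    if len(prices) < 2:
        return []
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

print(simple_returns([100, 102, 98, 105]))
```

The function version can be imported, unit-tested, and wired into a pipeline, which is most of the engineering work the notebook version defers.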
0 implied HN points 21 Apr 22
  1. Coding can be refreshing and rewarding, even after taking a break.
  2. Team setup and project approach are crucial in data science projects.
  3. A common mistake is treating a data science project as merely implementing a library.
0 implied HN points 11 Oct 23
  1. AI should enable humanity rather than supersede it, engaging with us empathetically.
  2. Archetypes for AI include AI overlords, companions or servants, and eye-level partners.
  3. Current AI lacks inner life and self-awareness, functioning more as subsystems or problem solvers.
0 implied HN points 30 Apr 21
  1. MLOps is standardizing around concepts like continuous training and monitoring for model drift.
  2. The EU is proposing regulations on AI use, defining AI through various technologies.
  3. Consider working alone and using tools when writing software at scale.
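One ingredient of the model-drift monitoring mentioned above can be sketched as a simple distribution check: compare a serving-time feature against its training-time baseline. The z-score rule and the threshold of 3 are illustrative assumptions, not a standard from the newsletter.

```python
import statistics

def drifted(baseline: list[float], live: list[float],
            z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean sits far from the baseline mean,
    measured in baseline standard deviations (an illustrative rule)."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.fmean(live) - mu) / sigma
    return z > z_threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]
print(drifted(baseline, [10.2, 9.8, 10.1]))   # similar distribution
print(drifted(baseline, [25.0, 26.0, 24.0]))  # clearly shifted
```

In a continuous-training setup, a check like this would gate whether retraining is triggered; production systems typically use richer statistics than a single mean comparison.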