Mindful Modeler

Mindful Modeler focuses on improving machine learning practice through statistical thinking, critical data analysis, and model interpretability. It covers methods such as conformal prediction, quantile regression, and handling imbalanced data, emphasizing uncertainty estimation, thoughtful data treatment, and the use of inductive biases for robust, informative modeling.

Machine Learning · Statistical Modeling · Data Analysis · Model Interpretability · Uncertainty Quantification · Research and Development · Career Development · Writing and Documentation

The hottest Substack posts of Mindful Modeler

And their main takeaways
159 implied HN points · 04 Oct 22
  1. Supervised learning can go beyond prediction to offer uncertainty quantification, causal effect estimation, and interpretability using model-agnostic tools.
  2. Uncertainty quantification with conformal prediction can turn 'weak' uncertainty scores into rigorous prediction intervals for machine learning models (see the sketch after this list).
  3. Double machine learning uses supervised models to correct for confounding bias when estimating causal effects.
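
To make the conformal prediction point concrete, here is a minimal split conformal sketch for regression; the model, the data split, and the 90% coverage level are illustrative assumptions, not the post's exact setup:

```python
# A minimal split conformal prediction sketch for regression.
# The model, the 50/25/25 split, and alpha=0.1 are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity score on the calibration set: absolute residual.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected quantile of the scores gives the interval half-width.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals with ~90% marginal coverage.
preds = model.predict(X_test)
lower, upper = preds - q, preds + q
print(f"Empirical coverage: {np.mean((y_test >= lower) & (y_test <= upper)):.3f}")
```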
139 implied HN points · 01 Nov 22
  1. Interpretation can be true to the model or true to the data, depending on whether you want to audit the model or gain insights.
  2. For auditing a model, the interpretation must be true to the model, even if that means evaluating it on feature combinations that break the correlations present in the data.
  3. When the goal is insight into the data, the interpretation should be true to the data, using methods that respect feature correlations and avoid extrapolating to unrealistic data points (see the sketch after this list).
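
A minimal sketch of why the distinction matters, assuming a toy dataset with two nearly duplicate features; the data and model are illustrative, not from the post:

```python
# Two nearly duplicate features: marginal permutation importance is
# "true to the model" and breaks the x1-x2 correlation, so the model
# is evaluated on unrealistic data points.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)  # x2 is almost a copy of x1
y = x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])

model = RandomForestRegressor(random_state=0).fit(X, y)

# Marginal permutation shuffles each feature independently of the other,
# producing (x1, x2) combinations that never occur in the real data.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Marginal permutation importances:", result.importances_mean.round(3))
# A "true to the data" alternative would permute x2 only within groups of
# similar x1 values (conditional permutation), preserving the correlation.
```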
59 implied HN points · 14 Mar 23
  1. Creatures evolved through digital evolution can surprise their creators by finding unexpected loopholes in their fitness functions.
  2. Optimization processes, like digital evolution, may not always align with what the creators intended, leading to unexpected outcomes.
  3. Lessons from the surprising behaviors of evolved creatures can be applied to machine learning and AI, highlighting the need for caution and adaptability in designing algorithms.
59 implied HN points · 14 Feb 23
  1. Conformal prediction can be combined with any uncertainty quantification method you already use, making it versatile and not restrictive.
  2. Conformal prediction is model-agnostic, meaning you can implement it without changing your existing models or user interface.
  3. A key advantage of conformal prediction is its coverage guarantee: the prediction sets or intervals contain the true outcome with a user-specified probability (see the classification sketch after this list).
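
The model-agnostic point can be illustrated by wrapping conformal prediction sets around an unchanged, off-the-shelf classifier; the dataset, model, and coverage level below are illustrative assumptions:

```python
# A minimal sketch of conformal prediction sets for classification,
# wrapped around an arbitrary probabilistic classifier without modifying it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_classes=3, n_informative=5, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # any model works

# Nonconformity score: one minus the predicted probability of the true class.
cal_probs = clf.predict_proba(X_cal)
scores = 1 - cal_probs[np.arange(len(y_cal)), y_cal]

alpha = 0.1
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set: every class whose probability clears the threshold.
test_probs = clf.predict_proba(X_test)
prediction_sets = test_probs >= 1 - qhat
coverage = prediction_sets[np.arange(len(y_test)), y_test].mean()
print(f"Empirical coverage: {coverage:.3f}")
```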
59 implied HN points · 06 Dec 22
  1. The concept of creating fictive datasets using GPT-3 for testing ML models and educational purposes is explored in 'The Infinite Data Hallucinator'.
  2. The 'Infinite Data Hallucinator' is a Jupyter notebook script that calls the OpenAI API and assembles the output into a pandas DataFrame, generating datasets from a user-provided prompt (a rough sketch follows this list).
  3. While the generated datasets may have superficial coherence, they are not entirely realistic, and there are limitations due to token limits when creating larger datasets.
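
In the same spirit, here is a rough sketch using the current OpenAI Python client rather than the GPT-3 completions API the post used; the model name, prompt, and column names are illustrative assumptions:

```python
# Prompt a language model for a fictive CSV dataset and load it into pandas.
import io
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Generate a fictive CSV dataset with 20 rows and the columns "
    "age, income, churned. Output only the CSV, with a header row."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
csv_text = response.choices[0].message.content

# Parse the generated text; hallucinated output may need manual cleanup,
# and token limits cap how large a dataset one call can produce.
df = pd.read_csv(io.StringIO(csv_text))
print(df.head())
```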
59 implied HN points · 15 Nov 22
  1. Interpretation methods like SHAP, LIME, and permutation importance can sometimes disagree, but it doesn't always indicate a problem.
  2. There are two kinds of disagreement: methods that should agree but don't, and methods that target different questions and therefore need not agree.
  3. To handle disagreements, quantify robustness by recomputing each method several times (see the sketch after this list), understand what each method actually quantifies, or pick the single method that best matches your question.
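
A minimal sketch of the robustness check: recompute one interpretation method under several random seeds and inspect the spread; the dataset and model are illustrative assumptions:

```python
# Recompute permutation importance under different seeds; large standard
# deviations across seeds signal an unstable interpretation.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=1000, n_features=5, noise=5.0, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

runs = np.vstack([
    permutation_importance(model, X, y, n_repeats=10, random_state=seed).importances_mean
    for seed in range(5)
])
print("Mean importance per feature:", runs.mean(axis=0).round(3))
print("Std across seeds:           ", runs.std(axis=0).round(3))
```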