The hottest Causal Inference Substack posts right now

And their main takeaways
Scott's Substack · 117 implied HN points · 31 Jan 24
  1. No anticipation means the treatment group's baseline-period outcome equals its untreated potential outcome Y(0), not Y(1).
  2. The difference-in-differences coefficient equals the post-period ATT for the treatment group, plus parallel-trends bias, minus any ATT contaminating a misspecified baseline period.
  3. Difference-in-differences always requires three assumptions to point identify the ATT: SUTVA, parallel trends, and no anticipation.
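These assumptions can be illustrated with a minimal 2x2 simulation (hypothetical data, not from the post): when parallel trends and no anticipation hold, the simple DiD contrast recovers the ATT.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)   # 1 = treatment group
post = rng.integers(0, 2, n)    # 1 = post period
att = 2.0                       # true average treatment effect on the treated

# Parallel trends: both groups share the same time trend (+1.0 in post).
# No anticipation: the baseline period reflects Y(0) for everyone.
y0 = 1.0 * group + 1.0 * post + rng.normal(0, 1, n)  # untreated outcome Y(0)
y = y0 + att * group * post                          # treatment adds the ATT only post

# DiD: (treated post - treated pre) - (control post - control pre)
did = (y[(group == 1) & (post == 1)].mean() - y[(group == 1) & (post == 0)].mean()) \
    - (y[(group == 0) & (post == 1)].mean() - y[(group == 0) & (post == 0)].mean())
print(round(did, 2))  # close to 2.0, the true ATT
```

If anticipation contaminated the baseline (e.g. `y0` already included part of the effect pre-treatment), the same contrast would under-recover the ATT, matching takeaway 2.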
Scott's Substack · 117 implied HN points · 24 Jan 24
  1. The workshop offers a discounted price of $95 for non-tenure-track professors and those with high teaching loads.
  2. The workshop covers topics like the potential outcomes model, unconfoundedness, and instrumental variables.
  3. The teaching style focuses on comprehension, confidence, and competency in applying causal inference methods.
  2. Workshop covers topics like potential outcomes model, unconfoundedness, and instrumental variables
  3. Teaching style focuses on comprehension, confidence, and competency in applying causal inference methods
Scott's Substack · 39 implied HN points · 05 Feb 24
  1. Triple difference design can be used with continuous treatment by defining the parameters based on dosage levels.
  2. When treatment is continuous, the target parameter shifts from average treatment effect to average causal response function.
  3. Continuous treatments require careful definition of parameters to compare different dosages along a treatment curve.
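A minimal sketch of the continuous-treatment idea (hypothetical data, assuming a linear dose-response curve and randomly assigned dosage): the target becomes the slope of the dose-response function rather than a single treated-vs-untreated contrast.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Continuous treatment: a dosage d instead of a binary indicator.
d = rng.uniform(0, 10, n)

# Assumed true dose-response curve: Y(d) = 0.5 * d (constant marginal effect).
y = 0.5 * d + rng.normal(0, 1, n)

# With randomly assigned dosage, the regression slope of y on d estimates the
# average causal response (the derivative of the dose-response function).
slope = np.polyfit(d, y, 1)[0]
print(round(slope, 2))  # close to 0.5

# Comparing two points along the treatment curve, e.g. dosage 8 vs dosage 2:
effect_8_vs_2 = slope * (8 - 2)  # roughly 3.0 under the linear model
```

With a nonlinear curve the single slope would average marginal effects over the dosage distribution, which is why the parameters must be defined carefully for the specific dosage comparison of interest.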
Data Science Daily · 0 implied HN points · 02 Mar 23
  1. Deep learning can outperform linear regression for causal inference in tabular data.
  2. The debate between deep learning and traditional models like XGBoost for tabular data remains unsettled, with credible perspectives on both sides.
  3. The study suggests that deep learning models like CNN, DNN, and CNN-LSTM may offer better performance in certain scenarios.