The hottest Academic Integrity Substack posts right now

And their main takeaways
Vinay Prasad's Observations and Thoughts • 148 implied HN points • 08 Feb 25
  1. The NIH has capped the indirect cost rate it pays universities at 15%, down from rates that often exceeded 60%. This means more grant money can go to actual research instead of administrative overhead.
  2. This change will make universities operate differently, encouraging them to reduce unnecessary costs and possibly hold faculty more accountable for their behavior.
  3. Lowering these indirect costs could lead to more funding for research projects. Researchers might actually benefit from this change, as it could increase the number of grants available.
Karlstack • 785 implied HN points • 17 Dec 24
  1. A Harvard professor, Ryan Enos, has been accused of serious data fraud in his research related to Critical Race Theory, which could force him to retract an entire book built on the flawed data.
  2. Enos's work showed irregularities in data, including unjustified deletions and missing information, raising concerns about its integrity. Whistleblowers have played a key role in bringing these issues to light.
  3. There are larger implications as Claudine Gay, the President of Harvard, has been implicated in covering up the misconduct. This situation highlights potential corruption within academic institutions.
Imperfect Information • 157 implied HN points • 24 Jan 24
  1. Plagiarism detection tools are now widespread, and the incentives to uncover copied content are strong.
  2. Different types of plagiarism exist, from accidental use of others' work to theft of novel ideas.
  3. A plagiarism war may produce accusations over minor transgressions while failing to detect serious intellectual misconduct.
imperfect offerings • 119 implied HN points • 07 Aug 23
  1. Generative AI tools may fail to expose users to diverse ideas and perspectives, reinforcing existing biases.
  2. There is a risk that the use of generative AI may not respect human rights and safeguard individual autonomy, especially for children.
  3. It is important for educators to carefully consider the consequences of incorporating generative AI tools in teaching, ensuring fairness, transparency, and accountability.
imperfect offerings • 119 implied HN points • 21 Apr 23
  1. AI tools like language models cannot be credited with authorship in academic publications due to lack of accountability and responsibility for the work.
  2. Universities need to consider the implications of students using AI writing tools and ensure they are transparent, accountable, and responsible for their own use of these systems.
  3. Writing is a social technology that shapes new selves and identities, and universities play a crucial role in shaping what writing is, what it does for individuals, and why it matters.
ailogblog • 59 implied HN points • 07 Dec 23
  1. AI detectors often struggle to reliably differentiate between human and AI-generated writing, leading to errors, such as falsely identifying human-written work as AI-generated.
  2. AI detectors shift responsibility for errors onto instructors and institutions, who carry over habits formed with plagiarism-detection tools; this can lead to overreliance and misplaced judgments.
  3. Educators should reconsider using AI detectors: they present their analysis in misleading forms, have significant flaws, and may be unreliable in practice, causing confusion and potential harm to students.