Shrek's Substack

Hey all, I am Michael (nickname Shrek), and I want to share my experience in software, primarily in Healthtech (radiation oncology, specialty pharma, EMR). Topics will range from leadership to low-level technical subjects such as LLMs and cloud infra.

The hottest Substack posts of Shrek's Substack

And their main takeaways
4 HN points 19 Aug 24
  1. The way you ask questions and set the model's temperature can really affect how well AI solves math problems. Clear prompts and specific instructions help improve accuracy (see the sketch after this list).
  2. AI like GPT-4o struggles with big numbers and can make mistakes about half the time when solving linear equations. It does better with smaller numbers.
  3. It's important to be careful when using AI for math, especially in education. Using other tools to double-check results can help avoid mistakes.
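A minimal sketch of the prompting and temperature point above, assuming the `openai` Python SDK and an API key in the environment. The prompts, the 3x + 7 = 19 example, and the exact settings are illustrative, not taken from the original post.

```python
# Sketch: prompt wording and temperature can change how reliably a model does math.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def solve(prompt: str, temperature: float) -> str:
    """Send one math prompt and return the model's raw answer text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Vague prompt at a high temperature: more room for the model to wander.
print(solve("What's x if 3x + 7 = 19?", temperature=1.0))

# Explicit instructions at temperature 0: show the steps, end with a parseable answer.
print(solve(
    "Solve the linear equation 3x + 7 = 19. "
    "Work step by step and finish with a line of the form 'Answer: <number>'.",
    temperature=0.0,
))
```

The second prompt is also easier to double-check automatically, which lines up with the advice to verify AI math with other tools.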
0 implied HN points 15 Jun 23
  1. Using humor in code reviews can help remove ego and make feedback more enjoyable. It's like having a friend point out mistakes in a fun way.
  2. Modernizing outdated code is important. Just like using fresh ingredients in cooking, using current coding practices makes your code better.
  3. Clear names and proper documentation are key. Good code should be as easy to understand as a well-labeled recipe (a small example follows this list).
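To make the "modernizing" and "clear names" points concrete, here is a purely illustrative Python before-and-after; the example code is mine, not from the post.

```python
# Before: dated idioms, cryptic names, no documentation.
def calc(l):
    t = 0
    for i in range(len(l)):
        t = t + l[i]["p"] * l[i]["q"]
    return t

# After: current idioms, descriptive names, and a docstring.
def order_total(line_items: list[dict]) -> float:
    """Return the total cost of an order.

    Each line item is a dict with a unit "price" and a "quantity".
    """
    return sum(item["price"] * item["quantity"] for item in line_items)
```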
0 implied HN points 24 Aug 23
  1. Nx Cloud can speed up tasks for big teams, but small projects might not need it. It's okay to skip it if you're working solo or on smaller tasks.
  2. If you encounter errors using Nx Cloud, switching to local runners is a good solution. Local runners can handle tasks without relying on the cloud.
  3. To remove Nx Cloud from your app, change a setting in the nx.json file and switch to a local runner (a sketch of the change follows this list). You can also uninstall Nx Cloud with a single command.
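A rough sketch of what that nx.json change can look like. The exact keys depend on your Nx version, so treat this as an assumption to check against the docs rather than a drop-in config.

```jsonc
// nx.json — point the default task runner at the local runner instead of Nx Cloud
// (keys follow Nx 15/16-era conventions; verify against your version).
{
  "tasksRunnerOptions": {
    "default": {
      "runner": "nx/tasks-runners/default",
      "options": {
        "cacheableOperations": ["build", "lint", "test"]
      }
    }
  }
}
```

After that, removing the dependency is typically a matter of `npm uninstall nx-cloud` (or the equivalent for your package manager).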
0 implied HN points 18 Apr 23
  1. Training large language models (LLMs) needs powerful hardware, often multiple A100 GPUs with 40GiB of VRAM each. Running them is cheaper than training.
  2. Data types like FP16 and TF32 are crucial for managing model memory and compute. Newer formats trade some precision for numeric range and efficiency.
  3. Smaller models can run on a single machine, but bigger models need a lot of VRAM or multiple systems. Training and merely running a model have very different resource costs (see the back-of-the-envelope sketch after this list).
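A back-of-the-envelope sketch of why running a model is cheaper than training it. The 7B parameter count and the bytes-per-parameter figures are common rules of thumb (FP16 weights for inference; weights, gradients, FP32 master copies, and Adam optimizer states for mixed-precision training), not numbers from the post.

```python
# Rough VRAM math: holding weights for inference vs. full training state.
GIB = 2**30

def memory_gib(n_params: float, bytes_per_param: float) -> float:
    """GiB needed for n_params at a given per-parameter byte cost."""
    return n_params * bytes_per_param / GIB

n_params = 7e9  # a 7B-parameter model, as an example

inference_fp16 = memory_gib(n_params, 2)    # ~13 GiB: fits on one 40 GiB A100
training_mixed = memory_gib(n_params, 16)   # ~104 GiB: weights + grads + optimizer states

print(f"Inference (FP16 weights):   {inference_fp16:.0f} GiB")
print(f"Training (mixed precision): {training_mixed:.0f} GiB -> multiple A100s")
```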