followfox.ai’s Newsletter

Followfox.ai's Newsletter explores small AI models and their development and optimization for edge computing. It covers technologies such as Stable Diffusion, LLaMa, and the Vodka models, offering guidance on fine-tuning, learning rates, and running AI on limited hardware. The blog advocates for open-source development, reflects on AI's potential impact on industries, and experiments with machine learning techniques.

AI Model Development and Optimization · Open-Source AI Technologies · Machine Learning Techniques · AI in Edge Computing · Industry Impacts of AI

The hottest Substack posts of followfox.ai’s Newsletter

And their main takeaways
176 implied HN points 15 Jun 23
  1. The post discusses getting started with LoRAs and creating a photorealistic LoRA for Vodka models.
  2. It includes steps like downloading and using a LoRA, training the first LoRA, and finally fine-tuning a custom LoRA for photorealistic results.
  3. The process involves using specific tools, datasets, and parameters to train LoRAs, and explores possibilities for creating high-quality, realistic images.
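The post's pipeline uses Stable Diffusion tooling; as a hedged, minimal numeric sketch of the idea behind LoRA itself (not their training setup), the adapted weight is the frozen matrix plus a scaled low-rank product, W' = W + (alpha/r)·B·A, where only the small B and A matrices are trained:

```python
# Minimal sketch of the LoRA update rule: instead of fine-tuning a full
# weight matrix W (d_out x d_in), train two small matrices B (d_out x r)
# and A (r x d_in) with r << min(d_out, d_in), and apply
# W' = W + (alpha / r) * B @ A. Values below are illustrative only.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

d_out, d_in, r, alpha = 4, 4, 1, 2.0       # rank-1 update for illustration
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]
B = [[0.5], [0.0], [0.0], [0.0]]           # the only trained parameters
A = [[0.0, 1.0, 0.0, 0.0]]

delta = matmul(B, A)                       # full-size delta from r*(d_in+d_out) params
W_adapted = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(d_in)]
             for i in range(d_out)]
print(W_adapted[0])                        # only the first row picked up the update
```

The point of the decomposition is storage and training cost: a LoRA file only ships B, A, and the scaling, which is why the downloaded LoRAs in the post are so much smaller than the base model.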
157 implied HN points 13 Mar 23
  1. Estimate the minimum and maximum learning rate values by observing when the loss decreases and increases during training.
  2. Choosing learning rates within the estimated range can optimize model training.
  3. Validating learning rate ranges and fine-tuning with different datasets can improve model flexibility and accuracy.
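The post applies this range test to Stable Diffusion fine-tuning; the procedure itself can be sketched on a toy problem. This is an illustration of the idea under simplified assumptions (a 1-D quadratic loss, a fixed number of steps per candidate), not the post's actual setup:

```python
# Toy learning-rate range test: for each candidate LR, take a few
# gradient steps on f(w) = (w - 3)^2 and keep the LRs for which the
# loss actually decreased. The largest surviving LR approximates the
# upper end of the usable range; above it the loss stalls or rises.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

def try_lr(lr, w0=0.0, steps=5):
    """Final loss after a few gradient steps at this learning rate."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return loss(w)

candidates = [10 ** e for e in range(-4, 1)]   # 1e-4 .. 1e0
start = loss(0.0)
usable = [lr for lr in candidates if try_lr(lr) < start]
print("usable LRs:", usable)
```

On this toy loss, every candidate below 1.0 reduces the loss, while 1.0 oscillates without improving — the same decrease-then-blow-up signature the post uses to bracket the minimum and maximum learning rates.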
157 implied HN points 10 Apr 23
  1. Consider exploring ComfyUI as an alternative to Automatic1111 for Stable Diffusion.
  2. Installing ComfyUI on WSL2 involves setting up WSL2, installing CUDA, Conda, and git, cloning the repo, and running tests.
  3. After installation, experiment with different modules, compare outputs with Automatic1111, explore examples in the repo, and share findings.
137 implied HN points 20 Apr 23
  1. Local LLMs are not as advanced as ChatGPT but show potential for various applications.
  2. LLaMa models from Meta (Facebook) are licensed for non-commercial research use and perform well for their size.
  3. GPTQ quantization technique enables running LLaMa on old GPUs by compressing model weights and maintaining speed.
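GPTQ itself uses approximate second-order information to decide how to round each weight; the sketch below shows only the storage idea it shares with simpler round-to-nearest schemes — keep weights as 4-bit signed integers plus a per-row float scale, and dequantize on the fly. The values are illustrative, not from the post:

```python
# Round-to-nearest 4-bit quantization of one weight row with a per-row
# scale. This illustrates why quantized LLaMa fits on old GPUs: each
# weight shrinks from 16/32 bits to 4, at the cost of a small
# reconstruction error bounded by half the scale.

def quantize_row(weights, bits=4):
    qmax = 2 ** (bits - 1) - 1                  # 7 for signed 4-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_row(q, scale):
    return [x * scale for x in q]

row = [0.21, -0.07, 0.0, 0.14, -0.28]
q, scale = quantize_row(row)
approx = dequantize_row(q, scale)
err = max(abs(a - b) for a, b in zip(row, approx))
print(q, scale, err)
```

GPTQ improves on this by compensating for each rounding decision using the Hessian of the layer's reconstruction error, which is what lets it reach 3-4 bits with little quality loss.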
137 implied HN points 16 Mar 23
  1. Innovation often comes from unexpected intersections of technologies and fields, like how music sharing led to cryptocurrencies.
  2. Technological progress is a complex web of interactions with surprises and unintended consequences.
  3. The future of AI holds unpredictable surprises and revolutionary advancements, much like the evolution from music sharing to cryptocurrencies.
117 implied HN points 03 Jun 23
  1. Open source software has become a foundational layer of innovation and is prevalent in tech stacks globally.
  2. The interest in open source stems from its ease of debugging, its capacity to foster innovation, and its zero cost.
  3. The evolution of tech industries, like AI, shows a progression towards open source to drive innovation and accessibility.
98 implied HN points 21 Jun 23
  1. D-Adaptation method automates setting learning rate, aiming for optimal convergence in machine learning.
  2. Implementing D-Adaptation can consume more VRAM and result in slower training speed compared to other optimizers.
  3. Initial results show D-Adaptation performing comparably to hand-picked parameters in generating high-quality models.
117 implied HN points 18 May 23
  1. Vodka V2 was released with an updated dataset and a marginally better model compared to V1.
  2. The key changes in V2 were a better dataset, a larger data volume, and more thorough data cleaning.
  3. The V2 training protocol used a lower learning rate and enhanced data cleaning to achieve smoother training and better model performance.
117 implied HN points 05 May 23
  1. The latest version of AUTOMATIC1111 includes torch 2.0.0 and ControlNet 1.1, showing the repo is active and relevant.
  2. You can choose between a prebuilt FollowFox.AI WSL distribution for an easy setup or a manual install on WSL2 with steps like setting up CUDA and Conda.
  3. Installing ControlNet 1.1 is essential for the new features and options, and requires downloading the .pth model files for use in the updated Auto1111.
98 implied HN points 27 May 23
  1. An automated workflow using Auto1111's API can save time when generating XYZ comparison grids.
  2. The process involves creating a CSV file with parameters for each grid and using a script to feed these parameters to Auto1111 through the API.
  3. While this automated workflow saves time, it does not allow for immediate review and adjustment after each grid generation.
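The workflow above can be sketched with the standard library. The `/sdapi/v1/txt2img` endpoint and payload fields (`prompt`, `steps`, `cfg_scale`, `sampler_name`) come from Auto1111's API when launched with `--api`, but the CSV column names and script structure here are assumptions, not the post's actual script:

```python
# Sketch of a CSV-driven Auto1111 batch: read one row per grid cell,
# map it to a txt2img request body, and POST it to a local instance.
import csv
import io
import json
import urllib.request

API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # local Auto1111 with --api

def build_payload(row):
    """Map one CSV row to a txt2img request body.
    The column names are assumptions, not the post's actual schema."""
    return {
        "prompt": row["prompt"],
        "steps": int(row["steps"]),
        "cfg_scale": float(row["cfg_scale"]),
        "sampler_name": row["sampler"],
    }

def submit(payload):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)      # response holds base64 images under "images"

# One row per grid cell; a real run would loop submit() over all rows.
sample_csv = "prompt,steps,cfg_scale,sampler\na photo of a fox,20,7.0,Euler a\n"
rows = list(csv.DictReader(io.StringIO(sample_csv)))
payloads = [build_payload(r) for r in rows]
print(payloads[0])
```

Because the script fires requests unattended, it trades the interactive review step for throughput — exactly the limitation the third takeaway notes.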