Sudo Apps

A weekly column analyzing the layers of various software applications and technologies.

Top posts of the year

And their main takeaways
121 HN points 06 May 23
  1. Constantly retraining Large Language Models (LLMs) on new data is impractical, given the sheer volume of information and the privacy concerns involved.
  2. OpenAI's focus on improving LLMs in ways other than increasing model size signals the end of the giant-model era.
  3. Tokens, embeddings, vector storage, and prompting can feed LLMs large amounts of external data for better interpretation and understanding, as sketched below.
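A minimal sketch of that pipeline, in Python. The `embed` function is a toy hashed bag-of-words stand-in for a real embedding model, and the document chunks are invented; only the shape of the pipeline (embed, store vectors, retrieve by similarity, prompt) reflects the takeaway.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Vector storage": embed each chunk once and keep the matrix around.
chunks = [
    "Invoices are due within 30 days of receipt.",
    "Refunds are processed through the billing portal.",
    "Support tickets are triaged by severity.",
]
index = np.stack([embed(c) for c in chunks])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (cosine similarity)."""
    scores = index @ embed(query)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# Prompting: splice the retrieved context into the prompt so the LLM can
# answer from data it was never trained on.
question = "How long do customers have to pay an invoice?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```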
16 implied HN points 22 Dec 23
  1. AI advancements come with risks like misuse and content flooding.
  2. AI automation may displace jobs even as it raises productivity.
  3. Managing AI advancement involves differing perspectives, safety regulations, and government frameworks.
2 HN points 24 May 23
  1. Advances in large language models have opened up new possibilities through chat interfaces.
  2. Experiments with instructing multiple agents show potential for better outcomes in task completion.
  3. A lead-engineer agent can review, guide, and improve the outputs of the engineering agents, as sketched below.
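A minimal sketch of that review loop, assuming the OpenAI Python client as the model backend; the role prompts, the APPROVED convention, and the round limit are illustrative assumptions rather than the post's exact setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; the model name is illustrative

def llm(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def engineer(task: str, feedback: str = "") -> str:
    """An engineering agent drafts, or revises, a solution."""
    prompt = task if not feedback else f"{task}\n\nRevise per this feedback:\n{feedback}"
    return llm("You are an engineering agent. Return only code.", prompt)

def lead_review(task: str, draft: str) -> str:
    """The lead engineer either approves the draft or returns concrete fixes."""
    return llm(
        "You are the lead engineer. Reply APPROVED if the draft solves the "
        "task; otherwise list concrete fixes.",
        f"Task: {task}\n\nDraft:\n{draft}",
    )

def run(task: str, max_rounds: int = 3) -> str:
    draft = engineer(task)
    for _ in range(max_rounds):
        review = lead_review(task, draft)
        if review.strip().startswith("APPROVED"):
            break
        draft = engineer(task, feedback=review)  # the lead guides each revision
    return draft
```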
2 HN points 15 Jun 23
  1. Gorilla LLM is designed to connect large language models with various services and applications through APIs.
  2. LLaMA was chosen as the base model for Gorilla, fine-tuned on API instruction data generated with GPT-4, GPT-3.5, and other models.
  3. Gorilla LLM introduces novel techniques like retriever-aware training and AST sub-tree matching for more accurate inferences; see the sketch below.
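Gorilla uses AST sub-tree matching to check whether a generated API call corresponds to a known reference API. Below is a simplified sketch using Python's standard `ast` module; matching on function name plus arguments is an assumption here, looser than the paper's full sub-tree containment check.

```python
import ast

def call_signature(node: ast.Call) -> tuple:
    """Reduce a call node to (dotted function name, positional args, kwargs)."""
    name = ast.unparse(node.func)
    args = tuple(ast.unparse(a) for a in node.args)
    kwargs = tuple(sorted((kw.arg, ast.unparse(kw.value))
                          for kw in node.keywords if kw.arg))
    return name, args, kwargs

def matches_reference(generated_code: str, reference_call: str) -> bool:
    """True if any call in the generated code has the same signature as the
    reference API call -- a simplified stand-in for sub-tree matching."""
    ref = call_signature(ast.parse(reference_call, mode="eval").body)
    return any(
        isinstance(node, ast.Call) and call_signature(node) == ref
        for node in ast.walk(ast.parse(generated_code))
    )

print(matches_reference(
    "model = torch.hub.load('pytorch/vision', 'resnet50', pretrained=True)",
    "torch.hub.load('pytorch/vision', 'resnet50', pretrained=True)",
))  # True
```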
2 HN points 22 Apr 23
  1. Auto-GPT layers several techniques on top of GPT to let it complete tasks autonomously through executable commands.
  2. Auto-GPT addresses GPT's lack of explicit memory by using external memory modules such as embeddings and vector storage.
  3. Interpreting responses in a fixed JSON format and executing the named commands lets Auto-GPT interact with the real world and complete tasks, as sketched below.
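A stripped-down sketch of that dispatch step. The JSON shape mirrors Auto-GPT's fixed `{"command": {"name": ..., "args": {...}}}` reply format; the two commands in the table are illustrative stand-ins for its real command registry.

```python
import json
from pathlib import Path

# Illustrative command table; Auto-GPT's real registry is much larger.
COMMANDS = {
    "write_file": lambda path, text: f"wrote {Path(path).write_text(text)} chars",
    "read_file": lambda path: Path(path).read_text(),
}

def execute(model_output: str) -> str:
    """Parse the model's fixed-format JSON reply and run the named command;
    the string result is fed back into the next prompt, closing the loop."""
    reply = json.loads(model_output)
    cmd = reply["command"]
    handler = COMMANDS.get(cmd["name"])
    if handler is None:
        return f"Unknown command: {cmd['name']}"
    return str(handler(**cmd["args"]))

print(execute('{"command": {"name": "write_file",'
              ' "args": {"path": "note.txt", "text": "hello"}}}'))
# -> wrote 5 chars
```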