2084

2084 explores the intersection of technology, ethics, and the future, with a focus on AI, its applications, and its philosophical implications. It delves into AI advancement, music and image generation, business strategies, academia, future transportation, and power sources. The Substack analyzes current trends and theorizes about future developments in AI and societal structures.

Topics: Artificial Intelligence · Technology and Future Trends · Music and Image Generation · AI Business and Economics · Academia and Industry Interface · Ethics and Philosophy · Transportation Innovations · Power and Energy

The hottest Substack posts of 2084

And their main takeaways
19 implied HN points 08 Dec 22
  1. A model called MinD-Vis can reconstruct the images a person is looking at by decoding their brain activity.
  2. Brain-wave-to-speech models and generative models like ChatGPT are pushing the boundaries of AI capabilities.
  3. The Gato model from DeepMind shows that general-purpose AI is already making strides, with one network performing various tasks without retraining.
19 implied HN points 07 Dec 22
  1. Zach DeWitt started successful businesses after Yale, emphasizing starting multiple companies in your 20s.
  2. Yale should foster entrepreneurship by connecting with successful alumni and promoting risk-taking.
  3. Yale students should leverage alumni networks for mentorship and explore areas of interest before starting their own ventures.
19 implied HN points 04 Dec 22
  1. AI models like OpenAI's Codex are advancing rapidly in generating code for game development (a minimal prompt sketch follows this list).
  2. Tools like Meta's AudioGen are making it easier to generate sound effects for games.
  3. Platforms such as Aiva offer options for symbolic music generation, catering to various music styles.
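As a rough illustration of the code-generation point above, here is a minimal sketch of prompting a Codex-family model through the OpenAI Python client as it existed around the time of the post; the model name, prompt, and settings are assumptions for illustration, not details taken from the post.

```python
# Sketch: prompting a code-generation model via the OpenAI Python client (v0.x-era API).
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# "code-davinci-002" was one publicly available Codex-family model at the time.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="code-davinci-002",
    prompt="# Python\n# Write a function that moves a sprite toward a target position each frame.\n",
    max_tokens=150,
    temperature=0,
)
print(response.choices[0].text)
```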
19 implied HN points 30 Nov 22
  1. Working late is common, especially for tasks requiring prolonged focus and dedication.
  2. AI models are advancing rapidly in generating 3D models, animations, and enhancing game visuals.
  3. Future games might be entirely procedurally generated by AI, offering endless possibilities and realism.
19 implied HN points 27 Nov 22
  1. Speech-to-text models have improved significantly in the last few years, with many open-source options available (a usage sketch follows this list).
  2. Non-autoregressive models can provide a significant speedup in speech transcription.
  3. Commercial tools like DeepGram offer fast and accurate transcription services, filling a need in the market.
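The open-source options mentioned above can be tried with very little code; below is a minimal sketch using OpenAI's Whisper through the Hugging Face transformers pipeline. The checkpoint name and audio path are illustrative assumptions, and ffmpeg is assumed to be installed for audio decoding.

```python
# Sketch: open-source speech-to-text with a Whisper checkpoint via Hugging Face transformers.
# Assumes `pip install transformers torch` plus ffmpeg, and a local file "meeting.wav" (hypothetical path).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

result = asr("meeting.wav", chunk_length_s=30)  # chunking lets it handle longer recordings
print(result["text"])                           # plain-text transcript
```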
19 implied HN points 25 Nov 22
  1. New transistor architecture RibbonFET allows for continuous sizing, optimizing power and efficiency.
  2. Stacked transistors show potential for doubling speed and more on a single wafer, advancing Moore's Law.
  3. Developments in AI chips and neural network accelerators point towards smaller, more efficient chips for complex programs.
19 implied HN points 24 Nov 22
  1. Sales is crucial for businesses to survive and thrive.
  2. Sales is a social skill that involves convincing others and building relationships.
  3. Sales may be one of the last professions to be fully automated due to its human-centric nature.
19 implied HN points 17 Nov 22
  1. In the future, cars will become more efficient and cheaper, shifting towards aerodynamic electric models.
  2. AI technology like Stable Diffusion and GPT-3 is advancing rapidly, potentially amplifying human creativity and productivity.
  3. Software development is evolving, with tools abstracting away tedious work and enabling dynamic, personalized content.
19 implied HN points 17 Nov 22
  1. 2084.substack.com is coming soon.
  2. The newsletter is about thoughts on the past and future.
  3. You can subscribe to 2084 for updates.
0 implied HN points 16 Dec 22
  1. OpenAI is projected to reach a billion dollars in revenue by 2024.
  2. OpenAI releases its research publicly, unlike Google.
  3. OpenAI aims to create more general AI by the year 2084.
0 implied HN points 17 Dec 22
  1. Music-generation AI, such as Riffusion, is improving.
  2. Large AI models can perform various tasks they were not trained for.
  3. Future AI may function as a multi-purpose 'super mind'.
0 implied HN points 14 Apr 23
  1. OpenAI has developed Consistency Models for instant image generation; they are faster than diffusion models and have potential for real-time movie generation.
  2. StackLLaMA is a variant of Meta's LLaMA fine-tuned on Stack Exchange data to answer questions in the format of a Stack Exchange post.
  3. DeepSpeed offers a comprehensive solution for fine-tuning models, and Galileo AI can automatically generate designs based on text input.
0 implied HN points 25 Jan 23
  1. State space models for language generation scale linearly with input length, compared to transformers' quadratic scaling (illustrated in the sketch after this list).
  2. State space models are not as effective as transformers, but can be interesting to read about for their theoretical efficiency.
  3. Consider exploring state space models for potential future applications.
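To make the scaling claim concrete, here is a toy discrete state space recurrence: each timestep does a constant amount of work, so a length-n sequence costs O(n), whereas self-attention compares every pair of tokens and costs O(n²). The matrices and dimensions below are made up for the demonstration and are not taken from any particular state space model paper.

```python
# Toy linear-time state space layer: x_{t+1} = A x_t + B u_t,  y_t = C x_t.
# One fixed-cost update per token -> O(n) for a length-n sequence,
# versus the O(n^2) pairwise comparisons of transformer self-attention.
import numpy as np

rng = np.random.default_rng(0)
state_dim, input_dim, seq_len = 16, 8, 1000    # arbitrary sizes for the demo

A = rng.normal(scale=0.1, size=(state_dim, state_dim))
B = rng.normal(size=(state_dim, input_dim))
C = rng.normal(size=(input_dim, state_dim))

u = rng.normal(size=(seq_len, input_dim))      # input sequence
x = np.zeros(state_dim)                        # hidden state
outputs = []
for u_t in u:                                  # constant work per timestep
    x = A @ x + B @ u_t
    outputs.append(C @ x)

y = np.stack(outputs)
print(y.shape)                                 # (1000, 8)
```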
0 implied HN points 27 Dec 22
  1. You can modify videos using text-to-image models to improve consistency across frames.
  2. Large models like Gato can handle diverse tasks accurately, showcasing their generalizability.
  3. Graph neural networks can predict weather by performing inference on large datasets, hinting at future applications in music production.
0 implied HN points 25 Dec 22
  1. Google AI released a new system called RT-1, a robot transformer for real-world control.
  2. RT-1 is a text-to-robot command system that can make decisions 3 times a second.
  3. RT-1 was trained on a diverse dataset and shows potential for generalized adaptation across tasks in robotics.
0 implied HN points 22 Dec 22
  1. Hyperboloid model defines the distance between points on a sheet of a hyperboloid.
  2. The distance is cheap to compute and useful as a neural network loss term (see the sketch after this list).
  3. Datasets that lie on a hyperbolic manifold can benefit from hyperbolic neural networks.
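For reference, the distance in question: in the hyperboloid (Lorentz) model, two points x and y on the upper sheet are separated by d(x, y) = arccosh(-⟨x, y⟩_L), where ⟨x, y⟩_L is the Minkowski inner product. The sketch below, including the lifting of Euclidean vectors onto the sheet, is an illustrative assumption rather than code from the post.

```python
# Hyperboloid (Lorentz) model distance: d(x, y) = arccosh(-<x, y>_L),
# with Minkowski inner product <x, y>_L = -x0*y0 + x1*y1 + ... + xd*yd.
# Points must lie on the upper sheet: <x, x>_L = -1 and x0 > 0.
import numpy as np

def minkowski_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lorentz_distance(x, y):
    # Clamp the argument to >= 1 to guard against floating-point error before arccosh.
    return np.arccosh(np.maximum(-minkowski_inner(x, y), 1.0))

def lift_to_hyperboloid(v):
    # Map a Euclidean vector v onto the upper sheet by solving for the time-like coordinate.
    x0 = np.sqrt(1.0 + np.dot(v, v))
    return np.concatenate(([x0], v))

a = lift_to_hyperboloid(np.array([0.3, -0.2]))
b = lift_to_hyperboloid(np.array([1.0, 0.5]))
print(lorentz_distance(a, b))   # scalar distance, usable as a term in a loss function
```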
0 implied HN points 20 Dec 22
  1. 2084: Finalization is a post on supporting a YouTube channel.
  2. The YouTube channel features deep dives on new AI papers.
  3. The channel provides easily understandable summaries on AI advancements.
0 implied HN points 15 Dec 22
  1. Model cards could make state-of-the-art AI accessible by plugging in a card, speeding up programs and making AI available offline.
  2. Pre-trained AI models can save time and resources, but have been limited to research environments; model cards aim to make them accessible to anyone with a computer.
  3. Model cards could democratize AI use, making it easier to create AI-powered projects and enabling a wider audience to leverage AI capabilities.
0 implied HN points 17 Jan 23
  1. Proposes a ChatGPT-like model that works like Wikipedia, with community moderation keeping its information accurate.
  2. A hierarchy of moderators and volunteer contributors, similar to Wikipedia's, could continually improve the accuracy of an AI language model.
  3. Envisions an advanced AI language model in 2084 that is consistently checked for accuracy and expanded with accurate information.
0 implied HN points 21 Jan 23
  1. Consider using techniques for annotated generation to create computer-generated PowerPoint slides with some human control.
  2. Scrape PowerPoints from SlideShare to create an annotated dataset for training models.
  3. Utilize OCR systems and image captioning models to automate the generation of text and graphics on PowerPoint slides.
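One hedged way the OCR-plus-captioning step above could look is sketched below, combining pytesseract for the slide text with an off-the-shelf Hugging Face image-captioning model for the graphics. The specific model, the slide image path, and the glue code are assumptions for illustration; the post does not prescribe these tools.

```python
# Sketch: turn one exported slide image into an annotated training example (text + caption).
# Assumes `pip install pytesseract pillow transformers torch`, a Tesseract binary on PATH,
# and a local slide export "slide_01.png" (hypothetical path).
from PIL import Image
import pytesseract
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

image = Image.open("slide_01.png")
ocr_text = pytesseract.image_to_string(image)        # raw text found on the slide
caption = captioner(image)[0]["generated_text"]      # one-line description of the visuals

annotation = {"text": ocr_text.strip(), "caption": caption}
print(annotation)                                    # one example for the annotated dataset
```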
0 implied HN points 24 Jan 23
  1. IBM Research demonstrated a 2 nm chip process in 2021, projected to deliver roughly 45% higher performance or 75% lower energy use than today's 7 nm chips.
  2. Significant work is being done on 1nm transistors, and there's a focus on future AI hardware and software.
  3. A paper discusses detecting hallucinations in machine learning models, hinting at a potential solution for model accuracy issues.
0 implied HN points 26 Jan 23
  1. KAT (Knowledge Augmented Transformer) uses both commonsense knowledge and explicitly supplied knowledge to answer questions.
  2. KAT keeps a clear split between the question and the knowledge, giving a cleaner interface.
  3. It uses a frozen GPT-3 model for the commonsense input and encoder layers for the explicit knowledge.
0 implied HN points 30 Jan 23
  1. Engineers often struggle with making eye contact due to their intense focus on their work.
  2. Nvidia has developed AI technology in their recording software to create the illusion of eye contact.
  3. Exciting advancements are being made in using text-to-video models for generating dynamic scenes.
0 implied HN points 03 Feb 23
  1. The new robot dog Raibo can run at 10 km/h on various surfaces.
  2. Researchers developed a physics simulation for robot locomotion on different terrains.
  3. There is potential to merge Robotics Transformer 1 (RT-1) with surface-adapting techniques for advanced robot capabilities.
0 implied HN points 05 Feb 23
  1. AudioLDM is a model that generates audio from text descriptions.
  2. AudioLDM uses a latent diffusion model to create audio waveforms.
  3. The accessibility of AI is showcased by the fact that AudioLDM was trained on a single GPU.
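For a sense of how accessible this is in practice, here is a minimal text-to-audio sketch using the AudioLDM pipeline from the diffusers library; the checkpoint name, prompt, and generation settings are assumptions for illustration rather than details from the post.

```python
# Sketch: text-to-audio generation with AudioLDM via Hugging Face diffusers.
# Assumes `pip install diffusers transformers scipy torch` and internet access to download the checkpoint.
import scipy.io.wavfile
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2")

prompt = "rain falling on a tin roof, distant thunder"
audio = pipe(prompt, num_inference_steps=50, audio_length_in_s=5.0).audios[0]

# AudioLDM produces 16 kHz waveforms.
scipy.io.wavfile.write("rain.wav", rate=16000, data=audio)
```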