Asimov’s Addendum

AI is a product, and it's for sale. Thinking through the risks arising from AI's commercialization, and developing consensus on best practices for beneficial AI deployment. By Tim O'Reilly and Ilan Strauss.

The hottest Substack posts of Asimov’s Addendum

And their main takeaways
79 implied HN points 16 Aug 24
  1. AI regulation should begin with clear and detailed disclosures, just like accounting standards did after the stock market crash of 1929. This will help everyone understand how AI is being developed and used.
  2. Private companies should agree on best practices and measurements for AI, similar to how accountants developed standardized practices over time. This will create a shared understanding of what works and what doesn’t.
  3. The AI auditing community needs to come together to create standards for oversight. Just like in accounting, having a unified approach will help ensure trust and accuracy in AI practices.
79 implied HN points 31 Jul 24
  1. Asimov's Three Laws of Robotics were a starting point for thinking about how robots should behave. They aimed to ensure robots protect humans, obey commands, and keep themselves safe.
  2. A new approach by Stuart Russell suggests that robots should focus on understanding and promoting human values, but they must be humble and recognize that they don’t know everything about our values.
  3. The development of AI must consider not just how well machines achieve goals, but also how corporate interests can affect their design and use. Proper regulation and transparency are needed to ensure AI is safe and beneficial for everyone.
19 implied HN points 19 Aug 24
  1. Google has been found to have abused its power over search engine results to limit competition, giving it an unfair advantage that kept other companies from competing effectively.
  2. Algorithms that start off as amazing tools can end up being exploited for corporate gain. The way Google uses its algorithms looks like magic at first but turns out to serve its own business interests.
  3. To foster fair competition in the tech industry, we need more transparency and rules about how algorithms work. This could lead to better choices for users and support new companies to grow.
2 HN points 04 Sep 24
  1. AI safety discussions should focus not only on stopping outside threats but also on the risks posed by the owners of AI systems, who can cause harm simply by pursuing their business goals.
  2. There is a need to recognize and learn from past technology failures as these patterns might repeat with AI. We should not overlook potential issues that arise from how AI is managed and used.
  3. It's important for AI developers to share what they are measuring and managing in terms of safety. This information can help shape regulations and improve safety practices as AI becomes more integrated into business models.
0 implied HN points 21 Aug 24
  1. Experts suggest that instead of a single AI regulator, existing agencies like the FDA and SEC should gain expertise in AI to manage its use effectively, just like they do with safety in other fields.
  2. There's an ongoing discussion about how AI companies are navigating acquisitions and regulatory concerns, reminding us that governance is ongoing and complex, not a one-time fix.
  3. It's important to recognize that AI development is still in its early stages, and new methods like Reinforcement Learning from Human Feedback may not lead to breakthroughs as significant as those seen in past successes like AlphaGo.