Deploy Securely

Deploy Securely focuses on managing cybersecurity risks at the intersection of artificial intelligence and software security. It offers guidance on AI security, data protection, risk management, legal considerations regarding AI, and strategies for using AI tools securely. The content targets CISOs, business leaders, and developers involved in AI deployments.

Topics: Artificial Intelligence Governance, Cybersecurity Risk Management, Legal and Compliance Issues in AI, Data Protection and Privacy, Strategies for Secure AI Deployment, Intellectual Property in AI, Investing in Cybersecurity, Regulatory Frameworks for AI and Cybersecurity

The hottest Substack posts of Deploy Securely

And their main takeaways
216 implied HN points 10 Jan 24
  1. Block major generative AI tools from scraping your website by adding specific directives to your robots.txt file.
  2. Consider modifying your site's terms and conditions to prevent undesired activities like scraping by AI tools.
  3. Blocking AI tools may impact your search and social media rankings, so find a balance between cybersecurity and potential repercussions.
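The robots.txt approach in the first takeaway can be sketched as below. The user-agent tokens shown (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl) are common examples, but the list changes over time, so verify current tokens against each vendor's published crawler documentation. Note that robots.txt is advisory: compliant crawlers honor it, but nothing enforces it.

```
# Disallow common generative-AI crawlers from the whole site.
# User-agent tokens change over time; check vendor documentation.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```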
39 implied HN points 24 Jan 24
  1. Microsoft 365 Copilot provides detailed data residency and retention controls favored by enterprises in the Microsoft 365 ecosystem.
  2. Be cautious of insider threats with Copilot, since it grants users broad access to organizational data and can surface content in ways that lead to inadvertent policy violations.
  3. Consider the complexities of Copilot's retention policies, especially in relation to existing settings and the use of Bing for web searches.
157 implied HN points 08 Aug 23
  1. Zoom updated its terms earlier this year to permit training AI models on customer content.
  2. Zoom clarified that it won't use audio, video, or chat content for AI training without opt-in.
  3. Be cautious about opting into Zoom's generative AI features to avoid your content becoming part of their AI models.
157 implied HN points 21 Jul 23
  1. The fear of repercussions from authorities like prosecutors and regulatory agencies is often greater than that from hackers.
  2. Cybersecurity professionals and their teams face severe consequences for non-compliance, even if the breach was not entirely their fault.
  3. A flawed liability regime and focus on performative compliance rather than actual security measures contribute to the prioritization of checking boxes over protecting data.
98 implied HN points 09 Jun 23
  1. The NIST AI Risk Management Framework provides a governance, risk, and compliance framework for artificial intelligence.
  2. The document highlights the challenges in AI risk management, including identifying and cataloging risks, emergent risks, and availability of reliable metrics.
  3. The criteria to evaluate AI systems include validity, safety, security, accountability, transparency, privacy, and fairness in managing harmful bias.
98 implied HN points 02 Jun 23
  1. PyPI, a popular repository for Python developers, suspended new uploads and user registrations due to an influx of malicious code.
  2. Malicious packages on PyPI pose severe security threats, such as unintentionally running malware on your system.
  3. Security measures to take include verifying package provenance, checking package names for accuracy, and using trusted hosts with pip.
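The provenance check in the last takeaway can be sketched with a short Python helper that compares a downloaded artifact's SHA-256 digest against a pinned value (`verify_sha256` is a hypothetical name for illustration; pip offers this natively via its `--require-hashes` mode with hash-pinned requirements files):

```python
import hashlib

def verify_sha256(path: str, expected_hex: str) -> bool:
    """Return True if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large wheels/sdists don't load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```

Rejecting any artifact whose digest differs from the pinned value defeats both typosquatted substitutes and tampered copies of the real package.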
78 implied HN points 03 Mar 23
  1. The National Cybersecurity Strategy signals that businesses will need to adapt their cybersecurity strategies to its priorities.
  2. The strategy addresses the importance of defending critical infrastructure and the need to streamline cybersecurity regulations.
  3. Business leaders should be aware of potential regulatory changes impacting software security and consider the implications of a national cyber insurance backstop.