Boring AppSec

Boring AppSec explores the foundational aspects of application security (AppSec), focusing on integrating emerging technologies, such as large language models (LLMs) and generative AI, into security practices. It provides frameworks for risk management, highlights the evolution of security tools, and underscores the importance of balancing user experience with security measures.

Topics: Risk Management in AppSec, Integrating LLMs and AI in Security, Evolving Security Tools and Practices, Balancing User Experience with Security, Automation in Security, Cloud Security versus Application Security, Security Prioritization Frameworks, Enhancing Security Teams' Impact

The hottest Substack posts of Boring AppSec

And their main takeaways
7 implied HN points • 27 Jan 25
  1. ADR (Application Detection and Response) focuses on real-time data in production, which helps reduce false positives, while shift-left aims to find issues early in the development process, where they are easier to fix.
  2. You need a balance of both ADR and shift-left strategies. ADR manages existing problems (stock), and shift-left deals with changes being made (flow).
  3. When choosing tools, flow tools should be light and supportive for developers, while stock tools track and analyze existing issues. They both require different management approaches.
38 implied HN points • 10 Nov 24
  1. The Secure by Design initiative aims to improve software security, but it's unclear how effective it will actually be. Companies might just treat it as another compliance standard without real change.
  2. CISA's approach mixes good ideas with vague guidelines, making it hard for security teams to use effectively. This can lead to companies focusing on basic compliance instead of deeper security improvements.
  3. Awareness initiatives can be helpful, especially for new issues in cybersecurity, but they often become outdated. What worked in the past, like OWASP Top 10, may not be useful for current complex security challenges.
84 implied HN points • 05 Sep 23
  1. The post discusses a framework for securely using LLMs like ChatGPT and GitHub Copilot in companies.
  2. It highlights key risks and security controls for ChatGPT, focusing on data leakage and over-reliance on AI-generated output.
  3. For GitHub Copilot, it addresses risks like sensitive data leakage and license violations, along with suggested security controls.
15 implied HN points • 05 Feb 23
  1. Security teams can leverage their skills to become force multipliers for the organization in areas like tool/platform adoption, incident management, and program management.
  2. Skills picked up by security professionals, such as evangelizing, communicating, prioritizing, and managing stakeholders, can be valuable in various other teams and projects across the organization.
  3. Improving customer trust through branding is a key aspect of a security program, and security professionals can help in public relations, content creation, and event participation to enhance the company's image.
3 HN points • 13 Oct 23
  1. Pentesters should care about security implications of integrating LLMs in applications.
  2. Identifying LLM usage in applications can involve looking for client-side SDKs, server-side API calls, and popular adoption signs (see the sketch after this list).
  3. Assessing LLM-integrated applications requires manual testing, tooling like Garak and LLM Fuzzer, and aiding developers in defending against vulnerabilities.
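The post summarized above is prose-only; as a rough illustration of the "look for client-side SDKs and server-side APIs" step, here is a minimal Python sketch that flags likely LLM integrations in captured hostnames or source text. The host and import lists, and the function names, are assumptions for illustration, not an exhaustive signature set or the post's own method.

```python
# Minimal sketch (not from the post): flag likely LLM integrations by
# scanning observed HTTP hosts and source text for well-known provider
# endpoints and SDK imports. Lists below are illustrative, not exhaustive.

LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
LLM_SDK_MARKERS = ("import openai", "from openai", "import anthropic")


def flag_llm_hosts(observed_hosts):
    """Return captured hostnames that match known LLM provider APIs."""
    return sorted(h for h in observed_hosts if h in LLM_API_HOSTS)


def flag_llm_sdk_usage(source_text):
    """Return SDK import markers found in a source file or bundle."""
    return [marker for marker in LLM_SDK_MARKERS if marker in source_text]


if __name__ == "__main__":
    hosts = {"cdn.example.com", "api.openai.com"}
    print(flag_llm_hosts(hosts))  # ['api.openai.com']
    print(flag_llm_sdk_usage("import openai\nclient = openai.OpenAI()"))
```

In practice this kind of check would feed into the manual testing and tooling (e.g. Garak, LLM Fuzzer) mentioned in the takeaways, rather than replace them.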
2 HN points • 30 May 23
  1. Degrading user experience to enhance security can harm both aspects.
  2. Considering unintended consequences of design choices is crucial for all engineering disciplines, including security.
  3. Tradeoffs between usability and security can lead to negative impacts on password strength, user behavior, and session management.
1 HN point • 13 Aug 23
  1. Using third-party LLM providers can offer advantages like minimal setup complexity and experimentation with low upfront costs.
  2. Challenges with third-party LLMs include concerns about data security, biases in responses, and potential cost overruns.
  3. To manage risks when integrating LLMs, consider implementing an LLM gateway for traffic routing, regular auditing and testing, and a monitoring layer for usage (one possible shape is sketched below).
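The takeaways describe the gateway-plus-monitoring architecture only in prose. The following Python sketch, using assumed names such as `redact`, `call_provider`, and `gateway`, shows one toy way such a choke point could combine traffic routing, basic redaction, and usage logging; it is an illustration under those assumptions, not the post's implementation.

```python
# Minimal sketch of an internal "LLM gateway": one choke point that redacts
# obvious secrets, forwards the prompt to the provider, and records usage
# for later auditing. Redaction rules and the provider call are placeholders.
import re
import time

# Placeholder redaction rules; a real deployment would use proper DLP checks.
SECRET_PATTERNS = [re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+")]

usage_log = []  # in practice, ship these records to a logging/SIEM pipeline


def redact(prompt: str) -> str:
    """Strip obviously sensitive tokens before the prompt leaves the org."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt


def call_provider(prompt: str) -> str:
    """Placeholder for the real third-party LLM API call."""
    return f"(model response to: {prompt[:40]}...)"


def gateway(user: str, prompt: str) -> str:
    """Route a prompt through redaction, the provider, and usage logging."""
    safe_prompt = redact(prompt)
    usage_log.append({"user": user, "chars": len(safe_prompt), "ts": time.time()})
    return call_provider(safe_prompt)


if __name__ == "__main__":
    print(gateway("alice", "Summarize this doc. api_key=sk-123"))
    print(usage_log)
```

Centralizing calls like this is what makes the auditing and monitoring steps from the takeaways practical: there is a single place to enforce redaction, attribute usage, and cut off a misbehaving integration.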
0 implied HN points • 19 Jan 25
  1. The newsletter is shifting focus from AppSec operations to building a new AppSec company. This change comes from a personal career transition from being a practitioner to a founder.
  2. Authenticity in writing has become harder because daily problem-solving in AppSec is no longer a part of the new role. The writer has a list of topics but feels less connected to the daily challenges.
  3. Future posts will explore industry insights, engineering challenges, and frameworks for solution thinking in AppSec. The style will stay casual, and there's an aim to post more regularly.