The AI-DLC: The Good, the Bad, and the Ugly


AI coding assistants went from experiment to enterprise standard faster than almost any technology in recent memory. In a recent StackHawk survey of 250+ AppSec stakeholders, 87% of organizations had adopted tools like GitHub Copilot, Cursor, or Claude Code. Over a third are already at widespread or full adoption.

The productivity gains are real. So are the security implications. But the conversation about AI coding risk remains stuck on whether AI "writes vulnerable code," which misses the deeper shifts in how software gets built and how it must be secured.

The Good

I think this one is obvious. Speed matters when it comes to product differentiation and innovation, and AI delivers it. Developers are producing significantly more code than they did six months ago. Features that used to take weeks now ship in days.

AI can also improve baseline code quality. Assistants trained on millions of repositories have internalized common patterns, including secure ones. For routine work (input validation, standard auth flows, common API patterns) AI-generated code is often more consistent than what a junior developer writes from scratch. The "AI writes insecure code" narrative ignores that human-written code was never a security gold standard either.

And boilerplate security is getting automated. Parameterized queries, standard encryption patterns, OAuth scaffolding: these are exactly where AI assistants shine. The repetitive security hygiene that developers used to shortcut because it was tedious now gets generated correctly by default.
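Parameterized queries are a good illustration of the boilerplate an assistant tends to get right by default. A minimal sketch in Python with the standard-library sqlite3 module (the table and values are hypothetical, for illustration only):

```python
import sqlite3

# In-memory database with a hypothetical users table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

# Shortcut pattern: string concatenation lets attacker-controlled input
# rewrite the query (classic SQL injection). Tedious to avoid by hand,
# and exactly what used to slip through:
#   conn.execute("SELECT id FROM users WHERE email = '" + user_input + "'")

# Hygienic pattern: the ? placeholder sends the value separately from the
# SQL text, so it can never be interpreted as query syntax.
user_input = "x' OR '1'='1"  # injection attempt
row = conn.execute(
    "SELECT id FROM users WHERE email = ?", (user_input,)
).fetchone()
print(row)  # None: the payload is treated as a literal string, not SQL
```

With concatenation the same payload would match every row; with the placeholder it matches nothing. Assistants produce the placeholder form by default because it dominates their training data.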

The Bad

The context gap is real and growing. When you write code line by line, you develop intuition about how it works, what it touches, where the edge cases live. When you review AI-generated code, you're asking a different question: "Does this work?" Not "Is this secure?" Not "How does this interact with our authorization model?" Developers accepting full implementations without deeply understanding them is a fundamentally different risk profile than developers building those implementations themselves.

Documentation and institutional knowledge suffer. AI-assisted development often means less time spent in the codebase. Developers understand features at a functional level but may not trace the security implications. That knowledge gap compounds; six months later, nobody quite remembers why a particular API endpoint exists or what data it can access.

Manual processes can't keep pace. When development velocity increases 5-10x, everything downstream breaks. Security reviews, architecture approvals, asset documentation, attack surface monitoring: any process that relies on humans keeping pace with development is now permanently behind. Our survey found "keeping up with rapid development velocity and AI-generated code" was the number one challenge cited by AppSec stakeholders.

The Ugly

The risk isn't the code; it's the confidence. The real danger isn't that AI writes vulnerable code (though it can). It's that organizations ship faster while understanding less about what they're shipping. Tests pass, code reviews approve, features deploy, but the security team's mental model of the application diverges further from reality with every AI-assisted sprint.

Shadow applications multiply faster than ever. That weekend proof-of-concept an engineer spun up "just to test something"? AI assistants make it trivially easy to build, which means trivially easy to forget. Our survey found only 30% of AppSec stakeholders are "very confident" they know 90%+ of their attack surface. AI-assisted development makes that number worse, not better.

Security teams are triaging, not securing. When code volume increases but AppSec headcount doesn't, something has to give. Our data shows 50% of AppSec teams spend 40% or more of their time just triaging and prioritizing findings, figuring out what's real before they can address what matters. That ratio was already unsustainable. AI development velocity breaks it completely.

What This Means for Security Leaders

The organizations getting this right aren't trying to slow down AI adoption; that ship has sailed. They're adapting their security programs for a world where:

  • Visibility is foundational. You can't secure what you don't know exists. Automated attack surface discovery from source code isn't a nice-to-have when developers ship faster than documentation can follow.
  • Runtime validation matters more than ever. When developers have less context about the code they're shipping, you need testing that validates how applications actually behave, not just how code looks statically.
  • Intelligence beats volume. The answer to 5x more code isn't 5x more findings to triage. It's smarter prioritization that connects vulnerabilities to business risk, so finite AppSec resources focus on what actually matters.

AI coding assistants aren't going away. The productivity benefits are too significant, and the adoption curve is already behind us. The question isn't whether to embrace them; it's whether your security program is built for the world they've created.