Zaid Al Hamani, CEO and Founder of Boost Security, is a cybersecurity and DevSecOps leader with over 20 years of experience building and scaling global technology operations. Since founding Boost Security in 2020, he has focused on modernizing how organizations secure software development, drawing on prior roles including VP of Application Security at Trend Micro and Co-Founder/CEO of IMMUNIO. Earlier, he held senior leadership positions at Canonical, leading product, engineering, and global support initiatives, and at SITA, where he managed large-scale, mission-critical IT operations. His career reflects a strong track record of building teams, optimizing systems, and advancing modern security practices.
Boost Security is a cybersecurity firm focused on securing the modern software supply chain through a developer-first DevSecOps platform. Its technology integrates directly into CI/CD pipelines to automatically detect, prioritize, and remediate vulnerabilities, reducing manual overhead while maintaining development velocity. By unifying application and supply chain security into a single system, the platform provides full visibility across code, dependencies, and infrastructure, helping organizations strengthen resilience in complex, cloud-native environments.
You previously led application security at Trend Micro and co-founded IMMUNIO. What led you to found Boost Security, and what gap in the market were you uniquely positioned to identify early?
IMMUN.IO was one of the first RASP companies to be founded – and our experience until that point was that WAFs as a runtime protection technology were impossible to maintain, and not very effective. We envisioned a way where WAFs would be replaced with a more accurate, easier-to-maintain solution – by instrumenting the application.
That was in 2012. DevOps was still early, most teams weren't Agile, and Kubernetes was not a thing yet.
Trend Micro acquired IMMUN.IO in 2017. By that time, there were far more DevOps practices: CI/CD pipelines, agile development practices, faster iterations and release cycles, cloud, and so on. Software development teams were better at building software, and shipping faster. Security was still broken though:
- Scans were too slow, or results arrived too late
- Results were too complicated for developers to action
- There was a generally unacceptable false positive rate
- Many new types of artifacts weren't scanned: infrastructure as code, containers, and APIs, for example
Producing software fast had become easier. Producing secure software fast was still hard.
That was the original problem we set out to solve: make DevSecOps work in the real world. Can you get a software development team to simply add security into the SDLC, at a speed that matches the new velocity standards? Can you make the coverage broad – where one platform is all you need? Can you make it so that developers not only adopt the technology, but embrace it and see the benefits? Can you make it scale so that you don't need armies of security professionals to keep up with the volume of code written?
We helped companies inject security into the SDLC during the DevOps era. That was going from 1 to 10. We're now in the era of agentic coding – where agents are writing an enormous amount of code – but it's fundamentally the same problem. The speed and volume of code just went from 10 to 100, and we aim to continue the same trajectory.
You've argued that the software development lifecycle (SDLC) has fundamentally shifted upstream. What was the moment you realized traditional DevSecOps approaches were no longer sufficient?
It was watching how attackers were actually getting in. We kept seeing the same pattern: an exposed GitHub Actions workflow nobody had reviewed since the repo was forked, a token with production cloud access embedded in a runner config, a legitimate CI job hijacked to deploy attacker payloads. These became known as "living off the pipeline" attacks, because the adversary uses your own automation against you, with credentials your security team already approved.
The DevSecOps stack we had built up over a decade had no answer for that. SAST scans application source. SCA scans application dependencies. Both assume the pipeline running them is trustworthy. Meanwhile, the pipeline itself is a YAML file with shell commands, network access, and sensitive credentials, and almost nobody reviews it.
When that becomes the path of least resistance, you can ship perfectly clean code and still hand attackers your cloud.
How should enterprises rethink the SDLC in a world where AI agents are generating code continuously rather than developers writing it step by step?
We've all got to stop thinking about the SDLC as a series of checkpoints. AI agents have collapsed the time between "someone wrote this" and "this is in production" from weeks to minutes. The old model assumed a human cadence between code review, SAST, SCA, and deploy, but we're past that now.
Security has to live where the agent operates: on the developer's machine, inside the prompt context, in the agent's connections to MCP servers and external models. By the time code reaches the pipeline, you have already lost the chance to shape it. The agent already pulled the dependency. The model already saw the credential. Move the controls upstream, to where the work actually happens.
Many organizations still treat AI coding tools as simple productivity layers. Why do you believe they represent an entirely new attack surface rather than just an extension of existing workflows?
Treating an AI coding tool as a productivity layer is like treating a junior developer with root access as a productivity layer. The label is technically accurate, but it gives you no useful framework for thinking about what could go wrong.
A coding agent reads your filesystem, scrapes environment variables for context, fetches dependencies from public registries, opens outbound connections to remote model providers and MCP servers, and executes shell commands. Each of those actions used to require a human in the loop. Now they happen in milliseconds, with the same privileges as the developer who launched the agent.
That collapse fuses trust boundaries that used to be separate: the developer's authority, what an external tool can fetch, and what untrusted code can execute. That creates new opportunities for attackers, and blind spots that defenders can't even see, much less defend.
Boost frames the developer laptop as the new control plane. What risks exist at the endpoint that security teams are currently overlooking?
The biggest one is inventory. Most security teams cannot tell you which AI agents are running on which laptops, which MCP servers those agents are connected to, or which IDE extensions are scraping repository content right now. EDR has no visibility into the agent layer; SIEM cannot see what those agents do locally either. It's a shadow IT problem with code-execution privileges.
Beneath that sits the credential mess. We built an open-source tool called Bagel partly to make this concrete. A typical developer laptop holds GitHub tokens with write access to production repos, cloud credentials that can spin up infrastructure, npm or PyPI tokens that can publish to millions of users, and AI service keys that attackers resell. None of that is hardened the way a CI runner is hardened. The same machine that holds those credentials also browses the web and installs random VS Code extensions.
Pair the two and you have the real attack surface. An untrusted extension running with developer privileges in an environment full of cloud keys is the highest-leverage target in the modern enterprise. Most teams haven't started addressing it.
You've highlighted the "context trap," where AI agents can access local files, environment variables, and configurations. How widespread is the risk of sensitive data leaking through prompts, and why is it so difficult to detect?
Widespread enough that we treat it as the default state of any unmanaged developer environment. Every coding agent we have inspected pulls local context aggressively. They read dotfiles, environment variables, recent files, sometimes entire directory trees, and send that context to a remote model. The tools are designed to work this way; aggressive context grabbing is what makes them useful.
The detection problem starts because the traffic from a leak looks identical to normal product usage. It's TLS to api.openai.com or api.anthropic.com. It comes from an approved enterprise application. Standard DLP sees a developer using the AI tool the company just bought a license for. It doesn't see that one of the strings in that prompt is an AWS secret key the agent grabbed from a half-forgotten .env file in a sibling directory.
You only catch it by inspecting prompts before they leave the laptop, which is exactly where almost no security stack is currently positioned.
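The pre-egress inspection described above can be sketched as a simple pattern scan over the outgoing prompt. The rule set and function name here are illustrative; real scanners add entropy checks and far larger pattern libraries.

```python
import re

# A few secret formats that commonly leak into prompt context (illustrative).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Mask matched secrets in a prompt before it leaves the machine.

    Returns the redacted prompt plus the names of the rules that fired,
    so the event can be logged even when the request is allowed through.
    """
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub("[REDACTED]", prompt)
    return prompt, findings
```

The key design point is placement: this has to run on the laptop, before TLS, because once the request reaches api.openai.com the network-level DLP view is indistinguishable from normal usage.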
You mention machine-speed supply chain attacks. Can you walk through a realistic scenario where an AI agent introduces a vulnerability faster than traditional security tools can identify it?
Here is one we have seen variations of repeatedly. A developer asks an agent to add a feature that needs an HTTP retry library. The agent suggests a package name. The package is plausible-sounding but doesn't actually exist on npm. Within an hour, an attacker registers it, populates it with working retry logic plus a small post-install script that reads ~/.aws/credentials and posts the contents to a webhook. The agent runs npm install without checking, because agents don't check reputation. The credential is gone before the developer even runs the code.
The attack itself is not technically sophisticated, but traditional supply-chain security is built around known vulnerabilities in known packages: CVEs, SBOMs, license scanning. That framework has nothing to say about a package that didn't exist when the scan was last run, was created specifically to match an AI hallucination, and gets ingested before any threat feed updates.
The window from publication to compromise is now measured in minutes. Anything checking after the fact is checking too late.
Are hallucinated dependencies becoming one of the biggest risks in AI-driven development, and what practical steps can organizations take to defend against them?
They're already one of the biggest. Attackers actively monitor popular AI tools for hallucinations and register the suggested package names within minutes. Researchers called it slopsquatting a few years ago, when it first started happening, and the name stuck. Once a dependency name gets hallucinated often enough, sitting on it is a passive supply-chain attack with near-zero effort.
The practical defenses look different from what most teams currently have. Start at ingestion: block typosquatted and newly-registered packages the moment npm install or pip install runs, on the developer's machine, before anything hits disk. Post-hoc detection in CI doesn't help when a post-install script has already exfiltrated a credential. Then give the agent guardrails to operate within. Inject your approved-dependency list directly into the agent's context, so the model sees what's allowed before it generates a suggestion. Asking developers to write "secure prompts" is not a strategy; a real strategy means security sets the boundary and the agent inherits it. And start tracking an AI Bill of Materials. Most teams cannot tell you which agents, models, and packages are touching which repositories. You cannot defend what you cannot inventory.
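The "security sets the boundary, the agent inherits it" idea can be sketched as two halves of one policy: an install-time gate and the same list rendered into text that rides ahead of the agent's prompt. The list and wording are illustrative, not Boost's format.

```python
# Approved-dependency list maintained by security (illustrative contents).
APPROVED_DEPENDENCIES = {"requests", "urllib3", "tenacity"}

def check_install(package: str) -> bool:
    """Install-time gate: allow only packages on the approved list."""
    return package.lower() in APPROVED_DEPENDENCIES

def context_preamble() -> str:
    """Render the same policy as text injected into the agent's context,
    so the model sees the boundary before it generates a suggestion."""
    allowed = ", ".join(sorted(APPROVED_DEPENDENCIES))
    return f"Only suggest dependencies from this approved list: {allowed}."
```

The point of the pairing is that the gate is the enforcement and the preamble is the steering: the agent rarely hits the block because it was never shown the unapproved option in the first place.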
You've said security cannot begin at CI/CD. What does a modern security pipeline look like when security needs to start earlier in the development process?
If security begins at CI/CD, you have ceded the entire pre-commit phase to an environment you don't control. The agent has already ingested context; your credential may already be in someone else's logs. You're scanning a carcass.
A modern pipeline begins at the laptop. That means inventorying the agents and extensions running there, validating which MCP servers and models they're allowed to talk to, sanitizing what leaves the machine, and blocking malicious packages before they install. From there, the policy follows the work into the IDE. We inject security standards directly into the agent's context window so generated code stays inside the guardrails from the first token. The pipeline still runs, doing final verification on controls that were already enforced upstream.
The pipeline itself doesn't disappear. Its role becomes verification: confirming that the upstream controls held.
As organizations continue adopting AI coding agents, what are the most critical changes they should make today to ensure their development environments remain secure over the next few years?
The biggest mistake is securing only what gets committed. The interesting risk now lives in the eight hours before a commit happens. Unseen drama can unfold on the laptop, in the prompt, or in the package install. If your tools start at the PR, you are defending the wrong half of the workflow.
Closely related: stop treating coding agents as productivity software. They are non-human users with shell access, repository write privileges, and outbound network connections. Govern them the way you govern any other privileged identity, with an inventory, approved capabilities, and audit logs.
The last shift is harder culturally. Most current "AI security" tools surface findings and route them to humans. Humans cannot triage at the speed agents generate. Whatever you adopt has to fix issues automatically inside the workflow, with traceable reasoning, or it becomes another dashboard nobody reads.
Thank you for the great interview; readers who wish to learn more should visit Boost Security.