Snyk Embeds Anthropic’s Claude to Advance AI-Powered Security for Software Development


BOSTON, May 07, 2026 (GLOBE NEWSWIRE) – Snyk, the AI security company, today announced it is leveraging Anthropic’s Claude models to advance software security in an era of AI-powered development.

Starting today, Snyk has integrated Claude into the Snyk AI Security Platform – powering automated vulnerability discovery, prioritization, and developer-ready fixes across code, dependencies, containers, and AI-generated artifacts.

The threat driving that integration is real and accelerating.

It is a challenge that JPMorganChase’s Global Technology Leadership Team named in April 2026 as one of the most critical actions enterprises must take now – embedding security directly into the AI development and deployment lifecycle. The Snyk AI Security Platform delivers exactly that.

Frontier AI Discovery Requires AI-Native AppSec


The Snyk AI Safety Platform is purpose-built for this operational problem. The place frontier fashions floor findings at machine pace, Snyk converts them into prioritized, developer-ready fixes – routinely, contained in the workflows the place code is already being written. Claude’s reasoning capabilities energy each ends: sharper discovery and sooner, higher-confidence remediation.

“As AI dramatically accelerates how fast developers can write code, traditional security simply can’t keep up,” said Manoj Nair, Chief Innovation Officer at Snyk. “By leveraging Claude’s advanced reasoning across the Snyk AI Security Platform, we’re equipping enterprises with an intelligent, autonomous defense system that scales right alongside their AI-driven innovation.”

Security for AI-Native and Agentic Development

Evo by Snyk leverages Claude’s capabilities within enterprise AI governance workflows – continuously discovering every AI asset across the organization, including models, agents, MCP servers, datasets, and third-party tools. It red-teams running agents for prompt injection and data exfiltration, scans the agent supply chain for malicious or hidden capabilities, and enforces runtime policy on tool calls before damage occurs.

Snyk’s 2026 State of Agentic AI Adoption Report – drawn from more than 500 enterprise Evo environments – found that for every AI model an enterprise deploys, it introduces nearly three times as many additional software components. 82% of AI tools in enterprise use today come from third-party packages, yet traditional governance frameworks are rarely built to track them. 65-70% of production code is AI-generated; nearly half contains vulnerabilities, and the agents shipping that code operate almost entirely outside traditional AppSec tooling. Cloud security platforms show where AI runs. Evo shows where AI is introduced – and stops the risk at the source.

“In AI security, detection was never the bottleneck,” said Jason Clinton, Deputy CISO at Anthropic. “By pairing Claude’s capabilities with Snyk, enterprises can turn high-fidelity findings into action inside the workflows where software is built.”

“Over the last twelve months, we recognized that our application security program would struggle to keep pace with agentic development as both the models and our engineers improved,” said Brendan Putek, Director of DevOps and Security Operations at Relay Network. “To get ahead of the curve, we became design partners with Snyk, leveraging the same agentic tooling to shift security from a retroactive gate to an integrated part of code creation. Adding Anthropic’s frontier discovery capabilities to the prioritization, governance, and fix experience Snyk provides will enable us to deliver an even stronger security posture for our clients, without burning out the engineering team to do it.”

Availability

The integration of Anthropic’s Claude models into the Snyk AI Security Platform is available to joint customers today, with expanded access rolling out through 2026. To learn more, visit snyk.io or contact your Snyk account team.

About Snyk

Snyk, the AI security company, empowers the AI-driven enterprise to develop and secure its future, ensuring organizations can trust AI to innovate without limits. The Snyk AI Security Platform serves as the industry’s AI Security Fabric, weaving security directly into the flow of creation to secure GenAI code, AI-native applications, and agentic systems. By delivering visibility, control, and autonomous defense that secures at inception, Snyk enables over 4,500 global customers to build fearlessly in the AI era.

About Anthropic

Anthropic is an AI safety and research company dedicated to building reliable, interpretable, and steerable AI systems. Its Claude family of models enables advanced capabilities across a wide range of applications, including code understanding and security analysis. For more information, visit anthropic.com.

Media Contact

[email protected]