Zscaler has joined Anthropic’s Project Glasswing and OpenAI’s Trusted Access for Cyber programme to bring frontier AI into its secure software development and security operations, aiming to detect vulnerabilities faster and harden its products [1].

Under Project Glasswing, Zscaler will use Anthropic’s Claude Mythos Preview to scan its software stack and Zero Trust Exchange platform for high‑severity vulnerabilities, integrating the model into its secure software development lifecycle [1].

Zscaler will also deploy Anthropic’s Opus 4.7 for AI red‑teaming and agentic security operations to improve detection and response against emerging AI‑enabled threats; the company says frontier models can identify vulnerabilities and generate exploits far faster than manual research [1].

Through OpenAI’s Trusted Access for Cyber programme, Zscaler will use GPT‑5.4‑Cyber to identify, triage and fix vulnerabilities earlier in development, and will integrate GPT‑5.4‑Cyber and Codex Security into its internal multi‑agent security architecture for product hardening and cyber defence [1].

The partnerships embed frontier AI capabilities within Zscaler’s zero‑trust architecture: the company will continue to use OpenAI models for continuous red‑teaming, prompt hardening, AI asset analysis and agentic security research in an effort to stay ahead of AI‑enabled attackers [1].

Why this matters in India: as Indian enterprises and cloud service customers accelerate digital transformation, the move underlines a simple reality: defenders are now adopting the same frontier models that can speed attack‑chain discovery. Indian security teams should factor AI‑assisted vulnerability discovery and zero‑trust hardening into their risk assessments and procurement decisions as these capabilities proliferate [1].

How this was made. This article was assembled by Startupniti’s editorial AI from the source listed in the right rail. The synthesis ran through our 4-model cascade (Gemini Flash Lite → GPT-4o-mini → DeepSeek → Llama 3.3 70B) and was logged to ops.llm_calls. Every fact traces to a citation; if a fact looks wrong, write to corrections.