Written by

Perplexity Team

Published on

Feb 20, 2026

How We Built Security Into Comet From Day One

Comet gives AI assistants powerful capabilities to browse websites, interact with content, and complete complex tasks on behalf of users. These capabilities require equally powerful safeguards. Before launching Comet publicly, we hired Trail of Bits to stress-test our defenses through formal audits. Their work helped us identify vulnerabilities and strengthen our mitigations before users started relying on Comet for sensitive tasks.​

Building defense from the ground up

Prompt injection attacks are constantly evolving, and defending against them requires a multi-layered approach. We knew from the start that a single defense mechanism wouldn't be enough. Our core principle is simple: overlapping protections ensure that if one layer is bypassed, others remain to keep users safe.​ Here’s how we put that into practice.

Our security journey: Key milestones

April 2025: Pre-launch security audit

Before launching Comet publicly, we hired Trail of Bits to conduct systematic threat modeling of our architecture. Their team tested real-world attack scenarios, attempting to bypass our defenses using techniques adversaries might employ in the wild. The audit identified specific gaps in our protection system and started an iterative remediation process. We refined our mitigations and closed vulnerabilities before they could affect users.​

October 2025: Publishing our defense architecture

We published a detailed technical post explaining our four-layer defense architecture framework and the reasoning behind each component. The post outlined specific attack types we defend against, from hidden HTML and CSS injections to content confusion and goal hijacking. We also announced our bug bounty program, inviting security researchers to test our defenses and report vulnerabilities. Transparency has always been core to our security approach: we believe sharing our methodology makes the entire AI industry stronger.​

December 2025: Open-sourcing BrowseSafe

Security improves when the entire AI community can learn from shared research. We released BrowseSafe, our detection model and evaluation benchmark, as open-source tools. BrowseSafe-Bench includes 14,719 examples covering 11 attack types, 9 injection strategies, and 3 linguistic styles. By publishing our methodology and datasets, we're helping other AI developers build safer systems and contributing to industry-wide security standards. The response from the developer community has been encouraging, with teams already using BrowseSafe to improve their own AI assistant security.​

What we've learned

Building security for AI assistants has taught us critical lessons about the evolving threat landscape. External adversarial testing reveals blind spots that internal teams can miss, no matter how skilled. Threat modeling isn't a one-time exercise but an ongoing discipline that must evolve as attack techniques advance. Most importantly, security requires collaboration across the industry. By sharing our research, engaging external experts, and contributing to open standards, we're helping build a safer ecosystem.

Our ongoing commitment

We maintain regular security assessments as Comet evolves, ensuring new features undergo rigorous scrutiny before launch. To help reduce misinformation in AI security research, we invest in robust evaluation practices and transparent reporting so that findings about Comet’s defenses are accurate, reproducible, and not distorted by sensational or incomplete security claims. 

Our team also runs a thriving Vulnerability Disclosure Program and private bug bounty program, inviting security researchers everywhere to test our defenses. As AI assistants become more capable and handle increasingly sensitive tasks, our security investments will continue to scale accordingly.​

Share this article