Chapter 6
Frontend and UX Security
The frontend is both the first line of defense and a potential exfiltration surface. Apply standard appsec practices while accounting for the AI-specific risks that come with rendering model output and surfacing untrusted content.
6.1 Standard Application Security
All the normal appsec practices still apply. The difference with agentic AI is that you need to treat the UI as an exfiltration surface, not just a presentation layer.
Authentication and Session Management
- Strong authentication (SSO/MFA for admins) with hardened session management
- HttpOnly and Secure cookies, session rotation, idle timeouts
- Step-up authentication for privileged actions
Request Protection
- CSRF protection for every state-changing request (tokens plus same-site cookies)
- Input validation and output encoding everywhere - never rely on the model to emit "safe" text
- Clickjacking and framing controls
Security Headers
Deploy these headers on every response:
Strict-Transport-SecurityX-Frame-Options/frame-ancestorsX-Content-Type-OptionsReferrer-PolicyContent-Security-Policy(CSP)
CSP Requirements
Your Content Security Policy needs to be strict enough to actually matter:
- Block inline scripts by default (
script-src 'self'plus nonces or hashes) - Restrict
connect-src,img-src,prefetch-src, andform-actionto vetted domains to reduce exfiltration channels - Pair with Trusted Types (where supported) so only vetted sanitizers can write to dangerous sinks (
innerHTML,srcdoc, etc.)
Third-Party Isolation
Maintain strict separation between the app origin and any third-party widgets or file viewers. Load them only inside sandboxed iframes with unique origins and no shared storage.
6.2 Output Handling and XSS-Resistant Rendering
Treat everything emitted by a model exactly like HTML pasted in from the public internet. Because that is effectively what it is.
General Rules
- Prefer plain-text rendering. Only enable rich rendering when the product absolutely requires it and you have explicit sanitization in place.
- Never inject raw model output via
innerHTML,dangerouslySetInnerHTML, or template literals - even for "trusted" markdown responses. - Default every link generated from model output to
rel="noopener noreferrer"and strip protocols other thanhttp/https. - Disallow model control over HTML primitives. Constrain responses to structural markdown or schemas.
Implementation Checklist
- Render server-side when feasible, or use a battle-tested markdown/rich text library with predictable output.
- Always run renderer output through a security-focused sanitizer (e.g., DOMPurify) configured with a strict allowlist before it touches the DOM.
- Disable raw HTML blocks entirely. Treat
<script>,<style>,<iframe>,<img>,<form>, event handlers, andjavascript:URLs as fatal violations. - Enforce origin and scheme policies on generated links. Optionally warn or block when the model produces new or unseen domains.
- If images are required, proxy every request - strip cookies, enforce content-type and size limits, restrict destinations.
- Render code blocks as inert text with CSS highlighting only. Never auto-run or evaluate code.
Diagram Renderers (Mermaid, PlantUML, etc.)
Treat diagram specs as untrusted programs:
- Strip or deny tokens resembling HTML tags, CSS,
javascript:/data:URLs, or link directives before invoking the renderer. - Enforce maximum length and complexity to prevent resource exhaustion or context flooding.
- Disable renderer features that can emit HTML, attach event handlers, or auto-create links where possible.
- Render diagrams inside a sandboxed iframe with a dedicated origin and strict CSP.
- If diagrams are not mission-critical, turn them off. The safest renderer is no renderer.
6.3 UX for Safe and Honest Agent Use
Good UX is a security control. Design your interface so users understand what the agent is doing and can intervene when it gets things wrong.
Transparency
- Show users what the agent can and cannot do. Include an explicit capabilities and limits list in the UI, especially in high-risk domains.
- Display clear disclaimers when agents operate in health, legal, financial, or security contexts.
- Label AI-generated content so users know what came from the model. For impactful decisions, provide a summary or explanation view when feasible.
- Give users feedback controls - buttons to flag harmful, incorrect, or biased responses, with an easy path to escalate to a human.
UI-Driven Exfiltration Constraints
Any flow that sends AI output somewhere else - email, tickets, analytics, "share" links - is a potential data leak. Lock it down:
- Show users exactly what will be transmitted and require confirmation before sending.
- Apply the same PII and secret scrubbing used for normal responses before allowing exports or integrations to fire.
- Default to excluding full transcripts, hidden system prompts, or internal traces from exports unless a human explicitly opts in and reviews the payload.
- Disable or heavily gate auto-generated QR codes, links, and buttons that could smuggle sensitive data into URLs or third-party endpoints.
6.4 Prompt Injection-Aware UI Design
Frontends often surface untrusted content: emails, web pages, PDFs. Assume any of these may contain prompt injections.
Visual Separation
- Visually distinguish between system/agent instructions, user input, and external/untrusted content. Use different backgrounds, borders, or labels so the distinction is obvious.
- Explicitly label external content as untrusted.
- Avoid exposing full system prompts or tool definitions to end users.
Custom System Prompts
If you allow "custom system prompts," restrict them to internal, technically literate users. Use validated templates with constrained parameters (e.g., tone: friendly/professional; domain: sales/support) rather than freeform text.
High-Risk Content Sources
- For emails, web pages, and scraped documents, consider reduced-functionality renderers with no links, no active content, and muted colors to signal "handle with care."
- When embedding untrusted documents alongside chat, keep them in separate panes or tabs so injected instructions are less likely to be mistaken for trusted guidance.
Need a frontend security review for your AI application?
We test agentic AI interfaces for XSS, exfiltration, prompt injection through UI, and the full range of appsec issues that come with rendering model output. Let's talk.
Get in touch