Can JavaScript Escape a CSP Meta Tag Inside an Iframe?
Tags: iframes, security, javascript, content-security-policy, sandboxing
JavaScript running inside a `sandbox="allow-scripts"` iframe cannot escape or disable a `<meta http-equiv="Content-Security-Policy">` tag, even by removing it, modifying it, or replacing the document. Testing across Chromium and Firefox confirmed that CSP policies defined via meta tags are enforced at parse time, and they persist even when the iframe is navigated to a data: URI.
In trying to build my own version of Claude Artifacts, I got curious about options for applying CSP headers to content in sandboxed iframes without using a separate domain to host the files. It turns out you can inject `<meta http-equiv="Content-Security-Policy">` tags at the top of the iframe content and they'll be obeyed, even if subsequent untrusted JavaScript tries to manipulate them.
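The approach above can be sketched as follows. This is a minimal illustration, not code from the post: the helper name and the specific policy string are my own assumptions. The key point is that the `<meta>` tag is prepended before any untrusted markup, so the parser applies the policy first.

```javascript
// Policy string is illustrative: block everything except inline scripts.
const CSP_META =
  '<meta http-equiv="Content-Security-Policy" ' +
  'content="default-src \'none\'; script-src \'unsafe-inline\'">';

// Hypothetical helper: wrap untrusted HTML so the CSP meta tag is
// parsed before any of the untrusted content.
function wrapUntrustedHtml(untrustedHtml) {
  return '<!DOCTYPE html><html><head>' + CSP_META +
         '</head><body>' + untrustedHtml + '</body></html>';
}

// In a browser you would then load it into a sandboxed iframe:
//   const iframe = document.createElement('iframe');
//   iframe.setAttribute('sandbox', 'allow-scripts');
//   iframe.srcdoc = wrapUntrustedHtml(userHtml);
//   document.body.appendChild(iframe);
//
// Per the findings above, untrusted script inside the iframe cannot
// undo the policy once it has been parsed, e.g.:
//   document.querySelector('meta').remove();        // policy persists
//   document.open(); document.write('...');          // policy persists
//   location = 'data:text/html,<script>...</script>' // policy persists
```

Because the policy is enforced at parse time, ordering matters: the meta tag must appear before any untrusted bytes in the document.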
Simon Willison Blog
https://simonwillison.net/2026/Apr/3/test-csp-iframe-escape/
