Critical Vulnerability in Claude Code Emerges Days After Source Leak - SecurityWeek

Gemma 4 26B A3B is mindblowingly good, if configured right
For the last few days I've been trying different models and quants on my RTX 3090 in LM Studio, but every single one glitches on tool calling: an infinite loop that doesn't stop. I really liked this model because it is fast, around 80-110 tokens a second, and it maintains very high speeds even at high context. I had great success with tool calling in the Qwen3.5 MoE model, but the issue I had with Qwen models is that some kind of bug in Win11 and LM Studio makes prompt caching not work, so when the conversation hits 30-40k context, it is so slow at processing prompts it just kills my will to work with it. Gemma 4 is different: it is much better supported in llama.cpp and the caching works flawlessly. I'm using flash attention + Q4 quants, and with this I can push it to literally
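For reference, settings like the ones described (flash attention plus 4-bit quantization, here applied to the KV cache) can be passed to llama.cpp's server directly. This is a sketch, not a verified command line: the model filename and context size are placeholders, and flag spellings vary between llama.cpp builds, so check `llama-server --help` on your version.

```shell
# Sketch: llama-server (from llama.cpp) with flash attention enabled and a
# 4-bit-quantized KV cache. The model path below is a placeholder.
llama-server \
  -m gemma-4-26b-a3b-q4_k_m.gguf \
  --ctx-size 40960 \
  -fa \
  --cache-type-k q4_0 \
  --cache-type-v q4_0
```

Note that quantizing the V cache generally requires flash attention to be enabled, which is why `-fa` appears alongside the cache-type flags.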

Methodology
AI agents are running third-party code on your machine. Last week, Anthropic announced extra charges for OpenClaw support in Claude Code, drawing fresh attention to the ecosystem. We wanted to answer a straightforward question: how safe are the most popular OpenClaw skills? We used AgentGraph's open-source security scanner to analyze 25 popular OpenClaw skill repositories from GitHub. The scanner inspects source code for:

- Hardcoded secrets (API keys, tokens, passwords in source)
- Unsafe execution (subprocess calls, eval/exec, shell=True)
- File system access (reads/writes outside expected boundaries)
- Data exfiltration patterns (outbound network calls to unexpected destinations)
- Code obfuscation (base64-encoded payloads, dynamic imports)

It also detects positive signals: authentication checks
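To make the categories above concrete, here is a minimal sketch of what such static checks can look like. This is a hypothetical toy checker, not AgentGraph's scanner: the regex and the AST rules cover only a handful of the patterns listed (hardcoded secrets, eval/exec, shell=True).

```python
import ast
import re

# Crude pattern for hardcoded credentials assigned to suggestive names.
SECRET_RE = re.compile(
    r"(api[_-]?key|token|passwd|password)\s*=\s*['\"][^'\"]{8,}", re.I
)

def scan_source(source: str) -> list[str]:
    """Return a list of findings for one Python source file."""
    findings = []
    if SECRET_RE.search(source):
        findings.append("possible hardcoded secret")
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Direct eval()/exec() calls.
        if isinstance(node.func, ast.Name) and node.func.id in ("eval", "exec"):
            findings.append(f"unsafe {node.func.id}() call")
        # Any call passing shell=True (e.g. subprocess.run).
        for kw in node.keywords:
            if kw.arg == "shell" and isinstance(kw.value, ast.Constant) \
                    and kw.value.value is True:
                findings.append("call with shell=True")
    return findings

print(scan_source(
    "import subprocess\n"
    "subprocess.run(cmd, shell=True)\n"
    "eval(payload)\n"
    "api_key = 'sk-abcdef123456'\n"
))
```

A real scanner would also resolve imports, follow string concatenation, and decode base64 literals before matching; the AST walk above only flags the syntactically obvious cases.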
trunk/23618880643dd5dadb28c68e0fc154beaa8c67f4: [caffe2] Remove unused batch_box_cox perfkernel files (#179515)
These files are unused in the codebase and are being de-synced by D99686350. They were originally added by: #86569 (Unify batch_box_cox implementations into perfkernels folder) #143556 (Move vectorized templates into a separate file for box_cox operator) #143627 (Add AVX512 support for box_cox operator) #159778 (Add float batch box cox SVE128 implementation) Authored with Claude. Pull Request resolved: #179515 Approved by: https://github.com/atalman
More in Models

Got Gemma 4 running locally on CUDA, both float and GGUF quantized, with benchmarks
Spent the last week getting Gemma 4 working on CUDA with both full-precision (BF16) and GGUF quantized inference. Here's a video of it running. Sharing some findings because this model has some quirks that aren't obvious.

Performance (Gemma4 E2B, RTX 3090):

| Config                  | BF16 Float | Q4_K_M GGUF |
|-------------------------|------------|-------------|
| short gen (p=1, g=32)   | 110 tok/s  | 170 tok/s   |
| long gen (p=512, g=128) | 72 tok/s   | 93 tok/s    |

The precision trap nobody warns you about: honestly, making it work was harder than I thought. Gemma 4 uses attention_scale=1.0 (QK-norm instead of the usual 1/sqrt(d_k) scaling). This makes it roughly 22x more sensitive to precision errors than standard transformers. Things that work fine on LLaMA or Qwen will silently produce garbage on Gemma 4: F1
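The precision sensitivity described above can be illustrated with a small NumPy sketch. The idea: fp16 has fixed relative precision, so its absolute rounding step (ulp) grows with magnitude; skipping the 1/sqrt(d_k) scaling leaves attention logits ~sqrt(d_k) times larger, so each logit loses more absolute precision before softmax. (The specific 22x figure is the post's claim, not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(0)
d_k = 256
q = rng.standard_normal(d_k, dtype=np.float32)
k = rng.standard_normal(d_k, dtype=np.float32)

# Standard attention divides the QK logit by sqrt(d_k); a QK-norm model with
# attention_scale=1.0 feeds the raw dot product into softmax instead.
logit_raw = float(q @ k)
logit_scaled = logit_raw / np.sqrt(d_k)

# fp16 keeps ~11 bits of relative precision, so the absolute rounding step
# (ulp) is larger for larger values: the unscaled logit rounds more coarsely.
ulp_raw = float(np.spacing(np.float16(logit_raw)))
ulp_scaled = float(np.spacing(np.float16(logit_scaled)))
print(ulp_raw / ulp_scaled)
```

The same effect is why kernels that accumulate or store logits in half precision can work fine for scaled models and quietly degrade on a model that skips the scaling.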

Gemma-4 E4B model's vision seems to be surprisingly poor
The E4B model is performing very poorly in my tests, and since no one seems to be talking about it, I had to unlurk myself and post this. It's performing badly even compared to Qwen3.5-4B. Can someone confirm or dis...uh...firm (?) My test suite has roughly 100 vision-related tasks: single-turn with no tools, only an input image and a prompt, but with definitive answers (not all of them are VQA, though). Most of these tasks are upstream of any kind of agentic use case. To give a sense: there are tests where the inputs are screenshots from which certain text information has to be extracted; others are images on which the model has to perform some inference (for example: geoguessing on travel images, or calculating the total cost of a grocery list given an image of the relevant supermarket display
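The single-turn, definitive-answer setup described above can be sketched as a tiny harness. All names here (VisionTask, run_suite, the file paths) are hypothetical illustrations, not the poster's actual suite:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VisionTask:
    image_path: str
    prompt: str
    check: Callable[[str], bool]  # definitive pass/fail on the model's reply

def run_suite(model_reply: Callable[[str, str], str],
              tasks: list[VisionTask]) -> float:
    """Single-turn eval: one image + one prompt per task, no tools."""
    passed = sum(1 for t in tasks if t.check(model_reply(t.image_path, t.prompt)))
    return passed / len(tasks)

# Example task in the text-extraction style described (placeholder image path).
tasks = [VisionTask("receipt.png", "What is the total?", lambda r: "42.50" in r)]
print(run_suite(lambda img, prompt: "Total: 42.50", tasks))  # stub model passes
```

Because each task has a definitive expected answer, scoring reduces to a deterministic predicate per reply instead of an LLM judge.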

