datasette-enrichments-llm 0.2a1
<p><strong>Release:</strong> <a href="https://github.com/datasette/datasette-enrichments-llm/releases/tag/0.2a1">datasette-enrichments-llm 0.2a1</a></p> <blockquote> <ul> <li>The <code>actor</code> who triggers an enrichment is now passed to the <code>llm.mode(... actor=actor)</code> method. <a href="https://github.com/datasette/datasette-enrichments-llm">#3</a></li> </ul> </blockquote> <p>Tags: <a href="https://simonwillison.net/tags/enrichments">enrichments</a>, <a href="https://simonwillison.net/tags/llm">llm</a>, <a href="https://simonwillison.net/tags/datasette">datasette</a></p>
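The change is small but useful for auditing: the enrichment can now record who ran it. Here is a hypothetical sketch of threading an `actor` through to a model call — none of these names (`FakeModel`, `run_enrichment`) are the plugin's actual API, they only illustrate the shape of the change described in the release note:

```python
# Hypothetical sketch of forwarding the triggering actor to a model call.
# These names are illustrative assumptions, not the real
# datasette-enrichments-llm API.


class FakeModel:
    """Stand-in for an LLM model object; records what it was called with."""

    def __init__(self):
        self.calls = []

    def prompt(self, text, actor=None):
        self.calls.append({"text": text, "actor": actor})
        return f"enriched: {text}"


def run_enrichment(model, rows, actor):
    """Enrich each row, forwarding the actor who triggered the job."""
    return [model.prompt(row["title"], actor=actor) for row in rows]


model = FakeModel()
results = run_enrichment(
    model,
    rows=[{"title": "hello"}],
    actor={"id": "simonw"},
)
assert model.calls[0]["actor"] == {"id": "simonw"}
```

The point of passing the actor down is that the model layer (or an audit log behind it) can attribute each enrichment run to the user who triggered it.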
This is a post by Simon Willison, posted on 1st April 2026.
Simon Willison Blog
https://simonwillison.net/2026/Apr/1/datasette-enrichments-llm-2/#atom-everything

FinancialClaw: making OpenClaw useful for personal finance
We often talk about AI agents as if their greatest value were in understanding natural language. But understanding isn't enough. An agent only starts to be genuinely useful when it can help with concrete tasks, reduce friction, and do so consistently. FinancialClaw was born from exactly that idea. I wanted OpenClaw not just to be able to talk about personal finance, but to help me manage it: recording expenses, logging income, handling recurring payments, and pulling up summaries without relying on memory, loose notes, or repetitive manual steps. From the start, the project took a clear direction: a personal tool, with local persistence, designed for daily use and with multi-currency support. The interesting part is that this usefulness didn't appear simply by adding new features
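The post doesn't show code, but its core idea (local persistence, multi-currency records, quick summaries) can be sketched in a few lines of Python with sqlite3. The table and column names here are my own assumptions, not FinancialClaw's actual schema:

```python
import sqlite3

# Minimal local ledger: one table, multi-currency, net balance per currency.
# Schema and names are illustrative guesses, not FinancialClaw's real ones.
conn = sqlite3.connect(":memory:")  # a real tool would use a file on disk
conn.execute(
    "CREATE TABLE entries (kind TEXT, amount REAL, currency TEXT, note TEXT)"
)


def record(kind, amount, currency, note=""):
    """Record one income or expense entry."""
    conn.execute(
        "INSERT INTO entries VALUES (?, ?, ?, ?)", (kind, amount, currency, note)
    )


def summary():
    """Net balance per currency: income minus expenses."""
    rows = conn.execute(
        """
        SELECT currency,
               SUM(CASE kind WHEN 'income' THEN amount ELSE -amount END)
        FROM entries GROUP BY currency
        """
    ).fetchall()
    return dict(rows)


record("income", 1000, "USD", "salary")
record("expense", 250, "USD", "rent")
record("expense", 40, "EUR", "groceries")
print(summary())
```

Keeping everything in a local SQLite file is what lets the agent answer summary questions consistently instead of relying on conversation memory.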
viable/strict/1775253422: Update third_party/kineto submodule to 628e1d0 (#179244)
Includes the following commits:
- 628e1d0 Add host_name to OSS Kineto trace metadata via gethostname() (pytorch/kineto#1323)
- 9d7373b Revert D97166802 (pytorch/kineto#1326)
- 3a61657 Fix Lingering INT32 Overflow (pytorch/kineto#1324)
- 50a0085 Re-enabled some hardcoded tests (pytorch/kineto#1321)
- e19dd92 Expose occupancy limiting factors (pytorch/kineto#1322)
Authored with Claude. Pull Request resolved: #179244. Approved by: https://github.com/malfet
v0.14.20
Release Notes [2026-04-03]
llama-index-agent-agentmesh [0.2.0]
- fix vulnerability with nltk (#21275)
llama-index-callbacks-agentops [0.5.0]
- chore(deps): bump the uv group across 50 directories with 2 updates (#21164)
- chore(deps): bump the uv group across 24 directories with 1 update (#21219)
- chore(deps): bump the uv group across 21 directories with 2 updates (#21221)
- fix vulnerability with nltk (#21275)
llama-index-callbacks-aim [0.4.1]
- fix vulnerability with nltk (#21275)
llama-index-callbacks-argilla [0.5.0]
- chore(deps): bump the uv group across 58 directories with 1 update (#21166)
- chore(deps): bump the uv group across 24 directories with 1 update (#21219)
- chore(deps): bump the uv group across 21 directories with 2 updates (#21221)
- fix vulnerability with nltk (#21275)
More in Open Source AI

With hf cli, how do I resume an interrupted model download?
I have a slow internet connection and the download of a large file was interrupted 30GB in! I download using the hf CLI command, like this: hf download unsloth/gemma-4-31B-it-GGUF gemma-4-31B-it-UD-Q8_K_XL.gguf. When I ran it again, it started over instead of resuming, to my horror. How do I avoid redownloading a partial model next time? I don't see a resume option in hf download --help.
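For background, resumable downloads generally work by checking how many bytes are already on disk and asking the server for the rest with an HTTP Range header; as I understand it, huggingface_hub uses this mechanism internally with `*.incomplete` files in its cache. A toy sketch of the idea — the `fetch` callback stands in for the HTTP request, and none of this is the hf CLI's actual code:

```python
import os
import tempfile


def resume_download(fetch, dest, total_size):
    """Download to `dest`, resuming from however many bytes already exist.

    `fetch(start)` stands in for an HTTP request carrying a
    `Range: bytes=start-` header and returns the remaining bytes.
    Toy sketch of the mechanism, not the hf CLI.
    """
    done = os.path.getsize(dest) if os.path.exists(dest) else 0
    if done >= total_size:
        return dest  # already complete, nothing to fetch
    with open(dest, "ab") as f:  # append mode keeps the partial bytes
        f.write(fetch(done))
    return dest


# Simulate a 100-byte file where 30 bytes were already downloaded.
data = bytes(range(100))
path = os.path.join(tempfile.mkdtemp(), "model.gguf")
with open(path, "wb") as f:
    f.write(data[:30])  # the interrupted partial download

resume_download(lambda start: data[start:], path, len(data))
with open(path, "rb") as f:
    assert f.read() == data  # resumed, not restarted
```

The key design point is that resumption needs the partial file to survive the interruption; if the tool deletes it on failure, there is nothing to append to.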

Gemma 4 is great at real-time Japanese-English translation for games
When Gemma 3 27B QAT IT was released last year, it was SOTA for local real-time Japanese-English visual novel translation for a while, so I wanted to see how Gemma 4 handles this use case.
Model: Unsloth's gemma-4-26B-A4B-it-UD-Q5_K_M. Context: 8192. Reasoning: OFF. Software: Luna Translator (front end), LM Studio (back end).
Workflow:
- Luna hooks the dialogue and speaker's name from the game.
- A Python script structures the hooked text (adds name, gender).
- Luna sends the structured text and a system prompt to LM Studio.
- Luna shows the translation.
What Gemma 4 does great: even with reasoning disabled, Gemma 4 follows the instructions in the system prompt very well. With structured text, Gemma 4 handles pronouns well. This is one of the biggest challenges, because Japanese spoken dialogue often omits the subject
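The "Python script structures the hooked text" step is the interesting glue: attaching speaker metadata is what lets the model resolve omitted Japanese subjects. A guess at what such structuring might look like — the field names and output format are my assumptions, not Luna Translator's actual behavior:

```python
def structure_line(speaker, gender, text):
    """Wrap a hooked game line with speaker metadata so the model can
    resolve omitted Japanese subjects and pick pronouns correctly.
    The format is an illustrative guess, not Luna Translator's real one."""
    if speaker:
        return f"[{speaker} ({gender})]: {text}"
    return f"[narration]: {text}"


# A hooked dialogue line with its speaker, and a narration line without one.
line = structure_line("Aoi", "female", "……行かなきゃ。")
assert line == "[Aoi (female)]: ……行かなきゃ。"
```

With the speaker and gender made explicit in every line, the system prompt can simply instruct the model to use them when choosing English pronouns.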

