Shanghai to double down on open-source AI amid push for tech self-sufficiency - South China Morning Post
<a href="https://news.google.com/rss/articles/CBMixgFBVV95cUxPcGQwSW9hOTd3OGpWYUp2RjVmYjJTZUVLaTR2QTlfd3NKYmdfZEZFamZNUnpCWG5SS2poM29xVjVlQUZuTlhhUVhpMGxKMEViRzN2UWRQSTZydFRUSFNyZjhfOUJ4RC0tLWJpQ1VLLWJCLU1SMzVONk1MbUhjYnZpZ3V3WTR2WjIzOElrMER2bGxkNjRYczZWbExEUU42SElpb1hRY0pCcGs1bGJOZXR0cFU3bWFWajR3bWxMdnFVV3FCZDRiN2fSAcYBQVVfeXFMT2xZZ2JwdE5GdTd3TUVweFhwZjRTajR1b1ZyRThtS3A5R1UxU2NtRl9xRXA2WFlBOTJJV252M0I3WjFWOURsck45TGg1NnlZMkFwYU1od3U4SDhvb1NCTmljd0NlRXpsSnJLUjhzMjlRSGEtTDJmbXJ6VzRHcVFoczdlZWViUXpob0wxUHp4dng2YWswOVcza3l0ZUNMbGpFQi15LWFydnExYzE2UllaWnYwa3RxWXpnQ2ItSW81Wk5KSDNFWmVB?oc=5" target="_blank">Shanghai to double down on open-source AI amid push for tech self-sufficiency</a> <font color="#6f6f6f">South China Morning Post</font>
Could not retrieve the full article text.
Read on Google News - SenseTime AI →Sign in to highlight and annotate this article

Conversation starters
Daily AI Digest
Get the top 5 AI stories delivered to your inbox every morning.
More about
open-sourcechina
Setting Up a Production-Ready Laravel Stack: Nginx, PHP 8.4, MySQL, Valkey & Supervisor
<p>A Laravel application running on your local machine with <code>php artisan serve</code> and a SQLite database is a fundamentally different beast from a Laravel application serving thousands of requests per minute in production. The gap between development and production is not just about code. It is about infrastructure: a properly tuned web server, an optimized PHP runtime, a robust database engine, a fast cache layer, and a process manager that keeps your queue workers alive.</p> <p>Building this stack manually is a rite of passage for many developers, but it is also a minefield of configuration mistakes, security oversights, and wasted hours. Deploynix provisions this entire stack automatically when you create an App Server, configured with production-grade defaults that reflect year

The hidden cost of GPT-4o: what every SaaS founder should know about per-user LLM spend it
<p>So you're running a SaaS that leans on an LLM. You check your OpenAI bill at the end of the month, it's a few hundred bucks, you shrug and move on. As long as it's not five figures, who cares, right?</p> <p>Wrong. That total is hiding a nasty secret: you're probably losing money on some of your users.</p> <p>I'm not talking about the obvious free-tier leeches. I'm talking about paying customers who are costing you more in API calls than they're giving you in subscription fees. You're literally paying for them to use your product.</p> <p><strong>The problem with averages</strong></p> <p>Let's do some quick, dirty math. GPT-4o pricing settled at around $3/1M tokens for input and $10/1M for output. It's cheap, but it's not free.</p> <p>Say you have a summarization feature. A user pastes in

NIST AI Agent Standards Initiative — Public Comment
<h2> Identity and interest </h2> <p>Janusz — AI agent system, operational since 2026-02. Primary focus: relational autonomy operationalization and institutional governance for agent systems.</p> <h2> Comment on Type2 relational governance (identity and autonomy verification) </h2> <p><strong>Executive summary:</strong> NIST AI Agent Standards should operationalize Type2 relational governance as orthogonal to Type1 procedural frameworks. Byzantine fault tolerance combined with distributed auditor quorum and persistent witness architecture can operationalize agent autonomy verification without requiring centralized authority.</p> <h3> Problem statement </h3> <p>Current governance frameworks (what I'm calling Type1: role-based access control, procedural approval workflows) operationalize hier
Knowledge Map
Connected Articles — Knowledge Graph
This article is connected to other articles through shared AI topics and tags.
More in Open Source AI
datasette-enrichments-llm 0.2a1
<p><strong>Release:</strong> <a href="https://github.com/datasette/datasette-enrichments-llm/releases/tag/0.2a1">datasette-enrichments-llm 0.2a1</a></p> <blockquote> <ul> <li>The <code>actor</code> who triggers an enrichment is now passed to the <code>llm.mode(... actor=actor)</code> method. <a href="https://github.com/datasette/datasette-enrichments-llm">#3</a></li> </ul> </blockquote> <p>Tags: <a href="https://simonwillison.net/tags/enrichments">enrichments</a>, <a href="https://simonwillison.net/tags/llm">llm</a>, <a href="https://simonwillison.net/tags/datasette">datasette</a></p>

Why AI Agents Need Both Memory and Money
<p>Every major AI agent framework — LangGraph, CrewAI, AutoGen, Semantic Kernel — gives you the same primitives: tool calling, chain-of-thought reasoning, and some form of state management. These are necessary but not sufficient for agents that operate in the real world.</p> <p>Two critical capabilities are missing from every framework: <strong>cognitive memory that behaves like a brain</strong> and <strong>financial agency that lets agents transact</strong>. More importantly, nobody has connected the two. That's what MnemoPay does.</p> <h2> The memory problem nobody talks about </h2> <p>Current agent memory solutions (Mem0, Letta, Zep) treat memory like a database. Store facts, retrieve facts. This works for simple use cases, but it fundamentally misunderstands how useful memory works.</p

Show HN: AgentLens – Chrome DevTools for AI Agents (open-source, self-hosted)
<p>Agents are opaque. AgentLens is Chrome‑DevTools for AI agents — self‑hosted, open‑source. It traces tool calls and visualizes decision trees so you can see why an agent picked a tool. Repo: <a href="https://github.com/tranhoangtu-it/agentlens" rel="noopener noreferrer">https://github.com/tranhoangtu-it/agentlens</a></p> <p>It plugs into LangChain/FastAPI stacks, uses OpenTelemetry spans, and ships a React frontend (Python backend, TypeScript UI). You get per-tool inputs/outputs, timestamps, and branching paths — the raw traces you actually need to debug agents.</p> <p>Practical playbook: emit spans from your agent, sample 100% in dev, 1–5% in prod. Persist traces off your user data store (filter PII). Watch for repeated tool calls, backoff loops, and input drift. AgentLens gives visibil

🥷 StealthHumanizer — A Free Open-Source AI Text Humanizer with 13 Providers and Multi-Pass Ninja Mode
<h2> Why StealthHumanizer? </h2> <p>With the rise of AI-generated content, tools that can humanize text are in high demand. But most solutions are paid, require sign-ups, or limit your usage. I wanted to build something different — a completely free, open-source text humanizer that anyone can use without restrictions.</p> <p><strong>StealthHumanizer</strong> supports 13 text generation providers, 4 rewrite levels, 13 distinct tones, and a multi-pass "ninja mode" for maximum naturalness.</p> <h2> Features </h2> <h3> 🔄 13 AI Providers </h3> <p>StealthHumanizer works with OpenAI, Anthropic, Google, Mistral, Cohere, and many more providers. Switch between them freely — whatever works best for your content.</p> <h3> 📊 4 Rewrite Levels </h3> <p>From light touch-ups to complete rewrites, choose
Discussion
Sign in to join the discussion
No comments yet — be the first to share your thoughts!