Changes to an optimised thing make it worse
TLDR: When you make changes to a thing that has been optimised in some way, any unplanned effects of your changes will make it worse.
Welcome to the alien planet Sqornshellous Beta. It's dry, it's arid, and it apparently has watchmakers, so you buy a watch. After a little while, you realise it's running a couple of milli-days slow compared to the carefully calculated local time of your ship's clock. "I should fix that," you think. You open up the back and have a look at the gears.
You know basic mechanics, so you carefully calculate the change in size required for the gear driving the second hand to keep proper time. You buy a replacement from the local gear shop. You fix the watch and go on your way. It seems to be running at the same rate as the ship's clock now, so, proud of your work, you go to sleep.
The next day, you notice that your watch is now behind but also running faster than the ship's clock; in the evening it is ahead, but running slower. "I should fix that," you think. You take the back off again. You spend all night, and some of the following morning, carefully measuring the turning of the different wheels and the motion of different springs. Eventually, you figure out that one particular spring is oscillating at the same frequency as the issue. You also notice that one of the disks deep in the watch isn't quite circular.
Clearly the watchmakers here aren't quite as good as their reputation. Rumours have time to evolve over galactic distances after all. You remove the spring and sandpaper the disk down into a nice circle. Should be fixed now.
As you need a mattress for your new Sqornshellian home, you take a couple of betan deci-years to pop over to Sqornshellous Zeta.
On your arrival back, it becomes apparent really quite quickly that your timekeeping hasn't fared great during the voyage. This is too much even for you to handle, so you take it to the local watch repair store.
Later that evening, you've learnt that your improved gear means the watch now follows the sidereal year instead of the solar year; the gear is also made of a different alloy, which expands and contracts with the heat of the day differently than the original did. The spring you removed was the mechanism intended to compensate for that expansion and contraction, and the nicely circular disk means the watch now completely ignores the elliptical orbit of the planet. You have a look in the back of your replacement watch.
"Hmm, that cog seems to be spinning really fast, I should probably fix that..."
When we talk about optimisation, it is common to talk about hill climbing. It's great to be at the top of the mountain, but once you're there, literally any step will take you downhill.
If you're half a step away from the top of the mountain, most steps will still take you downhill.
More broadly, so long as the mountain curves downwards around the peak, any given step away from the top will cost you more height than the same step towards it would gain.
If you're doing this in a space with a lot of dimensions, this "maybe big loss, maybe small win" becomes "definitely medium loss": almost every random direction is nearly perpendicular to the direction of the peak, so almost every step costs you roughly the same amount of height.
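A toy sketch of this in code, using my own illustrative setup rather than anything from the argument above: the "mountain" is a simple quadratic hill, we start exactly one step-length from the peak (so a perfectly aimed step could reach the top), and we take fixed-length steps in uniformly random directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A simple smooth "mountain": height 0 at the peak (the origin).
    return -np.sum(np.asarray(x) ** 2, axis=-1)

def random_steps(dim, n_steps=10_000, step=1.0):
    """Start one step-length from the peak, take `n_steps` fixed-length
    steps in uniformly random directions, and return each step's change
    in height relative to staying put."""
    x0 = np.zeros(dim)
    x0[0] = step  # one step-length from the top
    u = rng.normal(size=(n_steps, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # uniform random directions
    return f(x0 + step * u) - f(x0)

for dim in (2, 1000):
    d = random_steps(dim)
    print(f"dim={dim}: mean height change {d.mean():+.2f}, "
          f"fraction of steps uphill {(d > 0).mean():.2f}")
```

In two dimensions roughly a third of random steps go uphill and the rest can lose up to three times as much; in a thousand dimensions essentially no step goes uphill, and every step loses close to the same middling amount of height.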
I think that there are a wide range of things this principle applies to which are of interest to this community – governance systems, human biology, and anything covered under "world optimisation". I expect people to debate me on which of these are or are not covered, and I look forward to your challenges.
Be careful when fiddling with things which have been carefully optimised. You might break them.
LessWrong
https://www.lesswrong.com/posts/YfeZzj5CuEeTwyyNS/changes-to-an-optimised-thing-make-it-worse