OpenAI raises record US$122 billion, paving way for superapp pivot - digitimes
<a href="https://news.google.com/rss/articles/CBMioAFBVV95cUxQTmlRb0Q1Wm1xdzFfT3NNUlVZWWZaZE9LaUdtSXFoWWplUHJLbUhXcDM3VDVWNkF2NmtCR3pFcXRER05QSG8tVy1ON1BpNUdMUWxTVi1sRGM2TkpRZ0VSbXNEek5fMEU1MEZIR1RnQ2h0QndCeTVLRndpMm5iaFRIelprUHNTT0RCZmVaNUJuNTJieDNQNUpza3JGaFNnaXVj?oc=5" target="_blank">OpenAI raises record US$122 billion, paving way for superapp pivot</a> <font color="#6f6f6f">digitimes</font>
Could not retrieve the full article text.
Read on Google News: OpenAI →Sign in to highlight and annotate this article

Nvidia Invests US$2 Billion in Marvell to Scale AI Infrastructure - Mexico Business News

Generare raises €20M to decode the 97% of microbial chemistry
The Paris techbio company screens microbial genomes to find molecules that evolution spent three billion years producing, and claims to have characterised more novel small molecules in 2025 than the rest of the field combined. Alven and Daphni co-led the Series A. This story continues at The Next Web.
OpenAI raises $122 billion at $852 billion valuation, closing largest funding round in history - The Cool Down
<a href="https://news.google.com/rss/articles/CBMijwFBVV95cUxOaDRKOEktVHZLVk10eE8zSlVmTkJXc0hXamcxYWlnQldZOFNtc3liakJjOHJBMENDTmFWS3dtNnpBeG9GZWhfVFZMamhMVHo0UmFsZHVqdUdSUmNmSmdJb2FhZ21sT1NpWi1MbmRRUXVaZWhhbU9Pbm50Q0tzYUxwWmdYNm1JMl80UHpSR1Mwcw?oc=5" target="_blank">OpenAI raises $122 billion at $852 billion valuation, closing largest funding round in history</a> <font color="#6f6f6f">The Cool Down</font>
This Defense Company Made AI Agents That Blow Things Up - WIRED
<a href="https://news.google.com/rss/articles/CBMiiAFBVV95cUxNY2V3RGJkNDduUmVPV3JpYnNkRHgxNGt2dEoyNEdZUTAyOVJMbFl5REZsT08zb1Z4TVlMNWc3OFJDVFNqcWRxa0FGSEFFVTlQajg4dVRIVWRKSzhkZV9yXzN4Z3lWbXltbFk4UDIxcDZQSnJ4alhvZTlINnl6YmRoaTFnT2ZNMUtT?oc=5" target="_blank">This Defense Company Made AI Agents That Blow Things Up</a> <font color="#6f6f6f">WIRED</font>
[D] Why I abandoned YOLO for safety critical plant/fungi identification. Closed-set classification is a silent failure mode
I've been building an open-source handheld device for field identification of edible and toxic wild plants and fungi, running entirely on device. Early on I trained specialist YOLO models on iNaturalist research-grade data and hit 94-96% accuracy across my target species. Felt great, until I discovered a problem I don't see discussed enough on this sub.

YOLO's closed-set architecture has no concept of "I don't know." Feed it an out-of-distribution image and it will confidently classify it as one of its classes at near 100% confidence. In most CV applications this is an annoyance. In foraging, it's potentially lethal.

I tried confidence-threshold fine-tuning at first; it doesn't work. The confidence scores on OOD inputs are indistinguishable from those on in-distribution inputs.
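The post doesn't include code, but a minimal PyTorch sketch of one common fix direction might look like the following: score inputs with an energy score (negative logsumexp of the logits), which tends to separate out-of-distribution inputs better than max-softmax confidence, and add an explicit reject option. The `model` callable, the threshold value, and `predict_with_reject` are illustrative assumptions here, not the poster's actual pipeline.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_with_reject(model, x, energy_threshold=-5.0):
    """Classify x, but return None ('I don't know') when the energy
    score suggests the input is out of distribution.

    Energy score: E(x) = -logsumexp(logits). In-distribution inputs
    tend to have lower (more negative) energy than OOD inputs, which
    makes it a more usable reject signal than max-softmax confidence.
    The threshold would need to be calibrated on held-out data.
    """
    logits = model(x)                             # (batch, num_classes)
    energy = -torch.logsumexp(logits, dim=-1)     # (batch,)
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)

    # Note: max-softmax confidence `conf` is often near 1.0 even for
    # OOD inputs, which is the silent failure the post describes;
    # thresholding on `conf` alone is not a reliable reject rule.
    accept = energy < energy_threshold
    return [
        (int(p), float(c)) if ok else None
        for p, c, ok in zip(pred, conf, accept)
    ]
```

This is only a sketch of the idea; a safety-critical deployment would pair a calibrated OOD score like this with held-out OOD validation data rather than a hand-picked threshold.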
[P] I replaced Dot-Product Attention with distance-based RBF-Attention (so you don't have to...)
I recently asked myself what would happen if we replaced the standard dot product in self-attention with a different distance metric, e.g. an RBF kernel.

Standard dot-product attention has this quirk where a key vector can "bully" the softmax simply by having a massive magnitude. A random key that points in roughly the right direction but is huge will easily outscore a perfectly aligned but shorter key. Distance-based (RBF) attention could fix this: to get a high attention score, Q and K *actually* have to be close to each other in high-dimensional space. You can't cheat by just being large.

I thought this would be a quick 10-minute PyTorch experiment, but it was a reminder of how deeply the dot product is hardcoded into the entire ML stack.
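The post's own code isn't shown, but a minimal PyTorch sketch of the idea might look like this: replace the scaled dot-product scores with negative squared Euclidean distances, so the softmax over keys becomes a normalized RBF kernel. The `rbf_attention` name and the fixed `sigma` bandwidth are illustrative assumptions, not the author's implementation.

```python
import torch
import torch.nn.functional as F

def rbf_attention(q, k, v, sigma=1.0):
    """Attention with RBF-kernel scores instead of scaled dot products.

    Scores are -||q - k||^2 / (2 * sigma^2); taking a softmax of that
    over keys is exactly a normalized RBF kernel. A key can no longer
    win just by having a large norm: the -||k||^2 term penalizes
    magnitude, so Q and K must actually be close in embedding space.
    q, k, v: (batch, heads, seq, dim). `sigma` is a free bandwidth
    hyperparameter (it could also be learned per head).
    """
    # Expand ||q - k||^2 = ||q||^2 - 2 q.k + ||k||^2 so we never
    # materialize the full (s_q, s_k, dim) difference tensor.
    q_sq = (q * q).sum(-1, keepdim=True)            # (b, h, s_q, 1)
    k_sq = (k * k).sum(-1).unsqueeze(-2)            # (b, h, 1, s_k)
    dots = torch.matmul(q, k.transpose(-2, -1))     # (b, h, s_q, s_k)
    sq_dist = q_sq - 2.0 * dots + k_sq

    scores = -sq_dist / (2.0 * sigma ** 2)
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v)

# Tiny smoke test with random tensors.
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)
print(rbf_attention(q, k, v).shape)  # torch.Size([2, 4, 8, 16])
```

Note the expansion trick reuses the same QK^T matmul as standard attention, so the extra cost over dot-product attention is just the two squared-norm terms.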
[D] Self-Promotion Thread
Please post your personal projects, startups, product placements, collaboration needs, blogs, etc.

Please mention the payment and pricing requirements for products and services.

Please do not post link shorteners, link-aggregator websites, or auto-subscribe links.

--

Any abuse of trust will lead to bans.

Encourage others who create new posts for questions to post here instead!

The thread will stay alive until the next one, so keep posting after the date in the title.

--

Meta: This is an experiment. If the community doesn't like this, we will cancel it. This is to encourage those in the community to promote their work without spamming the main threads.