Penemue raises €1.7M to scale AI hate speech detection
The German startup detects online hate, digital violence, and disinformation across 89 languages in real time, and works with public prosecutors and police alongside commercial clients. Investors were not disclosed.
Penemue, the Freiburg-based TrustTech startup developing AI to detect and counter online hate speech, digital violence, and disinformation, has raised more than €1.7 million in a new funding round.
The company was founded by Jonas Navid Mehrabanian Al-Nemri, Sara Egetemeyr, and Marlon Lückert. Egetemeyr, who serves as co-founder and managing director, framed the problem in terms that extend beyond individual victims: “It is not just the people affected who are victims, but everyone who reads along: the fans, the communities, the next generation.”
Penemue’s technology monitors social media comments and direct messages in real time across 89 languages, identifying content that constitutes hate speech, threats, or potentially criminal communication, including coded language, slang, dialects, and emojis.
The AI is continuously updated to recognise newly emerging terms and cultural nuances. Users receive immediate alerts and can hide or delete problematic content with a single click, or file a complaint directly through the platform for legal prosecution.
An impact evaluation conducted by the University of Mannheim has documented positive effects in combating digital violence.
The client base spans Bundesliga clubs in Germany’s first and second divisions, politicians operating at federal level, companies, media houses, and artists and influencers across Germany and Europe.
Penemue also works directly with public prosecutors, police authorities, and official reporting offices to enable more consistent prosecution of digital crimes. The dual track (commercial SaaS for organisations, and licensing to governments that distribute the tool to politicians and NGOs) reflects a deliberate choice to operate as a for-profit business rather than a grant-dependent one.
Egetemeyr has noted that raising the capital required to develop expensive AI technology at speed is easier as a private company, particularly when investors share the underlying mission.
The legal tailwind behind the market is concrete. Under the EU Digital Services Act, organisations operating digital communication channels are legally required to implement protective measures against harmful content, a mandate that creates compliance-driven demand for platforms like Penemue’s, independent of any social commitment.
The new capital will fund further AI development, new European and international partnerships, and deeper co-operation with public institutions.
Penemue is a member of Deutsche Telekom’s TechBoost programme and a partner of the #NoHateSpeech initiative, and has previously been recognised as AI Champion of Baden-Württemberg.