Multi-Mode Pinching-Antenna Systems: Polarization-Aware Full-Wave Modeling and Optimization
arXiv:2604.01778v1 Announce Type: cross
Abstract: Millimeter-wave and terahertz communications face a fundamental challenge: overcoming severe path loss without sacrificing spectral efficiency. Pinching antenna systems (PASS) address this by bringing radiators physically close to users, yet existing frameworks treat the waveguide as a mere transmission line, overlooking its inherent multi-mode capabilities and the critical role of polarization. This paper develops the first polarization-aware, full-wave electromagnetic model for multi-mode PASS (MMPASS), capturing spatial radiation patterns, modal polarization states, and polarization matching efficiency from first principles. Leveraging this physically grounded model, we reveal fundamental trade-offs among waveguide attenuation, atmospheric absorption, and geometric spreading, yielding closed-form solutions for optimal PA placement and orientation in single-user scenarios. Extending to multi-user settings, we propose a modular optimization framework that integrates fractional programming with closed-form polarization updates, scaling gracefully to arbitrary numbers of waveguides, PAs, and users. Numerical results show that MMPASS achieves up to a 167% increase in spectral efficiency compared with single-mode PASS. Moreover, when comparing MMPASS with its polarization-ignorant counterpart, polarization awareness alone improves the sum rate by up to 23%. By bridging rigorous electromagnetic theory with scalable optimization, MMPASS establishes a physically complete and practically viable foundation for future high-frequency wireless networks.
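The "polarization matching efficiency" the abstract refers to can be illustrated with the classical polarization loss factor from antenna theory: the squared magnitude of the inner product of the transmit and receive unit polarization vectors. Note this is only a textbook sketch, not the paper's full-wave, mode-dependent MMPASS formulation; the function name and vector conventions below are illustrative assumptions.

```python
import numpy as np

def polarization_matching_efficiency(e_tx, e_rx):
    """Classical polarization loss factor: |<e_tx, e_rx>|^2 for unit vectors.

    Illustrative sketch only; the paper derives a mode- and
    geometry-dependent efficiency from its full-wave model.
    """
    e_tx = np.asarray(e_tx, dtype=complex)
    e_rx = np.asarray(e_rx, dtype=complex)
    e_tx = e_tx / np.linalg.norm(e_tx)
    e_rx = e_rx / np.linalg.norm(e_rx)
    # np.vdot conjugates its first argument, as required for complex
    # polarization states (e.g., circular polarization).
    return abs(np.vdot(e_tx, e_rx)) ** 2

# Co-polarized linear: no loss.
print(polarization_matching_efficiency([1, 0], [1, 0]))   # → 1.0
# Cross-polarized linear: total mismatch.
print(polarization_matching_efficiency([1, 0], [0, 1]))   # → 0.0
# Linear vs. circular: the classic 3 dB polarization loss.
print(polarization_matching_efficiency([1, 0], [1, 1j]))  # → 0.5
```

A polarization-ignorant model implicitly sets this factor to 1 for every link, which is why accounting for it can recover the sum-rate gains the abstract reports.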
Comments: Keywords: Pinching antenna systems, multi-mode pinching antennas, 6G, polarization, electromagnetic modeling
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Cite as: arXiv:2604.01778 [cs.IT]
(or arXiv:2604.01778v1 [cs.IT] for this version)
https://doi.org/10.48550/arXiv.2604.01778
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Yulin Shao [v1] Thu, 2 Apr 2026 08:43:02 UTC (903 KB)