Vector Researchers present papers at ACL 2024
Vector researchers will be well represented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) in Bangkok, Thailand this year. Fourteen papers co-authored by Vector-affiliated researchers will be presented at the Main Conference and as Findings papers.
Below is a list of papers accepted at ACL 2024 with Vector-affiliated co-authors.
Accepted Main Conference Papers:
Small But Funny: A Feedback-Driven Approach to Humor Distillation
Sahithya Ravi, Patrick Huber, Akshat Shrivastava, Vered Shwartz, Arash Einolghozati
VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation
Max Ku, Dongfu Jiang, Cong Wei, Xiang Yue, Wenhu Chen
Structured Tree Alignment for Evaluation of (Speech) Constituency Parsing
Freda Shi, Kevin Gimpel, Karen Livescu
LogogramNLP: Comparing Visual and Textual Representations of Ancient Logographic Writing Systems for NLP
Danlu Chen, Freda Shi, Aditi Agarwal, Jacobo Myerston, Taylor Berg-Kirkpatrick
DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM Workflows
Ajay Patel, Colin Raffel, Chris Callison-Burch
SpaRC and SpaRP: Spatial Reasoning Characterization and Path Generation for Understanding Spatial Reasoning Capability of Large Language Models
Md Imbesat Hassan Rizvi, Xiaodan Zhu, Iryna Gurevych
Accepted Findings Papers:
DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs
Haishuo Fang, Xiaodan Zhu, Iryna Gurevych
ConTempo: A Unified Temporally Contrastive Framework for Temporal Relation Extraction
Jingcheng Niu, Saifei Liao, Victoria Ng, Simon De Montigny, Gerald Penn
E2-LLM: Efficient and Extreme Length Extension of Large Language Models
Jiaheng Liu, Zhiqi Bai, Yuanxing Zhang, Zhang Chenchen, YuangZh, Ge Zhang, Jiakai Wang, Haoran Que, Yukang Chen, Wenbo Su, Tiezheng Ge, Jie Fu, Wenhu Chen, Bo Zheng
ChatMusician: Understanding and Generating Music Intrinsically with LLM
Ruibin Yuan, Hanfeng Lin, Yi Wang, Zeyue Tian, Shangda Wu, Tianhao Shen, Ge Zhang, Yuhang Wu, Cong Liu, Ziya Zhou, Liumeng Xue, Ziyang Ma, Qin Liu, Tianyu Zheng, Yizhi LI, Yinghao Ma, Yiming Liang, Xiaowei Chi, Ruibo Liu, Zili Wang, Chenghua Lin, Qifeng Liu, Tao Jiang, Wenhao Huang, Wenhu Chen, Jie Fu, Emmanouil Benetos, Gus Xia, Roger Dannenberg, Wei Xue, Shiyin Kang, Yike Guo
Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models
Alfonso Amayuelas, Kyle Wong, Liangming Pan, Wenhu Chen, William Yang Wang
SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval
Siwei Wu, Yizhi LI, Kang Zhu, Ge Zhang, Yiming Liang, Kaijing Ma, Chenghao Xiao, Haoran Zhang, Bohao Yang, Wenhu Chen, Wenhao Huang, Noura Al Moubayed, Jie Fu, Chenghua Lin
OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement
Tianyu Zheng, Ge Zhang, Tianhao Shen, Xueling Liu, Bill Yuchen Lin, Jie Fu, Wenhu Chen, Xiang Yue
A Graph per Persona: Reasoning about Subjective Natural Language Descriptions
EunJeong Hwang, Vered Shwartz, Dan Gutfreund, Veronika Thost