Hierarchical Intelligence for Multi-Agent Autonomy on the Edge: From Semantic Coordination to In-Context Adaptation

Speaker
Hao-Lun Hsu
Autonomous multi-agent systems operating in resource-constrained environments, where both communication and computation are limited, face a fundamental tension between global, long-horizon planning and local, real-time control. This talk presents a hierarchical framework designed to bridge this gap, progressing from strategic semantic coordination down to reflexive adaptation.
We first introduce a decentralized-execution, centralized-coordination architecture that leverages Large Language Models (LLMs) for semantic planning and spatial reasoning, enabling swarms to interpret complex mission objectives while respecting communication constraints. To support scalable coordination, we then propose a communication-efficient protocol that combines strategic synchronization with randomized exploration, supporting collaborative learning and effective knowledge sharing across agents with minimal overhead.
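To make the coordination idea concrete, here is a minimal toy sketch (not code from the talk; all names and parameters are illustrative). Agents learn arm values of a shared multi-armed bandit through local randomized exploration, and merge their sufficient statistics only every `sync_every` steps, so message count grows with the number of synchronization rounds rather than with every decision:

```python
import random

def run_swarm(n_agents=4, n_arms=5, horizon=200, sync_every=20, eps=0.2, seed=0):
    """Toy protocol: epsilon-greedy local exploration plus periodic
    (strategic) synchronization of pooled statistics across agents."""
    rng = random.Random(seed)
    true_means = [0.1 * k for k in range(n_arms)]       # arm n_arms-1 is best
    shared_c = [0] * n_arms                             # counts shared at last sync
    shared_s = [0.0] * n_arms                           # reward sums shared at last sync
    local_c = [[0] * n_arms for _ in range(n_agents)]   # per-agent deltas since sync
    local_s = [[0.0] * n_arms for _ in range(n_agents)]
    messages = 0
    for t in range(1, horizon + 1):
        for a in range(n_agents):
            c = [shared_c[k] + local_c[a][k] for k in range(n_arms)]
            s = [shared_s[k] + local_s[a][k] for k in range(n_arms)]
            if rng.random() < eps or min(c) == 0:
                arm = rng.randrange(n_arms)             # randomized exploration
            else:
                arm = max(range(n_arms), key=lambda k: s[k] / c[k])
            reward = true_means[arm] + rng.gauss(0, 0.05)
            local_c[a][arm] += 1
            local_s[a][arm] += reward
        if t % sync_every == 0:                         # strategic synchronization
            for a in range(n_agents):
                for k in range(n_arms):
                    shared_c[k] += local_c[a][k]
                    shared_s[k] += local_s[a][k]
                local_c[a] = [0] * n_arms
                local_s[a] = [0.0] * n_arms
            messages += n_agents                        # one message per agent per sync
    learned = max(range(n_arms), key=lambda k: shared_s[k] / max(shared_c[k], 1))
    return messages, learned
```

With these toy settings, each agent sends 10 messages over 200 steps instead of 200, while still sharing everything its peers learned.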
Finally, to improve robustness in unpredictable environments, we propose a pretraining framework that augments the training data with adversarial environment variations and leverages In-Context Reinforcement Learning (ICRL). By increasing task diversity during pretraining and enabling rapid adaptation at deployment, this approach improves generalization while avoiding both the slow parameter updates of test-time fine-tuning and the pessimism associated with worst-case optimization in robust reinforcement learning.
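The deployment-time half of this idea can be sketched in a few lines (again a hypothetical toy, not the talk's method): the policy's parameters are frozen, and all adaptation to a new environment happens through the context of past action-reward pairs it conditions on.

```python
import random

def in_context_policy(context, n_actions, rng, eps=0.1):
    """Frozen policy: it adapts only through its context (the history of
    (action, reward) pairs), with no parameter updates at deployment."""
    if rng.random() < eps or not context:
        return rng.randrange(n_actions)
    sums = [0.0] * n_actions
    counts = [0] * n_actions
    for action, reward in context:
        sums[action] += reward
        counts[action] += 1
    # Greedy with respect to the empirical means recoverable from context.
    return max(range(n_actions),
               key=lambda a: sums[a] / counts[a] if counts[a] else 0.0)

def deploy(env_means, horizon=300, seed=1):
    """Drop the frozen policy into an unseen environment; only the
    context buffer changes as experience accumulates."""
    rng = random.Random(seed)
    context = []
    total = 0.0
    for _ in range(horizon):
        a = in_context_policy(context, len(env_means), rng)
        r = env_means[a] + rng.gauss(0, 0.05)
        context.append((a, r))
        total += r
    return total / horizon
```

In the full framework the frozen policy would be a sequence model pretrained across adversarially varied environments; the point of the sketch is only that adaptation at deployment requires growing the context, not updating weights.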
Categories
Artificial Intelligence, Lecture/Talk, Panel/Seminar/Colloquium, Technology