Workshops
LongevIoT 2025: 2nd International Workshop on Longevity in IoT Systems
The premature aging of IoT systems is one of the most challenging problems that must be addressed to enable their widespread use. Hardware and the software deployed on it, from small sensor nodes to edge devices and the cloud, need to be considered holistically so that deployment lifetimes can be measured in decades rather than years. With ever-changing communication protocols, company-specific software platforms, and quickly deprecated hardware components, IoT faces an uncertain future, and these technical challenges must be solved to make it sustainable. In this workshop, we focus on issues arising from the brittle nature of current IoT systems.
Organizers:
- Boris Sedlak, TU Wien, Austria
- Malte Josten, University of Duisburg-Essen, Germany
- Peter Zdankin, University of Duisburg-Essen, Germany
Workshop website: https://longeviot.github.io/2025
ScaleSys 2025: 1st International Workshop on Intelligent and Scalable Systems across the Computing Continuum
The emergence of the computing continuum, with ever-expanding cloud boundaries and its evolution into fog, edge, and IoT paradigms, has created unprecedented opportunities for realizing transformative and intelligent digital solutions. New application domains, such as real-time large language model (LLM) inference, federated learning, and multimodal media processing, demand scalable, low-latency, and adaptive processing across distributed, heterogeneous infrastructures. While the geo-distributed and federated nature of continuum computing improves agility, responsiveness, and service quality, it also introduces significant challenges in scalability, resilience, energy efficiency, and carbon awareness. Dynamic service deployment and rapidly evolving operational requirements demand a fundamental rethinking of infrastructure and software stacks across edge-cloud layers. Managing such large-scale, heterogeneous, and performance-critical environments through manual intervention is no longer feasible. Instead, new paradigms, including serverless computing, edge-native orchestration frameworks, and AI-driven decision logic, are essential for unlocking the potential of intelligent, resilient, and sustainable continuum systems.
Recent advances in modular runtimes such as WebAssembly and microVMs, carbon-aware scheduling leveraging real-time telemetry, and learning-based orchestration strategies promise to accelerate this transformation. In parallel, the rise of hybrid and distributed LLM inference pipelines pushes the frontier further, requiring novel runtime systems, efficient model partitioning and distillation techniques, and fine-grained orchestration across compute tiers to meet strict latency, accuracy, and energy objectives. Building on this momentum, ScaleSys 2025 aims to bring together researchers, developers, and practitioners from academia and industry to present their latest research and experiences at the intersection of systems and AI. The workshop aspires to provide a forum for exchanging ideas and advancing the state of the art in scalable, intelligent, and sustainable computing across the edge–cloud continuum. By fostering this dialogue, ScaleSys 2025 seeks to help define open standards, reusable benchmarks, and reproducible experimental frameworks that propel forward both “systems for AI” and “AI for systems” research in this space.
Organizers:
- Reza Farahani, University of Klagenfurt, Austria
- Nishant Saurabh, Utrecht University, the Netherlands
- Gabriele Russo Russo, University of Rome Tor Vergata, Italy
- Lorenzo Carnevale, University of Messina, Italy
Workshop website: https://scalesys2025.itec.aau.at/
SPADE – Scheduling & Parallelism in AI for Distributed Edges
Computationally intensive artificial intelligence and deep learning algorithms have demonstrated significant potential in enhancing a wide array of tasks across various domains, and their rapid adoption is quickly improving IoT applications. The rise of deep learning has already enabled IoT applications ranging from anomaly detection in sensor networks to real-time object recognition in surveillance systems. Nonetheless, deploying these models across distributed and heterogeneous edge environments remains complex: bottlenecks in task parallelization, system heterogeneity, and scheduling inefficiencies pose substantial obstacles. The massive influx of data and increasingly complex tasks render centralized, cloud-based systems insufficient to meet upcoming demands. Edge computing has emerged as a promising solution by enabling computation to occur closer to the data source, and several federated learning techniques have been proposed for training and inference on edge devices, showing potential for scalable deployment. However, the core challenge lies in implementing these models in such a way that they can fully leverage edge infrastructures. The effectiveness of edge computing largely depends on efficient task distribution, data partitioning, and scheduling, especially under constrained resource availability; these factors directly impact the performance, responsiveness, and feasibility of lightweight edge deployments. This workshop seeks to create a focused platform for addressing these critical challenges, particularly from a systems and scheduling perspective. By emphasizing limitations in parallelism, dynamic workload assignment, and orchestration strategies, SPADE aims to bridge the gap between theoretical model design and real-world scalable AI deployments at the edge. It aligns closely with IoT 2025’s core themes of Edge AI, scalable architectures, and applied IoT intelligence, offering a timely and necessary contribution to the field.
Organizers:
- Dinesh Kumar Sah, University of Oulu, Finland
- Praveen Kumar Donta, Stockholm University, Sweden
- Lauri Lovén, University of Oulu, Finland
- Priyanka Verma, University of Galway, Ireland
Workshop website: TBA
Tutorials
Orchestrating Hierarchical Federated Learning Pipelines with the AIoTwin Middleware
This tutorial introduces participants to the practical deployment of Hierarchical Federated Learning (HFL) across the computing continuum. It explains the concept of Federated Learning and its extension to a hierarchical setup, as well as the orchestration aspects of setting up and running HFL pipelines in distributed environments of the computing continuum. The tutorial includes a hands-on session that is divided into two parts. In the first part, participants will use the Flower framework to set up and execute standard Federated Learning (FL) tasks, providing hands-on insight into FL without orchestration. The second part focuses on the use of the AIoTwin orchestration middleware, demonstrating how it enables scalable, distributed HFL pipelines across heterogeneous edge, fog, and cloud environments. Participants will be guided through deploying their own custom FL tasks on a real-world distributed cluster, while exploring the middleware’s capabilities for orchestration, resource management, and hierarchical coordination. This session is ideal for researchers and practitioners interested in edge-to-cloud AI pipelines and FL at scale.
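As a rough illustration of the first hands-on part, the sketch below shows the typical shape of a Flower (flwr) client. It is a minimal example under stated assumptions, not the tutorial material itself: the toy NumPy "model", the example counts, and the server address 127.0.0.1:8080 are placeholders, and a Flower server must already be running separately.

# Minimal Flower client sketch (toy stand-in; not the tutorial's actual code).
# Assumes the flwr NumPyClient API; a Flower server must be started separately,
# e.g. fl.server.start_server(server_address="0.0.0.0:8080",
#                             config=fl.server.ServerConfig(num_rounds=3)).
import flwr as fl
import numpy as np

class ToyClient(fl.client.NumPyClient):
    """Holds a single weight vector in place of a real model."""

    def __init__(self):
        self.weights = [np.zeros(10, dtype=np.float32)]

    def get_parameters(self, config):
        # Send the current local parameters to the server.
        return self.weights

    def fit(self, parameters, config):
        # Stand-in for local training: slightly perturb the received global weights.
        self.weights = [w + 0.01 for w in parameters]
        return self.weights, 100, {}  # updated parameters, example count, metrics

    def evaluate(self, parameters, config):
        # Stand-in for local evaluation: report the mean absolute weight as "loss".
        loss = float(np.mean([np.abs(w).mean() for w in parameters]))
        return loss, 100, {"loss": loss}

if __name__ == "__main__":
    # The server address is a placeholder for this sketch.
    fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                                 client=ToyClient())

In the tutorial's second part, the AIoTwin middleware takes over the deployment and coordination of such FL clients across edge, fog, and cloud nodes, so participants do not have to wire up servers and clients by hand.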
Organizers:
- Ivan Čilić, University of Zagreb, Croatia
- Ana Petra Jukić, University of Zagreb, Croatia
- Katarina Vuknić, University of Zagreb, Croatia
- Ivana Podnar Žarko, University of Zagreb, Croatia
Tutorial website: TBA