Urgent Computing refers to a class of time-critical scientific applications that leverage distributed data sources to facilitate important decision-making in a timely manner. The overall goal of Urgent Computing is to predict the outcome of scenarios early enough to prevent critical situations or to mitigate their negative consequences.
Most popular data analytics approaches, including models based on Artificial Intelligence (AI) and Machine Learning (ML), are cloud-based and require transporting data from often distant edge devices to a central location for processing. This limits the amount of data that we can process and our ability to analyze and transform this data into knowledge in a timely manner. The aggregation of heterogeneous resources along the data path from the Edge to the Cloud, also referred to as the Computing Continuum, can be harnessed to support urgent applications.
This workshop solicits scientific contributions that leverage the Computing Continuum to support urgent analytics applications. The canonical use cases for this work are data-driven dynamic workflows, which combine knowledge from multiple data sources and integrate it on demand with distributed, large-scale computational models.
Integrating these analytics with distributed and heterogeneous resources is hindered by a lack of abstractions and software stacks that can support data-driven reactive behaviors, i.e., mechanisms for determining what, where, and when data is collected and processed across the edge-to-Cloud/HPC Computing Continuum. Developing data-driven applications also requires programming abstractions and runtime systems that address platform heterogeneity as well as extreme uncertainty in data availability and quality. Such contributions have the potential to address today's global grand challenges in science, engineering, and society.
We are looking for original, high-quality research and position papers on urgent applications, services, and system software for the Computing Continuum. Topics of interest include, but are not limited to:
- Algorithms, models, and systems considerations in designing urgent applications for the Computing Continuum
- Programming support for user expectations and constraints in terms of response time, solution quality, data resolution, cost, energy, etc.
- Run-time techniques to provide flexible execution models for computation and communication
- Resource management frameworks and interfaces supporting scheduling, resource allocation, and application execution for the Computing Continuum
- Use of AI and ML techniques to steer urgent applications and systems
- Experiences and use cases applying urgent science on Computing Continuum infrastructures
- Autonomic Computing in the Computing Continuum
- Resource Management and Scheduling in the Computing Continuum
- Distributed Machine Learning in the Computing Continuum
- Edge Intelligence models and architectures
- Policy-driven service and resource life-cycle management