About

The cloud computing industry has grown massively over the last decade, and with it new areas of application have arisen. Some of these require specialized hardware placed in locations close to the user. User requirements such as ultra-low latency, security, and location awareness are becoming increasingly common, for example in Smart Cities, industrial automation, and data analytics. Modern cloud applications have also become more complex: they usually run on a distributed computer system, split into components that must run with high availability.

Unifying such diverse systems into centrally controlled compute clusters and making sophisticated scheduling decisions across them are two major challenges in this field. Scheduling for a cluster consisting of cloud and edge nodes must account for unique characteristics such as variability in node and network capacity. The common solution for orchestrating large clusters is Kubernetes; however, it is designed for reliable, homogeneous clusters. Many applications and extensions are available for Kubernetes, but none of them optimizes for both performance and energy, or addresses data and job locality.
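As a purely illustrative sketch (not DECICE's actual scheduler, and not a Kubernetes API), the short Python snippet below shows one way such a combined decision could look: each candidate node is scored on performance headroom, energy cost, and data locality, and the job is placed on the highest-scoring node. All class names, fields, and weights are hypothetical; in a real cloud-edge cluster the inputs would come from monitoring data rather than hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Illustrative view of a cloud or edge node (all fields hypothetical)."""
    name: str
    free_cpu: float          # available CPU cores
    network_mbps: float      # usable bandwidth towards the job's data source
    watts_per_core: float    # rough energy cost of one busy core
    holds_dataset: bool      # whether the job's input data is already local

def score(node: Node, cpu_request: float,
          w_perf: float = 0.5, w_energy: float = 0.3, w_locality: float = 0.2) -> float:
    """Toy multi-criteria score (higher is better): performance, energy, and data locality."""
    if node.free_cpu < cpu_request:
        return float("-inf")                      # node cannot host the job at all
    perf = node.free_cpu / (cpu_request + 1e-9)   # headroom after placement
    energy = 1.0 / node.watts_per_core            # cheaper energy raises the score
    locality = 1.0 if node.holds_dataset else node.network_mbps / 1000.0
    return w_perf * perf + w_energy * energy + w_locality * locality

nodes = [
    Node("hpc-0",  free_cpu=64, network_mbps=10000, watts_per_core=6.0, holds_dataset=False),
    Node("edge-3", free_cpu=4,  network_mbps=100,   watts_per_core=2.5, holds_dataset=True),
]
best = max(nodes, key=lambda n: score(n, cpu_request=2))
print(f"place job on {best.name}")
```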

Project Concept

AI-based, open and portable cloud management

DECICE aims to develop an AI-based, open and portable cloud management framework for the automatic and adaptive optimization and deployment of applications in a federated infrastructure, spanning compute from the very large (e.g., HPC systems) to the very small (e.g., IoT sensors connected at the edge).

Digital Twin

Working at such vastly different scales requires an intelligent management plane with advanced capabilities that allow it to proactively adjust workloads within the system based on their needs, such as latency, compute power, and power consumption. We therefore envision an AI model that uses a digital twin of the available resources to make real-time scheduling decisions based on telemetry data collected from those resources.
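The sketch below illustrates this idea in simplified form; it is a minimal Python mock-up, not the DECICE digital twin, and its data model, telemetry format, and field names are assumptions. The twin mirrors per-node state from telemetry samples and answers placement questions ("would this job fit?") against the mirror, so decisions can be evaluated before touching the real cluster.

```python
from dataclasses import dataclass, field

@dataclass
class NodeTwin:
    """Mirrored state of one physical node (fields are assumptions)."""
    cpu_capacity: float
    cpu_used: float = 0.0
    power_draw_w: float = 0.0

@dataclass
class DigitalTwin:
    """Minimal digital twin: a dictionary of node mirrors kept up to date from telemetry."""
    nodes: dict[str, NodeTwin] = field(default_factory=dict)

    def ingest_telemetry(self, sample: dict) -> None:
        # Hypothetical sample format, e.g. {"node": "edge-3", "cpu_capacity": 4.0, ...}
        twin = self.nodes.setdefault(sample["node"], NodeTwin(cpu_capacity=sample["cpu_capacity"]))
        twin.cpu_used = sample["cpu_used"]
        twin.power_draw_w = sample["power_w"]

    def what_if(self, node_name: str, cpu_request: float) -> bool:
        """Ask the twin whether a placement would fit, without touching the real cluster."""
        twin = self.nodes[node_name]
        return twin.cpu_used + cpu_request <= twin.cpu_capacity

twin = DigitalTwin()
twin.ingest_telemetry({"node": "edge-3", "cpu_capacity": 4.0, "cpu_used": 1.2, "power_w": 18.0})
print(twin.what_if("edge-3", cpu_request=2.0))   # True: the twin predicts the job fits
```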

DECICE framework

The DECICE framework will dynamically balance different workloads, optimize the throughput and latency of the system resources (compute, storage, and network) for both performance and energy efficiency, and quickly adapt to changing conditions. It will also provide the tools and interfaces that administrators and deployment experts need to interact with all infrastructure components and control them to achieve the desired result.

Open standard APIs

The DECICE framework will integrate with orchestration systems through open standard APIs, making it portable, modular, and extensible. It will be evaluated through established use cases.
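Kubernetes already exposes one such open, standard API over REST. As a minimal sketch of what consuming it can look like (using the official `kubernetes` Python client package, which is an assumed tooling choice rather than part of DECICE), the snippet below reads the CPU capacity of every node in a cluster:

```python
from kubernetes import client, config   # official Kubernetes Python client (pip install kubernetes)

def list_node_capacities() -> dict[str, str]:
    """Read per-node CPU capacity through the standard Kubernetes REST API."""
    config.load_kube_config()            # local kubeconfig; use load_incluster_config() inside a pod
    core = client.CoreV1Api()
    return {node.metadata.name: node.status.capacity["cpu"]
            for node in core.list_node().items}

if __name__ == "__main__":
    for name, cpu in list_node_capacities().items():
        print(f"{name}: {cpu} CPUs")
```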

Project Impacts

Impact 01

Europe’s open strategic autonomy by sustaining first-mover advantages in strategic areas including AI, data, robotics, quantum computing, and graphene, and by investing early in emerging enabling technologies

Impact 02

Reinforced European industry leadership across the digital supply chains

Impact 03

Robust European industrial and technology presence in all key parts of a greener digital supply chain, from low-power components to advanced systems, future networks, new data technologies, and platforms.

Project Structure

WP1 aims to organize the overall project administration, finance, and project management including definition and coordination of the quality and risk management.

WP2 aims to implement the scheduling agent responsible for the efficient orchestration of the application workload on the cloud-edge infrastructure.
  • D2.1 Specification of the Optimization Scope
  • D2.2 Digital Twin
  • D2.3 AI-Scheduler Prototypes for Storage and Compute
  • D2.4 Integrated AI-Scheduler Prototype
  • D2.5 Final Scheduler and Digital Twin
WP3 seeks to combine arbitrary backend solutions into a portable framework that can be integrated into arbitrary cloud frameworks, and to provide a training environment.
  • D3.1 Synthetic Test Environment
  • D3.2 Final Architecture and Interfaces
  • D3.3 Final Implementation
  • D3.4 Security and Trustworthiness
WP4 covers the majority of tasks concerning the integration of extensions for Kubernetes.
  • D4.1 Implementation Report of CI/CD Environment
  • D4.2 Integration of Monitoring Framework
  • D4.3 Final Integration of DECICE APIs
  • D4.4 Final Integration of HPC and AI Services
WP5 contains activities revolving around deployment and validation.  
  • D5.1 Use Case Requirements
  • D5.2 Development Environment Specification
  • D5.3 Project Development Environment Deployed for Phase 1 and 2
  • D5.4 Project Development Environment Deployed for Phase 3
  • D5.5 Performance Evaluation Report
WP6 seeks to enhance the impact of the project in the long-term through strategic planning of the dissemination, communication and stakeholder engagement activities.
  • D6.1 Dissemination & Communication Plan
  • D6.2 Exploitation Strategy (sensitive deliverable – not open for public viewing)
  • D6.3 Online and Media Presence
  • D6.4 Engagement Summary Report

DECICE on GitHub
