In the modern edge-cloud continuum, data-intensive workflows are increasingly deployed on 6G edge nodes, extending their diffusion over public networks, and are enhanced with machine learning (ML) models that extend their capabilities. Data-intensive workflows are also increasingly used in critical scenarios such as healthcare and IoT.
In these scenarios, guarantees on model prediction quality and on model non-functional properties (e.g., model confidentiality) are now required to comply with regulations such as the EU AI Act.
Although the traditional CIA (Confidentiality, Integrity, Availability) triad is widely considered the minimal set of non-functional properties to be guaranteed for a given system, it cannot be applied as-is in the context of ML models. In this paper, we identify the shortcomings of the conventional CIA definitions, provide novel ML-specific definitions of the CIA non-functional properties, and develop an assurance methodology to evaluate them on target models and provide the relevant guarantees.
The paper presents an experimental evaluation, built on a realistic MLOps pipeline and on the novel definition of the ML model integrity non-functional property, which demonstrates the feasibility and effectiveness of the proposed methodology.