Artificial Intelligence (AI)-based systems are experiencing widespread adoption across a broad range of applications, including critical domains such as law and healthcare. This paradigm shift has prompted a push towards the development of trustworthy AI systems, which are increasingly mandated by laws and regulations. However, assessment techniques that concretely verify the trustworthiness of AI-based systems are still lacking. Existing techniques, in fact, focus on traditional quality properties and provide either high-level guidelines or low-level methods that cannot be generalized, and are therefore not applicable to AI-based systems. In this paper, we propose an assessment scheme that builds on a structured catalog of non-functional properties. Support for specific non-functional properties is verified along the entire system life cycle, from data collection to evaluation, by a set of assessment controls.
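
To make the idea of the catalog concrete, the sketch below shows one possible way to model non-functional properties, life-cycle phases, and assessment controls in Python. This is a minimal illustration under our own assumptions, not the scheme defined in the paper; all class names, phases, thresholds, and the example "fairness" property are hypothetical.

# Illustrative sketch only: a hypothetical data model for a catalog of
# non-functional properties verified by assessment controls attached to
# life-cycle phases. Names and thresholds are assumptions, not the paper's.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Dict, List


class LifecyclePhase(Enum):
    DATA_COLLECTION = auto()
    TRAINING = auto()
    EVALUATION = auto()


@dataclass
class AssessmentControl:
    """A single verifiable check tied to one life-cycle phase."""
    name: str
    phase: LifecyclePhase
    check: Callable[[dict], bool]  # inspects evidence gathered in that phase


@dataclass
class NonFunctionalProperty:
    """A catalog entry: a property plus the controls that verify its support."""
    name: str
    controls: List[AssessmentControl] = field(default_factory=list)

    def assess(self, evidence: Dict[LifecyclePhase, dict]) -> Dict[str, bool]:
        # The property is considered supported only if every control passes
        # on the evidence available for its phase.
        return {c.name: c.check(evidence.get(c.phase, {})) for c in self.controls}


if __name__ == "__main__":
    # Hypothetical "fairness" entry with one control per phase of interest.
    fairness = NonFunctionalProperty(
        name="fairness",
        controls=[
            AssessmentControl(
                name="balanced-classes",
                phase=LifecyclePhase.DATA_COLLECTION,
                check=lambda e: e.get("minority_class_ratio", 0.0) >= 0.3,
            ),
            AssessmentControl(
                name="subgroup-accuracy-gap",
                phase=LifecyclePhase.EVALUATION,
                check=lambda e: e.get("accuracy_gap", 1.0) <= 0.05,
            ),
        ],
    )
    evidence = {
        LifecyclePhase.DATA_COLLECTION: {"minority_class_ratio": 0.42},
        LifecyclePhase.EVALUATION: {"accuracy_gap": 0.03},
    }
    print(fairness.assess(evidence))
    # {'balanced-classes': True, 'subgroup-accuracy-gap': True}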