ALTEA
The platform is designed to operate as a multi-tenant, multi-role environment, consistent with the first draft of the attached Survey, which identifies three core application entities: Consultant, Company, and AI Asset. The Consultant is the actor delivering assessment services to multiple companies. The Company represents the organization owning one or more AI systems or system versions. The AI Asset is the minimum unit subject to evaluation, corresponding to a single algorithm or to a specific operational version of it. This modeling is not merely an interface choice; it is the logical basis for data segregation, responsibility traceability, workflow orchestration, and report generation within each analytical perimeter.

The scope of the deliverable also includes the Survey engine, which incorporates the methodological structure of the attached ethical framework and translates it into an executable digital process. In line with the attached document, the framework is organized into seven modules corresponding to the European requirements for Trustworthy AI: autonomy and human oversight/fundamental rights; technical robustness, security, safety, and accuracy; privacy and data governance; transparency and explainability; diversity, non-discrimination, and fairness; environmental and social sustainability; and accountability. Within the platform, these modules are not treated as static questionnaires but as evaluative blocks connected to consistency rules, evidence collection, scoring, escalation, and reporting.

Villanova LLM connects the structured information base of the Survey with the automated production of deliverables, audit reports, gap analyses, and remediation recommendations. In this architecture, Villanova LLM acts as a controlled and contextualized linguistic component rather than as an autonomous source of truth.
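The entity model and module structure described above can be sketched as follows. This is a minimal illustration only: all class names, field names, and module labels are hypothetical and do not reflect the platform's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Module(Enum):
    """The seven evaluative modules of the attached framework (labels abbreviated)."""
    HUMAN_OVERSIGHT = "autonomy and human oversight / fundamental rights"
    ROBUSTNESS = "technical robustness, security, safety, accuracy"
    PRIVACY = "privacy and data governance"
    TRANSPARENCY = "transparency and explainability"
    FAIRNESS = "diversity, non-discrimination, fairness"
    SUSTAINABILITY = "environmental and social sustainability"
    ACCOUNTABILITY = "accountability"

@dataclass
class AIAsset:
    """Minimum unit of evaluation: one algorithm or one operational version of it."""
    name: str
    version: str

@dataclass
class Company:
    """Tenant organization owning one or more AI systems or system versions."""
    name: str
    assets: list[AIAsset] = field(default_factory=list)

@dataclass
class Consultant:
    """Actor delivering assessment services to multiple companies."""
    name: str
    clients: list[Company] = field(default_factory=list)

    def assessment_perimeters(self) -> list[tuple[str, str, str]]:
        # Each (company, asset, version) triple is a separate analytical
        # perimeter, keeping data segregated per tenant and per asset version.
        return [(c.name, a.name, a.version)
                for c in self.clients for a in c.assets]
```

The per-perimeter listing is where data segregation becomes concrete: every report, score, and evidence item would be keyed to exactly one such triple.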
The model receives structured inputs originating from the Survey, the documentary repository, the scoring engine, and the verification logs, and produces texts consistent with predefined reporting templates; final validation responsibility remains with human reviewers. The resulting outputs serve as project documentation and as the basis for subsequent deployment activities, internal audits, regulatory verification, exposure to trusted third parties, and preparation of technical material supporting future compliance processes with respect to the AI Act, internal governance policies, and assurance practices required in complex industrial contexts.