While AI-assisted tools are acknowledged to improve organisational processes, there are growing concerns about the bias embedded in their underlying algorithms.
As with the General Data Protection Regulation (GDPR), companies will have to comply with upcoming EU regulation on their use of AI-assisted systems in order to prevent the perpetuation of historical patterns of discrimination (e.g., against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientations).
At the end of the training, participants will be:
Aware of the main paradigms of trustworthy AI
Aware of the regulatory challenges ahead for their organisations
Able to identify "risky" AI-assisted tools used in their companies
Trustworthy AI: what does it mean?
Sources and risks of biased AI-assisted tools for your organisation (illustrations: the Amazon recruitment engine, the COMPAS recidivism assessment algorithm, facial recognition)
The AI Act
How to increase fairness in AI algorithms (metrics, bias mitigation methods, toolkits and programmes, data collection, explainable AI)?
Case study: an AI-assisted programme for matching CVs with job offers while avoiding age bias (using the LIST technological demonstrator AMANDA).
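To give a flavour of the fairness metrics covered in the programme above, here is a minimal Python sketch of one widely used metric, the demographic parity difference. The data, group labels, and function name are purely illustrative assumptions, not material from the course.

```python
# Illustrative sketch of the demographic parity difference: the gap in
# positive-outcome rates between two demographic groups. All data below
# is hypothetical.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-outcome rates between groups 0 and 1."""
    rate = {}
    for g in (0, 1):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(selected) / len(selected)
    return abs(rate[0] - rate[1])

# Hypothetical shortlisting decisions (1 = shortlisted) for two age groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = under 40, 1 = 40 and over

print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

A value near 0 indicates that both groups are selected at similar rates; larger values flag the kind of disparity that the bias mitigation methods and toolkits discussed in the course aim to reduce.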
AI technology designers
Digital transformation officers
Data protection officers
Company leaders and managers
Case studies/illustrations (e.g. recruitment, facial recognition, justice)
In situ learning (i.e. use of a technological demonstrator)
The training material will be handed out at the beginning of the course.
At the end of the course, participants will receive a certificate of attendance issued by the House of Training and Digital Learning Hub.
Conditions of enrolment
To apply for this course, you have to register on
14, Porte de France - Belval