Poster in Workshop: Building Trust in LLMs and LLM Applications: From Guardrails to Explainability to Regulation
Justified Trust in AI Fairness Assessment using Existing Metadata Entities
Alpay Sabuncuoglu · Carsten Maple
AI is becoming increasingly complex, opaque and connected to systems without human oversight. As such, ensuring trust in these systems has become challenging, yet vital. Trust is a multifaceted concept which varies over time and context, and to support users in deciding what to trust, recent work has focused on assessing the trustworthiness of systems. This includes examination of the security, privacy, safety and fairness of a system. In this work, we explore the fairness of AI systems. While mechanisms such as formal verification aim to guarantee properties such as fairness, their use in large-scale systems is rare due to cost and complexity. A common alternative to formal methods is to make claims about the fairness of a system, backed by supporting evidence, to elicit justified trust in the system. Through continuous monitoring and transparent reporting of existing metadata alongside model experiment logs, organisations can provide reliable evidence for such claims. This paper details a new approach to evidence-based trust. We share our findings from a workshop with industry professionals and provide a practical example of how these concepts can be applied in a credit risk analysis system.
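As a minimal illustration of the reporting idea described above, the sketch below shows how a fairness metric could be computed for a toy credit-decision model and appended to an experiment log as a metadata entry that later serves as evidence for a fairness claim. This is not the authors' implementation: the metric choice (demographic parity difference), the field names, the experiment identifier and the log file name are all hypothetical.

```python
# Hypothetical sketch: record a fairness metric as experiment metadata so it
# can be cited as evidence for a fairness claim. All identifiers, fields and
# thresholds below are illustrative assumptions, not the paper's method.
import json
from datetime import datetime, timezone


def demographic_parity_difference(predictions, groups, positive_label=1):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in members if p == positive_label) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]


# Toy credit-decision outputs: 1 = approved, 0 = declined.
preds = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

metadata_entry = {
    "experiment_id": "credit-risk-run-042",  # hypothetical identifier
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "metric": "demographic_parity_difference",
    "value": demographic_parity_difference(preds, groups),
    "claim": "Approval rates differ by no more than 0.25 across groups",
}

# Append the entry to a log file that reviewers or auditors can inspect
# alongside the rest of the experiment metadata.
with open("fairness_evidence_log.jsonl", "a") as f:
    f.write(json.dumps(metadata_entry) + "\n")

print(metadata_entry)
```

In this sketch the metric is written to an append-only JSON Lines log so that each training or evaluation run leaves an auditable record; in practice the same entry could be attached to whatever experiment-tracking metadata an organisation already maintains.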