Industry, academia, and governments are all exploring the best ways to encourage the development and use of artificial intelligence (AI) that is human-centered and trustworthy. Achieving trustworthy AI, particularly in autonomous automotive scenarios, is a fundamental challenge: it is becoming increasingly difficult to determine whether an AI system protects the individual rights and democratic values of an ever-widening group of stakeholders.
To help identify this broad range of risks, it is common to define a set of trustworthy AI principles derived from human rights, ethical norms, and legal principles. This webinar will present a systematic approach to identifying the use case-specific risks relevant to each trustworthy AI principle as it applies to automotive scenarios. It will also describe methods for mitigating those risks to an acceptable level and provide the audience with tools that support the implementation of this framework as an agile and iterative process.
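The webinar's actual framework is not reproduced here, but as a rough illustration of the kind of artifact such a process might produce, the following Python sketch models a risk register keyed by trustworthy AI principles, with a simple identify-assess-mitigate loop. Every principle name, risk entry, scoring scheme, and acceptance threshold below is a hypothetical example, not the webinar's methodology.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a principle-keyed risk register.
# All principles, risks, scores, and thresholds are invented
# examples, not the framework presented in the webinar.

@dataclass
class Risk:
    description: str
    severity: int                      # 1 (negligible) .. 5 (critical)
    likelihood: int                    # 1 (rare) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)

    def score(self) -> int:
        # Simple severity x likelihood scoring, for illustration only.
        return self.severity * self.likelihood

ACCEPTABLE = 6  # example residual-risk acceptance threshold

register: dict[str, list[Risk]] = {
    "fairness": [
        Risk("Pedestrian detection degrades for under-represented groups", 4, 3),
    ],
    "transparency": [
        Risk("Handover decisions are not explainable to the driver", 3, 4),
    ],
}

# One pass of the iterative identify -> assess -> mitigate loop.
for principle, risks in register.items():
    for risk in risks:
        if risk.score() > ACCEPTABLE:
            risk.mitigations.append("TODO: define mitigation and re-assess")
        status = "mitigate" if risk.score() > ACCEPTABLE else "accept"
        print(f"[{principle}] {risk.description}: "
              f"score={risk.score()} -> {status}")
```

In an agile setting, a register like this would be revisited each iteration: risks above the acceptance threshold gain mitigations and are re-scored until the residual risk is acceptable or the use case is redesigned.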