TOOLS – A.I. ETHICS

1. What is the purpose of the catalogue?
This catalogue aims to provide a comprehensive collection of tools and metrics for ensuring AI's trustworthiness and its alignment with human values and ethics.
2. What does the catalogue cover?
The catalogue covers a wide range of tools and metrics, including those related to the transparency, fairness, robustness, and interpretability of AI systems. It also addresses the safety, privacy, and accountability mechanisms inherent in trustworthy AI.
3. How is the catalogue compiled and kept up to date?
The catalogue is updated regularly to ensure its relevance. Input is sourced from experts, stakeholders, and the general public, and feedback loops continuously refine and enhance the content based on real-world experiences and evolving needs.
The OECD collaborates with various global organizations, research institutions, and industry leaders to curate this catalogue. This diverse network ensures a holistic and well-rounded approach to the development and maintenance of the catalogue.
OECD AI Incidents Monitor (AIM)
Purpose:
"...documents AI incidents to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the incidents and hazards that concretise AI risks. Over time, AIM will help to show patterns and establish a collective understanding of AI incidents and their multifaceted nature and serve as an important tool for trustworthy AI."
(From the OECD's AI Incidents Monitor website)
How is this relevant to librarian work?
For example, if you receive questions about...
How to use the AIM
What are Responsible AI Licenses (RAIL)?
"Responsible AI Licenses (RAIL) empower developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications."
(From the RAIL website)
The following paper serves as a theoretical framework on RAIL application:
Contractor, D., McDuff, D., Haines, J. K., Lee, J., Hines, C., Hecht, B., Vincent, N., & Li, H. (2022). Behavioral use licensing for responsible AI. 2022 ACM Conference on Fairness, Accountability, and Transparency, 778–788. https://doi.org/10.1145/3531146.3533143
What is the AI Incident Database?
"The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."
(From the AI Incident Database website)
For example, if you receive questions about...
