A.I. for UVic Librarians -- Internal

TOOLS – A.I. ETHICS



OECD Catalogue of Tools & Metrics for Trustworthy AI

 

1. Background - Why this catalogue?

This catalogue aims to provide a comprehensive collection of tools and metrics to ensure AI's trustworthiness and its alignment with human values and ethics. 

2. What does the catalogue cover?

The catalogue covers a wide range of tools and metrics, including those related to transparency, fairness, robustness, and interpretability of AI systems. It also touches upon safety, privacy, and accountability mechanisms inherent in trustworthy AI.

3. How is the catalogue compiled and kept up to date? 

Regular updates are made to the catalogue to ensure its relevancy. Inputs are sourced from experts, stakeholders, and the general public. Feedback loops are also established to continuously refine and enhance the content based on real-world experiences and evolving needs.

4. Partner Network

The OECD collaborates with various global organizations, research institutions, and industry leaders to curate this catalogue. This diverse network ensures a holistic and well-rounded approach to the development and maintenance of the catalogue.

OECD AI Incidents Monitor (AIM)

Purpose:

"...documents AI incidents to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the incidents and hazards that concretise AI risks. Over time, AIM will help to show patterns and establish a collective understanding of AI incidents and their multifaceted nature and serve as an important tool for trustworthy AI."

(From the OECD's AI Incidents Monitor website)

Relevance for Librarian Work?

For example, if you receive questions about... 

  • identifying risks involved with AI application in certain fields
  • assessing risks of existing AI tools
  • finding safe alternatives

How to use the AIM

  • the monitor is a database that can be searched by keywords 
  • result data can be downloaded in XLS format
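Downloaded results can also be filtered locally. The sketch below is a minimal, stdlib-only illustration assuming the XLS export has been re-saved as CSV; the column names (`title`, `date`) and the sample rows are hypothetical and will differ from a real AIM export.

```python
import csv
import io

# Hypothetical sample mimicking an AIM export re-saved as CSV;
# real exports have different columns and many more rows.
sample = io.StringIO(
    "title,date\n"
    "Chatbot gives harmful medical advice,2023-05-01\n"
    "Facial recognition misidentifies suspect,2023-06-12\n"
)

def filter_by_keyword(fileobj, keyword):
    """Return rows whose title mentions the keyword (case-insensitive)."""
    reader = csv.DictReader(fileobj)
    return [row for row in reader if keyword.lower() in row["title"].lower()]

hits = filter_by_keyword(sample, "chatbot")
print(len(hits))  # 1
```

For a real export, replace the `io.StringIO` sample with `open("export.csv", newline="")` after converting the XLS file.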

Link 

Responsible AI Licenses (RAIL)

 

Purpose

What are Responsible AI Licenses (RAIL)? 

"Responsible AI Licenses (RAIL) empower developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications." 
(From the RAIL website)

Theoretical Framework

The following paper serves as a theoretical framework on RAIL application:

Contractor, D., McDuff, D., Haines, J. K., Lee, J., Hines, C., Hecht, B., Vincent, N., & Li, H. (2022). Behavioral use licensing for responsible AI. 2022 ACM Conference on Fairness, Accountability, and Transparency, 778–788. https://doi.org/10.1145/3531146.3533143

Resources

Link to the License Webpage

AI Incidents Database (AID)

Purpose:

"The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."

(From the AI Incidents Database website)

Database features

  • Indexes over 2,000 incidents of AI harm
  • Defines what an AI incident is
  • Offers incident classifications
  • Provides various ways to explore the data

Relevance for Librarian Work?

For example, if you receive questions about... 

  • identifying risks involved with AI application in certain fields
  • assessing risks of existing AI tools
  • finding safe alternatives

How to use the AID

  • the database can be searched by keywords
  • result data can be downloaded in XLS format

Link 

Creative Commons License
This work by The University of Victoria Libraries is licensed under a Creative Commons Attribution 4.0 International License unless otherwise indicated when material has been used from other sources.