Machine learning (ML) systems are rapidly being introduced across a variety of sectors, impacting how we interact with one another and access information. As a result, many are calling for “trustworthy” AI systems – from President Biden’s executive order to the EU's AI Act to the AI for Good initiative. These calls emerge from a host of studies and experiences that have highlighted important limitations in the reliable use of machine learning at scale: errors, hallucinations, bias, safety risks, and harms emerging from the systems themselves. Researchers, policy-makers, and practitioners are all looking to understand, create, and mandate trust and trustworthiness in ML.
However, both the calls and the responses have made it apparent that “trust” and “trustworthiness” mean different things to different people, groups, and fields of study. What does it mean for a person to “trust” a computationally-driven machine? Some conceptions of “trust” specify a technical criterion an ML model must meet. Others reflect the processes through which the model was generated, or through which it is used, more than any property of the model itself. Still others rest on individuals’ and groups’ beliefs about the model and its developers, data contributors, deployers, users, or subjects. Amid this wide variety of conceptions of “trust” and “trustworthiness” surrounding machine learning, a solid foundation is needed to connect existing scholarship and develop a future program of research.
This workshop brings together experts across a range of backgrounds, methods, and disciplines, featuring researchers from academia and industry, theory and practice, who all bring different societal and individual lenses to the topic of trust. The workshop’s goal is for participants to learn, compare, and synthesize different definitions of trust and how they can be operationalized in machine learning.
We encourage submissions related to the following topics:
- Algorithmic auditing
- Economic approaches (e.g., to consumer confidence and firm reputations)
- Explanations and explainability
- Participatory machine learning
- Philosophical/ethical foundations
- Reliable predictions and reporting them responsibly
- Reproducibility, including in statistical estimates of treatment effects
- Social choice and other political-theoretic approaches to designing AI systems
For those interested, we will additionally use the workshop as a starting point for developing a paper that makes explicit the theoretical foundations and future directions of this budding research program. There will be an optional working session at the end of the day to discuss next steps in this direction.