August 20, 2021 / BY Emily F. Keller

UW Team Launches Center for Responsibility in AI Systems & Experiences

The Center for Responsibility in AI Systems & Experiences (RAISE) at the University of Washington, launched earlier this year, brings together researchers from across the university who focus on transparency, fairness, accountability, and trustworthiness in AI systems, as well as on building and evaluating human-centric AI algorithms and systems.

RAISE is a newly formed UW research center. This spring, the group held a seminar with more than 40 students, postdocs, and faculty that covered topics such as explainable AI, federated learning, and the environmental and financial costs of large language models. The seminar included several invited talks, among them a presentation on ethical AI by Ricardo Baeza-Yates, a research professor at Northeastern University's Institute for Experiential AI.

The mission of RAISE is to provide a platform for scholarly, educational, and outreach activities that involve:

  • Foundational research in intelligent information systems and their interaction with human values and experiences;
  • The development and deployment of such systems in critical yet underserved contexts across the public sector; the physical, life, and social sciences; and relevant industries. These contexts include health, education, finance, and policy.

RAISE is led by Assistant Professor Tanu Mitra, Associate Professor Chirag Shah, and Associate Professor Bill Howe, Director of Urbanalytics. Participants represent a variety of disciplines, including information science, linguistics, computer science, psychology, physics, social science, human-computer interaction, astronomy, and statistics.

Projects include EquiTensors, which draws on multi-task learning, representation learning, and fair ML to learn fair representations of heterogeneous urban datasets for use in downstream applications; Fairness Accountability Transparency Ethics (FATE), which aims to balance relevance and diversity in search and has led to new metrics and frameworks for estimating fairness in ML systems; and an effort to build research infrastructure for auditing search and recommendation algorithms to reveal misleading and inaccurate information.
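
For readers unfamiliar with how fairness in ML systems is typically quantified, the short Python sketch below computes the demographic parity gap, one widely used group-fairness metric. It is purely illustrative and is not the specific metric or framework developed by the FATE or EquiTensors projects; the function name and data are hypothetical.

    # Illustrative only: a minimal sketch of one common group-fairness metric
    # (demographic parity gap), not a metric developed by RAISE.
    import numpy as np

    def demographic_parity_gap(y_pred, group):
        """Absolute difference in positive-prediction rates between two groups."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_a = y_pred[group == 0].mean()  # positive-prediction rate, group 0
        rate_b = y_pred[group == 1].mean()  # positive-prediction rate, group 1
        return abs(rate_a - rate_b)

    # Hypothetical binary predictions and a binary sensitive attribute.
    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(preds, groups))  # 0.5 -> large disparity

A gap near zero means the system makes positive predictions at similar rates across groups; auditing work of the kind described above typically tracks metrics like this alongside relevance and diversity measures.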

More information is available here: https://www.raise.uw.edu/.