AI Governance

Understanding Industry Practitioners' Experiences in Generative AI Governance

Introduction

Generative AI offers impressive capabilities but also poses unique challenges, such as hallucinations. Researchers and regulators have proposed various tools, policies, and frameworks to address these issues. However, there is still a gap in understanding AI practitioners' real-world needs and challenges in operationalizing governance.

Objective

We aim to engage AI practitioners in these conversations to inform the development of effective, practical governance guidelines and tools.

Team

A team of user researchers and UI/UX designers from India, the US, and Germany collaborated on this project. My role was to conduct end-to-end user research: planning research protocols, recruiting participants, conducting interviews, analyzing qualitative data, and writing up results.

  • Hyo Jin (Gina) Do (User Researcher, IBM Research, US)
  • Swati Babbar (User Researcher, IBM India Software Labs, India)
  • Wenjing Li (UI/UX Designer, IBM, US)
  • Laura Walks (UI/UX Designer, IBM, US)
  • Shayenna Misko (User Researcher, IBM Software, Germany)

Method

We conducted semi-structured interviews with 10 industry practitioners involved in AI governance, recruited via User Interviews and Respondent.

The interview consisted of two phases:

  1. Experiences with generative AI governance (e.g., goals, challenges, needs)
  2. User experience of our governance tool design probe (e.g., feedback, questions)

Design Probe

The interactive prototype is available here.


Findings

Goals

  • Improving AI models’ quality by evaluating and monitoring outputs against various performance metrics
  • Assessing the ethical and societal impact of AI outputs
  • Ensuring data privacy and security
  • Managing regulatory compliance
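The first goal above, monitoring model outputs against performance metrics and flagging those that fall short, can be illustrated with a minimal sketch. Everything here (the token-overlap metric, the 0.8 threshold, the function name) is a hypothetical example, not the method of any particular governance tool:

```python
def evaluate_outputs(outputs, reference, threshold=0.8):
    """Score each generated output by simple token overlap with a
    reference answer and flag those below a quality threshold.

    This is an illustrative metric only; real governance tools use
    richer evaluations (faithfulness, toxicity, drift, etc.).
    """
    ref_tokens = set(reference.lower().split())
    results = []
    for text in outputs:
        out_tokens = set(text.lower().split())
        # Fraction of reference tokens that appear in the output.
        score = len(ref_tokens & out_tokens) / len(ref_tokens) if ref_tokens else 0.0
        results.append({"output": text, "score": score, "flagged": score < threshold})
    return results

reports = evaluate_outputs(
    ["Paris is the capital of France", "The capital is Berlin"],
    reference="Paris is the capital of France",
)
```

In this toy run, the first output matches the reference exactly and passes, while the second falls below the threshold and is flagged for review.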

Challenges

  • Evaluating and improving AI models to reach their target performance
  • Interpreting regulations in specific contexts
  • Protecting data on open servers and validating it for security
  • Technical challenges (e.g., integration, automation)

Needs

Participants expressed the following informational needs:

  • Evaluation metrics
  • User data and use cases
  • AI models and their inner workings (e.g., parameters, weights, and architecture)

However, current AI governance solutions do not fully address these needs:

  • Limited support for evaluations and metrics, particularly customized or rare ones
  • A lack of explainability features

Design Probe Feedback

Overall, participants appreciated the way information was displayed and organized, particularly on dashboards and visualizations. Key suggestions for improvement included:

  • Recommendations for resolving violations
  • Support in understanding various metrics and terminologies
  • Detailed explanations of AI models, data, context, and evaluation methods

Deliverables

Question Bank and Product Impact

We proposed the following question bank to inform the design of explainability features for AI governance tools, including IBM’s watsonx.governance.

Publication

We presented our findings at the CHI Conference on Human Factors in Computing Systems (Do et al., 2025).

Watch our presentation video to learn more!