AI Disclosure
Understanding governance practices and designing tools
AI disclosure focuses on informing consumers whether and how decisions or content were produced by AI. As AI-generated and co-created content becomes indistinguishable from human-authored work, accurate disclosure is critical for transparency, proper attribution, and helping consumers calibrate their trust in that content. Despite emerging policy mandates such as the EU AI Act and South Korea’s AI Basic Act, current disclosure practices remain underdeveloped and poorly aligned with social and institutional norms.
My research addresses this gap by examining how AI disclosure should be designed, focusing on three questions: (1) Ownership: how disclosure practices can align with authors’ psychological and legal ownership of co-created work; (2) Accountability: how disclosure shapes understandings of who is responsible for the risks and potential harms of AI-generated content; and (3) Trust: how different disclosure strategies influence consumer perceptions of, trust in, and reliance on AI-generated media.