Data Driven Responsible LLM Tools with Adriana Alvarado Garcia, Karla Badillo-Urquiola, Ozioma Oguine
Author: Notre Dame - IBM Technology Ethics Lab
Uploaded: 2026-01-09
Views: 8
Description:
Red-teaming datasets play a key role in identifying potential harms in Large Language Models (LLMs), but current approaches often overlook domain-specific risks. Our project addresses this gap by collaborating with domain experts in youth online safety to design datasets that reflect real-world, context-sensitive definitions of harm. Through this work, we aim to develop a methodology for embedding expert knowledge into LLM evaluation, ensuring safer and more responsible AI deployment.
CHI 2025 Publication:
Adriana Alvarado Garcia, Heloisa Candello, Karla Badillo-Urquiola, and Marisol Wong-Villacres. 2025. Emerging Data Practices: Data Work in the Era of Large Language Models. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 846, 1–21. https://doi.org/10.1145/3706598.3714069
CSCW 2025 Workshop:
Title: Bridging Expertise and Participation in AI: Multistakeholder Approaches to Safer AI Systems for Youth Online Safety
Website: https://sites.google.com/nd.edu/cscw2...
CSCW 2025 Publication:
Ozioma C. Oguine, Oghenemaro Anuyah, Zainab Agha, Iris Melgarez, Adriana Alvarado Garcia, and Karla Badillo-Urquiola. 2025. Online Safety for All: Sociocultural Insights from a Systematic Review of Youth Online Safety in the Global South. arXiv (April 2025). https://doi.org/10.48550/arXiv.2504.2...
Ozzie’s Fellowship announcement
2025-2026 Notre Dame–IBM Technology Ethics Lab Fellows announced https://ethics.nd.edu/news-and-events...