Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective
Uploader: Conference on Language Modeling
Uploaded: 2025-11-03
Views: 49
Description:
Authors: Weijie Xu, Yiwen Wang, Chi Xue, Xiangkun Hu, Xi Fang, Guimin Dong, Chandan K. Reddy
Large Language Models (LLMs) often generate responses with inherent biases, undermining their reliability in real-world applications. Existing evaluation methods often overlook biases in long-form responses and the intrinsic variability of LLM outputs. To address these challenges, we propose FiSCo (Fine-grained Semantic Comparison), a novel statistical framework to evaluate group-level fairness in LLMs by detecting subtle semantic differences in long-form responses across demographic groups. Unlike prior work focusing on sentiment or token-level comparisons, FiSCo goes beyond surface-level analysis by operating at the claim level, leveraging entailment checks to assess the consistency of meaning across responses. We decompose model outputs into semantically distinct claims and apply statistical hypothesis testing to compare inter- and intra-group similarities, enabling robust detection of subtle biases. We formalize a new group counterfactual fairness definition and validate FiSCo on both synthetic and human-annotated datasets spanning gender, race, and age. Experiments show that FiSCo more reliably identifies nuanced biases while reducing the impact of stochastic LLM variability, outperforming various evaluation metrics.
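The core statistical idea described in the abstract, comparing intra-group response similarities against inter-group similarities with a hypothesis test, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `similarity` function here is a placeholder token-overlap (Jaccard) score standing in for FiSCo's claim-level entailment-based comparison, and a permutation test stands in for whatever test statistic the paper actually uses.

```python
import itertools
import random
from statistics import mean

def similarity(a, b):
    # Placeholder semantic similarity: Jaccard overlap of token sets.
    # FiSCo instead decomposes responses into claims and scores them
    # with entailment checks (this stand-in is an assumption).
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def group_similarity_test(group_a, group_b, n_perm=2000, seed=0):
    """Compare intra-group vs. inter-group response similarity.

    group_a / group_b: lists of LLM responses to prompts that differ
    only in a demographic attribute. Returns (gap, p_value), where
    gap = mean intra-group similarity - mean inter-group similarity.
    A large positive gap with a small p-value suggests responses
    systematically depend on group membership.
    """
    intra = [similarity(x, y)
             for g in (group_a, group_b)
             for x, y in itertools.combinations(g, 2)]
    inter = [similarity(x, y) for x in group_a for y in group_b]
    observed = mean(intra) - mean(inter)

    # Permutation test: shuffle the pooled similarity scores and see
    # how often a random intra/inter split matches the observed gap.
    pooled = intra + inter
    rng = random.Random(seed)
    n_intra = len(intra)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = mean(pooled[:n_intra]) - mean(pooled[n_intra:])
        if stat >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)
```

With responses that clearly differ by group (e.g., approvals for one group, denials for the other), the gap is positive and the p-value small; for identical groups the gap hovers near zero. In practice the similarity scores would come from an entailment model over extracted claims rather than token overlap.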