
The KL-divergence between a graph model and its fair I-projection as a fairness regularizer

Maarten Buyl (UGent) and Tijl De Bie (UGent)
(2021) arXiv.
Abstract
Learning and reasoning over graphs is increasingly done by means of probabilistic models, e.g. exponential random graph models, graph embedding models, and graph neural networks. When graphs are modeling relations between people, however, they will inevitably reflect biases, prejudices, and other forms of inequity and inequality. An important challenge is thus to design accurate graph modeling approaches while guaranteeing fairness according to the specific notion of fairness that the problem requires. Yet, past work on the topic remains scarce, is limited to debiasing specific graph modeling methods, and often aims to ensure fairness in an indirect manner. We propose a generic approach applicable to most probabilistic graph modeling approaches. Specifically, we first define the class of fair graph models corresponding to a chosen set of fairness criteria. Given this, we propose a fairness regularizer defined as the KL-divergence between the graph model and its I-projection onto the set of fair models. We demonstrate that using this fairness regularizer in combination with existing graph modeling approaches efficiently trades off fairness with accuracy, whereas the state-of-the-art models can only make this trade-off for the fairness criterion that they were specifically designed for.
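The core construction described above, projecting the model onto a set of fair models and using the resulting KL-divergence as a regularizer, can be illustrated with a small sketch. This is not the paper's implementation: it assumes independent Bernoulli edge probabilities `p` and a binary edge attribute `group` (hypothetical names), with a single demographic-parity-style constraint (equal mean edge probability across the two groups). For such a linear constraint on expectations, the I-projection of a product-Bernoulli distribution stays a product Bernoulli and amounts to an exponential tilt of the logits.

```python
import numpy as np
from scipy.optimize import brentq

def i_projection_parity(p, group):
    """I-project a product-Bernoulli edge distribution (probabilities p,
    assumed strictly in (0, 1)) onto the set of distributions whose mean
    edge probability is equal for group==0 and group==1 edges.

    For a linear constraint on expectations, the I-projection is an
    exponential tilt: q = sigmoid(logit(p) + lam * a), with a = +1/n0 on
    group-0 edges and a = -1/n1 on group-1 edges; lam is chosen so that
    the constraint holds.
    """
    p = np.asarray(p, dtype=float)
    group = np.asarray(group)
    n0, n1 = int((group == 0).sum()), int((group == 1).sum())
    a = np.where(group == 0, 1.0 / n0, -1.0 / n1)
    logits = np.log(p) - np.log1p(-p)

    def tilt(lam):
        return 1.0 / (1.0 + np.exp(-(logits + lam * a)))

    def gap(lam):  # group-mean difference under the tilted distribution
        q = tilt(lam)
        return q[group == 0].mean() - q[group == 1].mean()

    bound = 50.0 * max(n0, n1)          # wide bracket for the root
    lam = brentq(gap, -bound, bound)    # gap is increasing in lam
    return tilt(lam)

def kl_bernoulli(q, p):
    """Sum of per-edge KL(Bern(q) || Bern(p)). For q the I-projection of p,
    this is the minimum KL over the fair set, i.e. the value used as the
    fairness regularizer in the abstract's construction."""
    return float(np.sum(q * np.log(q / p)
                        + (1.0 - q) * np.log((1.0 - q) / (1.0 - p))))
```

Here `p`, `group`, and the parity constraint are illustrative choices; the paper treats general fairness criteria and general probabilistic graph models, where the projection need not have this simple closed form.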
Keywords
fairness, i-projection, information projection, link prediction, loss, objective

Downloads

  • main.pdf — full text (Author's original) | open access | PDF | 525.45 KB

Citation


MLA
Buyl, Maarten, and Tijl De Bie. “The KL-Divergence between a Graph Model and Its Fair I-Projection as a Fairness Regularizer.” ArXiv, 2021.
APA
Buyl, M., & De Bie, T. (2021). The KL-divergence between a graph model and its fair I-projection as a fairness regularizer.
Chicago author-date
Buyl, Maarten, and Tijl De Bie. 2021. “The KL-Divergence between a Graph Model and Its Fair I-Projection as a Fairness Regularizer.” ArXiv.
Vancouver
1. Buyl M, De Bie T. The KL-divergence between a graph model and its fair I-projection as a fairness regularizer. arXiv. 2021.
IEEE
[1] M. Buyl and T. De Bie, “The KL-divergence between a graph model and its fair I-projection as a fairness regularizer,” arXiv. 2021.
@misc{8697343,
  abstract     = {{Learning and reasoning over graphs is increasingly done by means of
probabilistic models, e.g. exponential random graph models, graph embedding
models, and graph neural networks. When graphs are modeling relations between
people, however, they will inevitably reflect biases, prejudices, and other
forms of inequity and inequality. An important challenge is thus to design
accurate graph modeling approaches while guaranteeing fairness according to the
specific notion of fairness that the problem requires. Yet, past work on the
topic remains scarce, is limited to debiasing specific graph modeling methods,
and often aims to ensure fairness in an indirect manner.
  We propose a generic approach applicable to most probabilistic graph modeling
approaches. Specifically, we first define the class of fair graph models
corresponding to a chosen set of fairness criteria. Given this, we propose a
fairness regularizer defined as the KL-divergence between the graph model and
its I-projection onto the set of fair models. We demonstrate that using this
fairness regularizer in combination with existing graph modeling approaches
efficiently trades off fairness with accuracy, whereas the state-of-the-art
models can only make this trade-off for the fairness criterion that they were
specifically designed for.}},
  author       = {{Buyl, Maarten and De Bie, Tijl}},
  keywords     = {{fairness,i-projection,information projection,link prediction,loss,objective}},
  language     = {{eng}},
  pages        = {{13}},
  series       = {{arXiv}},
  title        = {{The KL-divergence between a graph model and its fair I-projection as a fairness regularizer}},
  url          = {{https://arxiv.org/abs/2103.01846}},
  year         = {{2021}},
}