

Optimizing Efficiency, Ensuring Equity: Advancing Knowledge Distillation with a Focus on Bias Reduction

Abstract

This paper proposes a framework for model compression and debiasing in neural network training that combines two techniques: knowledge distillation and adversarial learning. The primary objective is to produce a more efficient and less biased student model by transferring knowledge from a complex teacher model while simultaneously mitigating biases inherent in the training data.
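To make the combination concrete, the following is a minimal sketch of a joint objective in PyTorch: a Hinton-style distillation loss (a tempered KL term plus hard-label cross-entropy) with an adversarial penalty subtracted, so the student is pushed away from representations that expose the protected attribute. The names (`kd_adversarial_loss`, `adv_logits`, the temperature `T`, the weights `alpha` and `beta`) and the assumption that the adversary head is trained in alternation are illustrative, not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F


def kd_adversarial_loss(student_logits, teacher_logits, labels,
                        adv_logits, protected, T=4.0, alpha=0.5, beta=0.1):
    # Soft-target term: the student matches the teacher's tempered
    # output distribution (standard knowledge distillation).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Adversarial term: an auxiliary head (assumed trained in alternation)
    # tries to predict the protected attribute from the student's
    # representation; subtracting its loss penalizes representations
    # that the adversary can exploit.
    adv = F.cross_entropy(adv_logits, protected)
    return alpha * soft + (1 - alpha) * hard - beta * adv


# Illustrative usage with random tensors: batch of 8, 10 classes,
# binary protected attribute.
s, t = torch.randn(8, 10), torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
a_logits, g = torch.randn(8, 2), torch.randint(0, 2, (8,))
loss = kd_adversarial_loss(s, t, y, a_logits, g)
```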


Despite a lack of high-quality, trainable fairness datasets, it was found that leveraging adversarial debiasing can reduce bias, as measured by disparity, and increase accuracy by re-balancing predictions in favor of under-represented attributes within a class. This study focuses on debiasing with respect to protected characteristics, specifically gender.
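As one concrete reading of "disparity," the sketch below computes a demographic parity gap: the absolute difference in positive-prediction rates between two gender groups. The function name, the binary 0/1 encoding of the attribute, and the toy data are assumptions for illustration; the paper's exact disparity metric may differ.

```python
import torch


def demographic_parity_gap(preds, protected):
    """Absolute difference in positive-prediction rates between the
    protected groups encoded as 0 and 1 (one common disparity measure)."""
    rate_0 = preds[protected == 0].float().mean()
    rate_1 = preds[protected == 1].float().mean()
    return (rate_0 - rate_1).abs().item()


# Illustrative usage: binary predictions for 8 examples with a binary
# gender attribute; group rates are 0.75 vs. 0.25, so the gap is 0.5.
preds = torch.tensor([1, 0, 1, 1, 0, 1, 0, 0])
gender = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(preds, gender))
```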


This work contributes to the ongoing research in model compression and fairness in machine learning, offering a comprehensive approach that simultaneously addresses efficiency, performance, and bias concerns. The proposed framework has the potential to advance the deployment of machine learning models in real-world applications where resource constraints and ethical considerations continue to pose challenges to AI advancement.
