Fair AI Project

By Ray Chen on Jun 30, 2025
Visualization of fairness metrics across weather and lighting conditions in model predictions.

Fair AI Visualization

The Fair AI project introduces an interactive visualization system for evaluating fairness in AI models using Signed Deviation Error (S.D.E.). The tool enables researchers to interpret bias in predictive systems across sensitive attributes (e.g., weather) and domain attributes (e.g., time of day). Using median-based signed error shifts, the visualization highlights disparities in subgroup performance, offering intuitive insight into model behavior.
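The median-based signed error shift described above can be sketched as follows. This is a minimal illustration, not the project's implementation: it assumes signed error is defined as prediction minus ground truth, and that S.D.E. is its per-subgroup median.

```python
import numpy as np

def signed_deviation_error(y_true, y_pred, groups):
    """Per-subgroup median of signed error (a sketch of the S.D.E. idea).

    A positive value suggests the model over-predicts for that subgroup;
    a negative value suggests it under-predicts.
    """
    errors = np.asarray(y_pred) - np.asarray(y_true)  # signed error
    groups = np.asarray(groups)
    return {g: float(np.median(errors[groups == g])) for g in np.unique(groups)}

# Toy example: predictions grouped by a hypothetical weather attribute
y_true = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y_pred = [1.5, 2.5, 3.5, 3.0, 4.0, 5.0]
weather = ["rain", "rain", "rain", "clear", "clear", "clear"]
print(signed_deviation_error(y_true, y_pred, weather))
# {'clear': -1.0, 'rain': 0.5} — the model over-predicts in rain, under-predicts in clear weather
```

Comparing these per-group medians against the overall median is one way to surface the subgroup disparities the dashboard visualizes.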

Key Results

  • Introduced Signed Deviation Error (the median deviation of signed error) as a fairness indicator.
  • Developed an interactive dashboard to visualize group-level error dynamics.
  • Demonstrated subgroup bias patterns using BDD-based autonomous driving data.
  • Embedded subgroup clustering using t-SNE and TRIMAP to reveal structural bias.
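The subgroup embedding step can be sketched with scikit-learn's t-SNE. This is an assumed setup on synthetic data, not the project's pipeline; the feature vectors, group labels, and parameters are all hypothetical.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical per-example feature vectors, each tagged with a domain attribute
features = rng.normal(size=(100, 16))
groups = rng.choice(["day", "night"], size=100)

# Project to 2-D; coloring the points by subgroup lets one inspect whether
# examples cluster along sensitive or domain attributes (structural bias)
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
print(embedding.shape)  # (100, 2)
```

TRIMAP could be swapped in for t-SNE with the same overall shape: fit a 2-D embedding of the feature vectors, then color by subgroup.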

Team

  • Ray Chen - Ph.D. Student
  • Christopher William Driggers-Ellis - Ph.D. Student
  • Prof. Christian Grant - Advisor

Code and Data

Publications

Coming this Fall!

Sponsors

© Copyright 2025 by UF Data Studio. Built with ♥ by ceg.me (via CreativeDesignsGuru!).