Algorithmic Bias Explorer

A toy model of how algorithms can treat groups differently even when the math “looks neutral.” Adjust thresholds and base rates for two groups and watch common fairness metrics shift.


Loan Approvals

Predict who will repay a loan. Explore how different approval thresholds change denial rates by group.

Hiring Filter

Rank job applicants and decide who gets interviews. See how small shifts change who gets screened out.

Health Risk Score

Flag “high-risk” patients for extra care. Examine who is over- or under-flagged across groups.

Model & Group Controls

Each group has a different base rate (how often the true outcome is present in that population) and a decision threshold (how strict the algorithm is about saying “yes”). In real systems, these differences can come from data quality, historical bias, or explicit policy choices.

Group A

Often the majority group in training data. Higher base rate = more people in this group truly have the positive outcome. Higher threshold = the model demands stronger evidence before saying “yes.”

Group B

Often underrepresented or historically marginalized. Different base rates and thresholds here simulate under‑measurement, noisier data, or stricter rules for this group.

We simulate 10,000 people per group and approximate fairness metrics from that toy data.
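The simulation can be sketched in a few lines. This is a minimal sketch, not the page's actual model: the score formula (noisy evidence centered on the true outcome) and the parameter values are assumptions chosen only to illustrate how base rates and thresholds interact.

```python
import random

def simulate_group(n, base_rate, threshold, seed=0):
    """Simulate n people: true outcomes drawn at the group's base rate,
    a noisy score per person, and an approve/deny decision at the threshold."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        truth = rng.random() < base_rate          # does this person truly repay?
        score = 0.4 * truth + 0.6 * rng.random()  # assumed noisy-evidence model
        approved = score >= threshold             # stricter threshold -> fewer "yes"
        results.append((truth, approved))
    return results

# Group A: higher base rate, moderate threshold.
# Group B: lower base rate, stricter threshold (simulating a harsher rule).
group_a = simulate_group(10_000, base_rate=0.6, threshold=0.5, seed=1)
group_b = simulate_group(10_000, base_rate=0.4, threshold=0.6, seed=2)
```

Even with identical score noise, the stricter threshold for Group B drives its approval rate down faster than its base rate alone would predict.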

Fairness Metrics

In this scenario, a positive prediction means “approved for a loan.” Ideally, we want equal true positive rates and false positive rates across groups, but that rarely happens automatically; in fact, when base rates differ, a well-known impossibility result says no imperfect classifier can equalize both error rates and calibration at once.

Selection Rate

Share of each group getting a positive prediction.

Equal Opportunity Gap

Difference in true positive rates (TPR) between groups.

False Positive Gap

Difference in false positive rates (FPR) between groups.
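All three metrics fall out of a group's confusion counts. The sketch below computes them from lists of `(truth, approved)` pairs; the counts for the two groups are hypothetical numbers chosen only to illustrate the arithmetic.

```python
def rates(outcomes):
    """Selection rate, TPR, and FPR from (truth, predicted_positive) pairs."""
    tp = sum(1 for t, p in outcomes if t and p)
    fp = sum(1 for t, p in outcomes if not t and p)
    fn = sum(1 for t, p in outcomes if t and not p)
    tn = sum(1 for t, p in outcomes if not t and not p)
    selection = (tp + fp) / len(outcomes)  # share getting a "yes"
    tpr = tp / (tp + fn)  # of people who truly repay, how many are approved
    fpr = fp / (fp + tn)  # of people who would default, how many are approved
    return selection, tpr, fpr

# Hypothetical confusion data for two groups of 1,000 people each.
group_a = ([(True, True)] * 500 + [(True, False)] * 100
           + [(False, True)] * 80 + [(False, False)] * 320)
group_b = ([(True, True)] * 300 + [(True, False)] * 200
           + [(False, True)] * 40 + [(False, False)] * 460)

sel_a, tpr_a, fpr_a = rates(group_a)
sel_b, tpr_b, fpr_b = rates(group_b)
equal_opportunity_gap = tpr_a - tpr_b  # gap in TPR between groups
false_positive_gap = fpr_a - fpr_b     # gap in FPR between groups
```

With these numbers, Group A is selected at 58% versus 34% for Group B, and qualified members of Group B are approved markedly less often than qualified members of Group A.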

Who Pays the Error Cost?

Which group shoulders more “unfair” errors (false negatives or false positives), given this scenario.
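One simple way to tally who bears the errors is to weight each error type by a cost and sum per group. This is a sketch: the cost weights and the confusion counts below are assumptions, not part of the model, and which error is "worse" depends on the scenario (in lending, a wrongful denial arguably costs the applicant more than a wrongful approval).

```python
def error_burden(outcomes, fn_cost=1.0, fp_cost=1.0):
    """Total error cost for a group from (truth, predicted_positive) pairs.
    A false negative is a wrongful denial; a false positive a wrongful approval."""
    fn = sum(1 for t, p in outcomes if t and not p)
    fp = sum(1 for t, p in outcomes if not t and p)
    return fn * fn_cost + fp * fp_cost

# Hypothetical confusion data; weight false negatives (denied deserving
# applicants) twice as heavily as false positives.
group_a = [(True, False)] * 100 + [(False, True)] * 80 + [(True, True)] * 820
group_b = [(True, False)] * 200 + [(False, True)] * 40 + [(True, True)] * 760

burden_a = error_burden(group_a, fn_cost=2.0, fp_cost=1.0)  # 100*2 + 80 = 280
burden_b = error_burden(group_b, fn_cost=2.0, fp_cost=1.0)  # 200*2 + 40 = 440
```

Here Group B makes fewer total errors appear in its favor but shoulders a larger weighted burden, because its errors are concentrated in the costlier false-negative category.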