Legal and industry benchmarks


Privacy

Legal metrics: Not explicit

Industry metrics:
  • Privacy budget
  • Exposure
  • k, ℓ and t values [394] (see the sketch below)

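The k, ℓ and t values referenced above can be computed directly from a released table. A minimal sketch, assuming a hypothetical toy dataset in pandas and using total-variation distance as a simplified stand-in for the Earth Mover's Distance of the original t-closeness definition:

```python
# Hypothetical toy dataset: two quasi-identifier columns and one sensitive column.
import pandas as pd

def k_anonymity(df, quasi_identifiers):
    """k = size of the smallest equivalence class over the quasi-identifiers."""
    return df.groupby(quasi_identifiers).size().min()

def l_diversity(df, quasi_identifiers, sensitive):
    """l = smallest number of distinct sensitive values within any class."""
    return df.groupby(quasi_identifiers)[sensitive].nunique().min()

def t_closeness(df, quasi_identifiers, sensitive):
    """t = largest distance between a class's sensitive-value distribution and
    the global one (total variation here; the original definition uses EMD)."""
    global_dist = df[sensitive].value_counts(normalize=True)
    worst = 0.0
    for _, equivalence_class in df.groupby(quasi_identifiers):
        class_dist = equivalence_class[sensitive].value_counts(normalize=True)
        diff = class_dist.reindex(global_dist.index, fill_value=0) - global_dist
        worst = max(worst, diff.abs().sum() / 2)
    return worst

records = pd.DataFrame({
    "age_band":  ["20-30", "20-30", "20-30", "30-40", "30-40", "30-40"],
    "zip":       ["75001", "75001", "75001", "75002", "75002", "75002"],
    "diagnosis": ["flu",   "cold",  "flu",   "cold",  "cold",  "asthma"],
})
qi = ["age_band", "zip"]
print(k_anonymity(records, qi))               # 3
print(l_diversity(records, qi, "diagnosis"))  # 2
print(t_closeness(records, qi, "diagnosis"))  # worst-case distance in [0, 1]
```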
Fairness

Legal metrics: Not explicit in the EU; the 80% rule in the US

Industry metrics:
  • Disparate impact (80% rule)
  • Equality of odds (see the sketch below)
  • Equality of opportunity
  • Minimum, invariance & directional testing scores

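A minimal sketch of the disparate impact ratio behind the US 80% rule and of equality-of-odds gaps, using NumPy on synthetic predictions (the group labels, outcomes and bias level are illustrative assumptions):

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-outcome rates, protected group vs. reference group.
    Under the US 80% rule, a ratio below 0.8 is flagged."""
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

def equalized_odds_gaps(y_true, y_pred, group):
    """Absolute gaps in true-positive and false-positive rates across groups
    (equality of odds asks for both gaps to be close to zero)."""
    def rates(mask):
        tpr = y_pred[mask & (y_true == 1)].mean()
        fpr = y_pred[mask & (y_true == 0)].mean()
        return tpr, fpr
    tpr0, fpr0 = rates(group == 0)
    tpr1, fpr1 = rates(group == 1)
    return abs(tpr1 - tpr0), abs(fpr1 - fpr0)

# Synthetic decisions with a deliberate bias against the protected group.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)                              # 0 = reference, 1 = protected
y_true = rng.integers(0, 2, 1000)                             # ground-truth outcomes
y_pred = (rng.random(1000) < 0.55 - 0.1 * group).astype(int)  # biased decisions

print("disparate impact ratio:", disparate_impact(y_pred, group))
print("TPR gap, FPR gap:", equalized_odds_gaps(y_true, y_pred, group))
```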
Explicability

Legal metrics: Not explicit

Industry metrics:
  • LIME values
  • Shapley (SHAP) values (see the sketch below)
  • Ability to deliver counterfactuals

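To illustrate the Shapley values above, here is a minimal sketch that computes them exactly for a hypothetical three-feature linear scoring model by enumerating every feature coalition; libraries such as SHAP approximate this computation efficiently for real models:

```python
from itertools import combinations
from math import factorial
import numpy as np

def model(x):
    """Hypothetical scoring model: a weighted sum of three features."""
    return 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley values: each feature's average marginal contribution,
    with absent features replaced by the baseline value."""
    n = len(x)

    def value(coalition):
        z = baseline.copy()
        for i in coalition:
            z[i] = x[i]
        return model(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(subset + (i,)) - value(subset))
    return phi

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(model, x, baseline)
print(phi)                                    # per-feature contributions
print(phi.sum(), model(x) - model(baseline))  # efficiency: the two sums match
```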
Accuracy

Legal metrics: Not explicit

Industry metrics:
  • Precision, recall & F1 score for underfitting
  • Cross-validation for overfitting (see the sketch below)

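A minimal sketch of both accuracy checks, assuming scikit-learn and a synthetic classification dataset: precision, recall and F1 on held-out data flag underfitting, while a gap between training accuracy and cross-validated accuracy flags overfitting:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic binary classification task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Precision, recall and F1 on held-out data: low values on both the training
# and the test set point to underfitting.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, clf.predict(X_test), average="binary")
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

# Cross-validation: a large gap between training accuracy and the
# cross-validated mean points to overfitting.
cv_scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("train accuracy:", clf.score(X_train, y_train), "cv mean:", cv_scores.mean())
```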
Robustness - safety

Legal metrics: Not explicit

Industry metrics:
  • Recalibration
  • Out-of-distribution (OoD) detection (see the sketch below)
  • Testing scores

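A minimal sketch of two of the safety checks above, using NumPy and synthetic scores: expected calibration error (the quantity recalibration aims to reduce) and a simple maximum-softmax-probability threshold as a basic OoD detector (real systems typically use stronger detectors):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average, over confidence bins, of the gap between the mean
    reported confidence and the observed accuracy in that bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece

rng = np.random.default_rng(0)
confidences = rng.uniform(0.5, 1.0, 2000)        # model's reported confidence
correct = rng.random(2000) < confidences - 0.1   # model is over-confident by ~0.1
print("ECE:", expected_calibration_error(confidences, correct))

# Toy OoD check: inputs whose best softmax probability falls below a threshold
# (chosen on in-distribution data) are flagged for rejection or human review.
softmax_max = rng.uniform(0.2, 1.0, 10)
print("flag as OoD:", softmax_max < 0.6)
```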
Robustness - security

Legal metrics: Not explicit

Industry metrics:
  • Robustness certification
  • Robustness under adversarial perturbations (see the sketch below)
  • Testing scores

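A minimal sketch of robustness under adversarial perturbations, using only NumPy: an FGSM-style attack against a hand-rolled logistic regression on toy data, comparing clean accuracy with accuracy on perturbed inputs (toolkits such as the Adversarial Robustness Toolbox automate this for real models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Train a logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

def predict(inputs):
    return (inputs @ w + b > 0).astype(int)

# FGSM-style perturbation: step each input by eps along the sign of the
# cross-entropy gradient with respect to the input, which pushes it towards
# the wrong side of the decision boundary.
eps = 0.5
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = np.outer(p - y, w)          # d(loss)/dx for each sample
X_adv = X + eps * np.sign(grad_x)

print("clean accuracy:      ", (predict(X) == y).mean())
print("adversarial accuracy:", (predict(X_adv) == y).mean())
```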
[393] The fourth annex of the AI Act states that AI providers must define metrics to measure the accuracy, robustness, cybersecurity and compliance of their AI systems, as well as their potentially discriminatory impacts, leaving industry actors largely free to define most of those benchmarks.

[394] Respectively referring to k-anonymity, ℓ-diversity, and t-closeness

Toolbox

  • TensorFlow Privacy
  • IBM Diffprivlib
  • AI Fairness 360
  • TensorFlow What-If Tool
  • Error Analysis Toolkit
  • AI Explainability 360 (AIX360)
  • Google Robustness Metrics
  • Adversarial Robustness Toolbox