References

Fundamental texts

  • European Convention on Human Rights, 1950

  • European Union Charter of Fundamental Rights, 2000

Regulations & directives

  • Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

  • Proposal (EU) 2021/0106 for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

  • Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation

European Parliament

  • EP, “Artificial Intelligence ante portas: Legal & ethical reflections” (Briefing), 2019

  • EP, “EU guidelines on ethics in artificial intelligence: Context and implementation” (Briefing), 2019

  • EP, “The impact of the General Data Protection Regulation (GDPR) on artificial intelligence” (Study), June 2020

  • EP, “European framework on ethical aspects of artificial intelligence, robotics and related technologies”, September 2020

European Commission

  • European Commission, “White Paper On Artificial Intelligence - A European approach to excellence and trust”, 19 February 2020

EDPB, EDPS & ENISA

  • WP29, "Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC", 9 April 2014

  • WP29, "Guidelines on Automated Individual Decision-Making and Profiling for the purposes of Regulation 2016/679", adopted on 3 october 2017

  • WP29, "Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is « likely to result in a high risk » for the purposes of Regulation 2016/679", 2017

  • EDPB, "Guidelines 07/2020 on the concepts of controller and processor in the GDPR", 7 July 2020

  • EDPB, "Guidelines 05/2020 on consent under Regulation 2016/679”, 4 May 2020

  • EDPB, "Guidelines 2/2019 on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects”, 8 October 2019

  • EDPB, "Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data", 18 June 2021

  • EDPB, Letter Ref: OUT2022-0009, 22 February 2022

  • EDPS, "Synthetic data” EDPS, “Pseudonymous data: processing personal data while mitigating risks”, 21 December 2021 56

  • ENISA, "Recommendations on shaping technology according to GDPR provisions - An overview on data pseudonymisation", 28 January 2018

  • ENISA, "Pseudonymisation techniques and best practices", 3 December 2019

  • ENISA, "Data Pseudonymisation: Advanced Techniques and Use Cases", 28 January 2021

  • ENISA, "Boosting your Organisation's Cyber Resilience - Joint Publication”, 14 February 2022

Jurisprudence

  • US Supreme Court, Watson v. Fort Worth Bank & Trust, 1988

  • Case C‑389/20, CJ v. Tesorería General de la Seguridad Social [2021] ECR

  • Case C-132/92, Birds Eye Walls [1993] ECR I-5592

  • Case C-411/98, Angelo Ferlini v. Centre Hospitalier de Luxembourg [2000] ECR I-8141

  • Case 57325/00, D.H. and Others v. the Czech Republic [2007] ECHR

  • Case C-524/06, Heinz Huber v. Bundesrepublik Deutschland [2008] ECR

  • Case C-152/73, Sotgiu v. Deutsche Bundespost [1974] ECR

Data protection authorities

  • CNIL, "L’anonymisation de données personnelles", 19 May 2020

  • CNIL, "Chacun chez soi et les données seront bien gardées : l’apprentissage fédéré" (LINC), April 2022

  • CNIL, “Quels usages pour les données anonymisées ?" (LINC), November 2017

  • CNIL, “Guide d'auto-évaluation pour les systèmes d'intelligence artificielle (IA)"

  • ICO, “Anonymisation: managing data protection risk code of practice, Annex 2 – Anonymisation case-studies"

  • ICO, "Principle (e): Storage limitation"

  • ICO, “Overview of the General Data Protection Regulation (GDPR)”

  • ICO, "Guidance on the AI auditing framework: draft guidance for consultation", 2020

  • Datatilsynet, "Artificial intelligence and privacy report", January 2018

  • AEPD, " Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción”, 2020

US authorities

  • U.S. Congress, National Artificial Intelligence Initiative Act of 2020

  • U.S. Congress, "Promoting Digital Privacy Technologies Act", introduced in congress on the 2 April 2021

  • U.S. Census, "The Census Bureau's Simulated Reconstruction-Abetted Re-identification Attack on the 2010 Census”, 7 May 2021

  • U.S. Census, "2020 Decennial Census: Processing the Count: Disclosure Avoidance Modernization", 2 November 2021

  • US Census, "Disclosure Avoidance: Latest Frequently Asked Questions", last revised on 7 December 2021

  • U.S. Department of State, “Artificial Intelligence (AI)"

  • NIST, "Differentially Private Synthetic Data" (Cybersecurity Insights)

  • U.S. EEOC, "Uniform guidelines on employee selection procedures", March 2, 1979.

  • U.S. Senate Committee on Armed Services, Committee Hearing of Thursday, 13 February 2020

  • U.S. Department of Energy Office of Scientific and Technical Information, “Defending Against Adversarial Examples”, 2019

Technical authorities & agencies

  • European Medicines Agency, "Data anonymisation - a key enabler for clinical data sharing", 4 December 2018

  • Bundesamt für Sicherheit in der Informationstechnik (BSI), "Sicherer, robuster und nachvollziehbarer Einsatz von KI", 9 February 2021

  • ILNAS, "Artificial intelligence : technology, use cases and applications, trustworthiness and technical standardization” (White paper), February 2021

  • WIPRO, "Robust or resilient, How to Navigate Uncertainty and the New Normal”, October 2020

  • ANSSI, "Cyber résilience en entreprise – Enjeux, référentiels et bonnes pratiques"

  • AI HLEG, "Ethics Guidelines for Trustworthy AI", 8 April 2019, p. 36

  • ECB, “Guideline of the European Central Bank on the data quality management framework for the Centralised Securities Database” (ECB/2012/21), 2012

Repositories

  • Kaggle, “Ethics and AI: how to prevent bias on ML?”, 2019

  • GitHub, "cleverhans-lab/cleverhans", last commit in September 2021

  • GitHub, "Trusted-AI/AIF360", last commit in December 2019

  • GitHub, “poloclub/FairVis”, last commit in August 2021

  • GitHub, “dssg/aequitas”, last commit in May 2021

  • GitHub, "Trusted-AI/AIX360", last commit in June 2022

  • GitHub, "Trusted-AI/adversarial-robustness-toolbox", last commit in June 2022

  • GitHub, “fairlearn/fairlearn”, last commit in June 2022

  • GitHub, “microsoft/responsible-ai-toolbox”, last commit in June 2022

  • GitHub, "awslabs/privacy-preserving-xgboostinference”, last commit in January 2022

Legal literature

  • Paul Ohm, "Broken promises of privacy: responding to the suprising failure of anonymization”, 2010

  • Wachter et al., “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR”, 2017

  • Wachter et al., "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation", 2017

  • Tourkochoriti, “Jenkins v. Kingsgate and the Migration of the US Disparate Impact Doctrine in EU Law”, 2017

  • Mehrabi et al., "A Survey on Bias and Fairness in Machine Learning", 2019

  • Suresh et al., “A framework for understanding unintended consequences of machine learning”, 2019

  • Floridi et al., “The Ethics of Information Transparency”, 2021

  • Raji et al., “Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing”, 2020

  • Hutchinson, “Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure”, 2020

  • Mökander et al., "Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation" (Oxford), 2021

  • Floridi et al., “capAI, a Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act” (Oxford), 2022

Technical literature

  • Latanya et al., "Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression", 1998

  • Latanya Sweeney, "Simple Demographics Often Identify People Uniquely”, 2000

  • Arvind Narayanan, "How To Break Anonymity of the Netflix Prize Dataset", 2006

  • Machanavajjhala et al., “l-Diversity: Privacy Beyond k-Anonymity”, 2006

  • Dwork et al., "Our Data, Ourselves: Privacy via Distributed Noise Generation", 2006 Erlingsson et al., "RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response", 2014

  • Cynthia Dwork and Aaron Roth, “The Algorithmic Foundations of Differential Privacy”, 2014

  • Goodfellow et al., "Generative Adversarial Networks”, 2014

  • Gorissen et al., "A Practical Guide to Robust Optimization", 2015

  • Feldman et al., "Certifying and Removing Disparate Impact”, 2015

  • Goodfellow et al., "Explaining and harnessing adversarial examples", 2015

  • Papernot et al., "Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks", 2015

  • Amodei et al., “Concrete Problems in AI Safety”, 2016

  • Bryce Goodman and Seth Flaxman, "EU Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’", 2016

  • Abadi et al., "Deep Learning with Differential Privacy", 2016

  • Ribeiro et al., "'Why Should I Trust You?': Explaining the Predictions of Any Classifier", 2016

  • Ledig et al., "Photo-Realistic Single Image Super-Resolution Using a Generative AdversarialNetwork", 2016

  • Z. C. Lipton, "The Mythos of Model Interpretability", 2016

  • Kurakin et al., "Adversarial examples in the physical world", 2016

  • Nicholas Carlini & David Wagner, “Defensive Distillation is Not Robust to Adversarial Examples”, 2016

  • Mendoza et al., “The Right not to be Subject to Automated Decisions based on Profiling”, 2017

  • Lundberg and Lee, "A Unified Approach to Interpreting Model Predictions", 2017

  • Gu et al., “BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain”, 2017

  • Wicker et al., “Feature-Guided Black-Box Safety Testing of Deep Neural Networks”, 2017

  • Guo et al., “On the calibration of modern neural networks”, 2017

  • Madry et al., “Towards Deep Learning Models Resistant to Adversarial Attacks”, 2017

  • Steinhardt et al., "Certified Defenses for Data Poisoning Attacks”, 2017

  • Liu et al., “Trojaning Attack on Neural Networks”, 2017

  • Chen et al., "ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models”, 2017

  • Thilo Strauss et al., "Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks", 2017

  • Valentina Zantedeschi et al., “Efficient Defenses against Adversarial Attacks”, 2017

  • Carlini and Wagner, "Adversarial Examples are not Easily Detected: Bypassing Ten Detection Methods", 2017

  • Wang et al., "Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training”, 2018

  • Papernot et al., "Scalable Private Learning with PATE", 24 February 2018

  • Gilmer et al., “Motivating the Rules of the Game for Adversarial Example Research”, 2018

  • Stutz et al., “Disentangling Adversarial Robustness and Generalization”, 2018

  • Tsipras et al., "Robustness may be at odds with accuracy”, 2018 Dong Su et al., "Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models", 2018

  • Wang et al., “Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach”, 2018

  • Boopathy et al., “CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks”, 2018

  • Tsui-Wei Weng et al., "Towards Fast Computation of Certified Robustness for ReLU Networks", 2018

  • Zhang et al., "Efficient Neural Network Robustness Certification with General Activation Functions", 2018 Saleiro et al., “Aequitas: A Bias and Fairness Audit Toolkit”, 2018

  • Nalisnick et al., “Do Deep Generative Models Know What They Don't Know?”, 2018

  • Carlini et al., "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text", 2018

  • Andrew Ilyas et al., "Black-box Adversarial Attacks with Limited Queries and Information", 2018

  • Mitchell et al., “Model Cards for Model Reporting”, 2018

  • Cheng et al., “Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach", 2019

  • Schulam et al., “Can You Trust This Prediction? Auditing Pointwise Reliability After Learning", 2019

  • Ting-En Lin & Hua Xu, "Deep Unknown Intent Detection with Margin Loss", 2019

  • Stefan Larson, "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction”, 2019

  • Rahimian et al., “Distributionally Robust Optimization: A Review", 2019

  • He et al., "Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation", 2019

  • Oakden-Rayner et al., “Hidden Stratification Causes Clinically Meaningful Failures in Machine Learning for Medical Imaging”, 2019

  • Prost et al., "Toward a better trade-off between performance and fairness with kernel-based distribution matching", 25 October 2019

  • Saria et al., “Tutorial: Safe and Reliable Machine Learning", 2019

  • Schorn et al., "Automated design of error-resilient and hardware-efficient deep neural networks", 2019

  • Cabrera et al., “FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning”, 2019

  • Hendrycks et al., "The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization", 2020

  • Abdar et al., “A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges", 2020

  • Stadler et al., "Synthetic Data -- Anonymisation Groundhog Day", 2020

  • N. Benjamin Erichson et al., "Noise-response Analysis for Rapid Detection of Backdoors in Deep Neural Networks", 2020

  • Bourtoule et al., "Machine Unlearning", 2019 Choquette-Choo et al., "Teaching Machines to Unlearn", 2020

  • Andreux et al., "Siloed Federated Learning for Multi-Centric Histopathology Datasets", 2020

  • Yao-Yuan Yang et al., "A Closer Look at Accuracy vs. Robustness”, 2020

  • La Malfa et al., “Assessing Robustness of Text Classification through Maximal Safe Radius Computation”, 2020

  • Jie Zhang et al., "Model Watermarking for Image Processing Networks", 2020

  • Li et al., "SoK: Certified Robustness for Deep Neural Networks", 2020

  • Nick Frosst, "Certifiable Robustness to adversarial Attacks; What is the Point?”, 2020

  • N. Benjamin Erichson et al., "Lipschitz Recurrent Neural Networks", 2020

  • Ribeiro et al., "Beyond Accuracy: Behavioral Testing of NLP models with CheckList", 2020

  • Long et al., "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators”, 2021

  • Sharma et al.,"MLCheck- Property-Driven Testing of Machine Learning Models", 2021

  • Levy et al., “RoMA: a Method for Neural Network Robustness Measurement and Assessment”, 2021

  • Xue et al., "Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations", 2021

  • Chen et al., "De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks", 2021

  • Nguyen Truong et al., "Privacy Preservation in Federated Learning: An insightful survey from the GDPR Perspective", 2021

  • Lee et al., “Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network”, 2021

  • Knott et al., "CrypTen: Secure Multi-Party Computation Meets Machine Learning", 2021

  • APP, "Multiparty computation as supplementary measure and potential data anonymization tool", 2021

  • Wang et al., “Provable Guarantees on the Robustness of Decision Rules to Causal Interventions”, 2021

  • Abdelzad et al., "Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an Early-Layer Output”, 2021

  • Minto et al., "Stronger Privacy for Federated Collaborative Filtering With Implicit Feedback”, 2021 Muhammad Aitsam, "Differential Privacy Made Easy", 2022

  • Stanley L. Warner, "Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias”, 2022

  • Ravikumar et al., "Norm-Scaling for Out-ofDistribution Detection", 2022

  • Zhang et al., "Membership inference attacks against synthetic health data", 2022

  • Carlos Mougan et al., "Introducing explainable supervised machine learning into interactive feedback loops for statistical production systems", 2022

Google research

  • Google Cloud, “Recommendations AI”

  • Google PAIR Explorables, “How randomized response can help collect sensitive information responsibly”

  • Google PAIR Explorables, “Hidden Bias”

  • Google AI Blog, “Improving Out-of-Distribution Detection in Machine Learning Models”, 2019

  • Alexandra White, “Privacy Budget, limit the amount of individual user data exposed to sites to prevent covert tracking” (Chrome developers), 4 March 2022

  • McMahan et al., "Federated Learning with Formal Differential Privacy Guarantees" (Google research), 28 February 2022

  • Erlingsson et al., "Learning Statistics with Privacy, aided by the Flip of a Coin" (Google Research), 2014

  • Andrew Hard et al., “Federated Learning for Mobile Keyboard Prediction” (Google Research), 2018

Apple research

  • Apple Differential Privacy Team, “Learning with Privacy at Scale” (Apple Machine Learning Research), December 2017

  • Apple, "Apple Differential Privacy Technical Overview”

IBM research

  • IBM Research Trusted AI, “AI Explainability 360”

  • Pin-Yu Chen, "Certifying Attack Resistance of Convolutional Neural Networks" (IBM research), 2019

Amazon research

  • Xianrui Meng, “Machine learning models that act on encrypted data” (Amazon Science), 27 November 2020

Microsoft research

  • Vaughan et al., "A Human-Centered Agenda for Intelligible Machine Learning" (Microsoft Research), 2020

  • Besmira Nushi, “Responsible Machine Learning with Error Analysis” (Microsoft Research), 2021

Industry white papers

  • Huawei, “AI Security White Paper”, 2018

  • Wood et al., "Safety first for automated driving", 2019

  • PwC, "A practical guide to Responsible Artificial Intelligence (AI)", 2020
