Compliance Engineering for High-Risk HR AI under the EU AI Act: Discrimination Risks, Controls, and Audit Evidence

Authors

Lutsenko, K.
DOI:

https://doi.org/10.5281/zenodo.18525816

Keywords:

Algorithmic hiring, HR analytics, Algorithmic discrimination, EU AI Act, Compliance engineering, Bias audit, Human oversight

Abstract

Artificial intelligence (AI) is increasingly used in human resources for recruitment and worker evaluation, including résumé screening, candidate ranking, online assessments, video-interview scoring, and performance analytics. While these systems can improve efficiency and consistency, they may also introduce or amplify discrimination through proxy variables, historically biased labels, measurement error in “soft” constructs, and feedback loops across hiring and performance pipelines. This paper proposes a compliance engineering framework that operationalizes the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) for high-risk HR AI systems by translating legal obligations into implementable technical and governance controls. The framework integrates the NIST AI Risk Management Framework lifecycle with HR-specific fairness practices, data protection safeguards relevant to automated decision-making, and enforceable bias-audit patterns from employment regulation. Results include (i) a reference governance-and-technical architecture for HR AI, and (ii) a control–metric matrix mapping discrimination risk modes to test procedures, mitigations, and audit-ready evidence artifacts. The paper concludes with practical compliance dossier templates suitable for both deployers and vendors, supporting traceability, meaningful human oversight, and continuous monitoring of performance and subgroup fairness.
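The enforceable bias-audit pattern the abstract refers to can be illustrated with subgroup selection rates and impact ratios, the core metric required by NYC Local Law 144 bias audits (each group's selection rate divided by the selection rate of the most-selected group). The sketch below is illustrative only, not the paper's implementation; the function name and the sample data are hypothetical.

```python
from collections import defaultdict

def impact_ratios(records):
    """Compute per-subgroup selection rates and impact ratios.

    Follows the Local Law 144-style metric: impact ratio =
    subgroup selection rate / highest subgroup selection rate.
    `records` is an iterable of (subgroup, was_selected) pairs.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: {"selection_rate": r, "impact_ratio": r / best}
            for g, r in rates.items()}

# Hypothetical screening outcomes: (subgroup, advanced_to_interview)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
report = impact_ratios(outcomes)
# Group A: selection rate 0.75 (impact ratio 1.0);
# Group B: selection rate 0.25 (impact ratio ~0.33)
```

In an audit-evidence dossier of the kind the paper proposes, such per-subgroup figures would be logged alongside the test procedure and mitigation taken, rather than reported as a single aggregate accuracy number.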

References

European Data Protection Board. (2018, May 25). Automated decision-making and profiling.

European Data Protection Board. (2020). Guidelines 05/2020 on consent under Regulation 2016/679 (Version 1.0).

European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88.

European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act). Official Journal of the European Union.

Equal Employment Opportunity Commission. (n.d.). What is the EEOC’s role in AI? (Technical assistance).

Future of Privacy Forum. (2022, May). Automated decision-making under the GDPR: Practical cases from courts and data protection authorities.

Hunton Andrews Kurth LLP. (n.d.). Impact of the EU AI Act on human resources activities (Client alert).

ISO/IEC. (2021). ISO/IEC TR 24027:2021: Information technology—Artificial intelligence (AI)—Bias in AI systems and AI aided decision making. International Organization for Standardization.

Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633–705.

Mayer Brown. (n.d.). Commentary on EEOC guidance/technical assistance on AI and disparate impact (Client alert).

New York City Department of Consumer and Worker Protection. (n.d.). Automated Employment Decision Tools (Local Law 144).

New York City Department of Consumer and Worker Protection. (2023). Notice of adoption / statement of basis and purpose: Rules implementing Automated Employment Decision Tools (Local Law 144).

New York State Office of the State Comptroller. (2025, December 2). Enforcement of Local Law 144 – Automated Employment Decision Tools (Audit report).

Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT ’20) (pp. 469–481). Association for Computing Machinery.

Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2019). Mitigating bias in algorithmic hiring: Evaluating claims and practices (arXiv:1906.09208). arXiv.

Reuters. (2024, April 11). EEOC says Workday must face claims that AI software is biased.

Reuters. (2024, May 14). Workday urges judge to toss bias class action over AI hiring software.

Taylor Wessing. (n.d.). The EU AI Act from an HR perspective (Client alert).

Upturn. (2018). Help wanted: An exploration of hiring algorithms, equity, and bias.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732.

Article 29 Data Protection Working Party. (2018, February 6). Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679 (WP251rev.01).

European Data Protection Board. (n.d.). Endorsed WP29 Guidelines (listing of WP251rev.01 and related endorsed guidance).

National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). National Institute of Standards and Technology.

New York City Department of Consumer and Worker Protection. (2021). Local Law 144 of 2021 (Automated Employment Decision Tools).

Perkins Coie LLP. (2023, April 6). New York City adopts final rules for law governing Automated Employment Decision Tools (Client update).

International Association of Privacy Professionals. (2022). Coverage of FPF ADM report / case-law analysis (news item).

Regulation (EU) 2016/679 (GDPR). (2016). Article 22: Automated individual decision-making, including profiling (consolidated text reference).

ISO/IEC. (2021). ISO/IEC TR 24027:2021 (IEC webstore listing).

Future of Privacy Forum. (2022). Automated decision-making under the GDPR (blog release/summary page).

Published

2025-09-30

How to Cite

Lutsenko, K. (2025). Compliance Engineering for High-Risk HR AI under the EU AI Act: Discrimination Risks, Controls, and Audit Evidence. Ege Scholar Journal, 2(3), 184–195. https://doi.org/10.5281/zenodo.18525816
