Control Coverage and Auditability in Managerial AI Systems: Evidence for EU AI Act-Oriented Compliance
Keywords:
AI governance, managerial decision-making, accountability, ethics, EU AI Act, compliance

Abstract
Artificial intelligence is increasingly used to support managerial decisions in hiring, credit, customer interaction, fraud detection, and planning. The EU Artificial Intelligence Act establishes a risk-based framework that requires traceable controls across the AI lifecycle, including documentation, logging, transparency, human oversight, and post-deployment monitoring. This study proposes a control-based compliance operating model and evaluates it using a structured dataset of managerial AI use cases scored for control coverage and auditability. In the pilot sample (n = 6), control coverage shows a strong positive correlation with auditability (r = 0.887), and a linear model accounts for 78.6% of the variance in auditability (R² = 0.786). Scenario projections indicate that increasing control coverage by 15 points in high-risk use cases could raise expected auditability by approximately 18 points. The findings support the practical claim that EU AI Act compliance is achieved through repeatable controls and evidence-based artefacts rather than policy statements alone, and they offer a testable checklist for trading partners in cross-border value chains.
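For readers who want to check the arithmetic behind these figures, the minimal Python sketch below computes a Pearson correlation, an ordinary least-squares fit, and a +15-point scenario projection on six hypothetical (coverage, auditability) score pairs. The scores are illustrative placeholders, not the study's dataset; the only relationship taken from the abstract is that a 15-point coverage gain mapping to roughly an 18-point auditability gain implies a fitted slope near 18/15 = 1.2.

# Minimal sketch: relating control-coverage scores to auditability scores with
# a Pearson correlation, a simple least-squares fit, and a scenario projection.
# The six score pairs are HYPOTHETICAL placeholders, not the study's data.
import statistics

coverage     = [55, 62, 70, 74, 81, 90]   # illustrative control-coverage scores (0-100)
auditability = [48, 60, 69, 75, 84, 95]   # illustrative auditability scores (0-100)

mean_x = statistics.fmean(coverage)
mean_y = statistics.fmean(auditability)

# Pearson correlation coefficient r
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(coverage, auditability))
sxx = sum((x - mean_x) ** 2 for x in coverage)
syy = sum((y - mean_y) ** 2 for y in auditability)
r = sxy / (sxx * syy) ** 0.5

# Ordinary least squares: auditability ≈ intercept + slope * coverage
slope = sxy / sxx
intercept = mean_y - slope * mean_x
r_squared = r ** 2  # with a single predictor, R² is simply r squared

# Scenario projection: a +15-point increase in control coverage shifts the
# predicted auditability score by 15 * slope.
delta_coverage = 15
projected_gain = delta_coverage * slope

print(f"r = {r:.3f}, R² = {r_squared:.3f}, slope = {slope:.2f}, intercept = {intercept:.1f}")
print(f"Projected gain for +{delta_coverage} coverage points: {projected_gain:.1f}")

Note that 0.887² ≈ 0.787, so the reported R² of 0.786 is consistent, up to rounding, with a simple regression of auditability on control coverage alone.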
References
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning.
Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1).
Daci, E., & Rexhepi, B. R. (2024). The role of management in microfinance institutions in Kosovo: Case study Dukagjini Region. Quality – Access to Success, 25(202). https://doi.org/10.47750/QAS/25.202.22
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv:1702.08608.
European Data Protection Board. (2018). Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679 (WP29 guidelines endorsed by the EDPB).
European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689, 12.7.2024.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies (pp. 167–194). MIT Press.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57.
IBM. (2023). IBM Global AI Adoption Index: Enterprise Report (December 2023).
IEEE. (2019). Ethically Aligned Design (1st ed.). IEEE.
ISO/IEC. (2022). ISO/IEC 27001:2022 Information security management systems—Requirements. ISO.
ISO/IEC. (2023). ISO/IEC 23894:2023 Information technology—Artificial intelligence—Guidance on risk management. ISO.
Kroll, J. A. (2021). Outlawing discrimination in the algorithmic economy. Cambridge University Press.
McKinsey & Company. (2024). The state of AI in early 2024.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).
Murtezaj, I. M., Rexhepi, B. R., Dauti, B., & Xhafa, H. (2024). Mitigating economic losses and prospects for the development of the energy sector in the Republic of Kosovo. Economics of Development. https://doi.org/10.57111/econ/3.2024.82
Murtezaj, I. M., Rexhepi, B. R., Xhaferi, B. S., Xhafa, H., & Xhaferi, S. (2024). The study and application of moral principles and values in the fields of accounting and auditing. Pakistan Journal of Life and Social Sciences, 22(2), 3885–3902. https://doi.org/10.57239/PJLSS-2024-22.2.00286
NIST. (2023). AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology.
NIST. (2024). Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST-AI-600-1). https://doi.org/10.6028/NIST.AI.600-1
OECD. (2019). OECD Principles on Artificial Intelligence. OECD.
OECD. (2024). Evolving with innovation: The 2024 OECD AI Principles update.
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14.
Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES '19).
Rexhepi, B. R., Murtezaj, I. M., Xhaferi, B. S., Raimi, N., Xhafa, H., & Xhaferi, S. (2024). Investment decisions related to the allocation of capital. Educational Administration: Theory and Practice, 30(6), 513–527. https://doi.org/10.53555/kuey.v30i6.5233
Rexhepi, B. R., Mustafa, L., Sadiku, M. K., Berisha, B. I., Ahmeti, S. U., & Rexhepi, O. R. (2024). The impact of the COVID-19 pandemic on the dynamics of development of construction companies and the primary housing market: Assessment of the damage caused, current state, forecasts. Architecture Image Studies, 5(2). https://doi.org/10.48619/ais.v5i2.988
Rexhepi, B. R., Berisha, B. I., & Mustafa, L. (2022). Developing factoring service for small and medium enterprises at Kosovo’s Pro Credit Bank. Baltic Journal of Law & Politics, 15(7), 81–112. https://doi.org/10.2478/bjlp-2022-007009
Shneiderman, B. (2022). Human-centered AI. Oxford University Press.
UNESCO. (2023). Guidance for generative AI in education and research. UNESCO.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the GDPR. International Data Privacy Law, 7(2), 76–99.
License
Copyright (c) 2026 Burhan Rexhepi

This work is licensed under a Creative Commons Attribution 4.0 International License.
All articles published in Ege Scholar Journal (ESJ) are open access and licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This licence permits use, sharing, adaptation, distribution, and reproduction in any medium or format, provided that appropriate credit is given to the author(s) and the source, a link to the licence is provided, and any changes made are indicated.
The author(s) retain copyright for their work. As long as they meet the conditions of CC BY 4.0, users are free to download, read, copy, print, and redistribute the content without prior permission.
Third-party material included in an article (e.g., figures, tables, images) is covered by the same licence unless otherwise stated in the credit line. If material is not included under the article’s licence and your intended use is not permitted by statutory regulation, permission must be obtained from the copyright holder.
License: https://creativecommons.org/licenses/by/4.0/








