Artificial Intelligence in the formulation and customization of public policies: Between efficiency, equity, and democratic risk

Authors

DOI:

https://doi.org/10.31207/colloquia.v12i1.197

Keywords:

Artificial intelligence, public policy, algorithmic decision-making, digital governance

Abstract

The integration of artificial intelligence (AI) systems into the public policy cycle is substantially transforming how states diagnose problems, design interventions, allocate resources, and evaluate outcomes. This article critically examines how AI reconfigures the relationships among administrative efficiency, distributive equity, institutional transparency, and democratic legitimacy, and proposes an operational framework for its responsible adoption in public contexts. The analysis is guided by the question: under what conditions do the benefits of AI justify the risks it poses to contemporary democratic governance? To answer it, the research develops a conceptual and methodological analysis built on four pillars: efficiency, equity, transparency, and legitimacy. For each pillar, it offers an operational definition, specifies verifiable indicators for assessing compliance, and sets the normative limits that should guide its application. The research also distinguishes between political personalization and administrative personalization, showing that administrative personalization improves the targeting and delivery of services but can also exacerbate inequalities when the underlying data are biased or human oversight is weak. The article proposes a governance sequence before, during, and after implementation that links algorithmic impact assessment, human oversight at key decision points, periodic external audits, traceability, disclosure of rationale, and workable appeal mechanisms. Drawing on international cases in health, social protection, and education, it identifies enabling conditions and veto conditions to ensure that efficiency is not prioritized over social justice, non-discrimination, and proper accountability. The article concludes that AI can help strengthen state capacity, but only if its implementation takes place within an institutional framework that safeguards citizen participation, democratic control, and transparency. AI does not replace human judgment or deliberative processes; it complements them when clear safeguards, verifiable standards, and continuous oversight focused on the public good are in place.
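To make the framework's veto logic concrete, the sketch below models it in Python. This is a minimal illustration under stated assumptions: the class names, indicator names, scores, and thresholds are hypothetical and do not come from the article; it shows only how verifiable indicators with normative floors can halt a deployment, so that efficiency gains never outvote equity, transparency, or legitimacy.

```python
# Hypothetical sketch of the four-pillar framework; all names, scores, and
# thresholds below are illustrative assumptions, not the authors' instrument.
from dataclasses import dataclass, field


@dataclass
class Indicator:
    """A verifiable indicator with a normative limit (veto floor)."""
    name: str
    value: float       # measured score in [0, 1]
    veto_below: float  # normative floor; falling below triggers a veto


@dataclass
class Pillar:
    """One of the four pillars: efficiency, equity, transparency, legitimacy."""
    name: str
    indicators: list[Indicator] = field(default_factory=list)

    def vetoes(self) -> list[str]:
        """Return the indicators whose normative floor is violated."""
        return [i.name for i in self.indicators if i.value < i.veto_below]


def governance_decision(pillars: list[Pillar]) -> str:
    """Apply the veto logic: any violated floor halts deployment, so a high
    efficiency score cannot compensate for failures on the other pillars."""
    triggered = {p.name: p.vetoes() for p in pillars if p.vetoes()}
    if triggered:
        return f"HALT: veto conditions triggered -> {triggered}"
    return "PROCEED: all normative floors satisfied; continue monitoring"


if __name__ == "__main__":
    pillars = [
        Pillar("efficiency", [Indicator("cost_per_case_reduction", 0.85, 0.50)]),
        Pillar("equity", [Indicator("error_rate_parity_across_groups", 0.40, 0.70)]),
        Pillar("transparency", [Indicator("decision_traceability", 0.90, 0.80)]),
        Pillar("legitimacy", [Indicator("functional_appeals_mechanism", 0.75, 0.60)]),
    ]
    # The equity indicator sits below its floor, so the run halts despite
    # a strong efficiency score.
    print(governance_decision(pillars))
```

In this toy run, the strong efficiency score cannot offset the equity indicator falling below its normative floor, mirroring the article's insistence that efficiency not be prioritized over social justice and non-discrimination.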

Published

2025-12-20