The rapid expansion of artificial intelligence (AI) has structurally transformed decision-making processes in areas such as healthcare, education, the public sector, and environmental management. Far from being a neutral set of technical tools, AI today constitutes a sociotechnical infrastructure that embeds values, normative assumptions, and power relations. In this context, the Artificial Intelligence for Social Good (AI4SG) approach emerges as a theoretical–practical framework aimed at aligning the design, implementation, and evaluation of AI systems with explicit goals of social well-being, equity, and sustainability.
AI4SG can be defined as a field of research and practice that applies advances in artificial intelligence to address social problems and improve the well-being of individuals, societies, and the planet as a whole.
AI4SG is not limited to the application of advanced technologies to social challenges; rather, it proposes a normative reorientation of algorithmic innovation, integrating applied ethics, governance, and social impact assessment as structural components of technological development.
From Technological Optimism to Algorithmic Critique
Contemporary critical literature has shown that the indiscriminate adoption of algorithmic systems can generate significant adverse effects. O’Neil (2016) documents how opaque predictive models, even when statistically robust, can amplify inequalities and consolidate forms of structural exclusion. In a complementary vein, Benjamin (2019) demonstrates that AI systems tend to reproduce pre-existing racial and social hierarchies, configuring what she calls the "New Jim Code."
From a broader perspective, AI is increasingly understood as a material and political phenomenon, whose operation depends on global chains of resource extraction, precarious human labor, and asymmetric concentrations of power (Crawford, 2021). These contributions converge in highlighting that technical efficiency is not a sufficient criterion for assessing the social legitimacy of AI, thereby opening conceptual space for approaches such as AI4SG.
Definition and Scope of AI4SG
Following Cowls (2022), AI4SG can be defined as the set of approaches, methodologies, and practices aimed at maximizing the positive social impact of AI while minimizing ethical, social, and environmental risks. This approach is characterized by three fundamental features:
- Normative intentionality: social objectives are not collateral effects, but explicit goals of the system.
- Human-centeredness: AI is conceived as support for human deliberation and decision-making, not as a substitute for moral responsibility.
- Impact assessment: system performance is measured in both technical and social terms.
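The third feature, measuring performance in both technical and social terms, can be made concrete with a dual-metric evaluation: a standard technical metric such as accuracy is reported alongside a social metric such as the gap in true-positive rates across demographic groups (sometimes called the equal opportunity difference). The following is a minimal sketch; the labels, predictions, and group memberships are synthetic illustrations, not data from the text.

```python
# Minimal sketch: evaluating a classifier in both technical and social terms.
# All data below is synthetic and purely illustrative.

def accuracy(y_true, y_pred):
    """Technical metric: fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that the model detects."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    return sum(p == 1 for _, p in positives) / len(positives)

def equal_opportunity_gap(y_true, y_pred, group):
    """Social metric: largest difference in TPR between groups."""
    tprs = []
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        tprs.append(true_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx]))
    return max(tprs) - min(tprs)

# Synthetic example: 8 cases, two demographic groups "a" and "b"
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy(y_true, y_pred))                      # technical performance
print(equal_opportunity_gap(y_true, y_pred, group))  # social performance
```

A system can score well on the first number while failing badly on the second, which is precisely why AI4SG treats social measurement as a structural component of evaluation rather than an afterthought.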
From this perspective, AI4SG lies at the intersection of data science, technology ethics, and public policy.
Guiding Principles of AI4SG
The specialized literature converges around a set of principles that structure AI4SG projects:
- Algorithmic justice and equity, through the identification and mitigation of bias.
- Transparency and explainability, as conditions for public trust.
- Responsibility and accountability, clearly defining actors and roles.
- Precaution and proportionality, especially in contexts of high vulnerability.
- Verifiable social impact, beyond operational efficiency.
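The first principle, identification of bias, has a standard quantitative counterpart: demographic parity, which compares selection rates across groups. The sketch below computes a disparate-impact ratio over synthetic decisions; the 0.8 cutoff follows the common "four-fifths" heuristic from US employment guidance and is used here only as an illustrative threshold, not one prescribed by the text.

```python
# Minimal sketch of bias identification via demographic parity.
# Decisions and group labels are synthetic; the 0.8 cutoff is an
# illustrative heuristic, not a legal or universal standard.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions (1 = selected) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Min selection rate divided by max selection rate (1.0 = parity)."""
    rates = selection_rates(decisions, groups).values()
    return min(rates) / max(rates)

# Synthetic hiring decisions for two groups "x" and "y"
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["x", "x", "x", "x", "y", "y", "y", "y"]

ratio = disparate_impact_ratio(decisions, groups)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential disparate impact: review the model")
```

Detecting such a disparity is only the identification step; mitigation (reweighting data, adjusting thresholds, or redesigning the task) then becomes an explicit design goal, in line with the normative intentionality described above.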
Christian (2020) conceptualizes this challenge as the alignment problem, emphasizing that aligning intelligent systems with human values is simultaneously a technical, institutional, and moral problem.
AI4SG in Critical Sectors
AI can be deployed across multiple domains to positively impact individuals, communities, or ecosystems:
- Social inclusion: helping reduce gaps through applications that facilitate communication for people with disabilities or tools that detect gender bias in hiring and credit processes.
- Health and well-being: used to diagnose diseases (such as sepsis or diabetic retinopathy) through the analysis of medical records and images. It also enables telemedicine, allowing healthcare services to reach remote areas via mobile devices.
- Quality education: enabling personalized learning systems (intelligent tutors or avatars) that adapt to the pace and specific needs of each learner.
- Agriculture and the environment: applied in precision agriculture through robots that optimize planting and irrigation, as well as in climate monitoring, marine life protection, and anti-poaching efforts using drones and computer vision algorithms.
Similarly, in the public sector, the application of AI to the targeting of social policies requires robust governance frameworks to prevent the uncritical automation of decisions with high social impact.
Decolonial Analysis of AI4SG: Epistemic Limits and Emancipatory Possibilities
From a decolonial perspective, the AI4SG approach requires additional problematization that goes beyond dominant normative frameworks in AI ethics. Following Aníbal Quijano, modern technology is inseparably linked to the coloniality of power, understood as a historical pattern that articulates knowledge, economy, and authority. In this sense, AI—including that oriented toward the “social good”—cannot be assumed to be neutral or universal.
Most AI systems are designed based on epistemologies and technical rationalities rooted in the Global North, implying that social problems and optimization criteria are often defined from frameworks external to the contexts in which these technologies are deployed. As Walter Mignolo warns, this process reproduces a form of coloniality of knowledge, whereby certain forms of knowledge are legitimized as universal while others are systematically marginalized.
Conclusion
Artificial Intelligence for Social Good currently constitutes an indispensable framework for guiding the development of artificial intelligence in contexts of high social complexity. By integrating ethics, governance, and epistemological critique, AI4SG makes it possible to move beyond reductionist views of technological innovation and advance toward a conception of AI as a tool in the service of collective well-being. The incorporation of a decolonial perspective further expands this approach, reminding us that there can be no true “social good” without epistemic justice, cultural contextualization, and the effective participation of affected communities. Ultimately, the future of AI4SG will depend on institutional capacity to translate these principles into concrete practices of design, regulation, and evaluation.
References
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity Press.
Christian, B. (2020). The alignment problem: Machine learning and human values. W. W. Norton & Company.
Coeckelbergh, M. (2020). AI ethics. MIT Press.
Cowls, J. (Ed.). (2022). Artificial intelligence for social good. Springer.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Wilmer Lopez Lopez, PhD – March 2026