Body representations in AI-generated images in physical and sports contexts: biases, stereotypes, and critical reflections

Authors

DOI:

https://doi.org/10.47197/retos.v71.116433

Keywords:

physical and sports activity, body diversity, body stereotype and bias, generative artificial intelligence, body representation

Abstract

Introduction: Every day, millions of images created with generative artificial intelligence (AI) platforms are produced, distributed, and used across personal and professional domains.

Objective: This article analyzes bodily representations generated by three generative AI platforms in the context of physical and sports activities.

Methodology: Using a content analysis approach through structured co-observation, a total of 732 images were examined. These images were generated by three AI tools (DALL·E 3, Midjourney, and Stable Diffusion) based on 60 neutral prompts describing various physical and sports activities, without reference to body-related or sociodemographic characteristics.
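The abstract does not detail the coding procedure, but the inclusion of Landis and Koch (1977) in the reference list suggests that the structured co-observation was checked with an inter-observer agreement statistic such as Cohen's kappa. The Python sketch below is purely illustrative: the two-coder setup, the coded variable, and the category labels ("normative", "diverse") are assumptions for demonstration, not the authors' actual instrument.

```python
# Hypothetical sketch (not from the article): estimating inter-observer agreement
# for a structured co-observation of AI-generated images, interpreted with the
# Landis & Koch (1977) benchmarks cited in the reference list.
from sklearn.metrics import cohen_kappa_score

# Illustrative codes assigned by two observers to the same set of images;
# the variable and categories are invented for this example.
observer_a = ["normative", "normative", "diverse", "normative", "diverse", "normative"]
observer_b = ["normative", "diverse", "diverse", "normative", "diverse", "normative"]

kappa = cohen_kappa_score(observer_a, observer_b)

# Landis & Koch (1977) descriptive benchmarks for kappa values
if kappa >= 0.81:
    strength = "almost perfect"
elif kappa >= 0.61:
    strength = "substantial"
elif kappa >= 0.41:
    strength = "moderate"
elif kappa >= 0.21:
    strength = "fair"
elif kappa > 0.00:
    strength = "slight"
else:
    strength = "poor"

print(f"Cohen's kappa = {kappa:.2f} ({strength} agreement)")
```

With real data, one such coefficient would typically be reported per coded variable, and values at or above 0.61 would be read as substantial agreement under the Landis and Koch benchmarks.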

Results: The findings reveal the reproduction of hegemonic and systemic biases and stereotypes related to the body, despite the use of inclusive and neutral prompts. The AI-generated images predominantly portray normative bodies (mainly young, muscular, white, male figures) while reinforcing female stereotypes and rendering invisible other bodily realities associated with race, age, or disability.

Discussion: These results align with recent literature suggesting that artificial intelligence tools not only replicate existing societal biases and body stereotypes but also exacerbate them.

Conclusions: AI is both reproducing and amplifying pre-existing social biases and stereotypes. This highlights the need for reflective research that encourages a deeper, critical, and thoughtful examination of the responsible use of these technologies. It calls for progress toward fairer, more neutral, diverse, and inclusive AI models.

Author Biography

José Díaz Barahona, Universitat de València, Facultad de Magisterio; Departamento de Didáctica de la Expresión Musical, Plástica y Corporal

Associate Professor (Profesor Titular de Universidad), Facultad de Magisterio, Universitat de València; Departamento de Didáctica de la Expresión Musical, Plástica y Corporal.

Director of the postgraduate programme in teaching digital competence (16412330 - Diploma d'Especialització "Competència Digital del Professorat: Innovar i Millorar la Pràctica Docent amb Noves Tecnologies").

References

Abbott, B.D., & Barber, B.L. (2010). Embodied image: gender differences in functional and aesthetic body image among Australian adolescents. Body Image, 7(1), 22-31. https://doi.org/10.1016/j.bodyim.2009.10.004

Águila Soto, C., & López Vargas, J. J. (2019). Cuerpo, corporeidad y educación: una mirada reflexiva desde la Educación Física. Retos, 35, 413–421. https://doi.org/10.47197/retos.v0i35.62035

Ali, R., Tang, O. Y., Connolly, I. D., Abdulrazeq, H. F., Mirza, F. N., Lim, R. K., Johnston, B. R., Groff, M. W., Williamson, T., Svokos, K., Libby, T. J., Shin, J. H., Gokaslan, Z. L., Doberstein, C. E., Zou, J., & Asaad, W. F. (2024). Demographic representation in 3 leading artificial intelligence text-to-image generators. JAMA Surgery, 159(1), 87-95. https://doi.org/10.1101/2023.05.24.23290463

Borau, S. (2025). Deception, discrimination, and objectification: ethical issues of female AI agents. Journal of Business Ethics, 198, 1–19. https://doi.org/10.1007/s10551-024-05754-4

Borji, A. (2023). Generated faces in the wild: Quantitative comparison of Stable Diffusion, Midjourney and DALL-E 2. https://arxiv.org/abs/2210.00586

Bourdieu, P. (1998). La domination masculine. Paris, Éditions du Seuil.

Butler, J. (2004). Undoing gender. New York, NY: Routledge.

Cheong, M., Abedin, E., Ferreira, M., Reimann, R., Chalson, S., Robinson, P., Byrne, J., Ruppanner, L., Alfano, M., & Klein, C. (2024). Investigating gender and racial biases in DALL-E Mini Images. ACM Journal on Responsible Computing, 1, 1-20. https://doi.org/10.1145/3649883

Collett, C., Neff, G., & Gouvea Gomes, L. (2022). Los efectos de la IA en la vida laboral de las mujeres. Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura (UNESCO), Banco Interamericano de Desarrollo (BID) y Organización para la Cooperación y el Desarrollo Económicos (OCDE). https://wp.oecd.ai/app/uploads/2022/03/Los-efectos-de-la-IA-en-la-vida-laboral-de-las-mujeres.pdf

Díaz-Barahona, J., Valverde, T., & Moya, I. (2022). Estereotipos asociados al cuerpo humano: análisis de aplicaciones móviles usadas en la educación físico-deportiva. Pixel-Bit, Revista de medios y educación, (64), 79–103. https://doi.org/10.12795/pixelbit.90297

Díaz-Barahona, J., Valverde-Esteve, T., & Moya-Mata, I. (2023). Diseño y validación de un instrumento para analizar los estereotipos corporales representados en el software y aplicaciones digitales (apps). Retos, 47, 69-77. https://doi.org/10.47197/retos.v47.94899

Everypixel. (2023, August 15). AI image statistics for 2024: How much content was created by AI. Everypixel Journal. https://journal.everypixel.com/ai-image-statistics

Fernández, A., & Garrido-Merchán, E. (2025). A taxonomy of the biases of the images created by generative artificial intelligence. Curr Trends Business Mgmt, 3(1), 01-10.

Franganillo, J. (2022). Contenido generado por inteligencia artificial: oportunidades y amenazas. Anuario ThinkEPI, (16), 16-24. https://doi.org/10.3145/thinkepi.2022.e16a24

Freire-Sánchez, A., Fitó-Carreras, M., Vidal-Mestre, M., & Barra-Pérez, D. (2024). Diseño y representación visual en la IA generativa de la salud mental. Gráfica, (391), 1-9. https://doi.org/10.5565/rev/grafica.391

García-Ull, F. J., & Melero-Lázaro, M. (2023). Gender stereotypes in AI-generated images. Profesional de la información, 32(5). https://doi.org/10.3145/epi.2023.sep.05

Gengler, E. (2024). Sexism, racism, and classism: social biases in text-to-image generative AI in the context of power, success, and beauty. Wirtschaftsinformatik 2024. https://aisel.aisnet.org/wi2024/48

Gestos, M., Smith-Merry, J., & Campbell, A. (2018). Representation of women in video games: a systematic review of literature in consideration of adult female wellbeing. Cyberpsychology, Behavior, and Social Networking, 21(9), 535-541. https://doi.org/10.1089/cyber.2017.0376

Gisselbaek, M., Suppan, M., Minsart, L., Köselerli, E., Nainan, S., Matot, I., Barreto, O. L., Saxena, S., & Berger-Estilita, J. (2024). Representation of intensivists' race/ethnicity, sex, and age by artificial intelligence: a cross-sectional study of two text-to-image models. Critical Care, 28(1), 363. https://doi.org/10.1186/s13054-024-05134-4

González, A., Táboas, M. I., & Rey, A. (2010). Los libros de texto como herramientas para la promoción de una práctica físico-deportiva en igualdad: análisis comparativo de la representación racial entre los libros publicados durante la vigencia de la LOGSE y la LOE. Cuadernos de psicología del deporte, 10(2), 31-36.

Heaven, W. D. (2024, Jan). 6 big questions for generative AI. MIT Technology Review, 127, 30-37. https://www.proquest.com/magazines/6-big-questions-generative-ai/docview/1937358610/se-2

Heinemann, K. (2003). Introducción a la metodología de la investigación empírica en las ciencias del deporte. Paidotribo, Barcelona.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.

Linares-Lanzman, J., Stypińska, J., & Rosales, A. (2025). Edadismo generativo: cuando la IA generativa refuerza los estereotipos de edad. Revista de estudios de ciencias de la información y la comunicación COMeIN, (151). https://doi.org/10.7238/c.n151.2513

MacDonald, J. P., Pape, M., Ackerman, K. E., Carneiro, E., Huang, Y., Rizzone, K. H., & Mountjoy, M. (2024). The digital mirror: how generative artificial intelligence reflects and amplifies gender bias. British Journal of Sports Medicine, 59, 207-209. https://doi.org/10.1136/bjsports-2024-108998

Mack, K. A., Qadri, R., Denton, R., Kane, S. K., & Bennett, C. L. (2024). They only care to show us the wheelchair: Disability representation in text-to-image AI models. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11–16, 2024, Honolulu, HI, USA. ACM. https://doi.org/10.1145/3613904.3642166

Martínez-Bello, V., Cabrera, Y., Díaz-Barahona, J., & Bernabé-Villodre, M. (2021). Bodies in the early childhood education classroom: a Bourdieusian analysis of curricular materials. Sport, Education and Society, 26(1), 29-44. https://doi.org/10.1080/13573322.2019.1690442

Moya-Mata, I., Ruiz, L., Martín, J., Pérez, P.M., & Ros, C. (2017). La representación de la discapacidad en las imágenes de los libros de texto de Educación Física: ¿inclusión o exclusión? Retos, 32, 88–95. https://doi.org/10.47197/retos.v0i32.52191

Moya-Mata, I., Ruiz-Sanchis, L., Martín-Ruiz, J., & Ros, C. (2019). La actividad física representada en los libros de texto de educación física en educación primaria. Movimento, 25, 1-13. https://doi.org/10.22456/1982-8918.89296

Neuendorf, K.A. (2011). Content analysis: A methodological primer for gender research. Sex Roles, 64(3-4), 276-289. https://doi.org/10.1007/s11199-010-9893-0

Nicoletti, L., & Bass, D. (2023, June 9). Generative AI takes stereotypes and bias from bad to worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Park, Y. S. (2024). White Default: Examining Racialized Biases Behind AI-Generated Images. Art Education, 77(4), 36–45. https://doi.org/10.1080/00043125.2024.2330340

Pedro, R., Lucio, A., Torrão, R., & Tognoli, N. (2024). Datafication, artificial intelligence and images: the dominant paradigm in the representation of knowledge in images. Knowledge Organization, 51(2), 117-126. https://doi.org/10.5771/0943-7444-2024-2-117

Rapp, A., Di Lodovico, C., Torrielli, F., & Di Caro, L. (2025). How do people experience the images created by generative artificial intelligence? An exploration of people's perceptions, appraisals, and emotions related to a Gen-AI text-to-image model and its creations. International Journal of Human-Computer Studies, 193. https://doi.org/10.1016/j.ijhcs.2024.103375

Rice, C., Riley, S., LaMarre, A., & Bailey, K. A. (2021). What a body can do: rethinking body functionality through a feminist materialist disability lens. Body Image, 38, 95-105. https://doi.org/10.1016/j.bodyim.2021.03.014

Rivera, J.G. (2024). Inteligencias artificiales generativas 2024 (2.ª ed.). Red Educativa Digital Descartes, Córdoba.

Rodríguez, J., Sanmiguel-Rodríguez, A., & Álvarez-Seoane, D. (2018). Revisión bibliográfica en el contexto español sobre investigaciones relacionadas con los libros de texto y materiales didácticos en educación física. Retos, 34, 363-370. https://doi.org/10.47197/retos.v0i34.65902

Sandoval-Martín, T., & Martínez-Sanzo, E. (2024). Perpetuation of gender bias in visual representation of professions in the generative AI tools DALL·E and Bing Image Creator. Social Sciences, 13(5), 250. https://doi.org/10.3390/socsci13050250

Santoniccolo, F., Trombetta, T., Paradiso, M. N., & Rollè, L. (2023). Gender and media representations: a review of the literature on gender stereotypes, objectification and sexualization. International Journal of Environmental Research and Public Health, 20(10), 5770. https://doi.org/10.3390/ijerph20105770

Stoyanov, S. R., Hides, L., Kavanagh, D. J., Zelenko, O., Tjondronegoro, D., & Mani, M. (2015). Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR mHealth and uHealth, 3(1), e27. https://doi.org/10.2196/mhealth.3422

UNESCO. (2024). Challenging systematic prejudices: an investigation into bias against women and girls in large language models. UNESCO, Paris.

Vilanou, C. (2001). Imágenes del cuerpo humano. Apunts. Educación Física y Deportes, 63, 94-104.

Wu, Y., Nakashima, Y., & Garcia, N. (2025). Revealing gender bias from prompt to image in Stable Diffusion. Journal of Imaging, 11(2), 35. https://doi.org/10.3390/jimaging11020035

Yue, M., & Yang, B. (2024). The gender show in "Midjourney": the stubborn biases and gender fantasies of AI painting tools. Communications in Humanities Research, (33), 152-159. https://doi.org/10.54254/2753-7064/33/20240080

Zhou, M., Abhishek, V., Derdenger, T. P., Kim, J., & Srinivasan, K. (2024). Bias in generative AI. https://arxiv.org/abs/2403.02726

Published

2025-08-29

How to Cite

Díaz Barahona, J., & Marín Liébana, P. (2025). Body representations in AI-generated images in physical and sports contexts: biases, stereotypes, and critical reflections. Retos, 71, 716–729. https://doi.org/10.47197/retos.v71.116433

Issue

Section

Original Research Article