Organization and Management Theory (OMT)

CfP: Special Issue of Business and Society Review on AI Literacy for Business Education


    Posted 10-01-2025 12:05
    ***Apologies for cross-posting***

    Business and Society Review
    Call for Papers

    Promoting AI Literacy for Business Education through the Integration of Human-Centered Artificial Intelligence (HCAI) and Virtue Ethics

    Guest Editors
    Dulce M. Redín, University of Navarra, Spain
    Jude Chua Soo Meng, Nanyang Technological University, Singapore
    Mauricio C. Serafim, State University of Santa Catarina, Brazil
    Miguel Velasco Lopez, CUNEF University, Spain

    Please check out the full version of the Special Issue Call for Papers here: https://onlinelibrary.wiley.com/pb-assets/assets/14678594/CfP-AI-Literacy-BSR-1742489832537.pdf

    Overview
     
    The rapid advancement of artificial intelligence (AI) has transformed industries, raising critical questions about ethical integration, societal impact, and the role of education in preparing future leaders. This call for contributions seeks to explore how Human-Centered AI (HCAI) and virtue ethics can foster AI literacy in business education, equipping students with the tools to navigate AI's complexities while promoting human flourishing.
     
    We aim to compile a special issue that addresses interdisciplinary perspectives, harmonizes constructs within AI literacy, and bridges theory with practice in business education. This issue will provide actionable insights for educators, researchers, and industry professionals interested in integrating AI literacy frameworks into educational and organizational practices.

     
    Background and Context
     
    The digitalization of human activities has generated vast amounts of data for training algorithms; combined with increasing computational power, this has driven the application of AI techniques across industries (Lee & Shin, 2020). AI's widespread application is expected to bring significant benefits in fields such as human resources, strategy, healthcare, education, cybersecurity, and environmental protection. However, it is crucial for the scientific community to address the societal impacts and potential changes resulting from AI adoption. In this context, it is essential that learners in higher education develop the knowledge and skills to collaborate with AI, ensuring they stay competitive in a rapidly evolving professional landscape and learn how to "navigate ethically" (Almatrafi et al., 2024). Examples from various sectors, such as biased underwriting systems in finance (Bhutta et al., 2021), AI-curated content on social media (Ozmen Garibay et al., 2023), and the automation of human resource tasks (Bankins, 2021), illustrate the ethical and social challenges that accompany AI's growth. In healthcare, the potential for AI and robotic nurses to replace traditional operating room nurses, who work as part of a multidisciplinary team, is being evaluated (Ergin et al., 2023).

    To address these challenges, decision-makers at various organizational levels could achieve better results by adopting a comprehensive human-centered and virtue ethics approach to AI-driven curation, moderation, and prediction. This approach envisions AI as a tool to enhance, rather than replace, human capabilities and improve the environment.
     
    Throughout previous industrial revolutions, mechanization surpassed human physical capabilities, while humans retained cognitive superiority. Today, concerns arise that AI might exceed human intelligence, potentially causing job displacement, heightened dependency, and profound societal changes (Anderson et al., 2018). However, human intelligence and AI remain fundamentally distinct. AI demonstrates strengths in multitasking, computation, and memory, whereas humans excel in logical reasoning, creativity, emotional intelligence, and language (Komal, 2014). While some foresee AI surpassing human intelligence (Grace et al., 2018), these differences highlight opportunities for synergy, underscoring the importance of aligning AI with human values through a human-centered approach.
     
     
    Human-Centered AI (HCAI) and Virtue Ethics Perspectives

    Human-Centered AI (HCAI) specifically seeks to amplify human capabilities by positioning humans at the core of the AI process (Riedl, 2019; Xu et al., 2022). Unlike traditional AI research, which aims to emulate human behavior, or AI engineering focused on replacement, HCAI prioritizes safety, reliability, and trustworthiness (Shneiderman, 2020a, 2020b). It fosters self-efficacy, creativity, and social participation while accounting for individual differences and prioritizing human values over algorithmic capabilities. By balancing human control with automation, HCAI ensures safety, protects human agency, and creates systems that are accessible and understandable (Shneiderman, 2020b). This paradigm is already influencing domains such as education (Renz & Vladova, 2021), yet widespread adoption is still to come, making this call for papers timely and imperative.

    The literature on virtue ethics and AI examines various ethical concerns surrounding AI technologies (cf. Giarmoleo et al. 2024), including the morality of AI (Sison & Redín, 2023), artificial wisdom (Kim & Mejia, 2019), challenges posed by large language models (Sison et al., 2023), and strategies for AI integration (Farina et al., 2024; Neubert & Montañez, 2020; Smith & Vickers, 2024). It also addresses HCAI frameworks aligned with virtue ethics, emphasizing human flourishing and societal well-being (Bertolaso & Rocchi, 2022). Additionally, virtue ethics contributes to broader debates on innovation (Redín et al., 2023), digital labor (Sison, 2024) and the governance of technology (García-Ruiz, 2024).


    Aim and Scope of the Special Issue

    In light of these insights, we invite submissions for essays that explore how HCAI and virtue ethics perspectives can be introduced and integrated into AI literacy in business education. We are particularly interested in examining how these approaches can enrich teaching practices and promote ethical and effective AI use across key business domains such as finance, human resource management, marketing, production, corporate governance, and communications.
    Although the field of AI literacy has grown rapidly in recent years, with a proliferation of new studies (Pinski & Benlian, 2024), it still provides fertile ground for such exploration: the literature remains young and lacks consensus on the field's definition and scope. Terms like 'AI readiness' (Karaca et al., 2021), 'AI capabilities' (Markauskaite et al., 2022), and 'machine learning literacy' (cf. Laupichler et al., 2022) reflect overlapping constructs within the field. Scholars have also connected AI literacy to broader traditions of technological understanding, including 'digital literacy' (Gilster, 1997), 'media literacy' (Livingstone, 2004), and 'data literacy' (Wolff et al., 2016).

    The proliferation of these terms underscores the need for research to harmonize perspectives and create a unified framework that addresses AI literacy's interdisciplinary nature (Laupichler et al., 2022). Promoting AI literacy in higher and adult education equips future employees to collaborate effectively with AI (Cetindamar et al., 2022; Tzirides et al., 2024) and contributes to building an ethical foundation for a 'Good AI Society' (Floridi et al., 2018). Since AI literacy varies greatly across domains (Almatrafi et al., 2024), we encourage contributions that explore its diverse applications and contexts. 


    Potential Themes

    To address these challenges, we seek contributions that advance this critical field by drawing on insights from virtue ethics and HCAI, as well as practical experiences of human-AI interaction in various business domains. Submissions should bridge the gap between theoretical frameworks and practical applications, offering innovative perspectives on how AI literacy can be developed and implemented within business education.
     
    We invite submissions presenting new and original research addressing questions including, but not limited to, the following:

    1. Curricular Integration of AI Ethics:
    • How can business schools integrate AI ethics into their curricula to prepare future leaders for the ethical challenges posed by AI technologies?
    • What pedagogical approaches are most effective for embedding HCAI and virtue ethics within AI literacy frameworks in business education?
    2. Frameworks and Competencies for AI Literacy:
    • What dimensions, competencies and skills should an AI literacy framework include to prepare students for critical engagement with AI applications across business domains?
    • How can HCAI and virtue ethics inform the design of AI literacy programs that prepare students to address ethical and societal implications of AI use while fostering human flourishing?
    3. Harmonization of Constructs in AI Literacy:
    • How can research address the proliferation of overlapping terms and constructs (e.g., AI readiness, AI capabilities, machine learning literacy) to develop a unified framework for AI literacy?
    • What interdisciplinary methodologies can harmonize perspectives on AI literacy across academic and business domains?
    4. Application-Specific AI Literacy:
    • How can AI literacy in business education address the nuances of different AI models, such as supervised learning, unsupervised learning, reinforcement learning, neural networks, deep learning, generative models, transfer learning, and optimization models?
    • What strategies can help equip students with the knowledge to understand and apply these models effectively in diverse business contexts?
    5. Human-AI Interaction, Virtue Ethics and HCAI:
    • How can practical experiences of human-AI interaction inform the development of AI literacy programs that emphasize ethical and responsible use of AI in business?
    • What implications do virtue ethics and HCAI have for fostering meaningful human-AI collaboration and their integration into AI literacy frameworks?

    Instructions for Submissions

    Authors are strongly encouraged to consult BASR's submission guidelines for detailed instructions on submitting a paper to this Special Issue. Papers must be original and unpublished, may be up to 10,000 words, and must follow the editorial style of Business and Society Review, which can be found at
    https://onlinelibrary.wiley.com/page/journal/14678594/homepage/forauthors.html

    The Guest Editors invite potential contributors to this issue to send a short proposal via email to Dulce M. Redín (dredin@unav.es) by November 15, 2025. Feedback will be provided regarding the suitability of the proposed contribution by December 15, 2025. All papers must be submitted via Wiley's Research Exchange Platform (https://wiley.atyponrex.com/journal/BASR) by February 28, 2026. Please be sure to indicate that the paper is for this Special Issue during the submission process.

     
    Why Contribute?

    This special issue aims to transform AI literacy in business education by integrating HCAI and virtue ethics perspectives. We invite contributions that introduce innovative frameworks and strategies for ethical AI use, foster the alignment of interdisciplinary educational practices, and demonstrate practical applications for business and industry. 

    For further inquiries, contact Dulce M. Redín (dredin@unav.es).


    References
    Almatrafi, O., Johri, A. & Lee, H. (2024). A systematic review of AI literacy conceptualization, constructs, and implementation and assessment efforts (2019–2023). Computers and Education Open 6, 100173. https://doi.org/10.1016/j.caeo.2024.100173
    Anderson, J., Rainie, L., & Luchsinger, A. (2018). Artificial intelligence and the future of humans (pp. 10). Pew Research Center. 
    Bankins, S. (2021). The ethical use of artificial intelligence in human resource management: a decision-making framework. Ethics and Information Technology 23, 841–854. https://doi.org/10.1007/s10676-021-09619-6
    Bertolaso, M., & Rocchi, M. (2022). Specifically human: Human work and care in the age of machines. Business Ethics, the Environment & Responsibility, 31(3), 888–898. https://doi.org/10.1111/beer.12281.
    Bhutta, N., Hizmo, A., & Ringo, D. (2021). How much does racial bias affect mortgage lending? Evidence from human and algorithmic credit decisions. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3887663
    Cetindamar, D., Kitto, K., Wu, M., Zhang, Y., Abedin, B., & Knight, S. (2022). Explicating AI literacy of employees at digital workplaces. IEEE Transactions on Engineering Management. https://doi.org/10.1109/TEM.2021.3138503
    Ergin, E., Karaarslan, D., Şahan, S. & Bingöl, Ü. (2023). Can artificial intelligence and robotic nurses replace operating room nurses? The quasi-experimental research. Journal of Robotic Surgery 17, 1847–1855. https://doi.org/10.1007/s11701-023-01592-0
    Farina, M., Zhdanov, P., Karimov, A. & Lavazza, A. (2024). AI and society: a virtue ethics approach. AI & Society 39, 1127–1140. https://doi.org/10.1007/s00146-022-01545-5
    Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People-an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
    García-Ruiz, P. (2024). Governing technology: A MacIntyrean approach to the ethics of artificial intelligence. In D. M. Redín, G. W. Potts, & O. Ogunyemi (Eds.), MacIntyre and the practice of governing institutions (Ethical Economy, Vol. 1). Springer, Cham. https://doi.org/10.1007/978-3-031-78888-8_9
    Giarmoleo, F. V., Ferrero, I., Rocchi, M. & Pellegrini, M. M. (2024). What ethics can say on artificial intelligence: Insights from a systematic literature review. Business and Society Review, 129(2), 258-292. https://doi.org/10.1111/basr.12336
    Gilster, P. (1997). Digital literacy. John Wiley & Sons, Inc.
    Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). Viewpoint: When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729–754. https://doi.org/10.1613/jair.1.11222
    Karaca, O., Çalışkan, S. A., & Demir, K. (2021). Medical artificial intelligence readiness scale for medical students (MAIRS-MS) – development, validity and reliability study. BMC Medical Education, 21(1). https://doi.org/10.1186/s12909-021-02546-6
    Kim, T. W., & Mejia, S. (2019). From artificial intelligence to artificial wisdom: What Socrates teaches us. Computer 52(10), 70–74. https://doi.org/10.1109/MC.2019.2929723
    Komal, S. (2014). Comparative assessment of human intelligence and artificial intelligence. International Journal of Computer Science and Mobile Computing, 3, 1–5.
    Laupichler, M. C., Aster, A., Schirch, J. & Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: A scoping literature review. Computers and Education: Artificial Intelligence 3, 100101. https://doi.org/10.1016/j.caeai.2022.100101
    Lee, I., & Shin, Y. J. (2020). Machine learning for enterprises: Applications, algorithm selection, and challenges. Business Horizons, 63(2), 157–170. https://doi.org/10.1016/j.bushor.2019.10.005
    Livingstone, S. (2004). What is media literacy? InterMedia, 32(3), 18–20.
    Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., Tondeur, J., de Laat, M., Buckingham Shum, S., Gašević, D., & Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI? Computers & Education: Artificial Intelligence, 3, 100056. https://doi.org/10.1016/j.caeai.2022.100056
    Neubert, M. J., & Montañez, G. D. (2020). Virtue as a framework for the design and use of artificial intelligence. Business Horizons, 63(2), 195–204. https://doi.org/10.1016/j.bushor.2019.11.001
    Ozmen Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., ... Xu, W. (2023). Six Human-Centered Artificial Intelligence Grand Challenges. International Journal of Human–Computer Interaction, 39(3), 391–437. https://doi.org/10.1080/10447318.2022.2153320
    Pinski, M., & Benlian, A. (2024). AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects. Computers in Human Behavior: Artificial Humans, 2(1), 100062. https://doi.org/10.1016/j.chbah.2024.100062
    Redín, D. M., Cabaleiro-Cerviño, G., Rodriguez-Carreño, I., & Scalzo, G. (2023). Innovation as a practice: Why automation will not kill innovation. Frontiers in Psychology, 13, 1045508. https://doi.org/10.3389/fpsyg.2022.1045508
    Renz, A., & Vladova, G. (2021). Reinvigorating the discourse on human-centered artificial intelligence in educational technologies. Technology Innovation Management Review, 11(5), 5–16. https://doi.org/10.22215/timreview/1438
    Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36. https://doi.org/10.1002/hbe2.117
    Shneiderman, B. (2020a). Human-centered artificial intelligence: Reliable, safe, & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
    Shneiderman, B. (2020b). Design lessons from AI's two grand goals: Human emulation and useful applications. IEEE Transactions on Technology and Society, 1(2). Retrieved from https://ieeexplore.ieee.org/document/9088114
    Smith, N., & Vickers, D. (2024). Living well with AI: Virtue, education, and artificial intelligence. Theory and Research in Education, 22(1), 19-44. https://doi.org/10.1177/14778785241231561
    Sison, A. J. G. (2024). Can digitally transformed work be virtuous? Business Ethics Quarterly, 34(1), 163–191. https://doi.org/10.1017/beq.2023.33
    Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). ChatGPT: More than a "weapon of mass deception" ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective. International Journal of Human–Computer Interaction 40(17), 4853–4872. https://doi.org/10.1080/10447318.2023.2225931
    Sison, A. J. G. & Redín, D. M. (2023). A neo-aristotelian perspective on the need for artificial moral agents (AMAs). AI & Society 38, 47–65. https://doi.org/10.1007/s00146-021-01283-0
    Tzirides, A. O. (Olnancy), Zapata, G., Kastania, N. P., Saini, A. K., Castro, V., Ismael, S. A., You, Y., Santos, T. A. D., Searsmith, D., O'Brien, C., Cope, B., & Kalantzis, M. (2024). Combining human and artificial intelligence for enhanced AI literacy in higher education. Computers and Education Open, 6, 100184. https://doi.org/10.1016/j.caeo.2024.100184
    Wolff, A., Gooch, D., Cavero Montaner, J. J., Rashid, U., & Kortuem, G. (2016). Special issue on data literacy: Articles creating an understanding of data literacy for a data driven society. Journal of Community Informatics, 12(3), 9–26. https://doi.org/10.15353/joci.v12i3.3275
    Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2022). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human–Computer Interaction 39(3), 494–518. https://doi.org/10.1080/10447318.2022.2041900
    --
    Dulce M. Redin Goñi
    Professor
    Head of Business Department
    School of Economics and Business

    T: +34 948425600 Ext: 802785
    dredin@unav.es



