Shifting Grounds: Higher Education Development and the Changing Meaning of Quality in the AI Era by Sabrina Gallner

The emergence of generative artificial intelligence (GenAI) is reshaping higher education, prompting institutions, educators, and academic developers to reconsider foundational concepts such as teaching quality. While much attention has been paid to technological affordances and pedagogical innovation, less is known about how the very discourse of “quality” in digital higher education has evolved in response to successive waves of digital transformation—from early e-learning to pandemic-induced remote teaching and now the integration of GenAI.

This study investigates how the concept of quality in digital higher education has been discursively constructed and transformed in international academic publications across four key phases: early digitalization (pre-2015), strategic digitalization (2015–2019), emergency remote teaching during the COVID-19 pandemic (2020–2022), and the current GenAI discourse (from 2023 onward). Drawing on the heuristic framework of Harvey and Green (1993, 2000; Harvey, 2024), which distinguishes five dimensions of quality—excellence, perfection, fitness for purpose, value for money, and transformation—the study explores how these dimensions are emphasized, reinterpreted, or challenged in light of technological and societal shifts.

Methodologically, the research employs the Sociology of Knowledge Approach to Discourse (SKAD) developed by Reiner Keller, which enables a nuanced analysis of how knowledge about quality is produced, stabilized, and contested within academic and policy discourses. The broader study includes peer-reviewed articles, policy papers, and reports from organizations such as UNESCO, OECD, and EDUCAUSE. The conference contribution focuses on the policy dimension of this corpus, examining how international organizations conceptualize and govern “quality” in the context of digital higher education. Particular attention is paid to how GenAI is framed in these policy discourses—whether as a threat to academic integrity, a tool for personalization, or a catalyst for redefining educational values.

By mapping discursive shifts and identifying emerging quality dimensions such as “digital sovereignty” and “ethical quality,” the analysis sheds light on how global policy narratives shape institutional understandings of good digital teaching. It thus provides a foundation for later phases of the project, which will integrate perspectives from academic scholarship and faculty development practice.

References:

  • Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review. Information, 15(11), 676. https://doi.org/10.3390/info15110676
  • Harvey, L. (2024). Extended Editorial: Defining quality thirty years on: quality, standards, assurance, culture and epistemology. Quality in Higher Education, 30(2), 145–184. https://doi.org/10.1080/13538322.2024.2355026
  • Harvey, L., & Green, D. (2000). Qualität definieren: Fünf unterschiedliche Ansätze [Defining quality: Five different approaches]. Zeitschrift für Pädagogik, Beiheft 41 (Qualität und Qualitätssicherung im Bildungsbereich: Schule, Sozialpädagogik, Hochschule), 17–39.
  • Keller, R. (2011). The Sociology of Knowledge Approach to Discourse (SKAD). Human Studies, 34(1), 43–65. https://doi.org/10.1007/s10746-011-9175-z
  • OECD. (2020, April 8). Trustworthy artificial intelligence (AI) in education. OECD. https://www.oecd.org/en/publications/trustworthy-artificial-intelligence-ai-in-education_a6c90fa9-en.html
  • UNESCO. (2024). Artificial intelligence and the Futures of Learning. UNESCO. https://www.unesco.org/en/digital-education/ai-future-learning

Designing institutional support for generative AI adoption by Iris Capdevila (EPFL) and Kim Uittenhove (EPFL)

This practice-based paper presents a structured approach to supporting faculty adoption of generative artificial intelligence (GenAI) in STEM education at a technical university. Drawing on survey data from 109 faculty members, we identify a significant gap between the actual and desired use of GenAI tools, particularly for supporting student learning. While most teachers express interest in integrating GenAI, barriers such as limited training, low perceived feasibility, and a lack of institutional support hinder adoption.

To address this, we propose targeted initiatives informed by Cultural-Historical Activity Theory (CHAT) and Diffusion of Innovations (DOI), emphasizing systemic and peer-driven change. These initiatives span five stages of innovation adoption (knowledge, persuasion, decision, implementation, and confirmation) and include co-constructed guidelines, peer exchanges, hands-on workshops, customizable GenAI tools, and collaborative AInEd laboratories. The proposed framework offers a transferable model for fostering effective and innovative GenAI integration in higher education.
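
As an illustration of the framework’s structure, the short Python sketch below maps the named initiatives onto Rogers’ five adoption stages. The pairing of each initiative to a stage is an assumption made for illustration, not the authors’ exact assignment.

```python
# Illustrative mapping of the proposed initiatives onto Rogers' (1983) five
# stages of innovation adoption. The stage assignments are assumptions.
DOI_STAGES = ["knowledge", "persuasion", "decision", "implementation", "confirmation"]

initiatives_by_stage = {
    "knowledge":      ["co-constructed guidelines"],
    "persuasion":     ["peer exchanges"],
    "decision":       ["hands-on workshops"],
    "implementation": ["customizable GenAI tools"],
    "confirmation":   ["collaborative AInEd laboratories"],
}

for stage in DOI_STAGES:
    print(f"{stage}: {', '.join(initiatives_by_stage[stage])}")
```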

References:

  • Al-Abdullatif, A. M. (2024). Modeling Teachers’ Acceptance of Generative Artificial Intelligence Use in Higher Education: The Role of AI Literacy, Intelligent TPACK, and Perceived Trust. Education Sciences, 14(11), 1209. https://doi.org/10.3390/educsci14111209
  • Cabellos, B., De Aldama, C., & Pozo, J. I. (2024). University teachers’ beliefs about the use of generative artificial intelligence for teaching and learning. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1468900
  • Engeström, Y. (2001). Expansive learning at work: Toward an activity theoretical reconceptualization. Journal of Education and Work, 14(1), 133–156. https://doi.org/10.1080/13639080020028747
  • Rogers, E. M. (1983). Diffusion of innovations (3rd ed.). Free Press.

Communities of Practice as Catalysts for Constructive Engagement with Generative AI in Higher Education – Insights from EduAI@FHNW by Monika Schlatter, Dominik Tschopp, Juliane Felder, Roy Fischer, Johanna Thüring (FHNW)

The rise of generative artificial intelligence (AI) presents both opportunities and complex challenges for higher education, demanding flexibility and agility from institutions and instructors. To support constructive engagement with these emerging technologies, the University of Applied Sciences and Arts Northwestern Switzerland (FHNW) established EduAI@FHNW, a platform where engaged professionals exchange knowledge, learn from peers, and innovate in AI-enhanced teaching, with the goal of developing it into a community of practice (CoP).

In an empirical study (Schlatter et al., 2025), we investigated whether EduAI@FHNW exhibits the defining characteristics of a CoP as described by E. Wenger-Trayner and B. Wenger-Trayner (2015), and thus truly qualifies as one, and we examined its impact at the individual and organisational levels using the value-creation framework of B. Wenger-Trayner et al. (2017). By means of a quantitative survey, we explored participants’ perceptions of the community and how their participation contributes to their professional development, facilitates organisational learning, and supports a proactive approach to the challenges posed by generative AI within the organisation.

Our results demonstrate that EduAI@FHNW has indeed evolved into a CoP, meeting the characteristics outlined above, and that it generates clear immediate and potential benefits for its members, not only in terms of content but also through the format itself, as the first cross-institutional community of its kind at FHNW. We conclude that with professional organisation, committed facilitators, and sufficient time, the conditions for sustainably establishing a CoP can be created. However, for sustainable engagement with AI across FHNW, the knowledge gathered in the CoP must be embedded throughout the organisation, beyond the CoP itself. This requires organisational support and recognition of participants’ engagement, which in turn presupposes an organisation-wide AI strategy.

In our session, we will first present the results of this empirical study as a case study on using CoPs for faculty development around AI, offering insights into our work and the limitations we encountered. We will then reflect on these experiences together with the participants, compare them with their own, and collectively explore how CoPs could be used to enable development in higher education.

References:

  • Schlatter, M., Tschopp, D., Fischer, R., Felder, J., & Thüring, J. (2025). Künstliche Intelligenz und Hochschullehre: Der Beitrag von Communities of Practice für einen konstruktiven Umgang am Beispiel von EduAI@FHNW [Artificial intelligence and teaching in higher education: The contribution of communities of practice to constructive engagement, using the example of EduAI@FHNW]. Impact free: Journal für freie Bildungswissenschaftler, 65. https://epub.sub.uni-hamburg.de/epub/volltexte/2025/188621/
  • Wenger-Trayner, B., Wenger-Trayner, E., Cameron, J., Eryigit-Madzwamuse, S., & Hart, A. (2017). Boundaries and Boundary Objects: An Evaluation Framework for Mixed Methods Research. Journal of Mixed Methods Research, 13(3), 321–338. https://doi.org/10.1177/1558689817732225
  • Wenger-Trayner, E., & Wenger-Trayner, B. (2015). Introduction to communities of practice. A brief overview of the concept and its uses. https://www.wenger-trayner.com/introduction-to-communities-of-practice/

Using epistemic micro-practices to prepare students for increased use of AI in software design by Siara Isaac (EPFL)

Students’ epistemic beliefs (i.e., their ideas about the nature of knowledge and how knowledge is generated and validated) may seem quite distant from the concerns of engineering educators. However, the practical significance of how students evaluate knowledge claims and solutions becomes clear when we consider the increasing reliance on AI for generating programming code (Becker et al., 2023; Cambaz & Zhang, 2024; Sergeyuk et al., 2025). The field of personal epistemology began with William Perry (1970), who described a progression from dualistic thinking, where knowledge is seen as black-and-white, absolute, and grounded in authority, to more relativistic and constructivist views. More sophisticated epistemic beliefs are characterised by seeing knowledge as complex, evolving, and often context-dependent, requiring critical thinking and judgment rather than reliance on fixed truths. Constructive use of AI requires these more sophisticated approaches in order to assess and challenge generated responses. As engineering students have rarely been observed to apply these sophisticated approaches consistently (Gainsburg, 2015; Author, 2022), educators should consider how to better support the development of their students’ epistemic beliefs.

The epistemic micro-practices model (Author, 2022) provides educators with a structure for designing learning tasks that promote students’ epistemic development and, therefore, their ability to work more constructively with AI. This paper reports on a think-aloud study that captured the actions of 11 computer science students working on the conceptual design phase of software development. Students’ actions related to seeking, justifying, and evaluating scientific knowledge were characterised using the 4-level epistemic micro-practices model. These observations reveal the nuanced ways students engage with knowledge in context and also indicate how instructors can design tasks that encourage students to extend their abilities towards approaches better suited to assessing and parsing AI output. I will conclude with recommendations for using the epistemic micro-practices framework to support students’ capacity to work constructively with AI, particularly in questioning or challenging AI-generated answers or code.
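
To make the coding step concrete, here is a minimal sketch (hypothetical students, actions, and level labels, not the study’s data) of how think-aloud actions coded against a 4-level scheme can be tallied per student and overall:

```python
# Tally coded think-aloud actions against a 4-level scheme. The IDs and
# level labels are placeholders; the actual category names belong to the
# epistemic micro-practices model.
from collections import Counter

coded_actions = [  # (student_id, level assigned to one observed action)
    (1, "level_1"), (1, "level_2"), (1, "level_2"),
    (2, "level_1"), (2, "level_3"),
    (3, "level_2"), (3, "level_4"),
]

per_student: dict[int, Counter] = {}
for student, level in coded_actions:
    per_student.setdefault(student, Counter())[level] += 1

overall = Counter(level for _, level in coded_actions)
print("Per student:", per_student)
print("Overall distribution:", overall)
```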

References:

  • Becker, B. A., Denny, P., Finnie-Ansley, J., Luxton-Reilly, A., Prather, J., & Santos, E. A. (2023). Programming Is Hard – Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 500–506. https://doi.org/10.1145/3545945.3569759
  • Cambaz, D., & Zhang, X. (2024). Use of AI-driven Code Generation Models in Teaching and Learning Programming: A Systematic Literature Review. Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, 172–178. https://doi.org/10.1145/3626252.3630958
  • Gainsburg, J. (2015). Engineering Students’ Epistemological Views on Mathematical Methods in Engineering. Journal of Engineering Education, 104(2), 139–166. https://doi.org/10.1002/jee.20073
  • Sergeyuk, A., Golubev, Y., Bryksin, T., & Ahmed, I. (2025). Using AI-based coding assistants in practice: State of affairs, perceptions, and ways forward. Information and Software Technology, 178, 107610. https://doi.org/10.1016/j.infsof.2024.107610

Generative AI Competencies for Higher Education Lecturers – Towards an Extension of Existing Frameworks by Isabelle F. Geppert (UniBE), Kerstin Denecke (BFH), Daniel Reichenpfader (BFH) and André Klostermann (UniBE)

The rapid rise of generative Artificial Intelligence (genAI) tools such as ChatGPT is transforming higher education (HE) by reshaping how knowledge is created, taught, and learned. Unlike earlier digital technologies, genAI enables autonomous content creation, personalized feedback, and interactive learning experiences, but also raises concerns about academic integrity, over-reliance, and the decline of key cognitive skills. This duality highlights the urgent need for HE lecturers to develop new competences to meaningfully and responsibly integrate genAI into teaching. Current research provides several frameworks for developing digital and AI-related competences in higher education, including the UNESCO AI Competency Framework for Teachers and the European Digital Competence Framework for Educators (DigCompEdu). However, to the best of our knowledge, no specific framework has yet been developed for HE lecturers in the context of genAI. A first attempt to address this gap was made by Annapureddy et al. (2025), who proposed a genAI competence framework for educators, policymakers, and government officials. While their model provides an important foundation, it remains at a general level and does not capture the specific requirements of higher education.

To explore this, we conducted a Delphi study with lecturers, higher education professionals, and genAI experts, complemented by interviews and group discussions. The first Delphi round (N = 21) employed open-ended questions to gather perspectives on the competences required for preparation, teaching, assessment, and evaluation with genAI in higher education. Responses were analyzed using Kuckartz’s (2012) qualitative content analysis, which deductively applied the DigCompEdu framework while inductively extending it as necessary. In subsequent rounds, participants rated the relevance of the competences, which were then clustered into competence areas and subdimensions.
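
For readers unfamiliar with Delphi rating rounds, the sketch below shows one conventional way to summarize such a round; the competence labels, ratings, and 70% agreement threshold are illustrative assumptions, not the study’s actual data or criteria.

```python
# Summarize a hypothetical Delphi rating round: a candidate competence is
# retained when enough panelists rate it as relevant (>= 4 on a 1-5 scale
# for at least 70% of panelists). All values here are illustrative.
import numpy as np

ratings = {
    "critical output assessment": [5, 5, 4, 4, 5, 3, 4],
    "prompt formulation":         [4, 4, 5, 3, 4, 4, 5],
    "tool configuration":         [2, 3, 3, 4, 2, 3, 3],
}

AGREEMENT_THRESHOLD = 0.70  # assumed consensus criterion

for competence, scores in ratings.items():
    arr = np.asarray(scores)
    agreement = (arr >= 4).mean()
    print(f"{competence}: median={np.median(arr):.0f}, "
          f"agreement={agreement:.0%}, retained={agreement >= AGREEMENT_THRESHOLD}")
```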

The analysis yielded 75 competences, later organized into 28 core competences grouped into six areas: application competence, reflection and evaluation competence, professional competence, design competence, personal competence, and supporting competence. Most frequently mentioned were application expertise, prompt competence, and the ability to critically evaluate and refine the didactic use of genAI. Relevance ratings underscored competences related to critical output assessment and the evaluation of genAI in teaching-learning contexts. The results suggest that the application of genAI tools in higher education does require specific competences; however, these concern very specific aspects, such as critical output assessment, didactic integration, and the evaluation of the didactic use of genAI tools. These findings overlap substantially with Annapureddy et al. (2025) but provide higher education–specific nuance, particularly in areas such as didactic integration and learning analytics.

We will begin the discussion with a brief recap of the study aim and competence clusters, supported by a slide presenting the six competence areas. Central questions will guide the exchange, such as which competences are essential for responsible teaching with genAI, the implications for lecturers and faculty development, and the greatest challenges in practice. Participants will then reflect in small groups of 3–4 people, using guiding questions to connect the clusters to their teaching context and to consider adaptations of existing frameworks or potential training, guidelines, and self-assessment tools based on these clusters. During the plenary session, groups will share their key insights, highlight shared themes and tensions, and link them to implications for training, policy, and research. The session will close with a wrap-up summarizing takeaways.

References:

  • Annapureddy, R., Fornaroli, A., & Gatica-Perez, D. (2025). Generative AI Literacy: Twelve Defining Competencies. Digital Government: Research and Practice, 6(1), 1–21. https://doi.org/10.1145/3685680
  • Chan, C. K. Y. (2023). Is AI Changing the Rules of Academic Misconduct? An In-depth Look at Students’ Perceptions of ‘AI-giarism’. https://doi.org/10.48550/arXiv.2306.03358
  • Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 1–15. https://doi.org/10.1080/14703297.2023.2195846
  • Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., . . . Kasneci, G. (2023). ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. https://doi.org/10.35542/osf.io/5er8f
  • Kuckartz, U. (2012). Qualitative Inhaltsanalyse: Methoden, Praxis, Computerunterstützung [Qualitative content analysis: Methods, practice, computer support]. Beltz Juventa.
  • Miao, F., & Cukurova, M. (2024). AI competency framework for teachers. UNESCO. https://doi.org/10.54675/ZJTE2084 
  • Redecker, C. (2017). European Framework for the Digital Competence of Educators: DigCompEdu. Joint Research Centre. https://dx.doi.org/10.2760/159770
  • Walczak, K., & Cellary, W. (2023). Challenges for higher education in the era of widespread access to generative AI. Economics and Business Review, 9(2). https://doi.org/10.18559/ebr.2023.2.743

Chatbot inclusion in a pedagogical scenario: initial feedback on use and effectiveness by Valentina Rossi (EPFL)

Within the context of rapidly evolving AI technologies, it is imperative that higher education institutions provide evidence-based support to teachers and students on the integration and impact of such tools on students’ learning processes. This preliminary study investigates the pedagogical relevance of a chatbot-based learning scenario implemented as a substitute during a teacher’s one-time absence.

The integration of artificial intelligence tools, such as AI-trained chatbots, offers new possibilities for maintaining instructional support, developing students’ interest in the subject, and fostering learner autonomy. The central research question guiding this inquiry is whether the inclusion of an AI-trained chatbot in a pedagogical scenario constitutes a meaningful and effective alternative to traditional teacher-led instruction in the event of a one-time absence.

In this preliminary study, we gathered students’ perceptions of their own learning and of their interest in the subject when an AI-trained chatbot fed with course material was used as additional support. A questionnaire was administered to students who participated in the chatbot-based activity in the second week of the semester. The instrument was designed to capture student perceptions across three dimensions: 1) the use and effectiveness of the chatbot, 2) the perceived impact on their learning, and 3) the pedagogical value of the scenario. The data collected provide insights into how students experience this pedagogical scenario and whether they consider it an effective and satisfactory one-time replacement for direct teacher involvement.
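
As a sketch of how such questionnaire data could be aggregated by dimension (the item names, scale, and responses below are hypothetical, not the study’s instrument):

```python
# Aggregate hypothetical Likert-style responses (1-5) into the three
# dimensions described above: "use" = use and effectiveness, "learn" =
# perceived impact on learning, "value" = pedagogical value.
import pandas as pd

responses = pd.DataFrame({
    "use_q1":   [4, 5, 3, 4],
    "use_q2":   [3, 4, 4, 5],
    "learn_q1": [4, 3, 3, 4],
    "learn_q2": [5, 4, 3, 3],
    "value_q1": [4, 4, 5, 4],
})

# Mean score per dimension: average over items first, then over respondents.
dimension = responses.columns.str.split("_").str[0]
dimension_means = responses.T.groupby(dimension).mean().mean(axis=1)
print(dimension_means)
```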

This exploratory study contributes to the growing body of experimental work on the use of AI in education by offering preliminary evidence on student perceptions of chatbot-mediated learning in the absence of the teacher. It underscores the importance of designing pedagogical scenarios that integrate digital tools thoughtfully and discusses the potential and pitfalls of chatbot use in a specific pedagogical scenario. Moving forward, we envisage testing the effectiveness of AI-trained chatbots on students’ actual learning, as opposed to students’ perception of learning.


Augmenting Active Learning with GenAI: Enhancement or Impairment? Evidence from a Data Visualization Course by Adrian Holzer (UNINE)

As Generative Artificial Intelligence (GenAI) is becoming ubiquitous across learning domains, it is crucial to better understand how learning experiences could take advantage of its possibilities and avoid its pitfalls. In this paper, we address this issue by focusing on the context of a data visualization course. What makes this context unique is its combination of two areas where GenAI has shown notable effectiveness: writing code and storytelling.

To evaluate how undergraduate students would leverage GenAI in this context, we conducted an in-class between-subjects experiment (N=43) with a control group (no GenAI) and a treatment group (with GenAI). In the 60-minute experiment, students from the data visualization course were asked to prepare a data story within a Jupyter Notebook, including both textual story elements and data visualizations. In addition to these two groups, we included an AI-only group in which task instructions were given directly to a GenAI tool without further human intervention.

The results of our experiment indicate that students perceive GenAI as a tool that improves both their learning experience and their outcomes. However, an analysis of the learning outcomes reveals no statistically significant difference between the creations of students with and without GenAI support. Interestingly, the outputs generated by GenAI alone outperformed those of both student groups.
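
To illustrate the kind of test behind such a claim, here is a minimal sketch with simulated scores; the data, group sizes, and choice of tests are assumptions for illustration, not the paper’s actual analysis.

```python
# Compare rubric scores of the control (no GenAI) and treatment (GenAI)
# groups with an independent-samples test. All data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=70, scale=10, size=21)    # hypothetical scores, no GenAI
treatment = rng.normal(loc=72, scale=10, size=22)  # hypothetical scores, with GenAI

# Welch's t-test avoids assuming equal variances; Mann-Whitney U is a common
# non-parametric alternative for small samples or ordinal rubric scores.
t_stat, t_p = stats.ttest_ind(treatment, control, equal_var=False)
u_stat, u_p = stats.mannwhitneyu(treatment, control, alternative="two-sided")

print(f"Welch t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")
# p > 0.05 would be consistent with the reported absence of a statistically
# significant difference between the two student groups.
```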

References:

  • Mezzaro, S., Gambi, A., & Fraser, G. (2024). An empirical study on how large language models impact software testing learning. In EASE ’24. ACM.
  • Kim, N. W., Ko, H. K., Myers, G., & Bach, B. (2024). ChatGPT in data visualization education: A student perspective. In VL/HCC ’24 (pp. 109–120). IEEE.
  • DeJeu, E. B. (2024). Using generative AI to facilitate data analysis and visualization: A case study of Olympic athletes. Journal of Business and Technical Communication, 38(3), 225–241.
  • Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning and Teaching, 6(1).
  • Su, J., & Yang, W. (2023). Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Review of Education, 6(3), 355–366.

Engaging with AI use in assessments by Rachel Germanier (Les Roches)

In a world where “creation” – the highest level of Bloom’s taxonomy – is now instantly accessible to students via generative AI, educators face increasing complexity in designing assessments that genuinely measure critical thinking. Recent research suggests that repeated use of GenAI reduces critical thinking ability and learning skills (Cotton et al., 2024; Kosmyna et al., 2025), yet the proportion of students using these tools has risen to nearly two-thirds (Freeman, 2025).

This paper presents findings from a multi-phase investigation carried out in 2025 with students and faculty at a Swiss-based international university of applied sciences, examining the challenges and opportunities presented by AI in assessment design.

Perkins et al. (2024) propose a five-level model of AI engagement in education, ranging from level 1 (no AI use) to level 5 (AI as co-pilot). Using this framework, an assessment was designed in which students were asked to critically evaluate a ChatGPT-produced literature review. However, grading revealed that a considerable number of students had used AI to perform the critical analysis itself. An addition to Perkins et al.’s model is therefore suggested: a level 0, where AI is covertly used by students even when assignments are not designed with AI in mind.
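
A minimal sketch of the extended scale as a data structure follows; the intermediate level names are placeholders, since only levels 1 and 5 are described above, and the exact wording belongs to Perkins et al. (2024).

```python
# The AI engagement scale of Perkins et al. (2024), extended with the
# proposed level 0. Intermediate level names are placeholders.
from enum import IntEnum

class AIEngagementLevel(IntEnum):
    COVERT_AI = 0        # proposed addition: AI used covertly by students
    NO_AI = 1            # level 1: no AI use
    LEVEL_2 = 2          # levels 2-4: increasing degrees of sanctioned AI use
    LEVEL_3 = 3
    LEVEL_4 = 4
    AI_AS_COPILOT = 5    # level 5: AI as co-pilot

def effective_level(sanctioned: AIEngagementLevel, ai_was_used: bool) -> AIEngagementLevel:
    """Classify an observed submission: covert use occurs when AI was used
    although the assessment sanctioned none."""
    if ai_was_used and sanctioned == AIEngagementLevel.NO_AI:
        return AIEngagementLevel.COVERT_AI
    return sanctioned

# Example: an assignment designed at level 1 where grading reveals AI use.
print(effective_level(AIEngagementLevel.NO_AI, ai_was_used=True).name)  # COVERT_AI
```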

The same student cohort later conducted 109 peer-led, semi-structured interviews to explore students’ own experiences and perceptions of generative AI in their studies. Thematic analysis revealed several insights: students appreciated AI as a tireless, non-judgmental academic companion; through structured critical-thinking tasks, they became increasingly aware of its limitations, including hallucinations and sycophantic tendencies; and they expressed a desire for explicit guidance on its ethical and pedagogical use.

These student findings, along with the adapted AI engagement model, were then shared with faculty, who were asked to reflect explicitly on students’ required or potentially covert use of AI in their assessments. Faculty mapped their current assessments onto Perkins et al.’s continuum and indicated any desired shifts, following discussion of covert AI use. Results highlighted the diversity of assessment practices across the institution, with AI integration varying widely by subject and level. Stated faculty development needs included access to premium AI tools, concrete training on AI integration into assessments, guidance on teaching responsible AI use, and input from industry to align academic practices with evolving professional expectations.

By combining student and faculty perspectives, this paper contributes to a growing body of pragmatic research, including that of Gimpel et al. (2025), which seeks to support educators in embedding generative AI meaningfully, ethically, and effectively within higher education assessment design.

References:

  • Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148
  • Freeman, J. (2025). Student Generative AI Survey 2025. HEPI Policy Note 61. Higher Education Policy Institute. https://www.hepi.ac.uk/reports/student-generative-ai-survey-2025/
  • Gimpel, H., Hall, K., Decker, S., Eymann, T., Gutheil, N., Lämmermann, L., Braig, N., Maedche, A., Röglinger, M., Ruiner, C., Schoch, M., Schoop, M., Urbach, N., & Vandirk, S. (2025). Using generative AI in higher education: A guide for instructors. Journal of Information Systems Education, 36(3), 237–256. https://doi.org/10.62273/QLLG7172
  • Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task (No. arXiv:2506.08872; Version 1). arXiv. https://doi.org/10.48550/arXiv.2506.08872
  • Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The AI assessment scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6). https://doi.org/10.53761/q3azde36