The “Share-a-thon” format is for sharing good practices among SFDN members. Ideally session attendees bring along relevant ideas and teaching artifacts to contribute during the interactive part of the session.

Navigating GenAI Integration: A Blueprint as a Catalyst for Faculty Development Discussion by Daniel Flück (ETHZ)

The pervasive rise of Generative AI (GenAI) compels higher education to rethink curriculum and assessment, seeking ways to ensure students maintain engagement, critical thinking, and analytical competencies. While many lecturers seek practical guidelines for adapting assignments, the question remains whether such pragmatic approaches are the most effective way forward in this disruptive landscape. This Share-a-thon submission presents a “GenAI-Ready Assignments Blueprint” – a pragmatic, hands-on framework for adapting existing assignments – not as a definitive solution, but as a catalyst for a critical discussion on “HumAIn faculty development”.

Rooted in Self-Determination Theory, this blueprint emphasizes fostering intrinsic motivation by aligning assignments with psychological needs: Autonomy, Competence, and Relatedness. It proposes four key shifts:

  • TIME over Task: Prioritizing skill-building effort to strengthen competence and support autonomy.
  • VARIATION over Routine: Introducing diverse options to enhance personal choice and autonomy.
  • REFLECTION over Completion: Integrating reflection to clarify learning paths and strengthen competence.
  • COLLABORATION over Competition: Encouraging work with peers and GenAI to build relatedness.

Crucially, the blueprint also advocates for redefining learning objectives based on how GenAI influences work practices and required competencies.

This session aims to stimulate a vital discussion within the SFDN community. Is this “Blueprint” approach, with its focus on practical adaptation, the right direction for faculty development as we integrate AI into teaching? Are other formats or different types of blueprints needed to adequately support instructors in this evolving educational landscape?

We invite participants to critically examine the utility of such structured guides, share their insights, and collaboratively explore how we can shape faculty development that is thoughtful, forward-looking, and always centred on people.


AI in higher education at HLS – local and sustainable by Oliver Münken and Tilman Schieber (FHNW)

Large Language Models (LLMs) are now part of everyday academic workflows. However, hosted solutions like ChatGPT or Gemini are not always suitable, especially when privacy, reliability, or nuanced conversations are required. At the FHNW School of Life Sciences (HLS), our needs include keeping sensitive data private, providing tailor-made Life Sciences assistance specifically for HLS students, and offering reliable support in STEM topics. We aim to make AI a partner in learning, not a shortcut. Local LLMs are central to this vision.
 
We developed local HPC-based LLM prototypes:
    – HPC server with two NVIDIA GPUs, tested with different models
    – Accessible via a web interface similar to OpenAI or Google’s platforms
    – Used by students to analyze data from BSc/MSc theses

Small local LLMs on student laptops (Windows/MacOS)
    – System prompts hardcoded into models
    – Guidance provided for setup and usage
    – Enables fully private chats
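As an illustration of hardcoding a system prompt into a local model: the abstract does not name the tooling, so assuming Ollama as one common option, a Modelfile bakes the prompt into a derived model (the model name and prompt below are placeholders):

```
# Modelfile: derive a local model with a fixed system prompt
FROM llama3.2
SYSTEM """You are a tutor for Life Sciences students at HLS.
Guide learners step by step instead of giving ready-made answers."""
```

Running `ollama create hls-tutor -f Modelfile` then makes the prompt part of every chat with the hypothetical `hls-tutor` model, entirely offline.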

Local AI agent for language learning & feedback
    – Runs on HPC, built via aistudio.google.com
    – Offers interactive English practice and assignment feedback
    – Used individually alongside language classes
 
Challenges in STEM remain. Local LLMs still struggle with advanced mathematics. For now, cloud-based tools like gpt.wolfram.com (via ChatGPT) or acetate.ai provide better STEM support, though access may be restricted.
 
Taking a modular, “lego-like” approach, we are developing a scaffolding that combines techniques such as retrieval-augmented generation (RAG), multi-step reasoning, tool integration, and output verification. This allows agents to be tailored to educational goals: avoiding ready-made answers, ensuring accuracy, and adapting to different teaching styles.
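A minimal sketch of how such modular scaffolding might fit together, with independently swappable retrieval, generation, and verification steps. The components below are toy stand-ins for illustration, not the HLS implementation:

```python
# Modular agent pipeline sketch: retrieve -> generate -> verify.
# Each step is a plain function, so modules can be swapped like bricks.
from dataclasses import dataclass
from typing import List


@dataclass
class Document:
    text: str


def retrieve(query: str, corpus: List[Document], k: int = 2) -> List[Document]:
    """Toy retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        corpus, key=lambda d: -len(words & set(d.text.lower().split()))
    )
    return scored[:k]


def generate(query: str, context: List[Document]) -> str:
    """Stub generator: a real module would call a local LLM here."""
    sources = "; ".join(d.text for d in context)
    return f"Answer to '{query}' based on: {sources}"


def verify(answer: str, context: List[Document]) -> bool:
    """Toy verifier: accept only answers grounded in the retrieved context."""
    return any(d.text in answer for d in context)


def pipeline(query: str, corpus: List[Document]) -> str:
    """Chain the modules; reject ungrounded output instead of returning it."""
    context = retrieve(query, corpus)
    answer = generate(query, context)
    if not verify(answer, context):
        raise ValueError("Answer failed verification")
    return answer


if __name__ == "__main__":
    corpus = [
        Document("RAG grounds answers in retrieved sources."),
        Document("Local LLMs keep data private."),
    ]
    print(pipeline("How does RAG ground answers?", corpus))
```

Because each step has a narrow interface, the stub generator could be replaced by a local model call and the verifier by a fact-checking module without touching the rest of the pipeline.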
 
At the share-a-thon, participants can chat with our models and test small local LLMs on their laptops.


From Offloading to Active Learning: AI Coaches for Faculty Development by Christian Coenen (ZHAW) and Tobias Zimmermann (PHZH)

This 40-minute interactive session presents two AI coaching bots designed using Generative Learning Experience (GLX) Design principles to support faculty in developing innovative, student-centered teaching approaches that meaningfully integrate generative AI.

The challenge: As generative AI transforms educational landscapes, faculty need practical support to move beyond superficial AI integration toward pedagogically sound designs that enhance active learning (Freeman et al. 2014) while avoiding the pitfalls of using AI for cognitive offloading (Fan et al. 2025). In faculty development, time for personalized, iterative coaching that adapts to individual teaching contexts and challenges is often scarce.

Enter Generative Learning Experience Design: This pedagogical approach creates active learning experiences supported or enabled by generative AI tools (Coenen 2024). The central goal is to have students actively and meaningfully interact with AI. For example, AI chatbots can represent historical figures, scientific concepts, or characters from case studies, or take on a specific role in group or individual work, such as that of a critical friend or motivational coach. This approach establishes dynamic, adaptive learning environments where students research, practice, and collaborate with AI to deepen their understanding. GLX Design emphasizes student agency and engagement, allowing for more individualized and transfer-oriented learning pathways that adapt to different learning needs and contexts. The concept harnesses the strengths of GenAI in acting as an interactive learning partner that facilitates deeper cognitive processing through active engagement (Moldoveanu & Siemens 2025).

The shared AI Coaching Bots:

1) GLX-Design Coach: This bot coaches faculty through developing Generative Learning Experience (GLX) designs for their specific courses. For example, it may lead a pharmacology lecturer to introduce a more active educational setting where students engage with AI chatbots role-playing as different substances (caffeine, alcohol), fostering a deeper, more transformative understanding of the pharmacological processes to be learned.

2) Learning as Assessment Coach: This bot guides faculty in developing integrated assessment approaches where evaluation becomes an integral component of the entire didactic arrangement, incorporating formative, summative, and self-evaluative elements seamlessly within the learning process (Zimmermann 2024). This approach is in line with GLX Design.

Interactive Session Format – participants will:

  • Experience the coaching bots firsthand through guided interactions
  • Discuss implementation strategies for institutional adoption

Moreover, the discussion might also explore the following aspects:

  • Investigate how AI-powered coaching can support faculty development at scale
  • Share experiences and challenges in integrating AI into teaching practice

Designing Aha-Moments: Human-Powered Techniques for Faculty Development by Sara Petchy (UZH)

How can we help instructors move beyond hearing about good teaching strategies to actually trying them, reflecting on their impact, and deciding how to integrate them into their practice? Some of the most powerful academic development approaches are those that guide participants through an experience first before any theory is explained so that insights come from reflection on what they themselves just felt and did. These are moments when human facilitation shines: setting up the activity, noticing what emerges in the room, and helping participants connect the dots to their own teaching contexts.

This share-a-thon session focuses on experiential workshop techniques that foster this kind of deep, embodied insight. We will first go through a few examples:

  • Feeling Like a Novice: Short exercises that put workshop participants in the shoes of a first-year university student, letting them feel the confusion and cognitive overload that students often face when encountering new concepts taught by disciplinary experts.
  • Contrasting Kinds of Thinking: An interactive challenge where participants complete two very similar tasks that nonetheless require different mental processes: one is smooth and almost automatic, the other surprisingly effortful and frustrating. Comparing the two experiences leads to an eye-opening discussion about cognitive load, attention, and the hidden work students do when processing information.

After these demonstrations, the floor will open for participants to share their own strategies, activities, and tools that create similar “aha moments” in workshops or in their teaching. We will discuss where these techniques work best (and where they don’t), and each of us will leave with a fresh set of “borrowed” best practices from our SFDN peers.

This session directly addresses the conference theme by highlighting a uniquely human strength in academic development: our ability to design experiences that evoke emotion, surprise, and insight, qualities that can’t easily be automated. Participants will leave with new activities they can adapt to their own contexts and a deeper understanding of why these embodied approaches often create more lasting change than explanation alone.


Generative AI in Learning and Assessment: A Practical Tool for Reflection and Dialogue by Henriette Carbonel (Uni Distance)

How can teachers clearly explain to students when generative AI is a valuable aid, when it risks undermining learning, and when it is explicitly prohibited? This workshop introduces a tool designed to support educators in making these distinctions visible and meaningful. It helps clarify which uses of generative AI are encouraged, discouraged, or prohibited, and, crucially, the pedagogical and ethical reasons behind such choices.

Grounded in a literature review and enriched by the outcomes of the Critical AI Literacy for Learning scientific seminar, the tool provides a structured way to reflect and discuss generative AI uses with students.

During the workshop, participants will be invited to explore the tool through collaborative activities and engage in a critical dialogue about its use. At the end of the workshop, participants will leave with a tool they can use directly in their own context, along with practical strategies for framing the role of generative AI in learning and assessment.