Dublin, Ireland, Tuesday 4 March 2025, 9.00-12.30
Summary
Generative Artificial Intelligence applications powered by large language models (LLMs) have significantly influenced education and, in particular, reimagined writing technologies. While LLMs offer huge potential to provide automated writing support to learners, it is also important to identify the challenges they bring to learning, assessment, and critical interaction with AI.
This workshop aims to shape possibilities for writing analytics to promote and assess learning-to-write and writing-to-learn in ways that are appropriate for the generative AI era. In this seventh workshop of the Writing Analytics series, we propose a symposium-style format to identify how the field can evolve in the age of LLMs. In particular, we focus on (case) studies within two topics: (1) using writing analytics to design and evaluate interactive writing support systems and (2) using writing analytics to evaluate human-AI interactions.
To present your work in the workshop, you can submit an extended abstract (500-750 words) using the form on this website by 4 December 2024. Please note that accepted submissions will not be published in the LAK companion proceedings, but will be published and made available through the workshop website.
Important Dates
- 4 Dec 2024: Deadline for submission of workshop abstracts
- 20 Dec 2024: Notification of acceptance
- 20 Jan 2025: Early-bird registration closes at 11:59pm AOE
- 3-7 March 2025: LAK25 Conference
Workshop focus
The advent of Large Language Models (LLMs) has had a significant influence on many aspects of education, resulting in increasing academic interest within the field of learning analytics as well as in education more broadly (Khosravi et al., 2023). LLMs bring ample opportunities as well as challenges, including threats to academic integrity, overreliance, fairness, privacy concerns, and reduced critical thinking (Kasneci et al., 2023; Memarian & Doleck, 2023). While LLMs are increasingly adopted by learners and educators, it is crucial that learning is not compromised (Memarian & Doleck, 2023). In this workshop, we focus specifically on the effects of LLMs on writing, with the aim of identifying how to promote and assess writing in the age of LLMs, aligning with the goals of writing analytics to support ‘learning-to-write’ and ‘writing-to-learn’ effectively in educational contexts (Gibson & Shibani, 2022).
Using LLMs in writing has resulted in various modes of human-AI interaction, including co-authoring with AI and LLM-powered multimodal writing assistance, that envision new forms of writing support (Lee et al., 2024). However, the majority of articles on the use of LLMs still revolve around case studies and opinion pieces (Khosravi et al., 2023; Memarian & Doleck, 2023). There is limited empirical research on the effects of (writer-initiated) use of LLMs and AI-based writing support on learners’ writing, as well as on the contextual factors (including individual differences, task design, course design, and ethical considerations) that might influence their effectiveness.
The small, but increasing, number of evaluation studies on LLMs often still adopts a system-centric view, focusing primarily on the accuracy of the system or on the perceptions of the user, with limited emphasis on the human-AI interactions that evolve during the writing process (Lee et al., 2023). To comprehensively understand learning-to-write and writing-to-learn in the age of LLMs, one needs to focus not only on the written product and user perspectives, but also on the objective user interactions (Shen & Wu, 2023), that is, the (human-AI) writing process. This aligns with the goal of writing analytics, which aims to understand both the writing product and the writing process, as set out in the first workshop (Buckingham Shum et al., 2016).
In this seventh writing analytics workshop, we invite the current SoLAR Writing Analytics community to envision and shape possibilities for the field in the age of LLMs. In particular, we focus our attention on two key directions:
1. How can we design and evaluate intelligent and interactive writing support systems that effectively aid learning-to-write and writing-to-learn in the age of LLMs? How do we define ‘effectiveness’? How can we ensure that learners and educators use these tools effectively? Contributions might include empirical studies on the design and evaluation of collaborative human-AI writing tools (for example ABScribe, Reza et al., 2024), as well as empirical studies focusing on ethical considerations in designing and evaluating such tools, including, for example, trust calibration or the use of explainable AI (see e.g., Shen et al., 2023).
2. How can writing analytics support the evaluation of human-AI interactions? How can we evaluate and understand the evolving use of LLMs over time? How can we deal with non-deterministic LLM output? Contributions might include, for example, studies examining the use of trace analysis, such as keystroke logging and eye-tracking (Lindgren & Sullivan, 2019), authorship visualization (Shibani et al., 2023), and linguistic analysis of sentence histories (Mahlow et al., 2024).
Programme
The half-day workshop will run in a symposium format, including short presentations on the two key directions. The provisional schedule is given below:
9.00 Introductions of workshop organizers and participants
Introduction to the workshop themes
9.30 Part 1: Short presentations on theme 1 ‘using writing analytics to design and evaluate interactive writing support systems’
Interactive discussion on theme 1, including co-creation of shared notes and resources
[COFFEE BREAK]
11.00 Part 2: Short presentations on theme 2 ‘using writing analytics to evaluate human-AI interactions’
Interactive discussion on theme 2, including co-creation of shared notes and resources
12.15 Concluding remarks on the workshop and community engagement among members of the Special Interest Group on Writing Analytics
Organisers
Rianne Conijn, Eindhoven University of Technology, the Netherlands
Rianne Conijn is an assistant professor in the Human-Technology Interaction group at Eindhoven University of Technology, the Netherlands. Her research interests include the analysis and interpretation of (online) learning and writing processes to improve learning and teaching. Current research topics include the use of keystroke logging as an analytics tool, analyzing human-(generative) AI interactions, learning dashboards, and explainable AI. She currently leads the SoLAR Special Interest Group on Writing Analytics (SIGWA).
Antonette Shibani, University of Technology Sydney, Australia
Antonette Shibani is a Senior Lecturer at the Transdisciplinary School (TD School) at the University of Technology Sydney, Australia. Her research includes writing analytics tools, LLMs for writing feedback, and their integration into classrooms for pedagogic use. She uses text analytics to analyze writing and revision behaviors, and studies writers’ critical interaction with automated feedback. She has chaired prior Writing Analytics workshops at ALASI and LAK. She was an elected member of the Society for Learning Analytics Research (SoLAR) executive committee, co-hosted the SoLAR podcast, and co-leads the SoLAR Special Interest Group on Writing Analytics (SIGWA).
Laura Allen, University of Minnesota, USA
Laura Allen is an Associate Professor in the Department of Educational Psychology at the University of Minnesota, USA. Her research seeks to understand how individuals most effectively learn and communicate information through text and discourse.
Simon Buckingham Shum, University of Technology Sydney, Australia
Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, Australia. His current focus is on the future of education, specifically, the contribution of human-centred design of Analytics/AI-powered tools to close the feedback loop to learners and educators, to build the capabilities needed for the future of work and citizenship.
Cerstin Mahlow, ZHAW School of Applied Linguistics, Switzerland
Cerstin Mahlow is Professor of Writing Research at the School of Applied Linguistics of the Zurich University of Applied Sciences (ZHAW). As a computational linguist, her research focuses on the systematic linguistic modeling of empirical data from writing processes and investigates the influence of intelligent tools on writing. As a specialist in higher education didactics and e-learning, she is also interested in approaches for teaching the future skills needed in today’s and tomorrow’s digitally transformed world. She is currently co-coordinator of the Special Interest Group “Writing” (SIG Writing) [https://www.earli.org/sig/sig-12-writing] of the European Association for Research on Learning and Instruction (EARLI), a member of the board of “Girls can Code Switzerland”, and a member of the steering committee of the ACM Symposium on Document Engineering (DocEng).
Submit your abstract
Please submit your extended abstract (500-750 words) by 4 Dec 2024 (AoE) at the latest. We welcome contributions providing theoretical, empirical, or methodological advances and/or critical perspectives in one of the two key directions:
(1) using writing analytics to design and evaluate interactive writing support systems
(2) using writing analytics to evaluate human-AI interactions.
For more information, see the “Workshop Focus” tab.
Submit using the Google Form below:
https://forms.gle/d4JMMmtqHqS43zBZ9