I’ve just returned from a week at Arizona State University, in the SOLET (Science of Learning and Educational Technology) lab, funded by a small NSF Data Consortium Fellowship with Laura Allen. It was an incredibly productive week with some really great researchers – so thank you to all of them (and particularly Laura!). I gave a presentation on Tuesday, with meetings and collaborations over the rest of the week (as below).
So the week involved working on:
- a bid we (at UTS) are drafting with other institutions around our writing analytics work, and the synergies/differences between that work and a parallel proposal from ASU (Monday, with Danielle McNamara and Laura Allen)
- drafting a proposal (somewhat based on this post) around feedback on feedback, and understanding student misalignment between self- and tutor-assessments (Monday with Danielle and Laura, then drafting with Laura on Wednesday, and discussing with Rod Roscoe – who has a particular interest in the misalignment issue)
- Multiple Document Comprehension and source-based writing tasks with Kris Kopp. These tasks are particularly interesting (to both of us) because exploring how students source information in their writing can give insight into how they’re thinking about the sources and the task (and we both have a strong interest in epistemic cognition in this context). Kris and I talked about collaboration around (1) how we can explore sourcing, and the kinds of research such methods would apply to; and (2) a specific potential project exploring variations in source properties and their impact on sourcing. Kris has also been involved in some interesting work on how students understand argumentation and refutation, and on intelligent systems to teach these skills (e.g. here), which is very relevant to my research and teaching interests (as this is a key concern in Arguments, Evidence, and Intuition). (Kris Kopp, Tuesday)
- discussion of sequences of rhetorical moves in student texts, based on some recent analysis and a LAK17 short paper (in submission) (with Danielle and Laura, Tuesday)
- a recent ‘academic integrity’ proposal to investigate students’ understanding of academic integrity practices (more broadly construed than ‘do not plagiarise’) using a source-based writing task. The idea of the proposal was that, by providing students with a known summary of known source documents, we can ask them to “improve the summary” and fix issues with it, to investigate how academic integrity tuition translates into an authentic task. That paradigm would also play out in other contexts where the interest isn’t academic integrity but broader concerns of sourcing (e.g., are certain sources systematically privileged/ignored? – the first sketch after this list gives a toy version of that question). (With Kris Kopp on Tuesday, Danielle and Laura on Monday, and Laura on Wednesday)
- ‘Theory Thursday’: Laura and I spent it discussing (1) a paper idea we have (with Andrew Gibson, at UTS) around ‘writing as a lens onto metacognition’; and (2) drafting a structure for a paper idea on ‘a theory of change for writing analytics’. I then had a one-to-one with Rod Roscoe – who leads the ‘Sustainable Learning and Adaptive Technology for Education‘ (SLATE) lab – about the various projects, particularly around misalignment, and some of the things we’ve been thinking about around ‘analytics literacy’ (and ‘playful interaction’, e.g., in ‘dear-learner‘). Back to theory, I joined the reading group Laura convenes, discussing ‘Creating Language‘.
- On Friday I had a series of back-to-back meetings right up to departure time, targeted at (1) sharing key synergies, (2) seeing where we might build collaboration, and (3) just chatting about research directions:
- Kevin Kent – We chatted about people’s beliefs regarding learning technologies, in relation to (1) the use of videos in MOOCs, and (2) writing tools and their use. One can imagine building resources in this space around ‘beliefs about learning’ (possibly building on epistemic cognition) and their relationship to the use of specific technologies – e.g., what kinds of features do people believe tools can give insight on (if any)? Do people have ‘flexibility’ in their technology use – are they strategic in using technology to augment their cognitive abilities? We also talked a lot about embedding analytics in contexts and activities, e.g. getting teachers to compare human and analytics outputs.
- Tricia Guerrero – We chatted about how to leverage systems (particularly feedback systems) to target particular user groups and their salient needs. This is an issue we grapple with in our UTS work too – we want to target feedback and ensure we can give quality feedback, and sometimes that means giving feedback on low-level features (like spelling and grammar). But there’s an issue nicely illustrated by handwriting: we don’t want to read beautiful prose and only give feedback saying “you should work on your handwriting” (I experienced this), because that deprives people of the higher-level feedback, and potentially reinforces divisions. This led to a nice discussion of how we scaffold feedback (e.g., linking feedback to examples and/or tutorials on the issue), the role of self-assessment (I demo’d REVIEW), and how we translate classroom feedback into automated tutoring systems.
- Kevin Kent, Kathleen Corley, Tricia Guerrero, and Melissa Stone – We chatted about ‘learning analytics literacy’ and how we build educators’ adoption of, and engagement with, automated writing evaluation (AWE) systems. This is a very live area for us at UTS, and it was great to discuss ‘beliefs about AWE’, visualisation of data, and development of resources to support educators a bit more. The issue here is in thinking about how we talk about what analytics do, and how we develop ‘learning patterns’ and organisational infrastructure to deploy analytics at scale in real contexts. The SOLET lab have developed some nice resources on this for the W-Pal and iSTART systems. Something I hadn’t been as aware of was how toxic ‘assessment’ is as a word – I think my experience of teaching and working in UK and Australian HE is that people are so aware of ‘formative assessment’ and ‘assessment for learning’ that the name isn’t so toxic now. In any case, one of the key issues for me in thinking about the use of analytics is how we build good quality assessment and assessment literacy (i.e., the capability to design assessments that give us the information we need to target pedagogic intervention). In that context, it’s important to consider, e.g., designing dashboards to show how well cohorts are doing and which things they need more tuition on versus the things they’re mostly nailing. In discussing how we get educators to use relatively low-level (easyish) tools that aren’t just ‘point and click’ but might help address particular needs and concerns they have, Kathleen pointed me to ‘Using Corpora in the Language Classroom’ (which I’ve now bought), which is targeted at engaging educators in using corpus linguistics in their classroom contexts (the second sketch after this list gives a flavour of that kind of tool). I think the huge value of this kind of approach is:
- It builds understanding of how these systems work and can be used
- It applies systems in real contexts, and encourages researchers to develop ‘learning patterns’ that describe how their tools might be deployed across diverse contexts that share common features
- It builds research capacity, by engaging practitioners in their own research and (hopefully) building their confidence in knowing which kinds of questions can be addressed by which kinds of tools (even if the work of developing those tools means looking to researchers)
- Cecile Perret – We chatted about our shared interests in multiple document comprehension tasks and some of the theoretical issues around them. We’re both interested in how people find and evaluate information, and how to design studies around this. I used my PhD work to point out some study designs that are best avoided.
- Mike Hogan – Mike’s actually an Irish researcher who was also over visiting colleagues. We talked about collective intelligence, the use of improvement science to build participatory collective intelligence and analytics literacies in transdisciplinary learning analytics teams, and how we could share resources in the area.
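To make the sourcing thread above a little more concrete (as flagged in the academic integrity item), here’s a minimal sketch of the kind of analysis one might run: checking which source documents a student summary shares word trigrams with. To be clear, everything here – the trigram measure, the names, the toy texts – is my own illustrative assumption, not the method we discussed at ASU or anything from the SOLET tools.

```python
# Illustrative sketch only: a crude look at which sources a student
# summary draws on, via shared word trigrams. All names and texts
# here are hypothetical.

def trigrams(text):
    """Return the set of word trigrams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def source_overlap(summary, sources):
    """Share of the summary's trigrams that also appear in each source."""
    summary_grams = trigrams(summary)
    if not summary_grams:
        return {name: 0.0 for name in sources}
    return {
        name: len(summary_grams & trigrams(text)) / len(summary_grams)
        for name, text in sources.items()
    }

# Hypothetical usage: a heavily skewed profile might hint that one
# source is systematically privileged and another ignored.
sources = {
    "source_a": "the first document argues that feedback supports learning",
    "source_b": "the second document questions whether feedback transfers",
}
summary = "the first document argues that feedback supports learning overall"
print(source_overlap(summary, sources))
```

A real analysis would of course need to handle paraphrase rather than just verbatim overlap – and that is exactly where the interesting questions about students’ sourcing decisions live.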
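And on the corpus linguistics point, here’s a sketch of the sort of ‘easyish’ tool an educator might run over a set of student texts: a tiny keyword-in-context (KWIC) concordance. Again, the function and the toy corpus are a hypothetical illustration on my part, not anything from the book or the SOLET lab.

```python
# Illustrative sketch only: a minimal keyword-in-context (KWIC)
# concordance over a toy corpus. All texts here are hypothetical.

def kwic(texts, keyword, window=4):
    """Yield (left context, keyword, right context) for each match."""
    for text in texts:
        words = text.split()
        for i, word in enumerate(words):
            if word.lower().strip(".,;:!?") == keyword.lower():
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                yield left, word, right

corpus = [
    "The evidence suggests that the claim is well supported.",
    "Without evidence, the argument collapses into intuition.",
]
for left, hit, right in kwic(corpus, "evidence"):
    print(f"{left:>30} | {hit} | {right}")
```

Even something this small makes visible how students deploy a word like ‘evidence’ across a cohort’s writing – the kind of question-driven tool use that I think builds the analytics literacy discussed above.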
Possible action areas:
- ‘Writing analytics beliefs’ and/or ‘learning technology beliefs’ (particularly in flipped learning contexts) – could we develop an instrument (a ‘beliefs about AWE’ scale?) to help understand the perspectives people hold on analytics tools, both to support building analytics literacies and to understand misconceptions regarding the potential of such technologies (Kevin and Rod)
- ‘A framework for feedback in AWE systems’ – discussing options for types of automated feedback, and translation of classroom feedback to tutoring systems. (Tricia)
- Feedback-feedback grant – (Tricia, Rod, Laura)
- LAK17 workshop and writing analytics literacy – look at developing resources (drawing on the corpus linguistics work) to support educators in the use of analytics. Think about how this ties into the ‘theory of change’ paper – why does educators having greater analytics literacy matter for learning? (Kevin and Laura)
- General collaboration around key grants (feedback-feedback, source-based writing – social epistemology & academic integrity) + papers