I came across this 2019 review article on writing tools and wanted to make some quick notes to come back to. I'm following the usual format I use for article notes, which summarizes the gist of a paper with short descriptions under the respective headers. I also had a few thoughts on what the paper missed, which I describe along the way.
Reference:
Strobl, C., Ailhaud, E., Benetos, K., Devitt, A., Kruse, O., Proske, A., & Rapp, C. (2019). Digital support for academic writing: A review of technologies and pedagogies. Computers & Education, 131, 33–48.
Aim:
- To present a review of the technologies designed to support writing instruction in secondary and higher education.
Method:
Data collection:
- Writing tools were collected from two sources: 1) a systematic search in literature databases and search engines, and 2) responses to an online survey sent to research communities on writing instruction.
- 44 tools selected for fine-grained analysis.
Tools selected:
Academic Vocabulary
Article Writing Tool
AWSuM
C-SAW (Computer-Supported Argumentative Writing)
Calliope
Carnegie Mellon prose style tool
CohVis
Corpuscript
Correct English (Vantage Learning)
Criterion
De-Jargonizer
Deutsch-uni online
DicSci (Dictionary of Verbs in Science)
Editor (Serenity Software)
escribo
Essay Jack
Essay Map
Gingko
Grammark
Klinkende Taal
Lärka
Marking Mate (standard version)
My Access!
Open Essayist
Paper rater
PEG Writing
Rationale
RedacText
Research Writing Tutor
Right Writer
SWAN (Scientific Writing Assistant)
Scribo – Research Question and Literature Search Tool
StyleWriter
Thesis Writer
Turnitin (Revision Assistant)
White Smoke
Write&Improve
WriteCheck
Writefull
Exclusion criteria:
- Tools intended solely for primary and secondary education were excluded, since the main focus of the paper was on higher education.
- Tools with a sole focus on features like grammar, spelling, style, or plagiarism detection were excluded.
- Technologies without an instructional focus, such as pure online text editors, platforms, or content management systems, were also excluded.
I have concerns about the way tools were included in this analysis, particularly because some key tools like AWA/AcaWriter, Writing Mentor, Essay Critic, and Grammarly were not considered. This is one of the main limitations I found in the study. It is not clear how the tools were selected in the systematic search, as there is no information about the databases and keywords used for the search. How the tools focusing on higher education were picked is not explained either.
Coding Scheme:
The coding framework consisted of 26 features subdivided into four categories (refer to the appendix of the article for detailed descriptions). It extended the coding framework proposed by Allen et al. (2016) and was developed in a three-step iterative process with feedback from all seven authors.
1. General specifications:
Public: Educational level
Target language
Support language
Public: L1/L2
Tool category
Genre
Domain
Policy
Validation efforts
2. Instructional approaches:
Text level focus
Instructional setting
Targeted subtask of the writing process
Targeted skills and strategies
Instructional practice
Digital interaction support
Adaptability to preferences of the learner
Adaptability to preferences of the teacher/tutor
3. Technical specifications:
Technology used
Backend data
Context
4. Feedback specifications:
Feedback Source
Feedback Focus
Tutoring Component
Specificity level
Provision
Delivery
Analysis:
- Coded data (writing tool features) organised in MS Excel.
- Descriptive and quantitative analysis in SPSS.
- Data visualised in tables and bar charts.
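To make this analysis step a bit more concrete, here is a minimal sketch of the kind of descriptive summary the authors describe, done in Python rather than SPSS. The tool names, column names, and coded values below are invented placeholders, not the paper's actual coding.

```python
# Illustrative sketch only: placeholder tools and invented codes,
# showing the shape of a frequency/cross-tabulation analysis.
import pandas as pd

coded = pd.DataFrame(
    {
        "tool": ["Tool A", "Tool B", "Tool C", "Tool D"],
        "tool_category": ["AWE", "AWE", "ITS", "IWP"],
        "subtask_focus": ["revising", "revising", "planning", "drafting"],
    }
)

# Frequency counts per coded feature, akin to the descriptive statistics reported
print(coded["tool_category"].value_counts())

# Cross-tabulation of two coded features
print(pd.crosstab(coded["tool_category"], coded["subtask_focus"]))
```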
I wish the authors had added more detail on how the coding was actually done. The paper does not report any inter-rater reliability statistics or record conflicts that arose during the coding process. I find this a limitation of the study because there is quite a lot of room for interpretation in how the tools are perceived in context, and I would expect more discussion of this crucial point of why a tool was coded one way over another.
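Reporting agreement would not have been onerous either; something like Cohen's kappa over two coders' labels for a feature would already give readers a sense of coding reliability. A quick sketch, with invented labels purely for illustration:

```python
# Sketch: Cohen's kappa for inter-rater agreement on one coded feature.
# The category labels below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

coder_a = ["AWE", "AWE", "IWP", "ITS", "AWE", "IWP"]
coder_b = ["AWE", "ITS", "IWP", "ITS", "AWE", "AWE"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```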
Results:
This section reported quantitative results for the 44 tools coded with the features from the framework discussed above. The article contains overview tables with the counts for each code, which you can check for reference. I will highlight the highest- and lowest-ranked features and discuss the significant findings.
A summary of the results for the general specifications is below:
- 17 out of 44 tools targeted higher education
- 70% of tools offered support in English, with a few others offering support for other languages such as German.
- About 27 tools provided automated writing feedback, one of them in combination with scoring. 16 tools were IWPs that did not analyse texts to provide feedback, but only offered a platform to facilitate writing.
- Most tools (37/45) did not specify a particular genre of writing and provided general support. The rest mostly targeted essays and research papers.
- Varied usage policies were observed: 14 commercial tools offered paid access, 16 open-access tools provided limited features without premium accounts, and 12 others were restricted to institutional or registered users.
- Only half of the tools reported any validation in a publicly available form.
Results of instructional approaches are below:
- Among the different writing processes like prewriting, planning, drafting, revising, and editing, most tools (24/44) targeted editing and revising tasks.
- 22/44 tools targeted the micro text level only (words, phrases, or sentences), which correlated with editing and revising tasks, while 11/44 focused on the macro level (paragraphs, whole text) and addressed planning and drafting tasks.
- An uneven distribution of targeted skill domains was found: factual (15), conceptual (9), procedural (3), and metacognitive (2), where the latter two were targeted as secondary skills in combination with the knowledge domains.
- A statistically significant relationship was observed between the subtask focus and the skills and strategies supported.
- 20/44 tools were intended for self-directed use, and six tools were specifically intended for supporting teachers and students during and after class instruction.
- These instructional practices tended to correlate with the intended instructional settings; for example, a scaffolding approach was tied to self-directed use. 15/44 tools were meant for use in classroom settings, but most tools overall (31/44) did not allow adaptations by teachers.
- Learner adaptability was found less often in AWE tools (17/27) than in ITS and IWP tools (12/16), which offered options for learners to adapt the tool to individual preferences.
- 8 tools allowed teacher-student interaction, 6 allowed both learner-learner and learner-teacher interaction, and one allowed learner-learner interaction only. The remaining tools (28/44) did not facilitate any interaction between users. This human-human interaction feature had significant relationships with the supported writing process stage and instructional practice: most tools meant to scaffold writing only allowed interaction with the system.
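The article reports these associations between coded features (e.g. subtask focus vs. supported skills, or interaction support vs. instructional practice) as statistically significant without giving the test details. With categorical codes like these, a chi-square test on a contingency table is the kind of check one would expect; the sketch below uses invented counts and is only my assumption about how such a relationship might be tested.

```python
# Sketch: chi-square test of association between two coded categorical
# features, e.g. interaction support vs. instructional practice.
# The contingency table counts are invented for illustration.
from scipy.stats import chi2_contingency

#                  scaffolding  tutoring
# system-only:        14           4
# human-human:         3           9
table = [[14, 4],
         [3, 9]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```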
Technical specification results are as follows:
- Most tools did not reveal the underlying technology and/or back-end corpus used. Among those that did, 9 used some level of Natural Language Processing (NLP), 5 used list-based pattern matching, 6 used corpora, and 6 used word lists. However, the quality of the linguistic or corpus analysis was not evident.
- Most tools (39/44) were web-based or had a web-based option, and 5 were standalone applications.
- A few tools (5/44) had additional visual and graphical representations.
Feedback specification results are below:
- Only 28 tools provided some kind of automated feedback, and 6 of them allowed for additional human feedback.
- Feedback was provided mostly on the written product (17), with a few other tools (7) offering guidance in managing the writing process, and 3 on self-regulation.
- Feedback provided was mainly on language use, genre, content and structure, and strategies (I’m not sure about the exact numbers here since the table did not match the text in the article).
- Most feedback was given all at once, after being requested by the student, and feedback on language and text content/structure was most prominent.
- 13 tools delivered feedback at varied levels of granularity, while others worked at specific levels such as word (2), sentence (4), paragraph (3), and section (3).
- A few tools offered other modes of feedback, using visual elements like colour codes, clickable symbols, graphic representations, and indices in addition to textual feedback.
Throughout the results section, there was little detail explaining the tools, since very few tools were named beyond the numbers. I found this quite hard to follow, as I had to refer back to the coded file to get adequate information on the tools being discussed.
Discussion and conclusion:
While the updated coding framework is a significant contribution of the article, it is, as noted by the authors, limited in representing the economic and technological dimensions of writing support tools. The design, architecture, and evaluation of such tools could not be fully described due to inconsistently available information. Similarly, the usability of the tools was not tested. I would also argue that the learning theories behind the tools (if any) should be examined so that the instructional focus is clear. The distinction between research-evaluated tools and commercial tools without explicit evaluation could also be made clearer.
The paper provides a snapshot of existing writing support tools, and I have no doubt of its usefulness. The coded data set is a valuable asset for the community (Link: http://hdl.handle.net/2262/85522). However, a key concern, as discussed earlier, is the omission of key tools from this review. Furthermore, while the numerical summaries provide a gist of the writing tools, I missed a deeper critical look at the tools and the features that are helpful for learners. An interesting path for a future literature review could be to look in more detail at how such writing tools are applied in practice and evaluated in real scenarios. I'm also not too convinced by the new category of IWP tools introduced, which act as platforms for writing. While a few of them have a clear instructional focus, others are comparable to platforms like Google Docs, MS Word, and Grammarly that facilitate writing; in fact, some of them might be of lesser value, since those tools also provide some forms of feedback to help the user improve their writing.
I know this is a pretty long post, so if you made it to the end, well done! See you in my next one.