I recently attended CIKM in Melbourne, and heard, and followed up on, a couple of ideas around ‘assertion identification’ – that is, spotting claims that might be described as factive or assertive: claims which either assert facts, or assume them. These sorts of sentences are interesting for lots of reasons, including (from a not at all formal review of the literature) to:
- help identify claims to be ‘fact checked’ in political debates (per the ClaimBuster) – i.e. claims that assert factual statements (immigration is up by 6 billion %, etc.)
- help identify bias or emotive description where sentiment analysis might indicate sentiment terms alongside assertions (e.g. in news reports, per (broadly) this paper).
- help identify fact based versus opinion/subjectively based reviews and posts in product forums (e.g. here)
- identify bias (lack of ‘Neutral Point of View‘/NPOV) in Wikipedia edits (here)
- identify emotional and informational support in online health communities (an actual paper title – ‘Identifying Emotional and Informational Support in Online Health Communities’ – here)
- and other general work on distinguishing fact and opinion (e.g. here)
Note, the point is not necessarily to identify the truth (or even truthy) status of the utterances, but to distinguish between utterances which make (or assume) assertions of factual content, and those that do not. Further analysis might then focus on whether assertive or factive statements do or do not co-occur with other statement types (e.g. sentiments, or features such as citations). Or, as in the ClaimBuster case, particular propositions might be marked for further attention by human assessors (fact checkers in that case).
I think there’s potential here for something interesting in the education context. There is a class of writing which might (context dependent) be problematic, including:
- Sentences which fail to assert a (factual) claim, perhaps characterised as ‘waffle’
- Sentences which assert a factual claim, but fail to give a citation
- Sentences which assume a factual claim without actually stating it
- Sentences which are factive and emotive, possibly suggesting bias or inappropriate evaluative language
Of course, it’s also possible that in some writing contexts some students will identify more inter-related (or not) claims than others, and that text length won’t always track this (i.e. some texts waffle on without saying very much). So, as a very preliminary thought, we can imagine a tool which might parse text, to help people identify places where they are:
- asserting claims
- asserting a claim, but missing a citation (which should be provided)
- not (apparently) asserting anything, despite wishing to
- asserting, but providing no analysis
- asserting, accompanied by some sentiment (which may or may not be appropriate, context dependent)
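As a toy illustration of how such a tool might begin, here is a minimal rule-based sketch in Python. Everything in it is a made-up placeholder for illustration – the tiny hedge/sentiment lexicons, the citation regex, and the tag names are my assumptions, not the method of any of the systems cited above, which use trained classifiers rather than word lists:

```python
import re

# Toy lexicons -- illustrative placeholders only, far too small for real use.
HEDGE_WORDS = {"arguably", "perhaps", "possibly", "generally", "somewhat"}
SENTIMENT_WORDS = {"terrible", "wonderful", "shocking", "great", "awful"}
ASSERTIVE_VERBS = {"is", "are", "was", "were", "shows", "demonstrates", "rose", "increased"}

# Very rough citation spotting: "(Smith, 2014)" or "[3]" styles.
CITATION_PATTERN = re.compile(r"\(\s*[A-Z][A-Za-z-]+,?\s+\d{4}\s*\)|\[\d+\]")

def tag_sentence(sentence: str) -> list[str]:
    """Return coarse feedback tags for one sentence."""
    tokens = {t.lower().strip(".,;:!?") for t in sentence.split()}
    tags = []
    if tokens & ASSERTIVE_VERBS:
        tags.append("asserts-claim")
        if not CITATION_PATTERN.search(sentence):
            tags.append("missing-citation")
        if tokens & SENTIMENT_WORDS:
            tags.append("emotive")
    else:
        tags.append("no-assertion")
    if tokens & HEDGE_WORDS:
        tags.append("hedged/waffle")
    return tags
```

For example, `tag_sentence("Immigration rose by 6 billion percent.")` would flag an uncited assertion, while a sentence full of hedges and no assertive verb would come back as waffle. A real version would obviously need a proper parser and a trained model, but even this crude shape suggests how the per-sentence feedback categories above could be surfaced.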
It’s a simpler model than some of the other things we’re working on in UTS:CIC (for example, it gives no analysis of the broader rhetorical structure of a text), but in being simple, it might provide more directly actionable feedback.