Wednesday, May 28, 2014

Why can't children read... Dickens?

This post arose out of a tweet I read this morning which said that ‘a friend had to read the first page of Oliver Twist aloud to university students because it was too difficult for them’. Thinking back to my time teaching literature courses at university, I remember that students on one of those courses (I mention no names!) had considerable difficulty in reading complex novels (Rushdie, Naipaul, Jean Rhys) and the theoretical texts that went with the course. How did I know? They openly admitted as much.

That should, if you think about it, give us a clue as to why there is this persistent problem in getting children to read canonical works: it extends right back to the first years of schooling. Children need to be taught to read and spell to a very high degree of proficiency by the end of Key Stage 1. When this is done well, children get off to a good start: reading is something they derive pleasure from and have success with. If well-trained teachers in Key Stage 2 then continue to fine-tune reading and spelling skills, teaching many of the less frequent, more obscure sound-spelling correspondences and ever longer polysyllabic words, children find, with practice, that reading becomes more and more fluent. This matters because they are now at the stage of reading to learn, and fluency guarantees direct access to the meaning on the page. In fact, reading should become so fluent that, unless a particular word contains a less frequent sound-spelling correspondence or is not in the reader’s spoken vocabulary, the process of decoding slips beneath the level of conscious attention.

Interestingly, Jeanne Chall, an expert in ‘readability’, demonstrated that during the period 1920-1960, when sight-word and meaning-based approaches were more common, ‘the number of different words in primary reading textbooks decreased substantially...In contrast, from the late 1960s to the early 1980s, a time when decoding-based methods were more popular, the number of different words in primary reading books increased’*!

That isn’t to say that there aren’t other factors at work. As suggested above, vocabulary difficulty is also likely to be a strong and consistent factor in predicting text difficulty. It is measured in two ways: the first, by the frequency of words in print; the second, by the number of ‘new’ and/or ‘difficult’ words introduced in a text and how often they are repeated. Beyond that, syntactic features such as the length of sentences, cohesion and the complexity of sentences - the presence or absence of embedded clauses and prepositional phrases - are also aspects for consideration.
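For the curious, here is a minimal Python sketch of how the second vocabulary measure and the sentence-length proxy might be counted within a single text. The function name and the choice of proxies are my own; real readability research measures word frequency against standardised corpus frequency lists, which a toy example like this cannot do:

```python
import re
from collections import Counter

def difficulty_indicators(text: str) -> dict:
    """Crude within-text proxies for the difficulty factors described above."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    freq = Counter(words)
    return {
        # how many different words the reader must cope with
        "distinct_words": len(freq),
        # 'new' words that appear only once get no reinforcement through repetition
        "words_used_once": sum(1 for c in freq.values() if c == 1),
        # a rough syntactic proxy: average sentence length in words
        "avg_sentence_length": len(words) / len(sentences),
    }
```

Running it on a short passage of repetitive, simple prose versus a paragraph of Dickens makes the contrast in vocabulary load immediately visible.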

Some would argue that the proliferation of readability formulas has been responsible for the long-standing and steady reduction in text difficulty: publishing houses measured readability and deliberately reduced the level of text difficulty. Chall et al. (1977) discovered that, between the 1940s and 1970s, ‘social studies, literature, grammar and composition textbooks’ had all diminished in ‘difficulty on measures such as readability scores, maturity level, question complexity, and ratio of illustrations to text’*. By contrast, one rarely hears either primary or secondary teachers talk about readability, with most secondary teachers almost exclusively preoccupied with filleting everything for meaning.

Scores can tell us how difficult a text is but not how difficult a text should be. The most common way of establishing whether a text is well matched to a reader is a test of reading comprehension. If a pupil can successfully answer a series of multiple-choice questions on the main ideas being conveyed, on some of the detail in the text and on inference, the match is thought to be optimal. Bormuth (1975) put the required success rate at 75%; Thorndike (1916) put it at 80%. I wonder how many teachers still use regular comprehension quizzes.

Going back to Dickens: on the Flesch Reading Ease formula, on which a higher score indicates easier readability and scores usually range from 0 to 100, the first paragraph of Dickens’s Oliver Twist scored -10! Such a text would therefore be regarded as very difficult to read. [God knows what Bleak House would score!] Using Readability-Score.com, a random paragraph taken from Philip Pullman’s The Tiger in the Well scored 88.5 on the Flesch Reading Ease scale, and at an average grade level of 4.5 in the USA (UK Year 5). Stormbreaker by Anthony Horowitz scored 82.9, at an average grade level of 6.1 (UK Year 7). An excerpt from The Hunger Games scored 77.3, at an average grade level of 7.2 (UK Year 8). And, you might be interested to know, Of Mice and Men scored 82.5, at an average grade level of 7 (UK Year 8). As is obvious, a fairly accomplished reader will hardly break a sweat reading any of the more contemporary works.
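For anyone who wants to experiment with their own texts, the 0-to-100 score quoted here is the Flesch Reading Ease, and the formula itself is simple; the hard part is counting syllables. A minimal Python sketch follows (the vowel-group syllable counter is a crude heuristic of my own; online tools use more sophisticated counters, so their scores will differ slightly):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, discounting a silent final 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short sentences of common monosyllables push the score up towards (or even past) 100; long sentences stuffed with polysyllabic words drive it down, which is how a paragraph of Dickens can end up below zero.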

Going on my own experience, most modern teenagers who do read for pleasure seem to read books that rarely progress beyond the Year 8/Year 9 level of readability. Oh yes, they love them and gobble them up. One of my own daughters read three John Green (The Fault in Our Stars) books in a matter of days and it was a delight to see her so engrossed. But, if they stick at this point, their progress arrests! Is it any wonder, then, that when confronted by novels written in the nineteenth century, or even those written in the first half of the twentieth, they find them so daunting that they don’t even try to tackle them?

If you run many of the kinds of books those teenagers who do read are reading through a readability calculator, you will find what Jeanne Chall found: the further forward in time you go, the easier children’s novels and textbooks are to read. So, what’s the answer? It’s very sad to have to say this, but if pupils are already years behind their chronological ages by the time they enter secondary school, it’s probably too late, given the lack of expertise in and commitment to teaching pupils to read in many secondary schools. However, even pupils already doing well need pushing to make continual improvements in performance. Too often the velleities of secondary school expectation are to blame. Thus, all children entering secondary school should be screened for reading, and teachers in every subject made aware of their pupils’ abilities and made responsible for developing them. Textbooks of all kinds should be made progressively more challenging in terms of content, vocabulary and sentence complexity. Otherwise fewer and fewer children will be able to read a wide and challenging range of imaginative and informational texts.

* Quotations are taken from Chall, J. & Conard, S.S., Should Textbooks Challenge Students? The Case for Easier or Harder Textbooks (1991)

Friday, May 16, 2014

How valid is the phonics screening check?

The Journal of Research in Reading has just published an important and timely paper on the government’s phonics screening check, ‘Validity and sensitivity of the phonics screening check: implications for practice’ (Duff, F.J., Mengoni, S.E., Bailey, A.M. and Snowling, M.J.).
It asks two ‘critical’ questions: first, how well do scores on the screening check ‘correlate with reading skills measured by objective tests’? And, second, ‘is the check sensitive?’, that is, how sensitive is the check in detecting children ‘showing early signs of being at-risk of encountering a reading difficulty’?
The study involved eight primary schools in York and included 292 children. Aside from the screening check, an array of other tests was administered. These included school-based assessments, class spelling tests, individualised word reading, comprehension, nonword reading and phonological awareness tests.
So, how valid is the check? Does it measure what it claims to measure? The authors conclude that the check is ‘a highly valid measure of children’s phonics skills’. Moreover, the check ‘showed convergent validity by correlating strongly with other measures of phonics skills and with broader measures of reading’. The latter include ‘single-word reading accuracy, prose reading accuracy and comprehension’. This should provide strong justification for the government’s decision to introduce the check, though, as Hans Eysenck once remarked, ‘[i]deological thinking is not easily swayed by factual evidence’*.
The authors also agree with previous studies in concurring that ‘a rigorous assessment of phonics skill is important for early identification of children at risk of reading difficulties’, which they define here as ‘not yet attaining the phonics phase expected by the end of Year 1 (i.e., not attaining phase 5)’.
An unexpected boon from the study was that there was ‘a slight tendency to overestimate the prevalence of at-risk readers (as compared with standardised tests of reading accuracy and fluency)’, which the authors, in my view rightly, contend to be ‘a favourable property for a screening instrument’.
Where the authors are more equivocal is around the issue of whether the check is necessary. Although they conclude that it is valid, they also suggest that, where teachers are well trained ‘in the teaching and assessment of phonics, their judgements are sufficient for the purpose’. They go further and add that ‘the use of resources to better equip teachers to conduct ongoing phonic assessments would be more cost-effective, not least because this would place them in the best position to intervene before reading difficulties set in’.
Although it is hard to disagree with the proposition that there wouldn’t be a need for a screening check if teachers were sufficiently well trained to monitor and assess children’s abilities and capabilities in regard to their phonics knowledge and understanding, the authors seem to be ignoring a number of important findings. As Jeanne Chall revealed in her seminal book Learning to Read: The Great Debate (1967), teachers have a strong tendency to be eclectic. They find it very difficult to abandon previous approaches, many elements of which can often be seen to run counter to the principles of a new programme. Neither do they easily relinquish their old pedagogical ideologies unless given training that provides a clear rationale for what it is they do and the way in which they do it.
More recently, the NFER report ‘Phonics screening check evaluation: Research report’ (May 2014) bore this out: ‘Even amongst those who are strongly supportive of phonics,’ it reported, ‘there was a firm conviction that other strategies were of equal value and that phonics as a method of teaching reading was most successful when used in conjunction with other techniques’.
As was made clear by a number of respondents to the survey carried out by NFER, there is still a huge amount of confusion, particularly around the areas of decoding and comprehending, in the minds of many teachers. Indeed, one percipient teacher remarked, ‘I think the moment you start to use other methods, you aren’t actually doing synthetic phonics’.
Another rather conspicuous omission from the paper was any comment on the match-funding programme initiated by the government and the appalling disclosure that over 90% of allocated funding had been spent by schools on resources (meaning mainly books) and not on training.
Two things: why is it that so few research articles on the teaching of reading and spelling ever quote from or comment on the work of Diane McGuinness? And why is there such a disconnect between the practitioners out in the field training teachers (nearly 12,000 alone in the case of Sounds-Write) and academic researchers in our universities? In case you’re listening, there is much better stuff out there than the insipid and meagre diet doled out to many children in the form of Letters and Sounds.

* Quoted from Robert Peal's Progressively Worse: The burden of bad ideas in British schools, p.62.