Tuesday, October 27, 2015

Meaning no harm...?

Many years ago, I developed a taste for the novels of Graham Greene and, although I didn’t find his tale The Quiet American (1955) as enthralling as some of his others, a phrase from the book has always reverberated in my mind. The novel is a complex meditation on the shifting order of power relations in what was then Indo-China. Greene, in the context of growing American involvement in Vietnam (post Dien Bien Phu), writes that “Innocence is like a dumb leper who has lost his bell, wandering the world, meaning no harm”, the implication being that innocence is, in fact, capable of doing great harm. The 'innocent' in this labyrinthine web of foreign intrigue is Pyle, an American agent, who is emblematic of what Greene considers to be the naiveté of the new kid on the international block, the United States, whose intervention in the Far East causes, so the book alleges, untold harm and suffering to the Vietnamese.
When Greene wrote the book, the fear of contracting leprosy was terrifying – it wasn’t until the 1940s that a cure was finally developed – and the prospect of being consigned to a leper colony was an ever-present horror for those living and working in many regions of Africa, India and South America.
Similarly, this kind of dangerous innocence reminds me of the defenders of whole language and its many manifestations, chief among them mixed methods. They infect the gullible and unwary with the bacterium of illiteracy. As with leprosy, the victims of the contamination are often unaware of the symptoms, which can remain latent for many years. Unlike leprosy, however, there has always been a cure for the disease of illiteracy. It was developed by the Sumerians around five thousand years ago. Sadly, like the quacks down the ages offering curatives for leprosy, innocence in the guise of quick and easy answers – "Look at the first letter and guess", or "What do you think it says from the picture?", or "What do you think it might be from its position in the sentence?" – to the difficulties posed by the English alphabetic code has damaged the teaching of universal literacy by promoting methods that promise everything while delivering nothing.
In 2006, I was working with a school that was collecting for CAFOD, an organisation that offers aid to underdeveloped countries. In this particular year, the school was collecting money for the victims of leprosy, a chronic infection caused by Mycobacterium leprae and Mycobacterium lepromatosis.

As I was teaching a Year 2 class how to read and spell polysyllabic words, I thought it would be helpful to make use of the fact that they were collecting for sufferers from leprosy and integrate it into our phonics lesson. Only a matter of weeks before, I’d happened to read an article in the New Scientist on this very subject. The article explained that there were two main types of the disease: paucibacillary and multibacillary leprosy.

After giving a short talk about the disease and how it had taken until the 1940s to find a cure, I wrote the words ‘multibacillary’ and ‘paucibacillary’ on the board. We talked about what the words meant. As readers will readily work out, ‘bacillus’ is the Latinate word for a germ or pathogen, ‘multi-’ means many, while ‘pauci-’ comes from the Latin ‘paucus’, meaning ‘few’ or ‘little’, from which we get the word ‘paucity’. Armed with this knowledge, the children quickly suggested that ‘paucibacillary’ leprosy might be easier to cure than ‘multibacillary’ leprosy. They were right!
Having established meaning, we turned our attention to the structure and spelling of the words. In the case of ‘paucibacillary’, I asked them what syllables they could hear. They agreed to divide it as follows: pau | ci | ba | ci | lla | ry. Next, we looked at the individual sound-spelling correspondences within each syllable. Thus, /p/ /or/ | /s/ /i/ | /b/ /a/ | /s/ /i/ | /l/ /a/ | /r/ /ee/. And then we talked about what might or might not be difficult to spell were we to need to write the word.
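For those who like to see the structure laid out explicitly, the segmentation we did orally can be captured as a simple mapping. This is only an illustrative sketch (in Python, with entirely hypothetical names); the sound notation follows the slashes used above:

```python
# Each syllable of 'paucibacillary' paired with its
# sound-spelling correspondences (sound -> spelling).
paucibacillary = [
    ("pau", [("/p/", "p"), ("/or/", "au")]),
    ("ci",  [("/s/", "c"), ("/i/", "i")]),
    ("ba",  [("/b/", "b"), ("/a/", "a")]),
    ("ci",  [("/s/", "c"), ("/i/", "i")]),
    ("lla", [("/l/", "ll"), ("/a/", "a")]),
    ("ry",  [("/r/", "r"), ("/ee/", "y")]),
]

# Joining the spellings in order reconstructs the word exactly,
# which is the whole point: the spellings encode the sounds.
word = "".join(sp for _, pairs in paucibacillary for _, sp in pairs)
print(word)  # -> paucibacillary
```

Notice that every letter in the word is accounted for by some sound: there is nothing left over to guess at.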
Most of the children thought that the spelling au for /or/ would be the one they’d have to think about and we analogised it by linking it to other words with the same sound-spelling correspondence. They suggested ‘August’, ‘autumn’, and ‘Paul’. One or two other children felt that the spelling of the sound /s/ might need to be noted, though, after already having been taught explicitly the various ways of spelling /s/, most thought it wouldn’t be a problem.
After pronouncing the word precisely, syllable by syllable, we all wrote it, saying the sounds in each syllable as we wrote them and leaving gaps between the syllables. Then we repeated the process, writing the word as it would normally be written, i.e. without gaps.
Two weeks later, in the context of the fundraising, we asked the same class of children to write the words again. Three quarters of the children spelt the words correctly and the quarter that didn’t spell them correctly spelt them with plausible (in other words ‘readable’) spellings, spelling the /or/ as or and /s/ as s.
Phonics, as my friend Debbie Hepplewhite has often said, is not something for teaching children in the early years alone. It is an approach that enables people of all ages to read and, potentially, to spell any word in the English language. If children as young as seven can read and spell words like ‘paucibacillary’, they can learn to read and spell anything, however complex it might seem. And yet we still have supposedly intelligent adults telling us that words like ‘have’, ‘said’ and ‘was’ cannot be decoded.
The only constant in the writing system is the sounds of the language. It is the sounds that drive the code; the spellings are the symbolic representations of those sounds. This is why whether a person is four or eighty-four, phonics provides well substantiated answers to the problems of reading and spelling in the first instance.
As with Pyle, of whom Fowler, the story's 'protagonist' says, “I never knew a man who had better motives for all the trouble he caused”, so it is with the mixed methods and whole language 'innocents' who peddle their self deceptions in an ostensibly noble cause. 
The point is that, without having been inoculated against the life-long blight of illiteracy with a daily dose of linguistic phonics, the children referred to above wouldn’t have been able to read and spell the words associated with the blight of leprosy, or indeed any other words.

Thursday, October 22, 2015

Do fluent, adult readers read whole words as 'sight words'? Nooooooooooooo

A question that comes up repeatedly in regard to adult readers’ fluent reading is whether such readers recognise whole words as ‘sight words’ or process through the words so fast that the processing falls below the level of conscious attention, leaving them unaware of what’s going on. In short, the answer is the latter! Just because something appears to be the case doesn’t make it so. We know from very many studies that what may seem to be happening on the surface is very different from what may be happening at a subconscious level.

Where did the idea that fluent readers read whole words by sight, without processing each letter or group of letters within a given word, come from? In fact, it all started over a hundred years ago with James Cattell, who reasoned that because adults could read common words as fast as they could name letters, and could read words in context twice as fast as words in isolation, this provided incontrovertible, scientific proof that adults were whole word readers. Oh that things were so simple.

The idea has also become known as ‘late stage sight word theory’, which proposes that once a reader has read a word and become familiar with it, it ‘appears’ to be recognised instantly as a whole. As you can see from the way in which I have problematised the word ‘appears’, the reality is very different, and what the proponents of this theory fail to recognise is that Cattell’s results could just as easily have had to do with the speed of motor processing or the rate of speech output, NOT the speed of processing visual input.

Diane McGuinness is particularly scornful of such explanations about how people read and maintains that any attempt to ‘infer something about perception, cognition, brain processing and speech production from a single measure of response time is extremely naive’. As Keith Rayner put it in his brilliant study ‘Eye movements in reading and information processing: 20 years of research’, ‘Any single measure of processing time per word is a pale reflection of the reality of cognitive processing’.

What exactly is involved in the reading/speaking act? You have to look and focus, scan in a saccade or a number of saccades, transform the visual information into sounds and thus into the word, and finally execute the output, or speak. Given that individuals’ response times for reading a word vary from 450 to 800 milliseconds, McGuinness asks whether there is any evidence that phonological decoding stops just because we read faster.

As brain studies show, most of what goes on in the mind happens under the level of our conscious attention. Imagine for a moment that you go on a training day. As the trainer appears before you, within milliseconds you have made all sorts of assumptions about what the person is like from what they are wearing, how old you think they are, their accent and other aspects of their speech, their body language, etc. Very often, you won’t even be aware of having made these kinds of assessments unless something happens to surprise you and contradict first impressions. So, we form our impressions instantaneously and then spend subsequent time with the person in question confirming or questioning those first impressions. How many times have we had the conversation with someone who later becomes a friend and found ourselves confessing, ‘When I first met you, I thought XXX, but then I discovered later that …’

In terms of conscious processing, i.e. switching on our deliberate, conscious awareness of a situation, our immediate reactions are often very ponderous and require time in which to process the relevant information. This also comes at the cost of missing other events taking place around us. It often happens when we’re reading a text that contains words and information with which we are unfamiliar: the effort of processing the words (decoding) will often necessitate our having to re-read the text because we’ve lost sight of the big picture.

You will notice that the response to the hypothetical trainer is multi-faceted and that processing takes place in parallel, simultaneously: seeing, hearing, listening and smelling. As McGuinness states: ‘The brain can map grapheme-phoneme correspondence, analyze patterns of orthographic redundancy, register degrees of word familiarity, perceive context clues, and work out possible decodings of odd or unpredictable spellings, in parallel.’ She further adds that we do many of these things when we are listening to someone speak. In other words, the brain performs a vast array of analyses at a speed far in excess of anything we are consciously aware of, never mind able to control. The model proposed by late stage sight word theory is linear and sequential, and fails utterly to account for the dazzling simultaneity of operations the brain is able to perform.

As McGuinness further points out, doffing her cap in the direction of Robert Glushko, whose research in this area was so seminal, 'all the information about a word – visual, phonological, orthographic, semantic – is processed at the same time in parallel. This means that no matter how many elements contribute to successful decoding, processing is not carried out in separate (disconnected) pathways.'

The subtlety of this kind of perspective on the reading process and the dual route (separate pathways) model is a bit like comparing a couple of telephone lines to the complexity of the internet.

None of this is to say that the decoding process is reduced in importance. Far from it! Decoding is a vital component of a complex mix and, as many in the research community assert, the more accurate and automatic the process of decoding, the more cognitive resources can be allocated to other aspects of textual interpretation.

We know from the work of Rayner and his colleagues on the eye movements of experienced and fluent readers that such readers are sensitive to semantically related words, that their fixation durations decrease with the frequency of words in the text, that they ‘look longer at morphemes in long words that are more informative with respect to overall meaning of the word’, and that ‘fixation time in the region of a pronoun varies as a function of how easy it is to make the link between the pronoun and its antecedent’. All of which is sophisticated stuff, but none of it can happen if the reader can't decode the words on the page!

Glushko, R. J. (1979), 'The Organization and Activation of Orthographic Knowledge in Reading Aloud', Journal of Experimental Psychology: Human Perception and Performance, Vol. 5, No. 4, 674-691.
McGuinness, D. (2004), Early Reading Instruction, MIT Press.
Rayner, K. (1998), 'Eye Movements in Reading and Information Processing: 20 Years of Research', Psychological Bulletin, Vol. 124, No. 3, 372-422.

Friday, October 16, 2015

Why the split spelling cracks me up

I know that many teachers will not appreciate my starting this hare, especially when we, at Sounds-Write, have coded lots of words with split spellings and, to boot, we also have a terrific lesson for teaching them, but I wanted to give us something to think about.

Having said that, I want to state firmly from the outset that I have no intention at the moment of adopting the alternative I'm about to suggest, though Trish at TRT might want to think about it.

For those unsure of what I’m writing about, the split spellings most commonly in use are a-e, e-e, i-e, o-e and u-e. We teach these spellings in words like ‘bake’, ‘Pete’, ‘home’, ‘fine’ and ‘flute’, in which, respectively, they represent the sounds /ae/, /ee/, /ie/, /oe/ and /oo/.

Although I believe that we teach teachers how to cope with so-called split spellings very well indeed, and I believe that, in the main, our approach works well, I think that there may be a better, more credible alternative.

To begin with, I want to ask why split spellings were introduced in the first place. I have a hunch it was because the originators were unsure about two aspects (especially the second) of the alphabet code: the one-to-many principle, in which one sound can be represented by multiple spellings; and the many-to-one principle, in which one spelling can represent different sounds. For example, in the first, the sound /ae/ can be spelled a, ay, ai, ea, eigh, ei, ey, etc, as well as by the split spelling a-e; in the second, the spelling o can represent the sounds /o/, /oe/, /oo/ and /u/.
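The two principles are, in effect, a pair of mappings running in opposite directions. A minimal sketch (hypothetical Python, using the examples given above) makes the asymmetry plain:

```python
# One-to-many principle: one sound, many possible spellings.
spellings_of = {
    "/ae/": ["a", "ay", "ai", "ea", "eigh", "ei", "ey", "a-e"],
}

# Many-to-one principle: one spelling, many possible sounds.
sounds_of = {
    "o": ["/o/", "/oe/", "/oo/", "/u/"],  # as in hot, most, to, son
}

# Neither direction is a one-to-one function, which is exactly
# why English spelling has to be taught explicitly.
print(len(spellings_of["/ae/"]))  # -> 8
print(len(sounds_of["o"]))        # -> 4
```

Nothing hangs on the particular data structure, of course; the point is simply that the code runs both ways and a reader has to cope with both directions.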

Keeping the above in mind, let’s take a word with a ‘split’ spelling: ‘hate’. The word comprises three sounds, /h/ /ae/ /t/, and someone decided to code it as h, a-e, t, with the t splitting the a-e. Apart from being more complicated than most spellings to teach, this approach has the disadvantage that, when an inflectional morpheme such as –ing, –s or –d is added, possible confusions begin to arise. For example, the /ae/ in ‘hate’ is spelled with a ‘split’ spelling but in ‘hating’ with the spelling a. The same is true of ‘hated’, a two-syllable word comprised of the sounds /h/ /ae/ /t/ /schwa/ /d/.

The second complication comes when teaching words like ‘hates’. Here, the teacher has to guide pupils when they are reading by going from /h/ forwards to the split spelling a-e, then back to the t, and finally, bizarrely, jumping across the letter e (the second part of the split digraph) forwards again to the /s/. This definitely slows down novice readers.

In addition, the split spelling is very likely to be confused with words which look as if they might contain split spellings but don't. A much better, and more transparently linguistic, approach would be to teach from left to right across the word in logical sequence. We would do this by teaching ‘hate’ as h a te. In other words, it would still be taught as a CVC word, but with the final consonant spelled te. This would make confusion with the look-alike words far less likely. After all, we already teach consonant + vowel digraphs in words like ‘give’, 'have', ‘horse’, ‘sleeve’, ‘granite’, 'snooze', 'some' and many others. Teaching consonant plus e instead of all the ‘split’ spellings would be consistent and more logical, as well as making at least as much sense as its counterpart.

So, how would we correct errors, such as when pupils read the word ‘hate’ as /h/ /a/ /t/ (‘hat’)? We would simply point to the spelling a and say that it can be /a/ but that it can also be /ae/, and ask the learner to say /ae/ in this word. The context of the sentence will do the rest, exactly as it does when we read words with the spelling o in them.

"Scure nel tronco" by Luigi Chiesa - Photo taken by Luigi Chiesa. Licensed under CC BY-SA 3.0 via Commons - https://commons.wikimedia.org/wiki/File:Scure_nel_tronco.jpg#/media/File:Scure_nel_tronco.jpg