
Tuesday, June 22, 2010

The Informal Reading-Thinking Inventory (IR-TI)

The Informal Reading-Thinking Inventory:
Assessment Formats for Discovering Typical & Otherwise Unrecognized Reading & Writing Needs – and Strengths


Ula Manzo, PhD
Professor and Chair, Reading Department
California State University, Fullerton

Anthony V. Manzo, PhD
Professor Emeritus, Director, Center for Studies in Higher Order Literacy,
Governor, Interdisciplinary Doctoral Studies
University of Missouri-Kansas City


“The teacher who learns to use the techniques described in this chapter will be well on her way to differentiating instruction.” Emmett Betts, Chapter 21, “Discovering Specific Reading Needs,” Foundations of Reading Instruction, 1957

Doing a diagnostic workup with the Informal Reading-Thinking Inventory is a little like taking a float trip down a familiar river but going farther than you had ever gone before. There is almost always a bit of the mysterious to it, and a source of eye-opening discoveries.
A number of years ago, the authors, along with several doctoral students and other collaborators, began to look at the degree to which students' Instructional levels, as identified by an Informal Reading Inventory (IRI), correspond to those students' higher-order thinking abilities – i.e., the inclination and ability to also respond constructively, or critically and creatively, to text. What we found was a conundrum, or apparent paradox, that does not seem to surprise most seasoned teachers, but oddly is only spottily addressed, or dismissed as just so much static, in the literature of the field. The finding, in a word, is that in a typical heterogeneous group of students, there are likely to be about 12% whose ability to respond to higher-order questions is significantly below their Instructional level, and, correspondingly, about another 12% whose ability to respond critically and creatively seems to significantly exceed it (Manzo & Casale, 1981; Casale, 1982; Manzo & Manzo, 1995; Manzo, Manzo & McKenna, 1995). We referred to the first group, who paradoxically did not seem to think as well as they read, as "Profile A," and the second, who seemed somehow able to think better than they could read, as "Profile B." What we called Profile A had been noticed as troublesome, especially in higher education, by Chase (1926), who referred to the condition as that of "ungeared minds."
Our initial objective was to tweak the IRI into an Informal Reading-Thinking Inventory (IR-TI) that ideally would have sufficient sensitivity to identify students on both sides of this seeming conundrum. So, at the very outset, we were intentionally looking for unlikely weaknesses in some students and almost unimaginable strengths in others. (As an aside, we must admit that the quest was very engaging; we were where few had dared to go before. We found several studies in which very accomplished researchers had simply dismissed the unexplained as a statistical quirk.) At first we tried to find these paradoxical cases by reaching beyond the assessment of word recognition and basic comprehension, simply by adding some questions that offered an optional way of assessing higher-order thinking. But there was much more to be done to give these few additional odd-angled questions what is known as "construct validity" – legitimacy as a measure of this theorized factor or factors. This was especially important since the actual question types could sometimes be found here and there in conventional IRIs. That next step involved several rather sophisticated factor analytic studies, and the application of a rather common-sense idea: that there was great potential value in being able to identify and discover more specific reading needs, and especially reading strengths, than is discoverable with other, more legacy-bound instruments.
Following is a brief overview of how the IRI in its various forms and formats has and has not evolved to reflect current views of reading development. Then we describe our own attempts to date to provide IRI options with the potential to broaden its outreach, especially into the realm of higher-order thinking – the seminal challenge of twenty-first century education. Finally, we revisit some "heritage" characteristics of the IRI that may be in danger of being lost when they should be preserved in the ebb and flow of theoretical constructs in assessment of specific reading needs.
The IRI as Emergent Science
The Informal Reading Inventory (IRI) was and remains an unrecognized bit of significant historical progress in cognitive and pedagogical science, achieved in a relatively pre-scientific era. Emmett Betts (1957), one of the founders of modern reading assessment and instruction, synthesized a great deal of research and practice on individual assessment of reading progress and, implicitly, of cognitive development. Flippo et al. (2009) note that Betts "is frequently credited with the development of IRI techniques, though some reading researchers trace their use even further back." Indeed, Betts introduces his chapter on Discovering Specific Reading Needs with the acknowledgement that:
Space does not permit a summary of all the investigations pertinent to the use of informal reading inventories. A detailed explanation of all the whys and wherefores of a reading inventory would probably fill a sizable volume (p. 445).

Perhaps not surprisingly, then, the "Informal Reading Inventory," or IRI, as he called the synthesized protocols detailed in his landmark textbook, ranks among the most sophisticated approaches ever created for the evaluation of the decoding and comprehension aspects of human cognitive development. It is rarely appreciated, but by setting research-based percent criteria for accuracy in word recognition and comprehension when reading graded passages, this protocol provided what is rarely available in the study of human psychology: measurable, criterion-referenced dimensions of a complex cognitive process. The IRI quickly became a cornerstone of the field of Reading. In the half-century since its publication, research-informed understandings of the reading process have been applied to fine-tune certain aspects of the protocol, but in most fundamental respects it remains remarkably – one might say, disturbingly – unchanged from the description in Betts' chapter on reading diagnosis.
The quantitative criteria for estimating Independent, Instructional, Frustration and Capacity levels are little changed from those recommended in the earliest IRI protocols, and the means of evaluating accuracy in word recognition is a straightforward matter that also is little changed. The analysis and interpretation of oral reading errors to identify specific decoding needs has changed very little, though it has become popular to supplement this with analysis of errors from a psycholinguistic stance, to discover the student's development along a theoretical continuum from relying primarily on orthographic cues toward eventual reliance on syntactic and semantic cues. Even more attention, though little progress, has been focused upon various means of evaluating comprehension accuracy – an issue that has been and remains unsettled. In Betts' (1957) discussion, he notes that,
Two techniques are used commonly to appraise accuracy of comprehension: First, a series of questions each of which may be answered in a word, phrase, or sentence. Second, a single question which requires the pupil to reproduce what he has read [retelling] (p. 459).

Betts adds that,

Mere recall of facts provides an index to accuracy of comprehension. . . . To appraise quality and depth of comprehension, however, it is desirable to interrogate with inferential-type questions or to give the pupil an opportunity to express his between-the-lines reading (p. 461).

Early commercial IRIs, such as Silvaroli's Classroom Reading Inventory (1969), settled upon providing a series of questions for each passage for assessment of comprehension accuracy. These questions typically included a combination of literal, main idea and detail, vocabulary, and inferential questions of a grounded, text-specific type (in other words, these "inferential" questions were still quite literal). This approach to quantifying passage comprehension based on answers to a series of questions of these types persisted until fairly recently, despite criticism that it measured the product, not the thought processes or cognitive development aspects, of comprehension. Authors of several commercial IRIs responded by adding the option to evaluate comprehension by means of retellings, which are difficult to quantify, and arguably even more literal than the other question types. Applegate, Quinn, and Applegate (2002) challenged IRIs for their failure to evaluate students' higher-order comprehension. These authors looked at eight IRIs published between 1993 and 2001, and found that,
more than 91% of the nearly 900 items that we classified required text-based thinking; that is, either pure recall or low-level inferences. Nearly two thirds of the items we classified fell into the purely literal category, requiring only that the reader remember information stated directly in the text (p. 178).

Thus, about 24% (91% minus roughly 67%) of the 900 items were text-bound inference questions. The driving purpose of these authors' study was to find out whether and to what extent IRIs used questions that required higher-order thinking, and in fact they found that questions of this type comprised fewer than 1% of the 900 items analyzed. The authors discussed the significant implications of the omission of higher-order thinking from IRI assessment – the failure of these instruments "to distinguish between those children who can remember text and those who can think about it." They cautioned that assessment drives curriculum and the way reading is taught, and "if the IRIs we use to assess children are insensitive to the differences between recalling and thinking about text, our ability to provide evidence of any given child's instructional needs, let alone to have an impact upon instruction, is severely limited" (pp. 178-179). This limitation had been noted before by H. G. Wells (1892), who put this sticky problem in more figurative terms: "The examiner pipes and the teacher must dance—and the examiner sticks to the old tune." As previously noted, some commercial IRIs have included higher-order thinking questions in comprehension assessment. Nilsson's 2008 review of eight IRIs published since 2002 details the variety and complexity of current approaches to evaluating comprehension of both narrative and expository text passages, finding that "IRI authors provide measures of various dimensions, or levels, of reading comprehension – most commonly literal and inferential comprehension" (p. 532). Four of these eight IRIs did include questions requiring reader response of some type and in some format (Applegate et al., 2008; Cooter et al., 2007; Johns, 2005; Silvaroli & Wheelock, 2004).
Clearly, reading assessment should address comprehension processes as well as products, including the inclination and ability to apply higher-order thinking processes in response to reading. This type of question, however, tends to be more open-ended, and therefore serves up many variations on what might be an acceptable answer. That said, the individual administration of an IRI greatly reduces the problem of evaluating responses to questions that can have many "right" answers, since the examiner quickly becomes skilled, as most teachers already are, at distinguishing sound and relevant reasoning from ungrounded or, in Chase's terms, ungeared answers. Even this, of course, is imperfect, but it is by all accounts sufficiently reliable to be the very same convention employed on several subtests of the individually administered Wechsler IQ tests. Remaining issues might be how many higher-order responses should be sought in proportion to literal and basic inferential responses, and whether results of an IRI based largely upon higher-order comprehension would be comparable to results of more conventional IRIs.
Our research has led us to the conclusion that the convention of basing IRI Levels on literal and basic inferential responses is a valid assessment of this dimension of the educational goals of schools and schooling. However, this needs to be, and rather easily can be, supplemented with assessment of higher-order thinking. In the Informal Reading-Thinking Inventory, described next, we recommend three sets of questions on each passage: Reading the Lines, Reading Between the Lines, and even Reading Beyond the Lines. The resulting Levels are identified based on the first two sets of questions, with responses to the Beyond the Lines questions evaluated separately and qualitatively, rather than quantitatively. It is the separation of these question types into two sets, as was indicated by the factor analytic studies, that makes this all workable. In virtually all other assessment instruments where such questions might be used, they are consolidated into general or inferential comprehension (which, again, is very literal, as is evident in the fact that such questions correlate so highly with other conventional question types that there is no real point in treating them as a separate factor). Again, it was here that we relied upon several sophisticated factor analytic studies to provide the empirical evidence that supports this separation, which is the nexus of the IR-TI, and therefore of its "construct validity."
Oddly, this point is made best on logical more than statistical grounds. Simply ask any educator what they wish to accomplish through quality education, and the response will be some variation on students who can think critically and creatively. It is these qualities that convert mere schooling to education. It is these qualities that convert information into knowledge. And we all hope that it will be these qualities that bring about a worldview – often referred to as the highest state of literacy – marked by greater empathy, tolerance, and a more enlightened vision of our tomorrow than of our today. It is an important but almost inane question that examiners raise when they obsessively ask how well these higher mental functions correlate with being able to read at a literal level. It is the inverse question that really counts.
Our goal is not merely to keep individuals from being illiterate, but rather to educate people who are literate – well-read, reflective, critical and constructive. The IR-TI moves these valued goals to at least co-equal position with merely learning how to read, especially now that we have increasing evidence that a fairly significant number of our most proficient readers may well be struggling, ungeared thinkers. But let's talk more now about how these valued objectives can be translated into classroom assessment, and implicitly into instructional actions.

Seven IR-TI Options for Discovering Specific, Though Often Overlooked, Reading Needs – and Strengths
Option 1: Identification of Profiles A and B
Assessment of reading development is assessment of human nature. How one “reads” is virtually a projective test of how one perceives the world and one’s place in it. The minds of the children and older students for whom we have responsibility are as complex and different from one another as any DNA sampling. As each mystery of mind is unwrapped it becomes increasingly clear that assessment remains as much an art as a science. In 1995, we described Profiles A and B as follows:
Todd, a fifth grader, has never had trouble reading. He typically scores well on standardized tests and is able to answer most questions posed by Ms. Reese, his teacher, during discussions of reading selections. Ms. Reese is often surprised, however, at Todd’s reticence whenever these discussions become more open-ended and thoughtful, and opinions are encouraged. At those times, Todd generally has little to say.

Lakesha, Todd’s classmate, has struggled with reading, and has a history of placement in remedial programs. She often stumbles over words and clearly labors over assigned selections. However, after class discussions have provided her with the gist of what her abler classmates have read, she seems to blossom. Her contributions to the give-and-take of what is said are intelligent, pointed, and insightful. Ms. Reese is also puzzled by Lakesha, for she is, after all, a “remedial reader” (Manzo, Manzo & McKenna, 1995, p. 103).

Apparently Betts had struggled with this phenomenon as well, as illustrated in this quote from his 1957 textbook:
Two second-grade pupils, Sally and Billy, may be used to illustrate briefly the complexity of the instructional problem in a given grade. Both had chronological ages of seven years. Their reading achievement was estimated to be about ‘first-reader’ level, which indicated to the teacher that systematic reading instruction should be initiated at that level. . . . Sally’s basal [Independent] reading level was estimated to be about primer level, while Billy’s was assessed about preprimer level . . . . This was complicated by the fact that Billy was found to have the capacity to deal orally with the language and the facts presented in third-grade basal textbooks while Sally’s reading capacity was not above ‘first-reader’ level or beginning ‘second-reader’ level (pp. 439-440).
The Cardinal Theory Underlying the IR-TI
The cardinal theoretical construct underlying the IR-TI is that higher-order critical-creative response to reading is a factor independent of literal or text-bound inferential responding. While it might be assumed that students who can answer literal and text-specific inferential questions would be able to answer higher-order questions, this does not appear to be the case. Our research suggests that one does not develop from basic to higher-order comprehension; one develops separately in each. Thus, students' ability to respond to higher-order questions should be evaluated separately from their ability to respond to literal and inferential questions. Therefore, the IR-TI follows the traditional protocol for identifying Instructional level, using literal and basic inferential questions. However, it also provides higher-order questions for each passage. The recommendation is to identify Instructional level in the traditional way, and then return to one or more passages for the student to re-read and respond to these higher-order questions. For this reason, we described the IR-TI as "constructed 'around' a traditional IRI," and suggested that "it might be helpful to think of the IR-TI as 'containing' an IRI" (Manzo, Manzo & McKenna, 1995). In this way, Todd's and Sally's need to develop habits of reading beyond the lines, and Lakesha's and Billy's ability to do so, are both discovered, validated, and can be more intentionally attended to.
Innovations in Assessment Can Spur Innovations in Treatment
There is little in typical reading theory or assessment practice to either explain or identify these seeming anomalies, and therefore little instructional attention to meeting these students' needs. However, simply identifying these profiles can lead to unique approaches to intervention (Manzo, Manzo, Barnhill & Thomas, 2000; Manzo, Manzo & Albee, 2004). For example, in a comparison study, Martha Haggard (a distinguished professor who at the time was a doctoral student) discovered that struggling readers made statistically significantly greater gains in basic reading comprehension than did a control group when they began each session with a "Creative Thinking Activity" followed by conventional skill-based remedial instruction. The control group received the conventional skill-based remedial instruction only, and theoretically should have done better, since they spent more time on the task we would call "remedial reading instruction."
Additional IR-TI Options
The six additional options described below permit the teacher to “discover” otherwise unattended habits of mind that drive and focus instructional level reading, and, as importantly, to incorporate these into diagnostic-teaching, or teaching that simultaneously reveals and addresses student strategy and skill needs.
Option 2: Evaluate the student’s “habit” of schema activation
The habit of intentionally calling to mind one’s personal experiences and knowledge related to an anticipated reading topic is an essential component of effective reading (Pearson, Hansen, & Gordon, 1979; Recht & Leslie, 1988), and most particularly reading at the Instructional level. For each passage in the IR-TI, three “schema activation” questions are provided to be asked prior to having the student read, and space is provided on the teacher record form for recording the student’s responses. For example, in a 3rd grade passage about things that grow in the desert, the questions are:
Why is it hard for things to live in the desert?
Do you know of any things that live in the desert?
Can you tell me about a cactus?
In a 7th grade passage about whaling, the questions are:
Do you know any economic uses of whales?
How did men hunt whales?
Can you describe a whale?
Observing a student’s willingness and ability to consider and respond to such questions prior to reading is a simple way to evaluate this habit of mind.
Option 3: Evaluate the student’s “habit” of personal response to reading
The habit of generating personal responses to reading is another essential component of effective study reading, and particularly study-type reading at the Instructional level. For each passage in the IR-TI, the option to assess the student's personal response is provided in the form of a prompt asking the degree to which the student "liked" reading the passage. For fictional passages, the prompt is, "How much did you enjoy reading this story?" For nonfiction passages, the prompt is, "How much would you enjoy reading the rest of this selection?" In either case, the student is prompted to respond on a scale of 1 to 5. This simple inquiry can reveal whether the student typically responds noncommittally to reading about most topics, or tends to express moderate or strong likes and dislikes – the latter suggesting the habit of personal response to reading. It can also be informative to compare students' comprehension accuracy when reading passages they report to have "liked" as compared to those to which they responded noncommittally or negatively. Steps taken to encourage personal responses to and connections with text can have a positive impact for years going forward, as reading becomes increasingly subject-area and fact-based.
Option 4: Evaluate the student’s “habit” of elaborative response to comprehension questions (“D” for detail)
A simple indicator of students' inclination to read between and beyond the lines is the level of detail that they offer in response to questions about what they have read. The IR-TI option for collecting this information is a simple reminder, on the teacher record form, to record a notation of "D" alongside each answer for which the student provides details beyond a strictly correct answer to the question; it is, in a manner of speaking, an indicator of "engagement" – a strong predictor of future progress. A useful measure of this characteristic can be obtained by calculating the percent of questions answered with added detail out of the total number of questions asked.
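The arithmetic behind this measure is trivial, but a minimal sketch may make the record-keeping concrete; the function name and tallies below are our own illustration, not part of the IR-TI record forms:

```python
def detail_rate(d_notations: int, questions_asked: int) -> float:
    """Percent of answers that earned a "D" (added detail) notation."""
    return 100.0 * d_notations / questions_asked if questions_asked else 0.0

# Hypothetical tallies from one administration: 4 of 16 answers earned a "D".
print(f"Detail rate: {detail_rate(4, 16):.0f}%")  # Detail rate: 25%
```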
Option 5: Evaluate the student’s “habit” of engagement during and after reading
Effective readers willingly engage in the task of reconstructing meaning during and following reading, understanding that during Instructional level study reading, comprehension doesn't just "happen," as it does at the Independent reading level. They are able to respond to comprehension questions with answers that, even if inaccurate, at least are relevant to the questions. Struggling readers tend to have lower levels of engagement than effective readers (Manzo, 1969), often responding to comprehension questions with "throw away, go away" answers that are not even relevant to, or congruent with, the question. For example, given the question "What do all living things need to survive?" an incongruent response would be, "It never rains." A measure of "congruity" can be obtained by calculating the percent of questions answered congruently out of the total number of questions asked. An increase in a student's congruent responding has proven to be a good predictor of subsequent comprehension growth; it seems to be saying, "I'm with you, and trying."
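The congruity percentage is computed the same way, but over examiner judgments rather than correctness. A minimal sketch with hypothetical records (the first reuses the incongruent example above; the second borrows a question from Option 2):

```python
# Each record: (question, student's answer, examiner's congruity judgment).
# The judgment itself remains the examiner's; relevant-but-wrong still counts.
responses = [
    ("What do all living things need to survive?", "It never rains.", False),
    ("Why is it hard for things to live in the desert?", "There isn't much water.", True),
]

congruent = sum(1 for _, _, judged_congruent in responses if judged_congruent)
print(f"Congruity: {100.0 * congruent / len(responses):.0f}%")  # Congruity: 50%
```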
Option 6: Evaluate the student’s “habit” of metacognitive monitoring
Effective readers keep mental track of when they are understanding what they are reading, as well as when comprehension is faltering (Baker & Brown, 1980; DiVesta, Hayward & Orlando, 1979; Flavell, 1979). The IR-TI option for gathering information about the student's metacognitive monitoring habits consists of a prompt at the end of the literal and basic inferential questions; simply, "How well do you think you have answered these factual and thought questions?" (to be answered on a scale of 1 to 5). If the "beyond the lines" questions are used for a passage, a similar prompt is provided: "How well do you think you have answered these last questions?" A measure of effective metacognitive monitoring can be obtained by subjective evaluation of how frequently the student's self-evaluation of comprehension aligned with actual comprehension. This simple measure is given functional validity by the extent to which it tends to increase in response to instruction. It also is a good predictor of progress in comprehension. It seems to be the student's way of saying, "I can make sense out of this."
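Granting that the text leaves this evaluation subjective, a sketch suggests how alignment might be tallied; the 4-or-better and 75% cut-points below are our own illustrative assumptions, not IR-TI criteria:

```python
# Pair each passage's 1-to-5 self-rating with the comprehension percent
# actually earned, and count how often the two point the same way.
ratings_and_scores = [(5, 90), (4, 80), (5, 40), (2, 30)]  # hypothetical data

aligned = sum(
    1 for rating, score in ratings_and_scores
    if (rating >= 4) == (score >= 75)  # high self-rating matches high score
)
print(f"Aligned on {aligned} of {len(ratings_and_scores)} passages")  # 3 of 4
```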
Option 7: Evaluate the student’s “habit” of composing thoughtful and well-organized written response to reading
In Option 1, described above, "beyond the lines" thinking can be evaluated by using a set of questions of these types that is provided for each passage. The last question in each of these "beyond the lines" question sets is constructed so that it can also be used to prompt an optional written response to reading. For example, following the 3rd grade passage about the desert, the final question/writing prompt is, "Pretend that a cactus plant, an oak tree, and a jungle vine found themselves in the same place. Where might they be, and what might they say to one another?"
For more detailed evaluation of writing, the IR-TI offers two forms of an Informal Writing Inventory (IWI) under the same cover: one for primer to fourth grade students (and older students with more limited skills), and one for fifth grade through high school levels. Each form is structured to evaluate critical-evaluative and creative thinking as well as the mechanics of writing.
The IR-TI Simply Extends Heritage Characteristics of the IRI
The Informal Reading Inventory is the quintessential performance-based assessment. Originally designed as a flexible template for classroom use, it has been translated into more readily usable commercial versions that can be used for quick estimations of students' functioning levels and for in-depth assessment to suggest specific individual strengths and needs. These commercial versions have been fairly exhaustively analyzed over the years, and criticized on dozens of major and minor points. Most importantly, while the validity of the technique has rarely been questioned, the reliability of commercial versions has been challenged at regular intervals. Recently, Janet Spector (2005) reviewed previous studies on this topic, and analyzed the reliability documentation and data in the manuals of nine IRIs published between 2000 and 2004. She reported that fewer than half of the manuals provided reliability information, and none of the nine IRIs analyzed provided reliability data that met the criteria of the study. Her interpretations of her findings were harsh. Under the heading "Potential for harm," Spector concluded that "IRIs that provide no evidence of reliability should not be used to estimate a student's reading level, regardless of how casually the results will be applied." Even though IRIs are not standardized tests, she states, "any test – no matter how informal – has the potential for harm if the information it provides is imprecise or misleading" (pp. 599-600). Furthermore, while conceding that IRIs are "intuitively appealing instruments for assessing student performance in reading," Spector warned that school psychologists, educators in leadership roles, and teachers should be informed of the "limited utility" of IRIs, and select "measures with adequate reliability for particular purposes" (p. 601).
We would argue that it is this kind of thinking that poses the greater danger to the vitality of the field, and the consequent services that reading educators are equipped to provide to children. McKenna (1983) cites Estes and Vaughn as urging teachers to “accept the philosophy of the IRI as being a strategy, not a test, for studying the behavior in depth” (p. 670). Blanchard and Johns (1986) have also argued that the IRI be considered an assessment strategy that teachers can use flexibly and differentially to access diagnostic information about students’ reading abilities. The IRI is a robust and time-tested tool for discovering specific reading needs. It lends itself to adaptation as research reveals increasing understandings about the reading process. Several “heritage” uses and traditions have accrued to the IRI that align well with current views on the purposes of reading assessment.
Use of the IRI to Invite Students' Self-Assessment
The IRI permits quantification and characterization of various dimensions of reading development, acquired in a one-to-one setting with careful attention to establishing optimal rapport between teacher and student. As such, it offers an ideal opportunity for the teacher to review and explain the findings to the student, and to enlist the student's involvement in explaining the results and setting goals for instruction.
Betts took care to point out that students should be aware of and invested in their own literacy development, and he noted the uses of the IRI to involve students in self-assessment:
In the work directed by the writer and his students, it is assumed that the learner should be literate regarding his level of reading achievement, his specific needs, and his goals of learning. It has been found that this makes for intelligent co-operation between teacher and learner. As one boy exclaimed, ‘This makes sense. This is the first time I have known what I am trying to do’ (1957, p. 464).
Betts concludes, “An informal reading inventory is an excellent means of developing learner awareness of his reading needs” (1957, p. 478).
We propose that the added options in the IR-TI provide important topics to be considered in this type of informed and guided self-assessment. Students may be led to see that their prior knowledge of a topic does or does not tend to affect their comprehension; that their self-stated interest in a given passage does or does not tend to affect their comprehension; that their estimations of how well they answered comprehension questions do or do not tend to be accurate; that their answers to questions are or are not congruent with the questions; that their answers to questions tend to be brief and sometimes incomplete, or elaborative. Finally, and most importantly, students can be led to consider how easily and how well they tend to respond to beyond-the-lines questions, and how this might be affecting their regular classroom learning.
Regarding the latter point, the newly revised Standards for the Assessment of Reading and Writing (2010), by the Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English, is posited upon the need to extend assessment beyond knowledge acquisition, to assessment of inquiry and problem-solving. The authors propose that the reading and writing standards in many districts "frequently omit important aspects of literacy such as self-initiated learning, questioning author's bias, perspective taking, multiple literacies, social interactions around literacy, metacognitive strategies, and literacy dispositions," adding that "students who are urged in classroom instruction to form opinions and back them up need to be assessed accordingly, rather than with tests that do not allow for creative or divergent thinking." In this context, the first Standard states that:
First and foremost, assessment must encourage students to become engaged in literacy learning: to reflect on their own reading and writing in productive ways, and to set respective literacy goals. In this way, students become involved in and responsible for their own learning and are better able to assist the teacher in focusing instruction (p. 11).

Use of IRI Protocols and Results to Inform Diagnostic-Teaching
An experienced cook may translate a quarter teaspoon of salt into two pinches, and a cup of flour into two hands-full. Similarly, teachers trained in the administration and interpretation of Informal Reading Inventories can translate and apply their principles on an everyday, unstructured basis. This concept, too, is little changed from the earliest conceptions of the purposes and uses of an IRI:
In the classroom, the teacher can observe daily behavior in reading situations and, therefore, may need only a few minutes for an individual inventory. It is likely that sufficient information regarding the reading problems and needs of most children can be obtained from careful observations in class and small-group situations. In a clinic, a full half hour may be required for the inventory (Betts, 1957, p. 457).

As an example of an ultra-simplified application of the basic IRI criteria, these have been translated into a simple tool for students to use in selecting books for independent reading, commonly known as the "1-5-10" test: in a sample of approximately 100 words, if a student has trouble with 1 word (reading 99% with ease), the book will be easy to read; with 5 words (reading 95% with ease), the book will be fairly difficult; with 10 words (reading 90% with ease), the book may be too difficult to negotiate without a good deal of effort.
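For readers who like to see a rule of thumb in executable form, here is a minimal sketch; the boundaries between the named points (2-4 and 6-9 errors) are our own interpolation, since the rule only names 1, 5, and 10:

```python
def one_five_ten(errors_per_100_words: int) -> str:
    """Apply the "1-5-10" book-selection rule of thumb described above."""
    if errors_per_100_words <= 1:       # ~99% word accuracy
        return "easy to read"
    if errors_per_100_words <= 9:       # roughly the 95% band
        return "fairly difficult"
    return "probably too difficult"     # ~90% accuracy or worse

print(one_five_ten(5))   # fairly difficult
print(one_five_ten(12))  # probably too difficult
```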
Another example would be a content area classroom application in which a selection from the textbook is displayed for students to read silently; after reading, the selection is removed and students write responses to a series of literal and basic inferential questions (75% correct would be estimated to be Instructional level). With today’s scanning and PowerPoint technologies, these short checkpoints could be made weekly. With “clicker” technologies, students could see their scores immediately, and these could be stored for later analysis by the teacher. We would suggest that the final question be a beyond-the-lines question that students either complete as homework, or use as the prompt for a cooperative structure activity of some type.
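A quick sketch of how such stored checkpoint scores might be screened against the 75% criterion given above; the names and scores are invented for illustration:

```python
# Hypothetical results from a weekly 8-question checkpoint.
scores = {"Avery": 7, "Blake": 5, "Casey": 8}
QUESTIONS = 8

for name, correct in scores.items():
    pct = 100.0 * correct / QUESTIONS
    flag = "at/above Instructional level" if pct >= 75 else "below criterion"
    print(f"{name}: {pct:.0f}% – {flag}")
```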
The second Standard of Standards for the Assessment of Reading and Writing (2010) states that:
Most educational assessment takes place in the classroom, as teachers and students interact with one another. . . . This responsibility demands considerable expertise. First, unless teachers can recognize the significance of aspects of a student’s performance – a particular kind of error or behavior, for example – they will be unable to adjust instruction accordingly. They must know what signs to attend to in children’s literate behavior. This requires a deep knowledge of the skills and processes of reading and writing and a sound understanding of their own literacy practices (p. 14).
Teachers familiar with IRI protocols and criteria are well prepared to interpret students’ oral reading and comprehension behaviors and difficulties in terms of the significance of a given number of errors and the relative importance or unimportance of various types of errors. Very importantly, and intentionally, teachers familiar with including higher-order questions in IRI protocols will be more likely to include these questions in daily instructional interactions, and observe the ease or difficulty with which individual students are able to respond. Thus, an important reason for including higher-order questions in an IRI is to include this dimension of reading in teachers’ repertoire of categories for informal assessment. As noted in the Introduction to the IR-TI,
One of the chief values of the IR-TI is to help you, the teacher, to personalize the question types, formats, and formulas for estimating student progress while you are engaged in teaching and discussions with your students. Ideally, as you do so, your students will begin to ask similar questions of you, of one another, and also of themselves while they read (Manzo, Manzo & McKenna, 1995, p. 4).

In a study of second, third, and fifth graders, Barton, Freeman, Lewis and Thompson (2001) taught students to use strategies for personal response to text. Not only did students acquire and independently use these strategies much more easily than the researchers had anticipated (p. 27), but after the study had been officially concluded, they noted that "the biggest surprise the researchers experienced was the [students'] unplanned use of metacognitive strategies throughout the day during curricular areas other than reading" (p. 38). This seems to say that when a desired skill is taught and reinforced as a strategy, it can have very strong transfer effects, and even become a new habit of mind.
A good way to incorporate IR-TI options into diagnostic-prescriptive teaching is to begin the school year by administering an Informal Textbook Inventory that adds prior knowledge questions, metacognitive monitoring questions, and beyond-the-lines comprehension questions. Use these initial data, particularly for the beyond-the-lines comprehension, to group students heterogeneously for postreading cooperative structure activities based on beyond-the-lines questions.
The Contentious Practice of Comprehension Assessment Based on Oral Reading at Sight
One aspect of the original IRI that reading educators have struggled with in recent years is the practice of basing reading assessment upon oral reading at sight. Given that the IRI is in other respects a performance-based assessment tool, it has been difficult for some to reconcile this practice. Even Betts found it difficult to accept, but conceded that it had a reasonable use for at least some passages in an IRI administration:
In general, the procedure for the administration of an informal reading inventory for the systematic observation of performance in controlled reading situations is based on the principles governing a directed reading activity. . . . An exception to the principles basic to a directed reading activity is that of using oral reading at sight (i.e., without previous silent-reading preparation) as one means of appraising reading performance. This does have, however, the advantage of uncovering responses to printed symbols that might be undetected in a well-directed reading activity (p. 457).
Numerous authors have recommended comparison of oral and silent reading comprehension, and cautioned that word recognition errors in oral reading at sight be analyzed only on passages below Frustration level. In addition to permitting observation and analysis of word recognition errors, basing comprehension assessment on oral reading at sight has the fortuitous effect of much more efficiently identifying Instructional level than when comprehension is based on either silent reading or oral re-reading. When reading material at one's Independent, easy reading level, one is able to read straight through, from beginning to end, with almost complete comprehension. Thus, the highest level at which one can read with a minimum of 99% accuracy in word recognition and 90% in comprehension is one's Independent level. Once Independent level is identified, the IRI protocol has the student continue to read higher-level passages as if these were at his or her Independent level. This makes it possible to identify the point at which a non-strategic, easy reading approach breaks down.
Asking the student to read orally at sight, at levels above Independent Level, removes the option to apply any active study-reading strategies such as re-reading, pausing to reflect, or skipping ahead; it even impedes comprehension monitoring, visualization, and generation of personal connections. Thus, the IRI criteria for Instructional level in oral reading at sight are set relatively low: a minimum of 95% accuracy in word recognition, and a minimum of 75% accuracy in comprehension. Seldom, elsewhere, would 75% comprehension be considered “good.” However, if the protocol were adjusted to permit students to read silently before reading orally, little could be observed about the silent reading strategies they might be using, and new criteria would need to be created for what would constitute Instructional Level under this different condition. Essentially, the IRI conditions for identifying Instructional Level might be redefined: rather than “the highest reading level at which systematic instruction can be initiated,” it would be more accurate to say that it is the highest reading level at which the student no longer can read passively, without applying study-reading strategies (or receiving instruction that models and/or prompts appropriate study-reading strategies).
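Collapsed into a rule of thumb, the level criteria just described look like the following minimal sketch; note that treating everything below the Instructional criteria as Frustration is our simplification, which ignores the gray zone between levels:

```python
def estimate_level(word_accuracy: float, comprehension: float) -> str:
    """Classify one passage reading by the oral-reading-at-sight criteria above.
    Both arguments are percents (0-100)."""
    if word_accuracy >= 99 and comprehension >= 90:
        return "Independent"
    if word_accuracy >= 95 and comprehension >= 75:
        return "Instructional"
    return "Frustration"

print(estimate_level(97, 80))  # Instructional
```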
In the previous section, we described a technique for using a single, unprepared ("at sight") reading for a quick whole-class comprehension assessment in content area classrooms. We would further suggest that teachers explain to students the difference between Independent Level "easy" reading and Instructional Level "study" reading, as explanation for why a score of 75% on such a single-pass reading task is an acceptable score.
Analysis of Oral Reading Errors
In developing the IR-TI, we departed from the systems popular at the time for analyzing oral reading errors to identify specific decoding needs. Rather than analyzing phonic elements in decoding errors, it is more parsimonious – that is, efficient and effective – to simply follow up with a straightforward phonics test. The practice of evaluating errors in terms of the cue system predominantly used (orthographic, syntactic, semantic) seems to be a narrow window with limited instructional implications. In the IR-TI, we provided a list of suggestions to use when looking for error patterns (Manzo, Manzo & McKenna, 1995, pp. 65-67). These are summarized and revised below, with specific instructional recommendations.
Figure 1
"Reading" Oral Reading

Each entry below pairs an error pattern (predominance of a particular type of error), with its possible diagnostic implications, and instructional recommendations (in order of importance).

Teacher pronunciations: lacking basic sight words and strategies for decoding words that are not yet sight words at the passage level.
Instructional recommendations:
- Build basic sight word vocabulary
- Build phonics strategies for acquiring new sight words
- Build strategies for response to text at Listening level

Non-semantic substitutions/skipped words: un-inclined to reconstruct passage meaning; overlooks unfamiliar words and unfamiliar written language constructions at the passage level.
Instructional recommendations:
- Build strategies for higher-order response to reading at Independent level
- Build strategies for schema activation and metacognitive comprehension fix-up at Instructional level
- Identify unfamiliar meaning vocabulary words when reading, and build strategies for acquiring meanings of these words at Instructional level

Hesitations/repetitions/self-corrections: committed to reconstructing passage meaning, but lacking automaticity in decoding words that are not yet sight words and/or unfamiliar with written language patterns at the passage level.
Instructional recommendations:
- Build strategies for decoding non-sight words at Independent level
- Build strategies for meaning vocabulary acquisition at Instructional level
- Build strategies for reconstructing meaning from language patterns at Listening level


Differential Uses of IRIs for Educators of Different Experience Levels
The IR-TI manual urges users to use it differentially according to purpose:
Because teachers’ purposes for giving the IR-TI will vary, no fixed method of administration exists. This is a consequence of the informal nature of all IRIs and should be viewed as a strength. The important thing is to clarify your own purpose and then to use the instrument accordingly. (Manzo, Manzo & McKenna, 1995, p. 27)

Toward the end of Betts' chapter on Discovering Specific Reading Needs, he provided three complete and quite different forms for use in recording results of a full Informal Reading Inventory: when given by inexperienced examiners, by experienced examiners, and by participants in his reading clinic. This approach – differential record forms for different levels of experience (or perhaps for different purposes) – makes a great deal of sense. Most commercial IRIs recommend that the various options offered should be used differentially, according to the purpose of the assessment; however, they tend to offer all of the options that might accompany a given reading selection on the same pages. A well-intentioned teacher or teacher-trainer may feel remiss in omitting any of these options, and thus seriously "over-test" in many cases. Perhaps a future solution would be an online IRI, in which the teacher is given a series of (explained) options initially, and the test administration and record form pages are then generated to include only those options.
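No such online IRI exists, to our knowledge, but a sketch suggests how little machinery the idea would require; every name and section label below is hypothetical:

```python
# Option-driven generation of a record form: the examiner chooses options,
# and only the matching sections are emitted.
SECTIONS = {
    "levels": "Literal and basic inferential questions, with level criteria",
    "schema": "Pre-reading schema-activation questions (Option 2)",
    "beyond": "Beyond-the-lines questions and writing prompt (Options 1, 7)",
    "metacog": "Self-rating prompts (Option 6)",
}

def build_record_form(chosen: list[str]) -> str:
    """Return the text of a record form containing only the chosen sections."""
    return "\n".join(SECTIONS[key] for key in chosen if key in SECTIONS)

print(build_record_form(["levels", "beyond"]))
```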
Provisional Conclusions
The narrative of the IRI continues to evolve. However, some things can be provisionally concluded. IRIs are useful, time-tested tools that should be considered as a series of options to be selected from flexibly, according to purpose. These options should reflect the most current understandings of the nature of reading processes and reading development, and the nature and goals of learning as well as of mere schooling. IRIs should cast a broad net to discover not only obvious needs and strengths but less obvious ones, such as those that eventually characterize the highest states of literacy: cognitive development and worldview. It is a fairly simple matter to embed questions into the IRI interaction with students to tap inclinations and abilities to activate schema prior to reading, to have them evaluate their own comprehension, and to connect with and respond elaboratively to an author's intended meanings and – especially in literature and persuasive pieces – to unintended but reasonably conjectural ones.
In other words, the one-on-one setting of an IRI should be capitalized upon to evaluate a student's ability to read beyond the lines, in order to determine whether this may be an otherwise overlooked need of an otherwise "proficient" reader – or an otherwise unacknowledged strength in an otherwise average, to slightly below average, reader. In truth, the IR-TI is best understood as a heuristic – a mechanism for aiding teachers in the discovery of the wonder of our different minds, as well as their unique journeys toward conventional academic reading objectives. The IR-TI is much more a system and profile analysis for estimating our individual paths to full literacy than just another measure of academic skills that, after all, correlate so highly with one another that there is little justification for endless testing. The fourth Standard of Standards for the Assessment of Reading and Writing (2010) cogently states that which has guided the development of the IR-TI to date, and plans for future iterations:
Assessment that reflects an impoverished view of literacy will result in a diminished curriculum and distorted instruction and will not enable productive problem-solving or instructional improvement (p. 17).


Works Cited

Applegate, M.D., Quinn, K.B., & Applegate, A.J. (2002). Levels of thinking required by comprehension questions in informal reading inventories. Reading Teacher, 56(2), 174-180.
Applegate, M.D., Quinn, K.B., & Applegate, A.J. (2008). The critical reading inventory: Assessing students' reading and thinking (2nd ed.). Upper Saddle River, NJ: Pearson Education.
Baker, L., & Brown, A.L. (1980). Metacognition and the reading process. In D. Pearson (Ed.), A handbook of reading research. NY: Plenum.
Barton, V., Freeman, B., Lewis, D., & Thompson, T. (2001). Metacognition: Effects on reading comprehension and reflective response. Unpublished master's thesis, Saint Xavier University, Chicago, IL.
Betts, E.A. (1957). Foundations of reading instruction. New York: American Book.
Blanchard, J., & Johns, J. (1986). Informal reading inventories--a broader view. Reading Psychology, 7(3), iii.
Casale, U. (1982). Small group approach to the further validation and refinement of a battery for assessing 'progress toward reading maturity.' Doctoral dissertation, University of Missouri-Kansas City. Dissertation Abstracts International, 43, 770A.
Chase, R.H. (1926). The ungeared mind. Philadelphia: F.A. Davis.
Cooter, R.B., Jr., Flynt, E.S., & Cooter, K.S. (2007). Comprehensive reading inventory: Measuring reading development in regular and special education classrooms. Upper Saddle River, NJ: Pearson Education.
DiVesta, F.J., Hayward, K.G., & Orlando, V.P. (1979). Developmental trends in monitoring text for comprehension. Child Development, 50, 97-105.
Flavell, J.H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.
Flippo, R., Holland, D., McCarthy, M., & Swinning, E. (2009). Asking the right questions: How to select an informal reading inventory. Reading Teacher, 63(1), 79-83.
Johns, J.L. (2005). Basic reading inventory (9th ed.). Dubuque, IA: Kendall/Hunt.
Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English (2010). Standards for the assessment of reading and writing (Rev. ed.). Newark, DE: International Reading Association.
Manzo, A.V. (1969). Improving reading comprehension through reciprocal questioning (Doctoral dissertation, Syracuse University, Syracuse, NY). Dissertation Abstracts International, 30, 5344A.
Manzo, A.V., & Casale, U. (1981). A multivariate analysis of principle and trace elements in 'mature reading comprehension.' In G.H. McNinch (Ed.), Comprehension: Process and product. First Yearbook of the American Reading Forum (pp. 76-81). Athens, GA: American Reading Forum.
Manzo, A.V., & Manzo, U.C. (1995). Creating an Informal Reading-Thinking Inventory. In K. Camperell, B. L. Hayes, & R. Telfer (Eds.), Literacy: Past, present and future. Fifteenth Yearbook of the American Reading Forum. Logan, UT: Utah State University.
Manzo, A.V., Manzo, U.C., & Albee, J.A. (2004). Reading assessment for diagnostic-prescriptive teaching (2nd ed.). NY: Wadsworth.
Manzo, A.V., Manzo, U., Barnhill, A., Thomas, M. (2000). Proficient reader subtypes: Implications for literacy theory, assessment, and practice. Reading Psychology. 21(3), 217-232.
Manzo, A.V., Manzo, U.C., & McKenna, M.C. (1995). Informal reading-thinking inventory: An informal reading inventory (IRI) with options for assessing additional elements of higher-order literacy. Fort Worth, TX: Harcourt Brace College Publishers.
McKenna, M.C. (1983). Informal reading inventories: a review of the issues. Reading Teacher, 36(7), 670-679.
Nilsson, N. (2008). A critical analysis of eight informal reading inventories. Reading Teacher, 61(7), 526-536.
Pearson, P.D., Hansen, J. & Gordon, C. (1979). The effect of background knowledge on young children’s comprehension of explicit and implicit information. Journal of Reading Behavior, 11, 201-209.
Recht, D.R., & Leslie, L. (1988). The effect of prior knowledge on good and poor readers’ memory for text. Journal of Educational Psychology. 80, 16-20.
Silvaroli, N. J. (1969). Classroom reading inventory. Dubuque, IA: William C. Brown, Publishers.
Silvaroli, N.J. & Wheelock, W.H. (2004). Classroom reading inventory (10th ed.). NY: McGraw-Hill.
Spector, J. (2005). How reliable are informal reading inventories? Psychology in the Schools, 42(6), 593-603.
