III ‑ Methodology

 

Subjects

 

We began the study with forty Puerto Rican high school students. Permission to participate was obtained from both the parents and the students (see Appendix E, E1, F). Because of shifts and movement in the student population, the number of subjects was reduced to thirty-one.

 

The students attended Roxbury High and Dorchester High. In their teachers' judgment, all were reading below grade level in Spanish. All lived in the greater Boston area, namely Dorchester, Roxbury, and the South End, and all came from low-income families.

 

There were thirteen males and eighteen females in the sample. Their mean age was 16.7 years, with a standard deviation of 1.2 years. The grade distribution of the participants was as follows: 9th grade, 10 students; 10th grade, 7 students; 11th grade, 8 students; 12th grade, 6 students. All thirty-one students had been exposed to bilingual education and had lived in the continental United States for over a year.

 

Instruments

Language Assessment Scale Test (See Appendix A).

The Language Assessment Scale (LAS) was administered to each student at school in order to determine his or her level of English language proficiency.


 

The following information summarizes various studies dealing with the reliability and validity of the LAS test. The data reported below are based on a number of studies conducted between 1976 and 1978, with data collected in California, New Mexico, Texas, Michigan, New York, Oregon, Illinois, and Latin America. In reporting the results of these studies, data are pooled in those cases where the information is compatible.

In the first study, an attempt was made to determine the discriminant validity of the LAS II instrument. Teachers and other school personnel familiar with each child's familial and socio-cultural background informally identified one hundred and seventy children as to their linguistic competency in English and Spanish.


 

Table III

 

ANOVA: Comparison Between English "Dominant"* and Spanish "Dominant"* Children on English and Spanish

Test Forms (English "Dominant" N=59; Spanish "Dominant" N=111)

 

 


 

*Based on Preliminary Teacher Identification

 

Several studies have been completed in an attempt to assess the inter-judge reliability of the oral and written production sections of the LAS II.

 

Study I. In the first study, 75 students between the ages of 13 and 19 were tested in both English and Spanish. Performances on the oral production section of each language version were scored by three independent judges: (1) a bilingual ESL teacher, (2) a bilingual SSL teacher, and (3) a bilingual psychologist. For the English version, inter-rater agreement ranged from .84 to .96, with a mean of .89. For the Spanish version, inter-rater agreement ranged from .89 to .94, with a mean of .92.

 

Study II. In a more recent study, the LAS II English oral and written production sections were administered to 32 children between the ages of 12 and 17, drawn from a monolingual English-speaking, middle-class suburban community in California. Performances were rated independently by two judges, both of whom are native English-speaking teachers. Inter-rater agreement was .80 on the oral production section and .98 on the written production section.

 

Study III. In a final study, the LAS II Spanish oral and written production sections were administered to 14 native-Spanish-speaking students between the ages of 12 and 18, residing in a Latin American community with a middle-class population similar to that of the study above. Performances were scored independently by two judges, both teachers whose first language is Spanish. Inter-rater agreement was .80 on the oral production section and 1.00, or perfect agreement, on the written production section.
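
The agreement figures reported in Studies I through III can be reproduced mechanically. The following minimal Python sketch (not part of the original studies; function names are ours) treats agreement as the proportion of papers on which two judges assign the same score; for three judges, as in Study I, the pairwise proportions are averaged.

```python
from itertools import combinations

def percent_agreement(ratings_a, ratings_b):
    """Proportion of papers on which two judges assign the same score."""
    assert len(ratings_a) == len(ratings_b)
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

def mean_pairwise_agreement(ratings_by_judge):
    """Average agreement over every pair of judges
    (e.g., the three judges in Study I)."""
    pairs = list(combinations(ratings_by_judge, 2))
    return sum(percent_agreement(a, b) for a, b in pairs) / len(pairs)
```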

 

The relationships among the sub-tests were examined by correlating each sub-test with every other sub-test as well as with the total score. Table IV provides the intercorrelation matrix for both the English and the Spanish forms of the test. The data reported herein were pooled from several different geographic populations throughout the United States.

 

TABLE IV

 

Interscale Correlations for Eng./Span. Subscales (N=283)
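
As an illustration of the pooling described above, the following sketch shows how such an intercorrelation matrix can be computed, assuming each sub-test is represented by a list of student scores. The names and data layout are hypothetical and are not taken from the LAS materials.

```python
import numpy as np

def intercorrelation_matrix(subtest_scores):
    """Pearson correlations among sub-tests and the total score.

    subtest_scores: dict mapping sub-test name -> list of scores,
    one entry per student (hypothetical layout).
    Returns the variable names and the correlation matrix.
    """
    names = list(subtest_scores)
    data = np.array([subtest_scores[n] for n in names], dtype=float)
    total = data.sum(axis=0)            # total score for each student
    stacked = np.vstack([data, total])  # rows = variables, columns = students
    return names + ["Total"], np.corrcoef(stacked)
```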

 

The Cloze Procedure

 

This instrument was used to estimate reading grade level by determining how well a reader can use contextual clues to replace deleted words (see Appendix B). The Cloze procedure is another technique useful for placing students in grade-level material or for selecting materials to meet the needs of a particular group of students.

 


 

The procedure consists of deleting every nth word and replacing it with a blank line. Students then read the material and attempt to fill in each blank with the word that fits the context of the sentence. The percentage of correct answers is then calculated, and from these percentages the Free (or Independent), Instructional, and Frustration reading levels are derived. Cloze tests also correlate highly with standardized test data, dictation tests, and listening comprehension tests, and they provide an indication of reading and literacy skills (Boston Public Schools, 1979). The validity and reliability of the Spanish Cloze test have been reported in the Technical Proposal submitted to Inter-America Associates by the Boston Public Schools Lau Unit. Lombardo established concurrent validity of the English Cloze test in 1979, reporting that the cloze test correlated significantly with the Stanford Diagnostic Reading Test, 1976 edition (r = .61, p < .05). The Hoyt estimate of reliability per cloze story, as reported by the Boston Public Schools Lau Unit, was: story 2, 0.76; story 4, 0.64; story 6, 0.53; story 8, 0.7.

 

Passages may vary in length depending on the grade level of the students; however, for students reading at or above the third- or fourth-grade level, passages of about 250 words are often used. The first and last sentences are usually left intact. If a passage of 250 words plus an intact first and last sentence is used, and every fifth word is omitted, there will be fifty blanks, and each blank will be worth two percentage points.
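
The mechanics described in the two preceding paragraphs can be summarized in the following Python sketch: a cloze passage is built by deleting every nth word, responses are scored as a percentage, and the percentage is mapped to a reading level. The function names are illustrative, and the cut-off scores are borrowed from the Bormuth criteria cited later in this chapter rather than from the study's own materials.

```python
def make_cloze(words, nth=5):
    """Delete every nth word and keep it as the answer key.

    words: the passage as a list of words. (In practice the first and
    last sentences are left intact; that refinement is omitted here.)
    Returns the gapped word list and the list of deleted words.
    """
    gapped, key = [], []
    for i, word in enumerate(words, start=1):
        if i % nth == 0:
            gapped.append("_____")
            key.append(word)
        else:
            gapped.append(word)
    return gapped, key


def score_cloze(responses, key):
    """Percentage of blanks filled with the exact deleted word.

    With a 250-word passage and every fifth word deleted there are
    50 blanks, so each blank is worth two percentage points.
    """
    correct = sum(r.strip().lower() == k.lower() for r, k in zip(responses, key))
    return 100.0 * correct / len(key)


def reading_level(percent_correct):
    """Map a cloze score to a functional reading level (assumed cut-offs:
    Bormuth's 57% and 44% criteria, cited later in this chapter)."""
    if percent_correct >= 57:
        return "Free or Independent"
    if percent_correct >= 44:
        return "Instructional"
    return "Frustration"
```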


The use of the cloze procedure is rapidly becoming more popular among specialists. It is valuable in determining students' Free or Independent, Instructional and Frustration reading levels.

 

Research has reported that (a) the IRI was effective in determining the reading levels of non-native speakers (Motta, 1974), and (b) the Cloze test was an effective measure of the comprehension levels of non-native speakers (Jongsma, 1971; Oller, 1971, 1972; Stubbs and Tucker, 1974; Aitken, 1977). This led the researcher to hypothesize that the comprehension scores of IRIs and Cloze tests would also correlate for non-native speakers. This hypothesis was based on Bormuth's study (1967), which found that the comprehension scores of IRIs and Cloze tests correlated for English speakers.

 

Bormuth (1967) conducted three studies based on Taylor's work (1953). He examined the relationship between comprehension on multiple-choice and Cloze tests and found a correlation between the two. Bormuth (1969) attempted to examine factors in the validity of Cloze tests as a measure of reading comprehension. A series of passages was leveled according to the Dale-Chall Readability Formula (1948), and a Cloze test and an IRI were constructed for each passage. The results indicated a high correlation between the two tests.

 

Bormuth's (1968) objectives were to (a) determine a set of criterion scores for Cloze tests comparable to the criterion scores used with oral reading tests to determine the readability of passages, and (b) further substantiate that Cloze tests are measures of comprehension. The Cloze test scores correlated highly with the comprehension scores of the oral test. Comparable criteria were determined: Cloze scores of 44% and 57% corresponded with comprehension criterion scores of 75-90%, and Cloze scores of 33-54% corresponded with word recognition scores of 95-98%.

 

Other evidence has also indicated a correlation between the Cloze test and the IRI. Wiechelman (1971) found a positive relationship between the functional reading levels identified by the Cloze test and by the IRI, and found that the instructional levels from the IRI and the Cloze were more accurate in identifying reading levels than the Durrell Listening Reading Test. Oller and Conrad (1971) demonstrated the effectiveness of the Cloze test with non-native speakers for measuring language proficiency and comprehension levels and found that it could be used to place non-native English speakers in English and reading classes.

 

Standardized tests are not the best measures for assessing students' functional reading levels, and the problem is compounded when the reading levels of bilingual students are assessed with standardized reading tests. First, the norms were established with native English-speaking students. Second, items on these tests are culturally biased. Third, the tests presuppose certain visual and auditory skills for which the students may not have received adequate training. Cohen (1969) contended that culturally deprived youngsters could not easily succeed in a verbal culture in which they must function as if they were non-verbal. In other words, the students' language proficiency is not considered in these standardized measures, and bilingual students are left at a loss when tests contain unfamiliar lexical items and syntactic structures.

Motta (1974) recommended the use of IRIs with ESL and bilingual students because she claimed that standardized tests do not take into account such factors as SES, IQ, motivation, culture, or the psycholinguistic experience of these students. The Boston School System administered cloze tests.

 

Table VII illustrates the correlations between Spanish Cloze test reading level and the interference of grammatical structures. The specific structures studied were not part of the Cloze test; therefore, the correlation was not statistically significant.

 

Informal Oral Interference Multiple Choice Test (See Appendix C)

 

The Multiple Choice Interference Test was based upon the work of Dr. Perez Sala. Dr. Sala's research identified the grammatical structures that emerge from the mixing of Spanish and English structures among male and female adults of various socioeconomic and academic levels living in metropolitan San Juan, Puerto Rico.

Dr. Sala examined the interference of English in the Spanish spoken in Puerto Rico. Content validity was established by Dr. Sala when he developed his list of syntactic phrases from extensive research into the structures of English and Spanish (Navarro 1948, Porras Crus 1968, Rodriguez Bou 1948, Gili Gaya 1939, Ruben del Rosario 1939, Chomsky 1965, and the contrastive analyses of English and Spanish in Kany 1967 and Stockwell, Bowen and Martin 1965). Unfortunately, the internal consistency reliability as measured by coefficient alpha was low (alpha = .30). The 29 multiple-choice interference test items were selected from Dr. Sala's list of grammatical structures. Twenty-six items, which were determined to be Anglicisms accepted in general usage in Puerto Rico, were eliminated from this study.

 

The test used a multiple-choice format. One answer was the grammatically correct form with no interference, another contained an interference structure, and the third was an ungrammatical distractor. The students were provided with an answer sheet (see Appendix C) containing three blank spaces per item. The administrator of the test explained the procedure to the students. After the examiner read each item aloud, the students marked the space on the answer sheet corresponding to the form most common in their own oral language. The answer sheet was a computer-scorable form, prepared so as to avoid giving any clues to the answers.

Responses were scored according to the form selected. If the student selected the standard grammatical structure, he or she was marked as showing no interference. If the student chose the interference structure, he or she was marked as showing interference. If the student selected the ungrammatical form, he or she was to be eliminated from the study; no student selected the ungrammatical form.
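
The scoring rule just described can be stated compactly as follows; the three answer labels are hypothetical stand-ins for positions on the answer sheet, and the function names are ours.

```python
def score_item(choice):
    """Classify one response to a three-option interference item."""
    if choice == "standard":
        return "no interference"
    if choice == "interference":
        return "interference"
    return "eliminate from study"  # ungrammatical distractor chosen


def total_interference(choices):
    """Count interference responses for one student; None means the
    student chose an ungrammatical distractor and is dropped."""
    if any(score_item(c) == "eliminate from study" for c in choices):
        return None
    return sum(score_item(c) == "interference" for c in choices)
```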

 

The grammatical structures used in this multiple-choice interference test were also used in the informal reading comprehension test so that the two could be correlated. According to the analysis, the correlation between total interference and total comprehension was -.48 (see Table V), which was statistically significant at the .01 level. This correlation indicates that greater interference is associated with poorer comprehension.

 

Informal Reading Inventory (See Appendix D).

 

A reading inventory was also administered to the students. This test was based on Bloom's Taxonomy as applied to reading comprehension, which provides a developmental sequence for comprehension: knowledge, comprehension, application, analysis, synthesis, and evaluation.

 

In order to establish an index of reading difficulty for the reading comprehension story, we used Thomas' criteria, which provide several levels of difficulty: primer level, very easy, easy, relatively easy, difficult, and very difficult. The reading passage used in this investigation had a vocabulary density of .07 and was rated relatively easy, as shown on the readability graph. The reliability of the reading inventory used in this study, as measured by coefficient alpha, was 0.83.

 

More than 30 years ago, Spaulding (1950) proposed a readability formula for Spanish based on the Dale-Chall formula. The average sentence length is determined by counting the number of words and sentences in all the samples and dividing the total number of words by the total number of sentences. The density rating of the sample, expressed as a decimal, and the average sentence length are then substituted into the following formula: Difficulty = 1.609 (average sentence length) + 331.8 (density) + 22.0. For this investigation, 13 textbooks and a juvenile book written in Spanish for primary pupils were analyzed to test the feasibility of using the Fry graph to establish readability in Spanish.


 

Based on the 22 books examined, the findings indicate that the Fry readability graph can be adapted for use with Spanish materials.
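
The formula given above can be sketched as a single Python function. The function name is ours, and the constant term is read here as 22.0.

```python
def spaulding_difficulty(n_words, n_sentences, n_words_not_on_list):
    """Spaulding-style difficulty index:
    1.609 * average sentence length + 331.8 * density + 22.0,
    where density = words not on the basic word list / total words."""
    average_sentence_length = n_words / n_sentences
    density = n_words_not_on_list / n_words
    return 1.609 * average_sentence_length + 331.8 * density + 22.0
```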

 

The criteria were applied to the story as follows:

 

Criteria Used in Order to

Establish an Index of Reading Difficulty

on Reading Comprehension Story

 

Rafael in Boston

 

Number of words = 368

Number of sentences = 30

Average sentence length = 368/30 ≈ 12.3

Complexity of vocabulary (density) = 27/368 ≈ .07

Formula for determining the relative level of difficulty of Spanish written material to obtain an index of readability: Thomas (1971, p. 53).
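
Continuing the sketch above, the counts reported for "Rafael in Boston" can be substituted directly. The figure of 27 words not on the list is inferred from the reported density of .07, and since the study itself used Thomas' graph procedure (List II below), the formula value is only an approximation.

```python
n_words, n_sentences, n_off_list = 368, 30, 27    # "Rafael in Boston"
average_sentence_length = n_words / n_sentences   # 368 / 30  ≈ 12.3
density = n_off_list / n_words                    # 27 / 368  ≈ 0.07
index = spaulding_difficulty(n_words, n_sentences, n_off_list)
```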

 

List I   For the selection of a sample of content:

 

1. In long selections: analyze samples of 100 words every ten pages.

2. In shorter selections: analyze samples of 500 words every 1,000 words.

3. In selections of 500 words or fewer: analyze the entire passage.


 

List II   To apply the formula:

 

1. Count the number of words in the sample.

 

2. Count the number of sentences.

 

3. Divide the number of words by the number of sentences. Result is average sentence length.

 

4. Check the words against List I and count the number of words not in the list.

 

5. Divide the number of words not on the list by the number of words in the sample. The result is the density or complexity of the vocabulary.

 

6. Using the table, find the number that corresponds to the density.

 

7. Find the number that corresponds to the average sentence length.

 

8. Draw a line to connect the two points of density and average sentence length.

 

9. The point at which this line intersects the central column represents the relative difficulty of the sample.
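
Steps 1 through 5 can be sketched as follows; the sentence-splitting rule and the word list are simplifications, and the graph lookup of steps 6 through 9 is not reproduced here.

```python
import re

def sample_statistics(text, basic_word_list):
    """Steps 1-5 of List II: count words and sentences, then compute
    average sentence length and vocabulary density."""
    words = re.findall(r"[A-Za-zÀ-ÿ]+", text)                          # step 1
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]    # step 2
    avg_sentence_length = len(words) / len(sentences)                  # step 3
    off_list = [w for w in words if w.lower() not in basic_word_list]  # step 4
    density = len(off_list) / len(words)                               # step 5
    return len(words), len(sentences), avg_sentence_length, density
```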

 

The Index of Reading Difficulty ranges from 20 to 160 and can be divided as follows:

 

 

Index range     Level               Grade equivalents

20-40           Primer level
20-60           Very easy           40 - Grade 1; 50 - Grade 2; 60 - Grade 3
60-80           Easy                60 - Grade 4; 70 - Grade 5; 80 - Grade 6
80-100          Relatively easy     Grades 6-7-8
100-120         Difficult           Grades 8-10
120-160         Very difficult      Grades 11-12 and above
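
For convenience, the ranges in the table above can be expressed as a simple lookup. Boundary values that the table assigns to two bands are resolved upward here, which is an arbitrary choice.

```python
def difficulty_level(index):
    """Descriptive level for a difficulty index in the 20-160 range,
    following the table above."""
    if index < 40:
        return "Primer level"
    if index < 60:
        return "Very easy"
    if index < 80:
        return "Easy"
    if index < 100:
        return "Relatively easy"
    if index < 120:
        return "Difficult"
    return "Very difficult"
```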

 

Figure 1. Readability Graph



 

Specific Grammatical Structures Used in the Investigation

 

These are the specific grammatical structures used in the Multiple Choice Interference Test and also in the Informal Reading Inventory Comprehension Test for the first analysis of this investigation, as can be seen in Tables V, VI, VIII, and IX.

           

Standard Spanish Form                English Interference Variation

VERBS
nadar                                nadando
contener                             conteniendo
hacer                                haciendo
llamame                              llamame para atras
entere                               hagase usted consciente
se supone que el                     el esta supuesto a
dejando crecer                       tu estas creciendo

WORD ORDER
año pasado                           pasado año
ahora esto si                        ahora si esto
habian llegado pronto                habian pronto llegado
invitado cortesmente                 cortesmente invitado

ADDITIONAL PARTS OF SPEECH (one extra word)
le gusta                             como le gusta
el es                                el es un
parecia demasiado                    parecia uno demasiado

SUBSTANTIVES
idea de como                         vision de como


Specific Grammatical Structures Used in the Investigation

 

These are the specific grammatical structures used in the informal Multiple Choice Interference Test and also in the Informal Reading Comprehension Test. The structures were regrouped when the data were reorganized for a second analysis in terms of the type of structural change, in the following way (see Table VII).

 

Standard Spanish Form                English Interference Variation

WORD ORDER
año pasado                           pasado año
ahora esto si                        ahora si esto
habian llegado pronto                habian pronto llegado
invitado cortesmente                 cortesmente invitado

ADD AN EXTRA WORD
le gusta                             como le gusta
llamame                              llamame para atras
es                                   es un
parecia demasiado                    parecia un demasiado

MAJOR STRUCTURAL CHANGE (LEXICAL CHANGE)
entere                               haga usted consciente
se supone que el                     el esta supuesto a
expresa lo siguiente                 lee como sigue
estas dejando crecer                 estas creciendo

MORPHEME CHANGE
nadar                                nadando

LEXICAL CHANGE
idea de como                         vision de como
reconoci                             realize

 

 
