Summary and commentary for “The Relationship Between the Readability of Pupils’ Compositions and Their Measured Intelligence”
Irving Lorge and Lorraine Kruglov
The Journal of Educational Research, Vol. 43, No. 6 (Feb., 1950), pp. 467-474
I have to admit it was a little bit dispiriting to read this article. First, it describes a project very similar to the one I am about to undertake. Second, this project beat me to the punch by more than fifty years. Third, the findings were negative, while I’m expecting my findings to be positive. And finally, in the 62 years this article has existed, it has garnered exactly 7 citations, so I have to wonder how interested the academy will be in the project I am just starting. Anyway, back to the article at hand.
In this paper, Lorge and Kruglov use the high-school entrance exam scores of 50 eighth- and ninth-graders to correlate the “readability” of the students’ writing to the same students’ scores on the intelligence-testing portion of the same exam. They find positive correlations, but the values are low (~.10) and not significantly different from zero. They conclude that for people matched on education and age level, the complexity of their writing is not a good predictor/substitute/correlate of general intelligence.
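To get a feel for why r ≈ .10 has no chance of reaching significance with only 50 students, here is a quick back-of-the-envelope sketch of the critical Pearson r at the conventional α = .05 level (my own calculation, not a figure from the article):

```python
import math
from scipy import stats

n = 50          # sample size used in the article
df = n - 2
alpha = 0.05

# Two-tailed critical t for this df, converted to the critical Pearson r.
t_crit = stats.t.ppf(1 - alpha / 2, df)
r_crit = t_crit / math.sqrt(t_crit**2 + df)

print(f"critical |r| at alpha={alpha}, n={n}: {r_crit:.3f}")
# With n=50, any |r| below roughly 0.28 is statistically
# indistinguishable from zero, so r ~= .10 falls far short.
```

So even a true correlation nearly three times as large as the one they observed would have failed to reach significance in a sample this small.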
The main reason they do not find a significant correlation is likely the restricted range of the data. In the article, the authors mention two successful demonstrations of correlation between readability measures and education levels. It seems Lorge and Kruglov were too ambitious in expecting readability to predict intelligence within a small sample of relatively similar students: all were eighth- and ninth-graders in New York schools applying for a selective science high school.
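The attenuating effect of range restriction is easy to demonstrate with a simulation. The sketch below assumes a hypothetical population where writing complexity and intelligence correlate at about .6 and then selects only the top decile of "intelligence," loosely mimicking a selective-school applicant pool; all numbers are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: intelligence and readability correlate at r ~= .6.
n = 100_000
intelligence = rng.standard_normal(n)
readability = 0.6 * intelligence + 0.8 * rng.standard_normal(n)

full_r = np.corrcoef(intelligence, readability)[0, 1]

# Restrict to "applicants" above the 90th percentile of intelligence,
# mimicking a selective admissions pool.
cutoff = np.quantile(intelligence, 0.90)
mask = intelligence > cutoff
restricted_r = np.corrcoef(intelligence[mask], readability[mask])[0, 1]

print(f"full-range r:  {full_r:.2f}")
print(f"restricted r:  {restricted_r:.2f}")
```

In this toy setup the correlation in the selected group drops to roughly half its population value, which is the kind of attenuation that could bury a real relationship in Lorge and Kruglov's homogeneous sample.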
One could rightly argue that the data are nearly useless in answering the question of whether there exists a relationship between writing complexity and intelligence in general. The lack of a significant correlation in this narrow range of measured data points does not disprove an overall relationship that may still exist.
The paper is important in practical terms. Suppose the test evaluators had intended to use the Lorge readability score as the sole measure of subjects’ ability. The fact that it does not correlate with intelligence in this sample shows that this would have been a grave mistake.
I still hypothesize that – in general – writing complexity and intelligence will be correlated, but this article gave me some pause. If evaluation in a narrow range is the goal, I will need to be extremely careful about whether my methods are rigorous and precise enough to meet that goal. And if they are not, I will need to be clear in saying so.
- It sounds like the authors had thousands of exam results to choose from and chose 50 at random for this study. Times change, I guess. Although I might have done the same if I were computing all the scores and correlations by hand.
- On average, students write two grade levels below their current level. The authors claim this is because students’ comprehension runs ahead of their ability to compose.
- The intelligence measure was the total score on 30 arithmetic problems, 60 multiple-choice vocabulary questions, and 15 “proverb-matching” items. Compositions were about 100 words long. I wonder how much longer compositions, or multiple compositions per student, would have increased the precision of the readability measure.
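One way to think about the multiple-compositions question is the Spearman-Brown prophecy formula from classical test theory, which predicts how reliability grows as a measure is lengthened. This is a sketch under the assumption that extra compositions behave like parallel forms of one test; the single-composition reliability of 0.40 below is purely hypothetical, since the article reports none:

```python
def spearman_brown(reliability: float, k: float) -> float:
    """Predicted reliability when a measure is lengthened by a factor of k,
    assuming the added parts are parallel forms of the original."""
    return k * reliability / (1 + (k - 1) * reliability)

# Hypothetical reliability of the readability score from one ~100-word
# composition (an assumed value, not reported in the article).
r1 = 0.40
for k in (1, 2, 4):
    print(f"{k} composition(s): predicted reliability {spearman_brown(r1, k):.2f}")
```

Under this assumption, even doubling the writing sample would meaningfully reduce measurement noise, and an unreliable readability score caps the correlation it can show with anything else, including intelligence.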