
Comments on Lorge & Kruglov, 1950

Summary and commentary for “The Relationship Between the Readability of Pupils’ Compositions and Their Measured Intelligence”

Irving Lorge and Lorraine Kruglov

The Journal of Educational Research, Vol. 43, No. 6 (Feb., 1950), pp. 467-474

I have to admit it was a little bit dispiriting to read this article.  First, it describes a project very similar to the one I am about to undertake.  Second, this project beat me to the punch by more than fifty years.  Third, the findings were negative, while I’m expecting my findings to be positive.  And finally, in the 62 years this article has existed, it has garnered exactly 7 citations, so I have to wonder how interested the academy will be in the project I am just starting.  Anyway, back to the article at hand.

In this paper, Lorge and Kruglov take the high-school entrance exams of 50 eighth- and ninth-graders and correlate the “readability” of each student’s composition with that student’s score on the intelligence-testing portion of the same exam.  They find positive correlations, but the values are low (~.10) and not significantly different from zero.  They conclude that for people matched on education and age level, the complexity of their writing is not a good predictor/substitute/correlate of general intelligence.
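
For a sense of scale: with n = 50, a Pearson correlation needs to be roughly .28 in absolute value to reach significance at the .05 level, so r ≈ .10 is nowhere close.  A quick back-of-the-envelope check (mine, not the article’s) using the standard t-test for a correlation coefficient:

```python
# Standard t-test for a Pearson correlation: t = r * sqrt(n-2) / sqrt(1 - r^2).
# r and n are the values reported in the article.
import math

r, n = 0.10, 50
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(f"t = {t:.2f} on {n - 2} degrees of freedom")  # t = 0.70 on 48 df

# The two-tailed critical value of t at p = .05 with 48 df is about 2.01,
# so a correlation this small in a sample this size cannot be
# distinguished from zero.
```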

The main reason they do not find a significant correlation is likely the restricted range of the data.  In the article, the authors mention two successful demonstrations of correlation between readability measures and education levels.  It seems Lorge and Kruglov were too ambitious in expecting readability to predict intelligence within a small sample of relatively similar students: all were eighth- and ninth-graders in New York schools applying to a selective science high school.

One could rightly argue that the data are nearly useless in answering the question of whether there exists a relationship between writing complexity and intelligence in general.  The lack of a significant correlation in this narrow range of measured data points does not disprove an overall relationship that may still exist.
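
To make the range-restriction point concrete, here is a toy simulation of my own (the population correlation of 0.5 is an assumed value for illustration, not anything from the article).  Even a healthy population-level correlation shrinks dramatically once you condition on a narrow slice of one variable:

```python
# Simulate range restriction: sample a bivariate normal population with a
# known correlation, then keep only a narrow band of one variable and see
# how the observed correlation attenuates.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5                                  # assumed population correlation
cov = [[1.0, rho], [rho, 1.0]]
intelligence, readability = rng.multivariate_normal([0, 0], cov, size=100_000).T

# Full population: recovers something near rho.
print(np.corrcoef(intelligence, readability)[0, 1])   # ~0.50

# Keep only "applicants to a selective school": the top slice on intelligence.
selected = intelligence > 1.0              # roughly the top 16%
print(np.corrcoef(intelligence[selected], readability[selected])[0, 1])  # ~0.25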

The paper is important in practical terms.  Suppose the test evaluators had intended to use Lorge Readability as the sole measure of subjects’ ability.  The fact that it does not correlate with intelligence in this sample shows this would be a grave mistake.

I still hypothesize that – in general – writing complexity and intelligence will be correlated, but this article gave me some pause.  If evaluation within a narrow range is the goal, I will need to be extremely careful about whether my methods are rigorous and precise enough to meet that goal.  And I will need to be clear in explaining that they do not, if that is the case.

Quick hits:

  • It sounds like the authors had thousands of exam results to choose from and chose 50 at random for this study.  Times change, I guess.  Although I might have done the same if I were computing all the scores and correlations by hand.
  • On average, students write two grade levels below their current grade.  The authors claim this is because students’ comprehension runs ahead of their ability to compose.
  • The intelligence measure was the total score on 30 arithmetic problems, 60 multiple-choice vocabulary questions, and 15 “proverb-matching” items.  Compositions were ~100 words long.  I wonder how much longer compositions, or multiple compositions per student, would have increased the precision of the readability measure.

Learning How Things Go Together

[This is my attempt at converting my dissertation abstract to “Up-Goer Five speak” (i.e. using only the 1000 most-frequently used English words).  For context, here’s the xkcd comic that started the trend.  Search the #upgoer5 hashtag on Twitter for more.  Try it yourself on the Up-Goer Five text editor.]
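
(For the curious, the check the editor performs is simple to sketch yourself.  Below is a minimal version of my own, not the editor’s actual code; the tiny `allowed` set is a stand-in for the real 1,000-word list, which you would load from a file of your choosing.)

```python
# Flag any word in a text that is not on an allowed list of common words.
# The small set here is a placeholder for the full 1,000-word list.
import re

allowed = {"big", "things", "are", "just", "many", "small", "put", "together"}

def hard_words(text):
    """Return the words in `text` that are not on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in allowed})

print(hard_words("Big things are just many small things put together."))  # []
print(hard_words("I wrote my dissertation abstract."))
# ['abstract', 'dissertation', 'i', 'my', 'wrote']
```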

Big things are just many small things put together. It would be good to know which small things go together. You could learn how a brain works by thinking this way. Or you could learn which people like which other people. Thinking about how small things are put together to make big things is a good idea. It would be good to know how we learn, and how we should learn which things go together.

To this end, I did five studies in which people learned which things in a set were joined together. To show you what I mean, some people learned “who is friends with who” in a friend group. But other people learned about other things that were joined together – like which cities have roads that go between them. By doing these studies, I found out a few things. One thing I learned was that it matters how the things are joined up. To show you what I mean, think about the friend group again. It is easier to learn who is friends with who in a group where few people have many friends and many people have few friends. If things are more even, and all people have about the same number of friends, it is hard to learn exactly who is friends with who.

It doesn’t matter if the joined things are people or cities or computers. It is all the same. Also, it doesn’t seem to matter much why it is you are learning what things go together.

I also show that people learn better by seeing a picture of joined-together things rather than reading about joined-together things. This is the case even more when the things that are joined are made to be close together in the picture.

Finally, I talk about an all-around idea for how people learn about groups of joined-together things. I say people start out by quickly sorting things into much-joined and few-joined types. Then they more slowly learn which one thing is joined to which one other thing, a little at a time.