
Comments on Lorge & Kruglov, 1950

Summary and commentary for “The Relationship Between the Readability of Pupils’ Compositions and Their Measured Intelligence”

Irving Lorge and Lorraine Kruglov

The Journal of Educational Research, Vol. 43, No. 6 (Feb., 1950), pp. 467-474

I have to admit it was a little bit dispiriting to read this article.  First, it describes a project very similar to the one I am about to undertake.  Second, this project beat me to the punch by more than fifty years.  Third, the findings were negative, while I’m expecting my findings to be positive.  And finally, in the 62 years this article has existed, it has garnered exactly 7 citations, so I have to wonder how interested the academy will be in the project I am just starting.  Anyway, back to the article at hand.

In this paper, Lorge and Kruglov use the high-school entrance exam scores of 50 eighth- and ninth-graders to correlate the “readability” of the students’ writing to the same students’ scores on the intelligence-testing portion of the same exam.  They find positive correlations, but the values are low (~.10) and not significantly different from zero.  They conclude that for people matched on education and age level, the complexity of their writing is not a good predictor/substitute/correlate of general intelligence.

The main reason they do not find a significant correlation is likely to be the restricted range of the data.  In the article, the authors mention two successful demonstrations of correlation between readability measures and education levels.  It seems Lorge and Kruglov were too ambitious in thinking that readability would be successful in predicting intelligence in a small sample of relatively similar students: all were eighth- and ninth-graders in New York schools applying for a selective science high school.

One could rightly argue that the data are nearly useless in answering the question of whether there exists a relationship between writing complexity and intelligence in general.  The lack of a significant correlation in this narrow range of measured data points does not disprove an overall relationship that may still exist.
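The attenuating effect of range restriction is easy to demonstrate with a quick simulation. This is only an illustrative sketch: the variable names, the 0.5 slope, and the 110–120 band are all invented for the example, not taken from the paper.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)

# Simulate a population in which "intelligence" and writing
# "readability" are genuinely correlated.
iq = [random.gauss(100, 15) for _ in range(5000)]
readability = [0.5 * x + random.gauss(0, 15) for x in iq]

r_full = pearson_r(iq, readability)

# Restrict the sample to a narrow ability band, as in a pool of
# applicants to a selective science high school.
band = [(x, y) for x, y in zip(iq, readability) if 110 <= x <= 120]
r_restricted = pearson_r([x for x, _ in band], [y for _, y in band])

print(f"full range:       r = {r_full:.2f}")
print(f"restricted range: r = {r_restricted:.2f}")
```

The restricted-range correlation comes out far lower than the full-range one even though the underlying relationship is the same, which is exactly the trap Lorge and Kruglov's sample falls into.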

The paper is important in practical terms.  Suppose the test evaluators had intended to use Lorge Readability as the sole measure of subjects’ ability.  The fact that it does not correlate with intelligence in this sample shows this would be a grave mistake.

I still hypothesize that – in general – writing complexity and intelligence will be correlated, but this article gave me some pause.  If evaluation in a narrow range is the goal, I will need to be extremely careful as to whether my methods are rigorous and precise enough to meet that goal.  And I will need to be clear in explaining that they do not, if that is the case.

Quick hits:

  • It sounds like the authors had thousands of exam results to choose from and chose 50 at random for this study.  Times change, I guess.  Although I might have done the same if I were computing all the scores and correlations by hand.
  • On average, students write two grade levels below their current level.  The authors claim this is because students’ comprehension runs ahead of their ability to compose.
  • The intelligence measure was the total score on 30 arithmetic problems, 60 multiple-choice vocabulary questions, and 15 “proverb-matching” items.  Compositions were about 100 words long.  I wonder how much longer compositions, or multiple compositions per student, would have improved the precision of the readability measure.

Learning How Things Go Together

[This is my attempt at converting my dissertation abstract to “Up-Goer Five speak” (i.e. using only the 1000 most-frequently used English words).  For context, here’s the xkcd comic that started the trend.  Search the #upgoer5 hashtag on Twitter for more.  Try it yourself on the Up-Goer Five text editor.]

Big things are just many small things put together. It would be good to know which small things go together. You could learn how a brain works by thinking this way. Or you could learn which people like which other people. Thinking about how small things are put together to make big things is a good idea. It would be good to know how we learn, and how we should learn which things go together.

To this end, I did five studies in which people learned which things in a set were joined together. To show you what I mean, some people learned “who is friends with who” in a friend group. But other people learned about other things that were joined together – like which cities have roads that go between them. By doing these studies, I found out a few things. One thing I learned was that it matters how the things are joined up. To show you what I mean, think about the friend group again. It is easier to learn who is friends with who in a group where few people have many friends and many people have few friends. If things are more even, and all people have about the same number of friends, it is hard to learn exactly who is friends with who.

It doesn’t matter if the joined things are people or cities or computers. It is all the same. Also, it doesn’t seem to matter much why it is you are learning what things go together.

I also show that people learn better by seeing a picture of joined-together things rather than reading about joined-together things. This is the case even more when the things that are joined are made to be close together in the picture.

Finally, I talk about an all-around idea for how people learn about groups of joined together things. I say people start out by quickly sorting things into much-joined and few-joined types. Then they more slowly learn which one thing is joined to which one other thing a little at a time.

Learning a Lattice is Easier than Learning an Irregular Graph (Sometimes)

If you made a picture of your social network, what would it look like? Would it look like a regular structure, like a lattice? Or would there be strange detours and crazy long-range connections between your friends’ friends?

Jason's Friendship Network

It would probably be something more like the irregular graph than the lattice. People don’t form friendships in a regular, orderly manner conforming to strict rules of structure. Instead, people form local clusters of friends (you can call them cliques), and some people act as bridges between cliques, connecting them to form the small-world topology characteristic of social networks.

Ring Lattice Network

This is one of the points Watts and Strogatz illustrated with their social network models. A ring lattice may be a poor analog for a real-life friendship network, but a ring lattice with a few perturbations of the edges does a good job of capturing two characteristics of social graphs: local structure and random edges that allow a small world.
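For readers who want to play with these structures, here is a minimal pure-Python sketch of a ring lattice and a Watts & Strogatz-style rewiring of it. The parameter choices and implementation details are mine, not from the original paper.

```python
import random

def ring_lattice(n, k):
    """Ring lattice: each node connects to its k nearest neighbors
    on each side of the ring."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add(frozenset((i, (i + j) % n)))
    return edges

def watts_strogatz(n, k, p, seed=None):
    """Start from a ring lattice, then rewire each edge with
    probability p to a randomly chosen endpoint."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            neighbor = (i + j) % n
            if rng.random() < p:
                # Pick a new endpoint, avoiding self-loops and
                # duplicate edges at node i.
                choices = [v for v in range(n)
                           if v != i and frozenset((i, v)) not in edges]
                neighbor = rng.choice(choices)
            edges.add(frozenset((i, neighbor)))
    return edges
```

With p = 0 this reproduces the perfect lattice; even a small p introduces the long-range "shortcut" edges that make the graph a small world.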

Watts & Strogatz Perturbations of a Ring Lattice

What would happen if we asked people to learn “who is friends with whom” in a ring-lattice social network or a perturbed Watts & Strogatz network? Will the regularity of the lattice structure make it easier to learn, or will it be difficult to learn because it goes against one’s expectations of how friendship clusters work?

The answer depends on the mode of presentation of the network. If the network is presented visually, as a network diagram, subjects learn the perfect Ring Lattice more easily than the perturbed version. However, if the network is presented simply as a list of connected nodes, the two graphs are equally easy (or hard) to acquire.

Accuracy by Training Type and Graph Type

Diagram training allows for simple strategies. Names that are close together spatially in a Ring Lattice diagram are necessarily friends. This is true to some degree for the perturbed lattice as well, but it is not as reliable a strategy.

Scale-Free Graphs are Easier to Learn than Random Graphs

The first of my three hypotheses about social network acquisition concerns the structure of the social network graph. The claim is that human subjects will acquire a network’s structure more quickly if it resembles a true human social network rather than an arbitrary network. To translate this into an experiment, I compared the learning rate for subjects learning a random graph to the learning rate for those learning a scale-free graph.

What is a random social network graph? A random social network graph is a graph in which people are nodes, and the friendship ties between them (edges in the graph) are placed at random. In other words, there is nothing special about the node that determines what edges it participates in. All the edges (friendships) are sprinkled at random within the graph. Below is an illustration of a random graph.

Random Social Network Graph. Produced using the Erdős-Rényi method.
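A G(n, p) random graph of the sort pictured above takes only a few lines to generate. This is a sketch; the node count and edge probability in the test are arbitrary.

```python
import random

def erdos_renyi(n, p, seed=None):
    """G(n, p): include each of the n*(n-1)/2 possible edges
    independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]
```

Because every edge is equally likely, node degrees cluster tightly around the mean p*(n-1) — there are no special hubs.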

What is a scale-free social network graph? A scale-free social network graph is a graph in which the more edges (friendships) a node (person) participates in, the more likely that node will be to form new edges. In other words, the rich get richer, or the more popular one is the easier it is to make new friends. Below is an illustration of a scale-free graph.

Scale-Free Social Network Graph. Produced using the Barabási–Albert method.
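The preferential-attachment process behind such a graph can be sketched in plain Python. Starting from a small fully connected seed clique is one common initialization choice, not necessarily the one used to produce the figure.

```python
import random

def barabasi_albert(n, m, seed=None):
    """Preferential attachment: each new node links to m existing
    nodes, chosen with probability proportional to current degree."""
    rng = random.Random(seed)
    edges = []
    # Start from a fully connected seed clique of m + 1 nodes.
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            edges.append((i, j))
    # Each node appears in this list once per edge it touches, so
    # sampling from it is degree-proportional ("rich get richer").
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(endpoints))
        for t in chosen:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges
```

Early nodes accumulate far more edges than late arrivals, producing the hub-dominated degree distribution that defines a scale-free graph.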

So which of these two types of social network graph is easier to learn? The scale-free graph. I’ll post the figures for a couple of experiments below. This is a clear and reliable result that replicates across all of my studies so far.

Scale-free graphs are acquired more quickly than random graphs. The number of trials needed to reach criterion performance is lower.

Scale-free graphs are acquired more quickly than random graphs. The number of errors made during training decreases more rapidly for the scale-free social network graph.

UPDATE 8/21/2012: I’ve replicated this result several times now. More details and a formal description of the experiment and the results are available in pre-prints of two papers on my SSRN author page.

Acquiring Social Network Structure – Results Soon

I have completed the analysis of the first three (three!) experiments. Currently I am putting these results together for a talk here at UCSD. I will also be submitting a paper to CogSci 2011.

That means very soon I’ll have something to talk about here on the blog. Until then, enjoy the introductory slide for Friday’s talk.

Coming Soon: Acquiring Social Network Structure

Acquiring Social Network Knowledge

The shotgun approach isn’t just the name of the blog. I live it. I have so many projects going that I buy file folders by the pallet.

The one I’m really excited about at the moment is an attempt to bring together the two worlds I’ve been living in for the past year – cognitive psychology and social networks – and keep myself working on my dissertation at the same time.

For this project, I’ll be running several online experiments. The main goal is to characterize exactly how humans acquire and retain social network information.

I’ll be posting links to experiments and results here as time allows. For now, I’ll list the first few hypotheses I’ll be testing:

  • Human subjects will acquire a network’s structure more quickly if it resembles a true human social network rather than an arbitrary network. To operationalize this, I will measure learning curves as subjects learn the structure of random or scale-free graphs.
  • Human subjects will acquire a network’s structure more quickly if it is framed as a social network as opposed to the same network framed in some other manner (e.g., as a computer or transport network).
  • Some forms of representation of the network will lead to faster acquisition than others. For example, you might represent a network as a series of edges between vertices (e.g. friendships between people) or you might represent a network as a traversal of the links within it (think of following links in the Kevin Bacon Game). Some forms will lead to faster acquisition than others, and this will allow us to draw conclusions about how graph information is represented in the brain.
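To make the third hypothesis concrete, the same toy network can be rendered in both formats. A rough sketch follows; the names and the friendship network itself are invented for the example.

```python
import random

# A small, hypothetical friendship network as an adjacency list.
friends = {
    "Anna": ["Ben", "Cara"],
    "Ben": ["Anna", "Cara", "Dev"],
    "Cara": ["Anna", "Ben"],
    "Dev": ["Ben"],
}

def edge_list(graph):
    """Present the network as a list of unordered friendship pairs."""
    return sorted({tuple(sorted((a, b))) for a in graph for b in graph[a]})

def traversal(graph, start, steps, seed=None):
    """Present the network as a walk that follows links from friend
    to friend, as in the Kevin Bacon game."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(graph[path[-1]]))
    return path
```

Both functions expose exactly the same underlying graph, which is what lets differences in acquisition speed be attributed to the form of presentation rather than the content.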

Check back for updates on this project, and please leave feedback and questions.

March Cognitive Psychology Word Cloud

The word cloud below was created by dropping into Wordle the titles and abstracts from every article published in a select group of cognitive psychology journals in March 2009. Hopefully it will give you a sense of what cognitive psychology researchers were talking about this month.

Click on the image to see the word cloud full size.

March 2009 Cognitive Psychology Word Cloud