
Collective Intelligence: Help The World Create An IQ Test

A conversation with Christopher Chabris about IQ and collective intelligence

You’ve seen those IQ tests online, and you probably wonder who makes them. Well, now you can become the test inventor rather than the test taker in a new project from the M.I.T. Center for Collective Intelligence.

It turns out crowdsourcing is how the New York Times crossword puzzles are made. So why not use this method to make an IQ test? Researchers David Engel, Christopher Chabris, Anita Woolley, and Tom Malone have teamed up to invite the world to create a specific type of IQ test puzzle: matrix reasoning items. Can you help create more of them? If you are interested, go to http://www.newmatrixreasoning.com/.

I had the chance to ask my colleague Christopher Chabris some questions about their latest project. Their prior research on this topic was actually published in the journal Science, and people who contribute items can become a co-author on the publication and even win cash prizes.

JON: Why do we need another IQ test? Don't we have tons of reliable and valid measures already?


CHRIS:

Yes, there are many good tests of IQ and of general cognitive ability (Spearman's g-factor). Matrix reasoning tests are considered the gold standard for measuring fluid intelligence, or the ability to reason abstractly and nonverbally about novel problems. But the matrix reasoning tests that exist are all expensive for researchers, who often do not have the funding to pay for them.

Also, it is somewhat of a mystery why matrix reasoning tests consistently turn out to be such good measures of the g-factor. Carpenter, Just, and Shell published a famous paper over twenty years ago that attempted to reverse-engineer their mechanism. Understanding how matrix reasoning tests work would be easier if there were a freely available, large pool of items for researchers to use.

Finally, it would be very useful to have multiple versions of a matrix reasoning test, with varying levels of difficulty as well as multiple alternate forms with the same difficulty level. With a large pool of normed items these could easily be created, as could adaptive matrix reasoning tests based on item response theory.

In short, the purpose of our project is to produce materials that will be useful for all researchers studying general cognitive ability and IQ, but not to produce new clinical tests, since those already exist and serve their users well.


The idea of crowdsourcing is innovative. Do you think putting it out on the internet will increase the diversity of thought included in the test?

Thanks! Increasing diversity of kinds of matrix reasoning puzzles is definitely one goal of the crowdsourcing approach. Colleagues have found that it is surprisingly difficult to create items of this type, so having dozens, hundreds, or even more people trying to make them should yield many more useful items than would having a small team working alone. This is one of the main points of crowdsourcing, in fact: the best solutions generated by large groups ought to be better than the best ones generated by small groups or individuals.

If people contribute items to this test, what can they expect to receive in return? Who (or what) selects which items will eventually be included?

According to the contest rules, people who submit items will be recognized in two ways: (1) anyone who submits an item that winds up being included in a test described in a published scientific paper will be listed as a co-author of that paper; (2) $1,300 in cash prizes will be awarded to people who submit items that are judged to be valid matrix reasoning items: $500 for the best item, $500 for submitting more valid items than anyone else, and $100 each to three randomly chosen people who submit at least one valid item. The decisions on validity and quality will be made by our research team, which includes experts on human intelligence research.

So far we have had several submissions in the first two days of our contest. It is open through January 31st, 2014, so there is plenty of time to get involved!

© 2013 by Jonathan Wai

You can follow me on Twitter, Facebook, or G+. For more of Finding the Next Einstein: Why Smart is Relative go here.
