Collective Intelligence: Help The World Create An IQ Test
A conversation with Christopher Chabris about IQ and collective intelligence
Posted December 11, 2013
It turns out crowdsourcing is how the New York Times crossword puzzles are made. So why not use this method to make an IQ test? Researchers David Engel, Christopher Chabris, Anita Woolley, and Tom Malone have teamed up to invite the world to help create a specific type of IQ test puzzle: matrix reasoning items. Can you help create more of them? If you are interested, go to http://www.newmatrixreasoning.com/.
I had the chance to ask my colleague Christopher Chabris some questions about their latest project. Their prior research on this topic was actually published in the journal Science, and people who contribute items can become a co-author on the publication and even win cash prizes.
JON: Why do we need another IQ test? Don't we have tons of reliable and valid measures already?
CHRIS: Also, it is somewhat of a mystery why matrix reasoning tests consistently turn out to be such good measures of the g-factor. Just and Carpenter published a famous paper over twenty years ago that attempted to reverse-engineer their mechanism. Understanding how matrix reasoning tests work would be easier if there were a freely available, large pool of items for researchers to use.
Finally, it would be very useful to have multiple versions of a matrix reasoning test, with varying levels of difficulty as well as multiple alternate forms with the same difficulty level. With a large pool of normed items these could easily be created, as could adaptive matrix reasoning tests based on item response theory.
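The adaptive-testing idea mentioned above can be sketched in a few lines. Under the common two-parameter logistic (2PL) IRT model, each normed item has a discrimination parameter a and a difficulty parameter b, and an adaptive test repeatedly administers whichever remaining item carries the most Fisher information at the examinee's current ability estimate. This is a minimal illustration of that selection step, not the researchers' actual procedure, and the item pool shown is purely hypothetical:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of answering correctly at ability level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, pool):
    """Select the item from the remaining pool that is most informative
    at the current ability estimate theta."""
    return max(pool, key=lambda item: item_information(theta, item["a"], item["b"]))

# Hypothetical item pool: (discrimination a, difficulty b) per item.
pool = [
    {"id": "easy",   "a": 1.0, "b": -1.0},
    {"id": "medium", "a": 1.0, "b": 0.0},
    {"id": "hard",   "a": 1.0, "b": 1.0},
]

# For an examinee currently estimated near theta = 0.9, the "hard" item
# (difficulty closest to the estimate) is the most informative choice.
chosen = next_item(0.9, pool)
```

The intuition is that an item contributes the most information when its difficulty is near the test-taker's ability, which is exactly why a large, well-normed pool spanning many difficulty levels makes adaptive tests practical.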
In short, the purpose of our project is to produce materials that will be useful for all researchers studying general cognitive ability and IQ, but not to produce new clinical tests, since those already exist and serve their users well.
CHRIS: Thanks! Increasing the diversity of matrix reasoning puzzles is definitely one goal of the crowdsourcing approach. Colleagues have found that it is surprisingly difficult to create items of this type, so having dozens, hundreds, or even more people trying to make them should yield many more useful items than would having a small team working alone. This is one of the main points of crowdsourcing, in fact: the best solutions generated by large groups ought to be better than the best ones generated by small groups or individuals.
JON: If people contribute items to this test, what can they expect to receive in return? Who (or what) selects which items will eventually be included?
CHRIS: According to the contest rules, people who submit items will be recognized in two ways: (1) anyone who submits an item that winds up being included in a test described in a published scientific paper will be listed as a co-author of that paper; (2) $1,300 in cash prizes will be awarded to people who submit items that are judged to be valid matrix reasoning items: $500 for the best item; $500 for submitting more valid items than anyone else; and $100 each to three randomly chosen people who submit at least one valid item. The decisions on validity and quality will be made by our research team, which includes experts on human intelligence research.
So far we have had several submissions in the first two days of our contest. It is open through January 31st, 2014, so there is plenty of time to get involved!
© 2013 by Jonathan Wai