The most interesting intelligence research in 2023...
250 words written at age 11 predict educational attainment close to expert assessment!
Below are five of the most interesting papers from the 2023 International Society for Intelligence Research (ISIR) conference in Berkeley, California. Note that we’ve picked papers that didn’t require lots of technical knowledge to comprehend — there were some exceptional papers that were too esoteric for this round-up.
Testing for selective adoption placement with polygenic scores in the Minnesota Center for Twin and Family Research
Emily Willoughby used genotyped samples of adoptees raised in American families, some of U.S. and some of Korean origin, to investigate the extent of selective placement in adoption studies. The reason for comparing adoptees of U.S. and Korean origin is simple: Korean law does not allow prospective parents to select their adopted children. If an American family adopts a Korean child, that child is assigned to them more or less at random. In the U.S., by contrast, adoption is selective: prospective parents tend to pick children based on factors like age, race, resemblance, and perceived compatibility. Comparing U.S.- and Korean-origin adoptees is therefore an excellent way to study selective placement effects.
As the participants were genotyped, Willoughby was able to compute polygenic scores for them, which she then correlated with various aspects of the family environment. Among the U.S.-origin adoptees, there were a few significant correlations; among the Korean-origin ones, there were none at all. Selective placement, then, is not a major issue in these adoption data. Nor, going by an off-hand remark during the presentation, is range restriction in environmental quality. The assumptions of adoption studies based on these data appear to be satisfied, so they can go on being used for research.
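To make the logic of the check concrete, here is a minimal sketch in Python. It is not Willoughby's actual pipeline, and the file and column names (e.g. "pgs_ea", "parental_income") are hypothetical; the point is simply that selective placement would show up as correlations between adoptees' polygenic scores and their adoptive family environments.

```python
# Sketch only: correlate adoptees' polygenic scores with adoptive-family
# environment measures, separately by country of origin.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("adoptees.csv")  # hypothetical file: one row per genotyped adoptee

environment_vars = ["parental_income", "parental_education", "home_quality"]

for origin, group in df.groupby("origin"):        # e.g. "US" vs "Korea"
    for env in environment_vars:
        r, p = stats.pearsonr(group["pgs_ea"], group[env])
        print(f"{origin:6s}  PGS x {env:20s}  r = {r:+.2f}  (p = {p:.3f})")
```

Under random assignment, as in the Korean-origin group, these correlations should hover around zero.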
250 words written at age eleven predict intelligence and final educational attainment close to expert assessment
Tobias Wolfram’s presentation earned the most genuine laughs of any given at the conference. He showed that you can predict people’s intelligence and educational attainment by applying methods from computational linguistics, natural language processing and deep learning to 250-word personal essays about how they envisioned their futures. The model’s essay-based predictions correlated with educational attainment at over .5, and the correlations with intelligence were about as high as the test-retest reliability of the intelligence measures.
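For readers who want a feel for the approach, here is a simplified sketch. Wolfram used richer NLP and deep-learning features; plain bag-of-words features and ridge regression stand in for them here, and the data file and column names are assumptions.

```python
# Sketch only: predict an adult outcome from childhood essay text,
# then correlate the cross-validated predictions with the observed outcome.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

data = pd.read_csv("essays.csv")        # hypothetical file: essay text plus adult outcomes
y = data["attainment"].to_numpy()

# TF-IDF features + a regularized linear model as a stand-in for the real feature set
model = make_pipeline(TfidfVectorizer(max_features=5000), Ridge(alpha=1.0))

# out-of-fold predictions from 10-fold cross-validation
pred = cross_val_predict(model, data["essay"], y, cv=10)
print("r(predicted, observed) =", round(float(np.corrcoef(pred, y)[0, 1]), 2))
```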
The main problem with Wolfram’s method is that there seemed to be evidence of overfitting. The only way he addressed this was via k-fold cross-validation (where, as I recall, k = 10). This is unfortunate, since it is easy to overfit a model even with cross-validation. The true test of Wolfram’s method would be a holdout sample, or simply replicating the results in a new one.
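The stronger test is easy to describe: keep a portion of the data (or, better, an entirely new sample) untouched during all model fitting and tuning, and report the correlation only there. Continuing the hypothetical `model`, `data` and `y` from the sketch above:

```python
# Sketch only: evaluate on a holdout that the model never sees during fitting.
import numpy as np
from sklearn.model_selection import train_test_split

train_txt, test_txt, y_train, y_test = train_test_split(
    data["essay"], y, test_size=0.3, random_state=0
)

model.fit(train_txt, y_train)              # fit on the training portion only
pred_holdout = model.predict(test_txt)     # predict the untouched holdout
print("holdout r =", round(float(np.corrcoef(pred_holdout, y_test)[0, 1]), 2))
```

If the holdout correlation stays high, overfitting is much less of a worry.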
The stability of cognitive abilities: A meta-analysis of longitudinal studies
Moritz Breit presented a meta-analysis of the stability of cognitive ability over different ages and time intervals. Compiling 1,288 test-retest score correlations from 205 longitudinal studies conducted over the past century, he found that the mean rank-order stability at age 20, with a test-retest interval of five years, is equivalent to a correlation of .77. Scores were less stable in pre-schoolers and more stable from late adolescence to late adulthood. Unsurprisingly, stability decreased with the time interval between tests.
Breit also found that the minimum stability needed for diagnostic decisions (a test-retest correlation of .80) can only be expected for kids older than age seven, with relatively short time intervals. In adults, by contrast, high stability can be expected even when the time interval exceeds five years. Knowledge-based abilities had greater stability than those based on effortful processing – though the differences were reduced when accounting for measurement error.
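As a toy illustration of how such test-retest correlations get pooled, here is the basic Fisher-z, inverse-variance step. Breit's analysis was far more sophisticated (modelling age, interval and ability type as moderators), and the numbers below are made up.

```python
# Sketch only: pool hypothetical test-retest correlations across studies.
import numpy as np

r = np.array([0.72, 0.81, 0.65, 0.79, 0.84])   # made-up test-retest correlations
n = np.array([150, 400, 90, 220, 310])         # made-up study sample sizes

z = np.arctanh(r)          # Fisher r-to-z: makes the sampling distribution ~normal
w = n - 3                  # inverse of the sampling variance of z
z_pooled = (w * z).sum() / w.sum()

print("pooled stability r =", round(float(np.tanh(z_pooled)), 2))
```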
Personality and intelligence from an observer perspective
Colin DeYoung presented a study on the association between intelligence and personality as measured by observers. A recurrent problem when studying the association between these two traits is researchers’ reliance on self-reports of personality. This matters because people might not perceive their own personalities the same way as others do. In two samples of 234 and 309 participants, DeYoung compared the associations of intelligence with self and peer ratings of personality using the Big Five Aspect Scales.
As you’d expect, the correlations were stronger in the sample that had a better intelligence test and two peer ratings rather than one. But DeYoung’s main finding was that self and peer ratings largely agreed with one another: intelligence was similarly correlated with the two types of rating. Replicating many previous studies, he found that openness was associated with intelligence, with the association being stronger for the intellect aspect than the openness aspect. Conscientiousness was negatively associated with intelligence, while the other associations were effectively null.
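The comparison itself is simple to set up. A minimal sketch, with hypothetical file and column names:

```python
# Sketch only: compare intelligence-personality correlations for self vs peer ratings.
import pandas as pd

df = pd.read_csv("bfas_sample.csv")   # hypothetical file: one row per participant

for trait in ["intellect", "openness", "conscientiousness"]:
    r_self = df["iq"].corr(df[f"{trait}_self"])
    r_peer = df["iq"].corr(df[f"{trait}_peer"])
    print(f"{trait:17s}  r(self) = {r_self:+.2f}   r(peer) = {r_peer:+.2f}")
```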
Can machine learning-based predictive modelling improve our understanding of human intelligence?
Kirsten Hilger presented a study on machine learning-based predictions of intelligence, which showed that intelligence could, to some degree, be predicted from functional brain networks. Examples include whole-brain connectivity, connectivity within and between specific networks, and connectivity patterns proposed by different theories of intelligence. Hilger was better able to predict general and crystallized intelligence than fluid intelligence, which makes sense given that g seems to be generally represented in the brain: g-loaded measures appear to be more biologically proximate, and crystallized intelligence tests have higher g-loadings.
Her presentation concluded by noting that multiple brain networks were involved in prediction, that simulated lesioning of one system could be compensated for by other systems, and that localized connectivity did not show markedly greater predictive accuracy than widely distributed or whole-brain connectivity. This provides suggestive evidence against process overlap theory, if its processes are indeed localizable. Hilger said that she plans to work with global, time-varying prediction features in future work. We look forward to it.
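To give a rough sense of what "simulated lesioning" means in this kind of analysis, here is a sketch: predict intelligence from connectivity features with a cross-validated linear model, then zero out one network's edges and see how much accuracy drops. The data files, network labels and model choice are all assumptions, not Hilger's actual pipeline.

```python
# Sketch only: connectivity-based prediction of g with simulated lesioning.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

conn = pd.read_csv("connectivity_edges.csv")     # hypothetical: one column per edge
g = pd.read_csv("phenotypes.csv")["g"].to_numpy()  # hypothetical g-factor scores

def prediction_r(X):
    """Correlation between 10-fold cross-validated predictions and observed g."""
    pred = cross_val_predict(Ridge(alpha=10.0), X, g, cv=10)
    return float(np.corrcoef(pred, g)[0, 1])

print("all edges:", round(prediction_r(conn.to_numpy()), 2))

# simulated lesioning: zero out the edges belonging to one network at a time
for network in ["frontoparietal", "default_mode", "salience"]:
    lesioned = conn.copy()
    cols = [c for c in conn.columns if c.startswith(network)]
    lesioned[cols] = 0.0
    print(f"without {network}:", round(prediction_r(lesioned.to_numpy()), 2))
```

If accuracy barely drops after lesioning any single network, prediction is being carried by widely distributed connectivity, which is the pattern Hilger described.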
We also asked ten of the ISIR attendees our special question. Watch below: