University of Illinois at Urbana-Champaign

Volume 5 Number 1
©The Author(s) 2003

Comment on Marcon (ECRP, Vol. 4, No. 1, Spring 2002):
"Moving up the Grades: Relationship between Preschool Model and Later School Success"

Christopher J. Lonigan
Florida State University

Abstract

Commenting on Rebecca Marcon's study, which indicated that an academically oriented preschool model had negative effects in later school years, this article calls into question the study's data analyses and interpretations. The commentary asserts that once conventional levels of statistical significance are used, there were no reliable differences in report card grades between children who attended academically directed (AD), child-initiated (CI), or middle-of-the-road (M) preschool classes in either third or fourth grade; that the lack of follow-up analyses precludes any interpretation of the grade-by-preschool interaction; that it is unclear how children who had been retained in grade by third grade were included in the follow-up study; and that the significantly higher likelihood of retention prior to grade 3 for children who attended CI and M preschools is a clear finding glossed over in Marcon's report. The commentary also raises questions about potential differences in the factors responsible for preschool selection, because type of preschool and preschool model were confounded in the study, and about potential context effects in the study. The commentary concludes by reiterating that the most significant finding of Marcon's study was given the least attention: children who attended AD preschools were half as likely to be retained in grade by third grade as were children who had attended CI and M Model preschools.


I was surprised the day that the press release about the article published in Early Childhood Research & Practice, the online journal started by the ERIC Clearinghouse on Elementary and Early Childhood Education, showed up in my mailbox. Within that press release was an intriguing set of quotations suggesting that the article by Rebecca Marcon provided clear evidence on the effects of different preschool models. I suppose that the press release did its job, at least for me, because within a few hours I was downloading the article to read (http://ecrp.illinois.edu/v4n1/marcon.html; "Moving up the Grades: Relationship between Preschool Model and Later School Success"). Interestingly, the press release read as if this study provided the clear answer to questions concerning the impact of different approaches to early childhood education, despite the well-known tenet of science that no single study serves as the arbiter of any question. Yet, one of the current "battles" in early childhood education is between those who believe that anything other than a child-initiated model is developmentally inappropriate and those who believe that it is possible, developmentally appropriate, and desirable to teach children some of the skills that will help them succeed once their "formal" education starts in kindergarten and first grade. Hence, it is possible that the press release was more about politics than about science.

The bottom-line message of the article was that an academically oriented preschool model had negative effects that resonated through the early school years. I will admit, up front, that I have significant doubts that a sensible teacher-directed early childhood curriculum will have negative impacts on children. Yet, I am interested in looking at the evidence. After reading the report of the study, I believe I have some reasonable questions about the study's design and description, and I believe that these questions raise the issue of whether this study provides very much information about the effects of early childhood programs.

First, there is the issue of Type 1 error. The purpose of conducting inferential statistics on data is to prevent the support of conclusions based on spurious results. In inferential statistics terms, this means that we are typically willing to accept results if they are likely to occur by chance no more than 5 out of 100 times (i.e., p < .05). The analysis of multiple nonindependent outcome measures results in an increase in familywise error. That is, the likelihood of a spurious result is increased when multiple tests are conducted (i.e., the inferential probabilities are based on the assumption of independence). Most generally, researchers who conduct multiple inferential tests on measures that are not independent adjust their "alpha" levels to hold familywise error at the conventional .05 level. By contrast, Dr. Marcon reports the results of 12 nonindependent comparisons for preschool type and 12 nonindependent comparisons for gender in each of two years. Even if we forget about the gender comparisons and the multiple years, a typical correction (e.g., modified Bonferroni procedure) would require adjusting the alpha to p < .004 to maintain familywise error at p < .05. Rather than adjusting the alpha, Dr. Marcon interpreted the comparison that yielded the largest group difference as significant at p = .07! Therefore, the real answer from the data in this study is that there were no reliable differences in report card grades between children who attended academically directed (AD), child-initiated (CI), or middle-of-the-road (M) preschool classes in either the third or fourth grades, given that the above-mentioned p = .07 finding was the only contrast that even came close to being significant.
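The familywise-error arithmetic behind this point can be sketched in a few lines. The figures (12 comparisons per year, alpha of .05) come from the study as described above; the simple Bonferroni division shown here is one common correction among several (the standard inflation formula also assumes the tests are independent, so it is only an approximation for nonindependent measures):

```python
# Sketch of the familywise Type 1 error problem described above.
alpha = 0.05
k = 12  # number of report-card comparisons per year in the study

# Probability of at least one spurious "significant" result across
# k independent tests, each conducted at alpha = .05.
familywise_error = 1 - (1 - alpha) ** k
print(f"Chance of at least one spurious result: {familywise_error:.2f}")  # ~0.46

# Bonferroni adjustment: divide alpha by the number of tests to hold
# familywise error at roughly .05.
bonferroni_alpha = alpha / k
print(f"Adjusted per-test alpha: {bonferroni_alpha:.4f}")  # ~0.0042

# The study's strongest comparison (p = .07) falls well short of this bar.
print(0.07 < bonferroni_alpha)  # False
```

With 12 uncorrected tests, the chance of at least one spurious finding is close to a coin flip, which is why the adjusted alpha of roughly .004 cited in the text is the appropriate benchmark.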

Second, analyses follow a typical order that takes into account how the different effects are decomposed. Significant results from analyses have a set of appropriate follow-up contrasts that allow the significant results from the main analyses to be interpreted. In a sequential analysis, one typically examines the interaction first (in this article, it comes last) because the main effects are interpretable only in the absence of an interaction. So, what about the interaction between year of assessment and preschool type? Here the article is on somewhat more stable ground in terms of Type 1 error: the same 12 comparisons and the same adjustment are needed, but at least some of the statistics fall below the conventional p < .05 level. What if only overall GPA had been examined (instead of overall GPA as well as the GPA for the 11 specific subject areas)? In this case, there would be one comparison, and at p < .05, it is clear (despite the fact that the column in the table appears to be mislabeled) that there is a significant grade by preschool model interaction. Because it is already known that there are no group differences on this variable at either grade 3 or grade 4, one would need to conduct appropriate follow-up tests to interpret the significant interaction. What would one test? Perhaps one would want to know whether the change from grade 3 to grade 4 was significant for each of the three groups. Perhaps one would want to know whether the rate (or direction) of change differed significantly across all three groups, whether the rate of change for one group differed significantly from the other two, or whether the rate of change for one group differed significantly from only one other group. None of these tests was reported. Therefore, the article provides no information on how to interpret the interaction, other than that it does not result in a significant difference between the groups at grade 4 (or grade 3).

The comparisons and discussion of Type 1 error above are complicated by the fact that there seem to be different children included in the different analyses. That is, the children included in the analyses comparing children from different preschool models across years represent a subset of children in the preschool model comparison for the separate years. It is not clear why a single set of analyses on children for whom data were available in both years was not what was reported.

Third, the information reported in the article limits what we know about what was actually tested. The article notes that 20% of the sample had been retained in grade by the third grade. The article further notes that children who had attended CI and M preschools were significantly more likely to have been retained in grade prior to the third grade than children who attended AD preschools—and this difference was very strong for the boys. There is a single sentence in the article that reads, "The academic performance of children who were 'on schedule' at the end of Year 5 (grade 3), as well as performance of children who had been retained prior to third grade, was examined in this follow-up study" to describe the children included in the sample. What does this mean? How were those children who were retained in grade—the majority of whom came from CI and M preschools—included in the sample?

The answer to this question could have significant influence on the results. Was it the case that the data for children retained in grade were collected in Year 6 and Year 7 so that they contributed report card grades from their third- and fourth-grade classes (like the students who were not retained)? Or were the report cards drawn from whatever grade these children happened to be in at the time of Year 5 and Year 6? One can imagine that children are likely to receive better grades the second time they complete a particular grade than the first. Hence, if 20% of children who had been in CI and M classrooms contributed report cards from their repeat of a grade, it is perhaps not surprising that they appear to have higher grades (leaving aside for the moment the likelihood that teachers may be more inclined to give higher grades to children who have already repeated a grade). Moreover, under this scenario, children from AD preschools are contributing grades based on significantly more difficult material. If 20% of children who had attended CI and M preschools contributed data after an extra year of schooling (i.e., their third- and fourth-grade report cards were used), would it not be expected that they would do better than children with less time in school? Certainly, one of the most consistent findings from educational research is that more time-on-task predicts higher scores.

In either case, there is something of an apples and oranges comparison being made here. However, it would not be very informative to conduct the comparison excluding children who were retained in grade—except to provide a very weak test of the author's preferred hypothesis—because only the most academically capable children would still be included in the CI and M preschool groups. However, it would perhaps be telling—except that it would be confirmation of the null hypothesis—if children from AD preschools scored as well as children from CI or M preschools once those children retained in grade were excluded from the analysis. More telling would be if children from AD classrooms scored better than children from CI or M preschools once children retained in grade were excluded from the analysis.

It seems that one clear result that is being glossed over in the article is the significantly higher likelihood of retention prior to grade 3 for children who participated in CI and M preschools. One could almost declare that the "game" was over at that outcome, and CI and M had lost. Imagine a scenario in which the outcome is not report cards but quality of life following a medical procedure. If twice as many patients in one group die as in another group, there can be no question asked about quality of life (i.e., there is no quality of life when you are dead). I suppose that it is open to debate whether one can ask about school success after twice as many children in one group than in another group have already failed—although some recent reviews suggest that grade retention is a significant risk factor for negative school outcome (Jimerson & Kaufman, 2003).

Fourth, it also seems to me to be reasonable to ask about the potential differences in (perhaps unmeasured) factors responsible for preschool selection because type of preschool and preschool model were confounded in the study. That is, none of the Head Start preschools was classified as Model AD (based on the description provided, it is not possible to deduce if any were classified as Model M). However, Head Start preschools contributed 16% of the sample. If the Head Start classes were excluded, what would the proportion of Model CI and Model M classrooms have been? Given the different admissions criteria for Head Start and other preschools, such a confound between preschool models and type of preschool is potentially significant. A strong test would require that the apparent impact of Model CI classrooms not be dependent on Head Start classes (e.g., by replicating the effect with Head Start classes excluded from the analyses). In the absence of such a demonstration, the effect—if actually present once the retention issue was worked out—could not be unambiguously attributed to preschool model.

Finally, I think it is not unreasonable to ask about potential context effects in the study (e.g., overall achievement at a particular school). Were children from the different preschool models equally likely to attend the same schools? Given the potentially subjective nature of report card grading (e.g., use of a grading "curve"), it is possible that children with quite different scores on their report cards had very similar abilities. It is a bit surprising that there was no attempt to include data from the district's standardized assessment of achievement, which in most districts is administered by the fourth grade. Such an assessment would allow an examination of how well report cards reflected student ability. In the absence of such data, it would be useful to control for context effects in the analyses.

It is absolutely reasonable and important to ask about the long-term effects of different preschool models. Significantly, the purpose of conducting scientifically valid examinations of educational practices is to understand how best to serve the needs of young children. Such decisions need to be based on the best scientific methods. The costs of poor decisions are far too high—both to the children and to society. Ultimately, the quality of the decisions is based on the quality of the evidence used.

Whereas I do not think a priori that academically oriented preschool experiences are harmful to children, I also do not believe that preschools should look like first- or second-grade classrooms with children spending most of their time sitting at desks or tables engaging in "academics" or "drill and kill" activities. There is a significant difference between thinking that preschool teachers can provide children with directed activities designed to promote the development of some skill and thinking that children should be engaged in some activity more appropriate for a first- or second-grade student. Parents engage in age-appropriate directed learning activities all the time; however, we do not ask if an engaged parent is ruining his or her child's intrinsic motivation for learning. Similarly, a skilled preschool teacher can engage children in responsive and interesting educational, academically oriented, activities in ways that both foster children's skills and provide enjoyment for the children. In many cases, children will, in fact, choose these same activities when they are in a free-choice period. Hiding academically relevant experiences until children are in kindergarten does not seem to be the way to promote a love of knowledge and learning.

What seems most compelling about the results reported in this study is the finding that is given the least attention: children who had attended AD preschools were half as likely to be retained in grade by the third grade as were children who had attended CI and M preschools. What are the consequences, both in terms of socioemotional development and academic development, of being retained in grade by third grade? What impact does such early retention have on intrinsic motivation for learning? These are important questions. What is clearly not true based on the results of this study is the claim made in the article's abstract that "Children's later school success appears to have been slowed by overly academic preschool experiences that introduced formalized learning experience too early for most children's developmental status."

Let's let good science decide the best way to help children succeed in school and in life. Ultimately, what are needed are randomized controlled studies that allow unambiguous attributions of causality. Such studies are difficult and costly to conduct. However, the future of children is far too significant to let the issue be decided by fallible information. It is unlikely that the needs of children are best served by what at times seems like politically motivated dissemination of misinformation. The field needs to agree on the desired outcomes and how to measure them. Then, we can collect data that will be informative on the best way to help children achieve those outcomes.

Reference

Jimerson, S. R., & Kaufman, A. M. (2003). Reading, writing, and retention: A primer on grade retention research. Reading Teacher, 56, 622-635.

Author Information

Christopher Lonigan is a professor of psychology at Florida State University and associate director of the Florida Center for Reading Research. His primary research interests include the development of emergent literacy skills during the preschool period and how these skills impact later reading, development of assessment instruments that measure the key areas of emergent literacy, and evaluation of preschool interventions and curricula designed to prevent reading difficulties for preschool children who are at-risk for later academic problems. Other interests include psychiatric disorders in children as well as the overlap between psychiatric disorders and problems in reading. Recent publications include "Development and Promotion of Emergent Literacy Skills in Preschool Children At-Risk of Reading Difficulties" in Preventing and Remediating Reading Difficulties: Bringing Science to Scale (B. Foorman, ed.), "Family Literacy and Emergent Literacy Programs" and "Assessment of Children's Pre-literacy Skills" (with K. Keller and B. M. Phillips) in Handbook on Family Literacy: Research and Services (B. Wasik, ed.), and "Temperamental Basis of Anxiety Disorders in Children" (with B. M. Phillips) in The Developmental Psychopathology of Anxiety (M. W. Vasey & M. R. Dadds, eds.).

Christopher J. Lonigan
Florida State University
Department of Psychology
One University Way
Tallahassee, FL 32306-1270
Email: lonigan@psy.fsu.edu
