Volume 5 Number 1
©The Author(s) 2003

Reply to Lonigan Commentary

Rebecca A. Marcon
University of North Florida

Abstract

Responding to Lonigan's commentary on her preschool models study, Marcon clarifies points from the original article and reports findings from a reexamination of the data that answer Lonigan's questions. The response first addresses the issue of retention, reiterating the possible reasons for the lower retention rate of students from the academically directed (AD) preschool model and focusing on one: the influence of family income on early grade retention. Lower-income children were more likely than higher-income children to have been retained prior to third grade, and none of the Head Start children had been enrolled in an AD model preschool. After stating the rationale for analyzing data by year in school rather than by grade, which equates children on time in school regardless of grades repeated, the response notes that the selection of report card grades as an outcome measure might even be seen as favoring the AD approach in a school system where grades reflect the number of objectives mastered in a competency-based curriculum. Lonigan's suggestions for handling retained children in a longitudinal analysis prompted a reexamination of the data, from which several conclusions stand out. First, the impact of the child-initiated (CI) model on children's grades was not dependent on Head Start classrooms. Second, the decline in grades associated with the AD model was more evident among children who had never been retained. Significant correlations between report card grades and scores on the standardized achievement test battery first administered in third grade were found in all subject areas, as well as between children's GPA and total test battery score; report card grades are therefore reasonable outcomes to evaluate as an indicator of children's academic abilities.
Finally, the response revisits the distinctions between approaches, pointing out that the preschool models contrasted in the study were empirically derived and reflect a continuum of experiences, not an either/or categorization. The response concludes that although the study does not provide "the answer" to questions concerning the impact of different approaches, it does further our understanding of what facilitates or possibly hinders children's progress through school by demonstrating difficulties that graduates of AD preschools encounter.

I read with interest Professor Lonigan's comments and welcome the opportunity to address concerns he has raised. In this response, I will clarify points that were unclear in the original article and provide findings from a reexamination of the data to answer Professor Lonigan's questions.

The issue of retention is clearly one that deserves further attention. Because I do agree with Lonigan and others that being retained in grade places the child at risk for negative school outcomes, possible reasons for the lower retention rate of Model AD children prior to third grade were discussed at length in the original article. These reasons included (1) greater continuity between the Model AD preschool experience and educational practices in the primary grades, (2) family income influences on early grade retention, and (3) the competency-based system of promotion that emphasized basic reading and arithmetic skills regardless of performance in other subject areas. After reading the commentary, I explored the second possible explanation further because "lower-income children were more likely than higher-income children to have been retained prior to third grade (p = .01)," and no Head Start children had been enrolled in Model AD preschools. Indeed, more Head Start children (35% of the Head Start sample) than pre-kindergarten attendees (17% of the pre-k sample) had been retained prior to third grade, χ²(1, N = 159) = 3.64, p = .056. Although no difference (p = .92) in retention rates between CI and M preschools attended by Head Start graduates was found, differential rates of retention were noted for pre-k graduates, χ²(1, N = 133) = 4.35, p = .11. Among pre-k graduates, the Model CI retention rate was as expected (~15%), whereas more Model M (~26%) and somewhat fewer Model AD graduates (~10%) than expected had been retained. Thus, in the full sample, the notably lower retention rate of children who had attended AD preschools could be partially attributed to these children being less poor. Lonigan's statement declaring the "game" over for Model CI is premature.
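To make the retention comparison concrete, the chi-square test above can be sketched in a few lines of Python. The cell counts below are hypothetical, chosen only to approximate the reported proportions (35% of Head Start graduates retained vs. 17% of pre-k graduates, N = 159); the exact cell counts were not published, so the result will not reproduce the reported 3.64 precisely.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table of observed counts, table = [[a, b], [c, d]]."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts approximating the reported percentages:
observed = [[9, 17],    # Head Start: retained, not retained (~35%)
            [23, 110]]  # pre-k:      retained, not retained (~17%)
print(round(chi_square_2x2(observed), 2))  # 4.06
```

A 2x2 table of counts (group by retention status) has 1 degree of freedom, matching the df reported for the Head Start vs. pre-k comparison.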

As Lonigan describes, the issue of retention does indeed complicate analysis of longitudinal data, and there is no agreed-upon strategy among researchers for handling the problem. I took a developmental approach because number of years in school, rather than grade, may better reflect children's development during the early elementary years, when progress is often uneven. The original article reported on children's progress after 5 years and 6 years of schooling, regardless of retention status; report cards were drawn from whatever grade a child was in during Year 5 and Year 6. Of those children who had been retained prior to third grade (Year 5 of school), 74% had repeated first grade and 26% had been retained at the end of second grade. Retained children did not contribute report cards from their repeat of third grade in the Year 5 analysis. In the Year 6 analysis, there were 10 third-graders who had been retained for the first time in third grade. A comparison of these 10 children's Year 5 (third grade) and Year 6 (repeated third grade) grade point averages (GPA) showed that they earned higher grades the second time around (p = .04), with no Model x Year interaction (p = .98). Lonigan is therefore correct in predicting that children would receive better grades the second time they completed a particular grade than the first time, and that is another reason why I chose to analyze the data by year in school rather than grade in school: all children had an equal amount of time in school.
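The year-in-school strategy can be illustrated with a small sketch. All data and names here are hypothetical, invented only to show how a Year 5 report card is selected regardless of the grade a child happens to be in that year.

```python
# Each child's record: one (grade, gpa) entry per school year, in order,
# beginning with pre-k ("PK"). GPA entries before first grade are None.
on_schedule = [("PK", None), ("K", None), (1, 2.8), (2, 3.0), (3, 3.1)]
repeated_first = [("PK", None), ("K", None), (1, 1.9), (1, 2.4), (2, 2.6)]

def report_card(history, year_in_school):
    """Return the (grade, gpa) record for a given year of schooling,
    whatever grade the child was in during that year."""
    return history[year_in_school - 1]

# Year 5 of school: grade 3 for the on-schedule child, grade 2 for the
# child who repeated first grade. Both contribute Year 5 report cards,
# so every child is compared after the same amount of time in school.
print(report_card(on_schedule, 5))     # (3, 3.1)
print(report_card(repeated_first, 5))  # (2, 2.6)
```

The trade-off Lonigan raises falls out of this design: the on-schedule child's Year 5 grades are earned on third-grade material, while the retained child's are earned on second-grade material.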

Although the approach I took is a reasonable one, it does not fully solve the dilemma of what to do with retained children in a longitudinal study. I agree with Lonigan's point that this strategy could be problematic because more "children from AD preschools (would be) contributing grades based on significantly more difficult material" due to fewer AD children in the overall sample having been retained. I was very interested in Lonigan's suggestions for dealing with this difficult problem because, contrary to his assertion that I had a preferred hypothesis in mind, I have always been interested in finding what approach, if any, would best prepare at-risk children to succeed in school. In fact, it is easy to see in published reports of the preschool findings (e.g., Marcon, 1999) and in my discussions with researchers and policy makers across the years that I expected to support the null hypothesis of no significant difference between models. If anything, the selection of report card grades as an outcome measure might be seen as favoring the AD approach in a school system where grades reflect the number of objectives mastered in the competency-based curriculum. I was surprised that my initial preschool findings favored the CI approach and, therefore, proceeded to replicate the earlier findings with two additional cohorts before publishing them in Developmental Psychology. After reading the commentary, I was eager to reexamine the data using the comparisons Lonigan proposed, although I, too, agreed that no single study could definitively answer questions about the long-term effectiveness of varying preschool models.

Before presenting results of comparisons suggested by Lonigan, I would like to explain why Type I familywise (alpha FW) error rate is not as great a worry in this study as the commentary implies. Yes, alpha FW error can be a problem when conducting multiple statistical analyses. That is why I first analyzed children's overall GPA as a composite score. When this composite score was found to be statistically significant (p < .05) or approaching statistical significance (p < .10), univariate analyses of individual subject areas contributing to the overall GPA were performed to aid in interpretation of findings. Year 5 and Year 6 findings for all retrieved children from the original preschool study were presented as background information for understanding the main focus of the research—transition from Year 5 to Year 6. To me the most interesting aspect of the study was the longitudinal component that could help us better understand what approaches might facilitate or hinder academic performance across this notoriously difficult transition in children's school careers.

Two points regarding error need to be addressed. First, in each yearly analysis, three statistical tests were performed on the composite GPA (one each for the A main effect: Preschool Model; the B main effect: Children's Sex; and the A x B interaction). Although these three tests were performed, "these tests are conceptualized as each constituting a separate family of tests … (with) questions of the A main effect … representing one family of questions to be addressed…. Questions of the (B) main effect and interaction are considered separately because they represent conceptually distinct questions…. Thus, although the alpha level for the (study) as a whole is allowed to exceed .05, the alpha FW rate is set at .05 for each of the three families under consideration" (Maxwell & Delaney, 1990, pp. 259-260). Second, in field research, somewhat higher alpha levels than the conventional .05 can be used if the researcher also wishes to guard against Type II error (failing to reject a false null hypothesis). Because of the quasi-experimental design of this study and the noise associated with an array of uncontrolled error across the 5 years, I did report findings at a higher than conventional alpha level (p < .10). By doing so, I acknowledge that the Year 6 composite GPA result for Preschool Model (p < .07) is not as reliable as other reported findings that meet the conventional p < .05 criterion.
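The arithmetic behind the Maxwell and Delaney quote can be checked directly. This sketch (not from the original article) shows why setting the familywise alpha at .05 separately for each of the three families lets the study-wide Type I error rate exceed .05.

```python
# Three conceptually distinct families of tests (A main effect,
# B main effect, A x B interaction), each allotted alpha = .05.
alpha_per_family = 0.05
families = 3

# Upper bound on P(at least one false rejection across families),
# assuming the three families' tests are independent:
study_wide_alpha = 1 - (1 - alpha_per_family) ** families
print(round(study_wide_alpha, 4))  # 0.1426
```

So the study-wide rate rises to roughly .14, while the rate within any one family of questions stays at the conventional .05.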

I should have clearly stated in both the Abstract and the Discussion that my interpretation of what happened in children's sixth year of school was based on the subsample of children for whom data were available on both sides of the Year 5 to Year 6 transition. For this transition subsample, the Model x Year interaction was significant (p = .02), and posthoc comparisons indicated (1) marginal increases (6%) for CI children, F(1, 44) = 3.04, p = .09; (2) nonsignificant decreases (4%) for M children, F(1, 48) = 2.18, p = .15; and (3) marginal decreases (8%) for AD children, F(1, 41) = 3.25, p = .08. But how would these findings hold up in comparisons that excluded children who had been previously retained? Would findings be similar for comparisons that included only those children who had attended pre-kindergarten and excluded Head Start graduates?

These are excellent questions, and the following table summarizes results of preschool model comparisons for children's GPA.

Preschool Model Comparisons for Children's GPA

All children (by year in school):
  Year 5: F(2, 153) = .47, p = .62
  Year 6: F(2, 176) = 2.68, p = .07 (CI > AD, p < .10; M = AD; CI = M)
  Year 5 to 6, Model x Year: F(2, 135) = 4.11, p = .02
    CI: F(1, 44) = 3.04, p = .09; M: F(1, 48) = 2.18, p = .15; AD: F(1, 41) = 3.25, p = .08

Grade "on schedule" children (excluding retained):
  Grade 3: F(2, 119) = .67, p = .51
  Grade 4: F(2, 120) = 5.67, p = .004 (CI > AD, p < .10; M > AD, p < .01; CI = M)
  Grade 3 to 4, Model x Year: F(2, 107) = 3.92, p = .02
    CI: F(1, 30) = 1.23, p = .28; M: F(1, 31) = 1.70, p = .20; AD: F(1, 34) = 5.67, p = .02

All pre-k children (excluding Head Start):
  Year 5: F(2, 127) = .35, p = .71
  Year 6: F(2, 145) = 4.36, p = .02 (CI > AD, p < .01; M = AD; CI > M, p < .10)
  Year 5 to 6, Model x Year: F(2, 112) = 4.08, p = .02
    CI: F(1, 28) = 2.33, p = .14; M: F(1, 41) = 2.63, p = .03; AD: F(1, 41) = 2.42, p = .08

Grade "on schedule" pre-k children (excluding retained and Head Start):
  Grade 3: F(2, 80) = 3.91, p = .02 (CI > AD, p < .10; M > AD, p < .05; CI = M)
  Grade 4: F(2, 80) = 5.90, p = .004 (CI > AD, p < .01; M > AD, p < .05; CI = M)
  Grade 3 to 4, Model x Year: F(2, 80) = 4.03, p = .02
    CI: F(1, 22) = 1.54, p = .23; M: F(1, 25) = 4.77, p = .04; AD: F(1, 31) = 5.50, p = .03
Several conclusions stand out in this reexamination of findings. First, the impact of Model CI on children's grades was not dependent on Head Start classrooms. Second, the decline in grades associated with Model AD was more evident among "on schedule" children. This school system's competency-based grading system makes it difficult to assume that differences between models were the result of differential grading practices. Forty-three percent of the schools in this follow-up study contributed data for children from two or three different models. Significant correlations (p < .001) between report cards and scores on the standardized achievement test battery administered for the first time in third grade were found in all subject areas as well as between children's GPA and total test battery score (r = .67). Thus, report card grades are reasonable outcomes to evaluate as an indicator of children's academic abilities.

At this point, it would be useful to revisit the distinctions between models because Professor Lonigan's commentary does not accurately describe the different approaches. Model CI preschool teachers do not "hide academically relevant experiences until children are in kindergarten," as Professor Lonigan suggests. Like a parent who knows how to match a learning opportunity to a child's interests, age, and skill level, the CI preschool teacher individualizes learning for the children in his or her classroom. Nor is the CI classroom devoid of teacher-directed activities; CI teachers do initiate activities when they are needed to facilitate children's learning.

The preschool models contrasted in this study were empirically derived and reflect a continuum of experiences, not an either/or categorization. The labels placed on varying models are just shorthand descriptors for an array of beliefs and practices that differentiate these approaches (see Marcon, 1999, for a complete description). For example, when describing their practices regarding initiation of activities in a preschool classroom on a 10-point scale (1 = teacher initiated and 10 = child initiated), CI teachers had a median score of 8; AD teachers, a median of 3. When describing their goals for preschool children on a 10-point scale (1 = academic preparation and 10 = social and emotional growth), CI teachers had a median score of 8; AD teachers, 4. When describing the learning format of their preschool classroom on a 10-point scale (1 = group-oriented and 10 = individualized one-to-one), CI teachers had a median score of 8; AD teachers, 5. Perhaps the best way to summarize differences between approaches is to contrast CI and AD with Model M teachers, who attempt to combine approaches. While the CI teacher does initiate classroom activities when needed to facilitate children's learning, the Model M teacher is notably more engaged in leading groups of children in less-individualized activities for greater periods of time. Compared to the AD teacher, the Model M teacher allows children greater access to classroom materials, encourages peer interaction, and initiates fewer teacher-directed cognitive activities that are not integrated with other developmental domains. In all three approaches, preschool children are exposed to academically relevant experiences. The difference is how these experiences are introduced and the extent to which they are balanced with other developmental domains that also prepare children to succeed in school.

Does this follow-up study provide "the answer" to questions concerning the impact of different approaches to early childhood education? Of course not. That was just hype in a press release designed to draw attention to an ongoing debate within the field. Does the study help us to better understand what facilitates or possibly hinders children's progress through school? Yes, despite the difficulty of conducting field research with all the inherent confounds and problems we encounter in real-world settings, the reexamination of these data demonstrates difficulties that graduates of AD preschools encounter. What we still need to know is why this is the case.

References

Marcon, Rebecca. (1999). Differential impact of preschool models on development and early learning of inner-city children: A three cohort study. Developmental Psychology, 35(2), 358-375. EJ 582 451.

Maxwell, Scott E., & Delaney, Harold D. (1990). Designing experiments and analyzing data: A model comparison perspective. Belmont, CA: Wadsworth.

Author Information

Rebecca A. Marcon, Ph.D., is a developmental psychologist and a professor of psychology at the University of North Florida. She received her B.A. in psychology from California State University-Fullerton and her M.A. from the University of California, Los Angeles. After working as a school psychologist in the barrios of east Los Angeles, she left California to pursue her Ph.D. in developmental psychology at Louisiana State University. Since completing her Ph.D., she has been a faculty member in the Departments of Psychology at Clemson University, Davidson College, and the University of North Florida. She was also a senior research associate in the District of Columbia Public Schools where she initiated an ongoing longitudinal study of early childhood educational practices. Her research interests include social and language development, early intervention, and public policy. She continues to serve young children and families in the District of Columbia Public Schools as a researcher and consultant. Dr. Marcon also is actively involved with Head Start programs serving young children in northeast Florida. She is a member of the Early Childhood Research Quarterly Editorial Board and has served as a Research in Review Editor for Young Children.

Rebecca A. Marcon, Ph.D.
Department of Psychology
University of North Florida
4567 St. Johns Bluff Road, South
Jacksonville, FL 32224-2673
Office Bldg. 39-4072
Telephone: 904-620-2807
Fax: 904-620-3814
Email: rmarcon@unf.edu