Date of Award

1995

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Educational Leadership, Research and Counseling

First Advisor

Eugene Kennedy

Abstract

As performance assessments grow in popularity, it becomes increasingly important to investigate the effect of such assessments on various population subgroups. The purpose of this study was to investigate the relative empirical power of three popular statistical procedures (an extension of the generalized Mantel-Haenszel procedure, logistic discriminant function analysis, and a combined t-test procedure) in identifying polytomously scored items that function differentially for two subgroups of examinees. In this Monte Carlo study, computer simulations were conducted to examine the behavior of these procedures in identifying items exhibiting varying degrees of differential item functioning (DIF). Each statistic was converted to a probability value to examine the number of times the method rejected an item at the .05 level. The results, based on twenty-four simulated conditions, each replicated 50 times, indicate a preference for the logistic discriminant function analysis (LDFA) procedure for identifying DIF in polytomously scored items. The number of DIF items contaminating the matching variable had a significant effect on DIF identification in performance assessment, and this effect was stronger for detecting uniform DIF than for detecting nonuniform DIF. Based on the findings of the study, the following conclusions were drawn: (1) For DIF analysis in performance assessments, LDFA can be recommended to test constructors and practitioners as the preferred method. (2) Using LDFA to identify DIF in performance assessments will improve the appropriateness of test use for different subgroups. (3) Because the proportion of DIF items in the matching variable affects DIF identification, judgmental analysis to screen potentially biased items is recommended before conducting a statistical DIF analysis.
Finally, these statistics should be interpreted with caution. Although DIF analysis is essential to ensuring appropriate test use for the subgroups affected by testing, it is only one component of the broader research required to establish the validity and fairness of a performance assessment.
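The LDFA procedure favored above is typically carried out as a pair of nested logistic-regression comparisons: group membership is modeled from the matching variable alone, then with the item score added (uniform DIF), then with a score-by-matching interaction added (nonuniform DIF). The following is a minimal illustrative sketch under those assumptions, not the dissertation's actual code; the data-generating model, function names, and Newton-Raphson fitting routine are all hypothetical, and significance is judged against the chi-square .05 critical value with one degree of freedom.

```python
import numpy as np

def fit_logistic(X, y, iters=50):
    """Fit a logistic regression by Newton-Raphson; return the max log-likelihood."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                      # observation weights
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def ldfa_dif(total, item, group):
    """Likelihood-ratio screens: group ~ total, + item (uniform), + item*total (nonuniform)."""
    ll0 = fit_logistic(np.column_stack([total]), group)
    ll1 = fit_logistic(np.column_stack([total, item]), group)
    ll2 = fit_logistic(np.column_stack([total, item, total * item]), group)
    CRIT = 3.841  # chi-square critical value, df = 1, alpha = .05
    return {"uniform": bool(2 * (ll1 - ll0) > CRIT),
            "nonuniform": bool(2 * (ll2 - ll1) > CRIT)}

# Hypothetical simulated data: a 0-4 polytomous item with uniform DIF,
# i.e. the focal group scores lower at equal ability.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                  # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)                # latent ability
total = theta + rng.normal(0.0, 0.3, n)        # matching variable (ability proxy)
item = np.clip(np.round(2 + theta - 0.8 * group + rng.normal(0.0, 0.5, n)), 0, 4)
print(ldfa_dif(total, item, group))
```

With this magnitude of simulated uniform DIF the first likelihood-ratio test flags the item, while the interaction test addresses the nonuniform case the abstract notes is harder to detect.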

Pages

142
