Computerized adaptive testing (CAT) has been widely used in educational and psychological assessment because it can achieve efficient and precise ability estimation with fewer items than traditional paper-and-pencil tests. One important issue in CAT is the item selection algorithm. Test specifications impose a series of constraints on the items included in a test (Swanson & Stocking). Constructing a CAT assessment therefore usually requires satisfying a large number of statistical constraints (e.g., target item and test information) and non-statistical constraints (e.g., content specifications and key balancing). Although test assembly algorithms may select items sequentially or simultaneously, item selection in CAT is sequential by nature (van der Linden), which makes it challenging to meet the various constraints simultaneously during test construction. To improve measurement precision and test validity, the priority index (PI; Cheng & Chang) approach was proposed to monitor constraint fulfillment during item selection in CAT. Because the PI approach is easy to implement and efficient to compute, it is important and useful for operational CATs. This talk will first review the development of the PI approach; related studies and findings will also be presented.
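To make the idea concrete, the sketch below implements a common single-phase form of the priority index, in which each item's Fisher information is down-weighted by how close each relevant constraint is to its upper bound. This is an illustrative sketch, not the authors' operational code; the function name, argument names, and the use of simple upper-bound constraints are assumptions for exposition.

```python
import numpy as np

def priority_index(info, weights, usage, bounds, relevance):
    """Constraint-weighted priority index (illustrative sketch).

    info      : I_i(theta), Fisher information of each candidate item (n,)
    weights   : w_k, weight assigned to each constraint (K,)
    usage     : x_k, items already selected that count toward constraint k (K,)
    bounds    : X_k, upper bound on items allowed for constraint k (K,)
    relevance : c[i, k] = 1 if item i is relevant to constraint k, else 0 (n, K)
    """
    # Remaining-quota factor f_k = (X_k - x_k) / X_k for each constraint;
    # f_k shrinks toward 0 as constraint k approaches its upper bound.
    f = (bounds - usage) / bounds
    # PI_i = I_i(theta) * prod_k (w_k * f_k)^{c_ik}; an item touching a
    # saturated constraint (f_k = 0) gets priority 0 and is screened out.
    return info * np.prod((weights * f) ** relevance, axis=1)
```

At each step of the CAT, the item with the largest priority index among the remaining candidates would be administered, and the usage counts updated, so constraint management stays inside the ordinary sequential selection loop.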
Keywords: CAT, priority index, item selection, constraint-weighted, IRT.