Looking for an acronym? Please see the acronyms listing.


Rationale.

A statement or argument that provides a justification for a selection, decision, or recommendation.

Relevance.

A principle of evidence quality that implies validity, but goes beyond it by also calling for clear explanation of what any information put forward is supposed to be evidence of and why it was chosen. This principle also implies that there is a clear and explicable link between what a particular measure is established to gauge and the substantive content of the Standard under which it is listed. 

Reliability.

The degree to which test scores for a group of test takers are consistent over repeated applications of a measurement procedure and hence are inferred to be dependable and repeatable for an individual test taker. A measure is said to have a high reliability if it produces consistent results under consistent conditions.
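One common way to estimate this kind of consistency is the correlation between scores from two administrations of the same measure (test-retest reliability). The sketch below assumes that approach; the score lists are invented illustration data, not CAEP figures.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for the same five test takers on two occasions
first_administration = [78, 85, 62, 90, 71]
second_administration = [80, 83, 65, 92, 70]

# A value near 1.0 suggests the measure produces consistent results
# under consistent conditions
reliability = pearson_r(first_administration, second_administration)
```

A test-retest correlation is only one of several reliability estimates (others include internal-consistency and inter-rater approaches), but it illustrates the underlying idea of repeatability.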

Reliable and Valid Evidence.

The credibility of the results from assessment and evaluation measures.

Reliable and Valid Model.

For CAEP purposes (p. 17 of the Commission report), a case study that is presented to meet one or more of CAEP’s standards in which key outcomes and processes are gauged, changes and supporting judgments are tracked, and the changes presented are actually improvements. To be reliable and valid as a model, the case study should have followed CAEP’s guidelines in identifying a worthwhile topic to study, generated ideas for change, defined the measurements, tested solutions, transformed promising ideas into sustainable solutions that achieve effectiveness reliably at scale, and shared knowledge.

Representativeness.

The extent to which a measure or result is typical of an underlying situation or condition, not an isolated case. If statistics are presented based on a sample, evidence of the extent to which the sample is representative of the overall population ought to be provided, such as the relative characteristics of the sample and the parent population. If the evidence presented is qualitative (for example, case studies or narratives), multiple instances should be given or additional data shown to indicate the typicality of the chosen examples. CAEP holds that sampling is generally useful and desirable in generating measures efficiently, but in both sampling and reporting, care must be taken to ensure that what is claimed is typical, and the evidence of representativeness must be subject to audit by a third party.
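One simple check of representativeness is to compare the make-up of a sample against the parent population on a key characteristic. The sketch below assumes program-area shares as that characteristic and uses invented figures; the 5-percentage-point tolerance is an arbitrary illustration, not a CAEP threshold.

```python
# Shares of candidates by program area (hypothetical figures)
population = {"elementary": 0.55, "secondary": 0.45}
sample = {"elementary": 0.58, "secondary": 0.42}

# Flag any characteristic whose sample share drifts more than
# 5 percentage points from the population share
flagged = {area for area in population
           if abs(sample[area] - population[area]) > 0.05}

# An empty set here means the sample looks roughly typical of the
# population on this characteristic; a third party could audit the
# comparison from the same two tables
```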

Requirements.

CAEP’s expectations other than those contained in the standards, including criteria for eligibility or candidacy, paying annual fees, submitting annual reports, publishing educator candidate performance data on websites, etc.

Retention Rates.

Comparison of the number of candidates who entered a program against the number who completed the program and were recommended for certification or licensure. Retention rates may also be collected for the number of new teachers who begin work in schools and who are still working in specified subsequent years.
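The comparison described above is a simple ratio. The counts in this sketch are invented for illustration only.

```python
# Program retention: entrants versus completers recommended for licensure
entered = 120
completed = 96
program_retention_rate = completed / entered  # 0.80, i.e. 80%

# The same arithmetic applies to new teachers still working in schools
# in a specified subsequent year
began_teaching = 90
still_teaching_year_3 = 72
employment_retention_rate = still_teaching_year_3 / began_teaching  # 0.80
```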

Review Panel.

A 3- to 4-person group selected from an Accreditation Council that examines the self-study, site visitors’ report, and other accreditation documents related to an educator preparation provider’s (EPP) case for accreditation. The Review Panel makes a recommendation to the Joint Review Team of the Accreditation Council on the standards that are met and confirms or revises areas for improvement and/or stipulations.

Revocation of Accreditation.

The continuing accreditation decision made by the Accreditation Council to revoke an accredited status when it has determined that the educator preparation provider (EPP) no longer meets two or more CAEP standards.

Rigor.

In education, refers both to a challenging curriculum and to the consistency or stringency with which high standards for learning and performance are upheld (adapted from the Western Association of Schools and Colleges glossary).

Rubric.

A tool for scoring candidate work or performances, typically in the form of a table or matrix, with criteria that describe the dimensions of the outcomes down the left-hand vertical axis, and levels of performance across the horizontal axis. The work or performance may be given an overall score (holistic scoring) or criteria may be scored individually (analytic scoring). Rubrics are also used for communicating expectations (adapted from the Western Association of Schools and Colledges glossary).
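The analytic/holistic distinction can be sketched with a minimal data structure. The criterion names, the 1-4 level scale, and the unweighted average are all hypothetical choices for illustration; real rubrics may use different levels and may weight criteria.

```python
# Analytic scoring: one performance level recorded per criterion
# (levels here run 1 = beginning to 4 = exemplary, an assumed scale)
rubric_scores = {
    "content knowledge": 3,
    "organization": 4,
    "use of evidence": 2,
}

# Analytic scoring reports each criterion separately
analytic = rubric_scores

# Holistic scoring collapses the criteria into a single overall score;
# a simple unweighted average is used here as an illustration
holistic = sum(rubric_scores.values()) / len(rubric_scores)  # 3.0
```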