Explain what is meant by the term "reliability" as it applies to the use of instruments in educational research

October 26, 2019

All research is conducted using scientific tests and measures, which yield observations and data. For those data to be of any use, however, the instruments must possess certain properties, such as reliability and validity, that help ensure unbiased, accurate, and authentic results. Reliability, in the test-retest sense, is the correlation between the scores from two administrations of the same instrument: if results are consistent over time, the scores should be similar, and the practical difficulty is deciding how long to wait between the two administrations. This post explores these properties and explains them with the help of examples.


Validity and reliability are two important factors to consider when developing and testing any instrument (e.g., content assessment test, questionnaire) for use in a study. Attention to these considerations helps to ensure the quality of your measurement and of the data collected for your study.
Understanding and Testing Validity
Validity refers to the degree to which an instrument accurately measures what it intends to measure. Three common types of validity for researchers and evaluators to consider are content, construct, and criterion validity.
Content validity indicates the extent to which items adequately measure or represent the content of the property or trait that the researcher wishes to measure. Review by subject matter experts in the area or field you are studying is often a good first step in instrument development for assessing content validity.
Construct validity indicates the extent to which a measurement method accurately represents a construct (e.g., a latent variable or phenomenon that cannot be measured directly, such as a person’s attitude or belief) and produces an observation distinct from that produced by a measure of another construct. Common methods for assessing construct validity include, but are not limited to, factor analysis, correlation tests, and item response theory models (including the Rasch model).
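As a rough illustration of one of these approaches, the sketch below runs a one-factor exploratory factor analysis in Python with scikit-learn. The simulated response matrix, the five-item scale, and the single-factor assumption are all hypothetical and only serve to show the idea.

```python
# Minimal sketch (hypothetical data): do five items load on one latent construct?
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent_attitude = rng.normal(size=(100, 1))                          # unobserved construct
responses = latent_attitude + rng.normal(scale=0.5, size=(100, 5))   # 5 observed items

fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(responses)

# High loadings across all items suggest they tap the same construct.
print("Item loadings on the single factor:")
print(np.round(fa.components_.T, 2))
```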
Criterion-related validity indicates the extent to which the instrument’s scores correlate with an external criterion (i.e., usually another measurement from a different instrument) either at present (concurrent validity) or in the future (predictive validity). A common measurement of this type of validity is the correlation coefficient between the two measures.
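As a simple illustration, that correlation coefficient can be computed directly; the instrument and criterion scores below are hypothetical.

```python
# Minimal sketch (hypothetical scores): criterion-related validity as the
# correlation between an instrument's scores and an external criterion measure.
from scipy.stats import pearsonr

instrument_scores = [72, 85, 64, 90, 78, 55, 88, 69, 74, 81]  # new instrument
criterion_scores  = [70, 88, 60, 94, 75, 58, 85, 66, 71, 84]  # established measure

r, p_value = pearsonr(instrument_scores, criterion_scores)
print(f"Criterion-related validity coefficient: r = {r:.2f} (p = {p_value:.3f})")
```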

Often, when developing, modifying, and interpreting the validity of a given instrument, researchers and evaluators test for evidence of several different forms of validity collectively, rather than viewing or testing each type individually (e.g., see Samuel Messick’s work on validity).
Understanding and Testing Reliability
Reliability refers to the degree to which an instrument yields consistent results. Common measures of reliability include internal consistency, test-retest, and inter-rater reliability. Internal consistency reliability looks at how consistent the scores of individual items on an instrument are with the scores of the other items in a set, or subscale, which typically consists of several items intended to measure a single construct. Cronbach’s alpha is one of the most common methods for checking internal consistency reliability. Group variability, score reliability, the number of items, sample size, and the difficulty level of the instrument can also affect the Cronbach’s alpha value.
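A minimal sketch of the calculation is shown below, using the standard formula for Cronbach’s alpha and a small hypothetical item-score matrix (rows are respondents, columns are items).

```python
# Minimal sketch (hypothetical data): Cronbach's alpha for internal consistency.
import numpy as np

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items
    item_variances = item_scores.var(axis=0, ddof=1)  # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents answering a four-item subscale on a 1-5 scale
scores = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```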

Test-retest reliability measures the correlation between scores from one administration of an instrument and another, usually separated by an interval of two to three weeks. Unlike a pre-post test, no treatment occurs between the first and second administrations when assessing test-retest reliability. A similar type of reliability, called alternate forms, involves using slightly different forms or versions of an instrument to see whether the different versions yield consistent results. Inter-rater reliability checks the degree of agreement among raters (i.e., those completing items on an instrument). More than one rater is commonly involved when, for example, several people conduct classroom observations using an observation protocol, or when open-ended tests are scored with a rubric or other standard protocol. Kappa statistics, correlation coefficients, and the intra-class correlation (ICC) coefficient are some of the commonly reported measures of inter-rater reliability.
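The sketch below illustrates two of these checks with hypothetical data: a Pearson correlation for test-retest reliability and Cohen’s kappa for agreement between two raters.

```python
# Minimal sketch (hypothetical data): test-retest and inter-rater reliability.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Same ten students tested twice, about two weeks apart, with no treatment in between
first_administration  = [78, 65, 90, 72, 84, 58, 69, 95, 74, 81]
second_administration = [75, 68, 88, 74, 80, 60, 72, 93, 70, 83]
r, _ = pearsonr(first_administration, second_administration)
print(f"Test-retest reliability: r = {r:.2f}")

# Two raters scoring the same ten open-ended responses on a 3-level rubric
rater_a = [2, 1, 3, 2, 2, 1, 3, 3, 2, 1]
rater_b = [2, 1, 3, 2, 1, 1, 3, 2, 2, 1]
print(f"Inter-rater agreement (Cohen's kappa): {cohen_kappa_score(rater_a, rater_b):.2f}")
```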
