Types of Reliability

Test-Retest Reliability

Test-retest reliability estimates are obtained by repeating the measurement with the same instrument under conditions that are as nearly equivalent as possible. The results of the two administrations are then compared and the degree of correspondence is determined. The greater the differences, the lower the reliability.
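
As a minimal sketch of this comparison, the degree of correspondence can be summarized with a correlation coefficient between the two administrations. The example below uses NumPy and small illustrative score lists; the data and variable names are assumptions, not from the source.

```python
# Sketch of a test-retest reliability check, assuming two hypothetical
# sets of scores from the same respondents at two points in time.
import numpy as np

time_1 = np.array([4, 5, 3, 4, 2, 5, 3, 4])  # first administration (illustrative data)
time_2 = np.array([4, 4, 3, 5, 2, 5, 2, 4])  # second administration (illustrative data)

# Test-retest reliability summarized as the Pearson correlation between
# administrations: large differences between the two waves pull this value down.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```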

Alternative Form Reliability

Alternative-form reliability estimates are obtained by applying two equivalent forms of the measuring instrument to the same subjects. As in test-retest reliability, the results of the two instruments are compared on an item-by-item basis and the degree of similarity is determined. The basic logic is the same as in the test-retest approach.

Two primary problems are associated with this approach. The first is the extra time, expense, and trouble involved in obtaining two equivalent measures. The second, and more important, is the difficulty of constructing two truly equivalent forms. Thus a low degree of response similarity may reflect either an unreliable instrument or non-equivalent forms.

Internal Comparison Reliability

Internal comparison reliability is measured by the intercorrelation among the scores of the items on a multiple-item index. All items on the index must be designed to measure precisely the same thing. For example, measures of store image generally involve assessing a number of specific dimensions of the store, such as price level, merchandise, service, and location. Because these dimensions are somewhat independent, an internal comparison of reliability is not appropriate across dimensions. However, it can be used within each dimension if several items are used to measure that dimension.
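
One common way to summarize the intercorrelation among items within a single dimension is Cronbach's alpha; the source describes item intercorrelation generally, so the choice of this particular coefficient, like the ratings data below, is an assumption for illustration only.

```python
# Sketch of an internal-comparison check for one store-image dimension
# (e.g. "service"), using illustrative ratings on three items that are
# meant to measure the same thing.
import numpy as np

# Rows = respondents, columns = the three items for the dimension.
items = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [4, 4, 5],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed index
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha for the dimension: {alpha:.2f}")
```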

Scorer Reliability

Marketing researchers frequently rely on judgment to classify a consumer's response. This occurs, for example, when projective techniques, focus groups, observations, or open-ended questions are used. In these situations, the judges or scorers may be unreliable, rather than the instrument or the respondent. To estimate the level of scorer reliability, each scorer should have some of the items he or she scores judged independently by another scorer. The correlation between the various judges' scores is a measure of scorer reliability.
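
As a minimal sketch of that check, the scores two judges independently assign to the same set of responses can be correlated. The judge labels and numbers below are illustrative assumptions.

```python
# Sketch of a scorer-reliability check, assuming two judges have
# independently scored the same open-ended responses on a common scale.
import numpy as np

judge_a = np.array([3, 5, 2, 4, 4, 1, 5, 3])  # scores assigned by judge A (illustrative)
judge_b = np.array([3, 4, 2, 4, 5, 1, 5, 2])  # scores assigned by judge B (illustrative)

# Scorer reliability summarized as the correlation between the two judges' scores.
r = np.corrcoef(judge_a, judge_b)[0, 1]
print(f"Scorer reliability (Pearson r between judges): {r:.2f}")
```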
