Fourth, the researcher should indicate whether the coders selected for the study are treated as random or fixed effects. If the coders are randomly selected from a larger population of coders and the researcher wishes to generalize the reliability estimates to that population, a random effects model should be used. These models are called random because both subjects and coders are treated as randomly sampled. This would be appropriate, for example, in a study evaluating the extent to which randomly selected psychologists give similar intelligence assessments to a group of subjects, with the intention of generalizing the results to a larger population of psychologists. If the researcher does not intend to generalize the coder ratings to a larger population of coders, or if the coders were not randomly selected, a mixed effects model may be used instead. These models are called mixed because subjects are treated as random but coders are treated as fixed. Note, however, that the ICC estimates are identical for random and mixed models; the distinction between random and mixed matters for interpreting the generalizability of the results, not for the computation itself (McGraw & Wong, 1996). Inter-rater and intra-rater reliability are also affected by the subtlety of the discriminations that data collectors must make. If a variable has only two possible states and the conditions are highly differentiated, reliability is likely to be high. For example, in a study on the survival of sepsis patients, the outcome variable is simply whether each patient survives or does not survive.
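As a rough illustration of how little room for disagreement such a clearly differentiated, two-state variable leaves, a minimal R sketch (using the irr package and made-up survival codes from two hypothetical data collectors) computes the raw percentage agreement:

    library(irr)
    # Hypothetical data: 1 = survives, 0 = does not survive, recorded by two data collectors
    collector1 <- c(1, 0, 1, 1, 0, 1, 1, 0, 1, 1)
    collector2 <- c(1, 0, 1, 1, 0, 1, 1, 0, 1, 0)
    ratings <- cbind(collector1, collector2)
    agree(ratings)  # percentage of subjects on which the two collectors give the same code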
Significant reliability problems are unlikely in the collection of such data. On the other hand, reliability is much harder to achieve when data collectors must make more subtle discriminations, such as rating the intensity of redness around a wound. In such cases, the researcher is responsible for carefully training the data collectors and for examining the extent to which they agree in their ratings of the variables of interest. If the number of categories used is small (e.g., 2 or 3), the probability that two raters agree by pure chance increases considerably. This is because both raters must confine themselves to the limited number of options available, which inflates the overall agreement rate but not necessarily their propensity for “intrinsic” agreement (an agreement is “intrinsic” if it is not due to chance). Possible reasons for a low IRR should be discussed. For example, IRR may be low because of restricted range, poor psychometric properties of a scale, poorly trained coders, difficulty in observing or quantifying the construct of interest, or other reasons. Decisions to drop or retain variables with low IRR in the analyses should be discussed, and alternative models could be proposed for when such variables are dropped.
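To illustrate the chance-agreement issue, a brief sketch (again using the irr package, with invented two-category ratings from two raters) contrasts raw percent agreement with Cohen's kappa, which corrects for the agreement expected by chance:

    library(irr)
    # Hypothetical ratings on a two-category (1/0) variable from two raters
    rater1 <- c(1, 1, 0, 1, 1, 0, 1, 1, 1, 0)
    rater2 <- c(1, 0, 0, 1, 1, 1, 1, 1, 1, 1)
    ratings <- cbind(rater1, rater2)
    agree(ratings)   # raw percent agreement, inflated by chance when there are only two categories
    kappa2(ratings)  # Cohen's kappa: agreement corrected for chance

With these made-up values the raters agree on 7 of 10 cases (70%), yet kappa is only about .21, because most of that agreement would be expected by chance given the skewed marginals of a two-category variable.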
SPSS and the R irr package require users to specify a one-way or two-way model, an absolute agreement or consistency type, and single or average units. The design of the hypothetical study informs the correct choice of ICC variant. Note that SPSS, but not the R irr package, allows the user to specify random or mixed effects; the computations and results are identical for random and mixed effects. In this hypothetical study, all subjects were rated by all coders, so the researcher should use a two-way ICC model, because the design is fully crossed, and an average-measures ICC unit, because the researcher is interested in the reliability of the mean ratings provided by all coders. The researcher is interested in assessing the degree of correspondence between the coders' ratings, such that higher ratings from one coder correspond to higher ratings from another coder, but not in the extent to which the coders agree on the absolute values of their ratings, which justifies a consistency type of ICC. Finally, because the coders were not randomly selected and the researcher is interested only in how well these particular coders agree in their ratings in the current study, a mixed effects model is appropriate, although, as noted above, this choice does not affect the ICC estimate.
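A minimal sketch of the corresponding call in the R irr package (with invented ratings for three coders who each rated every subject) might look as follows; the model, type, and unit arguments reflect the choices just described:

    library(irr)
    # Hypothetical fully crossed design: each row is a subject, each column a coder
    ratings <- data.frame(
      coder1 = c(6, 4, 7, 5, 8, 3),
      coder2 = c(7, 5, 8, 5, 9, 4),
      coder3 = c(5, 4, 6, 4, 8, 3)
    )
    # Two-way model (fully crossed design), consistency type, average-measures unit
    icc(ratings, model = "twoway", type = "consistency", unit = "average")

Note that icc() has no argument for random versus mixed effects; as discussed above, that distinction affects only the interpretation of the results, not the estimate itself.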