
How Did Matt And Ian Agree That Cooperative Behaviour Occurred


To what extent did Matt and Ian agree that cooperative behaviour occurred during Observation 1? And what sort of reliability is being assessed here? [2]

To answer this question, calculate and write down the point-by-point agreement ratio using the following formula: agreement (%) = (number of agreements × 100) / (number of agreements + number of disagreements). An agreement occurs when an X appears in the corresponding interval for both observers. For example, in interval 1 of Observation 1 there is an X for Ian but none for Matt, so that is a disagreement. At interval 2, however, both Ian and Matt have an X, so that is an agreement. When neither of the two corresponding boxes has an X, it counts as neither an agreement nor a disagreement, so it is ignored.

Calculations: no X = ignored; one X = disagreement; two Xs = agreement. Ignored intervals: N/A; disagreements: 8; agreements: 2. Inter-observer agreement = (2 × 100) / (8 + 2) = 20%.
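A minimal sketch of this calculation in Python, assuming each observer's record is represented as a list of booleans (True where that observer marked an X in the interval). The interval values below are illustrative, chosen only to reproduce the counts above (2 agreements, 8 disagreements), not taken from the actual data sheet:

```python
def point_by_point_agreement(obs_a, obs_b):
    """Point-by-point agreement percentage for two interval records.

    obs_a and obs_b are lists of booleans, True where that observer
    marked an X in the interval. Intervals where neither observer
    marked an X are ignored (neither agreement nor disagreement).
    """
    agreements = sum(1 for a, b in zip(obs_a, obs_b) if a and b)
    disagreements = sum(1 for a, b in zip(obs_a, obs_b) if a != b)
    return agreements * 100 / (agreements + disagreements)


# Illustrative interval records (not the actual data sheet), chosen to
# reproduce the counts above: 2 agreements, 8 disagreements, 1 ignored.
ian  = [True, True, True, True, True, True, False, False, True, True, False]
matt = [False, True, False, False, False, False, True, True, True, False, False]

print(point_by_point_agreement(ian, matt))  # 20.0
```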

This calculates to an inter-observer agreement of 20%, which shows little agreement between the observers. The reliability being assessed with this calculation is the study's inter-rater reliability, as it measures the degree of agreement between the two observers. The inter-rater reliability for this study is quite poor.

How well did Ian and Matt measure what they thought they were measuring? Briefly explain your answer. [2]

With only a 20% inter-observer agreement, it is clear that there are discrepancies in their measure. The two researchers evidently have different views of what cooperation among children looks like, which may also stem from the vagueness of the definition of cooperation they used. Given this division of interpretation, the two researchers are likely not measuring what they think they are measuring, so there is a problem with the face validity of the measure; and because a measure cannot be valid unless it is also reliable, the low agreement undermines the measure's validity as well.

How consistent was Ian across the two observation periods? Use the same formula as above and write down your result. In addition, tell us what sort of reliability is being assessed here.
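The point_by_point_agreement helper sketched above could, in principle, be reused here by comparing Ian's records from the two observation periods. The interval values below are again illustrative, chosen only to reproduce the 62.5% figure discussed in the answer that follows (5 agreements, 3 disagreements), not taken from the actual data sheet:

```python
# Reuses point_by_point_agreement() from the earlier sketch.
# Illustrative records for Ian's two observation periods, chosen to give
# 5 agreements and 3 disagreements: 5 * 100 / (5 + 3) = 62.5.
ian_obs1 = [True, True, True, False, True, True, True, False, False]
ian_obs2 = [True, True, False, True, True, True, True, False, True]

print(point_by_point_agreement(ian_obs1, ian_obs2))  # 62.5
```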

However, because this was a naturalistic observation, many other variables could have affected his observations, since the researcher has minimal control over how the children act. One possibility is that the children habituated to the researcher's presence and behaved as they normally would; another is that they simply got bored of each other. It would therefore be quite difficult to properly assess and determine the consistency of the measure.

How do the percentage agreements that you calculated for questions 1 and 3, above, compare? What would you conclude? How would you explain these results? [2]

The percentage agreements for questions 1 (20%) and 3 (62.5%) differ by 42.5 percentage points. This suggests that the scale may be defective or that the observers were not trained well enough to perform such observations. In this study, the issue most likely originates with the definition, 'How much a child cooperates with another child': cooperation may need to be defined more precisely, as it is a broad behaviour and the two observers clearly have different views of what cooperative actions look like in children. The reliability being assessed here is the internal consistency of the measure.
