Multimethod approach to measuring values in a school context: exploring the association between the Congruence-Discrepancy Index (CODI) and task commitment
Abstract
There are considerable differences among the value hierarchies revealed by different methods of measurement. The quantitative measure of such a difference can be referred to as the Congruence-Discrepancy Index (CODI). The more congruent the results of different methods are, the higher the CODI is. In the present study I compared value hierarchies obtained by the Schwartz Value Survey and an original instrument based on the constant-sum scale in two samples of adolescents (those in special schools for at-risk adolescents and those in ordinary secondary schools). The results show that the CODI for ordinary school students is significantly higher than that for adolescents recruited from special schools. A significant correlation between the CODI and school engagement was revealed for the ordinary school sample. The possibilities of using the CODI in value research are discussed.
Received: 28.12.2012
Accepted: 19.02.2013
Themes: Educational psychology; Psychological assessment
PDF: http://psychologyinrussia.com/volumes/pdf/2013_2/podolskiy_2013_44-54.Pdf
Pages: 44-54
DOI: 10.11621/pir.2013.0204
Keywords: values, adolescents, measurement, value theory, ratings, rankings
Since the turn of the century the value theory proposed by Schwartz (1992) has become a widespread and dominant approach to human values. It is recognized as a unifying theory that offers a comprehensive set of 10 types of values. The theory establishes nearly universal relationships among the motivations that underlie values and therefore provides an a priori model against which empirical data can be verified. Although the theory is supported by consistent results obtained largely from normative populations of educated adults (Schwartz, Melech, Lehmann, Burgess, & Harris, 2001), considerable deviations from the theoretical propositions have been revealed in educationally and culturally specific samples (Schwartz et al., 2001). Schwartz and colleagues (2001) attributed such discrepancies mostly to the method effect, that is, the extent to which an instrument by itself affects the obtained results. Findings from studies comparing the value orientations of delinquent and nondelinquent adolescents have also been inconsistent. Several studies showed that delinquents and nondelinquents share the same hierarchy of values (Romero, Sobral, Luengo, & Marzoa, 2001; Zieman & Benson, 1983), while others showed differences in value orientations between the two groups (Goff & Goddard, 1999). However, sample characteristics and method effects have been investigated separately in value studies; method effects have been studied extensively as a "ranking versus rating" issue.
One may propose that different methods of eliciting personal values lead to different value priorities or hierarchies (Hansson, 2001). Comparison studies using ranking and rating methods have produced contradictory results (Krosnick & Alwin, 1988; Maio & Olson, 1994; Rankin & Grube, 1980). In some of them discrepancies were found between the value hierarchies obtained by the different methods, while in others the two methods led to similar results. Discussions of the issue have revealed the strengths and weaknesses of both methods. Some studies assessed whether ranking and rating scales produce similar results and which of the two is preferable (Maio, Roese, Seligman, & Katz, 1996), while others pointed out alternatives (McCarty & Shrum, 2000). These discussions resulted in the argument that each method may represent personal value systems in a specific way (Ovadia, 2004).
Most studies in the values domain have been carried out with a single method for measuring values, which does not allow the abovementioned hypothesis to be tested. In order to test it, one has to investigate the differences between the methods that are responsible for the divergent results. Such differences may concern the form of the instrument, the scale, the content, or the activity of the subject while performing the task.
In the values domain, a number of studies have compared the results obtained with different methods of measurement (Lindeman & Verkasalo, 2005; Schwartz et al., 2001). However, such studies have focused more on the convergence between the results obtained by different scales than on the differences between them.
In triangulation studies the researcher adopts competing methodologies (for example, quantitative and qualitative approaches) to deal with the same phenomena (Jick, 1979). Such studies may address the common or overlapping variance of the methods as well as the unique variance, which is often neglected. Triangulation studies of unique variance take a comprehensive approach to the construct, evaluating it from different perspectives. The researcher aims to explain the discrepancies between the results of different methods, which may shed light on the nature of the object under study. Such studies have been done in the value domain using a multitrait-multimethod matrix (Schwartz et al., 2001). In the triangulation approach the discrepancy between the results of different methods of measuring values can be examined as a separate independent variable. I suggest that the agreement or discrepancy between the results of several scales measuring the same construct (values) be referred to as the Congruence-Discrepancy Index (CODI).
Only a few studies have focused on the nature of the differences between methods for measuring values (Krosnick & Alwin, 1988). Obtained results have emphasized motivation and the investment of cognitive effort in performing the task as factors affecting the results gained by a particular method. In other words, the data stress the importance of accounting for the testing situation, or testing context.
Testing in the school context
Adolescent research is usually carried out in the school context. Researchers and educators often presuppose that adolescents do their best (make every effort) to perform the tasks presented to them in testing situations. However, task engagement may vary depending on, for example, personal goals in the testing situation, the purpose of the task, and the perceived consequences (Pintrich & Schrauben, 1992). Students normally invest more effort in high-stakes tests (where the respondent considers the results or the consequences important) than in low-stakes tests. The effort put into a task depends on whether a student is committed to performing school tasks in general. Engagement in school tasks relies on commitment to the school as an institution or authority (Fredricks, Blumenfeld, & Paris, 2004).
School commitment and engagement are considered to include components of behavior (learning effort) and affect (interest in and attitudes toward learning) (Finn, 1989). Students who are engaged in school life have intrinsic motivation and foster self-direction values (Marks, 2000; Shernoff, Csikszentmihalyi, Schneider, & Shernoff, 2003). In regard to gender differences in school engagement, girls are more likely to invest effort in studying than boys (Shernoff et al., 2003). Thus, school commitment may be measured using teachers’ assessments of the motivation of students to achieve and learn.
Data from studies of delinquent adolescents are also relevant to the issue of school engagement. Some studies show that delinquents and nondelinquents share the same hierarchy of values (Romero et al., 2001; Zieman & Benson, 1983), while others show differences in the value orientations of the two groups (Goff & Goddard, 1999). Such inconsistent results may reflect method differences and relate to the testing context: whether adolescents invest effort in the task. In value studies task effort or engagement is rarely measured or identified. For example, if many items are missed in the rating task, we may conclude that the student did not invest enough effort in the task. But not all cases are so easy to identify.
Because rule-following can also be a measure of task engagement, a constant-sum (CS) task can be adopted to assess the value priorities of adolescents. As the CS task includes the rule that exactly 30 points must be distributed among 20 values, it is possible to consider mistakes as non-rule-following. The extent to which adolescents follow the rule relates to their motivation to perform the task. According to the literature (Jenkins, 1997), there are more non-rule-followers in special schools for at-risk adolescents than in ordinary schools.
What is the Congruence-Discrepancy Index (CODI)?
The CODI indicates the convergence or divergence of the results of different scales used to measure the same construct. In the domain of values, each measurement method is considered to reveal the value hierarchy as a "picture," or representation, of a respondent's internal, personal value system. A personal hierarchy can be revealed by any of the methods used in research (ranking or rating). The rank-order correlation coefficient between the two hierarchies obtained by different scales is the CODI. The higher the CODI, the higher the agreement between the results of the different methods; the lower the CODI, the greater their discrepancy.
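As an illustration of how the index could be computed, the following sketch (in Python, with hypothetical data and variable names) derives a respondent-level CODI as the Spearman rank-order correlation between the hierarchies produced by two methods.

```python
# Minimal sketch of computing the CODI for one respondent (hypothetical data).
# Each of the 20 value items is assumed to have one importance score per method.
from scipy.stats import spearmanr

# Importance scores for the same 20 value items, in the same order.
rating_scores = [7, 5, 6, 3, 8, 2, 4, 6, 5, 7, 1, 3, 6, 4, 2, 5, 7, 3, 4, 6]
cs_points     = [3, 1, 2, 0, 5, 0, 1, 2, 1, 4, 0, 0, 2, 1, 0, 1, 4, 0, 1, 2]

# The CODI is the rank-order correlation between the two hierarchies:
# a higher value indicates congruence, a lower value indicates discrepancy.
codi, _ = spearmanr(rating_scores, cs_points)
print(f"CODI = {codi:.2f}")
```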
Following Krosnick and Alwin (1988), one may hypothesize that discrepancies between the results of different methods can be attributed to (1) differences in the motivation of respondents to perform the task, (2) the low self-perception of respondents, or (3) their low ability to differentiate between values or the extent to which they have formed an internal value structure.
The following hypotheses were tested in the present studies:
Hypothesis 1: The CODI will be smaller (higher discrepancy) with low commitment to the school context (Study 1).
Hypothesis 2: The CODI, commitment to school, and task engagement will be related (Study 2).
Study 1
Sample
In order to test the discrepancy effect in two contexts that differ in commitment to school, two groups of students were recruited. The samples were intended to be different in motivation toward school tasks, self-perception, and value differentiation. The first group was recruited in a special Moscow school for delinquent adolescents. Most of the students had been put in that school because of antisocial behavior and had committed minor crimes. A group of adolescents (N = 25, boys = 17, mean age = 15.3) were asked to fill out both scales: CS and rating. The second group of adolescents was recruited in four ordinary Moscow secondary schools (N = 160, boys = 46%, mean age = 14.8). Students from two schools were asked to fill out a CS scale, and students from the two other schools filled out a rating scale.
Method
Constant-Sum (CS) Scale
The CS scale included 20 value items selected from the Schwartz Value Survey-57 (SVS-57) (Schwartz, 1992). The selection was made by a group of psychology students who were asked to choose values that are important for adolescents. In the CS scale, the respondent is asked to distribute 30 points among the 20 listed values.
The CS method is different in several ways from a rating scale (Table 1).
Rating Scale
The rating scale included the same 20 items as the CS scale. The respondents were asked to assess each value according to personal importance on a 9-point scale from –1 (opposed to my values) to 7 (supreme importance). For further analyses, the data were recoded into a scale from 0 to 8.
The selected 20 values represented 8 motivational types of values according to the theory (Schwartz, 1992). Because of that, I analyzed the data at the single-item level.
Table 1. Comparison of Rating and CS Scales
| Issue | Rating Scale | CS Scale |
|---|---|---|
| Task | To rate the listed values according to personal importance | To distribute points among the listed values according to personal importance |
| Approach | Direct evaluation of each value | Dual activity: value prioritization and a math task |
| Rules | To follow the scale limitations (does not provide the possibility for choice; does not have a measure of motivational involvement in task performance) | To distribute a particular number of points (provides room for choice and an opportunity to assess involvement in performing the task when the number of points is exceeded) |
| Participants' activity (observation and interview) | Single value evaluations | Quasi-systematic, pair-wise comparisons |
| Scale | Scale effects (end-piling, tendency to use middle points, etc.) | Participants construct the scale (different strategies are used: compromise and extreme) |
Results
In order to test whether there was a discrepancy between the results on the CS and rating scales, I used the correlated-vectors approach (Jensen, 1998). The first vector represents the mean scores on the CS scale, and the second vector represents the mean scores on the rating scale. The rank-order correlations of the four vectors (2 Methods × 2 Samples) are given in Table 2.
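A sketch of this correlated-vectors computation is given below in Python; the data layout, variable names, and synthetic scores are hypothetical and merely stand in for the actual item-level data.

```python
# Sketch of the correlated-vectors analysis on synthetic data (hypothetical layout).
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
value_items = [f"value_{i}" for i in range(1, 21)]  # the 20 value items

# One row per respondent: sample ("ordinary"/"special"), method ("rating"/"cs"),
# and a score for each of the 20 value items (random here, real data in the study).
rows = []
for sample in ("ordinary", "special"):
    for method in ("rating", "cs"):
        for _ in range(30):
            rows.append({"sample": sample, "method": method,
                         **dict(zip(value_items, rng.integers(0, 9, size=20)))})
df = pd.DataFrame(rows)

def mean_vector(data, sample, method):
    """Vector of mean item scores for one sample x method cell."""
    cell = data[(data["sample"] == sample) & (data["method"] == method)]
    return cell[value_items].mean()

# Rank-order correlation between two of the four vectors, e.g. the two methods
# within the ordinary school sample (cf. Table 2).
rho, p = spearmanr(mean_vector(df, "ordinary", "rating"),
                   mean_vector(df, "ordinary", "cs"))
print(f"rho = {rho:.2f}, p = {p:.3f}")
```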
Table 2. Correlation Coefficients Between the Vectors That Are Defined by the Mean Scores on the CS Scale and the Mean Scores on the Rating Scale (Special and Ordinary School Students)
| Vectors | 1 | 2 | 3 |
|---|---|---|---|
| 1. Rating scale (ordinary school sample) | | | |
| 2. Distribution scale (ordinary school sample) | .60** | | |
| 3. Rating scale (special school sample) | .85** | .38 | |
| 4. Distribution scale (special school sample) | -.28 | .35 | -.30 |

**p < .01.
The correlation coefficients showed correspondence between the value hierarchies of students from special schools and students from ordinary schools when measured by the rating scale. The value hierarchies of ordinary school students measured by the two methods also appeared to be similar. In contrast, the value hierarchies of special school students measured by the two different methods were opposite to each other (although the correlation was not significant because of the small number of value items). This result can be explained mainly by the discrepancy in the scores for "wealth": on the CS scale, students from special schools assigned greater importance to "wealth" than they did on the rating scale. No such differences between the methods were found for the ordinary school sample.
The correlated-vectors approach showed the discrepancies at the group level. I wondered whether it would be possible to replicate this finding at the individual level using the CODI as the discrepancy measure. Adolescents with a lower CODI (a larger discrepancy between the results of different methods) would be expected to resemble the special school students, while adolescents with a higher CODI (greater congruence between the results of different methods) would be considered more socially adapted and committed to school.
Study 2
Sample
Two types of schools participated in the study: ordinary secondary schools and special “evening” schools. Evening schools support students who have been expelled from ordinary schools because of behavior problems and low academic achievement. In each school one or two classes were randomly selected for the study. In one of the schools, all classes from grades 8 to 10 took part in the study.
The sample of adolescents recruited in ordinary schools consisted of 215 students in grades 8 to 10 (mean age = 16.2, SD = 1.1, girls = 54%), and in the special schools 99 students participated in the study (mean age = 16.0, SD = 1.0, girls = 45%).
Method
Rating Scale
The rating scale comprised 20 value items. Each item was followed by a short explanation in parentheses. The respondents were asked to rate each item on a 9-point scale from –1 (contrary to my values) to 7 (great importance). In subsequent data analysis the scale was transformed to run from 0 to 8. Each of the 10 motivational value types was represented by two value items. The items were selected on the basis of data on Russian adolescents obtained by Verkasalo, Tuomivaara, and Lindeman (1996). Only values that formed distinct regions in the value circle and proved to have invariant interpretations across cultures were included. All the scores were centered on the individual mean to control for response style.
Constant-Sum (CS) Scale
The CS scale included the same list of values as the rating instrument. The task for the respondent was to distribute 30 points among the presented values according to personal importance. Each item was followed by a short explanation in parentheses.
Schwartz Value Survey (SVS)
The Russian version of the SVS-57 (Schwartz, 1992) was used. The survey consists of 57 value items. Each item is followed by a short explanation in parentheses. The respondents were asked to rate each item on a 9-point scale from –1 (contrary to my values) to 7 (great importance). In subsequent data analysis the scale was transformed to run from 0 to 8. All scores were centered on the individual mean to control for response style (Verkasalo et al., 1996). Reliabilities of the scales were: power .69, achievement .64, hedonism .68, stimulation .49, self-direction .61, universalism .74, benevolence .74, tradition .63, conformity .67, and security .50.
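The recoding of the response scale and the centering on the individual mean could be implemented as in the following sketch (Python, with a hypothetical array of raw responses).

```python
# Sketch: recoding SVS-style ratings and centering on the individual mean
# (ipsatization) to control for response style. The raw matrix is hypothetical.
import numpy as np

# Rows = respondents, columns = 57 SVS items, scored on the original scale
# from -1 (contrary to my values) to 7 (great importance).
raw = np.array([
    [-1, 3, 7, 5, 0, 6, 2, 4, 3, 1] * 5 + [2, 3, 4, 5, 6, 7, 0],
    [0, 4, 6, 5, 1, 7, 3, 4, 2, 1] * 5 + [3, 3, 5, 5, 6, 6, 1],
])

recoded = raw + 1                                          # shift -1..7 to 0..8
centered = recoded - recoded.mean(axis=1, keepdims=True)   # individual-mean centering
print(centered.shape)   # (2 respondents, 57 items)
```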
Teachers’ Ratings
Teachers were asked to assess each student on four criteria using a 3-point scale (1 — low, 2 — medium, 3 — high). Such a scale was chosen for its simplicity and the fact that it was not time consuming, as the teachers had to rate every student in the class on four criteria. Each student was assessed by two teachers: a class teacher (who is responsible for a particular class in which the student is studying) and a subject teacher (who is responsible for teaching one of the subjects). The mean score for each student was used for subsequent analysis.
The four criteria were the following (the description of the criteria provided to the teachers is in parentheses):
- Learning abilities (to what extent the student demonstrates the ability to perform school tasks)
- Learning motivation (to what extent the student is motivated to study, wants to obtain new knowledge)
- Moral behavior (to what extent the student follows moral norms and behaves according to moral principles)
- Popularity in the class (to what extent the student is popular among classmates)
Results
I compared the results of the three scales (SVS, rating, CS) for the two samples (adolescents from special schools and adolescents from ordinary schools) using the correlated-vectors approach. Vectors were defined as the means for the 10 motivational types of values. For adolescents from ordinary schools the results were congruent across all the scales (the lowest correlation between two different methods was .79; the highest was .95). The results for the sample of adolescents from special schools were somewhat different (Table 3).
Table 3. Correlations of the Results from Three Different Scales for Adolescents from Special Schools
| | SVS | Rating | CS |
|---|---|---|---|
| SVS | — | | |
| Rating | .86** | — | |
| CS | .85** | .57* | — |

*p < .05. **p < .01.
The correlated-vectors results for the special school sample showed that although there was little difference between the SVS and the two other scales, there was a larger discrepancy between the rating scale and the CS scale. Whereas the previous results (see Table 2) showed a negative correlation between the rating and CS vectors, here the correlation was positive, although weaker than the correlations involving the SVS. The reason for this increase could be that in the second study students were recruited from a less severely delinquent sample than in the first study. In any case, the differences between the value priorities measured by the rating scale and the CS scale persisted for the adolescents from special schools and were close to zero for the adolescents from ordinary schools.
I wanted to test whether such a discrepancy could be replicated at the individual level for ordinary students with different levels of engagement with school. I used teachers' ratings to identify students with a higher or lower commitment to school and computed a commitment score for each student. The commitment score was the mean of the student's ratings on learning ability and learning motivation (as assessed by the teachers). For each student I calculated an individual CODI between each pair of the methods used (SVS, rating, CS). The correlations between the commitment score and the CODI are presented in Table 4.
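As an illustration, the commitment score and the individual CODI could be computed as in the sketch below; the data, sample size, and variable names are hypothetical and only illustrate the procedure, not the actual results.

```python
# Sketch: commitment score and individual CODI (hypothetical data).
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(1)
n_students, n_items = 50, 20

# Hypothetical item-level scores per student for two of the methods.
svs_scores = rng.integers(0, 9, size=(n_students, n_items))
cs_scores = rng.integers(0, 6, size=(n_students, n_items))

# Hypothetical teacher ratings (1-3) on learning ability and learning motivation.
ability = rng.integers(1, 4, size=n_students)
motivation = rng.integers(1, 4, size=n_students)
commitment = (ability + motivation) / 2          # commitment score per student

# Individual CODI between the SVS and CS hierarchies of each student.
codi = np.array([spearmanr(svs_scores[i], cs_scores[i])[0]
                 for i in range(n_students)])

# Association between commitment and congruence (cf. Table 4).
r, p = pearsonr(commitment, codi)
print(f"r = {r:.2f}, p = {p:.3f}")
```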
Table 4. Correlations Between Commitment Score and the CODI
| | CODI between SVS and Rating | CODI between SVS and CS | CODI between Rating and CS |
|---|---|---|---|
| Commitment score | .22* | .36** | .41** |

*p < .05. **p < .01.
The data showed that the larger the congruence between the measures the more committed the student was, as evaluated by teachers. The most sensitive to commitment was the CODI between the rating and CS scales.
I also tested whether there was a relationship between task engagement (measured as rule-following in the CS task: the number of points distributed above or below 30) and school commitment (as measured by the teachers). A significant correlation was found for boys (r = -.39, p = .044) but not for girls (r = .11, p = .390). The correlation showed that the more points a boy distributed above 30, the less committed he was according to teachers' ratings. The lack of a relationship between the commitment score and task engagement for the girls may be explained by the fact that girls are commonly more committed to school than boys, so there is less variation in the girls' subsample.
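The following sketch illustrates, on hypothetical data, how the task-engagement measure (deviation from the required 30 points) and its gender-specific association with the commitment score could be computed.

```python
# Sketch: rule-following in the CS task and its link to commitment, by gender
# (hypothetical data; the actual study data are not reproduced here).
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 80

df = pd.DataFrame({
    "gender": rng.choice(["boy", "girl"], size=n),
    # Total points actually distributed in the CS task (rule: exactly 30).
    "points_total": rng.integers(25, 40, size=n),
    # Teacher-based commitment score (mean of ability and motivation, 1-3).
    "commitment": rng.uniform(1, 3, size=n),
})
df["deviation"] = df["points_total"] - 30   # points above/below the required 30

for gender, group in df.groupby("gender"):
    r, p = pearsonr(group["deviation"], group["commitment"])
    print(f"{gender}: r = {r:.2f}, p = {p:.3f}")
```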
Discussion
The study tested whether conceptually different methods for measuring values would affect the obtained value hierarchy. A distribution task and rating scales were used to reveal the value orientations of adolescents. In order to increase the variance, adolescent participants were recruited from different educational contexts: mainstream secondary schools and special secondary schools. The results show that value hierarchies were similar across measures for ordinary school students and, in contrast, differed (depending on the method used) for special school students. The extent of congruence between the results of the two different scales was operationalized as the CODI.
The congruence of the results of different methods may be viewed as a function of task commitment. Although alternative factors influencing congruence (intellectual level or maturity) are possible and were not directly examined in this study, there are several reasons for focusing on commitment. Learning abilities and learning motivation are, in teachers' eyes, signs of students' commitment to school as an educational institution, to its goals, requirements, and norms. In that respect teachers evaluate not students' abilities directly but their ability to take on the role required by the school.
Students who are committed to school interpret measurement situations as important in the school context regardless of whether it is a low-stakes or high-stakes exam. Those students who have low commitment may interpret low-stakes testing situations as unimportant (they expect no external gains). However, to a large extent, value research has been conducted in normative settings and with committed (socially adapted) samples.
Committed students cope better with difficulties, while less committed students perceive difficulties as threats. In the study I used three different value scales that varied in difficulty for the respondents. The SVS required careful evaluation of 57 items. The distribution scale required participants to make quasi-pair-wise comparisons between values. The rating scale was short and relatively simple. Interestingly, the rating scale revealed fewer differences in the value priorities of special and ordinary school adolescents. The results suggest that using a distribution task in addition to the SVS would help to reveal additional information (the degree of congruence) and to evaluate the participants' commitment to the task. Comparing test-retest measures shows remarkable differences between the constructs.
Conclusions
- The data suggest that different methods for measuring the same construct may produce different results, especially in a low-commitment context.
- The more committed the student is to the task, the more congruent are the self-reported value hierarchies measured by different methods. Such congruence/discrepancy can be referred to as the CODI.
- Controlling for commitment to school in adolescent samples might increase the validity of the results.
References
Finn, J. (1989). Withdrawing from school. Review of Educational Research, 59(2), 117–142. doi: 10.3102/00346543059002117
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59–109. doi: 10.3102/00346543074001059
Goff, B., & Goddard, H. (1999). Terminal core values associated with adolescent problem behaviors. Adolescence, 34(133), 47–60.
Hansson, S. O. (2001). The structure of values and norms. Cambridge, UK: Cambridge University Press. doi: 10.1017/CBO9780511498466
Jenkins, P. A. (1997). School delinquency and the school social bond. Journal of Research in Crime and Delinquency, 34, 337–367. doi: 10.1177/0022427897034003003
Jensen, A. R. (1998). The g factor and the design of education. In R. J. Sternberg & W. M. Williams (Eds.), Intelligence, instruction, and assessment: Theory into practice (pp. 111–131). Mahwah, NJ: Erlbaum.
Jick, T. D. (1979). Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly, 24, 602–611. doi: 10.2307/2392366
Krosnick, J. A., & Alwin, D. F. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. Public Opinion Quarterly, 52, 526–538. doi: 10.1086/269128
Lindeman, M., & Verkasalo, M. (2005). Measuring values with the Short Schwartz Value Survey. Journal of Personality Assessment, 85(2), 170–178. doi: 10.1207/s15327752jpa8502_09
Maio, G. R., & Olson, J. M. (1994). Value-attitude-behaviour relations: The moderating role of attitude functions. British Journal of Social Psychology, 33, 301–312. doi: 10.1111/j.2044-8309.1994.tb01027.x
Maio, G. R., Roese, N. J., Seligman, C., & Katz, A. (1996). Ratings, rankings, and the measurement of values: Evidence for the superior validity of ratings. Basic and Applied Social Psychology, 18, 171–181. doi: 10.1207/s15324834basp1802_4
Marks, H. M. (2000). Student engagement in instructional activity: Patterns in the elementary, middle, and high school years. American Educational Research Journal, 37(1), 153–184. doi: 10.3102/00028312037001153
McCarty, J. A., & Shrum, L. J. (2000). Alternative rating procedures for the measurement of personal values. Public Opinion Quarterly, 64(3), 271–298. doi: 10.1086/317989
Ovadia, S. (2004). Ratings and rankings: Reconsidering the structure of values and their measurement. International Journal of Social Research Methodology: Theory & Practice, 7, 403–414. doi: 10.1080/1364557032000081654
Pintrich, P. R., & Schrauben, B. (1992). Students' motivational beliefs and their cognitive engagement in classroom academic tasks. In D. H. Schunk & J. Meece (Eds.), Student perceptions in the classroom (pp. 149–179). Hillsdale, NJ: Erlbaum.
Rankin, W. L., & Grube, J. W. (1980). A comparison of ranking and rating procedures for value system measurement. European Journal of Social Psychology, 10, 233–246. doi: 10.1002/ejsp.2420100303
Romero, E., Sobral, J., Luengo, M. A., & Marzoa, J. A. (2001). Values and antisocial behaviour among Spanish adolescents. Journal of Genetic Psychology, 162(1), 20–40. doi:10.1080/00221320109597879
Schwartz, S. H. (1992). Universals in the content and structure of values: Theory and empirical tests in 20 countries. In M. Zanna (Ed.), Advances in experimental social psychology (pp. 1–65). New York: Academic Press.
Schwartz, S. H., Melech, G., Lehmann, A., Burgess, S., & Harris, M. (2001). Extending the cross-cultural validity of the theory of basic human values with a different method of measurement. Journal of Cross-Cultural Psychology, 32, 519–542. doi: 10.1177/0022022101032005001
Shernoff, D. J., Csikszentmihalyi, M., Schneider, B., & Shernoff, E. S. (2003). Student engagement in high school classrooms from the perspective of flow theory. School Psychology Quarterly, 18, 158–176. doi: 10.1521/scpq.18.2.158.21860
Verkasalo, M., Tuomivaara, P., & Lindeman, M. (1996). 15-year-old pupils’ and their teachers’ values, and their beliefs about the values of an ideal pupil. Educational Psychology, 1, 35–47. doi: 10.1080/0144341960160103
Zieman, G. L., & Benson, G. P. (1983). Delinquency: The role of self-esteem and value orientation. Journal of Youth and Adolescence, 12, 426–438. doi: 10.1007/BF02088666
To cite this article: Dmitry A. Podolskiy (2013). Multimethod approach to measuring values in a school context: exploring the association between the Congruence-Discrepancy Index (CODI) and task commitment. Psychology in Russia: State of the Art, 6(2), 44-54.
The journal content is licensed with CC BY-NC “Attribution-NonCommercial” Creative Commons license.