Discriminant Validity Where There Should Be None

    • 00:00

      [Discriminant Validity Where There Should Be None]

    • 00:08

      BERT WEIJTERS: In this video, I want to walk you through some of the key findings of our paper, "Discriminant Validity Where There Should Be None," co-authored with Alain de Beuckelaer and Hans Baumgartner and published in Applied Psychological Measurement. [Discriminant Validity Where There Should Be None: Positioning Same-Scale Items in Separated Blocks of a Questionnaire] In this paper, we look at the effect of the way that you position items in your questionnaire,

    • 00:30

      BERT WEIJTERS [continued]: on the item correlations, and on factor structure. [Questionnaire design] When designing a questionnaire, the researcher has to make many decisions, including the question of which items to include, which item format to use, what response scale to use, and where and how to position your items in the questionnaire. The focus of this paper is on the last of these questions.

    • 00:52

      BERT WEIJTERS [continued]: [Blocks] There are basically two common ways of organizing the items within a questionnaire. The first one is blocking the items by construct, such that the items for construct A together form one block, followed by a second block with the items for construct B, followed by a third block

    • 01:13

      BERT WEIJTERS [continued]: with the items for construct C, et cetera. The advantage of this approach is that it's really easy for respondents. The downside, on the other hand, is also that it's too easy for respondents. And you often see that people will click the same response option repeatedly within the same block. [Randomized order] An alternative way of positioning the items is to randomize the order of the items of all

    • 01:36

      BERT WEIJTERS [continued]: constructs, so that you get a kind of mix of items related to different constructs. This leads to weaker demand effects, since respondents don't really see what you're measuring exactly, and they will respond to each item independently. But, on the other hand, it's also more demanding,

    • 01:56

      BERT WEIJTERS [continued]: and it might lead to respondent fatigue. It has been argued that the first approach, blocking items by construct, leads to a better factor structure. That's a proposition that we want to question in this paper. In particular, we want to show that when you use items from one and the same scale that

    • 02:17

      BERT WEIJTERS [continued]: measure a construct in the same direction (so there are no reversed items), simply grouping them into blocks in alternative ways can give you different factor structures. So what we did was take a scale with eight items, and in one experimental condition we put the first four items on the first page

    • 02:39

      BERT WEIJTERS [continued]: of the questionnaire, the last fouritems on the last page of the questionnaire,with eight pages of filler items in between.In the second experimental condition,we used a different blocking, in that items 1, 2, 7,and 8 were on the first page of the questionnaire.And the remainder items where positioned on the last page

    • 03:04

      BERT WEIJTERS [continued]: of the questionnaire.Our prediction was that due to proximity effect,the first condition would result in a two-factor structure,where the first four items were one factor,and the last four items were another factor.[Condition 1, Condition 2]In Condition 2, our prediction was that item 1, 2, 7,

    • 03:26

      BERT WEIJTERS [continued]: and 8 would form a factor, and the samewould happen for items 3, 4, 5, and 6.So basically we're saying despite the factthat we have eight items that have clearlybeen shown to form one factor, when they'rein one block in a questionnaire, usingalternative ways of blocking these items canlead to different factors.

    • 03:46

      BERT WEIJTERS [continued]: We collected the data, and we ran separate factor analysisfor the two conditions.In Condition 1, we got a factor with four first itemsand another factor with the last four items.The two correlated at approximately 0.60.In the second condition, we got the other expected factorstructure.So one factor with items 3, 4, 5, 6,

    • 04:08

      BERT WEIJTERS [continued]: another factor with items 1, 2, 7, 8.In each of those conditions, we testedthis two-factor structure that we expected.Again, some alternative factor structures.Obviously, the one-factor model which showed real bad fitto the data, as you can see, but also all possible alternative

    • 04:30

      BERT WEIJTERS [continued]: permutations where each item was assignedto either factor 1 or factor 2 and all possible permutationsof that.The average of that clearly does not fit the data well.And even the best possible alternativepermutation, apart from the one that we predictedeven that one clearly fits the data much worse than predicted

    • 04:53

      BERT WEIJTERS [continued]: one.So what we find is basically that, depending on the wayyou block your items of one on the same scale,you can get different factor structures.Those factor structures show good fit,and the two factors that come out show discriminant validitywith a correlation 0.60, which is clearly not the same

    • 05:17

      BERT WEIJTERS [continued]: as a correlation of 1.And we also found that the confidence intervalof the correlation does not include 1.[IMPLICATIONS]Of course, this has some implicationsfor survey research.[Implication 1]The first implication is that testingthe discriminant validity of constructs that were measuredby skills in different blocks is basically

    • 05:39

      BERT WEIJTERS [continued]: stacking the cards in favor of what you're hoping to find.So we should be cautious with this kind of approach.[Implication 2]The second implication is that when constructing scales,so for scale development, it mightbe useful to try out different ways of positioningyour items in the questionnaire and reallytest how robust your factor structure is

    • 06:01

      BERT WEIJTERS [continued]: against these alternative positions.[Implication 3]Third implication is that it's really importantto be very transparent and to reallyreport in a very complete way about howyou structure the items in your questionnaire.And if this information is not in your paper,it's really hard to assess and compare

    • 06:23

      BERT WEIJTERS [continued]: the correlations and the factor structure.They may not be comparable with other studies.[Implication 4]The fourth implication is that it can be misleadingif you include many skills in your questionnairebut do not mention that when reporting your analysisand results.Because the distance between the constructscales in your questionnaire can clearly

    • 06:44

      BERT WEIJTERS [continued]: affect factor structure and the correlationbetween these scales.[Implication 5]And the final implication, the fifth implication,is that when you're running a meta analysis,it might be important to use item positioningas a moderating variable.[Thanks for watching!]So thanks for watching.If you want to find out more about some survey methodsresearch, you can always check my profile on ResearchGate

    • 07:07

      BERT WEIJTERS [continued]: or on Google Scholar.

Dr. Bert Weijters explains his research into how questionnaire design can affect outcomes. Grouping items by theme can bias results, so it is important to fully disclose how questions were ordered.
