British Social Attitudes: The 19th Report


Edited by: Alison Park, John Curtice, Katarina Thomson, Lindsey Jarvis & Catherine Bromley

    The National Centre for Social Research

    The National Centre for Social Research (NatCen) is an independent, non-profit social research institute. It has a large professional staff together with its own interviewing and coding resources. Some of NatCen's work – such as the survey reported in this book – is initiated by the institute itself and grant-funded by research councils or foundations. Other work is initiated by government departments, local authorities or quasi-government organisations to provide information on aspects of social or economic policy. NatCen also works frequently with other institutes and academics. Founded in 1969 and now Britain's largest social research institute, NatCen has a high reputation for the standard of its work in both qualitative and quantitative research. NatCen has a Survey Methods Centre and, with the Department of Sociology, University of Oxford, houses the Centre for Research into Elections and Social Trends (CREST). It also houses, with Southampton University, the Centre for Applied Social Surveys (CASS), an ESRC Resource Centre, two main functions of which are to run courses in survey methods and to establish and administer an electronic social survey question bank.

    The contributors

    Catherine Bromley

    Senior Researcher at NatCen, Scotland and Co-Director of the British Social Attitudes survey series

    Alex Bryson

    Principal Research Fellow at the Policy Studies Institute and Research Associate at the Centre for Economic Performance at LSE

    Ian Christie

    Associate Director of The Local Futures Group and associate of the New Economics Foundation and the Centre for Environmental Strategy, University of Surrey

    John Curtice

    Head of Research at NatCen, Scotland, Deputy Director of CREST, and Professor of Politics at Strathclyde University

    Geoffrey Evans

    Official Fellow in Politics, Nuffield College, Oxford, and Professor of the Sociology of Politics

    Sonia Exley

    Researcher at NatCen and Co-Director of the British Social Attitudes survey series

    Raphael Gomez

    Lecturer at the Interdisciplinary Institute of Management and a Research Associate at the Centre for Economic Performance at LSE

    Arthur Gould

    Reader in Swedish Social Policy at Loughborough University

    Charlotte Hastie

    Research Assistant in Social Policy at the University of Kent

    Anthony Heath

    Professor of Sociology at the University of Oxford and Deputy Director of CREST

    Lindsey Jarvis

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Stephen McKay

    Deputy Director of the Personal Finance Research Centre at the University of Bristol

    Alison Park

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Ceridwen Roberts

    Senior Research Fellow at the Department of Social Policy and Social Work at the University of Oxford

    Catherine Rothon

    Research Officer for CREST at the University of Oxford

    Karen Rowlingson

    Lecturer in the Department of Social and Policy Sciences at the University of Bath

    Nina Stratford

    Research Director at NatCen

    Peter Taylor-Gooby

    Professor of Social Policy at the University of Kent

    Katarina Thomson

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Ted Wragg

    Professor of Education at Exeter University




    This book is dedicated to the memory of Sheila Vioche

    List of Tables and Figures

    • Table 1.1 Patterns of transport use, 1997–2001 5
    • Table 1.2 How bad is congestion? 6
    • Table 1.3 Measures to curb demand for car use: will they affect drivers' habits? 8
    • Table 1.4 Paying for better public transport 9
    • Table 1.5 Will improvements in alternative forms of transport change patterns of car use? 10
    • Table 1.6 Car dependency: how inconvenient would less car use be? 1997–2001 10
    • Table 1.7 Regular car and bus use 12
    • Table 1.8 Changing patterns of bus use, 1997–2001 13
    • Table 1.9 Support for congestion charging by income 14
    • Table 1.10 Attitudes towards bus travel 16
    • Table 1.11 Levels of satisfaction with features of bus services, 1998 and 2001 18
    • Table 2.1 Views on saving and borrowing 30
    • Table 2.2 Decisions to save or borrow 31
    • Table 2.3 Attitudes to credit 32
    • Table 2.4 The principle and practice of saving or spending 33
    • Table 2.5 Attitudes towards personal finance, and behaviour, by household income 34
    • Table 2.6 Personal finance attitudes and behaviour, by age 35
    • Table 2.7 Saving and borrowing, by age 35
    • Table 2.8 Attitudes to credit, by age 36
    • Table 2.9 Personal finance attitudes and behaviour by lifecycle group 37
    • Table 2.10 Expected source of retirement income by attitudes to saving and credit 38
    • Table 2.11 Responsibility for pensions by attitudes to saving and credit 39
    • Table 3.1 Shift-share analysis of the changing composition of the workplace and of union membership, 1983–2001 53
    • Table 3.2 Contribution of change in composition and within-group change to membership density trend, 1983–2001 54
    • Table 3.3 Perceptions of how well unions do their job in unionised workplaces, 1983–1985 to 1999–2001 58
    • Figure 3.1 Union density among employees in Britain, 1983–2001 45
    • Figure 3.2 The rise of ‘never-membership’, 1983–2001 57
    • Figure 3.3 Trends in the union wage premium, 1985–2001 60
    • Figure 3.4 The union wage premium and the business cycle, 1985–2001 61
    • Table 4.1 Attitudes to taxation and spending, 1983–2001 76
    • Table 4.2 First or second priorities for extra public spending, 1983–2001 77
    • Table 4.3 Spending in the main service areas as a proportion of total government spending, 1982/1983–2000/2001 78
    • Table 4.4 First or second priority for extra spending on social security benefits, 1983–2001 79
    • Table 4.5 Spending on different areas of social security as a proportion of total social security spending, 1982/1983–2000/2001 79
    • Table 4.6 Dissatisfaction with the NHS, by age, income, class, party identification and experience of the NHS, 1987–2001 81
    • Table 4.7 The impact of managerial change versus extra spending 83
    • Table 4.8 Support for hypothecated tax increases 84
    • Table 4.9 The perceived impact of tax increases on different public services 85
    • Table 4.10 Perceptions of the relative size of public spending in different spending areas 86
    • Table 4.11 Perceptions of the relative size of social security spending 87
    • Table 4.12 Perceptions of social issues compared with reality 88
    • Table 4.13 Public perceptions of the most expensive public spending area, by education, party identification, income and social class 90
    • Table 4.14 Public perceptions of the most expensive social security spending area, by education, party identification, income and social class 91
    • Table 4.15 Perceptions of spending and willingness to pay 3p extra in tax for higher spending 92
    • Figure 4.1 Dissatisfaction with the NHS, 1983–2001 80
    • Table 5.1 Highest priority for extra government spending within education, 1983–2001 100
    • Table 5.2 Most effective measure to improve primary education 1995–2001 101
    • Table 5.3 Success of Labour government in cutting class sizes in schools, 1997–2001 102
    • Table 5.4 Support for selection in secondary schools, 1984–2001 103
    • Table 5.5 Success of state secondary schools preparing young people for work, 1987–2001 104
    • Table 5.6 Success of state secondary schools bringing out pupils' natural abilities, 1987–2001 104
    • Table 5.7 Success of state secondary schools teaching three Rs, 1987–2001 105
    • Table 5.8 Most effective measure to improve secondary education, 1995–2001 106
    • Table 5.9 Attitudes towards the expansion of higher education, 1983–2001 108
    • Table 5.10 Attitudes towards the expansion of higher education, by level of education, 1994 and 2000 108
    • Table 5.11 Attitudes towards student grants and loans, 1983–2000 109
    • Table 5.12 Attitudes towards student grants, 1995–2000 110
    • Table 5.13 Attitudes towards student grants, by level of education, 2000 111
    • Table 5.14 Importance of qualities that universities should develop in students, 1994 and 2001 111
    • Table 5.15 Universities' performance in developing qualities in students, 1994 and 2001 112
    • Table 5.16 Smaller class sizes as most important measure to improve primary education, by social class, 1995–2001 113
    • Table 5.17 Support for Labour's policy of cutting class sizes, by social class, 1997–2001 113
    • Table 5.18 Support for selection, by social class, 1994–2001 114
    • Figure 5.1 First priority for extra government spending, 1983–2001 99
    • Table 6.1 Attitudes towards the legalisation of cannabis, 1983–2001 121
    • Table 6.2 Attitudes towards the legal status of cannabis, 1993, 1995 and 2001 121
    • Table 6.3 Attitudes towards the legal status of cannabis, heroin and ecstasy 122
    • Table 6.4 Attitudes towards prosecution for possession and supply of cannabis and heroin, 1995 and 2001 123
    • Table 6.5 The liberal/restrictive scale of attitudes towards drugs, 1995 and 2001 125
    • Table 6.6 Per cent with restrictive scores, by age and education, 1995 and 2001 126
    • Table 6.7 Cannabis use by various social groups, 1993 and 2001 127
    • Table 6.8 Per cent who agree that “we need to accept that using illegal drugs is a normal part of some people's lives”, by age, 1995 and 2001 128
    • Table 6.9 Per cent who agree that “smoking cannabis should be legalised”, by age cohort, 1983 and 2001 130
    • Table 6.10 Damage done by cannabis and heroin, 1993 and 2001 132
    • Table 6.11 Attitudes towards cannabis and heroin as a cause of crime and violence, 1993 and 2001 133
    • Table 6.12 Knowledge about the effects of different drugs 134
    • Table 6.13 Attitudes towards, and use of, cannabis, by drug knowledge score 135
    • Table 6.14 Per cent who mention particular drugs as being the most harmful to regular users 136
    • Table 6.15 Per cent who agree with harm reduction strategies, by score on liberal-restrictive scale 137
    • Table 7.1 Trends in trust in government to place the needs of the nation above political party interests, 1986–2001 144
    • Table 7.2 Trends in system efficacy, 1987–2001 145
    • Table 7.3 Political trust and system efficacy, by age 146
    • Table 7.4 Political trust and system efficacy, by age, 1997 146
    • Table 7.5 Political trust and system efficacy, by highest educational qualification, 1997 and 2001 147
    • Table 7.6 Trends in efficacy and trust, by party identification, 1996–2001 148
    • Table 7.7 Trends in civic duty, 1991–2001 149
    • Table 7.8 Trends in political interest, 1986–2001 150
    • Table 7.9 Trends in strength of party identification, 1987–2001 151
    • Table 7.10 Perceived difference between the parties, 1964–2001 152
    • Table 7.11 Trust in government and electoral participation, 1997 and 2001 154
    • Table 7.12 Political efficacy and electoral participation, 1997 and 2001 155
    • Table 7.13 Strength of party identification and electoral participation, 1997 and 2001 156
    • Table 7.14 Political interest and electoral participation, 1997 and 2001 157
    • Table 7.15 Perceptions of party difference and electoral participation, 1997 and 2001 157
    • Table 7.16 Turnout in the 2001 election, by perceptions of party difference, and strength of party identification 158
    • Table 7.17 Perceptions of party difference and strength of party ID, 1997 and 2001 158
    • Table 7.18 Age and electoral participation, 1997 and 2001 159
    • Table 7.19 Age and political interest, 1997 and 2001 160
    • Table 9.1 Existence of particular family members, by age 187
    • Table 9.2 Face-to-face contact with particular family members 188
    • Table 9.3 Frequency of contact by phone, letter, fax or e-mail with non-resident family members 189
    • Table 9.4 Frequency of face-to-face contact with non-resident family member, 1986, 1995 and 2001 190
    • Table 9.5 Co-residence with family members, by age 191
    • Table 9.6 Frequency of face-to-face contact with non-resident family member, by parenthood 192
    • Table 9.7 Frequency of face-to-face contact with non-resident family member, by sex 193
    • Table 9.8 Friends, by age 196
    • Table 9.9 Friends, by class and household income 197
    • Table 9.10 Organisational membership and participation over last 12 months 198
    • Table 9.11 Organisational participation, by age of leaving full-time education 199
    • Table 9.12 Family contact and friendship networks 200
    • Table 9.13 Organisational participation, and friendship patterns 201
    • Table 9.14 Organisational participation, and family contact patterns 201
    • Table 9.15 Sources of support in times of need 203
    • Table 10.1 Self-reported racial prejudice and related racial attitudes, 1994 215
    • Table 10.2 Racial prejudice, 1985–2001 215
    • Table 10.3 Self-reported racial prejudice, by education, 1985–2001 216
    • Table 10.4 Racial prejudice, by age, 1985–2001 217
    • Table 10.5 Attitudes towards homosexuality, 1985–2000 218
    • Table 10.6 Attitudes towards homosexuality, by education, 1985–2000 218
    • Table 10.7 Attitudes towards homosexuality, by age, 1985–2000 219
    • Table 10.8 Attitudes towards homosexuality – birth cohort analysis 220
    • Table 10.9 Self-reported racial prejudice – birth cohort analysis 220
    • Table 10.10 Support for racial supremacists' civil rights, 1994 222
    • Table 10.11 Tolerance of racial supremacists' civil rights, by measures of own prejudice, 1994 223
    • Table 10.12 Support for racial supremacists' civil rights, by education, 1994 224
    • Table 10.13 Liberal values, political involvement and tolerance of white supremacists' civil rights, 1994 (Pearson correlations) 225


    This volume, like each of its annual predecessors, presents results, analyses and interpretations of the latest British Social Attitudes survey – the 19th in the series of reports on the studies designed and carried out by the National Centre for Social Research.

    The series has a widely acknowledged reputation as the authoritative map of contemporary British values. Its reputation owes a great deal to its many generous funders. We are particularly grateful to our core funder – the Gatsby Charitable Foundation (one of the Sainsbury Family Charitable Trusts) – whose continuous support of the series from the start has given it security and independence. Other funders have made long-term commitments to the study and we are ever grateful to them as well. These include the Department of Health, the Department for Work and Pensions, and the Department of Transport, all of whom funded the 2001 survey. Thanks are also due to the Health and Safety Executive1 and the Institute of Community Studies.

    We are particularly grateful to the Economic and Social Research Council (ESRC), which provided funding for three modules of questions in the 2001 survey. These covered: attitudes towards, and knowledge about, public policy; illegal drugs; and devolution and national identity (funded as part of the ESRC's Devolution and Constitutional Change Programme). The ESRC also supported the National Centre's participation in the International Social Survey Programme (ISSP), which now comprises 38 nations, each of which helps to design and then field a set of equivalent questions every year on a rotating set of issues. The topic in 2001 was social networks.

    We are also very grateful to the ESRC for its funding of the Centre for Research into Elections and Social Trends (CREST), an ESRC Research Centre that links the National Centre with the Department of Sociology at Oxford University. Although CREST's funding from the ESRC sadly ended in autumn 2002, we will endeavour to continue its role in uncovering and investigating long-run changes in Britain's social and political complexion.

    One recent spin-off from the British Social Attitudes series has been the development of an annual Scottish Social Attitudes survey. This began in 1999 and is funded from a range of sources along similar lines to British Social Attitudes. It is closely associated with its British counterpart and incorporates many of the same questions to enable comparison north and south of the border, while also providing a detailed examination of attitudes to particular issues within Scotland. Two books have now been published about the survey (Paterson et al., 2000; Curtice et al., 2001) and a third is due to be published early in 2003.

    The British Social Attitudes series is a team effort. The researchers who design, direct and report on the study are supported by complementary teams who implement the sampling strategy and carry out data processing. They in turn depend on fieldwork controllers, area managers and field interviewers who are responsible for getting all the interviewing done, and on administrative staff to compile, organise and distribute the survey's extensive documentation. In this respect, particular thanks are due to Kerrie Gemmill and her colleagues in the National Centre's administrative office in Brentwood. Other thanks are due to Sue Corbett and her colleagues in our computing department who expertly translate our questions into a computer-assisted questionnaire. Meanwhile, the raw data have to be transformed into a workable SPSS system file – a task that has for many years been performed with great care and efficiency by Ann Mair at the Social Statistics Laboratory in the University of Strathclyde. Many thanks are also due to Lucy Robinson and Vanessa Harwood at Sage, our publishers.

    Unfortunately, this year the British Social Attitudes team has had to bid farewell to a much loved colleague, Sheila Vioche, who died in August. Among countless other things, Sheila played an invaluable role in organising, formatting and checking the content of our previous five reports. Her calm presence, style and wit will be sorely missed by all of us, as will her fantastic poems.

    Finally, we must praise the anonymous respondents across Britain who gave their time to take part in our 2001 survey. Like the 46,000 or so respondents who have participated before them, they are the cornerstone of this enterprise. We hope that some of them will one day come across this volume and read about themselves with interest.

    The Editors

    1. This funding supported a module of questions about health and safety in the workplace. Although its findings are not discussed in this report, they are explored at

    Paterson, L., Brown, A., Curtice, J., Hinds, K., McCrone, D., Park, A., Sproston, K. and Surridge, P. (2000), New Scotland, New Politics?, Edinburgh: Edinburgh University Press.
    Curtice, J., McCrone, D., Park, A. and Paterson, L. (eds.) (2001), New Scotland, New Society? Are social and political ties fragmenting?, Edinburgh: Edinburgh University Press.
  • Appendix I: Technical Details of the Survey

    In 2001, three versions of the British Social Attitudes questionnaire were fielded. Each ‘module’ of questions is asked either of the full sample (3,287 respondents) or of a random two-thirds or one-third of the sample. The structure of the questionnaire (versions A, B and C) is shown at the beginning of Appendix III.

    Sample Design

    The British Social Attitudes survey is designed to yield a representative sample of adults aged 18 or over. Since 1993, the sampling frame for the survey has been the Postcode Address File (PAF), a list of addresses (or postal delivery points) compiled by the Post Office.1

    For practical reasons, the sample is confined to those living in private households. People living in institutions (though not in private households at such institutions) are excluded, as are households whose addresses were not on the PAF.

    The sampling method involved a multi-stage design, with three separate stages of selection.

    Selection of Sectors

    At the first stage, postcode sectors were selected systematically from a list of all postal sectors in Great Britain. Before selection, any sectors with fewer than 1,000 addresses were identified and grouped together with an adjacent sector; in Scotland all sectors north of the Caledonian Canal were excluded (because of the prohibitive costs of interviewing there). Sectors were then stratified on the basis of:

    • 37 sub-regions
    • population density with variable banding used, in order to create three equal-sized strata per sub-region
    • ranking by percentage of homes that were owner-occupied in England and Wales and percentage of homes where the head of household was non-manual in Scotland.

    Two hundred postcode sectors were selected, with probability proportional to the number of addresses in each sector.

    Selection of Addresses

    Thirty-one addresses were selected in each of the 200 sectors. The sample was therefore 200 × 31 = 6,200 addresses, selected by starting from a random point on the list of addresses for each sector, and choosing each address at a fixed interval. The fixed interval was calculated for each sector in order to generate the correct number of addresses.
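    The fixed-interval (systematic) selection described above can be sketched as follows. This is a minimal illustration only, not NatCen's production code; the sector size and the `systematic_sample` helper are invented for the example:

```python
import random

def systematic_sample(addresses, n_wanted, seed=None):
    """Select n_wanted addresses at a fixed interval from a random start.

    The interval is the sector's list length divided by the number of
    addresses wanted, so each address has an equal chance of selection.
    """
    rng = random.Random(seed)
    interval = len(addresses) / n_wanted      # fixed interval for this sector
    start = rng.uniform(0, interval)          # random starting point
    return [addresses[int(start + i * interval)] for i in range(n_wanted)]

# e.g. a sector listing 2,480 delivery points, from which 31 are selected
sector = [f"address-{i}" for i in range(2480)]
selected = systematic_sample(sector, 31, seed=1)
```

    Because the interval is recalculated for each sector, every sector yields exactly 31 addresses regardless of how many it lists.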

    The Multiple-Occupancy Indicator (MOI) available through PAF was used when selecting addresses in Scotland. The MOI shows the number of accommodation spaces sharing one address. Thus, if the MOI indicated more than one accommodation space at a given address, that address's chance of selection was increased in proportion to the number of accommodation spaces it contained. The MOI is largely irrelevant in England and Wales, as separate dwelling units generally appear as separate entries on PAF. In Scotland, however, tenements with many flats tend to appear as one entry on PAF. Even in Scotland, the vast majority of MOIs had a value of one; the remainder, which ranged between three and 16, were incorporated into the weighting procedures (described below).

    Selection of Individuals

    Interviewers called at each address selected from PAF and listed all those eligible for inclusion in the sample – that is, all persons currently aged 18 or over and resident at the selected address. The interviewer then selected one respondent using a computer-generated random selection procedure. Where there were two or more households or ‘dwelling units’ at the selected address, interviewers first had to select one household or dwelling unit using the same random procedure. They then followed the same procedure to select a person for interview.


    Weighting

    Data were weighted to take account of the fact that not all the units covered in the survey had the same probability of selection. The weighting reflected the relative selection probabilities of the individual at the three main stages of selection: address, household and individual.

    First, because addresses in Scotland were selected using the MOI, weights had to be applied to compensate for the greater probability of an address with an MOI of more than one being selected, compared to an address with an MOI of one. (This stage was omitted for the English and Welsh data.) Secondly, data were weighted to compensate for the fact that dwelling units at an address which contained a large number of dwelling units were less likely to be selected for inclusion in the survey than ones which did not share an address. (We use this procedure because in most cases of MOIs greater than one, the two stages will cancel each other out, resulting in more efficient weights.) Thirdly, data were weighted to compensate for the lower selection probabilities of adults living in large households compared with those living in small households. The weights were capped at 8.0 (causing five cases to have their weights reduced). The resulting weight is called ‘WtFactor’ and the distribution of weights is shown in the next table.

    Table A.1 Distribution of unscaled and scaled weights

    The mean weight was 1.80. The weights were then scaled down to make the number of weighted productive cases exactly equal to the number of unweighted productive cases (n = 3,287).
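    The capping and scaling steps can be illustrated with a toy example. The weights below are hypothetical, and the sketch shows only the final stage; the real procedure, as described above, first combines the MOI, dwelling unit and household-size adjustments:

```python
def scaled_weights(raw_weights, cap=8.0):
    """Cap extreme selection weights at `cap`, then rescale so the weighted
    case count equals the unweighted case count, as in the BSA procedure."""
    capped = [min(w, cap) for w in raw_weights]
    factor = len(capped) / sum(capped)    # scale weighted n to unweighted n
    return [w * factor for w in capped]

# six hypothetical respondents, one with an extreme selection weight
raw = [1.0, 2.0, 1.0, 3.0, 1.0, 12.0]
wt = scaled_weights(raw)
# sum(wt) == 6.0, the number of productive cases
```

    Capping before scaling prevents a handful of rare selection paths from dominating the weighted estimates at the cost of a small bias.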

    All the percentages presented in this Report are based on weighted data.

    Questionnaire Versions

    Each address in each sector (sampling point) was allocated to either the A, B or C third of the sample. If one serial number was version A, the next was version B and the next after that version C. Thus each interviewer was allocated ten or 11 cases from each version and each version was assigned to 2,066 or 2,067 addresses.
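    The rotation can be sketched as a simple modulo allocation. The mapping of remainders to letters below is arbitrary; only the strict A, B, C cycling matters:

```python
def questionnaire_version(serial):
    """Allocate questionnaire versions A, B and C in strict rotation
    by serial number (the phase of the rotation is illustrative)."""
    return "ABC"[serial % 3]

# within one 31-address sector, each version receives 10 or 11 cases
allocation = [questionnaire_version(s) for s in range(31)]
```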


    Fieldwork

    Interviewing was mainly carried out between June and September 2001, with a small number of interviews taking place in October and November.

    Fieldwork was conducted by interviewers drawn from the National Centre for Social Research's regular panel, using face-to-face computer-assisted interviewing.2 Interviewers attended a one-day briefing conference to familiarise them with the selection procedures and questionnaires.

    The mean interview length was 65 minutes for version A of the questionnaire, 68 minutes for version B and 62 minutes for version C.3 Interviewers achieved an overall response rate of 59 per cent. Details are shown in the next table.

    Table A.2 Response rate on British Social Attitudes 2001

    As in earlier rounds of the series, the respondent was asked to fill in a self-completion questionnaire which, whenever possible, was collected by the interviewer. Otherwise, the respondent was asked to post it to the National Centre for Social Research. If necessary, up to three postal reminders were sent to obtain the self-completion supplement.

    A total of 492 respondents (15 per cent of those interviewed) did not return their self-completion questionnaire. Version A of the self-completion questionnaire was returned by 85 per cent of respondents to the face-to-face interview, version B by 87 per cent and version C by 83 per cent. As in previous rounds, we judged that it was not necessary to apply additional weights to correct for non-response.

    Advance Letter

    Interviewers were supplied with letters describing the purpose of the survey and the coverage of the questionnaire, which they posted to sampled addresses before making any calls.4

    Analysis Variables

    A number of standard analyses have been used in the tables that appear in this report. The analysis groups requiring further definition are set out below. For further details see Thomson et al. (2001).


    Region

    The ten Standard Statistical Regions (with Greater London distinguished from the rest of the South East) or twelve Government Office Regions have been used. Sometimes these have been grouped into what we have termed ‘compressed region’: ‘Northern’ includes the North, the North West, and Yorkshire and Humberside. East Anglia is included in the ‘South’, as is the South West.

    Standard Occupational Classification

    Respondents are classified according to their own occupation, not that of the ‘head of household’. Each respondent was asked about their current or last job, so that all respondents except those who had never worked were coded. Additionally, if the respondent was not working but their spouse or partner was working, their spouse or partner is similarly classified.

    With the 2001 survey, we began coding occupation to the new Standard Occupational Classification 2000 (SOC 2000) instead of the Standard Occupational Classification 1990 (SOC 90). The main socio-economic grouping based on SOC 2000 is the National Statistics Socio-Economic Classification (NS-SEC). However, to maintain time series, some analysis has continued to use the older schemes based on SOC 90 – Registrar General's Social Class, Socio-Economic Group and the Goldthorpe schema.

    National Statistics Socio-Economic Classification (NS-SEC)

    The combination of SOC 2000 and employment status for current or last job generates the following NS-SEC analytic classes:

    • Employers in large organisations, higher managerial and professional
    • Lower professional and managerial; higher technical and supervisory
    • Intermediate occupations
    • Small employers and own account workers
    • Lower supervisory and technical occupations
    • Semi-routine occupations
    • Routine occupations

    The remaining respondents are grouped as “never had a job” or “not classifiable”. For some analyses, it may be more appropriate to classify respondents according to their current socio-economic status, which takes into account only their present economic position. In this case, in addition to the seven classes listed above, the remaining respondents not currently in paid work fall into one of the following categories: “not classifiable”, “retired”, “looking after the home”, “unemployed” or “others not in paid occupations”.

    Registrar General's Social Class

    As with NS-SEC, each respondent's Social Class is based on his or her current or last occupation. The combination of SOC 90 with employment status for current or last job generates the following six Social Classes:

    • I Professional occupations
    • II Managerial and technical occupations
    • III (Non-manual) Skilled non-manual occupations
    • III (Manual) Skilled manual occupations
    • IV Partly skilled occupations
    • V Unskilled occupations

    They are usually collapsed into four groups: I & II, III Non-manual, III Manual, and IV & V.

    Socio-Economic Group

    As with NS-SEC, each respondent's Socio-economic Group (SEG) is based on his or her current or last occupation. SEG aims to bring together people with jobs of similar social and economic status, and is derived from a combination of employment status and occupation. The full SEG classification identifies 18 categories, but these are usually condensed into six groups:

    • Professionals, employers and managers
    • Intermediate non-manual workers
    • Junior non-manual workers
    • Skilled manual workers
    • Semi-skilled manual workers
    • Unskilled manual workers

    As with NS-SEC, the remaining respondents are grouped as “never had a job” or “not classifiable”.

    Goldthorpe Schema

    The Goldthorpe schema classifies occupations by their ‘general comparability’, considering such factors as sources and levels of income, economic security, promotion prospects, and level of job autonomy and authority. The Goldthorpe schema was derived from the SOC 90 codes combined with employment status. Two versions of the schema are coded: the full schema has 11 categories; the ‘compressed schema’ combines these into the five classes shown below.

    • Salariat (professional and managerial)
    • Routine non-manual workers (office and sales)
    • Petty bourgeoisie (the self-employed, including farmers, with and without employees)
    • Manual foremen and supervisors
    • Working class (skilled, semi-skilled and unskilled manual workers, personal service and agricultural workers)

    There is a residual category comprising those who have never had a job or who gave insufficient information for classification purposes.


    Industry

    All respondents whose occupation could be coded were allocated a Standard Industrial Classification 1992 (SIC 92). Two-digit class codes are used. As with Social Class, SIC may be generated on the basis of the respondent's current occupation only, or on his or her most recently classifiable occupation.

    Party Identification

    Respondents can be classified as identifying with a particular political party on one of three counts: if they consider themselves supporters of that party, as closer to it than to others, or as more likely to support it in the event of a general election (responses are derived from Qs.151–153). The three groups are generally described respectively as partisans, sympathisers and residual identifiers. In combination, the three groups are referred to as ‘identifiers’.
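    That three-step derivation can be expressed as a small classification function. This is a sketch: the boolean arguments stand in for the answers to Qs.151–153, and the strict ordering of the checks (classify by the first question answered positively) is an assumption for illustration:

```python
def identifier_type(is_supporter, feels_closer, would_vote_for):
    """Classify a respondent by the first of the three counts that applies,
    mirroring the partisan / sympathiser / residual identifier hierarchy."""
    if is_supporter:
        return "partisan"
    if feels_closer:
        return "sympathiser"
    if would_vote_for:
        return "residual identifier"
    return None  # a non-identifier
```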

    Attitude Scales

    Since 1986, the British Social Attitudes surveys have included two attitude scales which aim to measure where respondents stand on certain underlying value dimensions – left–right and libertarian–authoritarian. Since 1987 (except 1990), a similar scale on ‘welfarism’ has also been included.5

    A useful way of summarising the information from a number of questions of this sort is to construct an additive index (DeVellis, 1991; Spector, 1992). This approach rests on the assumption that there is an underlying – ‘latent’ – attitudinal dimension which characterises the answers to all the questions within each scale. If so, scores on the index are likely to be a more reliable indication of the underlying attitude than the answers to any one question.

    Each of these scales consists of a number of statements to which the respondent is invited to “agree strongly”, “agree”, “neither agree nor disagree”, “disagree”, or “disagree strongly”.

    The items are:

    • Left–right scale
      • Government should redistribute income from the better-off to those who are less well off. [Redistrb]
      • Big business benefits owners at the expense of workers. [BigBusnN]
      • Ordinary working people do not get their fair share of the nation's wealth. [Wealth]6
      • There is one law for the rich and one for the poor. [RichLaw]
      • Management will always try to get the better of employees if it gets the chance. [Indust4]
    • Libertarian–authoritarian scale
      • Young people today don't have enough respect for traditional British values. [TradVals]
      • People who break the law should be given stiffer sentences. [StifSent]
      • For some crimes, the death penalty is the most appropriate sentence. [DeathApp]
      • Schools should teach children to obey authority. [Obey]
      • The law should always be obeyed, even if a particular law is wrong. [WrongLaw]
      • Censorship of films and magazines is necessary to uphold moral standards. [Censor]
    • Welfarism scale
      • The welfare state encourages people to stop helping each other. [WelfHelp]
      • The government should spend more money on welfare benefits for the poor, even if it leads to higher taxes. [MoreWelf]
      • Around here, most unemployed people could find a job if they really wanted one. [UnempJob]
      • Many people who get social security don't really deserve any help. [SocHelp]
      • Most people on the dole are fiddling in one way or another. [DoleFidl]
      • If welfare benefits weren't so generous, people would learn to stand on their own two feet. [WelfFeet]
      • Cutting welfare benefits would damage too many people's lives. [DamLives]
      • The creation of the welfare state is one of Britain's proudest achievements. [ProudWlf]

    The indices for the three scales are formed by scoring the leftmost, most libertarian or most pro-welfare position as 1, and the rightmost, most authoritarian or most anti-welfare position as 5. The “neither agree nor disagree” option is scored as 3. The scores on all the questions in each scale are summed and then divided by the number of items in the scale, giving indices ranging from 1 (leftmost, most libertarian, most pro-welfare) to 5 (rightmost, most authoritarian, most anti-welfare). The scores on the three indices have been placed on the dataset.7
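    The scoring and averaging described above can be sketched as follows (the respondent data are invented purely for illustration; item names follow the questionnaire mnemonics):

```python
# Additive index for the left-right scale: items are scored 1 (leftmost)
# to 5 (rightmost) and averaged, so the index also runs from 1 to 5.
LEFT_RIGHT_ITEMS = ["Redistrb", "BigBusnN", "Wealth", "RichLaw", "Indust4"]

def scale_index(responses, items):
    """Mean item score for one respondent; responses maps item name to 1-5."""
    return sum(responses[item] for item in items) / len(items)

# A hypothetical respondent who "agrees" (score 2) with four items and is
# neutral (score 3) on one:
respondent = dict.fromkeys(LEFT_RIGHT_ITEMS, 2)
respondent["Wealth"] = 3
print(scale_index(respondent, LEFT_RIGHT_ITEMS))  # 2.2
```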

    The scales have been tested for reliability (as measured by Cronbach's alpha). The Cronbach's alpha coefficients (unstandardised items) for the scales in 2000 are 0.81 for the left–right scale, 0.80 for the ‘welfarism’ scale and 0.73 for the libertarian–authoritarian scale. This level of reliability can be considered “very good” for the left–right and welfarism scales and “acceptable” for the libertarian–authoritarian scale (DeVellis, 1991: 85).
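    For reference, Cronbach's alpha for unstandardised items is computed from the item variances and the variance of the summed scale; a minimal sketch, with data invented purely to illustrate the calculation:

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance
# of the total score), where k is the number of items.
def cronbach_alpha(item_scores):
    """item_scores: one inner list of respondent scores per item."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three invented items answered by five respondents:
items = [[1, 2, 2, 4, 5],
         [1, 3, 2, 4, 4],
         [2, 2, 1, 5, 5]]
print(round(cronbach_alpha(items), 2))
```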

    Other Analysis Variables

    These are taken directly from the questionnaire and to that extent are self-explanatory. The principal ones are:

    • Sex (Q.35)
    • Age (Q.36)
    • Household income (Q.1001)
    • Economic position (Q.307)
    • Religion (Q.753)
    • Highest educational qualification obtained (Qs.803–860)
    • Marital status (Q.127)
    • Benefits received (Qs.951–993)
    Sampling Errors

    No sample precisely reflects the characteristics of the population it represents because of both sampling and non-sampling errors. If a sample were designed as a random sample (if every adult had an equal and independent chance of inclusion in the sample) then we could calculate the sampling error of any percentage, p, using the formula:

        s.e.(p) = √( p(100 − p) / n )

    where n is the number of respondents on which the percentage is based. Once the sampling error had been calculated, it would be a straightforward exercise to calculate a confidence interval for the true population percentage. For example, a 95 per cent confidence interval would be given by the formula:

        p ± 1.96 × s.e.(p)

    Clearly, for a simple random sample (srs), the sampling error depends only on the values of p and n. However, simple random sampling is almost never used in practice because of its inefficiency in terms of time and cost.
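    The simple random sampling calculation can be sketched as follows (the figures are illustrative only):

```python
import math

# Standard error and 95 per cent confidence interval of a percentage p,
# based on n respondents, under simple random sampling.
def srs_standard_error(p, n):
    return math.sqrt(p * (100 - p) / n)

def srs_confidence_interval(p, n):
    se = srs_standard_error(p, n)
    return (p - 1.96 * se, p + 1.96 * se)

# 50 per cent of 3,287 respondents, treating the sample as a simple random one:
low, high = srs_confidence_interval(50, 3287)
print(round(low, 1), round(high, 1))  # 48.3 51.7
```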

    As noted above, the British Social Attitudes sample, like that drawn for most large-scale surveys, was clustered according to a stratified multi-stage design into 200 postcode sectors (or combinations of sectors). With a complex design like this, the sampling error of a percentage giving a particular response is not simply a function of the number of respondents in the sample and the size of the percentage; it also depends on how that percentage response is spread within and between sample points.

    The complex design may be assessed relative to simple random sampling by calculating a range of design factors (DEFTs) associated with it, where

        DEFT = complex sampling error of p / simple random sampling error of p

    and represents the multiplying factor to be applied to the simple random sampling error to produce its complex equivalent. A design factor of one means that the complex sample has achieved the same precision as a simple random sample of the same size. A design factor greater than one means the complex sample is less precise than its simple random sample equivalent. If the DEFT for a particular characteristic is known, a 95 per cent confidence interval for a percentage may be calculated using the formula:

        p ± 1.96 × DEFT × √( p(100 − p) / n )

    Calculations of sampling errors and design effects were made using the statistical analysis package STATA.
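    As a sketch (the DEFT value below is invented for illustration), the complex confidence interval is simply the simple random sampling interval widened by the design factor:

```python
import math

# 95 per cent confidence interval with a design factor applied: the complex
# standard error is DEFT times the simple random sampling standard error.
def complex_confidence_interval(p, n, deft):
    se = deft * math.sqrt(p * (100 - p) / n)
    return (p - 1.96 * se, p + 1.96 * se)

# A survey proportion of 30 per cent among 3,287 respondents, with a
# hypothetical DEFT of 1.2:
low, high = complex_confidence_interval(30, 3287, 1.2)
print(round(low, 1), round(high, 1))  # 28.1 31.9
```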

    The following table gives examples of the confidence intervals and DEFTs calculated for a range of different questions: some fielded on all three versions of the questionnaire and some on one only; some asked on the interview questionnaire and some on the self-completion supplement. It shows that most of the questions asked of all sample members have a confidence interval of around plus or minus two to three per cent of the survey proportion. This means that we can be 95 per cent certain that the true population proportion is within two to three per cent (in either direction) of the proportion we report.

    It should be noted that the design effects for certain variables (notably those most associated with the area a person lives in) are greater than those for other variables. For example, the question about benefit levels for the unemployed has high design effects, which may reflect differing rates of unemployment across the country. Another case in point is housing tenure, as different kinds of tenures (such as council housing, or owner-occupied properties) tend to be concentrated in certain areas; consequently the design effects calculated for these variables in a clustered sample are greater than the design effects calculated for variables less strongly associated with area, such as attitudinal variables.

    These calculations are based on the 3,287 respondents to the main questionnaire and 2,795 returning self-completion questionnaires; on the A version respondents (1,107 for the main questionnaire and 941 for the self-completion); on the B version respondents (1,081 and 942 respectively); or on the C version respondents (1,099 and 912 respectively). As the examples above show, sampling errors for proportions based only on respondents to just one of the three versions of the questionnaire, or on subgroups within the sample, are somewhat larger than they would have been had the questions been asked of everyone.

    Table A.3 Complex standard errors and confidence intervals of selected variables

    Analysis Techniques

    Regression analysis aims to summarise the relationship between a ‘dependent’ variable and one or more ‘independent’ variables. It shows how well we can estimate a respondent's score on the dependent variable from knowledge of their scores on the independent variables. It is often undertaken to support a claim that the phenomena measured by the independent variables cause the phenomenon measured by the dependent variable. However, the causal ordering, if any, between the variables cannot be verified or falsified by the technique. Causality can only be inferred through special experimental designs or through assumptions made by the analyst.

    All regression analysis assumes that the relationship between the dependent and each of the independent variables takes a particular form. In linear regression, the most common form of regression analysis, it is assumed that the relationship can be adequately summarised by a straight line. This means that a one-point increase in the value of an independent variable is assumed to have the same impact, on average, on the value of the dependent variable, irrespective of the previous values of those variables.

    Strictly speaking the technique assumes that both the dependent and the independent variables are measured on an interval level scale, although it may sometimes still be applied even where this is not the case. For example, one can use an ordinal variable (e.g. a Likert scale) as a dependent variable if one is willing to assume that there is an underlying interval level scale and the difference between the observed ordinal scale and the underlying interval scale is due to random measurement error. Categorical or nominal data can be used as independent variables by converting them into dummy or binary variables; these are variables where the only valid scores are 0 and 1, with 1 signifying membership of a particular category and 0 otherwise.
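    Dummy coding of a categorical variable can be sketched as follows (the category labels are illustrative; in a regression, one category would normally be omitted as the reference category):

```python
# Convert a categorical value into dummy (0/1) variables: one indicator
# per category, scored 1 for membership and 0 otherwise.
def to_dummies(value, categories):
    return {c: int(value == c) for c in categories}

tenure_categories = ["owner-occupier", "council tenant", "private renter"]
print(to_dummies("council tenant", tenure_categories))
# {'owner-occupier': 0, 'council tenant': 1, 'private renter': 0}
```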

    The assumptions of linear regression can cause particular difficulties where the dependent variable is binary. The assumption that the relationship between the dependent and the independent variables is a straight line means that it can produce estimated values for the dependent variable of less than 0 or greater than 1. In this case it may be more appropriate to assume that the relationship between the dependent and the independent variables takes the form of an S-curve, where the impact on the dependent variable of a one-point increase in an independent variable becomes progressively less the closer the value of the dependent variable approaches 0 or 1. Logistic regression is an alternative form of regression which fits such an S-curve rather than a straight line. The technique can also be adapted to analyse multinomial non-interval level dependent variables, that is, variables which classify respondents into more than two categories.
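    The S-curve fitted by logistic regression keeps predicted values between 0 and 1; a minimal sketch, with invented coefficients:

```python
import math

# Logistic regression predicts p = 1 / (1 + exp(-(a + b*x))): however large
# or small a + b*x becomes, the prediction never leaves the (0, 1) range,
# and each one-point increase in x has progressively less effect as the
# prediction approaches 0 or 1.
def predicted_probability(a, b, x):
    return 1 / (1 + math.exp(-(a + b * x)))

a, b = -2.0, 0.5  # hypothetical intercept and slope
for x in (0, 4, 8):
    print(round(predicted_probability(a, b, x), 3))  # 0.119, 0.5, 0.881
```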

    The two statistical scores most commonly reported from the results of regression analyses are:

    A measure of variance explained: This summarises how well all the independent variables combined can account for the variation in respondents' scores on the dependent variable. The higher the measure, the more accurately we are able in general to estimate the correct value of each respondent's score on the dependent variable from knowledge of their scores on the independent variables.

    A parameter estimate: This shows how much the dependent variable will change on average, given a one unit change in the independent variable (while holding all other independent variables in the model constant). The parameter estimate has a positive sign if an increase in the value of the independent variable results in an increase in the value of the dependent variable. It has a negative sign if an increase in the value of the independent variable results in a decrease in the value of the dependent variable. If the parameter estimates are standardised, it is possible to compare the relative impact of different independent variables; those variables with the largest standardised estimates can be said to have the biggest impact on the value of the dependent variable.

    Regression also tests for the statistical significance of parameter estimates. A parameter estimate is said to be significant at the five per cent level if the values encompassed by its 95 per cent confidence interval (see also the section on sampling errors) are either all positive or all negative. This means that there is less than a five per cent chance that the association we have found between the dependent variable and the independent variable is simply the result of sampling error, rather than a relationship that actually exists in the general population.

    Factor Analysis

    Factor analysis is a statistical technique which aims to identify whether there are one or more apparent sources of commonality to the answers given by respondents to a set of questions. It ascertains the smallest number of factors (or dimensions) which can most economically summarise all of the variation found in the set of questions being analysed. Factors are established where respondents who give a particular answer to one question in the set, tend to give the same answer as each other to one or more of the other questions in the set. The technique is most useful when a relatively small number of factors is able to account for a relatively large proportion of the variance in all of the questions in the set.

    The technique produces a factor loading for each question (or variable) on each factor. Where questions have a high loading on the same factor then it will be the case that respondents who give a particular answer to one of these questions tend to give a similar answer to the other questions. The technique is most commonly used in attitudinal research to try to identify the underlying ideological dimensions which apparently structure attitudes towards the subject in question.
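    The idea can be illustrated with a rough sketch: four invented "questions" all driven by one underlying attitude, so a single factor captures most of the shared variation. Here, power iteration on the correlation matrix stands in for a full factor extraction; all data and figures are invented.

```python
import math
import random

# Generate answers to four correlated questions from one latent attitude.
random.seed(1)
n = 500
latent = [random.gauss(0, 1) for _ in range(n)]
answers = [[x + random.gauss(0, 0.5) for x in latent] for _ in range(4)]

def corr(a, b):
    """Pearson correlation between two score lists."""
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

R = [[corr(a, b) for b in answers] for a in answers]

# Power iteration finds the leading eigenvalue of R: the amount of the
# total variance (4 items) captured by the first factor.
v = [1.0] * 4
for _ in range(100):
    w = [sum(R[i][j] * v[j] for j in range(4)) for i in range(4)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
eigenvalue = sum(v[i] * sum(R[i][j] * v[j] for j in range(4))
                 for i in range(4))

print(round(eigenvalue / 4, 2))  # share of variance on the first factor
```

    Because the four items share one latent source, a single factor accounts for most of their joint variance, which is exactly the situation in which factor analysis is most useful.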

    International Social Survey Programme

    The International Social Survey Programme (ISSP) is run by a group of research organisations, each of which undertakes to field annually an agreed module of questions on a chosen topic area. Since 1985, an International Social Survey Programme module has been included in one of the British Social Attitudes self-completion questionnaires. Each module is chosen for repetition at intervals to allow comparisons both between countries (membership currently stands at 38) and over time. In 2001, the chosen subject was Social Networks, and the module was carried on the C version of the self-completion questionnaire (Qs.1–33).


    1. Until 1991 all British Social Attitudes samples were drawn from the Electoral Register (ER). However, following concern that this sampling frame might be deficient in its coverage of certain population subgroups, a ‘splicing’ experiment was conducted in 1991. We are grateful to the Market Research Development Fund for contributing towards the costs of this experiment. Its purpose was to investigate whether a switch to PAF would disrupt the time-series – for instance, by lowering response rates or affecting the distribution of responses to particular questions. In the event, it was concluded that the change from ER to PAF was unlikely to affect time trends in any noticeable ways, and that no adjustment factors were necessary. Since significant differences in efficiency exist between PAF and ER, and because we considered it untenable to continue to use a frame that is known to be biased, we decided to adopt PAF as the sampling frame for future British Social Attitudes surveys. For details of the PAF/ER ‘splicing’ experiment, see Lynn and Taylor (1995).

    2. In 1993 it was decided to mount a split-sample experiment designed to test the applicability of Computer-Assisted Personal Interviewing (CAPI) to the British Social Attitudes survey series. CAPI has been used increasingly over the past decade as an alternative to traditional interviewing techniques. As the name implies, CAPI involves the use of lap-top computers during the interview, with interviewers entering responses directly into the computer. One of the advantages of CAPI is that it significantly reduces both the amount of time spent on data processing and the number of coding and editing errors. Over a longer period, there could also be significant cost savings. There was, however, concern that a different interviewing technique might alter the distribution of responses and so affect the year-on-year consistency of British Social Attitudes data.

    Following the experiment, it was decided to change over to CAPI completely in 1994 (the self-completion questionnaire still being administered in the conventional way). The results of the experiment are discussed in The 11th Report (Lynn and Purdon, 1994).

    3. Interview times of less than 20 and more than 150 minutes were excluded as these were likely to be errors.

    4. An experiment was conducted on the 1991 British Social Attitudes survey (Jowell et al., 1992), which showed that sending advance letters to sampled addresses before fieldwork begins has very little impact on response rates. However, interviewers do find that an advance letter helps them to introduce the survey on the doorstep, and a majority of respondents have said that they preferred some advance notice. For these reasons, advance letters have been used on the British Social Attitudes surveys since 1991.

    5. Because of methodological experiments on scale development, the exact items detailed in this section have not been asked on all versions of the questionnaire each year.

    6. In 1994 only, this item was replaced by: Ordinary people get their fair share of the nation's wealth. [Wealth1]

    7. In constructing the scales, a decision had to be taken on how to treat missing values (‘Don't know’ and ‘Refused’/not answered). Respondents who had more than two missing values on the left–right scale, or more than three on the libertarian–authoritarian or welfarism scales, were excluded from that scale. For respondents with just a few missing values, ‘Don't knows’ were recoded to the midpoint of the scale, and ‘Refused’/not answered were recoded to the respondent's mean score on their valid items.

    DeVellis, R. F. (1991), ‘Scale development: theory and applications’, Applied Social Research Methods Series, 26, Newbury Park: Sage.
    Jowell, R., Brook, L., Prior, G. and Taylor, B. (1992), British Social Attitudes: the 9th Report, Aldershot: Dartmouth.
    Lynn, P. and Purdon, S. (1994), ‘Time-series and lap-tops: the change to computer-assisted interviewing’, in Jowell, R., Curtice, J., Brook, L. and Ahrendt, D. (eds.), British Social Attitudes: the 11th Report, Aldershot: Dartmouth.
    Lynn, P. and Taylor, B. (1995), ‘On the bias and variance of samples of individuals: a comparison of the Electoral Registers and Postcode Address File as sampling frames’, The Statistician, 44: 173–194.
    Spector, P. E. (1992), ‘Summated rating scale construction: an introduction’, Quantitative Applications in the Social Sciences, 82, Newbury Park: Sage.
    Thomson, K., Park, A., Jarvis, L., Bromley, C. and Stratford, N. (2001), British Social Attitudes 1999 survey: Technical Report, London: National Centre for Social Research.

    Appendix II: Notes on the Tabulations

    • Figures in the tables are from the 2001 British Social Attitudes survey unless otherwise indicated.
    • Tables are percentaged as indicated.
    • In tables, ‘*’ indicates less than 0.5 per cent but greater than zero, and ‘-’ indicates zero.
    • When findings based on the responses of fewer than 100 respondents are reported in the text, reference is generally made to the small base size.
    • Percentages equal to or greater than 0.5 have been rounded up in all tables (e.g. 0.5 per cent = one per cent, 36.5 per cent = 37 per cent).
    • In many tables the proportions of respondents answering “Don't know” or not giving an answer are omitted. This, together with the effects of rounding and weighting, means that percentages will not always add to 100 per cent.
    • The self-completion questionnaire was not completed by all respondents to the main questionnaire (see Appendix I). Percentage responses to the self-completion questionnaire are based on all those who completed it.
    • The bases shown in the tables (the number of respondents who answered the question) are printed in small italics. The bases are unweighted, unless otherwise stated.
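    The rounding and symbol conventions above can be expressed as a small formatting rule (a sketch; the function name is illustrative):

```python
import math

# Table conventions: '-' marks zero, '*' marks a positive value under
# 0.5 per cent, and values of 0.5 or more round half up (0.5 -> 1,
# 36.5 -> 37), unlike Python's default round(), which rounds half to even.
def table_entry(pct):
    if pct == 0:
        return "-"
    if pct < 0.5:
        return "*"
    return str(math.floor(pct + 0.5))

print([table_entry(p) for p in (0, 0.3, 0.5, 36.5)])
# ['-', '*', '1', '37']
```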

    Appendix III: The Questionnaires

    As explained in Appendix I, three different versions of the questionnaire (A, B and C) were administered, each with its own self-completion supplement. The diagram that follows shows the structure of the questionnaires and the topics covered (not all of which are reported on in this volume).

    The three interview questionnaires reproduced on the following pages are derived from the Blaise program in which they were written. For ease of reference, each item has been allocated a question number. Gaps in the numbering system indicate items that are essential components of the Blaise program but which are not themselves questions, and so have been omitted. In addition, we have removed the keying codes and inserted instead the percentage distribution of answers to each question. We have also included the SPSS variable name, in square brackets, beside each question. Above the questions we have included filter instructions. A filter instruction should be considered as staying in force until the next filter instruction. Percentages for the core questions are based on the total weighted sample, while those for questions in versions A, B or C are based on the appropriate weighted sub-samples. We reproduce first version A of the interview questionnaire in full; then those parts of version B and version C that differ. The three versions of the self-completion questionnaire follow, with those parts fielded in more than one version reproduced in one version only.

    The percentage distributions do not necessarily add up to 100 because of weighting and rounding, or for one or more of the following reasons:

    • Some sub-questions are filtered – that is, they are asked of only a proportion of respondents. In these cases the percentages add up (approximately) to the proportions who were asked them. Where, however, a series of questions is filtered, we have indicated the weighted base at the beginning of that series (for example, all employees), and throughout have derived percentages from that base.
    • At a few questions, respondents were invited to give more than one answer and so percentages may add to well over 100 per cent. These are clearly marked by interviewer instructions on the questionnaires.

    As reported in Appendix I, the 2001 British Social Attitudes self-completion questionnaire was not completed by 15 per cent of respondents who were successfully interviewed. The answers in the supplement have been percentaged on the base of those respondents who returned it. This means that the distribution of responses to questions asked in earlier years are comparable with those given in Appendix III of all earlier reports in this series except in The 1984 Report, where the percentages for the self-completion questionnaire need to be recalculated if comparisons are to be made.

