British Social Attitudes: The 23rd Report: Perspectives on a Changing Society


Edited by: Alison Park, John Curtice, Katarina Thomson, Miranda Phillips & Mark Johnson

    The National Centre for Social Research

    The National Centre for Social Research (NatCen) is an independent, non-profit social research organisation. It has a large professional staff together with its own interviewing and coding resources. Some of NatCen's work – such as the survey reported in this book – is initiated by NatCen itself and grant-funded by research councils or charitable foundations. Other work is initiated by government departments or quasi-government organisations to provide information on aspects of social or economic policy. NatCen also works frequently with other institutes and academics. Founded in 1969 and now Britain's largest social research organisation, NatCen has a high reputation for the standard of its work in both qualitative and quantitative research. NatCen has a Survey Methods Unit and, with the Department of Sociology, University of Oxford, houses the Centre for Research into Elections and Social Trends (CREST).

    The contributors

    Alex Bryson

    Principal Research Fellow at the Policy Studies Institute and the Manpower Fellow at the Centre for Economic Performance at the London School of Economics

    Elizabeth Clery

    Senior Researcher at NatCen and Co-Director of the British Social Attitudes survey series

    Rosemary Crompton

    Professor of Sociology at City University

    John Curtice

    Research Consultant at the Scottish Centre for Social Research and Professor of Politics at Strathclyde University

    Gabriella Elgenius

    Research Fellow in the Department of Sociology, University of Oxford

    Helen Fawcett

    Lecturer in Politics at Strathclyde University

    Stephen Fisher

    Lecturer in Political Sociology and Fellow of Trinity College, University of Oxford

    Conor Gearty

    Professor of Human Rights Law and Rausing Director of the Centre for the Study of Human Rights at the London School of Economics

    Anthony Heath

    Professor of Sociology at the University of Oxford

    Mark Johnson

    Senior Researcher at NatCen and Co-Director of the British Social Attitudes survey series

    Mansur Lalljee

    University Lecturer in Social Psychology and Fellow of Jesus College, University of Oxford

    Laurence Lessard-Phillips

    DPhil student in the Department of Sociology and Nuffield College, University of Oxford

    Clare Lyonette

    Research Officer at City University

    Sheila McLean

    Professor of Law and Ethics in Medicine at Glasgow University

    Jean Martin

    Senior Research Fellow in the Department of Sociology, University of Oxford

    Pippa Norris

    McGuire Lecturer in Comparative Politics at the John F Kennedy School of Government, Harvard University

    Rachel Ormston

    Senior Researcher at the Scottish Centre for Social Research, part of NatCen

    Alison Park

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Miranda Phillips

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    John Rigg

    Research Officer at the Centre for Analysis of Social Exclusion (CASE), an ESRC Research Centre at the London School of Economics

    Katarina Thomson

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Katrin Voltmer

    Senior Lecturer in Political Communication at the Institute of Communications Studies, University of Leeds



    List of Tables and Figures

    • Table 1.1 Prompted and unprompted class identity, 1965–2005
    • Table 1.2 Strength of party identification, 1987–2005
    • Table 1.3 Religious belonging, 1964–2005
    • Table 1.4 Trends in British national identity, 1996–2005
    • Table 1.5 Trends in ‘forced choice’ national identity, 1974–2003
    • Table 1.6 Sense of class community
    • Table 1.7 Sense of community with other party supporters
    • Table 1.8 Religion and sense of community
    • Table 1.9 National identity and sense of community
    • Table 1.10 Class identity, sense of community, and class attitudes
    • Table 1.11 Class identity and class attitudes, 1987–2005
    • Table 1.12 Party identity, sense of community and political attitudes
    • Table 1.13 Religious identity, sense of community and moral values
    • Table 1.14 The changing relation between religious identity and moral values, 1984–2005
    • Table 1.15 British identity and British attitudes
    • Table 2.1 Levels of support for voluntary euthanasia
    • Table 2.2 Levels of support for alternative forms of assisted dying
    • Table 2.3 Attitudes towards living wills and patient representatives
    • Table 2.4 Mean scores on euthanasia scale, by respondent characteristics and attitudes
    • Table 2.5 Attitudes to voluntary euthanasia, 1984–2005
    • Table 2.6 Attitudes towards assisted dying, 1995–2005
    • Table 3.1 Non-financial employment commitment, by year and sex
    • Table 3.2 Importance of extrinsic and intrinsic rewards of work, by year and sex
    • Table 3.3 Important attributes in a job, by year and sex
    • Table 3.4 Attributes of respondent's own job, by year and sex
    • Table 3.5 Time allocation preferences, by year, sex, and employment status
    • Table 3.6 ‘Work-life balance’, by sex and employment status
    • Table 3.7 Job to family conflict, by social class and sex
    • Table 4.1 Who should pay for care for elderly people, by country
    • Table 4.2 Belief in universal state funding of care for the elderly, by age and class
    • Table 4.3 Views on individual responsibility to save for care/pensions
    • Table 4.4 Views on compelling individuals to save for care/pensions
    • Table 4.5 Belief in government responsibility for paying for care/income in old age, by country and socio-economic class
    • Table 4.6 Belief in individual responsibility for saving for care/pensions, by socio-economic class
    • Table 4.7 Attitudes to individual responsibility for paying for care, by attitudes to selling homes
    • Table 4.8 Responsibility for paying for care/income in old age, by political orientation
    • Table 4.9 Agree/strongly agree individual responsible for saving for care/pensions, by political orientation
    • Table 4.10 Attitudes to providing 10 hours a week of care for parent(s)
    • Table 5.1 Respect for political opponents
    • Table 5.2 Socio-demographic characteristics and political respect
    • Table 5.3 Political orientations and political respect
    • Table 5.4 Newspaper endorsement in 2005 election and readers' opposed party
    • Table 5.5 Exposure to news media and political respect
    • Table 5.6 Frequency of political discussion with different kinds of people
    • Table 5.7 Proportion of discussants supporting the party respondent opposes, by type of conversation partner
    • Table 5.8 Effect of similarity and difference in discussant and respondents' views on political respect
    • Table 5.9 Regression model of political respect
    • Table 5.10 Mean level of political trust (out of four), by party opposed and political respect
    • Table 6.1 Trends in civic duty, 1991–2005
    • Table 6.2 Trends in trust in governments to place the needs of the nation above political party interests, 1987–2005
    • Table 6.3 Trend in government and electoral participation, 1997–2005
    • Table 6.4 Trends in system efficacy, 1987–2005
    • Table 6.5 Perceived difference between Conservative and Labour parties, 1964–2005
    • Table 6.6 Perceptions of party difference and interest in politics, 1997, 2001, 2005
    • Table 6.7 Political interest and electoral participation, 1997–2005
    • Table 6.8 Turnout, by political interest and perceptions of the parties, 2005
    • Table 6.9 Attitudes towards proportional representation, 1992–2005
    • Table 6.10 Turnout by political knowledge and electoral system
    • Table 6.11 Average difference in score given to most and least liked party, by political knowledge and electoral system
    • Figure 6.1 Trends in per cent with “a great deal” or “quite a lot” of interest in politics, 1986–2005
    • Figure 6.2 Trends in attitudes towards changing the electoral system, 1983–2005
    • Table 7.1 Proportions viewing different rights as important or not important to democracy
    • Table 7.2 Attitudes to the right to protest against the government, 1985–2005
    • Table 7.3 Attitudes to the rights of revolutionaries, 1985–2005
    • Table 7.4 Attitudes to presumptions of innocence, 1985–2005
    • Table 7.5 Attitudes to legal representation for suspects, 1990–2005
    • Table 7.6 Attitudes to identity cards, 1990–2005
    • Table 7.7 Attitudes to civil liberties, by age
    • Table 7.8 Per cent thinking people who wish to overthrow the government by revolution should definitely be allowed to hold public meetings, by age cohort
    • Table 7.9 Per cent thinking that public meetings to protest against the government should definitely be allowed, by party support, 1985–2005
    • Table 7.10 Per cent disagreeing that the police should be allowed to question suspects for up to a week without letting them see a solicitor, by party support, 1990–2005
    • Table 7.11 Per cent disagreeing that every adult should have to carry an identity card, by party, 1990–2005
    • Table 7.12 Views on civil liberties, by views on the risk of terror attack
    • Table 7.13 Per cent viewing anti-terrorist measures as unacceptable or a price worth paying
    • Table 7.14 Per cent thinking various measures are unacceptable, by views on whether people exaggerate the risk of terrorism
    • Table 7.15 Attitudes to trade-offs, by party identification
    • Table 7.16 Acceptability of trade-offs, by newspaper readership
    • Table 7.17 Factors significant in regression model for believing the trade-offs are definitely unacceptable
    • Table 7.18 Attitudes to international human rights law
    • Figure 7.1 Per cent agreeing the death penalty is the most appropriate sentence for some crimes, 1986–2005
    • Table 8.1 Workforce composition and union membership, 1998–2005
    • Table 8.2 The difference a union makes to the workplace, 1998 and 2005
    • Table 8.3 Management attitudes to union membership, 1998 and 2005
    • Table 8.4 Employee perceptions of union power at the workplace, 1989, 1998 and 2005
    • Table 8.5 Additive scale of employee perceptions of union effectiveness, 1998 and 2005
    • Table 8.6 The climate of employment relations, 1998 and 2005
    • Table 8.7 Likelihood that employees in non-union workplaces would join a union if there was one, 1998 and 2005
    • Figure 8.1 Union membership density among employees, 1983–2005
    • Table 9.1 Whether respondent has any pre-defined health condition or disability
    • Table 9.2 Proportions thinking a person with each impairment is disabled, by exposure to disability
    • Table 9.3 Proportions thinking a person with each impairment is disabled, by age and education
    • Table 9.4 Perceptions of prejudice against disabled people, 1998, 2000, 2005
    • Table 9.5 Views on extent of prejudice against disabled people, by exposure to disability
    • Table 9.6 Views on extent of prejudice against disabled people, by age and education
    • Table 9.7 Views on societal attitudes to disabled people and personal views on disabled people
    • Table 9.8 Societal and personal attitudes to disabled people, by exposure to disability
    • Table 9.9 Societal and personal attitudes to disabled people, by age and education
    • Table 9.10 Views on participation of disabled people, by exposure to disability
    • Table 9.11 Views on amount of prejudice against different impairment groups
    • Table 9.12 Proportion who think there is a lot of prejudice against different impairment groups, by exposure to disability
    • Table 9.13 Views on amount of prejudice against different impairment groups, by age and education
    • Table 9.14 Level of comfort by impairment group and situation
    • Table 9.15 Proportion who would not feel very comfortable if disabled person were to move in next door, by exposure to disability
    • Table 9.16 Proportion who would not feel comfortable if disabled person were to move in next door, by age and education
    • Figure 9.1 The relationship between inclusionary attitudes towards disability and scope of definition of disability


    A Changing Society

    The British Social Attitudes survey series began as long ago as 1983 and the country whose views and opinions the survey has charted and analysed regularly ever since is now very different from the one in which that first survey was conducted. Analysing what the consequences of some of those changes have been for public attitudes is, indeed, the leitmotiv of this, our 23rd annual Report, based primarily on the findings of the 2005 British Social Attitudes survey.

    To begin with, women are far more likely to be in employment now than they were in the 1980s. Chapter 3 examines whether this growth in female employment has been accompanied by a change in the attitudes of women towards work. It considers, too, how women – and men – react to the pressures of maintaining a satisfactory balance between work and life, given the need to combine the demands of work with family responsibilities.

    Meanwhile the internet has only come to be used widely within the last ten years. Yet it appears to be one of the biggest revolutions so far in the history of communications technology. It has certainly changed the way that many of us shop, bank, acquire information or undertake our work. But its impact on our social lives is less clear. Perhaps it means we spend more time alone with our computers and less time socialising with each other. Or perhaps the internet helps us keep in contact with friends and relatives and to get involved in local groups. Chapter 10 assesses whether or not users of the internet have become more socially isolated.

    Britain's population is ageing and is expected to do so further. This creates new challenges for public policy. One is how we should fund the cost of the care that older people may come to need. This question has occasioned one of the most heavily publicised differences in policy between the UK government and the devolved Scottish Executive. The former has decided that the amount of help someone living in England gets to pay for the cost of ‘personal care’ should depend on their means; the latter has decided such care should be provided ‘free’ to all who need it. Which of these two approaches is the more popular, and why, is the subject of Chapter 4.

    The ageing of Britain's population has also helped to sharpen the debate about a difficult moral dilemma. This is whether there are ever circumstances, such as a painful terminal illness, in which the law should allow someone to help another person to die. Chapter 2 examines our attitudes towards this dilemma and whether there is much support for changing the current position whereby giving such help is illegal.

    Meanwhile the proportion of the population who are disabled has increased. At the same time, government policy and the law have placed increasing emphasis on the need to include disabled people in the everyday life of the community. Attitudes towards disabled people are examined in Chapter 9, which asks in particular how far the public does, in fact, accept that they should be fully included in the life of their communities.

    Since 1997 Britain has had a Labour government, whereas the Conservatives were in power throughout the first fourteen years of the British Social Attitudes survey series. But it is a very different Labour government from its predecessors. For example, although it has enacted some legislation designed to strengthen the power of trade unions, it has been less inclined to meet the policy demands of the union movement. Chapter 8 considers how the trade union movement is viewed after a number of years of New Labour government and whether the advent of that government has helped to reverse the decline in membership and influence that the trade unions suffered during much of the 1980s and 1990s.

    The political environment has changed in other respects too. Systems of proportional representation have been introduced for European and devolved elections. Meanwhile, people have been inclined to stay away from the polls at election time. Turnout in the 2005 general election was only marginally higher than the record low recorded in 2001. Chapter 6 examines why people stayed away from the polls again in 2005, and considers whether the continued use of the first-past-the-post electoral system in elections to the House of Commons discourages some kinds of voters in particular from voting. Meanwhile, Chapter 5 assesses whether an electoral system that arguably encourages parties to emphasise their differences from each other serves to undermine our willingness to respect the views of supporters of political parties with which we disagree.

    But perhaps the biggest change of all in the political environment in recent years has been the increased concern with terrorism following the use of hijacked planes to demolish the Twin Towers in New York on 11th September 2001, and the use of suicide bombs in London on 7th July 2005. This concern has resulted in the passage of laws, such as the extension of the time that the police can hold a person without charging them, that some have argued threaten ‘fundamental’ civil rights. Chapter 7 undertakes an in-depth examination of how the public views the potential trade-off between civil liberties on the one hand and measures argued to reduce the threat of terrorism on the other.

    We begin our analysis, however, with an examination of what impact long-term social change may have had on our sense of identity. Over the last few decades, the development of competitive and ever-changing labour markets, together with the existence of the modern welfare state, is thought to have eroded our linkages with traditional social groups. Instead of inheriting a sense of identity with a particular class, religion, or political party from our parents we now make and choose our own identities – and these may have little to do with class, religion or party at all. Chapter 1 tests whether this argument really does ring true.

    Our Thanks

    British Social Attitudes could not take place without its many generous funders. The Gatsby Charitable Foundation (one of the Sainsbury Family Charitable Trusts) has provided core funding on a continuous basis since the survey's inception, and in so doing has ensured the survey's security and independence. A number of government departments have regularly funded modules of interest to them, while respecting the independence of the study. In 2005 we gratefully acknowledge the support of the Departments for Education and Skills, Health, Transport, Trade and Industry, and Work and Pensions. We are also grateful to the Disability Rights Commission for supporting a module on attitudes to disability.

    The Economic and Social Research Council (ESRC), the body primarily responsible for funding academic social science research in Britain, has regularly provided the funds needed to field modules on the survey. In 2005 it continued to support the participation of Britain in the International Social Survey Programme, a collaboration whereby surveys in over 40 countries field an identical module of questions in order to facilitate comparative research. In 2005 this module was about attitudes to work and the data collected in Britain forms the basis of Chapter 3. Meanwhile, in 2005, the ESRC also funded our participation in a second international collaboration, the Comparative Study of Electoral Systems project, together with modules on social identities, political respect, civil liberties and terrorism, and the impact of the internet.

    The Nuffield Foundation, a charitable foundation that supports a wide range of social research, has also provided invaluable support to the series. In 2005 it funded a module on attitudes towards euthanasia together with one on attitudes towards funding the needs of old age that was included in our sister survey, the Scottish Social Attitudes survey. This latter module provides much of the evidence reported in Chapter 4. Further information about the Scottish Social Attitudes survey itself can be found in Bromley et al. (2006).

    We would also like to thank Professor Richard Topf of London Metropolitan University for all his work in creating and maintaining an easy-to-use website that provides a fully searchable database of all the questions that have ever been carried on a British Social Attitudes survey, together with details of the pattern of responses to every question. This site provides an invaluable resource for those who want to know more than can be found in this report. It is located at

    The British Social Attitudes survey is a team effort. The research group that designs, directs and reports on the study is supported by complementary teams who implement the survey's sampling strategy and carry out data processing. Those teams in turn depend on fieldwork controllers, area managers and field interviewers who are responsible for all the interviewing, and without whose efforts the survey would not happen at all. The survey is heavily dependent too on administrative staff who compile, organise and distribute the survey's extensive documentation, for which particular thanks are due to Neil Barton and his colleagues in NatCen's administrative office in Brentwood. We are also grateful to Sandra Beeson in our computing department who expertly translates our questions into a computer assisted questionnaire, and to Roger Stafford who has the unenviable task of editing, checking and documenting the data. Meanwhile the raw data have to be transformed into a workable SPSS system file – a task that has for many years been performed with great care and efficiency by Ann Mair at the Social Statistics Laboratory at the University of Strathclyde. Many thanks are also due to David Mainwaring, Kate Gofton-Salmond and Emily Lawrence at our publishers, Sage.

    Finally, however, we must praise the people who anonymously gave of their time to answer our 2005 survey. They are the cornerstone of this enterprise. We hope that some of them might come across this volume and read about themselves and their fellow citizens with interest.

    The Editors
    Bromley, C., Curtice, J., McCrone, D. and Park, A. (eds.) (2006), Has Devolution Delivered?, Edinburgh: Edinburgh University Press
  • Appendix I: Technical Details of the Survey

    In 2005, the sample for the British Social Attitudes survey was split into four sections: versions A, B, C and D each made up a quarter of the sample. Depending on the number of versions in which it was included, each ‘module’ of questions was thus asked either of the full sample (4,268 respondents) or of a random quarter, half or three-quarters of the sample.

    The structure of the questionnaire is shown at the beginning of Appendix III.

    Sample Design

    The British Social Attitudes survey is designed to yield a representative sample of adults aged 18 or over. Since 1993, the sampling frame for the survey has been the Postcode Address File (PAF), a list of addresses (or postal delivery points) compiled by the Post Office.1

    For practical reasons, the sample is confined to those living in private households. People living in institutions (though not in private households at such institutions) are excluded, as are households whose addresses were not on the PAF.

    The sampling method involved a multi-stage design, with three separate stages of selection.

    Selection of Sectors

    At the first stage, postcode sectors were selected systematically from a list of all postal sectors in Great Britain. Before selection, any sectors with fewer than 500 addresses were identified and grouped together with an adjacent sector; in Scotland all sectors north of the Caledonian Canal were excluded (because of the prohibitive costs of interviewing there). Sectors were then stratified on the basis of:

    • 37 sub-regions
    • population density with variable banding used, in order to create three equal-sized strata per sub-region
    • ranking by percentage of homes that were owner-occupied.

    Two hundred and eighty-six postcode sectors were selected, with probability proportional to the number of addresses in each sector.

    Selection of Addresses

    Thirty addresses were selected in each of the 286 sectors. The issued sample was therefore 286 × 30 = 8,580 addresses, selected by starting from a random point on the list of addresses for each sector, and choosing each address at a fixed interval. The fixed interval was calculated for each sector in order to generate the correct number of addresses.
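    The random-start, fixed-interval procedure described above can be sketched in a few lines. This is an illustrative Python sketch, not NatCen's actual implementation; the function name and the toy address list are hypothetical:

```python
import random

def systematic_sample(addresses, n_required):
    """Select n_required addresses from a sector's address list using a
    random starting point and a fixed interval, as described in the text."""
    interval = len(addresses) / n_required   # fixed interval for this sector
    start = random.uniform(0, interval)      # random point on the list
    # Take the address at each successive interval (fractional positions
    # are truncated to list indices)
    return [addresses[int(start + i * interval)] for i in range(n_required)]

# Example: a sector with 900 addresses, from which 30 are selected
sector = [f"address_{k}" for k in range(900)]
sample = systematic_sample(sector, 30)
```

    Because every address falls within exactly one interval, each address in the sector has the same chance of selection (here 30/900).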

    The Multiple-Occupancy Indicator (MOI) available through PAF was used when selecting addresses in Scotland. The MOI shows the number of accommodation spaces sharing one address. Thus, if the MOI indicates more than one accommodation space at a given address, the chances of the given address being selected from the list of addresses would increase so that it matched the total number of accommodation spaces. The MOI is largely irrelevant in England and Wales, as separate dwelling units generally appear as separate entries on PAF. In Scotland, tenements with many flats tend to appear as one entry on PAF. However, even in Scotland, the vast majority of MOIs had a value of one. The remainder, which ranged between three and 13, were incorporated into the weighting procedures (described below).

    Selection of Individuals

    Interviewers called at each address selected from PAF and listed all those eligible for inclusion in the British Social Attitudes sample – that is, all persons currently aged 18 or over and resident at the selected address. The interviewer then selected one respondent using a computer-generated random selection procedure. Where there were two or more ‘dwelling units’ at the selected address, interviewers first had to select one dwelling unit using the same random procedure. They then followed the same procedure to select a person for interview within the selected dwelling unit.


    Weighting

    The British Social Attitudes survey has previously been weighted only to correct for the unequal selection of addresses, dwelling units (DU) and individuals. However, falling response in recent years prompted the introduction of non-response weights in 2005: in addition to the selection weights, a set of weights was generated to correct for any biases due to differential non-response. The final sample was then calibrated to match the population in terms of age, sex and region.

    Selection Weights

    Selection weights are required because not all the units covered in the survey had the same probability of selection. The weighting reflects the relative selection probabilities of the individual at the three main stages of selection: address, DU and individual. First, because addresses in Scotland were selected using the MOI, weights were needed to compensate for the greater probability of an address with an MOI of more than one being selected, compared to an address with an MOI of one. (This stage was omitted for the English and Welsh data.) Secondly, data were weighted to compensate for the fact that a DU at an address that contained a large number of DUs was less likely to be selected for inclusion in the survey than a DU at an address that contained fewer DUs. (We use this procedure because in most cases where the MOI is greater than one, the two stages will cancel each other out, resulting in more efficient weights.) Thirdly, data were weighted to compensate for the lower selection probabilities of adults living in large households, compared with those in small households.

    At each stage the selection weights were trimmed to avoid a small number of very high or very low weights in the sample; such weights would inflate standard errors, reducing the precision of the survey estimates and causing the weighted sample to be less efficient. Less than one per cent of the sample was trimmed at each stage.
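    Under this design, the three stage-wise selection probabilities combine multiplicatively, so the selection weight is their inverse product, with the tails of the weight distribution then trimmed. A minimal sketch (illustrative Python using numpy; the function names and percentile cut-offs are assumptions, not NatCen's code):

```python
import numpy as np

def selection_weight(moi, n_dwelling_units, n_adults):
    # Inverse of the product of the three stage-wise selection probabilities:
    # address (probability proportional to MOI in Scotland), dwelling unit
    # (1 / n_dwelling_units), individual (1 / n_adults). Outside Scotland
    # the MOI is effectively 1.
    return (1.0 / moi) * n_dwelling_units * n_adults

def trim_weights(weights, lower=1.0, upper=99.0):
    # Cap the top and bottom tails of the weight distribution so that a
    # handful of extreme weights cannot inflate standard errors.
    lo, hi = np.percentile(weights, [lower, upper])
    return np.clip(weights, lo, hi)
```

    The cancellation noted in the text is visible here: a Scottish address with an MOI of 3 containing 3 dwelling units contributes (1/3) × 3 = 1 at those two stages, which is why combining them yields more efficient weights.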

    Non-Response Model

    It is known that certain subgroups in the population are more likely to respond to surveys than others. These groups can end up over-represented in the sample, which can bias the survey estimates. Where information is available about non-responding households, the response behaviour of the sample members can be modelled and the results used to generate a non-response weight. This non-response weight is intended to reduce bias in the sample resulting from differential response to the survey.

    The data were modelled using logistic regression, with the dependent variable indicating whether or not the selected individual responded to the survey. Ineligible households2 were not included in the non-response modelling. A number of area level and interviewer observation variables were used to model response. Not all the variables examined were retained for the final model: variables not strongly related to a household's propensity to respond were dropped from the analysis.

    The variables found to be related to response were Government Office Region (GOR), the proportion of the local population from a minority ethnic group, and the proportion of households owner-occupied. The model shows that the propensity for a household not to respond increases if it is located in an area where a high proportion of the residents are from a non-white ethnic group. Response is also lower in areas where a low proportion of households are owner-occupied and where households are located in the West Midlands, London or the South. The full model is given in Table A.1 below.

    Table A.1 The final non-response model

    The non-response weight is calculated as the inverse of the predicted response probabilities saved from the logistic regression model. The non-response weight was then combined with the selection weights to create the final non-response weight. The top and bottom one per cent of the weight were trimmed before the weight was scaled to the achieved sample size (resulting in the weight being standardised around an average of one).
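    The inverse-probability step can be sketched as follows. This is an illustrative Python sketch using simulated data and scikit-learn's LogisticRegression; the covariates and sample here are invented stand-ins, not the published model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated area-level predictors standing in for region, % minority
# ethnic and % owner-occupied (hypothetical data, for illustration only)
X = rng.normal(size=(500, 3))
responded = rng.random(500) < 0.55   # roughly the survey's 55% response rate

# Model the propensity to respond, then weight each responding case by the
# inverse of its predicted response probability
model = LogisticRegression().fit(X, responded)
p_response = model.predict_proba(X)[:, 1]

nr_weight = 1.0 / p_response[responded]
nr_weight /= nr_weight.mean()   # standardise around an average of one
```

    In the survey itself this quantity is then combined with the selection weights and trimmed, as the text describes.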

    Calibration Weighting

    The final stage of the weighting was to adjust the final non-response weight so that the weighted respondent sample matched the population in terms of age, sex and region. Only adults aged 18 and over are eligible to take part in the survey; the data have therefore been weighted to the British population aged 18+ based on the 2004 mid-year population estimates from the Office for National Statistics/General Register Office for Scotland.

    The survey data were weighted to the marginal age/sex and GOR distributions using raking-ratio (or rim) weighting. As a result, the weighted data should exactly match the population across these three dimensions. This is shown in Table A.2.

    Table A.2 Weighted and unweighted sample distribution, by GOR, age and sex
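    Raking-ratio (rim) weighting adjusts the weights to one set of marginal totals at a time, cycling through the dimensions until every margin is matched. A minimal sketch (illustrative Python; the `rake` function and the toy sex/age margins are hypothetical, and every target category is assumed to be present in the sample):

```python
import numpy as np

def rake(weights, categories, targets, n_iter=50):
    """categories: one array per dimension giving each case's category;
    targets: one dict per dimension mapping category -> population share."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iter):
        for cats, target in zip(categories, targets):
            total = w.sum()
            # Scale every category so its weighted share hits the target
            factors = {cat: share * total / w[cats == cat].sum()
                       for cat, share in target.items()}
            w = w * np.array([factors[c] for c in cats])
    return w

# Toy example: match a 50/50 sex margin and a 60/40 age margin
sex = np.array(["m", "m", "f", "f", "f", "m"])
age = np.array(["young", "old", "young", "old", "young", "young"])
w = rake(np.ones(6), [sex, age],
         [{"m": 0.5, "f": 0.5}, {"young": 0.6, "old": 0.4}])
```

    Each pass rescales one margin exactly while leaving the overall total unchanged, so after a few cycles the weighted data match all the marginal distributions simultaneously.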

    The calibration weight is the final non-response weight to be used in the analysis of the 2005 survey; this weight has been scaled to the responding sample size. The range of the weights is given in Table A.3.

    Table A.3 Range of weights
    Effective Sample Size

    The effect of the sample design on the precision of survey estimates is indicated by the effective sample size (neff). The effective sample size measures the size of an (unweighted) simple random sample that would achieve the same precision (standard error) as the design being implemented. If the effective sample size is close to the actual sample size then we have an efficient design with a good level of precision. The lower the effective sample size is, the lower the level of precision. The efficiency of a sample is given by the ratio of the effective sample size to the actual sample size. Samples that select one person per household tend to have lower efficiency than samples that select all household members. The final calibrated non-response weights have an effective sample size (neff) of 3,494 and efficiency of 82 per cent.
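    The effective sample size described above is commonly computed with Kish's approximation, neff = (Σw)² / Σw², where w are the final weights; with equal weights neff equals the actual sample size, and unequal weights lower it. A short sketch (the weights shown are hypothetical):

```python
# Kish's effective sample size: neff = (sum of weights)^2 / (sum of squared weights)

def effective_sample_size(weights):
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

weights = [1.0, 1.0, 0.5, 1.5]   # hypothetical final weights
neff = effective_sample_size(weights)
efficiency = neff / len(weights)
```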

    All the percentages presented in this Report are based on weighted data.

    Questionnaire Versions

    Each address in each sector (sampling point) was allocated to the A, B, C or D portion of the sample. If one serial number was allocated to version A, the next was version B, the third version C and the fourth version D. Thus, each interviewer was allocated seven or eight cases from each of versions A, B, C and D. There were 2,145 issued addresses for each version.
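    The serial-number rotation described above amounts to a round-robin allocation of addresses to the four versions, which can be sketched as:

```python
# Round-robin allocation of serial numbers to questionnaire versions A-D.

def allocate_versions(n_addresses, versions="ABCD"):
    return [versions[i % len(versions)] for i in range(n_addresses)]

alloc = allocate_versions(8)   # → ['A', 'B', 'C', 'D', 'A', 'B', 'C', 'D']
```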


    Interviewing was mainly carried out between June and September 2005, with a small number of interviews taking place in October and November.

    Fieldwork was conducted by interviewers drawn from the National Centre for Social Research's regular panel and conducted using face-to-face computer-assisted interviewing. Interviewers attended a one-day briefing conference to familiarise them with the selection procedures and questionnaires.

    The mean interview length was 64 minutes for version A of the questionnaire, 73 minutes for version B, 75 minutes for version C and 68 minutes for version D.4 Interviewers achieved an overall response rate of 55 per cent. Details are shown in Table A.4.

    Table A.4 Response rate on British Social Attitudes, 2005
                                                 Number        %
    Addresses issued                              8,580
    Vacant, derelict and other out of scope         802
    In scope                                      7,778    100.0
    Interview achieved                            4,268     54.9
    Interview not achieved                        3,510     45.1
    Other non-response                              342      4.4

    1 ‘Refused’ comprises refusals before selection of an individual at the address, refusals to the office, refusal by the selected person, ‘proxy’ refusals (on behalf of the selected respondent) and broken appointments after which the selected person could not be recontacted

    2 ‘Non-contacted’ comprises households where no one was contacted and those where the selected person could not be contacted

    As in earlier rounds of the series, the respondent was asked to fill in a self-completion questionnaire which, whenever possible, was collected by the interviewer. Otherwise, the respondent was asked to post it to the National Centre for Social Research. If necessary, up to three postal reminders were sent to obtain the self-completion supplement.

    A total of 709 respondents (17 per cent of those interviewed) did not return their self-completion questionnaire. Version A of the self-completion questionnaire was returned by 83 per cent of respondents to the face-to-face interview, version B by 80 per cent, version C by 85 per cent and version D by 86 per cent. As in previous rounds, we judged that it was not necessary to apply additional weights to correct for non-response.

    Advance Letter

    Interviewers were supplied with letters describing the purpose of the survey and the coverage of the questionnaire, which they posted to sampled addresses before making any calls.5

    Analysis Variables

    A number of standard analyses have been used in the tables that appear in this Report. The analysis groups requiring further definition are set out below. For further details see Stafford and Thomson (2006).


    Region

    The dataset is classified by the 12 Government Office Regions.

    Standard Occupational Classification

    Respondents are classified according to their own occupation, not that of the ‘head of household’. Each respondent was asked about their current or last job, so that all respondents except those who had never worked were coded. Additionally, all job details were collected for all spouses and partners in work.

    With the 2001 survey, we began coding occupation to the new Standard Occupational Classification 2000 (SOC 2000) instead of the Standard Occupational Classification 1990 (SOC 90). The main socio-economic grouping based on SOC 2000 is the National Statistics Socio-Economic Classification (NS-SEC). However, to maintain time-series, some analysis has continued to use the older schemes based on SOC 90 – Registrar General's Social Class, Socio-Economic Group and the Goldthorpe schema.

    National Statistics Socio-Economic Classification (NS-SEC)

    The combination of SOC 2000 and employment status for current or last job generates the following NS-SEC analytic classes:

    • Employers in large organisations, higher managerial and professional
    • Lower professional and managerial; higher technical and supervisory
    • Intermediate occupations
    • Small employers and own account workers
    • Lower supervisory and technical occupations
    • Semi-routine occupations
    • Routine occupations

    The remaining respondents are grouped as “never had a job” or “not classifiable”. For some analyses, it may be more appropriate to classify respondents according to their current socio-economic status, which takes into account only their present economic position. In this case, in addition to the seven classes listed above, the remaining respondents not currently in paid work fall into one of the following categories: “not classifiable”, “retired”, “looking after the home”, “unemployed” or “others not in paid occupations”.

    Registrar General's Social Class

    As with NS-SEC, each respondent's Social Class is based on his or her current or last occupation. The combination of SOC 90 with employment status for current or last job generates the following six Social Classes:

    • I Professional occupations
    • II Managerial and technical occupations
    • III (Non-manual) Skilled non-manual occupations
    • III (Manual) Skilled manual occupations
    • IV Partly skilled occupations
    • V Unskilled occupations

    They are usually collapsed into four groups: I & II, III Non-manual, III Manual, and IV & V.

    Socio-Economic Group

    As with NS-SEC, each respondent's Socio-Economic Group (SEG) is based on his or her current or last occupation. SEG aims to bring together people with jobs of similar social and economic status, and is derived from a combination of employment status and occupation. The full SEG classification identifies 18 categories, but these are usually condensed into six groups:

    • Professionals, employers and managers
    • Intermediate non-manual workers
    • Junior non-manual workers
    • Skilled manual workers
    • Semi-skilled manual workers
    • Unskilled manual workers

    As with NS-SEC, the remaining respondents are grouped as “never had a job” or “not classifiable”.

    Goldthorpe Schema

    The Goldthorpe schema classifies occupations by their ‘general comparability’, considering such factors as sources and levels of income, economic security, promotion prospects, and level of job autonomy and authority. The Goldthorpe schema was derived from the SOC 90 codes combined with employment status. Two versions of the schema are coded: the full schema has 11 categories; the ‘compressed schema’ combines these into the five classes shown below.

    • Salariat (professional and managerial)
    • Routine non-manual workers (office and sales)
    • Petty bourgeoisie (the self-employed, including farmers, with and without employees)
    • Manual foremen and supervisors
    • Working class (skilled, semi-skilled and unskilled manual workers, personal service and agricultural workers)

    There is a residual category comprising those who have never had a job or who gave insufficient information for classification purposes.


    Standard Industrial Classification

    All respondents whose occupation could be coded were allocated a Standard Industrial Classification 2003 (SIC 03). Two-digit class codes are used. As with Social Class, SIC may be generated on the basis of the respondent's current occupation only, or on his or her most recently classifiable occupation.

    Party Identification

    Respondents can be classified as identifying with a particular political party on one of three counts: if they consider themselves supporters of that party, as closer to it than to others, or as more likely to support it in the event of a general election (responses are derived from Qs. 237–239). The three groups are generally described respectively as partisans, sympathisers and residual identifiers. In combination, the three groups are referred to as ‘identifiers’.

    Attitude Scales

    Since 1986, the British Social Attitudes surveys have included two attitude scales which aim to measure where respondents stand on certain underlying value dimensions – left–right and libertarian–authoritarian.6 Since 1987 (except 1990), a similar scale on ‘welfarism’ has also been included. Some of the items in the welfarism scale were changed in 2000–2001. The current version of the scale is listed below.

    A useful way of summarising the information from a number of questions of this sort is to construct an additive index (DeVellis, 1991; Spector, 1992). This approach rests on the assumption that there is an underlying – ‘latent’ – attitudinal dimension which characterises the answers to all the questions within each scale. If so, scores on the index are likely to be a more reliable indication of the underlying attitude than the answers to any one question.

    Each of these scales consists of a number of statements to which the respondent is invited to “agree strongly”, “agree”, “neither agree nor disagree”, “disagree” or “disagree strongly”.

    The items are:

    Left-Right Scale

    Government should redistribute income from the better off to those who are less well off. [Redistrb]

    Big business benefits owners at the expense of workers. [BigBusnN]

    Ordinary working people do not get their fair share of the nation's wealth. [Wealth]7

    There is one law for the rich and one for the poor. [RichLaw]

    Management will always try to get the better of employees if it gets the chance. [Indust4]

    Libertarian–Authoritarian Scale

    Young people today don't have enough respect for traditional British values. [TradVals]

    People who break the law should be given stiffer sentences. [StifSent]

    For some crimes, the death penalty is the most appropriate sentence. [DeathApp]

    Schools should teach children to obey authority. [Obey]

    The law should always be obeyed, even if a particular law is wrong. [WrongLaw]

    Censorship of films and magazines is necessary to uphold moral standards. [Censor]

    Welfarism Scale

    The welfare state encourages people to stop helping each other. [WelfHelp]

    The government should spend more money on welfare benefits for the poor, even if it leads to higher taxes. [MoreWelf]

    Around here, most unemployed people could find a job if they really wanted one. [UnempJob]

    Many people who get social security don't really deserve any help. [SocHelp]

    Most people on the dole are fiddling in one way or another. [DoleFidl]

    If welfare benefits weren't so generous, people would learn to stand on their own two feet. [WelfFeet]

    Cutting welfare benefits would damage too many people's lives. [DamLives]

    The creation of the welfare state is one of Britain's proudest achievements. [ProudWlf]

    The indices for the three scales are formed by scoring the leftmost, most libertarian or most pro-welfare position as 1 and the rightmost, most authoritarian or most anti-welfarist position as 5. The “neither agree nor disagree” option is scored as 3. The scores to all the questions in each scale are added and then divided by the number of items in the scale, giving indices ranging from 1 (leftmost, most libertarian, most pro-welfare) to 5 (rightmost, most authoritarian, most anti-welfare). The scores on the three indices have been placed on the dataset.8
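    The index construction can be sketched as follows. This is an illustrative reconstruction, not the exact derivation used on the dataset: the item names and answers are hypothetical, and items worded in the opposite direction to the scale's low pole are assumed to be reverse-coded (6 minus the recorded score) before averaging.

```python
# Sketch of an additive attitude index: score items 1-5, reverse-code
# oppositely worded items, then average across the items in the scale.

def scale_index(responses, reverse_items):
    """responses: {item: recorded score 1..5}; reverse_items: items whose
    recorded coding runs opposite to the index direction (assumption)."""
    scores = [6 - v if item in reverse_items else v
              for item, v in responses.items()]
    return sum(scores) / len(scores)

# Hypothetical answers to three welfarism items
resp = {"MoreWelf": 2, "WelfHelp": 4, "DoleFidl": 3}
idx = scale_index(resp, reverse_items={"MoreWelf"})
```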

    The scales have been tested for reliability (as measured by Cronbach's alpha). The Cronbach's alpha (unstandardised items) for the scales in 2005 are 0.80 for the left-right scale, 0.80 for the ‘welfarism’ scale and 0.75 for the libertarian-authoritarian scale. This level of reliability can be considered “very good” for the left-right and welfarism scales and “respectable” for the libertarian-authoritarian scale (DeVellis, 1991: 85).
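    Cronbach's alpha is computed from the item variances and the variance of the summed scale: alpha = k/(k − 1) × (1 − Σσ²ᵢ / σ²ₜₒₜₐₗ). A self-contained sketch with hypothetical item data:

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of responses per item, aligned across respondents."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]   # scale score per respondent
    item_var = sum(variance(vals) for vals in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three hypothetical items answered by five respondents
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 4, 4],
         [1, 3, 3, 3, 5]]
alpha = cronbach_alpha(items)
```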

    Other Analysis Variables

    These are taken directly from the questionnaire and to that extent are self-explanatory. The principal ones are:

    • Sex (Q. 41)
    • Age (Q. 42)
    • Household income (Q. 1394)
    • Economic position (Q. 993)
    • Religion (Q. 1143)
    • Highest educational qualification obtained (Qs. 1273–1274)
    • Marital status (Q. 135)
    • Benefits received (Qs. 1349–1387)
    Sampling Errors

    No sample precisely reflects the characteristics of the population it represents, because of both sampling and non-sampling errors. If a sample were designed as a random sample (if every adult had an equal and independent chance of inclusion in the sample) then we could calculate the sampling error of any percentage, p, using the formula:

        s.e. (p) = √(p(100 − p) / n)

    where n is the number of respondents on which the percentage is based. Once the sampling error had been calculated, it would be a straightforward exercise to calculate a confidence interval for the true population percentage. For example, a 95 per cent confidence interval would be given by the formula:

        p ± 1.96 × s.e. (p)

    Clearly, for a simple random sample (srs), the sampling error depends only on the values of p and n. However, simple random sampling is almost never used in practice because of its inefficiency in terms of time and cost.
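    For a simple random sample, the sampling error and 95 per cent confidence interval for a percentage can be computed directly:

```python
import math

# Simple random sampling error and 95% confidence interval for a
# percentage p observed among n respondents.

def srs_se(p, n):
    return math.sqrt(p * (100 - p) / n)

def confidence_interval(p, n, z=1.96):
    se = srs_se(p, n)
    return (p - z * se, p + z * se)

lo, hi = confidence_interval(50.0, 1000)   # p = 50 is the widest case
```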

    As noted above, the British Social Attitudes sample, like that drawn for most large-scale surveys, was clustered according to a stratified multi-stage design into 286 postcode sectors (or combinations of sectors). With a complex design like this, the sampling error of a percentage giving a particular response is not simply a function of the number of respondents in the sample and the size of the percentage; it also depends on how that percentage response is spread within and between sample points.

    The complex design may be assessed relative to simple random sampling by calculating a range of design factors (DEFTs) associated with it, where:

        DEFT = √(variance of estimator with complex design / variance of estimator with srs design)

    and represents the multiplying factor to be applied to the simple random sampling error to produce its complex equivalent. A design factor of one means that the complex sample has achieved the same precision as a simple random sample of the same size. A design factor greater than one means the complex sample is less precise than its simple random sample equivalent. If the DEFT for a particular characteristic is known, a 95 per cent confidence interval for a percentage may be calculated using the formula:

        p ± 1.96 × DEFT × s.e. (p)
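    Applying a DEFT simply inflates the simple random sampling error before forming the interval, as the following sketch shows (the percentages, sample size and DEFT values are hypothetical):

```python
import math

# Design-factor-adjusted confidence interval: p ± z * DEFT * srs s.e.(p)

def complex_ci(p, n, deft, z=1.96):
    se_srs = math.sqrt(p * (100 - p) / n)
    half = z * deft * se_srs
    return (p - half, p + half)

narrow = complex_ci(40.0, 2000, 1.0)   # DEFT of one: srs precision
wide = complex_ci(40.0, 2000, 1.5)     # DEFT above one: wider interval
```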

    Calculations of sampling errors and design effects were made using the statistical analysis package Stata.

    Table A.5 gives examples of the confidence intervals and DEFTs calculated for a range of different questions. Most background variables were fielded on the whole sample, whereas many attitudinal variables were asked only of a half or quarter of the sample; some were asked on the interview questionnaire and some on the self-completion supplement. The table shows that most of the questions asked of all sample members have a confidence interval of around plus or minus two to three per cent of the survey percentage. This means that we can be 95 per cent certain that the true population percentage is within two to three per cent (in either direction) of the percentage we report.

    Variables with much larger variation are, as might be expected, those closely related to the geographic location of the respondent (for example, whether they live in a big city, a small town or a village). Here, the variation may be as large as five or six per cent either way around the percentage found on the survey. Consequently, the design effects calculated for these variables in a clustered sample will be greater than the design effects calculated for variables less strongly associated with area. Also, sampling errors for percentages based only on respondents to just one of the versions of the questionnaire, or on subgroups within the sample, are larger than they would have been had the questions been asked of everyone.

    Table A.5 Complex standard errors and confidence intervals of selected variables

    Analysis Techniques

    Regression analysis aims to summarise the relationship between a ‘dependent’ variable and one or more ‘independent’ variables. It shows how well we can estimate a respondent's score on the dependent variable from knowledge of their scores on the independent variables. It is often undertaken to support a claim that the phenomena measured by the independent variables cause the phenomenon measured by the dependent variable. However, the causal ordering, if any, between the variables cannot be verified or falsified by the technique. Causality can only be inferred through special experimental designs or through assumptions made by the analyst.

    All regression analysis assumes that the relationship between the dependent and each of the independent variables takes a particular form. In linear regression, it is assumed that the relationship can be adequately summarised by a straight line. This means that a one percentage point increase in the value of an independent variable is assumed to have the same impact on the value of the dependent variable on average, irrespective of the previous values of those variables.

    Strictly speaking the technique assumes that both the dependent and the independent variables are measured on an interval level scale, although it may sometimes still be applied even where this is not the case. For example, one can use an ordinal variable (e.g. a Likert scale) as a dependent variable if one is willing to assume that there is an underlying interval level scale and the difference between the observed ordinal scale and the underlying interval scale is due to random measurement error. Often the answers to a number of Likert-type questions are averaged to give a dependent variable that is more like a continuous variable. Categorical or nominal data can be used as independent variables by converting them into dummy or binary variables; these are variables where the only valid scores are 0 and 1, with 1 signifying membership of a particular category and 0 otherwise.
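    The conversion of a categorical variable into dummy variables can be sketched as follows. One category is conventionally left out as the reference group (an assumption here; the paragraph above does not specify the convention), and the example categories are hypothetical:

```python
# Convert a categorical variable into 0/1 dummy variables, omitting one
# category as the reference group.

def dummies(values, reference):
    cats = sorted(set(values) - {reference})
    return {c: [1 if v == c else 0 for v in values] for c in cats}

region = ["north", "south", "south", "wales", "north"]  # hypothetical data
d = dummies(region, reference="north")
```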

    The assumptions of linear regression cause particular difficulties where the dependent variable is binary. The assumption that the relationship between the dependent and the independent variables is a straight line means that it can produce estimated values for the dependent variable of less than 0 or greater than 1. In this case it may be more appropriate to assume that the relationship between the dependent and the independent variables takes the form of an S-curve, where the impact on the dependent variable of a one-point increase in an independent variable becomes progressively less the closer the value of the dependent variable approaches 0 or 1. Logistic regression is an alternative form of regression which fits such an S-curve rather than a straight line. The technique can also be adapted to analyse multinomial non-interval level dependent variables, that is, variables which classify respondents into more than two categories.
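    The S-curve in question is the logistic function. A short sketch illustrates why its predictions stay between 0 and 1 and why a one-unit change in an independent variable matters less near the extremes (the intercept and slope values are arbitrary):

```python
import math

# The logistic S-curve fitted by logistic regression: predicted
# probabilities are bounded by 0 and 1, and the effect of a one-unit
# change in x shrinks as the curve approaches either bound.

def logistic(x, intercept=0.0, slope=1.0):
    return 1.0 / (1.0 + math.exp(-(intercept + slope * x)))

mid = logistic(0.5) - logistic(-0.5)   # one-unit change near the middle
tail = logistic(4.5) - logistic(3.5)   # same one-unit change in the tail
```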

    The two statistical scores most commonly reported from the results of regression analyses are:

    • A measure of variance explained: This summarises how well all the independent variables combined can account for the variation in respondent's scores in the dependent variable. The higher the measure, the more accurately we are able in general to estimate the correct value of each respondent's score on the dependent variable from knowledge of their scores on the independent variables.
    • A parameter estimate: This shows how much the dependent variable will change on average, given a one-unit change in the independent variable (while holding all other independent variables in the model constant). The parameter estimate has a positive sign if an increase in the value of the independent variable results in an increase in the value of the dependent variable. It has a negative sign if an increase in the value of the independent variable results in a decrease in the value of the dependent variable. If the parameter estimates are standardised, it is possible to compare the relative impact of different independent variables; those variables with the largest standardised estimates can be said to have the biggest impact on the value of the dependent variable.

      Regression also tests for the statistical significance of parameter estimates. A parameter estimate is said to be significant at the five per cent level if the range of the values encompassed by its 95 per cent confidence interval (see also section on sampling errors) are either all positive or all negative. This means that there is less than a five per cent chance that the association we have found between the dependent variable and the independent variable is simply the result of sampling error and does not reflect a relationship that actually exists in the general population.
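    The two quantities described above can be illustrated with a one-predictor linear regression computed from the usual least-squares formulas; the data are hypothetical:

```python
# One-predictor least-squares regression, returning the parameter
# estimate (slope), the intercept, and the variance explained (R-squared).

def simple_regression(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

slope, intercept, r2 = simple_regression([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.2])
```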

    Factor Analysis

    Factor analysis is a statistical technique which aims to identify whether there are one or more apparent sources of commonality to the answers given by respondents to a set of questions. It ascertains the smallest number of factors (or dimensions) which can most economically summarise all of the variation found in the set of questions being analysed. Factors are established where respondents who give a particular answer to one question in the set, tend to give the same answer as each other to one or more of the other questions in the set.

    The technique is most useful when a relatively small number of factors are able to account for a relatively large proportion of the variance in all of the questions in the set. The technique produces a factor loading for each question (or variable) on each factor. Where questions have a high loading on the same factor, then it will be the case that respondents who give a particular answer to one of these questions tend to give a similar answer to the other questions. The technique is most commonly used in attitudinal research to try to identify the underlying ideological dimensions which apparently structure attitudes towards the subject in question.
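    As a rough illustration of how loadings arise, the first factor of a correlation matrix can be approximated by its leading eigenvector, found here by power iteration. This is a principal-component-style sketch under simplifying assumptions, not the factor analysis procedure used in the Report, and the correlation matrix is hypothetical:

```python
# Approximate first-factor loadings: leading eigenvector of the
# correlation matrix (by power iteration), scaled by the square root of
# its eigenvalue.

def first_factor_loadings(corr, n_iter=200):
    n = len(corr)
    v = [1.0] * n
    for _ in range(n_iter):
        v = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    eigval = sum(v[i] * sum(corr[i][j] * v[j] for j in range(n))
                 for i in range(n))
    return [x * eigval ** 0.5 for x in v]

corr = [[1.0, 0.6, 0.5],     # hypothetical correlations among three items
        [0.6, 1.0, 0.4],
        [0.5, 0.4, 1.0]]
loadings = first_factor_loadings(corr)
```

    All three items load strongly on the single factor here, which is the pattern one looks for when a small number of factors summarise a question set.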

    International Social Survey Programme

    The International Social Survey Programme (ISSP) is run by a group of research organisations, each of which undertakes to field annually an agreed module of questions on a chosen topic area. Since 1985, an International Social Survey Programme module has been included in one of the British Social Attitudes self-completion questionnaires. Each module is chosen for repetition at intervals to allow comparisons both between countries (membership currently stands at over 40) and over time. In 2005, the chosen subject was Work Orientations, and the module was carried on the A version of the self-completion questionnaire (Qs. 1–36).


    1. Until 1991 all British Social Attitudes samples were drawn from the Electoral Register (ER). However, following concern that this sampling frame might be deficient in its coverage of certain population subgroups, a ‘splicing’ experiment was conducted in 1991. We are grateful to the Market Research Development Fund for contributing towards the costs of this experiment. Its purpose was to investigate whether a switch to PAF would disrupt the time-series – for instance, by lowering response rates or affecting the distribution of responses to particular questions. In the event, it was concluded that the change from ER to PAF was unlikely to affect time trends in any noticeable ways, and that no adjustment factors were necessary. Since significant differences in efficiency exist between PAF and ER, and because we considered it untenable to continue to use a frame that is known to be biased, we decided to adopt PAF as the sampling frame for future British Social Attitudes surveys. For details of the PAF/ER ‘splicing’ experiment, see Lynn and Taylor (1995).

    2. This includes households not containing any adults aged 18 and over, vacant dwelling units, derelict dwelling units, non-resident addresses and other deadwood.

    3. In 1993 it was decided to mount a split-sample experiment designed to test the applicability of Computer-Assisted Personal Interviewing (CAPI) to the British Social Attitudes survey series. CAPI has been used increasingly over the past decade as an alternative to traditional interviewing techniques. As the name implies, CAPI involves the use of lap-top computers during the interview, with interviewers entering responses directly into the computer. One of the advantages of CAPI is that it significantly reduces both the amount of time spent on data processing and the number of coding and editing errors. There was, however, concern that a different interviewing technique might alter the distribution of responses and so affect the year-on-year consistency of British Social Attitudes data.

    Following the experiment, it was decided to change over to CAPI completely in 1994 (the self-completion questionnaire still being administered in the conventional way). The results of the experiment are discussed in The 11th Report (Lynn and Purdon, 1994).

    4. Interview times recorded as less than 20 minutes were excluded, as these timings were likely to be errors.

    5. An experiment was conducted on the 1991 British Social Attitudes survey (Jowell et al., 1992) which showed that sending advance letters to sampled addresses before fieldwork begins has very little impact on response rates. However, interviewers do find that an advance letter helps them to introduce the survey on the doorstep, and a majority of respondents have said that they preferred some advance notice. For these reasons, advance letters have been used on the British Social Attitudes surveys since 1991.

    6. Because of methodological experiments on scale development, the exact items detailed in this section have not been asked on all versions of the questionnaire each year.

    7. In 1994 only, this item was replaced by: Ordinary people get their fair share of the nation's wealth. [Wealth1]

    8. In constructing the scale, a decision had to be taken on how to treat missing values (‘Don't knows,’ ‘Refused’ and ‘Not answered’). Respondents who had more than two missing values on the left-right scale and more than three missing values on the libertarian-authoritarian and welfarism scale were excluded from that scale. For respondents with just a few missing values, ‘Don't knows’ were recoded to the midpoint of the scale and ‘Refused’ or ‘Not answered’ were recoded to the scale mean for that respondent on their valid items.

    DeVellis, R.F. (1991), ‘Scale development: theory and applications’, Applied Social Research Methods Series, 26, Newbury Park: Sage
    Jowell, R., Brook, L., Prior, G. and Taylor, B. (1992), British Social Attitudes: the 9th Report, Aldershot: Dartmouth
    Lynn, P. and Purdon, S. (1994), ‘Time-series and lap-tops: the change to computer-assisted interviewing’, in Jowell, R., Curtice, J., Brook, L. and Ahrendt, D. (eds.), British Social Attitudes: the 11th Report, Aldershot: Dartmouth
    Lynn, P. and Taylor, B. (1995), ‘On the bias and variance of samples of individuals: a comparison of the Electoral Registers and Postcode Address File as sampling frames’, The Statistician, 44: 173–194
    Spector, P.E. (1992), ‘Summated rating scale construction: an introduction’, Quantitative Applications in the Social Sciences, 82, Newbury Park: Sage
    Stafford, R. and Thomson, K. (2006), British Social Attitudes and Young People's Social Attitudes surveys 2003: Technical Report, London: National Centre for Social Research

    Appendix II: Notes on the Tabulations in Chapters

    • Figures in the tables are from the 2005 British Social Attitudes survey unless otherwise indicated.
    • Tables are percentaged as indicated by the percentage signs.
    • In tables, ‘*’ indicates less than 0.5 per cent but greater than zero, and ‘–’ indicates zero.
    • When findings based on the responses of fewer than 100 respondents are reported in the text, reference is made to the small base size.
    • Percentages equal to or greater than 0.5 have been rounded up (e.g. 0.5 per cent = one per cent; 36.5 per cent = 37 per cent).
    • In many tables the proportions of respondents answering “Don't know” or not giving an answer are not shown. This, together with the effects of rounding and weighting, means that percentages will not always add to 100 per cent.
    • The self-completion questionnaire was not completed by all respondents to the main questionnaire (see Appendix I). Percentage responses to the self-completion questionnaire are based on all those who completed it.
    • The bases shown in the tables (the number of respondents who answered the question) are printed in small italics. The bases are unweighted, unless otherwise stated.

    Appendix III: The Questionnaires

    As explained in Appendix I, four different versions of the questionnaire (A, B, C and D) were administered, each with its own self-completion supplement. The diagram that follows shows the structure of the questionnaires and the topics covered (not all of which are reported on in this volume).

    The four interview questionnaires reproduced on the following pages are derived from the Blaise computer program in which they were written. For ease of reference, each item has been allocated a question number. Gaps in the numbering system indicate items that are essential components of the Blaise program but which are not themselves questions, and so have been omitted. In addition, we have removed the keying codes and inserted instead the percentage distribution of answers to each question. We have also included the SPSS variable name, in square brackets, at each question. Above the questions we have included filter instructions. A filter instruction should be considered as staying in force until the next filter instruction. Percentages for the core questions are based on the total weighted sample, while those for questions in versions A, B, C or D are based on the appropriate weighted sub-samples.

    The four versions of the self-completion questionnaire follow. We begin by reproducing version A of the interview questionnaire in full; then those parts of versions B, C and D that differ.

    The percentage distributions do not necessarily add up to 100 because of weighting and rounding, or for one or more of the following reasons:

    • Some sub-questions are filtered – that is, they are asked of only a proportion of respondents. In these cases the percentages add up (approximately) to the proportions who were asked them. Where, however, a series of questions is filtered, we have indicated the reduced weighted base (for example, all employees), and have derived percentages from that base.
    • At a few questions, respondents were invited to give more than one answer and so percentages may add to well over 100 per cent. These are clearly marked by interviewer instructions on the questionnaires.

    As reported in Appendix I, the 2005 British Social Attitudes self-completion questionnaire was not completed by 17 per cent of respondents who were successfully interviewed. The answers in the supplement have been percentaged on the base of those respondents who returned it. This means that the distribution of responses to questions asked in earlier years are comparable with those given in Appendix III of all earlier reports in this series except in The 1984 Report, where the percentages for the self-completion questionnaire need to be recalculated if comparisons are to be made.
