British Social Attitudes, The 20th Report: Continuity and Change Over Two Decades


Edited by: Alison Park, John Curtice, Katarina Thomson, Lindsey Jarvis & Catherine Bromley

    The National Centre for Social Research

    The National Centre for Social Research (NatCen) is an independent, non-profit social research institute. It has a large professional staff together with its own interviewing and coding resources. Some of NatCen's work — such as the survey reported in this book — is initiated by the institute itself and grant-funded by research councils or foundations. Other work is initiated by government departments, local authorities or quasi-government organisations to provide information on aspects of social or economic policy. NatCen also works frequently with other institutes and academics. Founded in 1969 and now Britain's largest social research institute, NatCen has a high reputation for the standard of its work in both qualitative and quantitative research. NatCen has a Survey Methods Centre and, with the Department of Sociology, University of Oxford, houses the Centre for Research into Elections and Social Trends (CREST).

    The contributors

    Arturo Alvarez Rosete

    Research Officer at the King's Fund

    John Appleby

    Chief Economist at the King's Fund

    Michaela Brockmann

    Research Officer at City University

    Catherine Bromley

    Senior Researcher at NatCen, Scotland and Co-Director of the British Social Attitudes survey series

    Ian Christie

    Associate of the New Economics Foundation and visiting professor at the Centre for Environmental Strategy, University of Surrey

    Rosemary Crompton

    Professor of Sociology at City University

    John Curtice

    Research Consultant at NatCen, Scotland, Deputy Director of CREST, and Professor of Politics at Strathclyde University

    Geoffrey Evans

    Professor and Official Fellow in Politics, Nuffield College Oxford

    Sonia Exley

    Researcher at NatCen and Co-Director of the British Social Attitudes survey series

    Steve Fisher

    Lecturer in Political Sociology and Fellow of Trinity College Oxford

    Anthony Heath

    Professor of Sociology at the University of Oxford and Co-Director of CREST

    Lindsey Jarvis

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Alison Park

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Catherine Rothon

    Research Officer for CREST at the University of Oxford

    Tom Sefton

    Research Fellow at the ESRC Research Centre for Analysis of Social Exclusion (CASE) at the London School of Economics

    Ben Seyd

    Senior Research Fellow at the Constitution Unit, University College London

    Paula Surridge

    Lecturer in Sociology at the University of Bristol

    Katarina Thomson

    Research Director at NatCen and Co-Director of the British Social Attitudes survey series

    Richard D Wiggins

    Professor of Social Statistics at City University

    Ted Wragg

    Professor of Education at Exeter University



    List of Tables and Figures

    • Chapter 1
      • Table 1.1 Changing attitudes towards welfare spending and redistribution by party identification, 1987–2002 6
      • Table 1.2 Public attitudes towards welfare spending on poor or vulnerable groups 8
      • Table 1.3 Changing attitudes towards welfare spending and redistribution by age group, 1987–2002 9
      • Table 1.4 Changing attitudes towards spending on public services by age group, 1985–1996 10
      • Table 1.5 First or second priorities for extra government spending, 1983–2002 11
      • Table 1.6 Changes in overall spending priorities by sub-group, 1983–2002 12
      • Table 1.7 Changes in priorities for social benefits spending by sub-group, 1983–2001 13
      • Table 1.8 Attitudes towards the benefits system by party identification, 1987–2002 17
      • Table 1.9 Attitudes towards the social security system by age group, 1987–2002 18
      • Table 1.10 Attitudes towards spending on welfare benefits for the poor by related attitudes 19
      • Table 1.11 Attitudes towards higher public spending by use of private welfare, 2001 21
      • Table 1.12 Attitudes towards government's responsibility for and spending on the elderly 22
      • Table 1.13 Main expected source of income in retirement 23
      • Figure 1.1 Dissatisfaction with health and education and attitudes towards public spending, 1983–2002 4
      • Figure 1.2 Attitudes towards spending on welfare benefits and redistribution, 1986–2002 5
      • Figure 1.3 Benefits for the unemployed are too high or too low, 1983–2002 15
      • Figure 1.4 “If welfare benefits weren't so generous, people would learn to stand on their own two feet”, 1987–2002 16
      • Figure 1.5 “Around here, most unemployed people could find a job if they really wanted one”, 1987–2002 16
      • Figure 1.6 Attitudes of private welfare users towards spending on the welfare state, 1986–2002 20
    • Chapter 2
      • Table 2.1 Satisfaction with the NHS, 1983–2002 31
      • Table 2.2 Use of NHS services within last year and satisfaction with NHS 33
      • Table 2.3 Satisfaction with the overall running of the NHS by birth cohort, 1983 and 2002 36
      • Table 2.4 Attitudes towards a ‘two-tier’ NHS, 1983–2002 39
      • Figure 2.1 Satisfaction with individual NHS services, 1983–2002 32
      • Figure 2.2 UK NHS funding and overall rate of satisfaction with the NHS, 1983–2002 35
      • Figure 2.3 Health as first or second priority for extra public spending, 1983–2002 38
    • Chapter 3
      • Table 3.1 Patterns of transport use, 1993–2002 47
      • Table 3.2 How important is it to cut the number of cars on the road? 48
      • Table 3.3 Environmental impact of car use, 1991–2002 49
      • Table 3.4 How bad a problem is congestion on motorways and in urban areas to the respondent, 1997–2002 51
      • Table 3.5 ‘Stick’ policies to curb car use: what impact on drivers’ habits, 1996–2002 54
      • Table 3.6 How should we fund improvements in public transport? 56
      • Table 3.7 ‘Carrot’ policies to curb car use: what impact on drivers’ habits, 1997–2002 57
      • Table 3.8 Characteristics of people who ‘never’ take the train, 1993 and 2002 59
      • Table 3.9 Attitudes towards rail travel 60
    • Chapter 4
      • Table 4.1 Attitudes to inequality, 1987, 1992 and 1999 77
      • Table 4.2 Attitudes towards inequality by social class, self-rated economic hardship, and household income 79
      • Table 4.3 Attitudes towards inequality by party identification 80
      • Table 4.4 Trends in attitudes to income inequality by self-rated economic hardship, 1983–2002 81
      • Table 4.5 Trends in attitudes to income inequality by party identification, 1983–2002 82
      • Table 4.6 Perceptions of over- and underpaid occupations, 1987, 1992 and 1999 84
      • Table 4.7 Perceived average earnings of unskilled factory workers and company chairmen, and what they should earn, 1987, 1992 and 1999 85
      • Table 4.8 Attitudes towards income inequality across the generations 87
      • Table 4.9 Whether government is responsible for reducing income inequality 1985–2000 88
      • Table 4.10 Attitudes towards redistribution, 1986–2002 88
      • Table 4.11 Whether people on high incomes should pay a larger share of their income in taxes compared with people on low incomes, 1987, 1992 and 1999 89
      • Figure 4.1 Income inequality in Britain measured by the Gini coefficient, 1970–2001/2 72
      • Figure 4.2 Trends in attitudes to income inequality and wealth sharing, 1983–2002 76
    • Chapter 5
      • Table 5.1 Highest educational qualification reached, 1986–2002 94
      • Table 5.2 Trends in personal efficacy and education level, 1986–2002 96
      • Table 5.3 Voting and personal efficacy, 1987–2001 97
      • Table 5.4 Turnout and education level, 1987–2001 97
      • Table 5.5 Trade union membership by education level, 1986–2002 98
      • Table 5.6 Non-electoral participation by educational attainment 99
      • Table 5.7 Non-electoral participation by personal efficacy and trust in government 100
      • Table 5.8 Trends in potential non-electoral participation, 1983–2002 102
      • Table 5.9 Trends in actual non-electoral participation, 1986–2002 102
      • Table 5.10 Trends in number of actual non-electoral activities undertaken by education level, 1986–2002 103
    • Chapter 6
      • Table 6.1 First or second priorities for extra government spending, 1983–2002 111
      • Table 6.2 Highest priority for extra government spending on education, 1983–2002 111
      • Table 6.3 Attitudes towards the expansion of higher education, 1983–2002 113
      • Table 6.4 Attitudes towards student grants and loans, 1983–2000 115
      • Table 6.5 Attitudes towards student grants, 1995–2000 115
      • Table 6.6 Attitudes towards tuition fees, 2001 116
      • Table 6.7 Most effective measure to improve primary education, 1995–2002 118
      • Table 6.8 Most effective measure to improve secondary education, 1995–2002 118
      • Table 6.9 Number of tests and exams taken in schools 119
      • Table 6.10 Assessment by exams or through classroom work, 1987–2002 120
      • Table 6.11 Advice to a 16 year old about their future, 1995 and 2002 121
      • Table 6.12 Options which give people more opportunities and choice in life, 1993–2002 121
      • Table 6.13 Publication of exam results, 1983–1996
      • Table 6.14 Support for selection in secondary schools, 1984–2002 123
      • Table 6.15 Success of state secondary schools teaching three Rs, 1987–2002 124
      • Table 6.16 Success of state secondary schools preparing young people for work, 1987–2002 124
      • Table 6.17 Success of state secondary schools bringing out pupils' natural abilities, 1987–2002 124
      • Table 6.18 Standards of local schools 125
      • Table 6.19 Most important characteristics of a first job, 1986–2002 126
      • Table 6.20 Careers best suited to offering certain characteristics to a young person 127
    • Chapter 7
      • Table 7.1 Left/right values 135
      • Table 7.2 Libertarian and authoritarian values 135
      • Table 7.3 Relationship between value scales and particular attitudes 136
      • Table 7.4 Left/right values by social class 138
      • Table 7.5 Left/right values by household income, education and tenure 138
      • Table 7.6 Libertarian/authoritarian values by education 140
      • Table 7.7 Libertarian/authoritarian values by class, income and religion 141
      • Table 7.8 Left/right values by party identification 142
      • Table 7.9 Libertarian/authoritarian values by party identification 143
      • Table 7.10 Left/right and libertarian/authoritarian values, 1986 and 2002 144
      • Table 7.11 Libertarianism/authoritarianism and education, 1986 and 2002 145
      • Table 7.12 Authoritarianism, by birth cohort, 1986 and 2002 146
      • Table 7.13 Class composition, 1986 and 2002 148
      • Table 7.14 Left/right values and class, 1986 and 2002 148
      • Table 7.15 Left/right values and income, 1986 and 2002 149
      • Table 7.16 Left/right values by party identification, 1986 and 2002 150
      • Table 7.17 Libertarian/authoritarian values by party identification, 1986 and 2002 151
    • Chapter 8
      • Table 8.1 Own working patterns when child(ren) were under school age, 1989–2002 163
      • Table 8.2 “A man's job is to earn money; a woman's job is to look after the home and family”, 1989–2002 164
      • Table 8.3 Women should stay at home when there is a child under school age, 1989–2002 164
      • Table 8.4 “A working mother can establish just as warm and secure a relationship with her children as a mother who does not work”, 1989–2002 165
      • Table 8.5 Distribution of household tasks within couples, 1989–2002 166
      • Table 8.6 Distribution of household tasks between men and women by economic status of household 167
      • Table 8.7 “All in all, family life suffers when the woman has a full-time job”, 1989–2002 167
      • Table 8.8 Proportion of women reporting they “stayed at home” when their youngest child was under school age, 1989–2002 168
      • Table 8.9 “A man's job is to earn money; a woman's job is to look after the home and family”, by age cohort, 1989–2002 170
      • Table 8.10 A woman should stay at home when there are children under school age, by age cohort, 1989–2002 171
      • Table 8.11 Proportion who have arrived home from work too tired to do the chores which needed to be done (in the last three months) 173
      • Table 8.12 Proportion who have found it difficult to concentrate at work because of family responsibilities (in the last three months) 174
      • Table 8.13 “My job is rarely stressful” by occupational class and sex 175
      • Table 8.14 Family-friendly working conditions by occupational class and sex 175
      • Table 8.15 Work—life balance by occupational class 177
      • Table 8.16 Work—life balance by career aspirations among professional and managerial women 179
      • Table 8.17 Work—life balance by career aspirations among routine and manual men 180
      • Figure 8.1 Occupational class, sex, promotion aspirations and work—life stress 178
    • Chapter 9
    • Chapter 10
      • Table 10.1 Should Britain stay in the EU or withdraw — two categories, 1983–1997 216
      • Table 10.2 Should Britain stay in the EU or withdraw — five categories, 1992–2002 217
      • Table 10.3 Should Britain use the Euro, the pound or both, 1992–1999 219
      • Table 10.4 Likely vote in a Euro referendum, 1999–2002 219
      • Table 10.5 Attitudes towards the Euro by class and education 221
      • Table 10.6 Attitudes towards the Euro by newspaper readership 222
      • Table 10.7 Attitudes towards the Euro by whether proud to be British, 2000 223
      • Table 10.8 Knowledge about the Euro 224
      • Table 10.9 Knowledge about the Euro by attitudes to the Euro and EU 225
      • Table 10.10 How likely Britain is to join the Euro by attitudes towards the Euro 226
      • Table 10.11 Attitudes to the EU by general election vote, 1997 and 2001 227
      • Table 10.12 Likely vote in a Euro referendum by party identification 228
    • Chapter 11
      • Table 11.1 How Britain moved to the right, 1983–2002 236
      • Table 11.2 Trends in attitudes towards the welfare state and income inequality, 1983–2002 238
      • Table 11.3 Party identification, 1983–2002 239
      • Table 11.4 Class and party identification, 1986–2002 240
      • Table 11.5 The relationship between attitudes and party support, 1983–2002 242
      • Table 11.6 Attitudes towards redistribution and position on the left-right scale amongst Conservative and Labour supporters, 1985–2002 244
      • Table 11.7 Attitudes towards taxation and unemployment benefits amongst Conservative and Labour supporters, 1983–2002 245
      • Table 11.8 Changing attitudes of new, old and consistent Labour identifiers, 1992–2001 247
      • Table 11.9 Decomposing Labour identifiers' move to the right 249
    • Conclusion
      • Table 12.1 Attitudes to welfare spending by class, 1986–2002 258
    • Appendix I


    This report analyses and describes the findings of the latest British Social Attitudes survey. The series began in 1983 and, some 50,000 interviews later, we mark its twentieth year by paying particular attention to the myriad ways in which Britain's attitudes and values have changed over the last two decades — and to the many areas in which they have remained remarkably constant over time.

    Changing Britain?

    How has Britain changed over the last two decades? In this introduction, we briefly consider some of the most important demographic, political and social changes that have occurred over this period — many of which, as we shall see later, have had important implications for our attitudes and values.

    The labour market is one area which has radically changed over the last twenty years. In the early 1980s, unemployment was at its peak, with twelve per cent of the labour force unemployed in 1984, well over double the rate now (five per cent). The nature of the work available has also changed markedly, with structural changes in the economy (particularly the shift from manufacturing to service sector industries) resulting in a sharp increase in more middle-class ‘white collar’ occupations, and a concomitant decline in working-class ‘blue collar’ ones. The steepest increase has been in professional and managerial occupations — in 1986, 22 per cent of people were classified as belonging to this group, compared with nearly a third now (32 per cent), making it the single largest social class in Britain. Meanwhile, the ‘working class’ has shrunk considerably. As described in our 19th Report, so too has union membership (Bryson and Gomez, 2002). In Chapter 7, we consider whether these sorts of changes mean that the notion of different ‘classes’ no longer applies in modern Britain.

    Such labour market changes have been accompanied by shifts in the attributes of those entering the labour force. Perhaps the most dramatic has been the marked increase in the proportion of graduates; in 1986, seven per cent of British Social Attitudes respondents had a degree — now 16 per cent do so. This reflects the fact that, by 2003, more than one pupil in three was entering the higher education sector, compared with one in seven two decades earlier. Just what this means for attitudes to education is discussed in Chapter 6. Meanwhile, the proportion of women, particularly mothers, in the labour market has continued to increase. Chapter 8 considers how attitudes towards gender roles have shifted over the same period.

    The rewards of work have also changed. Since the mid-1980s, average incomes have increased by 50 per cent. But this has been accompanied by a marked increase in income inequality, leading some to argue that we are now more tolerant of inequality than in the past. In Chapter 4, we consider whether or not this is the case, focusing particularly on whether British society has become more accepting of inequality in the wake of Margaret Thatcher's period in office.

    Of course, it is not just the labour market that has changed. So too have other aspects of British society. Over the last decade, Britain's ethnic minority population grew by just over 50 per cent, and now represents eight per cent of the population of the United Kingdom. Whether such increased ethnic diversity has led to a change in levels of racial prejudice in Britain is explored in Chapter 9.

    Car ownership has expanded. At the end of 2000, there were over 24 million cars registered in the UK, twice the total for 1975. Over 70 per cent of households had regular use of a car. Car use dominates our travel patterns, accounting for 85 per cent of all passenger kilometres in 2000 (ONS, 2001). At the same time, evidence of the deleterious consequences of car use (such as its contribution to global warming and poor air quality) has grown. In Chapter 3, we examine how attitudes to public and private transport have changed over time, and how far we have come to accept that car use might need to be curbed.

    Other important changes form the backdrop to more than one chapter in this report. Demographically, British society is ageing, the result of lower fertility rates and increased life expectancy, combined with fluctuations in the birth rate over time. The number of people aged 65 and over in the UK has increased by 51 per cent since 1961, to 9.4 million in 2001 (Summerfield and Babb, 2003). We are less religious too; in 1983, only 31 per cent of British Social Attitudes respondents said they did not regard themselves as belonging to any particular religion, but by 2002 this had increased to 41 per cent. Family life has changed, with notable increases in the proportions of people who cohabit, are divorced, or remain single. As outlined in our 18th Report, cohabitation rates have increased particularly dramatically, from five to 15 per cent of all couples between 1986 and 1999 (Barlow et al., 2001). The proportion of children born outside marriage has increased, with a quarter of children now born to cohabiting families. In 1983, 65 per cent of British Social Attitudes respondents were owner-occupiers; by 2002, this group included nearly three-quarters of the sample (74 per cent). Conversely, the proportion in ‘social housing’ (that is, those renting from local authorities or housing associations) fell from 27 to 16 per cent. Technologies have changed, and increased affluence has meant that increasing numbers have earlier access to them. The new technologies of 1983 were the compact disc and camcorder; since then, we have witnessed the exponential rise of the mobile phone and the Internet, both of which have had profound implications for the way in which we live our lives.

    It is not just Britain that has changed. So too has the world in which it operates. In 1983, the Berlin Wall still divided Europe, symbolising the continued conflict between ‘East’ and ‘West’. That year, Yuri Andropov was President of the USSR and Ronald Reagan was the President of the USA. There was widespread global concern about security and, not surprisingly, a major theme in the first British Social Attitudes questionnaire was nuclear weapons and nuclear disarmament. Now, of course, the Cold War has been replaced by a quite different international conflict, and much of Eastern and Central Europe is on the verge of joining a European Union, most of whose members have adopted a common currency. Britain now lives in a ‘globalised’ world in which more of its citizens take foreign holidays, restrictions on the movement of capital are few and far between, and worldwide communication is facilitated by the Internet. Just how our attitudes towards Europe have evolved against this backdrop is discussed in Chapter 10.

    Politically too, 1983 seems very long ago. Margaret Thatcher's Conservative Party enjoyed their second election victory, winning 44 per cent of the vote and all but wiping out Michael Foot's Labour Party as an electoral force in the south of England. Labour's share of the vote that year, at 28 per cent, was not far off the 26 per cent achieved by the alliance between the newly formed Social Democratic Party and the Liberals. The aftermath of the election saw Michael Foot resign, to be replaced by Neil Kinnock (who, in turn, would be replaced by John Smith after another election defeat in 1992). These defeats were to persuade the Labour Party of the need to reform itself before it could end the Conservatives' apparently endless success and, in 1997, New Labour secured a record-breaking victory under the leadership of Tony Blair. But how far was the Conservatives' success underpinned by an ideological drift to the right to which Labour had to accommodate itself before winning power? Chapter 11 considers this by comparing public opinion during Margaret Thatcher's premiership and Tony Blair's period as Labour leader.

    In the aftermath of the 1983 election, the focus was largely upon how those who had voted (73 per cent of the electorate) had chosen to cast their ballots. But by 2001, fewer than six in ten (59 per cent) voted at all, the lowest level since 1918. Not surprisingly then, the attention paid to those who did not vote has almost equalled that paid to those who did. The extent to which these developments mark a crisis of political participation is explored in Chapter 5.

    Throughout the last twenty years, one subject has been pre-eminent in the country's political debate — the appropriate balance between taxation and spending. Policy, of course, has been far from constant. Under Margaret Thatcher's regime, income tax rates fell. But, more recently, the Labour government has increased national insurance to help fund large increases in public spending. Chapter 1 examines how attitudes to taxation and spending have changed, looking in particular at attitudes towards welfare benefits and their recipients. Meanwhile, Chapter 2 focuses upon one of the most popular nominees for any extra public spending, the National Health Service, and assesses the extent to which attitudes to the NHS are affected by spending levels.

    Our Thanks

    Over the years, the British Social Attitudes series has developed a widely acknowledged reputation as the authoritative map of contemporary British values. In achieving this, it owes a great deal to its many generous funders. We are particularly grateful to our core funder — the Gatsby Charitable Foundation (one of the Sainsbury Family Charitable Trusts) — whose continuous support of the series from the start has given it security and independence. Many other funders have also made long-term commitments to the study and we are ever grateful to them as well. In 2002 these included the Department of Health, the Department for Transport, the Departments for Education and Skills, Trade and Industry, and Work and Pensions, and the Office of the Deputy Prime Minister. Thanks are also due to the Institute of Community Studies. We are also very grateful to the Economic and Social Research Council (ESRC) who provided funding for two modules of questions that year, one examining political legitimacy and participation (funded through its Democracy and Participation Programme) and a second which examined employment, the family and work—life balance.

    The ESRC also supported the National Centre's participation in the International Social Survey Programme, which now comprises 42 nations, each of which helps to design and then field a set of equivalent questions every year on a rotating set of issues. The topic in 2002 was the family and changing gender roles, the British results of which are explored in Chapter 8.

    One recent spin-off from the British Social Attitudes series has been the development of an annual Scottish Social Attitudes survey. This began in 1999 and is funded from a range of sources along similar lines to British Social Attitudes. It is closely associated with its British counterpart and incorporates many of the same questions to enable comparison north and south of the border, while also providing a detailed examination of attitudes to particular issues within Scotland. Three books have now been published about the survey (Paterson et al., 2000; Curtice et al., 2001; Bromley et al., 2003).

    The British Social Attitudes series is a team effort. A research group designs, directs and reports on the study. This year, the group bid farewell to a valuable colleague, Sonia Exley, now at Nuffield College, Oxford. The researchers are supported by complementary teams who implement the sampling strategy and carry out data processing. They in turn depend on fieldwork controllers, area managers and field interviewers who are responsible for getting all the interviewing done, and on administrative staff to compile, organise and distribute the survey's extensive documentation. In this respect, particular thanks are due to Kerrie Gemmill and her colleagues in the National Centre's Operations Department in Brentwood. Other thanks are due to Susan Corbett and her colleagues in our computing department who expertly translate our questions into a computer-assisted questionnaire. Meanwhile, the raw data have to be transformed into a workable SPSS system file — a task that has for many years been performed with great care and efficiency by Ann Mair at the Social Statistics Laboratory in the University of Strathclyde. Many thanks are also due to Lucy Robinson and Fabienne Pedroletti at Sage, our publishers.

    Last, but by no means least, we must praise the anonymous respondents across Britain who gave their time to take part in our 2002 survey. Like the 50,000 or so respondents who have participated before them, they are the cornerstone of this enterprise. We hope that some of them will one day come across this volume and read about themselves with interest.

    The Editors
    Barlow, A., Duncan, S., James, G. and Park, A. (2001), ‘Just a piece of paper? Marriage and cohabitation’, in Park, A., Curtice, J., Thomson, K., Jarvis, L. and Bromley, C. (eds.), British Social Attitudes: the 18th Report, London: Sage.
    Bromley, C., Curtice, J., Hinds, K. and Park, A. (eds.) (2003), Devolution — Scottish Answers to Scottish Questions?, Edinburgh: Edinburgh University Press.
    Bryson, A. and Gomez, R. (2002), ‘Marching on together? Recent trends in union membership’, in Park, A., Curtice, J., Thomson, K., Jarvis, L. and Bromley, C. (eds.), British Social Attitudes: the 19th Report, London: Sage.
    Curtice, J., McCrone, D., Park, A. and Paterson, L. (eds.) (2001), New Scotland, New Society? Are social and political ties fragmenting?, Edinburgh: Edinburgh University Press.
    Office for National Statistics (ONS) (2001), Social Trends 2001, London: The Stationery Office.
    Paterson, L., Brown, A., Curtice, J., Hinds, K., McCrone, D., Park, A., Sproston, K. and Surridge, P. (2000), New Scotland, New Politics?, Edinburgh: Edinburgh University Press.
    Summerfield, C. and Babb, P. (eds) (2003), Social Trends No 33, London: The Stationery Office.
  • Appendix I: Technical Details of the Survey

    In 2002, three versions of the British Social Attitudes questionnaire were fielded. Each ‘module’ of questions is asked either of the full sample (3,435 respondents) or of a random two-thirds or one-third of the sample. The structure of the questionnaire (versions A, B and C) is shown at the beginning of Appendix III.

    Sample Design

    The British Social Attitudes survey is designed to yield a representative sample of adults aged 18 or over. Since 1993, the sampling frame for the survey has been the Postcode Address File (PAF), a list of addresses (or postal delivery points) compiled by the Post Office.1

    For practical reasons, the sample is confined to those living in private households. People living in institutions (though not in private households at such institutions) are excluded, as are households whose addresses were not on the PAF.

    The sampling method involved a multi-stage design, with three separate stages of selection.

    Selection of Sectors

    At the first stage, postcode sectors were selected systematically from a list of all postal sectors in Great Britain. Before selection, any sectors with fewer than 500 addresses were identified and grouped together with an adjacent sector; in Scotland all sectors north of the Caledonian Canal were excluded (because of the prohibitive costs of interviewing there). Sectors were then stratified on the basis of:

    • 37 sub-regions
    • population density with variable banding used, in order to create three equal-sized strata per sub-region
    • ranking by percentage of homes that were owner-occupied in England and Wales and percentage of homes where the head of household was non-manual in Scotland.

    Two hundred postcode sectors were selected, with probability proportional to the number of addresses in each sector.
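    Selection with probability proportional to the number of addresses can be done systematically by walking a fixed interval along the cumulated address counts, so that larger sectors are more likely to be hit. The sketch below illustrates the general technique only; the function name and the sector data are hypothetical, and this is not NatCen's actual procedure in detail.

```python
import random

def pps_systematic(sectors, n_select):
    """Systematic probability-proportional-to-size (PPS) selection.

    `sectors` is a list of (name, address_count) pairs. A random start
    point and a fixed interval are laid over the cumulated counts; a
    sector is selected once for each target point falling in its range.
    Illustrative sketch only.
    """
    total = sum(size for _, size in sectors)
    interval = total / n_select
    start = random.uniform(0, interval)          # random start point
    targets = [start + i * interval for i in range(n_select)]
    chosen, cumulative, t = [], 0, 0
    for name, size in sectors:
        cumulative += size
        # count how many target points land within this sector's range
        while t < n_select and targets[t] < cumulative:
            chosen.append(name)
            t += 1
    return chosen
```

    Note that a sector whose address count exceeds the interval can be selected more than once; in practice the stratification and banding described above keep sector sizes comparable.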

    Selection of Addresses

    Thirty-one addresses were selected in each of the 200 sectors. The sample was therefore 200 × 31 = 6,200 addresses, selected by starting from a random point on the list of addresses for each sector, and choosing each address at a fixed interval. The fixed interval was calculated for each sector in order to generate the correct number of addresses.
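    The random-start, fixed-interval selection within a sector can be sketched as follows. This is an illustrative equal-probability systematic sample, assuming the sector's address list is at least 31 addresses long (true here, since every sector has at least 500 addresses); it is not NatCen's actual code.

```python
import random

def select_addresses(addresses, n=31):
    """Equal-probability systematic sample of n addresses: a random
    start point, then a fixed interval calculated so that exactly n
    addresses are drawn from the sector list (illustrative sketch)."""
    interval = len(addresses) / n                # the 'fixed interval'
    start = random.uniform(0, interval)
    return [addresses[int(start + i * interval)] for i in range(n)]
```

    Because the interval is at least one whenever the list holds at least n addresses, the n selected addresses are always distinct.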

    The Multiple-Output Indicator (MOI) available through PAF was used when selecting addresses in Scotland. The MOI shows the number of accommodation spaces sharing one address. Thus, where the MOI indicated more than one accommodation space at a given address, that address's chance of selection was increased in proportion to the number of accommodation spaces it contained. The MOI is largely irrelevant in England and Wales, as separate dwelling units generally appear as separate entries on PAF. In Scotland, tenements with many flats tend to appear as one entry on PAF. However, even in Scotland, the vast majority of MOIs had a value of one. The remainder, which ranged between three and 12, were incorporated into the weighting procedures (described below).

    Selection of Individuals

    Interviewers called at each address selected from PAF and listed all those eligible for inclusion in the sample — that is, all persons currently aged 18 or over and resident at the selected address. The interviewer then selected one respondent using a computer-generated random selection procedure. Where there were two or more households or ‘dwelling units’ at the selected address, interviewers first had to select one household or dwelling unit using the same random procedure. They then followed the same procedure to select a person for interview.


    Weighting

    Data were weighted to take account of the fact that not all the units covered in the survey had the same probability of selection. The weighting reflected the relative selection probabilities of the individual at the three main stages of selection: address, household and individual.

    Table A.1 Distribution of unscaled and scaled weights
    Unscaled weight | Number | % | Scaled weight
    Base: 3435

    First, because addresses in Scotland were selected using the MOI, weights had to be applied to compensate for the greater probability of an address with an MOI of more than one being selected, compared to an address with an MOI of one. (This stage was omitted for the English and Welsh data.) Secondly, data were weighted to compensate for the fact that dwelling units at an address containing a large number of dwelling units were less likely to be selected for inclusion in the survey than dwelling units which did not share an address. (This procedure is used because, in most cases where the MOI is greater than one, these two stages cancel each other out, resulting in more efficient weights.) Thirdly, data were weighted to compensate for the lower selection probabilities of adults living in large households compared with those living in small households. The weights were capped at 8.0 (causing three cases to have their weights reduced). The resulting weight is called ‘WtFactor’ and the distribution of weights is shown in Table A.1.

    The mean weight was 1.82. The weights were then scaled down to make the number of weighted productive cases exactly equal to the number of unweighted productive cases (n = 3,435).
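    The three weighting stages and the final scaling might be sketched schematically as follows. This is an illustrative Python sketch under the interpretation given above (weight proportional to dwelling units × adults ÷ MOI, capped at 8.0, then scaled so the weighted case count equals the unweighted count); the case data are hypothetical, not the survey's.

```python
def raw_weight(moi, n_dwelling_units, n_adults, cap=8.0):
    """Relative selection weight for one respondent:
    1/MOI for the address stage (high-MOI addresses are over-selected),
    times dwelling units at the address, times adults in the household."""
    w = n_dwelling_units * n_adults / moi
    return min(w, cap)                      # weights capped at 8.0

# Hypothetical cases: (MOI, dwelling units at address, adults in household)
cases = [(1, 1, 2), (1, 1, 1), (4, 4, 3), (1, 2, 5)]
raw = [raw_weight(*c) for c in cases]

# Scale so the number of weighted productive cases equals the unweighted count
scale = len(raw) / sum(raw)
weights = [w * scale for w in raw]
print(round(sum(weights), 6))   # equals the number of cases
```

    Note how the third case illustrates the cancellation mentioned above: with MOI equal to the number of dwelling units, the two corrections offset each other and only the household-size stage remains.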

    All the percentages presented in this Report are based on weighted data.

    Questionnaire Versions

    Each address in each sector (sampling point) was allocated to one of the A, B or C thirds of the sample. If one serial number was version A, the next was version B and the next after that version C. Thus each interviewer was allocated 10 or 11 cases from each version, and each version was assigned to 2,066 or 2,067 addresses.
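    The strict rotation of versions by serial number might be sketched as follows (illustrative Python; the serial numbering is hypothetical):

```python
from collections import Counter

def questionnaire_version(serial):
    """Allocate versions A, B, C in strict rotation by serial number."""
    return "ABC"[serial % 3]

versions = [questionnaire_version(s) for s in range(6200)]
print(versions[:6])            # the A, B, C rotation repeats
print(Counter(versions))       # each version gets 2,066 or 2,067 addresses
```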


    Fieldwork

    Interviewing was mainly carried out between June and September 2002, with a small number of interviews taking place in October and November.

    Table A.2 Response rate on British Social Attitudes, 2002

    Fieldwork was conducted by interviewers drawn from the National Centre for Social Research's regular panel and conducted using face-to-face computer-assisted interviewing.2 Interviewers attended a one-day briefing conference to familiarise them with the selection procedures and questionnaires.

    The mean interview length was 49 minutes for version A of the questionnaire, 51 minutes for version B and 44 minutes for version C.3 Interviewers achieved an overall response rate of 61 per cent. Details are shown in Table A.2.

    As in earlier rounds of the series, the respondent was asked to fill in a self-completion questionnaire which, whenever possible, was collected by the interviewer. Otherwise, the respondent was asked to post it to the National Centre for Social Research. If necessary, up to three postal reminders were sent to obtain the self-completion supplement.

    A total of 535 respondents (16 per cent of those interviewed) did not return their self-completion questionnaire. Version A of the self-completion questionnaire was returned by 84 per cent of respondents to the face-to-face interview, version B by 83 per cent and version C by 86 per cent. As in previous rounds, we judged that it was not necessary to apply additional weights to correct for non-response.

    Advance Letter

    Interviewers were supplied with letters describing the purpose of the survey and the coverage of the questionnaire, which they posted to sampled addresses before making any calls.4

    Analysis Variables

    A number of standard analyses have been used in the tables that appear in this report. The analysis groups requiring further definition are set out below. For further details see Exley et al. (2003).


    Region

    The dataset is classified by the 12 Government Office Regions.

    Standard Occupational Classification

    Respondents are classified according to their own occupation, not that of the ‘head of household’. Each respondent was asked about their current or last job, so that all respondents except those who had never worked were coded. Additionally, if the respondent was not working but their spouse or partner was working, their spouse or partner is similarly classified.

    With the 2001 survey, we began coding occupation to the new Standard Occupational Classification 2000 (SOC 2000) instead of the Standard Occupational Classification 1990 (SOC 90). The main socio-economic grouping based on SOC 2000 is the National Statistics Socio-Economic Classification (NS-SEC). However, to maintain time series, some analysis has continued to use the older schemes based on SOC 90 — Registrar General's Social Class, Socio-Economic Group and the Goldthorpe schema.

    National Statistics Socio-Economic Classification (NS-SEC)

    The combination of SOC 2000 and employment status for current or last job generates the following NS-SEC analytic classes:

    • Employers in large organisations, higher managerial and professional
    • Lower professional and managerial; higher technical and supervisory
    • Intermediate occupations
    • Small employers and own account workers
    • Lower supervisory and technical occupations
    • Semi-routine occupations
    • Routine occupations

    The remaining respondents are grouped as “never had a job” or “not classifiable”. For some analyses, it may be more appropriate to classify respondents according to their current socio-economic status, which takes into account only their present economic position. In this case, in addition to the seven classes listed above, the remaining respondents not currently in paid work fall into one of the following categories: “not classifiable”, “retired”, “looking after the home”, “unemployed” or “others not in paid occupations”.

    Registrar General's Social Class

    As with NS-SEC, each respondent's Social Class is based on his or her current or last occupation. The combination of SOC 90 with employment status for current or last job generates the following six Social Classes:

    • I Professional occupations
    • II Managerial and technical occupations
    • III (Non-manual) Skilled non-manual occupations
    • III (Manual) Skilled manual occupations
    • IV Partly skilled occupations
    • V Unskilled occupations

    They are usually collapsed into four groups: I & II, III Non-manual, III Manual, and IV & V.

    Socio-Economic Group

    As with NS-SEC, each respondent's Socio-economic Group (SEG) is based on his or her current or last occupation. SEG aims to bring together people with jobs of similar social and economic status, and is derived from a combination of employment status and occupation. The full SEG classification identifies 18 categories, but these are usually condensed into six groups:

    • Professionals, employers and managers
    • Intermediate non-manual workers
    • Junior non-manual workers
    • Skilled manual workers
    • Semi-skilled manual workers
    • Unskilled manual workers

    As with NS-SEC, the remaining respondents are grouped as “never had a job” or “not classifiable”.

    Goldthorpe Schema

    The Goldthorpe schema classifies occupations by their ‘general comparability’, considering such factors as sources and levels of income, economic security, promotion prospects, and level of job autonomy and authority. The Goldthorpe schema was derived from the SOC 90 codes combined with employment status. Two versions of the schema are coded: the full schema has 11 categories; the ‘compressed schema’ combines these into the five classes shown below.

    • Salariat (professional and managerial)
    • Routine non-manual workers (office and sales)
    • Petty bourgeoisie (the self-employed, including farmers, with and without employees)
    • Manual foremen and supervisors
    • Working class (skilled, semi-skilled and unskilled manual workers, personal service and agricultural workers)

    There is a residual category comprising those who have never had a job or who gave insufficient information for classification purposes.


    Industry

    All respondents whose occupation could be coded were allocated a Standard Industrial Classification 1992 (SIC 92). Two-digit class codes are used. As with Social Class, SIC may be generated on the basis of the respondent's current occupation only, or on his or her most recently classifiable occupation.

    Party Identification

    Respondents can be classified as identifying with a particular political party on one of three counts: if they consider themselves supporters of that party; if they consider themselves closer to it than to other parties; or if they say they would be more likely to support it in the event of a general election (responses are derived from Qs.151–153). These three groups are described respectively as partisans, sympathisers and residual identifiers. In combination, the three groups are referred to as ‘identifiers’.

    Attitude Scales

    Since 1986, the British Social Attitudes surveys have included two attitude scales which aim to measure where respondents stand on certain underlying value dimensions — left-right and libertarian-authoritarian. Since 1987 (except in 1990), a similar scale measuring ‘welfarism’ has also been included.5

    A useful way of summarising the information from a number of questions of this sort is to construct an additive index (DeVellis, 1991; Spector, 1992). This approach rests on the assumption that there is an underlying — ‘latent’ — attitudinal dimension which characterises the answers to all the questions within each scale. If so, scores on the index are likely to be a more reliable indication of the underlying attitude than the answers to any one question.

    Each of these scales consists of a number of statements to which the respondent is invited to “agree strongly”, “agree”, “neither agree nor disagree”, “disagree”, or “disagree strongly”.

    The items are:

    Left-right scale

    Government should redistribute income from the better-off to those who are less well off. [Redistrb]

    Big business benefits owners at the expense of workers. [BigBusnN]

    Ordinary working people do not get their fair share of the nation's wealth. [Wealth]6

    There is one law for the rich and one for the poor. [RichLaw]

    Management will always try to get the better of employees if it gets the chance. [Indust4]

    Libertarian–authoritarian scale

    Young people today don't have enough respect for traditional British values. [TradVals]

    People who break the law should be given stiffer sentences. [StifSent]

    For some crimes, the death penalty is the most appropriate sentence. [DeathApp]

    Schools should teach children to obey authority. [Obey]

    The law should always be obeyed, even if a particular law is wrong. [WrongLaw]

    Censorship of films and magazines is necessary to uphold moral standards. [Censor]

    Welfarism scale

    The welfare state encourages people to stop helping each other. [WelfHelp]

    The government should spend more money on welfare benefits for the poor, even if it leads to higher taxes. [MoreWelf]

    Around here, most unemployed people could find a job if they really wanted one. [UnempJob]

    Many people who get social security don't really deserve any help. [SocHelp]

    Most people on the dole are fiddling in one way or another. [DoleFidl]

    If welfare benefits weren't so generous, people would learn to stand on their own two feet. [WelfFeet]

    Cutting welfare benefits would damage too many people's lives. [DamLives]

    The creation of the welfare state is one of Britain's proudest achievements. [ProudWlf]

    The indices for the three scales are formed by scoring the leftmost, most libertarian or most pro-welfare position as 1, and the rightmost, most authoritarian or most anti-welfare position as 5. The “neither agree nor disagree” option is scored as 3. The scores on all the questions in each scale are added and then divided by the number of items in the scale, giving indices ranging from 1 (leftmost, most libertarian, most pro-welfare) to 5 (rightmost, most authoritarian, most anti-welfare). The scores on the three indices have been placed on the dataset.7
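    The construction of such an additive index might be sketched as follows. This is an illustrative Python sketch, not the survey's actual scoring program: it assumes raw answers are coded 1 (“agree strongly”) to 5 (“disagree strongly”), and that items where strong agreement marks the pro-welfare (or left, or libertarian) end of the dimension are reverse-coded; the answer data are hypothetical.

```python
def scale_index(responses, reverse=()):
    """Additive index: each item coded 1-5; items named in `reverse`
    are reverse-coded (6 - x) so that 1 always marks the same pole.
    Returns the mean item score, ranging from 1 to 5."""
    scored = [(6 - v if item in reverse else v) for item, v in responses.items()]
    return sum(scored) / len(scored)

# Hypothetical welfarism answers: 1 = agree strongly ... 5 = disagree strongly.
# Anti-welfare statements are reverse-coded so 1 = most pro-welfare overall.
answers = {"WelfHelp": 4, "MoreWelf": 2, "UnempJob": 5,
           "SocHelp": 4, "DoleFidl": 4, "WelfFeet": 5,
           "DamLives": 1, "ProudWlf": 2}
idx = scale_index(answers,
                  reverse=("WelfHelp", "UnempJob", "SocHelp",
                           "DoleFidl", "WelfFeet"))
print(idx)   # a score near 1 indicates a strongly pro-welfare respondent
```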

    The scales have been tested for reliability (as measured by Cronbach's alpha). The Cronbach's alpha (unstandardized items) for the scales in 2002 are 0.83 for the left-right scale, 0.80 for the ‘welfarism’ scale and 0.73 for the libertarian-authoritarian scale. This level of reliability can be considered “very good” for the left-right scale and welfarism scales and “respectable” for the libertarian-authoritarian scale (DeVellis, 1991: 85).
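    Cronbach's alpha can be computed from the item variances and the variance of the summed scale. The following is an illustrative Python sketch of the standard (unstandardised-items) formula, applied to hypothetical data rather than the survey's:

```python
def cronbach_alpha(items):
    """items: a list of per-item score lists (same respondents, same order).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(respondent) for respondent in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical scores for 5 respondents on 3 items
items = [[1, 2, 2, 4, 5],
         [1, 3, 2, 5, 4],
         [2, 2, 3, 4, 5]]
print(round(cronbach_alpha(items), 3))   # high alpha: items move together
```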

    Other Analysis Variables

    These are taken directly from the questionnaire and to that extent are self-explanatory. The principal ones are:

    • Sex (Q.36)
    • Age (Q.37)
    • Household income (Q.778)
    • Economic position (Q.257)
    • Religion (Q.485)
    • Highest educational qualification obtained (Q.588)
    • Marital status (Q.130)
    • Benefits received (Qs.750–767)
    Sampling Errors

    No sample precisely reflects the characteristics of the population it represents because of both sampling and non-sampling errors. If a sample were designed as a random sample (if every adult had an equal and independent chance of inclusion in the sample) then we could calculate the sampling error of any percentage, p, using the formula:

        s.e. (p) = √(p(100 − p) / n)

    where n is the number of respondents on which the percentage is based. Once the sampling error had been calculated, it would be a straightforward exercise to calculate a confidence interval for the true population percentage. For example, a 95 per cent confidence interval would be given by the formula:

        p ± 1.96 × s.e. (p)

    Clearly, for a simple random sample (srs), the sampling error depends only on the values of p and n. However, simple random sampling is almost never used in practice because of its inefficiency in terms of time and cost.
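    The simple-random-sample error and confidence interval described above can be sketched in a few lines of Python (an illustrative calculation; the example percentage is hypothetical, while n = 3,435 is the survey's sample size):

```python
import math

def srs_se(p, n):
    """Simple-random-sample standard error of a percentage p based on n cases."""
    return math.sqrt(p * (100 - p) / n)

def ci95(p, n):
    """95 per cent confidence interval under simple random sampling."""
    half = 1.96 * srs_se(p, n)
    return (p - half, p + half)

low, high = ci95(50, 3435)      # a hypothetical 50% estimate from the full sample
print(round(high - low, 2))     # total width of the interval, in percentage points
```

    A 50 per cent estimate is the worst case: p(100 − p) is largest there, so any other percentage on the same base yields a narrower interval.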

    As noted above, the British Social Attitudes sample, like that drawn for most large-scale surveys, was clustered according to a stratified multi-stage design into 200 postcode sectors (or combinations of sectors). With a complex design like this, the sampling error of a percentage giving a particular response is not simply a function of the number of respondents in the sample and the size of the percentage; it also depends on how that percentage response is spread within and between sample points.

    The complex design may be assessed relative to simple random sampling by calculating a range of design factors (DEFTs) associated with it, where

        DEFT = √(Variance of estimator with complex design / Variance of estimator with simple random sample)

    and represents the multiplying factor to be applied to the simple random sampling error to produce its complex equivalent. A design factor of one means that the complex sample has achieved the same precision as a simple random sample of the same size. A design factor greater than one means the complex sample is less precise than its simple random sample equivalent. If the DEFT for a particular characteristic is known, a 95 per cent confidence interval for a percentage may be calculated using the formula:

        p ± 1.96 × DEFT × √(p(100 − p) / n)
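    The DEFT-adjusted interval can be sketched as follows (illustrative Python; the percentage and the DEFT value of 1.5 are hypothetical, not figures from Table A.3):

```python
import math

def deft(complex_var, srs_var):
    """Design factor: the ratio of complex to srs standard errors,
    i.e. the square root of the variance ratio."""
    return math.sqrt(complex_var / srs_var)

def ci95_complex(p, n, deft_value):
    """95% CI for percentage p, inflating the srs error by the DEFT."""
    half = 1.96 * deft_value * math.sqrt(p * (100 - p) / n)
    return (p - half, p + half)

# Hypothetical: a 30% estimate on the full sample with a DEFT of 1.5
low, high = ci95_complex(30, 3435, 1.5)
print(round(low, 1), round(high, 1))
```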

    Calculations of sampling errors and design effects were made using the statistical analysis package STATA.

    The following table gives examples of the confidence intervals and DEFTs calculated for a range of different questions: some fielded on all three versions of the questionnaire and some on one only; some asked on the interview questionnaire and some on the self-completion supplement. It shows that most of the questions asked of all sample members have a confidence interval of around plus or minus two to three per cent of the survey proportion. This means that we can be 95 per cent certain that the true population proportion is within two to three per cent (in either direction) of the proportion we report. Variables with much larger variation are, as might be expected, those closely related to the geographic location of the respondent (e.g. whether living in a big city, a small town or a village). Here the variation may be as large as six or seven per cent either way around the percentage found on the survey.

    It should be noted that the design effects for certain variables (notably those most associated with the area a person lives in) are greater than those for other variables. For example, the question about benefit levels for the unemployed has high design effects, which may reflect differing rates of unemployment across the country. Another case in point is housing tenure: different kinds of tenure (such as council housing or owner-occupied properties) tend to be concentrated in certain areas, so the design effects calculated for these variables in a clustered sample are greater than those calculated for variables less strongly associated with area, such as attitudinal variables.

    These calculations are based on the 3,435 respondents to the main questionnaire and 2,900 returning self-completion questionnaires; on the A version respondents (1,123 for the main questionnaire and 940 for the self-completion); on the B version respondents (1,164 and 971 respectively); or on the C version respondents (1,148 and 989 respectively). As the examples show, sampling errors for proportions based only on respondents to just one of the three versions of the questionnaire, or on subgroups within the sample, are somewhat larger than they would have been had the questions been asked of everyone.

    Table A.3 Complex standard errors and confidence intervals of selected variables

    Analysis Techniques

    Regression analysis aims to summarise the relationship between a ‘dependent’ variable and one or more ‘independent’ variables. It shows how well we can estimate a respondent's score on the dependent variable from knowledge of their scores on the independent variables. It is often undertaken to support a claim that the phenomena measured by the independent variables cause the phenomenon measured by the dependent variable. However, the causal ordering, if any, between the variables cannot be verified or falsified by the technique. Causality can only be inferred through special experimental designs or through assumptions made by the analyst.

    All regression analysis assumes that the relationship between the dependent and each of the independent variables takes a particular form. In linear regression, it is assumed that the relationship can be adequately summarised by a straight line. This means that a one-unit increase in the value of an independent variable is assumed to have the same impact on the value of the dependent variable, on average, irrespective of the previous values of those variables.

    Strictly speaking the technique assumes that both the dependent and the independent variables are measured on an interval level scale, although it may sometimes still be applied even where this is not the case. For example, one can use an ordinal variable (e.g. a Likert scale) as a dependent variable if one is willing to assume that there is an underlying interval level scale and the difference between the observed ordinal scale and the underlying interval scale is due to random measurement error. Categorical or nominal data can be used as independent variables by converting them into dummy or binary variables; these are variables where the only valid scores are 0 and 1, with 1 signifying membership of a particular category and 0 otherwise.

    The assumptions of linear regression cause particular difficulties where the dependent variable is binary. The assumption that the relationship between the dependent and the independent variables is a straight line means that it can produce estimated values for the dependent variable of less than 0 or greater than 1. In this case it may be more appropriate to assume that the relationship between the dependent and the independent variables takes the form of an S-curve, where the impact on the dependent variable of a one-point increase in an independent variable becomes progressively less the closer the value of the dependent variable approaches 0 or 1. Logistic regression is an alternative form of regression which fits such an S-curve rather than a straight line. The technique can also be adapted to analyse multinomial non-interval level dependent variables, that is, variables which classify respondents into more than two categories.
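    The S-curve that logistic regression fits is the logistic function, which keeps every predicted value strictly between 0 and 1. The following is an illustrative Python sketch; the coefficients and the party-identification example are hypothetical, not estimates from this survey:

```python
import math

def logistic(x):
    """The S-curve: maps any real value into the (0, 1) interval."""
    return 1 / (1 + math.exp(-x))

def predict(intercept, coefs, values):
    """Predicted probability from a fitted logistic regression:
    logistic(intercept + sum of coefficient * value)."""
    return logistic(intercept + sum(b * v for b, v in zip(coefs, values)))

# Hypothetical fitted model: probability of identifying with some party,
# from age (in years) and a graduate dummy variable
p = predict(-1.2, [0.04, 0.8], [45, 1])
print(round(p, 3))   # a valid probability, however extreme the predictors
```

    Near the middle of the curve a one-unit change in a predictor shifts the probability noticeably; near 0 or 1 the same change has progressively less effect, which is exactly the behaviour described above.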

    The two statistical scores most commonly reported from the results of regression analyses are:

    A measure of variance explained: This summarises how well all the independent variables combined can account for the variation in respondents' scores on the dependent variable. The higher the measure, the more accurately we are able in general to estimate the correct value of each respondent's score on the dependent variable from knowledge of their scores on the independent variables.

    A parameter estimate: This shows how much the dependent variable will change on average, given a one unit change in the independent variable (while holding all other independent variables in the model constant). The parameter estimate has a positive sign if an increase in the value of the independent variable results in an increase in the value of the dependent variable. It has a negative sign if an increase in the value of the independent variable results in a decrease in the value of the dependent variable. If the parameter estimates are standardised, it is possible to compare the relative impact of different independent variables; those variables with the largest standardised estimates can be said to have the biggest impact on the value of the dependent variable.

    Regression also tests for the statistical significance of parameter estimates. A parameter estimate is said to be significant at the five per cent level if the values encompassed by its 95 per cent confidence interval (see also the section on sampling errors) are either all positive or all negative. This means that there is less than a five per cent chance that the association we have found between the dependent variable and the independent variable is simply the result of sampling error and does not reflect a relationship that actually exists in the general population.

    Factor Analysis

    Factor analysis is a statistical technique which aims to identify whether there are one or more apparent sources of commonality to the answers given by respondents to a set of questions. It ascertains the smallest number of factors (or dimensions) which can most economically summarise all of the variation found in the set of questions being analysed. Factors are established where respondents who give a particular answer to one question in the set, tend to give the same answer as each other to one or more of the other questions in the set. The technique is most useful when a relatively small number of factors is able to account for a relatively large proportion of the variance in all of the questions in the set.

    The technique produces a factor loading for each question (or variable) on each factor. Where questions have a high loading on the same factor then it will be the case that respondents who give a particular answer to one of these questions tend to give a similar answer to the other questions. The technique is most commonly used in attitudinal research to try to identify the underlying ideological dimensions which apparently structure attitudes towards the subject in question.
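    The idea of a dominant factor and its loadings can be illustrated with a small simulation. The sketch below is not how the survey's analyses were run: it generates hypothetical data driven by a single latent dimension, then extracts principal-component-style loadings from the correlation matrix (eigenvector scaled by the square root of its eigenvalue), which is one common way of approximating factor loadings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
factor = rng.normal(size=n)                     # one latent dimension
# Four observed "items", each driven by the same factor plus noise
items = np.column_stack([factor + rng.normal(scale=0.5, size=n)
                         for _ in range(4)])

corr = np.corrcoef(items, rowvar=False)         # 4x4 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)         # eigenvalues in ascending order
# Loadings on the dominant factor: leading eigenvector * sqrt(its eigenvalue)
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
explained = eigvals[-1] / eigvals.sum()
print(np.round(np.abs(loadings), 2), round(explained, 2))
```

    All four items load highly on the one factor, and that single factor accounts for most of the variance in the set, which is the situation in which the technique is most useful.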

    International Social Survey Programme

    The International Social Survey Programme (ISSP) is run by a group of research organisations, each of which undertakes to field annually an agreed module of questions on a chosen topic area. Since 1985, an ISSP module has been included in one of the British Social Attitudes self-completion questionnaires. Each module is chosen for repetition at intervals to allow comparisons both between countries (membership currently stands at 40) and over time. In 2002, the chosen subject was Family and Changing Gender Roles, and the module was carried on the B and C versions of the self-completion questionnaire (Qs.1–24). The ISSP module was supplemented in 2002 by a module of questions on Employment and the Family, asked in Britain, Portugal, France, Norway, Finland, the Czech Republic and Hungary.


    1. Until 1991 all British Social Attitudes samples were drawn from the Electoral Register (ER). However, following concern that this sampling frame might be deficient in its coverage of certain population subgroups, a ‘splicing’ experiment was conducted in 1991. We are grateful to the Market Research Development Fund for contributing towards the costs of this experiment. Its purpose was to investigate whether a switch to PAF would disrupt the time-series — for instance, by lowering response rates or affecting the distribution of responses to particular questions. In the event, it was concluded that the change from ER to PAF was unlikely to affect time trends in any noticeable ways, and that no adjustment factors were necessary. Since significant differences in efficiency exist between PAF and ER, and because we considered it untenable to continue to use a frame that is known to be biased, we decided to adopt PAF as the sampling frame for future British Social Attitudes surveys. For details of the PAF/ER ‘splicing’ experiment, see Lynn and Taylor (1995).

    2. In 1993 it was decided to mount a split-sample experiment designed to test the applicability of Computer-Assisted Personal Interviewing (CAPI) to the British Social Attitudes survey series. CAPI has been used increasingly over the past decade as an alternative to traditional interviewing techniques. As the name implies, CAPI involves the use of lap-top computers during the interview, with interviewers entering responses directly into the computer. One of the advantages of CAPI is that it significantly reduces both the amount of time spent on data processing and the number of coding and editing errors. Over a longer period, there could also be significant cost savings. There was, however, concern that a different interviewing technique might alter the distribution of responses and so affect the year-on-year consistency of British Social Attitudes data.

    Following the experiment, it was decided to change over to CAPI completely in 1994 (the self-completion questionnaire still being administered in the conventional way). The results of the experiment are discussed in The 11th Report (Lynn and Purdon, 1994).

    3. Interview times of less than 20 and more than 150 minutes were excluded as these were likely to be errors.

    4. An experiment was conducted on the 1991 British Social Attitudes survey (Jowell et al., 1992), which showed that sending advance letters to sampled addresses before fieldwork begins has very little impact on response rates. However, interviewers do find that an advance letter helps them to introduce the survey on the doorstep, and a majority of respondents have said that they preferred some advance notice. For these reasons, advance letters have been used on the British Social Attitudes surveys since 1991.

    5. Because of methodological experiments on scale development, the exact items detailed in this section have not been asked on all versions of the questionnaire each year.

    6. In 1994 only, this item was replaced by: Ordinary people get their fair share of the nation's wealth. [Wealth 1]

    7. In constructing the scales, a decision had to be taken on how to treat missing values (‘Don't know’ and ‘Refused’/‘Not answered’). Respondents with more than two missing values on the left-right scale, or more than three missing values on the libertarian-authoritarian or welfarism scales, were excluded from that scale. For respondents with just a few missing values, ‘Don't know’ was recoded to the midpoint of the scale, while ‘Refused’/‘Not answered’ was recoded to that respondent's mean score on their valid items.
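    That missing-value rule might be sketched as follows (an illustrative Python sketch under the rule stated in the footnote; the answer lists are hypothetical):

```python
def clean_scale(scores, max_missing):
    """scores: a respondent's item values, where 'DK' = don't know and
    'NA' = refused / not answered. Returns the cleaned item scores, or
    None if the respondent has too many missing items for this scale."""
    n_missing = sum(1 for s in scores if s in ("DK", "NA"))
    if n_missing > max_missing:
        return None                                   # excluded from this scale
    valid = [s for s in scores if s not in ("DK", "NA")]
    mean_valid = sum(valid) / len(valid)
    # 'DK' -> scale midpoint (3); 'NA' -> respondent's mean on valid items
    return [3 if s == "DK" else mean_valid if s == "NA" else s
            for s in scores]

# Left-right scale (5 items): more than two missing values -> excluded
print(clean_scale([1, "DK", 2, "NA", 2], max_missing=2))
print(clean_scale([1, "DK", "NA", "DK", 2], max_missing=2))   # None
```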

    DeVellis, R.F. (1991), ‘Scale development: theory and applications’, Applied Social Research Methods Series, 26, Newbury Park: Sage.
    Exley, S., Bromley, C., Jarvis, L., Park, A., Stratford, N. and Thomson, K. (2003), British Social Attitudes 2000 survey: Technical Report, London: National Centre for Social Research.
    Jowell, R., Brook, L., Prior, G. and Taylor, B. (1992), British Social Attitudes: the 9th Report, Aldershot: Dartmouth.
    Lynn, P. and Purdon, S. (1994), ‘Time-series and lap-tops: the change to computer-assisted interviewing’, in Jowell, R., Curtice, J., Brook, L. and Ahrendt, D. (eds.), British Social Attitudes: the 11th Report, Aldershot: Dartmouth.
    Lynn, P. and Taylor, B. (1995), ‘On the bias and variance of samples of individuals: a comparison of the Electoral Registers and Postcode Address File as sampling frames’, The Statistician, 44: 173–194.
    Spector, P.E. (1992), ‘Summated rating scale construction: an introduction’, Quantitative Applications in the Social Sciences, 82, Newbury Park: Sage.

    Appendix II: Notes on the Tabulations in Chapters

    • Figures in the tables are from the 2002 British Social Attitudes survey unless otherwise indicated.
    • Tables are percentaged as indicated.
    • In tables, ‘*’ indicates less than 0.5 per cent but greater than zero, and ‘-’ indicates zero.
    • When findings based on the responses of fewer than 100 respondents are reported in the text, reference is generally made to the small base size.
    • Percentages equal to or greater than 0.5 have been rounded up (e.g. 0.5 per cent = one per cent; 36.5 per cent = 37 per cent).
    • In many tables the proportions of respondents answering “Don't know” or not giving an answer are omitted. This, together with the effects of rounding and weighting, means that percentages will not always add to 100 per cent.
    • The self-completion questionnaire was not completed by all respondents to the main questionnaire (see Appendix I). Percentage responses to the self-completion questionnaire are based on all those who completed it.
    • The bases shown in the tables (the number of respondents who answered the question) are printed in small italics. The bases are unweighted, unless otherwise stated.

    Appendix III: The Questionnaires

    As explained in Appendix I, three different versions of the questionnaire (A, B and C) were administered, each with its own self-completion supplement. The diagram that follows shows the structure of the questionnaires and the topics covered (not all of which are reported on in this volume).

    The three interview questionnaires reproduced on the following pages are derived from the Blaise computer program in which they were written. For ease of reference, each item has been allocated a question number. Gaps in the numbering system indicate items that are essential components of the Blaise program but which are not themselves questions, and so have been omitted. In addition, we have removed the keying codes and inserted instead the percentage distribution of answers to each question. We have also included the SPSS variable name, in square brackets, beside each question. Above the questions we have included filter instructions. A filter instruction should be considered as staying in force until the next filter instruction. Percentages for the core questions are based on the total weighted sample, while those for questions in versions A, B or C are based on the appropriate weighted sub-samples. We reproduce first version A of the interview questionnaire in full; then those parts of version B and version C that differ. The three versions of the self-completion questionnaire follow, with those parts fielded in more than one version reproduced in one version only.

    The percentage distributions do not necessarily add up to 100 because of weighting and rounding, or for one or more of the following reasons:

    • Some sub-questions are filtered — that is, they are asked of only a proportion of respondents. In these cases the percentages add up (approximately) to the proportions who were asked them. Where, however, a series of questions is filtered, we have indicated the weighted base at the beginning of that series (for example, all employees), and throughout have derived percentages from that base.
    • At a few questions, respondents were invited to give more than one answer and so percentages may add to well over 100 per cent. These are clearly marked by interviewer instructions on the questionnaires.

    As reported in Appendix I, the 2002 British Social Attitudes self-completion questionnaire was not completed by 16 per cent of respondents who were successfully interviewed. The answers in the supplement have been percentaged on the base of those respondents who returned it. This means that the distribution of responses to questions asked in earlier years are comparable with those given in Appendix III of all earlier reports in this series except in The 1984 Report, where the percentages for the self-completion questionnaire need to be recalculated if comparisons are to be made.
