Basic Research Methods: An Entry to Social Science Research


Gerard Guthrie


    Dedication

    For Karina

    List of Tables and Figures

    List of Boxes

    Preface

    This book is intended for students undertaking a first social science research project in anthropology, education, geography, psychology, sociology or other subjects. You probably have taken undergraduate or postgraduate subject courses and now you are getting into research methods. Here your task is to complete a research paper, dissertation or thesis containing primary data collection and analysis. The report will have to be tight, logical and detailed, presenting the evidence before you and only that evidence.

    The deadline seems quite some time away, but it will arrive impossibly fast. Suddenly your prior study does not seem to help very much. That study was probably about academic issues and theory derived from research findings. Now, the methods behind the findings have to be learned. You understand some of the Why, but the issues you face are practical: What? Where? How? This manual gives practical guidance on some of the basic principles and practices of research.

    Textbooks dealing with research methodology have two approaches. One is a deductive approach of studying research principles before practice is developed. By helping students learn the theory and principles, the assumption is that you will better deduce their application to research projects. The approach in this book is a second, inductive one that points you in the direction of practical experience. From an experiential base, you should be able to build a more advanced conceptual and theoretical understanding of research.

    Three learning objectives are implicit. You should:

    • understand social science research design and methods, including conventional data collection and analysis techniques;
    • apply practical skills in a first research project; and
    • value a systematic approach to problem solving.

    I have aimed for very clear and direct English to make a difficult subject easier to understand. The start of each chapter outlines its contents, and the conclusion is a brief summary supplemented by annotated references for further reading. In between are examples of the practices discussed. Learning exercises are not included within each chapter because the assumption is that you will practise the skills, first, on a research proposal and, then, on your project itself.

    Throughout are practical illustrations of research techniques from research in the Asia-Pacific region. Many of these come from research projects in which I have been involved because they allow me to add more flavour to research techniques by explaining some of the background reasons for taking certain paths. The examples are a starting point for thinking about similar parts of your own work.

    The ideas in here are distilled from experience gained over 40 years of carrying out research in many forms. Most of the research methods in this book are ‘common knowledge’ and in wide use. It is impossible to reference this common knowledge to the many books, experiences and people from which it derives in my case, except for citations about specific borrowings. Books particularly useful when I was learning to research included Zen and the Art of Motorcycle Maintenance by Robert Pirsig (1974), which is a novel that makes some of the metaphysics underlying thinking about research very accessible. Karl Popper's Objective Knowledge (1979) is a formal but understandable work on the philosophy of science that resolves some metaphysical conundrums for researchers. Fred Kerlinger's Foundations of Behavioral Research (1986) is the best advanced methodology text that I have used. Sidney Siegel's seminal Nonparametric Statistics for the Behavioral Sciences (originally published in 1956) is a classic text that very clearly relates tests to particular types of data. Graham Vulliamy, Keith Lewin and David Stephens’ book, Doing Educational Research in Developing Countries (1990), on qualitative research in developing countries, is grounded in practical experience. William Strunk Jr and E.B. White's The Elements of Style (2000) remains a model of brevity and clarity for written English.

    So, this is a ‘how to’ book that provides a map of where it all fits. The book aims to demystify research and to provide clear and direct instruction on carrying out research projects. It takes you through the stages of a project by giving practical guidance on conducting some of the more common types of social science research. The techniques are basic ones, but many masters and doctoral research studies use them. Unlike many introductory texts that focus on abstract methodology, this one is mainly about research in practice, and it therefore contains more than usual information about data analysis and presentation, which is where many projects become stuck. However, advanced research does require an understanding of the methodological principles from which basic techniques derive. You will need to add deeper conceptual understandings as you undertake more research.

    There are good reasons to learn about research and its methods. Not only can you learn more about the world around you, your own thinking can become clearer and more disciplined. Whether or not you become a professional researcher, you will find that understanding research methods helps you better understand scientific information. Your ability to think should also improve, helping give clarity in both your private and professional lives. Whatever you take out of this book, I hope research is as satisfying and interesting for you as it has been for me.

    Gerard Guthrie <gerardguthrie@hotmail.com>

    Acknowledgements

    Some portions of Chapters 1, 3 and 9 of Basic Research Techniques, DER Report No. 55, published by the National Research Institute, Papua New Guinea, have been used with the kind permission of the Institute.

    Glossary

    Abstracting: A higher order intellectual skill that analyses research material for the key principles that might apply to other situations. An abstract presents key concepts, bringing in detail only in outline to show the type of evidence used to support the main ideas. In contrast, a summary shows understanding by representing evenly all parts of an article and includes more detail. See Chapter 3: 3.5.

    Action research: Research concerned with working on particular activities to make improvements. It is especially used to evaluate the success or failure of new projects or to improve workplace practices. See Chapter 1: 1.1 and 1.3.

    Analysis: A higher order intellectual skill that breaks material into parts to explore understandings, doing so through classification, comparison, illustrating and investigating. See Chapter 3: 3.3.

    Applied research: Research concerned with topics that have potential for practical application. The research often starts from scientific curiosity, but is not designed keeping in mind a particular way of implementing the results. See Chapter 1: 1.1 and 1.3.

    Attribute: An attribute is a characteristic of something. It is a concept or a construct expressing the qualities possessed by a physical or mental object of study. See Chapter 8: 8.1.

    Available data: Data from existing sources, usually as documentary evidence in libraries and archives. It can include primary data, such as interviews and personal reports from participants in events, and secondary data, which is reportage based on others’ accounts. Internal criticism involves consideration of the meaning of the data, which relates to reliability. External criticism involves identifying whether the data is genuine, which is a validity issue. See Chapter 9.

    Case study method: A research method undertaking detailed examination of one, or possibly two or three, particular cases in depth and holistically. Ethnography takes a situation as given and particularly tries to find out what it means to the participants. Commonly, case studies are associated with qualitative research, but often they combine different research techniques. The comparative case study method holds variables constant to make comparisons more rigorous. See Chapter 6.

    Causation: Identification of the antecedents that caused an effect. To demonstrate cause-and-effect rigorously requires strictly controlled experimental research. Experiments usually look for a single cause (unicausality), but researchers need to be open to alternate causes, and to multiple causation or equifinality (equifinality can also mean that more than one cause is necessary for an effect to occur). One cause can also have many effects. See Chapter 8: 8.2.

    Control:

    • The management of variables so that their effect can be measured and held constant statistically. See Chapter 8: 8.3.
    • Control groups that do not receive an experimental treatment are matched groups used in experiments to compare with experimental groups that do receive the experimental treatment. See Chapter 8.

    Correlation:

    • Correlations are measures of a relationship between two variables, usually on a scale from +1.00 to −1.00. This describes an association between the variables and does not establish causation unless it is part of an experimental design. See Chapter 16: 16.3.
    • Correlation studies are usually surveys that measure associations between single and multiple variables. They are not experiments and cannot formally establish cause-and-effect, although they can indicate important avenues for follow-up research. See Chapter 8: 8.5.
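    The book itself presents no calculations here, but the correlation scale described above can be sketched briefly. The following is an illustrative example only, using hypothetical data (hours studied and test scores are invented for the purpose), showing how a Pearson correlation coefficient lands on the +1.00 to −1.00 scale:

    ```python
    # Illustrative sketch only (not from the book): a Pearson correlation
    # coefficient for two small hypothetical variables.
    import math

    def pearson_r(x, y):
        """Correlation between two variables on the scale +1.00 to -1.00."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    hours_studied = [1, 2, 3, 4, 5]       # hypothetical data
    test_scores = [52, 55, 61, 64, 70]    # hypothetical data
    r = pearson_r(hours_studied, test_scores)  # close to +1.00
    ```

    A coefficient near +1.00, as here, describes a strong positive association; it says nothing by itself about whether studying caused the scores.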

    Creating: The highest order intellectual skill. It generates new ideas and patterns by constructing, designing, formulating and synthesising. Research requires this level of skill, which is why it is insufficient for literature reviews to just repeat others’ ideas. See Chapter 3: 3.3.

    Ethics: Standards of professional behaviour. See Chapter 2.

    Evaluation:

    • A high order intellectual skill that makes judgements through assessment, critique, judging and rating. See Chapter 3: 3.3.
    • Research concerned with assessing the performance of activities. Formative evaluation is action research occurring during implementation and is orientated to improving performance. Summative evaluations at the end of activities assess whether they have met their objectives. See Chapter 6: 6.3.

    Experimental method: A research method aimed at establishing causation through rigorous quantitative experimental designs. Experiments need to demonstrate that a randomised experimental group exposed to a treatment did change, a randomised matched control group not exposed to the treatment stayed the same and an alternative independent variable did not determine the result. Quasi-experimental (as if experimental) designs apply experimental logic to attempt to control factors at play in field research. They follow the principles of experimental design except that randomisation of control and experimental groups is not possible. Ex post facto (after the event) designs reverse the experimental method by searching backwards from the post-test, case study or survey to infer prior causes logically. See Chapter 8.

    Generalisation: Prediction from a sample to the whole population from which it is drawn. See part of Chapter 1 (1.4) and all of Chapter 5.

    Grounded research: Research that is based in participants’ experience rather than preceding theories. The role is to review the data and see what patterns might emerge rather than to review theory, deduce hypotheses and use data to test the hypotheses. See Chapter 4: 4.1.

    Hypotheses: Informed guesses about the answer to a research problem. A research hypothesis predicts a positive relationship between variables so that the hypothesis can be tested and either accepted or rejected, perhaps defining it further through an operational hypothesis. Deductive hypotheses are derived beforehand from existing theory. Inductive hypotheses are derived from grounded data. A hypothesis that is not supported is rejected, refuted or falsified. A hypothesised relationship cannot be proven absolutely, so the operational test is for its non-existence using the null hypothesis, which is a prediction that no difference from the expected will be found. Type I errors are false positive results (that is, incorrect rejection of the null hypothesis). Type II errors are false negative results (that is, incorrect acceptance of the null hypothesis). See Chapters 4 and 14.

    Informed consent: Agreement to participate in research based on knowledge of the research and its aims. See Chapter 2.

    Interviews: A data collection technique where the researcher asks questions directly of the interviewee. Unstructured interviews generate qualitative data by raising issues in conversational form. Semi-structured interviews use interview guides so that information from different interviews is directly comparable. Focus groups are a form of semi-structured group interview. Structured interviews use formal standardised questionnaires. Interviewer bias is a risk, especially in ethnographic case studies where the researcher might identify with the participants and not assess data objectively. See part of Chapter 6 (6.6) and all of Chapter 11.

    Literature review: A major component of the research proposal. It is an analysis of relevant publications that sets the context for and defines the research topic. The review is always oriented towards narrowing the field to provide a research problem that can guide operational research. See Chapter 3.

    Measurement scales: Technically defined methods for classifying or categorising data (whether words or numbers) on the binary, nominal, ordinal, interval and ratio scales. See Chapter 14: 14.1.

    Metaphysics: The study of the nature of reality. The idealist position is that the world exists only in the mind. The materialist position is that it exists outside the mind. The doubting sceptic view is that nothing can be proved. A commonsense view is realism—acceptance that the real world exists, even though this can be neither demonstrated nor refuted. Philosophical pragmatism can be used to synthesise these views by treating knowledge as useful in terms of its practical effect. See Chapter 4.

    Mixed methods: Combination of qualitative and quantitative research techniques to cancel out their weaknesses. Triangulation is a particular application that uses different techniques to study an issue from different angles. A further application is meta-analysis, that is, analysis of large numbers of similar studies to see if an overall pattern emerges. See Chapter 4: 4.5 and 4.6.

    Non-response rate: The percentage of people in a sample who could not be contacted, had moved, refused to answer questions or could not answer for other reasons. See Chapter 5: 5.6.

    Objective research (objectivity): Research that treats the physical and social worlds as objects that we can sense in some direct form, for example, by seeing them. The objective social world consists of people, for example, as counted in censuses. Subjective research (subjectivity) deals with mental constructs that we cannot directly see but which we infer from what people say about them or from various forms of measurement such as attitude scales. Subjective in this sense does not mean personal opinion but research of the subjective. See Chapter 4.

    Observation: A research technique where the researcher collects primary data by direct observation. Structured observation typically uses observation schedules in formal settings. Ethnography takes extended periods in natural settings to learn in detail about particular cultures and the meaning of those cultures to their members. Participant observation means that the researcher takes part in the research situation as a member of the group. Non-participant observation requires the researcher to be present but not to participate in group actions. Hidden observation occurs when the observer is out of sight. See Chapter 10.

    Paradigm: A system of intellectual thought that constitutes a way of viewing reality for the researchers who share it. Paradigms such as positivist and post-positivist research methodologies are social constructs, that is, sets of social beliefs. This viewpoint is consistent with a subjectivist school called phenomenology, which holds that all researchers are actors whose belief systems are integral to their research. See Chapter 4.

    Participatory research: Participatory research considers that research is a political process, that the researchers’ own constructs or ways of thinking affect their behaviour, and that this behaviour is not an entitlement from independent scientific rules that override other considerations. In this view, research should be an ethical process of reciprocal social action in which researchers and participants are on an equal footing. See Chapter 2: 2.6.

    Pilot study: A form of restricted case study, for example, to trial a draft questionnaire. See Chapter 6: 6.2.

    Plagiarism: Cheating through failure to give acknowledgement by copying material from the literature without citation, or by copying the work of other students. See Chapter 3: 3.7.

    Policy research: Research based on practical issues of interest to those who make decisions about them. See Chapter 1: 1.1 and 1.3.

    Pragmatism: A school of methodology that views knowledge as useful in terms of its practical effect. It puts prime emphasis on research objectives and what is useful in achieving them. Pragmatic preference is the policy concern for practical action, which proceeds through use of the best-tested alternative, that is, the option that has the most information available to support it at the time when action has to be taken. Theoretical preference is the scientific quest for truth, especially true explanatory theories, which proceeds through the process of falsifiability. See Chapter 4 and Chapter 19: 19.4.

    Probabilities: Mathematical predictions about the likelihood of an event occurring. In science, all predictions are based on probabilities. The social sciences usually set 95 per cent as the acceptable likelihood of an outcome occurring. Statistical analysis does not express the outcome of hypothesis testing as levels of probability (the chances of being right), but as levels of confidence (the chances of not being wrong). See Chapter 4: 4.1 and Chapter 14: 14.3.
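    The 95 per cent convention above can be illustrated with a small worked example. This sketch is not from the book: it uses a hypothetical coin-tossing test, where the null hypothesis is that the coin is fair, and asks whether an observed result is extreme enough to fall below the 5 per cent (0.05) level:

    ```python
    # Illustrative sketch only (not from the book): the 95 per cent
    # convention applied to a hypothetical null hypothesis of a fair coin.
    from math import comb

    def prob_at_least(k, n, p=0.5):
        """Binomial probability of k or more successes in n trials."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Probability of 9 or more heads in 10 tosses of a fair coin:
    p_value = prob_at_least(9, 10)   # about 0.011
    significant = p_value < 0.05     # True: below the 5 per cent level

    # By contrast, 8 or more heads (about 0.055) just misses the level:
    not_significant = prob_at_least(8, 10) > 0.05  # True
    ```

    The convention is thus a confidence threshold: a result is reported as significant only when the chance of seeing it under the null hypothesis is below 5 per cent.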

    Pure research: Research that is concerned solely with scientific outcomes. The purpose is to expand knowledge and to discover new things because they are of interest to the scientist and to science. See Chapter 1: 1.1 and 1.3.

    Qualitative research:

    • Research that focuses primarily on the subjective meaning of attributes to individuals or groups of people. In contrast, quantitative research primarily focuses on the objective measurement of variables. See Chapter 4 and Section 3.
    • Qualitative data is information represented usually as words not numbers, while quantitative data is information represented as numbers. See Chapters 15 and 16.

    Questionnaires: A research technique where the researcher collects primary data by asking questions and filling out questionnaire forms. Questionnaires are one of many techniques that can be used to collect data using the survey method. See Chapter 12.

    Randomisation:

    • Allocation of individuals to control and experimental groups randomly so that their composition is equalised. The assumption in randomisation is that all characteristics, measured or not, will be assigned randomly between the groups and thus, they should not have a significant effect on the results. See Chapter 8.
    • Random events are ones where each outcome cannot be predicted individually. See Chapter 14: 14.4.

    Rating: Exercise of judgement in numerical coding, requiring inferences to be drawn about the meaning of qualitative data. Low inference judgements require little interpretation by the observer or scorer. High inference judgements require considerable judgement by the scorer about actions being recorded. Inter-rater agreement requires independent and competent judges to agree on scoring and interpretation of the data. See Chapter 10: 10.4.

    Relevance: The relevance of research is established by its usefulness to consumers of the results. See Chapter 1: 1.4.

    Reliability: The ability to replicate the same research results using the same techniques, that is, to provide results that other researchers could repeat. See Chapter 1: 1.4.

    Research methodology: Refers to the broader principles of research underpinned by philosophical rationales. Positivism is a quantitative methodology that studies the world and people in it as objective things by direct observation according to strict rules. In this paradigm, research is about the scientific rules that researchers follow. In contrast, post-positivism views knowledge as subjective, value laden and not based on cause-and-effect. In this paradigm, research is what researchers do. See also, pragmatism. See Chapter 4.

    Research methods: Key principles of research design, such as the case study method. Research techniques are particular approaches for collecting and analysing data, such as observation. Research tools are resources used in conducting research, such as computers. See Chapter 1, Section 2.

    Research problem: The first stage of research requires a simple, clear and analytical formulation of the topic. Theoretical questions are relevant to the development of science, while practical problems deal with real-world issues. See Chapter 1 (1.2) and Chapter 3.

    Sampling: The total group to be researched is the population or universe, which is the group to be generalised about. The usual focus of study is a subgroup or sample. Selecting the sample group is sampling. The sample fraction is the sample as a percentage of the population. A random sample gives every member of the population an equal chance of selection from a sample frame, which is a list of all the members in the population. A haphazard sample is a non-random sample, such as a case study. Structured samples include list, proportionate and disproportionate stratified, area, grid and cluster samples. They may be single-stage, two-stage or multi-stage. See Chapter 5.
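    The sampling terms above can be sketched concretely. This is an illustrative example only, with an invented population of 500: the sample frame lists every member, the random sample gives each an equal chance of selection, and the sample fraction is the sample as a percentage of the population:

    ```python
    # Illustrative sketch only (not from the book): a sample frame,
    # a random sample, and the sample fraction, for a hypothetical
    # population of 500 people.
    import random

    sample_frame = [f"person_{i}" for i in range(1, 501)]  # list of all members

    random.seed(42)  # fixed seed so the illustration is repeatable
    sample = random.sample(sample_frame, 50)  # equal chance for every member

    sample_fraction = len(sample) / len(sample_frame) * 100  # 10 per cent
    ```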

    Semantic differential: Polar opposite adjectives, such as ‘high-low’, used in defining variables and formulating answer scales in questionnaires and tests. See Chapter 8 (8.1) and Chapter 12 (12.2).

    Statistics: Numerical representations of data. Descriptive statistics such as percentages and means summarise numbers and can be represented in graphs. Inferential tests analyse statistical significance for testing hypotheses and drawing inferences about the strength of findings. Parametric tests are based on an assumption of a normal distribution in the data. Non-parametric tests do not make an assumption of normality. They are especially useful with small samples. See Chapter 16.
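    Descriptive statistics of the kind mentioned above are simple to compute. As a minimal sketch, not from the book and using invented ratings data, here is a mean and a percentage summarising a small dataset:

    ```python
    # Illustrative sketch only (not from the book): descriptive statistics
    # for a small hypothetical set of ratings out of 5.
    scores = [4, 5, 3, 5, 4, 2, 5, 4]            # hypothetical data

    mean = sum(scores) / len(scores)             # arithmetic mean: 4.0

    top_ratings = sum(1 for s in scores if s == 5)
    percentage_top = top_ratings / len(scores) * 100  # 37.5 per cent rated 5
    ```

    Inferential tests go a step further, asking whether such summaries could have arisen by chance in a sample drawn from a larger population.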

    Survey method: A research method used for developing generalisations about populations through sampling. Surveys are useful mainly for describing patterns in large groups rather than in-depth analysis of individuals’ views. Censuses are the most complete type of survey. Cross-sectional surveys represent a particular population at a particular time. Longitudinal surveys repeat cross-sectional surveys as trend, cohort and panel studies. See Chapter 7.

    Testing: A research technique where the researcher collects primary data through some form of test, usually written. Criterion-referenced tests aim to show whether students have achieved a given learning objective, with performance on a test item treated as a behaviour that demonstrates learning. In mastery tests, the pass mark is usually set at 80 per cent of the questions. Norm-referenced tests aim to find out who scores higher or lower. Performance can be scaled to represent a normal distribution. See Chapter 13.

    Validity: The correctness of data (sometimes called internal validity). External validity is the extent to which research can be generalised to other situations (also called ecological validity). Face validity is the researcher's judgement. Construct validity focuses on the property that a test measures based on theoretical interest in different types of human behaviour. Criterion-related validity predicts subsequent performance. Content validity focuses on the adequacy with which a test samples particular knowledge. See Chapter 1 (1.4) and Chapter 13 (13.3).

    Variables: Variables use numerical values to measure attributes. A variable is a quantity that expresses a quality in numbers so that it can be measured more precisely. An independent variable is a presumed cause introduced under controlled conditions during experiments as a treatment to which an experimental group is exposed. A dependent variable is the presumed effect measured before (pre-test) and after (post-test) the treatment to see whether change occurs. A background variable is an antecedent that could affect the study. An intervening variable is a measurable event between the treatment and the post-test measurement that might affect the outcome. An extraneous variable is an uncontrolled event that might affect the outcome during a study. Alternative independent variables suggest different causes from the independent variable. A research study can be univariate (studies a single variable), bivariate (studies two variables) or multivariate (studies three or more). All variables need to be unidimensional (that is, capable of being described by a semantic differential to measure one attribute only). Sample variables should be tested statistically against the equivalent population parameters to see if the sample reliably represents the population. See Chapter 8.

    Weighting: Adjustment of disproportionate samples before data analysis to represent the population proportions correctly. See Chapter 5: 5.7.
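    The weighting adjustment described above can be shown with a worked example. This sketch is not from the book: the urban and rural population sizes, sample sizes and stratum means are all invented. Each stratum's weight is its population share divided by its sample share, so a disproportionately over-sampled stratum counts for less:

    ```python
    # Illustrative sketch only (not from the book): re-weighting a
    # disproportionate stratified sample to the population proportions.
    population = {"urban": 8000, "rural": 2000}  # hypothetical stratum sizes
    sampled = {"urban": 100, "rural": 100}       # disproportionate sample

    pop_total = sum(population.values())
    sample_total = sum(sampled.values())

    # weight = population share / sample share for each stratum
    weights = {s: (population[s] / pop_total) / (sampled[s] / sample_total)
               for s in population}
    # urban weight 1.6, rural weight 0.4: rural respondents were
    # over-sampled, so each now counts for less in the analysis

    # A weighted mean of some hypothetical measured variable:
    means = {"urban": 60.0, "rural": 40.0}
    weighted_mean = (sum(means[s] * sampled[s] * weights[s] for s in population)
                     / sum(sampled[s] * weights[s] for s in population))  # 56.0
    ```

    Without weighting, the two strata would each contribute half of the result (a mean of 50.0); after weighting, the result (56.0) reflects the 80/20 population split.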

    References

    American Psychological Association. (2002). The Ethical Principles of Psychologists and Code of Conduct. Washington: APA.
    American Sociological Association. (1999). Code of Ethics. New York: ASA.
    Anderson, L. and D.Krathwohl (eds). (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.
    Arellano, E., T.Barcenal, P.Bilbao, M.Castellano, S.Nichols and D.Tippins. (2001). ‘Using Case-based Pedagogy in the Philippines: A Narrative Enquiry’, Research in Science Education, 31 (2): 211–26. http://dx.doi.org/10.1023/A:1013162730036
    Ary, D., L.Jacobs and A.Razavieh (1996). Introduction to Research in Education. Fort Worth: Harcourt Brace.
    Babbie, E. (2007). The Practice of Social Research, 11th edition. Belmont: Wadsworth.
    Badgett, J. and E.Christmann. (2009). Designing Elementary Instruction and Assessment: Using the Cognitive Domain. Thousand Oaks: Sage.
    Beeby, C.E. (1966). The Quality of Education in Developing Countries. Cambridge: Harvard University Press.
    Best, J. and J.Kahn. (2005). Research in Education, 10th edition. Needham Heights: Allyn & Bacon.
    Bhatia, B. (2006). ‘Dalit Rebellion against Untouchability in Chakwada, Rajasthan’, Contributions to Indian Sociology, 40 (1): 29–61. http://dx.doi.org/10.1177/006996670504000102
    Billingham, J. (2005). Editing and Revising Text. New Delhi: Oxford University Press.
    Boorer, D. (2004). ‘Andragogy: The Lecturers Speak’, Papua New Guinea Journal of Education, 40 (1): 24–30.
    Campbell, D. and J.Stanley. (1966). Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
    Cozby, P. (2009). Methods in Behavioral Research, 10th edition. Boston: McGraw Hill.
    Dasgupta, S., M.Huq, M.Khaliquzzaman, K.Pandey and W.Wheeler. (2004). ‘Indoor Air Quality for Poor Families: New Evidence from Bangladesh’, Policy Research Working Paper 3393, World Bank, Washington.
    Department of Education. (1975). ‘Duty Statement, Regional Secondary Inspector (Position No. SE 7–15)’, Port Moresby.
    Department of Education. (1980a). ‘Agendas and Minutes of Inspectors’ Conferences’ [various titles], Port Moresby, 24–27 March.
    Department of Education. (1980b). ‘Structure Chart of the National Education Department’, Papua New Guinea Education Gazette, 14 (5): 112.
    Department of Education. (1980c). ‘Minutes of RSI Conference’, 24–27 March 1980, Wewak.
    Desai, V. and R.Potter (eds). (2006). Doing Development Research. New Delhi: Vistaar.
    Fien, J., D.Yencken and H.Sykes (eds). (2002). Young People and the Environment: An Asia-Pacific Perspective. Dordrecht: Kluwer Academic Publishers. http://dx.doi.org/10.1007/0-306-47721-1
    Frame, Janet. (1961). Faces in the Water. New York: Braziller.
    Frame, Janet. (1989). An Autobiography (Collected Edition). Auckland: Century Hutchinson. Reprinted in 2008 as An Angel at My Table. London: Virago.
    Gall, M., J.Gall and W.Borg. (2006). Educational Research: An Introduction, 8th edition. Needham Heights: Allyn & Bacon.
    Gaur, A. and S.Gaur. (2009). Statistical Methods for Practice and Research: A Guide to Data Analysis Using SPSS, 2nd edition. New Delhi: Response.
    Gillham, B. (2005). Research Interviewing: The Range of Techniques. Maidenhead: Open University Press.
    Guthrie, G. (1977). ‘The Tribal System of Appropriation in Aboriginal Australia’, Pacific Viewpoint, 18 (2): 149–66.
    Guthrie, G. (1980). ‘Stages of Educational Development? Beeby Revisited’, International Review of Education, 26 (4): 411–38. http://dx.doi.org/10.1007/BF01421422
    Guthrie, G. (1982). ‘Reviews of Teacher Training and Teacher Performance in Developing Countries: Beeby Revisited (2)’, International Review of Education, 28 (3): 291–306. http://dx.doi.org/10.1007/BF00597895
    Guthrie, G. (1983a). An Evaluation of the Secondary Teacher Training System. Report No. 44, Educational Research Unit, University of Papua New Guinea, Port Moresby.
    Guthrie, G. (1983b). The Secondary Inspectorate. Report No. 45, Educational Research Unit, University of Papua New Guinea, Port Moresby.
    Guthrie, G. (1984). ‘Secondary Teacher Training Effectiveness in Papua New Guinea’, Studies in Educational Evaluation, 10 (2): 205–8. http://dx.doi.org/10.1016/S0191-491X%2884%2980020-6
    Guthrie, G. (ed.). (1985). Basic Research Techniques. DER Report No. 55, National Research Institute, Port Moresby.
    Guthrie, G. (ed.). (2007a). Community Crime Surveys: Interviewer Training Manual. Port Moresby: Justice Advisory Group.
    Guthrie, G. (ed.). (2007b). Highlands Highway Crime Study 2005, Special Publication No. 42, National Research Institute, Port Moresby.
    Guthrie, G. (ed.). (2008). Urban Crime Victimisation in Papua New Guinea, 2004–2008: A Synthesis. Port Moresby: Justice Advisory Group.
    Guthrie, G. and J.Laki (2007). Yumi Lukautim Mosbi: Impact Evaluation 2006, Justice Advisory Group, Port Moresby. Available at http://www.lawandjustice.gov.pg/resources/documents/YLM_IMPACT_EVALUATION_REPORT_FINAL_2201071.pdf/ (accessed on 26 February 2010).
    Guthrie, G., F.Hukula and J.Laki. (2007). Bougainville Community Crime Survey, 2006, Special Publication No. 52, National Research Institute, Port Moresby.
    Henn, M., M.Weinstein and M.Foard. (2006). A Short Introduction to Social Research. New Delhi: Vistaar.
    Hennink, M. and S.Clements. (2004). ‘Impact of Franchised Family Planning Clinics in Urban Poor Areas in Pakistan’, Applications & Policy Working Paper A04/16, Southhampton Statistical Sciences Research Institute, University of Southampton.
    Hewison, K. (2004). ‘Thai Migrant Workers in Hong Kong’, Journal of Contemporary Asia, 34 (3): 318–35. http://dx.doi.org/10.1080/00472330480000131
    Husen, T., L.Saha and R.Noonan. (1978). ‘Teacher Training and Student Achievement in Less Developed Countries’, Staff Working Paper No. 310, World Bank, Washington.
    International Labour Office (ILO). (2009). ‘Evaluation: Sri Lanka: Integrated Rural Accessibility Planning Project (IRAP)—A Component of UNOPS’ Community Access Programming’, ILO Evaluation Summaries, ILO, Geneva.
    Iredale, R., F.Guo and S.Rosario (eds). (2003). Return Migration in the Asia Pacific. Cheltenham: Edward Elgar.
    Israel, D. (2008). Data Analysis in Business Research: A Step-by-Step Nonparametric Approach. New Delhi: Response.
    Kanji, G. (2006). 100 Statistical Tests, 3rd edition. New Delhi: Vistaar.
    Karuppusami, G. and R.Gandhinathan. (2007). ‘Web-based Measurement of the Level of Implementation of TQM in Indian Industries’, Total Quality Management, 18 (4): 379–91. http://dx.doi.org/10.1080/14783360701231351
    Kerlinger, F. (1986). Foundations of Behavioral Research, 3rd edition. Orlando: Harcourt Brace.
    Kline, T. (2005). Psychological Testing: A Practical Approach to Design and Evaluation. New Delhi: Vistaar.
    Krejcie, R. and D.Morgan. (1970). ‘Determining Sample Size for Research Activities’, Educational and Psychological Measurement, 30 (3): 607–10.
    Kreuger, R. and M.Casey. (2009). Focus Groups: A Practical Guide for Applied Research, 4th edition. Thousand Oaks: Sage.
    Kvale, S. (2008). Doing Interviews. London: Sage.
    Leedy, P. and J.Ormrod. (2010). Practical Research: Planning and Design, 9th edition. Needham Heights: Allyn & Bacon.
    Maxwell, T. (ed.). (1992). University of New England Thesis and Dissertation Guide. Armidale: University of New England.
    McKinnon, K.R. (1968). ‘Education in Papua and New Guinea: The Twenty Post-War Years’, Australian Journal of Education, 12 (1): 4–5.
    McLachlan, B.A. (1965). ‘The Role of the District Inspector in Secondary Education 1965–70’, paper presented on 10 March 1965 to the Senior Education Officers’ Conference, Port Moresby.
    McNamee, M. and D.Bridges. (2002). The Ethics of Educational Research. Oxford: Blackwell.
    Meadmore, P. (1978). ‘The Decline of Formalism in Queensland Primary Education, 1950–70’, The Forum of Education, 37 (1): 27–34.
    Morrow, M., Q.Nguyen, S.Caruana, B.Biggs, N.Doan and T.Nong. (2009). ‘Pathways to Malaria Persistence in Remote Central Vietnam: A Mixed Method Study of Health Care and the Community’, BMC Public Health, 9 (1): 85. Available at http://www.biomedcentral.com/1471-2458/9/85 (accessed on 26 February 2010). http://dx.doi.org/10.1186/1471-2458-9-85
    Mounsey, C. (2005). Essays and Dissertations. New Delhi: Oxford University Press.
    Nepal, N. and A.Calves. (2004). ‘Income Generation Programmes in Nepal: Participants’ Perspective’, paper presented on 14 August 2004 at the Annual Meeting of the American Sociological Association, San Francisco.
    Papua Annual Report (1940–41). Melbourne and Canberra: Government Printer.
    Papua Annual Report (1947–48). Melbourne and Canberra: Government Printer.
    Parten, M. (1950). Surveys, Polls, and Samples. New York: Harper.
    Perecman, E. and S.Curran. (2006). A Handbook for Social Science Field Research: Essays and Bibliographic Sources on Research Design and Methods. Thousand Oaks: Sage.
    Pirsig, R. (1974). Zen and the Art of Motorcycle Maintenance. London: Bodley Head.
    Popper, K. (1979). Objective Knowledge: An Evolutionary Approach, revised edition. Oxford: Oxford University Press.
    Punch, K. (2006). Developing Effective Research Proposals, 2nd edition. London: Sage.
    Ralph, R.C. (1965). ‘The Role of the Inspector of Schools in the System of Educational Administration in Papua New Guinea’, paper presented on 10 March 1965 to the Senior Education Officers’ Conference, Port Moresby.
    Scheyvens, R. and D.Storey (eds). (2003). Development Fieldwork: A Practical Guide. London: Sage.
    Siegel, S. (1956). Nonparametric Statistics for the Behavioral Sciences. New York: McGraw Hill.
    Strunk, W., Jr and E.B.White. (2000). The Elements of Style, 4th edition. Needham Heights: Allyn & Bacon.
    Turney, C. (1970). The Rise and Decline of an Australian Inspectorate. Melbourne Studies in Education. Melbourne: Melbourne University Press.
    Vulliamy, G., K.Lewin and D.Stephens. (1990). Doing Educational Research in Developing Countries: Qualitative Strategies. London: Falmer.
    Walliman, N. (2005). Your Undergraduate Dissertation: The Essential Guide for Success. London: Sage.
    Weeks, S. (1985). ‘The Case Study Method’, in G.Guthrie (ed.), Basic Research Techniques, DER Report No. 55. National Research Institute, Port Moresby: 50–59.

    About the Author

    Gerard Guthrie has been Managing Director of Guthrie Development Consultancy Pty Ltd., Canberra since 2004. He holds a Doctor of Philosophy in Education. An educationalist with around 40 years of experience, he has had a career in two main parts: one as an academic and the other as a governmental aid official. He has worked in universities, aid management and aid consultancy in Asia, Africa and the South Pacific, particularly China and Papua New Guinea, and also, briefly, in Bangladesh, Bhutan, Botswana, Indonesia, Japan, Kenya, Malaysia, Mauritius, Nepal, Tanzania, Zambia and Zimbabwe.

    Dr Guthrie has a background in development theory and practice and in social science research. He has postgraduate degrees in geography, social science and education, with over 180 publications and papers to his name. His own research has included major projects on migration, teacher education and crime victimisation, as well as aid activity design and evaluation. His publications include: Cherbourg: A Queensland Aboriginal Reserve (1977); Mt. Hagen Community Crime Survey, 2006, co-authored with F. Hukula and J. Laki (2007); Urban Crime Victimisation in Papua New Guinea, 2004–2008: A Synthesis (2008); and The Progressive Education Fallacy in Developing Countries: In Favour of Formalism (2010).
