Basics of Structural Equation Modeling
Publication Year: 1998
DOI: http://dx.doi.org/10.4135/9781483345109
Subject: Structural Equation Modeling, Quantitative Techniques for Business & Management Research

Part 1: Background
 Chapter 1: What does it Mean to Model Hypothesized Causal Processes with Nonexperimental Data?
 Methods for Structural Equation Analyses
 Overview
 Chapter 2: History and Logic of Structural Equation Modeling
 History
 Sewell Wright
 Path Analysis in the Social Sciences
 Unidirectional Flow Models
 Moving beyond Path Analysis in Structural Equation Modeling Research
 Why Use Structural Equation Modeling Techniques?
Part 2: Basic Approaches to Modeling with Single Observed Measures of Theoretical Variables
 Chapter 3: The Basics: Path Analysis and Partitioning of Variance
 Logic of Correlations and Covariances
 Decomposing Relationships between Variables into Causal and Noncausal Components
 Direct Causal Effects
 Indirect Causal Effects
 Noncausal Relationships Due to Shared Antecedents
 Noncausal Unanalyzed Prior Association Relationships
 Approaches for Decomposing Effects
 Determining Degrees of Freedom of Models
 Presenting Partial Regression and Partial Correlation as Path Models
 Partial Regression
 Partial Correlation
 Peer Popularity and Academic Achievement: An Illustration
 Chapter 4: Effects of Collinearity on Regression and Path Analysis
 Regression and Collinearity
 Illustrating Effects of Collinearity
 Confidence Intervals for Correlations
 Ridge or Reduced Variance Regression
 Chapter 5: Effects of Random and Nonrandom Error on Path Models
 Measurement Error
 Background
 Specifying Relationships between Theoretical Variables and Measures
 Random Measurement Error
 Nonrandom Error
 Method Variance and Multitrait-Multimethod Models
 Method Variance
 Additive Multitrait-Multimethod Models
 Nonadditive Multitrait-Multimethod Models
 Summary
 Chapter 6: Nonrecursive and Longitudinal Models: Where Causality Goes in More than One Direction and where Data are Collected over Time
 Models with Multidirectional Paths
 Logic of Nonrecursive Models
 Estimation of Nonrecursive Models
 Model Identification
 Longitudinal Models
 Logic Underlying Longitudinal Models
 Terminology of Panel Models
 Identification
 Stability
 Temporal Lags in Panel Models
 Growth across Time in Panel Models
 Stability of Causal Processes
 Effects of Excluded Variables
 Correlation and Regression Approaches for Analyzing Panel Data
 Summary
Part 3: Factor Analysis and Path Modeling
 Chapter 7: Introducing the Logic of Factor Analysis and Multiple Indicators to Path Modeling
 Factor Analysis
 Logic of Factor Analysis
 Exploratory Factor Analysis
 Confirmatory Factor Analysis
 Use of Confirmatory Factor Analysis Techniques
 Constraining Relations of Observed Measures with Factors
 Confirmatory Factor Analysis and Method Factors
 The Basic Confirmatory Factor Analysis Path Model for Multitrait-Multimethod Matrices
 Confirmatory Factor Analysis Approaches to Multitrait-Multimethod Matrices and Model Identification
 Summary of Confirmatory Factor Analysis and Multitrait-Multimethod Models
 Initial Testing of Plausibility of Models: Consistency Tests
 Number of Indicators and Consistency Tests
 Costner's Original Consistency Model
Part 4: Latent Variable Structural Equation Models
 Chapter 8: Putting it All Together: Latent Variable Structural Equation Modeling
 The Basic Latent Variable Structural Equation Model
 The Measurement Model
 Reference Indicators
 The Structural Model
 An Illustration of Structural Equation Models
 Model Specification
 Identification
 Equations and Matrices
 Basic Ideas Underlying Fit/Significance Testing
 Individual Parameter Significance
 Model Fitting
 The Measurement Model
 The Structural Model
 The Variance/Covariance Matrices
 Chapter 9: Using Latent Variable Structural Equation Modeling to Examine Plausibility of Models
 Example 1: A Longitudinal Path Model
 Example 2: A Nonrecursive Multiple-Indicator Model
 Example 3: A Longitudinal Multiple-Indicator Panel Model
 Chapter 10: Logic of Alternative Models and Significance Tests
 Nested Models
 Tests of Overall Model Fit
 Absolute Indexes
 Relative Indexes
 Adjusted Indexes
 Fit Indexes for Comparing Non-Nested Models
 Setting up Nested Models
 Why Models may Not Fit
 Illustrating Fit Tests
 Chapter 11: Variations on the Basic Latent Variable Structural Equation Model
 Analyzing Structural Equation Models when Multiple Populations are Available
 Overview of Methods
 Comparing Processes across Samples
 Testing Plausibility of Constraints
 Constraints in the Measurement Model
 Constraints in the Structural Model
 When and how to Impose Equality Constraints
 Second-Order Factor Models
 All-Y Models
 Chapter 12: Wrapping up
 Criticisms of Structural Equation Modeling Approaches
 “Internal” Critics
 “External” Critics
 Emerging Criticisms
 Post Hoc Model Modification
 Topics not Covered
 Power Analysis
 Nonlinear Relationships
 Alternative Estimation Techniques
 Analysis of Noncontinuous Variables
 Adding Analysis of Means
 Multilevel Structural Equation Modeling
 Writing up Papers Containing Structural Equation Modeling Analysis
 Selecting a Computer Program to do Latent Variable Structural Equation Modeling

Dedication
To members of my family, who made this project possible, I dedicate this book:
my parents, George and Helen; my wife, Barbara; and our children, Kristie and Dan.
Copyright
Copyright © 1998 by Sage Publications, Inc.
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.
For information:
SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
Email: order@sagepub.com
SAGE Publications Ltd
1 Oliver's Yard
55 City Road
London EC1Y 1SP
SAGE Publications India Pvt Ltd
B42, Panchsheel Enclave
Post Box 4109
New Delhi 110 017
Printed in the United States of America
Library of Congress Cataloging-in-Publication Data
Maruyama, Geoffrey M.
Basics of structural equation modeling/by Geoffrey M. Maruyama.
p.cm.
Includes bibliographical references and index.
ISBN 0-8039-7408-6 (cloth).—ISBN 0-8039-7409-4 (pbk.)
1. Multivariate analysis. 2. Social sciences—Statistical methods. I. Title.
QA278.M374 1997
519.5′35—dc21 97-4839
03 10 9 8 7 6
Acquiring Editor: C. Deborah Laughton
Editorial Assistant: Eileen Carr
Production Editor: Diana E. Axelsen
Production Assistant: Denise Santoyo
Typesetter/Designer: Marion Warren
Cover Designer: Candice Harman
Print Buyer: Anna Chin
Preface
This book is intended for researchers who want to use structural equation approaches but who feel that they need more background and a somewhat more “gentle” approach than has been provided by books published previously. From my perspective as a longtime user of structural equation methods, many individuals who try to use these techniques make fundamental errors because they lack understanding of the underlying roots and logic of the methods. They also can make “silly” mistakes that not only frustrate them but also invalidate their analyses because writers have assumed that readers would understand basics of the methods (e.g., what are called reference indicators).
Because I came to these techniques fairly early (in the early 1970s), what now is history was what was current then. I learned about methods such as path analysis as contemporary methods, and they evolved into the current methods over time. I hope that I effectively transmit the strengths and limitations of these techniques as well as the ways in which they led to current methods.
I began teaching about these techniques in the spring of 1977 by default, for I was the only person in my department who had used the latent variable methods and understood them, and I also was one of the few who had access to the programs. For years, I patched together my course and looked for a book that I liked. Finally, I decided to write about what I taught. The product of that decision is this book.
I wrote this not as a statistician on the cutting edge of the approaches but rather as a user with strong interest in methods. The book reflects the way in which I came to these methods, namely, beginning with theory and a data set from a school desegregation study and looking for methods that could use the nonexperimental data from that project to examine plausibility of different theoretical views. In fact, I practically advertise the substantive problems that led me to these methods, for they appear repeatedly in examples and illustrations. As I will say again in the text, I do not use my data as examples because they are great data sets or because the models fit perfectly. They are not and do not. At the same time, they are the kinds of data that researchers find themselves having, and the substantive problems are ones that are accessible to readers. If they are as accessible as I think they are, then I am likely to get notes and comments from readers about the alternative models they generated from the data sets!
Throughout the book, I tried to present topics and issues in a way that will help readers conceptualize their models. In particular, I tried to spend time discussing logic of alternative approaches. One example is the discussion of nonrecursive versus longitudinal models. Although I have my preference, it is a relative one rather than an absolute one, and my own ultimate decision in any instance would be driven by a combination of methodological and conceptual issues.
As I progressed into the writing of the book, I found out quickly how hard it is to describe some of the more complex methods in simple terms. I suspect that there are instances in which I did not manage to stay “gentle” despite my best intentions. I would appreciate feedback from readers about where the complexity is too great or where the descriptions are unclear.
The book is divided into four parts: background, singlemeasure approaches, factor analysis and multiple indicators, and latent variable structural equation approaches. Readers with strong quantitative skills and backgrounds should be able to sample selectively from the first three parts of the book (Chapters 1–7) and focus on the remaining chapters, which present latent variable structural equation modeling. All readers, however, should be sure that they understand the logic underlying the methods. Furthermore, they should look at the examples and illustrations, for those make concrete many of the issues presented in more abstract ways.
Finally, once readers get to the end and go on to try to use the techniques, they should be able to go back to the illustrations and compare their analyses to those I report. I have appended LISREL control statements for most of the examples. Comments and queries can be sent by email to geofmar@vx.cis.umn.edu.
Acknowledgments
First, I have to thank my former students, who used earlier drafts of this book in my class as I refined it. Their feedback was very helpful, and I hope it results in a book that will be kind to future students. Second, thanks are due to those who helped me learn structural equation modeling (SEM) techniques: Norman Miller, my graduate school adviser who gave me my first SEM problem and helped me think about conceptual issues; Norman Cliff, who happened to have the Educational Testing Service publication on LISREL and focused me away from multiple regression approaches to latent variable SEM; Ward Keesling, an earlier developer of SEM approaches who provided advice and served informally on my thesis committee even though his university was different from mine; Peter Bentler, who allowed me to sit in on a class of his (at that same other university) as he explored SEM issues; and my colleagues here at Minnesota—George Huba, who shared the first SEM class with me as we stayed ahead of our students, and Bob Cudeck, who provided support, feedback, ideas, and resources as I worked on this book. Third are the array of colleagues and students who came to me with their problems, for they enriched my views about what comes easily and what is difficult to understand in SEM. Fourth, this book would not have been done were it not for the encouragement of (or was that prodding by?) my editor, C. Deborah Laughton, and the excellent and helpful group of reviewers that she found. Reviewers whose good advice I did not follow should know that I tried to incorporate their feedback, and where the advice was consistent and clear, I did. At the same time, I found instances in which there was not agreement among them, which gave me license both to pick and choose and perhaps to stay close to the views that I had acquired over time.

Appendix A: A Brief Introduction to Matrix Algebra and Structural Equation Modeling
Matrix algebra provides a way in which to represent multiple equations in a form that both consolidates information and allows efficient data analysis. By working with matrices, mathematical operations can be expressed in a compact fashion. Finally, with respect to structural equation modeling (SEM) and regression approaches, matrix algebra simplifies and makes more accessible the mathematical operations that are used. (Readers searching for a second source on matrix algebra could see Kerlinger & Pedhazur, 1973.)
What is a Matrix?
A matrix is an m × n rectangle containing numbers, symbols that stand for numbers, or variable names. The order of the matrix is m rows by n columns. For example, a 2 × 3 matrix has two rows and three columns. To illustrate,

(1,1) (1,2) (1,3)
(2,1) (2,2) (2,3)
The pairs of numbers in parentheses are not intended to be values; rather, they represent the coordinates of each element of the matrix. For example, the (row 1, column 1) element will be located where (1,1) is in the matrix. In other words, the coordinates first give the number of the row of any element and then give the number of the column of that element. The coordinates of elements are important for a number of reasons, including that (a) they often may be used as subscripts for unknown coefficients (e.g., b12), (b) they can be used to identify the variables that are being related (e.g., r21), and (c) they are used in some SEM programs to specify parameters to be estimated in matrices used by those programs.
A 2 × 3 matrix with values rather than coordinates would look like
With labels for rows and columns, it would look like
So, for example, the value of the (row 1, column 2) element, namely, (1, 2) in the preceding instance, is 3. A special case of a matrix is one called a null matrix, which contains only 0's.
Sometimes a matrix may contain only algebraic representations of the elements, for example, using subscripts ij, where i represents the row coordinate and j the column coordinate. The preceding 2 × 3 matrix could be presented as

b11 b12 b13
b21 b22 b23
In the example, b21 = 6 and b23 = 8.
In SEM analyses, each row corresponds to a dependent or endogenous variable. That is, each dependent variable has its own equation, which is a row. By contrast, columns correspond to predictor variables, which may be either exogenous or endogenous. Thus, if we have a system of structural equations containing three dependent variables, then matrix representations of those equations would require matrices with three rows. The number of columns in a matrix containing the SEM path (regression) coefficients would be the sum of the number of (a) exogenous or independent variables that were in the equations and (b) endogenous variables (sometimes limited to endogenous variables that are used to predict other endogenous variables).
If a matrix has either one row or one column, then it is called a vector. Vectors are used regularly to present variables and residuals. If, for example, our endogenous variables were peer popularity and achievement, then a vector for those variables would be
The vector has two rows and one column. Vectors also can have only a single row and multiple columns.
A common matrix operation is one that turns a matrix on its side by turning rows into columns and columns into rows. It is called taking the transpose of a matrix. If we were to take the transpose of Matrix B in the preceding, then the new matrix (B′), using the elements as labeled previously, would be

b11 b21
b12 b22
b13 b23
Note that the elements with two identical subscripts do not move, whereas the others move “around the diagonal.” What formerly was the (row 1, column 3) element (b13) now is found in the third row but first column. It has kept its “old” coordinates, so it still is b13. Of course, if we were to give the elements of the transpose new coordinates, then those would correspond to the new rows and columns and b13 would become b31, reflecting its new row and column coordinates.
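As a concrete sketch, the transpose can be computed in a few lines of Python. The matrix values here are partly assumed: the text fixes only the elements 3, 6, and 8 of Matrix B, so the remaining entries are filled in arbitrarily.

```python
def transpose(matrix):
    """Turn rows into columns and columns into rows."""
    return [list(row) for row in zip(*matrix)]

# A 2 x 3 matrix; the text fixes only (1,2) = 3, (2,1) = 6, and
# (2,3) = 8, so the other entries are arbitrary.
B = [[1, 3, 2],
     [6, 5, 8]]

B_t = transpose(B)   # a 3 x 2 matrix: [[1, 6], [3, 5], [2, 8]]
```

Note that the (2,1) element of B, the value 6, ends up in row 1, column 2 of the transpose, exactly the "around the diagonal" movement described above.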
Square Matrices
If the numbers of rows and columns are identical, then the matrix is called square. For square matrices, the set of elements running from the upper left-hand corner of the matrix to the lower right-hand corner is called the diagonal. In terms of coordinates, the diagonal is made up of elements that have two identical coordinates (e.g., r11). If the only nonzero elements of a matrix are found on the diagonal, then the matrix is called a diagonal matrix.
Symmetric (Square) Matrices
Matrices like correlation and covariance matrices are called symmetric. Rows and columns are defined by the same variables in the same order, and the matrices have the same elements above the diagonal as below the diagonal except that the elements are transposed. All correlation matrices or covariance matrices have to be both square and symmetric. Here is an example of a symmetric matrix:

1.0 0.4 0.3
0.4 1.0 0.5
0.3 0.5 1.0
Note that (2, 1) equals (1, 2), that (3, 1) equals (1, 3), and that (3, 2) equals (2, 3).
Identity Matrices
A special form of a diagonal symmetric matrix is an identity matrix. It contains 1's on the diagonal and 0's (by its definition as diagonal) everywhere else. It is designated by I. A 3 × 3 identity matrix would be

1 0 0
0 1 0
0 0 1
The identity matrix serves the same function as the number 1; anything multiplied by I equals itself. So, if Matrix A were to be multiplied by I, then the result would be A. In other words, A × I = I × A = A.
Matrix Operations
There are three basic matrix operations.
Multiplying a Matrix Times a Single Number (or scalar)
The result is that each element of the matrix is multiplied times that number. So, if Matrix B in the preceding discussion were multiplied times the number 3, then each element would be three times larger. For example, b12 would now be 3 × b12 or 3b12.
Taking the Sum of (or difference between) Two Matrices
For two matrices to be summed, those matrices have to be the same size, that is, have identical numbers of rows and columns. If they are the same size, then each will have the same number of elements. Addition and subtraction are done by combining corresponding elements of the matrices on an elementbyelement basis. So, for example, imagine that we want to add together Matrices C and D, where
If we were subtracting D from C, then we would be doing the equivalent of multiplying each element in D by a −1 (described earlier as Operation 1), which would change the signs of each of the elements of D, and then adding the corresponding elements together. So, (1, 1) would be 4 + (−2) rather than 4 + 2 when C and D are summed. For addition and subtraction, the rules are simply that (a) the matrices have to be the same size and (b) corresponding elements in the two matrices must be combined.
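These two rules can be sketched in Python. The text gives only the (1, 1) elements of C and D (4 and 2, respectively); the other entries below are assumed for illustration.

```python
def mat_add(a, b):
    """Element-by-element sum of two same-size matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_sub(a, b):
    """Element-by-element difference, a - b."""
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Only the (1, 1) elements (4 in C, 2 in D) come from the text;
# the other entries are arbitrary.
C = [[4, 1],
     [0, 3]]
D = [[2, 5],
     [1, 2]]

mat_add(C, D)   # (1, 1) element is 4 + 2 = 6
mat_sub(C, D)   # (1, 1) element is 4 + (-2) = 2
```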
Multiplying Two Matrices Together
Multiplication of matrices is addressed in two steps. First, the conditions under which multiplication can be done are described. Second, the mechanics of matrix multiplication are explained.
When Multiplication is Possible
To be able to multiply two matrices together, the first matrix (or that which appears on the left) needs to have a number of columns equivalent to the number of rows of the second (or right) matrix. The rows of the first matrix and columns of the second matrix define the size of the resulting matrix. If we were trying to multiply Matrix E (r × s) times Matrix F (t × u), E with r rows and s columns and F with t rows and u columns, then s must equal t, and the resulting matrix has dimensions of r by u. Ordering of the matrices is very important, for Matrix E times Matrix F is not the same as F times E. To multiply F times E, r would have to equal u. If they are not equal, then even though it is possible to compute E × F, it is not possible to compute F × E using matrix algebra. One way in which to do notation is to put the number of rows in a matrix to the left of the matrix name and the number of columns to the right, as rEs × tFu. In such notation, s and t can be compared readily.
For example, can we multiply E (3 × 2) times F (2 × 3), or 3E2 × 2F3? Yes, for E has two columns and F has two rows, so the requirement is met. As stated, the matrix that is the product E × F will have the same number of rows as E and columns as F and will be 3 × 3, the "outside" numbers in 3E2 × 2F3. Illustrating the differences between E × F and F × E is simple; F × E also could be computed, but the result would be a 2 × 2 matrix. If G were substituted for F and was 2 × 4, then E × G could be calculated, for the two columns of E correspond to the two rows of G. By contrast, G × E cannot be computed, for G's four columns do not align with E's three rows.
Computations for Matrix Algebra
In matrix multiplication, the row elements from the first matrix are multiplied by their corresponding column elements from the second matrix. What that means concretely can best be explained through illustration. Matrix E (3 × 2) and Matrix F (2 × 3) will be multiplied:
For any element (i, j) of the resulting product matrix, the elements of row i from E are combined with the elements of column j from F. So, (i, j) = (ei1 × f1j) + (ei2 × f2j).
For example, element (1, 1) in the product matrix is determined by multiplying the elements of the first row of E by the elements of the first column of F. Element (1, 1) is [(1 × 3) + (2 × 2)] = 7, where (1 × 3) is (first row, first element of first matrix) times (first column, first element of second matrix) and (2 × 2) is (first row, second element of first matrix) times (first column, second element of second matrix).
Elements of row 1 in the resulting matrix all use the firstrow elements from Matrix E but combine with the corresponding values from the columns of F.
Illustrations are done for elements (2, 2) and (3, 1) of E × F. Element (2, 2) uses the second row of E and the second column of F; thus, 23 = (2 × 4) + (3 × 5), where 2 is the first element of the second row of E, 4 is the first element of the second column of F, 3 is the second element of the second row of E, and 5 is the second element of the second column of F.
Element (3, 1) uses the third row of E and the first column of F; thus, 14 = (4 × 3) + (1 × 2).
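The worked example can be checked in Python. The elements of E and the first two columns of F follow from the values given above; F's third column is not shown in the text and is assumed here purely to make the matrix complete.

```python
def mat_mul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), element by element."""
    assert len(a[0]) == len(b), "columns of a must equal rows of b"
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# E and F's first two columns follow the worked example;
# F's third column is an assumption.
E = [[1, 2],
     [2, 3],
     [4, 1]]
F = [[3, 4, 1],
     [2, 5, 0]]

P = mat_mul(E, F)   # a 3 x 3 product matrix
# P[0][0] = (1 * 3) + (2 * 2) = 7
# P[1][1] = (2 * 4) + (3 * 5) = 23
# P[2][0] = (4 * 3) + (1 * 2) = 14
```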
Up to this point, nothing has been said about division of matrices, and for good reason. In fact, division cannot be done. The closest thing to division is multiplying a matrix times the inverse of some matrix, where the inverse is analogous to a reciprocal of a number. The discussion of collinearity in this book centers around issues tied to invertibility, for correlation or covariance matrices with perfect collinearity have no inverse (are not invertible), and regression approaches cannot produce a valid solution.
Inverting Matrices
By definition, an inverse of a Matrix H, written as H−1, is the matrix that, when multiplied by H, yields an identity matrix. That is, H−1H = I. Because I matrices always are square, only square matrices can have inverses. At the same time, as already noted, many square matrices do not have inverses. Those that do not have inverses cause problems for SEM analyses.
Specifically, if there exists a Matrix B such that A × B = I, then A is said to be nonsingular or invertible and B is the inverse of A. Similarly, B is nonsingular and A is its inverse (in this case, A × B = B × A = I). Because calculating inverses is complicated and tedious, details are not provided here. Most important, standard statistical packages calculate inverses in regression and factor analysis programs, and often they offer inverses and determinants, which are described next, as optional output. As is explained in the text, the diagonal elements of inverses of correlation matrices provide information about collinearity among variables.
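For the 2 × 2 case, the inverse has a simple closed form, and the defining property H−1H = I can be verified directly. The matrix values below are hypothetical (they are not drawn from the text).

```python
def inverse_2x2(m):
    """Closed-form inverse of a 2 x 2 matrix; fails if the matrix is singular."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (not invertible)")
    return [[d / det, -b / det],
            [-c / det, a / det]]

def mat_mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

H = [[3, 2],
     [1, 5]]                  # a hypothetical nonsingular matrix
H_inv = inverse_2x2(H)
product = mat_mul(H_inv, H)   # the identity matrix, up to rounding
```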
Determinants
A determinant is a single numerical value associated with any square matrix. The mathematics of calculating determinants is not covered here, for as matrices get larger, the calculations get more complex and more difficult to illustrate. Readers interested in attaining a fuller understanding should consult a book on matrices or matrix algebra (e.g., Marcus & Minc, 1964).
For the simplest case, a 2 × 2 matrix, calculation is fairly simple. Consider Matrix A, with elements a and b in its first row and c and d in its second row. The determinant is (a × d) − (b × c). If

A =
3 2
1 5

then its determinant is (3 × 5) − (2 × 1) = 13.
For correlation matrices, the determinant will range between 1 (if all variables are totally uncorrelated) and 0 (if there is perfect collinearity among the variables). If a determinant is very close to 0, then there must be substantial relationships between variables, and the data should be examined to look for problems tied to collinearity.
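A short Python sketch makes both points concrete: the 2 × 2 formula reproduces the determinant of 13 above, and for a 2 × 2 correlation matrix the determinant is 1 − r², so it shrinks toward 0 as collinearity grows. The .99 correlation below is a hypothetical value chosen to show the warning sign.

```python
def det_2x2(m):
    """Determinant of a 2 x 2 matrix: (a * d) - (b * c)."""
    (a, b), (c, d) = m
    return a * d - b * c

det_2x2([[3, 2], [1, 5]])   # (3 * 5) - (2 * 1) = 13

# For a 2 x 2 correlation matrix the determinant is 1 - r**2:
det_2x2([[1.0, 0.0], [0.0, 1.0]])     # 1.0: totally uncorrelated
det_2x2([[1.0, 0.99], [0.99, 1.0]])   # about 0.02: a collinearity warning
```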
Matrices and Rules
Finally, a brief reminder about how some common rules apply to matrices:
Commutative: A + B = B + A; however, A × B and B × A are not equal except in special circumstances
Associative: A + (B + C) = (A + B) + C; A(BC) = (AB)C
Distributive: A(B + C) = AB + AC; (B + C)A = BA + CA
 Distributing Transposes: (AB)′ = B′A′ (note that the order of the matrices is reversed).
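The transpose-distribution rule can be verified numerically on small matrices. The values of A and B below are arbitrary; any conformable matrices would do.

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Arbitrary 2 x 2 matrices
A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [5, 2]]

lhs = transpose(mat_mul(A, B))              # (AB)'
rhs = mat_mul(transpose(B), transpose(A))   # B'A' -- note the reversed order
lhs == rhs   # True
```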
Answers to Chapter Discussion Questions
 Yes, remembering that dichotomous and ordered categorical measures can fall under the class of quantitative data.
 Yes, provided sample size issues are addressed. That is, some time series analyses and other analyses at aggregated levels can have samples too small for these methods.
 No, moderation implies an interaction type of effect between variables. Chapter 12 will address moderating effects.
 Equivalent models are those that predict the exact same pattern of relationships between measures. For example, Job—Success—Family is mathematically equivalent to Figure 1.2, as is Success causing both Family and Job, for they predict the relationship between Family and Job to be the product of the other two correlations. Because the models always start with theory, one should have chosen a model from among equivalent ones that matches the hypothesized relationships. That of course does not make the model correct, but it affirms plausibility of the theory.
 Yes, technically speaking, path analysis always uses standardized data.
 Yes, for identified or overidentified models, multiple regression yields optimal estimates. Yes, path coefficients are partial regression coefficients.
 Again speaking technically, no, for when data are longitudinal, covariance matrices need to be analyzed. On the other hand, if nonstandardized coefficients are examined, then regression techniques can be used to analyze longitudinal data. (Those coefficients sometimes were called path regression coefficients in the path analysis literature.)
 Degrees of freedom for path models are determined by the number of pieces of information that are available to use for solving for path coefficients. In the same way that subjects are bits of information for many analyses, each correlation (covariance) is one piece of information for path models. Each path to be estimated “uses up” a degree of freedom, so eliminating a path “puts back” a degree of freedom in the model. Degrees of freedom provide the opportunity for models to diverge from the data, and therefore allow the possibility of model disconfirmation.
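The counting rule in the answer above can be sketched in Python, using the common convention that with p observed variables there are p(p + 1)/2 unique variances and covariances available as pieces of information. The counts in the example call are hypothetical.

```python
def model_df(n_observed, n_free_params):
    """Degrees of freedom = pieces of information minus parameters estimated."""
    # With p observed variables, the p * (p + 1) / 2 unique variances and
    # covariances are the available pieces of information.
    n_info = n_observed * (n_observed + 1) // 2
    return n_info - n_free_params

# Hypothetical model: 4 observed variables give 10 pieces of information;
# estimating 8 parameters leaves 2 df for the model to diverge from the data.
model_df(4, 8)   # 2
```

A model with 0 df (just-identified) cannot diverge from the data, which is why only models with positive df can be disconfirmed.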
 Underidentified models cannot be solved. If their plausibility is of interest, they need to be reconceptualized to make them identified.
 Surprisingly, the answer is Yes, even though there are few cases in which they are appropriate, for they are far inferior to the latent variable structural equation techniques described later in this book.
 Because path analysis is regression analysis, it analyzes the same correlation or covariance matrices that are used in regression analysis.
 Partial correlation attempts to completely eliminate the relationships of the controlled variable with the remaining variables and their relationships with one another, while partial regression attempts to spread common variance across the various predictors. Partial correlation would be picked to look at residual relationships after removing some variable or variables.
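The standard first-order partial correlation formula makes the "completely eliminate the controlled variable" idea concrete: the control variable's relationships are removed from both remaining variables. The input correlations in the example are hypothetical.

```python
from math import sqrt

def partial_r(r12, r13, r23):
    """Correlation between variables 1 and 2 with variable 3 partialed out."""
    return (r12 - r13 * r23) / sqrt((1 - r13 ** 2) * (1 - r23 ** 2))

# Hypothetical correlations: controlling variable 3 shrinks the
# relationship between variables 1 and 2 from .50 to about .43.
partial_r(0.50, 0.40, 0.30)
```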
 Although path analysis approaches do not formally talk about working with partial correlation matrices, there may be instances in which, due to sample size limitations, control variables like age or gender would need to be partialed out so the sample is sufficient for SEM techniques. Remember, however, that decisions to partial need to be guided theoretically, and therefore likely should not be done if the variable to be partialed is expected to display different causal structures at different levels.
 The signs of the nonstandardized coefficients will be the same as the standardized coefficients, and the nonstandardized values are very descriptive insofar as they describe the relationship in raw score units. For example, we could say that each additional year (1 raw score unit) of education "produces" an X dollar (raw score units) increase in expected annual earnings.
 Stepwise regression can be guided by theory, and is when used for decomposing effects. If it is not, yes, it can be misleading. Remember, however, that at the last step of the regression analysis, if all variables are entered into the equation, the order of entry used does not matter; all orders of entry yield the same final outcome provided the same variables are in the equation.
 Yes, the logic of decomposition of effects is the same across all different types of SEM techniques.
 The matrix form is very appealing, for it works for overidentified as well as justidentified models. The other approaches work as well, requiring that coefficients that are omitted from the model take on a value of 0.
 ANOVAs are not used, although path analysis can be used to model experimental studies. Path modeling can be particularly effective if some variable is viewed as a mediating variable. There it can be used to test plausibility of a mediation model. It also can be valuable if there are questions about the conceptual variable that is being assessed by some independent or dependent variable. It may be possible to use SEM techniques to aggregate measures into conceptual variables.
 The information on variability within the sample is very important, and it is lost in converting to correlations. Of course the tradeoff is that a correlation metric (ranging from −1 to 1, the meaning of r2, etc.) is so intuitive.
 Here, if we are talking about assumptions of regression, viz., independence of residuals, then we are thinking only about path analysis models. Other path models can allow residuals to covary, which is why it is tricky to talk about “too much nonrandom error” as a violation of an assumption. Whether or not assuming residuals to be independent is reasonable is a question that begins with theory but then is “tested” by data. As will be explained later, tests of fit are all based upon residuals, the part of relationships that are not accounted for by models. Sometimes, it will be obvious by looking at a correlation matrix that a hypothesized model will not fit. For example, if a set of four measures is hypothesized as assessing a single construct, but two of the measures have a correlation twice that of all the others, it is clear that a single factor model will not work well. Other times, the pattern of relationships will be more complicated, and can be examined only through looking at the residual matrix and the measures of model fit.
 To go from standardized to nonstandardized coefficients or vice versa, it is the standard deviations, not the standard errors, that are used. Yes, there is a fairly simple conversion from one to the other that requires only dividing by (or multiplying by) a ratio of two standard deviations. There has been controversy in the literature about the meaning of standardized versus nonstandardized coefficients and how to explain that difference. From my perspective, it is most important to think of the distinction conceptually: Standardized coefficients describe relations in standard deviation units, whereas nonstandardized coefficients describe relations in raw-score units.
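 The conversion described above can be written in two lines. The numbers in the illustration are invented: a raw-score slope of 1.5, with a predictor standard deviation of 2 and an outcome standard deviation of 6, corresponds to a standardized coefficient of 0.5.

```python
# Conversion between standardized and nonstandardized coefficients:
# multiply or divide by the ratio of the two standard deviations.
def standardize(b_raw, sd_x, sd_y):
    return b_raw * sd_x / sd_y

def unstandardize(b_std, sd_x, sd_y):
    return b_std * sd_y / sd_x

print(standardize(1.5, sd_x=2.0, sd_y=6.0))    # 0.5
print(unstandardize(0.5, sd_x=2.0, sd_y=6.0))  # 1.5
```

 Note that only standard deviations appear; standard errors play no role in the conversion.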
 Decisions about relationships between methods should be driven at the data collection stage. It seems to me that the ideal answer is “no”: it would be simpler if there were no method variance. At the same time, one needs to trade off the difficulty of collecting data in which methods do not exert influence on answers against the simplicity of getting needed data. If one decides that the best decision is to collect data from measures that have method variance, then that variability needs to be modeled so that variance can be adequately partitioned.
 Systematically attending to method variability is clearly as important today as it was in the past. Using multiple methods is highly desirable. Yet, as will be described in Chapter 7, MTMM matrices produce problems of estimation in certain types of models, which somewhat reduces their value.
 Lag refers to the passage of time. The exact amount of time varies with the nature of the conceptual issues.
 I suspect that I am guilty of accepting ways of talking about stability that do not fit with some other uses of the terms, and, more important, I have not been as clear in my usage as I should have been. To clarify: Stability as I have used it refers only to single variables. In the absolute sense, stability means absence of change in some variable across some time period. With respect to covariances, however, stability means only that the relative position of a group of individuals on some dimension does not change. For example, if all children in a class grew at a common rate, their height scores would all increase by a constant, their heights at the first time (before growing) would correlate perfectly with their heights at the second time (after growing), and height would be perfectly stable. Said differently, height at time 1 perfectly predicts height at time 2. Finally, if a relationship between two variables is called stable, then their covariance should not have changed from some point in time to some later time.
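 The height example above can be verified directly. In this sketch (simulated heights, arbitrary growth constant), adding the same amount to every score leaves relative positions unchanged, so the time-1 and time-2 scores correlate perfectly even though every child's height changed.

```python
import numpy as np

# Covariance-sense stability: uniform growth preserves relative position.
rng = np.random.default_rng(1)
height_t1 = rng.normal(loc=120.0, scale=5.0, size=30)  # heights at time 1
height_t2 = height_t1 + 6.0                            # everyone grows 6 cm

r = np.corrcoef(height_t1, height_t2)[0, 1]
print(round(r, 6))  # 1.0: perfectly stable in the covariance sense
```

 Absolute stability fails here (every score changed), while covariance-sense stability holds exactly, which is the distinction the paragraph draws.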
Author Index
 Aber, M. S., 108
 Akaike, H., 237, 241, 246
 Amemiya, Y., 280
 Andrews, F. M., 120–121
 Arbuckle, J. L., 19, 179, 261, 283
 Arminger, G., 259
 Arvey, R., 282
 Bagozzi, R. P., 3
 Balla, J. R., 200, 239, 241, 244
 Baron, R. M., 40, 281
 Baumrind, D., 276
 Bennett, N., 239–241, 243, 245
 Bentler, P. M., 19, 179, 239, 240, 241, 242–245, 247–248, 281
 Blalock, H. M. Jr., 17, 106–108
 Bohrnstedt, G. W., 247
 Bollen, K. A., 12, 81, 106, 189, 200, 238, 239, 241, 244, 256
 Bonett, D. G., 240, 243–244, 247–248
 Bozdogan, H., 241, 246
 Breckler, S. J., 274–275
 Brett, J. M., 241, 245, 247–249
 Brophy, J. E., 6
 Browne, M. W., 73, 81, 97, 199, 237, 241, 246, 247, 250, 259, 280
 Bryk, A. S., 282
 Byrne, B. M., 31, 149, 195, 259
 Byrne, D., 6
 Calsyn, R. J., 109
 Carter, L. F., 106–108
 Campbell, D. T., 92–96, 120–121, 149, 151–152
 Cliff, N., 135, 139, 272–275, 278
 Cohen, J., 280
 Cole, D. A., 149, 282
 Cooley, W. W., 3
 Costner, H. L., 132, 154, 158, 279
 Crandall, C. S., 89
 Cudeck, R., 34, 73, 97, 118, 199, 237, 241, 246, 247, 249–250, 280
 Darlington, R. B., 61, 73
 Donaldson, S. I., 217
 Dudgeon, P., 281
 Duncan, O. D., 17, 29, 46
 Dunn, G., 149, 261
 Everitt, B., 149, 261
 Fabrigar, L. R., 276
 Finn, J. D., 71
 Fiske, D. W., 92–96, 120–121, 149, 151–152
 Ford, J. K., 136
 Gerbing, D. W., 138
 Goldberger, A. S., 62
 Good, T. L., 6
 Gordon, R. A., 66–70, 75
 Gorsuch, R. L., 80, 132, 134
 Graham, J. W., 217
 Grayson, D., 153
 Green, B. F., 63
 Griffitt, W., 6
 Guy, S. M., 281
 Hamagami, F., 283
 Hamilton, J. G., 138
 Hayduk, L. A., 12, 284
 Henly, S. J., 249–250
 Hocevar, D., 256, 265
 Hofer, S. M., 217
 Hollis, M., 217
 Holtz, R., 5, 204, 214–220
 Hox, J.J., 284
 Hoyle, R. H., 12, 238, 244, 245, 254, 283
 Hu, L., 239, 242–245, 281
 Huba, G., 81, 243
 Jaccard, J., 280, 281
 James, L. R., 239–241, 243, 245, 247–249
 Jöreskog, K. G., 19, 20, 147, 178, 179, 187–200, 246, 261, 278–280
 Judd, C. M., 280
 Kano, Y., 284
 Kaplan, D., 200, 217
 Kashy, D. A., 96, 149, 152–154
 Keesling, W., 187
 Kenny, D. A., 40, 85, 96, 104–105, 109, 132, 149, 152–154, 157–160, 276, 280, 281
 Kerlinger, F. N., 285
 Land, K. C., 18, 49
 Lehmann, I. J., 80, 84
 Lennox, R., 81
 Lewis, C., 240, 244
 Lewis, R., 203–209, 211, 220
 Liang, J., 256
 Lind, S., 239–241, 243, 245
 Ling, R. F., 276
 Little, R. J. A., 217
 Loehlin, J. C., 136
 Long, J. S., 200, 238, 239
 MacCallum, R. C., 81, 136, 276, 278, 279, 280
 Mar, C. M., 279, 280
 Marcus, M., 292
 Marsh, H. W., 149, 153, 200, 239, 241, 244, 245, 256, 265
 Maruyama, G., 5, 6, 63, 94, 102, 113, 153, 203–220, 221, 234, 250–254, 257, 270
 Maxwell, S. E., 282
 McArdle, J. J., 108, 283
 McConahay, J. B., 89
 McDonald, R. P., 200, 239, 241, 244, 245
 McGarvey, B., 5, 94, 102, 209–214, 234, 250–254, 257, 270
 Mehrens, W. A., 80, 84
 Meredith, W., 217
 Miller, M. B., 136
 Miller, N., 5, 6, 63, 94, 203–209, 214–220
 Minc, H., 292
 Mulaik, S. A., 132, 239–241, 243, 245, 247–249
 Muthén, B., 31, 217, 259, 282, 283
 Namboodiri, N. K., 106–108
 Necowitz, L. B., 4, 279
 Nesselroade, R. J., 283
 O'Connell, E. J., 96, 149
 Olkin, I., 71
 Panter, A. T., 81, 238, 244, 245, 254, 283
 Pedhazur, E. J., 285
 Pelz, D. C., 120–121
 Piccinin, A. M., 217
 Pickles, A., 149, 261
 Ping, R. A., 280
 Price, B., 74–75
 Raudenbush, S. W., 282
 Raykov, T., 283
 Reith, J. V., 279
 Rigdon, E., 106, 190
 Rindskopf, D., 256
 Rogosa, D., 109, 121
 Rose, T., 256
 Rozelle, R. M., 120–121
 Roznowski, M., 4, 279
 Rubin, D. B., 217
 St. John, N., 203–209, 211, 220
 Salas, E., 282
 Sayer, A. G., 108
 Schoenberg, R., 132, 154, 158, 279
 Shavelson, R. J., 31, 259
 Shingles, R. D., 109, 121
 Smith, G. M., 281
 Sobel, M. E., 247
 Sörbom, D., 19, 31, 78, 179, 259, 261
 Steiger, J. H., 241, 246, 284
 Stein, J. A., 281
 Stilwell, C. D., 239–241, 243, 245
 Sugawara, H. M., 280
 Tait, M., 136
 Tanaka, J. S., 81, 243, 246
 Thomson, E., 258
 Thurstone, L. L., 133
 Tomer, A., 283
 Tucker, L. R., 240, 244
 Uchino, B. N., 276
 Van Alstine, J., 239–241, 243, 245
 Waller, N. G., 284
 Wan, C. K., 280, 281
 Wegener, D. T., 276
 Wiley, D. E., 20, 187, 197
 Willett, J. B., 108
 Williams, R., 258
 Winbourne, W. C., 81
 Wothke, W., 152
 Wright, S., 9, 15, 16
 Yalcin, I., 280
About the Author
Geoffrey M. Maruyama is Vice Provost for Academic Affairs in the Office of the Provost for Professional Studies at the University of Minnesota. His responsibilities include academic planning, curricular and instructional issues, graduate education, faculty issues including promotion and tenure, and research. He received his Ph.D. in psychology from the University of Southern California in 1977. He has been a faculty member in the Department of Educational Psychology since September 1976. Before his appointment as Vice Provost, he spent 10 years as director of the Human Relations Program in the Department of Educational Psychology, 3 years as director of the Center for Applied Research and Educational Improvement, and 1 year as Acting Associate Dean in the College of Education and Human Development. His academic experience also includes 9 years of active involvement in faculty governance and 4 years as lobbyist for faculty issues at the Minnesota state legislature.
Maruyama has also written another book, Research in Educational Settings (with Stan Deno), as well as 13 book chapters and more than 50 articles. His research interests cluster around (a) methodological issues, including application of structural equation modeling, action research and its implications for collaborative research, and applied research methods/program evaluation; and (b) substantive issues tied to the interface of psychology and education, including school reform, school achievement processes, and effective educational techniques for diverse schools.
