Today’s example is from a bootstrap resample of 177 undergraduate students at a large state university in the Midwest. The survey consisted of 10 questions about their beliefs in various conspiracy theories that were being passed around the internet in the early 2010s. Additionally, gender was included in the survey. All item responses were on a 5-point Likert scale.
Our purpose in using this instrument is to provide a context that we all may find relevant, as many of these conspiracy theories are still prevalent today.
Latent Variables
Latent variables are built by specification:

- You create latent variables by specifying which items measure which latent variables in an analysis model
- This alignment, encoded in a Q-matrix, specifies which latent variables are measured by which items
- Mathematically, the specification simply determines whether or not a latent variable appears as a predictor for an item
For example, the Q-matrix row for item 1 shows that it measures only the NonGov factor:

        Gov  NonGov
item1     0       1
The model for the first item is then built with only the factors measured by the item present:

\[ f(E\left(Y_{p1} \mid \boldsymbol{\theta}_p\right) ) = \mu_1 + \lambda_{12} \theta_{p2} \]

Where:

- \(\mu_1\) is the intercept for item 1
- \(\lambda_{12}\) is the factor loading for item 1 on the second factor (NonGov)
- \(\theta_{p2}\) is person \(p\)'s score on the second factor

The first factor (Gov) is not included in the model for this item.
We could show the model with the Q-matrix entries:
\[ f(E\left(Y_{p1} \mid \boldsymbol{\theta}_p\right) ) = \mu_1 + q_{11}\left( \lambda_{11} \theta_{p1} \right) + q_{12}\left( \lambda_{12} \theta_{p2} \right) = \mu_1 + \boldsymbol{\theta}_p \text{diag}\left(\boldsymbol{q}_1 \right) \boldsymbol{\lambda}_{1} \]
Where:

- \(\boldsymbol{\lambda}_{1} = \left[ \begin{array}{c} \lambda_{11} \\ \lambda_{12} \end{array} \right]\) contains all possible factor loadings for item 1 (size \(2 \times 1\))
- \(\boldsymbol{\theta}_p = \left[ \begin{array}{cc} \theta_{p1} & \theta_{p2} \end{array} \right]\) contains the factor scores for person \(p\) (size \(1 \times 2\))
- \(\text{diag}\left(\boldsymbol{q}_1 \right) = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right]\) is a diagonal matrix with the Q-matrix entries for item 1, \(\boldsymbol{q}_1 = \left[ \begin{array}{cc} 0 & 1 \end{array} \right]\), placed along its diagonal
The matrix product then gives:
\[\boldsymbol{\theta}_p \text{diag}\left(\boldsymbol{q}_1 \right) \boldsymbol{\lambda}_{1} = \left[ \begin{array}{cc} \theta_{p1} & \theta_{p2} \end{array} \right]\left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} \lambda_{11} \\ \lambda_{12} \end{array} \right] = \left[ \begin{array}{cc} \theta_{p1} & \theta_{p2} \end{array} \right]\left[ \begin{array}{c} 0 \\ \lambda_{12} \end{array} \right] = \lambda_{12}\theta_{p2}\]
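As a quick check, the same computation in R (the loading and factor-score values here are hypothetical):

q1      <- c(0, 1)                    # Q-matrix row for item 1 (Gov, NonGov)
lambda1 <- c(1.2, 0.8)                # hypothetical loadings for item 1
thetaP  <- c(0.5, -1.0)               # hypothetical factor scores for person p
t(thetaP) %*% diag(q1) %*% lambda1    # equals lambda1[2] * thetaP[2] = -0.8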
The Q-matrix functions like a partial version of the model (predictor) matrix that we saw in linear models. The full Q-matrix for our example:
        Gov  NonGov
item1     0       1
item2     1       0
item3     0       1
item4     0       1
item5     1       0
item6     0       1
item7     1       0
item8     1       0
item9     1       0
item10    0       1
Given the Q-matrix, each item has its own model using the alignment specifications:
\[ \begin{array}{c} f(E\left(Y_{p1} \mid \boldsymbol{\theta}_p\right) ) = \mu_1 + \lambda_{12} \theta_{p2} \\ f(E\left(Y_{p2} \mid \boldsymbol{\theta}_p\right) ) = \mu_2 + \lambda_{21} \theta_{p1} \\ f(E\left(Y_{p3} \mid \boldsymbol{\theta}_p\right) ) = \mu_3 + \lambda_{32} \theta_{p2} \\ f(E\left(Y_{p4} \mid \boldsymbol{\theta}_p\right) ) = \mu_4 + \lambda_{42} \theta_{p2} \\ f(E\left(Y_{p5} \mid \boldsymbol{\theta}_p\right) ) = \mu_5 + \lambda_{51} \theta_{p1} \\ f(E\left(Y_{p6} \mid \boldsymbol{\theta}_p\right) ) = \mu_6 + \lambda_{62} \theta_{p2} \\ f(E\left(Y_{p7} \mid \boldsymbol{\theta}_p\right) ) = \mu_7 + \lambda_{71} \theta_{p1} \\ f(E\left(Y_{p8} \mid \boldsymbol{\theta}_p\right) ) = \mu_8 + \lambda_{81} \theta_{p1} \\ f(E\left(Y_{p9} \mid \boldsymbol{\theta}_p\right) ) = \mu_9 + \lambda_{91} \theta_{p1} \\ f(E\left(Y_{p,10} \mid \boldsymbol{\theta}_p\right) ) = \mu_{10} + \lambda_{10,2} \theta_{p2} \\ \end{array} \]
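For later use in R, the Q-matrix above can be written as a matrix object (a sketch; the object name Qmatrix mirrors the Stan data block):

Qmatrix <- matrix(
  c(0, 1,   # item1
    1, 0,   # item2
    0, 1,   # item3
    0, 1,   # item4
    1, 0,   # item5
    0, 1,   # item6
    1, 0,   # item7
    1, 0,   # item8
    1, 0,   # item9
    0, 1),  # item10
  nrow = 10, ncol = 2, byrow = TRUE,
  dimnames = list(paste0("item", 1:10), c("Gov", "NonGov"))
)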
For our example, we will assume the set of traits follows a multivariate normal distribution:

\[ f\left(\boldsymbol{\theta}_p \right) = \left(2 \pi \right)^{-\frac{D}{2}} \det\left(\boldsymbol{\Sigma}_\theta \right)^{-\frac{1}{2}}\exp\left[-\frac{1}{2}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right)^T\boldsymbol{\Sigma}_\theta^{-1}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right) \right] \]

Where:

- \(D\) is the number of latent dimensions (here, \(D = 2\))
- \(\boldsymbol{\mu}_\theta\) is the mean vector of the latent traits (size \(D \times 1\))
- \(\boldsymbol{\Sigma}_\theta\) is the covariance matrix of the latent traits (size \(D \times D\))
Equivalently, we can specify \(\boldsymbol{\theta}_p \sim N_D\left( \boldsymbol{\mu}_\theta, \boldsymbol{\Sigma}_\theta \right)\); however, we cannot always estimate \(\boldsymbol{\mu}_\theta\) and \(\boldsymbol{\Sigma}_\theta\).
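A minimal R sketch evaluating this density directly from the formula above (all values hypothetical):

# multivariate normal density, written out from the formula above
dmvn <- function(theta, muTheta, sigmaTheta) {
  D   <- length(theta)
  dev <- matrix(theta - muTheta, ncol = 1)   # (theta_p - mu_theta) as a column vector
  drop((2 * pi)^(-D / 2) * det(sigmaTheta)^(-1 / 2) *
         exp(-0.5 * t(dev) %*% solve(sigmaTheta) %*% dev))
}

dmvn(theta = c(0.5, -1.0), muTheta = c(0, 0),
     sigmaTheta = matrix(c(1, 0.5, 0.5, 1), nrow = 2))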
Psychometric models require two types of identification to be valid: empirical identification (enough items measuring each latent variable) and scale identification (setting the location and scale of each latent variable).

Bayesian priors can help to make models with fewer items than these criteria suggest estimable. More broadly, priors can let you believe you can estimate more parameters than the non-Bayesian standards suggest; as with empirically under-identified models, such estimates are often unstable and are not recommended.
The most common choices for scale identification are standardized factors (each factor has mean zero and variance one) and marker items (one loading per factor fixed to one). Today, we will use standardized factors.
For each item, we will use a graded (ordinal logit) response model, where:
\[P\left(Y_{pi} = c \mid \boldsymbol{\theta}_p \right) = \left\{ \begin{array}{lr} 1-P^*\left(Y_{pi} \gt 1 \mid \boldsymbol{\theta}_p \right) & \text{if } c=1 \\ P^*\left(Y_{pi} \gt c-1 \mid \boldsymbol{\theta}_p \right) - P^*\left(Y_{pi} \gt c \mid \boldsymbol{\theta}_p \right) & \text{if } 1<c<C_i \\ P^*\left(Y_{pi} \gt C_i-1 \mid \boldsymbol{\theta}_p \right) & \text{if } c=C_i \\ \end{array} \right.\]
Where:

\[ P^*\left(Y_{pi} > c \mid \boldsymbol{\theta}_p \right) = \frac{\exp(\mu_{ic}+\boldsymbol{\theta}_p \text{diag}\left(\boldsymbol{q}_i \right) \boldsymbol{\lambda}_{i})} {1+\exp(\mu_{ic}+\boldsymbol{\theta}_p \text{diag}\left(\boldsymbol{q}_i \right) \boldsymbol{\lambda}_{i})}\]

With:

- \(C_i\) the number of response categories for item \(i\) (here, five for all items)
- \(\mu_{ic}\) the intercept for category boundary \(c\) of item \(i\) (so each item has \(C_i - 1\) intercepts)
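A small R sketch of these category probabilities, using hypothetical intercepts and a hypothetical value of the loading-times-theta term:

gradedProbs <- function(mu, eta) {
  # mu: the C-1 ordered intercepts for one item; eta: the lambda * theta term
  pStar <- plogis(mu + eta)        # P*(Y > c) for c = 1, ..., C-1
  c(1 - pStar[1],                  # P(Y = 1)
    -diff(pStar),                  # P(Y = c) for 1 < c < C
    pStar[length(pStar)])          # P(Y = C)
}

probs <- gradedProbs(mu = c(2, 0, -2, -4), eta = 1.5)  # hypothetical values
sum(probs)                                             # the probabilities sum to one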
Building the Multidimensional Model in Stan
As Stan uses thresholds instead of intercepts, our model then becomes:
\[P\left(Y_{pi} = c \mid \boldsymbol{\theta}_p \right) = \left\{ \begin{array}{lr} 1-P\left(Y_{pi} \gt 1 \mid \boldsymbol{\theta}_p \right) & \text{if } c=1 \\ P\left(Y_{pi} \gt c-1 \mid \boldsymbol{\theta}_p \right) - P\left(Y_{pi} \gt c \mid \boldsymbol{\theta}_p \right) & \text{if } 1<c<C_i \\ P\left(Y_{pi} \gt C_i-1 \mid \boldsymbol{\theta}_p \right) & \text{if } c=C_i \\ \end{array} \right.\]
Where:
\[ P\left(Y_{pi} > c \mid \boldsymbol{\theta}_p \right) = \frac{\exp(-\tau_{ic}+\boldsymbol{\theta}_p \text{diag}\left(\boldsymbol{q}_i \right) \boldsymbol{\lambda}_{i})} {1+\exp(-\tau_{ic}+\boldsymbol{\theta}_p \text{diag}\left(\boldsymbol{q}_i \right) \boldsymbol{\lambda}_{i})}\]

With:

- \(\tau_{ic}\) the threshold for category boundary \(c\) of item \(i\) (each item has \(C_i - 1\) ordered thresholds)

We can convert thresholds to intercepts by multiplying by negative one: \(\mu_{ic} = -\tau_{ic}\)
model {
  lambda ~ multi_normal(meanLambda, covLambda);          // prior for the factor loadings
  thetaCorrL ~ lkj_corr_cholesky(1.0);                   // LKJ prior on the Cholesky factor of the factor correlation matrix
  theta ~ multi_normal_cholesky(meanTheta, thetaCorrL);  // standardized factors: mean zero, variance one

  for (item in 1:nItems){
    thr[item] ~ multi_normal(meanThr[item], covThr[item]); // prior for the item thresholds
    // ordered logit; the lambdaMatrix row zeros out factors the item does not measure
    Y[item] ~ ordered_logistic(thetaMatrix*lambdaMatrix[item,1:nFactors]', thr[item]);
  }
}
- thetaMatrix is the matrix of latent variables for each person, \(\boldsymbol{\Theta}\) (size \(N \times D\); \(N\) is the number of people, \(D\) is the number of dimensions)
- lambdaMatrix holds, for each item, the factor loading/discrimination parameters along with the zeros from the Q-matrix (size \(I \times D\); \(I\) is the number of items)
- lambda is the vector of estimated factor loadings (we use a different block to put these into lambdaMatrix)
- theta is an array of latent variables (we use a different block to put these into thetaMatrix)
- theta follows a multivariate normal distribution as a prior, where:
  - the mean of theta is set to zero (the mean for each factor is set to zero)
  - the variance of theta is set to one (the variance for each factor is set to one)

lkj_corr_cholesky and multi_normal_cholesky
From Stan’s Functions Reference, for a correlation matrix \(\textbf{R}_\theta\):

Correlation matrix properties:

- symmetric, with ones on the diagonal
- positive definite
- off-diagonal elements between \(-1\) and \(1\)
The LKJ prior, with hyperparameter \(\eta\), is proportional to the determinant of the correlation matrix raised to the power \(\eta - 1\):

\[\text{LKJ}\left(\textbf{R}_\theta \mid \eta \right) \propto \det\left(\textbf{R}_\theta \right)^{(\eta-1)} \]

For this example, we set \(\eta=1\), which implies a uniform prior over all correlation matrices.
The functions we are using do not use the correlation matrix directly; instead, they use its lower-triangular Cholesky factor \(\textbf{L}_\theta\), where:

\[\textbf{R}_\theta = \textbf{L}_\theta \textbf{L}_\theta^T\]

Substituting into the multivariate normal kernel:

\[-\frac{1}{2}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right)^T\textbf{R}_\theta^{-1}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right) = -\frac{1}{2}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right)^T\left(\textbf{L}_\theta \textbf{L}_\theta^T\right)^{-1}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right) = -\frac{1}{2}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right)^T\textbf{L}_\theta^{-T} \textbf{L}_\theta^{-1}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right)\]

Terms such as \(\textbf{L}_\theta^{-1}\left(\boldsymbol{\theta}_p - \boldsymbol{\mu}_\theta \right)\) are then computed by back substitution, without ever forming a matrix inverse.
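A quick numerical check of these identities in R, using a hypothetical correlation matrix:

Rtheta <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)  # hypothetical correlation matrix
Ltheta <- t(chol(Rtheta))                      # chol() returns the upper triangle; transpose for L
Ltheta %*% t(Ltheta)                           # recovers Rtheta = L L^T

dev <- c(0.5, -1.0)                            # hypothetical theta_p - mu_theta
z   <- forwardsolve(Ltheta, dev)               # back substitution: solves L z = dev
-0.5 * sum(z^2)                                # equals -0.5 * dev' R^{-1} dev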
parameters {
  array[nObs] vector[nFactors] theta;        // the latent variables (one vector per person)
  array[nItems] ordered[maxCategory-1] thr;  // the item thresholds (one per item category minus one)
  vector[nLoadings] lambda;                  // the factor loadings/item discriminations (one per Q-matrix entry of one)
  cholesky_factor_corr[nFactors] thetaCorrL; // Cholesky factor of the factor correlation matrix
}
Notes:

- theta is an array (for the MVN prior)
- thr is the same as in the unidimensional model
- lambda is the vector of all factor loadings to be estimated (its length, nLoadings, is computed in the transformed data block)
- thetaCorrL is of type cholesky_factor_corr, a built-in type that identifies this as the lower triangle of a Cholesky-factorized correlation matrix

We need a couple of other blocks to link our parameters to the model.
transformed data{
  int<lower=0> nLoadings = 0;                    // number of loadings in model
  for (factor in 1:nFactors){
    nLoadings = nLoadings + sum(Qmatrix[1:nItems, factor]);
  }

  array[nLoadings, 2] int loadingLocation;       // the row/column positions of each loading
  int loadingNum=1;
  for (item in 1:nItems){
    for (factor in 1:nFactors){
      if (Qmatrix[item, factor] == 1){
        loadingLocation[loadingNum, 1] = item;   // the item (row) of this loading
        loadingLocation[loadingNum, 2] = factor; // the factor (column) of this loading
        loadingNum = loadingNum + 1;
      }
    }
  }
}
Notes:

- the transformed data {} block runs once, prior to the Markov chain
- nLoadings counts the number of loadings implied by the Q-matrix
- loadingLocation stores the row (item) and column (factor) position of each loading in the lambdaMatrix used in later blocks
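What this block computes can be sketched in R using the Qmatrix object from earlier (note that which() orders the pairs factor by factor, whereas the Stan loop goes item by item):

nLoadings <- sum(Qmatrix)                               # one loading per Q-matrix entry of one
loadingLocation <- which(Qmatrix == 1, arr.ind = TRUE)  # row = item, column = factor of each loading

The transformed parameters block below then uses these positions to fill lambdaMatrix.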
transformed parameters{
  matrix[nItems, nFactors] lambdaMatrix = rep_matrix(0.0, nItems, nFactors);
  matrix[nObs, nFactors] thetaMatrix;

  // build matrix for lambdas to multiply theta matrix
  for (loading in 1:nLoadings){
    lambdaMatrix[loadingLocation[loading,1], loadingLocation[loading,2]] = lambda[loading];
  }

  // convert theta from an array of vectors to a matrix
  for (factor in 1:nFactors){
    thetaMatrix[,factor] = to_vector(theta[,factor]);
  }
}
Notes:

- the transformed parameters {} block runs prior to each iteration of the Markov chain
- thetaMatrix is built by converting theta from an array to a matrix
- lambdaMatrix puts the loadings and the zeros from the Q-matrix into the correct positions
- lambdaMatrix initializes at zero, so we only have to add the loadings in the correct positions

data {
  // data specifications =============================================================
  int<lower=0> nObs;                             // number of observations
  int<lower=0> nItems;                           // number of items
  int<lower=0> maxCategory;                      // maximum number of response categories

  // input data =============================================================
  array[nItems, nObs] int<lower=1, upper=5> Y;   // item responses in an array (items by persons)

  // loading specifications =============================================================
  int<lower=1> nFactors;                         // number of latent variables (factors)
  array[nItems, nFactors] int<lower=0, upper=1> Qmatrix; // which items measure which factors

  // prior specifications =============================================================
  array[nItems] vector[maxCategory-1] meanThr;   // prior mean vector for threshold parameters
  array[nItems] matrix[maxCategory-1, maxCategory-1] covThr; // prior covariance matrix for threshold parameters
  vector[nItems] meanLambda;                     // prior mean vector for discrimination parameters
  matrix[nItems, nItems] covLambda;              // prior covariance matrix for discrimination parameters
  vector[nFactors] meanTheta;                    // prior (fixed) mean vector for the latent variables
}
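A hedged sketch of assembling the matching R data list; the response object conspiracyItems and all prior values here are illustrative assumptions, not the course's actual settings:

nItems <- 10; nFactors <- 2; maxCategory <- 5
covThr <- array(0, dim = c(nItems, maxCategory - 1, maxCategory - 1))
for (item in 1:nItems) covThr[item, , ] <- diag(1000, maxCategory - 1)  # diffuse threshold priors (illustrative)

modelOrderedLogit_data <- list(
  nObs        = nrow(conspiracyItems),               # conspiracyItems: assumed persons-by-items responses
  nItems      = nItems,
  maxCategory = maxCategory,
  Y           = t(conspiracyItems),                  # the data block expects items in rows, persons in columns
  nFactors    = nFactors,
  Qmatrix     = Qmatrix,                             # the Q-matrix object built earlier
  meanThr     = matrix(0, nItems, maxCategory - 1),  # prior means for thresholds (illustrative)
  covThr      = covThr,
  meanLambda  = rep(0, nItems),                      # prior means for loadings (illustrative)
  covLambda   = diag(1000, nItems),                  # diffuse loading prior covariance (illustrative)
  meanTheta   = rep(0, nFactors)                     # standardized factors: means fixed at zero
)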
New to this model's data block:

- meanTheta: factor means (hyperparameters) are added (but we will set these to zero)
- nFactors: number of latent variables (needed for the Q-matrix)
- Qmatrix: the Q-matrix for the model

Notes:

- in a generated quantities block (not shown here), the thresholds are converted to intercepts mu by multiplying by negative one
- thetaCorr is formed by multiplying the Cholesky-factorized lower triangle with its transpose (the upper triangle)
- use thetaCorr, not thetaCorrL, when looking at model output

Stan Analyses
We run Stan the same way we have previously:
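A minimal sketch with cmdstanr; the file name, seed, and object names are illustrative:

library(cmdstanr)

modelOrderedLogit <- cmdstan_model("modelOrderedLogit.stan")  # assumed file holding the syntax above

modelOrderedLogit_samples <- modelOrderedLogit$sample(
  data = modelOrderedLogit_data,
  seed = 112022,        # illustrative seed
  chains = 4,
  parallel_chains = 4,
  iter_warmup = 2000,
  iter_sampling = 2000
)

max(modelOrderedLogit_samples$summary()$rhat, na.rm = TRUE)  # largest rhat across all parameters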
Checking convergence, the largest rhat across all parameters is:

[1] 1.066736
# A tibble: 54 × 10
variable mean median sd mad q5 q95 rhat ess_bulk
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 lambda[1] 1.99 1.98 0.266 0.265 1.58 2.45 1.00 2970.
2 lambda[2] 2.83 2.81 0.383 0.383 2.24 3.49 1.00 2898.
3 lambda[3] 2.40 2.38 0.341 0.333 1.88 2.99 1.00 3532.
4 lambda[4] 2.92 2.89 0.397 0.386 2.32 3.61 1.00 2965.
5 lambda[5] 4.35 4.30 0.594 0.578 3.44 5.38 1.00 2501.
6 lambda[6] 4.22 4.18 0.565 0.551 3.37 5.19 1.00 2569.
7 lambda[7] 2.85 2.83 0.407 0.405 2.21 3.55 1.00 3043.
8 lambda[8] 4.11 4.08 0.557 0.549 3.25 5.09 1.00 2468.
9 lambda[9] 2.91 2.89 0.429 0.425 2.24 3.64 1.00 3174.
10 lambda[10] 2.44 2.41 0.424 0.424 1.79 3.17 1.00 4309.
11 mu[1,1] 1.88 1.88 0.288 0.283 1.41 2.37 1.00 1579.
12 mu[2,1] 0.815 0.809 0.300 0.299 0.329 1.31 1.00 946.
13 mu[3,1] 0.100 0.104 0.261 0.257 -0.337 0.530 1.00 974.
14 mu[4,1] 0.998 0.990 0.318 0.310 0.481 1.54 1.00 922.
15 mu[5,1] 1.32 1.32 0.425 0.426 0.651 2.04 1.00 766.
16 mu[6,1] 0.976 0.969 0.401 0.405 0.318 1.64 1.00 749.
17 mu[7,1] -0.102 -0.100 0.291 0.294 -0.577 0.371 1.00 851.
18 mu[8,1] 0.699 0.691 0.388 0.384 0.0691 1.35 1.00 766.
19 mu[9,1] -0.0597 -0.0602 0.295 0.290 -0.548 0.431 1.00 888.
20 mu[10,1] -1.36 -1.34 0.309 0.302 -1.90 -0.881 1.00 1297.
21 mu[1,2] -0.208 -0.206 0.230 0.231 -0.593 0.168 1.00 1104.
22 mu[2,2] -1.52 -1.51 0.319 0.316 -2.06 -1.01 1.00 1024.
23 mu[3,2] -1.11 -1.10 0.272 0.270 -1.57 -0.670 1.00 1090.
24 mu[4,2] -1.14 -1.13 0.314 0.310 -1.68 -0.643 1.00 977.
25 mu[5,2] -1.97 -1.95 0.448 0.451 -2.75 -1.28 1.00 966.
26 mu[6,2] -1.99 -1.98 0.426 0.427 -2.72 -1.32 1.01 905.
27 mu[7,2] -1.94 -1.93 0.336 0.328 -2.53 -1.41 1.00 1184.
28 mu[8,2] -1.82 -1.81 0.411 0.405 -2.51 -1.16 1.00 1028.
29 mu[9,2] -1.95 -1.94 0.345 0.344 -2.55 -1.42 1.00 1204.
30 mu[10,2] -2.60 -2.58 0.368 0.356 -3.24 -2.03 1.00 1879.
31 mu[1,3] -2.03 -2.03 0.275 0.274 -2.50 -1.59 1.00 1467.
32 mu[2,3] -3.38 -3.36 0.411 0.409 -4.08 -2.75 1.00 1787.
33 mu[3,3] -3.68 -3.66 0.433 0.428 -4.44 -3.01 1.00 2767.
34 mu[4,3] -3.85 -3.84 0.462 0.461 -4.64 -3.13 1.00 1881.
35 mu[5,3] -4.59 -4.55 0.609 0.589 -5.64 -3.66 1.00 1717.
36 mu[6,3] -5.63 -5.59 0.681 0.683 -6.79 -4.57 1.00 2624.
37 mu[7,3] -4.14 -4.11 0.498 0.494 -5.01 -3.39 1.00 2239.
38 mu[8,3] -6.42 -6.38 0.769 0.763 -7.76 -5.22 1.00 4332.
39 mu[9,3] -3.25 -3.24 0.415 0.408 -3.97 -2.60 1.00 1519.
40 mu[10,3] -3.77 -3.74 0.464 0.459 -4.56 -3.05 1.00 2408.
41 mu[1,4] -3.98 -3.96 0.464 0.456 -4.77 -3.25 1.00 4398.
42 mu[2,4] -4.92 -4.89 0.580 0.577 -5.92 -4.02 1.00 3244.
43 mu[3,4] -4.78 -4.76 0.555 0.551 -5.73 -3.92 1.00 4647.
44 mu[4,4] -4.76 -4.74 0.547 0.542 -5.71 -3.92 1.00 2442.
45 mu[5,4] -6.86 -6.82 0.841 0.836 -8.32 -5.57 1.00 3061.
46 mu[6,4] -7.95 -7.91 0.967 0.982 -9.62 -6.45 1.00 4485.
47 mu[7,4] -5.70 -5.67 0.690 0.685 -6.86 -4.62 1.00 4292.
48 mu[8,4] -8.47 -8.40 1.08 1.09 -10.4 -6.82 1.00 5871.
49 mu[9,4] -4.95 -4.92 0.584 0.569 -5.97 -4.05 1.00 2476.
50 mu[10,4] -4.02 -4.00 0.490 0.490 -4.87 -3.26 1.00 2909.
51 thetaCorr[1,1] 1 1 0 0 1 1 NA NA
52 thetaCorr[2,1] 0.990 0.992 0.00742 0.00672 0.976 0.999 1.07 72.0
53 thetaCorr[1,2] 0.990 0.992 0.00742 0.00672 0.976 0.999 1.07 72.0
54 thetaCorr[2,2] 1 1 0 0 1 1 NA NA
# ℹ 1 more variable: ess_tail <dbl>
Note: the estimated factor correlation (thetaCorr[2,1]) is extremely high (about 0.99), and it has the largest rhat (1.07) and smallest bulk ESS (72) in the model, suggesting the two factors may be empirically indistinguishable.
Plots of the posterior draws for person 1 are not shown here; the correlation matrix of the draws of theta[1,1] and theta[1,2] is:
theta[1,1] theta[1,2]
theta[1,1] 1.0000000 0.8221942
theta[1,2] 0.8221942 1.0000000
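A sketch of how this correlation can be computed from the posterior draws, using the illustrative fit object from above:

library(posterior)

thetaDraws <- as_draws_matrix(modelOrderedLogit_samples$draws(variables = "theta"))
cor(thetaDraws[, c("theta[1,1]", "theta[1,2]")])  # correlation of person 1's factor-score draws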
Wrapping Up
Stan makes multidimensional latent variable models fairly easy to implement
But…