Generalized Measurement Models: Modeling Observed Data

Lecture 4b

Today’s Lecture Objectives

  1. Show different modeling specifications for different types of item response data
  2. Show how parameterization differs for standardized latent variables vs. marker item scale identification

Example Data: Conspiracy Theories

Today’s example is from a bootstrap resample of 177 undergraduate students at a large state university in the Midwest. The survey consisted of 10 questions about their beliefs in various conspiracy theories that were circulating on the internet in the early 2010s. Additionally, gender was included in the survey. All item responses were on a 5-point Likert scale with:

  1. Strongly Disagree
  2. Disagree
  3. Neither Agree nor Disagree
  4. Agree
  5. Strongly Agree

Please note that the purpose of this survey was to study individual beliefs regarding conspiracies. The questions can provoke strong emotions given current events. All questions were approved by the university IRB prior to their use.

Our purpose in using this instrument is to provide a context that we all may find relevant as many of these conspiracy theories are still prevalent today.

Conspiracy Theory Questions 1-5

Questions:

  1. The U.S. invasion of Iraq was not part of a campaign to fight terrorism, but was driven by oil companies and Jews in the U.S. and Israel.
  2. Certain U.S. government officials planned the attacks of September 11, 2001 because they wanted the United States to go to war in the Middle East.
  3. President Barack Obama was not really born in the United States and does not have an authentic Hawaiian birth certificate.
  4. The current financial crisis was secretly orchestrated by a small group of Wall Street bankers to extend the power of the Federal Reserve and further their control of the world’s economy.
  5. Vapor trails left by aircraft are actually chemical agents deliberately sprayed in a clandestine program directed by government officials.

Conspiracy Theory Questions 6-10

Questions:

  6. Billionaire George Soros is behind a hidden plot to destabilize the American government, take control of the media, and put the world under his control.
  7. The U.S. government is mandating the switch to compact fluorescent light bulbs because such lights make people more obedient and easier to control.
  8. Government officials are covertly building a 12-lane "NAFTA superhighway" that runs from Mexico to Canada through America’s heartland.
  9. Government officials purposely developed and spread drugs like crack-cocaine and diseases like AIDS in order to destroy the African American community.
  10. God sent Hurricane Katrina to punish America for its sins.

Data Visualization: Q1-Q5

Data Visualization: Q6-Q10

Conspiracy Theories: Assumed Latent Variable

For today’s lecture, we will assume each of the 10 items measures a single latent variable representing a person’s tendency to believe in conspiracy theories

  • We will denote this latent variable as \(\theta_p\) for each person
    • \(p\) is the index for person (with \(p=1, \ldots, P\))
  • We will assume this latent variable is:
    • Continuous
    • Normally distributed: \(\theta_p \sim N\left(\mu_\theta, \sigma_\theta \right)\)
      • We will make differing assumptions about the scale (the latent variable mean and standard deviation) to show how syntax works for either
  • Across all people, we will denote the vector of latent variables as

\[\boldsymbol{\Theta} = \left[\theta_1, \ldots, \theta_P \right]^T \]

Building Measurement Models

Measurement Model Analysis Steps

  1. Specify model
  2. Specify scale identification method for latent variables
  3. Estimate model
  4. Examine model-data fit
  5. Iterate between steps 1-4 until adequate fit is achieved

Measurement Model Auxiliary Components

  1. Score estimation (and secondary analyses with scores)
  2. Item evaluation
  3. Scale construction
  4. Equating
  5. Measurement invariance/differential item functioning

Modeling Observed Variables with Normal Distributions

Observed Variables with Normal Distributions

A psychometric model posits that one or more hypothesized latent variables predict a person’s response to observed items

  • Our hypothesized latent variable: Tendency to Believe in Conspiracies (\(\theta_p\))
    • One variable: Unidimensional
  • Each observed variable (item response) is included in the model
  • Today, we will assume each response follows a normal distribution
    • This is the assumption underlying confirmatory factor analysis (CFA) models
    • This assumption is tenuous at best

Normal Distribution: Linear Regression

As we saw with linear models, when an outcome variable (here \(Y_p\)) is assumed to follow a (conditional) normal distribution, a linear regression-style model is placed on the outcome.

For example, take the following linear regression:

\[Y_p = \beta_0 + \beta_1 X_p + e_p,\]

with \(e_p \sim N\left(0, \sigma_e \right)\)

This implies:

\[ Y_p \sim N\left(\beta_0 + \beta_1 X_p, \sigma_e \right)\]

Where:

  • The (conditional) mean of \(Y_p\) is \(\beta_0 + \beta_1 X_p\)
  • The (residual) standard deviation of \(Y_p\) is \(\sigma_e\)
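To make this equivalence concrete, here is a minimal R sketch (all parameter values are hypothetical) that simulates data directly from the conditional normal form and recovers the coefficients with lm():

# minimal sketch with hypothetical parameter values
set.seed(1)

N = 1000
beta0 = 1      # intercept
beta1 = 0.5    # slope
sigmaE = 2     # residual standard deviation

X = rnorm(N)

# simulate directly from the conditional normal: Y ~ N(beta0 + beta1*X, sigmaE)
Y = rnorm(N, mean = beta0 + beta1 * X, sd = sigmaE)

coef(lm(Y ~ X))          # approximately beta0 and beta1
summary(lm(Y ~ X))$sigma # approximately sigmaE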

The Psychometric Model

For the psychometric model:

  • We replace the observed variable \(X_p\) with the latent variable \(\theta_p\) (for all observed variables)

\[ \begin{array}{cc} Y_{p1} = \mu_1 + \lambda_1 \theta_p + e_{p, 1}; & e_{p,1} \sim N\left(0, \psi_1^2 \right) \\ Y_{p2} = \mu_2 + \lambda_2 \theta_p + e_{p, 2}; & e_{p,2} \sim N\left(0, \psi_2^2 \right) \\ Y_{p3} = \mu_3 + \lambda_3 \theta_p + e_{p, 3}; & e_{p,3} \sim N\left(0, \psi_3^2 \right) \\ Y_{p4} = \mu_4 + \lambda_4 \theta_p + e_{p, 4}; & e_{p,4} \sim N\left(0, \psi_4^2 \right) \\ Y_{p5} = \mu_5 + \lambda_5 \theta_p + e_{p, 5}; & e_{p,5} \sim N\left(0, \psi_5^2 \right) \\ Y_{p6} = \mu_6 + \lambda_6 \theta_p + e_{p, 6}; & e_{p,6} \sim N\left(0, \psi_6^2 \right) \\ Y_{p7} = \mu_7 + \lambda_7 \theta_p + e_{p, 7}; & e_{p,7} \sim N\left(0, \psi_7^2 \right) \\ Y_{p8} = \mu_8 + \lambda_8 \theta_p + e_{p, 8}; & e_{p,8} \sim N\left(0, \psi_8^2 \right) \\ Y_{p9} = \mu_9 + \lambda_9 \theta_p + e_{p, 9}; & e_{p,9} \sim N\left(0, \psi_9^2 \right) \\ Y_{p10} = \mu_{10} + \lambda_{10} \theta_p + e_{p, 10}; & e_{p,10} \sim N\left(0, \psi_{10}^2 \right) \\ \end{array} \]

Measurement Model Parameters

For an item \(i\) the model is:

\[ \begin{array}{cc} Y_{pi} = \mu_i + \lambda_i \theta_p + e_{p,i}; & e_{p,i} \sim N\left(0, \psi_i^2 \right) \\ \end{array} \]

The parameters of the model use different notation from typical linear regression models and have different names (they are called item parameters)

  • \(\mu_i\): Item intercept
    • The expected score on the item when \(\theta_p = 0\)
    • Similar to \(\beta_0\)
  • \(\lambda_i\): Factor loading or item discrimination
    • The change in the expected score of an item for a one-unit increase in \(\theta_p\)
    • Similar to \(\beta_1\)
  • \(\psi^2_i\): Unique variance (Note: in Stan, we will specify \(\psi_i\), the unique standard deviation)
    • The variance of the residuals (the observed score minus the expected score)
    • Similar to residual variance \(\sigma^2_e\)
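To see how these item parameters generate data, here is a small simulation sketch for a single item (all item parameter values are hypothetical):

# simulation sketch with hypothetical item parameter values
set.seed(2)

P = 177
theta = rnorm(P, mean = 0, sd = 1)  # standardized latent variable

mu1 = 2.4       # item intercept
lambda1 = 0.7   # factor loading
psi1 = 0.9      # unique standard deviation

# item responses under the model: Y_p1 ~ N(mu1 + lambda1*theta_p, psi1)
Y1 = rnorm(P, mean = mu1 + lambda1 * theta, sd = psi1)

# note: simulated values are continuous and unbounded, unlike the observed
# 1-5 Likert responses; one reason the normal assumption is tenuous
range(Y1)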

Model Specification

The set of equations on the previous slide formed step #1 of the Measurement Model Analysis Steps:

  1. Specify model

The next step is:

  2. Specify scale identification method for latent variables

We will initially assume \(\theta_p \sim N(0,1)\), which allows us to estimate all item parameters of the model

  • This is what we call a standardized latent variable
    • They are like Z-scores

Implementing Normal Outcomes in Stan

Implementing Normal Outcomes in Stan

There are a few changes needed to make Stan estimate psychometric models with normal outcomes:

  • The model (predictor) matrix cannot be used
    • This is because the latent variable will be sampled, so the model matrix cannot be formed as a constant
  • The data will be imported as a matrix
    • More than one outcome means more than one column vector of data
  • The parameters will be specified as vectors of each type
    • Each item will have its own set of parameters
    • Implications for the use of prior distributions

Stan’s data {} Block

data {
  int<lower=0> nObs;                 // number of observations
  int<lower=0> nItems;               // number of items
  matrix[nObs, nItems] Y;            // item responses in a matrix

  vector[nItems] meanMu;             // prior mean vector for item intercepts
  matrix[nItems, nItems] covMu;      // prior covariance matrix for item intercepts
  
  vector[nItems] meanLambda;         // prior mean vector for factor loadings
  matrix[nItems, nItems] covLambda;  // prior covariance matrix for factor loadings
  
  vector[nItems] psiRate;            // prior rate parameter for unique standard deviations
}

Stan’s parameters {} Block

parameters {
  vector[nObs] theta;                // the latent variables (one for each person)
  vector[nItems] mu;                 // the item intercepts (one for each item)
  vector[nItems] lambda;             // the factor loadings/item discriminations (one for each item)
  vector<lower=0>[nItems] psi;       // the unique standard deviations (one for each item)   
}

Here, the parameterization of \(\lambda\) (factor loadings/discrimination parameters) can lead to problems in estimation

  • The issue: \(\lambda_i \theta_p = (-\lambda_i)(-\theta_p)\)
    • Depending on the random starting values of each of these parameters (per chain), a given chain may converge to a different region
  • To demonstrate (later), we will start with a different random number seed
    • Currently using 09102022: works fine
    • Change to 25102022: big problems

Stan’s model Block

model {
  
  lambda ~ multi_normal(meanLambda, covLambda); // Prior for item discrimination/factor loadings
  mu ~ multi_normal(meanMu, covMu);             // Prior for item intercepts
  psi ~ exponential(psiRate);                   // Prior for unique standard deviations
  
  theta ~ normal(0, 1);                         // Prior for latent variable (with mean/sd specified)
  
  for (item in 1:nItems){
    Y[,item] ~ normal(mu[item] + lambda[item]*theta, psi[item]);
  }
  
}

The loop applies the model separately to each item

  • The assumption of conditional independence enables this
    • Non-independence would require a multivariate normal model
  • The conditional mean of each item is set by the model
    • The item’s conditional SD is set by the unique standard deviation parameter
  • The loop supplies each item’s parameters to the equation

Choosing Prior Distributions for Parameters

There is no uniform agreement about the choice of prior distributions for item parameters

  • We will begin with relatively uninformative priors on each
    • After the first model analysis, we will discuss these choices and why they were made
  • For now:
    • Item intercepts: \(\mu_i \sim N\left(0, \sigma^2_{\mu_i} = 1000\right)\)
    • Factor loadings/item discriminations: \(\lambda_i \sim N(0, \sigma^2_{\lambda_i} = 1000)\)
    • Unique standard deviations: \(\psi_i \sim \text{exponential}\left(.01\right)\)

Prior Density Function Plots
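The slide shows density plots for these priors; a minimal base-R sketch that produces similar plots:

# sketch of the prior densities used above
par(mfrow = c(1, 2))

# Normal(0, variance = 1000) prior for item intercepts and factor loadings
x = seq(-100, 100, length.out = 500)
plot(x, dnorm(x, mean = 0, sd = sqrt(1000)), type = "l",
     xlab = "mu or lambda", ylab = "Density", main = "Normal(0, 1000)")

# exponential(rate = .01) prior for unique standard deviations
p = seq(0, 500, length.out = 500)
plot(p, dexp(p, rate = .01), type = "l",
     xlab = "psi", ylab = "Density", main = "Exponential(.01)")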

R’s Data List Object

# data dimensions
nObs = nrow(conspiracyItems)
nItems = ncol(conspiracyItems)

# item intercept hyperparameters
muMeanHyperParameter = 0
muMeanVecHP = rep(muMeanHyperParameter, nItems)

muVarianceHyperParameter = 1000
muCovarianceMatrixHP = diag(x = muVarianceHyperParameter, nrow = nItems)

# item discrimination/factor loading hyperparameters
lambdaMeanHyperParameter = 0
lambdaMeanVecHP = rep(lambdaMeanHyperParameter, nItems)

lambdaVarianceHyperParameter = 1000
lambdaCovarianceMatrixHP = diag(x = lambdaVarianceHyperParameter, nrow = nItems)

# unique standard deviation hyperparameters
psiRateHyperParameter = .01
psiRateVecHP = rep(psiRateHyperParameter, nItems)

modelCFA_data = list(
  nObs = nObs,
  nItems = nItems,
  Y = conspiracyItems, 
  meanMu = muMeanVecHP,
  covMu = muCovarianceMatrixHP,
  meanLambda = lambdaMeanVecHP,
  covLambda = lambdaCovarianceMatrixHP,
  psiRate = psiRateVecHP
)

Running the Model In Stan

The Stan program takes longer to run than in linear models:

  • Number of parameters = 207
    • 30 item parameters: 10 observed variables with three item parameters each (\(\mu_i\), \(\lambda_i\), and \(\psi_i\))
    • 177 latent variables (one \(\theta_p\) for each person)
  • Key syntax block is the observed variable definition:
Y[,item] ~ normal(mu[item] + lambda[item]*theta, psi[item]);
  • cmdstanr samples call:
modelCFA_samples = modelCFA_stan$sample(
  data = modelCFA_data,
  seed = 09102022,
  chains = 4,
  parallel_chains = 4,
  iter_warmup = 1000,
  iter_sampling = 2000
)
  • Note: Typically, longer chains are needed for larger models like this
    • These will become even longer when we use non-normal distributions for observed data

Model Results

  • Checking convergence with \(\hat{R}\) (PSRF):
[1] 1.006423
  • Item Parameter Results:
# A tibble: 30 × 10
   variable    mean median     sd    mad    q5   q95  rhat ess_bulk ess_tail
   <chr>      <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl>    <dbl>    <dbl>
 1 mu[1]      2.37   2.37  0.0876 0.0862 2.23  2.51   1.00    1447.    2855.
 2 mu[2]      1.96   1.96  0.0850 0.0853 1.82  2.10   1.00     995.    2338.
 3 mu[3]      1.88   1.88  0.0825 0.0849 1.75  2.01   1.00    1081.    2746.
 4 mu[4]      2.01   2.01  0.0852 0.0856 1.87  2.15   1.00    1060.    2587.
 5 mu[5]      1.99   1.99  0.0842 0.0838 1.85  2.12   1.01     768.    1596.
 6 mu[6]      1.89   1.89  0.0775 0.0770 1.77  2.02   1.01     825.    1647.
 7 mu[7]      1.73   1.73  0.0766 0.0774 1.60  1.85   1.00    1025.    2523.
 8 mu[8]      1.84   1.84  0.0733 0.0739 1.72  1.97   1.01     803.    1771.
 9 mu[9]      1.81   1.81  0.0882 0.0883 1.67  1.96   1.00    1079.    2748.
10 mu[10]     1.52   1.52  0.0809 0.0802 1.39  1.65   1.00    1464.    3239.
11 lambda[1]  0.740  0.739 0.0838 0.0828 0.607 0.882  1.00    3429.    4389.
12 lambda[2]  0.870  0.867 0.0774 0.0778 0.748 1.00   1.00    2010.    4141.
13 lambda[3]  0.802  0.802 0.0766 0.0755 0.680 0.932  1.00    2249.    4009.
14 lambda[4]  0.843  0.840 0.0763 0.0760 0.723 0.973  1.00    2383.    4144.
15 lambda[5]  0.997  0.994 0.0715 0.0708 0.886 1.12   1.00    1439.    2565.
16 lambda[6]  0.899  0.896 0.0650 0.0651 0.796 1.01   1.00    1408.    2668.
17 lambda[7]  0.764  0.762 0.0704 0.0695 0.653 0.884  1.00    2473.    3577.
18 lambda[8]  0.854  0.851 0.0609 0.0603 0.757 0.959  1.00    1528.    2882.
19 lambda[9]  0.861  0.859 0.0806 0.0815 0.735 0.997  1.00    2196.    3286.
20 lambda[10] 0.673  0.669 0.0772 0.0759 0.550 0.804  1.00    3108.    4190.
21 psi[1]     0.893  0.891 0.0498 0.0492 0.815 0.978  1.00   15449.    6480.
22 psi[2]     0.734  0.732 0.0433 0.0427 0.667 0.809  1.00   12834.    6393.
23 psi[3]     0.782  0.780 0.0447 0.0442 0.712 0.859  1.00   14422.    6595.
24 psi[4]     0.757  0.755 0.0438 0.0444 0.690 0.833  1.00   14700.    6978.
25 psi[5]     0.544  0.543 0.0372 0.0369 0.486 0.607  1.00    8549.    6223.
26 psi[6]     0.505  0.503 0.0334 0.0331 0.452 0.561  1.00    9757.    6302.
27 psi[7]     0.686  0.684 0.0400 0.0400 0.623 0.755  1.00   13777.    5845.
28 psi[8]     0.480  0.478 0.0317 0.0318 0.431 0.534  1.00    9750.    6699.
29 psi[9]     0.782  0.781 0.0459 0.0461 0.710 0.859  1.00   12873.    6177.
30 psi[10]    0.840  0.838 0.0467 0.0460 0.765 0.919  1.00   13693.    6461.
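For reference, the maximum \(\hat{R}\) reported above can be computed directly from the cmdstanr fit object; a sketch, assuming modelCFA_samples from the call on the previous slide:

# maximum PSRF across all sampled quantities (includes lp__)
max(modelCFA_samples$summary()$rhat, na.rm = TRUE)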

Modeling Strategy vs. Didactic Strategy

At this point, one should investigate the fit of the model we just ran

  • If the model does not fit, then all model parameters could be biased
    • Both item parameters and person parameters (\(\theta_p\))
  • Moreover, the uncertainty accompanying each parameter (the posterior standard deviation) may also be biased
    • Especially bad for psychometric models, as we quantify reliability with these numbers

But, to teach generalized measurement models, we will first talk about differing models for observed data

  • Different distributions
  • Different parameterizations across the different distributions

Then we will discuss model fit methods

Investigating Item Parameters

One plot that can help provide information about the item parameters is the item characteristic curve (ICC)

  • Not typically called an ICC in CFA, but the concept is equivalent
  • The ICC is a plot of the expected item response conditional on the latent trait, across a range of latent trait values

\[E \left(Y_{pi} \mid \theta_p \right) = \mu_{i} +\lambda_{i}\theta_p \]

  • Because we have sampled values for each parameter, we can plot one ICC for each posterior draw
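A sketch of how the posterior ICCs on the next slide might be drawn for a single item (here item 5), using cmdstanr’s $draws() method:

# posterior ICCs for item 5: one line per (subsampled) posterior draw
draws = modelCFA_samples$draws(variables = c("mu[5]", "lambda[5]"),
                               format = "draws_matrix")

thetaGrid = seq(-3, 3, length.out = 100)

plot(NULL, xlim = c(-3, 3), ylim = c(1, 5), xlab = "theta",
     ylab = "Expected Response", main = "Posterior ICCs: Item 5")
for (draw in sample(nrow(draws), 100)){  # subset of draws keeps the plot readable
  m = as.numeric(draws[draw, "mu[5]"])
  l = as.numeric(draws[draw, "lambda[5]"])
  lines(thetaGrid, m + l * thetaGrid, col = "#0000FF20")
}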

Posterior ICC Plots

Posterior Distribution for Item Parameters

Before moving onto the latent variables, let’s note the posterior distribution of the item parameters (for a single item):

\[f(\mu_i, \lambda_i, \psi_i \mid \boldsymbol{Y}) \propto f\left(\boldsymbol{Y} \mid \mu_i, \lambda_i, \psi_i \right) f\left(\mu_i, \lambda_i, \psi_i \right) \]

\(f(\mu_i, \lambda_i, \psi_i \mid \boldsymbol{Y})\) is the (joint) posterior distribution of the parameters for item \(i\)

  • The distribution of the parameters conditional on the data

\(f\left(\boldsymbol{Y} \mid \mu_i, \lambda_i, \psi_i \right)\) is the distribution we defined for our observed data:

\[ Y_{pi} \mid \theta_p \sim N \left(\mu_i+\lambda_i\theta_p, \psi_i \right)\]

\(f\left(\mu_i, \lambda_i, \psi_i \right)\) is the (joint) prior distribution for the parameters, which are independent a priori:

\[ f\left(\mu_i, \lambda_i, \psi_i \right) = f(\mu_i)f(\lambda_i)f(\psi_i)\]

Investigating the Latent Variables

The estimated latent variables are then:

# A tibble: 177 × 10
    variable     mean  median    sd   mad      q5      q95  rhat ess_b…¹ ess_t…²
    <chr>       <dbl>   <dbl> <dbl> <dbl>   <dbl>    <dbl> <dbl>   <dbl>   <dbl>
  1 theta[1]   0.0182  0.0165 0.253 0.245 -0.399   0.441    1.00   6111.   4880.
  2 theta[2]   1.53    1.53   0.261 0.258  1.10    1.96     1.00   3708.   4902.
  3 theta[3]   1.70    1.70   0.266 0.265  1.27    2.14     1.00   3294.   4749.
  4 theta[4]  -0.931  -0.927  0.250 0.249 -1.35   -0.528    1.00   4841.   4979.
  5 theta[5]   0.0408  0.0369 0.244 0.239 -0.357   0.439    1.00   5235.   5090.
  6 theta[6]  -0.986  -0.985  0.254 0.252 -1.40   -0.569    1.00   4620.   4205.
  7 theta[7]  -0.342  -0.344  0.245 0.240 -0.746   0.0667   1.00   5411.   5294.
  8 theta[8]  -0.0569 -0.0542 0.250 0.252 -0.463   0.355    1.00   5289.   5432.
  9 theta[9]  -0.790  -0.789  0.250 0.249 -1.21   -0.385    1.00   5271.   4348.
 10 theta[10]  0.0373  0.0385 0.239 0.239 -0.352   0.432    1.00   5413.   5248.
 11 theta[11] -0.986  -0.983  0.250 0.247 -1.40   -0.582    1.00   4511.   4411.
 12 theta[12]  0.254   0.255  0.246 0.245 -0.150   0.661    1.00   5364.   5243.
 13 theta[13] -0.756  -0.756  0.249 0.249 -1.17   -0.347    1.00   4101.   4237.
 14 theta[14] -0.981  -0.975  0.255 0.254 -1.42   -0.575    1.00   4542.   4187.
 15 theta[15] -0.844  -0.844  0.251 0.251 -1.26   -0.435    1.00   5013.   5375.
 16 theta[16] -0.0694 -0.0680 0.245 0.247 -0.480   0.329    1.00   5559.   5647.
 17 theta[17] -0.0709 -0.0681 0.250 0.251 -0.487   0.343    1.00   5524.   5316.
 18 theta[18] -0.667  -0.665  0.247 0.251 -1.08   -0.270    1.00   5153.   4951.
 19 theta[19]  1.86    1.86   0.270 0.270  1.43    2.31     1.00   3579.   5417.
 20 theta[20] -0.932  -0.933  0.250 0.249 -1.34   -0.520    1.00   4524.   4578.
 21 theta[21] -0.257  -0.255  0.247 0.240 -0.665   0.153    1.00   5381.   4773.
 22 theta[22] -0.986  -0.983  0.252 0.254 -1.41   -0.571    1.00   4493.   5363.
 23 theta[23] -0.985  -0.982  0.249 0.249 -1.39   -0.576    1.00   4811.   5085.
 24 theta[24] -0.985  -0.982  0.246 0.246 -1.40   -0.586    1.00   4111.   4824.
 25 theta[25]  0.537   0.534  0.249 0.250  0.129   0.949    1.00   5298.   4877.
 26 theta[26]  0.734   0.732  0.250 0.250  0.331   1.14     1.00   5634.   5089.
 27 theta[27] -0.120  -0.117  0.250 0.247 -0.532   0.288    1.00   5345.   5639.
 28 theta[28]  1.21    1.20   0.251 0.252  0.797   1.62     1.00   3920.   4651.
 29 theta[29]  0.888   0.884  0.247 0.244  0.485   1.31     1.00   4583.   4945.
 30 theta[30]  1.68    1.67   0.264 0.265  1.25    2.12     1.00   3506.   5167.
 31 theta[31]  0.539   0.539  0.253 0.253  0.124   0.960    1.00   5058.   5649.
 32 theta[32] -0.422  -0.418  0.255 0.255 -0.842  -0.00893  1.00   4924.   5576.
 33 theta[33]  0.315   0.314  0.247 0.244 -0.0879  0.732    1.00   5116.   5386.
 34 theta[34] -0.981  -0.982  0.252 0.250 -1.40   -0.566    1.00   5031.   4321.
 35 theta[35] -0.672  -0.673  0.250 0.249 -1.08   -0.270    1.00   4187.   4924.
 36 theta[36] -0.802  -0.800  0.253 0.252 -1.22   -0.390    1.00   4567.   4408.
 37 theta[37] -0.984  -0.981  0.253 0.253 -1.41   -0.575    1.00   4562.   4491.
 38 theta[38] -0.220  -0.217  0.249 0.255 -0.633   0.185    1.00   5240.   4626.
 39 theta[39] -0.986  -0.981  0.251 0.254 -1.40   -0.585    1.00   4419.   4995.
 40 theta[40]  0.0370  0.0391 0.247 0.245 -0.368   0.444    1.00   5660.   5300.
 41 theta[41] -0.984  -0.982  0.247 0.246 -1.40   -0.582    1.00   4614.   4970.
 42 theta[42] -0.0895 -0.0913 0.255 0.259 -0.502   0.327    1.00   4973.   5083.
 43 theta[43] -0.647  -0.649  0.253 0.257 -1.06   -0.237    1.00   5213.   4936.
 44 theta[44] -0.0736 -0.0748 0.248 0.243 -0.479   0.339    1.00   5283.   5202.
 45 theta[45]  0.373   0.376  0.255 0.250 -0.0467  0.792    1.00   5716.   4510.
 46 theta[46]  0.264   0.264  0.244 0.244 -0.133   0.666    1.00   5277.   5111.
 47 theta[47]  0.502   0.503  0.247 0.244  0.0920  0.906    1.00   4611.   5292.
 48 theta[48] -0.839  -0.837  0.251 0.249 -1.25   -0.425    1.00   4539.   5404.
 49 theta[49] -0.985  -0.981  0.254 0.254 -1.40   -0.569    1.00   4992.   4528.
 50 theta[50]  0.154   0.155  0.255 0.257 -0.269   0.578    1.00   6206.   5071.
 51 theta[51] -0.983  -0.982  0.248 0.254 -1.39   -0.575    1.00   4857.   5115.
 52 theta[52]  0.317   0.320  0.249 0.253 -0.0977  0.719    1.00   4789.   4877.
 53 theta[53] -0.984  -0.982  0.246 0.243 -1.39   -0.584    1.00   4562.   5642.
 54 theta[54]  1.21    1.20   0.257 0.256  0.795   1.64     1.00   3650.   4396.
 55 theta[55] -0.980  -0.977  0.249 0.252 -1.39   -0.579    1.00   4146.   4891.
 56 theta[56] -0.982  -0.977  0.251 0.248 -1.40   -0.570    1.00   4726.   4783.
 57 theta[57] -0.739  -0.738  0.249 0.249 -1.15   -0.331    1.00   4973.   4965.
 58 theta[58] -0.986  -0.984  0.247 0.248 -1.39   -0.585    1.00   4612.   4831.
 59 theta[59] -0.0745 -0.0739 0.247 0.248 -0.476   0.327    1.00   4742.   5624.
 60 theta[60] -0.645  -0.644  0.248 0.245 -1.06   -0.232    1.00   4642.   4862.
 61 theta[61]  1.38    1.38   0.253 0.254  0.965   1.79     1.00   3888.   5105.
 62 theta[62]  0.317   0.315  0.245 0.244 -0.0768  0.717    1.00   6126.   5119.
 63 theta[63] -0.979  -0.974  0.250 0.246 -1.39   -0.573    1.00   4571.   4331.
 64 theta[64]  1.37    1.37   0.255 0.253  0.954   1.79     1.00   3952.   4615.
 65 theta[65]  1.21    1.20   0.255 0.253  0.793   1.63     1.00   4012.   4258.
 66 theta[66] -0.755  -0.750  0.252 0.245 -1.17   -0.345    1.00   4891.   4691.
 67 theta[67]  0.539   0.540  0.253 0.252  0.132   0.960    1.00   5462.   5232.
 68 theta[68]  0.856   0.853  0.250 0.250  0.446   1.27     1.00   4504.   5234.
 69 theta[69] -0.535  -0.536  0.256 0.251 -0.965  -0.117    1.00   5020.   4966.
 70 theta[70]  0.886   0.883  0.251 0.249  0.480   1.31     1.00   4264.   4161.
 71 theta[71]  0.0572  0.0538 0.244 0.245 -0.331   0.457    1.00   5794.   4826.
 72 theta[72]  1.04    1.04   0.258 0.254  0.620   1.47     1.00   4113.   4464.
 73 theta[73] -0.0194 -0.0191 0.247 0.244 -0.421   0.388    1.00   5284.   4857.
 74 theta[74] -0.983  -0.987  0.249 0.244 -1.39   -0.573    1.00   4865.   4948.
 75 theta[75] -0.985  -0.985  0.244 0.246 -1.39   -0.583    1.00   4211.   5083.
 76 theta[76]  3.24    3.24   0.298 0.299  2.77    3.74     1.00   2148.   3639.
 77 theta[77] -0.792  -0.787  0.250 0.251 -1.21   -0.394    1.00   5120.   5001.
 78 theta[78]  1.14    1.14   0.252 0.249  0.735   1.57     1.00   4355.   4895.
 79 theta[79]  1.87    1.86   0.268 0.266  1.44    2.32     1.00   3602.   5397.
 80 theta[80]  0.606   0.606  0.247 0.245  0.202   1.01     1.00   5240.   4944.
 81 theta[81]  0.0395  0.0406 0.245 0.242 -0.361   0.438    1.00   5591.   4887.
 82 theta[82]  0.0136  0.0142 0.250 0.251 -0.399   0.419    1.00   5968.   4499.
 83 theta[83] -0.985  -0.983  0.250 0.250 -1.40   -0.582    1.00   4853.   4605.
 84 theta[84]  1.57    1.57   0.264 0.263  1.15    2.01     1.00   3455.   4587.
 85 theta[85]  0.202   0.200  0.246 0.250 -0.207   0.611    1.00   5346.   5489.
 86 theta[86] -0.982  -0.977  0.254 0.252 -1.41   -0.574    1.00   4628.   5101.
 87 theta[87]  0.163   0.164  0.244 0.247 -0.236   0.566    1.00   5157.   5108.
 88 theta[88] -0.324  -0.321  0.246 0.245 -0.728   0.0797   1.00   5429.   5161.
 89 theta[89] -0.985  -0.985  0.249 0.247 -1.39   -0.575    1.00   4405.   4974.
 90 theta[90]  0.832   0.830  0.252 0.250  0.417   1.25     1.00   5060.   5131.
 91 theta[91] -0.844  -0.846  0.246 0.249 -1.24   -0.433    1.00   4310.   4696.
 92 theta[92]  1.05    1.05   0.257 0.254  0.634   1.49     1.00   4260.   4654.
 93 theta[93]  0.0148  0.0164 0.249 0.251 -0.392   0.423    1.00   5648.   5471.
 94 theta[94]  2.59    2.58   0.274 0.270  2.15    3.06     1.00   2326.   3879.
 95 theta[95]  1.65    1.64   0.263 0.263  1.23    2.10     1.00   3661.   5009.
 96 theta[96]  0.315   0.313  0.249 0.249 -0.0910  0.724    1.00   5005.   5328.
 97 theta[97]  0.262   0.262  0.243 0.239 -0.142   0.662    1.00   5407.   5180.
 98 theta[98] -0.981  -0.979  0.248 0.249 -1.39   -0.580    1.00   4751.   5057.
 99 theta[99]  1.21    1.20   0.258 0.258  0.789   1.64     1.00   4004.   3943.
100 theta[10… -0.934  -0.932  0.248 0.244 -1.34   -0.518    1.00   4387.   4600.
101 theta[10… -0.714  -0.715  0.247 0.244 -1.12   -0.309    1.00   4983.   4596.
102 theta[10…  2.81    2.80   0.286 0.282  2.35    3.30     1.00   2501.   4112.
103 theta[10… -0.669  -0.668  0.250 0.246 -1.08   -0.261    1.00   4514.   4899.
104 theta[10…  0.857   0.853  0.264 0.266  0.427   1.30     1.00   4378.   5812.
105 theta[10… -0.0878 -0.0882 0.256 0.256 -0.505   0.334    1.00   5778.   5086.
106 theta[10…  1.12    1.12   0.258 0.260  0.705   1.55     1.00   4134.   4548.
107 theta[10…  1.57    1.57   0.264 0.262  1.14    2.01     1.00   3233.   4522.
108 theta[10…  0.836   0.833  0.253 0.253  0.430   1.25     1.00   4734.   5399.
109 theta[10…  1.03    1.02   0.253 0.253  0.621   1.45     1.00   4386.   5151.
110 theta[11…  0.0423  0.0421 0.253 0.243 -0.378   0.462    1.00   6152.   4665.
111 theta[11…  0.113   0.112  0.248 0.241 -0.300   0.518    1.00   4459.   4603.
112 theta[11… -0.647  -0.650  0.246 0.252 -1.05   -0.248    1.00   5500.   4747.
113 theta[11…  0.364   0.362  0.247 0.244 -0.0388  0.775    1.00   5724.   5312.
114 theta[11… -0.322  -0.321  0.254 0.253 -0.741   0.0900   1.00   5777.   5154.
115 theta[11…  0.852   0.847  0.259 0.257  0.432   1.29     1.00   3967.   5212.
116 theta[11… -0.172  -0.171  0.249 0.255 -0.579   0.228    1.00   6242.   5337.
117 theta[11…  0.0551  0.0511 0.242 0.239 -0.341   0.454    1.00   5062.   4429.
118 theta[11…  0.315   0.315  0.246 0.249 -0.0873  0.724    1.00   5223.   4676.
119 theta[11… -0.837  -0.833  0.247 0.249 -1.24   -0.437    1.00   5169.   4449.
120 theta[12… -0.987  -0.986  0.249 0.247 -1.40   -0.576    1.00   4792.   5002.
121 theta[12…  0.198   0.198  0.251 0.250 -0.217   0.608    1.00   5798.   5866.
122 theta[12… -0.643  -0.638  0.246 0.244 -1.05   -0.239    1.00   4539.   4935.
123 theta[12…  1.20    1.20   0.255 0.252  0.793   1.63     1.00   4452.   5116.
124 theta[12…  1.03    1.03   0.250 0.250  0.625   1.45     1.00   4251.   4635.
125 theta[12… -0.984  -0.982  0.257 0.258 -1.41   -0.569    1.00   4209.   4992.
126 theta[12… -0.647  -0.646  0.251 0.252 -1.06   -0.238    1.00   5043.   4895.
127 theta[12…  1.12    1.12   0.258 0.259  0.695   1.54     1.00   4753.   5162.
128 theta[12… -0.851  -0.852  0.249 0.251 -1.26   -0.448    1.00   4437.   4982.
129 theta[12… -0.669  -0.666  0.242 0.244 -1.07   -0.272    1.00   4287.   5121.
130 theta[13…  1.86    1.85   0.264 0.264  1.43    2.30     1.00   3292.   4752.
131 theta[13… -0.342  -0.336  0.248 0.245 -0.748   0.0709   1.00   6356.   5519.
132 theta[13…  1.65    1.65   0.263 0.268  1.23    2.09     1.00   3584.   4993.
133 theta[13… -0.984  -0.979  0.249 0.246 -1.40   -0.583    1.00   4221.   4639.
134 theta[13…  2.59    2.58   0.280 0.279  2.14    3.06     1.00   2548.   4471.
135 theta[13… -0.795  -0.793  0.248 0.246 -1.21   -0.381    1.00   5351.   4675.
136 theta[13… -0.983  -0.982  0.248 0.248 -1.39   -0.581    1.00   4665.   5578.
137 theta[13…  0.773   0.772  0.249 0.248  0.367   1.18     1.00   4585.   5084.
138 theta[13… -0.792  -0.793  0.254 0.252 -1.22   -0.379    1.00   4540.   4743.
139 theta[13…  0.0375  0.0374 0.249 0.246 -0.369   0.444    1.00   5480.   4968.
140 theta[14… -0.0602 -0.0612 0.251 0.248 -0.471   0.358    1.00   6477.   5454.
141 theta[14… -0.845  -0.844  0.246 0.244 -1.25   -0.442    1.00   5029.   5051.
142 theta[14… -0.982  -0.978  0.249 0.251 -1.40   -0.574    1.00   4476.   4072.
143 theta[14… -0.986  -0.986  0.251 0.249 -1.40   -0.576    1.00   4736.   5210.
144 theta[14…  0.812   0.810  0.247 0.247  0.411   1.22     1.00   4872.   4622.
145 theta[14… -0.982  -0.979  0.251 0.257 -1.40   -0.574    1.00   4775.   4894.
146 theta[14…  0.603   0.604  0.251 0.247  0.193   1.02     1.00   5027.   5028.
147 theta[14…  0.850   0.849  0.255 0.253  0.436   1.27     1.00   4995.   4336.
148 theta[14…  0.117   0.116  0.256 0.261 -0.303   0.539    1.00   5353.   6165.
149 theta[14… -0.983  -0.983  0.248 0.248 -1.39   -0.580    1.00   4671.   4570.
150 theta[15… -0.985  -0.982  0.247 0.249 -1.39   -0.590    1.00   4557.   5248.
151 theta[15… -0.641  -0.638  0.248 0.246 -1.05   -0.234    1.00   4594.   5012.
152 theta[15… -0.983  -0.979  0.252 0.260 -1.40   -0.577    1.00   4416.   4825.
153 theta[15…  1.52    1.52   0.255 0.252  1.11    1.94     1.00   3273.   4800.
154 theta[15… -0.933  -0.931  0.251 0.250 -1.35   -0.531    1.00   5345.   5741.
155 theta[15… -0.984  -0.980  0.251 0.252 -1.39   -0.574    1.00   4897.   5091.
156 theta[15… -0.844  -0.844  0.254 0.256 -1.26   -0.436    1.00   4832.   4382.
157 theta[15… -0.837  -0.837  0.245 0.248 -1.24   -0.435    1.00   4081.   4738.
158 theta[15… -0.0176 -0.0192 0.245 0.245 -0.424   0.390    1.00   5124.   4501.
159 theta[15…  0.604   0.603  0.252 0.253  0.192   1.02     1.00   5151.   5274.
160 theta[16… -0.985  -0.982  0.247 0.244 -1.40   -0.585    1.00   4118.   4905.
161 theta[16…  1.29    1.29   0.251 0.248  0.891   1.71     1.00   3948.   5023.
162 theta[16…  0.164   0.165  0.249 0.250 -0.247   0.568    1.00   5887.   4862.
163 theta[16… -0.985  -0.987  0.255 0.255 -1.40   -0.570    1.00   4361.   4463.
164 theta[16… -0.546  -0.543  0.257 0.262 -0.971  -0.132    1.00   6051.   5156.
165 theta[16…  0.154   0.154  0.255 0.253 -0.263   0.575    1.00   5766.   5078.
166 theta[16… -0.983  -0.978  0.253 0.257 -1.40   -0.579    1.00   4344.   4447.
167 theta[16…  0.316   0.316  0.245 0.242 -0.0825  0.717    1.00   4859.   4793.
168 theta[16…  1.37    1.36   0.259 0.262  0.952   1.80     1.00   3668.   4562.
169 theta[16… -0.355  -0.355  0.247 0.243 -0.762   0.0506   1.00   5240.   4809.
170 theta[17…  0.110   0.109  0.245 0.244 -0.291   0.506    1.00   5557.   4983.
171 theta[17… -0.753  -0.754  0.248 0.243 -1.16   -0.352    1.00   5032.   4757.
172 theta[17…  0.194   0.189  0.246 0.245 -0.204   0.603    1.00   5027.   5084.
173 theta[17…  0.609   0.606  0.251 0.256  0.197   1.02     1.00   5140.   5286.
174 theta[17… -0.711  -0.710  0.253 0.251 -1.12   -0.288    1.00   4736.   5131.
175 theta[17… -0.669  -0.665  0.243 0.240 -1.07   -0.269    1.00   4456.   4886.
176 theta[17… -0.794  -0.795  0.247 0.246 -1.20   -0.386    1.00   5158.   5554.
177 theta[17… -0.442  -0.444  0.246 0.245 -0.846  -0.0367   1.00   5852.   5379.
# … with abbreviated variable names ¹​ess_bulk, ²​ess_tail

EAP Estimates of Latent Variables

Density of EAP Estimates

Density of All Posterior Draws

Comparing Two Posterior Distributions

Comparing EAP Estimate with Posterior SD

Comparing EAP Estimate with Sum Score
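A sketch of how the EAP estimates behind these plots might be computed and compared with the sum score (assuming the objects defined earlier):

# EAP estimate = posterior mean of theta for each person
thetaEAP = modelCFA_samples$summary(variables = "theta")$mean

# classical sum score across the 10 items
sumScore = rowSums(conspiracyItems)

plot(sumScore, thetaEAP, xlab = "Sum Score", ylab = "EAP Estimate")
cor(sumScore, thetaEAP)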

Posterior Distribution for Person Parameters

The posterior distribution of the person parameters (the latent variable; for a single person):

\[ f(\theta_p \mid \boldsymbol{Y}) \propto f\left(\boldsymbol{Y} \mid \theta_p \right) f\left(\theta_p \right) \]

Here:

  • \(f(\theta_p \mid \boldsymbol{Y})\) is the posterior distribution of the latent variable conditional on the observed data
  • \(f\left(\boldsymbol{Y} \mid \theta_p \right)\) is the model (data) likelihood
  • \(f\left(\theta_p \right)\) is the prior distribution for the latent variable, here \(N(0,1)\)

\[ f\left(\boldsymbol{Y} \mid \theta_p \right) = \prod_{i=1}^I f(Y_{pi} \mid \theta_p)\]
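As a sketch of what this posterior looks like, the (unnormalized) log-posterior for one person’s \(\theta_p\) can be written directly, treating the item parameters as fixed (all values below are hypothetical):

# unnormalized log-posterior for one person's theta
logPosteriorTheta = function(theta, y, mu, lambda, psi){
  logLik = sum(dnorm(y, mean = mu + lambda * theta, sd = psi, log = TRUE))
  logPrior = dnorm(theta, mean = 0, sd = 1, log = TRUE)  # theta ~ N(0,1)
  logLik + logPrior
}

# example: a person who answered "2" to all 10 items (hypothetical item parameters)
y = rep(2, 10)
mu = rep(1.9, 10); lambda = rep(0.8, 10); psi = rep(0.7, 10)
logPosteriorTheta(theta = 0, y, mu, lambda, psi)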

Measurement Model Estimation Fails

Recall: Stan’s parameters {} Block

parameters {
  vector[nObs] theta;                // the latent variables (one for each person)
  vector[nItems] mu;                 // the item intercepts (one for each item)
  vector[nItems] lambda;             // the factor loadings/item discriminations (one for each item)
  vector<lower=0>[nItems] psi;       // the unique standard deviations (one for each item)   
}

Here, the parameterization of \(\lambda\) (factor loadings/discrimination parameters) can lead to problems in estimation

  • The issue: \(\lambda_i \theta_p = (-\lambda_i)(-\theta_p)\)
    • Depending on the random starting values of each of these parameters (per chain), a given chain may converge to a different region
  • To demonstrate (later), we will start with a different random number seed
    • Currently using 09102022: works fine
    • Change to 25102022: big problems

New Samples Syntax

Trying the same model with a different random number seed:

modelCFA_samplesFail = modelCFA_stan$sample(
  data = modelCFA_data,
  seed = 25102022,
  chains = 4,
  parallel_chains = 4,
  iter_warmup = 2000,
  iter_sampling = 2000
)

Convergence: FAIL

[1] 1.741764

Why Convergence Failed

The issue: \(\lambda_i \theta_p = (-\lambda_i)(-\theta_p)\)

# A tibble: 30 × 10
   variable       mean   median     sd    mad     q5   q95  rhat ess_b…¹ ess_t…²
   <chr>         <dbl>    <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl>   <dbl>   <dbl>
 1 mu[1]       2.37e+0  2.37    0.0865 0.0852  2.22  2.51   1.00  1.32e3   2582.
 2 mu[2]       1.95e+0  1.95    0.0837 0.0838  1.82  2.09   1.00  9.35e2   2294.
 3 mu[3]       1.87e+0  1.87    0.0831 0.0812  1.74  2.01   1.00  1.14e3   2591.
 4 mu[4]       2.01e+0  2.01    0.0841 0.0842  1.87  2.15   1.00  9.70e2   2281.
 5 mu[5]       1.98e+0  1.98    0.0838 0.0840  1.84  2.12   1.00  7.40e2   1639.
 6 mu[6]       1.89e+0  1.89    0.0761 0.0775  1.77  2.02   1.00  7.12e2   1580.
 7 mu[7]       1.72e+0  1.72    0.0767 0.0754  1.60  1.85   1.00  9.66e2   2777.
 8 mu[8]       1.84e+0  1.84    0.0730 0.0720  1.72  1.96   1.00  7.00e2   1438.
 9 mu[9]       1.81e+0  1.81    0.0864 0.0865  1.66  1.95   1.00  1.00e3   2556.
10 mu[10]      1.52e+0  1.52    0.0812 0.0804  1.39  1.66   1.00  1.42e3   2979.
11 lambda[1]  -1.41e-3  0.00962 0.745  1.09   -0.849 0.845  1.74  6.08e0    117.
12 lambda[2]  -1.70e-3 -0.00212 0.874  1.29   -0.969 0.969  1.74  6.08e0    161.
13 lambda[3]  -1.89e-3 -0.0291  0.807  1.19   -0.906 0.906  1.74  6.06e0    104.
14 lambda[4]  -3.35e-4 -0.0119  0.848  1.25   -0.944 0.948  1.74  6.08e0    115.
15 lambda[5]  -1.54e-3  0.00194 1.00   1.48   -1.10  1.09   1.74  6.11e0    135.
16 lambda[6]  -1.22e-3 -0.00280 0.903  1.33   -0.986 0.984  1.74  6.09e0    119.
17 lambda[7]  -2.25e-3  0.0241  0.769  1.13   -0.858 0.855  1.74  6.07e0    105.
18 lambda[8]  -1.17e-3  0.00361 0.857  1.26   -0.938 0.935  1.74  6.11e0    132.
19 lambda[9]  -1.48e-3 -0.0127  0.867  1.28   -0.968 0.966  1.74  6.10e0    121.
20 lambda[10] -5.68e-4 -0.0454  0.678  0.995  -0.776 0.775  1.74  6.06e0    104.
21 psi[1]      8.92e-1  0.890   0.0511 0.0513  0.812 0.979  1.00  1.40e4   5818.
22 psi[2]      7.35e-1  0.732   0.0439 0.0437  0.666 0.811  1.00  1.24e4   6036.
23 psi[3]      7.81e-1  0.779   0.0451 0.0453  0.711 0.857  1.00  1.34e4   6747.
24 psi[4]      7.58e-1  0.756   0.0444 0.0440  0.688 0.834  1.00  1.30e4   6175.
25 psi[5]      5.44e-1  0.543   0.0373 0.0368  0.484 0.607  1.00  8.34e3   6597.
26 psi[6]      5.05e-1  0.504   0.0334 0.0333  0.452 0.561  1.00  8.38e3   6575.
27 psi[7]      6.85e-1  0.684   0.0396 0.0391  0.623 0.751  1.00  1.27e4   5950.
28 psi[8]      4.80e-1  0.479   0.0313 0.0309  0.431 0.534  1.00  8.72e3   6298.
29 psi[9]      7.81e-1  0.780   0.0455 0.0455  0.709 0.859  1.00  1.25e4   6226.
30 psi[10]     8.39e-1  0.837   0.0470 0.0467  0.766 0.921  1.00  1.40e4   6265.
# … with abbreviated variable names ¹​ess_bulk, ²​ess_tail

Posterior Trace Plots of \(\lambda\)

Posterior Density Plots of \(\lambda\)

Examining Latent Variables

# A tibble: 177 × 10
   variable       mean    median    sd   mad     q5   q95  rhat ess_bulk ess_t…¹
   <chr>         <dbl>     <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl>    <dbl>   <dbl>
 1 theta[1]  -0.00146  -0.00315  0.249 0.246 -0.408 0.403  1.00  4198.    4672. 
 2 theta[2]   0.00245  -0.122    1.55  2.25  -1.85  1.86   1.73     6.07    98.4
 3 theta[3]   0.00266  -0.00440  1.72  2.52  -2.04  2.05   1.74     6.06   117. 
 4 theta[4]   0.00206  -0.0296   0.959 1.37  -1.24  1.25   1.73     6.06   107. 
 5 theta[5]  -0.00233  -0.00457  0.250 0.251 -0.409 0.410  1.02   201.    4934. 
 6 theta[6]  -0.00499   0.0738   1.01  1.45  -1.30  1.29   1.73     6.08   124. 
 7 theta[7]   0.00155  -0.00312  0.423 0.502 -0.655 0.664  1.61     6.59   106. 
 8 theta[8]  -0.000405  0.000558 0.250 0.251 -0.409 0.407  1.03   108.    4176. 
 9 theta[9]  -0.00238  -0.00591  0.826 1.16  -1.11  1.11   1.73     6.06   104. 
10 theta[10]  0.000514 -0.00102  0.252 0.244 -0.411 0.412  1.02   238.    4244. 
# … with 167 more rows, and abbreviated variable name ¹​ess_tail

Posterior Trace Plots of \(\theta\)

Posterior Density Plots of \(\theta\)

Fixing Convergence

Stan allows starting values to be set via cmdstanr

  • Documentation is sparse, but after some trial and a lot of error, I will show you how

Alternatively:

  • Restrict \(\lambda\) to be positive
    • Truncates the multivariate normal prior distribution
  • Can also choose a prior with a strictly positive range (like the log-normal)
  • Note: Restricting the space of \(\lambda\) will not permit truly negative values
    • Not ideal, as negative \(\lambda\) values can signal a problem with the data
parameters {
  vector[nObs] theta;                // the latent variables (one for each person)
  vector[nItems] mu;                 // the item intercepts (one for each item)
  vector<lower=0>[nItems] lambda;    // the factor loadings/item discriminations (one for each item)
  vector<lower=0>[nItems] psi;       // the unique standard deviations (one for each item)   
}

Setting Starting Values in Stan

Starting values (initial values) are the first values used when an MCMC chain starts

  • In Stan, by default, parameters are randomly started between -2 and 2
    • Bounded parameters are transformed so they are unbounded in the algorithm
  • What we need:
    • Randomly start all \(\lambda\) parameters so that they converge to the \(\lambda_i\theta_p\) mode
      • As opposed to the \((-\lambda_i)(-\theta_p)\) mode

cmdstanr Syntax for Initial Values

Add the init option to the $sample() function of the cmdstanr object:

# set starting values for some of the parameters
modelCFA_samples2fixed = modelCFA_stan$sample(
  data = modelCFA_data,
  seed = 25102022,
  chains = 4,
  parallel_chains = 4,
  iter_warmup = 2000,
  iter_sampling = 2000, 
  init = function() list(lambda=rnorm(nItems, mean=10, sd=2))
)

The init option can be specified as a function; here, each \(\lambda\) is randomly started from a normal distribution with a mean of 10, so every chain begins near the positive mode

Initialization Process

See the lecture R syntax for information on how to confirm starting values are set
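One quick check, assuming the fit object’s $init() method (which returns the initial values each chain used):

# the initial values actually used by each chain
initValues = modelCFA_samples2fixed$init()
initValues[[1]]$lambda  # chain 1's starting values for lambda (should be near 10)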

Final Results: Parameters

[1] 1.008906
# A tibble: 30 × 10
   variable    mean median     sd    mad    q5   q95  rhat ess_bulk ess_tail
   <chr>      <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl>    <dbl>    <dbl>
 1 mu[1]      2.37   2.37  0.0867 0.0877 2.23  2.51   1.00    1484.    3017.
 2 mu[2]      1.96   1.96  0.0855 0.0849 1.81  2.10   1.01    1023.    1964.
 3 mu[3]      1.88   1.88  0.0846 0.0828 1.74  2.02   1.00    1210.    2686.
 4 mu[4]      2.01   2.01  0.0865 0.0873 1.87  2.16   1.00    1168.    2681.
 5 mu[5]      1.99   1.99  0.0845 0.0842 1.85  2.12   1.01     787.    1846.
 6 mu[6]      1.90   1.90  0.0774 0.0751 1.77  2.02   1.01     818.    1814.
 7 mu[7]      1.72   1.72  0.0776 0.0763 1.60  1.85   1.01    1065.    2588.
 8 mu[8]      1.84   1.85  0.0731 0.0721 1.72  1.96   1.01     812.    1596.
 9 mu[9]      1.81   1.81  0.0876 0.0857 1.66  1.95   1.00    1148.    2308.
10 mu[10]     1.52   1.52  0.0817 0.0807 1.39  1.66   1.00    1507.    2944.
11 lambda[1]  0.740  0.738 0.0832 0.0822 0.608 0.880  1.00    3278.    4524.
12 lambda[2]  0.871  0.869 0.0757 0.0742 0.749 0.998  1.00    1916.    3797.
13 lambda[3]  0.802  0.800 0.0778 0.0778 0.678 0.931  1.00    2568.    4543.
14 lambda[4]  0.844  0.842 0.0766 0.0773 0.721 0.974  1.00    2120.    3592.
15 lambda[5]  0.998  0.996 0.0710 0.0707 0.885 1.12   1.00    1460.    2807.
16 lambda[6]  0.900  0.897 0.0640 0.0638 0.799 1.01   1.00    1449.    3056.
17 lambda[7]  0.765  0.762 0.0701 0.0699 0.654 0.883  1.00    2126.    3744.
18 lambda[8]  0.854  0.852 0.0611 0.0605 0.757 0.959  1.00    1392.    2754.
19 lambda[9]  0.862  0.860 0.0792 0.0780 0.737 0.996  1.00    2265.    4269.
20 lambda[10] 0.673  0.670 0.0773 0.0762 0.549 0.805  1.00    3194.    4112.
21 psi[1]     0.892  0.890 0.0498 0.0505 0.814 0.976  1.00   12915.    5995.
22 psi[2]     0.734  0.733 0.0424 0.0415 0.667 0.807  1.00   13155.    6267.
23 psi[3]     0.782  0.780 0.0455 0.0448 0.711 0.860  1.00   13300.    6263.
24 psi[4]     0.757  0.755 0.0442 0.0437 0.688 0.833  1.00   13017.    5909.
25 psi[5]     0.545  0.543 0.0373 0.0374 0.487 0.608  1.00    8467.    6525.
26 psi[6]     0.505  0.504 0.0337 0.0329 0.451 0.562  1.00    8723.    6907.
27 psi[7]     0.686  0.684 0.0398 0.0389 0.623 0.755  1.00   12916.    6328.
28 psi[8]     0.480  0.479 0.0324 0.0324 0.430 0.536  1.00    9187.    6727.
29 psi[9]     0.782  0.780 0.0461 0.0463 0.710 0.861  1.00   12622.    6432.
30 psi[10]    0.839  0.838 0.0469 0.0475 0.766 0.921  1.00   13958.    5885.

Comparing Results

Plot of all parameters across both algorithm runs:

Correlation of all parameters across both algorithm runs:

[1] 0.9999968
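A sketch of how this comparison might be computed, assuming the two fit objects are the original run and the run with fixed starting values:

# posterior means of every sampled quantity from each run
means1 = modelCFA_samples$summary()$mean
means2 = modelCFA_samples2fixed$summary()$mean

plot(means1, means2, xlab = "Original Run", ylab = "Fixed-Init Run")
cor(means1, means2)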

Wrapping Up

Wrapping Up

Today, we showed how to model observed data using a normal distribution

  • Assumptions of Confirmatory Factor Analysis
    • Not appropriate for our data
    • May not be appropriate for many data sets
  • We will have to keep our loading/discrimination parameters positive to ensure each chain converges to the same posterior mode
    • This will continue through the next types of data
  • Next up, categorical distributions for observed data
    • More appropriate for these data as they are discrete categorical responses