Jonas Haslbeck, Postdoc Psych Methods
http://jmbh.github.io/
Selecting the Number of Factors in Exploratory Factor Analysis via out-of-sample Prediction Errors<p><a href="https://en.wikipedia.org/wiki/Exploratory_factor_analysis">Exploratory Factor Analysis</a> (EFA) identifies a number of latent factors that explain correlations between observed variables. A key issue in the application of EFA is the selection of an adequate number of factors. This is a non-trivial problem because more factors always improve the fit of the model. Most methods for selecting the number of factors fall into two categories: either they analyze the patterns of eigenvalues of the correlation matrix, such as <a href="https://en.wikipedia.org/wiki/Parallel_analysis">parallel analysis</a>; or they frame the selection of the number of factors as a model selection problem and use approaches such as <a href="https://en.wikipedia.org/wiki/Likelihood-ratio_test">likelihood ratio tests</a> or <a href="https://en.wikipedia.org/wiki/Model_selection#Criteria">information criteria</a>.</p>
<p><a href="https://psycnet.apa.org/fulltext/2023-13984-001.html">In a recent paper</a> we proposed a new method based on model selection. We use the connection between model-implied correlation matrices and standardized regression coefficients to do model selection based on out-of-sample prediction errors, as is common in the field of machine learning. We show in a simulation study that our method slightly outperforms other standard methods on average and is relatively robust across specifications of the true model. An implementation is available in the <a href="https://cran.r-project.org/web/packages/fspe/index.html">R-package fspe</a>, which I present here with a short code example.</p>
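To convey the core idea, here is a minimal sketch of my own (an illustration, not the internals of the fspe package): a factor model with $k$ factors implies a correlation matrix, this matrix yields standardized regression weights for predicting each variable from all others, and those predictions are scored on held-out data.

```r
# Sketch of the idea (assumes standardized variables; hypothetical helper,
# not the fspe() implementation)
pe_from_implied_R <- function(R_implied, test_data) {
  Z <- scale(test_data)  # standardize the held-out data
  p <- ncol(Z)
  errs <- numeric(p)
  for (j in 1:p) {
    # model-implied weights for predicting variable j from the others
    beta <- solve(R_implied[-j, -j], R_implied[-j, j])
    errs[j] <- mean((Z[, j] - Z[, -j, drop = FALSE] %*% beta)^2)
  }
  mean(errs)  # prediction error averaged across variables
}
```

In the method, the implied correlation matrix is estimated on the training folds for each candidate number of factors $k$, and the $k$ minimizing this out-of-sample error is selected.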
<p>We use a dataset with 24 measurements of cognitive tasks from 301 individuals from <a href="https://psycnet.apa.org/record/1939-04445-001">Holzinger and Swineford (1939)</a>. <a href="https://books.google.com/books?hl=en&lr=&id=e-vMN68C3M4C&oi=fnd&pg=PR15&dq=Harman,+H.+H.+(1967).+Modern+factor+analysis.+University+of+Chicago+Press.&ots=t6OpGtgX1C&sig=AxyxKKP9Aj7y9vhIJotRfBkQamM">Harman (1967)</a> presents both a four- and a five-factor solution for this dataset. In the four-factor solution, the fifth factor corresponding to the variables 20–24 is eliminated. For this reason, we exclude variables 20–24, which gives us an example dataset in which we would theoretically expect four factors. This reduced dataset is included in the fspe-package:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">fspe</span><span class="p">)</span><span class="w">
</span><span class="n">data</span><span class="p">(</span><span class="n">holzinger19</span><span class="p">)</span><span class="w">
</span><span class="nf">dim</span><span class="p">(</span><span class="n">holzinger19</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 301 19</code></pre></figure>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">head</span><span class="p">(</span><span class="n">holzinger19</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## t01_visperc t02_cubes t03_frmbord t04_lozenges t05_geninfo t06_paracomp
## 1 20 31 12 3 40 7
## 2 32 21 12 17 34 5
## 3 27 21 12 15 20 3
## 4 32 31 16 24 42 8
## 5 29 19 12 7 37 8
## 6 32 20 11 18 31 3
## t07_sentcomp t08_wordclas t09_wordmean t10_addition t11_code t12_countdot
## 1 23 22 9 78 74 115
## 2 12 22 9 87 84 125
## 3 7 12 3 75 49 78
## 4 18 21 17 69 65 106
## 5 16 25 18 85 63 126
## 6 12 25 6 100 92 133
## t13_sccaps t14_wordrecg t15_numbrecg t16_figrrecg t17_objnumb t18_numbfig
## 1 229 170 86 96 6 9
## 2 285 184 85 100 12 12
## 3 159 170 85 95 1 5
## 4 175 181 80 91 5 3
## 5 213 187 99 104 15 14
## 6 270 164 84 104 6 6
## t19_figword
## 1 16
## 2 10
## 3 6
## 4 10
## 5 14
## 6 14</code></pre></figure>
<p>Next to providing the data to the <code class="language-plaintext highlighter-rouge">fspe()</code> function, we specify that factor models with 1, 2, …, 10 factors should be considered (<code class="language-plaintext highlighter-rouge">maxK = 10</code>), that the cross-validation scheme should use 10 folds (<code class="language-plaintext highlighter-rouge">nfold = 10</code>) and be repeated 10 times (<code class="language-plaintext highlighter-rouge">rep = 10</code>), and that prediction errors (<code class="language-plaintext highlighter-rouge">method = "PE"</code>) should be used. An alternative method (<code class="language-plaintext highlighter-rouge">method = "CovE"</code>) computes an out-of-sample estimation error on the covariance matrix instead of a prediction error on the raw data; it is similar to the method proposed by <a href="https://www.tandfonline.com/doi/abs/10.1207/s15327906mbr2404_4">Browne & Cudeck (1989)</a>. Finally, we set a seed so that the analysis demonstrated here is fully reproducible.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">fspe_out</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">fspe</span><span class="p">(</span><span class="n">holzinger19</span><span class="p">,</span><span class="w">
</span><span class="n">maxK</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">10</span><span class="p">,</span><span class="w">
</span><span class="n">nfold</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">10</span><span class="p">,</span><span class="w">
</span><span class="n">rep</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">10</span><span class="p">,</span><span class="w">
</span><span class="n">method</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"PE"</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<p>We can inspect the out-of-sample prediction error averaged across variables, folds, and repetitions as a function of the number of factors:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">par</span><span class="p">(</span><span class="n">mar</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">4.5</span><span class="p">,</span><span class="m">4</span><span class="p">,</span><span class="m">0</span><span class="p">,</span><span class="m">1</span><span class="p">))</span><span class="w">
</span><span class="n">plot.new</span><span class="p">()</span><span class="w">
</span><span class="n">plot.window</span><span class="p">(</span><span class="n">xlim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">10</span><span class="p">),</span><span class="w"> </span><span class="n">ylim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">0.6</span><span class="p">,</span><span class="w"> </span><span class="m">0.8</span><span class="p">))</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">10</span><span class="p">)</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">xlab</span><span class="o">=</span><span class="s2">"Number of Factors"</span><span class="p">,</span><span class="w"> </span><span class="n">ylab</span><span class="o">=</span><span class="s2">"Out-of-sample Prediction Error"</span><span class="p">)</span><span class="w">
</span><span class="n">points</span><span class="p">(</span><span class="n">which.min</span><span class="p">(</span><span class="n">fspe_out</span><span class="o">$</span><span class="n">PEs</span><span class="p">),</span><span class="w"> </span><span class="nf">min</span><span class="p">(</span><span class="n">fspe_out</span><span class="o">$</span><span class="n">PEs</span><span class="p">),</span><span class="w"> </span><span class="n">cex</span><span class="o">=</span><span class="m">3</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="o">=</span><span class="s2">"red"</span><span class="p">,</span><span class="w"> </span><span class="n">lwd</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">fspe_out</span><span class="o">$</span><span class="n">PEs</span><span class="p">,</span><span class="w"> </span><span class="n">lwd</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">abline</span><span class="p">(</span><span class="n">h</span><span class="o">=</span><span class="nf">min</span><span class="p">(</span><span class="n">fspe_out</span><span class="o">$</span><span class="n">PEs</span><span class="p">),</span><span class="w"> </span><span class="n">col</span><span class="o">=</span><span class="s2">"grey"</span><span class="p">,</span><span class="w"> </span><span class="n">lty</span><span class="o">=</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="n">lwd</span><span class="o">=</span><span class="m">2</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2023-02-14-EFA_Factors_OoSPE.Rmd/unnamed-chunk-3-1.png" title="plot of chunk unnamed-chunk-3" alt="plot of chunk unnamed-chunk-3" style="display: block; margin: auto;" />
We see that the out-of-sample prediction error is minimized by the factor model with four factors. The number of factors with lowest prediction error can also be directly obtained from the output object:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">fspe_out</span><span class="o">$</span><span class="n">nfactor</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 4</code></pre></figure>
<p>The un-aggregated prediction errors of the 10 repetitions of the cross-validation scheme can be found in <code class="language-plaintext highlighter-rouge">fspe_out$PE_array</code>.</p>
Tue, 14 Feb 2023 09:00:00 +0000
http://jmbh.github.io//EFA_Factors_OoSPE/
The Impact of Ordinal Scales on Gaussian Mixture Recovery<p><a href="https://en.wikipedia.org/wiki/Mixture_model#Multivariate_Gaussian_mixture_model">Gaussian Mixture Models (GMMs)</a> and their special cases Latent Profile Analysis and <a href="https://en.wikipedia.org/wiki/K-means_clustering">k-Means</a> are popular and versatile tools for exploring heterogeneity in multivariate continuous data. However, they assume that the observed data are continuous, an assumption that is often not met: for example, the severity of disease symptoms is often measured in ordinal categories such as <em>not at all</em>, <em>several days</em>, <em>more than half the days</em>, and <em>nearly every day</em>, and survey questions are often assessed using ordinal responses such as <em>strongly disagree</em>, <em>disagree</em>, <em>neutral</em>, <em>agree</em>, and <em>strongly agree</em>. In this blog post, I summarize <a href="https://link.springer.com/article/10.3758/s13428-022-01883-8">a paper</a> which investigates to what extent the estimation of GMMs is robust against observing ordinal instead of continuous variables.</p>
<h3 id="simulation-setup">Simulation Setup</h3>
<p>To investigate this question we generate data from a number of GMMs that differ in the number of variables/dimensions $p \in \{2, \dots, 10 \}$, the number of components $K \in \{2,3,4\}$ and the pairwise <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">Kullback-Leibler Divergence</a> $\text{D}_{KL} \in \{2, 3.5, 5\}$ between the components. We then generate data from each GMM and threshold the continuous data into $12, 10, 8, 6, 5, 4, 3,$ or $2$ categories, using equally spaced thresholds ranging from the $0.5\%$ to the $99.5\%$ quantile. The following figure shows the result of this thresholding for a bivariate ($p=2$) GMM with two components ($K=2$):</p>
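The thresholding step can be sketched in a few lines of R (a minimal illustration of the scheme described above; <code class="language-plaintext highlighter-rouge">make_ordinal</code> is a hypothetical helper, not taken from the paper's code):

```r
# Threshold a continuous variable into m ordinal categories, using equally
# spaced thresholds between the 0.5% and 99.5% quantiles (as described above)
make_ordinal <- function(x, m) {
  br <- seq(quantile(x, 0.005), quantile(x, 0.995), length.out = m - 1)
  findInterval(x, br) + 1  # returns categories 1, ..., m
}

set.seed(1)
x <- rnorm(1000)
table(make_ordinal(x, 4))  # counts per ordinal category
```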
<h2><img src="http://jmbh.github.io/figs/OrdinalGMM/OGMM_setup.png" alt="center" /></h2>
<p>We then estimate the GMM using arguably the most widely used algorithm, the <a href="https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm">Expectation-Maximization (EM) algorithm</a>, and perform model selection with the <a href="https://en.wikipedia.org/wiki/Bayesian_information_criterion">Bayesian Information Criterion (BIC)</a>. The red <font color="red">X</font> marks in the figure indicate the means of the selected model. We see that the EM algorithm & BIC correctly recover two components and their means for the continuous data. However, when ordinal variables are observed instead, incorrect numbers of components are selected.</p>
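This estimation pipeline can be reproduced with standard tools. The sketch below uses the mclust package, which fits GMMs via the EM algorithm and selects the number of components with the BIC; using mclust here is my own illustration, not necessarily the paper's pipeline (the paper's code is linked at the end of this post):

```r
# Sketch: EM estimation + BIC model selection for a GMM via mclust
# (illustration only; simulated data, not the paper's setup)
library(mclust)

# generate data from a well-separated two-component bivariate GMM
set.seed(1)
dat <- rbind(matrix(rnorm(500 * 2, mean = 0), ncol = 2),
             matrix(rnorm(500 * 2, mean = 3), ncol = 2))

fit <- Mclust(dat, G = 1:6)  # fit 1-6 components, select via BIC
fit$G                        # selected number of components
fit$parameters$mean          # estimated component means
```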
<h3 id="summary-of-results">Summary of Results</h3>
<p>In the following figure we show the accuracy of recovering the correct number of components $K$, averaged across the variations in $K$ and $\text{D}_{KL}$, as a function of the number of variables $p$ (y-axis) and the number of ordinal categories (x-axis), using $N=10000$ samples and averaged across $100$ repetitions. We see that if the number of variables or the number of ordinal categories is low, the accuracy is extremely low. However, when both are larger than $5$, the accuracy is above $0.90$.</p>
<h2 id="-1"><img src="http://jmbh.github.io/figs/OrdinalGMM/OGMM_results.png" alt="center" /></h2>
<p><a href="https://link.springer.com/article/10.3758/s13428-022-01883-8">In our paper</a> we present performance as a function of the number of components $K$, the distance between components $\text{D}_{KL}$, the number of variables $p$, and the sample sizes $N \in \{1000, 2500, 10000\}$. In addition, we assess the estimation error in the parameters of the models for which $K$ has been correctly estimated, as a function of various characteristics of the data-generating GMM. These results show that a sizable bias in parameter estimates remains across scenarios and that this bias does not decrease with increasing sample size. Next to the simulation results, we discuss possible alternative modeling approaches based on ordinal models with underlying latent Gaussian distributions, and based on categorical data analysis in which ordinal variables are manifest variables. The code to reproduce all analyses and results in our paper can be found <a href="https://github.com/jmbh/OrdinalGMMSim_reparchive">here on Github</a>.</p>
Thu, 14 Jul 2022 11:00:00 +0000
http://jmbh.github.io//OrdinalGMM/
Computing Odds Ratios from Mixed Graphical Models<p>Interpreting statistical network models typically involves interpreting individual edge parameters. If the network model is a Gaussian Graphical Model (GGM), the interpretation is relatively simple: the pairwise interaction parameters are partial correlations, which indicate conditional linear relationships and vary from -1 to 1. Using the standard deviations of the two involved variables, the partial correlation can also be transformed into a linear regression coefficient (see for example <a href="https://arxiv.org/abs/1609.04156">here</a>). However, when studying interactions involving categorical variables, such as in an Ising model or a Mixed Graphical Model (MGM), the parameters are not limited to a certain range and their interpretation is less intuitive. In these situations it may be helpful to report the interactions between variables in terms of odds ratios.</p>
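One standard way to see the GGM connection mentioned above is through the precision matrix (the inverse covariance matrix). The sketch below is my own illustration on simulated data, not taken from the linked paper:

```r
# From a precision matrix K, the partial correlation and the regression
# coefficient between two variables are simple transformations of each other
set.seed(1)
dat <- matrix(rnorm(500 * 3), 500, 3)
dat[, 2] <- dat[, 2] + 0.5 * dat[, 1]  # induce a conditional dependency

K <- solve(cov(dat))                             # precision matrix
parcor_12  <- -K[1, 2] / sqrt(K[1, 1] * K[2, 2]) # partial correlation of X1, X2
beta_2in1  <- -K[1, 2] / K[1, 1]                 # coefficient of X2 when regressing X1 on X2, X3
```

The two quantities differ only by the scaling factor $\sqrt{K_{22}/K_{11}}$, which is what makes translating between partial correlations and regression coefficients straightforward.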
<h3 id="odds-ratios">Odds Ratios</h3>
<p>Odds Ratios are made up of odds, which are themselves a ratio of probabilities</p>
<p>\(\text{Odds} = \frac{P(X_1=1)}{P(X_1=0)}.\)
Since we chose to put $P(X_1=1)$ in the numerator, we interpret these odds as the “odds being in favor of $X_1=1$”. For example, if $X_1$ is the symptom sleep problems, which takes the value 0 (no sleep problems) with probability 0.75 and the value 1 (sleep problems) with probability 0.25, then the odds of having sleep problems are 1 to 3 (i.e., $0.25/0.75 = 1/3$).</p>
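A quick numeric version of this example:

```r
# odds in favor of sleep problems (X1 = 1), using the probabilities above
p_sleep <- 0.25
odds <- p_sleep / (1 - p_sleep)
odds  # 1/3, i.e. odds of 1 to 3
```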
<p>However, these odds may be different in different circumstances. Let’s say these circumstances are captured by variable $X_2$ which takes values in $\{0,1\}$. In our example, those circumstances could be whether you live next to a busy street (1) or not (0). If the odds indeed depend on $X_2$ then we have</p>
\[\text{Odds}_{X_2=1} = \frac{P(X_1=1 \mid X_2=1)}{P(X_1=0 \mid X_2=1)} \neq
\frac{P(X_1=1 \mid X_2=0)}{P(X_1=0 \mid X_2=0)} = \text{Odds}_{X_2=0}
.\]
<p>A way to quantify the degree to which the odds are different depending on whether we set $X_2=1$ or $X_2=0$ is to divide the odds in those two situations</p>
\[\text{Odds Ratio} = \frac{\text{Odds}_{X_2=1}}{\text{Odds}_{X_2=0}}
,\]
<p>which gives rise to an odds ratio (OR).</p>
<p>How do we interpret this odds ratio? If the OR is equal to 1, then $X_2$ has no influence on the odds between $P(X_1=1)$ and $P(X_1=0)$; if OR > 1, $X_2=1$ <em>increases</em> the odds compared to $X_2=0$; and if OR < 1, $X_2=1$ <em>decreases</em> the odds compared to $X_2=0$. In our example from above, an OR = 4 would imply that the odds of sleep problems (vs. no sleep problems) are four times larger when living next to a busy street (vs. not living next to a busy street).</p>
<p>In the remainder of this blog post, I will illustrate how to compute such odds ratios based on MGMs estimated with the R-package <a href="https://cran.r-project.org/web/packages/mgm/index.html"><em>mgm</em></a>.</p>
<h3 id="loading-example-data">Loading Example Data</h3>
<p>We use a data set on Autism Spectrum Disorder (ASD) which contains $n=3521$ observations of seven variables: gender (1 = male, 2 = female), IQ (continuous), Integration in Society (ordinal, 3 categories), Number of comorbidities (count), Type of housing (1 = supervised, 2 = unsupervised), working hours (continuous), and satisfaction with treatment (continuous).</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">mgm</span><span class="p">)</span><span class="w"> </span><span class="c1"># data is loaded with mgm package (version 1.2-11)</span><span class="w">
</span><span class="n">head</span><span class="p">(</span><span class="n">autism_data</span><span class="o">$</span><span class="n">data</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Gender IQ Integration in Society No of Comorbidities Type of Housing
## 1 1 6.00 1 1 1
## 2 2 6.00 2 1 1
## 3 1 5.00 2 0 1
## 4 1 6.00 1 0 1
## 5 1 5.00 1 1 1
## 6 1 4.49 1 1 1
## Workinghours Satisfaction: Treatment
## 1 0 3.00
## 2 0 2.00
## 3 0 4.00
## 4 10 3.00
## 5 0 1.00
## 6 0 1.75</code></pre></figure>
<p>For more details on this data set have a look at <a href="http://jmbh.github.io/Estimation-of-mixed-graphical-models/">this previous blog post</a>.</p>
<h3 id="fitting-mgm">Fitting MGM</h3>
<p>We model gender, integration in society, and type of housing as categorical variables (with 2, 3, and 2 categories, respectively), and all remaining variables as Gaussian variables:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">mod</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">mgm</span><span class="p">(</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">autism_data</span><span class="o">$</span><span class="n">data</span><span class="p">,</span><span class="w">
</span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"c"</span><span class="p">,</span><span class="w"> </span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="s2">"c"</span><span class="p">,</span><span class="w"> </span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="s2">"c"</span><span class="p">,</span><span class="w"> </span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="s2">"g"</span><span class="p">),</span><span class="w">
</span><span class="n">level</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">3</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">),</span><span class="w">
</span><span class="n">lambdaSel</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"CV"</span><span class="p">,</span><span class="w">
</span><span class="n">ruleReg</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"AND"</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Note that the sign of parameter estimates is stored separately; see ?mgm</code></pre></figure>
<p>and we visualize the dependencies of the resulting model using the <a href="https://cran.r-project.org/web/packages/qgraph/index.html">qgraph</a> package:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">qgraph</span><span class="p">)</span><span class="w">
</span><span class="n">qgraph</span><span class="p">(</span><span class="n">mod</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">,</span><span class="w">
</span><span class="n">nodeNames</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">autism_data</span><span class="o">$</span><span class="n">colnames</span><span class="p">,</span><span class="w">
</span><span class="n">edge.color</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mod</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">edgecolor</span><span class="p">,</span><span class="w">
</span><span class="c1"># edge.labels = TRUE,</span><span class="w">
</span><span class="n">legend</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">TRUE</span><span class="p">,</span><span class="w">
</span><span class="n">layout</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"spring"</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2020-09-01-ORs-in-MGMs.Rmd/unnamed-chunk-3-1.png" title="plot of chunk unnamed-chunk-3" alt="plot of chunk unnamed-chunk-3" style="display: block; margin: auto;" /></p>
<h3 id="example-calculation-of-odds-ratio">Example Calculation of Odds Ratio</h3>
<p>We consider the simplest possible example of calculating the OR involving two binary variables. Note that we calculate the OR between two variables within a multivariate model. This means that the OR is conditional on all other variables in the model, which implies that it can be different from the OR calculated based on those two variables alone (specifically, this will happen whenever at least one additional variable is connected to both binary variables).</p>
<p>Some of you might know that there is a simple relationship between the OR and the coefficients in logistic regression. Since multinomial regression with two outcomes is equivalent to logistic regression, we could use this simple rule in this specific example. Here, however, we start out with the definition of OR and show how to calculate it in the general case of $\geq 2$ categories. Along the way, we’ll also derive the simple relationship between parameters and ORs for the binary case.</p>
<p>In our data set we use the two binary variables type of housing and gender for illustration. Specifically, we look at how the odds of type of housing change as a function of gender. Corresponding to the column numbers of the two variables, let $X_5$ be type of housing, and $X_1$ gender.</p>
<p>The definition of ORs above shows that we need to calculate four conditional probabilities. We first calculate the two probabilities $P(X_5=1 \mid X_1=0)$ and $P(X_5=0 \mid X_1=0)$ that make up $\text{Odds}_{X_1=0}$. To compute these probabilities, we need the estimated parameters of the multinomial regression on $X_5$. In the standard parameterization of multinomial regression, one of the response categories serves as the reference category (see <a href="https://en.wikipedia.org/wiki/Multinomial_logistic_regression">here</a>). The regularization used within the mgm package allows a more direct parameterization in which the probability of each response category is modeled directly (for details, see Chapter 4 in <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2929880/">this paper</a> on the <a href="https://cran.r-project.org/web/packages/glmnet/index.html">glmnet</a> package). We therefore get a set of parameters for <em>each</em> response category.</p>
<p>We can find those parameters in <code class="language-plaintext highlighter-rouge">mod$nodemodels[[5]]</code> which contains the parameters of the multinomial regression on variable $X_5$:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">coefs</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">mod</span><span class="o">$</span><span class="n">nodemodels</span><span class="p">[[</span><span class="m">5</span><span class="p">]]</span><span class="o">$</span><span class="n">model</span><span class="w">
</span><span class="n">coefs</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## $`1`
## 8 x 1 sparse Matrix of class "dgCMatrix"
## 1
## (Intercept) 0.62372320
## V1.2 -0.38147442
## V2. -0.42186120
## V3.2 0.13918915
## V3.3 -0.04001268
## V4. 0.01414994
## V6. -0.22024227
## V7. 0.04346463
##
## $`2`
## 8 x 1 sparse Matrix of class "dgCMatrix"
## 1
## (Intercept) -0.62372320
## V1.2 0.38147442
## V2. 0.42186120
## V3.2 -0.13918915
## V3.3 0.04001268
## V4. -0.01414994
## V6. 0.22024227
## V7. -0.04346463</code></pre></figure>
<p>The first set of parameters in <code class="language-plaintext highlighter-rouge">coefs[[1]]</code> models $P(X_5 = 0 \mid \dots)$ and the second set of parameters in <code class="language-plaintext highlighter-rouge">coefs[[2]]</code> models $P(X_5 = 1 \mid \dots)$.</p>
<p>The data set contains seven variables, which means that six variables predict variable 5. However, since variable 3 (Integration in Society) is a categorical variable with three categories, it is represented by two dummy variables that code for its 2nd and 3rd category. This is why we have a total of $5 \cdot 1 + 2 = 7$ predictor terms and 7 associated parameters (plus an intercept).</p>
<p>Now, back to computing those conditional probabilities. We would like to compute $P(X_5=1 \mid X_1=0)$ and $P(X_5=0 \mid X_1=0)$; however, we see that the probability of $X_5$ not only depends on $X_1$ but also on all other variables (none of the parameters are zero). We therefore need to fix all variables to some value in order to obtain a conditional probability. That is, we actually have to write the conditional probabilities as $P(X_5=1 \mid X_1, X_2, X_3, X_4, X_6, X_7)$ and $P(X_5=0 \mid X_1, X_2, X_3, X_4, X_6, X_7)$. Here we will fix all other variables to 0, but we will see later that it does not matter for our OR calculation to which value we set all these variables, as long as we choose the same values in all four probabilities.</p>
<p>The probability of $P(X_5=0 \mid \dots)$ is calculated by dividing the potential for this category by the sum of all (here two) potentials:</p>
<p>\(P(X_5=0 \mid \dots) = \frac{
\text{Potential}(X_5=0 \mid \dots)
}{
\text{Potential}(X_5=0 \mid \dots) + \text{Potential}(X_5=1 \mid \dots)
}\)
If $X_5$ had $m$ categories, there would be $m$ terms in the denominator.</p>
<p>The potentials are specified by the estimated parameters</p>
\[\text{Potential}(X_5=0 \mid X_1=0, X_2=0, \dots, X_7=0) =
\exp \{
\beta_{0} + \beta_{0,1.2} \mathbb{I}(X_1=1) + \dots + \beta_{0,7} X_7
\}
,\]
<p>where $\beta_{0}$ is the intercept and the remaining seven parameters are the ones associated with the predictor terms in the model, and $\mathbb{I}(X_1=1)$ is the indicator function (or dummy variable) for category $X_1=1$.</p>
<p>Notice that we set all variables to zero, which means that the above potential simplifies to</p>
\[\text{Potential}(X_5=0 \mid \dots) =
\exp \{\beta_{0} \} \approx \exp \{ 0.624 \}\]
<p>where I took the intercept parameter $\beta_{0}$ from <code class="language-plaintext highlighter-rouge">coefs[[1]][1,1]</code>. Similarly, we have</p>
<p>\(\text{Potential}(X_5=1 \mid \dots) =
\exp \{\beta_{1} \} \approx \exp \{ -0.624 \}\)
with the intercept $\beta_{1}$ taken from <code class="language-plaintext highlighter-rouge">coefs[[2]][1,1]</code>.</p>
<p>Using the two potentials, we can compute the probabilities. We now do this in R:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Potential0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">Potential1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">Prob0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Potential0</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="p">(</span><span class="n">Potential0</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">Potential1</span><span class="p">)</span><span class="w">
</span><span class="n">Prob1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Potential1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="p">(</span><span class="n">Potential0</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">Potential1</span><span class="p">)</span><span class="w">
</span><span class="n">Prob0</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 0.7768575</code></pre></figure>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Prob1</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 0.2231425</code></pre></figure>
<p>We calculated that the probability $P(X_5=0 \mid \dots)$ of living in supervised housing is $\approx 0.78$ and the probability $P(X_5=1 \mid \dots)$ of living in unsupervised housing is $\approx 0.22$.</p>
<p>Now we can compute the odds $\text{Odds}_{X_1=0}$:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">odds_x1_0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Prob1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">Prob0</span><span class="w">
</span><span class="n">odds_x1_0</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 0.2872373</code></pre></figure>
<p>The odds are smaller than one, which means that it is more likely that an individual lives in supervised housing.</p>
<p>Note that when computing the odds, the denominators in the probability calculations cancel, which means we could have computed the odds directly from the potentials:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Potential1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">Potential0</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 0.2872373</code></pre></figure>
<p>So far, we computed the denominator of the Odds Ratio formula. We now compute the numerator, which includes the same conditional probabilities as above, except that we set $X_1=1$ instead of $X_1=0$. As discussed above, all other variables are kept constant at 0. To keep things short, I only show the R code for this second case:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Potential0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">Potential1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">odds_x1_1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Potential1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">Potential0</span></code></pre></figure>
<p>Similar to above, <code class="language-plaintext highlighter-rouge">coefs[[1]][1,1]</code> contains the intercept and <code class="language-plaintext highlighter-rouge">coefs[[1]][2,1]</code> contains the parameter associated with predictor $X_1$ for probability $P(X_5 = 0 \mid \dots)$. <code class="language-plaintext highlighter-rouge">coefs[[2]]</code> contains the corresponding parameters for probability $P(X_5 = 1 \mid \dots)$.</p>
<p>We can now compute the OR:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">OR</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">odds_x1_1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">odds_x1_0</span><span class="w">
</span><span class="n">OR</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 2.144591</code></pre></figure>
<p>We see that for females (coded 2) the odds of living in unsupervised housing (coded 2) are about twice as high as for males.</p>
<p>In a similar way, ORs can be calculated when the predicted variable or the predictor variable of interest has more than two categories. One can also compute ORs that combine the effect of several variables, for example by setting several variables to 1 in the numerator, and setting them all to 0 in the denominator.</p>
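<p>To sketch how such a combined OR could be computed with the same potential-based recipe, the following code uses made-up coefficient matrices in place of the <code class="language-plaintext highlighter-rouge">coefs</code> object from above; the assumption that rows 2 and 3 hold the parameters of two binary predictors is purely for illustration.</p>

```r
# Made-up stand-in for the mgm output: coefs[[1]] holds the parameters of the
# equation for P(X5 = 0 | ...), coefs[[2]] those for P(X5 = 1 | ...);
# row 1 is the intercept, rows 2-3 are two (hypothetical) binary predictors
coefs <- list(matrix(c( 0.624, -0.381,  0.200), ncol = 1),
              matrix(c(-0.624,  0.381, -0.200), ncol = 1))

# OR for jointly setting the predictors in `rows` to `values` vs. all zero
or_joint <- function(coefs, rows, values) {
  lp <- function(k, v) coefs[[k]][1, 1] + sum(coefs[[k]][rows, 1] * v)
  odds_set  <- exp(lp(2, values)) / exp(lp(1, values))          # odds at X = values
  odds_zero <- exp(lp(2, 0 * values)) / exp(lp(1, 0 * values))  # odds at X = 0
  odds_set / odds_zero
}

or_joint(coefs, rows = 2:3, values = c(1, 1))  # joint OR for both predictors
```

<p>With a single row, this reduces to the OR computed step by step above.</p>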
<h3 id="does-it-matter-to-which-value-we-fix-the-other-variables">Does it matter to which value we fix the other variables?</h3>
<p>In the above calculation we fixed all other variables to zero. Would the OR have been different if we had fixed them to a different value? We will show with some basic calculations that the answer is no. Along the way, we will also derive a much simpler way to compute the OR from the parameter estimates for the special case in which the response variable is binary.</p>
<p>To keep the notation manageable, we consider only a single additional variable $X_3$ instead of the five in the empirical example above. However, we would reach the same conclusion with any number of additional variables that are kept constant.</p>
<p>We start out with the definition of the odds ratio:</p>
\[\text{Odds Ratio} =
\frac{\text{Odds}_{X_2=1}}{\text{Odds}_{X_2=0}}
=
\frac{
\frac{P(X_1=1 \mid X_2=1,X_3=x_3)}{P(X_1=0 \mid X_2=1,X_3=x_3)}
}{
\frac{P(X_1=1 \mid X_2=0,X_3=x_3)}{P(X_1=0 \mid X_2=0,X_3=x_3)}
}
.\]
<p>The question is whether it matters what we fill in for $x_3$. We will show that the terms associated with $x_3$ cancel out and it therefore does not matter to which value we fix $x_3$.</p>
\[\frac{
\frac{P(X_1=1 \mid X_2=1,X_3=x_3)}{P(X_1=0 \mid X_2=1,X_3=x_3)}
}{
\frac{P(X_1=1 \mid X_2=0,X_3=x_3)}{P(X_1=0 \mid X_2=0,X_3=x_3)}
}
=
\frac{
\frac{\exp\{\beta_1 + \beta_{21}1 + \beta_{31}x_3\}
}{\exp\{\beta_0 + \beta_{20}1 + \beta_{30}x_3\}}
}{
\frac{\exp\{\beta_1 + \beta_{21}0 + \beta_{31}x_3\}
}{
\exp\{\beta_0 + \beta_{20}0 + \beta_{30}x_3\}
}
}
=
\frac{
\frac{\exp\{\beta_1 + \beta_{21} + \beta_{31}x_3\}
}{\exp\{\beta_0 + \beta_{20} + \beta_{30}x_3\}}
}{
\frac{\exp\{\beta_1 + \beta_{31}x_3\}
}{
\exp\{\beta_0 + \beta_{30}x_3\}
}
}\]
<p>In the first step we fixed $X_1=1$ in the numerator and $X_1=0$ in the denominator and simplified. The parameter $\beta_{21}$ refers to the coefficient associated with $X_2$ in the equation modeling $P(X_1=1 \mid \dots)$, while $\beta_{20}$ refers to the coefficient associated with $X_2$ in the equation modeling $P(X_1=0 \mid \dots)$.</p>
<p>We further rearrange</p>
\[\frac{
\frac{\exp\{\beta_1 + \beta_{21} + \beta_{31}x_3\}
}{\exp\{\beta_0 + \beta_{20} + \beta_{30}x_3\}}
}{
\frac{\exp\{\beta_1 + \beta_{31}x_3\}
}{
\exp\{\beta_0 + \beta_{30}x_3\}
}
}
=
\frac{\exp\{\beta_1 + \beta_{21} + \beta_{31}x_3\}
}{\exp\{\beta_0 + \beta_{20} + \beta_{30}x_3\}}
\frac{\exp\{\beta_0 + \beta_{30}x_3\}
}{
\exp\{\beta_1 + \beta_{31}x_3\}
}
,\]
<p>which is equal to</p>
\[\exp\{\beta_1 + \beta_{21} + \beta_{31}x_3 + \beta_0 + \beta_{30}x_3 - (\beta_0 + \beta_{20} + \beta_{30}x_3 + \beta_1 + \beta_{31}x_3)\}
.\]
<p>We collect all the terms with $x_3$</p>
\[\exp\{
(\beta_1 + \beta_{21} + \beta_0 - \beta_0 - \beta_{20} - \beta_1)
+
(\beta_{31}x_3 + \beta_{30}x_3 - \beta_{30}x_3 - \beta_{31}x_3)
\}\]
<p>and we see that the terms including $x_3$ add to zero, which shows that no matter what number we fill in for $x_3$, we will always obtain the same OR.</p>
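<p>This cancellation can also be checked numerically. The sketch below uses arbitrary made-up parameter values (not the estimates from the model above) and computes the OR while fixing $x_3$ at several different values:</p>

```r
# Arbitrary made-up parameters: b1/b0 are the intercepts of the equations for
# X1 = 1 and X1 = 0; b21/b20 belong to predictor X2, b31/b30 to predictor X3
b1 <-  0.3; b21 <-  0.5; b31 <- -0.7
b0 <- -0.1; b20 <- -0.5; b30 <-  0.2

or_at_x3 <- function(x3) {
  odds_x2_1 <- exp(b1 + b21 * 1 + b31 * x3) / exp(b0 + b20 * 1 + b30 * x3)
  odds_x2_0 <- exp(b1 + b21 * 0 + b31 * x3) / exp(b0 + b20 * 0 + b30 * x3)
  odds_x2_1 / odds_x2_0
}

sapply(c(-2, 0, 1.5, 10), or_at_x3)  # all equal to exp(b21 - b20)
```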
<p>Actually, we can further simplify and get</p>
\[\exp\{
(\beta_1 + \beta_{21} + \beta_0 - \beta_0 - \beta_{20} - \beta_1)
+ 0
\}
=
\exp\{
\beta_{21} - \beta_{20}
\}\]
<p>which reveals a simpler way to compute the OR. We can verify this with our estimated coefficients</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">])</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 2.144591</code></pre></figure>
<p>and indeed we get the same OR.</p>
<p>If we wanted the OR with the numerator and denominator swapped, we would calculate:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">])</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 0.4662894</code></pre></figure>
<p>One could verify this by repeating the derivation above with swapped numerator/denominator or by using the general approach of calculating ORs that we used above.</p>
<p>This reflects the <a href="https://en.wikipedia.org/wiki/Odds_ratio#Role_in_logistic_regression">well-known</a> relation between multiple regression parameters and the OR</p>
\[\exp\{\beta_x\}=\text{OR}_x\]
<p>since the relation between logistic regression parameterization and the symmetric multinomial regression parameterization used here is $\beta_x = 2 \beta_{x1}$, where $\beta_{x1}$ is the parameter corresponding to $\beta_x$ in the equation modeling $P(X_1=1 \mid \dots)$.</p>
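<p>As a quick sanity check of this relation (independent of the mgm output above), we can simulate data from a logistic model with a single binary predictor, fit it with <code class="language-plaintext highlighter-rouge">glm()</code>, and compare $\exp\{\hat{\beta}_x\}$ to the OR computed directly from the 2&times;2 cross-table:</p>

```r
set.seed(1)
n <- 10000
x <- rbinom(n, 1, 0.5)
y <- rbinom(n, 1, plogis(-0.5 + 0.8 * x))  # true log-OR is 0.8

# OR from the fitted logistic regression coefficient
or_glm <- exp(coef(glm(y ~ x, family = binomial))["x"])

# OR from the 2x2 table (cross-product ratio)
tab <- table(x, y)
or_tab <- (tab["1", "1"] * tab["0", "0"]) / (tab["1", "0"] * tab["0", "1"])

c(or_glm = unname(or_glm), or_tab = unname(or_tab))  # both close to exp(0.8)
```

<p>With a single binary predictor, the logistic regression MLE reproduces the sample cross-product ratio exactly (up to convergence tolerance).</p>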
<h3 id="is-the-or-significant">Is the OR “significant”?</h3>
<p>The model has been estimated with $\ell_1$-regularized regression, in which the regularization parameters have been selected with 10-fold cross-validation with the goal that the parameter estimates generalize to new samples. Thus, variable selection has already been performed and it is not necessary to perform an additional hypothesis test on the OR or the underlying variables.</p>
<h3 id="an-alternative-changes-in-predicted-probabilities">An Alternative: Changes in Predicted Probabilities</h3>
<p>An alternative to ORs is to report the change in the predicted probability of $X_5$ depending on which value we fill in for $X_1$. When considering only two variables, such a change in probabilities is perhaps easier to interpret than an OR. However, we will see that changes in predicted probabilities do not share the convenient property of ORs that it does not matter at which values we fix all other variables. This makes this alternative less attractive for models that include more than two variables.</p>
<p>When looking at changes in predicted probabilities we are interested in the difference</p>
\[P(X_5=1 \mid X_1=1, \dots) - P(X_5=1 \mid X_1=0, \dots)
.\]
<p>When calculating these probabilities we are again required to fix all other variables (“…”) to some value, which we again choose to be 0. In the interest of brevity I only show the R-code for this calculation.</p>
<p>We compute the probabilities for the case $X_1=0$</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Potential0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">Potential1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">Prob1_x10</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Potential1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="p">(</span><span class="n">Potential0</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">Potential1</span><span class="p">)</span></code></pre></figure>
<p>and for the case $X_1=1$:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Potential0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">Potential1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">Prob1_x11</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Potential1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="p">(</span><span class="n">Potential0</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">Potential1</span><span class="p">)</span></code></pre></figure>
<p>We see a change in probability of</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Prob1_x11</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">Prob1_x10</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 0.1580482</code></pre></figure>
<p>That is, the probability of living in unsupervised housing is $\approx 0.16$ higher for females, which is consistent with the OR > 1 calculated above.</p>
<p>However, now let’s set $X_3=1$ instead of $X_3=0$:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Potential0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">3</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="o">*</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">Potential1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">3</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="o">*</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">Prob1_x10</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Potential1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="p">(</span><span class="n">Potential0</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">Potential1</span><span class="p">)</span><span class="w">
</span><span class="n">Potential0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">1</span><span class="p">]][</span><span class="m">3</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="o">*</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">Potential1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">exp</span><span class="p">(</span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">1</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">2</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">coefs</span><span class="p">[[</span><span class="m">2</span><span class="p">]][</span><span class="m">3</span><span class="p">,</span><span class="m">1</span><span class="p">]</span><span class="o">*</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">Prob1_x11</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">Potential1</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="p">(</span><span class="n">Potential0</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">Potential1</span><span class="p">)</span></code></pre></figure>
<p>We now get an increase in probability of</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">Prob1_x11</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">Prob1_x10</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 0.1884348</code></pre></figure>
<p>We see that the change in the predicted probability of $X_5$ as a function of $X_1$ depends on where we fix the other variables. So while changes in probabilities are perhaps easier to interpret, they have the downside of depending on the values at which we fix all other variables. However, this may be acceptable in situations in which we are interested in some specific state of all other variables.</p>
<h3 id="summary">Summary</h3>
<p>Starting out with the definition of odds ratios, I showed how to compute them in a general setting and how to compute them from the output of the <a href="https://cran.r-project.org/web/packages/mgm/index.html"><em>mgm</em></a>-package. We looked into whether the OR depends on the specific values at which we fix the other variables (it doesn’t). Proving this fact revealed a simpler formula for calculating ORs for the special case of having a binary response variable. Finally, we considered predicted probabilities as an alternative for ORs.</p>
<hr />
<p>I would like to thank <a href="https://ryanoisin.github.io/">Oisín Ryan</a> for his feedback on this blog post.</p>
Tue, 25 Aug 2020 11:00:00 +0000
http://jmbh.github.io//ORs-in-MGMs/
Estimating Group Differences in Network Models using Moderation<p>Researchers are often interested in comparing statistical network models across groups. For example, <a href="https://www.nature.com/articles/s41598-018-34130-2">Fritz and colleagues</a> compared the relations between resilience factors in a network model of adolescents who experienced childhood adversity with those who did not. Several methods are already available for such comparisons. The <a href="https://cran.r-project.org/web/packages/NetworkComparisonTest/index.html">Network Comparison Test (NCT)</a> performs a permutation test to decide for each parameter whether it differs across two groups. The <a href="https://cran.r-project.org/web/packages/EstimateGroupNetwork/index.html">Fused Graphical Lasso (FGL)</a> uses a fused lasso penalty to estimate group differences in Gaussian Graphical Models (GGMs). And the <em><a href="https://cran.r-project.org/web/packages/BGGM/index.html">BGGM</a></em> package allows one to test and estimate differences in GGMs in a Bayesian setting. In a <a href="https://psyarxiv.com/926pv">recent preprint</a>, I proposed an additional method based on moderation analysis, which has the advantage that it can be applied to essentially any network model and at the same time allows comparisons across more than two groups.</p>
<p>In this blog post I illustrate how to estimate group differences in network models via moderation analysis using the R-package <em><a href="https://cran.r-project.org/web/packages/mgm/index.html">mgm</a></em>. I show how to estimate a moderated Mixed Graphical Model (MGM) in which the grouping variable serves as a moderator; how to analyze the moderated MGM by conditioning on the moderator; how to visualize the conditional MGMs; and how to assess the stability of group differences.</p>
<h3 id="the-data">The Data</h3>
<p>The data are automatically loaded with the R-package <em><a href="https://cran.r-project.org/web/packages/mgm/index.html">mgm</a></em> and can be accessed in the object <code class="language-plaintext highlighter-rouge">dataGD</code>. The data set contains 3000 observations of seven variables $X_1, \dots, X_7$, where $X_7 \in \{1, 2, 3\}$ indicates group membership. We have 1000 observations from each group. Note that these variables are of mixed type: $X_1, X_2, X_4,$ and $X_6$ are continuous and the remaining variables are categorical.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">mgm</span><span class="p">)</span><span class="w"> </span><span class="c1"># version 1.2-10</span><span class="w">
</span><span class="nf">dim</span><span class="p">(</span><span class="n">dataGD</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 3000 7</code></pre></figure>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">head</span><span class="p">(</span><span class="n">dataGD</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## x1 x2 x3 x4 x5 x6 x7
## [1,] -0.136 -0.009 0 0.399 0 -0.137 1
## [2,] -0.041 0.803 1 2.475 2 0.181 1
## [3,] 1.011 0.254 1 -0.194 0 1.329 1
## [4,] -0.158 -1.022 1 -1.587 2 -1.377 1
## [5,] -2.157 1.291 1 0.990 0 -0.018 1
## [6,] 0.499 -0.757 1 -0.941 2 1.099 1</code></pre></figure>
<p>In the Appendix of <a href="https://psyarxiv.com/926pv">my preprint</a>, I describe how these data were generated.</p>
<h3 id="fitting-a-moderated-mgm">Fitting a Moderated MGM</h3>
<p>Recall that a standard MGM describes the pairwise relationships between variables of mixed types. In order to detect group differences in the pairwise relationships between variables $X_1, X_2, \dots, X_6$ we fit a moderated MGM with the grouping variable $X_7$ being specified as a categorical moderator:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">mgm_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">mgm</span><span class="p">(</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">dataGD</span><span class="p">,</span><span class="w">
</span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="s2">"c"</span><span class="p">,</span><span class="w"> </span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="s2">"c"</span><span class="p">,</span><span class="w"> </span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="s2">"c"</span><span class="p">),</span><span class="w">
</span><span class="n">level</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">3</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">3</span><span class="p">),</span><span class="w">
</span><span class="n">moderators</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">7</span><span class="p">,</span><span class="w">
</span><span class="n">lambdaSel</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"EBIC"</span><span class="p">,</span><span class="w">
</span><span class="n">lambdaGam</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">0.25</span><span class="p">,</span><span class="w">
</span><span class="n">ruleReg</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"AND"</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Note that the sign of parameter estimates is stored separately; see ?mgm</code></pre></figure>
<p>The argument <code class="language-plaintext highlighter-rouge">type</code> indicates the type of variable (“g” for continuous-Gaussian, and “c” for categorical) and <code class="language-plaintext highlighter-rouge">level</code> indicates the number of categories of each variable, which is set to 1 by default for continuous variables. The <code class="language-plaintext highlighter-rouge">moderators</code> argument specifies that the variable in the $7^{th}$ column is included as a moderator. Since we specified via the <code class="language-plaintext highlighter-rouge">type</code> argument that this variable is categorical, it will be treated as a categorical moderator. The remaining arguments specify that the regularization parameters in the $\ell_1$-regularized nodewise regression algorithm used by <em>mgm</em> are selected with the EBIC with a hyperparameter of $\gamma=0.25$ and that estimates are combined across nodewise regressions using the AND-rule.</p>
<h3 id="conditioning-on-the-moderator">Conditioning on the Moderator</h3>
<p>In order to inspect the pairwise MGMs of the three groups, we need to condition the moderated MGM on the values of the moderator variable, which represent the three groups. This can be done with the function <code class="language-plaintext highlighter-rouge">condition()</code>, which takes the moderated MGM object and a list specifying the values of the variables on which the model should be conditioned. Here we only have a single moderator variable ($X_7$), so we condition on each of its values $\{1, 2, 3\}$ and save the three conditional pairwise MGMs in the list object <code class="language-plaintext highlighter-rouge">l_mgm_cond</code>:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">l_mgm_cond</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">list</span><span class="p">()</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">g</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">3</span><span class="p">)</span><span class="w"> </span><span class="n">l_mgm_cond</span><span class="p">[[</span><span class="n">g</span><span class="p">]]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">condition</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_obj</span><span class="p">,</span><span class="w">
</span><span class="n">values</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">list</span><span class="p">(</span><span class="s2">"7"</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">g</span><span class="p">))</span></code></pre></figure>
<h3 id="visualizing-conditioned-mgms">Visualizing conditioned MGMs</h3>
<p>We can now inspect the pairwise MGM in each group, similarly to when fitting a standard pairwise MGM (for details see <a href="https://www.jstatsoft.org/article/view/v093i08">the mgm paper</a> or the other posts on my blog). Here we choose to visualize the strength of dependencies in the three MGMs as a network using the <a href="https://cran.r-project.org/web/packages/qgraph/index.html">qgraph</a> package. We provide the three conditional <em>mgm</em>-objects as input and set the <code class="language-plaintext highlighter-rouge">maximum</code> argument in <code class="language-plaintext highlighter-rouge">qgraph()</code> for each visualization to the maximum parameter across all groups to ensure that the visualizations are comparable.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">qgraph</span><span class="p">)</span><span class="w">
</span><span class="n">v_max</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="kc">NA</span><span class="p">,</span><span class="w"> </span><span class="m">3</span><span class="p">)</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">g</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">3</span><span class="p">)</span><span class="w"> </span><span class="n">v_max</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">max</span><span class="p">(</span><span class="n">l_mgm_cond</span><span class="p">[[</span><span class="n">g</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">)</span><span class="w">
</span><span class="n">par</span><span class="p">(</span><span class="n">mfrow</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">3</span><span class="p">))</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">g</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">3</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">qgraph</span><span class="p">(</span><span class="n">input</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">l_mgm_cond</span><span class="p">[[</span><span class="n">g</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">,</span><span class="w">
</span><span class="n">edge.color</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">l_mgm_cond</span><span class="p">[[</span><span class="n">g</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">edgecolor_cb</span><span class="p">,</span><span class="w">
</span><span class="n">lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">l_mgm_cond</span><span class="p">[[</span><span class="n">g</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">edge_lty</span><span class="p">,</span><span class="w">
</span><span class="n">layout</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"circle"</span><span class="p">,</span><span class="w"> </span><span class="n">mar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="m">3</span><span class="p">,</span><span class="w"> </span><span class="m">5</span><span class="p">,</span><span class="w"> </span><span class="m">3</span><span class="p">),</span><span class="w">
</span><span class="n">maximum</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">max</span><span class="p">(</span><span class="n">v_max</span><span class="p">),</span><span class="w"> </span><span class="n">vsize</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">16</span><span class="p">,</span><span class="w"> </span><span class="n">esize</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">23</span><span class="p">,</span><span class="w">
</span><span class="n">edge.labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">TRUE</span><span class="p">,</span><span class="w"> </span><span class="n">edge.label.cex</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">3</span><span class="p">)</span><span class="w">
</span><span class="n">mtext</span><span class="p">(</span><span class="n">text</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">paste0</span><span class="p">(</span><span class="s2">"Group "</span><span class="p">,</span><span class="w"> </span><span class="n">g</span><span class="p">),</span><span class="w"> </span><span class="n">line</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2.5</span><span class="p">)</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
<p><img src="/assets/img/2020-06-22-Groupdifferences-via-Moderation.Rmd/unnamed-chunk-4-1.png" title="plot of chunk unnamed-chunk-4" alt="plot of chunk unnamed-chunk-4" style="display: block; margin: auto;" /></p>
<p>The edges represent conditional dependence relationships and their width is proportional to their strength. The blue (red) edges indicate positive (negative) linear relationships. The grey edges indicate relationships involving categorical variables, for which no sign is defined (for details see <code class="language-plaintext highlighter-rouge">?mgm</code> or <a href="https://www.jstatsoft.org/article/view/v093i08">the mgm paper</a>). We see that there are conditional dependencies of equal strength between variables $X_1 - X_3$, $X_3 - X_4$ and $X_4 - X_6$ in all three groups. However, the linear dependency between $X_1 - X_2$ differs across groups: it is negative in Group 1, positive in Group 2 and almost absent in Group 3. In addition, there is no dependency between $X_3 - X_5$ in Group 1, but there is a dependency in Groups 2 and 3. Note that the comparable strength in dependencies between those variables in Groups 2 and 3 does not imply that the nature of these dependencies is the same. As with pairwise MGMs, it is possible to inspect the (non-aggregated) parameter estimates of these interactions with the function <code class="language-plaintext highlighter-rouge">showInteraction()</code>.</p>
<h3 id="assessing-the-stability-of-estimates">Assessing the Stability of Estimates</h3>
<p>Similar to pairwise MGMs, we can use the <code class="language-plaintext highlighter-rouge">resample()</code> function to assess the stability of all estimated parameters with bootstrapping. Here we choose only 50 bootstrap samples to keep the running time manageable for this tutorial. In practice, the number of bootstrap samples should be on the order of 1,000 or more.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">res_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">resample</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_obj</span><span class="p">,</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">dataGD</span><span class="p">,</span><span class="w">
</span><span class="n">nB</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">50</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<p>Finally, we can visualize the summaries of the bootstrapped sampling distributions using the function <code class="language-plaintext highlighter-rouge">plotRes()</code>. The location of the circles indicates the mean of the sampling distribution, the horizontal lines indicate the 95% quantile interval, and the number in each circle indicates the proportion of estimates that were nonzero.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">plotRes</span><span class="p">(</span><span class="n">res_obj</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2020-06-22-Groupdifferences-via-Moderation.Rmd/unnamed-chunk-6-1.png" title="plot of chunk unnamed-chunk-6" alt="plot of chunk unnamed-chunk-6" style="display: block; margin: auto;" />
We see that for these simulated data the moderation effects (group differences) are extremely stable. However, note that in observational data moderation effects are typically much smaller and less stable.</p>
<h3 id="summary">Summary</h3>
<p>In this blog post, I have shown how to use the <em><a href="https://cran.r-project.org/web/packages/mgm/index.html">mgm</a></em>-package to estimate a moderated MGM in which the grouping variable serves as a moderator; how to analyze the moderated MGM by conditioning on the moderator; how to visualize the conditional MGMs; and how to assess the stability of group differences. For more details on this method and for a simulation study comparing its performance to that of existing methods, have a look at <a href="https://psyarxiv.com/926pv">this preprint</a>.</p>
<hr />
<p>I would like to thank <a href="https://fabiandablander.com/">Fabian Dablander</a> for his feedback on this blog post.</p>
Wed, 24 Jun 2020 11:00:00 +0000
http://jmbh.github.io//Groupdifferences-via-Moderation/
Estimating Time-varying Vector Autoregressive (VAR) Models<p>Models for individual subjects are becoming increasingly popular in psychological research. One reason is that it is difficult to make inferences from between-person data to within-person processes. Another is that time series obtained from individuals are becoming increasingly available due to the ubiquity of mobile devices. The central goal of so-called idiographic modeling is to tap into the within-person dynamics underlying psychological phenomena. With this goal in mind, many researchers have set out to analyze the multivariate dependencies in within-person time series. The simplest and most popular model for such dependencies is the first-order Vector Autoregressive (VAR) model, in which each variable at the current time point is predicted by (a linear function of) all variables (including itself) at the previous time point.</p>
<p>A key assumption of the standard VAR model is that its parameters do not change over time. However, often one is interested in exactly such changes. For example, one could be interested in relating changes in parameters to other variables, such as changes in a person’s environment. This could be a new job, the seasons, or the impact of a global pandemic. In less exploratory designs, one could examine what impact certain interventions (e.g., medication or therapy) have on the interactions between symptoms.</p>
<p>In this blog post I give a very brief overview of how to estimate a time-varying VAR model with the kernel smoothing approach, which we discussed in <a href="https://www.tandfonline.com/doi/abs/10.1080/00273171.2020.1743630">this recent tutorial paper</a>. This method rests on the assumption that parameters change smoothly over time, which means that parameters cannot “jump” from one value to another. I then focus on how to estimate and analyze this type of time-varying VAR model with the R-package <em><a href="https://cran.r-project.org/web/packages/mgm/index.html">mgm</a></em>.</p>
<h3 id="estimating-time-varying-models-via-kernel-smoothing">Estimating time-varying Models via Kernel Smoothing</h3>
<p>The core idea of the kernel smoothing approach is the following: We choose equally spaced time points across the duration of the whole time series and then estimate “local” models at each of those time points. All local models taken together then constitute the time-varying model. With “local” models we mean that these models are largely based on time points that are close to the time point at hand. This is achieved by weighting observations accordingly during parameter estimation. This idea is illustrated for a toy data set in the following Figure:</p>
<p><img src="http://jmbh.github.io/figs/tvvar/tvvar_illustration.png" alt="center" /></p>
<p>Here we only illustrate estimating the local model at $t=3$. The left panel shows the 10 time points of this time series. The column $w_{t_e=3}$ in red indicates a possible set of weights we could use to estimate the local model at $t=3$: the data at time points close to $t=3$ get the highest weight, and time points further away get increasingly small weights. The function that defines these weights is shown in the right panel. The blue column in the left panel, and the corresponding blue function on the right, indicate another possible weighting. With this weighting, we combine fewer observations, all of them close in time. This allows us to detect more “time-varyingness” in the parameters, because we smooth over fewer time points. On the other hand, we use less data, which makes our estimates less reliable. It is therefore important to choose a weighting function that strikes a good balance between sensitivity to “time-varyingness” and stable estimates. In the method presented here we use a Gaussian weighting function (also called a <em>kernel</em>), which is defined by its standard deviation (or bandwidth). We return to how to select a good bandwidth parameter below.</p>
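<p>To make the weighting concrete, such Gaussian kernel weights can be computed in a few lines of base R. This is a minimal sketch, not part of the <em>mgm</em> interface; the time points, estimation point, and bandwidth values below are made up for illustration:</p>

```r
# Gaussian kernel weights for one local model, on a time axis normalized to [0, 1]
timepoints <- seq(0, 1, length.out = 10)  # 10 equally spaced measurements
t_e <- 0.3                                # estimation point of the local model
bandwidth <- 0.2                          # sd of the Gaussian kernel

w <- dnorm(timepoints, mean = t_e, sd = bandwidth)
w <- w / max(w)  # rescale so the observation closest to t_e gets weight 1

round(w, 2)
```

<p>A smaller bandwidth concentrates the weights around $t_e$ (more sensitivity to change, but less data per local model); a larger bandwidth spreads them out.</p>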
<p>In this blog post I focus on how to estimate time-varying models with the R-package <em>mgm</em>. For a more detailed explanation of the method see our recent <a href="https://www.tandfonline.com/doi/full/10.1080/00273171.2020.1743630">tutorial paper</a>.</p>
<h3 id="loading--inspecting-the-data">Loading & Inspecting the Data</h3>
<p>To illustrate estimating time-varying VAR models, I use an ESM time series of 12 mood related variables that are measured up to 10 times a day for 238 consecutive days (for details about this dataset see <a href="http://openpsychologydata.metajnl.com/articles/10.5334/jopd.29/">Kossakowski et al. (2017)</a>). The questions are “I feel relaxed”, “I feel down”, “I feel irritated”, “I feel satisfied”, “I feel lonely”, “I feel anxious”, “I feel enthusiastic”, “I feel suspicious”, “I feel cheerful”, “I feel guilty”, “I feel indecisive”, and “I feel strong”. Each question is answered on a 7-point Likert scale ranging from “not” to “very”.</p>
<p>The data set is loaded with the <a href="https://cran.r-project.org/web/packages/mgm/index.html"><em>mgm</em>-package</a>. We first subset the 12 mood variables:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">mgm</span><span class="p">)</span><span class="w">
</span><span class="n">mood_data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">as.matrix</span><span class="p">(</span><span class="n">symptom_data</span><span class="o">$</span><span class="n">data</span><span class="p">[,</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">12</span><span class="p">])</span><span class="w">
</span><span class="n">mood_labels</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">symptom_data</span><span class="o">$</span><span class="n">colnames</span><span class="p">[</span><span class="m">1</span><span class="o">:</span><span class="m">12</span><span class="p">]</span><span class="w">
</span><span class="n">colnames</span><span class="p">(</span><span class="n">mood_data</span><span class="p">)</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">mood_labels</span><span class="w">
</span><span class="n">time_data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">symptom_data</span><span class="o">$</span><span class="n">data_time</span></code></pre></figure>
<p>We see that the data set has 1476 observations:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="nf">dim</span><span class="p">(</span><span class="n">mood_data</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 1476 12</code></pre></figure>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">head</span><span class="p">(</span><span class="n">mood_data</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Relaxed Down Irritated Satisfied Lonely Anxious Enthusiastic Suspicious
## [1,] 5 -1 1 5 -1 -1 4 1
## [2,] 4 0 3 3 0 0 3 1
## [3,] 4 0 2 3 0 0 4 1
## [4,] 4 0 1 4 0 0 4 1
## [5,] 4 0 2 4 0 0 4 1
## [6,] 5 0 1 4 0 0 3 1
## Cheerful Guilty Doubt Strong
## [1,] 5 -1 1 5
## [2,] 4 0 1 4
## [3,] 4 0 2 4
## [4,] 4 1 1 4
## [5,] 4 1 2 3
## [6,] 3 1 2 3</code></pre></figure>
<p><code class="language-plaintext highlighter-rouge">time_data</code> contains temporal information about each measurement. We will make use of the day on which the measurement occurred (<code class="language-plaintext highlighter-rouge">dayno</code>), the measurement prompt (<code class="language-plaintext highlighter-rouge">beepno</code>) and the overall time stamp (<code class="language-plaintext highlighter-rouge">time_norm</code>).</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">head</span><span class="p">(</span><span class="n">time_data</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## date dayno beepno beeptime resptime_s resptime_e time_norm
## 1 13/08/12 226 1 08:58 08:58:56 09:00:15 0.000000000
## 2 14/08/12 227 5 14:32 14:32:09 14:33:25 0.005164874
## 3 14/08/12 227 6 16:17 16:17:13 16:23:16 0.005470574
## 4 14/08/12 227 8 18:04 18:04:10 18:06:29 0.005782097
## 5 14/08/12 227 9 20:57 20:58:23 21:00:18 0.006285774
## 6 14/08/12 227 10 21:54 21:54:15 21:56:05 0.006451726</code></pre></figure>
<h3 id="selecting-the-optimal-bandwidth">Selecting the optimal Bandwidth</h3>
<p>One way of selecting a good bandwidth parameter is to fit time-varying models with different candidate bandwidth parameters on a training data set and evaluate their prediction error on a test data set. The function <code class="language-plaintext highlighter-rouge">bwSelect()</code> implements such a bandwidth selection scheme. We do not show its specification here, because it takes the same input arguments as <code class="language-plaintext highlighter-rouge">tvmvar()</code>, which we describe in a moment, plus a candidate sequence of bandwidth values and some specifications for how to split the data into training and test sets. In addition, data-driven bandwidth selection can take a considerable amount of time to run, which would not allow you to run the code while reading the blog post. For this tutorial, we therefore simply fix the bandwidth to the value that was returned by <code class="language-plaintext highlighter-rouge">bwSelect()</code>. You can find the code to perform bandwidth selection with <code class="language-plaintext highlighter-rouge">bwSelect()</code> on the present data set <a href="https://github.com/jmbh/tvvar_paper/blob/master/Tutorials/tutorial_mgm.R">here</a>.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">bandwidth</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">.34</span></code></pre></figure>
<h3 id="estimating-time-varying-var-models">Estimating time-varying VAR Models</h3>
<p>We can now specify the estimation of the time-varying VAR model. We provide the data as input and specify the type of variables and how many categories they have with the <code class="language-plaintext highlighter-rouge">type</code> and <code class="language-plaintext highlighter-rouge">level</code> arguments. In our example data set all variables are continuous, so we set <code class="language-plaintext highlighter-rouge">type = rep("g", 12)</code> for continuous-Gaussian, and set the number of categories to 1 by convention. We choose to select the regularization parameters with cross-validation via <code class="language-plaintext highlighter-rouge">lambdaSel = "CV"</code>, and we specify that the VAR model should include a single lag with <code class="language-plaintext highlighter-rouge">lags = 1</code>. The arguments <code class="language-plaintext highlighter-rouge">beepvar</code> and <code class="language-plaintext highlighter-rouge">dayvar</code> provide, for each measurement, the day and the notification number within that day, which is necessary to specify the VAR design matrix. In addition, we provide the time stamps of all measurements with <code class="language-plaintext highlighter-rouge">timepoints = time_data$time_norm</code> to account for missing measurements. Note, however, that we still assume a constant lag size of 1. The time stamps are only used to ensure that the weighting indeed gives the highest weight to those time points that are closest to the current estimation point (for details see Section 2.5 in <a href="https://www.jstatsoft.org/article/view/v093i08">this paper</a>). So far, the specification is the same as for the <code class="language-plaintext highlighter-rouge">mvar()</code> function, which fits stationary mixed VAR models.</p>
<p>For the time-varying model, we need to specify two additional arguments. First, with <code class="language-plaintext highlighter-rouge">estpoints = seq(0, 1, length = 20)</code> we specify that we would like to estimate 20 local models across the duration of the entire time series (which is normalized to [0,1]). The number of estimation points can be chosen arbitrarily large, but at some point adding more estimation points is not worth the additional computational cost, because subsequent local models are essentially identical. Finally, we specify the bandwidth with the <code class="language-plaintext highlighter-rouge">bandwidth</code> argument.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="c1"># Estimate Model on Full Dataset</span><span class="w">
</span><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">tvvar_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">tvmvar</span><span class="p">(</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mood_data</span><span class="p">,</span><span class="w">
</span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="m">12</span><span class="p">),</span><span class="w">
</span><span class="n">level</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">12</span><span class="p">),</span><span class="w">
</span><span class="n">lambdaSel</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"CV"</span><span class="p">,</span><span class="w">
</span><span class="n">lags</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w">
</span><span class="n">beepvar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">time_data</span><span class="o">$</span><span class="n">beepno</span><span class="p">,</span><span class="w">
</span><span class="n">dayvar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">time_data</span><span class="o">$</span><span class="n">dayno</span><span class="p">,</span><span class="w">
</span><span class="n">timepoints</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">time_data</span><span class="o">$</span><span class="n">time_norm</span><span class="p">,</span><span class="w">
</span><span class="n">estpoints</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">seq</span><span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="n">length</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">20</span><span class="p">),</span><span class="w">
</span><span class="n">bandwidth</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">bandwidth</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Note that the sign of parameter estimates is stored separately; see ?tvmvar</code></pre></figure>
<p>We can print the output object in the console</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="c1"># Check on how much data was used</span><span class="w">
</span><span class="n">tvvar_obj</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## mgm fit-object
##
## Model class: Time-varying mixed Vector Autoregressive (tv-mVAR) model
## Lags: 1
## Rows included in VAR design matrix: 876 / 1476 ( 59.35 %)
## Nodes: 12
## Estimation points: 20</code></pre></figure>
<p>which provides a summary of the model and also shows how many rows were included in the VAR design matrix (876) compared to the number of time points in the data set (1476). The former number is lower because a VAR(1) model can only be estimated for a given time point if the time point one lag earlier is also available. This is not the case for the first measurement of a given day or if there are missing responses during the day.</p>
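<p>The following toy sketch in base R (with hypothetical day and beep values, not the actual data) illustrates this row-selection logic: a measurement can enter the design matrix as a response only if the directly preceding beep on the same day was observed:</p>

```r
dayno  <- c(1, 1, 1, 2, 2, 2)
beepno <- c(1, 2, 4, 1, 2, 3)  # beep 3 on day 1 is missing

# A row is usable if the previous measurement is exactly one beep earlier
# on the same day; the first measurement has no predecessor.
n <- length(dayno)
valid <- c(FALSE, dayno[-1] == dayno[-n] & beepno[-1] == beepno[-n] + 1)

sum(valid)  # 3 of the 6 measurements can be predicted from their lag
```

<p>The missing beep on day 1 and the day boundary each cost one usable row, which is why the number of design-matrix rows is smaller than the number of measurements.</p>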
<h3 id="computing-time-varying-prediction-errors">Computing Time-varying Prediction Errors</h3>
<p>Similarly to stationary VAR models, we can compute prediction errors. This can be done with the <code class="language-plaintext highlighter-rouge">predict()</code> function, which takes the model object, the data, and two variables indicating the day number and notification number. Providing the data and the notification variables separately from the model object makes it possible to compute prediction errors for new samples.</p>
<p>The argument <code class="language-plaintext highlighter-rouge">errorCon = c("R2", "RMSE")</code> specifies that the proportion of explained variance ($R^2$) and the Root Mean Squared Error (RMSE) should be returned as prediction errors. The final argument <code class="language-plaintext highlighter-rouge">tvMethod</code> specifies how time-varying prediction errors should be calculated. The option <code class="language-plaintext highlighter-rouge">tvMethod = "closestModel"</code> makes predictions for a time point using the local model that is closest to it. The option chosen here, <code class="language-plaintext highlighter-rouge">tvMethod = "weighted"</code>, provides a weighted average of the predictions of all local models, weighted using the weighting function centered on the location of the time point at hand. Typically, both methods give very similar results.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">pred_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">predict</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">tvvar_obj</span><span class="p">,</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mood_data</span><span class="p">,</span><span class="w">
</span><span class="n">beepvar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">time_data</span><span class="o">$</span><span class="n">beepno</span><span class="p">,</span><span class="w">
</span><span class="n">dayvar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">time_data</span><span class="o">$</span><span class="n">dayno</span><span class="p">,</span><span class="w">
</span><span class="n">errorCon</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"R2"</span><span class="p">,</span><span class="w"> </span><span class="s2">"RMSE"</span><span class="p">),</span><span class="w">
</span><span class="n">tvMethod</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"weighted"</span><span class="p">)</span></code></pre></figure>
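<p>The difference between the two <code class="language-plaintext highlighter-rouge">tvMethod</code> options can be sketched with made-up numbers in base R; the estimation points, local predictions, and bandwidth below are purely illustrative and not taken from the fitted model:</p>

```r
estpoints   <- seq(0, 1, length.out = 5)     # locations of 5 local models
local_preds <- c(2.0, 2.2, 2.5, 2.1, 1.9)    # their predictions for one case
t0 <- 0.4                                    # time point of the case to predict
bandwidth <- 0.2

# "closestModel": use the prediction of the single nearest local model
pred_closest <- local_preds[which.min(abs(estpoints - t0))]

# "weighted": kernel-weighted average of all local models' predictions
w <- dnorm(estpoints, mean = t0, sd = bandwidth)
pred_weighted <- sum(w * local_preds) / sum(w)
```

<p>Here the closest local model (located at 0.5) predicts 2.5, while the weighted average is pulled toward the predictions of the neighboring models; with smooth parameter curves, the two options give very similar results.</p>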
<p>The main outputs are the following two objects:
<code class="language-plaintext highlighter-rouge">pred_obj$tverrors</code> is a list that includes the prediction errors for each estimation point / local model; <code class="language-plaintext highlighter-rouge">pred_obj$errors</code> contains the average errors across estimation points.</p>
<h3 id="visualizing-parts-of-the-model">Visualizing parts of the Model</h3>
<p>The time-varying VAR(1) model consists of $(p + p^2) \times E$ parameters, where $p$ is the number of variables and $E$ is the number of estimation points; for $p = 12$ and $E = 20$, this amounts to $(12 + 144) \times 20 = 3120$ parameters. Visualizing all parameters at once is therefore challenging. Instead, one can pick the parameters that are most relevant for the research question at hand. Here, we choose two different visualizations. First, we use the <code class="language-plaintext highlighter-rouge">qgraph()</code> function from the R-package <em><a href="https://cran.r-project.org/web/packages/qgraph/index.html">qgraph</a></em> to inspect the VAR interaction parameters at estimation points 1, 10, and 20:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">qgraph</span><span class="p">)</span><span class="w">
</span><span class="n">par</span><span class="p">(</span><span class="n">mfrow</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">3</span><span class="p">))</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">tp</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">10</span><span class="p">,</span><span class="m">20</span><span class="p">))</span><span class="w"> </span><span class="n">qgraph</span><span class="p">(</span><span class="n">t</span><span class="p">(</span><span class="n">tvvar_obj</span><span class="o">$</span><span class="n">wadj</span><span class="p">[,</span><span class="w"> </span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="n">tp</span><span class="p">]),</span><span class="w">
</span><span class="n">layout</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"circle"</span><span class="p">,</span><span class="w">
</span><span class="n">edge.color</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">t</span><span class="p">(</span><span class="n">tvvar_obj</span><span class="o">$</span><span class="n">edgecolor</span><span class="p">[,</span><span class="w"> </span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="n">tp</span><span class="p">]),</span><span class="w">
</span><span class="n">labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mood_labels</span><span class="p">,</span><span class="w">
</span><span class="n">mar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">5</span><span class="p">,</span><span class="w"> </span><span class="m">4</span><span class="p">),</span><span class="w">
</span><span class="n">vsize</span><span class="o">=</span><span class="m">14</span><span class="p">,</span><span class="w"> </span><span class="n">esize</span><span class="o">=</span><span class="m">15</span><span class="p">,</span><span class="w"> </span><span class="n">asize</span><span class="o">=</span><span class="m">13</span><span class="p">,</span><span class="w">
</span><span class="n">maximum</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">.5</span><span class="p">,</span><span class="w">
</span><span class="n">pie</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">pred_obj</span><span class="o">$</span><span class="n">tverrors</span><span class="p">[[</span><span class="n">tp</span><span class="p">]][,</span><span class="w"> </span><span class="m">3</span><span class="p">],</span><span class="w">
</span><span class="n">title</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">paste0</span><span class="p">(</span><span class="s2">"Estimation point = "</span><span class="p">,</span><span class="w"> </span><span class="n">tp</span><span class="p">),</span><span class="w">
</span><span class="n">title.cex</span><span class="o">=</span><span class="m">1.2</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2020-06-02-Estimating-time-varying-VAR-Models.Rmd/unnamed-chunk-8-1.png" title="plot of chunk unnamed-chunk-8" alt="plot of chunk unnamed-chunk-8" style="display: block; margin: auto;" /></p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">dev.off</span><span class="p">()</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## null device
## 1</code></pre></figure>
<p>We see that some parameters in the VAR models vary considerably over time. For example, the autocorrelation effect of Relaxed seems to decrease over time, while the positive effect of Strong on Satisfied and the negative effect of Satisfied on Guilty only appear at estimation point 20.</p>
<p>We can zoom in on these individual parameters by plotting them as a function of time:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="c1"># Obtain parameter estimates with sign</span><span class="w">
</span><span class="n">par_ests</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">tvvar_obj</span><span class="o">$</span><span class="n">wadj</span><span class="p">[,</span><span class="w"> </span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="p">]</span><span class="w">
</span><span class="n">par_ests</span><span class="p">[</span><span class="n">tvvar_obj</span><span class="o">$</span><span class="n">edgecolor</span><span class="p">[,</span><span class="w"> </span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="p">]</span><span class="o">==</span><span class="s2">"red"</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">par_ests</span><span class="p">[</span><span class="n">tvvar_obj</span><span class="o">$</span><span class="n">edgecolor</span><span class="p">[,</span><span class="w"> </span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="p">]</span><span class="o">==</span><span class="s2">"red"</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="m">-1</span><span class="w">
</span><span class="c1"># Select three parameters to plot</span><span class="w">
</span><span class="n">m_par_display</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">matrix</span><span class="p">(</span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w">
</span><span class="m">4</span><span class="p">,</span><span class="w"> </span><span class="m">12</span><span class="p">,</span><span class="w">
</span><span class="m">10</span><span class="p">,</span><span class="w"> </span><span class="m">4</span><span class="p">),</span><span class="w"> </span><span class="n">ncol</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="n">byrow</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nb">T</span><span class="p">)</span><span class="w">
</span><span class="c1"># Plotting</span><span class="w">
</span><span class="n">plot.new</span><span class="p">()</span><span class="w">
</span><span class="n">par</span><span class="p">(</span><span class="n">mar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">4</span><span class="p">,</span><span class="w"> </span><span class="m">4</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">))</span><span class="w">
</span><span class="n">plot.window</span><span class="p">(</span><span class="n">xlim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">20</span><span class="p">),</span><span class="w"> </span><span class="n">ylim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">-.25</span><span class="p">,</span><span class="w"> </span><span class="m">.55</span><span class="p">))</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">5</span><span class="p">,</span><span class="w"> </span><span class="m">10</span><span class="p">,</span><span class="w"> </span><span class="m">15</span><span class="p">,</span><span class="w"> </span><span class="m">20</span><span class="p">),</span><span class="w"> </span><span class="n">labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nb">T</span><span class="p">)</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">-.25</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">.25</span><span class="p">,</span><span class="w"> </span><span class="m">.5</span><span class="p">),</span><span class="w"> </span><span class="n">las</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">abline</span><span class="p">(</span><span class="n">h</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"grey"</span><span class="p">,</span><span class="w"> </span><span class="n">lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">xlab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Estimation points"</span><span class="p">,</span><span class="w"> </span><span class="n">cex.lab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1.2</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">ylab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Parameter estimate"</span><span class="p">,</span><span class="w"> </span><span class="n">cex.lab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1.2</span><span class="p">)</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">i</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="n">nrow</span><span class="p">(</span><span class="n">m_par_display</span><span class="p">))</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">par_row</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">m_par_display</span><span class="p">[</span><span class="n">i</span><span class="p">,</span><span class="w"> </span><span class="p">]</span><span class="w">
</span><span class="n">P1_pointest</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">par_ests</span><span class="p">[</span><span class="n">par_row</span><span class="p">[</span><span class="m">1</span><span class="p">],</span><span class="w"> </span><span class="n">par_row</span><span class="p">[</span><span class="m">2</span><span class="p">],</span><span class="w"> </span><span class="p">]</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="m">1</span><span class="o">:</span><span class="m">20</span><span class="p">,</span><span class="w"> </span><span class="n">P1_pointest</span><span class="p">,</span><span class="w"> </span><span class="n">lwd</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="n">lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">i</span><span class="p">)</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="n">legend_labels</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="nf">expression</span><span class="p">(</span><span class="s2">"Relaxed"</span><span class="p">[</span><span class="s2">"t-1"</span><span class="p">]</span><span class="w"> </span><span class="o">%->%</span><span class="w"> </span><span class="s2">"Relaxed"</span><span class="p">[</span><span class="s2">"t"</span><span class="p">]),</span><span class="w">
</span><span class="nf">expression</span><span class="p">(</span><span class="s2">"Strong"</span><span class="p">[</span><span class="s2">"t-1"</span><span class="p">]</span><span class="w"> </span><span class="o">%->%</span><span class="w"> </span><span class="s2">"Satisfied"</span><span class="p">[</span><span class="s2">"t"</span><span class="p">]),</span><span class="w">
</span><span class="nf">expression</span><span class="p">(</span><span class="s2">"Satisfied"</span><span class="p">[</span><span class="s2">"t-1"</span><span class="p">]</span><span class="w"> </span><span class="o">%->%</span><span class="w"> </span><span class="s2">"Guilty"</span><span class="p">[</span><span class="s2">"t"</span><span class="p">]))</span><span class="w">
</span><span class="n">legend</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">.49</span><span class="p">,</span><span class="w">
</span><span class="n">legend_labels</span><span class="p">,</span><span class="w">
</span><span class="n">lwd</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="n">bty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"n"</span><span class="p">,</span><span class="w"> </span><span class="n">cex</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="n">horiz</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nb">T</span><span class="p">,</span><span class="w"> </span><span class="n">lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">3</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2020-06-02-Estimating-time-varying-VAR-Models.Rmd/unnamed-chunk-9-1.png" title="plot of chunk unnamed-chunk-9" alt="plot of chunk unnamed-chunk-9" style="display: block; margin: auto;" /></p>
<p>We see that the effect of Relaxed on itself on the next time point is relatively strong at the beginning of the time series, but then decreases towards zero and remains zero from around estimation point 13. The cross-lagged effect of Strong on Satisfied on the next time point is equal to zero until around estimation point 9 but then seems to increase monotonically. Finally, the cross-lagged effect of Satisfied on Guilty is also equal to zero until around estimation point 13 and then decreases monotonically.</p>
<h3 id="stability-of-estimates">Stability of Estimates</h3>
<p>As with stationary models, one can assess the stability of time-varying parameter estimates using bootstrapped sampling distributions. The <em>mgm</em> package allows one to do this with the <code class="language-plaintext highlighter-rouge">resample()</code> function. <a href="https://github.com/jmbh/tvvar_paper/blob/master/Tutorials/tutorial_mgm.R">Here</a> you can find code showing how to do this for the example in this tutorial.</p>
<h3 id="time-varying-or-not">Time-varying or not?</h3>
<p>Clearly, “time-varyingness” is a continuum ranging from stationary to extremely time-varying. However, in some cases it might be necessary to decide whether the parameters of a VAR model are reliably time-varying. To reach such a decision, one can use a hypothesis test with the null hypothesis that the model is not time-varying. Here is one way to perform such a test: one begins by fitting a stationary VAR model to the data and then repeatedly simulates data from this estimated model. For each of these simulated time series, one computes the pooled prediction error of the time-varying model. The distribution of these prediction errors serves as the sampling distribution under the null hypothesis. One then computes the pooled prediction error of the time-varying VAR model on the empirical data and uses it as a test statistic. This test is explained in more detail <a href="https://www.tandfonline.com/doi/abs/10.1080/00273171.2020.1743630">here</a>, and the code to implement it for the data set used in this tutorial can be found <a href="https://github.com/jmbh/tvvar_paper/blob/master/Tutorials/tutorial_mgm.R">here</a>.</p>
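<p>To make the logic of this test concrete, here is a simplified, self-contained sketch in base R. It is an illustration under assumptions, not the method from the paper: the “time-varying” model is approximated by separate VAR(1) fits in consecutive windows rather than by the kernel-smoothed estimator, and a simulated series stands in for the empirical data.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"># Simplified sketch of the parametric bootstrap test (base R; for
# illustration only -- the actual test uses mgm's kernel-smoothed estimator)
set.seed(1)
n <- 200
Phi <- matrix(c(.4, .1, 0, .3), 2, 2, byrow = TRUE)  # true (stationary) VAR(1)

sim_var <- function(n, Phi) {
  p <- nrow(Phi); X <- matrix(0, n, p)
  for (t in 2:n) X[t, ] <- Phi %*% X[t - 1, ] + rnorm(p)
  X
}

# Pooled absolute prediction error of a stationary VAR(1) fit
err_stat <- function(X) mean(abs(resid(lm(X[-1, ] ~ X[-nrow(X), ]))))

# Pooled error of a crude "time-varying" fit: one VAR(1) per window
err_tv <- function(X, n_win = 4) {
  idx <- split(2:nrow(X), cut(2:nrow(X), n_win))
  mean(unlist(lapply(idx, function(ii) abs(resid(lm(X[ii, ] ~ X[ii - 1, ]))))))
}

# Test statistic: error reduction achieved by allowing time-variation
test_stat <- function(X) err_stat(X) - err_tv(X)

X_emp   <- sim_var(n, Phi)                                # stand-in for data
Phi_hat <- t(coef(lm(X_emp[-1, ] ~ X_emp[-n, ]))[-1, ])   # null (stationary) fit

null_stats <- replicate(200, test_stat(sim_var(n, Phi_hat)))
p_value <- mean(null_stats >= test_stat(X_emp))  # large p: no evidence of time-variation</code></pre></figure>
<p>The null distribution is built from data simulated under the stationary fit, and the empirical test statistic is then compared against it; the tutorial code linked above implements the actual version of this test with <em>mgm</em>.</p>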
<h3 id="summary">Summary</h3>
<p>In this blog post, I have shown how to estimate a time-varying VAR model with a kernel smoothing approach, which is based on the assumption that all parameters are a smooth function of time. In addition to estimating the model, we discussed the selection of an appropriate bandwidth parameter, how to compute (time-varying) prediction errors, and how to visualize different aspects of the model. Finally, I provided pointers to code that shows how to assess the stability of estimates via bootstrapping, and how to perform a hypothesis test one can use to select between stationary and time-varying VAR models.</p>
<hr />
<p>I would like to thank <a href="https://fabiandablander.com/">Fabian Dablander</a> for his feedback on this blog post.</p>
Tue, 02 Jun 2020 11:00:00 +0000
http://jmbh.github.io//Estimating-time-varying-VAR-Models/
http://jmbh.github.io//Estimating-time-varying-VAR-Models/Moderated Network Models for Continuous Data<p>Statistical network models have become a popular exploratory data analysis tool in psychology and related disciplines that allows researchers to study relations between variables. The most popular models in this emerging literature are the binary-valued <a href="https://en.wikipedia.org/wiki/Ising_model">Ising model</a> and the <a href="https://en.wikipedia.org/wiki/Multivariate_normal_distribution">multivariate Gaussian distribution</a> for continuous variables, both of which model interactions between <em>pairs</em> of variables. In these pairwise models, the interaction between any pair of variables A and B is a constant and therefore does not depend on the values of any other variable in the model. Put differently, none of the pairwise interactions is moderated. However, in highly complex and contextualized fields like psychology, such moderation effects are often plausible. In this blog post, I show how to fit, analyze, visualize, and assess the stability of Moderated Network Models for continuous data with the <a href="https://cran.r-project.org/web/packages/mgm/index.html">R-package mgm</a>.</p>
<p>Moderated Network Models (MNMs) for continuous data extend the pairwise multivariate Gaussian distribution with moderation effects (3-way interactions). The implementation in the <a href="https://cran.r-project.org/web/packages/mgm/index.html">mgm package</a> estimates these MNMs with a nodewise regression approach, and allows one to condition on moderators, visualize the models, and assess the stability of parameter estimates. For a detailed description of how to construct such an MNM, and how to estimate its parameters, have a look at <a href="https://arxiv.org/abs/1807.02877">our paper on MNMs</a>. For a short recap on moderation and its relation to interactions in the regression framework, have a look at <a href="https://jmbh.github.io/CenteringPredictors/">this blog post</a>.</p>
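<p>To give an intuition for the nodewise approach before turning to the package, here is a conceptual base-R sketch. It is an illustration only: the simulated data and the use of unregularized <code class="language-plaintext highlighter-rouge">lm()</code> are my assumptions, whereas mgm uses $\ell_1$-regularized regressions. Each variable is regressed on all other variables plus their products with the moderator:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"># Conceptual sketch of nodewise estimation with a moderator (illustration
# only; mgm uses l1-regularized regressions, and these data are simulated)
set.seed(1)
n <- 1000
X <- matrix(rnorm(n * 5), n, 5)
colnames(X) <- c("hostile", "lonely", "nervous", "sleepy", "depressed")

j <- 1                                          # node to predict (hostile)
others <- X[, -j]                               # all remaining variables
idx <- colnames(others) != "depressed"
mod_terms <- others[, idx] * X[, "depressed"]   # products with the moderator
colnames(mod_terms) <- paste0(colnames(mod_terms), ":depressed")

# Pairwise terms + moderation terms enter one regression per node
fit <- lm(X[, j] ~ others + mod_terms)
names(coef(fit))</code></pre></figure>
<p>Repeating this regression for every node and combining the nodewise estimates yields the network; the moderation terms are what distinguishes an MNM from a pairwise model.</p>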
<h3 id="loading--inspecting-the-data">Loading & Inspecting the Data</h3>
<p>We use a data set with $n=3896$ observations including the variables Hostile, Lonely, Nervous, Sleepy and Depressed, which we took from the 92-item Motivational State Questionnaire (MSQ) data set that comes with the <a href="https://cran.r-project.org/web/packages/psych/index.html">R-package psych</a>. Update June 5, 2020: the msq data is no longer available from the psych package, so we load it manually. Each item is answered on a Likert scale with responses 0 (Not at all), 1 (A little), 2 (Moderately), and 3 (Very much).</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">read.table</span><span class="p">(</span><span class="s2">"https://jmbh.github.io/files/data/msq.csv"</span><span class="p">,</span><span class="w"> </span><span class="n">sep</span><span class="o">=</span><span class="s2">","</span><span class="p">,</span><span class="w"> </span><span class="n">header</span><span class="o">=</span><span class="kc">TRUE</span><span class="p">)</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">data</span><span class="p">[,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"hostile"</span><span class="p">,</span><span class="w"> </span><span class="s2">"lonely"</span><span class="p">,</span><span class="w"> </span><span class="s2">"nervous"</span><span class="p">,</span><span class="w"> </span><span class="s2">"sleepy"</span><span class="p">,</span><span class="w"> </span><span class="s2">"depressed"</span><span class="p">)]</span><span class="w"> </span><span class="c1"># subset</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">na.omit</span><span class="p">(</span><span class="n">data</span><span class="p">)</span><span class="w"> </span><span class="c1"># exclude rows with missing values</span><span class="w">
</span><span class="n">head</span><span class="p">(</span><span class="n">data</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## hostile lonely nervous sleepy depressed
## 1 0 1 1 1 0
## 2 0 0 0 1 0
## 3 0 0 0 0 0
## 4 0 1 2 1 1
## 5 0 0 0 1 0
## 6 0 0 0 2 1</code></pre></figure>
<p>Because MNMs include 3-way interactions in addition to 2-way (pairwise) interactions, they are more sensitive to extreme values: multiplying three extreme values naturally leads to more extreme results than multiplying only two. It is therefore especially important to check the marginal distributions of all variables:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">par</span><span class="p">(</span><span class="n">mfrow</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="m">3</span><span class="p">))</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">i</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">5</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">barplot</span><span class="p">(</span><span class="n">table</span><span class="p">(</span><span class="n">data</span><span class="p">[,</span><span class="w"> </span><span class="n">i</span><span class="p">]),</span><span class="w"> </span><span class="n">axes</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">,</span><span class="w"> </span><span class="n">xlab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="n">ylim</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">3000</span><span class="p">))</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="n">las</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">1000</span><span class="p">,</span><span class="w"> </span><span class="m">2000</span><span class="p">,</span><span class="w"> </span><span class="m">3000</span><span class="p">))</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">main</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">colnames</span><span class="p">(</span><span class="n">data</span><span class="p">)[</span><span class="n">i</span><span class="p">])</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
<p><img src="/assets/img/2019-07-29-Moderated-Network-Models-for-continuous-data.Rmd/unnamed-chunk-2-1.png" title="plot of chunk unnamed-chunk-2" alt="plot of chunk unnamed-chunk-2" style="display: block; margin: auto;" /></p>
<p>We see that the marginal distributions of all variables except Sleepy are right-skewed, thereby most likely violating the assumption of MNMs that all variables are conditionally Gaussian. This is the same assumption as in any multiple linear regression model, namely that the residuals have a Gaussian distribution. One option would be to transform the variables, possibly by taking the log or square root, or by applying the <a href="https://rdrr.io/cran/huge/man/huge.npn.html">nonparanormal transform</a>. However, any transformation makes the parameters harder to interpret (for example, “increasing X by 1 unit increases the nonparanormal transform of Y by $\beta_{AB}$, keeping everything else constant”), which is why we choose to keep the original variables here and later check the reliability of our estimates using bootstrapping.</p>
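<p>As a side note, the idea behind the nonparanormal transform mentioned above can be sketched in a few lines of base R: each variable is replaced by the normal scores of its ranks. This is a simplified stand-in for <code class="language-plaintext highlighter-rouge">huge.npn()</code>, applied to a hypothetical skewed variable rather than to our data:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"># Rank-based normal scores: a minimal sketch of the nonparanormal idea
set.seed(1)
x <- rexp(500)                               # hypothetical, strongly right-skewed
x_npn <- qnorm(rank(x) / (length(x) + 1))    # map ranks to normal quantiles
c(skew_raw = mean(scale(x)^3), skew_npn = mean(scale(x_npn)^3))</code></pre></figure>
<p>The transformed variable is symmetric by construction, which removes the skewness, but at the cost of the interpretability issue discussed above.</p>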
<h3 id="estimating-moderated-network-model">Estimating Moderated Network Model</h3>
<p>MNMs can be estimated with the <code class="language-plaintext highlighter-rouge">mgm()</code> function; which moderation effects are included is specified with the <code class="language-plaintext highlighter-rouge">moderators</code> argument:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">mgm</span><span class="p">)</span><span class="w"> </span><span class="c1"># 1.1-7</span><span class="w">
</span><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">mgm_mod</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">mgm</span><span class="p">(</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">data</span><span class="p">,</span><span class="w">
</span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="m">5</span><span class="p">),</span><span class="w">
</span><span class="n">level</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">5</span><span class="p">),</span><span class="w">
</span><span class="n">lambdaSel</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"CV"</span><span class="p">,</span><span class="w">
</span><span class="n">ruleReg</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"AND"</span><span class="p">,</span><span class="w">
</span><span class="n">moderators</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">5</span><span class="p">,</span><span class="w">
</span><span class="n">threshold</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"none"</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Note that the sign of parameter estimates is stored separately; see ?mgm</code></pre></figure>
<p>One can specify a particular set of moderation effects by providing an $M \times 3$ matrix to the <code class="language-plaintext highlighter-rouge">moderators</code> argument, where $M$ is the number of moderation effects and each row indicates one 3-way interaction to be included in the model. If one provides a vector instead, all moderation effects involving the specified variables are included. For the present example, we include all moderation effects involving the variable Depressed by setting <code class="language-plaintext highlighter-rouge">moderators = 5</code> (Depressed is in column 5).</p>
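<p>For illustration, a hypothetical matrix specification that would include only two specific moderation effects, instead of all effects involving Depressed, might look like this (each row lists the three variables of one 3-way interaction):</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"># Hypothetical example: request only two specific 3-way interactions
mods <- matrix(c(1, 2, 5,   # Hostile -- Lonely, moderated by Depressed
                 1, 3, 5),  # Hostile -- Nervous, moderated by Depressed
               ncol = 3, byrow = TRUE)
# This matrix would then be passed via moderators = mods in the mgm() call above</code></pre></figure>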
<p>The remaining arguments are standard arguments of <code class="language-plaintext highlighter-rouge">mgm()</code>: <code class="language-plaintext highlighter-rouge">type</code> indicates the type of each variable; in our example all variables are continuous, which is specified with a <code class="language-plaintext highlighter-rouge">"g"</code> for “Gaussian”. <code class="language-plaintext highlighter-rouge">level</code> indicates the number of categories of a given variable, which is not applicable to continuous variables and set to 1 by convention. <code class="language-plaintext highlighter-rouge">lambdaSel="CV"</code> specifies that the regularization parameters in the $\ell_1$-regularized nodewise estimation approach used in <code class="language-plaintext highlighter-rouge">mgm()</code> are selected with cross-validation, and <code class="language-plaintext highlighter-rouge">threshold = "none"</code> specifies that no additional thresholding is performed after estimation.</p>
<p>We can inspect the nonzero interaction parameters in the <code class="language-plaintext highlighter-rouge">mgm()</code> output object:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">mgm_mod</span><span class="o">$</span><span class="n">interactions</span><span class="o">$</span><span class="n">indicator</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [[1]]
## [,1] [,2]
## [1,] 1 2
## [2,] 1 3
## [3,] 1 4
## [4,] 1 5
## [5,] 2 3
## [6,] 2 4
## [7,] 2 5
## [8,] 3 4
## [9,] 3 5
## [10,] 4 5
##
## [[2]]
## [,1] [,2] [,3]
## [1,] 1 2 5
## [2,] 1 3 5
## [3,] 2 4 5
## [4,] 3 4 5</code></pre></figure>
<p>We see that we estimated ten pairwise interactions (that is, all possible $\frac{5(5-1)}{2} = 10$ pairwise interactions) and four 3-way interactions or moderation effects. Each nonzero 3-way interaction involves variable 5 (Depression), which has to be the case since we only specified Depression as a moderator. The parameter estimates can be retrieved with the <code class="language-plaintext highlighter-rouge">showInteraction()</code> function. For example, the pairwise interaction between variables 1 (Hostile) and 3 (Nervous) can be obtained like this:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">showInteraction</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_mod</span><span class="p">,</span><span class="w"> </span><span class="n">int</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">3</span><span class="p">))</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Interaction: 1-3
## Weight: 0.1145438
## Sign: 1 (Positive)</code></pre></figure>
<p>We can interpret this parameter using the usual linear regression interpretation: when increasing Nervous (Hostile) by 1 unit, Hostile (Nervous) increases by $\approx 0.114$, keeping everything else constant.</p>
<p>Note, however, that variables 1 and 3 are also involved in a 3-way interaction with 5 (Depressed), that is, the pairwise interaction between 1 and 3 is moderated by 5 (Depression). Similarly to above, we can retrieve the moderation effect using <code class="language-plaintext highlighter-rouge">showInteraction()</code>:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">showInteraction</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_mod</span><span class="p">,</span><span class="w"> </span><span class="n">int</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">3</span><span class="p">,</span><span class="m">5</span><span class="p">))</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Interaction: 1-3-5
## Weight: 0.05279001
## Sign: 1 (Positive)</code></pre></figure>
<p>We see that there is a <em>positive</em> moderation effect. This means that, if one <em>increases</em> the value of Depression, the pairwise interaction between Hostile and Nervous becomes <em>stronger</em>. For example, if Depression = 0, then the parameter for the pairwise interaction between Hostile and Nervous is equal to $0.114 + 0.053 \times 0 = 0.114$. If Depression = 1, then the pairwise interaction parameter is equal to $0.114 + 0.053 \times 1 = 0.167$. In this example, we fixed Depression to a given value and computed the resulting pairwise interaction between Hostile and Nervous. We can also do this for the entire network, and inspect the network for different values of Depression. The function <code class="language-plaintext highlighter-rouge">condition()</code> does this for us (see below). The discussion of the moderation effect shows that we have to correct our above interpretation of the pairwise interaction between Hostile and Nervous: when increasing Nervous (Hostile) by 1 unit, Hostile (Nervous) increases by $\approx 0.114$, <em>if Depression is equal to zero</em> and when keeping everything else constant.</p>
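<p>The arithmetic above is just a linear combination of the two estimates reported by <code class="language-plaintext highlighter-rouge">showInteraction()</code>; a small base-R sketch (a hypothetical helper, not an mgm function) makes this explicit:</p>

```r
# Effective Hostile-Nervous interaction as a function of the moderator
# value, using the two estimates reported by showInteraction() above
pairwise_HN <- function(depressed) 0.1145438 + 0.05279001 * depressed

pairwise_HN(0)  # pairwise effect when Depression = 0 (~0.114)
pairwise_HN(1)  # ~0.167
pairwise_HN(2)  # ~0.220
```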
<p>Why does the pairwise effect represent exactly the effect when Depression is equal to zero? The reason is that all variables are mean-centered before estimation by default. As a result, the pairwise interaction parameter represents the moderated effect when conditioning on the moderator value with the highest density (assuming that the moderator variable is symmetric and unimodal, which holds here because we assume all variables to be conditional Gaussians). Uncentered variables often lead to pairwise parameters that are conditioned on moderator values that do not exist in the data, and such pairwise parameters have no meaningful interpretation. For a detailed discussion of this <a href="https://jmbh.github.io/CenteringPredictors/">see here</a>.</p>
<h3 id="conditioning-on-the-moderator">Conditioning on the Moderator</h3>
<p>The function <code class="language-plaintext highlighter-rouge">condition</code> conditions on (fixes values of) a set of moderators. In our model, we included only a single moderator (Depression), so we only fix the values of Depression. Note that internally, <code class="language-plaintext highlighter-rouge">mgm()</code> scales all variables to mean = 0, SD = 1, to ensure that the regularization on parameters does not depend on the variance of the associated variable(s). We therefore need to specify values based on the scaled version of the Depression variable. To pick a reasonable set of values, we inspect the <em>scaled version</em> of the Depression variable:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">tb</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">table</span><span class="p">(</span><span class="n">scale</span><span class="p">(</span><span class="n">data</span><span class="o">$</span><span class="n">depressed</span><span class="p">))</span><span class="w">
</span><span class="nf">names</span><span class="p">(</span><span class="n">tb</span><span class="p">)</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">round</span><span class="p">(</span><span class="nf">as.numeric</span><span class="p">(</span><span class="nf">names</span><span class="p">(</span><span class="n">tb</span><span class="p">)),</span><span class="w"> </span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">barplot</span><span class="p">(</span><span class="n">tb</span><span class="p">,</span><span class="w"> </span><span class="n">axes</span><span class="o">=</span><span class="kc">FALSE</span><span class="p">,</span><span class="w"> </span><span class="n">xlab</span><span class="o">=</span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="n">ylim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">3000</span><span class="p">))</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">1000</span><span class="p">,</span><span class="w"> </span><span class="m">2000</span><span class="p">,</span><span class="w"> </span><span class="m">3000</span><span class="p">))</span></code></pre></figure>
<p><img src="/assets/img/2019-07-29-Moderated-Network-Models-for-continuous-data.Rmd/unnamed-chunk-7-1.png" title="plot of chunk unnamed-chunk-7" alt="plot of chunk unnamed-chunk-7" style="display: block; margin: auto;" /></p>
<p>Here, we choose the values 0, 1 and 2.</p>
<p>To condition on (that is, fix) a set of variables, we provide two arguments to <code class="language-plaintext highlighter-rouge">condition()</code>: first, the <code class="language-plaintext highlighter-rouge">mgm()</code> output object; and second, a list in which the entry name indicates the variable (by its column number) and the entry value indicates the value to which the variable should be fixed. For example, if we would like to fix Depression = 2, we specify <code class="language-plaintext highlighter-rouge">values = list('5' = 2)</code>. Here, we call <code class="language-plaintext highlighter-rouge">condition()</code> and fix Depression (5) to the values 0, 1 and 2.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">cond0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">condition</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_mod</span><span class="p">,</span><span class="w">
</span><span class="n">values</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">list</span><span class="p">(</span><span class="s1">'5'</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">0</span><span class="p">))</span><span class="w">
</span><span class="n">cond1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">condition</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_mod</span><span class="p">,</span><span class="w">
</span><span class="n">values</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">list</span><span class="p">(</span><span class="s1">'5'</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">))</span><span class="w">
</span><span class="n">cond2</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">condition</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_mod</span><span class="p">,</span><span class="w">
</span><span class="n">values</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">list</span><span class="p">(</span><span class="s1">'5'</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">))</span></code></pre></figure>
<p>The output of <code class="language-plaintext highlighter-rouge">condition()</code> is a complete mgm model object, conditioned on the provided set of variables. For example, <code class="language-plaintext highlighter-rouge">cond0</code> contains the model object conditioned on Depression = 0. On the population level (i.e. if sample size does not play a role), this is the model one would obtain if one took only the rows in which Depression = 0 in the data set, and estimated a pairwise model on this subset of the data.</p>
<p>Note that we can compute the model object conditioned on any value. However, if we specify values that we do not actually observe, for example Depression = 7, we extrapolate beyond the observed data and therefore do not know whether the computed conditional mgm object is accurate.</p>
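<p>A minimal base-R guard (a hypothetical helper, not part of mgm) makes this check explicit by comparing the requested conditioning value against the observed range of the scaled moderator:</p>

```r
# Hypothetical helper: returns FALSE (with a warning) when a conditioning
# value lies outside the observed range of the scaled moderator,
# i.e. when condition() would extrapolate beyond the data
check_in_range <- function(value, observed) {
  rng <- range(observed)
  inside <- value >= rng[1] && value <= rng[2]
  if (!inside)
    warning("Conditioning value outside observed range; extrapolating.")
  inside
}

check_in_range(2, c(-0.5, 0, 1, 2, 3))  # TRUE: within the observed range
check_in_range(7, c(-0.5, 0, 1, 2, 3))  # FALSE, with a warning
```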
<p>Since we only specified a single moderator in the present example, the three conditional mgm objects are now pairwise models. We can inspect their parameters as usual, or visualize them:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">l_cond</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">list</span><span class="p">(</span><span class="n">cond0</span><span class="p">,</span><span class="w"> </span><span class="n">cond1</span><span class="p">,</span><span class="w"> </span><span class="n">cond2</span><span class="p">)</span><span class="w">
</span><span class="n">library</span><span class="p">(</span><span class="n">qgraph</span><span class="p">)</span><span class="w">
</span><span class="n">par</span><span class="p">(</span><span class="n">mfrow</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">3</span><span class="p">))</span><span class="w">
</span><span class="n">max_val</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">max</span><span class="p">(</span><span class="nf">max</span><span class="p">(</span><span class="n">l_cond</span><span class="p">[[</span><span class="m">1</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">),</span><span class="w">
</span><span class="nf">max</span><span class="p">(</span><span class="n">l_cond</span><span class="p">[[</span><span class="m">2</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">),</span><span class="w">
</span><span class="nf">max</span><span class="p">(</span><span class="n">l_cond</span><span class="p">[[</span><span class="m">3</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">))</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">i</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">3</span><span class="p">)</span><span class="w"> </span><span class="n">qgraph</span><span class="p">(</span><span class="n">l_cond</span><span class="p">[[</span><span class="n">i</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">,</span><span class="w"> </span><span class="n">layout</span><span class="o">=</span><span class="s2">"circle"</span><span class="p">,</span><span class="w">
</span><span class="n">edge.color</span><span class="o">=</span><span class="n">l_cond</span><span class="p">[[</span><span class="n">i</span><span class="p">]]</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">edgecolor</span><span class="p">,</span><span class="w">
</span><span class="n">labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">colnames</span><span class="p">(</span><span class="n">msq_p5</span><span class="p">),</span><span class="w">
</span><span class="n">maximum</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">max_val</span><span class="p">,</span><span class="w">
</span><span class="n">edge.labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">TRUE</span><span class="p">,</span><span class="w"> </span><span class="n">edge.label.cex</span><span class="o">=</span><span class="m">2</span><span class="p">,</span><span class="w">
</span><span class="n">vsize</span><span class="o">=</span><span class="m">20</span><span class="p">,</span><span class="w"> </span><span class="n">esize</span><span class="o">=</span><span class="m">18</span><span class="p">,</span><span class="w">
</span><span class="n">title</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">paste0</span><span class="p">(</span><span class="s2">"Depression = "</span><span class="p">,</span><span class="w"> </span><span class="p">(</span><span class="m">0</span><span class="o">:</span><span class="m">2</span><span class="p">)[</span><span class="n">i</span><span class="p">]))</span></code></pre></figure>
<p><img src="/assets/img/2019-07-29-Moderated-Network-Models-for-continuous-data.Rmd/unnamed-chunk-9-1.png" title="plot of chunk unnamed-chunk-9" alt="plot of chunk unnamed-chunk-9" style="display: block; margin: auto;" /></p>
<p>Let’s focus again on the pairwise interaction between Hostile and Nervous. As illustrated above, the relationship becomes stronger when increasing Depression. We also see changes in the three other pairwise interactions that are moderated by Depression: Hostile-Lonely, Lonely-Sleepy, and Nervous-Sleepy. All these conditional pairwise effects can be interpreted as partial correlations. However, note that the pairwise interaction between Hostile-Nervous for Depression=1 computed above ($.167$) is not the same as the one shown in the network picture ($.18$). The reason is that <code class="language-plaintext highlighter-rouge">showInteraction()</code> reports the moderation effect aggregated over the estimates of this parameter from all three regressions, while <code class="language-plaintext highlighter-rouge">condition()</code> reports the moderation effect aggregated over the two regressions on the two variables in the pairwise interaction. For details have a look at our <a href="https://arxiv.org/abs/1807.02877">paper on Moderated Network Models</a>.</p>
<p>Note that Depression is not connected to any other variable in any of the networks. The reason is that in each of the networks Depression is fixed to a value, and is therefore technically not part of the model anymore. However, in mgm we keep the variable in the fit object to avoid confusion when comparing MNMs with pairwise models.</p>
<p>Displaying the pairwise networks after conditioning on different values of the moderator variable is perhaps the most intuitive way to report the results of a Moderated Network Model. However, this is only feasible if the number of moderators is small, because the number of cases to consider is equal to $3^m$ if we consider three values for each of $m$ moderators. In our case we had $3^1=3$ cases, which is still easy to visualize. However, $m=2,3,4$ moderators already lead to $9, 27, 81$ cases. An alternative way to visualize MNMs that also works for larger numbers of moderators is to display the MNM as a factor graph.</p>
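<p>The combinatorial growth is easy to verify:</p>

```r
# Number of conditional pairwise networks to display when each of
# m moderators is fixed to three values
m <- 1:4
3^m  # 3 9 27 81
```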
<h3 id="visualization-using-factor-graph">Visualization using Factor Graph</h3>
<p>Here we show how to visualize an MNM that includes several moderators. To this end, we fit the same model as above, but now include <em>all</em> variables as moderators:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">mgm_mod_all</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">mgm</span><span class="p">(</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">data</span><span class="p">,</span><span class="w">
</span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="s2">"g"</span><span class="p">,</span><span class="w"> </span><span class="m">5</span><span class="p">),</span><span class="w">
</span><span class="n">level</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">5</span><span class="p">),</span><span class="w">
</span><span class="n">lambdaSel</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"CV"</span><span class="p">,</span><span class="w">
</span><span class="n">ruleReg</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"AND"</span><span class="p">,</span><span class="w">
</span><span class="n">moderators</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">5</span><span class="p">,</span><span class="w">
</span><span class="n">threshold</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"none"</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Note that the sign of parameter estimates is stored separately; see ?mgm</code></pre></figure>
<p>We again check which interactions have been estimated to be nonzero:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">mgm_mod_all</span><span class="o">$</span><span class="n">interactions</span><span class="o">$</span><span class="n">indicator</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [[1]]
## [,1] [,2]
## [1,] 1 2
## [2,] 1 3
## [3,] 1 4
## [4,] 1 5
## [5,] 2 3
## [6,] 2 4
## [7,] 2 5
## [8,] 3 4
## [9,] 3 5
## [10,] 4 5
##
## [[2]]
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 1 2 4
## [3,] 1 3 4
## [4,] 1 3 5
## [5,] 2 3 4
## [6,] 2 4 5
## [7,] 3 4 5</code></pre></figure>
<p>and we see that we recovered a few additional 3-way interactions (moderation effects).</p>
<p>Before visualizing this model as a factor graph, let’s consider why we actually need to go beyond a typical network. For example, if there is a 3-way interaction between variables 1-2-3, one could simply connect those three nodes to indicate the 3-way interaction. The problem with this solution, however, is that we can then no longer tell from the graph alone whether this triangle indeed comes from a 3-way interaction, from three 2-way (pairwise) interactions, or from a combination of 2-way and 3-way interactions. We therefore need a more powerful graph.</p>
<p>A factor graph includes, as usual, nodes for the variables, but it adds an additional (factor) node for each interaction parameter, that is, one factor node per 2-way and 3-way interaction. With an <code class="language-plaintext highlighter-rouge">mgm()</code> output object as input, the <code class="language-plaintext highlighter-rouge">FactorGraph()</code> function plots the factor graph for us:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">FactorGraph</span><span class="p">(</span><span class="n">mgm_mod_all</span><span class="p">,</span><span class="w">
</span><span class="n">labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">colnames</span><span class="p">(</span><span class="n">data</span><span class="p">),</span><span class="w">
</span><span class="n">layout</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"circle"</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2019-07-29-Moderated-Network-Models-for-continuous-data.Rmd/unnamed-chunk-12-1.png" title="plot of chunk unnamed-chunk-12" alt="plot of chunk unnamed-chunk-12" style="display: block; margin: auto;" /></p>
<p>The five round nodes indicate the five variables, the red square nodes indicate pairwise interactions, and the blue triangle nodes indicate 3-way interactions. Corresponding to the estimated interactions listed in <code class="language-plaintext highlighter-rouge">mgm_mod_all$interactions$indicator</code>, we see that there are seven 3-way interactions and ten pairwise interactions.</p>
<p>The <code class="language-plaintext highlighter-rouge">FactorGraph()</code> function also allows one to plot only 3-way interactions as factor nodes and 2-way interactions as standard edges between variables by setting <code class="language-plaintext highlighter-rouge">PairwiseAsEdge = TRUE</code>, which often helps to simplify the visualization. In addition, it allows one to pass any arguments to <code class="language-plaintext highlighter-rouge">qgraph()</code>, which is called by <code class="language-plaintext highlighter-rouge">FactorGraph()</code>.</p>
<h3 id="assessing-stability-of-estimates">Assessing Stability of Estimates</h3>
<p>As with pairwise network models, it is useful to obtain a measure for how stable the estimated parameters are. This is especially important in MNMs: first, as discussed at the beginning of this blog post, moderation effects depend more on extreme values than pairwise effects; second, MNMs have more parameters, and therefore the variance on estimates may be larger than for pairwise models; and third, moderation effects are typically smaller than pairwise effects.</p>
<p>To assess stability, we inspect the bootstrapped sampling distributions of all parameters. The function <code class="language-plaintext highlighter-rouge">resample()</code> takes the original model object, the data, and the number of bootstrap samples <code class="language-plaintext highlighter-rouge">nB</code> as input, and returns an object with <code class="language-plaintext highlighter-rouge">nB</code> models fitted on bootstrap samples. Here, we assess the reliability of estimates of the initial example in which Depression was the only moderator (this takes 30-60s to run):</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">res_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">resample</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mgm_mod</span><span class="p">,</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">data</span><span class="p">,</span><span class="w">
</span><span class="n">nB</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">50</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<p>We can then visualize the summary of all sampling distributions using <code class="language-plaintext highlighter-rouge">plotRes()</code>:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">plotRes</span><span class="p">(</span><span class="n">res_obj</span><span class="p">,</span><span class="w">
</span><span class="n">axis.ticks</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">-.1</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">.1</span><span class="p">,</span><span class="w"> </span><span class="m">.2</span><span class="p">,</span><span class="w"> </span><span class="m">.3</span><span class="p">,</span><span class="w"> </span><span class="m">.4</span><span class="p">,</span><span class="w"> </span><span class="m">.5</span><span class="p">),</span><span class="w">
</span><span class="n">axis.ticks.mod</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">-.1</span><span class="p">,</span><span class="w"> </span><span class="m">-.05</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">.05</span><span class="p">,</span><span class="w"> </span><span class="m">.1</span><span class="p">),</span><span class="w">
</span><span class="n">cex.label</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w">
</span><span class="n">labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">colnames</span><span class="p">(</span><span class="n">msq_p5</span><span class="p">),</span><span class="w">
</span><span class="n">layout.width.labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">.40</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2019-07-29-Moderated-Network-Models-for-continuous-data.Rmd/unnamed-chunk-14-1.png" title="plot of chunk unnamed-chunk-14" alt="plot of chunk unnamed-chunk-14" style="display: block; margin: auto;" /></p>
<p>The first column displays the pairwise effects, and the second column shows the moderation effects. The horizontal lines show the 5% and 95% quantiles of the bootstrapped sampling distributions. Each number shows the proportion of bootstrap samples in which a parameter was estimated to be nonzero, and is placed at the mean of the sampling distribution. We observe that the variance of the sampling distributions is small for pairwise effects, indicating that they are stable.</p>
<p>Regarding the moderation effects, we first notice that moderation effects on pairwise interactions involving Depressed are always zero. Taking the pairwise interaction Lonely-Depressed as an example, the moderation effect would refer to the 3-way interaction Lonely-Depressed-Depressed. Because we do not include such quadratic effects by default, these parameters are always equal to zero. For the moderation effects that were estimated, we see that their mean sizes are smaller and their variances larger relative to their means. However, the moderation effect of Depression on Nervous-Sleepy seems quite stable, and at least the moderation effects on Lonely-Sleepy and Hostile-Nervous are somewhat stable.</p>
<h3 id="model-selection">Model Selection</h3>
<p>Finally, I would like to address the topic of model selection; specifically, the problem of selecting between models with and without moderation effects. Since all effects are subject to regularization, the model selection in <code class="language-plaintext highlighter-rouge">mgm()</code> between models with different regularization parameters essentially selects between models with and without moderation.</p>
<p>An alternative would be to compare the fit of a (possibly unregularized) model including only pairwise interactions with the fit of a (possibly unregularized) model including pairwise interactions and (a set of) moderation effects. This could be done by comparing the fit in a separate training data set, or by approximating the out-of-sample error with the out-of-bag error of a cross-validation scheme. These approaches, however, have to be performed on a nodewise basis: a limitation of MNMs is that it is unclear whether the estimates obtained from nodewise regression give rise to a proper joint distribution (for details see <a href="https://arxiv.org/abs/1807.02877">our paper</a>), which precludes using any global likelihood ratio tests or global goodness-of-fit measures. However, all measures can be computed on a nodewise basis and combined to achieve a global model selection.</p>
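<p>To make the idea concrete, here is a hedged sketch with simulated data and plain <code class="language-plaintext highlighter-rouge">lm()</code> (not the mgm API): for a single node, we compare the out-of-sample error of a pairwise-only model with that of a model adding a moderation (product) term, using a train/test split.</p>

```r
# Hedged sketch: nodewise out-of-sample comparison on simulated data.
# The true model for y contains a moderation effect (x3 moderates x1-y).
set.seed(1)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 0.3 * x1 + 0.3 * x2 + 0.2 * x1 * x3 + rnorm(n)
dat   <- data.frame(y, x1, x2, x3)
train <- dat[1:250, ]
test  <- dat[251:500, ]

fit_pair <- lm(y ~ x1 + x2 + x3, data = train)          # pairwise only
fit_mod  <- lm(y ~ x1 + x2 + x3 + x1:x3, data = train)  # + moderation term

# Out-of-sample mean squared error on the held-out half
mse <- function(fit) mean((test$y - predict(fit, newdata = test))^2)
mse(fit_pair)  # test error of the pairwise-only model
mse(fit_mod)   # typically lower here, since the true model is moderated
```

<p>Repeating this comparison for every node and combining the nodewise errors yields the kind of global model selection described above.</p>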
<h3 id="moderated-mixed-graphical-models">Moderated Mixed Graphical Models</h3>
<p>In the present blog post, I have focused on MNMs that include only continuous variables. However, <code class="language-plaintext highlighter-rouge">mgm()</code> also allows one to fit MGMs with moderation effects, in which any type of variable can be a moderator variable that moderates the pairwise interaction between pairs of any types of variables. An interesting special case of this flexible model is a model in which one includes a single categorical variable as a moderator, since this presents an alternative way to estimate group differences between 2 or more groups. I will write a blog post about this in the near future.</p>
<h3 id="summary">Summary</h3>
<p>I showed how to estimate Moderated Network Models using <code class="language-plaintext highlighter-rouge">mgm()</code> and how to use <code class="language-plaintext highlighter-rouge">showInteraction()</code> to retrieve parameter estimates from the output object. For the model including only Depression as a moderator, we used <code class="language-plaintext highlighter-rouge">condition()</code> to fix (that is, condition on) a set of values of the moderator variable, and used the <code class="language-plaintext highlighter-rouge">qgraph</code> package to visualize the resulting conditional network models. We showed how to use <code class="language-plaintext highlighter-rouge">FactorGraph()</code> to draw a factor graph of an estimated MNM, which is especially useful when the model includes several moderator variables. And finally, we used <code class="language-plaintext highlighter-rouge">resample()</code> and <code class="language-plaintext highlighter-rouge">plotRes()</code> to evaluate the stability of the estimates using bootstrapping.</p>
<p>In case anything is unclear or if you have any questions or comments, please let me know in the comments!</p>
<hr />
<p>I would like to thank <a href="https://fabiandablander.com/">Fabian Dablander</a> and <a href="https://mattiheino.com/">Matti Heino</a> for their feedback on this blog post.</p>
Thu, 05 Sep 2019 11:00:00 +0000
http://jmbh.github.io//Moderated-Network-Models-for-continuous-data/
Regression with Interaction Terms - How Centering Predictors Influences Main Effects<p>Centering predictors in a regression model with only main effects has no influence on the main effects. In contrast, in a regression model including interaction terms, centering predictors <em>does</em> have an influence on the main effects. After getting confused by this, I read <a href="https://amstat.tandfonline.com/doi/pdf/10.1080/10691898.2011.11889620">this</a> nice paper by Afshartous &amp; Preston (2011) on the topic and played around with the examples in R. I summarize the resulting notes and code snippets in this blog post.</p>
<p>We give an explanation on two levels:</p>
<ol>
<li>By illustrating the issue with the simplest possible example</li>
<li>By showing in general how main effects are a function of the constants (e.g. means) that are subtracted from predictor variables</li>
</ol>
<h2 id="explanation-1-simplest-example">Explanation 1: Simplest example</h2>
<p>The simplest possible example to illustrate the issue is a regression model in which variable $Y$ is a linear function of variables $X_1$, $X_2$, and their product $X_1X_2$</p>
\[Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1X_2 + \epsilon,\]
<p>where we set $\beta_0 = 1, \beta_1 = 0.3, \beta_2 = 0.2, \beta_3 = 0.2$, and $\epsilon \sim N(0, \sigma^2)$ follows a Gaussian distribution with mean zero and variance $\sigma^2$. We define the predictors $X_1, X_2$ as Gaussians with means $\mu_{X_1} = \mu_{X_2} = 1$ and variances $\sigma_{X_1}^{2}=\sigma_{X_2}^{2}=1$. This code samples $n = 10000$ observations from this model:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">n</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">10000</span><span class="w">
</span><span class="n">b0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">1</span><span class="p">;</span><span class="w"> </span><span class="n">b1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">.3</span><span class="p">;</span><span class="w"> </span><span class="n">b2</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">.2</span><span class="p">;</span><span class="w"> </span><span class="n">b3</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">.2</span><span class="w">
</span><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">x1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">rnorm</span><span class="p">(</span><span class="n">n</span><span class="p">,</span><span class="w"> </span><span class="n">mean</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="n">sd</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">x2</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">rnorm</span><span class="p">(</span><span class="n">n</span><span class="p">,</span><span class="w"> </span><span class="n">mean</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="n">sd</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">y</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">b0</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b1</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b2</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x2</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b3</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x2</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">rnorm</span><span class="p">(</span><span class="n">n</span><span class="p">,</span><span class="w"> </span><span class="n">mean</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="n">sd</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">)</span></code></pre></figure>
<p><strong>Regression models with main effects</strong></p>
<p>We first verify that centering variables indeed does not affect the main effects. To do so, we first fit the linear regression with only main effects with uncentered predictors</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">lm</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">x2</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">##
## Call:
## lm(formula = y ~ x1 + x2)
##
## Coefficients:
## (Intercept) x1 x2
## 0.8088 0.4983 0.4015</code></pre></figure>
<p>and then with mean centered predictors</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">x1_c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">mean</span><span class="p">(</span><span class="n">x1</span><span class="p">)</span><span class="w"> </span><span class="c1"># center predictors</span><span class="w">
</span><span class="n">x2_c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">x2</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">mean</span><span class="p">(</span><span class="n">x2</span><span class="p">)</span><span class="w">
</span><span class="n">lm</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">x1_c</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">x2_c</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">##
## Call:
## lm(formula = y ~ x1_c + x2_c)
##
## Coefficients:
## (Intercept) x1_c x2_c
## 1.7036 0.4983 0.4015</code></pre></figure>
<p>The parameter estimates of the regression with uncentered predictors are $\hat\beta_1 \approx 0.50$ and $\hat\beta_2 \approx 0.40$. The estimates of the regression with <em>centered</em> predictors are $\hat\beta_1^\ast \approx 0.50$ and $\hat\beta_2^\ast \approx 0.40$ (we denote estimates from regressions with centered predictors with an asterisk). And indeed, $\hat\beta_1 = \hat\beta_1^\ast$ and $\hat\beta_2 = \hat\beta_2^\ast$.</p>
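<p>In fact, this invariance holds for adding any constant $c$ to the predictors, not only for subtracting the mean. Writing $\tilde{X}_i = X_i + c$ for the shifted predictors, the main-effects model can be rearranged as</p>

\[\mathbb{E}[Y] = \beta_0 + \beta_1 X_1 + \beta_2 X_2 = (\beta_0 - \beta_1 c - \beta_2 c) + \beta_1 \tilde{X}_1 + \beta_2 \tilde{X}_2,\]

<p>so only the intercept absorbs the shift, while the slopes $\beta_1$ and $\beta_2$ keep their values.</p>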
<p><strong>Regression models with main effects + interaction</strong></p>
<p>We now include the interaction term and show that centering the predictors <em>does</em> affect the main effects. We first fit the regression model without centering</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">lm</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x2</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">##
## Call:
## lm(formula = y ~ x1 * x2)
##
## Coefficients:
## (Intercept) x1 x2 x1:x2
## 1.0183 0.2883 0.1898 0.2111</code></pre></figure>
<p>and then with centering</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">lm</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">x1_c</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x2_c</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">##
## Call:
## lm(formula = y ~ x1_c * x2_c)
##
## Coefficients:
## (Intercept) x1_c x2_c x1_c:x2_c
## 1.7026 0.4984 0.3995 0.2111</code></pre></figure>
<p>We see that $\hat\beta_1 \approx 0.29$ and $\hat\beta_2 \approx 0.19$, while $\hat\beta_1^\ast \approx 0.50$ and $\hat\beta_2^\ast \approx 0.40$. While the two models have different parameters, they are statistically equivalent: the expected values implied by both models are the same. In empirical terms, this means that their coefficient of determination $R^2$ is the same. The reader will be able to verify this in Explanation 2 below.</p>
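<p>This equivalence can also be checked directly. The following snippet re-runs the simulation from above (so that it is self-contained) and compares the $R^2$ of the two parameterizations:</p>

<figure class="highlight"><pre><code class="language-r" data-lang="r"># Re-create the simulated data from above
set.seed(1)
n <- 10000
x1 <- rnorm(n, mean = 1, sd = 1)
x2 <- rnorm(n, mean = 1, sd = 1)
y <- 1 + .3 * x1 + .2 * x2 + .2 * x1 * x2 + rnorm(n, mean = 0, sd = 1)
x1_c <- x1 - mean(x1)
x2_c <- x2 - mean(x2)

# R^2 of the uncentered and the centered interaction model
r2_unc <- summary(lm(y ~ x1 * x2))$r.squared
r2_cen <- summary(lm(y ~ x1_c * x2_c))$r.squared
all.equal(r2_unc, r2_cen)  # TRUE: the two parameterizations fit equally well</code></pre></figure>

<p>The centered model is an exact linear reparameterization of the uncentered one (both design matrices span the same column space), so the fitted values, and hence $R^2$, are identical up to floating-point error.</p>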
<p>We make two observations:</p>
<ol>
<li>In the model with interaction terms, the main effects differ between the regressions with/without centering of predictors</li>
<li>When centering predictors, the main effects are the same in the model with/without the interaction term (up to some numerical inaccuracy)</li>
</ol>
<p><strong>Why does centering influence main effects in the presence of an interaction term?</strong></p>
<p>The reason is that in the model with the interaction term, the parameter $\beta_1$ (uncentered predictors) is the main effect of $X_1$ on $Y$ if $X_2 = 0$, and the parameter $\beta_1^\ast$ (centered predictors) is the main effect of $X_1$ on $Y$ if $X_2 = \mu_{X_2}$. This means that $\beta_1$ and $\beta_1^\ast$ are modeling different effects in the data. Here is a more detailed explanation:</p>
<p>Rewriting the model equation in the following way</p>
\[\begin{aligned}
\mathbb{E}[Y] &= \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1X_2 \\
&= \beta_0 + (\beta_1 + \beta_3 X_2) X_1 + \beta_2 X_2
\end{aligned}\]
<p>shows that in the model with interaction term, the effect of $X_1$ on $Y$ is equal to $(\beta_1 + \beta_3 X_2)$ and is therefore a function of $X_2$. What does the parameter $\beta_1$ model here? It models the effect of $X_1$ on $Y$ when $X_2 = 0$. Similarly, we could rewrite the effect of $X_2$ on $Y$ as a function of $X_1$.</p>
<p>Now let $X_1^c = X_1 - \mu_{X_1}$ and $X_2^c = X_2 - \mu_{X_2}$ be the centered predictors. We get the same model equations, now with the parameters estimated using the centered predictors $X_1^c, X_2^c$:</p>
\[\begin{aligned}
\mathbb{E}[Y] &= \beta_0^\ast + \beta_1^\ast X_1^c + \beta_2^\ast X_2^c + \beta_3^\ast X_1^c X_2^c \\
&= \beta_0^\ast + (\beta_1^\ast + \beta_3^\ast X_2^c) X_1^c + \beta_2^\ast X_2^c \\
\end{aligned}\]
<p>Again we focus on the effect $(\beta_1^\ast + \beta_3^\ast X_2^c)$ of $X_1^c$ on $Y$. What does the parameter $\beta_1^\ast$ model here? As before, it is the effect of $X_1^c$ on $Y$ when $X_2^c = 0$. What is new is that, after centering, $X_2^c = 0$ corresponds to $X_2$ being at its mean, since $\mu_{X_2^c} = 0$.</p>
<p>To summarize, in the uncentered case $\beta_i$ is the main effect when the predictor variable $X_i$ is equal to zero; and in the centered case, $\beta_i^\ast$ is the main effect when the predictor variable $X_i$ is equal to its mean. Clearly, $\beta_i$ and $\beta_i^\ast$ model different effects in the data and it is therefore not surprising that the two regressions give us very different estimates.</p>
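<p>The exact relation between the two sets of parameters can be made explicit by substituting $X_1 = X_1^c + \mu_{X_1}$ and $X_2 = X_2^c + \mu_{X_2}$ into the model equation and collecting terms:</p>

\[\begin{aligned}
\mathbb{E}[Y] &= \beta_0 + \beta_1 (X_1^c + \mu_{X_1}) + \beta_2 (X_2^c + \mu_{X_2}) + \beta_3 (X_1^c + \mu_{X_1})(X_2^c + \mu_{X_2}) \\
&= \left(\beta_0 + \beta_1 \mu_{X_1} + \beta_2 \mu_{X_2} + \beta_3 \mu_{X_1} \mu_{X_2}\right) + (\beta_1 + \beta_3 \mu_{X_2}) X_1^c + (\beta_2 + \beta_3 \mu_{X_1}) X_2^c + \beta_3 X_1^c X_2^c
\end{aligned}\]

<p>Matching coefficients yields $\beta_1^\ast = \beta_1 + \beta_3 \mu_{X_2}$, $\beta_2^\ast = \beta_2 + \beta_3 \mu_{X_1}$, and $\beta_3^\ast = \beta_3$. This is consistent with the estimates above: $0.2883 + 0.2111 \cdot 1 \approx 0.50 \approx \hat\beta_1^\ast$.</p>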
<p><strong>Centering $\rightarrow$ interpretation of $\beta$ remains the same when adding interaction</strong></p>
<p>Our second observation above was that the estimates of main effects are the same with/without interaction term when centering the predictor variables. This is because in the models <em>without</em> interaction term (centered or uncentered predictors) the interpretation of $\beta_1$ is the same as in the model <em>with</em> interaction term and centered predictors.</p>
<p>More precisely, in the regression model with only main effects, $\beta_1$ is the main effect of $X_1$ on $Y$ averaged over all values of $X_2$, which is the same as the main effect of $X_1$ on $Y$ for $X_2 = \mu_{X_2}$. This means that if we center predictors, $\beta_1$ models the same effect in the data in a model with/without interaction term. This is an attractive property to have when one is interested in comparing models with/without interaction term.</p>
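<p>As a quick numerical check of this property, we can compare the main-effect estimates with centered predictors, with and without the interaction term (again re-running the simulation from above so the snippet is self-contained):</p>

<figure class="highlight"><pre><code class="language-r" data-lang="r"># Re-create the simulated data from above
set.seed(1)
n <- 10000
x1 <- rnorm(n, mean = 1, sd = 1)
x2 <- rnorm(n, mean = 1, sd = 1)
y <- 1 + .3 * x1 + .2 * x2 + .2 * x1 * x2 + rnorm(n, mean = 0, sd = 1)
x1_c <- x1 - mean(x1)
x2_c <- x2 - mean(x2)

# Main effects with centered predictors: without vs. with interaction term
b_main <- coef(lm(y ~ x1_c + x2_c))[c("x1_c", "x2_c")]
b_int  <- coef(lm(y ~ x1_c * x2_c))[c("x1_c", "x2_c")]
round(b_main - b_int, 3)  # differences are close to zero</code></pre></figure>

<p>The differences vanish up to sampling variability, matching the second observation above.</p>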
<h2 id="explanation-2-main-effects-as-functions-of-added-constants">Explanation 2: Main effects as functions of added constants</h2>
<p>Subtracting the mean from predictors is a special case of adding constants to predictors. Here we first show numerically what happens to each regression parameter when adding constants to predictors. Then we show analytically how each parameter is a function of its value in the original regression model (no constant added) and the added constants.</p>
<p>Why are we doing this? To develop a more general understanding of what happens when adding constants to predictors. It also puts the above example in a more general context, since we can consider it a special case of the following analysis.</p>
<p><strong>Numerical experiment I: Only main effects</strong></p>
<p>We first fit a series of regression models with only main effects. In each of them we add a different constant to the predictors. We do this to verify that our claim that centering predictors does not change main effects extends to the more general situation of adding constants to predictors.</p>
<p>We first define a sequence of constant values we add to the predictors and create storage for parameter estimates:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">n</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">25</span><span class="w">
</span><span class="n">c_sequence</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">seq</span><span class="p">(</span><span class="m">-1.5</span><span class="p">,</span><span class="w"> </span><span class="m">1.5</span><span class="p">,</span><span class="w"> </span><span class="n">length</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">n</span><span class="p">)</span><span class="w">
</span><span class="n">A</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">as.data.frame</span><span class="p">(</span><span class="n">matrix</span><span class="p">(</span><span class="kc">NA</span><span class="p">,</span><span class="w"> </span><span class="n">ncol</span><span class="o">=</span><span class="m">5</span><span class="p">,</span><span class="w"> </span><span class="n">nrow</span><span class="o">=</span><span class="n">n</span><span class="p">))</span><span class="w">
</span><span class="n">colnames</span><span class="p">(</span><span class="n">A</span><span class="p">)</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"b0"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b1"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b2"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b3"</span><span class="p">,</span><span class="w"> </span><span class="s2">"R2"</span><span class="p">)</span></code></pre></figure>
<p>We now fit 25 regression models, and in each of them we add a constant <code class="language-plaintext highlighter-rouge">c</code> to both predictors, taken from the sequence <code class="language-plaintext highlighter-rouge">c_sequence</code>:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="k">for</span><span class="p">(</span><span class="n">i</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">25</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">c_sequence</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w">
</span><span class="n">x1_c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">c</span><span class="w">
</span><span class="n">x2_c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">x2</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">c</span><span class="w">
</span><span class="n">lm_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">x1_c</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">x2_c</span><span class="p">)</span><span class="w"> </span><span class="c1"># Fit model</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">b0</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">1</span><span class="p">]</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">b1</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">2</span><span class="p">]</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">b2</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">3</span><span class="p">]</span><span class="w">
</span><span class="n">yhat</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">predict</span><span class="p">(</span><span class="n">lm_obj</span><span class="p">)</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">R2</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">var</span><span class="p">(</span><span class="n">yhat</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">y</span><span class="p">)</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">var</span><span class="p">(</span><span class="n">y</span><span class="p">)</span><span class="w"> </span><span class="c1"># Compute R2</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
<p>Remark: in Explanation 1 we said that the coefficient of determination $R^2$ does not change when adding constants to the predictors. We invite the reader to verify this by inspecting <code class="language-plaintext highlighter-rouge">A$R2</code>.</p>
<p>We plot all parameters $\beta_0, \beta_1, \beta_2$ as a function of <code class="language-plaintext highlighter-rouge">c</code>:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">RColorBrewer</span><span class="p">)</span><span class="w">
</span><span class="n">cols</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">brewer.pal</span><span class="p">(</span><span class="m">4</span><span class="p">,</span><span class="w"> </span><span class="s2">"Set1"</span><span class="p">)</span><span class="w"> </span><span class="c1"># Select nice colors</span><span class="w">
</span><span class="n">plot.new</span><span class="p">()</span><span class="w">
</span><span class="n">plot.window</span><span class="p">(</span><span class="n">xlim</span><span class="o">=</span><span class="nf">range</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">),</span><span class="w"> </span><span class="n">ylim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">-.5</span><span class="p">,</span><span class="w"> </span><span class="m">2.5</span><span class="p">))</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="nf">round</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">),</span><span class="w"> </span><span class="n">cex.axis</span><span class="o">=</span><span class="m">0.75</span><span class="p">,</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">-.5</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">.5</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">1.5</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="m">2.5</span><span class="p">),</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b0</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b1</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">2</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b2</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">3</span><span class="p">])</span><span class="w">
</span><span class="n">legend</span><span class="p">(</span><span class="s2">"topright"</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"b0"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b1"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b2"</span><span class="p">),</span><span class="w">
</span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">1</span><span class="o">:</span><span class="m">3</span><span class="p">],</span><span class="w"> </span><span class="n">lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">3</span><span class="p">),</span><span class="w"> </span><span class="n">bty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"n"</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">xlab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Added constant"</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">ylab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Parameter value"</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2018-05-05-CenteringPredictors.Rmd/unnamed-chunk-8-1.png" title="plot of chunk unnamed-chunk-8" alt="plot of chunk unnamed-chunk-8" style="display: block; margin: auto;" /></p>
<p>We see that the intercept changes as a function of <code class="language-plaintext highlighter-rouge">c</code>. The model at <code class="language-plaintext highlighter-rouge">c = 0</code> corresponds to the very first model we fitted above, and the model at <code class="language-plaintext highlighter-rouge">c = -1</code> corresponds to the model fitted with centered predictors (since both predictors have mean 1). But the key observation is that the main effects $\beta_1, \beta_2$ do not change. A proof of this and an exact expression for the intercept will fall out of our analysis of the model with interaction term in the last section of this blog post.</p>
<p><strong>Numerical experiment II: main effects + interaction term</strong></p>
<p>Next we show that this is different when adding the interaction term. We use the same sequence of <code class="language-plaintext highlighter-rouge">c</code> as above and fit regression models with interaction term:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="k">for</span><span class="p">(</span><span class="n">i</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">25</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">c_sequence</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w">
</span><span class="n">x1_c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">c</span><span class="w">
</span><span class="n">x2_c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">x2</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">c</span><span class="w">
</span><span class="n">lm_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">x1_c</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x2_c</span><span class="p">)</span><span class="w"> </span><span class="c1"># Fit model</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">b0</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">1</span><span class="p">]</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">b1</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">2</span><span class="p">]</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">b2</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">3</span><span class="p">]</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">b3</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">4</span><span class="p">]</span><span class="w">
</span><span class="n">yhat</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">predict</span><span class="p">(</span><span class="n">lm_obj</span><span class="p">,</span><span class="w"> </span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="n">y</span><span class="p">,</span><span class="w"> </span><span class="n">x1_c</span><span class="p">,</span><span class="w"> </span><span class="n">x2_c</span><span class="p">))</span><span class="w">
</span><span class="n">A</span><span class="o">$</span><span class="n">R2</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">var</span><span class="p">(</span><span class="n">yhat</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">y</span><span class="p">)</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">var</span><span class="p">(</span><span class="n">y</span><span class="p">)</span><span class="w"> </span><span class="c1"># Compute R2</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
<p>And again we plot all parameters $\beta_0, \beta_1, \beta_2, \beta_3$ as a function of <code class="language-plaintext highlighter-rouge">c</code>:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">plot.new</span><span class="p">()</span><span class="w">
</span><span class="n">plot.window</span><span class="p">(</span><span class="n">xlim</span><span class="o">=</span><span class="nf">range</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">),</span><span class="w"> </span><span class="n">ylim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">-.5</span><span class="p">,</span><span class="w"> </span><span class="m">2.5</span><span class="p">))</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="nf">round</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">),</span><span class="w"> </span><span class="n">cex.axis</span><span class="o">=</span><span class="m">0.75</span><span class="p">,</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">-.5</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">.5</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">1.5</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="m">2.5</span><span class="p">),</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b0</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b1</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">2</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b2</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">3</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b3</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">4</span><span class="p">])</span><span class="w">
</span><span class="n">legend</span><span class="p">(</span><span class="s2">"topright"</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"b0"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b1"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b2"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b3"</span><span class="p">),</span><span class="w">
</span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">1</span><span class="o">:</span><span class="m">4</span><span class="p">],</span><span class="w"> </span><span class="n">lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">3</span><span class="p">),</span><span class="w"> </span><span class="n">bty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"n"</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">xlab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Added constant"</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">ylab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Parameter value"</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2018-05-05-CenteringPredictors.Rmd/unnamed-chunk-10-1.png" title="plot of chunk unnamed-chunk-10" alt="plot of chunk unnamed-chunk-10" style="display: block; margin: auto;" /></p>
<p>This time both the intercept $\beta_0$ and the main effects $\beta_1, \beta_2$ are a function of <code class="language-plaintext highlighter-rouge">c</code>, while the interaction effect $\beta_3$ remains constant. The clearest explanation at this point is to go through the algebra, which accounts for these results exactly. We do this in the next section.</p>
<p><strong>Deriving functions for all effects</strong></p>
<p>We plug the definition of centering into the population regression model we introduced at the very beginning of this blogpost. This gives us every parameter as a function of two things: (1) the parameters in the original model and (2) the added constants. Above we added the same constant to both predictors. Here we consider the general case where the constants can differ.</p>
<p>Our original (unaltered) model is given by:</p>
\[\begin{aligned}
\mathbb{E}[Y] &= \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1X_2
\end{aligned}\]
<p>Now we plug in the predictors with added constants $c_1, c_2$, multiply out, and rearrange:</p>
\[\begin{aligned}
\mathbb{E}[Y] &= \beta_0^\ast + \beta_1^\ast (X_1 + c_1) + \beta_2^\ast (X_2 + c_2) + \beta_3^\ast (X_1 + c_1) (X_2 + c_2) \\
& = \beta_0^\ast + \beta_1^\ast X_1 + \beta_1^\ast c_1 + \beta_2^\ast X_2 + \beta_2^\ast c_2
+ \beta_3^\ast X_1X_2 + \beta_3^\ast X_1 c_2 + \beta_3^\ast c_1X_2 + \beta_3^\ast c_1c_2 \\
&= (\beta_0^\ast + \beta_1^\ast c_1 + \beta_2^\ast c_2 + \beta_3^\ast c_1c_2) + (\beta_1^\ast + \beta_3^\ast c_2)X_1 + (\beta_2^\ast + \beta_3^\ast c_1)X_2 + \beta_3^\ast X_1X_2
\end{aligned}\]
<p>Now if we equate the respective intercept and slope terms we get:</p>
\[\beta_0 = \beta_0^\ast + \beta_1^\ast c_1 + \beta_2^\ast c_2 + \beta_3^\ast c_1c_2\]
\[\beta_1 = \beta_1^\ast + \beta_3^\ast c_2\]
\[\beta_2 = \beta_2^\ast + \beta_3^\ast c_1\]
<p>and</p>
\[\beta_3 = \beta_3^\ast\]
<p>Now we solve for the parameters $\beta_0^\ast, \beta_1^\ast, \beta_2^\ast, \beta_3^\ast$ of the model with constants added to the predictors.</p>
<p>Because we know $\beta_3 = \beta_3^\ast$, we can write $\beta_2 = \beta_2^\ast + \beta_3 c_1$ and solve</p>
\[\beta_2^\ast = \beta_2 - \beta_3 c_1\]
<p>The same goes for $\beta_1^\ast$ so we have</p>
\[\beta_1^\ast = \beta_1 - \beta_3 c_2\]
<p>Finally, to obtain a formula for $\beta_0^\ast$ we plug the just obtained expressions for $\beta_1^\ast$, $\beta_2^\ast$ and $\beta_3^\ast$ into</p>
\[\beta_0 = \beta_0^\ast + \beta_1^\ast c_1 + \beta_2^\ast c_2 + \beta_3^\ast c_1c_2\]
<p>and get</p>
\[\begin{aligned}
\beta_0 &= \beta_0^\ast + (\beta_1 - \beta_3 c_2)c_1 + (\beta_2 - \beta_3 c_1)c_2 + \beta_3 c_1c_2 \\
&= \beta_0^\ast + \beta_1 c_1 - \beta_3 c_2 c_1 + \beta_2 c_2 - \beta_3 c_2 c_1 + \beta_3 c_1c_2 \\
&= \beta_0^\ast + \beta_1 c_1 + \beta_2 c_2 - \beta_3 c_1c_2
\end{aligned}\]
<p>and can solve for $\beta_0^\ast$:</p>
\[\beta_0^\ast = \beta_0 - \beta_1 c_1 - \beta_2 c_2 + \beta_3 c_1c_2\]
<p>Let’s check whether these formulas predict the parameter changes as a function of <code class="language-plaintext highlighter-rouge">c</code> in the numerical experiment above.</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">lm_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">x1</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x2</span><span class="p">)</span><span class="w"> </span><span class="c1"># Reference model (no constant added)</span><span class="w">
</span><span class="n">b0</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">1</span><span class="p">]</span><span class="w">
</span><span class="n">b1</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">2</span><span class="p">]</span><span class="w">
</span><span class="n">b2</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">3</span><span class="p">]</span><span class="w">
</span><span class="n">b3</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">lm_obj</span><span class="o">$</span><span class="n">coefficients</span><span class="p">[</span><span class="m">4</span><span class="p">]</span><span class="w">
</span><span class="n">B</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">A</span><span class="w"> </span><span class="c1"># Storage for predicted parameters</span><span class="w">
</span><span class="k">for</span><span class="p">(</span><span class="n">i</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="m">25</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="n">c</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">c_sequence</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w">
</span><span class="n">B</span><span class="o">$</span><span class="n">b0</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">b0</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">b1</span><span class="o">*</span><span class="n">c</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">b2</span><span class="o">*</span><span class="n">c</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">b3</span><span class="o">*</span><span class="n">c</span><span class="o">*</span><span class="n">c</span><span class="w">
</span><span class="n">B</span><span class="o">$</span><span class="n">b1</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">b1</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">b3</span><span class="o">*</span><span class="n">c</span><span class="w">
</span><span class="n">B</span><span class="o">$</span><span class="n">b2</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">b2</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">b3</span><span class="o">*</span><span class="n">c</span><span class="w">
</span><span class="n">B</span><span class="o">$</span><span class="n">b3</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">b3</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
<p>We plot the parameters computed from the derived expressions as points on top of the empirical results from the numerical experiment above:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">plot.new</span><span class="p">()</span><span class="w">
</span><span class="n">plot.window</span><span class="p">(</span><span class="n">xlim</span><span class="o">=</span><span class="nf">range</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">),</span><span class="w"> </span><span class="n">ylim</span><span class="o">=</span><span class="nf">c</span><span class="p">(</span><span class="m">-.5</span><span class="p">,</span><span class="w"> </span><span class="m">2.5</span><span class="p">))</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="nf">round</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">),</span><span class="w"> </span><span class="n">cex.axis</span><span class="o">=</span><span class="m">0.75</span><span class="p">,</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">axis</span><span class="p">(</span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="m">-.5</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="m">.5</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">1.5</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span class="m">2.5</span><span class="p">),</span><span class="w"> </span><span class="n">las</span><span class="o">=</span><span class="m">2</span><span class="p">)</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b0</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b1</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">2</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b2</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">3</span><span class="p">])</span><span class="w">
</span><span class="n">lines</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">A</span><span class="o">$</span><span class="n">b3</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">4</span><span class="p">])</span><span class="w">
</span><span class="n">legend</span><span class="p">(</span><span class="s2">"topright"</span><span class="p">,</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"b0"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b1"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b2"</span><span class="p">,</span><span class="w"> </span><span class="s2">"b3"</span><span class="p">),</span><span class="w">
</span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">1</span><span class="o">:</span><span class="m">4</span><span class="p">],</span><span class="w"> </span><span class="n">lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="m">3</span><span class="p">),</span><span class="w"> </span><span class="n">bty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"n"</span><span class="p">)</span><span class="w">
</span><span class="c1"># Plot predictions</span><span class="w">
</span><span class="n">points</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">B</span><span class="o">$</span><span class="n">b0</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">1</span><span class="p">])</span><span class="w">
</span><span class="n">points</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">B</span><span class="o">$</span><span class="n">b1</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">2</span><span class="p">])</span><span class="w">
</span><span class="n">points</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">B</span><span class="o">$</span><span class="n">b2</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">3</span><span class="p">])</span><span class="w">
</span><span class="n">points</span><span class="p">(</span><span class="n">c_sequence</span><span class="p">,</span><span class="w"> </span><span class="n">B</span><span class="o">$</span><span class="n">b3</span><span class="p">,</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cols</span><span class="p">[</span><span class="m">4</span><span class="p">])</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">xlab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Added constant"</span><span class="p">)</span><span class="w">
</span><span class="n">title</span><span class="p">(</span><span class="n">ylab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Parameter value"</span><span class="p">)</span></code></pre></figure>
<p><img src="/assets/img/2018-05-05-CenteringPredictors.Rmd/unnamed-chunk-12-1.png" title="plot of chunk unnamed-chunk-12" alt="plot of chunk unnamed-chunk-12" style="display: block; margin: auto;" /></p>
<p>and they match the numerical results exactly.</p>
<p>The derived expressions thus describe precisely how the parameters change as a function of the parameters of the reference model and the added constants.</p>
<p>If we set $\beta_3 = 0$, we get the same derivation for the regression model <em>without</em> interaction term. We find that $\beta_1^* = \beta_1$, $\beta_2^* = \beta_2$, and $\beta_0^* = \beta_0 - \beta_1 c_1 - \beta_2 c_2$.</p>
Thu, 16 Feb 2017 10:00:00 +0000
http://jmbh.github.io//CenteringPredictors/
Deconstructing 'Measurement error and the replication crisis'<p>Yesterday, I read <a href="http://science.sciencemag.org/content/355/6325/584/tab-pdf">‘Measurement error and the replication crisis’</a> by <a href="http://hhd.psu.edu/dsg/eric-loken-phd-assistant-director">Eric Loken</a> and <a href="http://andrewgelman.com">Andrew Gelman</a>, which left me puzzled. The first part of the paper consists of general statements about measurement error. The second part consists of the claim that in the presence of measurement error, we overestimate the true effect when having a small sample size. This sounded wrong enough to ask the authors for their <a href="https://raw.githubusercontent.com/jmbh/jmbh.github.io/master/figs/measurementerror/graph%20codes%20to%20share%20for%20science%20paper%20final-2.txt">simulation code</a> and spent a couple of hours figuring out what they did in their paper. I am offering a short and a long version.</p>
<p><strong>Edit Feb 17th:</strong> After a nice email conversation with the authors, I now know that they <em>do</em> make their general argument only under the condition of selecting on significance. Their result then trivially follows from the increased variance of the sampling distribution due to adding ‘measurement error’ (see section (3) below). My source of confusion was that they talk about selection on significance in the paper, but then do not select on significance in the two scatter plots, and incorrectly state in the figure title that they do. The conclusions of this blog post are still valid when making the assumptions in (1), so I leave it online in case somebody finds (parts of) it interesting.</p>
<h2 id="the-short-version">The Short Version</h2>
<p>My conclusion is that the authors show the following: If an estimator is biased (here by the presence of measurement error), then the proportion of estimates that overestimate the true effect depends on the variance of the sampling distribution (which depends on $N$). While this is an interesting insight, the authors do not say this clearly anywhere in the paper. Instead, they use formulations that suggest that they refer to the expected value of the estimator, which does not depend on the sample size. To make things worse, they plot the estimates in a way that suggests that the variance of the estimators is equal for N = 50 and N = 3000 and that the effect is driven by a difference in expected value, while the reverse is true.</p>
<h2 id="the-long-version">The Long Version</h2>
<p>I try to make an argument for my claims in the ‘short version’ above in 6 steps: (1) we make clear what claim the authors are making, (2) we define our terminology, (3) we investigate what adding measurement error does on the population level, (4) we see how this influences the characteristics of estimators based on different sample sizes, (5) we summarize our results and (6) get back to the paper.</p>
<p><strong>(1) The exact claim</strong></p>
<p>The authors write <em>‘In a low-noise setting, the theoretical results of Hausman and others correctly show that measurement error will attenuate coefficient estimates. But we can demonstrate with a simple exercise that the opposite occurs in the presence of high noise and selection on statistical significance.’ (p. 584/585)</em>. From this we can deduce that the authors claim that ‘In a high noise setting, the presence of measurement error and selection on statistical significance leads to an increase in coefficient estimates’. However, the authors do not select on statistical significance in their simulation, hence we also drop this condition and arrive at the claim ‘In a high noise setting, the presence of measurement error leads to an increase in coefficient estimates’.</p>
<p>What this statement means is unclear to me. Under the reasonable assumption that the authors did not make a fundamental mistake, the rest of this blogpost is about finding out what the authors could have meant.</p>
<p><strong>(2) Terminology (for reference)</strong></p>
<p>In the paper, ‘measurement error’, ‘noise’ and ‘variance’ are used interchangeably. Here, with variances we refer to the variances of the dimensions of the bivariate Gaussian distribution, if not stated otherwise. With measurement error we mean another bivariate Gaussian distribution with zero covariance. By a noisy setting, we refer to a situation with a low signal to noise ratio. This is defined relative to another setting, which is less noisy. The signal to noise ratio is a function of $N$ and is related to the variance of the sampling distribution of the estimator. All these things will become clear in sections (3) and (4).</p>
<p><strong>(3) What does ‘adding measurement error’ mean on the population level?</strong></p>
<p>In order to evaluate the above claim with respect to the simulation setup of the authors, we need to know the simulation setup. Fortunately, the authors provided the code in a quick and friendly email.</p>
<p>The authors consider the problem of estimating the covariance of a bivariate Gaussian distribution from a finite number of observations. The bivariate Gaussian distribution has the density</p>
\[f(x_1, x_2) = \frac{1}{\sqrt{(2 \pi)^k | \mathbf{ \Sigma } | }} \exp \bigl\{ - \frac{1}{2} (x - \mu)^{\top} \mathbf{ \Sigma }^{-1} (x - \mu) \bigr\},\]
<p>where in our case the covariance $cov(x_1, x_2) = r > 0$ is some positive value, so the covariance matrix $\Sigma$ has entries:</p>
\[\Sigma = \begin{bmatrix}
1 & r \\[0.3em]
r & 1
\end{bmatrix}\]
<p>Note that if we scale both dimensions of the Gaussian to $\mu_1 = \mu_2 = 0$ and $\sigma_1 = \sigma_2 = 1$, the correlation coefficient is equal to the coefficient of the regression of $x_1$ on $x_2$ or vice versa. Thus all results obtained here also extend to the regression coefficient that is referred to in the paper.</p>
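<p>A quick sketch with simulated data (sample size and effect chosen arbitrarily) confirms this equivalence:</p>

```r
# After standardizing both variables, the least-squares slope
# equals the sample correlation coefficient
set.seed(1)
x2 <- rnorm(500)
x1 <- 0.4 * x2 + rnorm(500)
z1 <- c(scale(x1))  # mean 0, sd 1
z2 <- c(scale(x2))
slope <- unname(coef(lm(z1 ~ z2))[2])
slope        # standardized regression coefficient
cor(x1, x2)  # identical value
```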
<p>Now the authors ‘add measurement error’ to the two variables which consists of independent Gaussian noise with a variance $k > 0$, where $k$ is a constant. Notice that these two variables can also be described by a bivariate Gaussian with covariance matrix $\Sigma^{ME}$:</p>
\[\Sigma^{ME} = \begin{bmatrix}
k & 0 \\[0.3em]
0 & k
\end{bmatrix}\]
<p>Notice that adding ‘measurement error’ as done by the authors is the same as adding these two Gaussians. Addition is a linear transformation and hence the resulting distribution is again a bivariate Gaussian distribution. Indeed, it turns out that the covariance matrix $\Sigma^A$ of the resulting bivariate Gaussian is the sum of the covariance matrices $\Sigma$ and $\Sigma^{ME}$ of the two bivariate Gaussians:</p>
\[\Sigma^A = \begin{bmatrix}
1 & r \\[0.3em]
r & 1
\end{bmatrix}
+
\begin{bmatrix}
k & 0 \\[0.3em]
0 & k
\end{bmatrix}
=
\begin{bmatrix}
k + 1 & r \\[0.3em]
r & k + 1
\end{bmatrix}\]
<p>Now, if we renormalize the variances to get back to a correlation matrix it becomes obvious that adding ‘measurement error’ has to decrease the absolute value of the covariance:</p>
\[\Sigma^{A_{norm}} = \begin{bmatrix}
1 & \frac{r}{k + 1} \\[0.3em]
\frac{r}{k + 1} & 1
\end{bmatrix}\]
<p>Note that $k > 0$, hence $\frac{r}{k + 1} < r$: the absolute value of the covariance is smaller in $\Sigma^{A_{norm}}$ than in $\Sigma$ in the population.</p>
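<p>A short simulation sketch (with arbitrarily chosen $r$, $k$, and sample size) illustrates this attenuation by the factor $\frac{1}{k+1}$:</p>

```r
# Adding independent Gaussian 'measurement error' with variance k
# shrinks the population correlation from r to r/(k+1)
set.seed(1)
n <- 1e6
r <- 0.5
k <- 1
x1 <- rnorm(n)
x2 <- r * x1 + sqrt(1 - r^2) * rnorm(n)  # cor(x1, x2) is r in the population
e1 <- rnorm(n, sd = sqrt(k))             # measurement error on x1
e2 <- rnorm(n, sd = sqrt(k))             # measurement error on x2
cor(x1, x2)            # close to r = 0.5
cor(x1 + e1, x2 + e2)  # close to r/(k+1) = 0.25
```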
<p><strong>(4) Properties of the Estimator</strong></p>
<p>We now consider the estimate $\hat \sigma_{1,2}$ for the covariance between $x_1$ and $x_2$ in the bivariate Gaussian with covariance matrix $\Sigma^{A_{norm}}$ which is ‘corrupted’ by measurement error. We obtain $\hat \sigma_{1,2}$ via the least squares estimator, <a href="http://math.stackexchange.com/questions/787939/show-that-the-least-squares-estimator-of-the-slope-is-an-unbiased-estimator-of-t">which is an unbiased estimator</a> for $\frac{r}{k + 1}$.</p>
<p>What does this mean? This means that by the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem">Central limit theorem</a>, the sampling distribution will be a Gaussian distribution that is centered on the true coefficient, which is $\frac{r}{k + 1}$. Thus, if we take many samples of size $N$ and compute a coefficient estimate on each of them, the mean coefficient will be equal to $\frac{r}{k + 1}$:</p>
\[\mathbb{E} [\hat \sigma_{1,2}] = \lim_{S \rightarrow \infty} \frac{1}{S} \sum_{i=1}^{S} \hat \sigma_{1,2}^i = \frac{r}{k + 1}\]
<p>From the fact that the Gaussian density is symmetric and centered on the true effect, it follows that $\hat \sigma_{1,2}$ will <em>equally often</em> under- and overestimate the true effect $\frac{r}{k + 1}$. It is important to stress that this is true irrespective of the variance of the sampling distribution (which depends on $N$). We illustrate this in the following figure, which shows the empirical sampling distributions from the simulation of the authors:</p>
<p><img src="https://raw.githubusercontent.com/jmbh/jmbh.github.io/master/figs/measurementerror/SamplingDistri_new.png" alt="center" /></p>
<p>The solid black line indicates the density estimate of the empirical sampling distribution of the coefficient estimates in the low noise (N = 3000) case. The solid red line indicates the density of the empirical sampling distribution in the high noise (N = 50) case. The dashed black and red lines indicate the arithmetic means of the corresponding sampling distributions. The green dashed line indicates the true coefficient of the bivariate Gaussian with added measurement error. As predicted from the fact that $\hat \sigma_{1,2}$ is an unbiased estimator independent of $N$, the mean parameter estimates in both the low and the high noise setting (black/red dashed lines) are close to the true coefficient $\frac{r}{k + 1}$ (dashed green line).</p>
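<p>A simulation of this kind can be sketched in a few lines of R. The settings below ($r = 0.5$, $k = 1$, 1000 replications) are illustrative assumptions, not necessarily the exact values used by the authors; the estimator is the least squares slope on standardized variables:</p>

```r
# Mean coefficient estimate across S simulated samples of size N: with
# measurement error of variance k on both variables, it should land on
# r / (k + 1) for small and large N alike (unbiasedness).
set.seed(1)
r <- 0.5; k <- 1
mean_est <- function(N, S = 1000) {
  ests <- replicate(S, {
    x1 <- rnorm(N)                                # true scores
    x2 <- r * x1 + rnorm(N, sd = sqrt(1 - r^2))   # correlation r with x1
    x1e <- x1 + rnorm(N, sd = sqrt(k))            # add measurement error
    x2e <- x2 + rnorm(N, sd = sqrt(k))
    coef(lm(scale(x2e) ~ scale(x1e)))[2]          # standardized slope
  })
  mean(ests)
}
mean_est(50)     # high noise: close to r / (k + 1) = .25
mean_est(3000)   # low noise: also close to .25
```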
<p>Before moving on, we define $\mathcal{P}^\uparrow \in [0,1]$ as the proportion of coefficient estimates that are larger than the true effect under consideration and hence overestimate it. $\mathcal{P}^\uparrow_H$ refers to that proportion in the high noise (small $N$) setting, $\mathcal{P}^\uparrow_L$ to that proportion in the low noise (large $N$) setting.</p>
<p>Now, the second important observation is that for both noise settings we have $\mathcal{P}^\uparrow_H = \mathcal{P}^\uparrow_L = \frac{1}{2}$, which implies that we equally often under- and overestimate the true effect. Another way of saying this is that the area under the curve left of the green line is equal to the area under the curve right of the green line, for both sampling distributions.</p>
<p>We now make the crucial step by considering $\hat \sigma_{1,2}$ not as an estimate for the covariance $\frac{r}{k + 1}$ in $\Sigma^{A_{norm}}$, but for the covariance $r$ of the ‘true’ bivariate Gaussian without added measurement error with covariance matrix $\Sigma$. We <em>know</em> that $\hat \sigma_{1,2}$ is an unbiased estimator for $\frac{r}{k + 1}$ and we know $\frac{r}{k + 1} < r$. From this follows that $\hat{\sigma}_{1,2}$ is a <em>biased</em> estimator for $r$. Specifically, the estimator is biased downwards.</p>
<p>We again look at the proportions of coefficient estimates that under- and overestimate the true effect $r$ (the dashed blue line in the figure). We first consider the low noise case: we overestimate $r$ <em>less often</em> than we overestimated $\frac{r}{k + 1}$, which implies $\mathcal{P}^\uparrow_L < \frac{1}{2}$. Again, this is the same as saying that the area under the curve right of the blue line is smaller than the area under the curve left of the blue line.</p>
<p>For the high noise case the exact same is true, i.e. $\mathcal{P}^\uparrow_H < \frac{1}{2}$. Let’s define $q := \frac{\mathcal{P}^\uparrow_H}{\mathcal{P}^\uparrow_L}$. What we <em>do</em> have is that $\mathcal{P}^\uparrow_H > \mathcal{P}^\uparrow_L$ and hence $q > 1$. This means that in the presence of measurement error, we overestimate <em>absolutely less</em> often than we underestimate in all settings; however, we overestimate <em>relatively more</em> often in a high noise (small $N$) setting than in a low noise (large $N$) setting. Let’s let this sink in for a moment and then move on to the summary:</p>
<p><strong>(5) Summary</strong></p>
<p>What have we found? We found that if our estimator is biased downwards (here by measurement error), then different sample sizes (and hence different variances of the sampling distribution) lead to different proportions of coefficient estimates that overestimate the true effect.</p>
<p>However, it is important to stress: when keeping $N$ constant and introducing measurement error, the proportion of overestimating estimates <em>decreases</em> compared to the situation without measurement error. This is because the whole sampling distribution is shifted towards zero in the presence of measurement error (the blue line is shifted to the position of the green line in the Figure).</p>
<p>The only thing that increases is $q$, which means that in the presence of measurement error we <em>relatively</em> overestimate more in a high noise setting (small $N$) than in a low noise setting (large $N$). What determines $q$? The larger the difference between the variances of the two sampling distributions, the larger $q$. And the more we shift the sampling distribution towards zero (by adding measurement error), the larger $q$.</p>
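<p>Under the simplifying assumption that the sampling distribution is Gaussian with standard deviation $1/\sqrt{N}$ (a rough stand-in for the true standard error, with the illustrative values $r = 0.5$, $k = 1$), both claims can be checked directly:</p>

```r
# P_up(N): proportion of estimates above the uncorrupted effect r, when
# the estimator is centered on r / (k + 1) (biased downwards) with
# standard deviation 1 / sqrt(N).
r <- 0.5; k <- 1
P_up <- function(N)
  pnorm(r, mean = r / (k + 1), sd = 1 / sqrt(N), lower.tail = FALSE)
P_H <- P_up(50)    # high noise (small N)
P_L <- P_up(3000)  # low noise (large N)
c(P_H = P_H, P_L = P_L, q = P_H / P_L)
# both proportions are below 1/2, but q is (much) larger than 1
```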
<p><strong>(6) Back to the Paper</strong></p>
<p>I think the results stated in (5) are pretty far away from the claim in the paper, which was ‘In a high noise setting, the presence of measurement error leads to an increase in coefficient estimates’. This statement rather suggests that introducing measurement error increases the expected value of the sampling distribution (moving the blue line to the right instead of to the left), which is, as we have seen, incorrect. This false suggestion is strengthened by the scaling of the figures. We illustrate this by plotting the figure as shown in the paper (top row) and with equal coordinate systems (bottom row).</p>
<p><img src="https://raw.githubusercontent.com/jmbh/jmbh.github.io/master/figs/measurementerror/ScalingIssue.png" alt="center" /></p>
<p>The top row suggests that the difference between the low/high noise setting arises because the whole cloud is ‘shifted’ downwards in the low noise setting. This would mean that the sampling distributions are shifted differently depending on the noise setting (sample size) when adding measurement error. When plotting the data in the same coordinate system, however, it is clear that the expected values do not change and that the effect is driven by the differing variances of the estimator.</p>
<p>And one more thing: in the right panel in the figure of the paper the authors plot $\mathcal{P}^\uparrow$ as a function of $N$. Note that from the discussion in (4) it follows that this value can <em>never</em> be larger than $\frac{1}{2}$ as long as the estimator is unbiased or biased downwards. So there must have been some mistake.</p>
<h2 id="conclusion">Conclusion</h2>
<p>This was a fun opportunity to do some statistics detective work. However, the paper’s lack of clarity can also do real harm by confusing readers about important concepts. There is of course also the possibility that I just fully misunderstood the paper. In that case I hope readers will point out my mistakes.</p>
<p>The code to exactly reproduce the above figures can be found <a href="https://raw.githubusercontent.com/jmbh/jmbh.github.io/master/figs/measurementerror/RCode_ME_comment.R">here</a>.</p>
<p>I would like to thank <a href="https://fdabl.github.io/">Fabian Dablander</a> and <a href="https://www.peteredelsbrunner.com/">Peter Edelsbrunner</a> for helpful comments on this blogpost. In addition, I would like to thank <a href="https://www.uu.nl/staff/ORyan/0">Oisín Ryan</a> and <a href="https://www.uu.nl/medewerkers/JJBroere/0">Joris Broere</a> for an interesting discussion on a train ride from Eindhoven to Utrecht yesterday, and I apologize to about 15 anonymous Dutch travelers who had to endure a heated statistical debate.</p>
<p>I am looking forward to comments, complaints and corrections.</p>
Thu, 16 Feb 2017 10:00:00 +0000
http://jmbh.github.io//Deconstructing-ME/
Predictability in Network Models<p>Network models have become a popular way to abstract complex systems and gain insights into relational patterns among observed variables in <a href="http://www.sachaepskamp.com/files/NA/NetworkTakeover.pdf">many areas of science</a>. The majority of these applications focuses on analyzing the structure of the network. However, if the network is not directly observed (Alice and Bob are friends) but <em>estimated</em> from data (there is a relation between smoking and cancer), we can analyze - in addition to the network structure - the predictability of the nodes in the network. That is, we would like to know: how well can a given node in the network be predicted by all remaining nodes in the network?</p>
<p>Predictability is interesting for several reasons:</p>
<ol>
<li>It gives us an idea of how <em>practically relevant</em> edges are: if node A is connected to many other nodes but these explain, let’s say, only 1% of its variance, how interesting are the edges connected to A?</li>
<li>We get an indication of how to design an <em>intervention</em> in order to achieve a change in a certain set of nodes and we can estimate how efficient the intervention will be</li>
<li>It tells us to which extent different parts of the network are <em>self-determined or determined by other factors</em> that are not included in the network</li>
</ol>
<p>In this blogpost, we use the R-package <a href="https://cran.r-project.org/web/packages/mgm/index.html">mgm</a> to estimate a network model and compute node wise predictability measures for a <a href="http://cpx.sagepub.com/content/3/6/836.short">dataset</a> on <a href="https://en.wikipedia.org/wiki/Posttraumatic_stress_disorder">Post Traumatic Stress Disorder (PTSD)</a> symptoms of <a href="https://en.wikipedia.org/wiki/2008_Sichuan_earthquake">Chinese earthquake victims</a>. We visualize the network model and predictability using <a href="https://cran.r-project.org/web/packages/qgraph/index.html">the qgraph package</a> and discuss how the combination of network model and node wise predictability can be used to design effective interventions on the symptom network.</p>
<h2 id="load-data">Load Data</h2>
<p>We load the data which the authors made openly available:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">read.csv</span><span class="p">(</span><span class="s1">'http://psychosystems.org/wp-content/uploads/2014/10/Wenchuan.csv'</span><span class="p">)</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">na.omit</span><span class="p">(</span><span class="n">data</span><span class="p">)</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">as.matrix</span><span class="p">(</span><span class="n">data</span><span class="p">)</span><span class="w">
</span><span class="n">p</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">ncol</span><span class="p">(</span><span class="n">data</span><span class="p">)</span><span class="w">
</span><span class="nf">dim</span><span class="p">(</span><span class="n">data</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## [1] 344 17</code></pre></figure>
<p>The dataset contains complete responses to 17 PTSD symptoms of 344 individuals. The answer categories for the intensity of symptoms range from 1 ‘not at all’ to 5 ‘extremely’. The exact wording of all symptoms is given in the <a href="http://cpx.sagepub.com/content/3/6/836.short">paper of McNally and colleagues</a>.</p>
<h2 id="estimate-network-model">Estimate Network Model</h2>
<p>We estimate a <a href="http://www.jmlr.org/proceedings/papers/v33/yang14a.pdf">Mixed Graphical Model (MGM)</a>, where we treat all variables as continuous-Gaussian variables. Hence we set the type of all variables to <code class="language-plaintext highlighter-rouge">type = 'g'</code> and the number of categories for each variable to <code class="language-plaintext highlighter-rouge">level = 1</code>, the default for continuous variables:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">mgm</span><span class="p">)</span><span class="w">
</span><span class="n">set.seed</span><span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="w">
</span><span class="n">fit_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">mgm</span><span class="p">(</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">data</span><span class="p">,</span><span class="w">
</span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="s1">'g'</span><span class="p">,</span><span class="w"> </span><span class="n">p</span><span class="p">),</span><span class="w">
</span><span class="n">level</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="n">p</span><span class="p">),</span><span class="w">
</span><span class="n">lambdaSel</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">'CV'</span><span class="p">,</span><span class="w">
</span><span class="n">ruleReg</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">'OR'</span><span class="p">,</span><span class="w">
</span><span class="n">pbar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Note that the sign of parameter estimates is stored separately; see ?mgm</code></pre></figure>
<p>For more info on how to estimate Mixed Graphical Models using the mgm package see <a href="http://jmbh.github.io/Estimation-of-mixed-graphical-models/">this previous post</a> or the <a href="https://arxiv.org/pdf/1510.06871v2.pdf">mgm paper</a>.</p>
<h2 id="compute-predictability-of-nodes">Compute Predictability of Nodes</h2>
<p>After estimating the network model we are ready to compute the predictability for each node. Node wise predictability (or error) can be computed easily, because the graph is estimated by taking each node in turn and regressing it on all remaining nodes. As a measure for predictability we pick the proportion of explained variance, as it is straightforward to interpret: 0 means the node at hand is not explained at all by other nodes in the network, 1 means perfect prediction. We centered all variables before estimation in order to remove any influence of the intercepts. For a detailed description of how to compute predictions and how to choose predictability measures, have a look at <a href="https://link.springer.com/article/10.3758/s13428-017-0910-x">this paper</a>. In case there are additional variable types (e.g. categorical) in the network, we can choose an appropriate measure for these variables (e.g. % correct classification; for details see <code class="language-plaintext highlighter-rouge">?predict.mgm</code>).</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">pred_obj</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">predict</span><span class="p">(</span><span class="n">object</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">fit_obj</span><span class="p">,</span><span class="w">
</span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">data</span><span class="p">,</span><span class="w">
</span><span class="n">errorCon</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">'R2'</span><span class="p">)</span><span class="w">
</span><span class="n">pred_obj</span><span class="o">$</span><span class="n">error</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-text" data-lang="text">## Variable R2
## 1 intrusion 0.637
## 2 dreams 0.650
## 3 flash 0.602
## 4 upset 0.637
## 5 physior 0.627
## 6 avoidth 0.686
## 7 avoidact 0.681
## 8 amnesia 0.410
## 9 lossint 0.521
## 10 distant 0.498
## 11 numb 0.458
## 12 future 0.543
## 13 sleep 0.564
## 14 anger 0.561
## 15 concen 0.630
## 16 hyper 0.675
## 17 startle 0.621</code></pre></figure>
<p>We calculated the proportion of explained variance ($R^2$) for each of the nodes in the network. Next, we visualize the estimated network and discuss its structure in relation to explained variance.</p>
<h2 id="visualize-network--predictability">Visualize Network & Predictability</h2>
<p>We provide the estimated weighted adjacency matrix and the node wise predictability measures as arguments to <code class="language-plaintext highlighter-rouge">qgraph()</code> to obtain a network visualization including the predictability measure $R^2$:</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">library</span><span class="p">(</span><span class="n">qgraph</span><span class="p">)</span><span class="w">
</span><span class="n">qgraph</span><span class="p">(</span><span class="n">fit_obj</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">wadj</span><span class="p">,</span><span class="w"> </span><span class="c1"># weighted adjacency matrix as input</span><span class="w">
</span><span class="n">layout</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">'spring'</span><span class="p">,</span><span class="w">
</span><span class="n">pie</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">pred_obj</span><span class="o">$</span><span class="n">error</span><span class="p">[,</span><span class="m">2</span><span class="p">],</span><span class="w"> </span><span class="c1"># provide errors as input</span><span class="w">
</span><span class="n">pieColor</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="s1">'#377EB8'</span><span class="p">,</span><span class="n">p</span><span class="p">),</span><span class="w">
</span><span class="n">edge.color</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">fit_obj</span><span class="o">$</span><span class="n">pairwise</span><span class="o">$</span><span class="n">edgecolor</span><span class="p">,</span><span class="w">
</span><span class="n">labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">colnames</span><span class="p">(</span><span class="n">data</span><span class="p">))</span></code></pre></figure>
<p><img src="/assets/img/2016-11-01-Predictability-in-network-models.Rmd/unnamed-chunk-4-1.png" title="plot of chunk unnamed-chunk-4" alt="plot of chunk unnamed-chunk-4" style="display: block; margin: auto;" /></p>
<p>The <code class="language-plaintext highlighter-rouge">mgm</code>-package also allows computing predictability for higher-order (or moderated) MGMs and for (mixed) Vector Autoregressive (VAR) models. For details see <a href="https://link.springer.com/article/10.3758/s13428-017-0910-x">this paper</a>. For an early paper looking into the predictability of symptoms of different psychological disorders, see <a href="https://www.cambridge.org/core/journals/psychological-medicine/article/how-predictable-are-symptoms-in-psychopathological-networks-a-reanalysis-of-18-published-datasets/84F1D7F73DB03586ABA48783419FE62A">this paper</a>.</p>
Tue, 01 Nov 2016 10:00:00 +0000
http://jmbh.github.io//Predictability-in-network-models/
Graphical Analysis of German Parliament Voting Pattern<p>We use network visualizations to look into the voting patterns in the current German parliament. I downloaded the data <a href="https://www.bundestag.de/abstimmung">here</a> and all figures can be reproduced using the R code available on <a href="https://github.com/jmbh/bundestag">Github</a>.</p>
<p>Missing values, invalid votes, abstentions and absences from the vote were coded as (-1), such that all other responses are a yes (1) or no (2) vote. We use the Pearson correlation as a measure of voting similarity and regard voting behavior coded as (-1) as noise in the dataset. 36 of the 659 members of parliament were removed from the data because more than 50% of their votes were coded as (-1); the reason is that they either joined or left the parliament during the analyzed time period.</p>
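<p>This filtering step can be sketched as follows; the toy <code>votes</code> matrix (members in rows, recorded votes in columns) is simulated here and stands in for the real data:</p>

```r
# Drop members of parliament whose votes are coded (-1) more than 50% of
# the time (toy data; the real matrix has 659 members and 136 votes).
set.seed(1)
votes <- matrix(sample(c(-1, 1, 2), 659 * 136, replace = TRUE,
                       prob = c(.2, .4, .4)), nrow = 659)
prop_noise <- rowMeans(votes == -1)                # share of (-1) per member
votes <- votes[prop_noise <= 0.5, , drop = FALSE]  # keep the rest
nrow(votes)                                        # members kept
```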
<p><em>Disclaimer: note that only for a fraction of the bills passed in the German parliament votes are recorded (and used here) and that relations between single members of parliaments might be artifacts of the noise-coding. Moreover, the data is quite scarce (136 bills). Therefore we should not draw any strong conclusions from this coarse-grained analysis.</em></p>
<h2 id="voting-pattern-amongst-members-of-parliament">Voting Pattern Amongst Members of Parliament</h2>
<p>We first compute the correlations between the voting behavior of all pairs of members of parliament, which gives us a 623 x 623 correlation matrix. We then visualize this correlation matrix using the force-directed <a href="https://en.wikipedia.org/wiki/Force-directed_graph_drawing">Fruchterman Reingold algorithm</a> as implemented in the <a href="https://cran.r-project.org/web/packages/qgraph/index.html">qgraph package</a>. This algorithm places nodes (politicians) on the plane such that edges (connections) have comparable length and cross as little as possible.</p>
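<p>A sketch of this step with simulated toy data (the real analysis uses the 623 x 136 vote matrix; the color arguments are assumptions about how the plot was styled, not the exact code used for the figure):</p>

```r
# Voting-similarity matrix: correlate members across recorded votes.
set.seed(1)
votes <- matrix(sample(c(-1, 1, 2), 30 * 136, replace = TRUE), nrow = 30)
cormat <- cor(t(votes))      # 30 x 30 correlation matrix (members x members)
# Force-directed Fruchterman-Reingold layout via qgraph, if installed:
if (requireNamespace("qgraph", quietly = TRUE))
  qgraph::qgraph(cormat, layout = 'spring',
                 posCol = 'darkgreen', negCol = 'red')
```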
<p><img src="http://jmbh.github.io/figs/bundestag/bundestag_cor_full.jpg" alt="center" /></p>
<p>(For readers on R-Bloggers.com: <a href="http://jmbh.github.io/Analyzing-voting-pattern-of-German-parliament/">click here for the original post with larger figures.</a>)</p>
<p>Green edges indicate positive correlations (voter agreement) and red edges indicate negative correlations (voter disagreement). The width of the edges is proportional to the strength (absolute value) of the correlation. We see that the green party (B90/GRUENE) clusters together, as well as the left party (DIE LINKE). The third and biggest cluster consists of members of the two largest parties, the social democrats (SPD) and the conservatives (CDU/CSU). This is the structure we would expect intuitively, as social democrats and conservatives currently form the government in a grand coalition.</p>
<p>With some imagination, one could also identify a couple of subclusters in this large cluster. A detailed analysis of smaller clusters would be especially interesting if we had additional information about politicians. We could then see whether the cluster assignment computed from the voting behavior relates to these additional variables. For instance, politicians with close ties to the economy might vote together, irrespective of their party.</p>
<p>So far we assumed that we can adequately describe the voting pattern of the whole period from 26.11.2013 - 14.04.2016 with one graph. This implies that we assume that the relative voting behavior does not change over time. For example, this means that if members of parliament A and B agree on votes at the beginning of the period, they also agree throughout the rest of the period and do not start to disagree at some point. In the next section we check whether the voting behavior changes over time.</p>
<h2 id="voting-pattern-amongst-members-of-parliament-across-time">Voting Pattern Amongst Members of Parliament across Time</h2>
<p>To make graphs comparable over different time points and to be able to see growing (dis-) agreement between parties, we arrange individual members of parliament in circles that correspond to their parties. We compute a time-varying graph by visualizing a Gaussian kernel smoothed (bandwidth = .1, time interval [0,1]) correlation matrix at 20 equally spaced time points. Details can be found in the code used to create all figures, which is available <a href="https://github.com/jmbh/bundestag">here</a>. We then combine these 20 graphs into the following video:</p>
<p><img src="http://jmbh.github.io/figs/bundestag/bundestag_cor.gif" alt="center" /></p>
<p>We see that right after the parliament was elected and the grand coalition was formed in November 2013, there is relatively high agreement between members of CDU/CSU and SPD. Within the next three years, however, the agreement decreases. With regard to the parties in the opposition, at the beginning of the period the green and the left party disagree to a similar degree with the grand coalition. Over time, however, it appears that the green party increasingly agrees with the grand coalition, while the left party agrees less and less with the CDU/CSU- and SPD-led government.</p>
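<p>The kernel-smoothing step behind these time-varying graphs can be sketched as follows, using toy data; the weighted correlation via <code>cov.wt</code> is my reconstruction of the described procedure (time points rescaled to $[0,1]$, Gaussian kernel with bandwidth .1), not necessarily the exact code used:</p>

```r
# Kernel-smoothed correlation matrix at estimation point t0: weight each
# recorded vote by a Gaussian kernel over time, then compute the weighted
# correlation across members.
smoothed_cor <- function(votes, timepoints, t0, bw = 0.1) {
  w <- dnorm(timepoints, mean = t0, sd = bw)         # Gaussian kernel weights
  cov.wt(t(votes), wt = w / sum(w), cor = TRUE)$cor  # weighted correlation
}

set.seed(1)
votes <- matrix(sample(c(1, 2), 10 * 136, replace = TRUE), nrow = 10)
timepoints <- seq(0, 1, length.out = 136)
graphs <- lapply(seq(0, 1, length.out = 20),         # 20 estimation points
                 function(t0) smoothed_cor(votes, timepoints, t0))
length(graphs)                                       # one graph per time point
```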
<p>As the number of seats the parties have in the parliament differs widely, it is hard to read agreement <em>within</em> parties from the above graph. For instance, the circle of CDU/CSU seems to be filled with more and thicker green edges than that of the SPD; however, this could well be because there are simply more politicians (307 vs. 191) and hence more edges displayed. Therefore, we take a closer look at within-party agreement in the following graph:</p>
<center><img src="http://jmbh.github.io/figs/bundestag/bundestag_agreement_time.jpg" width="400" height="350" /></center>
<p>Collapsed over time, we see that the members of the left party agree most with each other and the members of the social democratic party agree the least with each other. The largest changes in agreement appear in the green and left party: from late 2014 to mid 2015, members of the green party seem to agree less with each other than usual, while members of the left party seem to agree more with each other than usual.</p>
<h2 id="zoom-in-on-small-group-of-members-of-parliament">Zoom in on small Group of Members of Parliament</h2>
<p>While the analyses so far gave a comprehensive <em>overview</em> of the voting behavior amongst members of parliament, the graph is too large to see which node in the graph corresponds to which politician. In the following graph we zoom in on a random subset of 30 politicians and match the nodes to their names:</p>
<p><img src="http://jmbh.github.io/figs/bundestag/bundestag_cor_ss_names.jpg" alt="center" /></p>
<p>Note that correlations are bivariate measures and therefore the correlations in this smaller graph are the same as the ones in the larger graph above. We see the same overall structure as above, but now with names assigned to nodes. Again the members of the green party cluster together, but for instance Nicole Maisch votes more often together with Steffi Lempke than with the other displayed colleagues. We also see that, for instance, Steffen Kampeter and Christian Schmidt are both members of the conservative party but are placed at quite distant locations in the graph (and indeed the correlation between their voting behavior is almost zero: -0.04).</p>
<p>Analogous to above, we now look into how voting agreement between the politicians in our subset changes over time by computing a time-varying graph as before:</p>
<p><img src="http://jmbh.github.io/figs/bundestag/bundestag_cor_ss.gif" alt="center" /></p>
<p>We see that voting agreement changes substantially: for instance, members of the opposition parties seem to agree less and less with the grand coalition until mid-2015 and then agree more and more again until the end of the period in early 2016. Some politicians seem to change their voting pattern quite dramatically: for example, the voting behavior of conservative party member Heike Bremer strongly correlates with the voting behavior of most of her party colleagues in 2014, but in late 2015 and early 2016 the correlations are close to zero. Also, interestingly, conservative Steffen Kampeter tends to vote in the opposite direction to his conservative colleagues in early 2014, but then agrees more and more with them until the last recorded votes.</p>
<h2 id="unique-agreement-between-members-of-parliamen">‘Unique’ Agreement between Members of Parliament</h2>
<p>So far we looked into how the voting patterns of any pair of members of parliament correlate with each other. While this is an informative measure and gives a first overview of how politicians vote relative to each other, it is also a measure that is tricky to interpret. For instance, two politicians of a party might always vote together because they always align their votes with their common mentor in the party. Or because there is pressure from the whole party to vote for a bill together. Or because they are both members of a specific think tank within the parliament, …</p>
<p>An interesting alternative measure is conditional correlation, which is the correlation between any two members of parliament, <em>after controlling for all other members of parliament</em>. In case of a conditional correlation between two members of parliament there are still many possible explanations (e.g. both might be influenced by some person <em>outside</em> the parliament), however, we are sure that this correlation cannot be explained by the voting pattern by any other member of parliament. We compute this conditional correlation graph and visualize it using the same layout as in the corresponding correlation graph:</p>
<p><img src="http://jmbh.github.io/figs/bundestag/bundestag_cond_ss_names.jpg" alt="center" /></p>
<p>It is apparent that there are fewer and weaker edges. Note that this is what we would expect in this dataset: in a parliament there is a general level of agreement within parties and also between parties, otherwise it would be difficult to pass bills. Therefore, we would expect that a substantial part of a correlation between the voting patterns of any two politicians can be explained by the voting patterns of other politicians. The strongest conditional correlation is the one between Nicole Gohlke and Norbert Mueller of the left party. For some reason these two politicians align their votes in a way that cannot be explained by the voting pattern of other politicians within and outside their party.</p>
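<p>For reference, conditional (partial) correlations can be obtained from the inverse of the correlation matrix. The sketch below uses toy data; the real 623-node problem additionally requires regularization, since there are far fewer recorded votes than members of parliament:</p>

```r
# Partial correlations from the precision matrix:
# pcor_ij = -K_ij / sqrt(K_ii * K_jj), with K the inverse correlation matrix.
pcor_from_cor <- function(cormat) {
  K <- solve(cormat)                      # precision matrix
  P <- -K / sqrt(diag(K) %o% diag(K))     # rescale and flip sign
  diag(P) <- 1
  P
}

set.seed(1)
X <- matrix(rnorm(200 * 5), 200, 5)
X[, 2] <- X[, 1] + rnorm(200)             # induce a unique dependence 1-2
P <- pcor_from_cor(cor(X))
P[1, 2]                                   # clearly nonzero partial correlation
```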
<h2 id="concluding-comments">Concluding comments</h2>
<p>It came as quite a surprise to me that the large majority of votes on bills in the German parliament are not recorded and hence not available to the public (please correct me if I missed something). While this is a major reason to interpret these data with caution, on the other hand the votes on bills that <em>are</em> recorded are the more controversial and therefore probably more interesting ones.</p>
<p>The graphs in this post were the first few obvious things I wanted to look into, but of course many more analyses are possible. I put the preprocessed data (no information lost, just everything in 3 linked files instead of hundreds) on <a href="https://github.com/jmbh/bundestag">Github</a> alongside the code that produces the above figures. In case you have any comments, complaints or questions, please comment below!</p>
Wed, 18 May 2016 10:00:00 +0000
http://jmbh.github.io//Analyzing-voting-pattern-of-German-parliament/