
Explaining the behavior of joint and marginal Monte Carlo estimators in latent variable models with independence assumptions

Published in: Statistics and Computing
Abstract

In latent variable models, parameter estimation can be implemented using either the joint or the marginal likelihood, based on independence or conditional independence assumptions, respectively. The same dilemma occurs within the Bayesian framework with respect to the estimation of the Bayesian marginal (or integrated) likelihood, which is the main tool for model comparison and averaging. In most cases the Bayesian marginal likelihood is a high-dimensional integral that cannot be computed analytically, and a plethora of methods based on Monte Carlo integration (MCI) are used for its estimation. In this work it is shown that the joint MCI approach makes subtle use of the properties of the adopted model, leading to increased error and bias in finite settings. The sources and the components of the error associated with estimators under the two approaches are identified here and provided in exact forms. Additionally, the effect of sample covariation on the Monte Carlo estimators is examined. In particular, even under independence assumptions the sample covariance will be close to (but not exactly) zero, which surprisingly has a severe effect on the estimated values and their variability. To address this problem, an index of the sample's divergence from independence is introduced as a multivariate extension of covariance. The implications addressed here are important in the majority of practical problems appearing in Bayesian inference of multi-parameter models with analogous structures.


References

  • Aguilar, O., West, M.: Bayesian dynamic factor models and portfolio allocation. J. Bus. Econ. Stat. 18, 338–357 (2000)

  • Baker, F.: An investigation of the item parameter recovery characteristics of a Gibbs sampling procedure. Appl. Psychol. Meas. 22, 153–169 (1998)

  • Bartholomew, D., Knott, M., Moustaki, I.: Latent Variable Models and Factor Analysis: A Unified Approach. Wiley Series in Probability and Statistics, 3rd edn. Wiley, London (2011)

  • Bock, R., Aitkin, M.: Marginal maximum likelihood estimation of item parameters: application of an EM algorithm. Psychometrika 46, 443–459 (1981)

  • Bock, R.D., Lieberman, M.: Fitting a response model for n dichotomously scored items. Psychometrika 35, 179–197 (1970)

  • Bratley, P., Fox, B.L., Schrage, L.: A Guide to Simulation, 2nd edn. Springer, Berlin (1987)

  • Carlin, B.P., Louis, T.A.: Bayes and Empirical Bayes Methods for Data Analysis, 2nd edn. Chapman & Hall/CRC, London (2000)

  • Chib, S., Jeliazkov, I.: Marginal likelihood from the Metropolis–Hastings output. J. Am. Stat. Assoc. 96, 270–281 (2001)

  • Congdon, P.: Applied Bayesian Hierarchical Methods. Chapman & Hall/CRC, London (2010)

  • DiCiccio, T.J., Kass, R.E., Raftery, A., Wasserman, L.: Computing Bayes factors by combining simulation and asymptotic approximations. J. Am. Stat. Assoc. 92(439), 903–915 (1997)

  • Flegal, J., Jones, G.: Batch means and spectral variance estimators in Markov chain Monte Carlo. Ann. Stat. 38, 1034–1070 (2010)

  • Fouskakis, D., Ntzoufras, I., Draper, D.: Bayesian variable selection using cost-adjusted BIC, with application to cost-effective measurement of quality of health care. Ann. Appl. Stat. 3, 663–690 (2009)

  • Friel, N., Pettitt, A.N.: Marginal likelihood estimation via power posteriors. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 70(3), 589–607 (2008)

  • Gelfand, A.E., Dey, D.K.: Bayesian model choice: asymptotics and exact calculations. J. R. Stat. Soc. Ser. B (Methodol.) 56(3), 501–514 (1994)

  • Gelman, A., Meng, X.-L.: Simulating normalizing constants: from importance sampling to bridge sampling to path sampling. Stat. Sci. 13(2), 163–185 (1998)

  • Geweke, J., Zhou, G.: Measuring the pricing error of the arbitrage pricing theory. Rev. Financ. Stud. 9, 557–587 (1996)

  • Gifford, J.A., Swaminathan, H.: Bias and the effect of priors in Bayesian estimation of parameters of item response models. Appl. Psychol. Meas. 14, 33–43 (1990)

  • Goodman, L.A.: The variance of the product of K random variables. J. Am. Stat. Assoc. 57, 54–60 (1962)

  • Huber, P., Ronchetti, E., Victoria-Feser, M.-P.: Estimation of generalized linear latent variable models. J. R. Stat. Soc. Ser. B 66, 893–908 (2004)

  • Jones, G., Haran, M., Caffo, B., Neath, R.: Fixed-width output analysis for Markov chain Monte Carlo. J. Am. Stat. Assoc. 101, 1537–1547 (2006)

  • Kang, T., Cohen, A.S.: IRT model selection methods for dichotomous items. Appl. Psychol. Meas. 31(4), 331–358 (2007)

  • Kass, R., Raftery, A.: Bayes factors. J. Am. Stat. Assoc. 90, 773–795 (1995)

  • Kim, S.-H., Cohen, A.S., Baker, F.B., Subkoviak, M.J., Leonard, T.: An investigation of hierarchical Bayes procedures in item response theory. Psychometrika 59(3), 405–421 (1994)

  • Koehler, E., Brown, E., Haneuse, S.J.-P.A.: On the assessment of Monte Carlo error in simulation-based statistical analyses. Am. Stat. 63(2), 155–162 (2009)

  • Lewis, S., Raftery, A.: Estimating Bayes factors via posterior simulation with the Laplace–Metropolis estimator. J. Am. Stat. Assoc. 92, 648–655 (1997)

  • Lopes, H.F., West, M.: Bayesian model assessment in factor analysis. Stat. Sin. 14, 41–67 (2004)

  • Lord, F.M.: Applications of Item Response Theory to Practical Testing Problems. Erlbaum Associates, Hillsdale (1980)

  • Lord, F.M., Novick, M.R.: Statistical Theories of Mental Test Scores. Addison-Wesley, Oxford (1968)

  • Meketon, M.S., Schmeiser, B.W.: Overlapping batch means: something for nothing? In: Proceedings of the 1984 Winter Simulation Conference, pp. 227–230. Institute of Electrical and Electronics Engineers Inc., Piscataway (1984)

  • Meng, X.-L., Schilling, S.: Warp bridge sampling. J. Comput. Graph. Stat. 11(3), 552–586 (2002)

  • Meng, X.-L., Wong, W.-H.: Simulating ratios of normalizing constants via a simple identity: a theoretical exploration. Stat. Sin. 6, 831–860 (1996)

  • Mislevy, R.: Bayes modal estimation in item response models. Psychometrika 51, 177–195 (1986)

  • Moustaki, I., Knott, M.: Generalized latent trait models. Psychometrika 65, 391–411 (2000)

  • Ntzoufras, I., Dellaportas, P., Forster, J.: Bayesian variable and link determination for generalised linear models. J. Stat. Plan. Inference 111(1–2), 165–180 (2003)

  • Patz, R., Junker, B.: A straightforward approach to Markov chain Monte Carlo methods for item response models. J. Educ. Behav. Stat. 24, 146–178 (1999)

  • Rabe-Hesketh, S., Skrondal, A., Pickles, A.: Maximum likelihood estimation of limited and discrete dependent variable models with nested random effects. J. Econom. 128, 301–323 (2005)

  • Schilling, S., Bock, R.: High-dimensional maximum marginal likelihood item factor analysis by adaptive quadrature. Psychometrika 70, 533–555 (2005)

  • Schmeiser, B.W.: Batch size effects in the analysis of simulation output. Oper. Res. 30, 556–568 (1982)


Corresponding author

Correspondence to Ioannis Ntzoufras.

Appendix

The identities of the MCMC estimators used in Sect. 2.1 are:

  • Reciprocal importance (RM) sampling estimator (Gelfand and Dey 1994)

    $$\begin{aligned} f(\varvec{Y})= \left[ \int \frac{g(\varvec{\vartheta })}{f(\mathbf Y |\,\varvec{\vartheta })\,\pi (\varvec{\vartheta })} \,\pi (\varvec{\vartheta }|\,\mathbf Y )\,d{\varvec{\vartheta }}\right] ^{-1}, \end{aligned}$$
    (23)
  • Generalized harmonic bridge (BH) sampling estimator (Meng and Wong 1996)

    $$\begin{aligned} f(\varvec{Y})=\frac{ \displaystyle \int \left[ \,g(\varvec{\vartheta }) \right] ^{-1}g(\varvec{\vartheta })\,d{\varvec{\vartheta }}}{\displaystyle \int \left[ f(\mathbf Y |\,\varvec{\vartheta })\pi (\varvec{\vartheta }) \right] ^{- 1}\pi (\varvec{\vartheta }|\,\mathbf Y )\,d{\varvec{\vartheta }}}\,, \end{aligned}$$
    (24)
  • Geometric bridge (BG) sampling estimator (Meng and Wong 1996)

    $$\begin{aligned} f(\mathbf Y )=\frac{ \displaystyle \int \left[ \frac{ f(\mathbf Y |\,\varvec{\vartheta })\pi (\varvec{\vartheta }) }{g(\varvec{\vartheta }) } \right] ^{1/2}g(\varvec{\vartheta })\,d{\varvec{\vartheta }}}{\displaystyle \int \left[ \frac{ f(\mathbf Y |\,\varvec{\vartheta })\pi (\varvec{\vartheta }) }{g(\varvec{\vartheta }) } \right] ^{-1/2}\pi (\varvec{\vartheta }|\,\mathbf Y )\,d{\varvec{\vartheta }}}\,\,. \end{aligned}$$
    (25)
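As a concrete illustration, the reciprocal importance estimator (23) can be checked on a toy conjugate normal–normal model, where the marginal likelihood is available in closed form. This is a minimal sketch under assumptions not taken from the paper: data \(y_j \sim N(\theta , \sigma ^2)\) with prior \(\theta \sim N(0, \tau ^2)\), and \(g\) chosen as the (here exact) normal approximation of the posterior.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(42)
sigma, tau, n = 1.0, 2.0, 20
y = rng.normal(0.5, sigma, size=n)          # observed data (illustrative)

# Conjugate posterior: theta | y ~ N(m, s2)
s2 = 1.0 / (n / sigma**2 + 1.0 / tau**2)
m = s2 * y.sum() / sigma**2

# Exact log marginal likelihood: y ~ N(0, sigma^2 I + tau^2 11')
Sigma = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
log_f_exact = multivariate_normal(mean=np.zeros(n), cov=Sigma).logpdf(y)

# Reciprocal importance estimator, Eq. (23), with posterior draws theta_r:
#   f(y) = [ (1/R) * sum_r g(theta_r) / { f(y|theta_r) pi(theta_r) } ]^{-1}
R = 10_000
theta = rng.normal(m, np.sqrt(s2), size=R)  # draws from pi(theta | y)
log_lik = norm.logpdf(y[:, None], theta, sigma).sum(axis=0)
log_prior = norm.logpdf(theta, 0.0, tau)
log_g = norm.logpdf(theta, m, np.sqrt(s2))  # g: normal posterior approximation
log_w = log_g - log_lik - log_prior
log_f_hat = -(np.logaddexp.reduce(log_w) - np.log(R))

assert np.isclose(log_f_hat, log_f_exact)
```

Because \(g\) here coincides with the exact posterior, the ratios \(g(\varvec{\vartheta }_r)/\{f(\mathbf Y |\varvec{\vartheta }_r)\pi (\varvec{\vartheta }_r)\}\) are constant and the estimator is exact up to floating point; with an approximate \(g\), Monte Carlo error appears, and a poorly matched \(g\) yields heavy-tailed weights and unstable estimates.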

Proof of Lemma 3.2

According to Goodman (1962), the variance of the product of N independent variables is given by

$$\begin{aligned} Var\left( \prod _{i=1}^N\phi _i(Y_i)\right) =\prod _{i=1}^N\left( V_i+E_i^2\right) -\prod _{i=1}^NE_i^2. \end{aligned}$$
(26)

Hence we can write

$$\begin{aligned} Var\left( \prod _{i=1}^N\phi _i(Y_i)\right)&= \prod _{i \in \mathcal{N}_0} \left( V_i+E_i^2\right) \prod _{i \in \overline{\mathcal{N}}_0} \left( V_i+E_i^2\right) - \prod _{i \in \mathcal{N}_0}E_i^2 \prod _{i \in \overline{\mathcal{N}}_0}E_i^2 \\&= \prod _{i \in \mathcal{N}_0} V_i \prod _{i \in \overline{\mathcal{N}}_0} \Big [ E_i^2 \left( CV_i^2+1\right) \Big ] - \prod _{i \in \mathcal{N}_0}E_i^2 \prod _{i \in \overline{\mathcal{N}}_0}E_i^2 \\&= \prod _{i \in \overline{\mathcal{N}}_0} E_i^2 \times \left[ \prod _{i \in \mathcal{N}_0} V_i \prod _{i \in \overline{\mathcal{N}}_0} \left( CV_i^2+1\right) - \prod _{i \in \mathcal{N}_0}E_i^2 \right] . \end{aligned}$$

Note that \(\prod \limits _{i \in \mathcal{N}_0}E_i^2 \) equals one if \(\mathcal{N}_0 = \emptyset \) and zero otherwise. Therefore we can write \(\prod \limits _{i \in \mathcal{N}_0}E_i^2 = \prod \limits _{i \in \mathcal{N}_0}E_i^2 \times \prod \limits _{i \in \mathcal{N}_0}V_i\) (both sides are one when \(\mathcal{N}_0 = \emptyset \) and zero otherwise), resulting in

$$\begin{aligned} Var\left( \prod _{i=1}^N\phi _i(Y_i)\right)&= \prod _{i \in \mathcal{N}_0} V_i \times \prod _{i \in \overline{\mathcal{N}}_0} E_i^2 \times \left[ \prod _{i \in \overline{\mathcal{N}}_0} \left( CV_i^2+1\right) - \prod _{i \in \mathcal{N}_0}E_i^2 \right] \\&= \prod _{i \in \mathcal{N}_0} V_i \times \prod _{i \in \overline{\mathcal{N}}_0} E_i^2 \times \left[ \prod _{i \in \overline{\mathcal{N}}_0} \left( CV_i^2+1\right) - I( \mathcal{N}_0 = \emptyset ) \right] , \end{aligned}$$

which gives

$$\begin{aligned} Var\left( \prod _{i=1}^N\phi _i(Y_i)\right) = \left\{ \begin{array}{ll} \prod \limits _{i=1}^N V_i &{} \text{ if } \mathcal{N}_0=\mathcal{N} \text{ (all expectations are zero)}\\ \prod \limits _{i =1}^N E_i^2 \times \Big [ \prod \limits _{i =1}^N \big (CV_i^2+1 \big ) -1 \Big ] &{} \text{ if } \mathcal{N}_0=\emptyset \text{ (all expectations are non-zero)} \\ \prod \limits _{i \in \mathcal{N}_0} V_i \times \prod \limits _{i \in \overline{\mathcal{N}}_0} E_i^2 \times \prod \limits _{i \in \overline{\mathcal{N}}_0} \left( CV_i^2+1\right) &{} \text{ otherwise.} \end{array} \right. \end{aligned}$$

The proof is completed by placing the general expression for the integrand’s variance in (11) and (12) respectively. \(\square \)
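Since (26) is a moment formula for independent factors, it can be checked directly by simulation. The sketch below is illustrative (the distributions and sample size are assumptions, not from the paper); the first component has zero mean, so \(\mathcal{N}_0 = \{1\}\) and the "otherwise" branch of the piecewise expression applies.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 3, 200_000
# Independent columns; the first has zero expectation (N_0 = {1})
Y = rng.normal([0.0, 1.5, -2.0], [1.0, 0.5, 2.0], size=(n, N))

E = Y.mean(axis=0)   # empirical marginal means E_i
V = Y.var(axis=0)    # empirical marginal variances V_i

# Goodman (1962), Eq. (26): Var(prod_i phi_i) = prod_i (V_i + E_i^2) - prod_i E_i^2
var_goodman = (V + E**2).prod() - (E**2).prod()
var_empirical = Y.prod(axis=1).var()

# Agreement up to Monte Carlo error (exact only as n -> infinity)
assert np.isclose(var_goodman, var_empirical, rtol=0.05)
```

Because the first mean is zero, \(\prod _i E_i^2 = 0\) and the formula reduces to the third branch of the piecewise expression, \(V_1 \times E_2^2 E_3^2 \times (CV_2^2+1)(CV_3^2+1)\).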

Proof of Lemma 3.3

$$\begin{aligned} Var(\widehat{I}_{J})&= Var_{(\varvec{u}, \varvec{v})} \left\{ \frac{1}{R} \sum _{r=1}^{R} \left[ \prod _{i=1}^{N} \varphi _i \big ({{\varvec{u}}}_i^{(r)},{{\varvec{v}}}^{(r)}\big ) \right] \right\} \nonumber \\&= \frac{1}{R} Var_{(\varvec{u}, \varvec{v})} \left[ \prod _{i=1}^{N} \varphi _i \big ({{\varvec{u}}}_i,{{\varvec{v}}}\big ) \right] \nonumber \\&= \frac{1}{R} Var_{\varvec{v}} \left\{ E_{\varvec{u}|\varvec{v}} \left[ \prod _{i=1}^{N} \varphi _i \big ({{\varvec{u}}}_i,{{\varvec{v}}}\big ) \, \Big | \varvec{v} \right] \right\} \nonumber \\&+\frac{1}{R} E_{\varvec{v}} \left\{ Var_{\varvec{u}|\varvec{v}} \left[ \prod _{i=1}^{N} \varphi _i \big ({{\varvec{u}}}_i,{{\varvec{v}}}\big ) \, \Big | \varvec{v} \right] \right\} \end{aligned}$$
(27)

Due to conditional independence we have that

$$\begin{aligned} E_{\varvec{u}|\varvec{v}} \left[ \prod _{i=1}^{N} \varphi _i \big ({{\varvec{u}}}_i,{{\varvec{v}}}\big ) \, \Big | \varvec{v} \right]&= \prod _{i=1}^{N} E_{\varvec{u}|\varvec{v}} \left[ \varphi _i \big ({{\varvec{u}}}_i,{{\varvec{v}}}\big ) \, \Big | \varvec{v} \right] \nonumber \\&= \prod _{i=1}^{N} E \left( \varphi _i \big | \varvec{v} \right) . \end{aligned}$$
(28)

Moreover, from (14) we have that

$$\begin{aligned}&Var_{\varvec{u}|\varvec{v}} \left[ \prod _{i=1}^{N} \varphi _i \big ({{\varvec{u}}}_i,{{\varvec{v}}}\big ) \, \Big | \varvec{v} \right] \nonumber \\&\quad = \sum _{k=1}^{N} \sum _{ \mathcal{C} \in \binom{\mathcal{N}}{k} } \Big [ \prod _{i \in \mathcal{C} } V\big ( \varphi _i \big | \varvec{v} \big ) \prod _{j \in \mathcal{N} \setminus \mathcal{C} } E\big ( \varphi _j \big | \varvec{v} \big )^2 \Big ] \end{aligned}$$
(29)

By substituting (28) and (29) in (27), we obtain the variance of the joint estimator of Lemma 3.3.

Similarly, for the marginal estimator we have

$$\begin{aligned} Var \left( \widehat{\mathcal {I}}_M \right)&= Var_{(\varvec{u}, \varvec{v})} \left[ \frac{1}{R_1} \sum _{r_1=1}^{R_1}\prod _{i=1}^N \overline{\varphi }_i^{(r_1)} \right] \nonumber \\&= \frac{1}{R_1} Var_{(\varvec{u}, \varvec{v})} \Bigg [ \prod _{i=1}^N \overline{\varphi }_i \Bigg ] \nonumber \\&= \frac{1}{R_1} Var_{\varvec{v}} \left\{ E_{\varvec{u}| \varvec{v}} \Big [ \prod _{i=1}^N \overline{\varphi }_i \, \Big | \varvec{v} \Big ] \right\} \nonumber \\&+ \frac{1}{R_1} E_{\varvec{v}} \left\{ Var_{\varvec{u}| \varvec{v}} \Big [ \prod _{i=1}^N \overline{\varphi }_i \, \Big | \varvec{v} \Big ] \right\} \end{aligned}$$
(30)

Due to conditional independence we have that

$$\begin{aligned} E_{\varvec{u}| \varvec{v}} \Big [ \prod _{i=1}^N \overline{\varphi }_i \, \Big | \varvec{v} \Big ]&= \prod _{i=1}^{N} E_{\varvec{u}|\varvec{v}} \left[ \overline{\varphi }_i \, \Big | \varvec{v} \right] = \prod _{i=1}^{N} E \left( \varphi _i \big | \varvec{v} \right) . \end{aligned}$$
(31)

Moreover, from Lemma 3.1 we have that

$$\begin{aligned} Var_{\varvec{u}| \varvec{v}} \Big [ \prod _{i=1}^N \overline{\varphi }_i \, \Big | \varvec{v} \Big ]&= \sum _{k=1}^{N} \left[ \frac{1}{R_2^k} \sum _{ \mathcal{C} \in \binom{\mathcal{N}}{k} } \prod _{i \in \mathcal{C} } V_i \prod _{j \in \mathcal{N} \setminus \mathcal{C} } E_j^2 \right] . \end{aligned}$$
(32)

Substituting (31) and (32) in (30) gives the expression of the variance of the marginal estimator of Lemma 3.3. \(\square \)
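The conditioning step used in (27) and (30) is the law of total variance, \(Var(X) = Var_{\varvec{v}}\{E[X|\varvec{v}]\} + E_{\varvec{v}}\{Var[X|\varvec{v}]\}\), applied to a product that is independent given \(\varvec{v}\). A minimal simulation sketch on a toy conditionally independent model (the linear latent structure below is an illustrative assumption, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400_000
v = rng.normal(size=n)              # shared latent variable
u = rng.normal(size=(n, 2))         # item-specific noise, independent given v
phi = 1.0 + 0.5 * v[:, None] + u    # phi_i(u_i, v): conditionally independent
P = phi.prod(axis=1)

# Given v, each phi_i ~ N(m, 1) with m = 1 + 0.5 v, so
#   E[P|v] = m^2  and  (by Goodman, given v)  Var[P|v] = (1 + m^2)^2 - m^4 = 1 + 2 m^2
m = 1.0 + 0.5 * v
lhs = P.var()                                     # Var(P)
rhs = (m**2).var() + (1.0 + 2.0 * m**2).mean()    # Var_v{E[P|v]} + E_v{Var[P|v]}

assert np.isclose(lhs, rhs, rtol=0.05)
```

The inner conditional variance is itself a Goodman product variance, which is exactly how (29) and (32) enter the two decompositions.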

Proof of Lemma 3.4

The proof of Lemma 3.4 can be obtained by induction. The statement of the Lemma holds for \(N=3\) with \(\varvec{Y}_3=(Y_1, Y_2, Y_3)\) since

$$\begin{aligned}&Cov_{(3)}(\varvec{Y}) + \sum _{k=1}^{1} \left[ \left( \prod ^{3}_{i=4-k} E (Y_i) \right) Cov_{(3-k)}(\varvec{Y}) \right] \\&\quad = Cov_{(3)}(\varvec{Y}) + \left( \prod ^{3}_{i=3 } E (Y_i) \right) Cov_{(2)}(\varvec{Y}) \\&\quad = Cov( Y_1Y_2, Y_3 ) + E(Y_3)\, Cov(Y_1, Y_2) \\&\quad = E( Y_1 Y_2 Y_3 ) - E(Y_1Y_2)E(Y_3) + E(Y_3) \left[ E(Y_1Y_2) - E(Y_1) E(Y_2)\right] \\&\quad = TCI(\varvec{Y}_3), \end{aligned}$$

which holds by the definition of TCI (see Eq. 19) for vectors \(\varvec{Y}\) of length three.

Let us now assume that (21) is true for a vector \(\varvec{Y}_N\) of length \(N \ge 3\). Then, for \(\varvec{Y}_{N+1}=( \varvec{Y}_{N}, Y_{N+1} ) = ( Y_{1}, \dots , Y_{N}, Y_{N+1} )\) the equation

$$\begin{aligned} \text{ TCI }(\varvec{Y}_{N+1})&= Cov_{(N+1)}(\varvec{Y}) \nonumber \\&+\sum _{k=1}^{N-1} \left[ \left( \prod ^{N+1 }_{i=N-k+2} \!\!\!\! E (Y_i) \right) Cov_{(N+1-k)}(\varvec{Y}) \right] , \end{aligned}$$
(33)

is also true since

$$\begin{aligned} TCI( \varvec{Y}_{N+1} )&= E\left( \left[ \prod _{i=1}^N Y_i \right] Y_{N+1}\right) - \left[ \prod _{i=1}^N E(Y_i)\right] E(Y_{N+1}) \\&= Cov_{(N+1)}(\varvec{Y}) + E\left( \prod _{i=1}^N Y_i \right) E(Y_{N+1}) - \left[ \prod _{i=1}^N E(Y_i)\right] E(Y_{N+1})\\&= Cov_{(N+1)}(\varvec{Y}) + TCI(\varvec{Y}_N)\, E(Y_{N+1})\\&= Cov_{(N+1)}(\varvec{Y}) + \left\{ Cov_{(N)}(\varvec{Y}) + \sum _{k=1}^{N-2} \left[ \left( \prod ^{N }_{i=N-k+1} E (Y_i) \right) Cov_{(N-k)}(\varvec{Y}) \right] \right\} E(Y_{N+1}) \quad \text{(from Eq. 21)} \\&= Cov_{(N+1)}(\varvec{Y}) + Cov_{(N)}(\varvec{Y})E(Y_{N+1}) + \sum _{k=1}^{N-2} \left[ \left( \prod ^{N+1 }_{i=N-k+1} E (Y_i) \right) Cov_{(N-k)}(\varvec{Y}) \right] \\&= Cov_{(N+1)}(\varvec{Y}) + Cov_{(N)}(\varvec{Y})E(Y_{N+1}) + \sum _{k'=2}^{N-1} \left[ \left( \prod ^{N+1 }_{i=N-k'+2} E (Y_i) \right) Cov_{(N-k'+1)}(\varvec{Y}) \right] \quad \text{(setting } k'=k+1 \text{)} \\&= Cov_{(N+1)}(\varvec{Y}) + \sum _{k'=1}^{N-1} \left[ \left( \prod ^{N+1 }_{i=N-k'+2} E (Y_i) \right) Cov_{(N-k'+1)}(\varvec{Y}) \right] , \end{aligned}$$

since for \(k'=1\) the term in the summation of (33) equals \(Cov_{(N)}(\varvec{Y})E(Y_{N+1})\).

\(\square \)
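Because (21) is an identity in expectations, it holds exactly for the empirical moments of any sample, which gives a direct numerical check of Lemma 3.4. The sketch below (simulated data and variable names are illustrative) compares the direct definition \(TCI(\varvec{Y}) = E(\prod _i Y_i) - \prod _i E(Y_i)\) with the recursive expansion:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 1000
Y = rng.normal(1.0, 0.5, size=(n, N))   # n draws of a length-N vector

E = Y.mean(axis=0)                       # empirical marginal means

def cov_k(Y, k):
    """Empirical Cov_(k): Cov(Y_1 * ... * Y_{k-1}, Y_k), 1-based indices."""
    a = Y[:, :k - 1].prod(axis=1)
    b = Y[:, k - 1]
    return (a * b).mean() - a.mean() * b.mean()

# Direct definition: TCI(Y) = E(prod_i Y_i) - prod_i E(Y_i)
tci_direct = Y.prod(axis=1).mean() - E.prod()

# Lemma 3.4: TCI = Cov_(N) + sum_{k=1}^{N-2} [prod_{i=N-k+1}^{N} E(Y_i)] * Cov_(N-k)
tci_recursive = cov_k(Y, N) + sum(
    E[N - k:].prod() * cov_k(Y, N - k) for k in range(1, N - 1)
)

assert np.isclose(tci_direct, tci_recursive, atol=1e-10)
```

The two values agree to floating-point precision because both sides are the same polynomial in the (here empirical) moments.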

Proof of Lemma 3.5

$$\begin{aligned} Var \Big ( \prod _{i=1}^N Y_i \Big )&= E\left[ \prod _{i=1}^N Y_i- E \Big (\prod _{i=1}^N Y_i \Big )\right] ^2 \\&= E\left[ \left( \prod _{i=1}^N Y_i-\prod _{i=1}^N E (Y_i) \right) -TCI(\varvec{Y})\right] ^2 \\&= E\left[ \prod _{i=1}^N Y_i-\prod _{i=1}^N E (Y_i)\right] ^2 +TCI(\varvec{Y})^2 - 2 \, E\left\{ TCI(\varvec{Y}) \Big [\prod _{i=1}^N Y_i-\prod _{i=1}^N E (Y_i)\Big ]\right\} \\&= E\left[ \prod _{i=1}^N Y_i-\prod _{i=1}^N E (Y_i)\right] ^2 - TCI(\varvec{Y})^2 \\&= Var\left( \prod _{i=1}^N Y_i \,\Big |\, \text{Independence} \right) - TCI(\varvec{Y})^2, \end{aligned}$$

since \(E\left\{ TCI(\varvec{Y}) \Big [\prod _{i=1}^N Y_i-\prod _{i=1}^N E (Y_i)\Big ]\right\} =TCI(\varvec{Y})\, E\Big [\prod _{i=1}^N Y_i-\prod _{i=1}^N E (Y_i)\Big ] = TCI(\varvec{Y})^2\), by the definition of \(TCI(\varvec{Y})\). \(\square \)
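Lemma 3.5 is likewise a moment identity, so it holds exactly under the empirical distribution of a sample. In the sketch below (deliberately correlated toy data, an illustrative assumption), the independence term is taken to be \(E\big [\prod _i Y_i - \prod _i E(Y_i)\big ]^2\), the expression appearing in the proof:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                    # shared component -> dependence
Y = np.column_stack([1.0 + z,
                     1.0 + 0.5 * z + rng.normal(size=n),
                     rng.normal(2.0, 1.0, size=n)])

P = Y.prod(axis=1)
mu = Y.mean(axis=0).prod()                # prod of marginal means
tci = P.mean() - mu                       # TCI(Y) = E(prod Y) - prod E(Y)

var_actual = P.var()                      # Var(prod Y)
var_indep = ((P - mu)**2).mean()          # E[prod Y - prod E(Y)]^2

# Lemma 3.5: the variance under dependence is the independence term minus TCI^2
assert np.isclose(var_actual, var_indep - tci**2)
```

With independent data \(TCI\) is (up to sampling noise) zero and the two variance expressions coincide; dependence drives the gap.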


Cite this article

Vitoratou, S., Ntzoufras, I. & Moustaki, I. Explaining the behavior of joint and marginal Monte Carlo estimators in latent variable models with independence assumptions. Stat Comput 26, 333–348 (2016). https://doi.org/10.1007/s11222-014-9495-8
