Abstract
The Weibull-tail class of distributions is a sub-class of the Gumbel extreme domain of attraction, and it has caught the attention of a number of researchers in the last decade, particularly concerning the estimation of the so-called Weibull-tail coefficient. In this paper, we propose an estimator of this Weibull-tail coefficient when the Weibull-tail distribution of interest is censored from the right by another Weibull-tail distribution: to the best of our knowledge, this is the first one proposed in this context. A corresponding estimator of extreme quantiles is also proposed. In both mild censoring and heavy censoring (in the tail) settings, asymptotic normality of these estimators is proved, and their finite sample behavior is presented via some simulations.
References
Brahimi, B., Meraghni, D., Necir, A.: Approximations to the tail index estimator of a heavy-tailed distribution under random censoring and application. Math. Methods Statist. 24, 266–279 (2015)
Brahimi, B., Meraghni, D., Necir, A.: Nelson-Aalen tail product-limit process and extreme value index estimation under random censorship. Unpublished manuscript, arXiv:1502.03955v2 (2016)
Brahimi, B., Meraghni, D., Necir, A., Soltane, L.: Tail empirical process and a weighted extreme value index estimator for randomly right-censored data. Unpublished manuscript, arXiv:1801.00572 (2018)
Beirlant, J., Dierckx, G., Guillou, A., Fils-Villetard, A.: Estimation of the extreme value index and extreme quantiles under random censoring. Extremes 10, 151–174 (2007)
Beirlant, J., Broniatowski, M., Teugels, J., Vynckier, P.: The mean residual life function at great age: applications to tail estimation. Journal of Statistical Planning and Inference 45, 21–48 (1995)
Beirlant, J., Goegebeur, Y., Segers, J., Teugels, J.: Statistics of extremes: theory and applications. Wiley (2004)
Beirlant, J., Guillou, A., Toulemonde, G.: Peaks-over-threshold modeling under random censoring. Communications in Statistics - Theory and Methods 39, 1158–1179 (2010)
Beirlant, J., Bardoutsos, A., de Wet, T., Gijbels, I.: Bias reduced tail estimation for censored Pareto type distributions. Stat. Prob. Lett. 109, 78–88 (2016)
Beirlant, J., Maribe, G., Verster, A.: Penalized bias reduction in extreme value estimation for censored Pareto-type data, and long-tailed insurance applications. Insurance Math. Econom. 78, 114–122 (2018)
Beirlant, J., Worms, J., Worms, R.: Asymptotic distribution for an extreme value index estimator in a censorship framework. Journal of Statistical Planning and Inference 202, 31–56 (2019)
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular variation. Cambridge University Press, Cambridge (1987)
Csorgo, S.: Probability theory. Independence, interchangeability, martingales. Ann. Stat. 24(6), 2744–2778 (1996)
de Haan, L., Ferreira, A.: Extreme value theory: an introduction. Springer Science+Business Media (2006)
Diebolt, J., Gardes, L., Girard, S., Guillou, A.: Bias-reduced estimators of the Weibull tail-coefficient. Test 17, 311–331 (2008)
Dierckx, G., Beirlant, J., De Waal, D., Guillou, A.: A new estimation method for Weibull-type tails based on the mean excess function. Journal of Statistical Planning and Inference 139, 1905–1920 (2009)
Einmahl, J., Fils-Villetard, A., Guillou, A.: Statistics of extremes under random censoring. Bernoulli 14, 207–227 (2008)
Gardes, L., Girard, S.: Estimating extreme quantiles of Weibull-tail distributions. Communications in Statistics - Theory and Methods 34, 1065–1080 (2005)
Girard, S.: A Hill type estimator of the Weibull-tail coefficient. Communications in Statistics - Theory and Methods 33(2), 205–234 (2004a)
Girard, S.: A Hill type estimator of the Weibull-tail coefficient. HAL archive version: hal-00724602 (2004b)
Goegebeur, Y., Guillou, A.: Goodness-of-fit testing for Weibull-type behavior. Journal of Statistical Planning and Inference 140, 1417–1436 (2010)
Goegebeur, Y., Beirlant, J., de Wet, T.: Generalized kernel estimators for the Weibull-tail coefficient. Communications in Statistics - Theory and Methods 39, 3695–3716 (2010)
Gomes, M.I., Neves, M.M.: Estimation of the extreme value index for randomly censored data. Biometrical Letters 48(1), 1–22 (2011)
Klein, J.P., Moeschberger, M.L.: Data sets for survival analysis - techniques for censored and truncated data. Springer, second edition (2005)
Ndao, P., Diop, A., Dupuy, J.-F.: Nonparametric estimation of the conditional tail index and extreme quantiles under random censoring. Comput. Stat. Data Anal. 79, 63–79 (2014)
Ndao, P., Diop, A., Dupuy, J.-F.: Nonparametric estimation of the conditional extreme-value index with random covariates and censoring. Journal of Statistical Planning and Inference 168, 20–37 (2016)
Reiss, R.: Approximate distributions of order statistics. Springer-Verlag (1989)
Reynkens, T., Verbelen, R., Beirlant, J., Antonio, K.: Modelling censored losses using splicing: a global fit strategy with mixed Erlang and extreme value distributions. Insurance Math. Econom. 77, 65–77 (2017)
Sayah, A., Yahia, D., Brahimi, B.: On robust tail index estimation under random censorship. Afrika Statistika 9, 671–683 (2014)
Stupfler, G.: Estimating the conditional extreme-value index in presence of random right-censoring. J. Multivar. Anal. 144, 1–24 (2016)
Stupfler, G.: On the study of extremes with dependent random right-censoring. Extremes 22, 97–129 (2019)
Worms, J., Worms, R.: New estimators of the extreme value index under random right censoring, for heavy-tailed distributions. Extremes 17(2), 337–358 (2014)
Worms, J., Worms, R.: Moment estimators of the extreme value index for randomly censored data in the Weibull domain of attraction. Unpublished manuscript, arXiv:1506.03765 (2015)
Worms, J., Worms, R.: Extreme value statistics for censored data with heavy tails under competing risks. Metrika 81(7), 849–889 (2018)
Zhou, M.: Some properties of the Kaplan-Meier estimator for independent non-identically distributed random variables. Ann. Statist. 19(4), 2266–2274 (1991)
Appendix
Let us first summarize the contents of the Appendix, which is composed of three main parts.
Part A contains the proof of Theorem 1: after showing that the statistic Δn (defined in formula (A.3)) is the main contributor to the behavior of \(\hat \theta _{X,k}\), three propositions are stated and proved. Two important lemmas are also stated in the proof of the first and main proposition (which describes the asymptotic distribution of Δn): the first one (Lemma 1) deals with all the "remainder" terms, and the second one (Lemma 2) deals with the asymptotic distribution of the proportion \(\hat p_{k}\) of uncensored observations in the tail, depending on the position of θX with respect to θC. These two lemmas are proved in parts C.2 and C.3 of the Appendix.
Part B is then devoted to the proof of Theorem 2.
Part C finally contains other lemmas which are repeatedly useful in the first two parts. In Appendix C.1, the important Lemmas 3 and 4 describe sharp second order properties of the different slowly varying functions which are handled in this work, and of the theoretical probability function p(·) of being uncensored in the tail. In Appendix C.4, the useful Lemmas 5, 6 and 7 are stated (they are taken from the literature, but are provided for ease of reference).
Appendix A: Proof of Theorem 1
Recall that
Introducing E1, …, En, n independent standard exponential random variables such that \(Z_{i}={\Lambda }^{-}_{H}(E_{i})\), we have, since \({\Lambda }^{-}_{H}(x)= x^{\theta _{Z}} l(x)\) and \({\Lambda }_{F} \circ {\Lambda }^{-}_{H} (x)= x^{a} \tilde {l}(x)\) with l and \(\tilde {l}\) slowly varying at infinity,
Now, let
and
Since the denominator in the expression for \(\hat \theta _{X,k}\) above equals
we obtain, using (A.1), (A.2) and the relation θX = θZ/a,
where
We thus have the following representation, which shows that the behavior of the estimation error is essentially driven by the behavior of the statistic Δn:
where the denominator \(D_{n}=L_{nk} M_{n} + a^{-1}L_{nk} R_{n,\tilde {l}} + a^{-1}L_{nk} {\Delta }_{n}\) will turn out to converge to 1. It is now clear that the proof of Theorem 1 follows from the combination of the following three propositions, the first one being the most important and the longest to establish. These propositions are proved in the next three subsections.
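The quantile-transform representation \(Z_{i}={\Lambda }^{-}_{H}(E_{i})\) used throughout this part is easy to check numerically. Below is a minimal sketch (our own illustration, not taken from the paper), assuming the exact Weibull case where \({\Lambda }^{-}_{H}(x)=x^{\theta _{Z}}\) with l ≡ 1; the names theta_Z, E, Z are ours:

```python
import numpy as np

# Sketch: in the exact Weibull case, Lambda_H^-(x) = x**theta_Z (l == 1),
# so Z_i = E_i**theta_Z with E_i standard exponential.  Then
# Lambda_H(Z_i) = Z_i**(1/theta_Z) should again be standard exponential.
rng = np.random.default_rng(0)
theta_Z = 0.5          # illustrative value of the Weibull-tail coefficient
n = 100_000

E = rng.exponential(size=n)
Z = E ** theta_Z       # simulated Weibull-tail observations

# Sanity check: Lambda_H(Z) has mean and variance close to 1
back = Z ** (1.0 / theta_Z)
print(round(back.mean(), 2), round(back.var(), 2))
```

The same transform is what makes the exponential order statistics \(E_{n-j+1,n}\) appear in all the proofs below.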
Proposition 1
Under the conditions of Theorem 1 we have, as n tends to infinity,
and
where \(\bar E_{n}= \frac {1}{k} {\sum }_{i=1}^{k} E_{i}\) (sample mean of standard exponential variables), and
Please note that the exponential variables Ei appearing in the statement of Proposition 1 above are not the same as those introduced at the beginning of this Section.
Proposition 2
Under the conditions of Theorem 1 we have, as n tends to infinity,
Proposition 3
Under condition H1, we have \( L_{nk} M_{n} \overset {\mathbb {P}}{\longrightarrow } 1 \), as n tends to infinity.
Remark 1
First, recall that a = 1 and \(\tilde c=1\) when θX < θC. Let us highlight that the convergence in distribution of \(\sqrt {k} L_{nk}^{1-b}{\Delta }_{n}\) stated in Proposition 1 comes from the interplay between the two terms appearing in the representation (A.5) of Δn: the term in \(\hat {p}_{k}\) and the term involving the exponential sample mean. The convergence in distribution of the term involving \(\hat {p}_{k}\) is detailed in Lemma 2 (stated in Appendix A.1); this will be the leading term only when θX > θC (in this setting, the constant b is positive and thus the exponential term vanishes). When θX < θC, it will only generate a possible bias, and when θX = θC it participates in the asymptotic normality along with the exponential term.
The following corollary is then stated, concerning the statistic RLn defined in Eq. 9 and discussed thereafter. Note that this corollary probably holds under weaker conditions.
Corollary 1
Under the conditions of Theorem 1, as n → ∞, we have \(RL_{n} \overset {\mathbb {P}}{\longrightarrow } a\).
Its proof is short, so we provide it here. With the same notation as in the previous page, we readily have
where the mean inside the large brackets is equivalent to 1/Lnk (see Girard 2004b, formula (15), for a proof). The proof of Corollary 1 thus follows from Propositions 1, 2 and 3.
A.1. Proof of Proposition 1
Starting from the definition of Δn in Eq. A.3, we introduce the first remainder term \( R_{1,k}^{({\Delta })}\) by writing
Now, using the definition of \(\hat {{\Lambda } }_{nF}\) in Eq. 4, we obtain
Hence, it can easily be checked that
where
Since, for all 1 ≤ j ≤ k + 1, \({\Lambda }_{F}(Z_{n-j+1,n}) = ({\Lambda }_{F} \circ {\Lambda }^{-}_{H}) (E_{n-j+1,n}) = E_{n-j+1,n}^{a} \tilde {l}(E_{n-j+1,n})\), where \(\tilde {l}\) is slowly varying and tends to \(\tilde {c}\) at infinity (cf. Lemma 3 in Appendix C.1), then
and, introducing \((\tilde {E}_{1}, \ldots , \tilde {E}_{k})\), k independent standard exponential random variables such that, according to Lemma 5, \((E_{n-j+1,n}-E_{n-k,n})_{1 \leqslant j \leqslant k} \overset {d}{=} (\tilde {E}_{k,k}, {\ldots } , \tilde {E}_{1,k})\), we can write
where
Let us summarize:
But
where \( \bar {E}_{n} = \frac {1}{k} {\sum }_{j=1}^{k} \tilde {E}_{j}\) and
Finally,
The following lemma, proved in Appendix C.2, shows that \(\sqrt {k} L_{nk}^{1-b} {\sum }_{i=1}^{6} R_{i,k}^{({\Delta })}\) tends to a constant.
Lemma 1
Under the assumptions of Theorem 1, as n tends to infinity,
Moreover, we have \(\sqrt {k} \left (\bar {E}_{n} -1 \right ) \overset {d}{\longrightarrow } N(0,1)\), and, according to Lemmas 6 and 7, both \(\frac {L_{nk}}{E_{n-k,n}} \) and \(\frac {{\Lambda }_{F}(Z_{n-k,n})}{\hat {{\Lambda } }_{nF}(Z_{n-k,n})} \) tend to 1 as n → +∞. Hence
where
It remains to study the behavior of Dn, which is done in the following lemma, proved in Appendix C.3.
Lemma 2
Under the assumptions of Theorem 1, we have, as n → +∞:
1. If θX < θC, then \(D_{n}=\sqrt {k} (\hat {p}_{k}-1) \displaystyle \overset {\mathbb {P}}{\longrightarrow } - \frac {\theta _{X}}{\theta _{C}} \frac {c_{G}}{{c_{F}^{d}}} \alpha ^{\prime }\).
2. If θX = θC, then \(D_{n} = \displaystyle \sqrt {k} \left (\frac {\hat {p}_{k}}{p} -1 \right ) \overset {d}{\longrightarrow } N\left (0,\frac {1-p}{p}\right )\), where \(p= \displaystyle \frac {c_{F}}{c_{F}+c_{G}}\).
3. If θX > θC (hence a < 1 and b ∈ ]0, 1/2[), then \(D_{n} \displaystyle \overset {d}{\longrightarrow } N\left (0,\frac {a}{\tilde {c}}\right )\).
Remark 2
Lemma 2 shows, in particular, that the proportion \(\hat {p}_{k}\) of non-censored data in the tail tends to p = 1 if θX < θC, to \(p=\frac {c_{F}}{c_{F}+c_{G}}\) if θX = θC (in this case, p equals \(\tilde {c}\)), and to p = 0 (at rate \(L_{nk}^{a-1}\)) if θX > θC. This has to be linked to the result of Lemma 4 (see Appendix C.1) concerning the limit of the theoretical function \(p(x)=\mathbb {P}(\delta =1|Z=x)\) as x → ∞.
When θX < θC, Lemma 2 states that Dn converges to a constant: hence, via Lemma 1, the leading term in Eq. A.7 is \(\sqrt {k} L_{nk}^{-b}\left (\bar {E}_{n} -1 \right ) = \sqrt {k} \left (\bar {E}_{n} -1 \right ) \overset {d}{\longrightarrow } N(0,1)\), and we thus obtain as desired \( \sqrt {k} L_{nk}^{1-b} {\Delta }_{n} \overset {d}{\longrightarrow } N(m_{\Delta },1)\), where mΔ is defined in the statement of Proposition 1.
When θX = θC, the constant b is still equal to 0 and both Dn and \(\sqrt {k} \left (\bar {E}_{n} -1 \right )\) (which are independent) contribute to the asymptotic normality of Δn, with \(D_{n} - a \sqrt {k} \left (\bar {E}_{n} -1 \right ) \overset {d}{\longrightarrow } N(0,\sigma ^{2}_{\Delta })\) in relation (A.7), where \(\sigma ^{2}_{\Delta } = \frac {1-p}{p} +a^{2} = \frac {1}{\tilde {c}}\). Thus, we obtain \(\sqrt {k} L_{nk}^{1-b} {\Delta }_{n} \overset {d}{\longrightarrow } N(0, \frac {1}{\tilde {c}})\).
Finally, when θX > θC, \(\sqrt {k} L_{nk}^{-b}\left (\bar {E}_{n} -1 \right ) \) tends to 0 and Dn is thus the leading term: we obtain \(\sqrt {k} L_{nk}^{1-b} {\Delta }_{n} \overset {d}{\longrightarrow } N(0, \frac {a}{\tilde {c}})\) as desired.
This ends the proof of Proposition 1.
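The limiting behavior of \(\hat {p}_{k}\) described in Lemma 2 can be illustrated by a small simulation. The sketch below is our own illustration (not the paper's estimator); it uses exact Weibull tails with θX = θC = 1, i.e. exponential X and C, for which p = cF/(cF + cG):

```python
import numpy as np

# Illustration of Lemma 2 in the boundary case theta_X = theta_C:
# with exponential tails (theta = 1), the proportion hat{p}_k of
# uncensored observations among the k largest Z's approaches
# p = c_F / (c_F + c_G).  All names below are illustrative.
rng = np.random.default_rng(1)
c_F, c_G = 1.0, 2.0                      # scale constants of X and C
n, k = 200_000, 5_000

X = rng.exponential(1.0 / c_F, size=n)   # Lambda_F(x) = c_F * x
C = rng.exponential(1.0 / c_G, size=n)   # Lambda_G(x) = c_G * x
Z = np.minimum(X, C)
delta = (X <= C)                         # censoring indicator

top = np.argsort(Z)[-k:]                 # indices of the k largest Z's
p_hat = delta[top].mean()
print(round(p_hat, 2))                   # close to c_F/(c_F+c_G) = 1/3
```

In the mild censoring case θX < θC the same experiment gives a fraction close to 1, and in the heavy censoring case θX > θC a fraction close to 0, in line with Remark 2.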
A.2. Proof of Proposition 2
Recall from Eq. A.4 that
Let A > 1. Under condition Rl(B, ρ), we have for all ε > 0 and t sufficiently large
We only prove the result for Rn,l, the proof for \( R_{n,\tilde {l}}\) being very similar, using \(R_{\tilde {l}}(\tilde {B}, \tilde {\rho })\) instead of Rl(B, ρ). Note that
where \( \xi _{j,n}= \frac {l(E_{n-j+1,n})}{l(E_{n-k,n})} -1\) tends to 0 uniformly in j, because l is slowly varying and \(\frac {E_{n-j+1,n}}{E_{n-k,n}}\) tends to 1 uniformly in j, according to Lemma 6 stated in Appendix C.4. Hence, using the following inequality,
and the fact that \(x_{j,n} := \frac {E_{n-j+1,n}}{E_{n-k,n}} \geqslant 1\) tends to 1 uniformly in j, we obtain that for all ε > 0 and n sufficiently large,
omitting the lower bound, which is treated similarly. Since Kρ(1 + x) ∼ x when x tends to 0, \(K_{\rho }(x_{j,n}) \sim \frac {E_{n-j+1,n}-E_{n-k,n}}{E_{n-k,n}}\), uniformly in j. By Lemma 5 (also stated in Appendix C.4), \(\frac {E_{n-j+1,n}-E_{n-k,n}}{E_{n-k,n}} \overset {d}{=} \frac {\tilde {E}_{k-j+1,k}}{E_{n-k,n}} \). Hence, it is easy to prove that
Since B is regularly varying and \(\frac {E_{n-k,n}}{L_{nk}} \rightarrow 1\), then \(\frac {B(E_{n-k,n})}{E_{n-k,n}} \sim \frac {B(L_{nk})}{L_{nk}}\) and consequently
We conclude using assumption Rl(B, ρ) and conditions H2(i), H3(i) or H4(ii), because |B| is regularly varying of order ρ, and we have \(\rho =\tilde \rho \) when θX ≤ θC, and \(\rho \leqslant \tilde \rho \) when θX > θC (see Lemma 3 in Appendix C.1).
A.3. Proof of Proposition 3
Recall that
Since \(\frac {E_{n-j+1,n}}{\log (n/j)} \overset {\mathbb {P}}{\longrightarrow } 1 \) and \(\frac {L_{nk}}{\log (n/j)} \overset {\mathbb {P}}{\longrightarrow } 1 \), uniformly in j = 1, …, k (see Lemma 6), then \(\frac {E_{n-j+1,n}}{E_{n-k,n}}\overset {\mathbb {P}}{\longrightarrow } 1 \), uniformly in j = 1, …, k. By Lemma 5, \((E_{n-j+1,n}-E_{n-k,n})_{1 \leqslant j \leqslant k} \overset {d}{=} (\tilde {E}_{k,k}, {\ldots } , \tilde {E}_{1,k}) \). Therefore
with \( \frac {1}{k} {\sum }_{j=1}^{k} \tilde {E}_{j} \rightarrow 1\), a.s. Hence, LnkMn also tends to 1, in probability, as desired.
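The Rényi-type representation of Lemma 5, used repeatedly above, can itself be checked numerically. The sketch below (our own, with illustrative names) verifies that the excesses of the top exponential order statistics over \(E_{n-k,n}\) behave like a fresh standard exponential sample:

```python
import numpy as np

# Numerical illustration of the Renyi-type representation (Lemma 5):
# for a standard exponential sample, the excesses
# E_{n-j+1,n} - E_{n-k,n}, j = 1..k, are jointly distributed as the
# order statistics of k fresh standard exponential variables.
rng = np.random.default_rng(2)
n, k = 50_000, 2_000

E = np.sort(rng.exponential(size=n))
excesses = E[-k:] - E[-(k + 1)]    # E_{n-j+1,n} - E_{n-k,n}, j = 1..k

# Their sample mean plays the role of bar{E}_n in the proofs above:
# it should be close to 1 (the mean of a standard exponential).
print(round(excesses.mean(), 2))
```

This is exactly the mechanism behind the law of large numbers used in the proof of Proposition 3.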
Appendix B: Proof of Theorem 2
Starting from \(x_{p_{n}} =\overline {F}^{-1}(p_{n}) \) and the definition of \(\hat {x}_{p_{n}}\) in Eq. 5, we obtain
Hence
First of all, the result of Theorem 1 implies that
Then, Lemma 7 (stated in Appendix C.4) implies that \((\hat {{\Lambda } }_{nF}/{\Lambda }_{F})(Z_{n-k,n}) -1 = O_{\mathbb {P}} \left (1/(\sqrt {k}{\Lambda }_{F} (Z_{n-k,n}))\right )\). Hence
Now, recall that \( {\Lambda }_{F}(Z_{n-k,n}) = {\Lambda }_{F} \circ {\Lambda }_{H}^{-} (E_{n-k,n}) = E_{n-k,n}^{a} \tilde {l}(E_{n-k,n})\). Hence, the asymptotic normality of \((\hat {\theta }_{X,k}-\theta _{X}) \) yields
The additional condition \(H^{\prime }_{1}\) of Theorem 2, along with Lemma 6, implies that this term tends to 0 in probability.
Finally, Lemma 3 implies that
where v and \(\bar {v}\) are slowly varying. Hence, \( \frac {\sqrt {k}L_{nk}^{-b}}{\log \log (1/p_{n})} Q_{4,n} \) tends to 0 as soon as there exists some 0 < δ < 1 such that \(\frac {\sqrt {k}L_{nk}^{-b}}{\log \log (1/p_{n})} (\log {1/p_{n}})^{\theta _{X} \rho _{F}+\delta } = O(1)\) and \(\frac {\sqrt {k}L_{nk}^{-b}}{\log \log (1/p_{n})} Z_{n-k,n}^{\rho _{F} + \delta } =O_{\mathbb {P}}(1)\). Recall that \(Z_{n-k,n} = E^{\theta _{Z}}_{n-k,n} l(E_{n-k,n})\). Hence, condition \(H^{\prime }_{1}\) guarantees that we only need to show that \(\sqrt {k}L_{nk}^{-b+ \theta _{X} \rho _{F}} = O(1)\) and \(\sqrt {k}L_{nk}^{-b+ \theta _{Z} \rho _{F}} = O(1)\). When θX = θZ < θC, this is due to the additional condition H2(iv). When θX = θZ = θC, it is due to condition H3(i). Finally, when θX > θZ = θC, it is due to H4(ii).
Appendix C: More technical aspects
C.1. Details on the second order properties
Recall that the starting assumption of this paper is relation (6),
where lF and lG are slowly varying. It is then easy to prove that
where θZ = min(θX, θC), a = θZ/θX, and \(\bar {l}_{F}\), \(\bar {l}_{G}\), l and \(\tilde {l}\) are slowly varying.
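For the reader's convenience, here is a sketch of why θZ = min(θX, θC) and a = θZ/θX, written in the exact Weibull case where the slowly varying factors are constants (the general case replaces the constants by slowly varying functions):

```latex
% Exact Weibull case: l_F \equiv l_G \equiv 1, so that
% \Lambda_F(x) = c_F\, x^{1/\theta_X} and \Lambda_G(x) = c_G\, x^{1/\theta_C}.
% Independence of X and C gives \bar H = \bar F\, \bar G, hence
\Lambda_H(x) = \Lambda_F(x) + \Lambda_G(x)
             = c_F\, x^{1/\theta_X} + c_G\, x^{1/\theta_C}
             \sim c\, x^{1/\theta_Z},
\qquad \theta_Z = \min(\theta_X, \theta_C),
% since the larger exponent dominates as x \to \infty.  Inverting,
\Lambda_H^{-}(x) \sim (x/c)^{\theta_Z},
% which is of the announced form x^{\theta_Z} l(x), and composing,
\Lambda_F \circ \Lambda_H^{-}(x) \sim c_F\, c^{-a}\, x^{a},
\qquad a = \theta_Z/\theta_X .
```

The constant \(c_F c^{-a}\) plays the role of the limit \(\tilde c\) of \(\tilde l\) in this exact case.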
More precisely, we have the following lemma, under the second order condition (7), which is called upon on several occasions in this paper.
Lemma 3
Under Assumptions (A1) and (A2), we have,
for different slowly varying functions generically denoted by v, with
and
The proof of this lemma is based on Theorem B.2.2 in de Haan and Ferreira (2006) as well as on the concept of de Bruyn conjugate (see Proposition 2.5 in Beirlant et al. 2004). Details are omitted for brevity.
Remark 3
It is clear that all the aforementioned slowly varying functions satisfy the second order condition SR2 with the corresponding second order parameters defined in the previous lemma. In particular, the rate functions B and \(\tilde {B}\) associated, respectively, with l and \(\tilde {l}\) satisfy \(x^{\tilde {\rho }}v(x)/\tilde {B}(x) \rightarrow -1/\tilde {\rho }\) and \(x^{\rho }v(x)/B(x) \rightarrow -1/\rho \), as x → +∞, with v the appropriate slowly varying function (see again Theorem B.2.2 in de Haan and Ferreira 2006).
Let us introduce, as in Einmahl et al. (2008), the function p(·) defined by
The following lemma provides useful developments of the functions p and \(p \circ {\Lambda }_{H}^{-}\). In particular, it provides details about the rate of convergence of p(x), as x → +∞ (to a limit which was denoted by p in the statement of Lemma 2, as the limit of the sequence \(\hat p_{k}\)). Its proof is based on the fact that
where f and g are respectively the derivatives of F and G, as well as on the results of Lemma 3. It is omitted for brevity.
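Although the displayed relation is not reproduced here, the identity underlying the proof is presumably the standard one for randomly right-censored data with X and C independent:

```latex
% With f and g the densities of F and G, and X independent of C,
% Z = \min(X, C) has density h = f\,\bar G + g\,\bar F, and
p(z) = \mathbb{P}(\delta = 1 \mid Z = z)
     = \frac{f(z)\,\bar G(z)}{f(z)\,\bar G(z) + g(z)\,\bar F(z)} .
```

Taking z → +∞ in this ratio is what produces the three limits p = 1, cF/(cF + cG) and 0 according to the position of θX with respect to θC.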
Lemma 4
Under assumptions (A1) and (A2), we have
In particular, as x → +∞,
Moreover, we have
where d = θX/θC, v is a generic notation for a slowly varying function, and
C.2. Proof of Lemma 1
-
Recall that
$$ \begin{array}{@{}rcl@{}} R_{1,k}^{({\Delta})} & = & \displaystyle {\Delta}_{n} - \frac{1}{k} \sum\limits_{j=1}^{k} \left( \frac{\hat{{\Lambda} }_{nF}(Z_{n-j+1,n})}{ {\Lambda}_{F}(Z_{n-j+1,n})} \frac{ {\Lambda}_{F}(Z_{n-k,n}) }{\hat{{\Lambda} }_{nF}(Z_{n-k,n})} - 1\right)\\ & = & \displaystyle \frac{1}{k} \sum\limits_{j=1}^{k} \left( \log(1+\xi_{j,n}) - \xi_{j,n} \right), \end{array} $$
where
$$ \xi_{j,n} = \displaystyle \frac{\hat{{\Lambda} }_{nF}(Z_{n-j+1,n})}{ {\Lambda}_{F}(Z_{n-j+1,n})} \frac{ {\Lambda}_{F}(Z_{n-k,n}) }{\hat{{\Lambda} }_{nF}(Z_{n-k,n})} - 1 . $$
Introducing \( {\Delta }_{j} = \hat {{\Lambda } }_{nF}(Z_{n-j+1,n}) - {\Lambda }_{F}(Z_{n-j+1,n})\), for j = 1, …, k + 1 (which must not be confused with the Δn defined earlier in relation (A.3)), we readily have
$$ \xi_{j,n} = \displaystyle \frac{{\Lambda}_{F}(Z_{n-k,n})}{\hat{{\Lambda} }_{nF}(Z_{n-k,n})} \left( {\Delta}_{j} \frac{{\Lambda}_{F}(Z_{n-k,n})}{{\Lambda}_{F}(Z_{n-j+1,n})} - {\Delta}_{k+1} \right) \frac{1}{{\Lambda}_{F}(Z_{n-k,n})}. $$
Lemma 7 (in Appendix C.4) implies that \( |{\Delta }_{j}| = O_{\mathbb {P}} (1/\sqrt {j-1})\) for all j = 2, …, k + 1, \(|{\Delta }_{1}| = O_{\mathbb {P}} (1)\), and that \( \frac {{\Lambda }_{F}(Z_{n-k,n})}{\hat {{\Lambda } }_{nF}(Z_{n-k,n})}\) tends to 1, in probability.
Now let E1, …, En be n independent standard exponential random variables such that \( \frac {1}{{\Lambda }_{F}(Z_{n-k,n})} = \frac {E_{n-k,n}^{-a}}{\tilde {l}(E_{n-k,n})}\), where \(\tilde {l}\) tends to \(\tilde {c}\) at infinity. Moreover, \(\frac {{\Lambda }_{F}(Z_{n-k,n})}{{\Lambda }_{F}(Z_{n-j+1,n})} \leqslant 1\) and \(\frac {E_{n-k,n}}{L_{nk}}\) tends to 1 (see Lemma 6). Thus, we obtain \(|\xi _{1,n} | \leqslant (1 + o_{\mathbb {P}}(1)) \left (O_{\mathbb {P}} (1) + O_{\mathbb {P}} (1/\sqrt {k}) \right ) L_{nk}^{-a} (1/\tilde {c} + o_{\mathbb {P}}(1))\) and
$$ |\xi_{j,n} | \leqslant (1 + o_{\mathbb{P}}(1)) \left( O_{\mathbb{P}} (1/\sqrt{j-1}) + O_{\mathbb{P}} (1/\sqrt{k}) \right) L_{nk}^{-a} (1/\tilde{c} + o_{\mathbb{P}}(1)), \text{ for } j=2, \ldots, k . $$
Therefore \(\xi _{1,n}^{2} \leqslant O_{\mathbb {P}}(1) L_{nk}^{-2a}\) and
$$ \xi_{j,n}^{2} \leqslant O_{\mathbb{P}}(1) \frac{L_{nk}^{-2a}}{j-1} \text{ for } j=2, \ldots, k. $$
Consequently, since a > 0, \(\sup _{1\leqslant j \leqslant k}|\xi _{j,n}|\) tends to 0, in probability, and thus, using the inequality \(0 \leqslant x - \log (1+x) \leqslant x^{2}\) (for all \(x \geqslant -1/2\)), we obtain
$$ 0 \leqslant -R_{1,k}^{({\Delta})} \leqslant \frac{1}{k} \sum\limits_{j=1}^{k} \xi_{j,n}^{2}. $$
But \( \frac {1}{k} {\sum }_{j=1}^{k} 1/j \sim \frac {\log k}{k}\). Hence
$$ 0 \leqslant -\sqrt{k} L_{nk}^{1-b} R_{1,k}^{({\Delta})} \leqslant O_{\mathbb{P}}(1) \frac{\log k}{\sqrt{k}} L_{nk}^{1-b-2a}. $$
Let ε > 0. We have 1 − b − 2a = 3b − 1, and so we want
$$ \sqrt{k}(\log k)^{-1} L_{nk}^{1-3b} = (k^{\epsilon}/\log k) \left( \sqrt{k} L_{nk}^{(1-3b)/(1-2\epsilon)}\right)^{1-2\epsilon} $$
to go to +∞. This is automatic when 0 ≤ b ≤ 1/3. If b > 1/3 (i.e. when θX > 3θC), we can write (1 − 3b)/(1 − 2ε) = 1 − 3b − δ for some positive δ and small enough ε, and we have \(\sqrt {k} L_{nk}^{1-3b-\delta } = \sqrt {k}L_{nk}^{-b} \times L_{nk}^{-2b+1-\delta }\): the first factor goes to infinity (it is the CLT rate, assumption H4(i)), and so does the second factor for δ (i.e. ε) small enough, because b is always smaller than 1/2.
-
Recall that
$$ R_{2,k}^{({\Delta})} = \frac{1}{{\Lambda}_{F}(Z_{n-k,n})} \frac{1}{k} \sum\limits_{j=1}^{k} \left( \hat{{\Lambda} }_{nF}(Z_{n-j+1,n}) - {\Lambda}_{F}(Z_{n-j+1,n}) \right) \left( \frac{{\Lambda}_{F}(Z_{n-k,n})}{{\Lambda}_{F}(Z_{n-j+1,n})} -1 \right) $$
and that \( \frac {{\Lambda }_{F}(Z_{n-k,n})}{{\Lambda }_{F}(Z_{n-j+1,n})} = x_{j,n}^{-a} \frac {\tilde {l}(E_{n-k,n})}{\tilde {l}(E_{n-j+1,n})}\), where \(x_{j,n} = \frac {E_{n-j+1,n}}{E_{n-k,n}} \rightarrow 1\), uniformly in j (see Lemma 6). Hence, using the fact that \(\sup _{1\leqslant j \leqslant k} |\hat {{\Lambda } }_{nF}(Z_{n-j+1,n}) - {\Lambda }_{F}(Z_{n-j+1,n}) | = O_{\mathbb {P}}(1)\) (see Lemma 7), we obtain
$$ | R_{2,k}^{({\Delta})} | \leqslant O_{\mathbb{P}}(1) \frac{E_{n-k,n}^{-a}}{\tilde{l}(E_{n-k,n})} \left( \frac{1}{k} \sum\limits_{j=1}^{k} |x_{j,n}^{-a} -1| + \frac{1}{k} \sum\limits_{j=1}^{k} x_{j,n}^{-a} \left| \frac{\tilde{l}(E_{n-k,n})}{\tilde{l}(E_{n-j+1,n})} - 1 \right| \right). $$
Introducing, once again, \(\tilde {E}_{1}, \ldots , \tilde {E}_{k}\), k independent standard exponential random variables such that \(\frac {E_{n-j+1,n}-E_{n-k,n}}{E_{n-k,n}} \overset {d}{=} \frac {\tilde {E}_{k-j+1,k}}{E_{n-k,n}} \) (see Lemma 5), and using a Taylor expansion, we have
$$ | R_{2,k}^{({\Delta})}| \leqslant O_{\mathbb{P}}(1) E_{n-k,n}^{-a} \left( \frac{1}{k} \sum\limits_{j=1}^{k} \frac{\tilde{E}_{k-j+1,k}}{E_{n-k,n}} + \frac{1}{k} \sum\limits_{j=1}^{k} \left| \frac{\tilde{l}(E_{n-k,n})}{\tilde{l}(E_{n-j+1,n})}-1 \right| \right). $$
Since \(\bar {E}_{n} = \frac {1}{k} {\sum }_{j=1}^{k} \tilde {E}_{j}\) and \(\frac {E_{n-k,n}}{L_{nk}}\) tend to 1, in probability, the first term of the right hand side multiplied by \(\sqrt {k} L_{nk}^{1-b} \) tends to 0, since \(\sqrt {k} L_{nk}^{-a-b} \) tends to 0 under condition H2(iii), H3(ii) or H4(iv). For the second term of the right hand side, we proceed as for \(R_{n,\tilde {l}}\) (see the proof of Proposition 2), by using the fact that condition \(R_{\tilde {l}}(\tilde {B}, \tilde {\rho })\) implies \(R_{1/\tilde {l}}(-\tilde {B}, \tilde {\rho })\) and, again, that \(\sqrt {k} L_{nk}^{-a-b} \) tends to 0.
-
Recall that
$$ R_{3,k}^{({\Delta})}= \frac{ \hat{p}_{k}}{E_{n-k,n}^{a}} \left( \frac{1}{\tilde{l}(E_{n-k,n})} - \frac{1}{\tilde{c}} \right), $$
where, according to Lemma 3, we have \(1-\frac {\tilde {l}(x)}{\tilde {c}} = x^{\tilde {\rho }} v(x) \), with v slowly varying. Hence,
$$ R_{3,k}^{({\Delta})}= (1+o_{\mathbb{P}}(1)) E_{n-k,n}^{-a} \frac{ \hat{p}_{k}}{\tilde{c}} E_{n-k,n}^{\tilde{\rho}} v(E_{n-k,n}). $$
We prove, in Lemma 2 (stated in Appendix A.1), that \(L_{nk}^{1-a} \frac {\hat {p}_{k}}{\tilde {c}}\) tends to a. Moreover, since v is slowly varying and \(\frac {E_{n-k,n}}{L_{nk}}\) tends to 1 (see Lemma 6), we obtain
$$ \sqrt{k} L_{nk}^{1-b} R_{3,k}^{({\Delta})} = a (1+o_{\mathbb{P}}(1)) \sqrt{k} L_{nk}^{-b+ \tilde{\rho}} v(L_{nk}). $$
This term tends to 0 in the case \(\theta _{X} \geqslant \theta _{C}\), under condition H3(i) or H4(ii). In the case θX < θC, we use the fact that \(\frac {x^{\tilde {\rho }} v(x)}{\tilde {B}(x)} \rightarrow -\frac {1}{\tilde {\rho }}\) (see Remark 3 in Appendix C.1). Therefore,
$$ \sqrt{k} L_{nk}^{1-b} R_{3,k}^{({\Delta})} = -\frac{1}{\tilde{\rho}} (1+o_{\mathbb{P}}(1)) \sqrt{k} L_{nk}^{-b} \tilde{B}(L_{nk}), $$
which tends to \(-\frac { \tilde {\alpha }}{\rho }\) under condition H2(ii), since \(\rho =\tilde {\rho }\) in this case.
-
Recall that
$$ R_{4,k}^{({\Delta})} = - \frac{1}{k} \sum\limits_{j=1}^{k} \left( \frac{E_{n-j+1,n}}{E_{n-k,n}} \right)^{a} \left( \frac{\tilde{l}(E_{n-j+1,n})}{\tilde{l}(E_{n-k,n})} -1 \right). $$
The treatment of this term is very similar to that of \(R_{n,\tilde {l}}\) (see the proof of Proposition 2). It relies on condition \(R_{\tilde {l}}(\tilde {B}, \tilde {\rho })\), as well as on H2(ii), H3(i) or H4(ii). It is thus omitted.
-
Recall that
$$ R_{5,k}^{({\Delta})} = - \frac{1}{k} \sum\limits_{j=1}^{k} \left\{ \left( \left( 1+ \frac{\tilde{E}_{k-j+1,k}}{E_{n-k,n}} \right)^{a}-1 \right) - a \frac{\tilde{E}_{k-j+1,k}}{E_{n-k,n}} \right\}. $$
This term is 0 in the case θX ≤ θC (a = 1). So we only consider the case θX > θC (where 0 < a < 1). It is clear (see Lemmas 5 and 6) that \(\xi _{j,n} = \frac {\tilde {E}_{k-j+1,k}}{E_{n-k,n}} \overset {d}{=} \frac {E_{n-j+1,n}}{E_{n-k,n}} -1\) tends to 0, uniformly in j. Hence, by a Taylor expansion, we obtain
$$ \begin{array}{@{}rcl@{}} R_{5,k}^{({\Delta})} & = & - (1+ o_{\mathbb{P}}(1)) \frac{1}{k} {\sum}_{j=1}^{k} \frac{a(a-1)}{2} \xi_{j,n}^{2}\\ & \overset{d}{=} & (1+ o_{\mathbb{P}}(1)) \frac{a(1-a)}{2} \frac{1}{E_{n-k,n}^{2}} \frac{1}{k} {\sum}_{j=1}^{k} \tilde{E}_{j}^{2} \sim a(1-a) L_{nk}^{-2}, \text{ (in probability)}, \end{array} $$
and we conclude using H4(iv).
-
Finally, recall that
$$ R_{6,k}^{({\Delta})}= \frac{ \hat{p}_{k}}{\tilde{c} E_{n-k,n}} \left( E_{n-k,n}^{1-a} - L_{nk}^{1-a} \right). $$
This term is 0 in the case θX ≤ θC (a = 1). So we only consider the case θX > θC, where 0 < a < 1 and \( \hat {p}_{k}\) tends to 0 (see Lemma 2 in Appendix A.1). By the mean value theorem,
$$ E_{n-k,n}^{1-a} - L_{nk}^{1-a} = (1-a) L_{nk}^{-a} \left( \frac{\widetilde{L}_{nk}}{L_{nk}} \right)^{-a}(E_{n-k,n} - L_{nk}), $$
where \(\widetilde {L}_{nk}\) lies between Lnk and En−k,n. Hence \(\frac {\widetilde {L}_{nk}}{L_{nk}}\) tends to 1 and, since \(\sqrt {k}(E_{n-k,n} - L_{nk}) \overset {d}{\longrightarrow } N(0,1)\) (see Lemma 6), we have
$$ \sqrt{k} L_{nk}^{1-b} |R_{6,k}^{({\Delta})}| \leqslant o_{\mathbb{P}}(1)L_{nk}^{-b-a}=o_{\mathbb{P}}(1). $$
C.3. Proof of Lemma 2
The function p(·) below has already been defined in Appendix C.1:
Proceeding as in Einmahl et al. (2008), we carry on the proof by considering now that δi is related to Zi by
where (Ui)i≤n denotes an independent sequence of standard uniform variables, independent of the sequence (Zi)i≤n. We denote by U[1,n], …, U[n,n] the (unordered) values of the uniform sample pertaining to the order statistics Z1,n ≤ … ≤ Zn,n of the observed sample Z1, …, Zn.
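This coupling is classical and easy to illustrate by simulation: generating δ as 1{U ≤ p(Z)} with U uniform and independent of Z reproduces the law of the censoring indicator. A minimal sketch (our own, with exponential X and C so that p(z) is constant; all names are illustrative):

```python
import numpy as np

# Sketch of the coupling delta_i = 1{U_i <= p(Z_i)} (as in Einmahl et
# al. 2008), in the exponential case where p(z) is the constant
# lam_F / (lam_F + lam_G).
rng = np.random.default_rng(3)
lam_F, lam_G, n = 1.0, 2.0, 100_000
p = lam_F / (lam_F + lam_G)

# Direct construction: observe Z = min(X, C) and delta = 1{X <= C}
X = rng.exponential(1 / lam_F, size=n)
C = rng.exponential(1 / lam_G, size=n)
delta_direct = (X <= C)

# Coupled construction: Z drawn from the law of min(X, C), delta from U
Z = rng.exponential(1 / (lam_F + lam_G), size=n)
U = rng.uniform(size=n)
delta_coupled = (U <= p)

print(round(delta_direct.mean(), 2), round(delta_coupled.mean(), 2))
```

Both empirical frequencies agree with p, which is the property exploited in the decomposition of the proof below.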
Recall that \(Z_{i}= {\Lambda }^{-}_{H}(E_{i})\), where E1, …, En are independent standard exponential random variables. We introduce, for every 1 ≤ i ≤ n, the standard uniform random variables Vi = 1 − exp(−Ei) such that \(Z_{i}= {\Lambda }^{-}_{H}(-\log (1-V_{i}))\), and define the function
Lemma 4 (in Appendix C.1) provides valuable information about the behavior of r(·). We now write
Whatever the position of θX versus θC, we will prove below that the term T1,k above converges to 0 in probability. It turns out that this amounts to proving that, for some positive sequence vn = o(1/n) (to be chosen later) and some constant c > 0,
As a matter of fact, if we introduce the events
then, since \(|\mathbb {I}_{U\leqslant a}-\mathbb {I}_{U\leqslant b}| \overset {d}{=} \mathbb {I}_{U\leqslant |a-b|}\) for any standard uniform U and constants a, b in [0, 1], it follows that
for any given δ > 0 and η > 0. The second term on the right-hand side is (by Markov's inequality) smaller than \(\tilde c \delta / \eta \) (which is arbitrarily small), the third term is equal to nvn(1 + o(1)) = o(1), and the fourth term is arbitrarily small (for c large enough) by the weak convergence of the uniform tail quantile process. Therefore, we are left to prove that \(\sqrt {k}L_{nk}^{b} S_{n,k}=o(1)\) (i.e. relation (C.1)), so that \(T_{1,k}=o_{\mathbb {P}}(1)\) will be proved. This is done in the different cases distinguished below, along with the treatment of the main term T2,k.
The whole proof heavily relies on the first and second order developments stated in Lemma 4 of Appendix C.1, concerning the function \(p\circ {\Lambda }_{H}^{-}\).
1. Case θX < θC
In this situation, we have a = 1, b = 0, \(\tilde {c}=1\) and \(p=\lim _{z \rightarrow + \infty } p(z) = \lim _{t\searrow 0} r(t)=1\) via Lemma 4. Hence
where \(T^{\prime }_{2,k}\) turns out to be a sum of centered independent random variables. Let us now prove that \(T^{\prime }_{2,k}=o_{\mathbb {P}}(1)\), that \(T^{\prime \prime }_{2,k}\) tends to Aα′ (where \(A=\frac {\theta _{X}}{\theta _{C}} \frac {c_{G}}{{c_{F}^{d}}}\) and α′ is defined in condition H2(iii)), and that \(\sqrt {k}S_{n,k}\to 0\) (hence, as explained above, \(T_{1,k}=o_{\mathbb {P}}(1)\)).
Concerning \(T^{\prime }_{2,k}\), by definition of r(⋅) and thanks to Lemma 4, we have
Therefore, since \(\log(n/j)/L_{nk}\) tends to 1 uniformly in j under condition H1 (Lemma 6), we obtain
which implies that \( \mathbb {V}(T^{\prime }_{2,k})\) tends to 0, since d < 1.
Concerning \(T^{\prime \prime }_{2,k}\), we have similarly, using now assumption H2(iii) and Lemma 6 (\(\log(n/j) \sim L_{nk}\)),
Let us now deal with \(\sqrt {k}S_{n,k}\). From now on, let cst denote some generic positive constant. Since r(t) converges to 1 as t → 0, and thanks to Lemma 4, we have, for s and t small,
Introducing the set \(Z_{n}=\{ (s,t) ; 1/n \leqslant t \leqslant k/n , |t-s|\leqslant c\sqrt {k}/n , s\geqslant v_{n} \}\) and recalling that \(v_{n} = o(1/n)\) (an appropriate sequence will be chosen in a few lines), applying the mean value theorem to the function \(h(t) = (-\log t)^{d-1}\), whose derivative \(h^{\prime}(t) = (1-d)t^{-1}(-\log t)^{d-2}\) is positive, yields for large n (below, u = u(s, t) denotes some appropriate value between s and t)
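This mean value theorem step can be sketched as follows (our reconstruction of the bookkeeping; cst denotes a generic constant, as in the text):

```latex
|h(t)-h(s)| \;=\; |t-s|\, h'(u)
\;=\; |t-s|\,(1-d)\,u^{-1}(-\log u)^{d-2}
\;\leqslant\; \mathrm{cst}\,\frac{\sqrt{k}}{n}\, v_n^{-1}\, L_{nk}^{\,d-2},
```

using \(|t-s|\leqslant c\sqrt{k}/n\), \(u \geqslant \min(s,t) \geqslant v_{n}\) on \(Z_{n}\), and the fact that \(u \leqslant t + c\sqrt{k}/n \leqslant \mathrm{cst}\, k/n\) implies \((-\log u)^{d-2} \leqslant \mathrm{cst}\, L_{nk}^{d-2}\) because d − 2 < 0. This controls only the increment of h; the remaining factors appearing in \(S_{n,k}\) are handled in the text.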
This is the first step towards the proof of \(\sqrt {k}S_{n,k}=o(1)\). The second step requires the same treatment for the function \(\tilde h(t)=(-\log t)^{d-1-\beta }v(-\log t)\), where v(⋅) is slowly varying at infinity. It is known (cf. Bingham et al. 1987, page 15) that \(xv^{\prime}(x)/v(x) \to 0\) and \(x^{-\beta}v(x) \to 0\) as \(x \to \infty\), so that
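The routine derivative computation behind this step reads, in our sketch (with x = −log t),

```latex
\tilde h'(t) \;=\; -\frac{1}{t}\, x^{\,d-2-\beta}\, v(x)
\left( (d-1-\beta) + \frac{x\, v'(x)}{v(x)} \right),
\qquad x = -\log t ,
```

so that, since \(xv^{\prime}(x)/v(x) \to 0\) and \(x^{-\beta}v(x) \to 0\), the parenthesis stays bounded and \(|\tilde h^{\prime}(t)| = \frac{1}{t}(-\log t)^{d-2}\, x^{-\beta}v(x)\, O(1) = \frac{1}{t}(-\log t)^{d-2}\, o(1)\), which is the form used in the next line.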
where x denotes (−log t), which is large when t is close to 0. Therefore, taking into account all the previous findings, and considering the choice \(v_{n} = k^{-\epsilon}/n = o(1/n)\), we have proved that for n large
which turns out to be o(1) as soon as 0 < δ < d/2, thanks to assumption H2(iii). This ends the proof of Lemma 2 in the mild censoring case \(\theta_{X} < \theta_{C}\).
2. Case \(\theta_{X} = \theta_{C}\)
In this case, we again have a = 1 and b = 0, but now \(\tilde {c}=\frac {c_{F}}{c_{F}+c_{G}} =p=\lim _{z \rightarrow \infty } p(z) = \lim _{t\searrow 0} r(t)\) via Lemma 4. It is then clear that
Let us prove that \(T^{\prime }_{2,k} \overset {d}{\longrightarrow } N(0,\frac {1-p}{p})\), while \(T^{\prime \prime }_{2,k}\) and \(\sqrt {k} S_{n,k}\) are both o(1).
Concerning \(T^{\prime }_{2,k}\), we have
which tends to \(\frac {1-p}{p}\), since r(j/n) tends to p uniformly in j (see Lemma 4). We conclude, for this term, using Lyapunov's theorem (details are omitted; note that r(j/n) ≤ 1).
Concerning \(T^{\prime \prime }_{2,k}\), since Lemma 4 yields \(r(t) = p\,(1 - (-\log t)^{\rho}v(-\log t))\), we have (for some δ > 0)
where we set \(u_{n,j} = \log(n/j)/L_{nk}\), which tends to 1 uniformly in j thanks to condition H1, and used the fact that \(v(\log(n/j)) \sim v(L_{nk})\) because \(v \in RV_{0}\). The Riemann sum on the right-hand side converges to 1, so for a choice of δ satisfying assumption H3(i), we have proved that \(T^{\prime \prime }_{2,k}=o(1)\).
Concerning now \(\sqrt {k}S_{n,k}\), we proceed similarly to the first case. Introducing \(\tilde h(t)=(-\log t)^{\rho }v(-\log t)\), where v(⋅) is slowly varying at infinity, we have as previously \(|\tilde h^{\prime }(t)|=\frac 1 t (-\log t)^{\rho -1+\epsilon }o(1)\) as t → 0, for any small ε > 0. Therefore, Lemma 4, the definitions of \(S_{n,k}\) and of the set \(Z_{n}\), along with the mean value theorem, yield
Choosing, in the definition of \(S_{n,k}\), the sequence \(v_{n} = k^{-\epsilon}/n = o(1/n)\) for some small ε > 0, we have
which turns out to be o(1) according to assumption H3(i) (if \(\rho \geqslant 1\)) or H3(ii) (if ρ < 0), as soon as δ is sufficiently small. This ends the proof of Lemma 2 in the semi-strong censoring case \(\theta_{X} = \theta_{C}\).
3. Case \(\theta_{X} > \theta_{C}\)
Now we are in the situation where a < 1, \(b = (1-a)/2 \in \,]0, 1/2[\), and \(\tilde {c}=\frac {c_{F}}{{c_{G}^{a}}}\) differs from \(p=\lim _{z \rightarrow \infty } p(z) = \lim _{t\searrow 0} r(t)=0\). Since 1 − a − b = b, we readily have
Let us prove that \(T^{\prime }_{2,k} \overset {d}{\longrightarrow } N(0,\frac {a}{\tilde {c}})\), while \(T^{\prime \prime }_{2,k}\) and \(\sqrt {k} L_{nk}^{b} S_{n,k}\) are both o(1) (the latter will guarantee that \(T_{1,k}=o_{\mathbb {P}}(1)\)).
Concerning \(T^{\prime }_{2,k}\), we have
Lemma 4 yields the following first order development, as t → 0,
Since \(u_{n,j} = \log(n/j)/L_{nk}\) tends to 1 uniformly in j under condition H1 (see Lemma 6), it is then easy to see that \(\mathbb {V}(T^{\prime }_{2,k})\) tends to \(\frac {a}{\tilde {c}}\). We conclude concerning \(T^{\prime }_{2,k}\) using Lyapunov's theorem (again, the details are easy and omitted).
Concerning \(T^{\prime \prime }_{2,k}\), we write
and treat these two terms separately. Using the second order formula stated in Lemma 4, we have
and consequently, for some small δ > 0,
where we used condition H1 and the slow variation of v, which guarantee that \(v(\log(n/j)) \sim v(L_{nk})\) and \(x^{-\delta}v(x) \to 0\) as \(x \to \infty\). Now, since \( \tilde {\rho }= \max (\theta _{Z} \rho _{F}, \theta _{Z} \rho _{G},a-1) \geqslant a-1\), it follows that
and therefore the first term of \(T^{\prime \prime }_{2,k}\) is equal to \(a \sqrt {k} L_{nk}^{-b+\tilde {\rho }+\delta } o(1)\), which tends to 0 under condition H4(ii). The second term of \(T^{\prime \prime }_{2,k}\) is
But \( \left (\frac {L_{nk}}{\log (n/j)} \right )^{1-a} -1= (a-1) \frac {\log (k/j)}{L_{nk}} (1+o(1))\) with \( \frac {1}{k} {\sum }_{j=1}^{k} \log (k/j)\) tending to 1. So the second term of \(T^{\prime \prime }_{2,k}\) is equal to
and this quantity tends to 0 under condition H4(iv).
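The Cesàro-type limit \(\frac {1}{k} {\sum }_{j=1}^{k} \log (k/j) \to 1\) used just above is elementary: the sum equals \(\log k - \frac{1}{k}\log k!\), which tends to 1 by Stirling's formula. A quick numerical check:

```python
import math

def mean_log_ratio(k):
    # (1/k) * sum_{j=1}^{k} log(k/j) = log(k) - log(k!) / k
    return sum(math.log(k / j) for j in range(1, k + 1)) / k

# by Stirling, mean_log_ratio(k) = 1 - log(2*pi*k) / (2*k) + O(1/k**2),
# so the values increase towards 1
vals = {k: mean_log_ratio(k) for k in (10, 1_000, 100_000)}
```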
Concerning now \(\sqrt {k}L_{nk}^{b} S_{n,k}\), we have
Thanks to the first order relation (A.2), the second supremum on the right-hand side is bounded by a constant times \(L_{nk}^{2(a-1)}\). The first supremum will be handled with the more precise second order development (A.3), which yields
where we define \(h(t) = (-\log t)^{1-a}\) and \(\tilde h(t)=(-\log t)^{1-a+\tilde {\rho }}v(-\log t)\). Unlike the functions arising in case 1, the functions h and \(\tilde h\) tend to infinity instead of vanishing as t → 0: this will be counterbalanced by the second supremum. Studying the derivatives of h and \(\tilde h\), and again using a first order Taylor expansion, we obtain via computations similar to the previous cases, for n large and any ε > 0 (with the choice \(v_{n} = k^{-\epsilon}/n\)),
Therefore, gathering the two suprema, we have (for some small value of δ > 0 depending on ε)
which, by assumption H4(iii), converges to 0 as n ββ.
C.4. Additional useful lemmas
Let \(E_{1},\ldots,E_{n}\) be n i.i.d. standard exponential random variables.
Lemma 5
According to Lemma 1.4.3 in Reiss (1989), we have
where \(\tilde {E}_{1}, \ldots , \tilde {E}_{k}\) are k independent standard exponential random variables.
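Lemma 5 is a Rényi-type representation of the upper exponential order statistics through k independent standard exponentials. As an illustration, the classical normalized-spacings property underlying such representations can be checked numerically (the sample sizes below are arbitrary):

```python
import random
import statistics

random.seed(1)

n, reps = 500, 2_000
first_spacing = []   # n * E_{1:n} : standard exponential by the Renyi representation
later_spacing = []   # (n - 4) * (E_{5:n} - E_{4:n}) : also standard exponential

for _ in range(reps):
    E = sorted(random.expovariate(1.0) for _ in range(n))
    first_spacing.append(n * E[0])
    later_spacing.append((n - 4) * (E[4] - E[3]))

# both collections of normalized spacings should have mean close to 1
m1 = statistics.mean(first_spacing)
m2 = statistics.mean(later_spacing)
```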
Lemma 6
Under condition H1, we have, as n → +∞,
We refer to Girard (2004b) for the proof of this lemma.
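As used repeatedly in the proofs above, under condition H1 the ratio \(\log(n/j)/L_{nk}\) tends to 1 uniformly in \(1 \leqslant j \leqslant k\); the supremum of \(|\log(n/j)/L_{nk} - 1|\) over j is attained at j = 1, where it equals \(\log k / L_{nk}\). A small numerical illustration (the intermediate growth \(k = (\log n)^{2}\) is an arbitrary choice, made only to mimic slow growth of k):

```python
import math

def sup_dev(n, k):
    # sup over 1 <= j <= k of |log(n/j)/log(n/k) - 1|; attained at j = 1
    L = math.log(n / k)
    return max(abs(math.log(n / j) / L - 1.0) for j in range(1, k + 1))

# the supremum decays (slowly, at logarithmic rate) as n grows
devs = []
for n in (10**4, 10**8, 10**16):
    k = int(math.log(n) ** 2)
    devs.append(sup_dev(n, k))
```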
Lemma 7
If we consider the classical random censoring model (1) with continuous distribution functions F and G of the variables X and C, then the following in-probability results hold:
The first statement is part of Theorem 1 in Csorgo (1996). For the second statement, one has to examine carefully Theorem 2.1 in Zhou (1991), in a narrower context, since the samples \((X_{i})\) and \((C_{i})\) we consider are i.i.d., whereas Zhou considers possibly non-identically distributed censoring variables \(C_{i}\). In pages 2269-2270 of the mentioned paper, one can see that the maximum observed value (denoted \(T_{n}\)) does not have to be excluded from the probability bound (2.3): it can indeed be proved, by following the steps of the proof of (2.3), that for every n,
So the second statement of Lemma 7 follows.
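For reference, the product-limit (Kaplan-Meier) estimator discussed in Lemma 7 can be sketched as follows; the exponential choices for X and C are hypothetical and serve only to illustrate the censoring model, in which one observes \(Z = X \wedge C\) and the indicator \(\delta = \mathbb {I}_{X \leqslant C}\):

```python
import math
import random

random.seed(2)

n = 20_000
# hypothetical illustration: lifetimes X ~ Exp(1) censored by C ~ Exp(0.5)
X = [random.expovariate(1.0) for _ in range(n)]
C = [random.expovariate(0.5) for _ in range(n)]
Z = [min(x, c) for x, c in zip(X, C)]
delta = [1 if x <= c else 0 for x, c in zip(X, C)]

def kaplan_meier_survival(Z, delta, t):
    # product-limit estimate of P(X > t) from the censored sample (Z_i, delta_i)
    order = sorted(range(len(Z)), key=lambda i: Z[i])
    at_risk = len(Z)
    surv = 1.0
    for i in order:
        if Z[i] > t:
            break
        if delta[i]:              # an uncensored observation at Z[i]
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1              # every observation <= t leaves the risk set
    return surv

km = kaplan_meier_survival(Z, delta, 1.0)
true_surv = math.exp(-1.0)        # P(X > 1) for the Exp(1) lifetime
```

Despite roughly two thirds of the sample being censored in this setup, the product-limit estimate at t = 1 stays close to the true survival probability.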
Worms, J., Worms, R. Estimation of extremes for Weibull-tail distributions in the presence of random censoring. Extremes 22, 667–704 (2019). https://doi.org/10.1007/s10687-019-00354-2