Abstract
We consider the problem of characterization of continuous distributions for which linearity of regression of overlapping order statistics, \(\mathbb {E}(X_{i:m}|X_{j:n})=aX_{j:n}+b\), \(m\le n\), holds. Due to a new representation of the conditional expectation \(\mathbb {E}(X_{i:m}|X_{j:n})\) in terms of the conditional expectations \(\mathbb {E}(X_{l:n}|X_{j:n})\), \(l=i,\ldots ,n-m+i\), we are able to use the already known approach based on the Rao-Shanbhag version of the integrated Cauchy functional equation. However, this is possible only if \(j\le i\) or \(j\ge n-m+i\). In the remaining cases the problem essentially remains open.
1 Introduction
Consider a sequence \((X_k)_{k\ge 1}\) of independent identically distributed continuous random variables. For an arbitrary \(n\ge 1\) denote the order statistics for the sample of size \(n\) by \(X_{1:n}\le X_{2:n}\le \cdots \le X_{n:n}\). In this paper we are interested in linearity of regression of overlapping order statistics, that is, we consider the condition
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n})=aX_{j:n}+b, \end{aligned}$$(1)
where \(a,b\) are some real constants, and we want to describe the family of parent distributions for which (1) holds.
The problem has a long history. It goes back to Fisz (1958) who considered the case \(m=n=i=2,\, j=1,\, a=1\) and characterized the exponential distribution. This setting was extended in Rogers (1963) with a characterization of the exponential distribution by (1) with \(m=n,\, i=j+1,\,a=1\). The case of adjacent order statistics was completed in Ferguson (1967) who considered the case \(m=n,\, i=j+1\) with no restriction on \(a\) and characterized three families of distributions: exponential for \(a=1\), Pareto for \(a>1\) and power for \(0<a<1\). A similar result was obtained in the PhD thesis of Pudeg (1991) and independently in Ahsanullah and Wesołowski (1997) for (1) with \(m=n\) and \(i=j+2\). Other attempts at the non-adjacent case were made in Dembińska and Wesołowski (1997) and López-Blázquez and Moreno-Rebollo (1997). Finally the problem for \(m=n\) was completely solved in Dembińska and Wesołowski (1998), denoted in the sequel by DW, where the same triplet of exponential, Pareto and power distributions or their symmetric (about zero) versions were characterized by (1) with arbitrary \(j<i\) or \(j>i\), respectively. Various recent extensions and complements of this result can be found e.g. in Ahsanullah and Hamedani (2012), Ahsanullah et al. (2012), Beg et al. (2013), Bieniek and Szynal (2003), Cramer et al. (2004), Ferguson (2002) or Gupta and Ahsanullah (2004).
All the previously mentioned papers were concerned with the case of one sample, i.e. \(m=n\). We were able to trace in the literature only two papers dealing with the case \(m\ne n\). In Ahsanullah and Nevzerov (1999) the authors claim that (1) with \(i=j=1\) and \(n>m\) characterizes the triplet of exponential, Pareto and power distributions as above. In Wesołowski and Gupta (2001) only a very special case \(i=m=1\) was considered—see Sect. 5 below for more details.
In the present paper we will give a characterization of both triplets of families (exponential, Pareto and power, or their symmetric versions) in the case \(m\le n\) and \(j\le i\) or \(j\ge n-m+i\). Note that it does not cover the case considered in Wesołowski and Gupta (2001) but it covers the result announced in Ahsanullah and Nevzerov (1999). It appears that in the case considered, to prove the characterization one can apply the Rao-Shanbhag version of the integrated Cauchy functional equation (see Rao and Shanbhag 1994), similarly as in DW. This is done in Sect. 4. However, to reduce the problem to one to which this method can be applied we need to prove a representation of the conditional expectation \(\mathbb {E}(X_{i:m}|X_{j:n})\) through conditional expectations from a single sample of size \(n\). This is done, in an even more general setting, that is with no restrictions on the relation between \(i\) and \(j\), in Sect. 2. In Sect. 3 we observe that a suitable form of linearity of regression (1) for \(m \le n\) holds for both of the triplets of distributions considered. In Sect. 5 we make some comments regarding the case \(i<j<n-m+i\), which still remains unsolved.
2 A representation of conditional expectation for overlapping order statistics
In this section we are interested in the conditional moment \(\mathbb {E}(X_{i:m}|X_{j:n})\) for different values of \(i,j\in \mathbb {N}\), \(m<n\in \mathbb {N}\). We will express it as a convex combination of conditional moments of the form \(\mathbb {E}(X_{l:n}|X_{j:n})\), \(l=i,i+1,\ldots ,n-m+i\).
Theorem 1
Let \(X_{1},\ldots ,X_{n}\) be continuous, independent, identically distributed and integrable random variables. Then for any \(m<n\in \mathbb {N}\), \(1\le i\le m\), \(1\le j \le n\)
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n})=\sum _{l=i}^{n-m+i}\frac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{\binom{n}{m}}\,\mathbb {E}(X_{l:n}|X_{j:n}). \end{aligned}$$(2)
Proof
Let us denote the set of all subsets of size \(m\) of \(\{1,\ldots ,n\}\) by \(\mathbb {C}^{n}_{m}\). Of course, \(\#\,\mathbb {C}^{n}_{m}=\binom{n}{m}\). We can number the elements of \(\mathbb {C}^{n}_{m}\) arbitrarily and define \(C(k)\) as the \(k\)-th element of \(\mathbb {C}^{n}_{m}\), where \(1\le k \le \binom{n}{m}\). Denote by \(X^{(k)}_{i:m}\) the \(i\)-th order statistic of the subsample \((X_r,\,r\in C(k))\). Due to the fact that the joint distribution of \((X_{1},\ldots ,X_{n})\) is invariant under permutations, we can write
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n})=\mathbb {E}\big (X^{(k)}_{i:m}\,\big |\,X_{j:n}\big ),\qquad k=1,\ldots ,\binom{n}{m}. \end{aligned}$$
Consequently, denoting \(S_i=\sum _{k=1}^{\binom{n}{m}}X^{(k)}_{i:m}\), we have
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n})=\frac{1}{\binom{n}{m}}\,\mathbb {E}(S_i|X_{j:n}). \end{aligned}$$
Let us consider the event \(A=\{X_{1}<X_{2}<\cdots <X_{n}\}\) and an arbitrary \(l\in \{1,\ldots ,n\}\). Obviously, on the event \(A\) we have \(X_l=X_{l:n}\). Note that if \(l\in \{1,\ldots ,i-1\}\cup \{n-m+i+1,\ldots ,n\}\) then on \(A\) the variable \(X_l\) cannot appear in the sum \(S_i\). Otherwise, on \(A\) the variable \(X_l\) appears in the sum \(S_i\) as many times as there are \(m\)-element subsets of \(\{1,\ldots ,n\}\) which consist of: \(l\), exactly \(i-1\) numbers smaller than \(l\) and exactly \(m-i\) numbers greater than \(l\). That is, on \(A\) the variable \(X_l\) appears in \(S_i\) exactly
$$\begin{aligned} \binom{l-1}{i-1}\binom{n-l}{m-i} \end{aligned}$$(3)
times.
By (3) we get
$$\begin{aligned} S_i\,I_A=\sum _{l=i}^{n-m+i}\binom{l-1}{i-1}\binom{n-l}{m-i}\,X_{l:n}\,I_A. \end{aligned}$$(4)
Let \(\mathfrak {S}_n\) denote the set of permutations of \(\{1,\ldots ,n\}\). We may repeat the same reasoning for any event \(A_{\sigma }=(X_{\sigma (1)}<\cdots <X_{\sigma (n)})\), where \(\sigma \in \mathfrak {S}_n\). Consequently, (4) holds with \(A\) changed into \(A_{\sigma }\) for any \(\sigma \in \mathfrak {S}_n\). Since the sets \(A_{\sigma }\), \(\sigma \in \mathfrak {S}_n\), are disjoint, we get
$$\begin{aligned} S_i\sum _{\sigma \in \mathfrak {S}_n}I_{A_{\sigma }}=\sum _{l=i}^{n-m+i}\binom{l-1}{i-1}\binom{n-l}{m-i}\,X_{l:n}\sum _{\sigma \in \mathfrak {S}_n}I_{A_{\sigma }}. \end{aligned}$$
Now (2) follows due to the identity \(\sum _{\sigma \in \mathfrak {S}_n}\,I_{A_{\sigma }}=1\) holding \(\mathbb {P}\)-a.s.\(\square \)
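A quick numerical sanity check of the representation (2) is possible through one of its unconditional consequences: taking expectations on both sides gives \(\mathbb {E}(X_{i:m})=\sum _l c_l\,\mathbb {E}(X_{l:n})\) with \(c_l=\binom{l-1}{i-1}\binom{n-l}{m-i}/\binom{n}{m}\). The sketch below (the function name and the choice of a standard exponential sample are ours, not from the paper) estimates both sides by Monte Carlo:

```python
import random
from math import comb

def check_representation_mean(i, m, n, trials=50_000, seed=1):
    """Monte Carlo check of an unconditional consequence of (2):
    E(X_{i:m}) = sum_{l=i}^{n-m+i} c_l E(X_{l:n}), where
    c_l = C(l-1, i-1) C(n-l, m-i) / C(n, m).
    The sample is standard exponential; any integrable law would do."""
    rng = random.Random(seed)
    lhs = 0.0
    order_sums = [0.0] * (n + 1)          # order_sums[l] accumulates X_{l:n}
    for _ in range(trials):
        x = [rng.expovariate(1.0) for _ in range(n)]
        lhs += sorted(x[:m])[i - 1]       # X_{i:m} taken from the first m variables
        for l, v in enumerate(sorted(x), start=1):
            order_sums[l] += v
    lhs /= trials
    rhs = sum(comb(l - 1, i - 1) * comb(n - l, m - i) / comb(n, m)
              * order_sums[l] / trials
              for l in range(i, n - m + i + 1))
    return lhs, rhs
```

For instance, with \(i=1\), \(m=2\), \(n=4\) both sides should be close to \(\mathbb {E}(X_{1:2})=1/2\) for the standard exponential law.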
Remark 1
Note that the coefficients which appear at the right hand side of (2) have a clear probabilistic interpretation. Namely, for any \(1\le i\le m\le n\)
$$\begin{aligned} \mathbb {P}(X_{l:n}=X_{i:m})=\frac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{\binom{n}{m}},\qquad l=i,\ldots ,n-m+i. \end{aligned}$$(6)
Thus \(\sum ^{n-m+i}_{l=i}\mathbb {P}(X_{l:n}=X_{i:m})=1\).
To see that Remark 1 holds true, note that the event \(\{X_{i:m}=X_{l:n}\}\) consists only of special permutations of \(X_1,\ldots ,X_n\): the variables \(X_1,\ldots ,X_m\) have to occupy: position \(l\), \(i-1\) positions chosen from \(\{1,\ldots ,l-1\}\) (in \(\binom{l-1}{i-1}\) ways) and \(m-i\) positions chosen from \(\{l+1,\ldots ,n\}\) (in \(\binom{n-l}{m-i}\) ways). Now it remains to permute the variables \(X_1,\ldots ,X_m\) at the already fixed \(m\) positions (in \(m!\) ways) and to permute the variables \(X_{m+1},\ldots ,X_n\) at the remaining \(n-m\) positions (in \((n-m)!\) ways). Therefore, there are
$$\begin{aligned} \binom{l-1}{i-1}\binom{n-l}{m-i}\,m!\,(n-m)! \end{aligned}$$
permutations of \(X_1,\ldots ,X_n\) for which \(X_{i:m}=X_{l:n}\). Since every permutation of \(X_1,\ldots ,X_n\) is equally likely, we arrive at (6).
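For small sample sizes the counting argument above can be verified exactly by brute force. The sketch below (function names are ours) enumerates all \(n!\) equally likely orderings of distinct values and compares the empirical fraction with the right hand side of (6), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations
from math import comb, factorial

def prob_match(i, m, l, n):
    """Exact P(X_{i:m} = X_{l:n}) by brute force: enumerate all n! equally
    likely orderings of n distinct values, taking X_{i:m} from X_1,...,X_m."""
    hits = sum(
        1
        for perm in permutations(range(1, n + 1))  # perm[k-1] plays the role of X_k
        if sorted(perm[:m])[i - 1] == sorted(perm)[l - 1]
    )
    return Fraction(hits, factorial(n))

def formula(i, m, l, n):
    """Right hand side of (6)."""
    return Fraction(comb(l - 1, i - 1) * comb(n - l, m - i), comb(n, m))
```

For example, \(\mathbb {P}(X_{1:2}=X_{2:4})=\binom{1}{0}\binom{2}{1}/\binom{4}{2}=1/3\), and for each fixed \(i,m,n\) the probabilities over \(l=i,\ldots ,n-m+i\) sum to one.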
3 Linearity of regression for exponential, Pareto and power distributions
By \(\mathrm {PAR}(\theta ;\mu ;\delta )\) we denote the Pareto distribution with the density
$$\begin{aligned} f(x)=\frac{\theta (\mu +\delta )^{\theta }}{(x+\delta )^{\theta +1}}\,I_{(\mu ,\infty )}(x), \end{aligned}$$
where \(\theta >0\), \(\mu \), \(\delta \) are some real constants such that \(\mu +\delta >0\).
By \(\mathrm {EXP}(\lambda ;\gamma )\) we denote the exponential distribution with the density
$$\begin{aligned} f(x)=\lambda \,e^{-\lambda (x-\gamma )}\,I_{(\gamma ,\infty )}(x), \end{aligned}$$
where \(\lambda >0\), \(\gamma \) are some real constants.
By \(\mathrm {POW}(\theta ;\mu ;\nu )\) we denote the power distribution with the density
$$\begin{aligned} f(x)=\frac{\theta (\nu -x)^{\theta -1}}{(\nu -\mu )^{\theta }}\,I_{(\mu ,\nu )}(x), \end{aligned}$$
where \(\theta >0\), \(-\infty <\mu <\nu <\infty \) are some real constants.
It is well known, see e.g. DW, that for each of the above distributions for \(l>j\)
$$\begin{aligned} \mathbb {E}(X_{l:n}|X_{j:n})=\alpha X_{j:n}+\beta , \end{aligned}$$(7)
where \(\alpha \) and \(\beta \) are some constants depending on the distribution and on \(l,j,n\) (the formulas for these constants are given on pp. 217–218 of DW). These formulas together with the representation (2) imply for \(j<i\) that
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n})=aX_{j:n}+b, \end{aligned}$$(8)
where \(a\) and \(b\) are suitable constants, which in each of the special cases are listed below.
- For the exponential distribution \(\mathrm {EXP}(\lambda ;\gamma )\)
$$\begin{aligned} a=1,\qquad b=\tfrac{(n-j)!}{\binom{n}{m}\,\lambda }\,\sum ^{n-m+i}_{l=i} \tfrac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{(n-l)!}\,\sum ^{l-j-1}_{s=0}\tfrac{(-1)^{s}}{s!(l-j-1-s)!(n-l+s+1)^{2}}. \end{aligned}$$(9)
- For the Pareto distribution \(\mathrm {PAR}(\theta ;\mu ;\delta )\)
$$\begin{aligned}&a=\tfrac{\theta (n-j)!}{\binom{n}{m}}\,\sum ^{n-m+i}_{l=i} \tfrac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{(n-l)!}\,\sum ^{l-j-1}_{s=0}\tfrac{(-1)^{s}}{s!(l-j-1-s)![\theta (n-l+1+s)-1]},\\&b=\tfrac{\delta (n-j)!}{\binom{n}{m}}\,\sum ^{n-m+i}_{l=i} \tfrac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{(n-l)!}\,\sum ^{l-j-1}_{s=0}\tfrac{(-1)^{s}}{s!(l-j-1-s)!(n-l+s+1)[\theta (n-l+s+1)-1]}. \end{aligned}$$(10)
- For the power distribution \(\mathrm {POW}(\theta ;\mu ;\nu )\)
$$\begin{aligned}&a=\tfrac{\theta (n-j)!}{\binom{n}{m}}\,\sum ^{n-m+i}_{l=i} \tfrac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{(n-l)!}\,\sum ^{l-j-1}_{s=0}\tfrac{(-1)^{s}}{s!(l-j-1-s)![\theta (n-l+1+s)+1]},\\&b=\tfrac{\nu (n-j)!}{\binom{n}{m}}\,\sum ^{n-m+i}_{l=i} \tfrac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{(n-l)!}\,\sum ^{l-j-1}_{s=0}\tfrac{(-1)^{s}}{s!(l-j-1-s)!(n-l+1+s)[\theta (n-l+1+s)+1]}. \end{aligned}$$(11)
For any distribution \(\mu \) of a random variable \(X\), denote by \(\mu _-\) the distribution of \(-X\). Since for \(Y_i=-X_i\), \(i=1,\ldots ,n\), we have \(Y_{i:n}=-X_{n-i+1:n}\), it follows that (7) holds for \(l<j\) if the distribution of the \(X_i\)’s is one of the triplet \(\mathrm {PAR}_-\), \(\mathrm {EXP}_-\) or \(\mathrm {POW}_-\). Consequently, (8) holds for this triplet in the case \(j\in \{n-m+i,\ldots ,n\}\).
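In the exponential case the claim \(a=1\) in (9) can be illustrated directly. Combining the representation (2) with the standard memoryless-property formula \(\mathbb {E}(X_{l:n}|X_{j:n}=x)=x+\frac{1}{\lambda }\sum _{k=n-l+1}^{n-j}\frac{1}{k}\) for \(l\ge j\) (a well-known fact for exponential order statistics, not restated in the paper), the regression function is linear in \(x\) with slope exactly one, because the coefficients \(c_l\) sum to 1. A sketch (the function name is ours):

```python
from math import comb

def cond_mean_exp(i, m, j, n, x, lam=1.0):
    """E(X_{i:m} | X_{j:n} = x) for an EXP(lam; gamma) sample with j <= i,
    combining representation (2) with the standard exponential fact
    E(X_{l:n} | X_{j:n} = x) = x + (1/lam) * sum_{k=n-l+1}^{n-j} 1/k, l >= j."""
    total = 0.0
    for l in range(i, n - m + i + 1):          # here l >= i >= j throughout
        c = comb(l - 1, i - 1) * comb(n - l, m - i) / comb(n, m)
        e = x + sum(1.0 / k for k in range(n - l + 1, n - j + 1)) / lam
        total += c * e
    return total

# The coefficients c_l sum to 1, so the x-terms contribute exactly 1 * x:
# the regression is linear with slope a = 1, in agreement with (9).
slope = cond_mean_exp(2, 3, 1, 5, 2.0) - cond_mean_exp(2, 3, 1, 5, 1.0)
```

Evaluating the function at two points confirms the unit slope; the intercept is the (positive) constant \(b\) of (9).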
4 Characterization in the case \(j\le i\) or \(j\ge n-m+i\)
These three distributions of type \(\mu \), or their reflected counterparts of type \(\mu _-\), appear to be the only possible distributions of the \(X_i\)’s for which (8) holds with \(j\le i\) or, respectively, with \(j\ge n-m+i\).
Before we give the proof of our main result we recall a result on possible solutions of the integrated Cauchy functional equation. Following the method from DW we will use this result in the proof of the characterization. Let \(\lambda \) denote the Lebesgue measure on \({\mathbb {R}}_+\).
Theorem 2
(Rao and Shanbhag (1994)) Consider the integral equation
$$\begin{aligned} \int _{\mathbb {R}_{+}}H(x+y)\,\mu (dy)=H(x)+c\qquad \text {for }\lambda \text {-a.e. }x\in \mathbb {R}_{+}, \end{aligned}$$
with some real constant \(c\),
where \(\mu \) is a non-arithmetic \(\sigma \)-finite measure on \(\mathbb {R}_{+}\) and \(H:\mathbb {R}_{+}\rightarrow \mathbb {R}_{+}\) is a Borel measurable function, non-decreasing or non-increasing \(\lambda \)-a.e., locally \(\lambda \)-integrable and not identically equal to zero \(\lambda \)-a.e. Then there exists \(\eta \in \mathbb {R}\) such that
$$\begin{aligned} \int _{\mathbb {R}_{+}}e^{\eta y}\,\mu (dy)=1 \end{aligned}$$
and \(H\) has the form
$$\begin{aligned} H(x)={\left\{ \begin{array}{ll}\gamma +\alpha (1-e^{\eta x}),&{}\eta \ne 0,\\ \gamma +\beta x,&{}\eta =0,\end{array}\right. }\qquad \lambda \text {-a.e.}, \end{aligned}$$
where \(\alpha ,\beta ,\gamma \) are some constants. If \(c=0\), then \(\gamma =-\alpha \) and \(\beta =0\).
Now we are ready to state and then to prove our main result which is a characterization of both the triplets of distributions described in Sect. 3 by linearity of regression of order statistics from overlapping samples.
Theorem 3
Let \(X_{1},\ldots ,X_{n}\) be independent random variables with a common continuous distribution \(\mu \). Assume that \(\mathbb {E}(|X_{1}|)<\infty \). If for some \(i,m,n\in {\mathbb {N}}\) with \(1\le i\le m<n\) linearity of regression (8) holds for some
- \(j\in \{1,\ldots ,i\}\), then only one of the following cases is possible:
  (1) \(a=1\) and \(\mu =\mathrm {EXP}\),
  (2) \(a<1\) and \(\mu =\mathrm {POW}\),
  (3) \(a>1\) and \(\mu =\mathrm {PAR}\);
- \(j\in \{n-m+i+1,\ldots ,n\}\), then only one of the following cases is possible:
  (1) \(a=1\) and \(\mu =\mathrm {EXP}_-\),
  (2) \(a<1\) and \(\mu =\mathrm {POW}_-\),
  (3) \(a>1\) and \(\mu =\mathrm {PAR}_-\).
Proof
Let us note that if \(X\) has a continuous distribution function \(F\) then in the case \(j<l\) the conditional distribution of \(X_{l:n}\) given \(X_{j:n}\) has the form
$$\begin{aligned} dF_{X_{l:n}|X_{j:n}=x}(y)=\tfrac{(n-j)!}{(l-j-1)!(n-l)!}\,\tfrac{(F(y)-F(x))^{l-j-1}(1-F(y))^{n-l}}{(1-F(x))^{n-j}}\,dF(y), \end{aligned}$$(12)
\(l_F\le x\le y\le r_F\), where \(l_{F}=\inf \{x\in \mathbb {R}:F(x)>0\}\) and \(r_{F}=\sup \{x\in \mathbb {R}:F(x)<1\}\). Alternatively, for continuous \(F\) the conditional distribution of \(X_{l:n}\) given \(X_{j:n}=x\) is the same as the distribution of \(Y_{l-j:n-j}\) for \(Y_i\), \(i=1,\ldots ,n-j\), which are iid with the common distribution function \(F_Y(y)=\tfrac{F(y)-F(x)}{1-F(x)}\), \(y\ge x\), and \(F_Y(y)=0\) otherwise. This fact seems to be well known for continuous parent distributions (in particular, it was used in DW). Since in the basic monographs by Arnold et al. (1992) and David and Nagaraja (2003) it is stated only in the absolutely continuous case, while in Nevzerov (2001) it is formulated for continuous distributions but proved only in the absolutely continuous case, for the sake of completeness we sketch its proof here. We note that from the well known general formula for the distribution function of \(X_{k:n}\) (see, e.g. (2.2.15) in Arnold et al. (1992)), in the continuous case, since then \(F(X_i)\) has the uniform distribution on \((0,1)\), one gets
$$\begin{aligned} F_{X_{k:n}}(y)=\sum _{r=k}^{n}\binom{n}{r}F^{r}(y)(1-F(y))^{n-r} \end{aligned}$$
for any \(k=1,\ldots ,n\). Therefore, to prove the formula (12) it suffices to check (which is an elementary computation) that with \(dF_{X_{l:n}|X_{j:n}=x}(y)\) defined by (12) the following identity holds
$$\begin{aligned} F_{X_{l:n}}(y)=\int _{\mathbb {R}}\left( \int _{(-\infty ,y]}dF_{X_{l:n}|X_{j:n}=x}(u)\right) dF_{X_{j:n}}(x) \end{aligned}$$
for any \(y\in {\mathbb {R}}\).
Let us first consider the case when \(j<i\). From (2) and (12) we have
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n}=x)&=\sum _{l=i}^{n-m+i}A_{l}B_{l}\int _x^{r_F}y\,\tfrac{(F(y)-F(x))^{l-j-1}(1-F(y))^{n-l}}{(1-F(x))^{n-j}}\,dF(y)\\&=ax+b, \end{aligned}$$(13)
where \(A_{l}=\tfrac{\binom{l-1}{i-1}\binom{n-l}{m-i}}{\binom{n}{m}}\) and \(B_{l}=\tfrac{(n-j)!}{(l-j-1)!(n-l)!}\), \(x\in (l_F,\,r_F)\).
Observe that there does not exist an interval \((c,d)\), \(l_{F}<c<d<r_{F}\), on which \(F\) is constant: the right hand side of (13) is either strictly increasing or strictly decreasing, both sides of this equation are continuous, so they could not coincide again at the next point of increase of \(F\). Therefore \((l_{F},r_{F})\) is the support of the distribution given by \(F\) and \(F\) is strictly increasing on this interval. Both sides of the second equation in (13) are continuous with respect to \(x\), so it holds for any \(x\in (l_{F},r_{F})\). After substituting \(t=\overline{F}(y)/\overline{F}(x)\), we insert \(y=\overline{F}^{-1}(t\overline{F}(x))\) (\(\overline{F}^{-1}\) exists, because \(\overline{F}\) is strictly decreasing on \((l_{F},r_{F})\)) into (13) and thus
$$\begin{aligned} \sum _{l=i}^{n-m+i}A_{l}B_{l}\int _0^1\overline{F}^{-1}(t\overline{F}(x))\,(1-t)^{l-j-1}t^{n-l}\,dt=ax+b. \end{aligned}$$(14)
Note that the left hand side is strictly increasing in \(x\) and thus \(a\) has to be positive. Substituting again \(\overline{F}(x)=w\) in (14), which implies \(x=\overline{F}^{-1}(w)\), we get
$$\begin{aligned} \sum _{l=i}^{n-m+i}A_{l}B_{l}\int _0^1\overline{F}^{-1}(tw)\,(1-t)^{l-j-1}t^{n-l}\,dt=a\,\overline{F}^{-1}(w)+b. \end{aligned}$$
Divide both sides of the above equation by \(a\) and substitute again \(t=e^{-u}\) and \(w=e^{-v}\) for \(v>0\) to arrive at
$$\begin{aligned} \frac{1}{a}\sum _{l=i}^{n-m+i}A_{l}B_{l}\int _0^{\infty }\overline{F}^{-1}\big (e^{-(u+v)}\big )\,(1-e^{-u})^{l-j-1}e^{-(n-l+1)u}\,du=\overline{F}^{-1}(e^{-v})+\frac{b}{a}. \end{aligned}$$
After changing the sum of integrals into an integral of the sum we obtain
$$\begin{aligned} \int _0^{\infty }\overline{F}^{-1}\big (e^{-(u+v)}\big )\,\frac{1}{a}\sum _{l=i}^{n-m+i}A_{l}B_{l}(1-e^{-u})^{l-j-1}e^{-(n-l+1)u}\,du=\overline{F}^{-1}(e^{-v})+\frac{b}{a}. \end{aligned}$$
Let us now define \(H(v)=\overline{F}^{-1}(e^{-v})\). Consequently,
$$\begin{aligned} \int _{\mathbb {R}_{+}}H(v+u)\,\mu (du)=H(v)+\frac{b}{a},\qquad v>0, \end{aligned}$$
where \(\mu \) is a finite measure on \(\mathbb {R}_{+}\), which is absolutely continuous with respect to the Lebesgue measure and has the form
$$\begin{aligned} \mu (du)=\frac{1}{a}\sum _{l=i}^{n-m+i}A_{l}B_{l}(1-e^{-u})^{l-j-1}e^{-(n-l+1)u}\,du. \end{aligned}$$
Note that \(H\) is strictly increasing on \([0,\infty )\) as a composition of two strictly decreasing functions. The assumptions of the Rao-Shanbhag theorem are satisfied, so \(H\) has the form
$$\begin{aligned} H(v)={\left\{ \begin{array}{ll}\gamma +\alpha (1-e^{\eta v}),&{}\eta \ne 0,\\ \gamma +\beta v,&{}\eta =0,\end{array}\right. } \end{aligned}$$
\(v>0\), where \(\alpha ,\beta ,\gamma ,\eta \) are some constants and
$$\begin{aligned} \int _0^{\infty }e^{\eta u}\,\mu (du)=1. \end{aligned}$$(15)
To find relations between \(\eta \) and \(a\) we rewrite (15) as
$$\begin{aligned} \frac{1}{a}\sum _{l=i}^{n-m+i}A_{l}B_{l}\int _0^{\infty }e^{\eta u}(1-e^{-u})^{l-j-1}e^{-(n-l+1)u}\,du=1. \end{aligned}$$
After substituting \(t=e^{-u}\) it takes the form
$$\begin{aligned} a=\sum _{l=i}^{n-m+i}A_{l}B_{l}\int _0^1 t^{\,n-l-\eta }(1-t)^{l-j-1}\,dt. \end{aligned}$$
Performing the integration at the right hand side above (note that necessarily \(\eta <m-i+1\), otherwise the integrals are infinite) we get
$$\begin{aligned} a=\sum _{l=i}^{n-m+i}A_{l}B_{l}\,\frac{(l-j-1)!\,\Gamma (n-l+1-\eta )}{\Gamma (n-j+1-\eta )}. \end{aligned}$$
Finally, we get
$$\begin{aligned} a=\sum _{l=i}^{n-m+i}A_{l}\,h_{l}(\eta ), \end{aligned}$$(16)
where
$$\begin{aligned} h_{l}(\eta )=\prod _{k=n-l+1}^{n-j}\frac{k}{k-\eta }. \end{aligned}$$
Since the function \(h_l\) is strictly increasing on \((-\infty ,\,m-i+1)\) it follows from (16) that for a given coefficient \(a\) there exists a unique \(\eta \) satisfying (15). Moreover,
- if \(\eta =0\) then \(a=1\),
- if \(0<\eta <m-i+1\) then \(a>1\),
- if \(\eta <0\) then \(a<1\).
Let us now consider the case when \(j=i\). From (2) and (12) we get
$$\begin{aligned} A_{i}\,x+\sum _{l=i+1}^{n-m+i}A_{l}B_{l}\int _x^{r_F}y\,\tfrac{(F(y)-F(x))^{l-i-1}(1-F(y))^{n-l}}{(1-F(x))^{n-i}}\,dF(y)=ax+b, \end{aligned}$$
thus instead of (14) we get
$$\begin{aligned} A_{i}\,x+\sum _{l=i+1}^{n-m+i}A_{l}B_{l}\int _0^1\overline{F}^{-1}(t\overline{F}(x))\,(1-t)^{l-i-1}t^{n-l}\,dt=ax+b. \end{aligned}$$
Similarly as in the case above we make the substitutions and use the Rao-Shanbhag theorem to arrive at the solution \(H\). The only difference is the equation for \(a\), which now reads
$$\begin{aligned} a=A_{i}+\sum _{l=i+1}^{n-m+i}A_{l}\,h_{l}(\eta ). \end{aligned}$$
This equation gives the same conditions on the parameter \(a\) as in the case \(j<i\).
Before computing the parameters of the distributions we arrived at, we will explain why the solution of the case \(j\le i\) also gives the solution in the case \(j\ge n-m+i\). Define \(Y_{k}=-X_{k}\), \(k=1,\ldots ,n\), and consider the order statistics of the random vector \((Y_{1},\ldots ,Y_{n})\). Since \(Y_{k:n}=-X_{n-k+1:n}\), we can write for \(j\ge n-m+i\)
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n})=-\mathbb {E}(Y_{m-i+1:m}|Y_{n-j+1:n}). \end{aligned}$$
Consequently,
$$\begin{aligned} \mathbb {E}(Y_{i':m}|Y_{j':n})=a'\,Y_{j':n}+b', \end{aligned}$$
where \(j'=n-j+1\le i'=m-i+1\), \(a'=a\) and \(b'=-b\).
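The reflection identity used here, \(Y_{k:n}=-X_{n-k+1:n}\) for \(Y_k=-X_k\), simply says that negation reverses the order of a sample. A minimal numerical check (the function name and test sample are ours):

```python
import random

def reflect_check(x):
    """Verify Y_{k:n} = -X_{n-k+1:n} for Y_k = -X_k: negation reverses order."""
    n = len(x)
    xs = sorted(x)                 # X_{1:n} <= ... <= X_{n:n}
    ys = sorted(-v for v in x)     # Y_{1:n} <= ... <= Y_{n:n}
    return all(ys[k - 1] == -xs[n - k] for k in range(1, n + 1))

rng = random.Random(0)
sample = [rng.gauss(0.0, 1.0) for _ in range(7)]
ok = reflect_check(sample)
```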
We will find the distribution functions only in the case \(j<i\) (for \(i=j\) the derivation is almost exactly the same and is skipped; in the case \(j\ge n-m+i\) one has again to refer to the representation \(Y_k=-X_k\) and use the results of the case \(j\le i\)). For \(\eta \ne 0\) from the definition of \(H\) we get
$$\begin{aligned} \overline{F}^{-1}(e^{-v})=\gamma +\alpha (1-e^{\eta v}). \end{aligned}$$
Hence for \(z>\gamma \)
$$\begin{aligned} \overline{F}(z)=\left( \frac{\alpha +\gamma -z}{\alpha }\right) ^{-1/\eta }. \end{aligned}$$(17)
Consider now three cases:
- (1) \(a<1\) and \(\eta <0\). Then (17) for \(z\in (\mu ,\nu )\) can be written as
$$\begin{aligned} \overline{F}(z)=\left( \frac{\alpha +\gamma -z}{\alpha }\right) ^{-1/\eta }=\left( \frac{\alpha +\gamma -z}{\alpha +\gamma -\gamma }\right) ^{-1/\eta }=\left( \frac{\nu -z}{\nu -\mu }\right) ^{\theta }, \end{aligned}$$
where \(\nu =\alpha +\gamma \), \(\mu =\gamma \), \(\theta =-\frac{1}{\eta }>0\). Notice that \(\alpha \) has to be positive. Hence \(X_1\) has the \(\mathrm {POW}(\theta ;\mu ;\nu )\) distribution, with the parameters determined through the formulas for \(a\) and \(b\) in (11).
- (2) \(a>1\) and \(\eta >0\). Then (17) for \(z>\mu \) can be written as
$$\begin{aligned} \overline{F}(z)=\left( \frac{-\alpha }{z-\alpha -\gamma }\right) ^{1/\eta }=\left( \frac{\gamma +(-\alpha -\gamma )}{z+(-\alpha -\gamma )}\right) ^{1/\eta }=\left( \frac{\mu +\delta }{z+\delta }\right) ^{\theta }, \end{aligned}$$
where \(\delta =-\alpha -\gamma \), \(\mu =\gamma \), \(\theta =\frac{1}{\eta }>0\). Thus \(X_{1}\) has the \(\mathrm {PAR}(\theta ;\mu ;\delta )\) distribution, with the parameters determined through the formulas for \(a\) and \(b\) in (10).
- (3) \(a=1\) and \(\eta =0\). Then by the definition of \(H\) we get
$$\begin{aligned} \overline{F}^{-1}(e^{-v})=\gamma +\beta v \end{aligned}$$
and, consequently,
$$\begin{aligned} \overline{F}(z)=e^{-(z-\gamma )/\beta }=e^{-\lambda (z-\gamma )} \end{aligned}$$
for \(z>\gamma \), where \(\lambda =\frac{1}{\beta }>0\). Hence \(X_{1}\) has the \(\mathrm {EXP}(\lambda ;\gamma )\) distribution, where
(a) \(\lambda \) may be calculated from the formula for \(b\) in (9),
(b) \(\gamma \) is an arbitrary real number. \(\square \)
5 The case \(i<j<n-m+i\) remains unsolved
As was already mentioned in the introduction, if \(i<j<n-m+i\) then only the case \(m=i=1\) has been considered, in Wesołowski and Gupta (2001) (see also Nagaraja and Nevzerov 1997, and Gupta and Kirmani 2008). More precisely, only the family of distributions for which \(\mathbb {E}(X_1|X_{k+1:2k+1})=aX_{k+1:2k+1}\) was described. Unexpectedly, this family is completely different from the triplets of distributions described above; e.g. it contains the Student distribution with two degrees of freedom.
In the case \(j\in \{i+1,\ldots ,n-m+i-1\}\) it follows from Theorem 1 that
$$\begin{aligned} \mathbb {E}(X_{i:m}|X_{j:n}=x)=\sum _{l=i}^{j-1}A_{l}\,\mathbb {E}(X_{l:n}|X_{j:n}=x)+A_{j}\,x+\sum _{l=j+1}^{n-m+i}A_{l}\,\mathbb {E}(X_{l:n}|X_{j:n}=x), \end{aligned}$$
where \(A_{l}=\tbinom{l-1}{i-1}\tbinom{n-l}{m-i}/\tbinom{n}{m}\) as before.
Linearity of regression, as in (1), would imply that the right hand side above equals \(ax+b\). Such an equation seems to be much harder to solve than the one solved in Sect. 4 above. In particular, it is not clear how to reduce it, through some substitutions, to the Rao-Shanbhag equation.
For \(i=1\), \(j=2\), \(m=2\), \(n=4\) under the linearity of regression assumption we obtain the equation
$$\begin{aligned} \frac{1}{2}\,\frac{\int _{l_F}^{x}y\,dF(y)}{F(x)}+\frac{x}{3}+\frac{1}{3}\,\frac{\int _x^{r_F}y\,(1-F(y))\,dF(y)}{(1-F(x))^{2}}=ax+b. \end{aligned}$$
Similarly, for \(i=2\), \(j=3\), \(m=2\), \(n=4\) we have
$$\begin{aligned} \frac{1}{3}\,\frac{\int _{l_F}^{x}y\,F(y)\,dF(y)}{F^{2}(x)}+\frac{x}{3}+\frac{1}{2}\,\frac{\int _x^{r_F}y\,dF(y)}{1-F(x)}=ax+b. \end{aligned}$$
These two last equations seem to be the simplest unsolved cases.
Nevertheless, it can easily be verified that if the sample is taken from a uniform distribution then both of the above linearity of regression conditions hold true.
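The uniform claim can be checked exactly. Combining the representation (2) with the standard conditional means for uniform order statistics, \(\mathbb {E}(X_{l:n}|X_{j:n}=x)=x\,l/j\) for \(l<j\) and \(x+(l-j)(1-x)/(n-j+1)\) for \(l>j\) (well-known facts, not restated in the paper), the regression function in both cases above turns out to be linear. A sketch in exact rational arithmetic (the function names are ours):

```python
from fractions import Fraction as Fr
from math import comb

def cond_mean_uniform(i, m, j, n, x):
    """E(X_{i:m} | X_{j:n} = x) for a U(0,1) sample, via representation (2)
    and the standard uniform conditional means:
    E(X_{l:n}|X_{j:n}=x) = x*l/j for l<j, x + (l-j)(1-x)/(n-j+1) for l>j."""
    total = Fr(0)
    for l in range(i, n - m + i + 1):
        c = Fr(comb(l - 1, i - 1) * comb(n - l, m - i), comb(n, m))
        if l < j:
            e = Fr(l, j) * x
        elif l == j:
            e = x
        else:
            e = x + Fr(l - j, n - j + 1) * (1 - x)
        total += c * e
    return total

def is_linear(i, m, j, n):
    """Exact collinearity test at three equally spaced points."""
    p = [cond_mean_uniform(i, m, j, n, Fr(k, 10)) for k in (1, 5, 9)]
    return p[1] - p[0] == p[2] - p[1]
```

Both \((i,j,m,n)=(1,2,2,4)\) and \((2,3,2,4)\) pass the linearity test; in both cases the slope works out to \(25/36\).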
References
Ahsanullah M, Hamedani GG (2012) Characterizations of certain univariate distributions based on the conditional distribution of generalized order statistics. Pakistan J Stat 28(2):253–258
Ahsanullah M, Hamedani GG, Wesołowski J (2012) Linearity of regressions inside top-\(k\)-lists and related characterizations. Studia Sci Math Hungar 49(4):436–445
Ahsanullah M, Nevzerov VB (1999) Spacings of order statistics from extended sample. In: Ahsanullah M, Yildrim F (eds) Applied statistical science IV. Nova Sci. Publ, Commack, pp 251–257
Ahsanullah M, Wesołowski J (1997) On characterizing distributions via linearity of regression for order statistics. Aust J Stat 39(1):69–78
Arnold BC, Balakrishnan N, Nagaraja HN (1992) A first course in order statistics. Wiley, New York
Beg MI, Ahsanullah M, Gupta RC (2013) Characterizations via regressions for generalized order statistics. Stat Methodol 12:31–41
Bieniek M, Szynal D (2003) Characterizations of distributions via linearity of regression of generalized order statistics. Metrika 58:259–272
Cramer E, Kamps U, Keseling C (2004) Characterizations via linear regressions of order statistics: a unifying approach. Commun Stat Theory Methods 33:2885–2911
David HA, Nagaraja HN (2003) Order statistics. Wiley, Hoboken
Dembińska A, Wesołowski J (1997) On characterizing the exponential distribution by linearity of regression for non-adjacent order statistics. Demonstratio Mathematica 30:945–952
Dembińska A, Wesołowski J (1998) Linearity of regression for non-adjacent order statistics. Metrika 48:215–222
Ferguson TS (1967) On characterizing distributions by properties of order statistics. Sankhya A 29:265–278
Ferguson TS (2002) On a Rao-Shanbhag characterization of the exponential/geometric distribution. Sankhya A 64:246–255
Fisz M (1958) Characterizations of some probability distributions. Skand Aktuarietidskr 41:65–70
Gupta RC, Ahsanullah M (2004) Some characterization results based on the conditional expectation of a function of non-adjacent order statistics (record values). Ann Inst Stat Math 56:721–732
Gupta RC, Kirmani SNUA (2008) Characterizations based on convex conditional mean function. J Stat Plan Inference 138:964–970
López-Blázquez F, Moreno-Rebollo JL (1997) A characterization of distributions based on linearity of regression for order statistics and record values. Sankhya A 59:311–323
Nagaraja HN, Nevzerov VB (1997) On characterizations based on records and order statistics. J Stat Plan Inference 63:271–284
Nevzerov VB (2001) Records: mathematical theory. AMS, Providence
Pudeg A (1991) Characterization of probability distributions via distributional properties of order statistics and record values. PhD Dissert., Aachen Univ. Tech., Aachen (in German)
Rao CR, Shanbhag DN (1994) Choquet-Deny type of functional equations with applications to stochastic models. Wiley, New York
Rogers GS (1963) An alternative proof of the characterization of the density \(Ax^{\beta }\). Am Math Mon 70:857–858
Wesołowski J, Gupta AK (2001) Linearity of convex mean residual life. J Stat Plan Inference 99:183–191
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Dołęgowski, A., Wesołowski, J. Linearity of regression for overlapping order statistics. Metrika 78, 205–218 (2015). https://doi.org/10.1007/s00184-014-0496-6