Bayesian and MLE of R=P(Y>X) for Exponential Distribution Based on Varied L Ranked Set Sampling

Mohamed S. Abdallah

Department of Quantitative Techniques, Faculty of Commerce, Aswan University, Egypt
E-mail: statisticsms.2010@gmail.com, mohamed_abdallah@com.aswu.edu.eg
ORCID: https://orcid.org/0000-0003-0600-5518

Abstract

Ranked set sampling (RSS) is an effective scheme that is widely used to produce more precise estimators. Despite its popularity, RSS suffers from some drawbacks, including high sensitivity to outliers, and it is sometimes not applicable when the population is relatively small. To overcome these limitations, varied L ranked set sampling (VLRSS) was recently introduced. The VLRSS scheme enjoys many attractive properties over RSS and also encompasses several existing RSS schemes. In addition, it is helpful for providing precise estimates of several population parameters. This article extends the work on VLRSS and addresses the estimation of the stress-strength reliability R = P(Y > X) based on VLRSS when both the strength and the stress follow exponential distributions. Both the maximum likelihood approach and the Bayesian method are considered for estimating R. The Bayes estimators are obtained using gamma priors under the general entropy loss function and the LINEX loss function. The performance of the estimators based on VLRSS is investigated through a simulation study as well as a real dataset from the industrial field. The results reveal that the proposed estimators are more efficient than their analogue estimators under L ranked set sampling, provided that the quality of ranking is fairly good.

Keywords: Varied L ranked set sampling, maximum likelihood estimation, Bayes estimation, Lindley approximation, relative efficiency.

1 Introduction

The stress-strength reliability can be defined as the probability that the strength of an item (Y) is greater than the stress (X) imposed on it. This can be expressed as R = P(X < Y). It is clear that if X is greater than Y the system fails; otherwise it continues to work. Therefore R is commonly called the system reliability. Birnbaum (1956) first applied the idea of R in physics and engineering; since then it has been applied in many other fields such as agriculture, quality control, statistics, and medicine. For more detailed applications, one can refer to the monographs of Kotz et al. (2003) and Vexler and Hutson (2018). Investigating the difference between two populations (say, a control group and a treatment group) is an attractive practical example of R; see Zamanzade et al. (2020).

Ranked set sampling (RSS), first proposed by McIntyre (1952), is one of the most significant sampling techniques in statistics. RSS has since received extensive attention from scholars in various domains for its satisfactory performance and nice properties in statistical inference. Bouza and Al-Omari (2019), and the references cited therein, addressed the applicability of RSS in a wide range of disciplines. Since the introduction of RSS, several modifications have been suggested to raise the efficiency of RSS-based inference. Among them, Al-nasser (2007) proposed LRSS as a variation of RSS that reduces the negative effect of outliers and extreme units on the information produced by RSS. The steps of LRSS can be summarized as:

1. Randomly select m sets, each of size m, from the population of interest.

2. Without any actual quantification, rank the units within each set by visual comparison, expert opinion, or an auxiliary variable.

3. Identify the LRSS coefficient k = [mα], where 0 ≤ α < 0.5 and [x] denotes the largest integer less than or equal to x.

4. For each of the first k+1 ranked sets, exactly quantify the (k+1)th ordered sampling unit.

5. For each of the last k+1 ranked sets, exactly quantify the (m−k)th ordered sampling unit.

6. For j = k+2, …, m−k−1, exactly quantify the unit with rank j in the jth ranked set.

7. Repeat the preceding steps, if needed, for r cycles to get a sample of size n = rm for actual quantification.
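The steps above can be sketched in Python (an illustrative translation of the scheme; the paper's appendix uses R, the function names here are ours, and ranking is made perfect by sorting the simulated values):

```python
import numpy as np

def lrss_ranks(m, alpha):
    """Ranks h_i to quantify under LRSS, i = 1, ..., m (1-based)."""
    k = int(m * alpha)                    # LRSS coefficient k = [m*alpha]
    ranks = []
    for i in range(1, m + 1):
        if i <= k + 1:
            ranks.append(k + 1)           # step 4: first k+1 sets
        elif i <= m - k - 1:
            ranks.append(i)               # step 6: middle sets
        else:
            ranks.append(m - k)           # step 5: last k+1 sets
    return ranks

def lrss_sample(m, r, alpha, rate, rng):
    """LRSS sample of size n = r*m from an Exp(rate) population (perfect ranking)."""
    out = []
    for _ in range(r):                                   # step 7: r cycles
        for h in lrss_ranks(m, alpha):                   # steps 1-6
            ranked_set = np.sort(rng.exponential(1.0 / rate, size=m))
            out.append(ranked_set[h - 1])                # quantify the h-th unit
    return np.array(out)

rng = np.random.default_rng(2023)
sample = lrss_sample(m=5, r=10, alpha=0.25, rate=1.0, rng=rng)
```

With m = 5 and α = 0.25 the coefficient is k = 1, so the quantified ranks per cycle are 2, 2, 3, 4, 4; taking k = 0 recovers ordinary RSS.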

Note that the ranking process may be imprecise and contain errors; this situation is known as imperfect ranking. In contrast, perfect ranking is a situation in which the ranking process is free from errors. Additionally, if k = 0, then LRSS reduces to the traditional RSS. Al-Omari (2015) considered the estimation of the distribution function under LRSS. Göçoğlu and Demirel (2019) used LRSS and other sampling variations for estimating the population proportion. Haq et al. (2015) introduced a generalization of LRSS, proposing a new sampling design termed varied LRSS (VLRSS).

To obtain a VLRSS sample, randomly draw k sets, each of size m₁, from the parent population, where m₁ (which may be larger or smaller than m) is a value determined by cost or budget constraints. Rank the items within each set by any inexpensive method and select the vth smallest ranked item, where v = 1, 2, …, [m₁/2]. Draw another k sets, each of size m₁, from the parent population, rank them, and this time select the (m₁−v+1)th smallest ranked item. Finally, draw (m−2k) sets, each of size m, from the parent population, rank them, and select the ith smallest ranked item from each set, where i = k+1, …, m−k. This completes one cycle of VLRSS of size m. The whole procedure can be repeated r times to obtain a VLRSS sample of size n = rm.
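The cycle just described can be sketched as follows (an illustrative Python sketch under perfect ranking; the names are ours):

```python
import numpy as np

def vlrss_design(m, m1, k, v):
    """List of (set_size, rank) pairs quantified in one VLRSS cycle of size m."""
    design = []
    for i in range(1, m + 1):
        if i <= k:
            design.append((m1, v))              # k sets: v-th smallest
        elif i <= 2 * k:
            design.append((m1, m1 - v + 1))     # k sets: (m1-v+1)-th smallest
        else:
            design.append((m, i - k))           # m-2k sets: ranks k+1, ..., m-k
    return design

def vlrss_sample(m, m1, k, v, r, rate, rng):
    """VLRSS sample of size n = r*m from an Exp(rate) population."""
    out = []
    for _ in range(r):
        for set_size, t in vlrss_design(m, m1, k, v):
            ranked_set = np.sort(rng.exponential(1.0 / rate, size=set_size))
            out.append(ranked_set[t - 1])
    return np.array(out)

rng = np.random.default_rng(7)
x = vlrss_sample(m=5, m1=4, k=1, v=1, r=10, rate=1.0, rng=rng)
```

For m = 5, m₁ = 4, k = 1, v = 1, one cycle quantifies the 1st and 4th smallest of two sets of size 4 and then the 2nd, 3rd and 4th smallest of three sets of size 5.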

The beauty of VLRSS is that it encompasses several variations of RSS schemes; thus, studying VLRSS implies considering many popular RSS designs. Therefore, VLRSS can be adopted for estimating many parameters of interest, such as a proportion (Frey and Zhang 2019), a dynamic reliability measure (Mahdizadeh and Zamanzade 2018b), a quantile (Morabbi and Razmkhah 2020), and the mean past lifetime (Zamanzade et al. 2022). Nevertheless, not many studies have been published in this direction. Haq et al. (2015) proved that the mean estimator under VLRSS is unbiased and, under certain conditions, better than its analogue under RSS. Recently, Al-Omari (2021) investigated the superiority of VLRSS in estimating the location and scale parameters of a location-scale family. Abdallah (2022) used VLRSS in estimating the population distribution function. These findings encourage us to study the properties of the MLE and Bayesian estimators of R under exponentiality when VLRSS is adopted.

Numerous works have studied the estimation of R under RSS and its variations using either parametric or nonparametric approaches. For instance, Mahdizadeh and Zamanzade (2016, 2018a, 2020) adopted the nonparametric approach, while Akgul et al. (2018, 2020), Esemen et al. (2021) and Hassan et al. (2021) considered the parametric approach. In the same context, Dong and Zhang (2019) estimated R under LRSS in the case where X and Y follow the exponential distribution. This motivated us to extend that study to estimate R based on VLRSS using maximum likelihood estimation (MLE) as well as the Bayesian approach.

A continuous random variable X has the exponential distribution if its probability density function (pdf) and cumulative distribution function (CDF) are given, respectively, by:

f(x; λ) = λe^{−λx}, x > 0, λ > 0,

and

F(x; λ) = 1 − e^{−λx}.

Kotz et al. (2003) obtained R under the assumption that X and Y are two independent exponential random variables with parameters μ and λ, respectively, as follows:

R = P(Y > X) = ∫₀^∞ F(x; μ) f(x; λ) dx = μ/(μ + λ). (1)
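Equation (1) is easy to verify by simulation; the sketch below (our own check, not part of the paper) draws exponential stress and strength values and compares the empirical P(Y > X) with μ/(μ + λ):

```python
import numpy as np

# With X ~ Exp(mu) and Y ~ Exp(lambda), Eq. (1) gives R = mu / (mu + lambda).
rng = np.random.default_rng(0)
mu, lam, n = 1.0, 3.0, 200_000
x = rng.exponential(scale=1.0 / mu, size=n)   # numpy parameterizes by scale = 1/rate
y = rng.exponential(scale=1.0 / lam, size=n)
r_mc = float(np.mean(y > x))                  # empirical P(Y > X)
r_exact = mu / (mu + lam)                     # = 0.25 for these parameters
```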

The rest of the paper is organized as follows: Section 2 presents the estimators of R under LRSS using the MLE and Bayesian methods. Section 3 proposes the MLE and Bayesian estimators of R under VLRSS. Section 4 gives numerical comparisons between the considered estimators in terms of the relative efficiency criterion. In Section 5, an application to an empirical dataset is addressed. Some concluding remarks and potential directions for future studies are presented in Section 6. Finally, the R code is provided in an appendix.

2 Estimators of R Under LRSS

In this section, the MLE and the Bayes estimators of R under LRSS data are derived.

2.1 MLE Estimators of R

Let X_{i(s:m)j}, (i = 1, …, m; j = 1, …, r), denote the sth order statistic from the ith set of size m in the jth cycle. Then X_{i(h_ix:m_x)j}, (i = 1, …, m_x; j = 1, …, r_x), and Y_{i(h_iy:m_y)j}, (i = 1, …, m_y; j = 1, …, r_y), denote the LRSS samples from the exponential distribution with parameters μ and λ and sample sizes n_x = m_x r_x and n_y = m_y r_y, respectively, where m_x and m_y are the set sizes, r_x and r_y are the numbers of cycles, and h_iz can be expressed as:

h_iz =
  k+1,       for 1 ≤ i ≤ k+1,
  i,         for k+2 ≤ i ≤ m_z − k − 1,
  m_z − k,   for m_z − k ≤ i ≤ m_z.

Assuming the ranking is perfect, the pdf of Xi(hix:mx)j and Yi(hiy:my)j are respectively given by:

g_x(x_{i(h_ix:m_x)j}) = [m_x!/((h_ix−1)!(m_x−h_ix)!)] [F(x_{i(h_ix:m_x)j}; μ)]^{h_ix−1} [1 − F(x_{i(h_ix:m_x)j}; μ)]^{m_x−h_ix} f(x_{i(h_ix:m_x)j}; μ) (2)

and

g_y(y_{i(h_iy:m_y)j}) = [m_y!/((h_iy−1)!(m_y−h_iy)!)] [F(y_{i(h_iy:m_y)j}; λ)]^{h_iy−1} [1 − F(y_{i(h_iy:m_y)j}; λ)]^{m_y−h_iy} f(y_{i(h_iy:m_y)j}; λ). (3)

Using (2) and (3), the likelihood function L1(μ,λ) can be written as:

L₁(μ, λ) = [∏_{j=1}^{r_x} ∏_{i=1}^{m_x} g_x(x_{i(h_ix:m_x)j})] × [∏_{j=1}^{r_y} ∏_{i=1}^{m_y} g_y(y_{i(h_iy:m_y)j})]
= μ^{n_x} λ^{n_y} [∏_{j=1}^{r_x} ∏_{i=1}^{m_x} (m_x!/((h_ix−1)!(m_x−h_ix)!)) (1 − e^{−μx_{(h_ix:m_x)j}})^{h_ix−1} e^{−(m_x−h_ix+1)μx_{(h_ix:m_x)j}}]
× [∏_{j=1}^{r_y} ∏_{i=1}^{m_y} (m_y!/((h_iy−1)!(m_y−h_iy)!)) (1 − e^{−λy_{(h_iy:m_y)j}})^{h_iy−1} e^{−(m_y−h_iy+1)λy_{(h_iy:m_y)j}}].

According to Dong and Zhang (2019), the MLEs of μ and λ, say μ̂₁ and λ̂₁, can be obtained by simultaneously solving the following equations:

∂ln(L₁(μ, λ))/∂μ = n_x/μ − Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} (m_x − h_ix + 1) x_{(h_ix:m_x)j} + Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} (h_ix − 1) x_{(h_ix:m_x)j} e^{−μx_{(h_ix:m_x)j}} / (1 − e^{−μx_{(h_ix:m_x)j}}) = 0 (4)

∂ln(L₁(μ, λ))/∂λ = n_y/λ − Σ_{j=1}^{r_y} Σ_{i=1}^{m_y} (m_y − h_iy + 1) y_{(h_iy:m_y)j} + Σ_{j=1}^{r_y} Σ_{i=1}^{m_y} (h_iy − 1) y_{(h_iy:m_y)j} e^{−λy_{(h_iy:m_y)j}} / (1 − e^{−λy_{(h_iy:m_y)j}}) = 0 (5)

By using the invariance property, the MLE of R is given by:

R̂₁ = μ̂₁/(μ̂₁ + λ̂₁).
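The likelihood equations are one-dimensional in each parameter, so any bracketing root finder suffices. A minimal sketch (our code; it differentiates the LRSS exponential log-likelihood, whose score is positive near zero and negative for large values of the rate):

```python
import numpy as np

def score(mu, x, h, m):
    """d/d(mu) of the LRSS exponential log-likelihood for data x with ranks h."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    e = np.exp(-mu * x)
    return (x.size / mu
            - np.sum((m - h + 1) * x)
            + np.sum((h - 1) * x * e / (1.0 - e)))

def mle_rate(x, h, m, lo=1e-8, hi=1e4, iters=200):
    """Bisection on the score: positive near 0, negative for large mu."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid, x, h, m) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def reliability_mle(x, hx, mx, y, hy, my):
    """R-hat = mu-hat / (mu-hat + lambda-hat), by the invariance property."""
    mu_hat = mle_rate(x, hx, mx)
    lam_hat = mle_rate(y, hy, my)
    return mu_hat / (mu_hat + lam_hat)
```

A handy sanity check: when all ranks equal 1 and m = 1 (simple random sampling), the score reduces to n/μ − Σx, so μ̂ = n/Σx.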

Additionally, Dong and Zhang (2019) proved that R̂₁ exists, is unique, and is asymptotically normally distributed.

2.2 Bayesian Estimators of R

In this part, we consider the Bayesian approach to estimate R under LRSS. This implies treating μ and λ as random variables instead of fixed constants. This idea is plausible since the parameters of a population may change throughout the study. This variation can be taken into account by assigning prior distributions to the unknown parameters, which explains why the Bayesian approach often provides more precise estimates than the MLE method. Here, we assume that the parameters μ and λ follow independent gamma prior distributions of the following form:

π(μ, λ) ∝ μ^{a₁−1} λ^{λ₁−1} exp(−a₂μ − λ₂λ),

where a1,λ1,a2 and λ2 are called the hyper-parameters. L1(μ,λ) can be expressed as:

L1(μ,λ) =[(μnx)j=1rxi=1mxs=0hix-1(-1)shix(mxhix)(hix-1s)
×e-(mx-hix+s+1)μx(hix:mx)j]
×(λny)[j=1ryi=1mys=0hiy-1(-1)shiy(myhiy)(hiy-1s)
×e-(my-hiy+s+1)λy(hiy:my)j].
=[(μnx)j=1rx[s1j=0h1x-1s2j=0h2x-1smxj=0hmx-1
×[i=1mx(-1)sijhix(mxhix)(hix-1sij)]
×e-μi=1mx(mx-hix+sij+1)x(hix:mx)j]]
×[(λny)j=1ry[s1j=0h1y-1s2j=0h2y-1smyj=0hmy-1
×[i=1my(-1)sijhiy(myhiy)(hiy-1sij)]
e-λi=1my(my-hiy+sij+1)y(hiy:my)j]].

Consequently, the posterior density can be obtained as:

π(μ,λ|X,Y)
  =[j=1rx[s1j=0h1x-1s2j=0h2x-1smxj=0hmx-1×[i=1mx(-1)sijhix(mxhix)(hix-1sij)](μmx+a1-1)×e-μi=1mx(mx-hix+sij+a2+1)x(hix:mx)j]j=1rx[s1j=0h1x-1s2j=0h2x-1snj=0hmx-1×[i=1mx(-1)sijhix(mxhix)(hix-1sij)]×Γ(mx+a1)(i=1mx(mx-hix+sij+a2+1)x(hix:mx)j)mx+a1]]
  ×[j=1ry[s1j=0h1y-1s2j=0h2y-1smyj=0hmy-1×[i=1my(-1)sijhiy(myhiy)(hiy-1sij)](λmy+λ1-1)×e-λi=1my(my-hiy+sij+λ2+1)y(hiy:my)j]j=1ry[s1j=0h1y-1s2j=0h2y-1snj=0hmy-1×[i=1my(-1)sijhiy(myhiy)(hiy-1sij)]×Γ(my+λ1)(i=1my(my-hiy+sij+λ2+1)y(hiy:my)j)my+λ1]].

Although the squared error loss function is the most popular loss function in Bayesian analysis, it is not preferable for estimating a reliability function (see Singh et al. (2014)). Hereafter, the general entropy loss (GEL) function and the LINEX loss function are adopted. The Bayesian estimates of μ and λ based on GEL can be expressed, respectively, as:

μ^2 =(E(μ-p))-1p
=[j=1rx[s1j=0h1x-1s2j=0h2x-1snj=0hmx-1×[i=1mx(-1)sijhix(mxhix)(hix-1sij)]×Γ(mx+a1-p)(i=1mx(mx-hix+sij+a2+1)x(hix:mx)j)mx+a1-p]j=1rx[s1j=0h1x-1s2j=0h2x-1snj=0hmx-1×[i=1mx(-1)sijhix(mxhix)(hix-1sij)]×Γ(mx+a1)(i=1mx(mx-hix+sij+a2+1)x(hix:mx)j)mx+a1]]-1p.
λ^2 =(E(λ-p))-1p
=[j=1ry[s1j=0h1y-1s2j=0h2y-1snj=0hmy-1×[i=1my(-1)sijhiy(myhiy)(hiy-1sij)]×Γ(my+λ1-p)(i=1my(my-hiy+sij+λ2+1)y(hiy:my)j)my+λ1-p]j=1ry[s1j=0h1y-1s2j=0h2y-1snj=0hmy-1×[i=1my(-1)sijhiy(myhiy)(hiy-1sij)]×Γ(my+λ1)(i=1my(my-hiy+sij+λ2+1)y(hiy:my)j)my+λ1]]-1p.

Then the Bayesian estimator of R under GEL is given by:

R̂₂ = μ̂₂/(μ̂₂ + λ̂₂).

Under the LINEX loss function, the Bayesian estimates of μ and λ can be described, respectively, as:

μ^3 =-1pln(E(e-pμ))
=-1pln[j=1rx[s1j=0h1x-1s2j=0h2x-1smxj=0hmx-1×[i=1mx(-1)sijhix(mxhix)(hix-1sij)]×Γ(mx+a1)(i=1mx(mx-hix+sij+a2+1)x(hix:mx)j+p)mx+a1]j=1rx[s1j=0h1x-1s2j=0h2x-1snj=0hmx-1×[i=1mx(-1)sijhix(mxhix)(hix-1sij)]×Γ(mx+a1)(i=1mx(mx-hix+sij+a2+1)x(hix:mx)j)mx+a1]].
λ^3 =-1pln(E(e-pλ))
=-1pln[j=1ry[s1j=0h1y-1s2j=0h2y-1snj=0hmy-1×[i=1my(-1)sijhiy(myhiy)(hiy-1sij)]×Γ(my+λ1)(i=1my(my-hiy+sij+λ2+1)y(hiy:my)j+p)my+λ1]j=1ry[s1j=0h1y-1s2j=0h2y-1snj=0hmy-1×[i=1my(-1)sijhiy(myhiy)(hiy-1sij)]×Γ(my+λ1)(i=1my(my-hiy+sij+λ2+1)y(hiy:my)j)my+λ1]].

Hence the Bayesian estimator of R under the LINEX loss function is given by:

R̂₃ = μ̂₃/(μ̂₃ + λ̂₃).

Note that by setting a₁ = a₂ = λ₁ = λ₂ = 0, the estimators R̂₂ and R̂₃ correspond to the Jeffreys prior.
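Instead of expanding the posterior into the finite mixture above, the GEL estimate (E[μ^(−p)])^(−1/p) can also be sketched by direct numerical integration of the unnormalized posterior. The code below is our illustration, assuming a Gamma(a₁, a₂) prior on μ, an LRSS exponential likelihood, and that the posterior mass lies inside the chosen grid:

```python
import numpy as np
from math import gamma

def gel_bayes_mu(x, h, m, a1, a2, p, upper=20.0, points=100_000):
    """Grid-integration Bayes estimate of mu under general entropy loss."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    mu = np.linspace(upper / points, upper, points)
    # log posterior (up to a constant): gamma prior times LRSS exponential likelihood
    log_post = ((x.size + a1 - 1) * np.log(mu) - a2 * mu
                + np.sum((h - 1) * np.log1p(-np.exp(-np.outer(mu, x)))
                         - (m - h + 1) * np.outer(mu, x), axis=1))
    w = np.exp(log_post - log_post.max())              # stabilized posterior weights
    inv_moment = np.sum(mu ** (-p) * w) / np.sum(w)    # E(mu^-p | data)
    return inv_moment ** (-1.0 / p)

# Sanity check: with all h = 1 and m = 1 the posterior is Gamma(n + a1, sum(x) + a2),
# whose GEL estimate is (gamma(n+a1)/gamma(n+a1-p))**(1/p) / (sum(x) + a2).
est = gel_bayes_mu([1.0, 2.0, 0.5], [1, 1, 1], 1, a1=2.0, a2=1.0, p=1.0)
exact = (gamma(5.0) / gamma(4.0)) / 4.5
```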

3 Estimators of R Under VLRSS

In this section, the MLE and the Bayes estimators of R under VLRSS data are considered.

3.1 MLE Estimators of R

Let X_{i(t_ix:m_ix)j} and Y_{i(t_iy:m_iy)j} denote perfect VLRSS samples drawn from the exponential distribution, where:

t_iz =
  v,           for 1 ≤ i ≤ k,
  m₁ − v + 1,  for k+1 ≤ i ≤ 2k,
  i − k,       for 2k+1 ≤ i ≤ m_z,

and

m_iz =
  m₁,   for 1 ≤ i ≤ 2k,
  m_z,  for 2k+1 ≤ i ≤ m_z.

Assuming the ranking is perfect, the pdf of Xi(tix:mix)j and Yi(tiy:miy)j are respectively given by:

w_x(x_{i(t_ix:m_ix)j}) = [m_ix!/((t_ix−1)!(m_ix−t_ix)!)] [F(x_{i(t_ix:m_ix)j}; μ)]^{t_ix−1} [1 − F(x_{i(t_ix:m_ix)j}; μ)]^{m_ix−t_ix} f(x_{i(t_ix:m_ix)j}; μ) (6)

and

w_y(y_{i(t_iy:m_iy)j}) = [m_iy!/((t_iy−1)!(m_iy−t_iy)!)] [F(y_{i(t_iy:m_iy)j}; λ)]^{t_iy−1} [1 − F(y_{i(t_iy:m_iy)j}; λ)]^{m_iy−t_iy} f(y_{i(t_iy:m_iy)j}; λ). (7)

Using (6) and (7), the likelihood function L2(μ,λ) can be expressed as:

L₂(μ, λ) = [∏_{j=1}^{r_x} ∏_{i=1}^{m_x} w_x(x_{i(t_ix:m_ix)j})] × [∏_{j=1}^{r_y} ∏_{i=1}^{m_y} w_y(y_{i(t_iy:m_iy)j})]
= μ^{n_x} λ^{n_y} [∏_{j=1}^{r_x} ∏_{i=1}^{m_x} (m_ix!/((t_ix−1)!(m_ix−t_ix)!)) (1 − e^{−μx_{(t_ix:m_ix)j}})^{t_ix−1} e^{−(m_ix−t_ix+1)μx_{(t_ix:m_ix)j}}]
× [∏_{j=1}^{r_y} ∏_{i=1}^{m_y} (m_iy!/((t_iy−1)!(m_iy−t_iy)!)) (1 − e^{−λy_{(t_iy:m_iy)j}})^{t_iy−1} e^{−(m_iy−t_iy+1)λy_{(t_iy:m_iy)j}}].

The MLEs of μ and λ can be obtained by solving the following likelihood equations:

∂ln(L₂(μ, λ))/∂μ = n_x/μ − Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} (m_ix − t_ix + 1) x_{(t_ix:m_ix)j} + Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} (t_ix − 1) x_{(t_ix:m_ix)j} e^{−μx_{(t_ix:m_ix)j}} / (1 − e^{−μx_{(t_ix:m_ix)j}}) = 0

∂ln(L₂(μ, λ))/∂λ = n_y/λ − Σ_{j=1}^{r_y} Σ_{i=1}^{m_y} (m_iy − t_iy + 1) y_{(t_iy:m_iy)j} + Σ_{j=1}^{r_y} Σ_{i=1}^{m_y} (t_iy − 1) y_{(t_iy:m_iy)j} e^{−λy_{(t_iy:m_iy)j}} / (1 − e^{−λy_{(t_iy:m_iy)j}}) = 0

The MLEs of μ and λ based on VLRSS are denoted by μ̂₄ and λ̂₄, respectively. By using the invariance property of the MLEs, the MLE of R under VLRSS is given by:

R̂₄ = μ̂₄/(μ̂₄ + λ̂₄).

Proposition 1: For any given VLRSS scheme, R̂₄ exists and is unique.

Proof: It is clear that as μ → 0⁺, the first and third terms of ∂ln(L₂(μ, λ))/∂μ tend to ∞ while the second term remains bounded, so ∂ln(L₂(μ, λ))/∂μ is positive. On the other hand, as μ → ∞, the first and third terms tend to 0 while the second term is a negative constant, so ∂ln(L₂(μ, λ))/∂μ is negative. By the first derivative test (see Sinha et al. (2016)), ∂ln(L₂(μ, λ))/∂μ has exactly one root, which is μ̂₄. The same argument extends to the case of ∂ln(L₂(μ, λ))/∂λ. The proposition is proved.

In order to obtain the asymptotic distribution of R̂₄, we derive the Fisher information matrix I(μ, λ), expressed as:

I(μ, λ) = [ I₁₁  0 ; 0  I₂₂ ],

where:

I₁₁ = −E(∂²ln(L₂(μ, λ))/∂μ²)
= −E(−n_x/μ² − Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} (m_ix − t_ix + 1) x_{(t_ix:m_ix)j}
+ Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} [ (t_ix − 1) x²_{(t_ix:m_ix)j} e^{−2μx_{(t_ix:m_ix)j}} / (1 − e^{−μx_{(t_ix:m_ix)j}})²
+ (t_ix − 1) x²_{(t_ix:m_ix)j} e^{−μx_{(t_ix:m_ix)j}} / (1 − e^{−μx_{(t_ix:m_ix)j}}) ])
= n_x/μ² + F₁x(μ) − F₂x(μ) − F₃x(μ),

where:

F1x(μ) =E(j=1rxi=1mx(mix-tix+1)x(tix:mix)j)
=j=1rxi=1mx(mix-tix+1)E(x(tix:mix)j). (8)

Since:

E(x(tix:mix)j) =0x(tix:mix)jwx(xi(tix:mix)j)dx(tix:mix)j
=0x(tix:mix)jmix!(tix-1)!(mix-tix)!
×(1-e-μx(tix:mix)j)tix-1
×e-(mix-tix+1)μx(tix:mix)jdx(tix:mix)j.
=mix!(tix-1)!(mix-tix)!s=0tix-1(tix-1s)(-1)s
×0x(tix:mix)jμe-(mix-tix+1+s)μx(tix:mix)jdx(tix:mix)j.
=mix!(tix-1)!(mix-tix)!s=0tix-1(tix-1s)
×(-1)s(mix-tix+1+s)2μ. (9)

By substituting (9) into (8), we get:

F1x(μ) =j=1rxi=1mx(mix-tix+1)mix!(tix-1)!(mix-tix)!
×s=0tix-1(tix-1s)(-1)s(mix-tix+1+s)2μ.

By using the same procedure, one can verify that:

F₂x(μ) = E(Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} (t_ix − 1) x²_{(t_ix:m_ix)j} e^{−2μx_{(t_ix:m_ix)j}} / (1 − e^{−μx_{(t_ix:m_ix)j}})²)
= Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} [m_ix!(t_ix − 1)/((t_ix − 1)!(m_ix − t_ix)!)] Σ_{s=0}^{t_ix−3} C(t_ix−3, s) · 2(−1)^s / ((m_ix − t_ix + 3 + s)³ μ²)

and

F₃x(μ) = E(Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} (t_ix − 1) x²_{(t_ix:m_ix)j} e^{−μx_{(t_ix:m_ix)j}} / (1 − e^{−μx_{(t_ix:m_ix)j}}))
= Σ_{j=1}^{r_x} Σ_{i=1}^{m_x} [m_ix!(t_ix − 1)/((t_ix − 1)!(m_ix − t_ix)!)] Σ_{s=0}^{t_ix−2} C(t_ix−2, s) · 2(−1)^s / ((m_ix − t_ix + 2 + s)³ μ²).

Similarly,

I22=-E(2ln(L2(μ,λ))λ2)=nyλ2+F1y(λ)-F2y(λ)-F3y(λ).

Proposition 2: As n_x → ∞ and n_y → ∞, then:

(√n_x (μ̂₄ − μ), √n_y (λ̂₄ − λ)) →d N(0, [ I₁₁/n_x  0 ; 0  I₂₂/n_y ]⁻¹).

Proof. The proof follows from the asymptotic properties of the ML estimators under regularity conditions.

Proposition 3: As n_x → ∞ and n_y → ∞, the asymptotic distribution of R̂₄ is obtained as:

(R̂₄ − R) / √(d₁²/I₁₁ + d₂²/I₂₂) →d N(0, 1),

where d₁ = ∂R/∂μ = λ/(μ + λ)² and d₂ = ∂R/∂λ = −μ/(μ + λ)².

Proof. The proof follows from proposition 2 and the delta method.
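Proposition 3 yields an immediate large-sample variance and a Wald-type confidence interval for the reliability estimate. A small sketch (our code, taking the information terms I₁₁ and I₂₂ as already computed):

```python
import math

def asymptotic_variance(mu, lam, I11, I22):
    """Delta-method variance of R-hat: d1^2/I11 + d2^2/I22."""
    d1 = lam / (mu + lam) ** 2       # dR/d(mu)
    d2 = -mu / (mu + lam) ** 2       # dR/d(lambda)
    return d1 ** 2 / I11 + d2 ** 2 / I22

def wald_interval(r_hat, var, z=1.96):
    """Approximate 95% confidence interval for R, truncated to [0, 1]."""
    half = z * math.sqrt(var)
    return max(0.0, r_hat - half), min(1.0, r_hat + half)

v = asymptotic_variance(1.0, 1.0, 4.0, 4.0)   # d1 = -d2 = 0.25, so v = 0.03125
lo_ci, hi_ci = wald_interval(0.5, v)
```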

3.2 Bayesian Estimators of R

Here the unknown parameter R is estimated under the Bayesian set-up based on VLRSS. Similar to what was done in the preceding section, L₂(μ, λ) can be rewritten as:

L2(μ,λ) =(μnxλny)([j=1rxi=1mxmix!(tix-1)!(mix-tix)!
×(1-e-μx(tix:mix)j)tix-1e-(mix-tix+1)μx(tix:mix)j]
×[j=1ryi=1mymiy!(tiy-1)!(miy-tiy)!(1-e-λy(tiy:miy)j)tiy-1
×e-(miy-tiy+1)λy(tiy:miy)j]).
=[(μnx)j=1rx[s1j=0t1x-1s2j=0t2x-1smxj=0tmx-1
×[i=1mx(-1)sijtix(mixtix)(tix-1sij)]
×e-μi=1mx(mix-tix+sij+1)x(tix:mix)j]]
×[(λny)j=1ry[s1j=0t1y-1s2j=0t2y-1smyj=0tmy-1
×[i=1my(-1)sijtiy(miytiy)(tiy-1sij)]
×e-λi=1my(miy-tiy+sij+1)y(tiy:miy)j]].

Consequently, the posterior density can be obtained as:

π(μ,λ|X,Y) =π(μ,λ)L2(μ,λ)π(μ,λ)L2(μ,λ)dμdλ
=[j=1rx[s1j=0t1x-1s2j=0t2x-1smxj=0tmx-1×[i=1mx(-1)sijtix(mixtix)(tix-1sij)](μmx+a1-1)×e-μi=1mx(mix-tix+sij+a2+1)x(tix:mix)j]j=1rx[s1j=0t1x-1s2j=0t2x-1snj=0tmx-1×[i=1mx(-1)sijtix(mixtix)(tix-1sij)]×Γ(mx+a1)(i=1mx(mix-tix+sij+a2+1)x(tix:mix)j)mx+a1]]
×[j=1ry[s1j=0t1y-1s2j=0t2y-1smyj=0tmy-1×[i=1my(-1)sijtiy(miytiy)(tiy-1sij)]×(λmy+λ1-1)×e-λi=1my(miy-tiy+sij+λ2+1)y(tiy:miy)j]j=1ry[s1j=0t1y-1s2j=0t2y-1snj=0tmy-1×[i=1my(-1)sijtiy(miytiy)(tiy-1sij)]×Γ(my+λ1)(i=1my(miy-tiy+sij+λ2+1)y(tiy:miy)j)my+λ1]].

The Bayesian estimates of μ and λ based on GEL function can be expressed respectively as:

μ^5 =(E(μ-p))-1p
=[j=1rx[s1j=0t1x-1s2j=0t2x-1snj=0tmx-1×[i=1mx(-1)sijtix(mixtix)(tix-1sij)]×Γ(mx+a1-p)(i=1mx(mix-tix+sij+a2+1)x(tix:mix)j)mx+a1-p]j=1rx[s1j=0t1x-1s2j=0t2x-1snj=0tmx-1×[i=1mx(-1)sijtix(mixtix)(tix-1sij)]×Γ(mx+a1)(i=1mx(mix-tix+sij+a2+1)x(tix:mix)j)mx+a1]]-1p.
λ^5 =(E(λ-p))-1p
=[j=1ry[s1j=0t1y-1s2j=0t2y-1snj=0tmy-1×[i=1my(-1)sijtiy(miytiy)(tiy-1sij)]×Γ(my+λ1-p)(i=1my(miy-tiy+sij+λ2+1)y(tiy:miy)j)my+λ1-p]j=1ry[s1j=0t1y-1s2j=0t2y-1snj=0tmy-1×[i=1my(-1)sijtiy(miytiy)(tiy-1sij)]×Γ(my+λ1)(i=1my(miy-tiy+sij+λ2+1)y(tiy:miy)j)my+λ1]]-1p.

Then the Bayesian estimator of R under GEL is given by:

R̂₅ = μ̂₅/(μ̂₅ + λ̂₅).

Under LINEX loss function, the Bayesian estimates of μ and λ can be described respectively as:

μ^6 =-1pln(E(e-pμ))
=-1pln[j=1rx[s1j=0t1x-1s2j=0t2x-1smxj=0tmx-1×[i=1mx(-1)sijtix(mixtix)(tix-1sij)]×Γ(mx+a1)(i=1mx(mix-tix+sij+a2+1)x(tix:mix)j+p)mx+a1]j=1rx[s1j=0t1x-1s2j=0t2x-1snj=0tmx-1×[i=1mx(-1)sijtix(mixtix)(tix-1sij)]×Γ(mx+a1)(i=1mx(mix-tix+sij+a2+1)x(tix:mix)j)mx+a1]].
λ^6 =-1pln(E(e-pλ))
=-1pln[j=1ry[s1j=0h1y-1s2j=0h2y-1snj=0hmy-1×[i=1my(-1)sijtiy(miytiy)(tiy-1sij)]×Γ(my+λ1)(i=1my(miy-tiy+sij+λ2+1)y(tiy:miy)j+p)my+λ1]j=1ry[s1j=0h1y-1s2j=0h2y-1snj=0hmy-1×[i=1my(-1)sijtiy(miytiy)(tiy-1sij)]×Γ(my+λ1)(i=1my(miy-tiy+sij+λ2+1)y(tiy:miy)j)my+λ1]].

Hence the Bayesian estimator of R under the LINEX loss function is given by:

R̂₆ = μ̂₆/(μ̂₆ + λ̂₆).

4 Comparison of the Proposed Estimators

In this part, a comparison study between the suggested estimators of R based on VLRSS and their LRSS analogues is made under varying sample sizes, different values of R, and several ranking-quality configurations. Two levels of the set sizes and cycle numbers are considered, m_x = m_y = m ∈ {3, 5} and r_x = r_y = r ∈ {5, 10}, which enables us to assess the effect of the set size as well as the sample size. To study the effect of m₁, k and v, we considered m₁ ∈ {3, 4, …, 10}, k ∈ {1, 2}, and v = 1, …, 3 (depending on m₁). We set μ = 1, and λ is selected such that R ∈ {0.25, 0.5, 0.75}. To better analyze the simulated results, Dell and Clutter (1972)'s imperfect ranking model with correlation coefficient ρ is considered to take the quality of the ranking process into account. In this case, a sample from the bivariate normal distribution with linear correlation ρ is first generated. One of the variables is taken as the variable of interest and transformed to the exponential distribution, while the other is taken as a concomitant variable and used for ranking. Three configurations of ρ are considered: ρ = 1 for perfect ranking, ρ = 0.9 for imperfect ranking with reasonably good accuracy, and ρ = 0.5 for imperfect ranking. We compute the relative efficiency (RE) measure as the comparison criterion in light of the following formula:

RE_i = MSE(R̂_i)/MSE(R̂_{3+i}), i ∈ {1, 2, 3}.
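The simulation loop rests on two small pieces: a Dell and Clutter (1972)-style set generator, in which ranking is done on a normal concomitant with correlation ρ while the study variable is transformed to the exponential margin, and the MSE ratio above. Both are sketched here (our code; the helper names are ours):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def judged_order_statistic(size, rank, rate, rho, rng):
    """One quantified unit: the value whose concomitant has the given rank (1-based)."""
    z = rng.standard_normal(size)                                   # drives the study variable
    c = rho * z + sqrt(1.0 - rho ** 2) * rng.standard_normal(size)  # ranking variable
    u = np.clip([norm_cdf(v) for v in z], 1e-12, 1 - 1e-12)
    x = -np.log(1.0 - u) / rate                                     # Exp(rate) margin
    return x[np.argsort(c)[rank - 1]]                               # judgment ranking

def relative_efficiency(est_lrss, est_vlrss, true_r):
    """RE = MSE(LRSS estimator) / MSE(VLRSS estimator); RE > 1 favours VLRSS."""
    mse = lambda e: float(np.mean((np.asarray(e) - true_r) ** 2))
    return mse(est_lrss) / mse(est_vlrss)

# With rho = 1, ranking is perfect: on the same seed, rank 1 gives the set minimum
# and rank 5 the set maximum.
rng_a = np.random.default_rng(5)
low = judged_order_statistic(5, 1, 1.0, 1.0, rng_a)
rng_b = np.random.default_rng(5)
high = judged_order_statistic(5, 5, 1.0, 1.0, rng_b)
```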

Table 1 The estimates of REs at ρ=1

m₁      3    4    5      6      7      8        9        10
v       1    1    1 2    1 2    1 2    1 2 3    1 2 3    1 2 3
REi | R | (m,r) | RE values in the (m₁, v) column order above
k=1
RE1 0.25 (3,5) 1.17 1.23 1.44 2.01 1.47 2.12 1.50 2.27 1.55 2.31 2.39 1.61 2.41 2.51 1.65 2.58 2.66
(3,10) 1.15 1.43 1.48 1.54 1.59 1.78 1.66 1.85 1.71 1.92 2.01 1.75 2.07 2.11 1.76 2.15 2.17
(5,5) 0.82 0.86 0.90 1.01 0.94 0.74 0.96 1.03 0.99 1.04 1.13 1.01 1.04 1.24 1.03 1.05 1.45
(5,10) 0.87 0.87 0.92 0.97 0.97 0.87 0.89 1.01 0.89 1.01 1.10 0.97 1.05 1.23 1.04 1.06 1.35
0.50 (3,5) 1.04 1.11 1.34 1.39 1.59 1.61 1.65 1.74 1.71 2.10 2.01 1.79 2.21 2.22 1.87 2.35 2.41
(3,10) 1.12 1.44 1.50 1.18 1.58 1.28 1.61 1.37 1.70 1.72 1.80 1.77 1.88 1.99 1.81 2.01 2.15
(5,5) 0.89 0.87 0.92 1.03 0.99 1.06 1.02 1.14 1.04 1.20 1.21 1.05 1.30 1.34 1.08 1.352 1.39
(5,10) 0.88 0.89 0.96 1.02 0.99 1.13 1.01 1.25 1.03 1.25 1.19 1.03 1.26 1.30 1.09 1.40 1.35
0.75 (3,5) 0.90 1.21 1.41 1.66 1.54 1.78 1.63 1.81 1.77 1.99 2.01 1.81 2.16 2.22 1.84 2.40 2.51
(3,10) 0.98 1.33 1.41 1.45 1.49 1.52 1.53 1.62 1.68 1.78 1.87 1.75 1.89 2.01 1.80 2.03 2.22
(5,5) 0.91 0.92 0.95 1.12 0.99 1.17 1.02 1.19 1.02 1.22 1.34 1.02 1.23 1.38 1.01 1.24 1.42
(5,10) 0.79 0.93 0.98 1.11 1.01 1.12 1.02 1.18 1.03 1.25 1.36 1.04 1.26 1.40 1.03 1.31 1.45
RE2 0.25 (3,5) 1.08 1.46 2.01 1.99 2.09 2.13 2.16 2.36 2.30 2.63 2.66 2.40 2.81 2.89 2.48 2.99 3.10
(3,10) 1.03 1.35 1.51 1.23 1.61 1.34 1.99 1.76 2.23 2.65 2.70 2.43 2.98 2.96 2.81 3.20 3.65
(5,5) 0.76 0.79 0.94 0.97 1.11 0.99 1.12 1.11 1.14 1.13 1.16 1.16 1.14 1.18 1.14 1.15 1.20
(5,10) 0.79 0.82 0.96 0.89 1.08 1.02 1.22 1.13 1.15 1.13 1.17 1.18 1.21 1.21 1.20 1.24 1.26
0.50 (3,5) 1.26 1.35 1.43 1.69 1.58 1.81 1.94 1.90 2.10 2.15 2.33 2.54 2.49 2.61 2.91 2.99 3.11
(3,10) 1.19 1.11 1.76 1.54 1.89 1.98 2.21 2.42 2.65 2.78 2.98 2.87 2.91 3.10 3.01 3.01 3.23
(5,5) 0.91 0.93 0.98 1.06 1.11 1.12 1.13 1.15 1.15 1.16 1.03 1.17 1.20 1.05 1.18 1.21 1.11
(5,10) 0.82 0.92 0.97 1.05 1.13 1.14 1.10 1.16 1.16 1.20 1.02 1.21 1.18 1.04 1.20 1.25 1.13
0.75 (3,5) 1.07 1.26 1.46 1.71 1.60 2.14 1.74 2.43 1.91 2.65 2.69 2.13 2.87 2.99 2.31 3.21 3.51
(3,10) 1.21 2.41 2.78 1.73 2.99 3.12 3.20 3.31 3.65 3.81 3.90 3.87 3.99 4.20 4.19 3.92 4.32
(5,5) 0.94 0.93 1.02 1.03 1.04 1.05 1.08 1.09 1.11 1.13 1.05 1.14 1.17 1.22 1.16 1.22 1.32
(5,10) 0.89 0.94 1.04 1.08 1.05 1.07 1.07 1.11 1.15 1.09 1.19 1.18 1.23 1.24 1.23 1.29 1.34
RE3 0.25 (3,5) 1.07 1.15 1.54 1.16 1.89 2.18 2.12 2.31 2.25 2.41 2.47 2.37 2.77 2.80 2.42 2.88 2.99
(3,10) 1.09 1.30 1.80 1.47 1.76 1.65 1.89 1.79 1.99 2.32 2.46 2.25 2.77 2.98 2.54 3.82 4.05
(5,5) 0.78 0.89 0.87 0.99 1.16 1.06 1.45 1.15 1.64 1.27 1.79 1.75 1.35 1.89 1.85 1.45 2.12
(5,10) 0.84 0.92 0.97 0.99 1.19 1.12 1.54 1.20 1.69 1.32 1.83 1.88 1.43 1.97 1.90 1.54 2.09
0.50 (3,5) 1.21 1.27 1.32 1.57 1.48 1.71 1.65 1.99 1.78 2.17 2.22 2.12 2.30 2.28 2.45 2.39 2.41
(3,10) 1.13 1.15 1.45 1.50 1.59 1.56 1.71 1.79 1.98 2.12 2.33 2.34 2.69 2.82 2.51 2.91 3.09
(5,5) 0.94 0.95 1.02 1.03 1.08 1.12 1.11 1.15 1.14 1.19 1.06 1.16 1.23 1.09 1.19 1.27 1.19
(5,10) 0.86 0.91 1.03 1.04 1.09 1.13 1.10 1.18 1.20 1.25 1.16 1.26 1.29 1.19 1.27 1.37 1.34
0.75 (3,5) 1.08 1.19 1.48 1.61 1.54 1.81 1.68 2.07 2.072 2.39 2.41 2.16 2.69 2.78 2.26 3.25 3.31
(3,10) 1.23 1.70 2.01 1.75 2.20 2.54 2.54 2.69 2.76 2.84 2.99 2.89 3.32 3.63 3.12 3.85 3.99
(5,5) 0.88 0.97 1.05 1.08 0.99 1.16 1.09 1.24 1.25 1.33 1.04 1.30 1.45 1.14 1.38 1.61 1.24
(5,10) 0.918 0.93 1.07 1.10 1.02 1.22 1.12 1.27 1.29 1.39 1.09 1.38 1.49 1.19 1.43 1.69 1.29
k=2
RE1 0.25 (5,5) 0.68 0.70 0.78 0.88 0.83 0.89 0.85 0.99 1.02 1.05 1.10 1.08 1.14 1.27 1.12 1.20 1.30
(5,10) 0.79 0.82 0.89 0.95 0.90 0.99 1.03 1.10 1.10 1.15 1.23 1.18 1.26 1.33 1.29 1.33 1.41
0.50 (5,5) 0.59 0.64 0.76 0.83 0.89 0.99 1.01 1.02 1.02 1.06 1.04 1.04 1.09 1.15 1.04 1.14 1.22
(5,10) 0.63 0.68 0.82 0.91 0.96 0.99 1.03 1.05 1.04 1.10 1.07 1.06 1.12 1.18 1.08 1.23 1.43
0.75 (5,5) 0.52 0.66 0.73 0.81 0.94 0.98 0.96 0.98 0.99 1.02 1.06 1.01 1.05 1.09 1.02 1.06 1.12
(5,10) 0.59 0.70 0.79 0.85 0.90 0.99 0.99 0.99 1.02 1.04 1.07 1.03 1.07 1.17 1.05 1.10 1.21
RE2 0.25 (5,5) 0.61 0.67 0.75 0.89 0.94 1.019 1.13 1.10 1.17 1.21 1.26 1.27 1.33 1.49 1.34 1.45 1.54
(5,10) 0.67 0.74 0.84 0.94 0.92 1.028 1.26 1.13 1.35 1.24 1.38 1.47 1.36 1.54 1.54 1.61 1.73
0.50 (5,5) 0.55 0.66 0.74 0.88 0.96 1.018 1.04 1.12 1.08 1.25 1.30 1.11 1.27 1.49 1.14 1.35 1.54
(5,10) 0.52 0.62 0.78 0.82 0.91 1.018 1.06 1.16 1.10 1.31 1.34 1.17 1.31 1.54 1.23 1.42 1.63
0.75 (5,5) 0.57 0.69 0.73 0.88 0.97 0.98 1.07 1.09 1.13 1.19 1.26 1.23 1.32 1.42 1.34 1.40 1.52
(5,10) 0.51 0.73 0.79 0.85 0.90 0.99 1.09 1.12 1.16 1.22 1.32 1.31 1.39 1.51 1.39 1.52 1.61
RE3 0.25 (5,5) 0.54 0.66 0.70 0.81 0.91 0.97 1.10 1.15 1.29 1.32 1.39 1.33 1.49 1.51 1.41 1.56 1.70
(5,10) 0.58 0.73 0.79 0.87 0.93 0.99 1.14 1.19 1.33 1.39 1.46 1.43 1.50 1.64 1.52 1.66 1.88
0.50 (5,5) 0.58 0.79 0.87 0.93 0.97 1.038 1.01 1.051 1.02 1.09 1.15 1.03 1.21 1.32 1.03 1.27 1.40
(5,10) 0.59 0.84 0.90 0.98 0.99 0.98 1.02 1.06 1.02 1.13 1.19 1.04 1.28 1.43 1.05 1.36 1.54
0.75 (5,5) 0.63 0.89 0.96 1.01 0.90 0.96 1.05 1.09 1.09 1.14 1.17 1.16 1.29 1.37 1.25 1.43 1.60
(5,10) 0.55 0.94 0.99 1.00 0.94 0.95 1.10 1.12 1.14 1.18 1.23 1.26 1.32 1.43 1.33 1.49 1.73

Table 2 The estimates of REs at ρ=0.90

m₁      3    4    5      6      7      8        9        10
v       1    1    1 2    1 2    1 2    1 2 3    1 2 3    1 2 3
REi | R | (m,r) | RE values in the (m₁, v) column order above
k=1
RE1 0.25 (3,5) 1.16 1.18 1.19 1.37 1.23 1.43 1.251 1.58 1.26 1.71 1.72 1.27 1.78 1.84 1.28 1.84 1.88
(3,10) 1.25 1.31 1.35 1.45 1.41 1.55 1.44 1.68 1.48 1.86 1.98 1.50 1.98 2.10 1.53 2.01 2.19
(5,5) 0.79 0.84 0.89 0.85 0.96 0.94 1.02 0.99 1.05 1.01 1.05 1.07 1.02 1.09 1.10 1.04 1.13
(5,10) 0.65 0.79 0.82 0.89 0.88 0.91 0.95 0.94 0.99 0.97 1.01 1.03 0.99 1.04 1.09 1.03 1.10
0.50 (3,5) 1.14 1.17 1.19 1.28 1.21 1.36 1.25 1.44 1.29 1.48 1.58 1.32 1.51 1.66 1.33 1.54 1.70
(3,10) 1.503 1.56 1.60 1.72 1.63 1.48 1.65 1.50 1.69 1.61 1.69 1.76 1.81 1.88 1.78 1.86 1.93
(5,5) 0.73 0.85 0.90 0.91 1.026 0.959 1.06 0.99 1.11 1.03 0.99 1.15 1.04 1.03 1.18 1.05 1.06
(5,10) 0.80 0.90 0.96 0.93 1.04 0.96 1.07 1.02 1.15 1.05 0.97 1.18 1.05 1.01 1.23 1.06 1.04
0.75 (3,5) 1.23 1.26 1.32 1.35 1.38 1.422 1.45 1.47 1.53 1.59 1.61 1.56 1.62 1.64 1.60 1.65 1.67
(3,10) 1.26 1.29 1.36 1.39 1.41 1.45 1.49 1.55 1.58 1.61 1.64 1.61 1.64 1.66 1.66 1.6871 1.703
(5,5) 0.76 0.82 0.92 0.89 0.99 0.92 1.04 0.97 1.09 0.97 0.98 1.16 0.99 1.01 1.22 0.99 1.03
(5,10) 0.79 0.80 0.96 0.90 1.02 0.931 1.06 0.98 1.10 1.01 1.02 1.14 1.01 1.03 1.20 1.02 1.05
RE2 0.25 (3,5) 1.37 1.55 1.69 1.66 1.74 1.70 1.79 1.75 1.85 1.79 1.75 1.88 1.83 1.80 1.91 1.846 1.86
(3,10) 1.44 1.63 1.76 1.749 1.80 1.75 1.85 1.77 1.90 1.84 1.79 1.97 1.88 1.85 2.11 1.92 1.89
(5,5) 0.83 0.88 0.98 0.82 1.06 0.92 1.17 1.07 1.29 1.12 0.993 1.41 1.15 1.03 1.53 1.17 1.05
(5,10) 0.77 0.86 0.94 0.99 1.02 1.01 1.10 1.04 1.20 1.09 1.01 1.24 1.13 1.04 1.27 1.18 1.07
0.50 (3,5) 1.24 1.29 1.35 1.44 1.42 1.54 1.51 1.62 1.60 1.74 1.80 1.64 1.82 1.94 1.71 1.91 2.051
(3,10) 1.33 1.39 1.45 1.47 1.51 1.57 1.54 1.68 1.65 1.78 1.83 1.70 1.89 1.99 1.79 2.04 2.14
(5,5) 0.79 0.84 0.99 0.96 1.03 0.99 1.06 1.01 1.09 1.02 0.97 1.12 1.03 1.00 1.14 1.03 1.02
(5,10) 0.84 0.88 0.99 0.98 1.06 1.01 1.13 1.03 1.17 1.04 0.99 1.207 1.06 1.00 1.27 1.04 1.03
0.75 (3,5) 1.42 1.54 1.66 1.58 1.70 1.65 1.84 1.80 1.98 1.89 1.86 2.12 1.89 1.87 2.24 1.930 1.91
(3,10) 1.46 1.50 1.73 1.68 1.80 1.76 1.85 1.84 2.02 1.96 1.90 2.20 2.05 2.00 2.31 2.25 2.072
(5,5) 0.80 0.89 1.02 0.97 1.09 1.01 1.127 1.09 1.16 1.15 1.02 1.1920 1.1707 1.08 1.21 1.20 1.10
(5,10) 0.82 0.95 1.05 0.99 1.11 1.04 1.15 1.07 1.19 1.194 1.04 1.21 1.24 1.06 1.25 1.28 1.08
RE3 0.25 (3,5) 1.40 1.45 1.51 1.548 1.57 1.603 1.64 1.679 1.69 1.739 1.75 1.71 1.74 1.79 1.75 1.8045 1.82
(3,10) 1.53 1.69 1.76 1.781 1.80 1.84 1.89 1.910 2.03 2.07 2.10 2.16 2.11 2.15 2.17 2.14 2.19
(5,5) 0.80 0.87 0.949 0.92 1.04 0.99 1.15 1.04 1.25 1.072 1.02 1.37 1.10 1.04 1.50 1.12 1.06
(5,10) 0.75 0.80 0.92 0.97 1.01 0.99 1.10 1.041 1.22 1.09 0.95 1.30 1.10 0.99 1.09 1.13 1.04
0.50 (3,5) 1.19 1.26 1.31 1.38 1.37 1.43 1.43 1.48 1.48 1.54 1.666 1.50 1.63 1.79 1.551 1.65 1.78
(3,10) 1.26 1.31 1.36 1.39 1.43 1.47 1.51 1.54 1.60 1.64 1.70 1.74 1.777 1.84 1.84 1.90 1.95
(5,5) 0.77 0.89 1.01 0.902 1.02 0.994 1.07 1.01 1.119 1.03 0.99 1.12 1.06 1.02 1.15 1.03 1.01
(5,10) 0.79 0.94 1.04 0.93 1.05 0.98 1.10 1.02 1.15 1.02 0.98 1.191 1.04 1.01 1.26 1.05 1.04
0.75 (3,5) 1.49 1.54 1.63 1.60 1.71 1.66 1.78 1.75 1.84 1.80 1.70 1.90 1.84 1.74 1.96 1.86 1.76
(3,10) 1.54 1.70 1.79 1.64 1.83 1.70 1.89 1.80 1.94 1.85 1.75 1.99 1.87 1.80 2.07 1.94 1.83
(5,5) 0.86 0.96 1.03 0.99 1.06 1.04 1.09 1.08 1.13 1.11 1.05 1.16 1.16 1.08 1.22 1.19 1.12
(5,10) 0.90 0.99 1.05 1.001 1.09 1.05 1.10 1.10 1.15 1.14 1.07 1.218 1.20 1.097 1.28 1.26 1.13
k=2
RE1 0.25 (5,5) 0.65 0.69 0.79 0.80 0.84 0.86 0.87 0.94 0.95 0.92 1.29 0.99 1.02 1.30 1.02 1.03 1.32
(5,10) 0.71 0.73 0.81 0.84 0.88 0.89 0.90 0.92 0.98 0.97 1.31 1.03 1.05 1.36 1.03 1.05 1.37
0.50 (5,5) 0.67 0.721 0.779 0.82 0.92 1.01 0.94 1.08 0.99 1.15 1.24 1.03 1.20 1.34 1.03 1.25 1.45
(5,10) 0.72 0.741 0.749 0.852 0.95 1.03 0.984 1.108 1.0299 1.195 1.254 1.043 1.220 1.364 1.02 1.29 1.51
0.75 (5,5) 0.617 0.781 0.79 0.872 0.942 1.02 0.974 1.048 0.99 1.075 1.194 1.03 1.09 1.24 1.02 1.10 1.39
(5,10) 0.627 0.771 0.89 0.892 0.975 1.04 0.984 1.06 1.03 1.09 1.25 1.04 1.12 1.26 1.03 1.14 1.41
RE2 0.25 (5,5) 0.57 0.731 0.9179 0.92 0.9149 1.03 0.98 1.0246 1.02 1.06 1.0938 1.03 1.12 1.15 1.04 1.23 1.13
(5,10) 0.697 0.87881 0.89 0.902 0.93 1.04 0.99 1.03 1.03 1.07 1.10 1.04 1.14 1.17 1.05 1.26 1.15
0.50 (5,5) 0.70 0.8571 0.869 0.872 0.91 0.93 0.94 0.96 0.96 1.17 1.079 0.97 1.21 1.24 1.00 1.23 1.35
(5,10) 0.71 0.91 0.839 0.922 0.92 0.94 0.95 0.97 0.99 1.19 1.08 0.99 1.24 1.26 0.99 1.28 1.31
0.75 (5,5) 0.72 0.811 0.889 0.832 0.84 0.86 0.87 0.94 0.95 0.92 1.19 0.99 1.02 1.2630 1.01 1.05 1.43
(5,10) 0.637 0.841 0.789 0.842 0.88 0.89 0.90 0.92 0.98 0.97 1.21 1.03 1.05 1.276 1.02 1.04 1.35
RE3 0.25 (5,5) 0.647 0.851 0.819 0.852 0.91 0.94 0.94 0.96 0.96 1.09 1.09 0.97 1.12 1.13 0.98 1.17 1.150
(5,10) 0.67 0.891 0.8889 0.862 0.941 0.974 0.964 0.986 0.996 1.129 1.19 0.97 1.17 1.23 1.01 1.18 1.21
0.50 (5,5) 0.607 0.911 0.899 0.872 0.88 0.91 0.95 0.94 0.99 1.07 1.081 1.03 1.139 1.14 1.03 1.18 1.15
(5,10) 0.617 0.921 0.769 0.92 0.908 0.931 0.975 0.964 1.0199 1.087 1.0981 1.053 1.1539 1.174 1.04 1.195 1.21
0.75 (5,5) 0.627 0.761 0.929 0.9582 0.929 0.959 0.946 1.0199 0.961 1.03 1.189 1.01 1.04 1.289 1.02 1.04 1.34
(5,10) 0.67 0.861 0.95 0.9382 0.94 0.96 0.97 1.02 0.95 1.05 1.197 1.02 1.05 1.297 1.03 1.06 1.35

Table 3 The estimates of REs at ρ=0.5

m₁      3    4    5      6      7      8        9        10
v       1    1    1 2    1 2    1 2    1 2 3    1 2 3    1 2 3
REi | R | (m,r) | RE values in the (m₁, v) column order above
k=1
RE1 0.25 (3,5) 0.95 1.091 1.11 1.05 1.15 1.04 1.10 1.03 1.02 1.02 0.99 1.01 1.00 1.01 0.96 0.98 1.02
(3,10) 0.97 1.05 1.13 1.08 1.14 1.07 1.08 1.05 1.05 1.04 0.97 0.99 0.98 1.04 0.98 0.99 1.03
(5,5) 1.13 1.11 1.02 0.89 1.01 0.94 0.99 0.98 0.99 0.97 0.92 0.96 0.92 0.88 0.96 0.87 0.86
(5,10) 1.17 1.15 1.04 0.93 1.03 0.96 0.98 0.99 0.98 0.93 0.99 0.95 0.90 0.93 0.94 0.89 0.89
0.50 (3,5) 1.06 1.07 1.07 1.05 1.08 1.07 1.04 1.05 1.03 1.04 1.12 1.01 0.99 1.04 1.01 0.93 0.91
(3,10) 1.07 1.08 1.09 1.06 1.10 1.04 1.05 1.03 1.03 1.02 1.11 1.03 0.98 1.08 1.02 0.97 0.97
(5,5) 1.03 0.99 0.92 0.98 0.90 1.01 0.88 0.99 0.87 0.97 1.03 0.86 0.93 1.03 0.88 0.87 1.03
(5,10) 1.01 0.98 0.94 1.01 0.92 1.02 0.90 0.97 0.85 0.96 1.02 0.84 0.91 1.02 0.85 0.89 1.02
0.75 (3,5) 1.129 1.05 1.03 1.02 0.97 1.04 0.94 1.06 0.90 1.04 1.03 0.88 1.02 1.03 0.89 1.01 1.04
(3,10) 1.10 1.04 1.03 1.03 0.99 1.05 0.95 1.08 0.91 1.03 1.02 0.87 1.02 1.03 0.95 1.01 1.02
(5,5) 1.04 1.02 1.01 0.92 0.99 0.96 0.96 0.99 0.93 0.96 1.03 0.92 0.94 1.02 0.90 0.95 1.03
(5,10) 1.06 1.04 1.02 0.99 0.98 0.98 0.98 1.01 0.949 0.91 1.02 0.92 0.91 0.97 0.91 0.99 1.02
RE2 0.25 (3,5) 1.10 1.14 1.16 1.14 1.172 1.12 1.20 1.09 1.22 1.05 1.07 1.10 1.03 1.03 1.05 1.027 0.91
(3,10) 1.17 1.16 1.19 1.10 1.20 1.08 1.25 1.06 1.30 1.03 1.11 1.12 1.01 1.02 1.11 1.01 0.98
(5,5) 1.05 1.02 0.98 1.12 0.97 1.03 0.95 0.993 0.93 1.03 1.02 0.92 1.03 1.01 0.90 0.82 1.02
(5,10) 1.07 1.03 0.92 1.09 0.96 1.02 0.97 0.989 0.90 1.02 1.04 0.91 1.02 1.04 0.89 0.85 1.03
0.50 (3,5) 0.99 1.02 1.03 1.13 1.045 1.15 1.07 1.17 1.04 1.13 1.19 1.02 1.10 1.05 1.03 1.062 0.88
(3,10) 1.01 1.03 1.04 1.10 1.05 1.14 1.08 1.11 1.03 1.10 1.14 1.02 1.09 1.08 1.01 1.10 0.92
(5,5) 1.10 0.99 0.94 1.10 0.93 1.06 0.92 1.04 0.90 0.98 0.99 0.90 0.98 1.01 0.90 0.97 0.91
(5,10) 1.08 0.98 0.97 1.11 0.96 1.03 0.94 1.01 0.93 0.91 0.96 0.92 0.91 1.03 0.92 0.99 0.96
0.75 (3,5) 1.02 1.04 1.06 1.14 1.08 1.13 1.05 1.10 1.03 1.08 1.21 1.02 1.05 1.06 0.98 1.04 0.99
(3,10) 1.08 1.06 1.08 1.16 1.10 1.15 1.07 1.13 1.05 1.09 1.22 1.01 1.04 1.03 0.99 1.03 0.98
(5,5) 1.21 1.16 1.12 1.05 1.04 1.04 1.03 1.01 1.02 0.97 0.90 1.01 0.97 0.97 1.01 1.04 0.927
(5,10) 1.20 1.19 1.14 1.07 1.09 1.04 1.05 1.024 1.03 0.980 0.980 1.02 0.958 0.980 0.99 1.06 0.980
RE3 0.25 (3,5) 1.08 1.10 1.11 1.05 1.19 1.06 1.24 1.07 1.30 1.09 1.05 1.25 1.13 1.06 1.10 1.10 0.98
(3,10) 1.10 1.12 1.13 1.07 1.24 1.08 1.27 1.09 1.32 1.10 1.06 1.26 1.10 1.08 1.11 1.13 0.99
(5,5) 1.03 0.97 0.98 1.09 0.99 1.068 0.97 1.036 0.95 0.99 0.91 0.94 0.99 0.94 0.93 0.83 0.859
(5,10) 1.05 0.99 0.96 1.12 0.99 1.039 0.99 1.013 0.97 0.96 0.946 0.95 0.96 0.956 0.95 0.88 0.829
0.50 (3,5) 0.99 1.01 1.02 1.06 1.03 1.09 1.02 1.06 1.01 1.04 1.07 1.00 1.03 1.04 1.02 1.02 0.92
(3,10) 1.01 1.02 1.03 1.07 1.04 1.05 1.03 1.04 1.02 1.02 1.09 1.01 1.02 1.01 1.03 1.03 0.94
(5,5) 1.11 1.06 0.99 1.09 0.96 1.08 0.95 1.05 0.94 0.89 0.93 0.92 0.89 0.93 0.89 0.97 0.96
(5,10) 1.14 1.04 0.98 1.12 0.97 1.10 0.96 1.06 0.95 0.92 0.99 0.94 0.92 0.99 0.90 0.99 0.99
0.75 (3,5) 1.18 1.11 1.08 1.10 1.05 1.12 1.04 1.10 1.00 1.07 1.14 0.10 1.05 1.09 0.96 1.04 1.02
(3,10) 1.19 1.12 1.09 1.12 1.06 1.11 1.039 1.07 0.99 1.05 1.17 0.93 1.06 1.11 0.97 1.04 1.04
(5,5) 1.19 1.16 1.16 1.06 1.11 1.04 1.09 1.01 1.06 0.85 0.90 1.03 0.89 0.93 0.98 1.02 0.829
(5,10) 1.18 1.16 1.15 1.05 1.12 1.01 1.07 0.99 1.05 0.972 0.92 1.04 0.92 0.99 0.97 1.03 0.92
k=2
RE1 0.25 (5,5) 0.97 0.99 1.01 1.01 1.021 1.03 1.00 1.06 0.99 1.09 0.99 0.98 1.03 0.89 0.96 1.013 0.89
(5,10) 0.99 0.99 1.01 1.05 1.042 1.07 1.02 1.09 0.99 1.10 0.97 0.99 1.07 0.89 0.98 1.016 0.87
0.50 (5,5) 1.11 1.13 1.18 1.02 1.121 1.09 1.00 1.11 0.99 1.12 0.92 0.98 1.06 0.82 0.97 1.018 0.72
(5,10) 1.10 1.15 1.20 1.03 1.094 1.04 1.02 1.07 0.99 1.09 0.99 0.99 1.04 0.89 0.93 1.020 0.79
0.75 (5,5) 0.95 0.97 0.98 1.08 1.021 1.10 1.00 1.12 0.99 1.12 0.96 0.98 1.10 0.83 0.95 1.08 0.76
(5,10) 0.97 0.99 1.001 1.04 1.042 1.06 1.02 1.08 0.99 1.096 0.98 0.99 1.056 0.88 0.94 1.07 0.78
RE2 0.25 (5,5) 0.927 0.99 1.01 1.08 1.021 1.10 1.00 1.12 0.99 1.12 0.96 0.98 1.10 0.82 0.91 1.013 0.716
(5,10) 0.969 0.99 1.01 1.11 1.042 1.13 1.02 1.15 0.99 1.15 0.94 0.99 1.111 0.84 0.98 1.06 0.744
0.50 (5,5) 1.08 1.08 1.10 1.01 1.021 1.05 1.00 1.06 0.99 1.08 0.99 0.98 1.05 0.92 0.97 1.08 0.809
(5,10) 1.07 1.11 1.12 1.05 1.042 1.07 1.02 1.09 0.99 1.09 0.97 0.99 1.06 0.87 0.93 1.020 0.717
0.75 (5,5) 0.95 0.97 0.98 1.02 1.021 1.04 1.00 1.06 0.99 1.07 0.92 0.98 1.04 0.85 0.95 1.08 0.72
(5,10) 0.97 0.99 1.02 1.03 1.042 1.06 1.02 1.08 0.99 1.09 0.99 0.99 1.06 0.90 0.94 1.07 0.79
RE3 0.25 (5,5) 0.97 0.99 1.01 1.08 1.021 1.10 1.00 1.12 0.99 1.13 0.96 0.98 1.09 0.90 0.96 1.03 0.86
(5,10) 0.99 0.99 1.01 1.04 1.042 1.06 1.02 1.09 0.99 1.09 0.98 0.99 1.06 0.92 0.98 1.06 0.78
0.50 (5,5) 1.11 1.13 1.18 1.08 1.021 1.10 1.00 1.12 0.99 1.13 0.96 0.98 1.10 0.96 0.97 1.08 0.76
(5,10) 1.10 1.15 1.20 1.11 1.042 1.13 1.02 1.15 0.99 1.16 0.94 0.99 1.12 0.90 0.98 1.020 0.704
0.75 (5,5) 0.95 0.97 0.98 1.06 1.021 1.08 1.00 1.11 0.99 1.10 0.99 0.98 1.08 0.91 0.95 1.08 0.749
(5,10) 0.97 0.99 1.00 1.05 1.042 1.07 1.02 1.11 0.99 1.09 0.97 0.99 1.07 0.91 0.94 1.07 0.737

With the above definition, an REi larger (smaller) than one implies that R̂_{3+i} is asymptotically more (less) efficient than R̂_i. Under the Bayesian setup, we set the hyper-parameters a1=a2=λ1=λ2=0.01 and also p=1. For each combination of m, r, m1, k, v, ρ and R, 5,000 samples are generated under the LRSS and VLRSS schemes. Finally, for each generated sample the aforesaid estimators are computed; the REs are then estimated and listed in Tables 1–3.
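The Monte Carlo comparison described above can be sketched as follows. This is a simplified illustration rather than the paper's code: it uses standard perfect ranked set sampling (RSS) in place of the LRSS and VLRSS schemes (whose selection rules follow Haq et al. (2015)), and it estimates the RE of the RSS-based MLE of R against its simple random sampling (SRS) analogue by a ratio of simulated MSEs. All function and variable names here are ours.

```python
import numpy as np

rng = np.random.default_rng(2023)

def rss_sample(mean, m, r, rng):
    """Perfect ranked set sample of size n = m*r from Exp(mean):
    in each of r cycles, the i-th set of m units contributes its
    i-th order statistic (ranking assumed error-free)."""
    out = []
    for _ in range(r):
        sets = np.sort(rng.exponential(mean, size=(m, m)), axis=1)
        out.extend(sets[np.arange(m), np.arange(m)])  # diagonal = i-th order stat of set i
    return np.asarray(out)

def mle_R(x, y):
    """MLE of R = P(X < Y) = mu/(lam + mu), X ~ Exp(mean lam), Y ~ Exp(mean mu)."""
    return y.mean() / (x.mean() + y.mean())

def estimate_RE(lam, mu, m=3, r=5, reps=5000, rng=rng):
    """RE = MSE(SRS-based MLE) / MSE(RSS-based MLE); values > 1 favour RSS."""
    R_true = mu / (lam + mu)
    n = m * r
    se_srs = se_rss = 0.0
    for _ in range(reps):
        se_srs += (mle_R(rng.exponential(lam, n), rng.exponential(mu, n)) - R_true) ** 2
        se_rss += (mle_R(rss_sample(lam, m, r, rng), rss_sample(mu, m, r, rng)) - R_true) ** 2
    return se_srs / se_rss

print(estimate_RE(lam=1.0, mu=1.0))
```

Under perfect ranking the ratio comfortably exceeds one, mirroring the pattern in the tables; introducing ranking error would shrink or reverse the gain, as noted in the discussion.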

It can be noted that the REs increase as either m1 or v increases, provided that the quality of the ranking process is not weak. In the case of imperfect ranking, this pattern is generally reversed. This is not surprising, as all the studied estimators are derived under the assumption of perfect ranking; when the ranking quality is low, increasing m1 or v increases the ranking error, which in turn decreases the efficiency of the VLRSS-based estimators. It is also observed that the VLRSS-based estimators perform better at small m than at large m for fixed n, whereas for fixed m the effect of n is not pronounced. Furthermore, in many of the considered cases, increasing k has a negative effect on the REs. Generally speaking, VLRSS becomes more efficient than LRSS for both the MLE and Bayesian methods as the quality of ranking improves, as m1 or v increases, and as m or k decreases.

5 Real Data Analysis

In what follows, two empirical data sets are used to explore the efficiency of estimating R under exponentiality based on VLRSS. The first is related to the industrial field, was reported by Bader and Priest (1982), and can be found in Esemen et al. (2021). The data give the strength, measured in GPa, of single carbon fibers and 1000-carbon fiber tows at different gauge lengths. Here, we restrict our attention to single carbon fibers tested under tension at gauge lengths of 20 mm (X) and 10 mm (Y). Suppose that we are interested in comparing these two levels of gauge length. One can easily verify that X^5 and Y^5 approximately fit exponential distributions with λ̃1 = 0.25 and μ̃1 = 0.79, respectively. Consequently, our analysis is based on Y^5 and X^5 rather than Y and X. Hence, the single carbon fibers data set is considered as the hypothetical population, with R = μ̃1/(μ̃1 + λ̃1) = 76%, which is close to the value estimated by Bader and Priest (1982).
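As a quick sanity check, the reliability implied by these parameter estimates can be reproduced directly (treating the reported values as plain numbers; the variable names are ours):

```python
lam1 = 0.25  # exponential mean fitted to the transformed X data (20 mm gauge)
mu1 = 0.79   # exponential mean fitted to the transformed Y data (10 mm gauge)

# R = P(X < Y) = mu/(mu + lam) under the exponential stress-strength model
R = mu1 / (mu1 + lam1)
print(f"R = {R:.4f}")  # -> R = 0.7596, i.e. about 76%
```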

Table 4 The estimates of REs based on single carbon fibers data set

m1
3 4 5 6 7 8 9 10
REi (m,r) v 1 1 1 2 1 2 1 2 1 2 3 1 2 3 1 2 3
k=1
RE1 (3,5) 1.12 1.13 1.15 1.36 1.18 1.40 1.20 1.45 1.22 1.52 1.27 1.25 1.62 1.32 1.28 1.82 1.39
(3,10) 1.15 1.11 1.17 1.40 1.22 1.43 1.24 1.49 1.26 1.56 1.31 1.29 1.68 1.35 1.32 1.86 1.42
(5,5) 1.02 1.02 1.17 1.21 1.19 1.29 1.20 1.35 1.20 1.42 1.15 1.21 1.46 1.26 1.21 1.52 1.32
(5,10) 0.92 0.95 1.04 1.24 1.07 1.32 1.09 1.37 1.12 1.46 1.12 1.15 1.49 1.19 1.18 1.54 1.22
k=2
(5,5) 0.82 0.72 0.79 0.89 0.99 1.02 1.05 1.10 1.13 1.19 1.15 1.17 1.16 1.19 1.21 1.22 1.24
(5,10) 0.82 0.85 0.84 0.94 0.97 1.05 1.09 1.12 1.15 1.17 1.12 1.15 1.19 1.20 1.18 1.24 1.22
k=1
RE2 (3,5) 1.02 1.04 1.05 1.36 1.05 1.06 1.04 1.08 1.06 1.12 1.42 1.02 1.15 1.49 1.02 1.19 1.52
(3,10) 1.03 1.03 1.02 1.40 1.06 1.09 1.03 1.10 1.03 1.15 1.47 1.03 1.17 1.52 1.03 1.20 1.60
(5,5) 0.92 1.02 1.02 1.12 1.03 1.15 1.02 1.20 1.02 1.24 1.17 1.02 1.27 1.21 1.05 1.29 1.23
(5,10) 1.02 1.02 1.01 1.10 1.03 1.12 1.02 1.18 1.02 1.26 1.20 1.02 1.30 1.22 1.06 1.32 1.24
k=2
(5,5) 0.92 1.05 1.07 1.16 1.07 1.26 1.09 1.32 1.08 1.49 1.58 1.09 1.62 1.60 1.12 1.72 1.62
(5,10) 1.01 1.03 1.02 1.15 1.06 1.29 1.08 1.39 1.07 1.52 1.59 1.09 1.61 1.62 1.10 1.75 1.69
k=1
RE3 (3,5) 1.05 1.02 1.06 1.18 1.05 1.22 1.02 1.25 1.06 1.27 1.47 1.09 1.32 1.49 1.07 1.35 1.55
(3,10) 1.06 1.00 1.09 1.20 1.07 1.25 1.05 1.28 1.09 1.29 1.50 1.11 1.36 1.52 1.09 1.39 1.50
(5,5) 0.96 0.98 1.03 1.15 1.02 1.18 1.02 1.20 1.02 1.26 1.12 1.12 1.29 1.16 1.05 1.33 1.19
(5,10) 1.02 1.02 1.05 1.11 1.02 1.12 1.02 1.19 1.02 1.28 1.13 1.10 1.30 1.17 1.04 1.35 1.22
k=2
(5,5) 0.96 0.99 1.01 1.17 1.02 1.21 1.04 1.32 1.06 1.42 1.58 1.07 1.55 1.62 1.10 1.72 1.69
(5,10) 0.98 0.99 1.03 1.14 1.04 1.24 1.03 1.36 1.05 1.49 1.52 1.08 1.59 1.68 1.12 1.66 1.72

Table 5 The estimates of REs based on WALCS data set

m1
3 4 5 6 7 8 9 10
REi (m,r) v 1 1 1 2 1 2 1 2 1 2 3 1 2 3 1 2 3
k=1
RE1 (3,5) 1.23 1.24 1.27 1.50 1.37 1.62 1.39 1.68 1.42 1.76 1.47 1.50 1.94 1.58 1.53 2.18 1.66
(3,10) 1.27 1.22 1.29 1.54 1.42 1.66 1.44 1.73 1.46 1.81 1.52 1.54 2.01 1.61 1.58 2.22 1.70
(5,5) 1.12 1.12 1.29 1.33 1.38 1.50 1.39 1.57 1.39 1.65 1.33 1.45 1.75 1.59 1.45 1.82 1.58
(5,10) 1.01 1.03 1.14 1.36 1.24 1.53 1.26 1.59 1.30 1.69 1.30 1.38 1.78 1.42 1.41 1.84 1.46
k=2
(5,5) 0.90 0.79 0.87 0.98 1.15 1.18 1.22 1.25 1.31 1.38 1.33 1.40 1.39 1.42 1.45 1.46 1.48
(5,10) 0.90 0.94 0.92 1.03 1.13 1.22 1.26 1.30 1.33 1.36 1.30 1.38 1.42 1.44 1.41 1.48 1.46
k=1
RE2 (3,5) 1.12 1.14 1.16 1.50 1.22 1.23 1.21 1.25 1.23 1.30 1.65 1.22 1.38 1.78 1.22 1.42 1.82
(3,10) 1.13 1.13 1.12 1.54 1.23 1.26 1.19 1.28 1.19 1.33 1.71 1.20 1.40 1.82 1.23 1.44 1.91
(5,5) 1.01 1.12 1.12 1.23 1.19 1.33 1.18 1.39 1.18 1.44 1.36 1.22 1.52 1.45 1.26 1.54 1.47
(5,10) 1.12 1.12 1.11 1.21 1.19 1.30 1.18 1.37 1.18 1.46 1.39 1.22 1.55 1.46 1.27 1.58 1.48
k=2
(5,5) 1.01 1.16 1.18 1.28 1.24 1.46 1.26 1.53 1.25 1.73 1.83 1.30 1.94 1.91 1.34 2.06 1.94
(5,10) 1.11 1.13 1.12 1.27 1.23 1.50 1.25 1.61 1.24 1.76 1.84 1.30 1.93 1.94 1.32 2.09 2.02
k=1
RE3 (3,5) 1.16 1.12 1.17 1.30 1.22 1.42 1.18 1.45 1.23 1.47 1.71 1.30 1.58 1.78 1.28 1.61 1.85
(3,10) 1.17 1.10 1.20 1.32 1.24 1.45 1.22 1.48 1.26 1.50 1.74 1.33 1.63 1.82 1.30 1.66 1.79
(5,5) 1.06 1.08 1.13 1.27 1.18 1.37 1.18 1.39 1.18 1.46 1.30 1.34 1.54 1.39 1.26 1.59 1.42
(5,10) 1.12 1.12 1.16 1.22 1.18 1.30 1.18 1.38 1.18 1.48 1.31 1.32 1.55 1.40 1.24 1.61 1.46
k=2
(5,5) 1.06 1.09 1.11 1.29 1.18 1.40 1.21 1.53 1.23 1.65 1.83 1.28 1.85 1.94 1.32 2.06 2.02
(5,10) 1.08 1.09 1.13 1.25 1.21 1.44 1.19 1.58 1.22 1.73 1.76 1.29 1.90 2.01 1.34 1.99 2.06

On the other hand, the second data set is related to the medical field, is well known as the Veterans' Administration Lung Cancer Study (WALCS) data, and can be found in the survival package of the R statistical software. The data set includes 137 patients with lung cancer and eight variables. Here, we restrict our attention to the survival times of these patients, who are divided into a control group (standard therapy) and a treatment group (chemotherapy). Suppose that our target is to assess whether the survival time of the patients treated with chemotherapy (Y) is greater than that of those treated with the standard therapy (X). The exponentiality of Y and X was well investigated by Zamanzade (2019) and Abdallah et al. (2021), with λ̃2 = 0.78 and μ̃2 = 0.86. Similar to what was done with the single carbon fibers data set, WALCS is considered as the hypothetical population, with R = μ̃2/(μ̃2 + λ̃2) = 52%.

For the same values of m, r, m1, k and v as in Section 4, 5,000 samples with replacement are drawn separately using perfect LRSS and perfect VLRSS from the two aforementioned data sets; the three REs are then computed and reported in Tables 4 and 5. It appears that the REs mostly exceed unity, and the improvement grows as m1 or v increases, with the effect of v being the more pronounced. It is also noted that the VLRSS-based estimators perform better at small values of m than at large ones. All of these remarks are consistent with the findings of the preceding section.
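The resampling design can be sketched as follows. This is a simplified stand-in, not the study's exact code: it draws perfect standard ranked set samples with replacement from a synthetic finite "population" (the study itself applies LRSS and VLRSS to the fiber and WALCS measurements), and estimates the RE of the RSS-based MLE of R against its SRS analogue. All names and the synthetic population are ours.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic finite populations standing in for the empirical data sets.
pop_x = rng.exponential(0.78, size=300)  # stress-like measurements
pop_y = rng.exponential(0.86, size=300)  # strength-like measurements
R_pop = pop_y.mean() / (pop_x.mean() + pop_y.mean())  # "population" reliability

def rss_with_replacement(pop, m, r, rng):
    """r cycles of perfect RSS drawn with replacement from a finite
    population: the i-th set of m draws contributes its i-th smallest value."""
    vals = []
    for _ in range(r):
        for i in range(m):
            vals.append(np.sort(rng.choice(pop, size=m))[i])
    return np.asarray(vals)

def estimate_RE(m=3, r=5, reps=2000, rng=rng):
    """MSE(SRS)/MSE(RSS) for the MLE of R, resampling from the populations."""
    n, se_srs, se_rss = m * r, 0.0, 0.0
    for _ in range(reps):
        xs, ys = rng.choice(pop_x, n), rng.choice(pop_y, n)
        xr = rss_with_replacement(pop_x, m, r, rng)
        yr = rss_with_replacement(pop_y, m, r, rng)
        se_srs += (ys.mean() / (xs.mean() + ys.mean()) - R_pop) ** 2
        se_rss += (yr.mean() / (xr.mean() + yr.mean()) - R_pop) ** 2
    return se_srs / se_rss

print(estimate_RE())
```

With perfect ranking, the ratio exceeds one here as well, which is the qualitative behaviour reported in Tables 4 and 5.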

6 Conclusion

In this study, we have considered MLE and Bayesian estimation of the system reliability R = P(X < Y) using LRSS and VLRSS, where X and Y are independent one-parameter exponential random variables. The asymptotic properties of the MLEs under VLRSS are proved analytically. The Bayes estimators are obtained using gamma priors under the general entropy and LINEX loss functions. The numerical results, based on both simulated data and a practical application, show that the reliability estimates under VLRSS are more precise than their LRSS analogues provided that the quality of ranking is fairly good. Moreover, the efficiency of the VLRSS-based estimators can be increased by increasing m1 or v, or by decreasing the set size. In subsequent work, it may be of interest to study interval estimation of the system reliability under the VLRSS scheme. It is also important to examine the superiority of VLRSS under other parametric distributions (see, for instance, Ali et al. (2020)). Further, using upper record values based on VLRSS is an attractive direction for future work (see Yousaf et al. (2019)).

Acknowledgements

The author is thankful to the anonymous referees and the editor for their valuable comments and suggestions, which substantially improved the paper.

Disclosure Statement

No potential conflict of interest was reported by the author.

Funding

No funding was received to assist with the preparation of this manuscript.

References

[1] Abdallah, M. S. (2022): Estimation of the Population Distribution Function using Varied L ranked set sampling. RAIRO-Operations Research, 56, 955–977.

[2] Abdallah, M. S., Jangphanish, K. and Volodin, A. (2021). Estimation of System Reliability Based on Moving Extreme and MiniMax Ranked Set Sampling for Exponential Distributions. Lobachevskii Journal of Mathematics, 42(13), 3061–3076.

[3] Akgul, F., Acıtaş, S. and Şenoğlu, B. (2018). Inferences on stress–strength reliability based on ranked set sampling data in case of Lindley distribution. Journal of Statistical Computation and Simulation, 88(15), 3018–3032. DOI: 10.1080/00949655.2018.1498095.

[4] Akgul, F., Yu, K. and Senoglu, B. (2020). Estimation of the system reliability for generalized inverse Lindley distribution based on different sampling designs. Communications in Statistics – Theory and Methods. DOI: 10.1080/03610926.2019.1705977.

[5] Ali, S., Dey, S., Tahir, M. H. and Mansoor, M. (2020). Two-Parameter Logistic-Exponential Distribution: Some New Properties and Estimation Methods. American Journal of Mathematical and Management Sciences, 270–298.

[6] Al-Nasser, D. A. (2007). L ranked set sampling: a generalization procedure for robust visual sampling. Communications in Statistics – Simulation and Computation. 36. 33–44.

[7] Al-Omari, A. I. (2021): Maximum likelihood estimation in location-scale families using varied L ranked set sampling. RAIRO-Operations Research, 55, S2759–S2771.

[8] Al-Omari, A.I. (2015). The efficiency of L ranked set sampling in estimating the distribution function, Afrika Matematika. 26, 1457–1466.

[9] Bader, M. and Priest, A. (1982). Statistical aspects of fibre and bundle strength in hybrid composites. In: Progress in Science and Engineering of Composites (ICCM-IV, Tokyo), pp. 1129–1136.

[10] Birnbaum, Z. W. (1956). On a use of the Mann-Whitney statistic. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability. Contributions to the Theory of Statistics and Probability, vol. 1, 13–17. Berkeley, CA: University of California Press.

[11] Bouza, C.N. & Al-Omari, A.I. (2019). Ranked Set Sampling, 65 Years Improving the Accuracy in Data Gathering. Elsevier, ISBN: 978-0-12-815044-3.

[12] Dell, T. R. & Clutter, J. L. (1972). Ranked set sampling theory with order statistics background. Biometrics. 28. 545-555.

[13] Dong, X.F. & Zhang, L.Y. (2019). Estimation of system reliability for exponential distributions based on L ranked set sampling, Communications in Statistics – Theory and Methods, DOI: 10.1080/03610926.2019.1691735.

[14] Esemen, M. Gurler, S. and Sevinc, B. (2021). Estimation of Stress–Strength Reliability Based on Ranked Set Sampling for Generalized Exponential Distribution. International Journal of Reliability, Quality and Safety Engineering. 28(2). 1–24.

[15] Frey, J., and Zhang, Y. (2019). Improved exact confidence intervals for a proportion using ranked-set sampling. Journal of the Korean Statistical Society, 48, 493–501.

[16] Göçoğlu A. and Demirel, N. (2019) Estimating the population proportion in modified ranked set sampling methods, Journal of Statistical Computation and Simulation, 89:14, 2694–2710, DOI: 10.1080/00949655.2019.1631315.

[17] Haq, A., Brown, J., Moltchanova, E. and Al-Omari, A. I. (2015). Varied L ranked set sampling scheme. Journal of Statistical Theory and Practice, 9, 741–767.

[18] Hassan, A., Al-Omari, A. and Nagy, H. (2021). Stress–Strength Reliability for the Generalized Inverted Exponential Distribution Using MRSS. Iranian Journal of Science and Technology, Transactions A: Science, 45, 641–659.

[19] Kotz, S., Lumelskii, Y. and Pensky, M. (2003). The stress–strength model and its generalizations: theory and applications. Singapore: World Scientific.

[20] Mahdizadeh, M. and Zamanzade, E. (2016). Kernel-based estimation of P(x>y) in ranked set sampling. SORT 40(2). 243–266.

[21] Mahdizadeh, M. and Zamanzade, E. (2018a). A new reliability measure in ranked set sampling. Statistics Papers. 59:861–891. https://doi.org/10.1007/s00362-016-0794-3.

[22] Mahdizadeh, M. and Zamanzade, E. (2018b). Smooth estimation of a reliability function in ranked set sampling, Statistics. 52(4):750–768.

[23] Mahdizadeh, M. and Zamanzade, E. (2020). Smooth estimation of the area under the ROC curve in multistage ranked set sampling. Statistical Papers. https://doi.org/10.1007/s00362-019-01151-6

[24] McIntyre, G.A. (1952). A method for unbiased selective sampling using ranked set sampling. Australian Journal of Agricultural Research. 3, 385–390.

[25] Morabbi, H. and Razmkhah, M. (2020): Quantile estimation based on modified ranked set sampling schemes using Pitman closeness. Communications in Statistics – Simulation and Computation, DOI: 10.1080/03610918.2020.1811329.

[26] Singh, S., Singh, U. and Sharma, V. (2014). Bayesian estimation and prediction for the generalized Lindley distribution under asymmetric loss function. Hacettepe Journal of Mathematics and Statistics, 43(4), 661–678.

[27] Sinha V.C. Saxena J. K. & Gupta A. (2016). Business Mathematics. Springer, New York.

[28] Vexler, A. and Hutson, A. (2018). Statistics in the health sciences: Theory, Applications, and computing. CRC press.

[29] Yousaf, F., Ali, S. and Shah, I. (2019). Statistical Inference for the Chen Distribution Based on Upper Record Values. Annals of Data Science, 6(4), 831–851.

[30] Zamanzade, E. (2019). EDF-based tests of exponentiality in pair ranked set sampling. Statistical Papers. 60(6), 2141–2159.

[31] Zamanzade, E. Mahdizadeh, M. and Samawi, H. (2020). Efficient estimation of cumulative distribution function using moving extreme ranked set sampling with application to reliability. Statistical Papers. https://doi.org/10.1007/s10182-020-00368-3.

[32] Zamanzade, E., Asadi, M. and Parvardeh, A. (2022). A ranked-based estimator of the mean past lifetime with an application. Statistical Papers, 1–17.

Biography


Mohamed S. Abdallah received the bachelor's degree in statistics from Cairo University in 2006, the master's degree in statistics from Cairo University in 2010, and the Ph.D. degree in statistics from Cairo University in 2019. He is currently working as a Lecturer at the Department of Quantitative Techniques, Faculty of Commerce, Aswan University. His research interests include ranked set sampling. He has served as a reviewer for many highly respected journals.
