Classical and Bayesian Estimation of the Process Capability Index 𝒞py: A Comparative Study

Sumit Kumar

Department of Mathematics, Chandigarh University, Mohali, Punjab, India
E-mail: stats.sumitbhal@gmail.com

Received 04 September 2021; Accepted 09 February 2022; Publication 16 March 2022

Abstract

In this study, I estimate the process capability index 𝒞py when the process follows the Lindley, Xgamma, or Akash distribution, using five methods of estimation: maximum likelihood, least squares, weighted least squares, maximum product of spacings, and the Bayesian method. Bayesian estimation is carried out under a symmetric loss function with the help of the Metropolis-Hastings algorithm. Four bootstrap approaches and the Bayesian method are used to construct confidence intervals for the index 𝒞py. The performance of the various estimators is investigated on the basis of their respective MSEs/risks for point estimates of 𝒞py and average widths (𝒜𝒲s) for interval estimates. Monte Carlo simulations are conducted to assess the accuracy of the various approaches. The Bayes estimates are found to perform better than the considered classical estimates in terms of their corresponding risks. To illustrate the performance of the proposed methods, two real data sets are analyzed.

Keywords: Bootstrap confidence interval, process capability index, Lindley distribution, Xgamma distribution, Akash distribution.

1 Introduction

Effective management and evaluation of output service quality is a prominent topic in the manufacturing industry. The most generally used indices to judge processes are process capability indices (PCIs), which are particularly popular among industries for evaluating (manufacturing) processes since they are dimensionless, easy to read, and comprehensible. Despite their flaws, these indices are frequently employed in a range of industries, owing to the simplicity of a single-number summary and its appeal to engineers and management. The most commonly utilised PCIs are 𝒞p, 𝒞pk, 𝒞pmk, and 𝒞pm [see Juran (1974), Kane (1986), Chan et al. (1988), and Pearn et al. (1992)]. They are predicated on the assumption that a given process may be characterised by a normal probability model with process mean μ and standard deviation σ. However, in many industrial and service activities, the assumption of normality is a simplifying notion that is frequently inaccurate [see Gunter (1989)]. Maiti et al. (2010) obtained a generalized process capability index (GPCI) 𝒞py in their recent work. The index's attractiveness is that it is closely linked to the vast majority of PCIs defined in the literature. Furthermore, it accommodates both normal and non-normal random variables, as well as continuous and discrete random variables, and is described as follows:

\mathcal{C}_{py}=\frac{F(U)-F(L)}{F(\mathrm{UDL})-F(\mathrm{LDL})}=\frac{p}{p_{0}},

where F(t) = P(Z ≤ t) is the cumulative distribution function of the quality characteristic Z. The lower and upper specification limits are L and U, respectively, whereas p is the process yield and p0 is the desirable yield. LDL and UDL are the lower and upper desirable limits, respectively.
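Because 𝒞py depends only on the CDF of the quality characteristic and the specification/desirable limits, it can be evaluated directly once F is specified. The following minimal R sketch (illustrative only; the function name gpci_cpy and the example CDF are mine, not from the paper) computes the index for any supplied CDF:

# Illustrative sketch of the general definition Cpy = p / p0, where
# p = F(U) - F(L) is the process yield and p0 = F(UDL) - F(LDL) the desirable yield.
gpci_cpy <- function(cdf, L, U, LDL, UDL, ...) {
  p  <- cdf(U, ...) - cdf(L, ...)        # process yield
  p0 <- cdf(UDL, ...) - cdf(LDL, ...)    # desirable yield
  p / p0
}

# Example: a normally distributed quality characteristic with mean 10 and sd 2
gpci_cpy(pnorm, L = 4, U = 16, LDL = 5, UDL = 15, mean = 10, sd = 2)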

To draw inference about PCIs, quality control engineers generally use point and interval estimation. The point estimator is employed to assess the process performance, but, because of the variability in estimation, researchers also rely on the confidence interval (CI) [see Chan et al. (1988), Smithson (2001)]. Several techniques are available in the literature to construct CIs, such as the bootstrap technique. This technique is a re-sampling method and is free from distributional assumptions; it was introduced by Efron (1979). Franklin and Wasserman (1991) employed this technique for the construction of CIs of the PCI 𝒞pk. Tong and Chen (1998) likewise utilized bootstrap simulation methods to calculate lower confidence limits for the indices 𝒞p, 𝒞pk and 𝒞pm when the process distributions were non-normal. Many researchers have already used this approach for other PCIs [see, for reference, Pearn et al. (2014, 2016); Rao et al. (2016); Dey et al. (2021); Saha et al. (2018, 2020a, 2020b); Kumar (2021)].

PCIs are analyzed and studied from both the Bayesian and classical perspectives. Nevertheless, many statisticians prefer the use of the Bayesian approach over the classical approach. When the actual distribution is normally distributed, Saxena and Singh (2006) address the Bayesian estimation of the PCI 𝒞p. Credible intervals for several PCIs were determined by Ouyang et al. (2002) and Lin et al. (2011). One can find the advantages and justification of the Bayesian approach in the works of Chan et al. (1988), Cheng and Spiring (1989), and Shiau et al. (1999a,1999b). Besides, several authors have discussed Bayesian estimation of the PCIs for many lifetime distributions. Readers may refer to the works of Huiming et al. (2007), Miao et al. (2011), Pearn et al. (2015), Seifi and Nezhad (2017), Saha et al. (2019), Leiva et al. (2014), Perakis and Xekalaki (2002) among others.

The following are the three goals of this paper. First, I estimate 𝒞py using four classical estimation approaches and a Bayesian approach for the various models considered. To estimate the parameter of the various distributions, I have selected four traditional estimation methods: maximum likelihood estimation (MLE), least squares estimation (LSE), weighted least squares estimation (WLSE), and maximum product of spacings estimation (MPSE). Performance is measured not only in terms of mean squared error (MSE) for the classical estimators but also in terms of simulated risk for the Bayes estimators. The second goal is to compute four bootstrap confidence intervals (BCIs) of 𝒞py using the traditional techniques of estimation mentioned above: standard bootstrap (𝒮), percentile bootstrap (𝒫), student's t bootstrap (𝒮𝒯), and bias-corrected percentile bootstrap (𝒞𝒫). The estimated average widths (𝒜𝒲s) of the BCIs are used to highlight their performance. The final goal is to derive Bayes estimates of the PCI 𝒞py under a symmetric loss function using a gamma prior for the model parameter. The Metropolis-Hastings (M-H) method is used to calculate the Bayes estimates. Bayes credible intervals are then calculated and compared to the BCIs. To the best of my knowledge, no research has been conducted to investigate the PCI 𝒞py employing four BCIs based on the aforementioned classical and Bayesian estimation techniques for the considered distributions. The study's goal is to create a guideline for selecting the optimum way of estimating the index, which I believe would be of great relevance to applied statisticians and quality control engineers in situations where the item/subgroup quality characteristic follows the studied distributions.

The rest of the article is organized as follows. Section 2 defines the GPCI 𝒞py for the distributions under consideration; in addition, the various traditional estimation methods (MLE, LSE, WLSE, and MPSE) for the index 𝒞py are explained. Section 3 addresses the BCIs 𝒮, 𝒫, 𝒮𝒯 and 𝒞𝒫 based on the aforementioned estimation procedures for the GPCI 𝒞py. In Section 4, Bayesian estimates of the index 𝒞py are derived under the squared error loss function (SELF) together with the highest posterior density (HPD) credible interval. In Section 5, a Monte Carlo simulation experiment is undertaken to evaluate the performances of the aforementioned classical and Bayes estimators of the index 𝒞py in terms of their associated MSEs and risks. Section 6 analyzes two real-life data sets for illustrative purposes, and Section 7 presents the study's conclusions.

2 Estimation of Generalized Process Capability Index 𝒞py

Here, I derive the MLE, LSE, WLSE, and MPSE of the GPCI 𝒞py for three finite mixture distributions, viz., the LnD, XgD, and AkD, respectively.

2.1 Lindley Distribution

The LnD [see Lindley (1958), Ghitany et al. (2008)] belongs to the exponential family and can be written as a mixture of exponential and gamma distributions. Suppose Y is a random variable (RV) that follows the LnD(ψ). Then, its probability density function (PDF) and cumulative distribution function (CDF) are, respectively, given as

f(y;\psi)=\frac{\psi^{2}}{\psi+1}(1+y)e^{-\psi y};\quad y>0,\ \psi>0 (1)
F(y;\psi)=1-\left[1+\frac{\psi y}{1+\psi}\right]e^{-\psi y}. (2)

Now, GPCI 𝒞py, where the quality characteristic follows the LnD, is given as

\mathcal{C}_{py}=\frac{\left[\left(1+\frac{\psi L}{\psi+1}\right)e^{-\psi L}\right]-\left[\left(1+\frac{\psi U}{\psi+1}\right)e^{-\psi U}\right]}{p_{0}} (3)

Given a random sample (RS) Y1, Y2, …, Yn of size n, drawn from the LnD(ψ) given in Equation (1), the corresponding log-likelihood function \ell=\log L(\psi;y) is given as

\ell=2n\log\psi-n\log(\psi+1)+\sum_{i=1}^{n}\log(1+y_{i})-\psi\sum_{i=1}^{n}y_{i} (4)

By solving the following equation, we get the MLE of ψ, say, ψ̂_mle:

\frac{\partial\ell}{\partial\psi}=\frac{2n}{\psi}-\frac{n}{1+\psi}-\sum_{i=1}^{n}y_{i}=0.

Thus, the MLE of the parameter ψ is given by [see Ghitany et al. (2008)]

\hat{\psi}_{mle}=\frac{-(\bar{y}-1)+\sqrt{(\bar{y}-1)^{2}+8\bar{y}}}{2\bar{y}} (5)

The MLE of 𝒞py, denoted by 𝒞̂py_mle(LnD), can be obtained by applying the invariance property of the MLE, and is given as

\hat{\mathcal{C}}_{py}^{mle}(\text{LnD})=\frac{\left(1+\frac{L\hat{\psi}_{mle}}{1+\hat{\psi}_{mle}}\right)e^{-L\hat{\psi}_{mle}}-\left(1+\frac{U\hat{\psi}_{mle}}{1+\hat{\psi}_{mle}}\right)e^{-U\hat{\psi}_{mle}}}{p_{0}}. (6)
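A minimal R sketch of Equations (5) and (6) is given below (the function names are mine and the placeholder data are purely illustrative):

# Closed-form MLE of psi for the Lindley distribution, Equation (5),
# and the plug-in MLE of Cpy via the invariance property, Equation (6).
psi_mle_lnd <- function(y) {
  ybar <- mean(y)
  (-(ybar - 1) + sqrt((ybar - 1)^2 + 8 * ybar)) / (2 * ybar)
}

cpy_lnd <- function(psi, L, U, p0) {
  F_lnd <- function(t) 1 - (1 + psi * t / (1 + psi)) * exp(-psi * t)  # Lindley CDF, Equation (2)
  (F_lnd(U) - F_lnd(L)) / p0
}

set.seed(123)
y <- rexp(50, rate = 0.5)                 # placeholder positive-valued sample
cpy_lnd(psi_mle_lnd(y), L = 0.1, U = 6, p0 = 0.95)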

LSE and WLSE The LSE and WLSE were proposed by Swain et al. (1988) to estimate the parameters of the Beta distribution. Suppose F(y(j:n)) denotes the CDF of the ordered random variables y(1:n) < y(2:n) < … < y(n:n), where {y1:n, y2:n, …, yn:n} is a random sample of size n from a distribution function F(·). Then, the LSE of ψ, say ψ̂_lse, can be found by minimizing

L(\psi)=\sum_{i=1}^{n}\left[F(y_{i:n};\psi)-\frac{i}{n+1}\right]^{2}

with respect to ψ, where F(y;ψ) is the CDF of the distribution. Equivalently, it can also be obtained by solving the following non-linear equation

\sum_{i=1}^{n}\left[1-\left(1+\frac{\psi y_{i:n}}{\psi+1}\right)e^{-\psi y_{i:n}}-\frac{i}{n+1}\right]\Delta_{1}(y_{i:n};\psi)=0

where Δ1(y;ψ) is the first derivative of the CDF of the LnD with respect to ψ, given by

\Delta_{1}(y;\psi)=\frac{y\,e^{-\psi y}}{(\psi+1)^{2}}\left[\psi^{2}(y+1)+\psi(y+2)\right] (7)

Thus, the LSE of the GPCI under the LnD can be obtained by replacing ψ with ψ̂_lse in Equation (3) and is given as

\hat{\mathcal{C}}_{py}^{lse}=\frac{\left[\left(1+\frac{L\hat{\psi}_{lse}}{1+\hat{\psi}_{lse}}\right)e^{-L\hat{\psi}_{lse}}\right]-\left[\left(1+\frac{U\hat{\psi}_{lse}}{1+\hat{\psi}_{lse}}\right)e^{-U\hat{\psi}_{lse}}\right]}{p_{0}} (8)

Therefore, in this case, the WLSE of ψ, say ψ̂_wlse, can be obtained by minimizing

W(\psi)=\sum_{i=1}^{n}\frac{(n+1)^{2}(n+2)}{i(n-i+1)}\left[F(y_{i:n};\psi)-\frac{i}{n+1}\right]^{2}

with respect to ψ. The estimator can be obtained by differentiating W(ψ) with respect to ψ and equating the derivative to zero, which leads to

\sum_{i=1}^{n}\frac{(n+1)^{2}(n+2)}{i(n-i+1)}\left[1-\left(1+\frac{\psi y_{i:n}}{\psi+1}\right)e^{-\psi y_{i:n}}-\frac{i}{n+1}\right]\Delta_{1}(y_{i:n};\psi)=0

where Δ1(y;ψ) is given in Equation (7). Thus, the WLSE of the GPCI for the above-mentioned distribution is obtained by replacing ψ by ψ̂_wlse in Equation (3):

\hat{\mathcal{C}}_{py}^{wlse}=\frac{\left[\left(1+\frac{L\hat{\psi}_{wlse}}{1+\hat{\psi}_{wlse}}\right)e^{-L\hat{\psi}_{wlse}}\right]-\left[\left(1+\frac{U\hat{\psi}_{wlse}}{1+\hat{\psi}_{wlse}}\right)e^{-U\hat{\psi}_{wlse}}\right]}{p_{0}} (9)
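The LSE and WLSE above can equivalently be computed by minimizing the criteria L(ψ) and W(ψ) numerically; the R sketch below (my own helpers, with an arbitrary search interval) takes this route, which yields the same estimates as solving the corresponding estimating equations:

# LSE and WLSE of psi for the Lindley distribution by direct minimization
# of L(psi) and W(psi) over a search interval (assumed names and interval).
F_lnd <- function(y, psi) 1 - (1 + psi * y / (1 + psi)) * exp(-psi * y)

lse_lnd <- function(y) {
  y <- sort(y); n <- length(y); i <- seq_len(n)
  obj <- function(psi) sum((F_lnd(y, psi) - i / (n + 1))^2)
  optimize(obj, interval = c(1e-4, 50))$minimum
}

wlse_lnd <- function(y) {
  y <- sort(y); n <- length(y); i <- seq_len(n)
  w <- (n + 1)^2 * (n + 2) / (i * (n - i + 1))   # weights appearing in W(psi)
  obj <- function(psi) sum(w * (F_lnd(y, psi) - i / (n + 1))^2)
  optimize(obj, interval = c(1e-4, 50))$minimum
}
# Substituting lse_lnd(y) or wlse_lnd(y) into Equation (3) gives Equations (8) and (9).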

MPSE Cheng and Amin (1979) proposed the maximum product of spacings method as an alternative to the MLE for estimating the unknown parameters of continuous univariate distributions. Ranneby (1984) independently developed this method as an approximation to the Kullback-Leibler information measure. Cheng and Amin (1983) demonstrated that this method is as efficient as the MLE and is consistent under more general conditions, which motivated its use here. Let us begin by defining

D_{i}(\psi)=F(y_{i:n}\mid\psi)-F(y_{i-1:n}\mid\psi),\quad i=1,2,\ldots,n+1 (10)

where F(y_{0:n}|ψ) = 0 and F(y_{n+1:n}|ψ) = 1, so that D_{n+1}(ψ) = 1 − F(y_{n:n}|ψ). Clearly, \sum_{i=1}^{n+1}D_{i}(\psi)=1. The MPSE of the parameter ψ, say ψ̂_mpse, is obtained by maximizing the geometric mean of the spacings with respect to ψ, i.e.,

GM=\left[\prod_{i=1}^{n+1}D_{i}(\psi)\right]^{\frac{1}{n+1}}

or equivalently, by maximizing the function

H=\log GM=\frac{1}{n+1}\sum_{i=1}^{n+1}\log D_{i}(\psi)

with respect to ψ. The estimate of ψ is obtained by solving the non-linear equation

\frac{\partial H}{\partial\psi}=\frac{1}{n+1}\sum_{i=1}^{n+1}\frac{1}{D_{i}(\psi)}\frac{\partial D_{i}(\psi)}{\partial\psi}=0

where

\frac{\partial D_{i}(\psi)}{\partial\psi}=\frac{\partial F(y_{i:n}\mid\psi)}{\partial\psi}-\frac{\partial F(y_{i-1:n}\mid\psi)}{\partial\psi}.

Thus, the MPSE of the GPCI for the above-mentioned distribution is obtained by replacing ψ by ψ̂_mpse in Equation (3):

\hat{\mathcal{C}}_{py}^{mpse}=\frac{\left[\left(1+\frac{L\hat{\psi}_{mpse}}{1+\hat{\psi}_{mpse}}\right)e^{-L\hat{\psi}_{mpse}}\right]-\left[\left(1+\frac{U\hat{\psi}_{mpse}}{1+\hat{\psi}_{mpse}}\right)e^{-U\hat{\psi}_{mpse}}\right]}{p_{0}} (11)
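A short R sketch of the MPSE for the Lindley parameter is given below; it maximizes the mean log-spacing H(ψ) numerically over an arbitrary search interval rather than solving ∂H/∂ψ = 0, which is an implementation choice of mine:

# MPSE of psi for the Lindley distribution via the mean log-spacing criterion.
F_lnd <- function(y, psi) 1 - (1 + psi * y / (1 + psi)) * exp(-psi * y)

mpse_lnd <- function(y) {
  y <- sort(y); n <- length(y)
  H <- function(psi) {
    Fi <- c(0, F_lnd(y, psi), 1)               # F(y_{0:n}) = 0 and F(y_{n+1:n}) = 1
    D  <- pmax(diff(Fi), .Machine$double.eps)  # spacings D_i(psi), guarded against zeros
    mean(log(D))
  }
  optimize(H, interval = c(1e-4, 50), maximum = TRUE)$maximum
}
# The MPSE of Cpy, Equation (11), follows by substituting mpse_lnd(y) into Equation (3).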

2.2 Xgamma Distribution

The XgD is a probability distribution derived from a particular finite mixture of exponential and gamma distributions [see Sen et al. (2016)]. A continuous RV Y is said to follow the XgD if its PDF and CDF are of the form

f(y;\psi)=\frac{\psi^{2}}{1+\psi}\left(1+\frac{\psi y^{2}}{2}\right)e^{-\psi y};\quad y>0,\ \psi>0 (12)
F(y;\psi)=1-\left(1+\psi+\psi y+\frac{\psi^{2}y^{2}}{2}\right)\frac{e^{-\psi y}}{1+\psi};\quad y>0,\ \psi>0 (13)

Now GPCI 𝒞py, where the quality characteristic follows the XgD, is given as

\mathcal{C}_{py}=\frac{\left[\left(1+\psi+\psi L+\frac{\psi^{2}L^{2}}{2}\right)\frac{e^{-\psi L}}{1+\psi}\right]-\left[\left(1+\psi+\psi U+\frac{\psi^{2}U^{2}}{2}\right)\frac{e^{-\psi U}}{1+\psi}\right]}{p_{0}} (14)

Given a RS Y1,Y2,,Yn of size n, drawn from the XgD(ψ) given in Equation (12), the corresponding log-likelihood function is given as

\ell=2n\log\psi-n\log(1+\psi)+\sum_{i=1}^{n}\log\left(1+\frac{\psi y_{i}^{2}}{2}\right)-\psi\sum_{i=1}^{n}y_{i} (15)

By solving the ensuing equation, we will get the MLE of ψ, say, ψ^mle

\frac{2n}{\psi}-\frac{n}{1+\psi}+\sum_{i=1}^{n}\frac{y_{i}^{2}}{2\left(1+\frac{\psi y_{i}^{2}}{2}\right)}=\sum_{i=1}^{n}y_{i} (16)

The MLE ψ̂_mle of the unknown parameter ψ can be obtained by optimizing the log-likelihood function with respect to the parameter. For this purpose, one can use functions/packages such as nlm() and maxLik() in the R software [see Dennis and Schnabel (1983), Henningsen and Toomet (2010)]. Alternatively, the parameter can be obtained by solving the above non-linear Equation (16) with the help of an iterative procedure such as the quasi Newton-Raphson method. Hence, the MLE of the GPCI 𝒞py is obtained by using the invariance property of the MLE and is given as

\hat{\mathcal{C}}_{py}^{mle}(\text{XgD})=\frac{\left[\left(1+\hat{\psi}_{mle}+L\hat{\psi}_{mle}+\frac{L^{2}\hat{\psi}_{mle}^{2}}{2}\right)\frac{e^{-L\hat{\psi}_{mle}}}{1+\hat{\psi}_{mle}}\right]-\left[\left(1+\hat{\psi}_{mle}+U\hat{\psi}_{mle}+\frac{U^{2}\hat{\psi}_{mle}^{2}}{2}\right)\frac{e^{-U\hat{\psi}_{mle}}}{1+\hat{\psi}_{mle}}\right]}{p_{0}} (17)
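As noted above, the xgamma likelihood has no closed-form maximizer; the following R sketch obtains ψ̂_mle by minimizing the negative of the log-likelihood in Equation (15) with nlm() (the starting value 1/mean(y) and the placeholder data are my own choices):

# Numerical MLE of psi for the xgamma distribution via nlm().
negloglik_xgd <- function(psi, y) {
  if (psi <= 0) return(1e10)                 # keep the search inside psi > 0
  n <- length(y)
  -(2 * n * log(psi) - n * log(1 + psi) +
      sum(log(1 + psi * y^2 / 2)) - psi * sum(y))
}

y <- rexp(50, rate = 0.75)                   # placeholder positive-valued sample
psi_hat <- nlm(negloglik_xgd, p = 1 / mean(y), y = y)$estimate
# Substituting psi_hat into Equation (14) gives the MLE of Cpy in Equation (17).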

LSE and WLSE Now, using the theory of the LSE and WLSE given in Subsection 2.1, the LSE of ψ for the XgD, say ψ̂_lse, can be obtained by solving

\sum_{i=1}^{n}\left[1-\left(1+\psi+\psi y_{i:n}+\frac{\psi^{2}y_{i:n}^{2}}{2}\right)\frac{e^{-\psi y_{i:n}}}{1+\psi}-\frac{i}{n+1}\right]\Delta_{2}(y_{i:n};\psi)=0

where Δ2(y;ψ) is the first derivative of the CDF of the XgD with respect to ψ,

\Delta_{2}(y;\psi)=\frac{y\,e^{-\psi y}}{2(1+\psi)^{2}}\left[2(2+\psi)+\psi y(1+y+\psi y)\right] (18)

Thus, the LSE of the GPCI for the respective distribution can be obtained by replacing ψ with ψ̂_lse in Equation (14):

\hat{\mathcal{C}}_{py}^{lse}=\frac{\left[\left(1+\hat{\psi}_{lse}+L\hat{\psi}_{lse}+\frac{L^{2}\hat{\psi}_{lse}^{2}}{2}\right)\frac{e^{-L\hat{\psi}_{lse}}}{1+\hat{\psi}_{lse}}\right]-\left[\left(1+\hat{\psi}_{lse}+U\hat{\psi}_{lse}+\frac{U^{2}\hat{\psi}_{lse}^{2}}{2}\right)\frac{e^{-U\hat{\psi}_{lse}}}{1+\hat{\psi}_{lse}}\right]}{p_{0}} (19)

Similarly, for the XgD the WLSE of ψ, say ψ̂_wlse, can be obtained by solving the following expression

\sum_{i=1}^{n}\frac{(n+1)^{2}(n+2)}{i(n-i+1)}\left[1-\left(1+\psi+\psi y_{i:n}+\frac{\psi^{2}y_{i:n}^{2}}{2}\right)\frac{e^{-\psi y_{i:n}}}{1+\psi}-\frac{i}{n+1}\right]\Delta_{2}(y_{i:n};\psi)=0

where Δ2(y;ψ) is given in Equation (18). Thus, the WLSE of the GPCI for the above-mentioned distribution can be obtained by replacing ψ with ψ̂_wlse in Equation (14):

\hat{\mathcal{C}}_{py}^{wlse}=\frac{\left[\left(1+\hat{\psi}_{wlse}+L\hat{\psi}_{wlse}+\frac{L^{2}\hat{\psi}_{wlse}^{2}}{2}\right)\frac{e^{-L\hat{\psi}_{wlse}}}{1+\hat{\psi}_{wlse}}\right]-\left[\left(1+\hat{\psi}_{wlse}+U\hat{\psi}_{wlse}+\frac{U^{2}\hat{\psi}_{wlse}^{2}}{2}\right)\frac{e^{-U\hat{\psi}_{wlse}}}{1+\hat{\psi}_{wlse}}\right]}{p_{0}} (20)

MPSE Similarly, using the theory of the MPSE given in Subsection 2.1, the MPSE of the GPCI for the XgD can be obtained by replacing ψ with ψ̂_mpse in Equation (14):

\hat{\mathcal{C}}_{py}^{mpse}=\frac{\left[\left(1+\hat{\psi}_{mpse}+L\hat{\psi}_{mpse}+\frac{L^{2}\hat{\psi}_{mpse}^{2}}{2}\right)\frac{e^{-L\hat{\psi}_{mpse}}}{1+\hat{\psi}_{mpse}}\right]-\left[\left(1+\hat{\psi}_{mpse}+U\hat{\psi}_{mpse}+\frac{U^{2}\hat{\psi}_{mpse}^{2}}{2}\right)\frac{e^{-U\hat{\psi}_{mpse}}}{1+\hat{\psi}_{mpse}}\right]}{p_{0}} (21)

2.3 Akash distribution

The AkD [see Shanker (2015)] is a probability distribution derived from a particular finite mixture of exponential and gamma distributions. The PDF of this one-parameter lifetime distribution can be written as follows:

f(y;\psi)=\frac{\psi^{3}}{\psi^{2}+2}(1+y^{2})e^{-\psi y};\quad y>0,\ \psi>0 (22)

and, the corresponding CDF is given by

F(y;\psi)=1-\left[1+\frac{\psi y(\psi y+2)}{\psi^{2}+2}\right]e^{-\psi y};\quad y>0,\ \psi>0 (23)

Now GPCI 𝒞py, where the quality characteristic follows the AkD, is given as

\mathcal{C}_{py}=\frac{\left[1+\frac{\psi L(\psi L+2)}{\psi^{2}+2}\right]e^{-\psi L}-\left[1+\frac{\psi U(\psi U+2)}{\psi^{2}+2}\right]e^{-\psi U}}{p_{0}} (24)

Given a RS Y1,Y2,,Yn of size n, drawn from the AkD(ψ) given in Equation (22), the corresponding log-likelihood function is given as

\ell=3n\log\psi-n\log(\psi^{2}+2)+\sum_{i=1}^{n}\log(1+y_{i}^{2})-\psi\sum_{i=1}^{n}y_{i} (25)

The MLE of ψ, say, ψ^mle can be obtained as the solution of the following equation

\bar{y}\psi^{3}-\psi^{2}+2\bar{y}\psi-6=0

Again, to obtain the MLE ψ̂_mle of the unknown parameter ψ, one can use the techniques mentioned above. After obtaining the MLE of the parameter ψ, the MLE of 𝒞py, denoted by 𝒞̂py_mle(AkD), can be obtained by applying the invariance property of the MLE and is given as

\hat{\mathcal{C}}_{py}^{mle}(\text{AkD})=\frac{\left[1+\frac{L\hat{\psi}_{mle}(L\hat{\psi}_{mle}+2)}{\hat{\psi}_{mle}^{2}+2}\right]e^{-L\hat{\psi}_{mle}}-\left[1+\frac{U\hat{\psi}_{mle}(U\hat{\psi}_{mle}+2)}{\hat{\psi}_{mle}^{2}+2}\right]e^{-U\hat{\psi}_{mle}}}{p_{0}} (26)
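Since the MLE of ψ for the AkD is the positive root of the cubic ȳψ³ − ψ² + 2ȳψ − 6 = 0 shown above, it can be found with a simple root finder; the R sketch below uses uniroot() over a wide positive interval (an assumption of mine that the interval brackets the root for the data at hand):

# MLE of psi for the Akash distribution as the positive root of the cubic above.
psi_mle_akd <- function(y) {
  ybar <- mean(y)
  g <- function(psi) ybar * psi^3 - psi^2 + 2 * ybar * psi - 6
  uniroot(g, interval = c(1e-6, 100))$root   # interval assumed wide enough to bracket the root
}
# The MLE of Cpy under the AkD, Equation (26), follows by substituting
# psi_mle_akd(y) into Equation (24).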

LSE and WLSE Similarly, using the theory of the LSE and WLSE given in Subsection 2.1, the LSE for the AkD can be obtained by solving the following non-linear equation

\sum_{i=1}^{n}\left[1-\left(1+\frac{\psi y_{i:n}(\psi y_{i:n}+2)}{\psi^{2}+2}\right)e^{-\psi y_{i:n}}-\frac{i}{n+1}\right]\Delta_{3}(y_{i:n};\psi)=0

where Δ3(y;ψ) is the first derivative of the CDF of the AkD with respect to ψ,

\Delta_{3}(y;\psi)=\frac{\psi y\,e^{-\psi y}}{(\psi^{2}+2)^{2}}\left[\psi^{3}(1+y^{2})+2\psi(\psi y+y^{2}+3)\right] (27)

Thus, the LSE of the GPCI under the AkD can be obtained by replacing ψ with ψ̂_lse in Equation (24) and is given as

\hat{\mathcal{C}}_{py}^{lse}=\frac{\left[1+\frac{L\hat{\psi}_{lse}(L\hat{\psi}_{lse}+2)}{\hat{\psi}_{lse}^{2}+2}\right]e^{-L\hat{\psi}_{lse}}-\left[1+\frac{U\hat{\psi}_{lse}(U\hat{\psi}_{lse}+2)}{\hat{\psi}_{lse}^{2}+2}\right]e^{-U\hat{\psi}_{lse}}}{p_{0}} (28)

Similarly, for the AkD the WLSE of ψ, say ψ̂_wlse, can be obtained by solving the following expression

\sum_{i=1}^{n}\frac{(n+1)^{2}(n+2)}{i(n-i+1)}\left[1-\left(1+\frac{\psi y_{i:n}(\psi y_{i:n}+2)}{\psi^{2}+2}\right)e^{-\psi y_{i:n}}-\frac{i}{n+1}\right]\Delta_{3}(y_{i:n};\psi)=0

where Δ3(y;ψ) is given in Equation (27). Thus, the WLSE of the GPCI of the AkD can be obtained by replacing ψ by ψ̂_wlse and is given as

\hat{\mathcal{C}}_{py}^{wlse}=\frac{\left[1+\frac{L\hat{\psi}_{wlse}(L\hat{\psi}_{wlse}+2)}{\hat{\psi}_{wlse}^{2}+2}\right]e^{-L\hat{\psi}_{wlse}}-\left[1+\frac{U\hat{\psi}_{wlse}(U\hat{\psi}_{wlse}+2)}{\hat{\psi}_{wlse}^{2}+2}\right]e^{-U\hat{\psi}_{wlse}}}{p_{0}} (29)

MPSE Similarly, from Subsection 2.1, the MPSE of the GPCI for the AkD can be obtained by replacing ψ with ψ̂_mpse in Equation (24):

\hat{\mathcal{C}}_{py}^{mpse}=\frac{\left[1+\frac{L\hat{\psi}_{mpse}(L\hat{\psi}_{mpse}+2)}{\hat{\psi}_{mpse}^{2}+2}\right]e^{-L\hat{\psi}_{mpse}}-\left[1+\frac{U\hat{\psi}_{mpse}(U\hat{\psi}_{mpse}+2)}{\hat{\psi}_{mpse}^{2}+2}\right]e^{-U\hat{\psi}_{mpse}}}{p_{0}} (30)

3 Bootstrap Confidence Interval

The bootstrap re-sampling approach was introduced by Efron (1979). Using a simple re-sampling procedure, inferential statistics related to the underlying distribution can be constructed. Efron (1982), Hall (2013), and Davison and Hinkley (1997) provide in-depth treatments of the theoretical development of the bootstrap approach. BCIs have recently been utilised by numerous researchers to create confidence intervals for various PCIs [see, for example, Chatterjee and Qiu (2009); Li et al. (2016); Rao et al. (2016); Kumar et al. (2019, 2021); Kumar and Saha (2020)].

Here, I obtain four BCIs, namely 𝒮, 𝒫, 𝒮𝒯 and 𝒞𝒫, for calculating CIs of the GPCI 𝒞py. Let Y1, Y2, …, Yn be a random sample of size n drawn from the distribution under consideration with parameter ψ. ALGORITHM:

Step 1: From the given random sample of size n, compute the MLE ψ̂ of ψ. A bootstrap sample of size n, denoted by Y1*, Y2*, …, Yn*, is obtained from the original sample by placing mass 1/n at each point.

Step 2: Compute the MLE ψ̂* of ψ and the corresponding estimate 𝒞̂py* of 𝒞py. The m-th bootstrap estimate of 𝒞py is computed as 𝒞̂py*(m) = 𝒞̂py(Y1*, Y2*, …, Yn*).

Step 3: There are n^n possible re-samples in total; from these, B re-samples are drawn and B values of 𝒞̂py* are calculated, each being an estimate of 𝒞py. Arranging the entire collection {𝒞̂py*(j); j = 1, 2, …, B} in ascending order yields the empirical bootstrap distribution 𝒞̂py*(1) ≤ 𝒞̂py*(2) ≤ ⋯ ≤ 𝒞̂py*(B).

In this study, B = 1000 bootstrap samples are considered.
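A compact R sketch of Steps 1-3 for Lindley-distributed data is given below; the helper names are mine, and the closed-form MLE of Equation (5) is reused for each bootstrap re-sample:

# Nonparametric bootstrap replicates of Cpy for the Lindley model (Steps 1-3).
psi_mle_lnd <- function(y) {
  ybar <- mean(y)
  (-(ybar - 1) + sqrt((ybar - 1)^2 + 8 * ybar)) / (2 * ybar)
}
cpy_lnd <- function(psi, L, U, p0) {
  F_lnd <- function(t) 1 - (1 + psi * t / (1 + psi)) * exp(-psi * t)
  (F_lnd(U) - F_lnd(L)) / p0
}
bootstrap_cpy <- function(y, B = 1000, L = 0.1, U = 6, p0 = 0.95) {
  replicate(B, {
    ystar <- sample(y, size = length(y), replace = TRUE)  # mass 1/n at each observed point
    cpy_lnd(psi_mle_lnd(ystar), L, U, p0)
  })
}
# sort(bootstrap_cpy(y)) is the empirical bootstrap distribution used in the intervals below.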

Standard Bootstrap (𝒮) Confidence Interval

Let 𝒞^¯py* and Se* be the sample mean and sample standard deviation of {𝒞^py*(j);j=1,2,,B}, i.e.,

\bar{\hat{\mathcal{C}}}_{py}^{*}=\frac{1}{B}\sum_{j=1}^{B}\hat{\mathcal{C}}_{py}^{*(j)}

and

Se^{*}=\sqrt{\frac{1}{B-1}\sum_{j=1}^{B}\left(\hat{\mathcal{C}}_{py}^{*(j)}-\bar{\hat{\mathcal{C}}}_{py}^{*}\right)^{2}},

respectively. A 100(1-α)% 𝒮 CI of the index 𝒞py is given by

\left\{\hat{\mathcal{C}}_{py}^{*}-z_{\alpha/2}\,Se^{*},\ \hat{\mathcal{C}}_{py}^{*}+z_{\alpha/2}\,Se^{*}\right\}.

Percentile Bootstrap (𝒫) Confidence Interval

Let 𝒞^py*(τ) be the τ percentile of {𝒞^py*(j);j=1,2,,B}, i.e., 𝒞^py*(τ) is such that

\frac{1}{B}\sum_{j=1}^{B}I\left(\hat{\mathcal{C}}_{py}^{*(j)}\le\hat{\mathcal{C}}_{py}^{*(\tau)}\right)=\tau;\quad 0<\tau<1,

where, I() is the indicator function. A 100(1-α)% 𝒫 CI of the index 𝒞py is given by

\left\{\hat{\mathcal{C}}_{py}^{*\,(B\cdot\alpha/2)},\ \hat{\mathcal{C}}_{py}^{*\,(B\cdot(1-\alpha/2))}\right\},

where, 𝒞^py*(r) is the r-th ordered value on the list of the B bootstrap estimators of 𝒞py.

Student’s t Bootstrap (𝒮𝒯) Confidence Interval

Let S* be the sample standard deviation of {𝒞^py*(j);j=1,2,,B}, i.e.,

S^{*}=\sqrt{\frac{1}{B}\sum_{j=1}^{B}\left(\hat{\mathcal{C}}_{py}^{*(j)}-\bar{\hat{\mathcal{C}}}_{py}^{*}\right)^{2}},

where,

\bar{\hat{\mathcal{C}}}_{py}^{*}=\frac{1}{B}\sum_{j=1}^{B}\hat{\mathcal{C}}_{py}^{*(j)}.

Also, let \hat{t}^{*(\tau)} be the τ-th percentile of \left\{\frac{\hat{\mathcal{C}}_{py}^{*(j)}-\hat{\mathcal{C}}_{py}}{S^{*}};\ j=1,2,\ldots,B\right\}, i.e., \hat{t}^{*(\tau)} is such that

\frac{1}{B}\sum_{j=1}^{B}I\left(\frac{\hat{\mathcal{C}}_{py}^{*(j)}-\hat{\mathcal{C}}_{py}}{S^{*}}\le\hat{t}^{*(\tau)}\right)=\tau;\quad 0<\tau<1,

where, I() is the indicator function. A 100(1-α)% 𝒮𝒯 CI of the index 𝒞py is given by

\left\{\bar{\hat{\mathcal{C}}}_{py}^{*}-\hat{t}^{*(\alpha/2)}S^{*},\ \bar{\hat{\mathcal{C}}}_{py}^{*}+\hat{t}^{*(\alpha/2)}S^{*}\right\}.

Bias-corrected Percentile Bootstrap (𝒞𝒫) Confidence Interval

This approach has been introduced to correct for potential bias. The first step is to locate the observed 𝒞̂py in the bootstrap order statistics 𝒞̂py*(1) ≤ 𝒞̂py*(2) ≤ ⋯ ≤ 𝒞̂py*(B). Using the ordered bootstrap distribution {𝒞̂py*(j); j = 1, 2, …, B}, compute the probability

P_{0}=\frac{1}{B}\sum_{j=1}^{B}I\left(\hat{\mathcal{C}}_{py}^{*(j)}\le\hat{\mathcal{C}}_{py}\right),

where I(·) is the indicator function. Then, calculate z0 = Φ⁻¹(P0), where Φ(·) is the standard normal CDF; this value is used to calculate the probabilities Pl and Pu, defined as

P_{l}=\Phi\left(2z_{0}-z_{\alpha/2}\right)\quad\text{and}\quad P_{u}=\Phi\left(2z_{0}+z_{\alpha/2}\right).

A 100(1-α)% 𝒞𝒫 CI of 𝒞py is given by

\left(\hat{\mathcal{C}}_{py}^{*\,(B\cdot P_{l})},\ \hat{\mathcal{C}}_{py}^{*\,(B\cdot P_{u})}\right),

where 𝒞^py*(r) is the r-th ordered value on the list of the B bootstrap estimators of 𝒞py.
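For completeness, the four intervals can be computed from the bootstrap replicates as in the R sketch below (my own helper; the Student's t interval follows the conventional bootstrap-t construction, which uses the two tail percentiles of the studentized replicates rather than a single one):

# S, P, ST and BCP intervals for Cpy from a vector 'boot' of B bootstrap
# estimates and the original point estimate 'cpy_hat'.
bcis_cpy <- function(boot, cpy_hat, alpha = 0.05) {
  m  <- mean(boot)
  se <- sd(boot)                                       # Se*, with divisor (B - 1)
  z  <- qnorm(1 - alpha / 2)
  s_ci <- c(m - z * se, m + z * se)                    # standard bootstrap

  p_ci <- quantile(boot, c(alpha / 2, 1 - alpha / 2))  # percentile bootstrap

  sstar <- sqrt(mean((boot - m)^2))                    # S*, with divisor B
  tq <- quantile((boot - cpy_hat) / sstar, c(alpha / 2, 1 - alpha / 2))
  st_ci <- c(m - tq[2] * sstar, m - tq[1] * sstar)     # student's t bootstrap

  z0 <- qnorm(mean(boot <= cpy_hat))                   # bias-correction constant
  pl <- pnorm(2 * z0 - z); pu <- pnorm(2 * z0 + z)
  bcp_ci <- quantile(boot, c(pl, pu))                  # bias-corrected percentile

  list(S = s_ci, P = p_ci, ST = st_ci, BCP = bcp_ci)
}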

4 Bayesian Estimation

The Bayesian estimation of the index 𝒞py is presented in this section. Bayesian analysis is a logical technique for combining observed and prior information. Prior distributions are crucial in the development of the Bayes estimator(s), and there is no simple approach for selecting a prior for a specific situation; more information can be found in Arnold and Press (1983). In light of the foregoing arguments, Bayesian estimation is carried out under the assumption that the parameter has a gamma prior, i.e., ψ ∼ Gamma(a, b). Because the gamma distribution is versatile and can take on a variety of shapes depending on its parameter values, it is a good candidate prior for the model parameter; more information can be found in Kundu and Pradhan (2009). Thus, the prior distribution of ψ is

\pi(\psi)=\frac{b^{a}}{\Gamma(a)}\psi^{a-1}e^{-b\psi};\quad\psi>0, (31)

where a and b are the hyper-parameters and are assumed to be known. The posterior distributions of ψ under the LnD, XgD, and AkD are given in Equations (32), (33), and (34), respectively.

P_{1}(\psi\mid y)=K_{1}^{-1}\left(\frac{\psi^{2}}{1+\psi}\right)^{n}\psi^{a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}\prod_{i=1}^{n}(1+y_{i}) (32)
=K_{1}^{-1}\,\psi^{2n+a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}(1+\psi)^{-n}\prod_{i=1}^{n}(1+y_{i}),

where

K_{1}=\int_{0}^{\infty}\left(\frac{\psi^{2}}{1+\psi}\right)^{n}\psi^{a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}\prod_{i=1}^{n}(1+y_{i})\,d\psi

is the normalizing constant for LnD.

P_{2}(\psi\mid y)=K_{2}^{-1}\left(\frac{\psi^{2}}{1+\psi}\right)^{n}\psi^{a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}\prod_{i=1}^{n}\left(1+\frac{\psi y_{i}^{2}}{2}\right) (33)
=K_{2}^{-1}\,\psi^{2n+a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}(1+\psi)^{-n}\prod_{i=1}^{n}\left(1+\frac{\psi y_{i}^{2}}{2}\right),

where

K_{2}=\int_{0}^{\infty}\left(\frac{\psi^{2}}{1+\psi}\right)^{n}\psi^{a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}\prod_{i=1}^{n}\left(1+\frac{\psi y_{i}^{2}}{2}\right)d\psi

is the normalizing constant for XgD.

P_{3}(\psi\mid y)=K_{3}^{-1}\left(\frac{\psi^{3}}{\psi^{2}+2}\right)^{n}\psi^{a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}\prod_{i=1}^{n}(1+y_{i}^{2}) (34)
=K_{3}^{-1}\,\psi^{3n+a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}(\psi^{2}+2)^{-n}\prod_{i=1}^{n}(1+y_{i}^{2}),

where

K_{3}=\int_{0}^{\infty}\left(\frac{\psi^{3}}{\psi^{2}+2}\right)^{n}\psi^{a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}\prod_{i=1}^{n}(1+y_{i}^{2})\,d\psi

is the normalizing constant for the AkD. The SELF is used to obtain the Bayes estimates of 𝒞py. The expression of the loss function, the corresponding Bayes estimator, and the posterior risk are provided in Table 1, where d is an estimate of the parameter ψ.

Table 1 Bayes estimate under SELF and corresponding posterior risk

Loss function Bayes estimator Posterior risk
SELF: L(ψ, d) = (ψ − d)²   E(ψ ∣ y)   Var(ψ ∣ y)

Notice that if the posterior distribution of 𝒞py can be obtained, then the Bayes estimate of 𝒞py follows easily; however, the evaluation of the posterior distribution of 𝒞py is quite tedious. Therefore, the Bayes estimates of 𝒞py under SELF for known U and L under the LnD, XgD, and AkD can be obtained from Equations (35), (36) and (37), respectively.

\hat{\mathcal{C}}_{py}^{LnD}=E(\mathcal{C}_{py}\mid y)=\int_{0}^{\infty}\mathcal{C}_{py}\,P_{1}(\psi\mid y)\,d\psi (35)
=K_{1}^{-1}\int_{0}^{\infty}\psi^{2n+a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}(1+\psi)^{-n}\prod_{i=1}^{n}(1+y_{i})
\times\frac{1}{p_{0}}\left[\frac{1+\psi+\psi L}{1+\psi}e^{-\psi L}-\frac{1+\psi+\psi U}{1+\psi}e^{-\psi U}\right]d\psi
\hat{\mathcal{C}}_{py}^{XgD}=E(\mathcal{C}_{py}\mid y)=\int_{0}^{\infty}\mathcal{C}_{py}\,P_{2}(\psi\mid y)\,d\psi (36)
=K_{2}^{-1}\int_{0}^{\infty}\psi^{2n+a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}(1+\psi)^{-n}\prod_{i=1}^{n}\left(1+\frac{\psi y_{i}^{2}}{2}\right)
\times\frac{1}{p_{0}}\left[\frac{1+\psi+\psi L+\frac{\psi^{2}L^{2}}{2}}{1+\psi}e^{-\psi L}-\frac{1+\psi+\psi U+\frac{\psi^{2}U^{2}}{2}}{1+\psi}e^{-\psi U}\right]d\psi
\hat{\mathcal{C}}_{py}^{AkD}=E(\mathcal{C}_{py}\mid y)=\int_{0}^{\infty}\mathcal{C}_{py}\,P_{3}(\psi\mid y)\,d\psi (37)
=K_{3}^{-1}\int_{0}^{\infty}\psi^{3n+a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}(\psi^{2}+2)^{-n}\prod_{i=1}^{n}(1+y_{i}^{2})
\times\frac{1}{p_{0}}\left[\left(1+\frac{\psi L(\psi L+2)}{\psi^{2}+2}\right)e^{-\psi L}-\left(1+\frac{\psi U(\psi U+2)}{\psi^{2}+2}\right)e^{-\psi U}\right]d\psi

Equations (35), (36) and (37) do not yield any closed form because of the integrals involved in the numerator and in the normalizing constant. Hence, an analytical solution is not possible, and one may use a Bayesian computation technique to obtain the solution. Here, the M-H algorithm, which is frequently used to approximate posterior expectations, is employed. A detailed description of this approximation is given below.

Metropolis-Hastings Algorithm

Here, we consider the algorithm suggested by Metropolis and Hastings to compute the Bayes estimate as well as the credible interval of the index based on generated posterior samples. In this algorithm, samples are generated from the full conditional posterior density using an appropriate proposal distribution, and the generated values are retained using the acceptance-rejection principle. For more details about this algorithm, the reader may refer to the articles by Metropolis et al. (1953), Smith and Roberts (1993), and many others. To implement the M-H algorithm, the full conditional density of ψ under the LnD can be written as

P_{1}(\psi\mid y)\propto\psi^{2n+a-1}e^{-\psi\left(b+\sum_{i=1}^{n}y_{i}\right)}(1+\psi)^{-n}\prod_{i=1}^{n}(1+y_{i}) (38)

The following algorithm may be used to draw samples from P1(ψ ∣ y).

1. Set the initial guess value {ψ(0)}.

2. Begin with r=1.

3. Generate a new value of ψ from the conditional posterior density P1(ψ ∣ y) by choosing any arbitrary proposal distribution, using the previous value ψ^(r−1) in the proposal and the acceptance-rejection step, to obtain ψ^(r).

4. Repeat steps 2-3 for r = 1, 2, 3, …, K (= 10000) and obtain a posterior sample of size K for the parameter ψ.

5. Using the sequence obtained in step 4, obtain the sequence 𝒞py^(r). After obtaining the posterior samples, the Bayes estimate of 𝒞py under SELF is obtained as

\hat{\mathcal{C}}_{py}^{LnD}=E(\mathcal{C}_{py}\mid y)\approx\frac{1}{K-K_{0}}\sum_{r=K_{0}+1}^{K}\mathcal{C}_{py}^{(r)} (39)

Similarly, the Bayes estimates of 𝒞py under SELF for the XgD and AkD are obtained, respectively, as

\hat{\mathcal{C}}_{py}^{XgD}=E(\mathcal{C}_{py}\mid y)\approx\frac{1}{K-K_{0}}\sum_{r=K_{0}+1}^{K}\mathcal{C}_{py}^{(r)} (40)
\hat{\mathcal{C}}_{py}^{AkD}=E(\mathcal{C}_{py}\mid y)\approx\frac{1}{K-K_{0}}\sum_{r=K_{0}+1}^{K}\mathcal{C}_{py}^{(r)} (41)

where K0 = 500 is the burn-in period of the Markov chain.

6. Chen and Shao (1999) suggested an algorithm by which the 100(1−α)% HPD credible interval for the index 𝒞py under the considered models can be obtained.
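An illustrative R sketch of the above steps for the Lindley model is given below; the random-walk proposal on log ψ, the tuning constant, and the hyper-parameter defaults are my own choices, since the algorithm only requires an arbitrary proposal distribution:

# M-H sampling from the Lindley posterior, Equation (38), and the Bayes
# estimate of Cpy under SELF, Equation (39).
log_post_lnd <- function(psi, y, a, b) {            # log of Equation (38), up to a constant
  if (psi <= 0) return(-Inf)
  (2 * length(y) + a - 1) * log(psi) - psi * (b + sum(y)) - length(y) * log(1 + psi)
}

mh_cpy_lnd <- function(y, a = 1, b = 1, K = 10000, K0 = 500,
                       L = 0.1, U = 6, p0 = 0.95, step = 0.2) {
  cpy <- function(psi) {
    Fl <- function(t) 1 - (1 + psi * t / (1 + psi)) * exp(-psi * t)
    (Fl(U) - Fl(L)) / p0
  }
  psi <- 1 / mean(y)                                # initial guess psi^(0)
  draws <- numeric(K)
  for (r in 1:K) {
    prop <- psi * exp(rnorm(1, 0, step))            # random-walk proposal on the log scale
    logacc <- log_post_lnd(prop, y, a, b) - log_post_lnd(psi, y, a, b) +
              log(prop) - log(psi)                  # Jacobian correction for the log-scale move
    if (log(runif(1)) < logacc) psi <- prop         # accept/reject
    draws[r] <- cpy(psi)
  }
  mean(draws[(K0 + 1):K])                           # Bayes estimate of Cpy under SELF
}
# The HPD interval of Cpy can be obtained from the retained draws following Chen and Shao (1999).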

5 Simulation and Discussions

Here, we have carried out a Monte Carlo simulation study to assess the performance of the GPCI 𝒞py under the considered models (LnD, XgD, AkD) using the classical methods (MLE, LSE, WLSE, MPSE) and the Bayesian method of estimation. The classical estimators' performances are evaluated in terms of MSEs, whereas the Bayes estimators are evaluated in terms of simulated risk. Besides, we have constructed BCIs (𝒮, 𝒫, 𝒮𝒯, 𝒞𝒫) for the classical methods of estimation and HPD credible intervals for the Bayesian method. The performances of the different CIs (BCIs and HPD) are assessed based on their estimated 𝒜𝒲s. The 𝒜𝒲 is the ratio of the sum of the differences between the upper and lower confidence limits to the number of trials K, and a lower 𝒜𝒲 indicates better performance. We consider the sample sizes n = 10, 20, 30, 50 and 100 and the parameter values ψ = 0.5, 0.75, 1.0, 1.25, with (L, U) = (0.1, 6) and p0 = 0.95. For each design, samples of size n are generated and the experiment is replicated 3,000 times. For the Bayesian computation, we have considered the hyper-parameter values of the informative prior for comparing the Bayes estimates under the considered models; the hyper-parameter values are chosen arbitrarily as (a, b) = (0.06, 0.25), (0.56, 0.75), (1, 1), (1.56, 1.25) for the different sets of parameter values.
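For reproducibility, the data generation for one Monte Carlo replication can use the exponential-gamma mixture representation of the Lindley distribution [Ghitany et al. (2008)]; the R sketch below is illustrative only, and the seed is arbitrary:

# Lindley(psi) variates via the mixture: Exp(psi) with probability psi/(psi + 1),
# Gamma(2, psi) with probability 1/(psi + 1).
rlindley <- function(n, psi) {
  mix <- runif(n) < psi / (psi + 1)
  ifelse(mix, rexp(n, rate = psi), rgamma(n, shape = 2, rate = psi))
}

set.seed(2022)
y <- rlindley(n = 50, psi = 0.75)   # one replicate under the settings of this section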

Table 2 True values and estimated values of 𝒞py by different methods of estimation along with their MSEs for LnD

n 𝒞py=0.8774483, ψ=0.5 𝒞py=0.976662, ψ=0.75
MLE LSE WLSE MPSE MLE LSE WLSE MPSE
10 Est. 0.879789 0.865363 0.868285 0.855203 0.964209 0.983523 0.958614 0.958619
MSE 0.005885 0.008642 0.007083 0.005586 0.002215 0.089940 0.001800 0.001748
20 Est. 0.858109 0.851396 0.853165 0.840554 0.970680 0.967701 0.967992 0.964575
MSE 0.004977 0.006025 0.005685 0.004565 0.000495 0.000693 0.000677 0.000396
30 Est. 0.878808 0.876160 0.876407 0.866174 0.971949 0.969396 0.969704 0.967377
MSE 0.002393 0.002636 0.002467 0.002165 0.000409 0.000454 0.000426 0.000365
50 Est. 0.876891 0.874232 0.874623 0.868169 0.972035 0.972308 0.971977 0.968636
MSE 0.001509 0.001750 0.001623 0.001491 0.000137 0.000216 0.000201 0.000102
100 Est. 0.877121 0.874574 0.875038 0.872000 0.975294 0.974632 0.974821 0.973517
MSE 0.000697 0.000837 0.000777 0.000671 0.000580 0.000103 0.000092 0.000018
n 𝒞py=0.9896466, ψ=1 𝒞py=0.9780293, ψ=1.25
MLE LSE WLSE MPSE MLE LSE WLSE MPSE
10 Est. 0.978130 0.978673 0.976522 0.978587 0.968040 0.968913 0.968853 0.973779
MSE 0.000493 0.005604 0.000511 0.000672 0.000462 0.002232 0.000804 0.000428
20 Est. 0.984350 0.983335 0.983741 0.984690 0.973546 0.973813 0.974149 0.977258
MSE 0.000090 0.000131 0.000117 0.000081 0.000258 0.000293 0.000271 0.000182
30 Est. 0.985879 0.985132 0.985448 0.986250 0.974904 0.975145 0.975368 0.977817
MSE 0.000046 0.000064 0.000056 0.000040 0.000154 0.000192 0.000176 0.000115
50 Est. 0.987334 0.986917 0.987122 0.987655 0.976161 0.976158 0.976299 0.978245
MSE 0.000019 0.000025 0.000022 0.000015 0.000085 0.000106 0.000098 0.000070
100 Est. 0.988560 0.988400 0.988485 0.988769 0.977240 0.977284 0.977348 0.978470
MSE 0.000005 0.000007 0.000006 0.000004 0.000040 0.000052 0.000047 0.000036

The estimates and corresponding MSEs of the GPCI 𝒞py for the LnD, XgD and AkD obtained through the classical methods of estimation are reported in Tables 2, 3, and 4, respectively. BCIs of the GPCI 𝒞py for the considered classical methods are provided in Tables 5, 6, and 7 for the LnD, XgD and AkD, respectively. For all the models, Bayes estimates with risks and HPD credible intervals obtained through the M-H algorithm are given in Tables 8 and 9. From the first three tables, we observe that the LnD performs better than the XgD and AkD in terms of MSEs under the considered classical methods and parameter setups, except for ψ = 1.25. The MPSE gives the smallest MSEs among all classical methods for almost all the considered setups, and this trend is similar in all considered models. Analysis of Tables 5, 6, and 7 shows that, among all BCIs, 𝒮𝒯 gives the least 𝒜𝒲 under all classical methods and for all models. Besides, the MPSE performs better in terms of the 𝒜𝒲 of the BCIs in all models. Among the considered models, the LnD gives better 𝒜𝒲s for almost all the considered parameter setups, except for ψ = 1.25. In Bayesian estimation using the M-H algorithm, the LnD performs better than the XgD and AkD in terms of smaller average risks, and the HPD credible intervals are also narrower for the LnD than for the other models for all parameter setups. From Tables 2 to 9, it is observed that as the sample size increases, the MSEs and risks of all the estimators decrease, which verifies the consistency of the considered estimators. Besides, the 𝒜𝒲s of the BCIs and HPD credible intervals also decrease as the sample size increases.

Table 3 True values and estimated values of 𝒞py by different methods of estimation along with their MSEs for XgD

n 𝒞py=0.7210604, ψ=0.5 𝒞py=0.9105752, ψ=0.75
MLE LSE WLSE MPSE MLE LSE WLSE MPSE
10 Est. 0.729966 0.714715 0.713854 0.693307 0.903001 0.890752 0.891013 0.880434
MSE 0.014786 0.015694 0.015036 0.012326 0.004911 0.005714 0.005512 0.004085
20 Est. 0.728137 0.719640 0.719848 0.704688 0.905420 0.899643 0.899999 0.890818
MSE 0.007981 0.008916 0.008364 0.007333 0.002625 0.002965 0.002800 0.002389
30 Est. 0.727594 0.725333 0.725641 0.709470 0.906755 0.907548 0.906732 0.895123
MSE 0.004996 0.005767 0.005273 0.004665 0.000866 0.001503 0.000912 0.000864
50 Est. 0.722135 0.719436 0.719514 0.709635 0.909521 0.903733 0.905410 0.902261
MSE 0.002918 0.003425 0.003162 0.002701 0.000635 0.000915 0.000794 0.000575
100 Est. 0.721463 0.720774 0.720243 0.714225 0.909737 0.908302 0.908615 0.905389
MSE 0.001457 0.001679 0.001543 0.001319 0.000439 0.000591 0.000534 0.000419
n 𝒞py=0.9685448, ψ=1 𝒞py=0.9739773, ψ=1.25
MLE LSE WLSE MPSE MLE LSE WLSE MPSE
10 Est. 0.956544 0.949060 0.949947 0.947833 0.962388 0.960015 0.960755 0.963296
MSE 0.000768 0.001448 0.001333 0.000626 0.000425 0.000657 0.000601 0.000389
20 Est. 0.962373 0.958862 0.959469 0.957306 0.968104 0.966722 0.967289 0.969029
MSE 0.000409 0.000517 0.000470 0.000349 0.000123 0.000192 0.000167 0.000096
30 Est. 0.964096 0.961881 0.962379 0.960412 0.970038 0.969356 0.969712 0.970876
MSE 0.000193 0.000293 0.000263 0.000132 0.000060 0.000082 0.000072 0.000043
50 Est. 0.965869 0.964440 0.964783 0.963392 0.971533 0.970948 0.971211 0.972213
MSE 0.000102 0.000140 0.000129 0.000083 0.000028 0.000041 0.000036 0.000019
100 Est. 0.967249 0.966848 0.966984 0.965821 0.972791 0.972544 0.972685 0.973224
MSE 0.000043 0.000056 0.000051 0.000028 0.000010 0.000014 0.000012 0.000007

Table 4 True values and estimated values of 𝒞py by different methods of estimation along with their MSEs for AkD

n 𝒞py=0.6451183, ψ=0.5 𝒞py=0.8907082, ψ=0.75
MLE LSE WLSE MPSE MLE LSE WLSE MPSE
10 Est. 0.665227 0.652336 0.652887 0.594764 0.881376 0.872612 0.872651 0.833047
MSE 0.015521 0.016210 0.015715 0.014408 0.006537 0.006923 0.006747 0.005484
20 Est. 0.627245 0.617239 0.618919 0.582421 0.889076 0.884923 0.885180 0.861044
MSE 0.008680 0.009645 0.009381 0.008176 0.002748 0.003307 0.003152 0.002358
30 Est. 0.649798 0.647868 0.647036 0.617526 0.887893 0.885007 0.885383 0.867529
MSE 0.004458 0.005178 0.004889 0.004307 0.001974 0.002366 0.002235 0.001892
50 Est. 0.646021 0.644475 0.644294 0.624784 0.889751 0.887847 0.888168 0.876628
MSE 0.002906 0.003442 0.003202 0.002371 0.001227 0.001432 0.001347 0.001199
100 Est. 0.648702 0.650901 0.650566 0.636540 0.890053 0.889191 0.889390 0.882843
MSE 0.001590 0.001980 0.001787 0.001573 0.000582 0.000682 0.000634 0.000488
n 𝒞py=0.9747761, ψ=1 𝒞py=0.9859814, ψ=1.25
MLE LSE WLSE MPSE MLE LSE WLSE MPSE
10 Est. 0.966257 0.962349 0.962575 0.948346 0.975280 0.974110 0.972375 0.973278
MSE 0.000931 0.001073 0.001035 0.000924 0.000590 0.001503 0.002012 0.000585
20 Est. 0.969145 0.966658 0.967025 0.958633 0.980810 0.980119 0.979125 0.981273
MSE 0.000408 0.000574 0.000538 0.000338 0.000090 0.000117 0.001085 0.000084
30 Est. 0.970646 0.968992 0.969319 0.963343 0.982414 0.982034 0.981788 0.983060
MSE 0.000273 0.000358 0.000334 0.000234 0.000048 0.000062 0.000457 0.000036
50 Est. 0.972129 0.971183 0.971401 0.967524 0.983865 0.983576 0.983716 0.984473
MSE 0.000151 0.000186 0.000173 0.000122 0.000019 0.000024 0.000022 0.000012
100 Est. 0.973401 0.972908 0.973051 0.970956 0.984961 0.984843 0.984905 0.985380
MSE 0.000067 0.000080 0.000074 0.000044 0.000006 0.000007 0.000006 0.000003

Table 5 True values and 𝒜𝒲s of 𝒞py of BCIs for LnD

𝒞py n MLE LSE
ψ 𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
10 0.294136 0.280690 0.209344 0.291864 0.656123 0.321227 0.207201 0.414685
0.877448 20 0.210582 0.205766 0.166374 0.211112 0.243901 0.236562 0.182931 0.233462
0.5 30 0.179497 0.177130 0.150699 0.180705 0.200585 0.194900 0.157803 0.193299
50 0.145064 0.143915 0.128103 0.145269 0.154407 0.152830 0.129211 0.150823
100 0.105432 0.104884 0.097071 0.105360 0.115049 0.114632 0.102558 0.114347
10 0.135649 0.126159 0.047982 0.115180 0.186165 0.172515 0.069833 0.142938
0.976662 20 0.089950 0.084236 0.040070 0.082346 0.138575 0.102258 0.047206 0.090716
0.75 30 0.073276 0.069389 0.037925 0.069920 0.077388 0.072993 0.033078 0.066440
50 0.051911 0.049812 0.029438 0.050939 0.059311 0.056615 0.030896 0.054296
100 0.035199 0.034268 0.023810 0.035099 0.036388 0.035104 0.021778 0.034802
10 0.093305 0.086358 0.022290 0.057988 0.115793 0.105702 0.022045 0.074261
0.989647 20 0.047465 0.043928 0.010436 0.031123 0.088019 0.056408 0.014554 0.040293
1 30 0.032484 0.030079 0.008168 0.021752 0.040860 0.037452 0.010238 0.029139
50 0.021916 0.017863 0.004590 0.012408 0.022963 0.021204 0.005139 0.014993
100 0.010215 0.009458 0.002727 0.006986 0.013411 0.012480 0.004046 0.010123
10 0.102690 0.096411 0.033154 0.072400 0.123670 0.114753 0.037203 0.090518
0.978029 20 0.066462 0.063186 0.031317 0.057623 0.097819 0.065840 0.028744 0.062803
1.25 30 0.047723 0.045609 0.024369 0.042230 0.055094 0.052678 0.027689 0.051649
50 0.035140 0.034105 0.020892 0.032773 0.037576 0.035928 0.020731 0.036242
100 0.025194 0.024784 0.018293 0.024416 0.028522 0.028001 0.020407 0.028377
𝒞py n WLSE MPSE
ψ 𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
10 0.310544 0.295794 0.203689 0.282632 0.285017 0.273379 0.202178 0.293050
0.877448 20 0.229858 0.224253 0.171414 0.222131 0.208656 0.203425 0.160410 0.203876
0.5 30 0.201329 0.198203 0.166035 0.196747 0.175773 0.172703 0.144598 0.179581
50 0.153786 0.151717 0.131460 0.151669 0.143901 0.142529 0.121321 0.145268
100 0.111167 0.110506 0.099566 0.110349 0.100833 0.101413 0.088174 0.103117
10 0.159903 0.148093 0.049612 0.116259 0.131197 0.118116 0.043433 0.114271
0.976662 20 0.101563 0.094851 0.039842 0.084764 0.081318 0.081223 0.039787 0.078596
0.75 30 0.076629 0.072511 0.034718 0.067302 0.068384 0.064629 0.034083 0.068826
50 0.058623 0.056142 0.032215 0.054542 0.050628 0.048607 0.026266 0.049518
100 0.036052 0.034921 0.022859 0.034456 0.033529 0.031570 0.023427 0.034690
10 0.116290 0.106394 0.026012 0.079890 0.090587 0.078179 0.021037 0.056182
0.989647 20 0.057771 0.053797 0.014953 0.040517 0.044052 0.040166 0.010065 0.031055
1 30 0.037875 0.035132 0.008797 0.024993 0.030070 0.002905 0.006557 0.019737
50 0.021367 0.019675 0.004449 0.013659 0.020338 0.016791 0.004503 0.011977
100 0.011966 0.011194 0.003378 0.008930 0.009796 0.009039 0.002362 0.006800
10 0.116114 0.106735 0.032383 0.087482 0.090236 0.083952 0.024682 0.070959
0.978029 20 0.069914 0.066120 0.030636 0.061823 0.055009 0.051701 0.026083 0.056654
1.25 30 0.056191 0.053826 0.030470 0.053858 0.039090 0.037047 0.021346 0.042029
50 0.039595 0.038334 0.024173 0.038526 0.030270 0.029027 0.019838 0.030790
100 0.026729 0.026320 0.019296 0.026591 0.023065 0.022606 0.017519 0.023049

Table 6 True values and 𝒜𝒲s of 𝒞py of BCIs for XgD

𝒞py n MLE LSE
ψ 𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
10 0.404213 0.396618 0.341336 0.372246 0.464587 0.448132 0.449843 0.432402
0.721060 20 0.306730 0.302914 0.298811 0.299697 0.348492 0.343875 0.352192 0.337741
0.5 30 0.256145 0.253767 0.263676 0.253334 0.289339 0.285637 0.304868 0.284427
50 0.202482 0.201293 0.219669 0.204139 0.222909 0.221130 0.206987 0.218780
100 0.143889 0.143036 0.139454 0.142729 0.159815 0.159838 0.155933 0.156733
10 0.230394 0.214949 0.137809 0.244377 0.284017 0.261854 0.179937 0.270438
0.910575 20 0.166718 0.160138 0.114500 0.164097 0.191347 0.181943 0.139160 0.192199
0.75 30 0.151999 0.149267 0.112405 0.142428 0.152699 0.147826 0.114533 0.145574
50 0.108212 0.106523 0.104902 0.123842 0.123989 0.122205 0.100959 0.110834
100 0.078546 0.078197 0.071296 0.080435 0.095715 0.095071 0.074667 0.089110
10 0.100545 0.091774 0.028866 0.075334 0.176356 0.162865 0.074564 0.140182
0.968545 20 0.072646 0.066961 0.023910 0.075030 0.107431 0.099626 0.034733 0.079362
1 30 0.054667 0.050676 0.021592 0.048380 0.074976 0.069483 0.026981 0.061665
50 0.035131 0.033110 0.017676 0.035334 0.053742 0.050940 0.024086 0.049720
100 0.027000 0.025704 0.015135 0.024920 0.025823 0.024285 0.013000 0.028615
10 0.113921 0.104872 0.034864 0.058311 0.124001 0.111816 0.033211 0.127577
0.973977 20 0.055423 0.050783 0.018254 0.043441 0.058817 0.053841 0.013526 0.047762
1.25 30 0.031787 0.030295 0.009966 0.026078 0.046213 0.042261 0.013871 0.036187
50 0.020585 0.019022 0.006952 0.019088 0.025613 0.023488 0.008151 0.017135
100 0.011518 0.010725 0.004295 0.010787 0.013442 0.012402 0.004393 0.012015
𝒞py n WLSE MPSE
ψ 𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
10 0.460088 0.446721 0.434481 0.427817 0.400653 0.383063 0.337199 0.369410
0.721060 20 0.344096 0.339964 0.391265 0.330106 0.297349 0.293130 0.252886 0.296504
0.5 30 0.273240 0.271835 0.238852 0.260739 0.255607 0.253129 0.251097 0.252395
50 0.217184 0.216057 0.222478 0.214956 0.194604 0.192556 0.194341 0.191107
100 0.157843 0.157920 0.164907 0.157340 0.147023 0.140212 0.124748 0.141724
10 0.302805 0.283702 0.197969 0.301453 0.219958 0.203534 0.123130 0.242078
0.910575 20 0.212119 0.204998 0.123383 0.164024 0.154204 0.158711 0.100125 0.151573
0.75 30 0.162883 0.158423 0.136942 0.170730 0.148111 0.143458 0.105396 0.140124
50 0.125488 0.123054 0.110634 0.130880 0.091369 0.099841 0.085208 0.109574
100 0.098558 0.098506 0.100252 0.092195 0.061364 0.068010 0.056557 0.067917
10 0.173100 0.157255 0.052056 0.146356 0.097198 0.090542 0.028151 0.074352
0.968545 20 0.098569 0.091090 0.039812 0.078501 0.071171 0.065810 0.023385 0.071768
1 30 0.054262 0.049740 0.026511 0.049619 0.045541 0.040886 0.020895 0.045946
50 0.052109 0.048848 0.024247 0.046091 0.034406 0.032117 0.013219 0.034688
100 0.029045 0.027754 0.018093 0.029052 0.026965 0.024875 0.014228 0.023341
10 0.115728 0.102848 0.033623 0.083063 0.111466 0.101528 0.028261 0.048911
0.973977 20 0.053489 0.049699 0.011128 0.034469 0.046053 0.045547 0.017531 0.035451
1.25 30 0.036628 0.034693 0.008564 0.030869 0.030033 0.029148 0.005871 0.025056
50 0.020442 0.019054 0.005336 0.018296 0.019779 0.018395 0.006049 0.018692
100 0.011054 0.009930 0.003267 0.012478 0.009584 0.008882 0.002277 0.004374

Table 7 True values and 𝒜𝒲s of 𝒞py of BCIs for AkD

𝒞py n MLE LSE
ψ 𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
10 0.437687 0.430605 0.411914 0.415991 0.461418 0.452105 0.402186 0.428910
0.645118 20 0.330898 0.327374 0.332497 0.324617 0.354265 0.349750 0.328684 0.345806
0.5 30 0.279391 0.278132 0.295479 0.276766 0.302593 0.300121 0.329704 0.298886
50 0.215426 0.214608 0.217570 0.214397 0.233573 0.232874 0.236927 0.231654
100 0.153630 0.153512 0.146728 0.152530 0.164612 0.163342 0.150733 0.162116
10 0.266976 0.250841 0.204355 0.285924 0.323048 0.307740 0.193237 0.261823
0.890708 20 0.209480 0.205526 0.137988 0.180488 0.219456 0.212140 0.183338 0.238014
0.75 30 0.169684 0.167224 0.148357 0.175352 0.198359 0.196279 0.160253 0.193397
50 0.134632 0.132693 0.112369 0.131351 0.145054 0.143826 0.112019 0.135153
100 0.093588 0.093092 0.090389 0.098186 0.099968 0.100422 0.089721 0.091989
10 0.115747 0.106237 0.045426 0.108971 0.153401 0.141536 0.054918 0.150901
0.974776 20 0.103980 0.098063 0.036447 0.067112 0.100915 0.094895 0.041952 0.085862
1 30 0.056523 0.052851 0.025361 0.057322 0.075928 0.071461 0.032226 0.062132
50 0.049763 0.047807 0.022174 0.035941 0.053144 0.050744 0.027869 0.049205
100 0.034374 0.033507 0.022178 0.031629 0.036033 0.035189 0.022769 0.032929
10 0.081450 0.073808 0.018443 0.051332 0.112037 0.100418 0.036216 0.104392
0.985981 20 0.047723 0.044160 0.013888 0.035435 0.052700 0.048457 0.018813 0.044577
1.25 30 0.024061 0.021944 0.004342 0.013169 0.041736 0.038384 0.015630 0.028089
50 0.016988 0.015716 0.003935 0.012054 0.017449 0.015978 0.003560 0.010694
100 0.009086 0.008278 0.002493 0.006470 0.011093 0.010151 0.002729 0.007426
𝒞py n WLSE MPSE
ψ 𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
10 0.476791 0.467952 0.468416 0.450657 0.427025 0.425572 0.403884 0.415798
0.645118 20 0.352671 0.348144 0.362391 0.344774 0.327929 0.322354 0.300892 0.327757
0.5 30 0.288292 0.285888 0.276887 0.282797 0.278749 0.274946 0.205127 0.266286
50 0.223318 0.222663 0.212974 0.219545 0.213691 0.212494 0.165830 0.209318
100 0.161620 0.161200 0.163585 0.160669 0.150806 0.150429 0.134732 0.146557
10 0.316434 0.301652 0.213860 0.297511 0.239781 0.238908 0.194207 0.264864
0.890708 20 0.220467 0.214655 0.174164 0.218879 0.200038 0.200559 0.129140 0.171952
0.75 30 0.171470 0.169755 0.146188 0.143276 0.160071 0.158180 0.120468 0.163569
50 0.136668 0.135182 0.117604 0.136248 0.122186 0.121402 0.107219 0.124179
100 0.103155 0.102816 0.064773 0.085772 0.083885 0.083734 0.069758 0.090797
10 0.075981 0.069321 0.010496 0.013689 0.109536 0.100336 0.039607 0.094399
0.974776 20 0.105981 0.096204 0.149310 0.097202 0.096568 0.090410 0.034790 0.058332
1 30 0.096203 0.093554 0.114821 0.054171 0.048971 0.048598 0.024529 0.047855
50 0.076862 0.075845 0.027831 0.037932 0.044006 0.042025 0.021365 0.034068
100 0.034664 0.033649 0.024008 0.034496 0.033836 0.033161 0.022179 0.031264
10 0.080546 0.073575 0.020598 0.013777 0.079109 0.070205 0.009976 0.032750
0.985981 20 0.069279 0.065396 0.123183 0.030402 0.045865 0.040532 0.010402 0.023541
1.25 30 0.046304 0.042530 0.026635 0.045790 0.020221 0.018201 0.004187 0.012621
50 0.044698 0.044740 0.089913 0.012600 0.016837 0.015361 0.002237 0.008470
100 0.021210 0.020895 0.039214 0.013721 0.008830 0.008271 0.002185 0.005409

Table 8 True and Bayes estimate of 𝒞py along with the Risk under SELF through M-H algorithm for LnD, XgD, and AkD

Model Estimate (Est.) and Risk of 𝒞py through M-H algorithm
𝒞py 0.8774483 0.976662 0.989647 0.978029
ψ 0.5 0.75 1 1.25
n Est. Risk Est. Risk Est. Risk Est. Risk
LnD 10 0.853024 0.004214 0.948373 0.004177 0.969128 0.000240 0.961106 0.000452
20 0.864481 0.003102 0.960922 0.001487 0.979044 0.000110 0.971526 0.000264
30 0.874767 0.001541 0.962843 0.001110 0.982419 0.000076 0.972255 0.000065
50 0.869895 0.001423 0.971615 0.000170 0.985448 0.000009 0.974809 0.000129
100 0.873286 0.000923 0.973450 0.000005 0.987620 0.000006 0.976895 0.000038
𝒞py 0.7210604 0.9105752 0.968545 0.973977
ψ 0.5 0.75 1 1.25
n Est. Risk Est. Risk Est. Risk Est. Risk
XgD 10 0.756474 0.009039 0.926176 0.003752 0.953478 0.000297 0.942043 0.000613
20 0.767116 0.005587 0.930856 0.001934 0.963577 0.000290 0.952245 0.000238
30 0.790385 0.003173 0.937375 0.001010 0.967313 0.000066 0.950271 0.000069
50 0.785451 0.002058 0.945529 0.000096 0.969858 0.000010 0.954306 0.000046
100 0.789346 0.000894 0.948281 0.000199 0.972459 0.000004 0.955171 0.000043
𝒞py 0.6451183 0.8907082 0.974776 0.985981
ψ 0.5 0.75 1 1.25
n Est. Risk Est. Risk Est. Risk Est. Risk
AkD 10 0.565347 0.017604 0.796856 0.011609 0.920499 0.004731 0.956472 0.000569
20 0.559024 0.008279 0.819741 0.007189 0.938336 0.001272 0.971138 0.000175
30 0.540392 0.006021 0.816193 0.004546 0.941741 0.000420 0.976528 0.000077
50 0.553668 0.003820 0.816866 0.002636 0.941253 0.001816 0.979522 0.000021
100 0.559860 0.001870 0.824001 0.001318 0.948663 0.000237 0.983446 0.000006

Table 9 True value of 𝒞py along with HPD Interval in terms of 𝒜𝒲s for LnD, XgD and AkD

HPD interval of 𝒞py through M-H algorithm
Model n 𝒞py 0.877448 0.976662 0.989647 0.978029
ψ 0.5 0.75 1 1.25
LnD 10 0.286503 0.128196 0.069984 0.071591
20 HPD 0.216304 0.079287 0.034720 0.044250
30 (𝒜𝒲s) 0.177423 0.068828 0.023888 0.038950
50 0.145669 0.044250 0.014169 0.030460
100 0.103922 0.032680 0.007353 0.022453
n 𝒞py 0.721060 0.910575 0.968545 0.973977
ψ 0.5 0.75 1 1.25
XgD 10 0.331779 0.131199 0.061547 0.072174
20 HPD 0.260186 0.101713 0.034209 0.046301
30 (𝒜𝒲s) 0.205779 0.081814 0.023311 0.042738
50 0.164615 0.058323 0.015419 0.033091
100 0.118632 0.041921 0.007601 0.025168
n 𝒞py 0.645118 0.890708 0.974776 0.985981
ψ 0.5 0.75 1 1.25
AkD 10 0.491227 0.380292 0.195269 0.106567
20 HPD 0.366641 0.287847 0.127084 0.054387
30 (𝒜𝒲s) 0.302958 0.246231 0.109191 0.036017
50 0.239482 0.197394 0.094328 0.023970
100 0.172408 0.140885 0.065415 0.011301

6 Data Analysis

In this section, two real data sets are considered and analyzed for illustrative purposes. Descriptive statistics of the considered data sets are displayed in Table 10. First, using goodness-of-fit tests, we verify whether the given data sets can be regarded as coming from the LnD, XgD, and AkD. Results of the goodness-of-fit tests are reported in Table 11. From Table 11, it is observed that the p-values for both data sets are much higher than the level of significance (0.05), which indicates that the considered models are suitable for the considered data sets.

Data set I: The data set represents the waiting times (in minutes) before customer service in a bank; a detailed description of the data set is given in Ghitany et al. (2008). Here, we assume that the lower and upper specification limits are L = 1 and U = 35.1 (in minutes), respectively.

Data set II: The second data set concerns the time to first failure (in months) of 20 electric carts used for internal transportation and delivery in a large manufacturing facility. This data set was discussed by Zimmer et al. (1998) for Burr XII reliability analysis. Here, we assume that the lower and upper specification limits are L = 0.95 and U = 52.1 (in months), respectively.

For the considered data sets, we have calculated the point estimates of the GPCI 𝒞py using the different classical estimation methods and the Bayesian estimation method. The classical estimates of the considered index are reported in Table 12, and the Bayes estimates (point and interval) of the GPCI 𝒞py under SELF are reported in Table 14. Besides, the widths of the BCIs under the different classical methods of estimation are reported in Table 13. From Table 13, it is found that for data set I, MLE and the LnD give the best performance compared to the other methods and distributions, respectively. Similarly, for data set II, MPSE and the XgD play the same role. Among the different BCIs, 𝒮𝒯 for data set I and 𝒞𝒫 for data set II perform better. It is observed that the width of the HPD interval is the minimum among the widths of the BCIs, which shows trends of inference similar to those seen in the simulation study. Specifically, the LnD gives the least HPD width for data set I and the XgD gives the least HPD width for data set II. From Tables 12 and 14, we observe that the estimated values of 𝒞py (under the LnD and AkD) based on the different methods of estimation indicate that the process is almost capable, i.e., the process is satisfactory from a capability point of view even though it is under statistical control.

Table 10 Descriptive Statistics for the considered data sets

Data Sets Minimum Q1 Median Mean Q3 Maximum SD CS (skewness) CK (kurtosis)
I 0.8 4.675 8.1 9.877 13.02 38.5 7.236 1.472 5.54
II 0.9 4.725 10.75 14.68 20.12 53 13.663 1.348 4.279

Table 11 Goodness of fit summary for considered data set

Data Sets Model −Log-Likelihood AIC BIC K.S. Statistic K.S. p-value
I LnD 319.0374 640.0748 642.6800 0.0677 0.7495
XgD 132.7684 267.5367 270.1419 0.0625 0.8297
AkD 320.9646 643.9292 646.5344 0.1003 0.2672
II LnD 74.5745 151.1490 152.1448 0.1254 0.8736
XgD 75.9128 153.8256 154.8214 0.1753 0.5146
AkD 79.1776 160.3552 161.3510 0.2071 0.3130

Table 12 Estimates of GPCIs 𝒞py using different methods of estimation

Data Sets Model ψ̂ Ĉpy (MLE) Ĉpy (LSE) Ĉpy (WLSE) Ĉpy (MPSE)
I LnD 0.186571 1.000987 1.001030 1.001154 0.015165
XgD 0.263407 0.995442 0.993535 0.994805 0.001645
AkD 0.295277 1.035844 1.033791 1.034129 0.000834
II LnD 0.128526 1.023422 1.023643 1.023759 1.021968
XgD 0.178251 1.022753 1.017489 1.018073 1.022919
AkD 0.201712 1.046044 1.044679 1.044851 1.044983

Table 13 Widths of BCIs for 𝒞py under different method of estimation for different models

Data set - I Data set - II
Est. Widths of 𝒞py for LnD
𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
MLE 0.007125 0.006571 0.000601 0.003240 0.025848 0.025948 0.000909 0.006820
LSE 0.007396 0.006559 0.000517 0.002880 0.023367 0.020624 0.000472 0.002329
WLSE 0.006622 0.005997 0.000268 0.001548 0.022409 0.019861 0.000240 0.001610
MPSE 0.006426 0.005871 0.000612 0.000599 0.000899 0.017821 0.000215 0.000158
Widths of 𝒞py for XgD
𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
MLE 0.013944 0.013003 0.007881 0.012974 0.015982 0.014896 0.000433 0.002391
LSE 0.017734 0.017636 0.011626 0.018388 0.032925 0.029829 0.010935 0.031626
WLSE 0.015281 0.014583 0.009184 0.014583 0.029070 0.025405 0.009756 0.026621
MPSE 0.014752 0.013251 0.008131 0.013258 0.024657 0.023337 0.000401 0.000388
Widths of 𝒞py for AkD
𝒮 𝒫 𝒮𝒯 𝒞𝒫 𝒮 𝒫 𝒮𝒯 𝒞𝒫
MLE 0.035285 0.035945 0.030909 0.037434 0.078136 0.080280 0.050420 0.071256
LSE 0.041131 0.040834 0.028299 0.042009 0.071672 0.072382 0.049761 0.073138
WLSE 0.043836 0.044732 0.035815 0.042761 0.099522 0.100507 0.066170 0.100507
MPSE 0.038869 0.039531 0.030182 0.039732 0.057403 0.056532 0.049625 0.083674

Table 14 Bayes estimates of Cpy through M-H algorithm with corresponding risk and HPD credible intervals

Model Data set - I Data set - II
Bayes estimate and HPD interval
Bayes est risk HPD Bayes est risk HPD
LnD 1.000192 0.000002 0.000402 1.018533 0.000080 0.000289
XgD 0.990712 0.000020 0.001643 1.019016 0.000020 0.000131
AkD 1.036033 0.000002 0.001532 1.041075 0.000058 0.000550

7 Conclusions

In this research, we considered four classical methods of point estimation of the GPCI 𝒞py (MLE, LSE, WLSE, and MPSE) as well as the Bayesian method (via the M-H algorithm), and demonstrated the proposed methods with two real-life examples. We conducted a simulation study to compare these strategies for different sample sizes and different values of the unknown parameter, because it is not possible to compare these methods theoretically. For the GPCI 𝒞py, we examined BCIs and HPD intervals in addition to point estimation.

Simulation study results show that the performance of the M-H algorithm is satisfactory. Further, the simulation results suggest that, in almost all cases, the Bayes estimates perform better than the classical methods of estimation. It is worth noting that the hyper-parameters of the prior distribution must be carefully chosen. Among the conventional methods of estimation, the MPSE produces the best results in terms of MSEs for practically all sample sizes and parameter values. Among the considered BCIs, 𝒮𝒯 performed better in terms of 𝒜𝒲s. Also, the 𝒜𝒲s of the HPD intervals under SELF are smaller than those of the considered BCIs. The data analysis echoed the pattern of results observed in the simulation study. As a result of the entire analysis, we can conclude that the LnD outperforms the XgD and AkD for almost all parameter values except ψ = 1.25, and that the performance ordering of the investigated distributions is LnD > XgD > AkD. I believe that, if this research approach works well, industries will be able to use it in the future to evaluate the capability of any process distribution.

References

[1] Chatterjee S., Qiu P. (2009). Distribution-free cumulative sum control charts using bootstrap-based control limits. The Annals of Applied Statistics, 3(1), 349–369.

[2] Chan, L. K., Cheng, S. W., and Spiring, F. A. (1988). A new measure of process capability: 𝒞pm. Journal of Quality Technology, 20(3), 162–175.

[3] Chen, M. H. and Shao, Q. M. (1999). Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics, 8(1), 69–92.

[4] Cheng, S. W. and Spiring, F. A. (1989). Assessing process capability: a Bayesian approach. IEE Transactions, 21(1), 97–98.

[5] Cheng, R. C. H. and Amin, N. A. K. (1979). Maximum product-of-spacings estimation with applications to the lognormal distribution. Math Report, 79.

[6] Cheng, R. C. H. and Amin, N. A. K. (1983). Estimating parameters in continuous univariate distributions with a shifted origin. Journal of the Royal Statistical Society: Series B (Methodological), 45(3), 394–403.

[7] Choi, I. S., and Bai, D. S. (1996). Process capability indices for skewed distributions. Proceedings of 20th International Conference on Computer and Industrial Engineering, Kyongju, Korea, 1211–1214.

[8] Dennis, J. E., and Schnabel, R. B. (1983). Numerical methods for unconstrained optimization and non-linear equations. Prentice-Hall, Englewood Cliffs, NJ.

[9] Dey, S., Saha, M., and Kumar, S. (2021). Parametric Confidence Intervals of Spmk for Generalized Exponential Distribution. American Journal of Mathematical and Management Sciences, 1–22.

[10] Franklin, A. F., and Wasserman, G. S. (1991). Bootstrap confidence interval estimation of 𝒞pk: an introduction. Communications in Statistics - Simulation and Computation, 20(1), 231–242.

[11] Ghitany, M. E., Atieh B., and Nadarajah, S. (2008). Lindley distribution and its application. Mathematics and Computers in Simulation, 78, 493–506.

[12] Gunter, B. H. (1989). The use and abuse of 𝒞pk. Quality Progress, 22(3), 108–109.

[13] Hsiang, T. C., and Taguchi, G. (1985). A tutorial on quality control and assurance – the Taguchi methods. ASA Annual Meeting, Las Vegas, Nevada, 188.

[14] Huiming, Z. Y., Jun, Y. and Liya, H. (2007). Bayesian evaluation approach for process capability based on sub samples. IEEE International Conference on Industrial Engineering and Engineering Management, Singapore, 1200–1203.

[15] Juran, J. M. (1974). Juran’s quality control handbook, 3rd ed. McGraw-Hill, New York, USA.

[16] Kane, V. E. (1986). Process capability indices. Journal of Quality Technology, 18, 41–52.

[17] Kumar S. (2021). Classical and Bayesian Estimation of the Process Capability Index Cpy Based on Lomax Distributed. In Yadav D.K. (Eds.), Advance Research Trends in Statistics and Data Science (pp. 115–131). MKSES Publication. http://doi.org/10.5281/zenodo.4699531.

[18] Kumar, S., and Saha, M. (2020). Estimation of Generalized Process Capability Indices Cpy for Poisson Distribution. Invertis Journal of Management, 12(2), 123–130.

[19] Kumar, S., Dey, S., and Saha, M. (2019). Comparison between two generalized process capability indices for Burr XII distribution using bootstrap confidence intervals. Life Cycle Reliability And Safety Engineering, 8(4), 347–355.

[20] Kumar, S., Yadav, A. S., Dey, S., and Saha, M. (2021). Parametric inference of generalized process capability index Cpyk for the power Lindley distribution. Quality Technology & Quantitative Management, 1–34.

[21] Kundu, D., and Pradhan, B. (2009). Bayesian inference and life testing plans for generalized exponential distribution. Sci. China Ser. A Math. 52 (special volume dedicated to Professor Z. D. Bai), 1373–1388.

[22] Leiva, V., Marchanta, C., Saulob, H., Aslam, M., and Rojasd, F. (2014). Capability indices for Birnbaum–Saunders processes applied to electronic and food industries. Journal of Applied Statistics, 41(9), 1881–1902.

[23] Li C., Mukherjee A., Su Q., Xie M. (2016). Distribution-free phase-II exponentially weighted moving average schemes for joint monitoring of location and scale based on subgroup samples. International Journal of Production Research, 54(24), 7259–7273.

[24] Lin, T. Y., Wu, C. W., Chen, J. C., and Chiou, Y. H. (2011). Applying Bayesian approach to assess process capability for asymmetric tolerances based on Cpmk index. Applied mathematical modelling, 35(9), 4473–4489.

[25] Lindley, D. V. (1958). Fiducial distributions and Bayes’ theorem. Journal of the Royal Statistical Society, 20, 102-107.

[26] Maiti, S. S., Saha, M. and Nanda, A. K. (2010). On generalizing process capability indices. Journal of Quality Technology and Quantitative Management, 7(3), 279–300.

[27] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6): 1087–1092.

[28] Miao, R., Zhang, X., Yang, D., Zhao, Y. and Jiang, Z. (2011). A conjugate Bayesian approach for calculating process capability indices. Expert Systems with Applications, 38(7), 8099–8104.

[29] Ouyang, L. Y., Wu, C. C., and Kuo, H. L. (2002). Bayesian assessment for some process capability indices. International journal of information and management sciences, 13(3), 1–18.

[30] Pearn, W. L., Kotz, S., and Johnson, N. L. (1992). Distributional and inferential properties of process capability indices. Journal of Quality Technology, 24, 216–231.

[31] Pearn, W. L., Tai, Y. T., Hsiao, I. F., and Ao, Y. P. (2014). Approximately unbiased estimator for non-normal process capability index 𝒞Npk. Journal of Testing and Evaluation, 42, 1408–1417.

[32] Pearn, W. L., Wu, C. C. and Wu, C. H. (2015). Estimating process capability index C pk: classical approach versus Bayesian approach. Journal of Statistical Computation and Simulation, 85(10), 2007–2021.

[33] Pearn, W. L., Tai, Y. T., and Wang, H. T. (2016). Estimation of a modified capability index for non-normal distributions. Journal of Testing and Evaluation, 44, 1998–2009.

[34] Perakis, M. and Xekalaki, E. (2002). A process capability index that is based on the proportion of conformance. Journal of Statistical Computation and Simulation, 72(9), 707–718.

[35] Rao, G. S., Aslam, M., and Kantam, R. R. L. (2016). Bootstrap confidence intervals of 𝒞Npk for Inverse Rayleigh and Log-logistic distributions. Journal of Statistical Computation and Simulation, 86(5), 862–873.

[36] Ranneby, B. (1984). The maximum spacing method. an estimation method related to the maximum likelihood Method. Scandinavian Journal of Statistics, 11(2), 93–112.

[37] Saxena, S. and Singh, H. P. (2006). A Bayesian estimator of process capability index. Journal of Statistics and Management Systems, 9(2), 269–283.

[38] Seifi, S. and Nezhad, M. S. F. (2017). Variable sampling plan for resubmitted lots based on process capability index and Bayesian approach. The International Journal of Advanced Manufacturing Technology, 88(9-12), 2547–2555.

[39] Saha, M., Kumar, S., Maiti, S. S., and Yadav, A. S. (2018). Asymptotic and bootstrap confidence intervals of generalized process capability index 𝒞py for exponentially distributed quality characteristic. Life Cycle Reliability And Safety Engineering, 7(4), 235–243.

[40] Saha, M., Dey, S., Yadav, A. S., and Kumar, S. (2019). Classical and Bayesian inference of C py for generalized Lindley distributed quality characteristic. Quality And Reliability Engineering International, 35(8), 2593–2611.

[41] Saha, M., Kumar, S., Maiti, S. S., Singh Yadav, A., and Dey, S. (2020a). Asymptotic and bootstrap confidence intervals for the process capability index cpy based on Lindley distributed quality characteristic. American Journal Of Mathematical And Management Sciences, 39(1), 75–89.

[42] Saha, M., Kumar, S., and Sahu, R. (2020b). Comparison of two generalized process capability indices by using bootstrap confidence intervals. International Journal of Statistics and Reliability Engineering, 7(1), 187–195.

[43] Shanker, R. (2015): Akash Distribution and Its Applications. International Journal of Probability and Statistics, 4(3): 65–75.

[44] Sen, S., Maiti, S. S., and Chandra, N. (2016). The xgamma distribution: statistical properties and application. Journal of Modern Applied Statistical Methods, 15(1), 38.

[45] Shiau, J. J. H., Chiang, C. T. and Hung, H. N. (1999a). A Bayesian procedure for process capability assessment. Quality and Reliability Engineering International, 15(5), 369–378.

[46] Shiau, J. J. H., Hung, H. N. and Chiang, C. T. (1999b). A note on Bayesian estimation of process capability indices. Statistics and Probability Letters, 45(3), 215–224.

[47] Smithson, M. (2001). Correct confidence intervals for various regression effect sizes and parameters: the importance of non-central distributions in computing intervals. Educational and Psychological Measurement, 61, 605–632.

[48] Smith, A. F. and Roberts, G. O. (1993). Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods. Journal of the Royal Statistical Society. Series B (Methodological), 55.

[49] Swain, J. J., Venkatraman, S. and Wilson, J. R. (1988). Least-squares estimation of distribution functions in Johnson’s translation system. Journal of Statistical Computation and Simulation, 29(4), 271–297.

[50] Tong, G. and Chen, J. P. (1998). Lower confidence limits of process capability indices for non-normal distributions. Quality Engineering, 9, 305–316.

[51] Zimmer, W. J., Keats, J. B., and Wang, F. K. (1998). The Burr XII Distribution in Reliability Analysis, Journal of Quality Technology, 30, 386–394.

Biography


Sumit Kumar is currently working as an Assistant Professor in the Department of Mathematics at Chandigarh University, Mohali, Punjab. He did his M.Sc. in Statistics from the Department of Statistics at Chaudhary Charan Singh University, Meerut, and his Ph.D. from the Department of Statistics at the Central University of Rajasthan. He has made good contributions in the areas of statistical quality control, classical and Bayesian inference, and distribution theory. He has also reviewed several papers for different reputed journals. He has published 11 research articles and 1 edited book chapter in reputed national/international journals. He has presented his research work at various national and international conferences and attended several seminars and FDPs on statistics and related areas.
