Approximate Bayes Estimators of the Parameters of the Inverse Gaussian Distribution Under Different Loss Functions
Ilhan Usta1,* and Merve Akdede2
1Eskisehir Technical University, Department of Statistics, Eskisehir, Turkey
2Usak University Department of Statistics, Usak, Turkey
E-mail: iusta@eskisehir.edu.tr; merve.akdede@usak.edu.tr
*Corresponding Author
Received 20 July 2019; Accepted 31 March 2020; Publication 13 October 2020
The inverse Gaussian is a popular distribution in reliability and lifetime modelling, and thus the estimation of its unknown parameters has received considerable interest. This paper aims to obtain the Bayes estimators of the two parameters of the inverse Gaussian distribution under varied loss functions (squared error, general entropy and linear exponential). In the Bayesian procedure, we consider the commonly used non-informative priors, namely the vague and Jeffrey's priors, and also propose using an extension of Jeffrey's prior. In the case where the two parameters are unknown, the Bayes estimators cannot be obtained in closed form. Hence, we employ two approximation methods, namely the Lindley and Tierney-Kadane (TK) approximations, to attain the Bayes estimates of the parameters. In this paper, the effects of the considered loss functions, priors and approximation methods on Bayesian parameter estimation are also presented. The performance of the Bayes estimates is compared with that of the corresponding classical estimates in terms of bias and relative efficiency through an extensive simulation study. The comparison shows that the Bayes estimators obtained by the TK method under the linear exponential loss function using the proposed prior outperform the other estimators for estimating the parameters of the inverse Gaussian distribution most of the time. Finally, a real data set is provided to illustrate the results.
Keywords: Inverse Gaussian distribution, Bayes estimator, extension of Jeffrey’s prior, Lindley approximation, Tierney Kadane approximation.
The inverse Gaussian (IG) distribution arises as the distribution of the first passage time in Brownian motion with positive drift. Initial studies on the statistical properties of the IG distribution were conducted by Tweedie (1957). The theory of the IG distribution and its statistical properties were also studied by Chhikara and Folks (1975). For more details on the IG distribution, the reader can refer to Chhikara and Folks (1989), Johnson et al. (1994) and Seshadri (1999).
The probability density function (pdf) of the two-parameter IG distribution is

$$f(x;\mu,\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}\,\exp\left\{-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right\} \tag{1}$$

with $x>0$, $\mu>0$ and $\lambda>0$. We denote the IG distribution by $IG(\mu,\lambda)$. The expected value and variance of the IG distribution are given as $E(X)=\mu$ and $Var(X)=\mu^{3}/\lambda$, respectively. The cumulative distribution function (cdf) is given as follows

$$F(x;\mu,\lambda)=\Phi\left(\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}-1\right)\right)+e^{2\lambda/\mu}\,\Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1\right)\right), \tag{2}$$

where $\Phi(\cdot)$ is the cdf of the standard normal distribution.
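To make (1) and (2) concrete, both functions can be evaluated numerically. The following is a minimal Python sketch (the paper's own computations use MATLAB; the function names here are ours), relying only on the standard library:

```python
import math

def ig_pdf(x, mu, lam):
    """Density (1) of IG(mu, lam) at x > 0."""
    return math.sqrt(lam / (2.0 * math.pi * x**3)) * \
        math.exp(-lam * (x - mu)**2 / (2.0 * mu**2 * x))

def std_normal_cdf(z):
    """Phi(z), the standard normal cdf, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ig_cdf(x, mu, lam):
    """Distribution function (2) of IG(mu, lam) at x > 0."""
    a = math.sqrt(lam / x)
    return (std_normal_cdf(a * (x / mu - 1.0))
            + math.exp(2.0 * lam / mu) * std_normal_cdf(-a * (x / mu + 1.0)))
```

For example, for $IG(1,1)$ the cdf evaluated at the mean is $F(1)=\Phi(0)+e^{2}\Phi(-2)\approx 0.668$.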
Iyengar and Patwardhan (1988) suggested the IG distribution as an alternative to the log-normal, gamma, Weibull and generalized gamma distributions when the initial failure rate is high. Lemeshko et al. (2010) proposed the IG distribution as a competitor of the families of generalized Weibull, log-normal and log-logistic distributions since its hazard function has an upside-down bathtub shape. Therefore, many authors have shown interest in the classical and Bayesian estimation of the parameters of the IG distribution. For instance, the maximum likelihood estimators (MLEs) of $\mu$ and $\lambda$ were obtained by Tweedie (1957). Chhikara and Folks (1989) derived the minimum variance unbiased estimators (MVUEs) of the parameters. Banerjee and Bhattacharyya (1979) obtained the Bayes estimators of the parameters and the reliability function of $IG(\mu,\lambda)$ under the vague prior. Sinha (1986) used the Lindley approximation to obtain Bayes estimates of the unknown parameters as well as the reliability function of $IG(\mu,\lambda)$ under the squared error loss function (SELF) with a vague prior. Ahmad and Jaheen (1995) used the Lindley and Tierney and Kadane (TK) approximations to compute the Bayes estimators of the parameters and the reliability function of $IG(\mu,\lambda)$ considering the conjugate prior under SELF. Recently, Singh et al. (2008) obtained Bayes estimators using Lindley's approximation under SELF and the general entropy loss function (GELF). Pandey and Bandyopadhyay (2012) used the Markov chain Monte Carlo method and the Lindley approximation to compute Bayes estimators using a gamma prior under SELF. In addition, many authors have studied the Bayesian estimation of the parameters of the IG distribution using censored data; see, for example, Basak and Balakrishnan (2012), Jia et al. (2017) and Rostamian and Nematollahi (2019).
The purpose of this study is to derive Bayes estimators for the unknown parameters of $IG(\mu,\lambda)$ based on a class of non-informative priors (the vague prior, Jeffrey's prior and the proposed extension of Jeffrey's prior) under symmetric (SELF) and asymmetric (linear exponential and GELF) loss functions. It is noticed that the Bayes estimators of the parameters cannot be expressed in explicit form. Thus, we use two approximation methods, Lindley and TK, to compute the Bayes estimates. A comprehensive simulation study is conducted to evaluate the performance of the proposed Bayes estimators, the MLEs and the MVUEs in terms of bias and the mean squared error.
The rest of this study is set as follows: In Section 2, the MLEs and MVUEs of the two-parameter IG distribution are given. Section 3 presents the non-informative priors, the posterior distributions and the considered loss functions. Bayes estimation using Lindley and TK approximation methods is also outlined in this section. By using the approximation methods, the approximate Bayes estimates under the proposed extension of Jeffrey’s prior are derived in Section 4. The results of simulation study are provided in Section 5. An application to real data set is given in Section 6. We conclude the study in Section 7.
Let $x_{1},x_{2},\ldots,x_{n}$ be a random sample of size $n$ from the $IG(\mu,\lambda)$ distribution; then the likelihood and log-likelihood functions are, respectively, as follows:

$$L(\mu,\lambda\mid x)=\left(\frac{\lambda}{2\pi}\right)^{n/2}\prod_{i=1}^{n}x_{i}^{-3/2}\exp\left\{-\frac{\lambda}{2\mu^{2}}\sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{x_{i}}\right\} \tag{3}$$

$$\ell(\mu,\lambda\mid x)=\frac{n}{2}\ln\lambda-\frac{n}{2}\ln(2\pi)-\frac{3}{2}\sum_{i=1}^{n}\ln x_{i}-\frac{\lambda}{2\mu^{2}}\sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{x_{i}}. \tag{4}$$

By differentiating (4) with respect to the parameters and setting the derivatives to zero, we have

$$\frac{\partial\ell}{\partial\mu}=\frac{\lambda}{\mu^{3}}\sum_{i=1}^{n}x_{i}-\frac{n\lambda}{\mu^{2}}=0 \tag{5}$$

$$\frac{\partial\ell}{\partial\lambda}=\frac{n}{2\lambda}-\frac{1}{2\mu^{2}}\sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{x_{i}}=0. \tag{6}$$

From (5) and (6), the MLEs of $\mu$ and $\lambda$ are obtained as follows, respectively:

$$\hat{\mu}=\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_{i},\qquad \hat{\lambda}=\frac{n}{\sum_{i=1}^{n}\left(1/x_{i}-1/\bar{x}\right)}. \tag{7}$$

According to Chhikara and Folks (1989), the MVUEs of $\mu$ and $\lambda$ are given as

$$\tilde{\mu}=\bar{x},\qquad \tilde{\lambda}=\frac{n-3}{\sum_{i=1}^{n}\left(1/x_{i}-1/\bar{x}\right)}. \tag{8}$$
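As a quick numerical check of (7) and (8), a short Python sketch (the function names are ours):

```python
def ig_mle(x):
    """MLEs (7): mu_hat = sample mean, lam_hat = n / sum(1/x_i - 1/x_bar)."""
    n = len(x)
    mu_hat = sum(x) / n
    v = sum(1.0 / xi - 1.0 / mu_hat for xi in x)
    return mu_hat, n / v

def ig_mvue(x):
    """MVUEs (8): mu_tilde = sample mean, lam_tilde = (n - 3) / sum(1/x_i - 1/x_bar)."""
    n = len(x)
    mu_t = sum(x) / n
    v = sum(1.0 / xi - 1.0 / mu_t for xi in x)
    return mu_t, (n - 3) / v
```

Both estimators of $\mu$ coincide with $\bar{x}$; the estimators of $\lambda$ differ only in the factor $n$ versus $n-3$.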
The property that distinguishes the Bayesian estimation approach from classical estimation is that the model parameter is considered a random variable. It is assigned a distribution, called the prior distribution, which represents any knowledge or beliefs about the parameter. When prior knowledge about the parameters is available, informative priors are used for Bayesian estimation. However, in many real cases, it is very difficult to have such information about the parameters in advance.
Therefore, using non-informative priors is a more suitable alternative. For this reason, in this study, we assume the absence of prior knowledge about the parameters and obtain Bayes estimators under three different non-informative priors: the vague prior, Jeffrey's prior and the proposed extension of Jeffrey's prior.
First, we consider the vague prior for $\mu$ and $\lambda$ given as follows

$$\nu(\mu,\lambda)\propto\frac{1}{\lambda},\qquad \mu>0,\ \lambda>0, \tag{10}$$
which was also used by Banerjee and Bhattacharyya (1979) and Sinha (1986).
Thus, the joint posterior distribution of $(\mu,\lambda)$ is derived as follows

$$\pi(\mu,\lambda\mid x)=\frac{L(\mu,\lambda\mid x)\,\nu(\mu,\lambda)}{\int_{0}^{\infty}\int_{0}^{\infty}L(\mu,\lambda\mid x)\,\nu(\mu,\lambda)\,d\mu\,d\lambda}. \tag{11}$$
Substituting the vague prior (10) in (11), the posterior distribution becomes

$$\pi_{1}(\mu,\lambda\mid x)\propto\lambda^{n/2-1}\exp\left\{-\frac{\lambda}{2\mu^{2}}\sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{x_{i}}\right\} \tag{12}$$

or

$$\pi_{1}(\mu,\lambda\mid x)=k_{1}^{-1}\,\lambda^{n/2-1}\exp\left\{-\frac{\lambda}{2\mu^{2}}\sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{x_{i}}\right\}, \tag{13}$$

where $k_{1}$ is the normalizing constant.
One of the most commonly used non-informative priors was suggested by Jeffreys (1961) as $\pi(\mu,\lambda)\propto\sqrt{|I(\mu,\lambda)|}$, where $I(\mu,\lambda)$ is the Fisher information matrix. For the two-parameter IG distribution, the joint Jeffrey's prior of $(\mu,\lambda)$ is given as

$$\pi_{J}(\mu,\lambda)\propto\frac{1}{\sqrt{\mu^{3}\lambda}}. \tag{14}$$

Substituting the Jeffrey's prior (14) in (11), we obtain

$$\pi_{2}(\mu,\lambda\mid x)=k_{2}^{-1}\,\mu^{-3/2}\,\lambda^{(n-1)/2}\exp\left\{-\frac{\lambda}{2\mu^{2}}\sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{x_{i}}\right\}, \tag{15}$$

where $k_{2}$ is the normalizing constant.
Al-Kutubi and Ibrahim (2009) suggested an extension of Jeffrey's prior for the exponential distribution of the form $\pi(\theta)\propto[I(\theta)]^{c}$, $c>0$. Following a similar path, we propose the extended Jeffrey's prior for the two parameters of the IG distribution, which can be expressed in the following form

$$\pi_{E}(\mu,\lambda)\propto\left[\,|I(\mu,\lambda)|\,\right]^{c}\propto\frac{1}{(\mu^{3}\lambda)^{c}},\qquad c>0. \tag{16}$$

This is also a generalisation of the non-informative priors considered in this study. For instance, the extended Jeffrey's prior in (16) yields Jeffrey's prior in (14) when $c=1/2$, and if $c=1$ and the $\mu$-dependent factor is treated as constant, (16) reduces to the vague prior in (10).
The posterior distribution of $(\mu,\lambda)$ is derived by using the proposed prior in (16) as follows

$$\pi_{3}(\mu,\lambda\mid x)=k_{3}^{-1}\,\mu^{-3c}\,\lambda^{n/2-c}\exp\left\{-\frac{\lambda}{2\mu^{2}}\sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{x_{i}}\right\}, \tag{17}$$

where $k_{3}$ is the normalizing constant.
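For later numerical work it is convenient to code the unnormalized log-posterior. The sketch below assumes the determinant-based form $\pi_{E}(\mu,\lambda)\propto(\mu^{3}\lambda)^{-c}$ for the extended prior, which is our reading of (16); with $c=1/2$ it reproduces Jeffrey's prior:

```python
import math

def ig_log_posterior_kernel(mu, lam, x, c):
    """Unnormalized log-posterior of (mu, lam) under the extended
    Jeffrey's-type prior pi(mu, lam) ~ (mu^3 * lam)^(-c) (assumed form)."""
    n = len(x)
    log_lik = 0.5 * n * math.log(lam) - \
        lam * sum((xi - mu)**2 / (2.0 * mu**2 * xi) for xi in x)
    log_prior = -c * (3.0 * math.log(mu) + math.log(lam))
    return log_lik + log_prior
```

Constant terms in $\mu$ and $\lambda$ (e.g. $-\tfrac{3}{2}\sum\ln x_{i}$) are dropped, since they cancel in posterior ratios.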
In this subsection, we briefly present the loss functions used to derive the Bayes estimators of the IG parameters in this study.
One of the most widely used loss functions is SELF, which has the symmetry property. SELF is defined as follows

$$L_{S}(\hat{\theta},\theta)=(\hat{\theta}-\theta)^{2}. \tag{18}$$

Let $u(\theta)$ be an arbitrary function of $\theta$. The Bayes estimator of $u(\theta)$ under SELF, denoted by $\hat{u}_{S}$, equals the posterior mean and is given as

$$\hat{u}_{S}=E\left[u(\theta)\mid x\right]. \tag{19}$$
SELF is widely used due to the simplicity of its algebraic calculations. However, as SELF is symmetric, it may not be appropriate for many estimation problems especially where overestimation or underestimation of a parameter is important. Therefore, asymmetric loss functions are suggested as more appropriate alternatives.
Varian (1975) defined an asymmetric loss function, called the linear exponential (LINEX) loss function, as an alternative to the symmetric loss functions when overestimation is more serious than underestimation. The LINEX loss function is defined as follows

$$L_{L}(\Delta)=e^{a\Delta}-a\Delta-1,\qquad \Delta=\hat{\theta}-\theta, \tag{20}$$

where $a\neq0$ is the loss parameter. The sign and the magnitude of $a$ represent the direction and the degree of asymmetry, respectively.

The Bayes estimator of $u(\theta)$ under (20), denoted by $\hat{u}_{L}$, was obtained by Zellner (1986) as follows

$$\hat{u}_{L}=-\frac{1}{a}\ln E\left[e^{-a\,u(\theta)}\mid x\right]. \tag{21}$$
The general entropy loss function (GELF), introduced by Calabria and Pulcini (1994), is defined as follows

$$L_{G}(\hat{\theta},\theta)=\left(\frac{\hat{\theta}}{\theta}\right)^{k}-k\ln\left(\frac{\hat{\theta}}{\theta}\right)-1, \tag{22}$$

where $k\neq0$ is the loss parameter and its magnitude shows the degree of asymmetry.

The Bayes estimator of $u(\theta)$ under GELF, denoted by $\hat{u}_{G}$, is given by

$$\hat{u}_{G}=\left[E\left(u(\theta)^{-k}\mid x\right)\right]^{-1/k}. \tag{23}$$

Note that if $k=-1$ in (23), the Bayes estimator under GELF, $\hat{u}_{G}$, coincides with $\hat{u}_{S}$.
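The three Bayes rules (19), (21) and (23) share a common Monte Carlo form: given draws from the posterior of $u(\theta)$, each estimator is a different functional of the sample. A small Python sketch (the draws and function names are illustrative, not part of the paper's derivation):

```python
import math

def bayes_self(draws):
    """(19): posterior mean."""
    return sum(draws) / len(draws)

def bayes_linex(draws, a):
    """(21): -(1/a) * log E[exp(-a * u)]."""
    m = sum(math.exp(-a * t) for t in draws) / len(draws)
    return -math.log(m) / a

def bayes_gelf(draws, k):
    """(23): (E[u^(-k)])^(-1/k); draws must be positive."""
    m = sum(t ** (-k) for t in draws) / len(draws)
    return m ** (-1.0 / k)
```

With $k=-1$, `bayes_gelf` returns the posterior mean, matching the remark after (23); with $a>0$, `bayes_linex` lies below the posterior mean, guarding against overestimation.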
We would like to emphasize that the Bayes estimators under the considered loss functions contain ratios of integrals that cannot be simplified into closed forms. Therefore, the Lindley and TK approximations are employed to obtain the approximate Bayes estimates of the parameters of $IG(\mu,\lambda)$.
Lindley (1980) suggested an approximation for ratios of integrals of the form

$$I(x)=\frac{\int u(\theta)\,e^{L(\theta)+\rho(\theta)}\,d\theta}{\int e^{L(\theta)+\rho(\theta)}\,d\theta}, \tag{24}$$

where $u(\theta)$ and $\rho(\theta)$ are any functions of $\theta$ and $L(\theta)$ is the log-likelihood function. If $e^{\rho(\theta)}$ is assumed to be the prior distribution of $\theta$, then (24) becomes the posterior expectation of $u(\theta)$, that is,

$$I(x)=E\left[u(\theta)\mid x\right], \tag{25}$$

where $\rho(\theta)$ is the logarithm of the prior.
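Before turning to the two-parameter expansion, the one-parameter version of Lindley's formula is instructive: $E[u(\theta)\mid x]\approx u+\tfrac{1}{2}\left(u''+2u'\rho'\right)\sigma^{2}+\tfrac{1}{2}u'L'''\sigma^{4}$ with $\sigma^{2}=-1/L''$, all terms evaluated at the MLE $\hat{\theta}$. A Python sketch (the notation is ours):

```python
def lindley_1d(u, du, d2u, d2L, d3L, drho, theta_hat):
    """One-parameter Lindley approximation of E[u(theta) | x]:
    u + 0.5*(u'' + 2*u'*rho')*s2 + 0.5*u'*L'''*s2^2, s2 = -1/L''(theta_hat)."""
    s2 = -1.0 / d2L(theta_hat)
    return (u(theta_hat)
            + 0.5 * (d2u(theta_hat) + 2.0 * du(theta_hat) * drho(theta_hat)) * s2
            + 0.5 * du(theta_hat) * d3L(theta_hat) * s2 * s2)
```

As a sanity check, for an exponential likelihood with $L(\theta)=n\ln\theta-S\theta$, a flat prior and $u(\theta)=\theta$, the approximation returns $(n+1)/S$, which is the exact mean of the Gamma$(n+1,S)$ posterior.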
Applying Lindley's approximation method to the two-parameter case, say $(\mu,\lambda)$, $E[u(\mu,\lambda)\mid x]$ can be asymptotically obtained as
(26) | |||
where all terms are evaluated at the MLEs $(\hat{\mu},\hat{\lambda})$.
The Lindley approximation method is often used to evaluate the integrals in (25); however, this method requires the third derivatives of the log-likelihood function. An alternative approximation method was derived by Tierney and Kadane (1986). Defining the functions $\delta(\theta)=[L(\theta)+\rho(\theta)]/n$ and $\delta^{*}(\theta)=\delta(\theta)+\ln u(\theta)/n$, respectively, the posterior expectation of $u(\theta)$ can be expressed as

$$E\left[u(\theta)\mid x\right]=\frac{\int e^{n\delta^{*}(\theta)}\,d\theta}{\int e^{n\delta(\theta)}\,d\theta}. \tag{27}$$

Then, applying the TK method, the approximate posterior expectation of $u(\theta)$ in (27) is obtained as follows

$$E\left[u(\theta)\mid x\right]\approx\sqrt{\frac{|\Sigma^{*}|}{|\Sigma|}}\,\exp\left\{n\left[\delta^{*}(\hat{\theta}^{*})-\delta(\hat{\theta})\right]\right\}, \tag{28}$$

where $\delta(\theta)$ and $\delta^{*}(\theta)$ attain their maximum values at $\hat{\theta}$ and $\hat{\theta}^{*}$, respectively, and $\Sigma$ and $\Sigma^{*}$ are the inverses of the negative Hessian matrices of $\delta(\theta)$ and $\delta^{*}(\theta)$ at $\hat{\theta}$ and $\hat{\theta}^{*}$, respectively.
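The TK recipe in (27)-(28) can be prototyped in one dimension with a crude grid maximizer and finite-difference second derivatives (everything below is our illustrative scaffolding, not the paper's MATLAB code):

```python
import math

def tk_expectation(log_post, u, lo, hi, n, steps=20000):
    """One-dimensional Tierney-Kadane approximation of E[u(theta) | x].
    log_post = L(theta) + rho(theta); delta = log_post / n and
    delta* = delta + log(u)/n, as in (27)-(28). Assumes u > 0 on [lo, hi]."""
    def argmax(f):
        best_t, best_v = lo, f(lo)
        for i in range(1, steps + 1):
            t = lo + (hi - lo) * i / steps
            v = f(t)
            if v > best_v:
                best_t, best_v = t, v
        return best_t

    def second_deriv(f, t, h=1e-4):
        return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

    delta = lambda t: log_post(t) / n
    dstar = lambda t: delta(t) + math.log(u(t)) / n
    t_hat, t_star = argmax(delta), argmax(dstar)
    # sigma^2 = -1 / (n * delta''), evaluated at the respective maxima
    s_hat = math.sqrt(-1.0 / (n * second_deriv(delta, t_hat)))
    s_star = math.sqrt(-1.0 / (n * second_deriv(dstar, t_star)))
    return (s_star / s_hat) * math.exp(n * (dstar(t_star) - delta(t_hat)))
```

For an exponential likelihood with $n=20$, $S=10$ and a flat prior, the exact posterior mean of $\theta$ is $2.1$ and the TK approximation gives about $2.10$; unlike the Lindley method, only first- and second-order information about $\delta$ and $\delta^{*}$ is required.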
In this section, the extended Jeffrey’s prior is taken into the consideration as the prior distribution and Bayes estimators of the parameters of the IG distribution are approximately computed by using Lindley and TK methods under the considered loss functions.
For our estimation problem with $(\mu,\lambda)$, we derive the required expressions by using the log-likelihood function in (4) and the extension of Jeffrey's prior in (16),

where the quantities involved are as given in (7) and (8).
Substituting the above expressions into (26), the approximate posterior expectation of $u(\mu,\lambda)$ can be expressed in the following form
(29) | ||
• Let $u(\mu,\lambda)=e^{-a\mu}$; then the approximate posterior expectation of $e^{-a\mu}$ is obtained from (29). The approximate Bayes estimator of $\mu$ under the LINEX loss function is
(30) |
• Let $u(\mu,\lambda)=e^{-a\lambda}$; then the approximate posterior expectation of $e^{-a\lambda}$ is obtained from (29). The approximate Bayes estimator of $\lambda$ under the LINEX loss function is
(31) |
• Let $u(\mu,\lambda)=\mu^{-k}$; then the approximate posterior expectation of $\mu^{-k}$ is obtained from (29). The approximate Bayes estimator of $\mu$ under GELF is
(32) |
• Let $u(\mu,\lambda)=\lambda^{-k}$; then the approximate posterior expectation of $\lambda^{-k}$ is obtained from (29). The approximate Bayes estimator of $\lambda$ under GELF is
(33) |
We note that the approximate Bayes estimators of $\mu$ and $\lambda$ under SELF are obtained from the corresponding GELF estimators with $k=-1$.
For our case with $(\mu,\lambda)$, we obtain the following expression by using the log-likelihood function in (4) and the extended Jeffrey's prior in (16):
(34) | |||
Then, the maximizers of $\delta(\mu,\lambda)$ are obtained by solving the following equations:
(35) | ||
Further, we obtain the second-order derivatives of $\delta(\mu,\lambda)$ with respect to $\mu$ and $\lambda$ as
(36) | ||
Next, using the expressions in (36), we compute the inverse of the negative Hessian matrix of $\delta(\mu,\lambda)$ at its maximum as follows
(37) |
Let $u(\mu,\lambda)=e^{-a\mu}$; then the function $\delta^{*}(\mu,\lambda)$ is defined as
(38) |
The maximizers of $\delta^{*}(\mu,\lambda)$ are obtained by solving the following equations:
(39) | ||
Then, we obtain the inverse of the negative Hessian matrix of $\delta^{*}(\mu,\lambda)$ at its maximum as
(40) |
Thus, the approximate Bayes estimator of $\mu$ under the LINEX loss function is derived as
(41) | |||
We obtain the approximate Bayes estimator of $\lambda$ under the LINEX loss function likewise.
Let $u(\mu,\lambda)=\mu^{-k}$; then the function $\delta^{*}(\mu,\lambda)$ is defined as
(42) |
We obtain the maximizers of $\delta^{*}(\mu,\lambda)$ by solving the following equations:
(43) | ||
Taking the second-order derivatives of $\delta^{*}(\mu,\lambda)$, we compute the inverse of the negative Hessian matrix of $\delta^{*}(\mu,\lambda)$ at its maximum as
(44) |
Hence, the approximate Bayes estimator of $\mu$ under GELF is obtained as follows
(45) | |||
Similarly, we derive the approximate Bayes estimator of $\lambda$ under GELF.
It is noted that the approximate Bayes estimators of $\mu$ and $\lambda$ under SELF are derived from the corresponding GELF estimators with $k=-1$. Besides, the same procedure as in Section 4 is repeated for the vague prior and Jeffrey's prior.
We conduct a comprehensive simulation study to investigate the performance of the classical and Bayesian estimators of the parameters in terms of bias and relative efficiency (RE). In the simulation study, we consider sample sizes $n=25$, $50$ and $100$ to illustrate the effect of small, moderate and large samples on the estimators. Four combinations of true values of $(\mu,\lambda)$ are considered, together with several constants $c$ for the extended Jeffrey's prior and several values of the loss parameter for the LINEX loss function and GELF.
Table 1 Bias values of estimates for $\mu$
Lindley | Tierney-Kadane | ||||||||||||
c | |||||||||||||
k | K | ||||||||||||
n=25 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | |||||
.007 | .007 | .067 | .073 | .061 | .065 | .052 | .078 | .086 | .060 | .074 | .053 | ||
1 | .007 | .007 | .037 | .044 | .030 | .035 | .020 | .035 | .042 | .022 | .032 | .015 | |
2 | .007 | .007 | .007 | .015 | .001 | .005 | .010 | .001 | .007 | .010 | .001 | .017 | |
3 | .007 | .007 | .023 | .015 | .030 | .025 | .038 | .031 | .026 | .040 | .033 | .048 | |
.004 | .004 | .034 | .037 | .030 | .033 | .026 | .038 | .042 | .031 | .036 | .027 | ||
1 | .004 | .004 | .019 | .023 | .015 | .018 | .010 | .019 | .022 | .013 | .017 | .009 | |
2 | .004 | .004 | .004 | .008 | .000 | .003 | .005 | .002 | .005 | .003 | .001 | .008 | |
3 | .004 | .004 | .011 | .007 | .015 | .012 | .019 | .015 | .012 | .020 | .017 | .024 | |
.002 | .002 | .539 | .571 | .369 | .528 | .438 | .792 | .382 | .216 | .714 | .403 | ||
1 | .002 | .002 | .268 | .399 | .043 | .249 | .115 | .215 | .319 | .064 | .184 | .026 | |
2 | .002 | .002 | .002 | .198 | .203 | .025 | .152 | .088 | .099 | .276 | .109 | .223 | |
3 | .002 | .002 | .273 | .047 | .404 | .291 | .377 | .318 | .169 | .458 | .334 | .425 | |
.014 | .014 | .281 | .336 | .186 | .273 | .216 | .345 | .502 | .148 | .324 | .216 | ||
1 | .014 | .014 | .147 | .231 | .041 | .137 | .071 | .134 | .283 | .001 | .120 | .043 | |
2 | .014 | .014 | .014 | .115 | .088 | .003 | .062 | .018 | .098 | .127 | .030 | .095 | |
3 | .014 | .014 | .120 | .013 | .203 | .130 | .184 | .153 | .061 | .245 | .163 | .222 | |
n50 | |||||||||||||
.005 | .005 | .035 | .038 | .031 | .034 | .026 | .037 | .041 | .031 | .036 | .027 | ||
1 | .005 | .005 | .020 | .024 | .016 | .019 | .011 | .019 | .023 | .014 | .018 | .010 | |
2 | .005 | .005 | .005 | .009 | .001 | .004 | .004 | .003 | .006 | .002 | .002 | .006 | |
3 | .005 | .005 | .010 | .007 | .014 | .012 | .019 | .013 | .009 | .017 | .014 | .021 | |
.000 | .000 | .015 | .017 | .013 | .014 | .011 | .016 | .018 | .013 | .015 | .011 | ||
1 | .000 | .000 | .007 | .009 | .006 | .007 | .003 | .007 | .009 | .005 | .007 | .003 | |
2 | .000 | .000 | .000 | .002 | .002 | .001 | .004 | .001 | .001 | .003 | .001 | .005 | |
3 | .000 | .000 | .007 | .006 | .009 | .008 | .012 | .008 | .007 | .011 | .009 | .013 | |
.013 | .013 | .257 | .313 | .163 | .249 | .191 | .303 | .474 | .121 | .285 | .184 | ||
1 | .013 | .013 | .122 | .206 | .016 | .112 | .044 | .108 | .251 | .024 | .094 | .018 | |
2 | .013 | .013 | .013 | .089 | .114 | .024 | .090 | .041 | .069 | .146 | .052 | .116 | |
3 | .013 | .013 | .148 | .042 | .232 | .158 | .213 | .168 | .082 | .256 | .178 | .233 | |
.008 | .008 | .125 | .164 | .077 | .121 | .089 | .139 | .211 | .067 | .132 | .090 | ||
1 | .008 | .008 | .059 | .105 | .008 | .053 | .020 | .056 | .115 | .005 | .049 | .013 | |
2 | .008 | .008 | .008 | .043 | .058 | .013 | .046 | .017 | .037 | .071 | .022 | .056 | |
3 | .008 | .008 | .074 | .023 | .120 | .080 | .110 | .085 | .037 | .134 | .090 | .121 | |
n100 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | |||||
.001 | .001 | .014 | .016 | .012 | .014 | .010 | .015 | .017 | .012 | .014 | .010 | ||
1 | .001 | .001 | .007 | .009 | .005 | .006 | .002 | .007 | .008 | .004 | .006 | .002 | |
2 | .001 | .001 | .001 | .001 | .003 | .002 | .005 | .001 | .001 | .004 | .002 | .006 | |
3 | .001 | .001 | .008 | .007 | .010 | .009 | .013 | .009 | .007 | .011 | .010 | .013 | |
.001 | .001 | .007 | .007 | .006 | .006 | .004 | .007 | .008 | .006 | .006 | .004 | ||
1 | .001 | .001 | .003 | .004 | .002 | .002 | .001 | .003 | .004 | .002 | .002 | .001 | |
2 | .001 | .001 | .001 | .000 | .002 | .001 | .003 | .001 | .000 | .002 | .002 | .003 | |
3 | .001 | .001 | .005 | .004 | .006 | .005 | .007 | .005 | .004 | .006 | .005 | .007 | |
.006 | .006 | .131 | .170 | .081 | .126 | .094 | .141 | .212 | .070 | .134 | .092 | ||
1 | .006 | .006 | .062 | .109 | .010 | .057 | .023 | .059 | .119 | .003 | .052 | .016 | |
2 | .006 | .006 | .006 | .046 | .057 | .012 | .045 | .014 | .039 | .069 | .020 | .053 | |
3 | .006 | .006 | .074 | .022 | .121 | .080 | .110 | .081 | .034 | .131 | .086 | .118 | |
.015 | .015 | .083 | .106 | .058 | .081 | .064 | .087 | .117 | .055 | .083 | .064 | ||
1 | .015 | .015 | .049 | .074 | .023 | .046 | .029 | .048 | .076 | .019 | .045 | .027 | |
2 | .015 | .015 | .015 | .041 | .011 | .012 | .005 | .012 | .039 | .015 | .009 | .008 | |
3 | .015 | .015 | .019 | .007 | .044 | .022 | .039 | .022 | .003 | .049 | .025 | .042 |
For 5000 repetitions, the performance of the classical and Bayesian estimators is measured by the bias and the RE (see Usta (2013)), given as follows:

$$\text{Bias}(\hat{\theta})=\frac{1}{5000}\sum_{i=1}^{5000}\hat{\theta}^{(i)}-\theta,\qquad \text{RE}(\hat{\theta})=\frac{\text{MSE}(\hat{\theta}_{\mathrm{MLE}})}{\text{MSE}(\hat{\theta})},$$

where $\hat{\theta}^{(i)}$ is the estimate of $\theta$ for the $i$-th simulated sample and MSE denotes the estimated mean squared error, so that RE values greater than one favour the considered estimator over the MLE. All programs and random number generation are carried out in MATLAB (R2019a).
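The simulation loop can be reproduced outside MATLAB. The sketch below generates $IG(\mu,\lambda)$ variates with the Michael, Schucany and Haas (1976) transformation and computes the two performance measures; the RE convention (MSE of the MLE over MSE of the candidate) is our reading of Tables 3-4:

```python
import math
import random

def sample_ig(mu, lam, rng):
    """One IG(mu, lam) draw via the Michael-Schucany-Haas method."""
    y = rng.gauss(0.0, 1.0) ** 2
    x = mu + (mu * mu * y) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + mu * mu * y * y)
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def bias_and_re(estimates, true_value, mle_mse):
    """Monte Carlo bias and relative efficiency of an estimator."""
    n = len(estimates)
    bias = sum(estimates) / n - true_value
    mse = sum((e - true_value) ** 2 for e in estimates) / n
    return bias, mle_mse / mse  # RE > 1 favors this estimator over the MLE
```

A typical run draws 5000 samples of size $n$, applies each estimator to every sample, and then summarizes the 5000 estimates with `bias_and_re`.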
Simulation results are given in Tables 1-4 for all considered combinations of $(\mu,\lambda)$ and $n$. Tables 1-2 show the biases of the estimates of $\mu$ and $\lambda$, respectively, and Tables 3-4 present the REs of the estimates. It is to be noted that, in these tables, the indicated columns stand for the vague and Jeffrey's priors, respectively.
Table 2 Bias values of estimates for $\lambda$
Lindley | Tierney-Kadane | ||||||||||||
c | |||||||||||||
k | k | ||||||||||||
n25 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | |||||
.255 | .016 | .164 | .327 | .024 | .143 | .028 | .164 | .345 | .019 | .141 | .006 | ||
1 | .255 | .016 | .255 | .409 | .101 | .232 | .106 | .261 | .451 | .111 | .239 | .104 | |
2 | .255 | .016 | .164 | .327 | .024 | .143 | .028 | .170 | .353 | .027 | .148 | .013 | |
3 | .255 | .016 | .074 | .240 | .048 | .054 | .044 | .072 | .247 | .066 | .050 | .086 | |
.477 | .060 | .298 | .862 | .144 | .255 | .028 | .297 | 1.135 | .228 | .252 | .016 | ||
1 | .477 | .060 | .477 | .985 | .031 | .432 | .182 | .484 | 1.358 | .063 | .439 | .171 | |
2 | .477 | .060 | .298 | .862 | .144 | .255 | .028 | .304 | 1.144 | .220 | .260 | .008 | |
3 | .477 | .060 | .119 | .726 | .248 | .079 | .116 | .117 | .922 | .385 | .073 | .196 | |
.296 | .020 | .204 | .372 | .059 | .182 | .065 | .203 | .626 | .053 | .181 | .043 | ||
1 | .296 | .020 | .296 | .455 | .136 | .273 | .144 | .313 | .553 | .158 | .291 | .154 | |
2 | .296 | .020 | .204 | .372 | .059 | .182 | .065 | .216 | .416 | .067 | .194 | .056 | |
3 | .296 | .020 | .112 | .284 | .014 | .091 | .009 | .102 | .285 | .041 | .079 | .060 | |
.595 | .044 | .412 | 1.000 | .048 | .367 | .134 | .410 | 1.431 | .140 | .365 | .089 | ||
1 | .595 | .044 | .595 | 1.125 | .066 | .549 | .292 | .615 | 1.590 | .043 | .570 | .295 | |
2 | .595 | .044 | .412 | 1.000 | .048 | .367 | .134 | .428 | 1.353 | .121 | .383 | .108 | |
3 | .595 | .044 | .228 | .862 | .153 | .186 | .014 | .220 | 1.107 | .306 | .175 | .102 | |
n50 | |||||||||||||
.123 | .004 | .081 | .151 | .015 | .070 | .012 | .081 | .153 | .015 | .070 | .006 | ||
1 | .123 | .004 | .123 | .192 | .055 | .113 | .051 | .125 | .199 | .058 | .114 | .051 | |
2 | .123 | .004 | .081 | .151 | .015 | .070 | .012 | .083 | .155 | .017 | .072 | .008 | |
3 | .123 | .004 | .038 | .109 | .023 | .028 | .026 | .038 | .110 | .026 | .028 | .036 | |
.259 | .003 | .173 | .440 | .063 | .152 | .035 | .173 | .479 | .081 | .152 | .024 | ||
1 | .259 | .003 | .259 | .512 | .005 | .237 | .114 | .260 | .573 | .001 | .239 | .111 | |
2 | .259 | .003 | .173 | .440 | .063 | .152 | .035 | .175 | .481 | .079 | .154 | .026 | |
3 | .259 | .003 | .088 | .364 | .127 | .068 | .041 | .088 | .388 | .160 | .067 | .061 | |
.127 | .001 | .084 | .155 | .018 | .074 | .015 | .084 | .168 | .018 | .073 | .010 | ||
1 | .127 | .001 | .127 | .195 | .058 | .116 | .054 | .132 | .208 | .064 | .121 | .058 | |
2 | .127 | .001 | .084 | .155 | .018 | .074 | .015 | .089 | .162 | .022 | .078 | .014 | |
3 | .127 | .001 | .042 | .112 | .020 | .031 | .023 | .040 | .112 | .024 | .030 | .034 | |
.278 | .022 | .193 | .462 | .046 | .172 | .054 | .192 | .503 | .065 | .171 | .043 | ||
1 | .278 | .022 | .278 | .534 | .022 | .257 | .133 | .284 | .601 | .022 | .262 | .134 | |
2 | .278 | .022 | .193 | .462 | .046 | .172 | .054 | .198 | .509 | .059 | .177 | .048 | |
3 | .278 | .022 | .107 | .387 | .110 | .087 | .023 | .107 | .412 | .145 | .086 | .043 | |
n100 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | |||||
.052 | .010 | .031 | .063 | .000 | .026 | .004 | .031 | .064 | .000 | .026 | .005 | ||
1 | .052 | .010 | .052 | .083 | .020 | .046 | .016 | .052 | .085 | .020 | .047 | .016 | |
2 | .052 | .010 | .031 | .063 | .000 | .026 | .004 | .032 | .064 | .000 | .026 | .004 | |
3 | .052 | .010 | .011 | .043 | .020 | .006 | .023 | .011 | .043 | .020 | .005 | .025 | |
.083 | .039 | .042 | .167 | .075 | .032 | .026 | .042 | .175 | .079 | .032 | .029 | ||
1 | .083 | .039 | .083 | .205 | .038 | .073 | .013 | .084 | .217 | .039 | .074 | .012 | |
2 | .083 | .039 | .042 | .167 | .075 | .032 | .026 | .043 | .175 | .078 | .033 | .029 | |
3 | .083 | .039 | .002 | .128 | .111 | .008 | .065 | .002 | .133 | .118 | .009 | .070 | |
.043 | .019 | .022 | .054 | .009 | .017 | .012 | .022 | .054 | .009 | .017 | .014 | ||
1 | .043 | .019 | .043 | .074 | .011 | .037 | .007 | .044 | .077 | .013 | .039 | .008 | |
2 | .043 | .019 | .022 | .054 | .009 | .017 | .012 | .023 | .056 | .008 | .018 | .012 | |
3 | .043 | .019 | .002 | .034 | .028 | .003 | .032 | .002 | .034 | .029 | .004 | .034 | |
.097 | .026 | .056 | .182 | .062 | .046 | .013 | .056 | .189 | .066 | .046 | .016 | ||
1 | .097 | .026 | .097 | .220 | .025 | .087 | .027 | .099 | .233 | .025 | .089 | .027 | |
2 | .097 | .026 | .056 | .182 | .062 | .046 | .013 | .058 | .191 | .064 | .048 | .014 | |
3 | .097 | .026 | .015 | .143 | .098 | .005 | .052 | .016 | .147 | .105 | .005 | .056 |
Based on the results in Table 1, the MLE and MVUE of $\mu$ have the smallest bias values in most of the considered cases, as expected, since the classical estimators are unbiased for $\mu$, while the Bayes estimators perform well for large sample sizes in terms of bias. Among the Bayes estimates of $\mu$, those obtained under the proposed prior perform better than the other Bayes estimates in terms of bias. Furthermore, the Bayes estimators under the asymmetric loss functions show good performance in estimating $\mu$ in most cases. We also observe that, in general, the Bayes estimates of $\mu$ obtained using the Lindley method have smaller bias than those based on the TK method.
The results in Table 2 show that the MVUEs of the shape parameter $\lambda$ outperform the MLEs and Bayes estimators in terms of bias in some settings, whereas in the remaining cases the Bayes estimators have the smallest biases. On the other hand, when the loss parameter indicates that overestimation is more serious, the Bayes estimators of $\lambda$ under the LINEX loss function and GELF, again using the proposed prior, provide smaller biases than the other Bayes estimators. Moreover, the approximate Bayes estimators obtained using the Lindley method are preferable to those based on the TK method.
Table 3 The relative efficiency of estimators for $\mu$
Lindley | Tierney-Kadane | ||||||||||||
c | |||||||||||||
k | k | ||||||||||||
n25 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | |||||
1.00 | 1.00 | 1.483 | 1.562 | 1.394 | 1.463 | 1.320 | 1.635 | 1.773 | 1.374 | 1.581 | 1.330 | ||
1 | 1.00 | 1.00 | 1.187 | 1.266 | 1.113 | 1.172 | 1.069 | 1.167 | 1.241 | 1.054 | 1.143 | 1.034 | |
2 | 1.00 | 1.00 | 1.000 | 1.054 | 0.951 | 0.990 | 0.936 | 0.970 | 1.005 | 0.916 | 0.961 | 0.921 | |
3 | 1.00 | 1.00 | 0.911 | 0.941 | 0.897 | 0.911 | 0.911 | 0.906 | 0.921 | 0.901 | 0.906 | 0.926 | |
1.00 | 1.00 | 1.245 | 1.294 | 1.196 | 1.235 | 1.157 | 1.294 | 1.363 | 1.206 | 1.275 | 1.167 | ||
1 | 1.00 | 1.00 | 1.098 | 1.137 | 1.069 | 1.088 | 1.039 | 1.098 | 1.137 | 1.049 | 1.088 | 1.029 | |
2 | 1.00 | 1.00 | 1.000 | 1.029 | 0.980 | 1.000 | 0.971 | 0.990 | 1.020 | 0.961 | 0.990 | 0.961 | |
3 | 1.00 | 1.00 | 0.961 | 0.971 | 0.941 | 0.951 | 0.951 | 0.951 | 0.961 | 0.941 | 0.951 | 0.961 | |
1.00 | 1.00 | 2.446 | 2.425 | 1.677 | 2.421 | 2.146 | 4.167 | 1.507 | 1.159 | 3.547 | 1.941 | ||
1 | 1.00 | 1.00 | 1.542 | 1.935 | 0.902 | 1.493 | 1.194 | 1.366 | 1.533 | 0.802 | 1.292 | 1.007 | |
2 | 1.00 | 1.00 | 1.000 | 1.484 | 0.749 | 0.971 | 0.860 | 0.887 | 1.327 | 0.767 | 0.872 | 0.832 | |
3 | 1.00 | 1.00 | 0.822 | 1.116 | 0.847 | 0.824 | 0.866 | 0.846 | 1.006 | 0.897 | 0.852 | 0.912 | |
1.00 | 1.00 | 1.691 | 1.893 | 1.301 | 1.664 | 1.478 | 2.000 | 2.953 | 1.153 | 1.899 | 1.465 | ||
1 | 1.00 | 1.00 | 1.269 | 1.548 | 0.966 | 1.243 | 1.096 | 1.224 | 1.960 | 0.902 | 1.189 | 1.032 | |
2 | 1.00 | 1.00 | 1.000 | 1.256 | 0.843 | 0.984 | 0.916 | 0.949 | 1.268 | 0.833 | 0.937 | 0.892 | |
3 | 1.00 | 1.00 | 0.884 | 1.041 | 0.859 | 0.882 | 0.888 | 0.883 | 0.989 | 0.888 | 0.885 | 0.910 | |
n50 | |||||||||||||
1.00 | 1.00 | 1.240 | 1.279 | 1.192 | 1.231 | 1.154 | 1.269 | 1.327 | 1.192 | 1.250 | 1.154 | ||
1 | 1.00 | 1.00 | 1.096 | 1.135 | 1.058 | 1.087 | 1.038 | 1.096 | 1.125 | 1.048 | 1.077 | 1.029 | |
2 | 1.00 | 1.00 | 1.000 | 1.029 | 0.981 | 1.000 | 0.971 | 0.990 | 1.019 | 0.962 | 0.990 | 0.962 | |
3 | 1.00 | 1.00 | 0.952 | 0.971 | 0.942 | 0.952 | 0.952 | 0.952 | 0.962 | 0.942 | 0.952 | 0.952 | |
1.00 | 1.00 | 1.106 | 1.128 | 1.085 | 1.106 | 1.064 | 1.128 | 1.149 | 1.085 | 1.106 | 1.064 | ||
1 | 1.00 | 1.00 | 1.043 | 1.064 | 1.021 | 1.043 | 1.000 | 1.043 | 1.064 | 1.021 | 1.043 | 1.000 | |
2 | 1.00 | 1.00 | 1.000 | 1.000 | 0.979 | 1.000 | 0.979 | 1.000 | 1.000 | 0.979 | 1.000 | 0.979 | |
3 | 1.00 | 1.00 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | |
1.00 | 1.00 | 1.686 | 1.895 | 1.288 | 1.658 | 1.468 | 1.918 | 2.950 | 1.136 | 1.828 | 1.430 | ||
1 | 1.00 | 1.00 | 1.258 | 1.538 | 0.961 | 1.232 | 1.089 | 1.210 | 1.877 | 0.905 | 1.177 | 1.030 | |
2 | 1.00 | 1.00 | 1.000 | 1.247 | 0.861 | 0.986 | 0.931 | 0.957 | 1.259 | 0.862 | 0.947 | 0.914 | |
3 | 1.00 | 1.00 | 0.913 | 1.048 | 0.915 | 0.913 | 0.938 | 0.917 | 1.004 | 0.942 | 0.921 | 0.957 | |
1.00 | 1.00 | 1.302 | 1.455 | 1.124 | 1.287 | 1.195 | 1.355 | 1.753 | 1.086 | 1.328 | 1.194 | ||
1 | 1.00 | 1.00 | 1.117 | 1.264 | 0.985 | 1.105 | 1.041 | 1.106 | 1.330 | 0.963 | 1.093 | 1.025 | |
2 | 1.00 | 1.00 | 1.000 | 1.116 | 0.927 | 0.994 | 0.965 | 0.987 | 1.112 | 0.921 | 0.980 | 0.957 | |
3 | 1.00 | 1.00 | 0.953 | 1.020 | 0.937 | 0.952 | 0.958 | 0.952 | 1.006 | 0.947 | 0.952 | 0.964 | |
n100 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | |||||
1.00 | 1.00 | 1.104 | 1.125 | 1.083 | 1.104 | 1.063 | 1.104 | 1.146 | 1.083 | 1.104 | 1.063 | ||
1 | 1.00 | 1.00 | 1.042 | 1.063 | 1.021 | 1.042 | 1.021 | 1.042 | 1.063 | 1.021 | 1.042 | 1.021 | |
2 | 1.00 | 1.00 | 1.000 | 1.021 | 1.000 | 1.000 | 0.979 | 1.000 | 1.000 | 0.979 | 1.000 | 0.979 | |
3 | 1.00 | 1.00 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | 0.979 | |
1.00 | 1.00 | 1.042 | 1.083 | 1.042 | 1.042 | 1.042 | 1.042 | 1.083 | 1.042 | 1.042 | 1.042 | ||
1 | 1.00 | 1.00 | 1.042 | 1.042 | 1.000 | 1.000 | 1.000 | 1.042 | 1.042 | 1.000 | 1.000 | 1.000 | |
2 | 1.00 | 1.00 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | |
3 | 1.00 | 1.00 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | |
1.00 | 1.00 | 1.328 | 1.495 | 1.137 | 1.311 | 1.210 | 1.372 | 1.768 | 1.096 | 1.344 | 1.203 | ||
1 | 1.00 | 1.00 | 1.124 | 1.283 | 0.986 | 1.112 | 1.043 | 1.114 | 1.345 | 0.964 | 1.098 | 1.026 | |
2 | 1.00 | 1.00 | 1.000 | 1.121 | 0.926 | 0.993 | 0.963 | 0.986 | 1.114 | 0.921 | 0.980 | 0.956 | |
3 | 1.00 | 1.00 | 0.952 | 1.017 | 0.942 | 0.952 | 0.961 | 0.951 | 1.003 | 0.949 | 0.951 | 0.964 | |
1.00 | 1.00 | 1.202 | 1.317 | 1.091 | 1.190 | 1.128 | 1.216 | 1.389 | 1.078 | 1.203 | 1.128 | ||
1 | 1.00 | 1.00 | 1.081 | 1.179 | 0.997 | 1.073 | 1.029 | 1.078 | 1.195 | 0.989 | 1.068 | 1.023 | |
2 | 1.00 | 1.00 | 1.000 | 1.075 | 0.948 | 0.995 | 0.971 | 0.993 | 1.072 | 0.943 | 0.989 | 0.966 | |
3 | 1.00 | 1.00 | 0.958 | 1.003 | 0.938 | 0.956 | 0.953 | 0.956 | 0.997 | 0.940 | 0.954 | 0.953 |
We observe from Table 3 that, as the sample size increases, the MSEs of all estimates of $\mu$ decrease. The results also show that almost all Bayes estimators under the considered loss functions using the proposed prior outperform the others for all sample sizes. In particular, the Bayes estimators under the LINEX loss function using the proposed prior are generally the best in terms of RE. Furthermore, the Bayes estimates obtained through the TK method compete quite well with those obtained through the Lindley method.
Table 4 The relative efficiency of estimators for $\lambda$
Lindley | Tierney-Kadane | ||||||||||||
C | |||||||||||||
k | k | ||||||||||||
n25 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | |||||
1.00 | 1.460 | 1.260 | 1.862 | 0.919 | 1.218 | 1.047 | 1.259 | 2.013 | 0.889 | 1.217 | 1.022 | ||
1 | 1.00 | 1.460 | 1.460 | 2.096 | 0.996 | 1.406 | 1.155 | 1.470 | 2.379 | 0.997 | 1.415 | 1.145 | |
2 | 1.00 | 1.460 | 1.260 | 1.862 | 0.919 | 1.218 | 1.047 | 1.266 | 2.029 | 0.891 | 1.223 | 1.024 | |
3 | 1.00 | 1.460 | 1.106 | 1.647 | 0.878 | 1.079 | 0.976 | 1.106 | 1.724 | 0.828 | 1.076 | 0.953 | |
1.00 | 1.430 | 1.243 | 2.253 | 0.852 | 1.204 | 1.043 | 1.241 | 3.639 | 0.716 | 1.202 | 1.020 | ||
1 | 1.00 | 1.430 | 1.430 | 2.417 | 0.854 | 1.379 | 1.145 | 1.433 | 4.225 | 0.744 | 1.383 | 1.132 | |
2 | 1.00 | 1.430 | 1.243 | 2.253 | 0.852 | 1.204 | 1.043 | 1.245 | 3.652 | 0.714 | 1.205 | 1.021 | |
3 | 1.00 | 1.430 | 1.099 | 2.090 | 0.866 | 1.073 | 0.978 | 1.099 | 3.135 | 0.721 | 1.070 | 0.955 | |
1.00 | 1.501 | 1.289 | 1.937 | 0.917 | 1.245 | 1.054 | 1.288 | 2.960 | 0.876 | 1.243 | 1.026 | ||
1 | 1.00 | 1.501 | 1.501 | 2.174 | 0.999 | 1.444 | 1.176 | 1.532 | 2.669 | 1.000 | 1.473 | 1.177 | |
2 | 1.00 | 1.501 | 1.289 | 1.937 | 0.917 | 1.245 | 1.054 | 1.309 | 2.216 | 0.887 | 1.262 | 1.037 | |
3 | 1.00 | 1.501 | 1.122 | 1.716 | 0.868 | 1.091 | 0.971 | 1.120 | 1.841 | 0.806 | 1.086 | 0.943 | |
1.00 | 1.503 | 1.290 | 2.393 | 0.845 | 1.246 | 1.055 | 1.289 | 6.087 | 0.680 | 1.244 | 1.026 | ||
1 | 1.00 | 1.503 | 1.503 | 2.573 | 0.858 | 1.446 | 1.177 | 1.518 | 6.678 | 0.728 | 1.460 | 1.169 | |
2 | 1.00 | 1.503 | 1.290 | 2.393 | 0.845 | 1.246 | 1.055 | 1.301 | 5.853 | 0.680 | 1.255 | 1.031 | |
3 | 1.00 | 1.503 | 1.123 | 2.214 | 0.847 | 1.091 | 0.971 | 1.122 | 5.101 | 0.672 | 1.088 | 0.940 | |
n = 50 | | | | | | | | | | | | |
1.00 | 1.219 | 1.124 | 1.371 | 0.958 | 1.104 | 1.016 | 1.124 | 1.388 | 0.954 | 1.104 | 1.010 | ||
1 | 1.00 | 1.219 | 1.219 | 1.493 | 0.997 | 1.193 | 1.071 | 1.222 | 1.534 | 0.995 | 1.195 | 1.068 | |
2 | 1.00 | 1.219 | 1.124 | 1.371 | 0.958 | 1.104 | 1.016 | 1.126 | 1.392 | 0.954 | 1.106 | 1.011 | |
3 | 1.00 | 1.219 | 1.051 | 1.265 | 0.928 | 1.037 | 0.981 | 1.051 | 1.272 | 0.918 | 1.036 | 0.976 | |
1.00 | 1.238 | 1.135 | 1.669 | 0.887 | 1.113 | 1.018 | 1.134 | 1.818 | 0.856 | 1.112 | 1.012 | ||
1 | 1.00 | 1.238 | 1.238 | 1.805 | 0.901 | 1.210 | 1.077 | 1.239 | 2.034 | 0.881 | 1.211 | 1.074 | |
2 | 1.00 | 1.238 | 1.135 | 1.669 | 0.887 | 1.113 | 1.018 | 1.135 | 1.821 | 0.855 | 1.113 | 1.012 | |
3 | 1.00 | 1.238 | 1.055 | 1.543 | 0.889 | 1.040 | 0.981 | 1.055 | 1.634 | 0.852 | 1.039 | 0.974 | |
1.00 | 1.222 | 1.127 | 1.374 | 0.960 | 1.107 | 1.017 | 1.127 | 1.400 | 0.955 | 1.106 | 1.011 | ||
1 | 1.00 | 1.222 | 1.222 | 1.496 | 0.999 | 1.196 | 1.073 | 1.230 | 1.544 | 0.994 | 1.204 | 1.074 | |
2 | 1.00 | 1.222 | 1.127 | 1.374 | 0.960 | 1.107 | 1.017 | 1.133 | 1.401 | 0.958 | 1.112 | 1.013 | |
3 | 1.00 | 1.222 | 1.053 | 1.267 | 0.929 | 1.038 | 0.981 | 1.053 | 1.274 | 0.920 | 1.039 | 0.976 | |
1.00 | 1.239 | 1.138 | 1.666 | 0.880 | 1.116 | 1.019 | 1.138 | 1.824 | 0.845 | 1.116 | 1.012 | ||
1 | 1.00 | 1.239 | 1.239 | 1.792 | 0.897 | 1.212 | 1.080 | 1.244 | 2.036 | 0.874 | 1.216 | 1.079 | |
2 | 1.00 | 1.239 | 1.138 | 1.666 | 0.880 | 1.116 | 1.019 | 1.142 | 1.833 | 0.844 | 1.119 | 1.014 | |
3 | 1.00 | 1.239 | 1.058 | 1.546 | 0.877 | 1.043 | 0.978 | 1.059 | 1.648 | 0.835 | 1.042 | 0.971 | |
n = 100 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | .75 | | | | |
1.00 | 1.093 | 1.053 | 1.158 | 0.977 | 1.044 | 1.006 | 1.053 | 1.160 | 0.976 | 1.043 | 1.005 | ||
1 | 1.00 | 1.093 | 1.093 | 1.214 | 1.000 | 1.082 | 1.029 | 1.094 | 1.221 | 1.000 | 1.082 | 1.029 | |
2 | 1.00 | 1.093 | 1.053 | 1.158 | 0.977 | 1.044 | 1.006 | 1.053 | 1.161 | 0.976 | 1.044 | 1.005 | |
3 | 1.00 | 1.093 | 1.022 | 1.111 | 0.964 | 1.016 | 0.993 | 1.022 | 1.112 | 0.962 | 1.016 | 0.992 | |
1.00 | 1.078 | 1.042 | 1.249 | 0.941 | 1.035 | 1.004 | 1.042 | 1.273 | 0.934 | 1.035 | 1.003 | ||
1 | 1.00 | 1.000 | 1.000 | 1.212 | 0.875 | 0.990 | 0.948 | 1.000 | 1.249 | 0.871 | 0.991 | 0.947 | |
2 | 1.00 | 1.000 | 0.966 | 1.158 | 0.873 | 0.959 | 0.931 | 0.966 | 1.181 | 0.866 | 0.959 | 0.930 | |
3 | 1.00 | 1.000 | 0.942 | 1.110 | 0.879 | 0.938 | 0.923 | 0.942 | 1.124 | 0.870 | 0.937 | 0.922 | |
1.00 | 1.080 | 1.043 | 1.139 | 0.975 | 1.036 | 1.004 | 1.043 | 1.142 | 0.975 | 1.036 | 1.004 | ||
1 | 1.00 | 1.080 | 1.080 | 1.190 | 0.994 | 1.069 | 1.023 | 1.081 | 1.199 | 0.995 | 1.071 | 1.023 | |
2 | 1.00 | 1.080 | 1.043 | 1.139 | 0.975 | 1.036 | 1.004 | 1.044 | 1.144 | 0.975 | 1.036 | 1.002 | |
3 | 1.00 | 1.080 | 1.017 | 1.098 | 0.967 | 1.012 | 0.995 | 1.017 | 1.099 | 0.964 | 1.012 | 0.994 | |
1.00 | 1.088 | 1.049 | 1.266 | 0.938 | 1.041 | 1.005 | 1.049 | 1.291 | 0.931 | 1.041 | 1.004 | ||
1 | 1.00 | 1.088 | 1.088 | 1.327 | 0.943 | 1.077 | 1.026 | 1.089 | 1.369 | 0.938 | 1.078 | 1.026 | |
2 | 1.00 | 1.088 | 1.049 | 1.266 | 0.938 | 1.041 | 1.005 | 1.049 | 1.293 | 0.930 | 1.041 | 1.004 | |
3 | 1.00 | 1.088 | 1.019 | 1.212 | 0.941 | 1.014 | 0.993 | 1.019 | 1.227 | 0.932 | 1.014 | 0.992 |
The results in Table 4 show that, for each estimator, the MSE decreases as the sample size increases. It is also observed that the MVUE performs better than the MLE in terms of RE. Considering the REs, however, the Bayes estimators under the LINEX loss function present the best results for all sample sizes. Among the Bayes estimators, those under the proposed prior generally attain the most favorable REs. We further observe that the Bayes estimates computed by the TK approximation method, in general, compete well with those obtained by Lindley's approximation for estimating the shape parameter.
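The RE comparison between the MLE and the MVUE of the shape parameter can be reproduced with a short Monte Carlo sketch. The sampler below uses the Michael–Schucany–Haas transformation for inverse Gaussian variates; the parameter values, sample size and replication count are illustrative choices, not the exact settings of the study.

```python
import math
import random


def rinvgauss(mu, lam, rng):
    """One IG(mu, lam) variate via the Michael-Schucany-Haas method."""
    v = rng.gauss(0.0, 1.0) ** 2
    y = mu + mu * mu * v / (2 * lam) \
        - (mu / (2 * lam)) * math.sqrt(4 * mu * lam * v + (mu * v) ** 2)
    return y if rng.random() <= mu / (mu + y) else mu * mu / y


def lambda_estimates(x):
    """MLE and MVUE of the IG shape parameter lambda from a sample x."""
    n = len(x)
    xbar = sum(x) / n
    v = sum(1.0 / xi for xi in x) - n / xbar   # V = sum(1/x_i - 1/xbar)
    return n / v, (n - 3) / v                  # MLE, MVUE

rng = random.Random(1)
mu, lam, n, reps = 1.0, 1.0, 25, 10000
se_mle = se_mvue = 0.0
for _ in range(reps):
    sample = [rinvgauss(mu, lam, rng) for _ in range(n)]
    mle, mvue = lambda_estimates(sample)
    se_mle += (mle - lam) ** 2
    se_mvue += (mvue - lam) ** 2

# RE of the MVUE relative to the MLE (values > 1 favour the MVUE);
# for n = 25 this lands roughly in the 1.4-1.5 range seen in Table 4.
re_mvue = se_mle / se_mvue
print(round(re_mvue, 2))
```

The gap between the two estimators narrows as n grows, matching the tables: both are of the form c/V for a chi-square-distributed V, and the MVUE's smaller constant trades a little variance for unbiasedness.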
In this section, a real data set is analyzed to illustrate how the considered estimators perform in a real-life context. The data set, given in Table 5, is taken from Chhikara and Folks (1989); it was also studied by Sinha (1986) and Pandey and Bandyopadhyay (2012).
Table 5 Active repair times (in hours) for an airborne communication transceiver
0.2, 0.3, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0, 1.1, 1.3, 1.5,
1.5, 1.5, 1.5, 2.0, 2.0, 2.2, 2.5, 2.7, 3.0, 3.0, 3.3, 3.3, 4.0, 4.0, 4.5, 4.7, 5.0, 5.4, 5.4, 7.0,
7.5, 8.8, 9.0, 10.3, 22.0, 24.5
The estimates of the parameters obtained by using the MVUE, the MLE and the Bayes estimators under the vague prior and the proposed extension of Jeffrey's prior with c = 3 are reported in Table 6. The results of the Kolmogorov–Smirnov (K-S) test are also given in Table 6.
Table 6 Estimation of parameters and K-S statistics for the real data set
Prior | Estimator | a / k | Estimate of the mean | Estimate of the shape | K-S | p-value
Vague Prior | MVUE | | 3.6065 | 1.5507 | 0.0743 | 0.9451
 | MLE | | 3.6065 | 1.6589 | 0.0682 | 0.9731
 | SELF | | 4.2423 | 1.6226 | 0.0804 | 0.9045
 | LINEX | 0.75 | 4.3174 | 1.6683 | 0.1115 | 0.8777
 | LINEX | -0.75 | 3.8057 | 1.5804 | 0.0686 | 0.9717
 | GELF | 0.75 | 4.1984 | 1.6137 | 0.0782 | 0.9206
 | GELF | -0.75 | 3.9784 | 1.5596 | 0.0690 | 0.9700
Ext. Jeffrey's Prior (c = 3) | SELF | | 3.3151 | 1.5839 | 0.0776 | 0.9242
 | LINEX | 0.75 | 3.1538 | 1.5428 | 0.0898 | 0.8205
 | LINEX | -0.75 | 3.4877 | 1.6288 | 0.0658 | 0.9809
 | GELF | 0.75 | 3.2060 | 1.5208 | 0.0875 | 0.8427
 | GELF | -0.75 | 3.2986 | 1.5749 | 0.0792 | 0.9130
We observe from Table 6 that results obtained from the real data set are compatible with the simulation results.
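As a quick check, the classical point estimates in Table 6 follow from the closed-form inverse Gaussian maximum likelihood solutions: the MLE of the mean is the sample mean, and the MLE of the shape parameter is n divided by the sum of (1/x_i - 1/x̄). A minimal script (plain Python, no external packages assumed) reproduces them for the repair-time data:

```python
# Active repair times (hours) from Chhikara and Folks (1989), Table 5
data = [0.2, 0.3, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8,
        1.0, 1.0, 1.0, 1.0, 1.1, 1.3, 1.5, 1.5, 1.5, 1.5, 2.0, 2.0, 2.2,
        2.5, 2.7, 3.0, 3.0, 3.3, 3.3, 4.0, 4.0, 4.5, 4.7, 5.0, 5.4, 5.4,
        7.0, 7.5, 8.8, 9.0, 10.3, 22.0, 24.5]

n = len(data)
mu_hat = sum(data) / n                         # MLE of the mean: sample mean
v = sum(1.0 / x for x in data) - n / mu_hat    # sum of (1/x_i - 1/xbar)
lam_hat = n / v                                # MLE of the shape parameter

print(n, round(mu_hat, 4), round(lam_hat, 4))  # 46 3.6065 1.6589
```

These match the MVUE/MLE rows of Table 6 (for the inverse Gaussian, the MLE and MVUE of the mean coincide at the sample mean). Goodness of fit could then be assessed with a K-S test against the fitted distribution, e.g. via `scipy.stats.invgauss` with `mu = mu_hat / lam_hat` and `scale = lam_hat`.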
In this paper, we consider Bayesian estimation of the two-parameter inverse Gaussian distribution under symmetric (squared error) and asymmetric (linear exponential and general entropy) loss functions. In the Bayesian approach, we consider the commonly used non-informative priors, namely the vague and Jeffrey's priors, and also propose using the extended Jeffrey's prior.
However, the Bayes estimators cannot be expressed analytically, because the ratio of integrals in the posterior expectations is not available in closed form. For this reason, we employ the Lindley and TK approximations to compute the approximate Bayes estimators of the two parameters. The comparison between the Bayes estimates and the corresponding classical estimates (MLE and MVUE) is carried out in terms of bias and RE through a simulation study.
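The idea behind the TK approximation can be illustrated in a deliberately simplified one-parameter setting: take the mean as known, so the shape parameter has a gamma posterior under a 1/λ-type prior and the exact posterior mean is available for comparison. This toy model is an assumption made for illustration only, not the paper's two-parameter setup. TK approximates the posterior mean as a ratio of two maximized, exponentiated log-posterior surfaces:

```python
import math


def argmax_golden(f, lo, hi, tol=1e-10):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi]."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2


def tk_posterior_mean(n, S):
    """TK approximation to E[lam | data] for log-likelihood
    (n/2) ln(lam) - lam*S/2 with an illustrative 1/lam prior."""
    h = lambda lam: ((n / 2 - 1) * math.log(lam) - S * lam / 2) / n
    hs = lambda lam: h(lam) + math.log(lam) / n   # h* = h + log g(lam)/n
    m = argmax_golden(h, 1e-6, 50.0)
    ms = argmax_golden(hs, 1e-6, 50.0)
    # sigma^2 = -1/(n h''); for these log-concave h the curvature is known
    s2 = m * m / (n / 2 - 1)
    s2s = ms * ms / (n / 2)
    return math.sqrt(s2s / s2) * math.exp(n * (hs(ms) - h(m)))

n, S = 46, 27.73           # S plays the role of the sufficient statistic
exact = n / S              # exact posterior mean of the Gamma(n/2, S/2) posterior
approx = tk_posterior_mean(n, S)
print(round(exact, 4), round(approx, 4))
```

Even at this moderate sample size the TK value agrees with the exact posterior mean to well under one percent, which is why the method remains usable where, as in the two-parameter case, no exact answer exists.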
The results of the simulation study show that the Bayes estimators under the LINEX loss function using the proposed extension of Jeffrey's prior generally outperform the others according to their REs. We also observe that the Bayes estimates using the proposed prior are superior to those under the other considered priors in terms of RE. On the other hand, the TK method competes quite well with Lindley's method in obtaining the approximate Bayes estimates of the parameters. Based on all the results, we recommend using the Bayes estimators obtained by the TK method under the LINEX loss function with the proposed extension of Jeffrey's prior for estimating the parameters of the IG distribution.
This study is dedicated to the memory of my PhD student Merve Akdede, who died at an early age.
Sandhya, K. and Umamaheswari, T. S. (2013). Reliability of a multi-component stress strength model with standby system using mixture of two Exponential distributions, Journal of Reliability and Statistical Studies, 6(2), pp. 105–113.
Ahmad, K. E. and Jaheen, Z. F. (1995). Approximate Bayes estimators applied to the inverse Gaussian lifetime model, Computers and Mathematics with Applications, 29(12), pp. 39–47.
Al-Kutubi, H. S. and Ibrahim, N. A. (2009). Bayes estimator for exponential distribution with extension of Jeffery prior information, Malaysian Journal of Mathematical Sciences, 3(2), pp. 297–313.
Banerjee, A. K. and Bhattacharyya, G. K. (1979). Bayesian results for the inverse Gaussian distribution with an application, Technometrics, 21(2), pp. 247–251.
Basak, P. and Balakrishnan, N. (2012). Estimation for the three-parameter inverse Gaussian distribution under progressive Type-II censoring, Journal of Statistical Computation and Simulation, 82(7), pp. 1055–1072.
Calabria, R. and Pulcini, G. (1994). Bayes 2-sample prediction for the inverse Weibull distribution, Communications in Statistics - Theory and Methods, 23(6), pp. 1811–1824.
Chhikara, R. S. and Folks, J. L. (1975). Statistical distributions related to the inverse Gaussian, Communications in Statistics, 4(12), pp. 1081–1091.
Chhikara, R. S. and Folks, J. L. (1989). The Inverse Gaussian Distribution. New York, Marcel Dekker.
Jia, J., Yan, Z. and Peng X. (2017). Estimation for inverse Gaussian distribution under first-failure progressive hybrid censored samples, Filomat, 31(18), 5743–5752.
Iyengar, S. and Patwardhan, G. (1988). Recent developments in the inverse Gaussian distribution, (In Handbook of Statistics, Vol. 7 edited by Krishnaiah P. R. and Rao C. R.), 479–490.
Jeffreys, H. (1961). Theory of Probability, 3rd Ed., Oxford, Clarendon Press.
Johnson, N. L., Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, Vol. II, John Wiley & Sons.
Lemeshko, B. Y., Lemeshko, S. B. and Akushkina, K. A. (2010). Inverse Gaussian Model and Its Applications in Reliability and Survival Analysis, (In. Mathematical and Statistical Models and Methods in Reliability Statistics for Industry and Technology edited by Rykov, V. V., Balakrishnan, N. and Nikulin, M. S.), Basel, Birkhäuser.
Lindley, D. V. (1980). Approximate Bayesian methods, Trabajos de Estadistica y de Investigacion Operativa, 31(1), pp. 223–237.
Pandey, B. N. and Bandyopadhyay P. (2012). Bayesian estimation of inverse Gaussian distribution, Located at: arXiv:1210.4524v1.
Rostamian, S. and Nematollahi N. (2019). Estimation of stress–strength reliability in the inverse Gaussian distribution under progressively type II censored data, Mathematical Sciences, 13, pp. 175–191.
Seshadri, V. (1999). The Inverse Gaussian Distribution: Statistical Theory and Applications, Berlin, Springer - Verlag.
Singh, P. K., Singh, S. K. and Singh, U. (2008). Bayes estimator of inverse Gaussian parameters under general entropy loss function using Lindley’s approximation, Communications in Statistics - Simulation and Computation, 37(9), pp. 1750–1762.
Sinha, S. K. (1986). Bayesian estimation of the reliability function of the inverse Gaussian distribution, Statistics & Probability Letters, 4(6), pp. 319–323.
Tierney, L. and Kadane, J. B. (1986). Accurate approximations for posterior moments and marginal densities, Journal of the American Statistical Association, 81(393), pp. 82–86.
Tweedie, M. C. K. (1957). Statistical properties of inverse Gaussian distribution, The Annals of Mathematical Statistics, 28, pp. 362–377.
Usta, I. (2013). Different estimation methods for the parameters of the extended Burr XII distribution, Journal of Applied Statistics, 40(2), pp. 397–414.
Varian, H. R. (1975). A Bayesian Approach to Real Estate Assessment (In Savage LJ. Feinberg SE. Zellner A. editors. Studies in Bayesian Econometrics and Statistics: In Honor of L. J. Savage. Amsterdam: North-Holland Pub. Co), pp. 195–208.
Zellner, A. (1986). Bayesian estimation and prediction using asymmetric loss functions, Journal of the American Statistical Association, 81(394), pp. 446–451.
Ilhan Usta received his B.Sc. and B.Sc. (double major) degrees in Statistics and Mathematics from Anadolu University, Turkey in 2003 and 2004, respectively, and his M.Sc. and Ph.D. degrees in Statistics from Anadolu University, Institute of Science, Turkey in 2006 and 2009, respectively. He currently works as a Professor at Eskisehir Technical University, Department of Statistics. His major areas of interest are the theory of statistics, censored data and parameter estimation, Bayesian estimation, wind energy, entropy optimization distributions and portfolio theory.
Merve Akdede received her B.Sc. degree in Statistics from Dokuz Eylul University, Turkey in 2008; M.Sc. degree in Statistics from Texas A&M University, USA in 2013. Although she has been a Ph.D. student at Eskisehir Technical University since 2015, she died suddenly at an early age in 2019. We will continue to keep her memory alive with this study and do good by her until the day we meet again.
Journal of Reliability and Statistical Studies, Vol. 13_1, 87–112.
doi: 10.13052/jrss0974-8024.1315
© 2020 River Publishers