Bayesian Generalized Self Method to Estimate Scale Parameter of Inverse Rayleigh Distribution

The purpose of this study is to estimate the scale parameter of the inverse Rayleigh distribution using maximum likelihood estimation (MLE) and a Bayesian method under the generalized squared error loss function (SELF). The posterior distribution is derived under two types of prior, namely Jeffreys' prior and an exponential prior. The proposed methods are then applied to real data. Several model selection criteria are considered in order to identify the method that yields the most suitable parameter estimates. This study found that Bayesian Generalized SELF under Jeffreys' prior yielded better estimates than MLE and Bayesian Generalized SELF under the exponential prior.


INTRODUCTION
The Rayleigh distribution is a special case of the Weibull distribution; likewise, the inverse Rayleigh distribution is a special case of the inverse Weibull distribution. The inverse Rayleigh distribution is a very useful lifetime model for analyzing infant mortality, survival, reliability, and quality control data. The probability density function (pdf) of the inverse Rayleigh distribution with scale parameter θ is defined as follows [1]:

f(x; θ) = (2θ / x³) exp(−θ / x²),  x > 0, θ > 0.  (1)

The cumulative distribution function (CDF) of the inverse Rayleigh distribution is given by

F(x; θ) = exp(−θ / x²),  x > 0.

Here θ is the scale parameter. The instantaneous failure rate of the inverse Rayleigh distribution exhibits both increasing and decreasing patterns, which makes it suitable for lifetime data. A significant amount of work has been done on the inverse Rayleigh distribution in the classical framework, but not much in a Bayesian setup, especially under the generalized squared error loss function (SELF). Several studies have used the inverse Rayleigh distribution in various settings. Soliman and Al-Aboud [2] used classical and Bayesian methods for parameter estimation based on a set of upper record values from the Rayleigh distribution. Aslam and Jun [3] derived an acceptance sampling plan from a truncated life test in which multiple items in a group could be tested simultaneously by a tester when the lifetime of an item followed either an inverse Rayleigh or a log-logistic distribution. Soliman et al. [4] discussed parameter estimation for an inverse Rayleigh distribution based on lower record values; they implemented a maximum likelihood (ML) estimator of the unknown parameter and a Bayesian analysis with an informative prior to derive these estimators and the predictive intervals.
Ali [5] explored the modeling of heterogeneity in lifetime processes using a mixture of inverse Rayleigh distributions, focusing on Bayesian inference for the mixture model under non-informative (Jeffreys and uniform) and informative (gamma) priors. Dey and Dey [6] derived Bayesian estimates of the scale parameter and reliability function of an inverse Rayleigh distribution. Yousef and Lafta [7] explored estimation of the scale parameter of the inverse Rayleigh distribution using different methods, such as maximum likelihood and the method of moments. Dey [8] obtained Bayesian estimates for an inverse Rayleigh distribution using squared error and LINEX loss functions. Meanwhile, Rasheed [9] designed several Bayesian estimators for the scale parameter and reliability function of the inverse Rayleigh distribution under the generalized squared error loss function (SELF).
In the present study, we consider the estimation of the unknown parameter of an inverse Rayleigh distribution. The aim of this study is to estimate the scale parameter using the frequentist method (MLE) and the Bayesian approach, both applied to empirical data. The Bayesian approach here is Bayesian Generalized SELF under two types of priors, namely Jeffreys' prior and an exponential prior. The criteria for determining the better-performing estimation method are the smallest values of the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), and the Bayesian Information Criterion (BIC).
The remainder of this study is organized as follows: Section 2 derives the maximum likelihood estimation (MLE), Bayesian Generalized SELF, Jeffreys' prior, the exponential prior, and the criteria for assessing the goodness of fit of each estimation method. Section 3 discusses estimation using Bayesian Generalized SELF under the two types of priors and the implementation of the proposed methods on real data. Finally, Section 4 provides some concluding remarks.

METHODS
In this section, we describe all methods implemented in this study: first the maximum likelihood estimation method, followed by the Bayesian approach and the criteria for model selection.

Maximum Likelihood Estimation
In this section, we derive the classical estimator of the scale parameter of the inverse Rayleigh distribution, namely the maximum likelihood estimator. Let x₁, x₂, …, xₙ be a sequence of i.i.d. random variables from the inverse Rayleigh distribution with scale parameter θ, written X ~ IR(θ), with probability density function f(x; θ) as presented in Eq. (1). The likelihood function is then formulated as follows [10]:

L(θ; x) = ∏ᵢ f(xᵢ; θ) = 2ⁿ θⁿ (∏ᵢ xᵢ⁻³) exp(−θT),  with T = Σᵢ 1/xᵢ².  (3)

The estimate of θ is obtained by maximizing Eq. (3) (equivalently, setting the derivative of the log-likelihood to zero), which gives:

θ̂_MLE = n / T = n / Σᵢ (1/xᵢ²).
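As a numerical illustration (not part of the original derivation), the closed-form MLE can be computed directly; the function name and the sample values below are illustrative assumptions:

```python
def inverse_rayleigh_mle(data):
    """MLE of the scale parameter theta of the inverse Rayleigh
    distribution f(x; theta) = (2*theta/x**3) * exp(-theta/x**2).

    Maximizing the log-likelihood
        l(theta) = n*ln(2*theta) - 3*sum(ln x_i) - theta*T,
    with T = sum(1/x_i**2), gives theta_hat = n / T.
    """
    n = len(data)
    t = sum(1.0 / x**2 for x in data)  # T = sum of 1/x_i^2
    return n / t

# Illustrative (hypothetical) sample:
theta_hat = inverse_rayleigh_mle([1.5, 2.0, 0.8, 1.2, 2.5])
```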

Bayesian Generalized SELF Method
This section deals with the problem of obtaining Bayesian estimators for the scale parameter θ of the inverse Rayleigh distribution. The Bayes method is a parameter estimation method based on Bayes' theorem. Its basic concepts are as follows. Suppose that x₁, x₂, …, xₙ is a random sample from the distribution f(x; θ), where θ is the parameter of the distribution. Estimation of θ is based on this random sample. The Bayes method combines information obtained from the sample (objective knowledge), known as the likelihood function, with prior information on the distribution of the estimated parameter [11], [12]. Multiplying the likelihood function by the prior distribution and normalizing gives the posterior distribution. In other words, the posterior distribution is the conditional probability density function of the parameter θ given the observations x = (x₁, x₂, …, xₙ). The posterior distribution is defined by the following formula [13]:

h(θ | x) = L(θ; x) π(θ) / ∫ L(θ; x) π(θ) dθ.

Meanwhile, the estimator of the scale parameter θ under the Bayes Generalized SELF method is described as follows [9]. The generalized squared error loss function takes the form

L(θ̂; θ) = Σⱼ₌₀ᵏ aⱼ θʲ (θ̂ − θ)².

The estimator of θ is obtained by minimizing the expectation of this loss, denoted E[L(θ̂; θ)]. The expected value of this function is found by combining L(θ̂; θ) with the posterior density of θ, here denoted h(θ | x):

E[L(θ̂; θ)] = ∫ Σⱼ₌₀ᵏ aⱼ θʲ (θ̂ − θ)² h(θ | x) dθ.  (7)

To obtain the estimate θ̂ under the Bayesian Generalized SELF method, Eq. (7) is differentiated with respect to θ̂ and set to zero, so that:

θ̂_GSELF = Σⱼ₌₀ᵏ aⱼ E(θʲ⁺¹ | x) / Σⱼ₌₀ᵏ aⱼ E(θʲ | x).
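Following [9], the Bayes estimator under the generalized SELF reduces to a ratio of posterior moments, θ̂ = Σⱼ aⱼ E(θʲ⁺¹ | x) / Σⱼ aⱼ E(θʲ | x). A minimal sketch for a Gamma posterior, whose moments are available in closed form (function names are illustrative assumptions):

```python
import math

def gamma_posterior_moment(shape, rate, m):
    """E[theta^m] for a Gamma(shape, rate) posterior:
    Gamma(shape + m) / (Gamma(shape) * rate**m)."""
    return math.gamma(shape + m) / (math.gamma(shape) * rate**m)

def gself_estimator(shape, rate, weights):
    """Bayes estimator under generalized SELF,
    L(t, theta) = sum_j a_j * theta**j * (t - theta)**2:
        theta_hat = sum_j a_j E[theta^(j+1)] / sum_j a_j E[theta^j].
    `weights` holds the polynomial coefficients a_0, ..., a_k."""
    num = sum(a * gamma_posterior_moment(shape, rate, j + 1)
              for j, a in enumerate(weights))
    den = sum(a * gamma_posterior_moment(shape, rate, j)
              for j, a in enumerate(weights))
    return num / den
```

With weights = [1.0] the loss reduces to the ordinary SELF and the estimator is the posterior mean shape/rate.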

Jeffreys' Prior as Non-Informative Prior
One of the most widely used noninformative priors in Bayesian analysis is Jeffreys' prior. This method is attractive because it is invariant under reparameterization and requires no elicitation of hyperparameters. Jeffreys' rule is derived from the likelihood function by taking the prior distribution to be proportional to the square root of the Fisher information, denoted π(θ) ∝ √I(θ). The Fisher information for the parameter θ is defined as [13], [14]

I(θ) = −n E[∂² ln f(x; θ) / ∂θ²] = n / θ²,

so that π(θ) ∝ 1/θ. Combining this Jeffreys prior with the likelihood function yields the following posterior distribution:

h(θ | x) = Tⁿ θⁿ⁻¹ e^(−θT) / Γ(n),  with T = Σᵢ 1/xᵢ².  (12)

The posterior distribution in Eq. (12) has the form of a Gamma distribution with shape parameter n and scale parameter 1/T, written θ | x ~ Gamma(n, 1/T).
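A minimal sketch of this posterior, assuming the pdf in Eq. (1) and the Gamma form stated above (function names are illustrative assumptions):

```python
def jeffreys_posterior_params(data):
    """Posterior of theta under Jeffreys' prior pi(theta)
    proportional to 1/theta, combined with the inverse Rayleigh
    likelihood theta^n * exp(-theta*T): a Gamma distribution with
    shape n and rate T = sum(1/x_i**2) (i.e. scale 1/T)."""
    n = len(data)
    t = sum(1.0 / x**2 for x in data)
    return n, t  # (shape, rate)

def posterior_mean(shape, rate):
    """Bayes estimate under ordinary squared error loss."""
    return shape / rate
```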

Exponential Distribution as Conjugate Prior
We also derive the parameter estimation for θ based on Bayesian Generalized SELF with an exponential distribution as prior. The probability density function of a random variable θ that has an exponential distribution with rate parameter λ, written θ ~ Exp(λ), is formulated as follows:

π(θ) = λ e^(−λθ),  θ > 0.  (13)

Eq. (13) is then combined with the likelihood function in Eq. (3), giving the posterior distribution:

h(θ | x) = (T + λ)ⁿ⁺¹ θⁿ e^(−θ(T+λ)) / Γ(n + 1),  with T = Σᵢ 1/xᵢ²,

which is a Gamma distribution with shape parameter n + 1 and scale parameter 1/(T + λ), written θ | x ~ Gamma(n + 1, 1/(T + λ)).
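The corresponding posterior parameters can be sketched in the same way (the function name is an illustrative assumption):

```python
def exponential_prior_posterior(data, lam):
    """Posterior of theta under an Exponential(lam) prior,
    pi(theta) = lam * exp(-lam*theta). Multiplying by the inverse
    Rayleigh likelihood theta^n * exp(-theta*T) gives a Gamma
    posterior with shape n + 1 and rate T + lam."""
    n = len(data)
    t = sum(1.0 / x**2 for x in data)
    return n + 1, t + lam  # (shape, rate)
```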

Criteria Model Selection
The Akaike Information Criterion (AIC), which is widely used for statistical inference, is an estimator of out-of-sample prediction error and thereby of the relative quality of statistical models for a given set of data [15]. Given several models for the data, AIC estimates the quality of each model relative to each of the other models, thus providing a means for model selection. When a statistical model is used to represent the process that generated the data, the representation will almost never be exact, so some information is lost by using the model to represent the process. AIC estimates the relative amount of information lost by a given model: the less information a model loses, the higher its quality. In estimating the amount of information lost, AIC deals with the trade-off between the goodness of fit of the model and its simplicity; in other words, it balances the risks of overfitting and underfitting.

Suppose that we have a statistical model of some data. Let k be the number of estimated parameters in the model and let L̂ be the maximum value of the likelihood function for the model. Then the AIC value of the model is [15]:

AIC = 2k − 2 ln(L̂).

When the sample size n is small relative to k (a common rule of thumb is n/k < 40), it is suggested to use the corrected AIC (AICc):

AICc = AIC + 2k(k + 1) / (n − k − 1).

Another criterion for assessing the quality of each model relative to the others is the Bayesian Information Criterion (BIC):

BIC = k ln(n) − 2 ln(L̂).

Given a set of candidate models for the data, the preferred model is the one with the minimum AIC, AICc, and BIC values.
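All three criteria can be computed directly from the maximized log-likelihood; a minimal sketch (function names are illustrative assumptions):

```python
import math

def aic(log_lik, k):
    """Akaike Information Criterion: 2k - 2*ln(L_hat)."""
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    """Small-sample corrected AIC: AIC + 2k(k+1)/(n-k-1)."""
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(log_lik, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L_hat)."""
    return k * math.log(n) - 2 * log_lik
```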

RESULTS AND DISCUSSION
In this study, we employ Bayesian Generalized SELF under the noninformative Jeffreys prior to estimate the scale parameter of the inverse Rayleigh distribution. We also consider Bayesian Generalized SELF under an informative prior, namely an exponential distribution. Both methods, as well as MLE, are then applied to the empirical data. The most suitable method is determined by the smallest values of AIC, AICc, and BIC. In this study, we apply the first through fourth polynomial orders in the generalized loss function to estimate θ.

Implementation of Proposed Method to Real Data
The analytical results above are then applied to real data. The real data set represents the 72 exceedances for the years 1958-1984 (rounded to one decimal place) of flood peaks (in m³/s) of the Wheaton River near Carcross in Yukon Territory, Canada [16]. The data are as follows: We then apply both proposed methods and MLE to these empirical data. The comparison of the model selection criteria for the three methods is provided in Table 1. Table 1 shows that this study yielded nearly identical estimated means for all three methods (third column). Based on the model selection criteria, Jeffreys' prior, as a noninformative prior, tends to yield smaller values than MLE and the exponential prior for all four polynomials. The smallest values of these criteria occur under Jeffreys' prior at the fourth polynomial.
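Since the individual flood-peak values are not reproduced here, the comparison pipeline can only be sketched on a generic data array. Note that under the ordinary squared error loss the Jeffreys posterior mean n/T coincides with the MLE, so the methods separate only through the generalized-SELF polynomial weights (whose coefficients are likewise not reproduced here); the Exponential(1) prior rate below is an illustrative assumption:

```python
import math

def inv_rayleigh_loglik(theta, data):
    """Log-likelihood of the inverse Rayleigh model
    f(x; theta) = (2*theta/x**3) * exp(-theta/x**2)."""
    return sum(math.log(2.0 * theta) - 3.0 * math.log(x) - theta / x**2
               for x in data)

def compare_estimators(data, lam=1.0):
    """AIC (one estimated parameter, k = 1) for the inverse Rayleigh
    model evaluated at two point estimates of theta."""
    n = len(data)
    t = sum(1.0 / x**2 for x in data)  # T = sum of 1/x_i^2
    estimates = {
        "MLE": n / t,  # also the Jeffreys posterior mean
        "Exponential prior posterior mean": (n + 1) / (t + lam),
    }
    return {name: 2 * 1 - 2 * inv_rayleigh_loglik(th, data)
            for name, th in estimates.items()}
```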
These results indicate that estimating the scale parameter of the inverse Rayleigh distribution using Bayesian Generalized SELF under Jeffreys' prior tends to yield better values than MLE and Bayesian Generalized SELF under the exponential prior. This study demonstrated this by applying all proposed methods to real data with a relatively moderate sample size, n = 72.

CONCLUSIONS
This study employed MLE, Bayesian Generalized SELF under Jeffreys' prior, and Bayesian Generalized SELF under an exponential prior to estimate the scale parameter of the inverse Rayleigh distribution for a real data set. All 72 flood-peak observations from Canada were involved in this study; these data follow an inverse Rayleigh distribution. The study found that the estimated means of the scale parameter based on MLE, Bayesian Generalized SELF under Jeffreys' prior, and Bayesian Generalized SELF under the exponential prior tend to be similar. Based on the model selection criteria, this study showed that Bayesian Generalized SELF under Jeffreys' prior tends to yield the smallest values of AIC, AICc, and BIC.