Let X_1, \ldots, X_{10} be a random sample from a distribution with the probability density function

f(x \mid \theta) = \begin{cases} \theta x^{\theta - 1}, & \text{if } 0 < x < 1 \\ 0, & \text{otherwise,} \end{cases}

where θ > 0 is an unknown parameter. The prior distribution of θ is given by

\pi(\theta) = \begin{cases} \theta e^{-\theta}, & \text{if } \theta > 0 \\ 0, & \text{otherwise.} \end{cases}

The Bayes estimator of θ under squared error loss is  

This question was previously asked in the CSIR-UGC NET Mathematical Sciences paper (June 2024).
  1. \frac{12}{1 - \sum_{i=1}^{10} \ln X_i}
  2. \frac{11}{2 - \sum_{i=1}^{10} \ln X_i}
  3. \frac{3 + \sum_{i=1}^{10} \ln X_i}{13}
  4. \frac{2 + \sum_{i=1}^{10} \ln X_i}{11}

Answer (Detailed Solution Below)

Option 1: \frac{12}{1 - \sum_{i=1}^{10} \ln X_i}

Detailed Solution


Concept:

Prior Distribution: In Bayesian estimation, the prior distribution represents our beliefs about the parameter θ before seeing any data. In this problem, the prior distribution of θ is given by

\pi(\theta) = \begin{cases} \theta e^{-\theta}, & \text{if } \theta > 0 \\ 0, & \text{otherwise.} \end{cases}
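(As a quick aside not in the original solution: this prior is the Gamma density with shape 2 and rate 1. The short sketch below, assuming NumPy and SciPy are available, checks that it integrates to 1 and matches scipy.stats.gamma.)

# Sanity check (illustrative): pi(theta) = theta * exp(-theta) is the Gamma(shape=2, rate=1) density.
import numpy as np
from scipy import stats
from scipy.integrate import quad

prior = lambda t: t * np.exp(-t)          # pi(theta), theta > 0

total, _ = quad(prior, 0, np.inf)         # normalization: should be ~1.0
theta_grid = np.linspace(0.1, 5.0, 5)
agrees = np.allclose(prior(theta_grid),
                     stats.gamma.pdf(theta_grid, a=2, scale=1.0))

print(round(total, 6), agrees)            # 1.0 True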

Likelihood Function:

The likelihood function expresses the probability of observing the data given the parameter θ.

For this problem, the probability density function (pdf) of the observations X_1, X_2, \ldots, X_n given θ is:

f(x \mid \theta) = \begin{cases} \theta x^{\theta - 1}, & \text{if } 0 < x < 1 \\ 0, & \text{otherwise.} \end{cases}

For a random sample X_1, X_2, \ldots, X_n, the likelihood function is the product of the individual densities:


L(\theta \mid X_1, X_2, \ldots, X_n) = \prod_{i=1}^{n} \theta X_i^{\theta - 1}

This simplifies to L(\theta \mid X_1, X_2, \ldots, X_n) = \theta^{n} \prod_{i=1}^{n} X_i^{\theta - 1}.
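For concreteness, here is a minimal Python sketch (illustrative only; the function name and sample values are our own, assuming NumPy is available) that evaluates this likelihood on the log scale, i.e. n log θ + (θ − 1) Σ log X_i:

import numpy as np

def log_likelihood(theta, x):
    # log L(theta | x) = n*log(theta) + (theta - 1) * sum(log x_i), valid for 0 < x_i < 1
    x = np.asarray(x, dtype=float)
    return len(x) * np.log(theta) + (theta - 1.0) * np.sum(np.log(x))

# Illustrative sample with values in (0, 1)
x = [0.2, 0.5, 0.7, 0.9, 0.4]
print(log_likelihood(2.0, x))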
 

Explanation:

To solve the problem of finding the Bayes estimator of θ under squared error loss, let’s break it down step by step. The random sample X_1, X_2, \ldots, X_{10} comes from a distribution with the probability density function (pdf)

f(x \mid \theta) = \begin{cases} \theta x^{\theta - 1}, & \text{if } 0 < x < 1 \\ 0, & \text{otherwise,} \end{cases}

where θ > 0 is an unknown parameter.

The prior distribution of θ is given by

\pi(\theta) = \begin{cases} \theta e^{-\theta}, & \text{if } \theta > 0 \\ 0, & \text{otherwise.} \end{cases}

The likelihood function for a sample X_1, X_2, \ldots, X_n from the given pdf is

L(\theta \mid x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f(x_i \mid \theta)

Substitute the given pdf f(x|θ):

L(\theta \mid x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} \theta x_i^{\theta - 1}

L(\theta \mid x_1, x_2, \ldots, x_n) = \theta^{n} \prod_{i=1}^{n} x_i^{\theta - 1}

This can be rewritten as

L(\theta \mid x_1, x_2, \ldots, x_n) = \theta^{n} \prod_{i=1}^{n} x_i^{\theta} \prod_{i=1}^{n} x_i^{-1}

\Rightarrow L(\theta \mid x_1, x_2, \ldots, x_n) = \theta^{n} \prod_{i=1}^{n} x_i^{\theta} \prod_{i=1}^{n} \frac{1}{x_i}

Taking the log of the likelihood function

\log L(\theta \mid x_1, x_2, \ldots, x_n) = n \log\theta + \theta \sum_{i=1}^{n} \log x_i - \sum_{i=1}^{n} \log x_i

The posterior distribution is proportional to the product of the likelihood and the prior distribution.

So, we multiply the likelihood function by the prior distribution:

\pi(\theta \mid x_1, x_2, \ldots, x_n) \propto L(\theta \mid x_1, x_2, \ldots, x_n) \times \pi(\theta)

The prior distribution is \pi(\theta) = \theta e^{-\theta}, so

\pi(\theta \mid x_1, x_2, \ldots, x_n) \propto \theta^{n} \prod_{i=1}^{n} x_i^{\theta - 1} \cdot \theta e^{-\theta}

Taking the logarithm and dropping terms that do not depend on θ,

\log \pi(\theta \mid x_1, x_2, \ldots, x_n) = (n + 1)\log\theta + \theta \sum_{i=1}^{n} \log x_i - \theta + \text{constant}

The Bayes estimator under squared error loss is the mean (expectation) of the posterior distribution. That is

\hat{\theta} = E[\theta \mid X_1, X_2, \ldots, X_n]

Exponentiating, the posterior is proportional to \theta^{\,n+1} e^{-\theta\,(1 - \sum_{i=1}^{n}\log x_i)}, which is the kernel of a Gamma distribution with shape parameter n + 2 and rate parameter 1 - \sum_{i=1}^{n}\log x_i (note that \sum_{i=1}^{n}\log x_i < 0 since 0 < x_i < 1, so the rate is positive). The mean of this Gamma distribution gives the Bayes estimator:

\hat{\theta} = \frac{n + 2}{1 - \sum_{i=1}^{n} \log x_i}

For this problem, n = 10, so

\hat{\theta} = \frac{12}{1 - \sum_{i=1}^{10} \log x_i}
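As a numerical cross-check (a sketch only, assuming SciPy is available; the sample values below are illustrative), integrating the unnormalized posterior \theta^{n+1} e^{-\theta(1 - \sum \log x_i)} directly reproduces the closed-form mean (n + 2)/(1 - \sum \log x_i):

import numpy as np
from scipy.integrate import quad

# Illustrative sample of size n = 10 with values in (0, 1)
x = np.array([0.12, 0.35, 0.48, 0.57, 0.63, 0.71, 0.22, 0.84, 0.40, 0.91])
n = len(x)
s = np.sum(np.log(x))                                    # sum of log x_i (negative)

post = lambda t: t ** (n + 1) * np.exp(-t * (1.0 - s))   # unnormalized posterior

num, _ = quad(lambda t: t * post(t), 0, np.inf)
den, _ = quad(post, 0, np.inf)
print(num / den, (n + 2) / (1.0 - s))                    # the two values agree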
 

Thus, option 1) is correct.
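Finally, a small simulation sketch (the true θ, seed, and sample size are hypothetical choices, assuming NumPy) shows how the estimator 12/(1 - \sum \log X_i) is computed from data, alongside the maximum likelihood estimator n/(-\sum \log X_i) for comparison:

import numpy as np

rng = np.random.default_rng(0)

theta_true = 2.5                    # hypothetical true parameter, for illustration
n = 10
u = rng.uniform(size=n)
x = u ** (1.0 / theta_true)         # inverse-CDF sampling: F(x) = x^theta on (0, 1)

s = np.sum(np.log(x))
bayes = (n + 2) / (1.0 - s)         # posterior mean (option 1, with n = 10)
mle = -n / s                        # maximum likelihood estimate, for comparison
print(bayes, mle)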

