In regression, mean response (or expected response) and predicted response, also known as mean outcome (or expected outcome) and predicted outcome, are values of the dependent variable calculated from the regression parameters and a given value of the independent variable. The values of these two responses are the same, but their calculated variances are different. The concept is a generalization of the distinction between the standard error of the mean and the sample standard deviation.
In simple linear regression (i.e., straight line fitting with errors only in the y-coordinate), the model is

$$y_{i} = \alpha + \beta x_{i} + \varepsilon_{i},$$
where $y_{i}$ is the response variable, $x_{i}$ is the explanatory variable, $\varepsilon_{i}$ is the random error, and $\alpha$ and $\beta$ are parameters. The mean, and predicted, response value for a given explanatory value, $x_{d}$, is given by

$$\hat{y}_{d} = \hat{\alpha} + \hat{\beta} x_{d},$$
while the actual response would be

$$y_{d} = \alpha + \beta x_{d} + \varepsilon_{d}.$$
Expressions for the values and variances of $\hat{\alpha}$ and $\hat{\beta}$ are given in linear regression.
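As a minimal numerical sketch (the data set and the point $x_{d}$ below are arbitrary assumptions made only for illustration), the fitted parameters and the mean response at a chosen $x_{d}$ can be computed as follows:

```python
import numpy as np

# Sketch: fit alpha-hat and beta-hat by ordinary least squares on a small
# synthetic data set, then evaluate the mean (fitted) response at x_d.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # explanatory values (assumed)
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])   # observed responses (assumed)

x_bar, y_bar = x.mean(), y.mean()
beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
alpha_hat = y_bar - beta_hat * x_bar

x_d = 3.5                                  # explanatory value of interest (assumed)
y_hat_d = alpha_hat + beta_hat * x_d       # mean / predicted response value
print(alpha_hat, beta_hat, y_hat_d)
```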
Since the data in this context are defined to be $(x, y)$ pairs for every observation, the mean response at a given value of $x$, say $x_{d}$, is an estimate of the mean of the $y$ values in the population at that value of $x$, that is, $\hat{E}(y \mid x_{d}) \equiv \hat{y}_{d}$. The variance of the mean response is given by

$$\operatorname{Var}\left(\hat{\alpha} + \hat{\beta} x_{d}\right) = \operatorname{Var}\left(\hat{\alpha}\right) + x_{d}^{2}\operatorname{Var}\left(\hat{\beta}\right) + 2 x_{d}\operatorname{Cov}\left(\hat{\alpha}, \hat{\beta}\right).$$
This expression can be simplified to

$$\operatorname{Var}\left(\hat{\alpha} + \hat{\beta} x_{d}\right) = \sigma^{2}\left(\frac{1}{m} + \frac{\left(x_{d} - \bar{x}\right)^{2}}{\sum \left(x_{i} - \bar{x}\right)^{2}}\right),$$

where $m$ is the number of data points.
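As a rough numerical sketch of this formula (the data set, the point $x_{d}$, and the use of the usual residual-based estimate of $\sigma^{2}$ are assumptions made for the example, not part of the derivation):

```python
import numpy as np

# Sketch: variance of the mean response at x_d using the simplified formula
# Var(alpha_hat + beta_hat * x_d) = sigma^2 * (1/m + (x_d - x_bar)^2 / S_xx).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
m = len(x)

x_bar = x.mean()
S_xx = np.sum((x - x_bar) ** 2)
beta_hat = np.sum((x - x_bar) * (y - y.mean())) / S_xx
alpha_hat = y.mean() - beta_hat * x_bar

# sigma^2 is unknown; here it is estimated from the residuals with m - 2
# degrees of freedom (an assumption for the sketch, not fixed by the text).
residuals = y - (alpha_hat + beta_hat * x)
sigma2_hat = np.sum(residuals ** 2) / (m - 2)

x_d = 3.5
var_mean_response = sigma2_hat * (1.0 / m + (x_d - x_bar) ** 2 / S_xx)
print(var_mean_response)
```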
To demonstrate this simplification, one can make use of the identity

$$\sum \left(x_{i} - \bar{x}\right)^{2} = \sum x_{i}^{2} - \frac{1}{m}\left(\sum x_{i}\right)^{2}.$$
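One way to carry the demonstration through, taking as given the standard least-squares results $\operatorname{Var}(\hat{\alpha}) = \sigma^{2}\sum x_{i}^{2}\big/\left(m\sum (x_{i} - \bar{x})^{2}\right)$, $\operatorname{Var}(\hat{\beta}) = \sigma^{2}\big/\sum (x_{i} - \bar{x})^{2}$, and $\operatorname{Cov}(\hat{\alpha}, \hat{\beta}) = -\sigma^{2}\bar{x}\big/\sum (x_{i} - \bar{x})^{2}$, is the following sketch:

$$\begin{aligned}
\operatorname{Var}\left(\hat{\alpha} + \hat{\beta} x_{d}\right) &= \operatorname{Var}(\hat{\alpha}) + x_{d}^{2}\operatorname{Var}(\hat{\beta}) + 2 x_{d}\operatorname{Cov}(\hat{\alpha}, \hat{\beta}) \\
&= \frac{\sigma^{2}}{\sum (x_{i} - \bar{x})^{2}}\left(\frac{\sum x_{i}^{2}}{m} - 2 x_{d}\bar{x} + x_{d}^{2}\right) \\
&= \frac{\sigma^{2}}{\sum (x_{i} - \bar{x})^{2}}\left(\frac{\sum x_{i}^{2} - m\bar{x}^{2}}{m} + \bar{x}^{2} - 2 x_{d}\bar{x} + x_{d}^{2}\right) \\
&= \sigma^{2}\left(\frac{1}{m} + \frac{(x_{d} - \bar{x})^{2}}{\sum (x_{i} - \bar{x})^{2}}\right),
\end{aligned}$$

where the identity above, in the equivalent form $\sum (x_{i} - \bar{x})^{2} = \sum x_{i}^{2} - m\bar{x}^{2}$, is used in the last step.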
The predicted response distribution is the predicted distribution of the residuals at the given point $x_{d}$. So the variance is given by

$$\begin{aligned}
\operatorname{Var}\left(y_{d} - \left[\hat{\alpha} + \hat{\beta} x_{d}\right]\right) &= \operatorname{Var}\left(y_{d}\right) + \operatorname{Var}\left(\hat{\alpha} + \hat{\beta} x_{d}\right) - 2\operatorname{Cov}\left(y_{d}, \left[\hat{\alpha} + \hat{\beta} x_{d}\right]\right) \\
&= \operatorname{Var}\left(y_{d}\right) + \operatorname{Var}\left(\hat{\alpha} + \hat{\beta} x_{d}\right).
\end{aligned}$$
The second line follows from the fact that $\operatorname{Cov}\left(y_{d}, \left[\hat{\alpha} + \hat{\beta} x_{d}\right]\right)$ is zero because the new prediction point is independent of the data used to fit the model. Additionally, the term $\operatorname{Var}\left(\hat{\alpha} + \hat{\beta} x_{d}\right)$ was calculated earlier for the mean response.
Since $\operatorname{Var}\left(y_{d}\right) = \sigma^{2}$ (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by

$$\begin{aligned}
\operatorname{Var}\left(y_{d} - \left[\hat{\alpha} + \hat{\beta} x_{d}\right]\right) &= \sigma^{2} + \sigma^{2}\left(\frac{1}{m} + \frac{\left(x_{d} - \bar{x}\right)^{2}}{\sum \left(x_{i} - \bar{x}\right)^{2}}\right) \\
&= \sigma^{2}\left(1 + \frac{1}{m} + \frac{\left(x_{d} - \bar{x}\right)^{2}}{\sum \left(x_{i} - \bar{x}\right)^{2}}\right).
\end{aligned}$$
The $100(1-\alpha)\%$ confidence intervals are computed as $y_{d} \pm t_{\frac{\alpha}{2},\, m-n-1}\sqrt{\operatorname{Var}}$, where $m$ is the number of data points and $n$ is the number of explanatory variables. Thus, the confidence interval for the predicted response is wider than the interval for the mean response. This is expected intuitively: the variance of the population of $y$ values does not shrink when one samples from it, because the random variable $\varepsilon_{i}$ does not decrease, but the variance of the mean of the $y$ values does shrink with increased sampling, because the variances of $\hat{\alpha}$ and $\hat{\beta}$ decrease, so the mean response (predicted response value) becomes closer to $\alpha + \beta x_{d}$.
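As an illustrative sketch of this difference (the data, the point $x_{d}$, and the 5% level below are assumptions made only for the example), the two intervals can be compared numerically:

```python
import numpy as np
from scipy import stats

# Sketch: mean-response interval vs. predicted-response interval at x_d.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
m, n = len(x), 1                          # n = number of explanatory variables

x_bar = x.mean()
S_xx = np.sum((x - x_bar) ** 2)
beta_hat = np.sum((x - x_bar) * (y - y.mean())) / S_xx
alpha_hat = y.mean() - beta_hat * x_bar

residuals = y - (alpha_hat + beta_hat * x)
sigma2_hat = np.sum(residuals ** 2) / (m - n - 1)   # residual variance estimate

x_d = 3.5
y_hat_d = alpha_hat + beta_hat * x_d
var_mean = sigma2_hat * (1.0 / m + (x_d - x_bar) ** 2 / S_xx)
var_pred = sigma2_hat * (1.0 + 1.0 / m + (x_d - x_bar) ** 2 / S_xx)

level = 0.05
t_crit = stats.t.ppf(1.0 - level / 2.0, df=m - n - 1)
ci_mean = (y_hat_d - t_crit * np.sqrt(var_mean), y_hat_d + t_crit * np.sqrt(var_mean))
ci_pred = (y_hat_d - t_crit * np.sqrt(var_pred), y_hat_d + t_crit * np.sqrt(var_pred))
print(ci_mean, ci_pred)                   # the predicted-response interval is wider
```

In this sketch the predicted-response interval comes out wider than the mean-response interval, matching the intuition above, since the predicted-response variance exceeds the mean-response variance by the residual variance estimate.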
This is analogous to the difference between the variance of a population and the variance of the sample mean of a population: the variance of a population is a parameter and does not change, but the variance of the sample mean decreases with increased sample size.
The general case of linear regression can be written as

$$y_{i} = \sum_{j=1}^{n} X_{ij}\beta_{j} + \varepsilon_{i}.$$
Therefore, since $y_{d} = \sum_{j=1}^{n} X_{dj}\hat{\beta}_{j}$, the general expression for the variance of the mean response is

$$\operatorname{Var}\left(\sum_{j=1}^{n} X_{dj}\hat{\beta}_{j}\right) = \sum_{i=1}^{n}\sum_{j=1}^{n} X_{di} S_{ij} X_{dj},$$
where $S$ is the covariance matrix of the parameters, given by

$$\mathbf{S} = \sigma^{2}\left(\mathbf{X}^{\mathsf{T}}\mathbf{X}\right)^{-1}.$$
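A matrix-form sketch of this expression (the design matrix, the point $x_{d}$, and the residual-based estimate of $\sigma^{2}$ below are assumptions made only for illustration):

```python
import numpy as np

# Sketch of the general (matrix) case: Var(mean response at x_d) = x_d' S x_d,
# with S = sigma^2 * (X'X)^{-1}.
rng = np.random.default_rng(0)
m, n = 20, 3                                    # observations, parameters (assumed)
X = np.column_stack([np.ones(m), rng.normal(size=(m, n - 1))])   # design matrix
beta_true = np.array([1.0, 2.0, -0.5])          # assumed "true" parameters
y = X @ beta_true + rng.normal(scale=0.3, size=m)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
sigma2_hat = residuals @ residuals / (m - n)    # residual variance estimate (assumed df)

S = sigma2_hat * np.linalg.inv(X.T @ X)         # covariance matrix of the parameters
x_d = np.array([1.0, 0.5, -1.0])                # row of explanatory values of interest
var_mean_response = x_d @ S @ x_d               # = sum_i sum_j X_di * S_ij * X_dj
print(var_mean_response)
```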