Statistical Modeling and Data Analysis
Given a data set, the first question a statistician asks is,
"What is the statistical model for this data?"
We then characterize and analyze the parameters of the model with an objective in mind.
• Example: SBP of Cancer Patients vs. Normal Patients
Cancer: 145, 165, 134, 120, 112, 156, 145, 133, 135, 120
Normal: 138, 120, 112, 110, 128, 134, 128, 109, 138, 140
Objective: Do cancer patients have higher SBP than normal patients?
Population of cancer patients with a probability distribution; population of normal patients with a probability distribution.
[Figure: two distributions of systolic blood pressure, the normal population centered at μ1 and the cancer population centered at μ2]
Objective is to test the hypothesis: μ2 > μ1.
Does the data support this hypothesis?
Assumption: The data are random and are generated from normal distributions.
• Random Variable X
X : Ω → ℝ
Ω is the collection of all subjects. What we observe is one realization X(ω).
• Random Sample: {X1, X2, …, Xn}
We collect a sample of subjects {ω1, ω2, …, ωn}.
Observed Sample: {X1(ω1), X2(ω2), …, Xn(ωn)}
Assumption: {ω1, ω2, …, ωn} is a simple random sample (equally likely as any other sample).
• Multivariate Observations
X = (X1, X2, …, Xp)′ : Ω → ℝ^p
An observed vector is one realization of this, i.e., X(ω).
Random Sample: {X1, X2, …, Xn}
The observed sample is a realization of {X1(ω1), X2(ω2), …, Xn(ωn)}.
Note: If simultaneous inference is to be made on the components of X, the probability statement should be viewed in terms of the probability of observing {ω1, ω2, …, ωn}.
Stochastic Process
{X(t), 0 ≤ t < ∞}
The observed value of this is one realization:
{X(t, ω), 0 ≤ t < ∞}
Can we describe a probability distribution of {X(t), 0 ≤ t < ∞}?
The Kolmogorov Consistency Theorem says that such a probability distribution can be described.
[Figure: three realizations of the process with X(0) = 0]
Discrete time points
{X(t), t = …, −2, −1, 0, 1, 2, …}
If this process is stationary, then a probability model for X(t) can be described in a concise way. For example,
X(t) = ρ X(t − 1) + ε(t) = ∑_{j=0}^{∞} ρ^j ε(t − j),
where {ε(t)} is white noise.
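As a minimal sketch of this stationary AR(1) model, the code below simulates realizations under the assumption of Gaussian white noise; the values ρ = 0.7, n = 200, and the seeds are arbitrary choices for illustration.

```python
import numpy as np

def simulate_ar1(rho, n, sigma=1.0, seed=None):
    """Simulate one realization of X(t) = rho*X(t-1) + eps(t), with X(0) = 0."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=n)   # white noise
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

# Three realizations with X(0) = 0, as in the figure above
paths = [simulate_ar1(rho=0.7, n=200, seed=s) for s in (1, 2, 3)]
```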
Image Process:
{X(s), s ∈ S}, where S is the set of all pixels.
Note that what we observe is a realization of this:
{X(s, ω), s ∈ S}
The same can be said about a weather map.
Data Analysis
Generally speaking, we perform one or more of the following tasks in data analysis (statistical inference):
• Estimate the model
• Hypothesis testing
• Predictive analysis
Given the sample data, the objective is to make inferences about the population described by the probability model. All inferences are based on the assumed probability model.
Estimation
θ̂ - Estimate of θ
Think of estimating any parameter of a probability model. For example, estimating β0 and β1 of a regression model
y = β0 + β1 x + ε
How good is the estimate θ̂?
Well, you might say that if θ̂ ≈ θ, it is a good estimate.
Not so simple! Note that θ is unknown.
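As a minimal sketch of this regression example, the code below estimates β0 and β1 by least squares; the simulated data and the true values β0 = 1, β1 = 2 are assumptions made only so the estimates can be checked.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, size=100)  # y = b0 + b1*x + eps

# Least-squares estimates via the normal equations
X = np.column_stack([np.ones_like(x), x])       # design matrix [1, x]
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                                  # approximately [1.0, 2.0]
```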
Frequentist's Interpretation
Note that θ̂ depends on the sample we observe.

Sample        θ̂      |θ̂ − θ|     θ̃      |θ̃ − θ|
observed 1    --      --          --      --
observed 2    --      --          --      --
⋮             ⋮       ⋮           ⋮       ⋮

θ̂ is better than θ̃ if the average of (θ̂ − θ)² is smaller than the average of (θ̃ − θ)², i.e.,
E(θ̂ − θ)² < E(θ̃ − θ)² for all θ.
θ̂ is better than θ̃ if
E(θ̂ − θ)² < E(θ̃ − θ)² for all θ.
A best estimate, in this sense, is of course not possible. If θ̂0 ≡ θ0 irrespective of the observed sample, then
E(θ̂0 − θ)² = 0 for θ = θ0.
We therefore restrict attention to a class of estimators, and then try to find the best estimate within this class. For example, we may consider the class of all unbiased estimators.
Theories are well developed for achieving best estimates within the class of unbiased estimates for simple probability models.
For complicated models, we can always fall back on maximum likelihood estimates: obtain the estimate by maximizing the likelihood function
L(θ | x1, x2, …, xn) = Pr(x1, x2, …, xn | θ)
For small sample size n, this may not always yield a good estimate, but for large sample size n it generally yields optimal estimates.
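A minimal sketch of maximum likelihood, assuming for illustration a N(μ, σ²) sample; the estimate is obtained by numerically minimizing the negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=50)   # sample from N(5, 4)

def neg_log_likelihood(params, x):
    mu, log_sigma = params                        # log-parametrize sigma > 0
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                   - (x - mu)**2 / (2 * sigma**2))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(data,))
mu_mle, sigma_mle = res.x[0], np.exp(res.x[1])    # close to 5 and 2
```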
Asymptotic Optimality of the Maximum Likelihood Estimate
{θ̂n} - a sequence of asymptotically normal estimates:
Σn^(−1/2) (θ̂n − θ) →d N(0, I) as n → ∞
Σn(θ) can be interpreted as the asymptotic variance of {θ̂n}.
Σn(θ) ≥ In(θ)^(−1),   In(θ) - Fisher Information Matrix
Under regular probability models, the maximum likelihood estimates {θ̂n^(ML)} achieve the lower bound.
Bayesian Interpretation
Prior Distribution - π(θ)
Through this we might say that some values of θ are more likely than other values.
θ̂ is better than θ̃ if
∫ E(θ̂ − θ)² π(θ) dθ < ∫ E(θ̃ − θ)² π(θ) dθ.
A best estimate is now possible; for example,
θ̂B = E(θ | Data)
The RHS is the expectation with respect to the posterior distribution of θ.
Prior Distribution - π(θ)
Really? Where did it come from?
You may not believe this, but we are really talking in terms of a statistical philosophy.
Can you really believe that the true state of nature θ is random?
[Figure: the two SBP distributions again, the normal population with mean μ1 and the cancer population with mean μ2]
μ1 and μ2 are supposed to be the fixed mean SBPs of the normal and cancer populations. Now we are saying that they are random.
Bayesian Paradigm
θ is never a fixed value; under most circumstances some values of θ are more likely than other values.
Before the data are analyzed, we should explore this prior, and then update it based on the information provided by the data.
Prior: π(θ)
Data: P(Data | θ)
Posterior: π(θ | Data)
All information about θ is contained in the posterior.
Example:
1 in 1,000 in the population carries a particular genetic disorder. Certain tests are performed on a person, and data are collected.
Data: {x1, x2, …, xn}
Prior: π(+) = 1/1000
Posterior:
π(+ | Data) = π(+) P(Data | +) / [π(+) P(Data | +) + π(−) P(Data | −)]
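A small numerical sketch of this posterior update; the test sensitivity and false-positive rate standing in for P(Data | +) and P(Data | −) are hypothetical values chosen only to illustrate the calculation.

```python
# Prior: 1 in 1,000 carries the disorder
prior_pos = 1 / 1000
prior_neg = 1 - prior_pos

# Hypothetical test characteristics (assumptions for illustration)
p_data_given_pos = 0.99    # P(Data | +)
p_data_given_neg = 0.02    # P(Data | -), the false-positive rate

# Posterior by Bayes' rule
numerator = prior_pos * p_data_given_pos
posterior_pos = numerator / (numerator + prior_neg * p_data_given_neg)
print(posterior_pos)   # about 0.047: still unlikely despite a positive test
```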
The main issues with Bayesian inference are:
(1) Appropriateness of the prior
(2) Computation of the posterior distribution
{X1, X2, …, Xn} - a random sample from N(μ, σ²)
Prior:
μ ~ N(μ0, σ² τ0)
(σ²)^(−1) ~ χ²m
This is a conjugate prior because the posterior distribution is of the same form as the prior distribution.
Is this prior appropriate?
Prior:
μ ~ N(μ0, σ² τ0)
(σ²)^(−1) ~ χ²m
If nothing is known about (μ, σ²), take τ0 → ∞, m = 1, μ0 = 0. This gives an almost flat prior for μ and σ². There are other ways to assign non-informative priors.
Note that if instead
Prior:
μ ~ N(μ0, τ0²)
(σ²)^(−1) ~ χ²m
then we will have the computational problem of computing the posterior distribution, since this prior is no longer conjugate.
Computation of the posterior
There are two popular techniques for computing the posterior distribution:
1. Metropolis-Hastings Algorithm
2. Gibbs Sampler
These techniques can be used effectively for complex probability models and reasonable priors.
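A minimal sketch of the Metropolis-Hastings algorithm for drawing from a posterior π(θ | Data); the target here is, purely for illustration, the unnormalized posterior of a normal mean under a flat prior, and the proposal is a Gaussian random walk.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=30)

def log_post(mu):
    # Unnormalized log posterior: flat prior + N(mu, 1) likelihood
    return -0.5 * np.sum((data - mu) ** 2)

def metropolis_hastings(n_draws=5000, step=0.5):
    mu = 0.0
    draws = np.empty(n_draws)
    for i in range(n_draws):
        prop = mu + rng.normal(0.0, step)            # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
            mu = prop                                # accept the proposal
        draws[i] = mu                                # else keep current value
    return draws

posterior_draws = metropolis_hastings()
print(posterior_draws[1000:].mean())   # close to the sample mean of the data
```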
Frequentist vs. Bayesian

Frequentist:
• All data information is contained in the likelihood function.
• Estimates are viewed in terms of how they behave on the average.
• Estimates are generally obtained by maximizing the likelihood function. Techniques include Newton-Raphson and the EM algorithm.

Bayesian:
• All data information is contained in the likelihood function and the prior.
• Estimates are viewed in terms of where they are located in the posterior.
• Estimates are obtained from the posterior. Techniques include the Gibbs Sampler, Metropolis-Hastings, etc.
Mixture Models
Suppose the population is a mixture of two or more populations:
yi = β0i + β1i xi + εi
(β0i, β1i) ~ mixture of normals
Bayesians have a better answer for estimating this model than frequentists do.
Hypothesis Testing
Think about how it started in the statistical literature.
Data: {X1, X2, …, Xn} drawn from a probability model.
H: a hypothesis associated with the probability model.
Does the data support this hypothesis?
Bayesians had an answer to this, namely π(H | Data), but they were not popular at the time.
p-value (Fisher)
{X1, X2, …, Xn} drawn from N(μ, σ²)
Hypothesis H: μ = μ0
Compute the test statistic t = (x̄ − μ0)/(s/√n).
p-value = Pr(observing a value t or more extreme | H)
If this p-value is very small (< 0.05), then the data provide very little evidence in support of the hypothesis.
Conclusion: Reject the hypothesis.
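A sketch of this p-value computation with scipy's one-sample t-test; the simulated SBP readings and the hypothesized value μ0 = 120 are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=130, scale=15, size=10)    # simulated SBP readings

t_stat, p_value = stats.ttest_1samp(data, popmean=120)   # H: mu = 120
if p_value < 0.05:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject the hypothesis")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: little evidence against H")
```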
Analysis of Variance (ANOVA)
ANOVA is one of the most popular statistical tools for analyzing data.
[Diagram: Factor 1, Factor 2, and Factor 3 feeding into Y, a response variable]
Does Y (the response) depend on any of the factors?
Example 1: You are doing research on mpg (miles per gallon) for a brand of automobiles.
Question: What affects mpg?
[Diagram: wind speed, air temperature, and air moisture feeding into mpg]
Do wind speed, air temperature, and air moisture affect mpg?
Example 2:
Research Question: Does blood pressure (BP) depend on weight and gender?
[Diagram: weight and gender feeding into BP]
There is variation in BP. Some is due to weight, and some is due to gender.
[Scatter plot: BP vs. weight, with separate symbols for female and male subjects]
Concept:
Variation(BP) = Variation(Weight) + Variation(Gender) + Variation(Error)
These variations can be described by sums of squares:
SS(BP) = SS(Weight) + SS(Gender) + SS(Error)
with the corresponding degrees of freedom decomposing as
df(BP) = df(W) + df(G) + df(E)
df is the degrees of freedom, which represents the effective number of terms in the sums of squares.
F-Statistics
Weight:
Test statistic F1 = [SS(Weight)/df(W)] / [SS(Error)/df(E)] = MS(W)/MS(E)
Hypothesis H: Weight is not a factor in BP.
p-value = P(observing a value more extreme than F1 | H)
If the p-value is small (< 0.05), then there is little evidence that weight is not a factor.
Gender:
Test statistic F2 = [SS(Gender)/df(G)] / [SS(Error)/df(E)] = MS(G)/MS(E)
The same can be done to see whether gender is a factor.
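A sketch of this BP example using statsmodels; the data frame with bp, weight, and gender columns is simulated under a hypothetical data-generating model, and anova_lm reports the sums of squares, F statistics, and p-values for each factor.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "weight": rng.normal(70, 12, size=n),
    "gender": rng.choice(["F", "M"], size=n),
})
# Hypothetical data-generating model, for illustration only
df["bp"] = 90 + 0.5 * df["weight"] + 5 * (df["gender"] == "M") \
           + rng.normal(0, 8, size=n)

model = smf.ols("bp ~ weight + C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # SS, df, F, and p for each factor
```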
Neyman-Pearson Lemma
Basis for classical hypothesis testing:
H0: Null hypothesis
Ha: Alternative hypothesis (research hypothesis)
TS: Test statistic
Decision rule
Conclusion
Type-I Error: False discovery
Type-II Error: False non-discovery
Devise a decision rule so that
α = Pr(False Discovery)
is very small (= 0.05). Through the Neyman-Pearson Lemma, a most powerful decision rule can be obtained.
H0: μ = μ0
The uniformly most powerful unbiased decision rule rejects when
|X̄ − μ0| > c,
where c is such that
Pr(|X̄ − μ0| > c) = 0.05.
Note that this is a frequentist method, since the probability statement is interpreted in a frequentist manner.
Likelihood Approach
The Neyman-Pearson Lemma works only for simple probability models. More generally, use the test statistic
−2 log λ = 2(max log L − max_H log L),
where the second maximum is taken under the hypothesis H. If the hypothesis H is correct, −2 log λ should be close to 0. Thus, we reject the hypothesis H if
−2 log λ > c
The cut-off point c can be obtained through the asymptotic distribution of −2 log λ, which is usually χ².
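A minimal sketch of this likelihood ratio test for H: μ = μ0 in a N(μ, σ²) model, assuming σ = 1 known so that the unrestricted maximum is at the sample mean; the cutoff comes from the asymptotic χ²(1) distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(0.5, 1.0, size=40)
mu0 = 0.0

def log_lik(mu):
    # log L for N(mu, 1), dropping additive constants
    return -0.5 * np.sum((data - mu) ** 2)

lr = 2 * (log_lik(data.mean()) - log_lik(mu0))   # -2 log lambda
cutoff = stats.chi2.ppf(0.95, df=1)              # asymptotic cutoff c
print(lr, lr > cutoff)                           # True => reject H
```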
Model Selection
Suppose you want to choose one model out of several. This is a type of multiple-hypotheses problem.
Regression: y = β0 + β1 x1 + β2 x2 + ⋯ + βp xp + ε
Not all predictors x1, x2, …, xp are significant, and you want to select the set of significant predictors. This can be viewed as selecting one of several models Mj, j = 1, 2, …, J:
−2 log λ_Mj = 2(max log L − max_Mj log L)
Choose the model that yields the smallest −2 log λ_Mj.
This yields a biased selection, meaning that a model with a higher number of parameters has a better chance of being selected. The AIC and BIC information criteria adjust for this:
AIC = 2 max_Mj log L − 2 × (# of parameters in Mj)
BIC = 2 max_Mj log L − log n × (# of parameters in Mj)
Select the model with the highest value of AIC (or BIC).
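A sketch of this selection rule for two nested Gaussian regression models, using the slide's sign convention (larger AIC/BIC is better); the data and candidate models are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1 + 2 * x1 + rng.normal(size=n)       # x2 is actually irrelevant

def max_log_lik(X, y):
    """Maximized Gaussian log-likelihood of a linear model with design X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)       # MLE of the error variance
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)

models = {
    "M1: x1":      np.column_stack([np.ones(n), x1]),
    "M2: x1 + x2": np.column_stack([np.ones(n), x1, x2]),
}
for name, X in models.items():
    k = X.shape[1] + 1                    # parameters incl. error variance
    ll = max_log_lik(X, y)
    aic = 2 * ll - 2 * k                  # slide's convention: larger is better
    bic = 2 * ll - np.log(n) * k
    print(name, round(aic, 1), round(bic, 1))   # M1 should win
```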
Bayesian Hypothesis Testing
Data: {X1, X2, …, Xn} drawn from N(μ, σ²)
Hypothesis H: μ = μ0
Prior: π0 = Pr(μ = μ0), 1 − π0 = Pr(μ ≠ μ0)
Posterior: π = Pr(μ = μ0 | Data), 1 − π = Pr(μ ≠ μ0 | Data)
Bayes Factor:
BF = [π/(1 − π)] / [π0/(1 − π0)]
If this Bayes factor is large (BF ≥ 20), the data have sufficient evidence to support the hypothesis H: μ = μ0.
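A small numerical sketch of the Bayes factor as the ratio of posterior odds to prior odds; the prior and posterior probabilities are hypothetical values for illustration.

```python
# Hypothetical probabilities, for illustration only
pi0 = 0.5           # prior:     Pr(mu = mu0)
pi = 0.96           # posterior: Pr(mu = mu0 | Data)

posterior_odds = pi / (1 - pi)
prior_odds = pi0 / (1 - pi0)
bf = posterior_odds / prior_odds
print(bf, bf >= 20)   # BF = 24: sufficient evidence for H: mu = mu0
```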
Frequentist vs. Bayesian
Note that both p-values and classical hypothesis tests are frequentist, since the statements are made in terms of frequentist probability:
p-value = Pr(observing a value t or more extreme | H)
α = Pr(False Discovery) = Pr(|X̄ − μ0| > c)
The Bayes factor is used in Bayesian tests, which are based on the posterior probability Pr(H | Data).
Multiple Hypotheses:
Consider 1000 independent tests, each at a Type-I error rate of α = 0.05. Then 5% of the true null hypotheses would be falsely rejected. In other words, if 50 of the hypotheses were rejected, there is no guarantee that they were not all falsely rejected.
FWER: m = # of hypotheses
π = P(one or more falsely rejected hypotheses) = 1 − (1 − α)^m
α = 1 − (1 − π)^(1/m) ≈ π/m   (Bonferroni Correction)
If m is large, α would be very small. Thus the power of detecting any true positive would be very small.
Sequential Bonferroni Corrections:
Let p[1] ≤ p[2] ≤ … ≤ p[m] be the ordered p-values of independent tests with corresponding null hypotheses H(1), H(2), …, H(m).
Holm's Method (Holm, 1979; Scand. J. Statist.)
• If p[1] > π/m, accept all nulls.
• If p[1] ≤ π/m, reject H(1); then if p[2] > π/(m − 1), accept the rest of the nulls.
• Continue until the first j such that p[j] > π/(m − j + 1). In that case, reject all H(i), i ≤ j − 1, and accept the rest of the nulls.
Simes' Method (Biometrika, 1986):
• If p[m] ≤ π, reject all nulls.
• If not, but if p[m−1] ≤ π/2, reject all H(i), i = 1, 2, …, m − 1.
• Continue until the first i with p[i] ≤ π/(m − i + 1). In that case, reject all H(j), j = 1, 2, …, i.
Note: Both Holm's and Simes' methods are designed to control the FWER.
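A sketch of both stepwise procedures, following the slide's notation with π as the FWER target; the p-values are hypothetical.

```python
import numpy as np

def holm_reject(pvals, fwer=0.05):
    """Holm step-down: returns a boolean rejection mask."""
    m = len(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(order):
        if pvals[idx] <= fwer / (m - step):    # threshold pi/(m - j + 1)
            reject[idx] = True
        else:
            break                              # accept this and all remaining
    return reject

def simes_reject(pvals, fwer=0.05):
    """Simes step-up: returns a boolean rejection mask."""
    m = len(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for i in range(m, 0, -1):                  # i = m, m-1, ..., 1
        if pvals[order[i - 1]] <= fwer / (m - i + 1):
            reject[order[:i]] = True           # reject the i smallest
            break
    return reject

p = np.array([0.001, 0.008, 0.039, 0.041, 0.20])
print(holm_reject(p), simes_reject(p))
```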
False Discovery Rate (FDR): Benjamini and Hochberg (1995), JRSS-B
When the number of hypotheses m is very large (say, in the thousands), and each individual hypothesis is not important, the FWER criterion is not very useful, since it yields few discoveries. For example, in a microarray data analysis the objective is to detect potential genes for future exploration; here each individual gene is not important. In such cases, tests with a controlled FWER would yield few discoveries.
FDR = Expected proportion of false rejections.

                      Accept Null    Reject Null    Total
True Null             U              V              m0
True Alternatives     T              S              m − m0
Total                 m − R          R              m

FDR = E[V/R], where V/R = 0 if R = 0
    = E[V/R | R > 0] P(R > 0)
Note that FWER = P(V > 0), the probability of at least one false rejection.
Benjamini and Hochberg proved that the following procedure produces FDR ≤ q:
Let k be the largest integer i such that p[i] ≤ (i/m) q; then reject all H(j), j = 1, 2, …, k.
The result was proved under the assumption of independent test statistics. It was later extended to positively correlated test statistics by Benjamini and Yekutieli (2001, Ann. Stat.).
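A sketch of the Benjamini-Hochberg step-up procedure at FDR level q; the p-values are hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up: reject the k smallest p-values, where k is the
    largest i with p[i] <= (i/m) * q."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = np.asarray(pvals)[order]
    below = np.nonzero(sorted_p <= (np.arange(1, m + 1) / m) * q)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size > 0:
        k = below[-1] + 1                  # largest qualifying rank
        reject[order[:k]] = True           # reject the k smallest p-values
    return reject

p = np.array([0.001, 0.008, 0.039, 0.041, 0.20])
print(benjamini_hochberg(p, q=0.05))
```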
Bayesian Interpretation (Storey, 2003, Ann. Stat.)
pFDR = E[V/R | R > 0]
H0i: θi = 0 vs. Hai: θi ≠ 0,  i = 1, 2, …, m
Let Ti be test statistics that reject H0i if Ti > c, where Ti, i = 1, 2, …, m, are independently distributed.
If θ1, θ2, …, θm are i.i.d. with p = P(θi ∈ H0i) > 0, then
pFDR = P(H0i | Ti > c)
Note: pFDR is a posterior version of the Type-I error.
Directional Hypothesis Problem (three-decision problem):
Suppose H0i: θi = 0 is rejected, but it is also important to find the direction of θi, i.e., θi < 0 or θi > 0. So the problem is to find subsets S− and S+ of {1, 2, …, m} such that
S− = {i : θi < 0} and S+ = {i : θi > 0}
Example: Gene selection
When genes are altered under an adverse condition, such as cancer, the affected genes show under- or over-expression in a microarray.
Xi - expression level, Xi ~ P(θi, α)
H0i: θi = 0 vs. H−i: θi < 0 or H+i: θi > 0
The objective is to find the genes with under-expression and the genes with over-expression.
Directional Error (Type III error):
A Type III error is defined as P(selection of the false direction when the null is rejected).
The traditional method does not control the directional error. For example, if we reject when |t| > t(α/2) and declare θ > 0 when t > t(α/2), an error occurs if in fact θ < 0.
Sarkar and Zhou (2008, JSPI); Finner (1999, Ann. Stat.); Shaffer (2002, Psychological Methods); Lehmann (1952, Ann. Math. Stat.; 1957, Ann. Math. Stat.)
The main point of these works is that if the objective is to find the true direction of the alternative after rejecting the null, then a Type III error must be controlled instead of the Type I error.
Bayesian Decision-Theoretic Framework
H0i: θi = θ0 (say, 0), H−i: θi < θ0, H+i: θi > θ0
Suppose θ1, θ2, …, θm are generated from
π(θ) = p− π−(θ) + p0 π0(θ) + p+ π+(θ),
where
π−(θ) = g−(θ) I(θ < 0),  π0(θ) = I(θ = 0),  π+(θ) = g+(θ) I(θ > 0)
g−(·) - a density with support contained in (−∞, 0)
g+(·) - a density with support contained in (0, ∞)
g− and g+ could be truncated densities of a density on θ.
The skewness in the prior is introduced by (p−, p0, p+):
• p− < p+ reflects that the right tail is more likely than the left tail.
• p− = 0 (or p+ = 0) would yield a one-tail test.
• p− = p+ with g− and g+ as truncations of a symmetric density would yield a two-tail test.
• p− and p+ can be assigned based on which tail is more important.
Loss Function
L(θ, d) = ∑_{i=1}^{m} Li(θi, di)
where di = (d−i, d0i, d+i) takes the values
(1, 0, 0) for selecting H−i
(0, 1, 0) for selecting H0i
(0, 0, 1) for selecting H+i
Let δi = (δ−i, δ0i, δ+i) be a randomized rule.
The average risk for a decision rule δ = (δ1, …, δm) is given by
rδ(π) = p− r−δ(π−) + p0 r0δ(0) + p+ r+δ(π+)
where
r−δ(π−) = ∑i ∫_{θi<0} R(θi, δi) π−(θi) dθi
r+δ(π+) = ∑i ∫_{θi>0} R(θi, δi) π+(θi) dθi
r0δ(0) = ∑i R(0, δi)
For a fixed prior π, decision rules can be compared by comparing the space
S(π) = {(r−δ(π−), r0δ(0), r+δ(π+)) : δ ∈ D*}
Consider the class of all rules δ for which R(0, δ) is the same.
[Figure: the risk pairs (r−δ(π), r+δ(π)) for rules in this class; the Bayes rule lies on the lower boundary, where it is touched by a line whose slope depends on p− and p+ (drawn for p− > p+)]
Remark: This theorem implies that if a priori it is known that H+i is more likely than H−i (p+ > p−), then the average risk of the Bayes rule in the positive direction will be smaller than the average risk in the negative direction.
For the "0-1" loss, this would mean that the expected number of falsely detected genes in the positive direction would be less than the expected number of falsely detected genes in the negative direction.