In machine learning and signal detection there are many methods related to Bayes, and it is easy to get confused and lose sight of the essentials. I have recently been learning the Bayes classifier, which needs Bayesian decision theory as its foundation, so I am recording and organizing that foundation here.
As the name suggests, Bayesian decision theory uses probability to make decisions; it is a probabilistic approach, belonging to the statistical-learning branch of machine learning.
This post discusses the ideas behind Bayesian decision making and derives them step by step, so that you can see what the approach rests on and what it aims to achieve.
( One ) Taking a multi-class task as the example
This post uses a multi-class classification task as the example to explain how to make decisions (classifications) with probabilities.
Suppose there are $N$ class labels, namely $y \in \{c_1, c_2, \ldots, c_N\}$;
$\lambda_{ij}$ is the loss incurred by misclassifying as $c_i$ a sample whose true label is $c_j$;
Based on the posterior probability $p(c_i|x)$, we can obtain the expected loss of classifying sample $x$ as $c_i$, that is, the "conditional risk" on sample $x$:

$$R(c_i|x)=\sum_{j=1}^N \lambda_{ij}\, p(c_j|x) \tag{1}$$
Multiply the probability that $x$ belongs to class $j$ by the loss of misjudging class $j$ as class $i$, then take the expectation over all $j$: that is the conditional risk of classifying $x$ as class $i$.
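To make equation (1) concrete, here is a minimal Python sketch; the loss matrix and posterior values are made-up illustrative numbers, not taken from the text or any dataset:

```python
import numpy as np

# Hypothetical loss matrix: lam[i, j] is the loss of classifying as c_i
# a sample whose true class is c_j (made-up numbers, 0 on the diagonal).
lam = np.array([[0.0, 1.0, 2.0],
                [1.0, 0.0, 1.0],
                [2.0, 1.0, 0.0]])

# Hypothetical posterior probabilities p(c_j | x) for one sample x.
posterior = np.array([0.7, 0.2, 0.1])

# Equation (1): R(c_i | x) = sum_j lam[i, j] * p(c_j | x), for every i at once.
conditional_risk = lam @ posterior
print(conditional_risk)   # -> [0.4 0.8 1.6], one risk per candidate class
```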
Bayesian methods like to call the loss "risk"; they are really the same thing. The smaller, the better, and the goal of optimization is to minimize the risk/loss.
If you have studied stochastic processes, (1) is just a conditional expectation; to get the unconditional expectation you take the expectation of the conditional expectation again (which is admittedly a mouthful).
Equation (1) takes a statistical average (expectation) over $j$ and gives the conditional risk of sample $x$. Next, we take the statistical average over all samples to obtain the overall Bayesian decision risk:
$$R(h)=\mathbb{E}_x\big[R(h(x)\mid x)\big]\tag{2}$$
Here $h$ is the decision criterion we want to solve for, i.e. the mapping $h: X \to y$, where $X$ is the sample space and $y$ is the set of class labels.
I find that once probability and expectation formulas are involved, things become hard to explain clearly to others. In fact, if you understand the very basic ideas of posterior, likelihood, and conditional mean, (1) and (2) are simple and easy to understand, yet still admirable.
Equation (2) is mainly of theoretical significance; after all, averaging over all samples is hard to carry out. Conceptually it multiplies the probability of each sample occurring by its conditional risk, then sums over all samples.
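As a rough sketch of (2) with invented numbers, the overall risk can be approximated empirically by averaging the conditional risk of the decision $h(x)$ over a finite set of samples:

```python
import numpy as np

lam = np.array([[0.0, 1.0],      # illustrative 2-class loss matrix
                [1.0, 0.0]])

# Invented posteriors p(c_j | x) for a handful of samples (one row per sample).
posteriors = np.array([[0.9, 0.1],
                       [0.4, 0.6],
                       [0.2, 0.8]])

def h(p):
    """An arbitrary decision rule mapping a posterior vector to a class index."""
    return int(np.argmax(p))

# Equation (2), approximated on a finite sample set:
# R(h) ~ (1/m) * sum over samples of R(h(x) | x)
risks = [lam[h(p)] @ p for p in posteriors]
print(np.mean(risks))
```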
To minimize the overall risk, we in fact only need to minimize the conditional risk on each sample.
So the problem becomes: find a decision criterion $h$ that makes $R(h(x)|x)$ as small as possible:

$$\min_h R(h(x)\mid x)$$
Of course, the best choice is the category with the smallest conditional risk!
So the decision criterion $h$ is simply: directly choose the category with the smallest conditional risk:
$$h^*=\arg\min_{c\in y} R(c\mid x)\tag{3}$$
This decision rule $h^*$ is called the Bayes optimal classifier. The optimal solution minimizes the conditional risk of each sample, so the overall risk over all samples is minimized and the total loss is the smallest.
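Below is a minimal sketch of the Bayes optimal classifier in (3), assuming the posteriors are already known (estimating them is exactly the topic of section ( Two )); the asymmetric loss matrix is invented to show that the risk-minimizing class need not be the most probable one:

```python
import numpy as np

def bayes_optimal(posterior, lam):
    """Equation (3): return the class index with the smallest conditional risk."""
    conditional_risk = lam @ posterior   # R(c_i | x) for every candidate class
    return int(np.argmin(conditional_risk))

# Illustrative numbers: misclassifying a true c_2 sample as c_1 is very costly.
lam = np.array([[0.0, 5.0],
                [1.0, 0.0]])
posterior = np.array([0.8, 0.2])

print(bayes_optimal(posterior, lam))   # -> 1, even though argmax(posterior) is 0
```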
For models seen earlier, such as least squares and SVM, the idea behind solving them is also to minimize a loss. But the least-squares loss is the sum of squared errors, and SVM is built on maximizing the margin, whereas here conditional probabilities (to be exact, posterior probabilities) play the role of the loss, which is a completely different idea.
Going further, let us make the classification loss $\lambda_{ij}$ introduced at the beginning concrete. If the purpose of training is to minimize the classification error rate, $\lambda_{ij}$ can be defined as:
$$\lambda_{ij}=\begin{cases}0, & i=j\\ 1, & \text{otherwise}\end{cases}\tag{4}$$
Substituting into (1), we find:

$$R(c_i\mid x)=1-p(c_i\mid x)$$

because $\sum_{j=1}^N p(c_j\mid x)=1$.
That is, the conditional risk on $x$ is 1 minus the posterior probability. (This holds only under the 0-1 decision loss in (4), mind you.)
Therefore, the problem reduces further from minimizing the conditional risk to maximizing the posterior probability.
In other words, the decision criterion $h$ becomes: take the maximum posterior probability.
$$h^*=\arg\max_{c\in y} p(c\mid x)$$
Therefore, for a new sample in the test phase, we only need to assign it to the category with the largest posterior probability to guarantee the lowest classification error rate.
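As a quick sanity check with invented numbers, under the 0-1 loss of (4) the class that minimizes the conditional risk is exactly the class that maximizes the posterior:

```python
import numpy as np

N = 4
lam01 = 1.0 - np.eye(N)                        # the 0-1 loss of equation (4)
posterior = np.array([0.1, 0.5, 0.3, 0.1])     # invented posteriors, sum to 1

risk = lam01 @ posterior                       # equals 1 - p(c_i | x) for each i
assert np.allclose(risk, 1.0 - posterior)
assert np.argmin(risk) == np.argmax(posterior) # the two criteria pick the same class
print(risk)
```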
In conclusion, the original intent of Bayesian decision theory is to use the posterior probability and the decision loss to represent the decision risk of each sample, and then, from the global view of minimizing the overall decision risk, to find that this is equivalent to minimizing the conditional risk of each sample. What the conditional risk of each sample looks like mathematically depends on how the loss is made concrete; with the simplest decision loss (4), minimizing the conditional risk further reduces to maximizing the posterior probability of each sample. After this series of shifts in perspective, we obtain the optimal decision criterion under this kind of decision loss: for each sample, choose the class label with the largest posterior probability.
( Two ) How do we obtain $P(c|x)$?
The first section explained the whole story in detail and finally arrived at a simple conclusion: assign a new sample to the category with the largest posterior probability.
Now a new problem arises: how do we compute the posterior probability of a new sample? How do we learn, from a limited training set, a model that estimates the posterior probability as accurately as possible? This is not a simple question, and two camps emerged from it: discriminative and generative.
* Discriminative models: directly model the posterior probability $p(c|x)$
* Generative models: model the joint probability $p(c,x)$ and obtain the posterior $p(c|x)$ from it (see the identity below)
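For reference, the generative route recovers the posterior from the joint distribution through the standard identity (Bayes' rule):

$$p(c\mid x)=\frac{p(c,x)}{p(x)}=\frac{p(c)\,p(x\mid c)}{p(x)}$$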