Chapter 6: Classification
Jilles Vreeken
IRDM '15/16
17 Nov 2015
IRDM Chapter 6, overview
1. Basic idea
2. Instance-based classification
3. Decision trees
4. Probabilistic classification

You'll find this covered in Aggarwal Ch. 10 and Zaki & Meira, Ch. 18, 19, (22).
Chapter 6.1:
The Basic Idea
Aggarwal Ch. 10.1-10.2
Definitions
Data for classification comes in tuples (x, y)
- vector x is the attribute (feature) set
  - attributes can be binary, categorical or numerical
- value y is the class label
  - we concentrate on binary or nominal class labels
  - compare classification with regression!
A classifier is a function that maps attribute sets to class labels, f(x) = y
Example training data (attribute set + class label):

TID  Home Owner  Marital Status  Annual Income  Defaulted Borrower
 1   Yes         Single          125K           No
 2   No          Married         100K           No
 3   No          Single           70K           No
 4   Yes         Married         120K           No
 5   No          Divorced         95K           Yes
 6   No          Married          60K           No
 7   Yes         Divorced        220K           No
 8   No          Single           85K           Yes
 9   No          Married          75K           No
10   No          Single           90K           Yes
Classification function as a black box

  Attribute set x  →  Classification function  →  Class label y
Descriptive vs. Predictive
In descriptive data mining the goal is to give a description of the data
- those who have bought diapers have also bought beer
- these are the clusters of documents from this corpus
In predictive data mining the goal is to predict the future
- those who will buy diapers will also buy beer
- if new documents arrive, they will be similar to one of the cluster centroids
The difference between predictive data mining and machine learning is hard to define
Descriptive vs. Predictive (continued)
(same slide, with the added note:)
In data mining we care more about insightfulness than prediction performance
Descriptive vs. Predictive
Who are the borrowers that will default?
- descriptive
If a new borrower comes, will they default?
- predictive
Predictive classification is the usual application
- and what we concentrate on
(Training data: the defaulted-borrower table shown earlier.)
General classification framework
Classification model evaluation
Recall contingency tables: a confusion matrix is simply a contingency table between actual and predicted class labels

                           Predicted class
                           Class=1   Class=0
Actual class  Class=1      f_11      f_10
              Class=0      f_01      f_00

Many measures available; we focus on accuracy and error rate

accuracy   = (f_11 + f_00) / (f_11 + f_00 + f_10 + f_01)

error rate = (f_10 + f_01) / (f_11 + f_00 + f_10 + f_01)
           = Pr(f(x) ≠ y)
           = Pr(f(x) = 1, y = −1) + Pr(f(x) = −1, y = 1)
           = Pr(f(x) = 1 | y = −1) Pr(y = −1) + Pr(f(x) = −1 | y = 1) Pr(y = 1)

there's also precision, recall, F-scores, etc.
(here I use the f_ij notation to make clear we consider absolute counts;
in the wild f_ij can mean either absolute or relative, so pay close attention)
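As a small illustration, here is how accuracy and error rate follow from the four absolute counts of a binary confusion matrix. This is a minimal sketch; the function name and the example counts are made up for illustration.

```python
def accuracy_and_error(f11, f10, f01, f00):
    """Accuracy and error rate from absolute confusion-matrix counts.

    f11: actual 1 predicted 1, f10: actual 1 predicted 0,
    f01: actual 0 predicted 1, f00: actual 0 predicted 0.
    """
    total = f11 + f10 + f01 + f00
    accuracy = (f11 + f00) / total
    error_rate = (f10 + f01) / total
    return accuracy, error_rate

# example with made-up counts
print(accuracy_and_error(f11=40, f10=10, f01=5, f00=45))  # (0.85, 0.15)
```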
Supervised vs. unsupervised learning
In supervised learning
- training data is accompanied by class labels
- new data is classified based on the training set
- → classification
In unsupervised learning
- the class labels are unknown
- the aim is to establish the existence of classes in the data, based on measurements, observations, etc.
- → clustering
Chapter 6.2:
Instance-based classification
Aggarwal Ch. 10.8
Classification per instance
Let us first consider the simplest effective classifier:
"similar instances have similar labels"
The key idea is to find instances in the training data that are similar to the test instance.
k-Nearest Neighbours
The most basic classifier is k-nearest neighbours.
Given a database D of labeled instances, a distance function d, and a parameter k: for a test instance z, find the k instances from D most similar to z, and assign z the majority label over this top-k.
We can make it more locally sensitive by weighing each neighbour by its distance δ:

  w(δ) = exp(−δ² / t²)
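A minimal sketch of k-NN with the optional distance weighting w(δ) = exp(−δ²/t²). The Euclidean distance, the function names, and the toy data are assumptions for illustration, not prescribed by the slide.

```python
import math
from collections import defaultdict

def knn_classify(train, test_point, k=3, t=1.0, weighted=False):
    """train: list of (x, label) with x a numeric tuple; returns the predicted label."""
    def dist(a, b):                        # Euclidean distance (assumed choice)
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    # the k training instances nearest to the test point
    neighbours = sorted(train, key=lambda xy: dist(xy[0], test_point))[:k]

    votes = defaultdict(float)
    for x, label in neighbours:
        d = dist(x, test_point)
        votes[label] += math.exp(-d * d / t ** 2) if weighted else 1.0
    return max(votes, key=votes.get)       # (weighted) majority label

# toy usage
data = [((0, 0), 'a'), ((0, 1), 'a'), ((5, 5), 'b'), ((6, 5), 'b')]
print(knn_classify(data, (1, 1), k=3))                         # 'a'
print(knn_classify(data, (4, 4), k=3, weighted=True, t=2.0))   # 'b'
```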
k-Nearest Neighbours, continued
kNN classifiers work surprisingly well in practice, provided we have ample training data and the distance function is chosen wisely.
How to choose k?
- odd, to avoid ties
- not too small, or it will not be robust against noise
- not too large, or it will lose local sensitivity
Computational complexity
- training is instant (there is nothing to learn)
- testing is slow, O(n) per test instance
Chapter 6.3:
Decision Trees
Aggarwal Ch. 10.3-10.4
Basic idea
We define the label by asking a series of questions about the attributes
- each question depends on the answer to the previous one
- ultimately, all samples with satisfying attribute values have the same label and we're done
The flow-chart of the questions can be drawn as a tree
We can classify new instances by following the proper edges of the tree until we reach a leaf
- decision tree leaves are always class labels
Example: training data

age     income  student  credit_rating  buys PS4
≤ 30    high    no       fair           no
≤ 30    high    no       excellent      no
30…40   high    no       fair           yes
> 40    medium  no       fair           yes
> 40    low     yes      fair           yes
> 40    low     yes      excellent      no
30…40   low     yes      excellent      yes
≤ 30    medium  no       fair           no
≤ 30    low     yes      fair           yes
> 40    medium  yes      fair           yes
≤ 30    medium  yes      excellent      yes
30…40   medium  no       excellent      yes
30…40   high    yes      fair           yes
> 40    medium  no       excellent      no
Example: decision tree

age?
  ≤ 30   → student?
             no  → no
             yes → yes
  31…40  → yes
  > 40   → credit rating?
             excellent → no
             fair      → yes
Hunt's algorithm
The number of decision trees for a given set of attributes is exponential.
Finding the most accurate tree is NP-hard.
Practical algorithms use greedy heuristics
- the decision tree is grown by making a series of locally optimal decisions on which attributes to use and how to split on them
Most algorithms are based on Hunt's algorithm
Hunt's algorithm
1. Let D_t be the set of training records for node t
2. Let y = {y_1, …, y_c} be the class labels
3. If D_t contains records that belong to more than one class:
   1. select an attribute test condition to partition the records into smaller subsets
   2. create a child node for each outcome of the test condition
   3. apply the algorithm recursively to each child
4. Else, if all records in D_t belong to the same class y_c, then t is a leaf node with label y_c
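A minimal sketch of Hunt's algorithm for categorical attributes. For brevity it simply splits on the first attribute that still has more than one value in D_t; a real implementation would pick the attribute with the best impurity gain, as the later slides describe. The data structure and helper names are assumptions.

```python
from collections import Counter

def hunt(records, attributes):
    """records: list of (attribute_dict, label). Returns a nested-dict tree
    {'attr': a, 'children': {value: subtree}} or a class label (leaf)."""
    labels = [y for _, y in records]
    if len(set(labels)) == 1:                      # all records share one class: leaf
        return labels[0]
    for a in attributes:                           # naive split choice (see gain slides)
        values = {x[a] for x, _ in records}
        if len(values) > 1:
            children = {}
            for v in values:                       # one child per outcome
                subset = [(x, y) for x, y in records if x[a] == v]
                children[v] = hunt(subset, [b for b in attributes if b != a])
            return {'attr': a, 'children': children}
    return Counter(labels).most_common(1)[0][0]    # no useful split left: majority leaf

def classify(tree, x):
    while isinstance(tree, dict):
        tree = tree['children'][x[tree['attr']]]
    return tree

# toy usage
records = [({'owner': 'yes', 'status': 'single'}, 'no'),
           ({'owner': 'no',  'status': 'single'}, 'yes'),
           ({'owner': 'no',  'status': 'married'}, 'no')]
tree = hunt(records, ['owner', 'status'])
print(classify(tree, {'owner': 'no', 'status': 'single'}))   # 'yes'
```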
Example: decision tree (step 1)
(Training data: the defaulted-borrower table from before.)

Initial tree: a single node, Defaulted = No
- the data has multiple labels; the best (majority) label is "no"
Example: decision tree (step 2)
(Same training data.)

Home owner?
  yes → No    (only one label)
  no  →       (has multiple labels, needs further splitting)
Example: decision tree (step 3)
(Same training data.)

Home owner?
  yes → No    (only one label)
  no  → Marital status?
          Married          → No   (only one label)
          Divorced, Single → Yes  (has multiple labels, needs further splitting)
Example: decision tree (final tree)
(Same training data.)

Home owner?
  yes → No
  no  → Marital status?
          Married          → No
          Divorced, Single → Annual income?
                               < 80K → No   (only one label)
                               ≥ 80K → Yes  (only one label)
Selecting the split
Designing a decision-tree algorithm requires answering two questions
1. How should we split the training records?
2. How should we stop the splitting procedure?
Splitting methods
Binary attributes, e.g.

  Body temperature?  →  warm-blooded | cold-blooded
Splitting methods
Nominal attributes, e.g. Marital status

Multiway split:
  Marital status → Single | Divorced | Married

Binary splits:
  Marital status → {Married} | {Single, Divorced}
  Marital status → {Single} | {Married, Divorced}
  Marital status → {Single, Married} | {Divorced}
Splitting methods
Ordinal attributes, e.g. Shirt size

  Shirt size → {Small, Medium} | {Large, Extra Large}
  Shirt size → {Small} | {Medium, Large, Extra Large}
  Shirt size → {Small, Large} | {Medium, Extra Large}
    (note: this last grouping does not preserve the order of the values)
Splitting methods
Numeric attributes, e.g. Annual income

Binary split:
  Annual income > 80K?  →  Yes | No

Multiway split:
  Annual income  →  < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | ≥ 80K
Selecting the best split
Let p(i | t) be the fraction of records of class i in node t
The best split is selected based on the degree of impurity of the child nodes
- p(0 | t) = 0 and p(1 | t) = 1 has high purity
- p(0 | t) = 1/2 and p(1 | t) = 1/2 has the smallest purity
Intuition: high purity ⇒ better split
Example of purity

Gender split:                  Car Type split:
  Male:   C0: 6, C1: 4           Family: C0: 1, C1: 3
  Female: C0: 4, C1: 6           Sports: C0: 8, C1: 0
                                 Luxury: C0: 1, C1: 7
  low purity                     high purity
Impurity measures

  Entropy(t) = − Σ_{c_i ∈ C} p(c_i | t) log2 p(c_i | t)

  Gini(t) = 1 − Σ_{c_i ∈ C} p(c_i | t)²

  Classification error(t) = 1 − max_{c_i ∈ C} p(c_i | t)
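The three measures as small Python functions over the class fractions p(c_i | t) of a node. A minimal sketch; the function names are mine, and `fractions` is assumed to sum to 1.

```python
import math

def entropy(fractions):
    """Entropy(t) = -sum p(c|t) * log2 p(c|t); 0 * log 0 is treated as 0."""
    return -sum(p * math.log2(p) for p in fractions if p > 0)

def gini(fractions):
    """Gini(t) = 1 - sum p(c|t)^2."""
    return 1 - sum(p * p for p in fractions)

def classification_error(fractions):
    """Error(t) = 1 - max p(c|t)."""
    return 1 - max(fractions)

# the pure and the maximally impure binary node from the previous slide
for node in [(0.0, 1.0), (0.5, 0.5)]:
    print(node, entropy(node), gini(node), classification_error(node))
    # (0.0, 1.0): all three are 0;  (0.5, 0.5): 1.0, 0.5, 0.5
```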
Comparing impurity measures

[Plot: Entropy, Gini, and classification Error as a function of p, for binary classification with p the probability of class 1 and (1 − p) the probability of class 2. All three are zero for pure nodes (p = 0 or p = 1) and maximal at p = 1/2; entropy lies above Gini, which lies above the error.]
Comparing conditions
The quality of the split is the change in impurity
- called the gain of the test condition

  Δ = I(parent) − Σ_{j=1}^{k} [ N(v_j) / N ] · I(v_j)

- I(·) is the impurity measure
- k is the number of attribute values
- parent is the parent node, v_j is the j-th child node
- N is the total number of records at the parent node
- N(v_j) is the number of records associated with child node v_j
Maximizing the gain ⇔ minimising the weighted average impurity of the child nodes
If I(·) = Entropy(·), then Δ = Δ_info is called the information gain
Example: computing gain
Split into two children with Gini 0.4898 (7 records) and Gini 0.480 (5 records):

  weighted average impurity = (7 × 0.4898 + 5 × 0.480) / 12 = 0.486
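A sketch that reproduces the weighted-average computation above from raw class counts. The child class counts (4, 3) and (2, 3) and the parent counts (6, 6) are assumptions that are consistent with the Gini values on the slide, not given by it.

```python
def gini(counts):
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def gini_gain(parent_counts, children_counts):
    """Delta = Gini(parent) - sum_j N(v_j)/N * Gini(v_j)."""
    n = sum(parent_counts)
    weighted = sum(sum(ch) / n * gini(ch) for ch in children_counts)
    return gini(parent_counts) - weighted

# assumed counts giving the Ginis from the slide: 7 and 5 records per child
children = [(4, 3), (2, 3)]
print([round(gini(ch), 4) for ch in children])   # [0.4898, 0.48]
print(round(gini_gain((6, 6), children), 4))     # ~0.014  (= 0.5 - 0.486)
```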
Problem of maximising Δ

Gender split:             Car Type split:           Customer ID split:
  Male:   C0: 6, C1: 4      Family: C0: 1, C1: 3      v_1: C0: 1, C1: 0
  Female: C0: 4, C1: 6      Sports: C0: 8, C1: 0      v_2: C0: 1, C1: 0
                            Luxury: C0: 1, C1: 7      v_3: C0: 0, C1: 1
                                                      …
                                                      v_n: C0: 1, C1: 0

                                         → higher purity towards the right
Stopping splitting
Stop expanding when all records belong to the same class
Stop expanding when all records have similar attribute values
Early termination
- e.g. when the gain ratio drops below a certain threshold
- keeps trees simple
- helps against overfitting
Problems of maximising Δ
Impurity measures favour attributes with many values
Test conditions with many outcomes may not be desirable
- the number of records in each partition is too small to make predictions

Solution 1: gain ratio
- gain ratio = Δ_info / SplitInfo
- SplitInfo = − Σ_{i=1}^{k} P(v_i) log2 P(v_i)
- P(v_i) is the fraction of records at child v_i; k = total number of splits
- used e.g. in C4.5

Solution 2: restrict the splits to binary
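Split info and gain ratio as a small sketch. Here `child_sizes` are the record counts of the children and `delta_info` is the information gain computed as on the previous slides; both names are assumptions.

```python
import math

def split_info(child_sizes):
    """SplitInfo = -sum P(v_i) log2 P(v_i), with P(v_i) the fraction of records in child v_i."""
    n = sum(child_sizes)
    return -sum((s / n) * math.log2(s / n) for s in child_sizes if s > 0)

def gain_ratio(delta_info, child_sizes):
    return delta_info / split_info(child_sizes)

# a 2-way split is penalised far less than a 12-way split of the same 12 records
print(split_info([7, 5]))      # ~0.98
print(split_info([1] * 12))    # ~3.58: many tiny children get a large penalty
```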
Geometry of single-attribute splits
Decision boundaries are always axis-parallel for single-attribute splits
Geometry of single-attribute splits
Seems easy to classify, but… how to split?
Combatting overfitting
Overfitting is a major problem with all classifiers
As decision trees are parameter-free, we need to stop building the tree before overfitting happens
- overfitting makes decision trees overly complex
- the generalization error will be big
In practice, to prevent overfitting, we
- use separate test/train data
- perform cross-validation
- use model selection (e.g. MDL)
- or simply choose a minimum number of records per leaf
Handling overfitting
In pre-pruning we stop building the decision tree when a stopping criterion is satisfied
In post-pruning we trim a full-grown decision tree
- from the bottom up, try replacing a decision node with a leaf
- if the generalization error improves, replace the sub-tree with a leaf
- the new leaf node's class label is the majority label of the sub-tree
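A hedged sketch of the bottom-up post-pruning described above. It assumes the nested-dict tree representation from the Hunt's algorithm sketch and a held-out validation set for estimating the generalization error at each node; the function names and this exact protocol are assumptions.

```python
from collections import Counter

def predict(tree, x):
    """Follow the tree; unseen attribute values fall through and count as errors."""
    while isinstance(tree, dict):
        tree = tree['children'].get(x[tree['attr']])
    return tree

def postprune(tree, train, validation):
    """Bottom-up pruning: train / validation are lists of (attribute_dict, label)
    that reach this node."""
    if not isinstance(tree, dict) or not train or not validation:
        return tree
    a = tree['attr']
    # first prune each child, on the records that flow into it (bottom-up)
    for v in list(tree['children']):
        tr = [(x, y) for x, y in train if x[a] == v]
        va = [(x, y) for x, y in validation if x[a] == v]
        tree['children'][v] = postprune(tree['children'][v], tr, va)
    # candidate leaf: the majority label of the training records in this sub-tree
    leaf = Counter(y for _, y in train).most_common(1)[0][0]
    err_tree = sum(predict(tree, x) != y for x, y in validation)
    err_leaf = sum(leaf != y for _, y in validation)
    return leaf if err_leaf <= err_tree else tree
```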
Summary of decision trees
Fast to build
Extremely fast to use
Small ones are easy to interpret
- good for verification by a domain expert
- used e.g. in medicine
Redundant attributes are not (much of) a problem
Single-attribute splits cause axis-parallel decision boundaries
Requires post-pruning to avoid overfitting
Chapter 6.4:
Probabilistic classifiers
Aggarwal Ch. 10.5
Basic idea
Recall Bayes' theorem:

  Pr[Y | X] = Pr[X | Y] · Pr[Y] / Pr[X]

In classification
- random variable X is the attribute set
- random variable Y is the class variable
- Y depends on X in a non-deterministic way (assumption)
The dependency between X and Y is captured by Pr[Y | X] and Pr[Y]
- the posterior and the prior probability
Building a classifier
Training phase
- learn the posterior probabilities Pr[Y | X] for every combination of X and Y based on training data
Test phase
- for a test record X', we compute the class Y' that maximizes the posterior probability Pr[Y' | X']

  Y' = arg max_{c_i} Pr[c_i | X'] = arg max_{c_i} { Pr[X' | c_i] Pr[c_i] / Pr[X'] } = arg max_{c_i} { Pr[X' | c_i] Pr[c_i] }

So we need Pr[X' | c_i] and Pr[c_i]
- Pr[c_i] is easy: it is the fraction of training records that belong to class c_i
- Pr[X' | c_i], however…
Computing the probabilities
Assume that the attributes are conditionally independent given the class label; this is what makes the classifier naïve:

  Pr[X | Y = c_j] = Π_{i=1}^{d} Pr[X_i | Y = c_j]

where X_i is the i-th attribute.
Without independence there would be too many variables to estimate; with independence it is enough to estimate Pr[X_i | Y]

  Pr[Y | X] = Pr[Y] · Π_{i=1}^{d} Pr[X_i | Y] / Pr[X]

- Pr[X] is fixed, so it can be omitted
But how do we estimate the likelihood Pr[X_i | Y]?
Categorical attributes
If X_i is categorical, Pr[X_i = x_i | Y = c] is simply the fraction of training instances in class c that take value x_i on the i-th attribute

  Pr[Home Owner = yes | No]         = 3/7
  Pr[Marital Status = Single | Yes] = 2/3

(Training data: the defaulted-borrower table from before.)
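A small sketch computing these class-conditional fractions directly from the ten loan records of the table; the helper name `cond_prob` is an assumption.

```python
from collections import Counter

home_owner = ['Yes','No','No','Yes','No','No','Yes','No','No','No']
marital    = ['Single','Married','Single','Married','Divorced',
              'Married','Divorced','Single','Married','Single']
defaulted  = ['No','No','No','No','Yes','No','No','Yes','No','Yes']

def cond_prob(attr_values, labels, value, label):
    """Pr[X_i = value | Y = label]: fraction of the training records of that class."""
    in_class = [v for v, y in zip(attr_values, labels) if y == label]
    return Counter(in_class)[value] / len(in_class)

print(cond_prob(home_owner, defaulted, 'Yes', 'No'))      # 3/7 ≈ 0.43
print(cond_prob(marital,    defaulted, 'Single', 'Yes'))  # 2/3 ≈ 0.67
```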
Continuous attributes: discretisation
We can discretise continuous attributes into intervals
- these intervals act like ordinal attributes (because they are)
The problem is how to discretise
- too many intervals: too few training records per interval → unreliable estimates
- too few intervals: intervals merge ranges correlated with different classes, making distinguishing the classes more difficult (or impossible)
Continuous attributes, continued
Alternatively we assume a distribution
- normally we assume a normal distribution
We need to estimate the distribution parameters
- for the normal distribution, we use the sample mean and sample variance
- for estimation, we consider the values of attribute X_i that are associated with class c_j in the training data
We hope that the parameters of the distributions are different for different classes of the same attribute
- why?
Example: Naïve Bayes
(Training data: the defaulted-borrower table from before.)

Annual income:
  Class = No:  sample mean = 110, sample variance = 2975
  Class = Yes: sample mean = 90,  sample variance = 25

Test record: X = (Home Owner = No, Marital Status = Married, Annual Income = 120K)

  Pr[Yes] = 0.3,  Pr[No] = 0.7

  Pr[X | No]  = Pr[HO = No | No] × Pr[MS = Married | No] × Pr[AI = 120K | No]
              = 4/7 × 4/7 × 0.0072 = 0.0024
  Pr[X | Yes] = Pr[HO = No | Yes] × Pr[MS = Married | Yes] × Pr[AI = 120K | Yes]
              = 1 × 0 × ε = 0

With α = 1/Pr[X]:
  Pr[No | X] = α × Pr[No] × Pr[X | No] = α × 0.7 × 0.0024 = 0.0016 α
⇒ Pr[No | X] has the higher posterior, and X should hence be classified as a non-defaulter
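For completeness, a sketch that reproduces the numbers of this example: the Gaussian density for Annual Income uses the sample mean and variance above, the categorical fractions come from the table, and α = 1/Pr[X] is left out since it is the same for both classes. The function name `gaussian` is an assumption.

```python
import math

def gaussian(x, mean, var):
    """Normal density, used for the continuous Annual Income attribute."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

p_no, p_yes = 0.7, 0.3   # class priors from the 10 training records

# test record X = (Home Owner = No, Marital Status = Married, Annual Income = 120K)
px_no  = (4/7) * (4/7) * gaussian(120, mean=110, var=2975)   # ≈ 0.0024
px_yes = 1.0 * 0.0 * gaussian(120, mean=90,  var=25)         # = 0: no married defaulter

print(px_no, px_yes)
print('classify as', 'No' if p_no * px_no > p_yes * px_yes else 'Yes')   # 'No'
```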
Continuous distributions at a fixed point
If X_i is continuous, Pr[X_i = x_i | Y = c_j] = 0 !
- but we still need to estimate that number…
Self-cancelling trick:

  Pr[x_i − ε ≤ X_i ≤ x_i + ε | Y = c_j]
    = ∫_{x_i−ε}^{x_i+ε} (2πσ_ij²)^(−1/2) · exp( −(x − μ_ij)² / (2σ_ij²) ) dx
    ≈ 2ε · f(x_i ; μ_ij , σ_ij)

- but the 2ε cancels out in the normalization constant…
Zero likelihood
We might have no samples with X_i = x_i and Y = c_j
- naturally only a problem for categorical variables
- Pr[X_i = x_i | Y = c_j] = 0 ⇒ zero posterior probability
- it can happen that all classes have zero posterior probability for some data
The answer is smoothing (the m-estimate):

  Pr[X_i = x_i | Y = c_j] = (n_c + m·p) / (n + m)

- n = # of training instances from class c_j
- n_c = # of training instances from c_j that take value x_i
- m = "equivalent sample size"
- p = user-set parameter
More on Pr[X_i = x_i | Y = c_j] = (n_c + m·p) / (n + m)
The parameters are p and m
- if n = 0, then the likelihood is p
- p is a "prior" of observing x_i in class c_j
- parameter m governs the trade-off between p and the observed probability n_c / n
Setting these parameters is again problematic…
Alternatively, we just add one pseudo-count to each class:
- Pr[X_i = x_i | Y = c_j] = (n_c + 1) / (n + |dom(X_i)|)
- |dom(X_i)| = # of values attribute X_i can take
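The m-estimate and the add-one (Laplace) variant as a small sketch; n, n_c, m and p follow the notation on the slide, while the function names and the example values are mine.

```python
def m_estimate(n_c, n, p, m):
    """Pr[X_i = x_i | Y = c_j] = (n_c + m*p) / (n + m)."""
    return (n_c + m * p) / (n + m)

def laplace(n_c, n, n_values):
    """Add-one smoothing: (n_c + 1) / (n + |dom(X_i)|)."""
    return (n_c + 1) / (n + n_values)

# Pr[Marital Status = Married | Defaulted = Yes] was 0/3 without smoothing
print(m_estimate(n_c=0, n=3, p=1/3, m=3))   # 1/6 instead of 0
print(laplace(n_c=0, n=3, n_values=3))      # 1/6 instead of 0
```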
Summary for Naïve Bayes
Robust to isolated noise
- it is averaged out
Can handle missing values
- the example is ignored when building the model, and the attribute is ignored when classifying new data
Robust to irrelevant attributes
- Pr[X_i | Y] is (almost) uniform for irrelevant X_i
Can have issues with correlated attributes
Chapter 6.5:
Many many more classifiers
Aggarwal Ch. 10.6, 11
It's a jungle out there
There is no free lunch
- there is no single best classifier for every problem setting
- there exist more classifiers than you can shake a stick at
Nice theory exists on the power of classes of classifiers
- support vector machines (kernel methods) can do anything
- so can artificial neural networks
Two heads know more than one, and k heads know more than two
- if you're interested, look into bagging and boosting
- ensemble methods combine multiple "weak" classifiers into one big strong team
It's about insight
Most classifiers focus purely on prediction accuracy
- in data mining we care mostly about interpretability
The classifiers we have seen today work very well in practice, and are interpretable
- so are rule-based classifiers
Support vector machines, neural networks, and ensembles give good predictive performance, but are black boxes.
Conclusions
Classification is one of the most important and most used data analysis methods: predictive analytics
There exist many different types of classifiers
- we've seen instance-based classifiers, decision trees, and naïve Bayes
- these are (relatively) interpretable, and work well in practice
There is no single best classifier
- if you're mainly interested in performance, go take Machine Learning
- if you're interested in the why, in explainability, stay here.
Thank you!