The Weka Implementation of the ID3 Algorithm

ID3: Iterative Dichotomiser 3, an algorithm for the induction of decision trees

The ID3 algorithm constructs a decision tree from the data recursively. It selects an attribute for the root node and partitions the dataset by that attribute's values, so that each child node receives one subset of the data. The quality of this split is evaluated, and the procedure recurses on each child until no node can be split any further. A node can no longer be split in two cases: either all of its instances belong to a single class, or its instances cannot be distinguished by any remaining attribute (no split yields further information).

Split quality is evaluated by:

① Information gain: the difference between the information value (entropy) at the node and the weighted average of the information values of the child nodes after the split.

② Gain ratio: information gain has a weakness. An attribute such as an ID code is useless for classification yet has an extremely large information gain, so judging by information gain alone will not do.

gain ratio = information gain / intrinsic information of the split, where the intrinsic information is the information value of the partition itself, computed from the subset sizes alone (without looking at the class).

Even then, the ID code's gain ratio may still be the highest, but its advantage is greatly reduced. In real applications, a useless attribute like an ID code would be removed beforehand.
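
Weka's Id3 ranks attributes by information gain only; the gain ratio is what C4.5 (Weka's J48) uses. As a minimal sketch of the formula above against the Weka API — the class GainRatioSketch and both method names are hypothetical, not part of Weka:

// Hypothetical sketch (not part of Weka's Id3): gain ratio built from the
// same quantities Id3 computes. C4.5/J48 uses a refined version of this.
import weka.core.Attribute;
import weka.core.Instances;
import weka.core.Utils;

class GainRatioSketch {

  // Intrinsic ("split") information of attribute att: the information value
  // of the partition itself, computed from subset sizes only, class ignored.
  static double splitInformation(Instances data, Attribute att) {
    int[] counts = new int[att.numValues()];
    for (int i = 0; i < data.numInstances(); i++) {
      counts[(int) data.instance(i).value(att)]++;
    }
    double n = data.numInstances();
    double splitInfo = 0;
    for (int c : counts) {
      if (c > 0) {
        splitInfo -= (c / n) * Utils.log2(c / n);
      }
    }
    return splitInfo;
  }

  // gain ratio = information gain / intrinsic information
  static double gainRatio(double infoGain, double splitInfo) {
    return splitInfo > 0 ? infoGain / splitInfo : 0;
  }
}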

Those are the points of the ID3 algorithm that I think deserve attention; the rest of this post walks through Weka's source code.


The core of the Weka implementation of ID3 is the tree-building part, the makeTree() method. Before it runs there is some simple preprocessing of the dataset and some handling of author and reference metadata, which I will not cover here.

First, the Id3 class extends Classifier (the abstract base class that newer Weka releases call AbstractClassifier).

It has several important member variables:

① m_Successors, an Id3[] array holding the tree below this node; each element is one child of the current node.

② m_Attribute holds the attribute the node is split on, i.e. which attribute is used to split the node.

③ If the current node is a leaf, m_ClassValue holds the class value of that leaf.

④ m_Distribution holds the probability of the current node belonging to each class.

⑤ m_ClassAttribute is the class attribute of the dataset.


The entry point of the algorithm is the buildClassifier() method, which internally calls makeTree().

Inside it, getCapabilities().testWithFail(data) checks up front whether the given dataset can be handled by Id3 at all. deleteWithMissingClass() is a method of the Instances class that removes the instances whose class value is missing (see Instances for the implementation), leaving a dataset that satisfies the requirements.
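
For reference, a minimal driver for this entry point might look as follows; the Id3Demo class and the ARFF path are assumptions (weather.nominal.arff ships in Weka's data directory and is purely nominal):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.classifiers.trees.Id3;
import weka.core.Instances;

public class Id3Demo {
  public static void main(String[] args) throws Exception {
    // Load a purely nominal dataset (adjust the path to your install).
    Instances data = new Instances(
        new BufferedReader(new FileReader("data/weather.nominal.arff")));
    // Tell Weka which attribute is the class (here: the last one).
    data.setClassIndex(data.numAttributes() - 1);

    Id3 tree = new Id3();
    tree.buildClassifier(data); // entry point; calls makeTree() internally
    System.out.println(tree);   // prints the induced tree
  }
}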


makeTree() is where the heart of the algorithm lives.

First it computes the maximum information gain.

data.numAttributes() returns the number of attributes, and infoGains stores the information gain of every attribute.

enumerateAttributes() is a method of the Instances class that returns an enumeration over all of the dataset's attributes (excluding the class attribute).

computeInfoGain(data, att) is the function that actually computes the information gain.
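
For concreteness, here is the arithmetic with assumed class counts of 9 and 5 (the class split of Weka's weather data):

// computeEntropy() returns log2(n) - (sum_j c_j * log2(c_j)) / n,
// which is algebraically the same as the usual -sum_j (c_j/n) * log2(c_j/n).
double n = 14, c0 = 9, c1 = 5;
double entropy = weka.core.Utils.log2(n)
    - (c0 * weka.core.Utils.log2(c0) + c1 * weka.core.Utils.log2(c1)) / n;
// entropy is roughly 0.940 bits; computeInfoGain() subtracts from this the
// size-weighted entropies of the subsets produced by splitData().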

Finally, the attribute with the maximum gain is assigned to m_Attribute as the splitting attribute of the current node.

When the maximum information gain at a node is 0, that node becomes a leaf and is not split any further.

m_Attribute = null: the node is already a leaf, so its splitting attribute is of course null. Since m_Distribution stores the class probabilities, data.numClasses() fetches the number of classes in the dataset, and m_Distribution[(int) inst.classValue()]++ counts the instances that belong to each class.

Utils.normalize() then turns the counts into probabilities. m_ClassValue is the class of the leaf, naturally the class with the largest probability.
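
A tiny illustration with assumed counts, two instances of class 0 and three of class 1 reaching a leaf:

// Hypothetical counts: two instances of class 0, three of class 1.
double[] dist = {2, 3};
weka.core.Utils.normalize(dist);                    // dist is now {0.4, 0.6}
double classValue = weka.core.Utils.maxIndex(dist); // 1, the majority class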

If the node is not a leaf, the current node has to be split on the splitting attribute.

splitData() is a method of this class: the node is split into as many subtrees as the splitting attribute has values, i.e. Instances[] splitData = new Instances[att.numValues()]. Each newly split-off dataset is then declared as a sufficiently large collection; see the two-argument constructor of the Instances class.
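
That constructor copies the dataset's structure but none of its instances, reserving the given capacity; roughly:

// Same header as data, no instances yet, room for up to numInstances() of them.
Instances subset = new Instances(data, data.numInstances());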

inst.value(att) fetches the attribute's value, and each instance is routed to the matching subset according to that value. Finally, compactify() trims each collection down to its minimum capacity.

Finally, every element of m_Successors is instantiated as an Id3 object and makeTree() is called recursively on it, splitting each subtree until only leaves remain and the recursion unwinds.
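
Classification then just walks the finished tree: classifyInstance() (in the listing below) follows the branch matching each attribute value until it reaches a leaf. Continuing the hypothetical driver from above:

double pred = tree.classifyInstance(data.instance(0));
System.out.println("predicted: " + data.classAttribute().value((int) pred));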

Hello!
Weka itself is open source!
If you have downloaded and installed Weka, just go to the installation directory, find the file "weka-src.jar", unzip it (or open it directly in Eclipse), and all the source code is right in front of you!
Location of Id3: "weka-src.jar\src\main\java\weka\classifiers\trees\Id3.java"

/*
 *    This program is free software; you can redistribute it and/or modify
 *    it under the terms of the GNU General Public License as published by
 *    the Free Software Foundation; either version 2 of the License, or
 *    (at your option) any later version.
 *
 *    This program is distributed in the hope that it will be useful,
 *    but WITHOUT ANY WARRANTY; without even the implied warranty of
 *    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 *    GNU General Public License for more details.
 *
 *    You should have received a copy of the GNU General Public License
 *    along with this program; if not, write to the Free Software
 *    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

/*
 *    Id3.java
 *    Copyright (C) 1999 University of Waikato, Hamilton, New Zealand
 *
 */

package weka.classifiers.trees;

import weka.classifiers.Classifier;
import weka.classifiers.Sourcable;
import weka.core.Attribute;
import weka.core.Capabilities;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.NoSupportForMissingValuesException;
import weka.core.RevisionUtils;
import weka.core.TechnicalInformation;
import weka.core.TechnicalInformationHandler;
import weka.core.Utils;
import weka.core.Capabilities.Capability;
import weka.core.TechnicalInformation.Field;
import weka.core.TechnicalInformation.Type;

import java.util.Enumeration;

/**
 <!-- globalinfo-start -->
 * Class for constructing an unpruned decision tree based on the ID3 algorithm. Can only deal with nominal attributes. No missing values allowed. Empty leaves may result in unclassified instances. For more information see:
 *
 * R. Quinlan (1986). Induction of decision trees. Machine Learning. 1(1):81-106.
 * <p/>
 <!-- globalinfo-end -->
 *
 <!-- technical-bibtex-start -->
 * BibTeX:
 * <pre>
 * @article{Quinlan1986,
 *    author = {R. Quinlan},
 *    journal = {Machine Learning},
 *    number = {1},
 *    pages = {81-106},
 *    title = {Induction of decision trees},
 *    volume = {1},
 *    year = {1986}
 * }
 * </pre>
 * <p/>
 <!-- technical-bibtex-end -->
 *
 <!-- options-start -->
 * Valid options are: <p/>
 *
 * <pre> -D
 *  If set, classifier is run in debug mode and
 *  may output additional info to the console</pre>
 *
 <!-- options-end -->
 *
 * @author Eibe Frank (eibe@cs.waikato.ac.nz)
 * @version $Revision: 6404 $
 */
public class Id3
  extends Classifier
  implements TechnicalInformationHandler, Sourcable {

  /** for serialization */
  static final long serialVersionUID = -2693678647096322561L;

  /** The node's successors. */
  private Id3[] m_Successors;

  /** Attribute used for splitting. */
  private Attribute m_Attribute;

  /** Class value if node is leaf. */
  private double m_ClassValue;

  /** Class distribution if node is leaf. */
  private double[] m_Distribution;

  /** Class attribute of dataset. */
  private Attribute m_ClassAttribute;

  /**
   * Returns a string describing the classifier.
   * @return a description suitable for the GUI.
   */
  public String globalInfo() {
    return  "Class for constructing an unpruned decision tree based on the ID3 "
      + "algorithm. Can only deal with nominal attributes. No missing values "
      + "allowed. Empty leaves may result in unclassified instances. For more "
      + "information see: \n\n"
      + getTechnicalInformation().toString();
  }

  /**
   * Returns an instance of a TechnicalInformation object, containing
   * detailed information about the technical background of this class,
   * e.g., paper reference or book this class is based on.
   *
   * @return the technical information about this class
   */
  public TechnicalInformation getTechnicalInformation() {
    TechnicalInformation result;

    result = new TechnicalInformation(Type.ARTICLE);
    result.setValue(Field.AUTHOR, "R. Quinlan");
    result.setValue(Field.YEAR, "1986");
    result.setValue(Field.TITLE, "Induction of decision trees");
    result.setValue(Field.JOURNAL, "Machine Learning");
    result.setValue(Field.VOLUME, "1");
    result.setValue(Field.NUMBER, "1");
    result.setValue(Field.PAGES, "81-106");

    return result;
  }

  /**
   * Returns default capabilities of the classifier.
   *
   * @return      the capabilities of this classifier
   */
  public Capabilities getCapabilities() {
    Capabilities result = super.getCapabilities();
    result.disableAll();

    // attributes
    result.enable(Capability.NOMINAL_ATTRIBUTES);

    // class
    result.enable(Capability.NOMINAL_CLASS);
    result.enable(Capability.MISSING_CLASS_VALUES);

    // instances
    result.setMinimumNumberInstances(0);

    return result;
  }

  /**
   * Builds Id3 decision tree classifier.
   *
   * @param data the training data
   * @exception Exception if classifier can't be built successfully
   */
  public void buildClassifier(Instances data) throws Exception {

    // can classifier handle the data?
    getCapabilities().testWithFail(data);

    // remove instances with missing class
    data = new Instances(data);
    data.deleteWithMissingClass();

    makeTree(data);
  }

  /**
   * Method for building an Id3 tree.
   *
   * @param data the training data
   * @exception Exception if decision tree can't be built successfully
   */
  private void makeTree(Instances data) throws Exception {

    // Check if no instances have reached this node.
    if (data.numInstances() == 0) {
      m_Attribute = null;
      m_ClassValue = Instance.missingValue();
      m_Distribution = new double[data.numClasses()];
      return;
    }

    // Compute attribute with maximum information gain.
    double[] infoGains = new double[data.numAttributes()];
    Enumeration attEnum = data.enumerateAttributes();
    while (attEnum.hasMoreElements()) {
      Attribute att = (Attribute) attEnum.nextElement();
      infoGains[att.index()] = computeInfoGain(data, att);
    }
    m_Attribute = data.attribute(Utils.maxIndex(infoGains));

    // Make leaf if information gain is zero.
    // Otherwise create successors.
    if (Utils.eq(infoGains[m_Attribute.index()], 0)) {
      m_Attribute = null;
      m_Distribution = new double[data.numClasses()];
      Enumeration instEnum = data.enumerateInstances();
      while (instEnum.hasMoreElements()) {
        Instance inst = (Instance) instEnum.nextElement();
        m_Distribution[(int) inst.classValue()]++;
      }
      Utils.normalize(m_Distribution);
      m_ClassValue = Utils.maxIndex(m_Distribution);
      m_ClassAttribute = data.classAttribute();
    } else {
      Instances[] splitData = splitData(data, m_Attribute);
      m_Successors = new Id3[m_Attribute.numValues()];
      for (int j = 0; j < m_Attribute.numValues(); j++) {
        m_Successors[j] = new Id3();
        m_Successors[j].makeTree(splitData[j]);
      }
    }
  }

  /**
   * Classifies a given test instance using the decision tree.
   *
   * @param instance the instance to be classified
   * @return the classification
   * @throws NoSupportForMissingValuesException if instance has missing values
   */
  public double classifyInstance(Instance instance)
    throws NoSupportForMissingValuesException {

    if (instance.hasMissingValue()) {
      throw new NoSupportForMissingValuesException("Id3: no missing values, "
                                                   + "please.");
    }
    if (m_Attribute == null) {
      return m_ClassValue;
    } else {
      return m_Successors[(int) instance.value(m_Attribute)].
        classifyInstance(instance);
    }
  }

  /**
   * Computes class distribution for instance using decision tree.
   *
   * @param instance the instance for which distribution is to be computed
   * @return the class distribution for the given instance
   * @throws NoSupportForMissingValuesException if instance has missing values
   */
  public double[] distributionForInstance(Instance instance)
    throws NoSupportForMissingValuesException {

    if (instance.hasMissingValue()) {
      throw new NoSupportForMissingValuesException("Id3: no missing values, "
                                                   + "please.");
    }
    if (m_Attribute == null) {
      return m_Distribution;
    } else {
      return m_Successors[(int) instance.value(m_Attribute)].
        distributionForInstance(instance);
    }
  }

  /**
   * Prints the decision tree using the private toString method from below.
   *
   * @return a textual description of the classifier
   */
  public String toString() {

    if ((m_Distribution == null) && (m_Successors == null)) {
      return "Id3: No model built yet.";
    }
    return "Id3\n\n" + toString(0);
  }

  /**
   * Computes information gain for an attribute.
   *
   * @param data the data for which info gain is to be computed
   * @param att the attribute
   * @return the information gain for the given attribute and data
   * @throws Exception if computation fails
   */
  private double computeInfoGain(Instances data, Attribute att)
    throws Exception {

    double infoGain = computeEntropy(data);
    Instances[] splitData = splitData(data, att);
    for (int j = 0; j < att.numValues(); j++) {
      if (splitData[j].numInstances() > 0) {
        infoGain -= ((double) splitData[j].numInstances() /
                     (double) data.numInstances()) *
          computeEntropy(splitData[j]);
      }
    }
    return infoGain;
  }

  /**
   * Computes the entropy of a dataset.
   *
   * @param data the data for which entropy is to be computed
   * @return the entropy of the data's class distribution
   * @throws Exception if computation fails
   */
  private double computeEntropy(Instances data) throws Exception {

    double[] classCounts = new double[data.numClasses()];
    Enumeration instEnum = data.enumerateInstances();
    while (instEnum.hasMoreElements()) {
      Instance inst = (Instance) instEnum.nextElement();
      classCounts[(int) inst.classValue()]++;
    }
    double entropy = 0;
    for (int j = 0; j < data.numClasses(); j++) {
      if (classCounts[j] > 0) {
        entropy -= classCounts[j] * Utils.log2(classCounts[j]);
      }
    }
    entropy /= (double) data.numInstances();
    return entropy + Utils.log2(data.numInstances());
  }

  /**
   * Splits a dataset according to the values of a nominal attribute.
   *
   * @param data the data which is to be split
   * @param att the attribute to be used for splitting
   * @return the sets of instances produced by the split
   */
  private Instances[] splitData(Instances data, Attribute att) {

    Instances[] splitData = new Instances[att.numValues()];
    for (int j = 0; j < att.numValues(); j++) {
      splitData[j] = new Instances(data, data.numInstances());
    }
    Enumeration instEnum = data.enumerateInstances();
    while (instEnum.hasMoreElements()) {
      Instance inst = (Instance) instEnum.nextElement();
      splitData[(int) inst.value(att)].add(inst);
    }
    for (int i = 0; i < splitData.length; i++) {
      splitData[i].compactify();
    }
    return splitData;
  }

  /**
   * Outputs a tree at a certain level.
   *
   * @param level the level at which the tree is to be printed
   * @return the tree as string at the given level
   */
  private String toString(int level) {

    StringBuffer text = new StringBuffer();

    if (m_Attribute == null) {
      if (Instance.isMissingValue(m_ClassValue)) {
        text.append(": null");
      } else {
        text.append(": " + m_ClassAttribute.value((int) m_ClassValue));
      }
    } else {
      for (int j = 0; j < m_Attribute.numValues(); j++) {
        text.append("\n");
        for (int i = 0; i < level; i++) {
          text.append("|  ");
        }
        text.append(m_Attribute.name() + " = " + m_Attribute.value(j));
        text.append(m_Successors[j].toString(level + 1));
      }
    }
    return text.toString();
  }

  /**
   * Adds this tree recursively to the buffer.
   *
   * @param id          the unique id for the method
   * @param buffer      the buffer to add the source code to
   * @return            the last ID being used
   * @throws Exception  if something goes wrong
   */
  protected int toSource(int id, StringBuffer buffer) throws Exception {
    int                 result;
    int                 i;
    int                 newID;
    StringBuffer[]      subBuffers;

    buffer.append("\n");
    buffer.append("  protected static double node" + id + "(Object[] i) {\n");

    // leaf?
    if (m_Attribute == null) {
      result = id;
      if (Double.isNaN(m_ClassValue)) {
        buffer.append("    return Double.NaN;");
      } else {
        buffer.append("    return " + m_ClassValue + ";");
      }
      if (m_ClassAttribute != null) {
        buffer.append(" // " + m_ClassAttribute.value((int) m_ClassValue));
      }
      buffer.append("\n");
      buffer.append("  }\n");
    } else {
      buffer.append("    checkMissing(i, " + m_Attribute.index() + ");\n\n");
      buffer.append("    // " + m_Attribute.name() + "\n");

      // subtree calls
      subBuffers = new StringBuffer[m_Attribute.numValues()];
      newID = id;
      for (i = 0; i < m_Attribute.numValues(); i++) {
        newID++;
        buffer.append("    ");
        if (i > 0) {
          buffer.append("else ");
        }
        buffer.append("if (((String) i[" + m_Attribute.index()
                      + "]).equals(\"" + m_Attribute.value(i) + "\"))\n");
        buffer.append("      return node" + newID + "(i);\n");
        subBuffers[i] = new StringBuffer();
        newID = m_Successors[i].toSource(newID, subBuffers[i]);
      }
      buffer.append("    else\n");
      buffer.append("      throw new IllegalArgumentException(\"Value '\" + i["
                    + m_Attribute.index() + "] + \"' is not allowed!\");\n");
      buffer.append("  }\n");

      // output subtree code
      for (i = 0; i < m_Attribute.numValues(); i++) {
        buffer.append(subBuffers[i].toString());
      }
      subBuffers = null;

      result = newID;
    }

    return result;
  }

  /**
   * Returns a string that describes the classifier as source. The
   * classifier will be contained in a class with the given name (there may
   * be auxiliary classes),
   * and will contain a method with the signature:
   * <pre><code>
   * public static double classify(Object[] i);
   * </code></pre>
   * where the array <code>i</code> contains elements that are either
   * Double, String, with missing values represented as null. The generated
   * code is public domain and comes with no warranty.
   * Note: works only if class attribute is the last attribute in the dataset.
   *
   * @param className the name that should be given to the source class.
   * @return the object source described by a string
   * @throws Exception if the source can't be computed
   */
  public String toSource(String className) throws Exception {
    StringBuffer        result;
    int                 id;

    result = new StringBuffer();
    result.append("class " + className + " {\n");
    result.append("  private static void checkMissing(Object[] i, int index) {\n");
    result.append("    if (i[index] == null)\n");
    result.append("      throw new IllegalArgumentException(\"Null values "
                  + "are not allowed!\");\n");
    result.append("  }\n\n");
    result.append("  public static double classify(Object[] i) {\n");
    id = 0;
    result.append("    return node" + id + "(i);\n");
    result.append("  }\n");
    toSource(id, result);
    result.append("}\n");

    return result.toString();
  }

  /**
   * Returns the revision string.
   *
   * @return        the revision
   */
  public String getRevision() {
    return RevisionUtils.extract("$Revision: 6404 $");
  }

  /**
   * Main method.
   *
   * @param args the options for the classifier
   */
  public static void main(String[] args) {
    runClassifier(new Id3(), args);
  }
}
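
One part of the listing that the walkthrough above skipped: because Id3 implements Sourcable, toSource() can emit the learned tree as standalone Java source. Continuing the hypothetical driver from earlier:

// Emit the induced tree as a self-contained Java class; the name
// "Id3Weather" is arbitrary.
System.out.println(tree.toSource("Id3Weather"));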
