The randomized logistic regression model from the previous post uncovers linear correlations between the features and the target; decision trees and neural networks are used to screen variables with nonlinear relationships.
#-*- coding: utf-8 -*-
import pandas as pd

inputfile = '../data/sales_data.xls'
data = pd.read_excel(inputfile, index_col=u'序号')

# Map the categorical labels 好/是/高 (good/yes/high) to 1, everything else to -1
data[data == u'好'] = 1
data[data == u'是'] = 1
data[data == u'高'] = 1
data[data != 1] = -1

x = data.iloc[:, :3].values.astype(int)  # first three columns as features
y = data.iloc[:, 3].values.astype(int)   # fourth column as the target, cast to int

from sklearn.tree import DecisionTreeClassifier as DTC
dtc = DTC(criterion='entropy')  # split on information gain (entropy)
dtc.fit(x, y)  # train the model

# Training done; export the tree to a .dot file for visualization
from sklearn.tree import export_graphviz
with open("tree.dot", 'w') as f:
    export_graphviz(dtc, feature_names=data.columns[:3], out_file=f)
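Once fitted, the classifier can score new samples with predict. A minimal self-contained sketch using a tiny synthetic data set in place of the sales data (the ±1 features and labels below are made up for illustration, not from the original file):

```python
from sklearn.tree import DecisionTreeClassifier
import numpy as np

# Toy stand-in for the sales data: three ±1 features, one ±1 label.
# Only the first feature is informative; the other two carry no signal.
X = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
y = np.array([1, 1, -1, -1])

clf = DecisionTreeClassifier(criterion='entropy')
clf.fit(X, y)

# The tree splits on the informative first feature
print(clf.predict([[1, -1, 1]]))   # class follows the first feature: 1
print(clf.predict([[-1, 1, 1]]))   # -1
```

The same predict call works on the real model above once it is trained on the sales data.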
The code uses the ID3 algorithm (based on information entropy), which chooses the split that minimizes the entropy of the partitioned data set. The C4.5 algorithm instead splits on the information gain ratio, and the CART algorithm splits on the Gini index.
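The two impurity measures mentioned above are simple to compute directly. A short sketch of entropy and Gini impurity on a toy label array (the helper names entropy/gini are my own, not from any library):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy in bits: -sum(p_i * log2(p_i)) over class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    # Gini impurity: 1 - sum(p_i^2) over class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

labels = np.array([1, 1, 1, -1, -1, -1, -1, -1])  # 3 positives, 5 negatives
print(entropy(labels))  # ≈ 0.9544
print(gini(labels))     # ≈ 0.46875
```

ID3 picks the split that most reduces the entropy value; CART does the same with the Gini value, which avoids the logarithm and is slightly cheaper to evaluate.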