Machine Learning and AI Foundations: Decision Trees with KNIME
- 0 - Introduction
- 1. The basics of decision trees
- 2. What you should know
- 3. How to use the practice files
- 1 - Introducing Decision Trees
- 1. What is a decision tree
- 2. The pros and cons of decision trees
- 3. Introducing KNIME
- 4. A quick review of machine learning basics with examples
- 5. An overview of decision tree algorithms
- 2 - Introducing the C5.0 Algorithm
- 1. Ross Quinlan, ID3, C4.5, and C5.0
- 2. Understanding the entropy calculation
- 3. How C4.5 handles missing data
- 4. The Give Me Some Credit data set
- 5. Working with the prebuilt example
- 6. KNIME settings for C4.5
- 7. How C4.5 handles nominal variables
- 8. How C4.5 handles continuous variables
- 9. Equal size sampling
- 10. A quick look at the complete C4.5 tree
- 11. Evaluating the accuracy of your C4.5 tree
- 12. When to turn off pruning
- 3 - Introducing Classification Trees
- 1. Introducing Leo Breiman and CART
- 2. What is the Gini coefficient
- 3. How CART handles missing data using surrogates
- 4. Changing the settings in KNIME
- 5. How CART handles nominal variables
- 6. A quick look at the complete CART tree
- 7. Evaluating the accuracy of your CART tree
- 4 - Introducing Regression Trees
- 1. The MPG data set
- 2. The regression tree prebuilt example
- 3. The math behind regression trees
- 4. How regression trees handle nominal variables
- 5. How regression trees handle ordinal variables
- 6. A closer look at a full regression tree
- 7. KNIME's missing data options for regression trees
- 8. Line plot
- 9. Accuracy
- 5 - Conclusion
- 1. Next steps