User:Pfcohen/Books/Machine Learning
Machine Learning
- Introduction and Main Principles
- Machine learning
- Data analysis
- Occam's razor
- Curse of dimensionality
- No free lunch theorem
- Accuracy paradox
- Overfitting
- Regularization (machine learning)
- Inductive bias
- Data dredging
- Ugly duckling theorem
- Uncertain data
- Background and Preliminaries
- Knowledge Discovery in Databases
- Knowledge discovery
- Data mining
- Predictive analytics
- Predictive modelling
- Business intelligence
- Reactive business intelligence
- Business analytics
- Pattern recognition
- Reasoning
- Abductive reasoning
- Inductive reasoning
- First-order logic
- Inductive logic programming
- Reasoning system
- Case-based reasoning
- Textual case based reasoning
- Causality
- Search Methods
- Nearest neighbor search
- Stochastic gradient descent
- Beam search
- Best-first search
- Breadth-first search
- Hill climbing
- Grid search
- Brute-force search
- Depth-first search
- Tabu search
- Anytime algorithm
- Statistics
- Exploratory data analysis
- Covariate
- Statistical inference
- Algorithmic inference
- Bayesian inference
- Base rate
- Bias (statistics)
- Gibbs sampling
- Cross-entropy method
- Latent variable
- Maximum likelihood
- Maximum a posteriori estimation
- Expectation–maximization algorithm
- Expectation propagation
- Kullback–Leibler divergence
- Generative model
- Main Learning Paradigms
- Supervised learning
- Unsupervised learning
- Active learning (machine learning)
- Reinforcement learning
- Multi-task learning
- Transduction
- Explanation-based learning
- Offline learning
- Online learning model
- Online machine learning
- Hyperparameter optimization
- Classification Tasks
- Classification in machine learning
- Concept class
- Features (pattern recognition)
- Feature vector
- Feature space
- Concept learning
- Binary classification
- Decision boundary
- Multiclass classification
- Class membership probabilities
- Calibration (statistics)
- Concept drift
- Prior knowledge for pattern recognition
- Online Learning
- Margin Infused Relaxed Algorithm
- Semi-supervised learning
- One-class classification
- Coupled pattern learner
- Lazy learning and nearest neighbors
- Lazy learning
- Eager learning
- Instance-based learning
- Cluster assumption
- K-nearest neighbor algorithm
- IDistance
- Large margin nearest neighbor
- Decision Trees
- Decision tree learning
- Decision stump
- Pruning (decision trees)
- Mutual information
- Adjusted mutual information
- Information gain ratio
- Information gain in decision trees
- ID3 algorithm
- C4.5 algorithm
- CHAID
- Information Fuzzy Networks
- Grafting (decision trees)
- Incremental decision tree
- Alternating decision tree
- Logistic model tree
- Random forest
- Linear Classifiers
- Linear classifier
- Margin (machine learning)
- Margin classifier
- Soft independent modelling of class analogies
- Statistical classification
- Probability matching
- Discriminative model
- Linear discriminant analysis
- Multiclass LDA
- Multiple discriminant analysis
- Optimal discriminant analysis
- Fisher kernel
- Discriminant function analysis
- Multilinear subspace learning
- Quadratic classifier
- Variable kernel density estimation
- Category utility
- Evaluation of Classification Models
- Data classification (business intelligence)
- Training set
- Test set
- Synthetic data
- Cross-validation (statistics)
- Loss function
- Hinge loss
- Generalization error
- Type I and type II errors
- Sensitivity and specificity
- Precision and recall
- F1 score
- Confusion matrix
- Matthews correlation coefficient
- Receiver operating characteristic
- Lift (data mining)
- Stability in learning
- Feature Selection and Feature Extraction
- Data Pre-processing
- Discretization of continuous features
- Feature selection
- Feature extraction
- Dimension reduction
- Principal component analysis
- Multilinear principal-component analysis
- Multifactor dimensionality reduction
- Targeted projection pursuit
- Multidimensional scaling
- Nonlinear dimensionality reduction
- Kernel principal component analysis
- Kernel eigenvoice
- Gramian matrix
- Gaussian process
- Kernel adaptive filter
- Isomap
- Manifold alignment
- Diffusion map
- Elastic map
- Locality-sensitive hashing
- Spectral clustering
- Minimum redundancy feature selection
- Clustering
- Cluster analysis
- K-means clustering
- K-means++
- K-medians clustering
- K-medoids
- DBSCAN
- Fuzzy clustering
- BIRCH (data clustering)
- Canopy clustering algorithm
- Cluster-weighted modeling
- Clustering high-dimensional data
- Cobweb (clustering)
- Complete-linkage clustering
- Constrained clustering
- Correlation clustering
- CURE data clustering algorithm
- Data stream clustering
- Dendrogram
- Determining the number of clusters in a data set
- FLAME clustering
- Hierarchical clustering
- Information bottleneck method
- Lloyd's algorithm
- Nearest-neighbor chain algorithm
- Neighbor joining
- OPTICS algorithm
- Pitman–Yor process
- Single-linkage clustering
- SUBCLU
- Thresholding (image processing)
- UPGMA
- Evaluation of Clustering Methods
- Rand index
- Dunn index
- Davies–Bouldin index
- Jaccard index
- MinHash
- K q-flats
- Rule Induction
- Decision rules
- Rule induction
- Classification rule
- CN2 algorithm
- Decision list
- First Order Inductive Learner
- Association Rules and Frequent Item Sets
- Association rule learning
- Apriori algorithm
- Contrast set learning
- Affinity analysis
- K-optimal pattern discovery
- Ensemble Learning
- Ensemble learning
- Ensemble averaging
- Consensus clustering
- AdaBoost
- Boosting
- Bootstrap aggregating
- BrownBoost
- Cascading classifiers
- Co-training
- CoBoosting
- Gaussian process emulator
- Gradient boosting
- LogitBoost
- LPBoost
- Mixture model
- Product of Experts
- Random multinomial logit
- Random subspace method
- Weighted Majority Algorithm
- Randomized weighted majority algorithm
- Graphical Models
- Graphical model
- State transition network
- Bayesian Learning Methods
- Naive Bayes classifier
- Averaged one-dependence estimators
- Bayesian network
- Variational message passing
- Markov Models
- Markov model
- Maximum-entropy Markov model
- Hidden Markov model
- Baum–Welch algorithm
- Forward–backward algorithm
- Hierarchical hidden Markov model
- Markov logic network
- Markov chain Monte Carlo
- Markov random field
- Conditional random field
- Predictive state representation
- Learning Theory
- Computational learning theory
- Version space
- Probably approximately correct learning
- Vapnik–Chervonenkis theory
- Shattering (machine learning)
- VC dimension
- Minimum description length
- Bondy's theorem
- Inferential theory of learning
- Rademacher complexity
- Teaching dimension
- Subclass reachability
- Sample exclusion dimension
- Unique negative dimension
- Uniform convergence (combinatorics)
- Witness set
- Support Vector Machines
- Kernel methods
- Support vector machine
- Structural risk minimization
- Empirical risk minimization
- Kernel trick
- Least squares support vector machine
- Relevance vector machine
- Sequential minimal optimization
- Structured SVM
- Regression analysis
- Outline of regression analysis
- Regression analysis
- Dependent and independent variables
- Linear model
- Linear regression
- Least squares
- Linear least squares (mathematics)
- Local regression
- Additive model
- Antecedent variable
- Autocorrelation
- Backfitting algorithm
- Bayesian linear regression
- Bayesian multivariate linear regression
- Binomial regression
- Canonical analysis
- Censored regression model
- Coefficient of determination
- Comparison of general and generalized linear models
- Compressed sensing
- Conditional change model
- Controlling for a variable
- Cross-sectional regression
- Curve fitting
- Deming regression
- Design matrix
- Difference in differences
- Dummy variable (statistics)
- Errors and residuals in statistics
- Errors-in-variables models
- Explained sum of squares
- Explained variation
- First-hitting-time model
- Fixed effects model
- Fraction of variance unexplained
- Frisch–Waugh–Lovell theorem
- General linear model
- Generalized additive model
- Generalized additive model for location, scale and shape
- Generalized estimating equation
- Generalized least squares
- Generalized linear array model
- Generalized linear mixed model
- Generalized linear model
- Growth curve
- Guess value
- Hat matrix
- Heckman correction
- Heteroscedasticity-consistent standard errors
- Hosmer–Lemeshow test
- Instrumental variable
- Interaction (statistics)
- Isotonic regression
- Iteratively reweighted least squares
- Kitchen sink regression
- Lack-of-fit sum of squares
- Leverage (statistics)
- Limited dependent variable
- Linear probability model
- Mallows's Cp
- Mean and predicted response
- Mixed model
- Moderation (statistics)
- Moving least squares
- Multicollinearity
- Multiple correlation
- Multivariate probit
- Multivariate adaptive regression splines
- Newey–West estimator
- Non-linear least squares
- Nonlinear regression
- Logistic Regression
- Logit
- Multinomial logit
- Logistic regression
- Bio-inspired Methods
- Bio-inspired computing
- Evolutionary Algorithms
- Evolvability (computer science)
- Evolutionary computation
- Evolutionary algorithm
- Genetic algorithm
- Chromosome (genetic algorithm)
- Crossover (genetic algorithm)
- Fitness function
- Evolutionary data mining
- Genetic programming
- Learnable Evolution Model