Supervised learning
Concepts
These components implement a supervised learning algorithm.
Be careful: these components must be integrated into a "meta-supervised" component.
Attributes status
The target attribute must be discrete; input attributes can be continuous or discrete.

Supervised learning components
Each entry below gives the component name, its description (with reference), its parameters, and notes.
Binary logistic regression
Description: maximum likelihood method, Levenberg-Marquardt optimization algorithm.
From J. Debord's library (http://ourworld.compuserve.com/homepages/JDebord/regnlin.htm).
Note: the target attribute must have exactly two values; input attributes must be continuous; the constant (intercept) is imposed.
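To illustrate the idea, here is a minimal Python stand-in: it fits the two-class model by maximizing the log-likelihood, but with plain gradient ascent rather than the Levenberg-Marquardt algorithm the component uses; the toy data and learning settings are invented for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=10000):
    # Maximize the log-likelihood by plain gradient ascent;
    # w[0] is the imposed constant (intercept).
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p                     # gradient of the log-likelihood
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, x):
    return 1 if sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))) >= 0.5 else 0

# invented toy data: the class flips around x = 0.5
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
```

Gradient ascent is slower than Levenberg-Marquardt but needs no second-order information, which keeps the sketch short.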

k-Nearest Neighbors (k-NN)
Description: k-nearest neighbors with the Heterogeneous Value Difference Metric (HVDM), so discrete descriptors can be used.
Reference: D. Aha, D. Kibler, M. Albert, "Instance-based learning algorithms", Machine Learning, vol. 6, pp. 37-66, 1991.
Parameters: number of neighbors.
Note: inputs can be discrete and/or continuous; no attribute standardization is necessary.
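A short sketch of the classifier follows. It uses a simplified mixed metric (0/1 mismatch for discrete attributes, absolute difference for continuous ones) instead of the full HVDM, which additionally weights discrete mismatches by class-conditional value frequencies; the data is invented.

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3, is_discrete=None):
    """Majority vote over the k nearest training points, with a mixed
    discrete/continuous distance (a simplification of HVDM)."""
    if is_discrete is None:
        is_discrete = [False] * len(x)

    def dist(a, b):
        d = 0.0
        for av, bv, disc in zip(a, b, is_discrete):
            d += (av != bv) if disc else abs(av - bv)
        return d

    neighbors = sorted(zip(train_X, train_y), key=lambda t: dist(t[0], x))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# one continuous and one discrete descriptor
X = [[1.0, "a"], [1.2, "a"], [3.0, "b"], [3.2, "b"]]
y = ["low", "low", "high", "high"]
label = knn_predict(X, y, [1.1, "a"], k=3, is_discrete=[False, True])
```

Note that with a proper metric such as HVDM, standardizing the continuous attributes beforehand is unnecessary, as the component's note says.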
Multilayer perceptron
Description: multilayer perceptron trained with the backpropagation algorithm.
Reference: T. Mitchell, "Machine Learning", McGraw-Hill International Editions, pp. 86-126, 1997.
Parameters:
Neural network architecture: use hidden layer; number of neurons in the hidden layer.
Learning parameters: learning rate; size of the pruning/validation set; attribute standardization.
Stopping rule: max number of iterations; threshold error rate; error stagnation; gap of error stagnation evaluation.
Note: inputs must be continuous.
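A bare-bones sketch of one-hidden-layer backpropagation is given below. It omits the component's validation set, standardization and stopping-rule machinery, trains for a fixed number of epochs, and uses an invented toy problem (logical AND on numeric inputs).

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class MLP:
    """One-hidden-layer perceptron trained by stochastic backpropagation."""

    def __init__(self, n_in, n_hidden):
        # index 0 of each weight vector is the bias
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]

    def forward(self, x):
        h = [sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
             for w in self.w1]
        o = sigmoid(self.w2[0] + sum(wi * hi for wi, hi in zip(self.w2[1:], h)))
        return h, o

    def train(self, X, y, lr=1.0, epochs=3000):
        for _ in range(epochs):
            for x, t in zip(X, y):
                h, o = self.forward(x)
                delta_o = (t - o) * o * (1 - o)            # output-layer error
                delta_h = [delta_o * w * hi * (1 - hi)     # backpropagated error
                           for w, hi in zip(self.w2[1:], h)]
                self.w2[0] += lr * delta_o
                for j, hi in enumerate(h):
                    self.w2[j + 1] += lr * delta_o * hi
                for j, dh in enumerate(delta_h):
                    self.w1[j][0] += lr * dh
                    for i, xi in enumerate(x):
                        self.w1[j][i + 1] += lr * dh * xi

    def predict(self, x):
        return 1 if self.forward(x)[1] >= 0.5 else 0

# learn logical AND (inputs must be numeric/continuous)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
net = MLP(2, 2)
net.train(X, y)
```

In practice the component's stopping rules (max iterations, threshold error rate, error stagnation) replace the fixed epoch count used here.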
Prototype-NN
Description: kernels (prototypes) are built offline, for instance with a clustering algorithm. Each kernel is assigned a class membership, and a new instance receives the class of its nearest prototype. This is an interpretation and generalization of the approach suggested in Hastie et al. (pp. 411-433).
Parameters: attribute that defines the kernels; standardization of attributes.
Note: inputs must be continuous.
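The mechanism can be sketched as follows. As a simplification, one prototype per class (the class centroid) stands in for the offline clustering step; the data is invented.

```python
def build_prototypes(X, y):
    """One prototype per class: the class centroid. A stand-in for the
    component's offline clustering, after which each kernel is assigned
    a class membership."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def nearest_prototype(protos, x):
    """A new instance receives the class of its nearest prototype."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(protos, key=lambda label: sqdist(protos[label], x))

X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y = ["a", "a", "b", "b"]
protos = build_prototypes(X, y)
```

With several clusters per class (as a real clustering algorithm would produce), the same nearest-prototype rule applies unchanged; only the prototype-building step differs.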
ID3
Description: Quinlan's ID3 algorithm, with some minor modifications.
Reference: J.R. Quinlan, "Discovering rules by induction from large collections of examples", in D. Michie (ed.), Expert Systems in the Micro-electronic Age, pp. 168-201, 1979.
Parameters: min size of a node to split; min size of a leaf to produce; max depth of the tree; max entropy gain for a splitting.

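A compact sketch of the algorithm: choose the attribute with the largest entropy gain, split, and recurse. Only two of the component's pre-pruning parameters (max depth and min node size to split) are mirrored here; the others are omitted for brevity, and the toy data is invented.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, attrs, max_depth=5, min_split=2):
    """Plain ID3: leaves hold the majority class; internal nodes split
    on the attribute with the largest entropy gain."""
    majority = Counter(labels).most_common(1)[0][0]
    if (len(set(labels)) == 1 or not attrs
            or max_depth == 0 or len(rows) < min_split):
        return majority

    def gain(a):
        remainder = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            remainder += len(sub) / len(rows) * entropy(sub)
        return entropy(labels) - remainder

    best = max(attrs, key=gain)
    tree = {"attr": best, "default": majority, "children": {}}
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        tree["children"][v] = id3(sub_rows, sub_labels,
                                  [a for a in attrs if a != best],
                                  max_depth - 1, min_split)
    return tree

def classify(tree, row):
    while isinstance(tree, dict):
        tree = tree["children"].get(row[tree["attr"]], tree["default"])
    return tree

# invented toy data
rows = [{"outlook": "sunny"}, {"outlook": "sunny"},
        {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["yes", "yes", "no", "no"]
tree = id3(rows, labels, ["outlook"])
```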
Linear discriminant analysis
Description: linear discriminant analysis.
Reference: R.A. Fisher, "The use of multiple measurements in taxonomic problems", Annals of Eugenics, vol. 7, pp. 179-188, 1936.
Note: inputs must be continuous; be careful about collinearity.
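For the two-class, two-dimensional case, Fisher's discriminant can be sketched with nothing but the standard library: project onto w = Sw⁻¹(m1 − m0) and threshold at the midpoint of the projected class means. The collinearity warning shows up directly: if the inputs are collinear, the scatter matrix determinant is zero and the inverse does not exist. The data is invented.

```python
def fisher_lda(X0, X1):
    """Two-class Fisher discriminant in 2D: direction w = Sw^-1 (m1 - m0),
    threshold at the midpoint of the projected means."""
    def mean(X):
        return [sum(col) / len(X) for col in zip(*X)]
    m0, m1 = mean(X0), mean(X1)
    # pooled within-class scatter matrix (2x2)
    S = [[0.0, 0.0], [0.0, 0.0]]
    for X, m in ((X0, m0), (X1, m1)):
        for x in X:
            d = [x[0] - m[0], x[1] - m[1]]
            S[0][0] += d[0] * d[0]; S[0][1] += d[0] * d[1]
            S[1][0] += d[1] * d[0]; S[1][1] += d[1] * d[1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]  # zero if inputs are collinear
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    c = sum(wi * (a + b) / 2 for wi, a, b in zip(w, m0, m1))
    return w, c

def lda_predict(w, c, x):
    return 1 if w[0] * x[0] + w[1] * x[1] > c else 0

X0 = [[1.0, 2.0], [1.5, 1.8], [1.2, 2.2]]
X1 = [[4.0, 4.5], [4.2, 4.0], [3.8, 4.4]]
w, c = fisher_lda(X0, X1)
```

In higher dimensions the same formula applies with a general matrix inverse, which is where near-collinear inputs make the solution numerically unstable.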
Naive Bayes
Description: naive Bayes algorithm.
Reference: P. Domingos, M. Pazzani, "On the optimality of the simple Bayesian classifier under zero-one loss", Machine Learning, vol. 29, pp. 103-130, 1997.
Note: inputs must be discrete.
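The classifier is simple counting, as the sketch below shows: estimate P(class) and P(value | class) from frequencies, then pick the class maximizing the product. The Laplace smoothing used here is an assumption for the example (the component's exact smoothing scheme is not documented above), and the data is invented.

```python
from collections import Counter, defaultdict

def train_nb(X, y):
    """Count class frequencies and, per (attribute, class) pair,
    the frequencies of each discrete value."""
    class_counts = Counter(y)
    value_counts = defaultdict(Counter)   # (attr index, class) -> value counts
    for x, c in zip(X, y):
        for i, v in enumerate(x):
            value_counts[(i, c)][v] += 1
    return class_counts, value_counts

def predict_nb(model, x):
    class_counts, value_counts = model
    n = sum(class_counts.values())
    best, best_p = None, -1.0
    for c, nc in class_counts.items():
        p = nc / n
        for i, v in enumerate(x):
            counts = value_counts[(i, c)]
            # Laplace smoothing (assumed scheme, for illustration only)
            p *= (counts[v] + 1) / (nc + len(counts) + 1)
        if p > best_p:
            best, best_p = c, p
    return best

X = [["sunny", "hot"], ["sunny", "mild"], ["rain", "mild"], ["rain", "cool"]]
y = ["no", "no", "yes", "yes"]
model = train_nb(X, y)
```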
Radial basis function
Description: radial basis function network, offline processing.
Reference: K. Mehrotra, C. Mohan, S. Ranka, "Elements of Artificial Neural Networks", MIT Press, pp. 141-156, 1997.
Parameters: attribute which defines the kernels; other parameters: see the multilayer perceptron.
Note: input attributes must be continuous. I have some doubts about the actual implementation.
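The general RBF idea can be sketched as follows: Gaussian kernels at centers fixed offline (here simply given, matching the "offline processing" description), then delta-rule training of the linear output layer only. Centers, kernel width, learning settings and data are all invented for the example.

```python
import math

def rbf_features(x, centers, sigma=1.0):
    """Gaussian activation of x for each fixed kernel center."""
    return [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                     / (2 * sigma ** 2)) for c in centers]

def train_rbf(X, y, centers, lr=0.5, epochs=500, sigma=1.0):
    """Delta-rule training of the linear output layer (bias + weights)."""
    w = [0.0] * (len(centers) + 1)          # w[0] is the bias
    for _ in range(epochs):
        for x, t in zip(X, y):
            phi = rbf_features(x, centers, sigma)
            out = w[0] + sum(wi * p for wi, p in zip(w[1:], phi))
            err = t - out
            w[0] += lr * err
            for j, p in enumerate(phi):
                w[j + 1] += lr * err * p
    return w

def predict_rbf(w, x, centers, sigma=1.0):
    phi = rbf_features(x, centers, sigma)
    return 1 if w[0] + sum(wi * p for wi, p in zip(w[1:], phi)) >= 0.5 else 0

# two kernels, one per cluster; class labels follow the clusters
centers = [[0.0, 0.0], [3.0, 3.0]]
X = [[0.1, 0.0], [0.0, 0.2], [2.9, 3.0], [3.1, 2.8]]
y = [0, 0, 1, 1]
w = train_rbf(X, y, centers)
```

The component reuses the multilayer perceptron's learning parameters for this output-layer training, which is why its parameter list points to the MLP.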