Search and automation procedures
Neural network design is a complex and challenging process, with numerous possible architectures, and difficult decisions to be made about feature selection, data resampling, and model complexity. Trajan's Intelligent Problem Solver encapsulates sophisticated search algorithms to rapidly and effectively determine a sensible architecture, but retains enough flexibility to allow you to constrain the search as you develop a firmer model. Trajan's resampling procedures allow you to rapidly perform within-sample and between-sample experiments, and its Sensitivity Analysis and Feature Selection algorithms help you determine which inputs are relevant to your problem domain.
All the most widely used models are available: Multilayer Perceptrons (MLPs), Radial Basis Functions, Generalized Regression Neural Networks, Probabilistic Neural Networks, and Self Organizing Feature Maps. In addition, Trajan supports linear models, principal components extraction, and clustering networks.
Supervised training algorithms
Trajan supports a range of algorithms for training supervised neural networks. For MLPs, these include classic Back Propagation and the fast second-order algorithms Conjugate Gradient Descent, Quasi-Newton, and Levenberg-Marquardt; two-stage training is automatically supported. These algorithms are integrated with weight decay regularization, additive noise, sensitivity-based input pruning, and (for classification networks) automated selection of decision thresholds based on ROC curve analysis. Iterative training can be interrupted at any time, and error rates can optionally be displayed during training, in which case you can also interactively extend training, change algorithms, and so on. For RBFs, Trajan supports a range of exemplar placement algorithms, including sampling and K-Means; several smoothing factor selection algorithms, including K-Nearest Neighbor; and output layer optimization by Singular Value Decomposition or Conjugate Gradient Descent.
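The RBF recipe described above — place exemplars, then solve the output layer directly — can be sketched generically. This is an illustrative implementation under my own assumptions, not Trajan's code: K-Means places the centers, and the output weights come from an SVD-based least-squares solve (NumPy's `pinv` uses the SVD internally), standing in for the Singular Value Decomposition option named in the text.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Exemplar placement: K-Means cluster centers become RBF centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each case to its nearest center, then move centers
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, sigma):
    """Hidden-layer activations: Gaussian of the distance to each center."""
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def train_rbf(X, y, k=4, sigma=1.0):
    centers = kmeans(X, k)
    H = rbf_design(X, centers, sigma)
    # output-layer optimization: SVD-based least squares (pseudo-inverse)
    W = np.linalg.pinv(H) @ y
    return centers, W
```

Because the hidden layer is fixed once the centers and smoothing factor are chosen, the output layer is a linear problem, which is why a single direct solve (rather than iterative training) suffices.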
Unsupervised algorithms and clustering
Trajan supports the two-phase Kohonen training algorithm for Self Organizing Feature Maps, and includes an interactive Topological Map display allowing viewing and class labeling of topological layer neurons. Clustering algorithms include Learning Vector Quantization and K-Nearest Neighbor classifiers, and exemplar labeling algorithms such as K-Nearest Neighbor and Voronoi Neighbors.
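The two-phase Kohonen scheme can be sketched as follows — this is a minimal generic illustration, not Trajan's implementation, and the phase parameters are my own assumptions: a short ordering phase with a high learning rate and a wide, shrinking neighborhood, followed by a longer fine-tuning phase with a low rate and a small neighborhood.

```python
import numpy as np

def train_som(X, grid=(4, 4), phases=((100, 0.5, 2.0), (400, 0.05, 0.5)), seed=0):
    """Two-phase Kohonen training. Each phase is (steps, learning rate, start radius)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # grid coordinates of each neuron in the topological layer
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W = rng.normal(size=(rows * cols, X.shape[1]))
    for steps, rate, radius in phases:
        for t in range(steps):
            x = X[rng.integers(len(X))]
            # best-matching unit: the neuron whose weights are nearest the case
            bmu = np.linalg.norm(W - x, axis=1).argmin()
            # Gaussian neighborhood on the grid, shrinking within the phase
            r = radius * (1 - t / steps) + 1e-3
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * r * r))
            # pull the winner and its grid neighbors toward the case
            W += rate * h[:, None] * (x - W)
    return W
```

The wide first-phase neighborhood is what produces the topological ordering that the map display then lets you inspect and label; the second phase only refines individual neurons' positions.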
Feature selection
Feature selection is a key procedure in neural network design, and is built into both the Intelligent Problem Solver and the Custom Network Designer. In addition, you can conduct Sensitivity Analysis to determine the relevance of each input in a finished model, or use specialized forward selection, backward selection, and genetic algorithm procedures, integrated with PNN and GRNN networks, to select features from the data set. If you have a large number of numeric variables, Trajan's built-in Principal Components Analysis (PCA) networks can be used for feature extraction.
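The idea behind sensitivity analysis can be sketched in a few lines. This is a generic illustration under my own assumptions (not Trajan's procedure): each input in turn is made uninformative — here by replacing it with its column mean — and inputs are ranked by how much the model's error grows as a result.

```python
import numpy as np

def sensitivity_ranking(predict, X, y):
    """Rank inputs by the error ratio when each input is replaced by its mean."""
    base = np.mean((predict(X) - y) ** 2) + 1e-12  # guard against zero error
    ratios = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = X[:, j].mean()                  # make input j uninformative
        ratios.append(np.mean((predict(Xp) - y) ** 2) / base)
    order = np.argsort(ratios)[::-1]               # highest ratio = most relevant
    return order, ratios
```

An input whose ratio is close to 1 contributes little to the model and is a candidate for pruning; ratios well above 1 mark the inputs the model actually depends on.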
Resampling and ensembles
Resampling of data is a key process in neural network design: it allows more accurate estimation of the generalization performance of networks and, if the networks are formed into an ensemble, improved performance. Trajan supports a sophisticated range of resampling algorithms, including Monte Carlo, cross validation, and bootstrap algorithms, all fully integrated with Trajan's Intelligent Problem Solver and Custom Network Designer tools. As with other aspects of the system, Trajan's sampling algorithms may be used at a number of levels of detail: from defining subset size alone in Monte Carlo sampling, or specifying that cases with missing values be omitted, right through to individually nominating the cases to be used.
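The three resampling schemes named above differ only in how case indices are drawn; a generic sketch (nothing here is Trajan's API) makes the distinction concrete:

```python
import random

def monte_carlo_split(n, train_size, seed=0):
    """Monte Carlo: one random training subset of the given size, rest held out."""
    rng = random.Random(seed)
    train = set(rng.sample(range(n), train_size))
    return sorted(train), [i for i in range(n) if i not in train]

def kfold_splits(n, k):
    """Cross validation: every case appears in exactly one test fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = sorted(c for j, f in enumerate(folds) if j != i for c in f)
        yield train, test

def bootstrap_sample(n, seed=0):
    """Bootstrap: n cases drawn with replacement (some repeat, some are omitted)."""
    rng = random.Random(seed)
    return [rng.randrange(n) for _ in range(n)]
```

Training one network per draw and pooling the held-out error estimates gives the more accurate generalization estimate; keeping the trained networks gives the raw material for an ensemble.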
Trajan supports two major forms of ensemble: averaging and voting. The ensemble's output may be formed as a weighted average of the output activations of the member networks, or by a vote held among the members. Ensembles may be formed automatically by the Intelligent Problem Solver, built from the resampled networks generated by the Custom Network Designer, or designed by hand.
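The two combination rules are simple enough to sketch directly. This is an illustrative version under my own assumptions, not Trajan's implementation; the "members" here are stand-ins for any trained networks, represented as callables.

```python
from collections import Counter

def average_ensemble(members, x, weights=None):
    """Averaging ensemble: weighted average of member output activations
    (equal weights by default)."""
    w = weights or [1.0 / len(members)] * len(members)
    return sum(wi * m(x) for wi, m in zip(w, members))

def voting_ensemble(members, x):
    """Voting ensemble: each member casts one vote for a class; majority wins."""
    return Counter(m(x) for m in members).most_common(1)[0][0]
```

Averaging suits regression outputs and continuous confidence levels; voting suits classifiers that emit a single class label per case.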