Model Candidate Selection and Tuning



The algorithm is introduced during the model selection process, when an analyst chooses a kind, or class, of model to implement.


Selecting a model can be relatively easy or quite complicated, depending on the context. The following concerns influence the analyst's decision:

  • The kind of output variable: If the outcome variable consists of ordinal or multiple classes, fewer model options are available.

  • The ability to implement an asymmetric cost ratio: Not all errors have the same consequences. Stakeholders often consider one type of error more acceptable than another; in medical screening, for example, a missed diagnosis is usually costlier than a false alarm. An asymmetric cost ratio lets the analyst penalize the more damaging errors more heavily, steering the algorithm toward making the less costly kind of mistake instead (a minimal code sketch follows this list).

  • The ability to explain the reasons for predictions: Algorithms exist on a spectrum of explainability. If there is external pressure that requires explanations, the analyst will probably opt for either a simpler class of models or an algorithm that allows them to generate visualizations of how the model came to a decision. These plots show how much each input variable contributed to the predictions and how changes in the inputs tend to change the model's output.

  • The potential for overfitting: If the dataset has many input variables, the analyst may select a model that is more resistant to overfitting.

  • The opportunities for tuning: Some models have more influential hyperparameters, which gives the analyst greater flexibility during the model tuning stage.

  • Practical resource limitations: Machine learning algorithms take processing power, time, and memory to run, and more complex algorithms take more of each. Developers choose algorithms based on the deployment environment's constraints.
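
As a concrete illustration of the asymmetric cost ratio mentioned above, the sketch below weights one class's errors more heavily than the other's. It is only a sketch: it assumes scikit-learn, a synthetic dataset, and an illustrative 5:1 weighting.

```python
# Minimal sketch: an asymmetric cost ratio via class weights.
# Assumes scikit-learn; the data and the 5:1 weighting are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Errors on class 1 are treated as five times as costly as errors on class 0,
# so the fitted model accepts more false positives to avoid false negatives.
model = LogisticRegression(class_weight={0: 1, 1: 5})
model.fit(X_train, y_train)
print(confusion_matrix(y_test, model.predict(X_test)))
```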



Classification Models

Classification is the process of learning a function that assigns the observations in a dataset to discrete classes based on their input variables.


Logistic Regression

Decision Tree

Random Forest

Naïve Bayes

K-Nearest Neighbors

Support Vector Machines
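
To make these candidates concrete, here is a minimal sketch that fits two of the listed classifiers on the same synthetic dataset and compares their accuracy on a validation split. It assumes scikit-learn; any of the models above could be swapped in.

```python
# Minimal sketch: comparing two candidate classifiers on a validation split.
# Assumes scikit-learn; the dataset is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_val, y_val))  # accuracy on the validation split
```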


Regression Models

Regression is the process of modeling the relationship between two or more continuous variables so that one variable can be estimated from the others.


Linear Regression

Multiple Linear Regression

Polynomial Regression

Support Vector Regression

Decision Tree Regression

Random Forest Regression
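
The same comparison works for regression candidates. The sketch below, again assuming scikit-learn and synthetic data, fits two of the listed regressors and reports their R² scores on a validation split.

```python
# Minimal sketch: comparing two candidate regressors on a validation split.
# Assumes scikit-learn; the dataset is synthetic and purely illustrative.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_val, y_val))  # R^2 on the validation split
```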



Model Tuning

Tuning is a trial-and-error process that involves changing hyperparameters, the "dials" or "knobs" of a machine learning model. Choosing suitable hyperparameters is crucial for the model's performance.


Hyperparameters differ from other model parameters in that they are not learned automatically during training. Instead, the analyst must set them manually.


All machine learning algorithms come with a "default" set of hyperparameters. The hyperparameters differ for each algorithm: for example, the number of trees in a tree-based algorithm or the value of k in k-nearest neighbors.


The default set of hyperparameters for each algorithm provides a solid starting point and will generally result in a reasonably well-performing model. However, it may not be the optimal configuration for the dataset and problem at hand, so the analyst needs to tune the hyperparameters to find the best values for the data.
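
For instance, in a library such as scikit-learn (used here only as an assumed example), the defaults can be inspected directly before any tuning is done:

```python
# Minimal sketch: inspecting default hyperparameters.
# Assumes scikit-learn; the original text does not name a specific library,
# and default values can vary between library versions.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

print(KNeighborsClassifier().get_params()["n_neighbors"])     # default value of k
print(RandomForestClassifier().get_params()["n_estimators"])  # default number of trees
```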


Tuning is typically an iterative cycle: change a hyperparameter, rerun the algorithm on the training data, and evaluate the resulting model on the validation set. Repeating this process identifies which set of hyperparameters results in the most accurate model.
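
One common way to automate this cycle is a grid search over candidate hyperparameter values. The sketch below assumes scikit-learn and uses cross-validation folds in place of a single fixed validation set; the grid values are illustrative, not recommendations.

```python
# Minimal sketch: tuning a random forest's number of trees and tree depth,
# scoring each hyperparameter combination on held-out folds.
# Assumes scikit-learn; grid values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # the hyperparameter set with the best held-out score
print(search.best_score_)
```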


At this point, candidate models have been chosen and tested against one another, and the highest-performing models' hyperparameters have been tuned on the validation set. The analyst now makes one of the final decisions: which machine learning model will move to the next stage, deployment. This choice is typically guided by which model performs best on the success function. Once the decision is made, the chosen model goes through one additional layer of evaluation on the test set, data it has never seen before, to provide an unbiased estimate of its performance.
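
A minimal sketch of that final, one-time test-set evaluation might look like the following, assuming scikit-learn, an illustrative three-way split, and an illustrative chosen model.

```python
# Minimal sketch: a train/validation/test split in which the test set is used
# exactly once, after the final model has been chosen and tuned.
# Assumes scikit-learn; split sizes and the chosen model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=0)

final_model = RandomForestClassifier(n_estimators=200, random_state=0)  # the tuned candidate
final_model.fit(X_train, y_train)
print("validation accuracy:", final_model.score(X_val, y_val))
print("test accuracy:", final_model.score(X_test, y_test))  # unbiased, one-time check
```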




