
Showing posts from February, 2021

Model Evaluation Techniques

A machine learning model is trained and then used to predict outputs. But if we train the model on a dataset and also test it on that same dataset, we will get almost 100% accuracy, yet when we provide unseen data, the model may fail to predict the correct output. This problem is called overfitting. A good model should also work well on unseen data. "Model evaluation is nothing but calculating the accuracy score on unseen data." The methods to evaluate a model fall into two categories: holdout and cross-validation.

Holdout: In this method, the model is trained on one portion of the data and tested on a different, unseen portion. That is, we divide the original dataset into training data and testing data. The usual ratio of training to testing data is 80:20. It involves the following steps:

1. Divide the dataset into training and testing data (generally 80:20).
2. Train the model using the training data.
3. Test the model using the testing data.
4. Calculate the accuracy score on the testing data.
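The holdout steps above can be sketched in plain Python. This is a minimal illustration, not a library implementation: `holdout_split` and `accuracy` are hypothetical helper names written here for clarity (in practice a library routine such as a train/test splitter would typically be used).

```python
import random

def holdout_split(data, labels, test_ratio=0.2, seed=42):
    """Shuffle and split a dataset into training and testing portions.

    test_ratio=0.2 gives the common 80:20 split described above.
    """
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    cut = int(len(data) * (1 - test_ratio))
    train_idx, test_idx = indices[:cut], indices[cut:]
    X_train = [data[i] for i in train_idx]
    y_train = [labels[i] for i in train_idx]
    X_test = [data[i] for i in test_idx]
    y_test = [labels[i] for i in test_idx]
    return X_train, y_train, X_test, y_test

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Toy example: 10 samples, an 80:20 split leaves 2 for testing.
X = list(range(10))
y = [x % 2 for x in X]          # labels: even/odd
X_train, y_train, X_test, y_test = holdout_split(X, y)
print(len(X_train), len(X_test))  # 8 2
```

Because the split is shuffled, the test portion acts as unseen data: the model never touches it during training, so the accuracy computed on it estimates real-world performance rather than memorization.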

Hypothesis space and inductive bias

We all know that every machine learning model should come up with high accuracy (without overfitting). That means the model should predict the correct output on testing data, i.e., data it has not seen before. The main goal of any machine learning model is to predict the correct output on unseen data.