As a neural network accumulates more parameters, it becomes more prone to overfitting. Training for too many epochs can overfit the training dataset, whereas too few epochs may produce an underfit model.
A practical solution is to train on the training dataset but stop once performance on the validation dataset begins to deteriorate. This technique, called early stopping, also avoids wasting training resources. The Keras module contains a built-in callback designed for this purpose: the EarlyStopping callback.
It is available as tf.keras.callbacks.EarlyStopping in the Keras API, TensorFlow's high-level API.
Patience is an important parameter of the EarlyStopping callback.
If the patience parameter is set to X epochs, training terminates only after the monitored performance measure has failed to improve for X consecutive epochs.
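The patience mechanism can be sketched in plain Python (an illustration of the idea, not the actual Keras implementation): track the best value seen so far, reset a counter on every improvement, and stop when the counter reaches the patience limit.

```python
# Minimal sketch of the patience logic, assuming a higher-is-better metric
# such as validation accuracy. Returns the epoch at which training stops.
def epochs_run(val_accuracies, patience):
    best = float("-inf")
    wait = 0
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc > best:            # improvement resets the counter
            best = acc
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs in a row
                return epoch      # training stops here
    return len(val_accuracies)    # ran out of epochs before triggering

# With patience=2, training stops at epoch 5: accuracy last improved at
# epoch 3, then failed to improve at epochs 4 and 5.
print(epochs_run([0.60, 0.65, 0.70, 0.69, 0.68, 0.72], patience=2))  # → 5
```

Note that the epoch-6 value of 0.72 is never reached, which is exactly the trade-off patience controls: larger values tolerate longer plateaus at the cost of extra training.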
The code implementation below walks through this in practice.
Source: Overfitting and Underfitting
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense, MaxPooling1D
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Conv1D(16, 5, activation='relu', input_shape=(128, 1)),
    MaxPooling1D(4),
    Flatten(),
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Stop training once validation accuracy has not improved for 5 consecutive epochs
early_stopping = EarlyStopping(monitor='val_accuracy', patience=5)
model.fit(X_train, y_train, validation_split=0.2, epochs=100,
          callbacks=[early_stopping])
pip install pytorch-ignite

from ignite.engine import Engine, Events
from ignite.handlers import EarlyStopping

def score_function(engine):
    # EarlyStopping expects higher scores to be better, so negate the loss
    val_loss = engine.state.metrics['nll']
    return -val_loss

handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
# Note: the handler is attached to an *Evaluator* (runs one epoch on validation dataset).
evaluator.add_event_handler(Events.COMPLETED, handler)