Description
Describe the feature or idea you want to propose
I am testing the different AutoEncoders implemented in aeon. I have noticed that the models are only storing the training loss, accessible for plotting/inspection in model.summary()
. However, there are no validation losses.
Describe your proposed solution
I made a small addition to the `self.training_model_.fit()` call in `aeon/aeon/clustering/deep_learning/_ae_abgru.py` (lines 284 to 292 at commit 0412d5b):
```python
self.history = self.training_model_.fit(
    X,
    X,
    batch_size=mini_batch_size,
    epochs=self.n_epochs,
    verbose=self.verbose,
    callbacks=self.callbacks_,
)
```
where only a new `validation_split` argument (a float between 0 and 1) is needed, according to the Keras API: https://keras.io/api/models/model_training_apis/
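As a minimal, self-contained illustration of the mechanism (a toy dense autoencoder on random data, not aeon code), adding `validation_split` to `fit()` makes Keras record a `val_loss` entry in the history alongside `loss`:

```python
import numpy as np
from tensorflow import keras

# Toy dense autoencoder on random data; only the validation_split
# argument matters here, everything else is placeholder.
rng = np.random.default_rng(0)
X = rng.random((64, 8)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(8),
])
model.compile(optimizer="adam", loss="mse")

# validation_split=0.2 holds out the last 20% of the samples, so
# history.history gains a "val_loss" entry next to "loss".
history = model.fit(X, X, epochs=2, batch_size=16, verbose=0,
                    validation_split=0.2)
print(sorted(history.history))
```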
I did this in my experiments and it seems to work fine. I consider this enhancement very useful: naive users might run with the default of 2000 `n_epochs` and end up overfitting all their models, in which case the learned manifold and the subsequent clustering make little sense.
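A side benefit worth noting: once a validation loss is tracked, the standard Keras `EarlyStopping` callback can cut a long run short before it overfits. This is a sketch with a toy model, not part of the proposal itself:

```python
import numpy as np
from tensorflow import keras

# Sketch: with val_loss available, EarlyStopping can halt training
# before the full epoch budget is spent. Toy model, not aeon code.
rng = np.random.default_rng(0)
X = rng.random((64, 8)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(8),
])
model.compile(optimizer="adam", loss="mse")

# Stop when val_loss has not improved for 5 epochs and roll back
# to the best weights seen so far.
stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
history = model.fit(X, X, epochs=200, batch_size=16, verbose=0,
                    validation_split=0.2, callbacks=[stop])
```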
I would be happy to create a PR for most, if not all, the models in the deep learning clustering module.
Basically, it would be a new argument (`validation_split`) in the initialization of the clusterer classes.
I am new to Keras, so please let me know if I overlooked any easier way to do this.
Describe alternatives you've considered, if relevant
No response
Additional context
No response