Current behavior
If there is only one worker, training with the EarlyStopping callback works fine. But when multiple workers run a distributed training job with the EarlyStopping callback, all workers hang, waiting to synchronize.

Expected behavior
I want the EarlyStopping callback to work correctly not only on a single-worker task but also on multi-worker distributed training jobs.
System information
- GPU model and memory:
- OS Platform:
- Docker version:
- GCC/CUDA/cuDNN version:
- Python/conda version:
- TensorFlow/PyTorch version:
Code to reproduce
....
callbacks_list.append(
    EarlyStopping(monitor="val_loss",
                  min_delta=self.ctx.min_delta,
                  patience=self.ctx.patience,
                  verbose=verbose,
                  mode="min",
                  baseline=None,
                  restore_best_weights=True)
)
....
keras_model.fit(
    x=None,
    y=None,
    validation_data=valid_ds,
    steps_per_epoch=self.ctx.steps_per_epoch,
    validation_steps=self.ctx.valid_steps_per_epoch,
    epochs=self.ctx.callback_num,
    callbacks=callbacks_list,
    checkpoint_dir=self.ctx.model_save_path,
    keep_checkpoint_max=1,
    verbose=0)
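For reference, here is a minimal, self-contained sketch of the kind of multi-worker EarlyStopping setup involved, written against plain tf.keras with tf.distribute.MultiWorkerMirroredStrategy. The TF_CONFIG, host addresses, model, data, and hyperparameters are placeholders I made up for illustration; this approximates our job rather than being the exact code above, and whether plain tf.keras reproduces the hang may differ from our framework.

# Standalone sketch (not our production code), assuming plain tf.keras with
# MultiWorkerMirroredStrategy; hosts, model, and data below are placeholders.
import json
import os

import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

# Each worker needs its own TF_CONFIG with the same cluster spec but its own
# task index; set this per worker before creating the strategy (placeholder hosts).
os.environ.setdefault("TF_CONFIG", json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},
    "task": {"type": "worker", "index": 0},
}))

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic data just to drive the training loop.
x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")
train_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32).repeat()
valid_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

callbacks_list = [
    EarlyStopping(monitor="val_loss",
                  min_delta=0.0,
                  patience=2,
                  verbose=1,
                  mode="min",
                  baseline=None,
                  restore_best_weights=True),
]

# With a single worker this finishes; with two or more workers the run hangs
# once EarlyStopping decides to stop.
model.fit(train_ds,
          validation_data=valid_ds,
          steps_per_epoch=8,
          epochs=50,
          callbacks=callbacks_list,
          verbose=0)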
Willing to contribute
Yes