[Feature Request] Implement multiple optimizer #21741

@atiaisaac

Description

I would like to propose the addition of a multi-optimizer setting, with the option of changing each optimizer's learning rate via callbacks, for discriminative layer training purposes.

MOTIVATION

There is currently no direct way to implement discriminative training without going low level and customizing what happens in model.fit(). Even then, it is unclear whether this works out of the box or whether the learning rates can still be changed via callbacks (a sketch of the current workaround follows below). I believe providing first-class support for discriminative learning rates would greatly enhance the usability of Keras as a whole.
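For reference, the current workaround looks roughly like the sketch below. This is a minimal illustration for the TensorFlow backend, following the documented custom train_step pattern; the DiscriminativeModel class, the two-layer split, and the learning rates are all illustrative, not an established recipe.

    import tensorflow as tf
    import keras

    class DiscriminativeModel(keras.Model):
        """Workaround sketch: a second optimizer for the early layers."""

        def __init__(self, base_optimizer, **kwargs):
            super().__init__(**kwargs)
            self.base_optimizer = base_optimizer  # small LR for early layers
            self.base = keras.layers.Dense(64, activation='relu')
            self.head = keras.layers.Dense(10, activation='softmax')

        def call(self, x):
            return self.head(self.base(x))

        def train_step(self, data):
            x, y = data
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)
                loss = self.compute_loss(y=y, y_pred=y_pred)
            base_vars = self.base.trainable_variables
            head_vars = self.head.trainable_variables
            grads = tape.gradient(loss, base_vars + head_vars)
            # Route each slice of the gradients to its own optimizer.
            self.base_optimizer.apply_gradients(
                zip(grads[:len(base_vars)], base_vars))
            self.optimizer.apply_gradients(
                zip(grads[len(base_vars):], head_vars))
            for metric in self.metrics:
                if metric.name == 'loss':
                    metric.update_state(loss)
                else:
                    metric.update_state(y, y_pred)
            return {m.name: m.result() for m in self.metrics}

    model = DiscriminativeModel(base_optimizer=keras.optimizers.Adam(1e-4))
    model.compile(
        optimizer=keras.optimizers.Adam(1e-2),  # drives the head only
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
    )

Even with this, whether a standard LearningRateScheduler callback would ever reach base_optimizer is an open question, which is exactly the gap this proposal addresses.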

PROPOSED CHANGE

Add a MultiOptimizer wrapper that maps each wrapped optimizer to a subset of the model's trainable variables and can be passed as the optimizer argument to model.compile():

    import keras

    # MultiOptimizer is the proposed API; it does not exist in Keras today.
    model = keras.Sequential([
        keras.layers.Dense(64, activation='relu', name='layer1'),
        keras.layers.Dense(32, activation='relu', name='layer2'),
        keras.layers.Dense(10, activation='softmax', name='output')
    ], name='example_model')
    
    model.build(input_shape=(None, 20))
    
    adam_opt = keras.optimizers.Adam(learning_rate=0.001)
    sgd_opt = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
    
    layer1_vars = model.layers[0].trainable_variables
    layer2_and_output_vars = (
        model.layers[1].trainable_variables + 
        model.layers[2].trainable_variables
    )
    
    multi_optimizer = MultiOptimizer([
        (adam_opt, layer1_vars),
        (sgd_opt, layer2_and_output_vars)
    ])
    
    model.compile(
        optimizer=multi_optimizer,
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
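For what it's worth, one possible shape for such a wrapper, purely as a sketch: subclass keras.optimizers.Optimizer and partition incoming (gradient, variable) pairs among the wrapped optimizers by variable identity. The class name, constructor signature, and the _routes attribute are all part of this proposal, not existing Keras API.

    import keras

    class MultiOptimizer(keras.optimizers.Optimizer):
        """Proposed wrapper sketch: each wrapped optimizer updates only
        the variables it was registered with."""

        def __init__(self, optimizers_and_vars, name='multi_optimizer',
                     **kwargs):
            # The wrapper's own learning rate is unused; the wrapped
            # optimizers each keep their own.
            super().__init__(learning_rate=0.0, name=name, **kwargs)
            self._routes = [
                (opt, {id(v) for v in variables})
                for opt, variables in optimizers_and_vars
            ]

        def apply_gradients(self, grads_and_vars):
            # Partition (gradient, variable) pairs among the wrapped
            # optimizers by variable identity, then delegate.
            buckets = [[] for _ in self._routes]
            for grad, var in grads_and_vars:
                for i, (_, ids) in enumerate(self._routes):
                    if id(var) in ids:
                        buckets[i].append((grad, var))
                        break
            for (opt, _), bucket in zip(self._routes, buckets):
                if bucket:
                    opt.apply_gradients(bucket)

A real implementation would also have to cover serialization, variable tracking, the newer Optimizer.apply entry point, and variables matched by no route; this only illustrates the dispatch idea.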

This feature would let one easily experiment with different training regimens without constantly having to switch frameworks.
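Finally, on the callbacks side of the request, per-optimizer learning-rate control could look like the sketch below. The PerOptimizerDecay class is hypothetical, and _routes refers to the wrapper sketch above; a real API would expose the wrapped optimizers publicly.

    import keras

    class PerOptimizerDecay(keras.callbacks.Callback):
        """Hypothetical callback: decay every wrapped optimizer's
        learning rate at the end of each epoch."""

        def __init__(self, factor=0.9):
            super().__init__()
            self.factor = factor

        def on_epoch_end(self, epoch, logs=None):
            for opt, _ in self.model.optimizer._routes:
                current = float(keras.ops.convert_to_numpy(opt.learning_rate))
                opt.learning_rate = current * self.factor

    # Usage: model.fit(x_train, y_train, epochs=5,
    #                  callbacks=[PerOptimizerDecay(0.9)])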
