Conversation

@stefanossala
Add FFNN and LSTM PyTorch implementation

@MaikBastian (Collaborator) left a comment:
Thanks for the PR! I've added some comments: nothing major, just a few small cleanups and suggestions. Please let me know what you think! :)


I guess we can remove the two benchmark scripts again. I will add the arguments you used to the README, so that the training can be replicated in the future.

if architecture == "FFNN":
    results.append(model.evaluate(statistics, labels, batch_size=args.batch_size, verbose=1))
if architecture in ("CNN", "LSTM", "Transformer"):
    if hasattr(model, "evaluate"):  # Keras model

I would prefer to be more explicit when checking which library to use. Maybe we could add a parameter to the function, i.e. backend, an enum with two values: keras and pytorch. This parameter could then be provided when the function is called, and either the FFNN or the LSTM is evaluated.
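A minimal sketch of that suggestion (the `Backend` enum, the `evaluate` wrapper, and the `evaluate_torch` helper are all hypothetical names for illustration, not code from this PR):

```python
from enum import Enum


class Backend(Enum):
    KERAS = "keras"
    PYTORCH = "pytorch"


def evaluate_torch(model, statistics, labels, batch_size):
    # Placeholder: the PyTorch evaluation loop from this PR would go here.
    raise NotImplementedError


def evaluate(model, statistics, labels, batch_size, backend):
    """Dispatch explicitly on the backend instead of hasattr() duck-typing."""
    if backend is Backend.KERAS:
        return model.evaluate(statistics, labels, batch_size=batch_size, verbose=1)
    if backend is Backend.PYTORCH:
        return evaluate_torch(model, statistics, labels, batch_size)
    raise ValueError(f"unknown backend: {backend}")
```

The caller then picks the value explicitly, e.g. `Backend.PYTORCH` for the new FFNN/LSTM paths.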

stats_np = statistics.numpy()

"""
# Normalization: solo al primo batch

Here and everywhere else: Please translate comments from Italian to English. :)

model = None

if architecture in ("FFNN", "CNN", "LSTM", "Transformer"):
    if architecture == "FFNN" and model_path.endswith(".pth"):

A parameter with the enum Backend could be useful here, again.


if architecture in ("FFNN", "CNN", "LSTM", "Transformer"):
    if architecture == "FFNN" and model_path.endswith(".pth"):
        from cipherTypeDetection.train import TorchFFNN

Please move this line to the top of the file, to the other imports.

for device in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(device, True)

class FFNN(nn.Module):

We could move both modules into their own (or a combined) file.




def train_torch_ffnn(model, args, train_ds):

This function could be combined with train_torch_lstm, maybe just as train_torch? If I see it correctly, there is just one small difference: in the feature-learning case the inputs are converted to a different type. Maybe we could add a parameter approach, an enum with the two values feature_learning and feature_engineering?

If you have better ideas for my suggested names, please feel free to change them! :)
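One possible shape for the combined function (a sketch only; the `Approach` enum and `train_torch` signature are suggested names, and the loop is a generic PyTorch training loop, not the exact code from this PR):

```python
from enum import Enum

import torch


class Approach(Enum):
    FEATURE_ENGINEERING = "feature_engineering"  # FFNN on precomputed statistics
    FEATURE_LEARNING = "feature_learning"        # LSTM on raw token sequences


def train_torch(model, args, train_ds, approach):
    """One training loop for both architectures; only the input dtype differs."""
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=getattr(args, "lr", 1e-3))
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(getattr(args, "epochs", 1)):
        for inputs, labels in train_ds:
            # The only approach-specific step: convert inputs to the expected dtype.
            if approach is Approach.FEATURE_LEARNING:
                inputs = inputs.long()   # token ids for an embedding layer
            else:
                inputs = inputs.float()  # engineered statistic vectors
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```

The caller would pass `Approach.FEATURE_ENGINEERING` for the FFNN and `Approach.FEATURE_LEARNING` for the LSTM.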

model.train()

"""
# --- Early stopping check ---

Could we add the early stopping again, but with an additional boolean parameter on the function to enable / disable it more easily?
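A small helper could keep that flag out of the training loop itself (a sketch under the suggested boolean-parameter idea; the `EarlyStopping` class name and defaults are hypothetical):

```python
class EarlyStopping:
    """Tracks validation loss; signals stop after `patience` epochs without improvement."""

    def __init__(self, patience=3, enabled=True):
        self.patience = patience
        self.enabled = enabled
        self.best = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        # With enabled=False this is a no-op, so the flag toggles the feature cleanly.
        if not self.enabled:
            return False
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

The training function would then take something like `early_stopping=False` and call `should_stop(val_loss)` once per epoch.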




def predict_torch_ffnn(model, test_ds, args):

As in the training case, we should be able to combine both prediction functions.
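The merged version could mirror the combined training function (again just a sketch; `predict_torch` and its signature are suggested names, not code from this PR):

```python
import torch


def predict_torch(model, test_ds, args):
    """One prediction loop shared by the FFNN and LSTM paths."""
    model.eval()
    outputs = []
    with torch.no_grad():
        for inputs, _labels in test_ds:
            logits = model(inputs)
            outputs.append(torch.softmax(logits, dim=-1))
    return torch.cat(outputs)
```

If the input-dtype difference matters here too, the same Approach enum from the training side could be reused as a parameter.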

model = create_model(architecture, extend_model, output_layer_size, max_train_len)
if architecture in ("FFNN", "CNN", "LSTM", "Transformer") and extend_model is None:
    model.summary()
    if hasattr(model, "summary"):

A backend parameter could be useful here, again. (The same applies in a few places below.)
