Conversation

@johndpope
No description provided.

@gael-vanderlee
Owner

gael-vanderlee commented Jan 20, 2021

Looks good! Could you integrate the docker commands better in the readme? Perhaps make a "Docker" section in "Getting started" with instructions.

@johndpope
Author

Sure. I've been battling to get the generated files off the Docker image. Is it the case that once the style is trained, a new image can be hot-swapped in, or does each new image need training as well? I'm imagining a use case for video, but it seems prohibitively (computationally) expensive.
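For the file-extraction problem mentioned above, a common approach is `docker cp` or a bind mount. This is only a sketch: the container name `style-transfer`, the image tag `style-transfer-image`, and the output path `/app/outputs` are assumptions for illustration, not this project's actual layout.

```shell
# Run the container with an explicit name so we can refer to it later.
# (Image tag and paths below are placeholders; adjust to the project.)
docker run --name style-transfer style-transfer-image

# After the run finishes, copy results out of the (even stopped) container.
docker cp style-transfer:/app/outputs ./outputs

# Alternatively, bind-mount a host directory so generated files land on
# the host directly, with no copy step needed afterwards:
docker run -v "$(pwd)/outputs:/app/outputs" style-transfer-image
```

The bind-mount variant is usually the least painful for iterating, since outputs appear on the host as they are written.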

@gael-vanderlee
Owner

Every new image needs training, so this project doesn't work on video (or only very slowly). For video, you could build on existing solutions: there are video segmentation models (such as https://github.com/kmaninis/OSVOS-PyTorch) as well as video style transfer models (such as https://github.com/manuelruder/fast-artistic-videos) that encode temporal information.
