ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer

University of Surrey [1], Adobe Research [2]
arXiv preprint arXiv:2304.05755

Visualization of our NST-driven style representation learning method. We show a training iteration with batch size 6, using 6 content images and 3 style images (our experiments use much larger batch sizes; we use 6 here for clarity). The content images are stylized with a pre-trained, frozen Neural Style Transfer method using two copies of the 3 style images. We extract a style embedding using layer-wise global moment statistics and the logits from a more localized vision transformer.
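As an illustrative sketch (not the authors' implementation), the layer-wise global moment statistics mentioned in the caption can be computed as per-channel means and standard deviations of each layer's feature map, concatenated into a single style vector; the function and layer shapes below are hypothetical:

```python
import numpy as np

def moment_style_embedding(feature_maps):
    """Layer-wise global moment statistics: concatenate per-channel
    mean and std over spatial dimensions for each layer's feature map.
    feature_maps: list of arrays shaped (C, H, W)."""
    stats = []
    for fmap in feature_maps:
        c = fmap.shape[0]
        flat = fmap.reshape(c, -1)          # (C, H*W)
        stats.append(flat.mean(axis=1))     # per-channel mean
        stats.append(flat.std(axis=1))      # per-channel std
    return np.concatenate(stats)

# Example: two hypothetical encoder layers with 4 and 8 channels.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 32, 32)), rng.standard_normal((8, 16, 16))]
emb = moment_style_embedding(layers)
print(emb.shape)  # (24,) = 2 moments x (4 + 8) channels
```

In the full method this moment-based descriptor is combined with vision transformer logits, which capture more localized style cues.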

Abstract

Representation learning aims to discover individual salient features of a domain in a compact and descriptive form that strongly identifies the unique characteristics of a given sample relative to its domain. Existing works in the visual style representation literature have explicitly tried to disentangle style from content during training, but a complete separation has yet to be achieved. Our paper aims to learn a representation of visual artistic style that is more strongly disentangled from the semantic content depicted in an image. We use Neural Style Transfer (NST) to measure and drive the learning signal, and achieve state-of-the-art representation learning on explicitly disentangled metrics. We show that strongly addressing the disentanglement of style and content leads to large gains in style-specific metrics, encoding far less semantic information and achieving state-of-the-art accuracy in downstream multimodal applications.
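The NST-driven learning signal can be illustrated with a simplified contrastive objective (a hedged sketch under assumed details, not the paper's exact loss): following the batch layout in the figure, each style image is applied to two different content images, and those two stylizations form a positive pair whose embeddings should agree regardless of content:

```python
import numpy as np

def info_nce_same_style(embs, n_styles, tau=0.1):
    """Simplified InfoNCE over 2*n_styles style embeddings, where row i
    and row i + n_styles were stylized with the same style image
    (hypothetical batch layout; illustrative, not the paper's exact loss)."""
    z = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sim = z @ z.T / tau                      # cosine similarity / temperature
    n = 2 * n_styles
    losses = []
    for i in range(n):
        pos = (i + n_styles) % n             # index of the same-style partner
        logits = np.delete(sim[i], i)        # drop self-similarity
        pos_idx = pos - 1 if pos > i else pos  # index shift after deletion
        losses.append(-logits[pos_idx] + np.log(np.exp(logits).sum()))
    return float(np.mean(losses))

# Toy batch: 3 orthogonal "style" embeddings, each applied twice.
styles = np.eye(3)
batch = np.vstack([styles, styles])
loss = info_nce_same_style(batch, n_styles=3)
print(round(loss, 4))  # near zero: same-style pairs embed identically
```

Because the two stylizations in a positive pair share only their style (their content differs), minimizing such a loss pressures the embedding to discard semantic content, which is the disentanglement effect measured in the paper.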


BibTeX

@inproceedings{Ruta:ArXiv05755:2023,
        AUTHOR = "Ruta, Dan and Tarres, Gemma Canet and Black, Alexander and Gilbert, Andrew and Collomosse, John",
        TITLE = "ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer",
        BOOKTITLE = "arXiv abs/2304.05755",
        YEAR = "2023",
}