Aladin-nst: Self-supervised disentangled representation learning of artistic style through neural style transfer

(Left) Example style groups from the BAM-FG dataset. The images in each group are style consistent, but they are also semantically consistent. For example, the top-left style group has a consistent weathered-paper style but is also consistent in its subject matter of character design. The top-right group has a consistent pastel style but consistently depicts interiors. The bottom-left group shares a moody, dark vignette photography style, but all of its images are landscapes. The bottom-right vector-art images all contain faces. (Right) Example synthetic style-consistent images, as used in our work (via NeAT). The left-most image in each style group is the reference style image. The BAM-FG data (left) shows style consistency at the cost of entanglement with semantic consistency, unlike the synthetic data (right).

Abstract

Representation learning aims to discover individual salient features of a domain in a compact and descriptive form that strongly identifies the unique characteristics of a given sample relative to its domain. Existing works in the visual style representation literature have tried to explicitly disentangle style from content during training, but a complete separation between the two has yet to be achieved. Our paper aims to learn a representation of visual artistic style that is more strongly disentangled from the semantic content depicted in an image. We use Neural Style Transfer (NST) to measure and drive the learning signal, and achieve state-of-the-art representation learning on explicitly disentangled metrics. We show that strongly addressing the disentanglement of style and content leads to large gains in style-specific metrics, encoding far less semantic information and achieving state-of-the-art accuracy in downstream multimodal applications.
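As a minimal sketch (not the authors' code) of the core idea, a frozen, pre-trained NST model can turn a handful of style references into style-consistent, content-varying image groups; images within a group then act as positives for self-supervised style learning. The function nst_stylize below is a hypothetical stand-in for any pre-trained style transfer model.

import torch

def make_style_groups(content_images, style_images, nst_stylize):
    """Stylize every content image with every style reference.

    content_images: tensor [Nc, 3, H, W]
    style_images:   tensor [Ns, 3, H, W]
    Returns Ns groups; images within a group share a style but depict
    different content, so they form positive pairs for learning a
    style representation disentangled from semantic content.
    """
    groups = []
    with torch.no_grad():  # the NST model stays frozen; it only supplies training data
        for style in style_images:
            group = [nst_stylize(c.unsqueeze(0), style.unsqueeze(0))
                     for c in content_images]
            groups.append(torch.cat(group, dim=0))
    return groups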

Visualization of our NST-driven style representation learning method. We show a training iteration with a batch size of 6, using 6 content images and 3 style images (our experiments use much larger batch sizes; we use 6 here for clarity). The content images are stylized with a pre-trained, frozen Neural Style Transfer method using two copies of the 3 style images. We extract a style embedding using layer-wise global moment statistics and the logits from a more localized vision transformer.
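The sketch below illustrates, under our own assumptions rather than the authors' exact architecture, the style embedding described in the caption: layer-wise global moment (mean/standard deviation) statistics taken from a convolutional encoder, concatenated with the logits of a vision transformer. The encoder handles vgg_features and vit are hypothetical placeholders.

import torch
import torch.nn.functional as F

def moment_statistics(feature_maps):
    """Per-layer global mean and std, concatenated into one vector per image."""
    stats = []
    for f in feature_maps:            # f: [B, C, H, W]
        stats.append(f.mean(dim=(2, 3)))   # global mean per channel  [B, C]
        stats.append(f.std(dim=(2, 3)))    # global std per channel   [B, C]
    return torch.cat(stats, dim=1)

def style_embedding(stylized_batch, vgg_features, vit):
    """Concatenate layer-wise moments with ViT logits and L2-normalize.

    stylized_batch: tensor [B, 3, H, W] of NST-stylized images; images
    stylized with the same reference style are treated as positives.
    """
    moments = moment_statistics(vgg_features(stylized_batch))
    logits = vit(stylized_batch)
    return F.normalize(torch.cat([moments, logits], dim=1), dim=1)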

Style-based image retrieval comparison between our method variants and previous literature.
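For context, style-based retrieval with a learned embedding typically ranks a corpus by cosine similarity to the query's style embedding; the sketch below shows this under the assumption of a trained encoder embed_style, which is a placeholder, not the released model.

import torch
import torch.nn.functional as F

def retrieve_by_style(query_image, corpus_images, embed_style, top_k=5):
    """Return the indices of the top_k corpus images closest in style to the query."""
    q = F.normalize(embed_style(query_image.unsqueeze(0)), dim=1)   # [1, D]
    corpus = F.normalize(embed_style(corpus_images), dim=1)         # [N, D]
    scores = (corpus @ q.t()).squeeze(1)                            # cosine similarity
    return scores.topk(top_k).indices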

Poster

BibTeX

@inproceedings{Ruta:aladinnst:ECCVWS:2024,
        AUTHOR = "Ruta, Dan and Tarres, Gemma Canet and Black, Alexander and Gilbert, Andrew and Collomosse, John",
        TITLE = "Aladin-nst: Self-supervised disentangled representation learning of artistic style through neural style transfer",
        BOOKTITLE = "European Conference on Computer Vision, Vision for Art (VISART VII) Workshop, 2024",
        YEAR = "2024",
        }