On the Direct Alignment of Latent Spaces
Published in NeurIPS Workshop on Unifying Representations in Neural Models (UniReps), 2023
Zorah Lähner, Michael Moeller
Abstract
With the wide adoption of deep learning and pre-trained models, the question arises of how to effectively reuse existing latent spaces for new applications. One important question is how the geometry of the latent space changes between different training runs of the same architecture, and between different architectures trained for the same task. Previous works proposed that the latent spaces for similar tasks are approximately isometric. However, in this work we show that methods restricted to this assumption perform worse than simply using a linear transformation to align the latent spaces. We propose directly computing a transformation between the latent codes of different architectures, which is more efficient than previous approaches and flexible w.r.t. the type of transformation used. Our experiments show that aligning the latent spaces with a linear transformation performs best while requiring no additional prior knowledge.
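As a rough illustration of the abstract's central claim, the sketch below aligns two synthetic latent spaces related by a non-orthogonal linear map. It compares an isometry-restricted alignment (orthogonal Procrustes, solved via an SVD) against an unconstrained linear map fitted by least squares on corresponding pairs of latent codes. All data, dimensions, and variable names here are hypothetical stand-ins, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: paired latent codes of the same n samples from two
# independently trained networks. Network B's space is related to A's by a
# random (hence non-orthogonal) linear map plus a little noise.
n, d = 500, 16
Z1 = rng.standard_normal((n, d))                        # codes from network A
T_true = rng.standard_normal((d, d))                    # non-isometric ground-truth map
Z2 = Z1 @ T_true + 0.01 * rng.standard_normal((n, d))   # codes from network B

# Split corresponding pairs into anchors for fitting and held-out test pairs.
Z1_tr, Z1_te = Z1[:400], Z1[400:]
Z2_tr, Z2_te = Z2[:400], Z2[400:]

# (a) Isometry assumption: best orthogonal map (orthogonal Procrustes via SVD).
U, _, Vt = np.linalg.svd(Z1_tr.T @ Z2_tr)
R = U @ Vt

# (b) Unconstrained linear map, fitted directly by least squares.
T, *_ = np.linalg.lstsq(Z1_tr, Z2_tr, rcond=None)

def rel_err(M):
    """Relative alignment error on the held-out pairs."""
    return np.linalg.norm(Z1_te @ M - Z2_te) / np.linalg.norm(Z2_te)

print(f"orthogonal map error: {rel_err(R):.3f}")
print(f"linear map error:     {rel_err(T):.3f}")
```

When the true relation between the spaces is not an isometry, the orthogonal map cannot fit it and leaves a large residual, while the unconstrained linear map recovers the relation almost exactly; this mirrors the comparison the abstract describes.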
Resources
Bibtex
@inproceedings{laehner2023alignment,
author = {Zorah L\"ahner and Michael Moeller},
title = {On the Direct Alignment of Latent Spaces},
booktitle = {NeurIPS Workshop on Unifying Representations in Neural Models (UniReps)},
year = 2023,
}