We present Representation Autoencoders (RAE), a class of autoencoders that use pretrained, frozen representation encoders such as DINOv2 and SigLIP2 together with trained ViT decoders. RAE can ...
SVG Autoencoder - Uses a frozen representation encoder with a residual branch to compensate for the information loss, and a learned convolutional decoder to map the SVG latent space back to pixel space.
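The data flow this snippet describes (frozen encoder, trainable residual branch compensating for lost information, learned decoder back to pixel space) can be sketched minimally in plain Python. All function names here are illustrative stand-ins, not the actual implementation; the "encoder" is mimicked by a fixed lossy down-projection and the "residual branch" by a path that recovers what the encoder drops.

```python
# Hypothetical sketch of the residual-branch autoencoder pattern:
# a frozen encoder yields a lossy latent, a trainable residual branch
# supplies the discarded information, and a learned decoder combines
# both streams to reconstruct the input. Names are illustrative only.

def frozen_encoder(x):
    # Stand-in for a pretrained, frozen representation encoder:
    # a fixed down-projection that keeps every other element,
    # deliberately losing information.
    return x[::2]

def residual_branch(x):
    # Stand-in for the trainable residual branch: captures the
    # elements the frozen encoder dropped.
    return x[1::2]

def decoder(latent, residual):
    # Stand-in for the learned decoder: merges the latent and the
    # residual back into the original (pixel-space) layout.
    out = []
    for a, b in zip(latent, residual):
        out.extend([a, b])
    return out

x = [0.1, 0.2, 0.3, 0.4]
z = frozen_encoder(x)    # lossy latent
r = residual_branch(x)   # compensating residual
x_hat = decoder(z, r)    # reconstruction of x
```

The point of the sketch is only the split: the frozen path alone cannot reconstruct `x`, but latent plus residual together can, which is the role the residual branch plays in the described architecture.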
Abstract: The accurate prediction of gas mixture concentrations plays a vital role in the development of intelligent electronic noses. However, in most previous studies, time-series data from sensor ...