We present Representation Autoencoders (RAE), a class of autoencoders that pair pretrained, frozen representation encoders such as DINOv2 and SigLIP2 with trained ViT decoders. RAE can ...
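As a rough illustration of the architecture this abstract describes (a frozen pretrained representation encoder feeding a trainable decoder), here is a minimal PyTorch sketch. The torch.hub DINOv2 entry point and its `forward_features` output keys are the public facebookresearch/dinov2 interface, but the decoder depth, width, and pixel-reconstruction head are assumptions for illustration, not the paper's actual ViT decoder.

```python
import torch
import torch.nn as nn

class RAESketch(nn.Module):
    """Sketch of a representation autoencoder: frozen DINOv2 encoder,
    trainable transformer decoder. Assumes 224x224 RGB input."""

    def __init__(self, embed_dim=768, patch=14, img_size=224):
        super().__init__()
        # Frozen pretrained encoder (downloads weights; needs network access).
        self.encoder = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
        for p in self.encoder.parameters():
            p.requires_grad = False
        # Trainable decoder: a few transformer blocks over patch tokens,
        # then a linear head mapping each token back to a pixel patch.
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=12, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_pixels = nn.Linear(embed_dim, patch * patch * 3)
        self.patch, self.grid = patch, img_size // patch

    def forward(self, x):
        with torch.no_grad():
            feats = self.encoder.forward_features(x)["x_norm_patchtokens"]
        tokens = self.to_pixels(self.decoder(feats))  # (B, N, p*p*3)
        b = tokens.shape[0]
        # Unfold the per-token patches back into an image grid.
        img = tokens.reshape(b, self.grid, self.grid, self.patch, self.patch, 3)
        img = img.permute(0, 5, 1, 3, 2, 4)           # (B, 3, gh, p, gw, p)
        return img.reshape(b, 3, self.grid * self.patch, self.grid * self.patch)
```

Training would then minimize a reconstruction loss (e.g., MSE between input and output images) with only the decoder parameters in the optimizer, since the encoder is frozen.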
MAESTRO: Masked Autoencoders for Multimodal, Multitemporal, and Multispectral Earth Observation Data
MAESTRO_FLAIR-HUB_base — pre-trained on FLAIR-HUB
MAESTRO_S2-NAIP-urban_base — pre-trained on S2-NAIP-urban
Land cover segmentation in France, with 12 semantic classes. Note that the FLAIR#2 version ...
Abstract: Image hiding aims to hide secret data in a cover image for secure transmission. Recently, with the development of deep learning, some deep learning-based image hiding methods were ...
Abstract: Variational Graph Autoencoders (VGAE) have emerged as powerful graph representation learning methods with promising performance on graph analysis tasks. However, existing methods typically rely ...
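For reference, below is a minimal sketch of the standard VGAE formulation (Kipf and Welling, 2016) that this abstract builds on, not the specific method the paper proposes. It assumes `adj_norm` is the symmetrically normalized adjacency matrix with self-loops, and `adj_target` is the binary adjacency used as the reconstruction target.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: A_hat @ X @ W."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x, adj_norm):
        return adj_norm @ self.lin(x)

class VGAE(nn.Module):
    def __init__(self, d_in, d_hid=32, d_lat=16):
        super().__init__()
        self.base = GCNLayer(d_in, d_hid)
        self.mu = GCNLayer(d_hid, d_lat)
        self.logvar = GCNLayer(d_hid, d_lat)

    def forward(self, x, adj_norm):
        h = F.relu(self.base(x, adj_norm))
        mu, logvar = self.mu(h, adj_norm), self.logvar(h, adj_norm)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        adj_logits = z @ z.t()  # inner-product decoder over node embeddings
        return adj_logits, mu, logvar

def vgae_loss(adj_logits, adj_target, mu, logvar):
    recon = F.binary_cross_entropy_with_logits(adj_logits, adj_target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```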