RefAny3D: 3D Asset-Referenced Diffusion Models for Image Generation

ICLR 2026

ShanghaiTech, SYSU, U of T, HKUST, SynWorld, MUST

Abstract

In this paper, we propose a 3D asset-referenced diffusion model for image generation, exploring how to integrate 3D assets into image diffusion models. Existing reference-based image generation methods leverage large-scale pretrained diffusion models and demonstrate strong capability in generating diverse images conditioned on a single reference image. However, these methods are limited to single-image references and cannot exploit 3D assets, which constrains their practical versatility. To address this gap, we present a cross-domain diffusion model with dual-branch perception that leverages multi-view RGB images and point maps of 3D assets to jointly model their colors and canonical-space coordinates, achieving precise consistency between the generated images and the 3D references. Our spatially aligned dual-branch architecture and domain-decoupled generation mechanism simultaneously produce two spatially aligned yet content-disentangled outputs, RGB images and point maps, linking 2D image attributes to 3D asset attributes. Experiments show that our approach effectively uses 3D assets as references to produce images consistent with the given assets, opening new possibilities for combining diffusion models with 3D content creation.

Method

Overview of RefAny3D. Given a 3D asset, we render multi-view inputs as conditioning signals for the diffusion model and simultaneously generate the point map of the target RGB image. To ensure pixel-level consistency across different viewpoints, we adopt a shared positional encoding strategy. To further disentangle the RGB domain from the point-map domain, we incorporate Domain-specific LoRA and Text-agnostic Attention. Benefiting from this 3D-aware disentanglement design, our method generates high-quality images that maintain strong consistency with the underlying 3D assets.
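
The overview above names three components: a positional encoding shared across the RGB and point-map branches, Text-agnostic Attention for the point-map branch, and Domain-specific LoRA. The PyTorch sketch below illustrates one way such a dual-branch block could be wired together; it is not the paper's implementation, and the class names, argument names, and token layout are all hypothetical assumptions.

```python
# Minimal sketch, not the official RefAny3D code. It assumes DiT-style token
# sequences; DomainLoRA / DualBranchBlock and all argument names are hypothetical.
import torch
import torch.nn as nn


class DomainLoRA(nn.Module):
    """Low-rank adapter added on top of a shared projection, one per domain."""

    def __init__(self, dim: int, rank: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op

    def forward(self, x):
        return self.up(self.down(x))


class DualBranchBlock(nn.Module):
    """Joint attention over RGB and point-map tokens.

    - Both branches are added to the *same* positional embedding, so token i in
      the RGB stream and token i in the point-map stream refer to the same pixel
      (shared positional encoding).
    - Text tokens are visible only to the RGB branch, so the point-map branch is
      text-agnostic.
    - Each domain has its own LoRA on the shared output projection
      (domain-specific LoRA).
    """

    def __init__(self, dim: int, heads: int = 8, rank: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)
        self.lora_rgb = DomainLoRA(dim, rank)
        self.lora_pmap = DomainLoRA(dim, rank)

    def forward(self, rgb_tok, pmap_tok, text_tok, pos_emb):
        rgb = rgb_tok + pos_emb    # identical positional embedding ...
        pmap = pmap_tok + pos_emb  # ... for both domains

        # RGB branch attends over [RGB, point-map, text] tokens.
        ctx_rgb = torch.cat([rgb, pmap, text_tok], dim=1)
        out_rgb, _ = self.attn(rgb, ctx_rgb, ctx_rgb)

        # Point-map branch only sees [RGB, point-map] tokens (no text).
        ctx_pmap = torch.cat([rgb, pmap], dim=1)
        out_pmap, _ = self.attn(pmap, ctx_pmap, ctx_pmap)

        out_rgb = self.proj(out_rgb) + self.lora_rgb(out_rgb)
        out_pmap = self.proj(out_pmap) + self.lora_pmap(out_pmap)
        return rgb_tok + out_rgb, pmap_tok + out_pmap


if __name__ == "__main__":
    B, N, T, D = 2, 256, 77, 512
    block = DualBranchBlock(D)
    rgb, pmap = torch.randn(B, N, D), torch.randn(B, N, D)
    text, pos = torch.randn(B, T, D), torch.randn(1, N, D)
    out_rgb, out_pmap = block(rgb, pmap, text, pos)
    print(out_rgb.shape, out_pmap.shape)  # torch.Size([2, 256, 512]) twice
```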

Data Construction Pipeline

(a) Data construction pipeline. We first use GroundingDINO to extract the objects of interest, then convert the images into 3D models using Hunyuan3D, and finally apply FoundationPose to estimate the poses of the 3D models in the images. (b) Examples from the dataset.
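
The pipeline in (a) chains three off-the-shelf models. A high-level sketch of that orchestration is given below; the placeholder functions stand in for Grounding DINO, Hunyuan3D, and FoundationPose and are not the actual APIs of those projects.

```python
# High-level sketch of the three-stage pipeline in (a). The three placeholder
# functions stand in for Grounding DINO, Hunyuan3D, and FoundationPose; they are
# NOT the real APIs of those projects and must be replaced with the actual calls.
from dataclasses import dataclass
from pathlib import Path


def detect_object(image_path: Path, prompt: str):
    """Placeholder for Grounding DINO open-vocabulary detection of the object."""
    raise NotImplementedError


def image_to_mesh(image_path: Path, box, out_dir: Path) -> Path:
    """Placeholder for Hunyuan3D image-to-3D reconstruction of the detected object."""
    raise NotImplementedError


def estimate_pose(image_path: Path, mesh_path: Path):
    """Placeholder for FoundationPose model-based 6D pose estimation."""
    raise NotImplementedError


@dataclass
class Sample:
    image_path: Path
    mesh_path: Path  # reconstructed 3D model of the object of interest
    pose: object     # 4x4 object-to-camera transform


def build_dataset(image_paths, prompt: str, out_dir: Path):
    """Chain detection -> reconstruction -> pose registration per image."""
    samples = []
    for image_path in image_paths:
        box = detect_object(image_path, prompt)              # 1) localize the object
        if box is None:
            continue
        mesh_path = image_to_mesh(image_path, box, out_dir)  # 2) lift it to a 3D mesh
        pose = estimate_pose(image_path, mesh_path)          # 3) register mesh to image
        samples.append(Sample(image_path, mesh_path, pose))
    return samples
```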

Qualitative Results

Qualitative comparison with other methods.

RGB and Point Map

Qualitative results with different 3D assets as references. Our method takes a given 3D mesh as input and generates both RGB images and point maps in a unified manner. By enforcing pixel-level spatial alignment between the point maps and RGB outputs, the framework ensures consistent geometry–texture correspondence across views. Moreover, the incorporation of point maps enhances the model’s 3D structural awareness, thereby improving the fidelity and consistency of image generation with respect to the reference 3D assets.
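
Because every pixel carries both a color and a canonical-space coordinate, the two aligned outputs can be fused into a colored point cloud with a single reshape. The NumPy sketch below illustrates this consequence of pixel-level alignment; the array shapes and the foreground mask are assumptions for illustration, not the paper's exact tensor layout.

```python
# Minimal NumPy sketch of fusing pixel-aligned outputs into a colored point
# cloud. Shapes and the foreground mask are assumptions for illustration only.
import numpy as np


def fuse_rgb_and_pointmap(rgb, pmap, valid_mask=None):
    """rgb: (H, W, 3) colors in [0, 1]; pmap: (H, W, 3) canonical-space XYZ.

    Returns an (N, 6) array of [x, y, z, r, g, b] rows, optionally keeping only
    pixels marked valid (e.g. foreground pixels of the referenced asset).
    """
    assert rgb.shape == pmap.shape, "pixel-level alignment requires equal shapes"
    points = np.concatenate([pmap, rgb], axis=-1).reshape(-1, 6)
    if valid_mask is not None:
        points = points[valid_mask.reshape(-1)]
    return points


if __name__ == "__main__":
    H, W = 64, 64
    rgb = np.random.rand(H, W, 3)
    pmap = np.random.rand(H, W, 3) * 2.0 - 1.0  # toy canonical coords in [-1, 1]
    mask = pmap[..., 2] > 0.0                   # toy foreground mask
    cloud = fuse_rgb_and_pointmap(rgb, pmap, mask)
    print(cloud.shape)                          # (num_valid_pixels, 6)
```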

Ablation Study

Ablation studies on different components of our method: (a) full model; (b) without Shared Positional Embedding for Cross-Domain; (c) without Text-agnostic Attention; (d) without Domain-specific LoRA.

Comparisons of ablation studies and the editing-based baseline.
