Results of our orthographic-style projection. The left column shows the original panoramas, which exhibit the distortion introduced by cylindrical projection; the right column shows our results, which more faithfully preserve structural integrity and present the scene in a way that closely matches human visual perception.
Abstract
Photographing architecturally expansive structures is challenging because of the limited field of view inherent to standard imaging systems. Traditional solutions, such as wide-angle optics or panoramic stitching, frequently introduce pronounced geometric distortions that compromise the fidelity of architectural forms. To address this limitation, we introduce a novel framework for synthesizing orthographic-style projections of elongated buildings, with an emphasis on structural preservation. The proposed method leverages a neural semantic segmentation network to automatically identify architectural components and foreground elements within a panoramic image. Geometric rectification is performed via a forward mapping algorithm, derived from extracted boundary contours and vanishing point estimations, and is complemented by an inpainting stage that recovers missing regions. Empirical evaluations confirm that our pipeline maintains architectural accuracy more effectively than conventional techniques, while offering an accessible and streamlined image acquisition process.
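To build intuition for why cylindrical panoramas distort planar façades, and what an orthographic-style remapping must undo, the sketch below contrasts the two samplings: a cylindrical panorama allocates columns uniformly in view angle, whereas a fronto-parallel façade plane samples positions proportional to the tangent of that angle. This is only an illustrative toy, not the paper's actual forward mapping; the function name, the single-row treatment, and the fixed façade depth are assumptions made for the example.

```python
import numpy as np

def cylindrical_to_planar_columns(width, fov_deg, depth=1.0):
    """Map each panorama column (uniform angular samples on a cylinder)
    to a horizontal position on a fronto-parallel facade plane.

    Illustrative sketch only -- the paper's forward mapping additionally
    uses segmented boundary contours and estimated vanishing points.
    """
    half_fov = np.radians(fov_deg) / 2.0
    # Cylindrical panoramas sample the horizontal view angle uniformly.
    angles = np.linspace(-half_fov, half_fov, width)
    # A planar target at distance `depth` samples x = depth * tan(theta),
    # so equal-angle columns spread out toward the image edges.
    return depth * np.tan(angles)

# Five columns over a 90-degree field of view: the outermost columns land
# at x = +/- depth, while the center column stays at x = 0.
xs = cylindrical_to_planar_columns(5, 90.0)
```

The nonlinear spacing of `xs` is exactly the distortion visible in the left column of the teaser figure: equal angular steps compress the façade center and stretch its ends, which a forward remapping onto the building plane corrects.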
Publication
Yu-Hsuan Hsieh, Yu-Ting Wu.
Orthographer: Generating Orthographic-Style Projections for Elongated Architectural Structures.
Computer Graphics Forum, to appear.
BibTeX (coming soon)
CGF paper (coming soon)
Digital library (coming soon)
Last Update: June 2025