
Framework Sets a New Benchmark for Creating 3D Scenes from 2D Images
Researchers at Georgia Tech have created a new three-dimensional (3D) acceleration framework that instantly transforms flat, two-dimensional photos into immersive 3D images.
School of Computer Science (SCS) researchers say their new framework is the first to reconstruct a 2D image and render it in 3D at 30 frames per second in less than two seconds using only the bandwidth required for standard USB drives.
This breakthrough is a step forward from existing 3D reconstruction methods based on neural radiance field (NeRF) technologies, which use too much memory and struggle to render complex images on virtual reality (VR) headsets or smartphones.
A team led by SCS Associate Professor Yingyan (Celine) Lin developed Fusion-3D to solve these challenges. The team set four goals for the project when it started two years ago:
· Photorealistic quality
· Real-time rendering
· Instant reconstruction
· Scalability
Fusion-3D is distinguished by a unique processing chip Lin and her team developed. The processor is designed to run faster and more efficiently, streamlining each step of the 3D creation process.
Lin’s research team also developed a multi-chip architecture for managing large-scale scenes. The architecture includes a mechanism that parses complex images into tiles and organizes them into layers assigned to specific chips.
This functionality balances workloads and minimizes the required interaction between chips.
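The paper does not spell out the scheduling details, but the tile-and-layer approach can be illustrated with a standard greedy load-balancing scheme. The sketch below is hypothetical, not the actual Fusion-3D implementation: it splits a scene's tiles by estimated workload and assigns each tile to the currently least-loaded chip.

```python
# Hypothetical illustration of tile-to-chip load balancing (not the
# actual Fusion-3D mechanism): given per-tile workload estimates,
# greedily assign each tile to the least-loaded chip so that no single
# chip becomes a bottleneck.

def assign_tiles(tile_workloads, num_chips):
    """Greedy longest-processing-time-first assignment of tiles to chips."""
    # Place the heaviest tiles first; this classic heuristic keeps the
    # maximum chip load close to the optimal balance.
    order = sorted(range(len(tile_workloads)),
                   key=lambda i: tile_workloads[i], reverse=True)
    loads = [0.0] * num_chips   # running total workload per chip
    assignment = {}             # tile index -> chip index
    for i in order:
        chip = min(range(num_chips), key=lambda c: loads[c])
        assignment[i] = chip
        loads[chip] += tile_workloads[i]
    return assignment, loads

# Example: eight tiles with uneven workloads spread across three chips.
assignment, loads = assign_tiles([5, 3, 8, 1, 6, 2, 4, 7], 3)
```

In this toy run the total workload of 36 units ends up split nearly evenly (13, 12, and 11 units), which mirrors the goal described above: balanced chips with minimal need for cross-chip coordination.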
“The Fusion-3D framework can be seamlessly integrated into any scenario that requires efficient 3D reconstruction and rendering, making it a foundational technology for a wide range of 3D intelligence applications,” said Sixu Li, an SCS Ph.D. student.
Li is a co-author with Lin of Fusion-3D: Integrated Acceleration for Instant 3D Reconstruction and Real-Time Rendering. Yang Zhao, Chaojian Li, Bowei Guo, Jingqun Zhang, Wenbo Zhu, Zhifan Ye, and Cheng Wan are also co-authors.
"I am very proud of my students for advancing efficient real-world 3D reconstruction,” said Lin. “They developed the first real-time edge 3D rendering/inference technique, now the most cited paper from ICCAD 2022, and the first real-time 3D reconstruction training technique, ranking as the fifth most cited paper at ISCA 2023. This best paper award further acknowledges our significant contributions to the field."
The paper won the best paper award at the 2024 IEEE/ACM International Symposium on Microarchitecture (MICRO). The selection committee praised the paper for its "compelling end-to-end demonstration of an emerging application of importance."
The CoCoSys: Center for the Co-Design of Cognitive Systems supported the development of Fusion-3D through the SRC-DARPA JUMP 2.0 program.