
Difference Between VR Video and 360-Degree Video

Author: Adrian · April 10, 2026

 

Introduction

Google has long emphasized the immersive potential of virtual reality, and to create the most convincing sense of presence, the content presented in VR needs to match what the eye would see as closely as possible.

Light field capture, stitching, and rendering are advanced techniques. By generating motion parallax and highly realistic texture and lighting from static captures, light fields can deliver a very high-quality sense of presence. Google previously published a free Steam application, "Welcome to Light Fields", supporting HTC Vive, Oculus Rift, and Windows Mixed Reality headsets to demonstrate this technology.

 

Google's Light Field Technology

Capturing and Processing Light Fields

With light fields, nearby objects appear close to you and shift noticeably when you move your head, while distant objects shift less. Surfaces reflect light differently as your viewpoint changes, which provides strong cues that you are in a 3D space. When viewed through a VR headset with positional tracking, light fields can reproduce captured fragments of the real world for truly impressive VR experiences.

Light fields record all the different rays of light entering a space. To capture them, Google modified a GoPro Odyssey Jump camera rig, bending it into a 16-lens vertical arc and mounting it on a rotating platform.
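Conceptually, each capture is one sample of the light field: where on the capture sphere the ray was seen, which direction it came from, and its color. A minimal sketch in Python; the `RaySample` record and the sphere parameterization are illustrative assumptions, not Google's actual data format:

```python
from dataclasses import dataclass
import math

@dataclass
class RaySample:
    """One captured light-field sample (illustrative, not Google's format)."""
    origin: tuple     # (x, y, z) viewpoint on the capture sphere, meters
    direction: tuple  # unit vector: the direction the ray arrived from
    rgb: tuple        # (r, g, b) color in [0, 1]

def camera_on_sphere(radius_m, theta, phi):
    """Position of one capture viewpoint on a sphere (spherical coordinates)."""
    return (radius_m * math.sin(theta) * math.cos(phi),
            radius_m * math.sin(theta) * math.sin(phi),
            radius_m * math.cos(theta))

# The rig sweeps roughly 1,000 viewpoints over a sphere about 70 cm across,
# i.e. a radius of about 0.35 m.
pos = camera_on_sphere(0.35, math.pi / 2, 0.0)
sample = RaySample(origin=pos, direction=(-1.0, 0.0, 0.0), rgb=(0.8, 0.7, 0.6))
```

A full capture is then on the order of a thousand such viewpoints, each contributing many rays.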

The camera rig rotates and records approximately 1,000 outward-facing viewpoints over a sphere roughly 70 cm across in about one minute. This provides a two-foot-wide ray volume, which determines the size of the headbox: the area in which a user can move their head to explore the scene. To render views for a headset, Google samples rays captured at camera positions on the sphere's surface to construct new interior views that match head movement. These samples are aligned and compressed into a custom dataset file, which a custom renderer, implemented as a Unity engine plugin, reads back.

Recording the World with Light Fields

Google chose several distinctive locations to test their light field camera rig. They captured the lacquered teak and redwood interiors of the Gamble House in Pasadena, the glossy tile of the Venice Mosaic Tile House, the sunlit stained glass of the Church of St. Stephen in Granada Hills, and, with access granted by the Smithsonian National Air and Space Museum and its 3D digitization office, views from inside NASA's Space Shuttle Discovery flight deck. Google also recorded a series of light fields to experiment with applying eye contact in 6-degree-of-freedom experiences.

Light fields showcase how realistic VR experiences can become. Although still experimental, the approach points to a higher level of realism for VR content.

 

VR Video vs. 360-Degree Video

Basic Distinction

The following image illustrates the difference between 360-degree panoramic video and VR video:

[Image: Comparison between 360-degree video and VR video]

Panoramic video is straightforward: unlike traditional video with a single viewing angle, panoramic video allows viewers to look around 360 degrees. VR video builds on that by allowing viewers to move freely within the scene, offering 360-degree views from arbitrary positions inside the environment.

Panoramic video can be 3D or 2D, and it can be viewed on a screen or with a headset. VR video must be viewed with a headset and must be 3D.

Panoramic video is linear, played back along a timeline. VR video can allow multiple users to occupy different positions simultaneously and view the scene from those positions, somewhat like pausing time and moving within a frozen moment to inspect people and objects from different vantage points.

Similarities

  • Both can be 3D (panoramic video also has 2D variants).
  • Both allow 360-degree rotation to look up, down, left, and right.
  • Both provide visual immersion.

Main Difference

The key difference is that VR video allows viewers to walk around and observe the scene from different positions, while panoramic video does not. With panoramic video, the viewer is fixed at the capture position and can only look around 360 degrees in place.
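In tracking terms, this is the difference between three and six degrees of freedom: 360-degree playback honors only head rotation, while VR video honors rotation plus translation. A small sketch of that distinction; the pose model and function names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ViewerPose:
    """Head pose: orientation (yaw, pitch, roll) plus position (x, y, z)."""
    yaw: float
    pitch: float
    roll: float
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def clamp_to_360_video(pose: ViewerPose) -> ViewerPose:
    """360-degree video uses rotation only (3 DoF): head translation is
    discarded, pinning the viewer at the capture position."""
    return ViewerPose(pose.yaw, pose.pitch, pose.roll, 0.0, 0.0, 0.0)

def clamp_to_vr_video(pose: ViewerPose) -> ViewerPose:
    """VR video uses all 6 DoF: rotation and translation both pass through."""
    return pose

# The viewer turns 90 degrees and steps 0.4 m sideways.
moved = ViewerPose(yaw=90.0, pitch=0.0, roll=0.0, x=0.4, y=0.0, z=0.1)
```

Applying `clamp_to_360_video` to `moved` keeps the 90-degree turn but zeroes the step, which is exactly the "fixed at the capture position" behavior described above.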

In a VR film, traditional cinematography tools such as camera moves, scene cuts, and zooms may be replaced by the viewer's ability to choose viewpoint. VR video can be thought of as "3D panoramic video plus free movement."

At present, most publicly available "VR videos" are actually 360-degree panoramic videos. Many consumer devices marketed as "VR cameras" are essentially 360-degree panoramic cameras. So what does true VR video look like?

 

True VR Video and Light Field Cameras

Capturing genuine VR video is currently difficult with existing equipment. A single camera captures light from only one position, but free movement within a scene requires capturing light from arbitrary positions, which would require placing many cameras throughout the scene—an impractical solution. Could current light field technology solve this problem?

In principle, yes. To capture true VR video, we need light field cameras.

A light field camera uses an array of many miniature lenses or sensors to capture and record light coming from different angles. By recording not only color and intensity but also the incoming direction of light rays, light field cameras enable post-processing algorithms to reconstruct images from arbitrary positions within the captured volume.
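One classic way such multi-view recordings are post-processed is shift-and-add: each sub-view is shifted in proportion to its camera's offset from the desired virtual viewpoint, then the shifted views are averaged. A toy one-scanline sketch; the fixed-depth disparity model is a deliberate simplification of what real light-field pipelines do:

```python
def shift_and_add(subviews, baselines, target_x, disparity_per_meter):
    """Synthesize one scanline seen from a virtual camera at target_x.
    subviews: equal-length pixel rows (one per physical camera);
    baselines: the cameras' x-positions in meters, matching subviews."""
    width = len(subviews[0])
    out = [0.0] * width
    for row, cam_x in zip(subviews, baselines):
        # Integer pixel shift proportional to how far this camera sits
        # from the virtual viewpoint (a crude plane-at-fixed-depth model).
        shift = round((target_x - cam_x) * disparity_per_meter)
        for i in range(width):
            j = min(max(i + shift, 0), width - 1)  # clamp at the borders
            out[i] += row[j]
    return [v / len(subviews) for v in out]

# Two cameras 0.1 m apart see the same bright feature shifted by one pixel
# due to parallax; synthesizing the left camera's viewpoint re-aligns them.
row_left = [0, 0, 1, 0, 0]
row_right = [0, 1, 0, 0, 0]
view = shift_and_add([row_left, row_right], [0.0, 0.1], 0.0, 10)
```

With consistent shifts the feature re-aligns and the average stays sharp; content at other depths would blur, which is the same mechanism behind light-field refocusing.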

Lytro's Immerge is an example: it contains hundreds of lenses and image sensors arranged in several layers, roughly equivalent to multiple grouped cameras, and is paired with dedicated servers and editing tools. Unlike traditional cameras, a light field camera records the direction of incoming light in addition to color and intensity. With directional information and appropriate algorithms, the environment can be analyzed and a 3D model reconstructed. This reconstructed model is similar to a computer-generated model, except it is derived from captured data rather than manually created.

From a reconstructed 3D model, a computer can generate real-time views for arbitrary positions, providing 3D plus 360-degree panoramas. The views captured at the camera positions are real, while the views for moved positions are synthesized in post-processing. Lytro acknowledged that current implementations allow viewpoint changes within a limited range and cannot fully reveal object backsides without additional modeling of reflected light. For fully realized VR video, computer modeling and real-time rendering remain the simplest solution today.
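Why the synthesized views must change as the viewer moves follows from simple projective geometry: reprojecting a reconstructed 3D point into a camera at a new position lands it at a different pixel. A minimal pinhole-projection sketch, assuming a camera looking down +z with no rotation and illustrative numbers:

```python
def project_point(point, cam_pos, focal_px):
    """Pinhole projection of a 3D point onto the image plane of a camera
    at cam_pos looking down +z (no rotation, for simplicity)."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    return (focal_px * x / z, focal_px * y / z)

# The same reconstructed point lands at different pixels as the viewer moves:
p = (0.0, 0.0, 2.0)                     # a point 2 m in front of the origin
at_capture = project_point(p, (0.0, 0.0, 0.0), 500.0)
after_step = project_point(p, (0.5, 0.0, 0.0), 500.0)  # step 0.5 m to the right
```

The point projects to the image center at the capture position but well off-center after the step, so every new viewer position requires rendering a genuinely new image from the model.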

Most VR games can be considered VR video: they provide 3D, 360-degree environments with free movement and controllable timelines. These are built with computer modeling, so there is no camera capture involved. Should we still discuss the process of shooting VR videos? Yes. Panoramic video, although a "pseudo-VR" format, is a useful early stage of VR content. Panoramic video with 3D effects already offers significant immersion compared with standard video and has broad application potential.

 

Outlook

VR presentation will continue to diversify. Headset design may evolve from single flat displays toward light-field-based displays that let the eyes focus at different depths rather than on a single focal plane. Research institutions and companies are exploring light-field head-mounted displays that provide true accommodation cues. Better light-field capture, more natural integration between virtual light fields and the real world, and natural interaction methods will remain ongoing topics for research and industry development.