INTRODUCTION TO 360 VIDEO
There are two ways to create content for virtual reality – computer generated imagery (CGI), or live action, captured with a virtual reality video camera. The former is generally used for games; the latter involves capturing video ‘in all directions’ so it can be played back in a virtual reality headset. The viewer in the headset can look in any direction, as if positioned where the camera was, so you must capture footage in every direction they might look. This is VR video at its simplest.
Our whitepaper focusses on the requirements for capturing professional-grade, live action, virtual reality video (aka ‘immersive video’, aka ‘spherical video’, aka ‘360 video’) that is future-proofed for several generations of VR hardware.
In this white paper we cover…
- A layperson’s guide to VR video cameras
- The pros and cons of different approaches to VR video capture
- The best way to future-proof your VR production and content
A LAYPERSON’S GUIDE TO 360 VIDEO CAMERAS
The principle of capturing live action video ‘in all directions’ is easy to understand. Combine a number of cameras, arrange them in a circular – or better still, spherical – layout, and film outwards in all directions simultaneously. Carefully arrange each camera (and lens) so that its picture overlaps its neighbours’ somewhat, and clever software can later stitch the video together.
Well that’s the theory… the truth is more complicated than that (although getting easier).
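As a rough sketch of the arithmetic involved, the minimum number of cameras in a single horizontal ring follows from the lens field of view and the overlap the stitching software needs. The figures below are illustrative assumptions, not a recommendation for any particular rig:

```python
import math

def cameras_per_ring(lens_fov_deg: float, overlap_deg: float) -> int:
    """Minimum cameras needed so that adjacent fields of view overlap
    by at least `overlap_deg` all the way around a 360-degree ring."""
    # Each camera contributes (FOV - overlap) degrees of unique coverage.
    usable = lens_fov_deg - overlap_deg
    if usable <= 0:
        raise ValueError("overlap must be smaller than the lens field of view")
    return math.ceil(360.0 / usable)

# Hypothetical example: 120-degree lenses with 30 degrees of stitching overlap.
print(cameras_per_ring(120, 30))  # 4 cameras
```

Wider lenses mean fewer cameras, but at the cost of more distortion for the stitching software to correct – one reason the practice is harder than the theory.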
The first challenge is to capture the scene in 3D. There is currently a proliferation of 360 video cameras that purport to offer VR video capture, but they only offer 2D 360 video capture. Since all VR headsets are intrinsically 3D capable, it makes little sense to capture 2D video for VR if future proofing is a concern. 3D is much more immersive.
In the past, the traditional approach to capturing 3D video was to use two cameras side-by-side, for left and right eyes. This is possible in a 360 video camera – you double the number of cameras – but it becomes expensive when the number of cameras required is already high. There are other limitations to this approach too.
A newer approach, made possible by software now beginning to emerge, is to do away with the duplicate set of cameras and use computational photography (a form of image interpolation) to reproduce viewpoints ‘in between’ the physical cameras’ viewpoints, effectively creating virtual left and right eyes.
Both approaches – left-and-right pairs of cameras, and computational photography – are equally viable for producing 3D 360 video, although the latter is likely to become the standard, as it offers several benefits.
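To illustrate the geometry behind ‘virtual left and right eyes’: for any gaze direction, the two eyes sit half an interpupillary distance either side of the head centre, perpendicular to the gaze, and view interpolation synthesises an image for each of those positions from the nearest physical cameras. A minimal sketch – the 64 mm IPD and the coordinate convention are assumptions, not figures from any particular camera:

```python
import math

IPD_M = 0.064  # typical interpupillary distance in metres (assumed value)

def virtual_eye_positions(yaw_deg: float, ipd: float = IPD_M):
    """Where the virtual left/right eyes sit for a given horizontal gaze.

    The head centre is the origin; gaze yaw is measured from the +x axis.
    Each eye is offset ipd/2 along the axis perpendicular to the gaze,
    in the horizontal plane.
    """
    yaw = math.radians(yaw_deg)
    # Unit vector perpendicular to the gaze direction (points to the left).
    px, py = -math.sin(yaw), math.cos(yaw)
    left = (px * ipd / 2, py * ipd / 2)
    right = (-px * ipd / 2, -py * ipd / 2)
    return left, right

# Looking along +x: the eyes straddle the gaze axis, 32 mm either side.
left, right = virtual_eye_positions(0.0)
```

The point is that these eye positions move continuously as the head turns, which is why a fixed set of side-by-side camera pairs cannot cover every orientation, while interpolated viewpoints can.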
An important consideration with 360 video is whether the camera shoots single-axis horizontal video (traditional), or captures a second, vertical axis as well. Think of your viewer wearing their VR headset… In traditional 360 video a user can turn their head from left to right, but what happens if they look up at the sky and then turn their head from left to right? What happens if they tilt their head 90 degrees to the side? A true VR camera will have cameras arranged in at least two axes of movement, and ideally three, to capture all the needed 3D visual data.
You can demonstrate the need for this by watching a typical 3D video and tilting your head 90 degrees to one side. You lose all 3D depth information because the original camera rig was shooting side-by-side video. It would need two more cameras arranged up-and-down to provide the necessary 3D image data.
Another way to demonstrate this is to hold up a thin book in front of your eyes horizontally. Adjust the book until you can see as little of its profile as possible. Now turn the book 90 degrees so it’s vertical. How much of it can you see?
It may look like a blurry mess, and this is perfectly normal, but you’ll see it appears wider. This is because your eyes are looking at the book from ‘either side’. You see more of it.
In the same way, a camera that only has two side-by-side cameras in a single axis cannot possibly capture the necessary visual information to reproduce the scene if you were to tilt your head 90 degrees.
An optimal 360 video camera will therefore have sufficient cameras, arranged appropriately, so that the world is captured from sufficient positions to cater for all possible orientations of a VR viewer’s head.
At 360 Designs, we’ve been thinking about these issues and have developed a patent-pending solution we call 3-Axis Video.
With 3-Axis Video, you have 3 separate rings of cameras, arranged in a sphere. Each ring corresponds to a way you naturally turn your head when wearing a VR headset.
- The X-axis captures video horizontally as if you are sitting upright and turning your head from left to right
- The Y-axis captures video as if you are lying back, on your back, and turning your head from left to right
- The Z-axis captures video as if you then ‘spin around on your back 90 degrees’ and make the same left-right head movement.
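As a geometric illustration of the three-ring idea (the ring planes and camera counts below are hypothetical, not the actual layout of any product), the outward look directions of each ring can be generated like this:

```python
import math

def ring_directions(plane: str, n: int):
    """Unit look-at vectors for n cameras spaced evenly around one ring.

    `plane` names the plane the ring's cameras sweep: "xy" is the
    horizontal ring, "xz" and "yz" are the two vertical rings.
    """
    dirs = []
    for i in range(n):
        a = 2 * math.pi * i / n
        c, s = math.cos(a), math.sin(a)
        if plane == "xy":
            dirs.append((c, s, 0.0))
        elif plane == "xz":
            dirs.append((c, 0.0, s))
        else:  # "yz"
            dirs.append((0.0, c, s))
    return dirs

# Three mutually perpendicular rings of 4 cameras each: 12 viewpoints.
rig = [d for plane in ("xy", "xz", "yz") for d in ring_directions(plane, 4)]
print(len(rig))  # 12
```

Note that the rings share some look directions where they cross; that redundancy is useful, because it provides the overlap the stitching software needs between axes.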
By capturing video in each of 3 axes, you solve 3 problems:
- Ensuring full spherical video capture (up, down, left and right)
- Capturing sufficient 3D depth information to later process 3D video in all possible head orientations (future proofing)
- A simplicity of approach that means you can use existing off-the-shelf stitching software to process the single-axis mono video for each axis.
We think that capturing the necessary visual information in all three axes is the only safe way to future-proof your immersive VR content as increasingly sophisticated computational photography VR stitching software comes to market – a crucial factor if you are considering a significant investment in 360 video production, or are capturing a one-off, never-to-be-repeated event.
For more on this, read the blog post which inspired it, or check out our EYE™ 360 camera, which features a modular 1-3 axis design, and is the only camera on the market to offer the capability of capturing 3-axis 3D video. If you are interested in live 360 video, please check out our blog post – ‘Is Live 360 Video the killer application for VR?’