My interest in 3D capture started about two years ago as an R&D project for my animation studio. In fact, I started 3D Scan Expert out of the motivation that I couldn’t find good information about the possibilities of 3D capture for creative projects!
Of course, 3D scanning and photogrammetry are used to capture static objects and people who are trying their very best to stand still for a moment. The static results are useful for many purposes on their own, but scans of people can also be digitally animated, for example to create characters for video games or digital actors for film visual effects.
Animated 3D scans are not a replacement for 2D video.
Scanning a person and then animating the result actually isn’t so hard, and can be done on a very tight budget. Below is a scan of my business partner Patrick (who also makes awesome Photoshop plug-ins, by the way), captured with an iPad, a $379 Structure Sensor 3D Scanner and the itSeez3D app. I loaded the scan data into Adobe’s free online Mixamo character animation tool to get this result within minutes, lit and rendered in real-time through Sketchfab.
Mind you, the embedded content above is not a video: you can play it as such, but you can also use your mouse or finger to rotate around the character and view it from all sides. As you can see, this is pretty cool already, but it still looks like a game character, even though Mixamo’s animation templates are sourced from actual human motion capture performances.
To put it shortly: animated 3D scans are not a replacement for 2D video. Sure, you can take digital animation many steps further, and the digital characters in the latest PlayStation games or the digital actors in the latest Star Wars movies look awesome. But it’s still animation, not recorded video.
Maybe one step closer is capturing something in 3D in different stages over time. I’ve recently done this by using Photogrammetry to capture five stages of an apple being eaten. As far as I know this is one of the first examples of a Volumetric 3D Stop-Motion Animation:
Still, I’m not calling it Animation for nothing. It’s staged, and performed over a longer period of time instead of being captured in real-time and played back at the same speed, like video.
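For the technically curious: playing back a sequence of photogrammetry scans like this is conceptually simple, because it’s just flipping between meshes like stop-motion frames. Below is a minimal sketch of how that could be done in the browser with three.js. It’s not the exact setup I used; the file names (apple_frame_1.glb and so on) and timing are placeholders for whatever you export from your photogrammetry software.

```typescript
// Minimal sketch: play back a sequence of photogrammetry meshes
// as a stop-motion "volumetric" animation with three.js.
// File names like "apple_frame_1.glb" are placeholders for your own exports.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 0.3, 1.5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
scene.add(new THREE.AmbientLight(0xffffff, 1.0));

const loader = new GLTFLoader();
const frameCount = 5; // five stages of the apple
const frames: THREE.Object3D[] = [];
let loaded = 0;

// Load every stage up front; only the first one starts out visible.
for (let i = 1; i <= frameCount; i++) {
  loader.load(`apple_frame_${i}.glb`, (gltf) => {
    gltf.scene.visible = i === 1;
    frames[i - 1] = gltf.scene;
    scene.add(gltf.scene);
    loaded++;
  });
}

let current = 0;
setInterval(() => {
  // Swap visibility to step through the stages, like flipping stop-motion frames.
  if (loaded < frameCount) return; // wait until all frames are in
  frames[current].visible = false;
  current = (current + 1) % frameCount;
  frames[current].visible = true;
}, 500); // half a second per stage

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

Swapping whole meshes like this only works for a handful of stages; real volumetric video formats stream and compress the per-frame geometry instead.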
Now, with the increasing popularity of display technologies like Virtual Reality (VR) and Augmented Reality (AR), both in headsets and on smartphones, developments around the “next video format” are suddenly accelerating. This is because these new technologies, and especially AR, demand real-time 3D content. And because we’re already living in the age of video, static content won’t cut it anymore. We want motion, and even more so emotion.
In its simplest form, this could be done by capturing multiple frames in real-time through Photogrammetry and rendering them in the same way I did with my stop-motion apple. That would result in something like this (not my work):
This shows just the tip of the iceberg of the possibilities of Volumetric Video: it clearly captures emotion like a 2D video, but with the added ability for the user to change the viewpoint (a.k.a. Free Viewpoint Volumetric Video). This example was captured by pointing an array of GoPro cameras at just the face of the person and processing the frames through Photogrammetry software. The same effect can be achieved using multiple Depth Sensors (and software like MimeSys) to achieve something like this:
As you can see, those are both partial captures. To capture complete, 360-degree, full-body performances, a lot more technology is needed.
To capture this new kind of Volumetric Video various companies are developing new recording studios
To be able to capture that kind of “true” Volumetric Video — containing audio-visual performances that can be viewed from any angle — various companies are developing completely new recording studios. Yesterday, The Verge published an awesome mini-documentary about this called “Are Holograms the Future of How We Record Memories?” (see video below). It covers companies like Microsoft that are at the forefront of creating what some call holograms. The company that used to just make Windows and Word is now creating devices such as HoloLens and is even opening special Mixed Reality Capture Studios in London and San Francisco.
If you’re interested in either 3D, video, communication, performing or the future in general, be sure to watch the video below.
As you might expect from someone with a background in motion design, animation and visual effects, these new developments interest me a lot. So you can be sure to read more about them on 3Dscanexpert.com in the future!
Also, I’ll be sharing more content about this on social media so be sure to follow me on your favorite network if you don’t want to miss out!
You can find me on Twitter (http://twitter.com/3dscanexpert), Facebook (http://facebook.com/3dscanexpert) and Instagram (http://instagram.com/3dscanxpert).
This post was edited after publishing to also include the stop-motion and volumetric video examples from Sketchfab, thanks to a tip from the CEO of that very online 3D/4D/VR/AR (and what not in the future) sharing website!
I can’t agree more that the future of video is volumetric video. I have followed the work of Microsoft, 8i and Intel for a while now, but it really clicked when I tried EF EVE. It became clear to me that when this app let me capture volumetric video from our tiny studio, with no green screen and for such a low price, this will go mainstream. Now we just need to wait for more cameras; the quality will improve and kill 2D video.
I think there is a big difference between depth-sensor-based volumetric video capture like EF EVE does and professional setups like 8i. The former is relatively low-res, which makes it ideal for live streaming but not usable for professional capture for film, VFX or game development. It’s essentially the same as the difference between shooting normal video with a smartphone camera and with a professional film camera.
Thanks for taking the time to put this post together – this was a really helpful intro to the current state of the tech.
Have you ever tried to build your own volumetric capture setup? Are there any papers, tutorials, or guides you might recommend?