A volumetric video “revolution”
The sports industry is looking to 3D video as a way forward. The shift has been driven by several converging factors: the coronavirus pandemic, advances in computer vision and deep learning, off-the-shelf AR/VR hardware (Oculus, Vive, HoloLens, etc.), and game engines like Unreal and Unity.
But until recently, one problem remained unsolved: How do you process all the volumetric data to produce 3D video for an at-home audience?
Data challenges
In a recent white paper, Intel Sports laid out the huge challenges they faced in producing “near real-time” volumetric video for sports.
According to the authors, every volumetric frame takes approximately 30 seconds to produce on today’s standard on-site servers. That’s far too slow for video. Since each camera in the rig records 30 frames per second, a system designed to produce real-time 3D video would need to handle thousands of frames per second in aggregate.
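To put rough numbers on that, here’s the arithmetic in a few lines of Python. The camera count is our own illustrative assumption (the passage above doesn’t specify a rig size); the 30 fps capture rate and 30-seconds-per-frame processing time come from the paper.

```python
# Back-of-envelope math for the ingest rate and the parallelism implied
# by a 30 s/frame processing time. CAMERAS is an assumption for
# illustration; FPS and SECONDS_PER_FRAME come from the white paper.
CAMERAS = 38            # assumed rig size
FPS = 30                # frames per second per camera (per the paper)
SECONDS_PER_FRAME = 30  # single-server processing time (per the paper)

ingest_rate = CAMERAS * FPS                  # frames arriving every second
in_flight = ingest_rate * SECONDS_PER_FRAME  # frames that must be processed concurrently

print(f"{ingest_rate} frames/s arriving")          # 1140 frames/s
print(f"{in_flight} frames in flight to keep up")  # 34200 concurrent frames
```

In other words, even a modest rig produces on the order of a thousand frames every second, and at 30 seconds per frame tens of thousands of frames would have to be in flight at once just to keep pace.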
The authors estimate that the data to be processed could top 200 terabytes per game. As a result, on-site processing is out. “Introducing a large data center in each venue to process this massive amount of data,” they write, “would be cost prohibitive and impossible due to the physical space, logistics and resources required.”
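A quick sanity check shows what 200 terabytes implies per frame. The camera count and game length below are, again, assumptions for illustration; only the 200 TB total and the 30 fps capture rate come from the paper.

```python
# Rough check on the 200 TB/game figure. CAMERAS and GAME_SECONDS are
# illustrative assumptions; TOTAL_BYTES and FPS come from the white paper.
TOTAL_BYTES = 200e12     # 200 TB per game (per the paper)
CAMERAS = 38             # assumed rig size
FPS = 30                 # frames per second per camera (per the paper)
GAME_SECONDS = 3 * 3600  # assumed ~3-hour event

total_frames = CAMERAS * FPS * GAME_SECONDS
bytes_per_frame = TOTAL_BYTES / total_frames

print(f"{total_frames:,} camera frames per game")       # 12,312,000
print(f"{bytes_per_frame / 1e6:.1f} MB per raw frame")  # ~16.2 MB
```

On those assumptions, each camera frame carries on the order of 16 MB, which is why the storage and compute footprint of a per-venue data center becomes untenable.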
Cloud compute
As we’ve known for a while, cutting-edge applications of spatial computing demand new approaches to computing itself. That’s true of AR apps, which will rely on offsite volumetric data in the “AR cloud.” It’s true of autonomous vehicles and robots, which will combine edge and cloud computing to balance latency against processing power. And it’s true of volumetric video.
In the white paper, Intel Sports explains how they solved the data problem with a cloud-compute architecture of their own. They go into detail on the components of the distributed system, along with the microservices and algorithmic steps needed to handle the massive processing requirements of real-time volumetric video. Read more here.
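To make “microservices and algorithmic steps” concrete, here is a minimal sketch of how a stage-based volumetric pipeline can fan frames out across many workers. The stage names, their ordering, and the worker count are our assumptions for illustration, not Intel’s actual design; the point is simply that independent frames parallelize across machines, so throughput scales with worker count rather than being capped by the ~30-second latency of any single server.

```python
# Illustrative sketch of a fan-out frame pipeline. The stages below are
# hypothetical placeholders, not Intel's actual microservices.
from concurrent.futures import ThreadPoolExecutor

def segment(frame_id: int) -> dict:
    """Placeholder for per-camera foreground/background segmentation."""
    return {"frame": frame_id, "stage": "segmented"}

def reconstruct(job: dict) -> dict:
    """Placeholder for multi-view 3D reconstruction of one frame."""
    job["stage"] = "reconstructed"
    return job

def encode(job: dict) -> dict:
    """Placeholder for compressing the 3D frame for delivery."""
    job["stage"] = "encoded"
    return job

def process_frame(frame_id: int) -> dict:
    # Each worker runs the full stage chain for one frame. Frames are
    # independent of one another, so they parallelize cleanly.
    return encode(reconstruct(segment(frame_id)))

if __name__ == "__main__":
    # 32 local workers stand in for a fleet of cloud instances.
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(process_frame, range(1000)))
    print(f"processed {len(results)} frames")
```

In a real cloud deployment, each stage would more likely run as its own service behind a message queue, so that compute-heavy stages like reconstruction can scale out independently of the lighter ones.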