Companies like Amazon make it easy to view their AR content. Making your own content is still very difficult.
Emmanuel de Maistre is a pioneer in the field of spatial computing. As early as 2012, he founded Redbird to provide drone-based reality capture and data processing at a time when UAVs were still seen as an immature technology. Just a few years later, his foresight paid off: he sold the company to Airware.
Now, he is turning his eye to augmented reality with a new (stealth-mode) company that aims to make it easy for anyone to create AR content—even people with no technological savvy. That’s why I caught up with him: to learn why AR content is still so difficult to create, to discuss the other challenges we need to overcome before AR goes mainstream, and to get his insights on the current state of AR and spatial computing.
Interview edited for length and clarity.
What does the term “spatial computing” mean to you?
If you ask a hundred people this question, I think everyone will have their own definition. For me, every time you interact with a computer device or mobile device and there is a spatial aspect, that’s spatial computing. People don’t realize that geolocation-based apps like Uber, Airbnb, and many others are spatial computing. They rely on spatial information, a 2D map, to deliver a service.
What’s new, though—and the reason why spatial computing has been trending for the past few years—is that we are switching from 2D to 3D. As 3D hardware like lidar is added to every device, these devices are becoming spatially aware. And that’s why spatial computing is becoming much bigger right now.
What does your company’s spatial computing product do, as it exists today?
Right now, we are not out there yet. But we’re building a way for anyone to create or edit AR content that is persistent, geo-anchored, and that can be shared across a variety of users.
If you’re familiar with AR, you probably have Pokémon Go in mind, or IKEA’s apps. And those are great examples. But you don’t get to create the Pokémon game yourself, right? And using IKEA’s technology you might put a couch into your living room—but it’s a fairly standalone experience, and creating such an application is still pretty complex.
And you want to make it easier for people to create and share these 3D objects for AR?
What we want is for anyone to start configuring AR scenes to augment the physical world. And the use cases and applications are just endless—from gaming and entertainment, all the way to enterprise industries like construction, oil and gas, energy, and so on. We feel that tomorrow, everyone will create AR like they create a website, or post “holograms” to the physical environment like they are posting a photo to their Instagram.
Would you say that the difficulty of creating AR content is one of the biggest challenges we have to face to make spatial computing more mainstream?
Yes. It’s not an easy process to create an entire AR experience, and that’s true from gaming assets to BIM models. If you look at the enterprise construction side, creating and configuring 3D assets usually requires specialized and complex software. You would need to research tools like Maya, Unity, Unreal Engine, or Blender—you have a whole list. The problem is, to use these tools, you would need to be a specialist or a graphics engineer. So yes, there’s a big challenge there.
If creation is the big problem, is the 3D model—the “mirror world” or “AR cloud” that the apps use to place the 3D objects in space—still a challenge? Or do you think that’s basically taken care of?
There is still a big challenge. If you think about placing an AR object in the real world, it needs to be precisely anchored in terms of an x,y,z position. [Points to his hand] Assume that this is a digital object [points to his desk] and assume this is the ground. [Lays his hand on top of the desk to demonstrate] For AR to look real, you need to make sure your item lies on the ground precisely. You need to make sure it doesn’t sink below or float above. That’s the first challenge.
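To make that ground-plane point concrete: in most mobile AR frameworks, placing an object “on the ground” comes down to hit testing—casting a ray from the camera and intersecting it with a detected plane. Here is a minimal sketch in plain Python (a simplified stand-in for the hit-test APIs that frameworks like ARKit and ARCore provide; the coordinates below are illustrative values, not from any real device):

```python
def ray_ground_hit(origin, direction, ground_y=0.0):
    """Intersect a camera ray with a horizontal ground plane at height ground_y.

    origin and direction are (x, y, z) tuples in meters. Returns the hit
    point on the plane, or None if the ray never reaches it. This is the
    core of AR "hit testing": a tap on the screen becomes a ray, and the
    hit point anchors the virtual object exactly on the ground -- neither
    sinking below it nor floating above it.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dy) < 1e-9:           # ray parallel to the ground: no hit
        return None
    t = (ground_y - oy) / dy     # distance along the ray to the plane
    if t < 0:                    # plane is behind the camera
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Camera held 1.5 m above the floor, pointed down and forward:
anchor = ray_ground_hit(origin=(0.0, 1.5, 0.0), direction=(0.0, -1.0, 2.0))
```

The anchor lands 3 meters in front of the camera, exactly at floor height, which is what keeps the object from going “under or above” the ground.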
Second, if you think about using GPS coordinates for this kind of application, they are pretty inaccurate. The GPS accuracy of your phone is a few meters—roughly 10 feet. So if you want to position an item onto a map via the phone, that anchor could be off by 10 feet. Which is huge.
One major problem that needs to be solved is how to lay out AR content in the world, both “globally”—at some known GPS coordinates—and locally—in a unique location in a unique room. Niantic has solved a lot of challenges on global positioning, but accurate local positioning hasn’t quite been solved yet.
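To put numbers on the GPS problem: latitude/longitude errors look tiny in degrees but translate into meters of anchor drift. A quick back-of-the-envelope sketch in Python, using a simple equirectangular approximation (the 0.00003-degree error below is an illustrative value in the range of a typical phone fix, not a measured one):

```python
import math

def gps_offset_meters(lat_deg, dlat_deg, dlon_deg):
    """Convert a small lat/lon delta into a local (east, north) offset in meters.

    Uses an equirectangular approximation, which is fine at the few-meter
    scales that matter for AR anchoring.
    """
    meters_per_deg_lat = 111_320.0  # roughly constant everywhere
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(lat_deg))
    return dlon_deg * meters_per_deg_lon, dlat_deg * meters_per_deg_lat

# At latitude 40 N, an error of just 0.00003 degrees on each axis:
east, north = gps_offset_meters(40.0, 3e-5, 3e-5)
drift = math.hypot(east, north)   # straight-line anchor offset in meters
```

That sub-hundred-thousandth of a degree already shifts the anchor by several meters—exactly the “10 feet-ish” drift that makes GPS alone unusable for precise local placement.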
Is this why the lidar sensors going into iPhones are such a big deal? Not just because they make it easier to capture 3D information?
With the lidar on the phone, you can start doing more accurate 3D maps of the scene, of the geometry. And that’s where you can extract positions that are way more accurate than the GPS coordinates or than other types of geo-positioning techniques.
The addition of lidar to phones is as important as the addition of cameras. I’ve heard people criticizing the lidar in the iPhone and the poor quality of scans versus the Trimble, Leica, or Topcon scanners. Of course it’s not the same accuracy. But for a very low cost, there is now a lidar in everyone’s pocket. That’s a big revolution.
What’s the most transformative use of spatial computing that you’ve seen, either on the business side or the consumer side?
For one, everything that Google and others are doing with neural networks, and how they’re applied to 3D scene processing. The other big success right now in AR is face masks—the silly, funny face masks you can use on Snapchat. I mean they leverage AI like crazy to detect all the features of the face, the nose, the eyes, the mouth. Spatial computing and AI are the reason why these face masks are so accurate.
So the biggest development in hardware is the iPhone lidar, and the biggest development in software is Google’s scene processing and Snapchat’s filters. Does that mean that consumer technology is the big driver in spatial computing right now?
100%. It’s like it was with drones. I remember in 2012, big aerospace companies were claiming leadership in the development of commercial drones. Boeing and Airbus were very serious, they were saying, we will be the leader for drone technologies tomorrow. Right now, it’s clear they kept military and security applications while newcomers are taking over the commercial space.
And the second thing I remember is all these land surveyors being extremely critical of consumer drones when they first came out in 2013-2014. Most experts couldn’t believe drones could replace total stations and laser scanning. Eighteen months after that, drones were making the cover of xyHt and other surveying magazines, which were declaring that drones are the future.
I think the same will apply in spatial computing. Consumer-grade technologies will stand out across all kinds of applications. There might be a few exceptions, like autonomous vehicles, because of the safety and regulation issues.
What is the biggest misunderstanding about spatial computing today?
People don’t realize that spatial computing is much bigger than just viewing or capturing 3D assets. It’s not just about VR headsets, or mobile AR. Every single device, every machine, every vehicle will soon be spatially aware. And every piece of software will deal with some sort of spatial awareness.
Few people realize it really is the new stage of computing, and it goes far beyond replacing our phones with glasses. That’s just one aspect of the spatial computing revolution.
Speaking of this revolution, where is spatial computing on the hype cycle?
We are pretty much at the bottom right now. On one side, we see incredible announcements from the big tech companies like Microsoft, Facebook, and Apple. They’re all saying spatial computing is the future, and that it’s what they’re investing in right now.
But on the other side, especially on the investor side, there is still a lot of doubt. This might be because of the relatively disappointing performance of VR between 2016 and 2020—the “VR winter.” The only successes that people can identify are Pokémon Go and Oculus.
So very few successes, a lot of failures. A lot of people have burned themselves for the past five years, and that makes me think we’ve gone through the peak of the hype cycle.
What would it take for a business to succeed with AR technology right now, given that we’re in the “trough of disillusionment” and investors have so many doubts?
They need to make it easier to use.
Think of AR. The number of frameworks that you can use, like Google’s ARCore or Apple’s ARKit, or the amount of hardware and sensors that you can leverage, is fundamentally different than it was five years ago. Right now, if you want to make an AR app, Apple is providing the frameworks, so you can finish in a few months, maybe less. The “bricks” that are available today for building a spatial computing app are radically different than they were a few years ago.
But to leverage these tools, these “bricks,” you still need to do a fair amount of integration, and you still need to code quite a bit. The people that want to create new use cases, apps, experiences, whatever you want to call them—they pretty much need to be experts. I think a successful AR business will be the one that enables anyone to apply the technology without technical expertise.
The same thing was true for websites. Twenty years ago, designers used FrontPage and Dreamweaver to build websites. The outcome wasn’t great, and the process would take weeks. Now you can know nothing about web design, and within an hour, for a hundred dollars or less, you can have a good website up and running thanks to services like Wix, Webflow, or Squarespace.
A successful AR business will be the one that makes AR this easy for anyone to use. That way, people can create the assets and arrange them in a meaningful way for a specific use case, whether it’s gaming, advertising, entertainment, leisure, hospitality, whatever.
I don’t think we have tapped into people’s creative potential for creating AR, because it’s so difficult to use. Make it easier, and that’s when we’ll see the killer apps that aren’t out yet.
Is there something you’d like to say about spatial computing that we didn’t cover? Something you have a strong opinion about, or an important point you’d like to make?
I think there’s some kind of fundamental flaw with mobile AR. Sure, we can visualize amazing AR content using just our phones. But we’re looking at 3D content through a 2D interface, and the interaction is far from perfect. Mobile AR is great for some use cases (like gaming) and for democratizing the technology. But we will unleash the true potential of AR when we can interact directly with the AR content using hand tracking, in 3D.
Find Emmanuel on LinkedIn