When an era-defining technology like spatial computing hits the mainstream, we react with excitement. We think of the seemingly infinite possibilities and all the “game-changing” applications it offers. Rarely do we slow down to consider the risks.

But Avi Bar-Zeev is not most people. He’s a major figure in the spatial-computing industry, an XR pioneer who played a big part in the development of the technology with HoloLens, Apple, Amazon, Google Earth, Second Life, Disney VR, and more. He’s also one of the technology’s most insightful critics—his regular blog posts explore questions like how safe VR is for kids, who’s responsible for privacy when we’re all wearing Ray-Ban spy glasses, and what makes AR special.

When we caught up with him this month, we had a wide-ranging conversation about what spatial computing really means for users in the business and consumer worlds. He explained how spatial computing will change our interactions with computers, digital information, and the real world. He walked through the significant risks of implementing spatial computing without proper preparation. And, since he’s not just a critic but also a technologist and advisor, he talked about how you and your business can avoid getting burned by spatial computing tech.

Sean Higgins: What does the term “spatial computing” mean to you? It seems like a contentious term for some people.

Avi Bar-Zeev: It doesn’t feel contentious to me. It’s the evolution of computers. First, we talked to computers by flipping switches. And then we used punch cards. Then we had keyboards and screens. Then we had a bunch of rectangles on the screen—we put rectangles inside of rectangles. Now we’re talking about using our whole bodies, the space around us, our voice, pretty much everything that we have access to. We’re talking about using all of it to interface with the computer.

So “spatial computing” may not even be the best term; something like “human computing” might be better. We’re bringing people more fully into the mix: not just our fingertips and our eyeballs, but all of us, and whatever faculties we have. For a lot of people, that’s going to be way more empowering.

But I think that the most important thing is that it’s supposed to be more natural. We’ll be able to interact at a human level, as opposed to always adapting ourselves to the machine, like we’ve done in the past.

Many people say that spatial computing is about turning every single atom of our three-dimensional environment into a piece of data that can be computed. But to you, that seems less important than the change in how we interact with the computer.

Exactly. Digitizing the world is a means to an end. It’s not the end. The reason for digitizing the world is so that we can make software that better understands the world. But it’s for our benefit, right?

When I worked on Keyhole, which became Google Earth, we talked very much about using information that was out there, but inaccessible to computers, and making it understandable. But the end result was for humans. So we could make better maps, so that we could understand space, so that we could build apps on mobile phones that would be able to know where you are, and potentially what you’re doing—in a way that hopefully preserves privacy.

The goal is to have better interactions with the world, with the technology, and with each other. It’s about people. Ultimately if you’re not considering the people, you’re wasting your time.

Avi Bar-Zeev calls “bullshit” on the metaverse during last year’s AWE.

Based on a number of your recent blog posts, it seems like you’re feeling salty about the terminology we use to talk about this technology—AR, VR, XR and so on. I think you would argue these terms are also missing the point.

Yes, I think that happens when we get excited about different aspects of a new technology. “Metaverse” is one of those aspects that I have been writing a lot about lately. And I think you could detect that I’m a little annoyed with the term, or at least the usage of it.

It’s important that words have meaning. This is so that we can talk to each other and have a fruitful conversation. So that you can say “blue” or “red,” and I know you mean “blue” or “red,” even though they’re abstract concepts.

The word “metaverse” has become like the Smurf of technology: you can just plug it in anywhere and it can mean anything. What do you mean when you say “metaverse”? You need to have a conversation ahead of time to define what each person means.

And some of these concepts have already had words attached to them for years. Like “spatial computing,” or even “AR cloud,” which I didn’t love, but accepted. And then we come along, and we pick one word to mean all those things.

We’ve lost track of all these different concepts we defined over the years, so it’s like we’re starting over all the time. A bunch of people get excited, and they start to rush in, forgetting that there’s 30 to 50 years of experience and a lot to learn. There’s a lot of richness and nuance that we’re simply forgetting when we use a term like “metaverse.” That’s my main concern.

If “metaverse” is the wrong way to think about it, what’s the right core concept? For someone who wants to understand this latest stage of computing history—whether they’re a business, or a consumer—what’s the essential thing they should focus on learning about?

I hate to say it, but Zuckerberg gets closest when he talks about the embodied internet. That’s the single most important change.

My simplest definition of the metaverse is that it’s the next iteration of the internet. It includes the interfaces we already have—we’re still going to have all the mobile phones and computers and so on. But what we’re adding to the mix is the ability to go into that database and explore it in first person. And the most important part of that is not that we’re exploring some vast “Tron”-like information landscape. It’s that we’re there together, it’s that we can interact.

You know, Facebook is telling a bit of a lie when they say they’re connecting people. What you see on Facebook today are only the artifacts of people. You see text, you see pictures, you see videos that were taken ahead of time.

We’re trying to get to a place where we can feel like we’re interacting face to face again, like we did in the real world. So that you and I can have this conversation and feel like we’re sitting in the same place together. And we can make natural eye contact, and use gestures, like I do with my hands all the time. So these things just feel natural.

The benefits of this embodied internet are easy to imagine. But your blogs on the topic focus a lot on the potential risks. Can you elaborate on some of those risks?

We’ve seen what happens when people interact online. Interacting in the real world is hard enough, but interacting online brings a whole new level of craziness. And we haven’t quite figured that out yet.

My biggest worry about all this is that we’re rushing headlong into it without all the safety measures that we need. There is no regulating force in this new world, other than for-profit companies with their own little territories. There is no force coming in to say, “Here are the ways we should behave, and here’s what we do with a small percentage of people who misbehave.”

And unfortunately, one bad actor can cause grief to thousands or millions of people. It’s really important to get this right before we all rush in and say we’re going to live there. It’s nowhere near ready for us to live there. It’s not even quite the Wild West yet, because there is no law, there are no rules. Everybody is just rushing in to grab whatever pickaxe and gold they can find right now.

At Prime Air, Bar-Zeev worked on a program that would simulate ten years of flights in two years.

We’re moving too quickly, which means that we don’t understand what this new technology will bring for both consumers and businesses. And this makes hype dangerous, because it inspires a lack of caution that amplifies the risks even further.

I think that’s right. I have been watching the Theranos trial, and the founder made a fair point. She said, this is the way Silicon Valley works. This is what we all do. We all hype it up and make promises for things that don’t exist. We know there’s going to be problems, but we come back and try to fix those later. And, at least in the medical industry, that doesn’t work out so well.

But we don’t even have the same set of rules for the information landscape. People make all sorts of claims about technologies, which either don’t work or are actively harmful. And nobody is going back to them and saying, “Hey you said this and you haven’t proven it.” So companies make claims in order to raise enough money, to get enough interest, to get users, to build enough hype, and then become self-sustaining.

Then if it implodes, by that time, all the early investors have sold their shares, and it’s someone else’s problem. If it succeeds, then it goes on for as long as it can and keeps evolving. As you know, the Facebook story is: Keep riding that wave and then mutate into the next form before anybody can even catch up with what you did in the past.

But there has to be some rationality on top of this. There has to be someone saying, “Hold on, this isn’t working. There’s a problem here, let’s take a minute and figure it out.”

This reminds me a lot of when I covered drones during the early days of their commercial use in industries like architecture, engineering, construction, and surveying. I spoke to a lot of lawyers who focused on the inadequacy of the laws for drone operation, which couldn’t keep up with the actual technology. They often referred to the introduction of airplanes, when homeowners would get angry about airplanes flying overhead, because they understood their property boundaries as extending up to the stars.

I say this because it’s clear this problem is coming up a lot lately: We’re constantly behind technological development in terms of understanding and regulation. Is this a problem that we can even hope to solve with the embodied internet? Or is it an inevitable problem?

I think that’s a really good example. I worked on Prime Air at Amazon. Because I’m not an AI or computer vision person, I worked on the human side of it. I wound up gravitating towards the air traffic control side of the problem, and asking, “How do we enable human operators to make these things safer?”

There you have an example of regulation, with the FAA in the US, which is a very conservative organization when it comes to allowing new things to happen. But on the flip side, you also have a bunch of independent solo pilots who don’t want a lot of technology like transponders in their planes. They’re very libertarian in that way, probably because flying is one of the freest things you can do. You’re getting away from all the boundaries, right?

So with air traffic control, you have clashes. The FAA is trying to figure out how to introduce millions of robots into the airspace and make it safe, and they’re potentially interfering with small-craft pilots. I’d say they’re going slow, but you have got to credit them with having the right mentality about this, because it’s about safety first.

How did you approach this problem of safety at scale when you were working on Prime Air? It seems like it could offer some lessons about how to approach the safety problem in the metaverse.

One of the things that I worked on for Prime Air was a program where we would simulate drone flights beforehand. In two years, we wanted to simulate ten years of flying. And if we could show in the simulations—which get validated by real-world data—that flying was safe, then we could say, OK, this is going to work, this is going to scale. We still need real-world tests too, but later.

We don’t do that very much for things on the internet. We don’t simulate. We don’t try to create virtual people, have them interact, and see what happens when things go to scale. Things break, and we could learn where and how they might.

You would think if somebody were trying to build the next blockchain, let’s say, the first thing they would do is simulate what happens when a million people try to hit this blockchain at once. “Oh, crap, our gas prices go through the roof,” right? You need to try it out at scale with simulations before you put a bunch of people in there and see that it doesn’t work.
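To make that concrete, here’s a minimal sketch of the kind of pre-launch load simulation he’s describing. It assumes a toy EIP-1559-style fee rule as the mechanics; this isn’t Prime Air’s simulator or any real chain’s code, just an illustration of how simulated load can surface a failure mode before real users hit it.

```python
# Toy fee market: blocks have a gas target; the base fee rises when blocks
# run overfull and falls when they run underfull (simplified EIP-1559 rule).
GAS_TARGET = 15_000_000   # gas per block the chain "wants"
GAS_LIMIT = 30_000_000    # hard cap on gas per block
TX_GAS = 21_000           # gas consumed by one simple transaction
ADJUSTMENT = 0.125        # max fractional base-fee change per block

def simulate(users_per_block: int, blocks: int = 100, base_fee: float = 1.0) -> float:
    """Run `blocks` blocks with `users_per_block` would-be senders each."""
    for _ in range(blocks):
        demand = users_per_block * TX_GAS
        gas_used = min(demand, GAS_LIMIT)
        # Fee update is proportional to how far the block is from target.
        base_fee *= 1 + ADJUSTMENT * (gas_used - GAS_TARGET) / GAS_TARGET
    return base_fee

# ~700 users per block roughly fills the target, so fees stay roughly stable.
# A million users saturates every block, so the fee compounds 12.5% per
# block and explodes -- exactly the failure you'd rather find in simulation.
print(f"base fee at 700 users/block:       {simulate(700):,.2f}")
print(f"base fee at 1,000,000 users/block: {simulate(1_000_000):,.2f}")
```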

Let’s say I’m a small business or a developer, and I don’t make AR/VR/spatial computing tech, but I’m interested in using it because I’ve heard it’s going to change everything. How do the risks you’re explaining apply to me? Why should I care about testing before I scale?

OK, so maybe here’s a good conceptual platform to build on.

We start using new technologies because we see a certain number of people who tried it first and said it was good. It expands out to a whole bunch more people—but often the first set of users was some group of 25-to-35-year-old white guys. That hides a problem.

If we’re not testing these things on a sample of the general population up front, if women, minorities, and other cultures are not part of the design and testing process, then it’s probably going to fail when we put it out in the real world. It’s very much like the simulation idea I talked about: the simulation has to be diverse here, too. The user base and the design inputs need to be very diverse to get the right results.

A simple example is certain headsets I won’t name. When they were introduced, they simply did not fit a large number of people, for a variety of reasons. For one, there are a lot of people out there who actually care about their hair—I don’t, but a lot of people do. And they don’t really want to put a strap on their head, because it’s going to mess up their hair. Some people have hair that is more voluminous, due to style, or potentially even genetics. So the straps weren’t fitting some people, and they were not able to use the headsets comfortably.

Now if you’re a business owner, and you’ve hopefully got a diverse employee population, and a subset of your employees are not able to use the device that you just implemented for meetings, you’re looking at a potential lawsuit.

You’re also disempowering your own employees from doing something that they could have done using other means and other technologies. Think about meetings during the pandemic—there are technologies that would have been more accessible and usable by a greater number of people than some of these early, cheap VR headsets on the market.

That’s because the vendors just didn’t account for all these so-called “edge cases.” But they happen to not be edge cases, because they’re actually very, very common. The fact is that these headsets just weren’t designed with everybody in mind.

So potential enterprise users of spatial computing tech should think more broadly about their employees, and test, and move slowly to be sure that they can use this technology responsibly.

What is the harm of moving slowly? What’s the downside of actually being careful? It might cost us a little more money upfront to think about these things and to go slow. But the cost of doing it wrong, the cost of making the mistake, the cost of hurting people or leaving people out? That’s tremendous.

We should move slowly and think, even when we believe we’re being helpful. I wrote this article about 100 original voices in XR, and I got to meet a lot of people in the field that I hadn’t met before. A few of them were experts who had applied VR to diversity training, or DEI. They were asking, “How do we use VR to teach people who don’t have experience dealing with people from other cultures or skin colors? How do we train them to have positive mutual interactions?”

The way a lot of companies do this is to put people in a racially charged situation with an actor or a non-player character, and then train the person to deal with it on the fly. But the fact is, that can cause more trauma, cause more stress, and lead to worse relationships than other methods. It’s a naïve way to approach the problem, and it’s often wrong.

What do you think about these scare headlines telling every business they need a head of metaverse?

I think these companies had better pick carefully. Because if you just take some person whose only experience is in marketing, or hyping up NFTs and so on, they’re going to do a crappy job as your new chief metaverse officer.

You need somebody who understands insurance, who understands liability, data, and user-experience research. Those are the people you want to put in charge of your metaverse implementation, because they’re going to be the ones able to call BS on the hype and tell you what could sink your business if you do it wrong.

You really want people who understand risk management in those positions. They’re going to be more boring than the marketers, and they’re not going to be excited about the future possibilities—they’re not going to be your chief evangelist. Hire an evangelist if you want, but don’t think of them as a good fit for a C-level role that carries a lot of responsibility and liability when it comes to the performance of your company. You’d better pick the right person for that role, or you’re going to have a problem.

Image courtesy of Campfire.

Should companies also think more broadly about whether the technology is fully realized for their applications? For instance, a lot of XR vendors say their technology is ideal for design reviews, but that doesn’t seem to play out in reality.

Yeah, totally. There’s a company that I advise called Campfire3D. The company was founded on the idea that everybody has sold their XR hardware as being able to do 3D design reviews. In every marketing video you see, there’s always a couple of people grabbing a 3D object, turning it in their hands, and handing it over to someone else.

That shows that these companies don’t understand what a design review entails.

For years, everybody promised that design reviews were the first killer use case for AR and VR, and nobody delivered on it. Every time you bought a product, all you got was a bunch of avatars sitting in a room handing models back and forth. So it turns out you have to sit down with real designers and understand how they do real design reviews. What do they need from this process? If you ask them, you can build the tools that will actually help.

Since we’re knee-deep in talking about the problems with this technology, are there any other issues that we should be thinking about?

Privacy is critical. And the biggest threat to privacy, honestly, is the ad-driven business model. It’s not that the ads are evil by themselves. Ads often are just giving us information or creative expressions, and they’re largely protected by the First Amendment. The problem is the way that our personal information is being used to deliver ads.

What I’m arguing for, and what other more prominent people are arguing for, is that we need to regulate the business model, not the expression of the ads themselves.

The fact that our personal information is being mined, in order to drive the right ad at the right time? It seems a bit harmless, because if I do woodworking, I’m going to see that ad for woodworking. What’s wrong with that? What’s wrong is that, when XR has eye tracking and emotional analysis, the computer is going to understand how we think and feel about everything in our environment. It’s going to know how we feel about people in our lives. It’s going to know how we feel about political issues.

It’s going to know about your emotional triggers. Is a political issue like abortion going to be a trigger? Or are you a sap for romantic gestures? Whatever the issues, whatever our hot-button triggers are, the companies will know how to get us into an emotional and much less rational state, which is the perfect state for advertising to us, because we’ll be susceptible to all sorts of influences.

And this technology is going to amplify that risk, as usual.

When you’re talking about VR or AR, that’s especially dangerous because you can now replace things in the world. Ad placement isn’t just that the bottle of Coke shows up on someone’s desk. The VR world will cycle through every different kind of cola until one of them catches your eye. And now the system knows that’s the one you’re interested in.
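As a thought experiment, the inference loop he’s describing could be as simple as the sketch below. Everything in it is hypothetical: the variant names, the dwell-time numbers, and the gaze function are made up to illustrate how cycling ad variants past an eye tracker could surface a preference the user never stated.

```python
import random
from collections import defaultdict

# Hypothetical product variants an AR scene might cycle through.
VARIANTS = ["cola_a", "cola_b", "cola_c", "cola_d"]

def dwell_time_ms(variant: str) -> float:
    """Stand-in for headset gaze data: how long the user's eyes rest on
    the object. Here we fake it; this user secretly prefers cola_c."""
    mean = 420.0 if variant == "cola_c" else 150.0
    return max(0.0, random.gauss(mean, 40.0))

def infer_preference(trials_per_variant: int = 20) -> str:
    """Cycle each variant through the scene, sum gaze dwell, pick the winner."""
    totals = defaultdict(float)
    for variant in VARIANTS:
        for _ in range(trials_per_variant):
            totals[variant] += dwell_time_ms(variant)
    return max(totals, key=totals.get)

# The system "learns" a preference without the user ever saying a word.
print("Inferred preference:", infer_preference())  # almost always cola_c
```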

By optimizing the system, driving it for the maximum extraction of ad revenue, we’ve turned people into data mines rather than free-thinking individuals. Ultimately, if we take it to its extreme, we lose our autonomy; we lose our ability to think for ourselves, because the systems are pushing our buttons.

I argue that the computing interface of the future is the one that knows us so intimately that it can help us get our work done. But if companies are using it to exploit us, then we’re going to have to just come down and say no, we can’t do that. It’s too dangerous.

You know that my goal is to help enterprise users understand this technology, but it’s clear that the same issues that affect consumers will affect enterprise users, too. There’s no way to say, “this won’t matter in the business world.”

The only things that change are the use cases, right? The people are the same people, whether at home or at work. At work, we might use it for collaboration. At home, we might use it for communication, entertainment, and more. But fundamentally, the technology is the same. So what works in one place will probably work in the other, and what’s harmful in one place is probably also harmful in the other.

So your view of spatial computing technology is that the potential benefits are huge, and the potential risks are huge, so we owe it to ourselves—as people, not just consumers or enterprise users—to be very careful.

That’s a good summary. I would add: let’s be cautious about these things, and human-centered, but let’s also not wait forever to curb the bad uses. Let’s actually act on the negatives that we see today. Let’s not ignore them and just hope things get better over time. Taking action is the only way we’re going to steer this stuff toward the best outcomes.