We met Jeri Ellsworth at this year’s Augmented World Expo (AWE), the XR industry’s flagship event. Her company Tilt Five was having a phenomenal day and their new holographic gaming system was pulling huge traffic. With good reason, too: Slip on a pair of Tilt Five’s glasses and flip open a game board, and the system projects a crisp, focused 3D world that you control with a wand.

In short: Where many XR products in the past have over-promised and under-delivered, Tilt Five’s new system does exactly what it says—and at a cost of only $359.

When we caught up with Ellsworth after the event, the reason for her company’s success became clear. She walked us through a career developing consumer gaming devices and toys, and heading up the R&D department at Valve, which made key contributions to the tech behind the HTC Vive and Oculus VR headsets. She chatted about the early days of the mouse and the iPhone, the first spreadsheet software, and the Silicon Valley Homebrew Computer Club that birthed era-defining companies like Apple and Osborne Computer.

And each of these topics—unrelated as they might seem—offered an important lesson about navigating the present and future of spatial computing.

Jeri Ellsworth, CEO and Co-founder of Tilt Five

Sean Higgins: I like to start these interviews by asking, What does “spatial computing” mean to you? Or is that even a term you think about?

Jeri Ellsworth: I think about it a lot. And I try to compare it to technologies in the past. I have this feeling that “spatial computing” is going to disappear out of our vocabulary.

It will be like mouse interactions on computers used to be. Technology insiders had to describe it very literally so people understood what it meant. Now, you just use a touchpad. Everyone knows how to use a mouse. Spatial computing is going to be one of these things—like the internet—where everyone just knows what it is.

As much as I hate to use the word “paradigm,” you’re saying that spatial computing is a new computing paradigm …but just for now.

Right, computing has been evolving this way since the ‘60s. If you look at the early pioneers in the AR and VR space, even back to work at Stanford, there has been constant evolution—and it really feels like spatial computing is the next evolution. We’ve been stuck to these screens for a long time, but people are used to working in 3D space. I know how to pick up my cup of coffee very well, so why aren’t we computing this way?

I’m really fascinated with this sudden chatter about spatial computing, but it’s been percolating for decades and decades. There have been little signs of it forever. Our phones have GPS, so now they can track us as we move through different locations and augment our lives. Or on the fun side, think of the Wii Remote, a motion controller. It was super primitive, but it was super compelling the first time people got to use it, and it was super intuitive.

All of this leads to my belief that this is the next computing platform, because it has this ability to be so intuitive, even for non-techie people.

What comes first for spatial computing once we adopt it more widely? Are we all going to jump straight into virtual experiences?

I think that the excitement around virtual reality is a little bit misplaced. I think augmented reality experiences come first. Those are going to change user behavior.

Users are going to become very comfortable with these milder experiences, and then, somewhere down the road a decade or two, virtual reality will be just a subset of what we’re already doing. So we’ll have our glasses, or a magic pixie dust we put on the table that creates holograms, but we’ll be passing in and out of fully and partially immersive experiences all the time.

Even when I worked at Valve, there was a lot of effort going into VR. But I said, no, no, no that comes after AR, you have to move your users’ behaviors and train them up. You can’t just drop them into virtual and expect that it’s going to be mass market.

This is the argument people make about autonomous vehicles, right?

Right, look what Tesla’s doing with autonomous assist. People are comfortable with that after how long? And it can go 70 miles per hour.

For another example, it’s very much like the early days of home computers. Let’s look at the mouse again. Microsoft did something brilliant with Windows. People didn’t know how to click and double-click and move a mouse around, so they put Solitaire and Minesweeper in the OS. As you know, these are simple little experiences, and they get people performing the odd mechanics of using a mouse. Then pretty soon it becomes muscle memory, and no one even talks about using a mouse anymore.

To bring this back to Tilt Five, it seems like your game table product is the tip of the spear, in that it helps get users used to spatial computing, whether they’re enterprise customers or consumers.

Exactly, it’s about users.

I think the most formative part of my career was when I started working in toy design. I worked with some amazing mentors that taught me how to think about the end user more than the device that we were creating. I actually see this as a big problem in the XR and spatial-computing space: There’s so much emphasis on how many pixels, or how far I can walk around the world, and not enough focus on questions like, How do we start to acclimate users to this? How do we delight them?

At Tilt Five, our whole thesis is that it’s going to be a lot easier to delight people in the early days, and then create a new productivity-type interaction later on. Like Microsoft did with Minesweeper, that was brilliant. They entertained their users, gave them enough motivation to go through the motions, and then the computer became a productivity tool.

That’s why we’re wholly focused on the living room. I know how to delight people from my experience in the toy industry: You come up with a minimum viable product that is just enough, that has a game loop in it, or an interaction loop that gets you engaged and doesn’t frustrate you. It just keeps you coming back to it. And that’s just so critical in the early days.

This is in sharp contrast to the approach of some bigger names we’ve seen over the past few years, like Magic Leap. You are starting with the user experience and building toward bigger things from there.

We weren’t too wrapped up in trying to boil the ocean like some of the big players. Think about that headset. People are not wearing that outside—no one’s going to wear dark glasses outside. So we just take that off the table, right?

We started by eliminating use cases. We just threw away everything that we thought the users wouldn’t do. Then we looked at it and said, Okay, what do we have left? People will sit in their home and wear the dark glasses with their friends and feel comfortable. So that’s important. People want to have intuitive interactions, so we need to have this magic wand. We need to be able to fill the table with the experience.

So we start jettisoning a bunch of things. We also came up with this really clever optical technique where instead of trying to beam the light directly into your eyes—which is insanely difficult and expensive—we turned it inside out. Instead of putting the light directly in your eyes, it projects from the glasses out to the game board, and the special board redirects the light back to you so that only you can see it.

By doing that, we removed all of these physics constraints around the optics, and we were able to solve one of the biggest holy grails for AR, which is the vergence-accommodation conflict. Essentially, with the table, everything’s correctly in focus.

The Tilt Five System

And all of this just works, and in a way people can relate to. But what’s the next step? What do you all want to do next with this product?

I think that we can go quite a ways in the entertainment space, but a majority of the developers that are reaching out to us are not gaming developers. So a natural place we can go with it is enterprise. We can support enterprise users and more professional users with different features that they might want. So that’ll be low-hanging fruit for us.

Since you mentioned enterprise users, I would love to talk about what you see happening there. There are some pretty classic use cases, like design reviews of 3D models. But do you see any alternate applications?

I think you hit on some of the early applications—obviously you’re going to be using your system to collaborate on CAD models. You know, almost every one of our partners at AWE was showing various versions of enterprise applications and professional applications.

One was a company called Arcturus that does volumetric capture and video editing. You might think of it as Adobe Premiere for volumetric capture. You can edit and play back volumetric movies and interactive content, which is amazing, right? You could use that for training.

We had another group, Nucleus VR, that was working with the European Space Agency. They are using our system for command and control for the teachers who are teaching actual astronauts, as those astronauts use a VR headset to walk through a life-sized version of the International Space Station.

The teachers love our system, because they don’t want to be in a full VR system. They want to have all that situational awareness, they want to see the eyes of their friends and colleagues. And so it’s super valuable for them, even though the other part of their system is fully immersive.

It seems like part of your strategy is to make the game board so easy to use that people will get it and find ways to use it on their own.

Right. And like I said, I always try to draw comparisons to other technologies—so let’s just use the iPhone as an example. The iPhone is arguably the world’s best enterprise device now. But when it came out, it played music, it did maps, and it let you browse the internet. It was very primitive.

But enterprise developers looked at this and thought, Oh, there’s this really valuable problem I can solve by using this touchscreen device, which is inexpensive to get and straightforward to use. And that kind of interaction opens up unimaginable uses.

We want to get into 100 million homes, and then the markets will follow.

A through-the-lens shot of a game in progress.

Since you made it straightforward to use, did you also focus on making it easy to develop for? I understand you already have an SDK in place.

Our development environment is drop-dead simple to use. We support the standard game engines and native plugins if you have a custom app. So if you have a game that already exists in Unity, or even an enterprise app that exists in Unity, it’s basically drag and drop. Take our plugin, drag your app into Unity. Within five minutes you can see your scene rendered on the table.

For us, it’s really important that this development cycle goes really fast. It needs to be real time.

Folks that are working in VR have to take the headset off all the time. They can’t really edit the scene in real time. With our product, you can just wear the glasses, you can program, you’ll move things around on your 2D screen, and then you look over and they’re moving up and down in 3D space. It’s really magical.

Years ago, one of my mentors told me that whenever you’re making a product, and developers are one of your customers, if there’s anything that’s rough or difficult for them to do, just make a tool for them. And we put so much time into these tools.

It seems unusually simple to work with, especially for companies with limited development skills or resources.

I love it when we have a new partner coming in. They ask, oh, is it going to be difficult? And we get it rendering in five minutes. There we go.

Do you think this is the moment when enterprises can look at spatial computing tech and say, I can look into using this, and I don’t have to worry about being an early adopter and getting burned on a $6,000 headset with a two-degree field of view?

I think, you know, for all the enterprise developers and companies that are buying our tech, it’s a no-brainer. Our kit is $359. That’s background noise to them. They can buy it, try it, educate themselves. That’s super valuable in itself, even if there isn’t an immediate application for it. So that’s fun.

The other thing is, there are going to be really strong verticals where systems like ours are just perfect, right? And if a company can identify a narrow problem the system can solve, they should grab one, solve that problem, and start using it right away.

Is the best approach for enterprise users to start small?

Yeah. If I were in a Fortune 500 company, I don’t think I would just mandate that the future is XR and we need to start moving everything over. That’s probably a fool’s errand at this point.

Is that kind of hype the biggest problem with this technology right now? If not, what is?

I think in our industry on the whole, our big problem is this notion that there’s a silver bullet right around the corner that’s going to solve every one of our XR problems. This causes people to make bad choices. We’ve all heard it, right? “In two months, Apple is going to ship XR glasses that are perfect, and they’re going to do everything.” I’ve been hearing that for seven years, and it hasn’t happened.

And, honestly, it’s not helping us, as a group of people working on XR, to believe that Facebook will release glasses that will do everything we want to do. It’s such a broad, open space right now that systems like ours can grab a big chunk of it.

A through-the-lens shot of a game in progress.

When I do these interviews I like to ask, who are the people who are going to push spatial computing technology forward? A lot of people I interview say that it’s the generation growing up right now, who have time to get used to spatial computing tech. But with your philosophy of training the user, and your tool being so simple, I think you would disagree.

I mean, yes and no. I think we all have this dream of what XR is going to be in 20 years, right? The people who grew up with it will, 20 years from now, be so savvy that they’ll be making the innovations that really make the dream come true.

But I also think that, looking at the history of home computers, it was a wide range of people that were having runaway hits in different vertical markets. The folks who developed VisiCalc, they weren’t kids, they were older folks. I think it’s going to be a broad spectrum of folks that are visionaries. It’s people that can see a little bit further into the future than some of the myopic folks.

For example, there was a wide variety of people going to the Homebrew Computer Club in Silicon Valley, and tons of technology companies like Apple, Microsoft, and Osborne sprang out of these groups of very diverse genders, races, and ages. It kind of feels like we’re in a similar phase with spatial computing.

It’s a strong argument that the basic foundation for developing a new technology is community—and that a short look at tech history could teach us this lesson pretty quickly.

Exactly. I think you hit the nail on the head. I’m a big fan of technology history.

What’s something we didn’t talk about that you have a strong opinion about? Or is there anything important that doesn’t get much coverage?

Ethics. I am terrified about companies like Facebook that want to use XR, metaverse, and spatial computing, to manipulate users. They can call it whatever they want, but you know, their business is manipulating users.

When I worked at Valve Software, I was in the hardware lab. They said, your mission is to bring the whole family together to play games. That’s it, go figure it out. And so I hired a bunch of people. We started researching, we were doing things like hooking electrodes to people’s heads and back into games to see if we could get more engagement. We had psychologists helping us get more engagement.

By tracking eyes, and skin, and posture, we could make game loops that drove more engagement—or addiction, which is what it should have been called. At the time, yes, I was being a naive engineer. I was thinking to myself, these are some really interesting challenges I’m working on. I wasn’t thinking about the ethical implications of it in the early days.

So now we fast forward to companies that have headsets that are taking all your biometrics, they’re recording your world around you. They know you better, they know your friends better than you do, and then they send that data to a supercomputer out in the cloud. And they use it to learn how to use AI to manipulate you to look at an ad or do something—that’s really scary.

I think about that a lot. I really hate that.