Shortly after I got my new Oculus Quest, I was back to the VR enthusiasm I felt two years ago. It is just so cool. And having no cable attached actually opens up new possibilities. Like… what if I could build something where you can endlessly walk, like, really walk, without teleportation, just by using some intelligent virtual architecture that changes the path ahead of you and removes where you have been, so that you can endlessly turn, even in a small area like 2 by 2 meters?
Welcome, Non-Euclidean Geometry
It turns out, I am not the first one with this idea but that is actually good, as I now have a reference for what is possible. There are basically three ways to achieve it:
- Corridors
- Portals
- Stencil Buffer
Imagine you walk along a corridor with a corner at the end. As soon as you walk around the corner, you can no longer see where you came from. This means that at this point, let's call it a "trigger point", we can swap out the space formerly occupied by the old, now invisible corridor with something else. If we turn back, we have to detect this trigger point again and show the old corridor, removing the geometry that is now behind us.
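To make the trigger-point idea concrete, here is a minimal sketch of the bookkeeping involved. The `Segment` and `World` names are purely illustrative, not any real engine API; in practice this would live in engine scripts reacting to trigger colliders.

```python
# Hypothetical sketch: swap corridor segments at a trigger point.
# Segment and World are illustrative names, not a real engine API.

class Segment:
    def __init__(self, name):
        self.name = name

class World:
    """Tracks which corridor segments currently exist in the scene."""
    def __init__(self):
        self.visible = []          # segments currently spawned

    def spawn(self, segment):
        self.visible.append(segment)

    def on_trigger(self, passed_segment, next_segment):
        # The player rounded the corner: the old corridor is out of view,
        # so replace it with the next piece of the endless path.
        self.visible.remove(passed_segment)
        self.visible.append(next_segment)

world = World()
a, b, c = Segment("corridor A"), Segment("corner"), Segment("corridor B")
world.spawn(a)
world.spawn(b)
world.on_trigger(a, c)   # player passed the corner: A vanishes, B appears
print([s.name for s in world.visible])  # → ['corner', 'corridor B']
```

Walking back through the corner would fire the trigger in the opposite direction and call `on_trigger(c, a)`, restoring the original corridor.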
Very nice documentation of this in action can be found here. Pay special attention to the lower right image, which shows the geometry in action.
Corridors have some advantages but also significant drawbacks. Since geometry is removed and added on the fly, it is hard to use light baking, which would add a lot of visual quality and performance. Lighting conditions also need special attention: removing a light source causes visible changes that disrupt the feeling of a smooth, endless world. On the plus side, you only need to render one VR camera, which leaves a lot of headroom for graphics and other effects.
Instead of constructing pieces of geometry consecutively, an alternative approach is to let the player travel seamlessly between full rooms without them noticing. To make this work, we place both rooms somewhere in the world and create a camera in each of them, linked to the player's movement and head, so that each camera always shows what the player sees, or would see if they were standing in that room.
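The camera-linking step boils down to a transform: the second camera keeps the player's offset from room A's origin, re-applied in room B. A hedged 2D sketch in plain Python (the room origins and function name are assumptions for illustration; a real implementation would use full 3D transforms including rotation):

```python
# Illustrative sketch of portal camera linking in 2D.
# The camera in room B mirrors the player's pose relative to room A.

def linked_camera_position(player_pos, room_a_origin, room_b_origin):
    # Take the player's offset from room A's origin and re-apply it
    # in room B, so the second camera sees what the player *would* see.
    offset = (player_pos[0] - room_a_origin[0],
              player_pos[1] - room_a_origin[1])
    return (room_b_origin[0] + offset[0],
            room_b_origin[1] + offset[1])

cam_b = linked_camera_position(player_pos=(1.0, 2.0),
                               room_a_origin=(0.0, 0.0),
                               room_b_origin=(100.0, 0.0))
print(cam_b)  # → (101.0, 2.0)
```

The head rotation is copied over unchanged as long as both rooms share the same orientation; rotated rooms would additionally rotate the offset.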
The second step is to place a plane somewhere in the corridor and project the camera image from the other room onto it via a render texture. Once the player touches the plane, we teleport them to the same relative location in the next room, which then becomes the active one. If the player turns around, they now see the original room projected onto a plane as a render texture.
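The teleport itself can be sketched as a trigger on the portal plane: as soon as the player's position crosses it, shift them by the same room-to-room offset used for the cameras, so the swap is imperceptible. This is an assumption-laden toy version with an axis-aligned plane; real portals test against an arbitrarily oriented plane.

```python
# Toy portal trigger: if the player walked past the plane at x = portal_x,
# shift them into the other room by room_offset. Axis-aligned for brevity.

def maybe_teleport(player_pos, portal_x, room_offset):
    """Return (new_position, teleported?)."""
    x, y = player_pos
    if x >= portal_x:
        # Crossed the plane: relocate to the same spot in the other room.
        return (x + room_offset[0], y + room_offset[1]), True
    return player_pos, False

pos, jumped = maybe_teleport((4.9, 0.0), portal_x=5.0, room_offset=(100.0, 0.0))
print(pos, jumped)   # still in front of the plane, nothing happens

pos, jumped = maybe_teleport((5.1, 0.0), portal_x=5.0, room_offset=(100.0, 0.0))
print(pos, jumped)   # crossed it, now standing in the other room
```

Because the player keeps their offset relative to the portal, the render-texture view they were walking toward lines up exactly with the real geometry they arrive in.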
Very nice illustrations of that in action (and source code) can be found in these videos:
Portals are amazing. You can change from and to any lighting condition with no noticeable visual break, and you can switch to vastly different worlds, making it appear like magic. The big drawback of portals is the increased number of draw calls: essentially two (or more!) scenes need to be rendered, and close to the portal the projection must be convincing, so low resolution is not an option.
This method uses only one camera, eliminating the rendering cost of portals while providing a very similar experience. A very good explanation is given in the following video:
The basic components are depth masks and a stencil shader. Depth masking hides objects when they are viewed through the mask, while stencil shading shows objects only when viewed through it. Geometry is then switched on and off dynamically, as in the corridor approach. Several tricky questions quickly come up: handling colliders that are still there but no longer rendered, lights that keep shining, shadows cast by now-invisible objects, and so on.
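To illustrate the stencil mechanism itself, here is a toy software model. Real engines do this on the GPU with stencil buffer operations; this little grid version only demonstrates the logic: a mask pass writes reference values into a stencil buffer, and later passes draw only where the stored value matches.

```python
# Toy software model of a stencil test on a tiny pixel grid.
# A mask pass stamps reference values; draw passes test against them.

W, H = 4, 3
stencil = [[0] * W for _ in range(H)]    # 0 = untouched
color   = [["."] * W for _ in range(H)]

def write_mask(x0, x1, ref):
    # Mask pass: stamp a reference value where the "window" is visible.
    for y in range(H):
        for x in range(x0, x1):
            stencil[y][x] = ref

def draw_if_equal(ref, ch):
    # Stencil-tested pass: draw a pixel only where the buffer matches ref.
    for y in range(H):
        for x in range(W):
            if stencil[y][x] == ref:
                color[y][x] = ch

write_mask(2, 4, ref=1)   # the "window" into the hidden room
draw_if_equal(1, "#")     # hidden-room geometry, visible only through it
draw_if_equal(0, "-")     # normal-world geometry everywhere else
print(["".join(row) for row in color])  # → ['--##', '--##', '--##']
```

The hidden-room geometry (`#`) only appears inside the masked region, while the normal world (`-`) fills the rest, which is exactly the "only show objects through the mask" behavior described above.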
I’ll try and dive into all the methods above in a future post.