Visual side of things
What does it really take to create realistic, immersive high-end VR experiences that feel and play well? It requires a lot of testing and technical knowledge, and most importantly a whole new way of thinking about what makes VR so different and immersive. Things that previously worked fine might no longer be optimal in VR.
Video games are usually more or less a compromise between performance and visuals. The current generation of AAA games looks really good, and naturally gamers would like to see that level of visuals in VR too. This creates huge challenges for VR developers trying to meet those expectations on current hardware. Many developers simply sidestep these challenges and make simple, casual games. That is why a lot of VR games have taken a more or less stylized approach, which helps claw back some of that performance, but eventually people will want more immersive VR experiences.
In this first part I will focus on the visual side of things, covering topics like performance and techniques that might not work well in VR. Keep in mind that what I share here is based on what we have learned from our VR tests and the Planetrism game project. Every project is a different story, so treat these not as strict rules but as things to keep in mind.
It takes a lot of time, effort, sweat and tears to achieve good performance, but it's essential for a smooth experience. Optimization will be a huge part of the whole project.
VR games need to perform fast all the time. You can play some console or PC games at frame rates as low as 30 fps, or with the frame rate jumping between clamped values, but in VR you just can't get away with that. We found this out the hard way pretty early on in our first tests. It's easy to assume that VR games are just like regular PC games, so at first we had fairly complex materials and lighting setups that cost a lot of runtime performance. It was a shock to see the fps drop to 45 at times, and we had to start figuring out what was causing it.
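The numbers involved are worth internalizing: the headset's refresh rate fixes a hard per-frame time budget. A minimal sketch of the arithmetic (plain C++, not engine code):

```cpp
// Milliseconds available per frame at a given headset refresh rate.
// At 90 Hz that is about 11.1 ms; dropping to 45 fps means ~22.2 ms,
// which the VR runtime typically papers over with reprojection.
double FrameBudgetMs(double refreshHz) {
    return 1000.0 / refreshHz;
}
```

Everything in the frame, including game logic, draw call submission and GPU work, has to fit inside that budget, every frame.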
You can find a lot of interesting information about rendering in UE with just a few console commands. In this case I used "stat scenerendering" and "stat gpu". Remember that performance in the editor differs from, and is usually worse than, a packaged build. In this case the performance is well above what it needs to be, which gives us a little headroom for future ideas.
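A few other stat commands pair well with the two above (all standard UE console commands; the exact output layout varies by engine version):

```
stat fps             frame rate and frame time
stat unit            game thread, render thread and GPU times side by side
stat scenerendering  draw call and primitive counts
stat gpu             per-pass GPU timings
```

"stat unit" is usually the first one to check, because it immediately tells you whether you are CPU bound or GPU bound.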
The first thing you should always do when you find out that the game is performing poorly is to open up the profiler. If you are using Unreal Engine or Unity, you have a lot of useful tools available. In our case we found out that the game was GPU bound, so then it was just a matter of trying to balance things. When you find yourself in this situation it's important to understand which aspects matter most for your game and prioritize where to spend the limited runtime performance. Our issues were related to overdraw and lighting complexity. Luckily we had a lot of experience optimizing games to run smoothly on mobile devices, so we started to pay attention to the small things and eventually managed to get 90+ fps.
Engines nowadays are very complex, and it's very easy to make mistakes when dealing with settings or to push too much onto the screen at the same time. Usually the issue is the developer who doesn't know what he or she is doing (me included) and not the engine or tool. That's why it's almost impossible to give a magic answer to people asking how to optimize their games. Figure out the key aspects of your game and then make compromises in the other areas. Don't spend your time asking questions on social media; instead, read the documentation and dive deeper into the technical aspects.
Depending on the engine you are using and the renderer available to you, it's important to understand the limitations, benefits and disadvantages. In our case we first used UE's default deferred renderer, but later we switched to the forward renderer, mainly because that was the renderer the VR game Robo Recall used. Pretty soon we hit the limitations of forward rendering and needed to switch back to the deferred renderer, mainly because of our dynamic lighting setup and because the forward renderer didn't support various features we needed. It's important to find the right renderer early on, and the choice depends a lot on what kind of project you are creating and what features you need, such as the anti-aliasing method, lighting type and so on. Planetrism was a totally different game from Robo Recall, so we didn't get the same benefits from forward rendering. I advise you to experiment and see what works best for you.
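For reference, in UE4 the renderer switch lives in a project setting backed by a console variable (names as of UE4; it requires a restart and a full shader recompile, so test this decision early):

```ini
; DefaultEngine.ini - switch to the forward renderer
[/Script/Engine.RendererSettings]
r.ForwardShading=1
```

Set it back to 0 to return to the deferred renderer, as we eventually did.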
You should also disable any unnecessary rendering features that increase shader complexity or add extra load to the rendering thread. Remember that engines usually have a lot of features on by default that work in most cases, but VR is always a different story.
Lighting is one of the things that will eat most of your performance if you don't pay attention to it. At first we used static lighting, because that was what the "VR gurus" told everyone to do. Don't believe everything people say on the forums. Static lighting certainly helps with runtime performance, but we needed a dynamic lighting system for our game. It's easy to drag and drop a directional light into your scene for the sun and then place a skylight to get that nice ambient lighting. The thing is, the devil is in the details, and if you still want to hit 90+ fps it requires a lot of tweaking.
There are some major things to keep in mind. I would say the rule of thumb is: if you have a dynamic light that casts shadows, that is a potential performance issue later in development. The exception that proves the rule is the sun. The sun must cast shadows, because otherwise you end up with a world that looks very dull and flat, but the first thing to do for any local light is to turn shadow casting off, unless it's a torchlight or something similar that is important for the gameplay. Also pay attention to light overlap when using dynamic lights like point lights, because that can become an issue later on.
Image showcasing light overlap. The light sources here are point lights with a certain radius, and some very small overlap is happening.
Speaking of shadows, you should always keep in mind how far you want to render them. For large environments like the ones in Planetrism, it's important to render shadows pretty far out to avoid visible shadow disappearing and popping as the player moves. That's why game engines use cascaded shadow maps for directional lights to balance quality against performance cost. Shadows stay much more accurate at close range, where the quality matters, and the shadow resolution gradually drops the further you get from the camera.
The rule here is that the more shadow cascades you have, the more they cost at runtime. Default values are usually too high for VR games. That is why it's important to find a good balance: get away with as few cascades as possible while still keeping the shadow quality and shadow distance decent. It's not an easy task, and in our case it took a lot of tweaking and compromising on shadow quality and distance. If you can afford it, I recommend distance field shadows; with them you can bring the cascade count down as low as one, because distance field shadows kick in after that to take care of distant shadows. Contact shadows are also a very handy way to add finer-detail shadowing, but they might not work in every situation, so use them with caution and with values that don't introduce visual artifacts.
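Engines typically place the cascade boundaries with some blend of a uniform and a logarithmic distribution (the "practical split scheme" from parallel-split shadow maps). This is an illustrative standalone sketch of that idea, not UE's exact implementation:

```cpp
#include <cmath>
#include <vector>

// Compute cascade split distances between nearZ and farZ (world units).
// lambda in [0,1] blends uniform (0) and logarithmic (1) spacing;
// higher lambda concentrates resolution near the camera, which is
// usually what you want in VR.
std::vector<double> CascadeSplits(double nearZ, double farZ,
                                  int cascades, double lambda) {
    std::vector<double> splits;
    for (int i = 1; i <= cascades; ++i) {
        double f = static_cast<double>(i) / cascades;
        double logSplit = nearZ * std::pow(farZ / nearZ, f);
        double uniSplit = nearZ + (farZ - nearZ) * f;
        splits.push_back(lambda * logSplit + (1.0 - lambda) * uniSplit);
    }
    return splits;
}
```

Fewer cascades with a higher lambda often looks better per millisecond than many evenly spaced ones.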
Contact shadows help add finer detail to small models like foliage and grass. This is useful for the sunlight, but you should really test whether it works in your situation.
Depending on your project it might be possible to use a mixed lighting approach, where you bake only the indirect lighting and keep dynamic direct lighting at runtime for certain lights. In our case mixed lighting might have worked in theory to some degree, but because of our gameplay and large environments it was much better and easier to go fully dynamic.
Space station level in Planetrism: fully dynamic lighting with multiple local lights. The trick is that all of them have a very small radius, no shadow casting, and distance-based culling. All of this helps keep the performance smooth at 90+ fps.
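The distance culling mentioned in the caption can be sketched like this (standalone C++ with a made-up PointLight struct, not engine code; in UE the equivalent knobs are the light's attenuation radius and its max draw distance settings):

```cpp
#include <cmath>
#include <vector>

// Hypothetical light description for illustration only.
struct PointLight {
    float x, y, z;      // world position
    float radius;       // attenuation radius
    float maxDrawDist;  // cull the light entirely beyond this distance
    bool castsShadows;  // should stay false for most local lights in VR
};

// Keep only the lights worth rendering from this camera position:
// a light is skipped once its attenuation sphere is farther away
// than its draw distance.
std::vector<const PointLight*> VisibleLights(
        const std::vector<PointLight>& lights,
        float camX, float camY, float camZ) {
    std::vector<const PointLight*> out;
    for (const PointLight& l : lights) {
        float dx = l.x - camX, dy = l.y - camY, dz = l.z - camZ;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (dist - l.radius <= l.maxDrawDist)
            out.push_back(&l);
    }
    return out;
}
```

Small radii do double duty here: they limit how many pixels each light shades and make the lights cheap to cull.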
Static lighting usually gives the best visual results, especially for indirect lighting, and with good runtime performance, because you can spend as much time as you want baking offline and at runtime it's just textures. Lightmap sizes can become a problem in some cases, but then you should reconsider your lighting setup. Dynamic lighting in VR usually means a lot of tricks to hit that 11 ms, and lighting can't be allowed to use all the power. Large environments and dynamic gameplay usually force lighting to be more or less dynamic, which means it will cost more runtime performance. That's why lighting is the first thing to check when performance is bad.
Post-processing effects are also very costly. Some effects are not suitable for VR, like motion blur and depth of field, because both are artificial attempts to mimic real-life effects. Maybe in the future we can do these sorts of effects properly once we have good eye tracking and better sensors, but for now it's better to turn them off. Also note that some non-VR post-processing effects might not work in VR at all because of the nature of VR rendering and how the image is mapped onto the screens.
SSAO (Screen Space Ambient Occlusion) can cost a lot depending on the settings. SSAO has a huge visual impact with fully dynamic lighting, but it also creates visual errors due to its screen-space nature. In VR you notice these errors more, especially at lower settings, which is why it's usually good to turn it off too. There are a lot of different AO methods available, but the performance impact is usually too high for VR.
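In UE these effects can be switched off project-wide through the renderer settings (console variable names as of UE4; verify against your engine version):

```ini
; DefaultEngine.ini - disable post effects that tend to hurt VR
[/Script/Engine.RendererSettings]
r.DefaultFeature.MotionBlur=False
r.DefaultFeature.AmbientOcclusion=False
; the per-feature quality cvars can also be set from the console at runtime:
; r.MotionBlurQuality=0
; r.DepthOfFieldQuality=0
; r.AmbientOcclusionLevels=0
```

The runtime cvars are handy if you want to expose these as in-game options rather than hard-disable them.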
Top is without SSAO, bottom is with SSAO on. As you can see, it helps ground objects better with fully dynamic lighting. The difference is not huge because of the minimal AO intensity.
Because Planetrism is a multi-platform game, we ended up giving players an option to turn various post-process effects on and off.
This is something devs usually leave at the default, but it has a huge impact on the VR experience. Anti-aliasing is important for solving geometric and specular aliasing, but each method has drawbacks that can eat too much performance or cause visual issues. The available options also depend on your renderer.
In the case of UE, the default method is TAA (Temporal Anti-Aliasing). It gets rid of the sawtooth edges, but the way it works also tends to blur the result, and there is visible ghosting on moving objects. That makes it far from optimal for VR, where there is a lot of small movement, and current headset resolutions are low enough that we don't want to introduce any more blur. TAA's performance impact is modest, so it works well for non-VR games, but for VR it's not the best choice.
The top result is without TAA, the bottom with TAA on. It's hard to tell the difference here, but TAA makes the image smoother with no visible specular aliasing, while the overall feel turns blurry when combined with the grass wind movement.
We could use supersampling, rendering the game at a higher resolution and then sampling down to get rid of the aliasing. This is the brute-force way of solving the issue, but it works if your system has enough power. Unfortunately the VR frame rate needs to stay above 90 in most cases, so there usually isn't much room for it; the hardware is already doing a lot of work.
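Supersampling cost grows with the square of the factor, because both axes scale. A quick sanity check (a per-eye resolution of 1080x1200 is used as an example; actual panel resolutions vary per headset):

```cpp
#include <cstdint>

// Shaded pixels per eye per frame at a given supersampling factor.
// Both width and height scale linearly, so pixel count (and roughly
// the GPU fill cost) grows with the factor squared: 2.0x supersampling
// means 4x the pixels.
std::uint64_t SupersampledPixels(std::uint32_t width, std::uint32_t height,
                                 double factor) {
    double w = width * factor;
    double h = height * factor;
    return static_cast<std::uint64_t>(w * h);
}
```

Remember to double whatever number you get, since VR renders one image per eye.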
Right now the best AA option for VR is Multisample Anti-Aliasing (MSAA), because it keeps the results sharper than TAA. The drawbacks are that it costs more than TAA, although it's cheaper than supersampling, and in UE it's only supported with the forward renderer.
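In UE4 that combination looks roughly like this in the project config (cvar names as of UE4; since MSAA needs the forward renderer, both settings go together):

```ini
; DefaultEngine.ini - MSAA, forward renderer only
[/Script/Engine.RendererSettings]
r.ForwardShading=1
; 0 = none, 1 = FXAA, 2 = TAA, 3 = MSAA
r.DefaultFeature.AntiAliasing=3
; samples per pixel; 2 or 4 are the typical choices
r.MSAACount=4
```

Four samples is a common sweet spot; eight is usually too expensive for a VR frame budget.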
On top of these there are various other methods, some of them mixed approaches. If you can, it's a good idea to offer players different options to choose from depending on their systems and preferences.
Draw calls, shader complexity, overdraw and transparency
Draw calls will eat your CPU performance if you don't pay attention to them. The CPU batches various rendering tasks and sends them to the GPU to render. Roughly, every visible material equals at least one draw call, in some cases more. If you have lots of visible models, and each of those models has lots of materials, you have a lot of draw calls unless you batch them. Also know that lighting can in some cases double or triple the draw calls, so pay close attention to this; it can create bottlenecks that leave your game CPU bound. That's something you don't want, because games need the CPU for much more than handling draw calls.
The easiest ways to reduce draw calls are to batch materials together, avoid using too many individual models, use instancing where possible, cull models based on distance, or, if you can afford it, use occlusion-culling algorithms. If you are using modular elements, find a middle ground between the number of models you have and how well they cull, to keep the draw call count optimal.
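The win from batching by material can be illustrated with a toy model (standalone C++ with a hypothetical MeshInstance type; real engines batch on full material and shader state, and instancing adds per-instance transforms on top of this):

```cpp
#include <map>
#include <string>
#include <vector>

// Minimal stand-in for a renderable object: all that matters here
// is which material it draws with.
struct MeshInstance {
    std::string material;
};

// Worst case: every mesh is its own draw call.
std::size_t NaiveDrawCalls(const std::vector<MeshInstance>& meshes) {
    return meshes.size();
}

// Batched/instanced case: meshes sharing a material can be submitted
// together, so the call count collapses to the number of distinct
// materials in view.
std::size_t BatchedDrawCalls(const std::vector<MeshInstance>& meshes) {
    std::map<std::string, std::size_t> byMaterial;
    for (const MeshInstance& m : meshes) ++byMaterial[m.material];
    return byMaterial.size();
}
```

This is why a forest of one instanced rock and one instanced tree can be dramatically cheaper on the CPU than a handful of unique props with unique materials.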
Shader complexity is another thing to keep in mind. You can do amazing things with materials in Unreal Engine, but it's important to pay attention to the instruction count. It's also a good idea to make cheaper versions of the materials on important, costly assets and use them with LODs. This is very useful in open-world games, where you can turn a lot of expensive material features off because you wouldn't even see the difference at a distance.
In this case the cave foliage is the heaviest to render due to its shading model and masked blend mode. You can see the overdraw issue on the right, where it's starting to turn almost white.
Transparency is the thing that can kill your performance. You can't fully avoid transparent materials; for things like particles or water surfaces you might not have any other option. So try to use the cheapest transparency option available: most of the time you don't need specular highlights on things like dust particles, while glass and water surfaces usually benefit from them.
That water looks pretty nice, but with transparency you are paying the performance cost.
Tri counts and model silhouette in VR
This is something people ask about a lot: what is a good polycount, how many triangles can I have in the scene, and so on. Tri count is not the issue nowadays, or it shouldn't be. The first thing, as always, is to understand where the detail matters: a character model needs more detail than the background rocks, and so on. You should also know that models like characters usually use meshes driven by skeletons for animation, so their performance footprint is higher than that of static meshes. Because of this it's important to use LODs to reduce the scene's tri count, and to consider when to use static meshes, perhaps adding movement with shaders instead of skeletal meshes where possible.
Usually the answer to the first question is: use enough polygons to achieve a good silhouette, and that's it. In non-VR games you can get away with fewer polygons because the camera might never get very close, but in VR it's harder to limit that, and we shouldn't limit how close players can look, because that is important for keeping the immersion alive. That's why you should have higher-poly models for things you know the player will see up close, like items and interactive props the player can pick up and inspect.
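LODs are what make those higher-poly hero models affordable. Here is a toy distance-based LOD pick to show the idea (standalone C++; UE actually switches LODs based on projected screen size rather than raw distance, which behaves better in VR, but the principle is the same):

```cpp
#include <vector>

// Pick an LOD index from a list of switch distances: switchDistances[i]
// is the distance at which LOD i+1 takes over. Returns 0 (full detail)
// up close and higher indices (cheaper meshes) further away.
int SelectLOD(const std::vector<float>& switchDistances, float distance) {
    int lod = 0;
    for (float d : switchDistances)
        if (distance >= d) ++lod;
    return lod;
}
```

The expensive LOD 0 mesh then only ever costs you anything when the player is actually close enough to appreciate it.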
In this case, for the fire extinguisher, I added a lot of detail at the geometry level to keep the model detailed enough, because the player can pick it up and inspect it very closely in VR. Normal maps mostly handle the small details that don't affect the silhouette. The polycount is high, but only for LOD 0.
You should also know that normal maps don't have as much power in VR as we are used to thinking. In non-VR games we could fake a lot of things with normal maps to give the illusion of depth. In VR that illusion is very easy to break, so in some cases you should add detail to the geometry instead of relying too much on normal maps. The same goes for parallax mapping, which looks wrong in VR at high intensity. Tessellation works visually, but the performance impact is usually far too high for VR.
So, long story short: you can still use the same techniques as before, but keep in mind that VR lets players see things very close up, and you have to aim for 90+ fps most of the time. Good luck with that!