18th September 2015

VR Impact on Development

Another interesting aspect of VR is how it impacts the development process and how you structure your workplace when building VR experiences. Valve’s normal “cabal” structure is pretty well documented elsewhere but we have had to adapt how we organize our desks for VR. It certainly helps that all of our desks are on wheels so we could just move things around to adapt.

The key thing for us is that it’s important for every developer to be able to quickly stand up from their desk and try out the code they are writing. I should say that, depending on what you are working on, probably 50% of the time or more we just end up putting on a headset at our desks and trying things out there. But when working on things that are more about interactions, it’s important to be able to stand up and move around a bit. We have structured areas so that a bunch of desks (4-8) surround a common area that can be used for this testing. So all the desks are around the outside, and when you need to try something out you just stand up and use the space.

Depending on the group these spaces are different sizes- some of the people working on things that need more moving around have spaces that more closely approximate the maximum tracked area, but I rarely need that and sit with a few others around a smaller space. Each area has its own pair of base stations that we can all share, but we each have our own HMD, which I realize is a luxury that most developers don’t have yet (balanced by the fact that we have to try out new builds of stuff all the time to make sure it works before it gets to others). Right now, because base stations can interfere with each other, each area is curtained off from the others. With the final product this shouldn’t be necessary since the sensors will be able to reject incorrect signals.

Of course with shared spaces it’s important to share- we can’t all be testing and bumping into each other at the same time. There is also an important rule that the person with the HMD on their face has right of way. If you are moving through the area it’s your responsibility to avoid them, since they can’t see you (for now).

posted in Developers, Technology, VR | 0 Comments

25th August 2015

VR Performance and CPUs- Cores vs Frequency

I feel like I’ll probably write a bunch about performance and VR as we go along since it’s such a complex topic and there is so much we still need to learn. PC and gaming performance has always been an interesting topic, but whereas in the past it was mostly about “can I average 50fps or 55fps on this game?”, now it becomes “can I consistently hit 90fps 99.9% of the time so I don’t get sick?”. 100fps doesn’t matter, 92fps doesn’t matter, but 89fps just sucks. 90fps becomes a really sharp performance line.

So far we only have really preliminary performance data since it depends so much on the content, and all of that is in progress and hasn’t really been optimized. People also haven’t done the work to have their experiences scale up and down depending on the PC they are running on. For example, with GPU performance you can do a lot by changing the size of the render target, which is normally 1.4x the size of the real physical panels in each dimension (so a total of 1.96 times the pixels). Having at least 1.4x scale gives you ideal anti-aliasing and sharpness when the panel distortion is applied, but if you don’t have enough GPU performance you could reduce that at the expense of some sharpness and graphical quality. Not ideal, but it sure beats missing frames.
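To make that concrete, here’s a minimal sketch against the OpenVR C++ API. The runtime’s recommended size already includes the 1.4x-per-axis headroom; the renderScale knob is a hypothetical app-side quality setting, not part of the API:

```cpp
#include <openvr.h>
#include <cstdint>
#include <cstdio>

int main()
{
    vr::EVRInitError err = vr::VRInitError_None;
    vr::IVRSystem* hmd = vr::VR_Init(&err, vr::VRApplication_Scene);
    if (err != vr::VRInitError_None)
        return 1;

    // The recommended size already bakes in the ~1.4x per-axis headroom
    // needed for good sharpness after the lens distortion is applied.
    uint32_t recommendedWidth = 0, recommendedHeight = 0;
    hmd->GetRecommendedRenderTargetSize(&recommendedWidth, &recommendedHeight);

    // Hypothetical quality knob: trade sharpness for GPU headroom by
    // sizing the per-eye render targets below the recommended size.
    float renderScale = 0.8f; // 1.0 = recommended, <1.0 = cheaper/softer
    uint32_t width  = static_cast<uint32_t>(recommendedWidth  * renderScale);
    uint32_t height = static_cast<uint32_t>(recommendedHeight * renderScale);
    printf("per-eye render target: %ux%u\n", width, height);

    vr::VR_Shutdown();
    return 0;
}
```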

If CPU becomes the limiting factor your choices are a little more complicated. For one, games are traditionally poorly multi-threaded and thus don’t take good advantage of additional cores, which is where most CPU performance improvement has come from lately. When dual-core CPUs first came out most games didn’t take advantage of them at all, and while the situation has improved quite a bit recently, few games seem to take full advantage of 4-core CPUs, never mind some of the 6-core machines that work great as development boxes (compilers LOVE 6 cores now!).

One of the issues in picking the ideal CPU is that CPUs are typically limited by their power consumption / heat generated, so a 6-core CPU is usually limited to much lower frequencies than a 4-core CPU. While it has much more total computing resources, if the given task isn’t able to keep at least 6 threads busy it will become gated on the performance of a single core, which is higher in the chip with fewer cores. So I suspect that for many gaming scenarios the high-end 4-core CPUs will be better for now than the otherwise better 6-core ones, until games can make better use of threads. I’m looking forward to trying out the new Intel Core i7-6700K, the new 4-core 4.0GHz Skylake CPU that should hopefully be the fastest yet for these scenarios.
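To make the tradeoff concrete, here’s a little back-of-envelope Amdahl’s-law calculator. The clock speeds and parallel fractions are purely illustrative assumptions, not benchmarks of real parts:

```cpp
#include <cstdio>
#include <initializer_list>

// Amdahl's-law estimate: a hypothetical 4-core part at 4.0GHz vs a
// hypothetical 6-core part at 3.3GHz. "parallelFraction" is how much of
// the frame's CPU work actually scales across threads.
double effectiveSpeed(double ghz, int cores, double parallelFraction)
{
    double serial   = 1.0 - parallelFraction;   // stuck on one core
    double parallel = parallelFraction / cores; // spread across cores
    return ghz / (serial + parallel);           // higher is better
}

int main()
{
    for (double p : {0.25, 0.50, 0.75, 0.95})
    {
        // Until the workload is very parallel, the higher-clocked
        // 4-core chip wins; the 6-core only pulls ahead near the end.
        printf("parallel=%.0f%%  4c@4.0GHz=%.1f  6c@3.3GHz=%.1f\n",
               p * 100,
               effectiveSpeed(4.0, 4, p),
               effectiveSpeed(3.3, 6, p));
    }
    return 0;
}
```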

posted in Graphics, Performance, Technology, VR | 0 Comments

18th August 2015

VR API support in Games

Various recent discussions have been somewhat confusing about what it takes to add VR support to a game, and specifically what it takes to support multiple VR devices. As usual there are several layers to the issue so some degree of confusion is pretty understandable.

At its simplest level, VR support comes down to two key aspects (both sketched in code below)-
1) Rendering a stereo view of the scene into two textures (or one texture with one eye on each side) and handing that off to the VR API. At that point the VR API does a bunch of complex stuff, including some corrections, distortion for its optics (which can be calibrated for individual devices) and compositing additional system UI. But none of that latter stuff is something app developers have to worry about.
2) Updating the head position as the HMD is tracked. The underlying VR API should provide the game with a “pose” that represents the exact location and orientation of the head so that as you move your head around in the real world your view will exactly correspond.
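For the curious, here’s roughly what that per-frame loop looks like against the OpenVR C++ API. This is a sketch, not production code- RenderEye and GetEyeTextureHandle are hypothetical engine hooks, and the texture-type enum name varies between SDK versions:

```cpp
#include <openvr.h>

// Hypothetical engine hooks - stand-ins for your real renderer.
void  RenderEye(vr::EVREye eye, const vr::TrackedDevicePose_t& hmdPose);
void* GetEyeTextureHandle(vr::EVREye eye);

void RenderVRFrame()
{
    // (2) Block until the compositor hands us the predicted poses for
    // this frame; the head pose lives at k_unTrackedDeviceIndex_Hmd.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                     nullptr, 0);
    const vr::TrackedDevicePose_t& head = poses[vr::k_unTrackedDeviceIndex_Hmd];

    // (1) Render the scene once per eye into two textures...
    RenderEye(vr::Eye_Left, head);
    RenderEye(vr::Eye_Right, head);

    // ...then hand them to the runtime, which does the distortion
    // correction and system-UI compositing for you.
    vr::Texture_t left  = { GetEyeTextureHandle(vr::Eye_Left),
                            vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    vr::Texture_t right = { GetEyeTextureHandle(vr::Eye_Right),
                            vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    vr::VRCompositor()->Submit(vr::Eye_Left,  &left);
    vr::VRCompositor()->Submit(vr::Eye_Right, &right);
}
```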

At the above level it should be very easy for VR game authors to target multiple vendors’ headsets. If you are using Unity or Unreal Engine, support for both the Oculus and OpenVR SDKs is built-in / available with the addition of a simple plug-in. Better yet, OpenVR is designed with a driver model for different hardware, and our goal is to support any major HMD if you use it (although we are limited in our ability to promise specific support because vendors can change their APIs in breaking ways at any time and then we need to update).

Beyond the basics of the headset and scene rendering are other issues around the interactions that the game supports. If your game doesn’t use controllers and is a basic seated or simple standing experience, it should pretty easily work with any headset. In a similar fashion, if you use non-tracked controllers like the Xbox or Steam controllers, anyone can buy one of those and use it with any headset. Tracked controllers are a more complicated problem- they each have some more specific behaviors and are tied to a specific tracking system. So in the short run it would be hard to mix and match controllers, and if a specific game expects the HTC Vive trackpad as part of its interaction, or another game is designed around Oculus Touch-specific features, those might be harder to support across both platforms.

Our goal with SteamVR/OpenVR all along has been to develop technology and make it available to others in the VR community, so that we can have high quality VR that doesn’t make people sick and where our customers can buy any VR equipment and enjoy their games on it. We hope Lighthouse will become a standard for tracking so that you can use controllers from many vendors (in many styles) to interact with all your games, but the OpenVR APIs can also work with non-Lighthouse tracked controllers. Hopefully the VR industry can avoid repeating the early days of the 3D accelerator market, where games were created that only worked with one vendor’s hardware.

posted in Technology, VR | 0 Comments

11th August 2015

Cloudhead on Locomotion in VR

Cloudhead Games has some great ideas about locomotion in VR in this video.

This video brings up a couple of interesting topics worth discussing a bit more.
The first is about some details of the chaperone system. The Cloudhead guys discuss how they show some in-game hints about the bounds of your real-world space, and I thought it would be interesting to discuss that in the context of the overall system.

Chaperone exists first and foremost to help people feel comfortable moving around in room-scale VR without having to worry about hurting themselves by running into something in the real world. One interesting aspect of this is that the chaperone system can really help the feeling of presence, because once you get used to it you feel much more free to move around in your space. We do a ton of play testing and demos, and pretty commonly see that for the first couple of minutes people move around with a lot of hesitation, but once they get comfortable they let the experience they are in take over.

Within the chaperone system we think of two boundaries. The hard-bounds are the real limits of the physical space. If you are in an empty room, that would be the walls. If the room isn’t empty it might be some artificial line that represents the furniture. But the point is that hard-bounds are a real “do not pass” line. Hard-bounds are implemented in the OpenVR system, and when the player gets near them we draw the shape of the room over anything the current experience is drawing, right in the compositor. This is actually somewhat intentionally presence-breaking. If you are about to collide with something we want to bump you out of the immersive experience right away.

We also experimented with drawing just a partial wall in front of you for the hard-bounds, to tone it down a bit. I didn’t like that approach much because once I was getting near those limits I really wanted to see the shape of the whole room to help reorient myself.

Soft-bounds however are another matter. Soft-bounds are another set of lines, inset a bit from the hard-bounds, and they represent the space you should be inside most of the time when playing. When you are standing inside the soft-bounds, you shouldn’t be able to easily whack things outside the hard-bounds. Soft-bounds aren’t drawn by the system because we don’t have a good way to do it automatically and keep you immersed in the experience you are in. However, each experience can ideally represent the soft-bounds in some way that is consistent with its own look and that doesn’t really disrupt your overall presence. One great example of this is TheBlu from WEVR. You are standing underwater on the deck of a ship, and it’s safe to walk anywhere up to the railing in one direction and up to a bunch of junk and wires in the other direction. If your play-space is smaller they could pretty easily move the junk and wires so that the clear area of the deck still represents the safe space. Other games might need to use things like subtle glowing lines on the floor (as seen in the Cloudhead video).
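To place that kind of dressing correctly, an experience can query the user’s configured play area through OpenVR’s chaperone interface. A minimal sketch, assuming VR_Init has already succeeded, with AlignDeckDressingTo as a hypothetical engine call:

```cpp
#include <openvr.h>
#include <cstdio>

void PlaceSoftBoundsProps()
{
    vr::IVRChaperone* chaperone = vr::VRChaperone();
    if (!chaperone)
        return;

    // Overall rectangle of the play space, in meters, centered on the
    // user's calibrated origin.
    float sizeX = 0.f, sizeZ = 0.f;
    if (chaperone->GetPlayAreaSize(&sizeX, &sizeZ))
        printf("play area: %.2fm x %.2fm\n", sizeX, sizeZ);

    // The four corners, if you need the exact quad rather than the size.
    vr::HmdQuad_t rect;
    if (chaperone->GetPlayAreaRect(&rect))
    {
        // e.g. move the ship railing / cable piles so the walkable deck
        // matches the quad (hypothetical engine call):
        // AlignDeckDressingTo(rect);
    }
}
```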

So the key is- use subtle cues to give the player hints about the right place to play. And when they move outside those limits, give them less subtle warnings to keep them safe and help them reorient themselves in the space.

The Cloudhead video also shows some great ideas about dealing with virtual walls. Obviously in VR you can’t prevent the user from moving the camera into some object just because it’s supposed to be solid in the virtual world. Remember that the camera IS their head, so if they move their head you have to move the camera. But that doesn’t mean you can’t do something about it. Now, sometimes it’s pretty fun to just let people put their heads inside your geometry and see the cool insides (people do this all the time with the robot in our Aperture demo and it’s a lot of fun). But if you don’t want to allow that, you can either blur or fade the world when they put their head into the object. I especially like the Cloudhead solution of teleporting you back to some safe place if you stay in there for a while- the problem with just fading or blurring is that it can be disorienting to lose that visual reference, so it can be uncomfortable or actually difficult to find the correct direction to back out.
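Here’s a rough sketch of the fade-plus-teleport-fallback idea. Everything engine-side (DistanceInsideGeometry, SetScreenFade, TeleportToLastSafeSpot) is a hypothetical stand-in, and the 10cm and 2-second numbers are just guesses you’d tune:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Hypothetical engine hooks.
float DistanceInsideGeometry(const Vec3& headPos); // <= 0 when outside
void  SetScreenFade(float opacity);                // 0 = clear, 1 = black
void  TeleportToLastSafeSpot();

void UpdateHeadInGeometryFade(const Vec3& headPos, float dt,
                              float& secondsInside)
{
    // How deep the head has penetrated solid geometry, in meters.
    float depth = DistanceInsideGeometry(headPos);
    if (depth <= 0.f)
    {
        secondsInside = 0.f;
        SetScreenFade(0.f);
        return;
    }

    // Fade out over the first ~10cm of penetration so the player gets
    // feedback immediately but a grazing touch isn't jarring.
    SetScreenFade(std::min(depth / 0.1f, 1.f));

    // If they linger fully inside, pop them back somewhere safe
    // (the Cloudhead-style fallback) instead of leaving them blind.
    secondsInside += dt;
    if (secondsInside > 2.f)
        TeleportToLastSafeSpot();
}
```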

Finally, I have heard some reactions to the video from people expecting that teleportation for locomotion will diminish presence. I’m sure this varies a bunch from person to person, but for me at least it ends up feeling very natural. It’s hard to describe, but after using it for hours I end up teleporting around fairly quickly and my brain doesn’t even think about it that much anymore. In contrast, when I try a VR title that moves me around by gliding it feels very unnatural- something about the smooth motion seems wrong (and then I get sick and things feel even more wrong…)

It all goes back to part of what is so exciting about working on VR right now- no one really knows the right way to do all this stuff. The space is wide open for people to come up with new ideas and test them to see what really works.

posted in Technology, VR | 0 Comments

27th July 2015

Quick thoughts on GPUs for VR

I see lots of speculation about what graphics card you need for good VR experiences. At this point we are at least a few months away from availability of commercial VR systems for PCs, so the advice I can give is pretty easy right now- wait if you can! If you absolutely positively need to buy a new graphics card now, go ahead. But if you can wait until you are actually receiving your VR headset, things will be cheaper and there will be more information about upcoming cards, relative performance for VR scenarios, and the needs of various titles.

In the end the answer won’t be straightforward because it will all depend on content. Can you run VR with an HTC Vive on an older graphics card? Probably, if all you want to do are simpler experiences. Heck, you could almost get the required 90fps on an integrated GPU if your scene is just a simple cube (but maybe not quite). But the key thing is that you REALLY NEED to make 90fps. All the time. Techniques like async time warp are interesting, but they have really visible artifacts when used with positional tracking, and if you have tracked controllers they will be VERY noticeable (since with every timewarp your controllers will glitch in their positions). Good experiences in VR are about consistent smooth motion that 100% matches what you expect, even more than about realistic rendering. For example, one of the demos we show is Job Simulator, where everything is a cartoon representation of objects. It doesn’t look like reality, but the Owlchemy guys did such a good job with it that it FEELS like reality.

So what am I going to do for my personal rig? First of all, wait as long as possible. Second, I’m not going to cut corners. Like most tech, VR equipment will get much less expensive in the future. But this is the first generation of real products and it’s not going to be inexpensive for a quality setup. In the past I have never bought a GPU for much over $300, but I’m expecting to spend in the $650 range for either a 980 Ti / equivalent, or maybe dual slightly smaller GPUs, since stereo rendering can make such good use of dual-GPU setups. I know it’s a lot, but then again 15 years ago I used to have to spend $4000 for a high-end PC, and the whole system with an amazing GPU should be less than $2000 now. Until now the difference between that $300 GPU and something better was really hard to tell for most people, but with these VR experiences it becomes worth it.

posted in Graphics, Technology, Virtualization | 0 Comments

27th July 2015

VR Growing Pains

One of the things we are very passionate about at Valve is helping the industry create a whole new set of VR experiences that people love. We want to make sure that the exposure that the world gets to VR is really great and it doesn’t make people sick/sore/unhappy.

Surprisingly this is actually somewhat controversial. While many people can get sick easily if they are in VR experiences that move the world around them without the player actually moving, there are some people who are (mostly) immune to it. They can play some of the FPS games adapted for VR or just “tough it out” to try experiences. I’m the last person who is going to tell them they can’t do that to themselves, but I do get concerned when these people are excited about sharing those experiences with their friends, and I certainly advise software developers working on new VR titles to make experiences that will be enjoyed by a wide audience, not just that resistant 20%.

Another way to think about it is that you don’t want the reviews of your game to read like these (actual reviews)-
“New X Title ‘Y’ Is Beautiful and Engaging and It Makes My Neck Hurt”
“With a little luck, (and dramamine for motion sickness) the VR mode will make it to the final release”
“I intentionally limited the amount of barrel rolls and hard turns as to avoid the dreaded ‘hot sweats’”
“In the demo I played stick-yaw was the method of turning, which made me a little motion sick by the demo’s end”
“X will test your skills… and potentially your stomach”

The bottom line is that the real audience isn’t going to tough it out, train themselves to avoid sim sickness or anything like that. If your game doesn’t get this right your audience will be extremely limited. There are already some fairly well known techniques to do this right but we are all learning more- again, the key is honest testing.

posted in Technology, VR | 0 Comments

7th June 2015

Macs and VR

The funny thing is that Macs should be great for VR. On average they tend to have slightly better graphics than the typical PC because Apple controls the ecosystem and has always needed good baseline graphics performance to support the visuals in their OS.

HOWEVER, quality VR is really hard to run on middle-of-the-road GPUs. And to be clear, middle-of-the-road in this case is not 50% of the performance of the high end (what you would see from a 970 or a 980M). It’s 10%. According to the Steam Hardware Survey, 70% of Mac gamers are running on laptops. And really no Macs, at least as configured by Apple, short of the $3999 Mac Pro models, have enough GPU to make 90 frames per second with an HMD for any reasonably complex content. The new Retina 5K display iMac can be configured with an AMD Radeon M295X that should be good enough, but that is over $2500 and doesn’t even have an HDMI output, so hooking up an HMD would need an adapter that might cause some interesting issues.

By comparison it’s pretty easy to build out a PC for about $1500 that has a top-of-the-line Core i7 and a GeForce GTX 980, which should be equivalent to what we are using for demos. I should say that most of the PC OEMs don’t make this easy at the moment- good luck, for example, finding a system on the Dell site with reasonable specs without getting a very expensive Alienware. Some of the OEMs that let you customize a bunch more are a more reasonable bet for now. Hopefully as VR becomes more of a thing in the marketplace both Apple and the PC OEMs will catch on and offer some great configurations optimized for these scenarios (which also means they will be good general gaming PCs).

posted in Apple, Technology, VR | 0 Comments

6th June 2015

VR- Getting Sick

The topic of getting sick in VR (often called “simulator sickness”) is a complex one. The simplified core of the issue is that when your visual perception doesn’t match what your body (and especially your inner ear) feels, many people feel sick. This is obviously not exclusive to VR- it can manifest in cars, airplanes, spaceflight, amusement park rides and more. Note that whatever the mechanism is for this in the brain, it isn’t 100% tuned to all exact physical experiences. Many people get sick on a roller coaster even though their motion perfectly matches what they see- the brain just didn’t evolve to be prepared for tight accelerating turns, changes in gravity and so on, so in those cases the mismatches between reality and brain-reality are enough to cause the problem.

Some people even get sick playing first-person video games without VR. The types of motions in a typical FPS, with lots of dashing around, quick turns, and strafing movements, can be especially difficult for people who experience these issues.

In VR I think of two main sources of mismatch between what your body feels and the photons entering your retina. Both can be equally important for VR experience designers to consider and both can be very complicated to get right. System-mismatch is the mismatch caused by the core VR technology. At some basic level a VR system is measuring your head’s current position, velocity and acceleration and rendering a frame. Ideally that frame will be positioned so that it exactly corresponds to your head position at the moment in time the photons hit your retina. The other type is content-mismatch. This is by-design movement of the camera/head controlled by the experience that doesn’t exactly match the player’s physical movement.

For the fundamental technology we are working on at Valve we spend a lot of effort on system-mismatch. There are a number of techniques to address this, but they involve accurate tracking of head position, prediction of future head position, reducing latency in the rendering pipeline, deferring locking in head position in the rendering pipeline, adjusting rendering at the last minute, making sure rendering actually makes framerate (since there is no worse added latency than missing a frame totally) and finally reducing latency between your GPU and the actual photons being emitted (HDMI has a fair amount of latency and this relates to why wireless displays are very difficult).
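To give a flavor of the prediction piece, here’s a sketch of asking OpenVR for a head pose predicted to the moment the photons come out, following the timing math used in the SDK’s sample code (this is illustrative, not our internal implementation):

```cpp
#include <openvr.h>

vr::TrackedDevicePose_t PredictHeadPose(vr::IVRSystem* hmd)
{
    // Time remaining until the next vsync...
    float secondsSinceLastVsync = 0.f;
    hmd->GetTimeSinceLastVsync(&secondsSinceLastVsync, nullptr);

    float frameDuration = 1.f /
        hmd->GetFloatTrackedDeviceProperty(vr::k_unTrackedDeviceIndex_Hmd,
            vr::Prop_DisplayFrequency_Float);
    float vsyncToPhotons =
        hmd->GetFloatTrackedDeviceProperty(vr::k_unTrackedDeviceIndex_Hmd,
            vr::Prop_SecondsFromVsyncToPhotons_Float);

    // ...plus the scanout delay from vsync to actual photons.
    float predictedSecondsToPhotons =
        frameDuration - secondsSinceLastVsync + vsyncToPhotons;

    // Ask the runtime where the head will be at that moment, not now.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    hmd->GetDeviceToAbsoluteTrackingPose(vr::TrackingUniverseStanding,
        predictedSecondsToPhotons, poses, vr::k_unMaxTrackedDeviceCount);
    return poses[vr::k_unTrackedDeviceIndex_Hmd];
}
```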

As I said, most of these are issues for the core VR system. As a content/experience/game developer there isn’t that much you can do to change the tracking accuracy of whatever VR hardware you are using. Your main challenge is to never miss framerate. On the HTC Vive we run at 90Hz, so overall there are about 11ms per frame. But there is plenty of overhead between various system things, the compositor, GPU scheduling, etc, so realistically you only have about 8ms to render your frame. It is critically important to design your content so that it fits within budget 100% (or at least 99.9%) of the time. Ideally game engines will help, since it’s really hard to tune content to be consistent- most environments have something simple most of the time, punctuated by some really complicated stuff occasionally (when all the explosions happen or something). If your game engine can automatically adjust and reduce render target sizes, AA, texture mips, etc, that can help a ton.
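A sketch of what that kind of automatic adjustment might look like- GpuFrameTimeMs and SetRenderScale are hypothetical engine hooks, and the thresholds are guesses you’d tune per title:

```cpp
#include <algorithm>

const float kBudgetMs = 8.0f;    // ~11.1ms at 90Hz minus system overhead

float GpuFrameTimeMs();          // e.g. from your engine's GPU timer queries
void  SetRenderScale(float s);   // rebuild eye targets at s * recommended

void AdjustQualityForFrame(float& renderScale)
{
    float gpuMs = GpuFrameTimeMs();

    if (gpuMs > kBudgetMs * 0.9f)
        renderScale -= 0.05f;    // back off quickly before dropping a frame
    else if (gpuMs < kBudgetMs * 0.7f)
        renderScale += 0.01f;    // creep back up when there's headroom

    renderScale = std::max(0.65f, std::min(renderScale, 1.0f));
    SetRenderScale(renderScale);
}
```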

Content-mismatch is however a more complicated issue. In general I’m a fan of never making your players sick. It’s true that there are some people who are less susceptible, but it’s not clear to me that they are actually comfortable after spending a bunch of time in an experience that is questionable. I’d love to do some research where we wire some of them up to heart-rate, breathing, and other monitors and have them play an experience that pushes the boundaries for a full hour. Personally I’m middle of the road- I’m far from immune to getting sick but not nearly as bad as some. Part of what confuses the debate so far is that so few people have extended time in VR. When I’m playing an experience that moves the camera I’m usually fine for 2 or 3 minutes (typical demo length), feeling just a little “off” at 5 minutes, and it doesn’t get bad until a bit longer. But VR is still not going to be successful if 80% of people are feeling even a little uncomfortable after 5 minutes. They might not even realize they feel sick, but they certainly won’t enjoy the experience the same way they would in well-designed experiences that didn’t have these issues.

With all of the above as preface, it’s useful to discuss what you can actually do. Most of the demos that Valve showed at GDC and since then had almost no forced camera movement. We wanted to be especially careful to make sure people had great experiences while we are still learning what works. At this point I don’t think we can definitively say that we know what works, but there are a bunch of techniques that look promising.

In general it seems fine to have the player in a slow-moving elevator with good visual references. Cloudhead Games does this well in The Gallery demo. Cockpits seem to help, so driving and flying airplanes look like they might work when done carefully (with the caveat that many people get sick in the real thing in these scenarios). The agency of directing the vehicle (car/airplane) yourself seems to help some. I have also seen good results with teleporting for locomotion- I often build prototypes that let you use one of the controllers to point to where you want to be and press a button to teleport yourself there, to move over distances greater than the physical space you have available.
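A rough sketch of that kind of teleport prototype, with all the engine types (Vec3, Raycast, FadeScreen, SetPlayspaceOrigin) as hypothetical stand-ins:

```cpp
struct Vec3 { float x, y, z; };

struct RayHit { bool hit; Vec3 point; bool walkable; };

// Hypothetical engine hooks.
RayHit Raycast(const Vec3& origin, const Vec3& direction);
void   FadeScreen(float seconds);          // very quick fade out/in
void   SetPlayspaceOrigin(const Vec3& p);  // move the tracked space

void TryTeleport(const Vec3& controllerPos, const Vec3& controllerAim,
                 bool buttonPressed)
{
    // Aim a ray from the controller; while aiming you would normally
    // draw an arc or marker at the candidate destination.
    RayHit target = Raycast(controllerPos, controllerAim);
    if (!buttonPressed || !target.hit || !target.walkable)
        return;

    // A very short fade hides the discontinuity; the jump itself is
    // instant, which (unlike gliding) gives the inner ear nothing to
    // disagree with.
    FadeScreen(0.1f);
    SetPlayspaceOrigin(target.point);
}
```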

RoadToVR has some interesting stuff about experimenting with locomotion at the Austin VR Jam here.

posted in Technology, VR | 0 Comments

5th June 2015

VR Topics and Performance

The really exciting thing about working on VR at the moment is just how little we actually know. It really feels like the good old days of first working with a mouse, trying to figure out this brand new thing and having to learn how users interact with it. Some of my colleagues have been working on this stuff for years and certainly have learned a lot, but since real mass-audience commercial products are not really on the market yet, it’s early to think you actually have the answers.

There are tons of controversial topics at the moment- performance requirements, issues around making people sick, input, interactions, APIs, support for multiple hardware platforms, and more. I’ll try to touch on a couple of these things, but keep in mind that while it’s a subtle distinction these comments are a bit more “a few things we have learned” rather than “what we know”. And even more importantly I’m reluctant to say “you need to do it this way” because people are still inventing great ways to make stuff work.

Having said that, I’ll touch on some performance stuff. A bunch of people worked at Valve for years developing experimental hardware to try to understand the basic characteristics of what a VR system needs to really create a solid, comfortable sense of presence. Of course there are lots of caveats, and I’ll go into the details of what I think comfortable means, but for me it means a display that updates at 90 frames per second with really solid tracking, so that as you move your head the photons that hit your eyes really correspond to what would normally be happening as you move your head in the real world.

Hitting that 90fps can be quite a performance challenge. When I’m playing games on my normal monitor I tend to prioritize beautiful rendering over frame rate. I play on my 30″ monitor and crank up the quality settings, and if the frame rate ranges between 25fps and 50fps, fine. VR is a different beast- if you don’t hit 90fps consistently your users will feel sick. As they move, the world won’t keep up with what the brain expects to see, and for most people this can cause motion sickness, which really ruins the experience.

So it’s critical that you hit 90fps. There are a bunch of techniques that game engines need to adopt to accomplish this, since they traditionally have been designed to run at a fixed quality level with variable frame rate. But there are also questions about what hardware you need for this, and it’s difficult to provide clear answers. Our demos at GDC all ran on a powerful gaming PC (although one that used normal parts) with an NVIDIA GTX 980 GPU. We wanted to show some high-end experiences, and preparing demos for an event like that doesn’t give you much time to optimize performance. But you could certainly run a bunch of great VR experiences on a machine with lower-end parts. I guess the difficult part is just that it’s so hard to provide clear guidelines on what hardware you need, given all the variables of the different kinds of content, how much optimization will happen before release, and the need to hit that frame rate with a lot of pixels consistently.

posted in Gaming, Technology, VR | 0 Comments

4th June 2015

15 Years of the Web

(this was written by me back in January 2009 but I didn’t hit publish at the time… At this point it’s past 20 years…)

15 years ago, in January 1994, was the first time I saw the web. Other folks can use whatever dates they want to mark the history of the web- it certainly started earlier, back in 1991, or in 1993 when NCSA Mosaic came out, or any of several other dates.

The timing of the web is more personal for me in two ways. First of all, it’s been a pretty central part of my professional life pretty much continuously since 1994. With a couple of small exceptions, just about every project I’ve been involved in for the past 15 years has involved the web in some way. But beyond that, the web has fundamentally changed day-to-day life in a way that few other technologies have.

There are many technologies that I just didn’t “get” for a long time (IM and blogs to pick on two of the more embarrassing examples), but the web is something that seemed important instantly. Our experiences with the web are dramatically different than they were back in 1994, but the basics- HTML, HTTP, server side applications, browsers, client side UI projected from the server all haven’t fundamentally changed since then. What was obvious about this set of technologies was that this was an amazingly malleable platform, one consistent set of parts that could serve as the basis for all of these different kinds of things. Sometimes it can be frustrating that these core pieces are so flexible- it means that we have continued to evolve them, sometimes painfully, rather than being able to jump to a new “better” way to do things. But in the end, these same pieces have been able to pull it all off, and their global reach makes a really high barrier for any alternatives to surpass.

Looking back on the past 15 years, I have had a sense of amazement at some of the progress. I remember just being blown away the first time I saw URLs on the side of buses and in TV ads. There have been tons of similar “wow, it’s really happening” moments along the way, but the funny thing is that most of the things people have done with the web haven’t been at all surprising- collaborative workplaces, online communities, entertainment, games- they were all pretty obvious (at a high level- not all the smart things that people figured out to make them real) from the very beginning.

posted in Technology | 0 Comments